Introduction

As a company using artificial intelligence (AI) and advanced analytics in the public sector, we are committed to ensuring that our practices are in line with the highest standards of information governance and ethics. We understand the importance of transparency, fairness, governance, and security in the use of AI, and we have implemented robust measures to ensure that our AI systems operate in a responsible and ethical manner.

Our mission is to change lives through data. As such, advanced analytics and AI form a fundamental part of the products and services provided by xantura, helping us to identify vulnerable individuals who may require extra support and assistance, across domains like adult social care and homelessness. The recent guidance published by the Department for Science, Innovation & Technology (DSIT), “Introduction to AI Assurance”, is therefore especially relevant to our operations.

In this document, we aim to show how we manage the risks associated with AI within our organisation. We invite others to be similarly transparent about their processes, in an effort to build trust and confidence in the AI sector in the UK.

In Practice

Any data that we receive from our clients, be they local authorities, police forces, or health providers, first undergoes a client-side pseudonymisation process in which personally identifiable information (PII) is encrypted. This enables us to build powerful predictive models from source-system data without ever knowing the personal details of individuals. As a result, while we can surface profiles of individuals who are at risk of becoming homeless, for instance, only the client can de-pseudonymise those individuals and arrange a suitable intervention.
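To illustrate the idea, the sketch below shows one common pseudonymisation technique: deterministic keyed hashing (HMAC) of PII fields, where only the client holds the key. This is an illustrative stand-in rather than a description of the actual scheme (the document only states that PII is encrypted), and the field names and key are hypothetical.

```python
import hmac
import hashlib

def pseudonymise(value: str, secret_key: bytes) -> str:
    """Replace a PII value with a deterministic pseudonym.

    The same input always maps to the same token, so records can be linked
    across source systems without revealing the underlying value. Only the
    key holder (the client) can re-identify an individual, by re-computing
    tokens against their own records.
    """
    return hmac.new(secret_key, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical client-side step before data is shared with the analytics provider:
key = b"client-held-secret"  # never leaves the client's environment
record = {"name": "Jane Doe", "reference_no": "9434765919", "rent_arrears": 1240.50}
pii_fields = {"name", "reference_no"}

shared = {k: (pseudonymise(v, key) if k in pii_fields else v)
          for k, v in record.items()}
# `shared` retains the analytical signal (e.g. rent arrears) but no raw PII.
```

Deterministic tokens are what make modelling possible here: the analytics side can still join and aggregate records belonging to the same (unknown) individual.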

Transparency & Explainability

As the predictions of AI models may affect the individuals whom they surface, it is essential to be able to explain specific predictions. To address this, we use tools that can show the decision path taken by a model. These decision paths show which variables within the data were important for a particular prediction, as well as which variables are generally important across an entire population. Where this approach isn't feasible, particularly for complex algorithms such as Large Language Models (LLMs), we mandate human oversight to iteratively validate every output before the model is used in a production environment.
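A decision path of the kind described can be sketched as follows. This toy example, with entirely hypothetical features and thresholds, traverses a hand-built decision tree and records which comparisons drove a particular risk prediction; production systems would typically derive the same information from a trained model via explainability tooling.

```python
# A toy tree node is (feature, threshold, left_subtree, right_subtree);
# a bare number is a leaf holding the predicted risk score.
def predict_with_path(node, x, path=None):
    """Traverse a decision tree, recording the comparisons that led to the leaf."""
    if path is None:
        path = []
    if not isinstance(node, tuple):  # reached a leaf: return score and explanation
        return node, path
    feature, threshold, left, right = node
    went_left = x[feature] <= threshold
    path.append(f"{feature} {'<=' if went_left else '>'} {threshold}")
    return predict_with_path(left if went_left else right, x, path)

# Hypothetical model for homelessness risk:
tree = ("rent_arrears", 1000,
        0.1,                                   # low arrears  -> low risk
        ("missed_payments", 3, 0.4, 0.9))      # high arrears -> depends on misses

score, path = predict_with_path(tree, {"rent_arrears": 1500, "missed_payments": 5})
# `path` now explains the prediction: which variables mattered, and why.
```

Aggregating such paths over a whole population gives the second view mentioned above: which variables are generally important, not just important for one individual.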

Accountability & Governance

To ensure responsible provision and oversight of AI systems throughout their lifecycle, our governance framework also includes comprehensive monitoring and alert mechanisms. These mechanisms are designed to track deployed AI models for performance consistency, data integrity, and relevance, promptly identifying and addressing any deviations from expected outcomes. Alerts are configured to notify relevant stakeholders when model performance deteriorates beyond predefined thresholds, enabling timely interventions. This approach guarantees clear accountability and effective governance across the AI system’s entire lifecycle, from deployment to ongoing management, and helps to make transparent some of the most impactful processes within xantura. 
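The threshold-based alerting described above can be sketched in a few lines. The metric names, values, and floors here are hypothetical; the point is simply that each deployed model's latest metrics are compared against predefined floors, and any breach produces an alert for the relevant stakeholders.

```python
def check_model_health(metrics: dict, thresholds: dict) -> list:
    """Compare a deployed model's latest metrics against predefined floors,
    returning an alert message for every breach or missing metric."""
    alerts = []
    for name, floor in thresholds.items():
        value = metrics.get(name)
        if value is None:
            alerts.append(f"ALERT: metric '{name}' missing from monitoring feed")
        elif value < floor:
            alerts.append(f"ALERT: {name}={value:.3f} is below threshold {floor:.3f}")
    return alerts

# Hypothetical scheduled check for a deployed risk model:
latest = {"auc": 0.71, "data_completeness": 0.93}
floors = {"auc": 0.75, "data_completeness": 0.90}
alerts = check_model_health(latest, floors)
# Each alert would then be routed to stakeholders, e.g. as an email
# or an incident ticket (not shown here).
```

Covering performance, data integrity, and relevance in one such check is what allows deviations to be caught promptly, before they affect decisions downstream.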

Fairness

These processes allow us to uphold individuals' legal rights and to prevent unfair discrimination or adverse outcomes. We achieve this by monitoring our AI models for potential biases, evaluating performance discrepancies across demographic subgroups. It is important to note that protected characteristics are generally not used as predictive features within our models. Instead, they serve to assess the fairness of model predictions, ensuring equitable outcomes across all demographics. In certain cases, however, to address and mitigate unfairness, we deliberately use protected characteristics (e.g. age, sex) to identify and support disadvantaged individuals who might otherwise be overlooked. This approach allows us to tailor interventions and provide necessary assistance to those disproportionately affected by specific conditions, thus promoting fairness and equality in the application of AI technologies.
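A minimal version of this subgroup audit, with hypothetical field names and toy data, might look like the following: protected characteristics appear only in the evaluation step, never as model inputs, and a large gap between subgroup accuracies flags the model for review.

```python
from collections import defaultdict

def subgroup_accuracy(records, group_field):
    """Accuracy per demographic subgroup. The protected characteristic is
    used only for this fairness audit, not as a predictive feature."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        g = r[group_field]
        total[g] += 1
        correct[g] += int(r["prediction"] == r["outcome"])
    return {g: correct[g] / total[g] for g in total}

# Hypothetical audit set: model predictions joined back to observed
# outcomes and demographic attributes.
audit = [
    {"sex": "F", "prediction": 1, "outcome": 1},
    {"sex": "F", "prediction": 0, "outcome": 0},
    {"sex": "M", "prediction": 1, "outcome": 0},
    {"sex": "M", "prediction": 1, "outcome": 1},
]
rates = subgroup_accuracy(audit, "sex")
gap = max(rates.values()) - min(rates.values())
# A gap beyond an agreed tolerance would trigger investigation and retraining.
```

Accuracy is only one candidate metric; the same loop works for false-positive rates or any other discrepancy measure the audit calls for.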

Accountability & Redress

In addition, our framework emphasises accountability and recourse, setting out to safeguard the rights of individuals and organisations potentially affected by AI decisions. We achieve this by inviting client participation throughout the model development process, ensuring their perspectives and feedback are integrated; by providing mechanisms for clients to question or challenge specific outputs of AI components; and by ensuring that final decisions, especially those with significant consequences, rest with human decision-makers on the client side. While our AI systems may offer recommendations, the ultimate authority lies with individuals who can interpret, question, and override AI suggestions based on a comprehensive understanding of the context. This approach not only fosters trust in AI applications but also ensures that users and affected parties have meaningful opportunities to contest decisions that could affect them.

Safety, Security, & Robustness

Finally, all the preceding considerations are underpinned by hosting our AI systems, and their associated infrastructure, within the Microsoft Azure ecosystem. This aligns with the guiding principle that AI systems should operate in a robust, secure, and safe manner, with ongoing identification, assessment, and management of risks. The Azure ecosystem is specifically designed to safeguard business-critical data. Because the processes described above remain entirely within Azure services, such as Azure Machine Learning Studio and Azure DevOps, all work is comprehensively protected by appropriate permissions, ensuring that all data is processed securely and safely.

Summary

In summary, artificial intelligence is a tool that can be used to effect significant change across a wide range of domains. It is the responsibility of its users and implementers to demonstrate that all the necessary considerations have been made with respect to transparency, fairness, governance, and security. In this document, we hope to have given an insight into how xantura is enacting these principles throughout its work, and we encourage others to be similarly open, to foster the trusting environment needed to propel the UK as a leader in innovation in the AI space.