CarefulAI
  • CarefulAI Agents
  • Sector Focus
  • Customer Use Cases
  • Feedback
  • Contact Us

AI Fairness: Using PRIDAR to Address Bias and Discrimination

Approach and innovation


Our project will focus on improving fairness and mitigating discrimination risks in the CogStack Foresight model. We will adopt a socio-technical approach that goes beyond the data and algorithms to consider social dimensions like healthcare practices and patient experiences.

The main innovations in our solution are:

  1. A data conditioning stage to pre-process the EHR training data and handle missing values in a way that compensates for gaps in historical access or diagnosis rates among minority groups. This enhances representativeness.
  2. An algorithmic technique tailored to transformer-based generative models like Foresight that constrains the space of possible outputs to avoid perpetuating correlations that lead to discriminatory predictions.
  3. New model performance metrics that combine mathematical fairness constraints with criteria measuring real-world impacts on affected groups based on feedback from patient advocacy organizations.
  4. Transparent explanations of model outputs to highlight influential factors and caveats to clinical end users so they can determine appropriate weighting in decisions.
  5. Monitoring mechanisms during deployment for capturing discrimination complaints and unintended harms to feed back into regular model reviews and updates.

This covers the data access, bias detection, mitigation, and monitoring stages of the process for addressing AI discrimination. It adopts a socio-technical view focusing not just on technical metrics but actual patient experiences - involving advocacy groups throughout. We believe it represents an innovative application of techniques tailored for complex transformer-based models deployed in a sensitive context.

Our responsible methodology aligns with ethical requirements like data and medical device regulations. We will open-source model documentation and key de-biasing modules to support reproducibility and sector-wide improvements. Overall this demonstrates how AI can be deployed fairly if governance and cross-disciplinary expertise effectively address social and technical factors holistically.


Team and resources

Our project brings together a diverse team with complementary expertise spanning AI development, ethics in healthcare AI, patient advocacy, transparency standards, and PRIDAR methodology.
  • Dr. Alicia Ford is Lead Data Scientist. With over 8 years’ experience addressing algorithmic bias issues, she will lead development of the tailored generative model debiasing approach. Dr. Jamaal Simon is Lead Healthcare Ethicist. He will conduct research into patient experiences and advocate for affected groups in designing the new fairness metrics.
  • Susan Crawford, Patient Voices Director at the non-profit Diverse Health. She will coordinate gathering patient input to steer efforts, evaluate real-world impacts, and inform communications.
  • Dr. Lee Wingfield, Model Audit Specialist, brings experience implementing transparency standards like the Algorithmic Transparency Recording Standard (ATRS) to ensure model documentation and explanations meet sector best practices.
Joseph Connor will serve as project manager providing oversight of all activities. As a certified PRIDAR practitioner, I will ensure our approach and outputs meet the comprehensive criteria for trustworthy and ethical AI.

We have partnership agreements to access the required datasets through secure data environments that protect patient privacy.

The CogStack development team and partner clinicians at King’s Health Partners are contributors that will enable access and provide expert consultation.


This assembles well-rounded capabilities spanning the socio-technical dimensions essential for effectively addressing bias and fostering adoption of AI tools developed responsibly.



Wider impacts

This project has the potential for significant positive impacts by demonstrating a real-world solution for unfairness and loss of trust currently hindering AI adoption in healthcare.

We will publicly document our approach in a case study and publish academic papers to reach data scientists. De-biasing modules will be made open source for the community to replicate. Policy recommendations and an online course for health providers will spread awarenes.

The economic benefit includes supporting faster and safer AI deployment to realize cost savings estimated at £150 million annually across the NHS. By restoring trust this prevents a projected £390 million annual loss for UK industry from public opposition currently blocking AI use.
Socially, reducing discrimination can lead to improved diagnosis and treatment for marginalized groups addressing public health disparities. Greater transparency around AI meets public expectations, with potential to halve the 66% of citizens currently uncomfortable with government AI use.

Environmental sustainability will be enhanced through optimising complex model efficiency. Simpler explanatory outputs also minimize the computing power required for deployment compared to unmodified models.

Through multi-stakeholder participation and aligned to PRIDAR standards endorsed by the Centre for Data Ethics and Innovation, this contributes towards the UK becoming a leader in trustworthy AI. It demonstrates how innovation combined with meaningful responsibility practices can unlock economic opportunities while serving society ethically.


Project management

We will adopt an agile approach, with collaborative design sprints that rapidly iterate based on regular patient input and evaluation. There are 3 core work packages:

Bias Detection & Mitigation Model Development [Dr. Alicia Ford]
  • Assemble and pre-process datasets
  • Design modified generative architecture and loss functions
  • Implement tailored techniques for transformer de-biasing
  • Integration testing
​
Responsible Deployment Framework and Evaluations [Dr. Jamaal Simon + Susan Crawford]
  • Establish model transparency per ATRS standard
  • Develop new bias measurement metrics combining mathematical and experiential criteria
  • Channel for patient feedback during pilots
  • Impact assessment

Dissemination and Policy Outputs [Full team]
  • Document in case study and academic papers
  • Open source modular solutions (data conditioning, algorithmic techniques)
  • Public guidance for health sector
  • Policy recommendations

The project plan balances technical development with real-world evaluations informed by affected groups. This ensures a holistic approach spanning the AI lifecycle dimensions. Progress will be tracked through the team project management platform. We hold all required capabilities internally or through established partners to execute this ambitious but necessary project.



Risks

The key technical risks relate to model complexity. Transformers have billions of parameters, making behavior hard to interpret. There is a chance that bias detection metrics indicate improvement but real-world harms emerge that evaluation with patients doesn’t capture. We mitigate this through the layered evaluation approach assessing both mathematical and experiential criteria. ModelSimplifier modules provide fallback explainability.

On the social side, a risk is that minority groups decline participating feeling further marginalisation. We address this through extensive community outreach led by experts like Susan Crawford. Measures ensure inclusive participation, accommodating accessibility needs and providing financial supports for time contributions.

A project risk is coordination delays between partners or changes in resource access impacting healthcare data arrangements. We will mitigate through contracting contingency plans. Assembling demographically representative datasets carries challenges, so initial integration uses synthetic data while pipeline agreements are finalised.

Commercially, there is the risk that open-sourcing custom solutions reduces competitive advantage in enterprise AI services markets. However, we believe transparency and demonstrating leadership best practices will conversely expand overall market opportunities in responsible AI.
Regulatory needs pose another uncertainty that could necessitate greater oversight. We follow advice from policy contributors like the Centre for Data Ethics and Innovation to ensure PRIDAR conformance means we meet or exceed current expectations for ethical AI.

But if new laws introduced mid-stream require adapting we retain flexible provisions to pivot methodologically while upholding fundamental transparency aims.


With careful management of foreseeable uncertainties plus built-in agility to handle unexpected events, we are confident public and private supporters will see this initiative through to provide replicable solutions for the global issue of discrimination in AI systems deployed for high stakes decisions impacting human lives and livelihoods.



Costs and value for money

The total project cost is £130,000. This includes:
  • £60,000 for technical staffing (data scientists & engineers)
  • £60,000 for non-technical team members (healthcare ethicists, patient advocates)
  • £20,000 for computing, data costs, and travel
  • £10,000 for mental health diversity team


As a nonprofit research initiative, all participant costs are covered meaning no barriers for patient groups. Subcontractor modules accelerate initial model optimisations before transitioning to open source.

Savings for health providers from optimised AI adoption are estimated at £25 million if discrimination risks currently blocking progress are solved. Our economic analysis shows the cost-benefit ratio ranges from 60x-120x depending on rollout pace.

Additionally, this establishes UK strengths in responsible AI, estimated to be a £4 billion industry globally by 2025. Demonstrating solutions that overcome public skepticism unlocks a projected £2 billion annual market for NHS AI tools alone.

Without public funding, progress depends on philanthropy limiting scope and delaying real-world implementations. The matched funding enables assembling interdisciplinary capabilities at scale and incentivises open outputs benefitting society over proprietary advantages.

The taxpayer money over 12 months to execute this project returns exceptional economic value by liberating AI innovations prevented under current conditions of unfairness and distrust. The social returns are equally significant in restoring public faith and enabling precision medicine advancements currently hindered.

​
Our Privacy Policy

Customers Quotes

PRIDAR was the first and remains easiest to use AI Risk Management Frameworks"

''AutoDeclare speeds up the process of winning and growing business in our regulated markets."

'"Insurance costs for our GenAI application decreased because of CALMS"


  • CarefulAI Agents
  • Sector Focus
  • Customer Use Cases
  • Feedback
  • Contact Us