The Generative AI Safety and Information Security Open Network
The GenAIsis Open Network
The case for the GenAIsis Network
Situation:
DCB 0129 is the only IT risk design framework required by UK healthcare legislation. It is valuable because it recognises that both clinical and non-clinical AI systems pose a potential risk to healthcare users. It is valuable to the process of making GenAI systems safe and secure in healthcare because it applies at the point where a potential healthcare system first takes shape, and it is where the risks to good design are documented when a GenAI model is designed or fine-tuned.
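For illustration, the hazard log at the heart of DCB 0129 can be sketched as a simple data structure. The field names and the 1-to-5 scoring below are assumptions chosen for illustration, not the official template; real deployments use the organisation's own risk matrix agreed with a Clinical Safety Officer.

```python
from dataclasses import dataclass, field

@dataclass
class HazardLogEntry:
    """One row of a DCB 0129-style hazard log (illustrative schema only)."""
    hazard_id: str
    description: str        # e.g. a GenAI output that could mislead a clinician
    cause: str              # how the hazard could arise in the system's design
    effect: str             # the potential clinical harm to the service user
    severity: int           # assumed scale: 1 (minor) .. 5 (catastrophic)
    likelihood: int         # assumed scale: 1 (very low) .. 5 (very high)
    mitigations: list = field(default_factory=list)

    @property
    def risk_score(self) -> int:
        # Simple severity x likelihood rating for triage; the official
        # standard expresses this as a risk matrix rather than a product.
        return self.severity * self.likelihood

entry = HazardLogEntry(
    hazard_id="HAZ-001",
    description="Generated discharge summary omits an allergy",
    cause="Prompt truncation drops the allergy list",
    effect="Patient prescribed an allergen after discharge",
    severity=4,
    likelihood=2,
    mitigations=["Structured allergy field passed outside the prompt",
                 "Clinician sign-off before release"],
)
print(entry.risk_score)  # 8
```

Documenting GenAI design risks in this form is what makes the hazard log auditable and shareable across the network.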
Other standards that relate to GenAI and could be applied in UK healthcare are optional. These standards include:
- ISO 31000:2018 Risk management. Guidelines. Provides a general framework for identifying, assessing, and mitigating risks associated with generative AI in healthcare, such as bias, transparency, and data privacy issues.
- ISO 14971:2019 Medical devices. Application of risk management to medical devices. Applicable to AI-powered medical devices, it guides the assessment of safety and effectiveness throughout the development and use of such devices, minimizing potential harm to patients.
- ISO 27001:2013 Information security management systems. Requirements. This helps establish secure systems for handling sensitive healthcare data used by generative AI, reducing risks of breaches and unauthorized access.
- ISO 27018:2014 Information technology. Code of practice for protection of personally identifiable information (PII) in public clouds. Ensures adequate privacy protection for healthcare data stored or processed in cloud-based solutions related to generative AI, addressing privacy concerns.
- ISO 13485:2016 Medical devices. Quality management systems. Requirements for regulatory purposes. Promotes robust quality management practices for developing and deploying generative AI tools in healthcare, contributing to reliable and trustworthy solutions.
- ISO 37001:2016 Anti-bribery management systems. Requirements and guidance. Helps maintain ethical practices within healthcare institutions using generative AI, minimizing risks of bias and unfair decision-making based on financial incentives.
- BS 30440 The standard for the validation of AI in healthcare. It sets an evidence framework for the life cycle of AI in healthcare.
Awareness of the above standards amongst GenAI developers is scant, and only DCB 0129 fits well into the process of developing and deploying GenAI models.
Likewise, awareness of GenAI best practice in healthcare is low.
Problems:
- Limited Knowledge Sharing: Without an open network, healthcare providers might lack access to the latest information and best practices in AI safety and security, and clinical risk will therefore increase.
- Inconsistent Standards Application: Different organizations might interpret and apply standards variably, leading to inconsistent levels of safety and security in AI applications.
- Isolation of Smaller Entities: Smaller healthcare providers or developers might not have the same resources as larger entities, leading to disparities in the implementation of safety and security measures.
- Rapid Technological Changes: The fast-paced evolution of Gen AI technology can outpace individual organizations' ability to keep up with best practices and the latest standards.
Implications:
- Without an open shared knowledge base, healthcare AI development might suffer from reinvented solutions and unaddressed common challenges.
- Patient safety and data security could be compromised due to uneven application of safety and security standards.
- The lack of a collaborative platform might slow down the overall progress and innovation in healthcare AI.
Next Steps:
Set up a network of organisations focused on delivering safe design of generative AI around DCB 0129, the only legal requirement for healthcare AI systems. This will involve enabling:
- Healthcare Providers and AI Developers to actively participate in an open network for sharing best practices, updates, and insights regarding AI safety and security, and engage in collaborative problem-solving.
- Industry Experts and Academics to contribute to the network by sharing research findings, emerging trends, and technological advancements relating to clinical generative AI safety.
- Regulatory Bodies to more easily obtain accurate and valuable information on clinical generative AI best practice.
- Funding Bodies to invest in the establishment and maintenance of open governance networks that facilitate continuous, scalable learning and collaboration across the healthcare AI sector.
GenAIsis aims to be an open network for sharing safety and security practices in line with DCB 0129, the only legal requirement for such systems.
It adds significant value by:
- fostering collaboration across the healthcare generative AI ecosystem
- ensuring that all entities, regardless of size, have access to the latest knowledge and best practices in generative AI
Potential Partners of GenAIsis Open Network
- CarefulAI is the UK's only independent regulatory science design firm, and leads in the development of pro-innovation methods to increase generative AI's safe use in healthcare.
- The Alan Turing Institute - As the UK's national institute for data science and artificial intelligence, they would provide research grounding around capabilities and limitations of generative models alongside advising on rigorous testing methodologies.
- DeepMind Health - Their experience developing complex healthcare algorithms could inform approaches balancing innovation and safety for generative AI. They would lead on model development and prototyping.
- The Centre for Data Ethics and Innovation - They would spearhead policy considerations around mitigating risks such as data bias and misinformation with generative healthcare technologies.
- National Voices - As the UK's leading patient and service-user advocacy coalition, they would ensure applications address real patient needs and values like privacy and transparency.
Together this regulatory science network would convene developers, clinicians and the public around a technology with vast potential to aid diagnosis, personalised care and population health once core questions of accountability and equitability have been addressed.
The Discovery phase would
- focus on stakeholder alignment around issues like transparent provenance for generated data, evaluating model fidelity across populations, and clinical integration with adequate human oversight.
- propose policy guidance and demonstration toolkits for generating synthetic yet realistic datasets for training diagnostics, modelling treatment responses, and democratising GenAI in healthcare.
The Implementation phase would apply these learnings to
- real-world pilot studies,
- drafting detailed regulatory frameworks around risk management,
- expanding international collaboration, and
- building public understanding of appropriate use cases that balance innovation and ethics.
Summary
CarefulAI aims to spearhead the development of ethical frameworks guiding the accountable integration of generative AI in healthcare. This is not something that one firm can, or should, do on its own. Accountable, safe and secure integration of generative AI in healthcare is best achieved by an open community dedicated to the cause. This is more easily achieved if one accepts that doing this in line with DCB 0129 provides the network with a scalable solution across UK healthcare.
We recognise that GenAI is an emerging technology that allows healthcare complexity to be modelled better than ever before, enabling more personalised and predictive treatment. However, existing policy guidance falls behind these rapid advances in areas such as
- transparency
- interpretability
- unbiased model performance
The GenAIsis open network would break new ground by assessing generative models against metrics like
- analytic validity
- clinical validity
- utility, and
- patient safety
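As a sketch of what assessing models against such metrics could look like in practice, the hypothetical function below computes one crude analytic-validity proxy (accuracy) per patient subgroup, so disparities are visible rather than hidden in an aggregate score. The record fields and grouping key are illustrative assumptions, not a standard schema.

```python
from collections import defaultdict

def subgroup_accuracy(records, group_key="ethnicity"):
    """Accuracy per patient subgroup: a crude analytic-validity check.

    `records` is a list of dicts with `prediction`, `label` and a
    demographic field; the field names here are illustrative only.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        g = r[group_key]
        totals[g] += 1
        hits[g] += int(r["prediction"] == r["label"])
    # Per-group accuracy; gaps between groups flag a fidelity disparity.
    return {g: hits[g] / totals[g] for g in totals}

records = [
    {"prediction": 1, "label": 1, "ethnicity": "A"},
    {"prediction": 0, "label": 1, "ethnicity": "A"},
    {"prediction": 1, "label": 1, "ethnicity": "B"},
    {"prediction": 1, "label": 1, "ethnicity": "B"},
]
scores = subgroup_accuracy(records)
print(scores)  # {'A': 0.5, 'B': 1.0}
```

A gap between subgroups like the one above would be recorded as a hazard in the DCB 0129 register rather than averaged away.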
We will develop
- sophisticated toolkits allowing developers to demonstrate scientifically sound evidence generation and decision making while protecting patient privacy and safety in line with DCB 0129.
The open network will combine
- globally leading expertise in AI innovation
- medical ethics
- regulatory policy
- patient advocacy
- clinician advocacy
Patient advocacy is too often omitted from technology development. The absence of the voices of those ultimately impacted by GenAI leads to poor product design.
We will activate patients themselves to inform assessment protocols that balance accelerating progress with managing risk.
Our frameworks will give manufacturers clear pathways forward and regulators expanded competencies for evaluation.
Through
- unprecedented knowledge sharing
- convening stakeholders across data science, clinical implementation, and oversight policy,
CarefulAI presents a bold, inclusive vision for integrating generative AI's capabilities to transform decision making while upholding public trust.
Our breakthrough, multi-partner approach recognises that governance itself demands innovation; the open network represents a watershed opportunity for the UK to lead in advancing artificial intelligence for patient benefit worldwide.
The network will produce
- material resources,
- leadership that expands regulatory science, and
- the capacity to sustainably harness generative techniques, responsibly improving countless lives.
Public Message
Healthcare stands at the cusp of a new era in which generative artificial intelligence allows modeling biological complexity with unprecedented precision, enabling more personalized and predictive decision making. However, existing policy guidance surrounding transparent development, eliminating bias, and validating real-world utility continues to fall dangerously behind these rapid advances.
CarefulAI recognizes the vast potential of techniques like AI-synthetic data, in silico clinical trials, and biomarker discovery to transform areas from risk stratification to clinical diagnostics and drug and clinical pathway development. But actualising this promise demands evolving open frameworks, assessing scientific rigor, clinical integration, public transparency and simultaneously guarding safety.
Our network convenes globally leading expertise across AI innovation, medical ethics, regulatory policy and patient advocacy to develop sophisticated toolkits allowing developers to demonstrate accountability while protecting privacy.
We will activate patients themselves to inform protocols balancing accelerating progress and managing risks.
The consortium will deliver material frameworks, enable self-regulatory expansion, and support international collaboration allowing controlled adoption of breakthroughs that improve countless lives.
Too often technology progresses absent the voices of those impacted. Our inclusive vision upholds earning public trust as fundamental to integration. We break ground tackling opaque development, inadequate population testing, and real-world validity gaps undermining adoption of generative healthcare AI to date.
Through unprecedented knowledge sharing across data science, clinical implementation and oversight policy, CarefulAI presents a bold vision for governance innovation itself to match Generative AI's pace.
Our multipartner approach represents a watershed opportunity for the UK to lead in advancing artificial intelligence responsibly for patient benefit worldwide.
We will produce
- actionable resources,
- capabilities and leadership that sustainably harness generative techniques to reach their revolutionary potential, and
- improvements to medicine with GenAI within a framework of ethics and accountability.
How this fits the scope of innovation with regulatory science in the UK
CarefulAI's proposed open regulatory science network, focused on developing generative AI in healthcare, strongly aligns with the aims and scope of UK innovation policy.
Our consortium combines globally leading expertise in AI capabilities, medical ethics, regulatory policy, and patient advocacy.
Our Discovery phase supports the network in collaboratively identifying the challenges that currently inhibit the advance of generative AI:
- opaque development processes
- inadequate clinical testing
- algorithmic bias
Each of these will be framed within the legal requirement to maintain a DCB 0129 register.
We will host workshops that
- align stakeholders
- identify priorities like transparent provenance for synthetic data
- consider methods of evaluating model fidelity across patient populations
- consider methods of measuring real-world utility
- develop the principles of DCB 0129 to cover clinical, technical, output, outcome and commercial risk
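The transparent-provenance priority above can be illustrated with a minimal provenance stamp attached to every synthetic dataset, so downstream users can always tell what generated the data and that it is not real patient data. The fields below are assumptions for discussion, not an agreed schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(dataset_bytes: bytes, generator: str,
                      source_description: str) -> dict:
    """Minimal provenance stamp for a synthetic dataset (illustrative fields)."""
    return {
        # Tamper-evident fingerprint of the exact dataset bytes.
        "sha256": hashlib.sha256(dataset_bytes).hexdigest(),
        # Model or pipeline that produced the data.
        "generator": generator,
        # Description of the real data the generator was fitted on.
        "source_description": source_description,
        # Must never be presented as real patient data.
        "synthetic": True,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

record = provenance_record(
    b'{"patients": []}',
    generator="example-gen-v0",
    source_description="fitted on a de-identified cohort",
)
print(json.dumps(record, indent=2))
```

Shipping such a record alongside each dataset is one concrete way a workshop output could make provenance auditable.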
The Implementation phase would translate these learnings into sophisticated toolkits, demonstration studies, and ultimately regulatory guidance.
We shall support UK government innovation plans through outputs that facilitate
- knowledge transfer for evidence-based policymaking
- strengthening international partnerships, and
- increasing opportunities to embed generative AI responsibly across the clinical, technical and regulatory landscape.
Spanning data science pioneers, frontline medical institutions, oversight bodies, and crucially healthcare consumers themselves, our network offers an unprecedented chance to match governance with AI's pace of advancement.
We fulfil aims to
- progress regulatory science itself as key to unlocking life-changing technology's benefits ethically, and
- develop, through our collaborative approach, tangible resources and competencies so that innovation and regulation evolve together towards improving outcomes.