Q83 Group Notes
Sarah:
Now the content has been written, I have put it under sections as you requested. As we expected, there are many interrelated themes. Given that the target is a website resource (e.g. https://digitalhealth.wales/tec-cymru/vc-service ) and the focus is a practical guide for people who are not involved in AI/tech, I have taken the liberty of putting last week's content into a web environment, as only Peggy and I were at last Thursday's meeting.
It has:
1. Proposed headings
2. A potential metaphor for one of your animations: Ambulance Development
I have also wrapped some of the Trello board and PowerPoint presentation processes into a 'Risk Mitigation' section. What do you think? Can we get some feedback?
John:
I think we need more detail on the change models for adoption. What model do you want to hang AI implementation around: PDSA, NASSS? Some of the issues we could raise are in the 'Models for Adoption' section below. Also, if we are not going to use the guide Peggy brought over from NWIS, then I think we need to do more on the 'Issues facing planners' section. It could be a flow diagram signposting to the resources taken from the Trello boards. Also, the URLs in the Trello boards cover guidance inside and outside of Wales. Which ones do you want us to add?
John/Helen/Mike(?):
What bits of this are we presenting to the board? We could present the flow diagram discussed above, mapping known products against a NICE DHT framework, and perhaps elements from the 'Appetite for Risk' section. We would need to be careful: against this framework, some AI in AB and in the Ecosystem should cause concern. So, unlike the framework I have emailed, the framework below does not name specific AI.
As always, any and all feedback is welcome.
AI in the NHS
AI is going to change the NHS. But the acceptable level of change depends on one's risk perspective, motivations and resources. It is much like making a change to a moving ambulance, when the change can be as big as replacing the paramedics with robots, or as little as washing the vehicle.
Risk Mitigation
Using the ambulance analogy, there are many changes one could make to an ambulance. Some are very risky, like swapping a paramedic for a robot. Swapping an ambulance for a new type of ambulance is less risky. Cleaning an ambulance is less risky still. That said, you would do none of this while the ambulance was in motion.
Three assets need to be in place to mitigate risk
AI Sandboxes
These are safe places that ideally replicate the nearest scenario to the real world. Government is developing its own sandboxes, and so are the major players: academic institutions and AI developers. It could be argued that the most effective scenario is for local health board SMT, clinical and IT teams to set up their own AI sandboxes with academics and local firms to drive AI developments.
These would be clinically led, with pre-defined key performance indicators (KPIs) set for the AI. They would have support from end-user to board level. They would also have Intellectual Property (IP) and Information Governance (IG) agreements in place, and a model explaining how data is shared in line with Information Commissioner's Office (ICO) guidelines on systems that mirror health board systems.
In practice, AI will enter and leave a Sandbox depending on whether it is seen as fit for purpose and can be safely deployed in the NHS. A visualisation of the processes in a Sandbox is shown below.
AI Standards
AI compliance is a new field and standards are changing rapidly. Organisations like the government, the MHRA and NICE have different evidence criteria and standards.
Standards that directly affect AI are the NHSD/NWIS Digital Co-ordination Board standards, specifically DCB 0129 and DCB 0160. These require that all IT projects have a trained clinical safety officer in place. Such officers control the creation and release of a clinical risk/hazard and mitigation log.
The easiest standard to communicate in the NHS is the NICE Digital Health Technology Evidence Framework (NICE DHT). In essence it says that, for a particular pathway, one should ensure evidence exists about a particular digital health technology, i.e. AI. The level of evidence relates to whether the AI aims to have a clinical outcome or not. There are three tiers in NICE DHT. The lowest tier (Tier 1) has evidence requirements that can take up to 6 months to gather. The highest (Tier 3) can take between 2 and 5 years to gather. In practice, Tier 1 projects are best undertaken in-house by NHS teams, while Tier 2 and above will involve a collaboration between the NHS, academia and industry.
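As a planning aid, the tier distinctions above can be sketched as a simple lookup. This is an illustrative sketch of the timescales quoted in this guide, not NICE guidance; in particular, the Tier 2 timescale is an assumption that it sits in the same range as Tier 3.

```python
# Illustrative planning aid based on the NICE DHT tier timescales quoted
# in this guide. Not NICE guidance: Tier 2's timescale is assumed to be
# comparable to Tier 3's.

NICE_DHT_TIERS = {
    1: {"evidence_time": "up to 6 months", "delivery": "in-house NHS team"},
    2: {"evidence_time": "2 to 5 years (assumed)", "delivery": "NHS, academia and industry"},
    3: {"evidence_time": "2 to 5 years", "delivery": "NHS, academia and industry"},
}

def plan(tier: int) -> str:
    """Summarise the evidence burden and delivery model for a tier."""
    t = NICE_DHT_TIERS[tier]
    return f"Tier {tier}: evidence {t['evidence_time']}, delivered by {t['delivery']}"

print(plan(1))  # -> Tier 1: evidence up to 6 months, delivered by in-house NHS team
```

A lookup like this could sit behind a web page widget that lets a planner pick a tier and see the likely commitment before a project starts.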
Other relevant standards are covered in the Government's Digital Technology Assessment Criteria (DTAC).
A Dissemination Strategy
It is appropriate to disseminate progress (including software and models) on NICE DHT Tier 1 AI under a licence that does not preclude or prejudice its use, as it can and should be deployed quickly. This is best enabled using open source licences. Some, like MIT, enable the private sector to embody the AI into their solutions and sell it. Others, like the AGPL, require that derivatives remain open source, which in practice prevents their embodiment in closed, re-sold products. NICE DHT Tier 2-3 AI is best disseminated under a non-exclusive Lambert agreement, because such AI projects often involve partnerships with academia and industry. Lambert agreements are template documents freely available from the Intellectual Property Office's website.
AI Portfolio Monitoring
Over and above the clinical risk of bringing AI into use there is a cost risk, the principal cost risk being the time and effort associated with bringing a NICE Tier 1-3 solution to bear within a pathway. For a NICE DHT Tier 1 service this can be around £0.5 million; for a Tier 2-3 product it could be between £2 and £5 million. It is therefore wise for health boards and senior management to work with end users and undertake portfolio monitoring. In this, AI developments based around clinical themes are stratified by the level of risk associated with the AI and an organisation's readiness to implement a theme. A method of visualising a portfolio, which incorporates reporting on Sandbox progress, assurance against a standard, and an organisation's preparedness to deploy AI, is shown below.
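One way to sketch such portfolio monitoring is a matrix that groups developments by risk and by organisational readiness. The project names, readiness labels and the rule approximating risk from NICE DHT tier below are all hypothetical, chosen only to illustrate the stratification.

```python
# Illustrative sketch of a portfolio matrix. Each AI development is
# stratified by risk (here crudely approximated from its NICE DHT tier,
# an assumption made for illustration) and organisational readiness.
# The project themes and labels are hypothetical.

projects = [
    {"theme": "imaging triage", "nice_tier": 3, "readiness": "low"},
    {"theme": "appointment reminders", "nice_tier": 1, "readiness": "high"},
    {"theme": "voice dictation", "nice_tier": 2, "readiness": "medium"},
]

def portfolio_matrix(projects):
    """Group projects into (risk, readiness) cells for a board report."""
    matrix = {}
    for p in projects:
        risk = "high" if p["nice_tier"] >= 2 else "low"
        matrix.setdefault((risk, p["readiness"]), []).append(p["theme"])
    return matrix

for cell, themes in sorted(portfolio_matrix(projects).items()):
    print(cell, "->", ", ".join(themes))
```

A board would then focus discussion on the high-risk/low-readiness cell, where clinical themes need the most preparatory work before deployment.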
Models of Adoption
The NHS has many models for changing its customs and practice. Whichever one you choose, there are some unique issues associated with AI that one needs to consider. These are:
- People fear robots
- People fear putting their lives in the hands of robots
- AI is not a robot
- AI is powered by data
- AI is a software application
- Treat AI as a software application
- Implement AI as you would software applications
- If you are purchasing AI buy it as a service not a licence
- Recognise AI is rapidly changing so buy modular micro services
- Validate the AI in a sandboxed pathway that mirrors the real world deployment
- Test AI pathways with real world data
- If you are looking to deploy AI, make sure the host organisation is ready for it
- Understand the AI market is outside of the NHS's control
- Get close to the AI supply chain and make sure what is available is fit for purpose using Sandboxes
- Understand and communicate the motivations of early adopters (with examples)
- Understand and communicate the motivations of slow adopters (with examples)
- Enabling architects of AI should have empathy with the issues people face in the deployment of AI
Empathising with the Issues Facing People
AI cannot be brought into being within the NHS without teams of people, both inside and outside the NHS. If there is no empathy with the issues each may face, then AI implementations will suffer. The following sections give an overview of the issues facing:
1. Staff and Patients
2. Funders and Data Guardians
3. Risk Management
4. Technical Teams
5. Researchers
6. Developers
7. Procurement
8. Users
9. Planners
and background on each issue.
Staff and Patients
Understanding the value of AI
It is useful to think of AI as technology that can be used to classify or predict outcomes, e.g., cancer as shown on an X-ray, or the need to send a person a reminder. When you understand how wrong you can afford to be in that prediction or classification, you are better able to understand the value of AI in your work.
Once you have answered these questions, you need to ask yourself: what data have I got access to that will enable me to come to that conclusion? This is because AI is not an artificial intelligence; it is a set of techniques for looking at data and changing the way in which systems operate based on that data.
Using cancer identification on X-ray as an example, AI will analyse the pixels in the image and come to a conclusion on what it represents, based on what it has been trained to identify.
The Importance of Data
Development of AI is fully dependent on data. Gathering data that enables AI to make predictions or classifications from a model that is fit for purpose is a highly governed area. The Data Protection Act and GDPR place significant restrictions on what data can be used and for what purpose. As a consequence, before embarking on use, research or development, it is wise to start with an understanding of the implications and impact of AI on a use case.
The information governance teams who act as guardians of public data will want to consider evidence in data protection impact assessments (DPIAs) and compliance with the Data Protection Act, GDPR and ICO guidelines.
How do you train AI to classify or predict?
The answer is that you give a data scientist access to information. With that data, a scientist and their computer program will plot the features of the data and come to a conclusion as to what this means or predicts.
This is very much like plotting data on a graph and making predictions from it: you take one set of data, draw a line through it, and use the line to draw a conclusion. How wrong you can afford to be when drawing that line determines how wrong you can afford to be with your prediction or classification.
In the example of the cancer identifier, you would need a high level of confidence that the data accurately represents 'probably cancer' or 'probably not cancer'. The consequences and potential harm resulting from getting this wrong are significant compared to, for example, an analysis used to predict whether or not an individual is likely to attend a seminar.
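The notion of "how wrong you can afford to be" can be sketched as a confidence threshold applied to a model's output. The tasks and threshold values below are illustrative assumptions, not figures from any deployed system.

```python
# Illustrative only: a decision rule that tolerates different error rates
# depending on the stakes of the prediction. Thresholds are hypothetical.

def decide(probability: float, task: str) -> str:
    """Turn a model's predicted probability into an action.

    High-stakes tasks (e.g. 'probably cancer') demand high confidence
    before acting without a human; low-stakes tasks (e.g. seminar
    reminders) can tolerate being wrong far more often.
    """
    thresholds = {
        "cancer_screen": 0.99,    # refer to a clinician unless very sure
        "seminar_reminder": 0.5,  # a wrong nudge costs little
    }
    if probability >= thresholds[task]:
        return "act automatically"
    return "refer to a human"

print(decide(0.97, "cancer_screen"))     # -> refer to a human
print(decide(0.60, "seminar_reminder"))  # -> act automatically
```

The same model output (here 0.97, or 97% confidence) leads to different actions in different tasks, which is exactly the point: acceptable error is a property of the pathway, not of the AI.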
Where is AI technology in Wales?
- 'NudgeShare' is a technology to promote self-care. It uses local differential privacy and on phone machine learning to nudge app users towards self-care behaviours and to pro-actively seek regular contact with people in their care network.
- AutoFAQ is a technology that can ingest audio recordings and enable ICO-registered information processors to cluster and visualise the demand for GP services using NLP and NLU.
- LineSafe is an automated system for ensuring that a nasogastric tube is in the correct and safe anatomical position.
- RiTTA is a text chatbot. It provides real time answers to common questions at any time of the day or night. Importantly, these are conversations, with tailored answers, not just a directory of information.
- MineAct is a technology that identifies cognitive dissonance in a young person’s Minecraft chat and instigates Acceptance and Commitment Therapy via a text chat interface.
- Amazon Alexa, when used in an acute care setting, can suggest drug dosages to unidentified users.
- IBM Watson, which has been used to recommend actions that have a clinical outcome for COVID.
How AI is deployed
It is common practice for AI technology to be deployed within hardware and software. Increasingly it is deployed as a service. Users of the technology are given a licence to use it under terms. Consideration of those terms is important, particularly where the technology can adapt itself based on the data it is given access to.
Licensing AI’s use
Within the health and social care system of the United Kingdom, a common practice is to require that purchased technology is provided under an open-source licence. This ensures that users are not locked into particular technologies, or the support and development thereof. That said, your ability to do this will very much depend on whether you can influence how the AI is made. If an organisation is providing data to train AI, it is in a position to influence the terms of the AI licence.
It is useful to present AI in non-technical terms. One method is to present it within the themes that underpin health and social care service delivery. For example, below, AI developments are set against the Welsh Care Quality Framework.
Theme 1: Person Centred Care
Person centred care refers to a process that is people focused, promotes independence and autonomy, provides choice and control and is based on a collaborative team philosophy.
It is possible that AI could be used to understand people's needs and views and to help build relationships with family members, e.g., via voice interfaces. Humans, and therefore AI, can be trained to put such information accurately into context by comparison to holistic, spiritual, pastoral and religious dimensions. As a consequence, it is important that feedback be gathered from service users and staff on their confidence in the support provided by an AI system.
Theme: Staying Healthy
The principle of staying healthy is to ensure that people in Wales are well informed so they can manage their own health and wellbeing. In the main, it is assumed that staff's role is to nudge good health behaviours, by protecting people from harming themselves or by making them aware of good lifestyle choices. The care system is particularly interested in inequalities amongst communities, which subsequently place a high demand on the public care system.
AI is being deployed to nudge human behaviours that have a good effect on the length and quality of human life, such as smoking cessation, regular activity and exercise, socialisation, education, etc. This could be manifested as nudges staff provide to people based on service users' self-reported activity, or as automated nudges from electronic devices service users have access to, e.g., smart watches and phones.
At this time, 'wellbeing advice' from staff and AI solutions is an unregulated field. In Wales, healthcare staff aim to deliver clinical care that is pertinent to their speciality, as well as advising service users and signposting them to support that encourages positive health behaviours. This is embodied in the technique of 'making every contact count'. It is therefore appropriate to suggest that reports on any AI system's fitness for purpose account for the frequency at which wellbeing advice is provided alongside focussed clinical care and advice.
Theme: Safe care
The principle of safe care is to ensure that people in Wales are protected from harm and supported to protect themselves from known harm. In practice, the people best placed to advise on known harm are clinical and technical people who have had safety officer training. In UK law, AI system design and implementation needs to have an associated clinical/technical safety plan and hazard log (e.g., DCB0129 and DCB0160). System suppliers are responsible for DCB0129, and system users are responsible for DCB0160. The mitigations in a risk register should be manifested in system design or training material.
If a system is making decisions with no person in the decision-making loop it is likely to be classed as a medical device. In this case users should expect such a device to have evidence of compliance with UK MHRA and potentially UK MDR. The process required to demonstrate evidence of compliance is lengthy and can take between 2 and 5 years.
MHRA and MDR compliance are not sufficient as stand-alone assurances of safety. The fitness for purpose for users and clinical situations still needs to be evaluated by health and social care practitioners and should not be overlooked.
Theme: Effective Care
The principle of effective care is that people receive the right care and support as locally as possible and are enabled to contribute to making that care successful.
People have a right not to be subject to a decision based solely on automated processing (ref. Article 22 of the GDPR), so it is wise to ask whether they wish AI to make a decision about their care before using it.
AI systems that automate the storage or dissemination of clinical information (e.g., voice recognition systems) can be expected. The business cases to support service users within their homes, in a timely manner, and remotely will be compelling.
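The Article 22 point above can be sketched as a simple routing rule: unless the service user has agreed to a solely automated decision, the AI's output is treated as advisory and passed to a clinician. This is an illustration under that assumption, not legal advice, and the recommendation text is hypothetical.

```python
# Illustrative sketch, not legal advice: a gate that keeps a person in
# the decision-making loop unless the service user has agreed to a
# solely automated decision (cf. UK GDPR Article 22).

def route_decision(ai_recommendation: str, consents_to_automation: bool) -> str:
    """Return who finalises the care decision for a given AI output."""
    if consents_to_automation:
        return f"automated: {ai_recommendation}"
    # Default path: the AI output is advisory and a clinician decides.
    return f"clinician review of: {ai_recommendation}"

print(route_decision("step up physiotherapy", consents_to_automation=False))
# -> clinician review of: step up physiotherapy
```

Building the consent question into the pathway up front, rather than after deployment, keeps the human-in-the-loop default visible to both staff and service users.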
Theme: Dignified care
The principle of dignified care is that the people in Wales are treated with dignity and respect and treat others the same. Fundamental human rights to dignity, privacy and informed choice must be protected at all times, and the care provided must take account of the individual’s needs, abilities and wishes.
One of the main features of this standard is that the voices of staff and service users are heard. Service users have, on average, two generations of family who have grown up with human-to-human interaction and advice and see this as the norm; however, there has been an accelerating trend of people taking advice from search engines and via video. Like people, AI systems can be trained to undertake active listening, motivational interviewing, and to deliver advice with compassion and empathy.
AI methods to automatically understand the empathy, rapport and trust within such interactions are being developed.
Theme: Timely Care
To ensure the best possible outcome, people's conditions should be diagnosed promptly and treated according to clinical need. The timely provision of appropriate advice can have a huge impact on service users' health outcomes. AI systems are being deployed to give service users faster access to care services, e.g., automated self-booking systems and triage systems.
Theme: Individual care
The principle of individual care is that people are treated as individuals, reflecting their own needs and responsibilities. This is manifested by staff respecting service users' needs and wishes, or those of their chosen advocates, and in the provision of support when people have access issues, e.g., sensory impairment, disability, etc.
AI that understands the needs of individuals can be as simple as mobile phone technology that knows when people are near the phone and can be contacted, or that translates one language into another. In the main, this class of AI interprets a service user's needs and represents them. Such AI can be provided by a service, e.g., a booking system, or by the service user, e.g., a communication app or an automated personal health record.
Theme: Staff and resources
The principle is that people in Wales can find information about how their NHS is resourced and make effective use of resources. This goes beyond clinical and technical safety, governance and leadership; it extends to providing staff with a sense of agency to improve services. A health service must determine the workforce requirements to deliver high quality, safe care and support. As discussed in the Topol Review, a better understanding among staff of the strengths and weaknesses of AI is an issue that needs to be addressed.
One of the key challenges clinicians face when considering AI is what to focus resources on. Too often they are faced with AI purchasing decisions with no framework to understand the value, fitness for purpose or time-to-market issues associated with AI. Supporting staff to make decisions that synthesise service development requirements (identified through co-production between healthcare teams and service users) with evidence-informed AI applications has the best potential to optimise resources.
Funders and Data Guardians
Governing AI
Classifiers and prediction tools are usually embedded in hardware and software used in health and social care. The providers of that hardware and software are governed under two mechanisms: contract and health guidance/law. In the UK this guidance is provided by NICE and the MHRA. The permitted margin of error for a classifier or prediction algorithm in a given situation rests with the supplier who uses the algorithm and with the contract they hold with the NHS.
In situations where such AI technologies do not have a clinical outcome (for example, booking people onto a training course) the risk of harm is low. This means that the health and social care system does not attempt to police the development and implementation of such technologies. Conversely, where the potential for risk to humans is high, the policing is rigorous.
Depending upon where the risk to humans lies, this policing comes in the form of requirements to adhere to standards and to undergo independent assessment. In some cases, this mandates the provision of extensive evidence and research to support compliance with government agency guidance or law. The governance of standards adherence is nation specific. For example, AI technology that complies with UK regulation cannot automatically claim readiness for deployment in the United States, and vice versa.
Important questions about AI technology to consider include:
How much error or inaccuracy is acceptable? (How wrong can you afford to be?)
What are the possible consequences of inaccuracies or error?
What mitigating steps can be taken to reduce the impact or seriousness of any acceptable error?
What evidence is necessary to demonstrate that the AI is fit for purpose?
In the United Kingdom, the NICE guidelines on evidence are a very good way of understanding what information is needed based on the purpose of the AI you are faced with. A good starting point is to place the AI technology on an evidence tier according to its purpose.
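As a rough illustration of this tier-placement step, the sketch below maps a stated purpose to an indicative evidence tier. The purpose labels and the mapping itself are simplified assumptions for illustration only; the NICE evidence standards framework is the authoritative source.

```python
# Illustrative sketch: placing an AI technology on an evidence tier by purpose.
# The broad shape follows the NICE evidence standards framework for digital
# health technologies; this mapping is a simplified assumption, not guidance.

def evidence_tier(purpose: str) -> str:
    """Return an indicative evidence tier for a stated purpose."""
    tier_map = {
        "system service": "Tier 1: no direct patient outcome, lowest evidence burden",
        "information provision": "Tier 2: informs or supports care, moderate evidence burden",
        "diagnosis": "Tier 3: drives clinical management, highest evidence burden",
        "treatment": "Tier 3: drives clinical management, highest evidence burden",
    }
    return tier_map.get(purpose, "Unclassified: review the purpose against NICE guidance")

print(evidence_tier("system service"))
print(evidence_tier("diagnosis"))
```

The higher the tier, the greater the burden of evidence (and, as discussed later, the cost and time of compliance).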
The role of the Care Quality Commission
If an AI supplier is providing software as a service, and that service is a clinical service, it is most likely that they will need to register with the Care Quality Commission. A directory of registered suppliers is available.
How funding sources affect AI deployment
Staff and clinicians in the health and social care sector are encouraged to take an interest in how a potential AI is being funded. Different funding streams carry different advantages and disadvantages. Government agencies expect long-term returns on public money invested. Businesses can have the same expectation; however, firms often require a return on investment within two years in the form of profitable income. Venture capitalists have much higher expectations: this can be a 10x return on their investment within two years.
With AI solutions that fit into the NICE Tier 1 evidence framework, evidence of fitness for purpose can be self-reported; there is no need for independent verification. These types of applications attract all types of funding, as the return on investment can be within one year.
The more evidence that is needed, the longer the research and development process for a particular type of AI becomes. Lengthy development processes depend on more significant funds. Developments in AI that require Tier 2+ evidence tend to be funded by university research and development grants. When the evidence they provide enables the private sector to deploy a solution within around one year, it will attract private sector funding.
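To illustrate why these funder expectations matter, the sketch below converts a target return and timescale into the implied annual growth multiple, assuming simple compound growth. The figures used are the examples from the text; the function itself is a hypothetical illustration.

```python
# Sketch: what a funder's expectation implies as an annual growth multiple.
# Compound growth: target_multiple ** (1 / years) is the per-year multiple.

def implied_annual_multiple(target_multiple: float, years: float) -> float:
    return target_multiple ** (1.0 / years)

# Venture capital example from the text: a 10x return within two years
# implies roughly a 3.16x growth every year.
print(f"VC expectation: {implied_annual_multiple(10.0, 2.0):.2f}x per year")

# A firm seeking to double its investment over the same two years implies
# a far gentler growth curve.
print(f"Firm doubling: {implied_annual_multiple(2.0, 2.0):.2f}x per year")
```

The steeper the implied curve, the more pressure a funder will place on rapid deployment, which is why long evidence-gathering timescales and venture funding sit uneasily together.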
Unlocking IP
To ensure that private sector funding does not lock health and social care organisations out of other potential developments, it is important to have a formal agreement with private sector funders. Effective agreements are generally associated with the standard operating procedures of host organisations in the health and social care sector. However, most do not have access to guidance on managing the associated intellectual property. In this case it is useful to refer to the Lambert agreements on the IPO web site.
The choice of agreement will depend upon the board within an organisation. Many AI developments are conducted under Lambert Agreement 4.
Where to Build AI Models
Significant public monies are spent trying to bring data into large data resources. In practice this is a very difficult task, and it assumes that models need to be created centrally and distributed nationally. This assumption needs to be tested, when in fact it is often acceptable to build models locally and deploy them nationally. This technique of federated machine learning has a number of advantages for information governance, not least that data does not need to leave the boundaries of a health board's firewall to be useful in the development of AI.
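A minimal sketch of the federated idea: each site trains on its own data behind its firewall and shares only model parameters, which are averaged centrally in the style of federated averaging. The "model" here is deliberately a toy (just the mean of local values), and all board names and numbers are invented for illustration.

```python
# Federated model building, sketched: data never leaves each board; only
# locally computed parameters are shared and combined.

def local_fit(data):
    """Toy 'training': the local model parameter is the mean of local values."""
    return sum(data) / len(data)

def federated_average(local_params, weights):
    """Weighted average of locally trained parameters (FedAvg-style)."""
    total = sum(weights)
    return sum(p * w for p, w in zip(local_params, weights)) / total

# Three boards, three private datasets that never leave their sites.
boards = {"Board A": [1.0, 2.0], "Board B": [3.0], "Board C": [4.0, 5.0, 6.0]}
params = [local_fit(d) for d in boards.values()]     # shared: one number each
weights = [len(d) for d in boards.values()]          # shared: dataset sizes
print(federated_average(params, weights))            # 3.5, the global mean
```

Only the parameters (here, three numbers) and dataset sizes cross the firewall; the combined result matches what central training on the pooled data would produce, which is the information-governance advantage described above.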
Risk Management
Risk and AI
When dealing with health and social care, staff have a duty of care to be conscious of the risks associated with both treating and not treating a condition. In essence, if you multiply how wrong you can afford to be by the risk of harm, you understand how important it is that your classifier or prediction is correct.
The Role of Clinical Safety Officers
An effective way to understand the impact of technologies on clinicians, users and clinical practice is to comply with, and self-certify against, the Data Coordination Board's standards using Clinical Safety Officers; in particular, DCB0129 and DCB0160.
These standards are useful because they address the risk of harm and its mitigation. In considering the risk of harm, AI users find themselves considering the implications of system design for any person or persons who may be affected by it. This opens up conversations about the use of data, inferences, how they are created, and the implications of decisions relating to AI use. If a supplier is to self-certify against DCB0160, they need to follow procedures which are very similar to DCB0129.
In the absence of a CE mark, both of these standards are helpful. It could be argued that even where a CE mark exists, evidence of an AI supplier and user applying both of these standards illustrates a commitment to the safe use of AI in health and social care. In UK law they are a mandatory requirement of the design, provision and monitoring of an IT system within the health and social care setting.
To self-certify against DCB0129/0160, a Clinical Safety Officer needs to be in place. Clinical Safety Officers are clinicians who have been trained in the effective implementation of these standards. The cost of such training is very low.
The value of a CE mark
If a product has CE marking, understanding its intended use is very important. The duty of care for how technology is used in health and social care sits with the user, not the supplier. It is therefore important not to assume that, because a supplier has a CE mark, the technology is certified for the purpose you wish to apply it to. Which firms carry the CE marking, and for what purpose, can often be found on the MHRA web site.
Technical Teams
Data Classification and Protection
Data about people and their lives has never been more readily available. Historically, people could only analyse coded data (e.g. diagnoses, symptoms, test and procedure results); nowadays they can easily cross-reference this with patient-reported data. Analysis of data on people's lives, clinical and social encounters, family and work history, and genetics is now possible. These data sources can be analysed to predict health and social care trends and individual outcomes. Data needs to be classified and protected as a valuable asset.
Speeding up Life Science, Biomedical and Service Analytics
AI has the potential to accelerate academic research and optimise scientific processes. The ability to swiftly analyse and visualise biomedical data is improving triage and decision making. Swifter sharing of this data has made pan-regional decision making more effective. Coupled with developments in cyber security, data sharing inside and outside of health and social care has become easier and safer. The ease and speed with which one can obtain and analyse data and draw insights improves the management of services and health outcomes. Pathway management, and the systems within it, create a myriad of data sources that can be used to improve the efficient, safe and compliant delivery and development of health and social care services.
People in the AI Value Chain
AI is dependent upon mathematics, statistics, computer science, computing power and a human's will to learn. Humans involved in AI have many titles and skills. They vary from:
- Front line workers (nurses, doctors, allied health professionals, administrative staff etc) who gather, analyse, and control data relating to human interactions and use of management systems
- Life Scientists who manage and research laboratory facility and clinical data
- Epidemiologists & statisticians who look at public health
- Data scientists who look at data analytics, modelling and systems analysis
- Technology experts skilled in developing and implementing electronic patient/hospital records systems and specific data types (images, text, coded data, voice etc)
- Management who use methodologies and communication tools to engender system use
Optimising Organisational Outcomes
The NHS and social care sector have over 1.5 million staff, each with their own systems. The myriad of providers and programmes means that data is heavily siloed. In practice, few organisations have data of sufficient quantity and quality to make centralised AI systems a realistic option. The potential for federated AI development is significant. The opportunity to integrate data and AI model development is therefore a realistic current challenge.
The opportunities to improve workflows, predictive analytics, visualisations, data search, data tracking, curation of data, and model development and deployment are apparent. All of these could improve the efficiency and effectiveness of existing and arising health and social care outcomes. The opportunities to manage resources flexibly, dynamically and in real time will increase using AI.
An expanded data and computer science service
This will require the increased use of containerisation of AI models and pathway systems, and the development of microservices, cyber security systems, image processing, natural language processing, voice model development and curation, graphical processing, web development, etc. The delivery of these using agile techniques, by people trained to understand complex and black-box algorithms, will be necessary across a variety of data sources. It will be increasingly necessary to track data from source, re-use and cross-reference health and public domain data, create and curate ground truth data for pathways, and generate and track insights as far as they affect pathways.
Developments in data analysis & the importance of data stewardship
The health and social care system has historically relied upon clinical trial and survey data. Increasingly it is using structured electronic health record data to predict and classify health outcomes. Due to the increasing accuracy and commoditisation of natural language tools, linking structured data to inferences drawn from text using text analytics will become widespread. Cross-referencing these with dispensing records and service funding planning will increase the capability of value-based healthcare analysis.
Cross-referencing such data with analysis of patient discussions has the potential to improve understanding of the effectiveness of interactions, and opportunities for more effective triage. The move towards cross-referencing health data with real-world social data (wearables etc.) is common practice in private healthcare economies and can be expected to gain momentum in the UK. Sensitive data stewardship and privacy-protecting techniques will be increasingly demanded by government agencies tasked with information governance. Increased due diligence around the use of online monitoring can be expected, particularly when techniques used in the advertising sector for population and retail consumer analysis become widespread in the monitoring of prescription compliance and dispensing. The formal assurance of methods and techniques in this domain can be expected when they affect services that may have a clinical outcome.
The changing world of bioinformatics
The cost of generating and analysing biological data has reduced significantly. The ability to scale and manage data often sits in research establishments and universities, who have government-funded state-of-the-art infrastructures, cloud computing, security testing, and students to analyse health and social care themes. The opportunities for local clinicians and carers to direct how data is analysed, tagged and structured for local health outcomes are clear. In Wales this has manifested itself in the local data analytics groups within the National Data Repository project.
Researchers
The role of IRAS and HRA
On occasions when the deployment of a Tier 2+ AI is expected, it is most likely that formal research will be required. This often takes the form of local trials. To ensure that the success and failure of such trials can be effectively reported, it is worthwhile registering trials with IRAS and the HRA. Registration is a requirement of academic research applications and of coverage within publications in the health and social care space.
When to start evidence gathering for MHRA
A challenge at this time is the paucity of notified bodies able to independently demonstrate compliance against medical device legislation. Suppliers of AI technology can expect to wait over a year to gain access to a notified body capable of confirming compliance. If a supplier of AI technology waits until such a body is available before beginning to gather evidence of appropriateness, the resulting delay in bringing the technology to market could be devastating to the business case for its use.
This is important because what constitutes technology that needs to be governed by the MHRA and NICE can be expected to change and become more encompassing over the next couple of years, when medical device legislation begins to cover all but a few areas of health and social care.
The cost of compliance
In synopsis: if the AI has an impact on the outcomes of health and social care, it is most likely to fall into one of the NICE DHT categories. The burden of evidence in each category increases exponentially as you move from Tier 1 to Tier 3. Tier 1 products are self-certified. In practice, unless you already have a CE mark for Tier 2+ technologies, AI technology needs to provide evidence to what is known as a notified body, which will indicate whether the AI complies with the evidence required. The overarching body in the United Kingdom that manages the demand for the provision of evidence is the MHRA.
In 2020, the cost of compliance associated with providing this evidence ranged from approximately £0.5 million for a Tier 1 product to £2-5 million for Tier 2 and 3 products. For a Tier 1 product the timescale to evidence compliance can be between six and twelve months; for a Tier 2 or 3 product it can be between two and five years.
The cost of compliance is an important subject for all staff in health and social care. This is not because they incur such costs directly. The primary reason is that suppliers of AI-enabled technologies need to show evidence of compliance, and the time and effort of health and social care staff in the creation and collation of that evidence needs to be considered when developing or accepting a business case.
Reducing the cost of compliance
In academia, the technical time and cost associated with developing AI for just one part of a pathway can be three years and £0.5-1 million. The cost of testing, validating and supporting the technology often falls to the part of the supply chain nearest to the end users of the research. This is often too late. Many research and development programmes are instigated without involvement of end users and safety officers from the outset. A common pitfall is to apply for funding, or be given funding, to produce a proof of concept without addressing from the outset the technical and clinical risks associated with the potential implementation of the AI. If research is to be realised as a developed product, gathering compliance data from the outset is helpful to researchers and the clients they seek to support.
Developers
Who should lead AI Development?
There are many different types of people involved in the research and development of AI solutions: academic researchers; specialists in particular technologies such as text, image and sound analysis; mathematical modellers; data scientists; and computer scientists.
Research should be led by health and social care sector staff potentially affected by its deployment. They are often very aware of the actual problems that need to be solved and whether those problems are enduring. The time commitment needed to design, develop and provide supporting evidence for AI solutions means that short-term problems are not an optimal focus for AI technology.
Safely developing AI user business cases in a Sandbox
Over and above the general requirements for the provision of IT systems within health and social care, perhaps the most important facet of using AI is effective use of a sandbox. A sandbox is a place where data is held in a form that cannot be cross-referenced to identify members of the public. With this data in either closed or open repositories, AI providers can test and prove the efficacy of their solutions. Efficacy is best assessed in situ, using mirrors of the systems used in practice.
Simply validating the accuracy and appropriateness of an algorithm outside a mirror of the pathway in which it is to be used is misleading. Algorithms behave differently when they are linked together in a system, and they behave differently within different computer systems: they are often vehicle dependent. If a user is to understand the validity of using an AI system within a pathway, it is more meaningful to test it in practice with real-world data, within a specific pathway, using the specific technology used in that pathway.
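The point that chained algorithms behave differently from algorithms validated in isolation can be illustrated with a minimal sketch. The models, data and thresholds below are entirely hypothetical, invented for illustration only; the sketch simply shows how a triage model can route cases to a second model outside the range on which that second model was validated, something only end-to-end testing of the pathway would reveal.

```python
# Hypothetical sketch: two models that each look fine alone can
# fail when chained in a pathway. All names and numbers are invented.

def triage_model(symptom_score):
    # Model A: flags patients for imaging when the symptom score is high.
    return symptom_score >= 5

def imaging_model(image_quality, symptom_score):
    # Model B: assumed to have been validated only on patients with
    # symptom scores >= 7, so results below that range are unreliable.
    if symptom_score < 7:
        return None  # outside the validated range
    return image_quality > 0.8

# End-to-end test with pathway-like data, not each model alone.
patients = [(6, 0.9), (8, 0.85), (5, 0.95)]  # (symptom_score, image_quality)
unreliable = 0
for score, quality in patients:
    if triage_model(score):
        if imaging_model(quality, score) is None:
            # Model A routed a patient Model B was never validated on.
            unreliable += 1

print(f"{unreliable} of {len(patients)} routed patients fell outside Model B's validated range")
```

Validating each model on its own data would not surface this mismatch; only running the chained pathway does.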
Procurement
The Role of Procurement
It could be argued that unless procurement departments take an active role in the design and development of AI, changes to existing custom and practice will not happen. An erroneous assumption is that you can buy an AI product that will work independently in a pathway. At best, one is buying integrated assets and services to which one can, and should, apply KPIs. When related to health and social care services, these KPIs can be tied to outputs. Each solution is likely to have different KPIs depending on how the AI technology is applied in practice. The best people to set these KPIs are safety officers and end users.
Continuous monitoring
Being locked into, or out of, a technology when the rate of change in AI is so great is unwise. Short-term contracts with very specific KPIs are advisable in this situation. The fact that ISO standards and medical device legislation recommend continuous monitoring is helpful to AI system buyers. Using continuous monitoring with the option of contract extensions is a more effective way of engaging AI suppliers than committing to one particular technology platform or a long-term development contract.
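Continuous monitoring against a contractual KPI can be sketched very simply. The KPI below (rate of agreement between the AI and a clinician over a rolling window) and the threshold are assumptions for illustration, not a prescribed metric; the point is that a user-defined KPI, checked continuously, gives a concrete trigger for escalation or contract review.

```python
# Hypothetical sketch: rolling-window monitoring of a deployed AI
# against a user-defined KPI. The KPI and threshold are invented.
from collections import deque

WINDOW = 5        # review the last 5 outcomes
KPI_FLOOR = 0.8   # assumed contractually agreed minimum agreement rate

recent = deque(maxlen=WINDOW)

def record_outcome(ai_agreed_with_clinician: bool) -> str:
    recent.append(ai_agreed_with_clinician)
    rate = sum(recent) / len(recent)
    if len(recent) == WINDOW and rate < KPI_FLOOR:
        return f"ESCALATE: agreement {rate:.0%} below KPI floor"
    return f"ok: agreement {rate:.0%}"

for outcome in [True, True, False, True, False, False]:
    print(record_outcome(outcome))
```

In practice the outcomes would come from routine clinical review rather than a fixed list, and the escalation step would feed the risk register and contract-extension decision.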
Sellers of AI and how to engage them
Sellers to the health and social care sector sometimes present AI as a solution to as-yet unidentified problems. Such offerings should not be the primary concern of people in health and social care, and organisations that promote AI for problems that are not enduring should be avoided. In addition, it is highly advisable that clinicians are not drawn into validating individual models for suppliers. The reason is that while an AI model can be fit for purpose in one case, when such models are joined together they may behave differently from how they operated when first validated.
If staff in the health and social care sector are drawn into discussions with existing suppliers of technology, it is better to focus on understanding whether an AI model or technology is fit for purpose in a pathway. To test whether AI is appropriate for a pathway, it is highly advisable that this is done with data that reflects real-world situations and with the people who would be directly affected by the AI's deployment. Having a safe area where technology is tested within a pathway can significantly reduce the burden on staff: in essence, it places the burden of proof of fitness for purpose on suppliers, not on the health and social care system.
Managing risk
An important feature of developing an AI-enabled pathway is understanding any potential clinical risks that arise. These clinical risks are usually associated with how users would interact with the AI. Other risks are generally covered by guidance notes and standards that technology providers will have access to. Asking a supplier to evidence their compliance with the DTAC guidance is a useful process to go through before engaging engineers.
An effective way of managing the technical and clinical risks associated with an AI solution is to engage the services of a clinical safety officer. If your organisation is deploying IT, a clinical safety officer will already exist: as previously mentioned, this is a legal requirement under DCB0129 and DCB0160.
Involving such an officer early in design, development and research reduces the cost of potential implementation. In particular, the process of managing a risk register enables researchers and developers to gather information on what makes an AI fit for purpose. If a risk register is also prepared with potential end users, many of the difficulties associated with implementing AI technology are reduced. Gathering such evidence from the onset of a research and development project will increase its value. It can be expected that all AI will eventually need to mitigate risks, and evidence real-world benefits, irrespective of whether it is classified as a medical device at this time.
Supplier standards
Suppliers who do not comply with, or who are not working towards, ISO 13485 and IEC 62304 cannot be expected to provide the continuing real-world evidence of fitness required by software-as-a-medical-device legislation. Organisations that comply with such standards are able to develop technology that achieves CE status. The acceptability of a CE mark will not be diminished following Brexit; however, suppliers of AI technology will need to register their devices, as appropriate, with the MHRA.
Being an Effective Data Guardian
If you are faced with an AI supplier that is developing its own software using data for which you are the guardian, it is important to consider the implications of its use against the Data Protection Act and GDPR, both of which place a duty of care on the health and social care system. This duty is to make sure that where technology is applied to decisions about the public, they are aware of, and agree to, those decisions, and have the right not to have algorithmic decisions made about them. How data is processed by AI software and hardware should mirror the data privacy terms of service of the health and social care provider using it. If it does not, then the contract for its use should stipulate which privacy agreement takes precedence over the other, and how.
Users
AI Innovation starts in Universities
Most frequently, data is required to design and test a model that makes a classification or a prediction. Research and development organisations are the primary producers of such models. Currently, most AI models are developed by people with a PhD in a particular technical discipline. As PhDs are most often found within academia, universities often lead on the development of new and emerging AI. It is advisable that users direct research around the systems and pathways they, and their organisation, are committed to using. When such innovations become proven or commodity in nature, MSc Computer Science students are frequently used to build AI into systems used within pathways.
Using Real world data to validate pathways of AI, not just AI models
The data needed to prove the efficacy of an AI model will most likely change when the AI is embedded in a system. How such data may change over time is worth considering early in the innovation process. This is particularly important if one is considering joining models together, as their performance may change when they sit in a pathway of systems. It is for this reason that users are encouraged to validate models against real-world data within end-to-end testing of AI-enabled pathways.
The impact of compliance regulation on AI users
The amount of effort needed to apply AI in health and social care varies. When planning to use AI, it is useful to segment suppliers in terms of where they are in the cycle of showing evidence of compliance, where they want to be, and therefore the data demands placed on your use of the technology.
Evidence of compliance often means showing a statistical link between a technology's purpose and the outcomes associated with that technology's use. For technologies that sit at NICE DHT Tier 3 and above but have not yet been classified as such, the supplier will want to show a comparison of the technology against existing practice. In some cases, this may involve formal research and potentially randomised controlled trials. In the latter case, use of the technology should not be considered before a research/evidence proposal is logged with the MHRA and HRA.
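The "statistical link" a supplier must evidence can be pictured with a toy comparison. All numbers below are synthetic, invented for illustration only; the sketch uses a simple two-proportion z-test to compare good outcomes with the technology against existing practice, which is a much weaker standard than the formal trials such evidence would really require.

```python
# Hypothetical sketch: comparing outcomes with a technology against
# existing practice. The counts are synthetic, for illustration only.
from math import sqrt

with_ai = (180, 200)   # (good outcomes, patients) using the technology
standard = (150, 200)  # (good outcomes, patients) under existing practice

p1, n1 = with_ai[0] / with_ai[1], with_ai[1]
p2, n2 = standard[0] / standard[1], standard[1]

# Pooled two-proportion z-test.
pooled = (with_ai[0] + standard[0]) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

print(f"difference {p1 - p2:.0%}, z = {z:.2f}")  # |z| > 1.96 suggests a real difference
```

A real submission would rest on a registered study design, not a single test like this, but the underlying question is the same: is the observed difference in outcomes larger than chance would explain?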
The impact of users on AI
AI technology will need to comply with the standard operating procedures of internal departments, e.g. procurement, information technology, informatics, information governance, intellectual property, health and safety. A common error when applying AI is to assume that because it may be free, or hosted offsite, it does not have an associated cost. This cost may fall on internal departments. Understanding what this cost is prior to implementing AI in a pathway is highly recommended.
AI will have an increasing impact on the use of human resources within health and social care settings. The opportunities to automate decision-making are becoming apparent to those involved in managing people and digital transformation. An understanding of the human cost and benefit of implementing AI will become increasingly important. It is advisable to begin discussions on the use of AI with the people responsible for these activities before decisions are made to bring technologies into practice.
User Roles in Medical Device Regulation (MDR) Compliance
An important facet of MDR is the continuous monitoring in place to ensure compliance. User-defined KPIs will be useful when applying AI that is subject to medical device regulations. The main facet of this compliance is evidence that users understand whether an AI system is fit for purpose and safe, much like the pharmaceutical Yellow Card Scheme. In Wales, managers can give users, and those affected by AI, a sense of agency over its use by continually measuring its success in a pathway against the tenets of the Welsh Care Quality Framework.
Planners
All AI needs a Home and to survive contact with end users
It is good practice to contact the heads of the IT, informatics, information governance and safety departments before embarking on a development. Getting involved in the development of AI only to find it does not have a home in your own organisation's customs and practices is a very frustrating experience for all concerned. Before committing to AI initiatives it is wise to collect evidence about:
- What is the enduring need for the AI?
- Does the AI have a clinical outcome?
- If there is a potential clinical outcome, who is the supplier's/developer's clinical safety officer, and who is the equivalent internally?
- If the AI has a clinical outcome, what is the evidence showing it is fit for purpose and that risks are mitigated?
- Who has validated the AI is fit for purpose in a pathway, or a process?
- What happens to people if the AI fails?
- Does the MHRA classify the AI as a medical device, and if so, what type?
- Does the AI make autonomous decisions about what happens to people?
- Are users able to explain autonomous decisions to the people affected by them?
- Do people have the option of not being judged by the AI?
- What systems in a pathway does the AI need to operate with?
- What data are you providing to the AI system to maintain its accuracy?
- What evidence exists to show the AI is compliant with legislation and local needs?
- What evidence is there that the technical costs of maintaining the AI are realistic?
- What evidence is there that the compliance costs of maintaining the AI are realistic?
- Who is responsible for training the AI?
- Who is going to be the guardian of the data used to train the AI?
- Who is going to validate the AI is fit for purpose once it is deployed?
- How often are you going to review if AI in a pathway is fit for purpose?
- What KPIs will be used to measure AI fitness for purpose in a pathway?
- Who can switch the AI off if they deem it not fit for purpose?
- Under what information governance and commercial terms will you share data?
- Under what commercial terms will resulting IP or benefits be shared?
This evidence should be provided by clinical/social care end users and approved by their management, senior management team and the board. Procurement departments should provide advice on how training the AI can be used to decrease the cost of use, and should be mindful of opportunities to mirror end-user KPIs in contractual KPIs.
Depending on the answers to the above questions, health and social care organisations can prioritise the resources targeted at AI ideas, products, development and research.
The resources targeted at an AI's use will depend on the evidence needed to prove its value in a pathway. When developing a portfolio of AI projects this is an important consideration. The time taken to get to User Acceptance Testing relates directly to the cost of bringing the AI to market. Understanding this cost will have a direct impact on plans to share risk with those in the AI supply chain, e.g. academic establishments and AI firms.
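The checklist above can also be kept as structured data rather than prose, so that unanswered questions are visible at a glance before a business case proceeds. The field names and answers below are hypothetical, a minimal sketch of the idea rather than a prescribed template.

```python
# Hypothetical sketch: the pre-commitment questions recorded as
# structured data, with None marking questions not yet answered.
# Field names and example answers are invented for illustration.
checklist = {
    "enduring_need": "Sustained backlog in routine image triage",
    "clinical_outcome": True,
    "clinical_safety_officer": None,   # unanswered: must be resolved
    "mhra_classification": None,       # unanswered: must be resolved
    "autonomous_decisions": False,
    "kill_switch_owner": "Head of Informatics",
}

# Any None answer blocks the business case until resolved.
unanswered = [question for question, answer in checklist.items() if answer is None]
if unanswered:
    print("Do not proceed. Unanswered:", ", ".join(unanswered))
```

The same record can then travel with the project, so the board, procurement and the clinical safety officer all review one shared set of answers.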
Where to Learn More
Learning about AI
The NHS and local authorities are largely locked into IT systems from proprietary suppliers. These suppliers often have their own AI tools, services and learning material. Organisations such as technology hubs (e.g. TEC Cymru, the Life Sciences Hub, NHSX, GovTec, academic labs) and government agencies hold events to promote progress in AI's use.
Where to go for help
This depends on your role. If you are a member of a health and social care organisation, you are most likely best advised to concentrate on identifying enduring problems with your management team. This group should then approach your IT/informatics teams to understand what pipeline of projects they are prepared to support. The standard operating procedures for IT, informatics, information governance, cyber security and so on will help to ensure that AI has a home when it arrives within the health and social care system.
Early involvement with Health Technology Wales will help to focus thinking on the potential assessment criteria for AI that needs to comply with NICE guidelines. Direct contact with the MHRA will help you understand upcoming changes in legislation and their impact on the health and social care setting.
Trello Resources List as URLS