Dear Stubbsie
Below is a letter I wrote to my friend Stubbsie, a paramedic and a fabulous, famous man.
My local health board (ABUHB) asked for these views before Mr Stubbs did, but given the need to start a debate on what AI is and could be in the NHS, it appears worthwhile sharing them now...
Dear Mr Stubbsie
The success or failure of AI in the NHS is dependent upon people who have empathy and curiosity. People who will lead the design, development and deployment of AI, not wait until it is presented to them; people who prefer action rather than talk; people who care about people, not kudos. People like you.
I have broken the themes you could consider into:
- Big Issues
- A Framework for Assessing AI in Clinical Terms
- Governance
- AI's potential impact on the NHS
- The NHS' potential impact on AI
- Where to learn more, and five take-aways to bear in mind when you are learning more.
Good luck
___________________________________________________
BIG ISSUES
AI in the NHS
AI is going to change the NHS. But the acceptable level of change depends on one's risk perspective, motivations and resources. It is much like making a change to a moving ambulance, where the change can be as big as replacing the paramedics with robots or as small as washing the vehicle. It is possible. But it is going to be problematic: because who wants to slow down an ambulance?
Risk Mitigation
Using the ambulance analogy: there are many changes one could make to an ambulance. Some are very risky, like replacing a paramedic with a robot. Replacing an ambulance with a new type of ambulance is less risky. Cleaning an ambulance has even less risk. This said: you would do none of this while the ambulance was in motion.
Suppose you are going to make a change to an ambulance. It is probably best done in a workshop. We AI folk have a fancy name for these. We call them sandboxes: places where you can tinker, and if you break something, it does not matter.
Certain people need to be confident that a change to an ambulance is safe. They usually have a checklist, and lists of hurdles suppliers of kit need to comply with. We AI folk have similar people. We call them Clinical Safety Officers, who work to a standard. Their checklists are called DCBs, DHT and DTAC.
Much like with an ambulance, you have a checklist to sign an ambulance out of the workshop. We AI folk have similar lists. We call them software licences and agreements, like the AGPL and Lambert agreements (for when non-NHS folk use our stuff).
It is a terrible idea to tinker with all your ambulances at the same time in your workshop, so you have to have some way of choosing what to tinker with and why: a job list. We have the same kind of list. But we have a posh name for it. We call it a portfolio, which we manage.
Models of Adoption
Models for adopting AI in the NHS vary. In the main, it tends to happen by stealth. An AI supplier will propose a non-AI system. They then gather user data and systems data. They draw insights and then sell those insights back to the NHS as a service. This is akin to having a speed monitoring system on your ambulance that monitors how fast you go on/off a 'blue light', then having the data sold to the ambulance service as insight. It tends to go down badly.
We have a model of adoption, which we call the 2022 framework.
In it, the NHS (the AI Funder) is prepared to engage us and accepts specific roles concerning the management of the AI development process. This is akin to an ambulance station taking control of a new vehicle fully aware of its value and maintenance schedule. In this framework, AI Users (typically NHS staff and sometimes the public) have various roles. Most importantly, they (not just NHS management) understand the value of the AI they are involved with developing and manage its deployment and continuous monitoring. In our model, they can also switch the AI we develop off. Much like giving a paramedic the power to say whether they will or will not travel in the ambulance given to them. A key role as an AI supplier is to be transparent about the management and protection of data.
As an AI supplier, our other roles include: working with the other suppliers that use the data and insights we use and generate; ensuring a data protection and GDPR statement is in place before we begin an AI development, and keeping it up to date; and guiding NHS buyers and users through compliance procedures, reducing the technical risks and associated costs of the systems and methods the AI we develop depends upon. In practice, this role is akin to an ambulance supplier posting engineering staff to your station and working with your teams to make sure their vehicles are fit for purpose and nasty surprises are minimised.
On the subject of nasty surprises: what are they, I hear you ask. Well. Would you believe it? Some people think AI is a humanoid-looking robot, and they fear putting their lives in the hands of such creatures! It is often better to describe AI as what it is, i.e. advanced computerised guessing, which is regularly linked to a computerised system or systems used to deliver help in a care pathway. It is much like the ABS that stops an ambulance from skidding when the driver brakes hard.
Other nasty surprises occur when the AI gets something wrong (and it always will), and you cannot switch it off or adapt it without a high cost. This is often in the NHS's direct control. It is better to buy AI under a service KPI than as a product under a long-term licence. It is wise to think of AI as a system that can and should change over time; the longer one uses AI, the wiser it should become. But, much like with people, this is not always the case. So contracting with AI suppliers as one would with short-term contract labour (with well-defined KPIs) is good practice.
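If it helps to make the 'service KPI, not long-term licence' point concrete, here is a minimal sketch in Python. Everything in it (the KPI names, thresholds and the switch-off rule) is an illustrative assumption, not a real NHS system:

```python
# A minimal sketch (not a real NHS system) of buying AI under a service
# KPI rather than a long-term licence: the buyer measures the service
# continuously and can switch it off cheaply. KPI names, thresholds and
# observed values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ServiceKPI:
    name: str
    threshold: float   # minimum acceptable value, agreed in the contract
    observed: float    # latest value from live monitoring

def review_service(kpis: list[ServiceKPI]) -> bool:
    """Return True if the AI service may keep running this period."""
    breaches = [k for k in kpis if k.observed < k.threshold]
    for k in breaches:
        print(f"KPI breach: {k.name} at {k.observed} (agreed {k.threshold})")
    # The contract, not the supplier, decides what a breach means:
    # here, any breach pauses the service pending review.
    return not breaches

kpis = [
    ServiceKPI("triage_sensitivity", threshold=0.95, observed=0.97),
    ServiceKPI("monthly_uptime", threshold=0.99, observed=0.985),
]
if not review_service(kpis):
    print("Service paused: the paramedic refuses the ambulance until it is fixed.")
```

The design point is that the off switch and the measurement sit with the buyer, not the supplier.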
Empathising with the Issues Facing People
To be a good manager of an ambulance station, you need to understand people's roles and responsibilities, and what good practice looks like. As and when you come across people involved in the design, development and implementation of AI, the following descriptions of personas may help you understand what those people need and the issues they face.
Staff and Patients
AI Sandboxes
These are safe places where you play with AI to see if and how it can be broken, with real data and real users. It is much like when somebody gives the ambulance service a new vehicle to test before buying one. No technology survives contact with people and data, and ideally these sandboxes should have both. As with an ambulance, if you can get access to the people who specify, develop, test, make, supply and buy them, you can influence what is produced. The same can be said of AI. The most effective scenario is that local health board management, clinical and IT teams set up their own AI sandboxes with academics and local firms to drive AI developments.
If you were not involved in the design of an ambulance you would end up using, you would be circumspect about that ambulance's design. So it is with AI. AI developments should be clinically led, with pre-defined key performance indicators (KPIs). These developments would have support from the end-user up to board level. They would also have Intellectual Property (IP) and Information Governance (IG) agreements in place, and a model explaining how data is shared in line with Information Commissioner's Office (ICO) guidelines. Not having these agreements and models in place would be like saying to you, 'Thanks for your help designing this fab new ambulance. You and your health board are not going to receive a penny. Copyright of your ideas is not going to be attributed to you and your health board.' (which is against the law)
AI Standards
Imagine you were designing a new type of vehicle that had never been tested. You would probably end up developing your own standards, or applying existing ones to this new vehicle where appropriate. This is akin to what is happening at the moment in AI. AI compliance is a new and changing field. So standards are changing rapidly. Organisations like the government, the MHRA and NICE have different evidence criteria and standards, and they are trying to decide what fits.
There is some good news. Standards that directly affect AI are DCB 0129 and DCB 0160. These require that all IT projects have a trained Clinical Safety Officer in place. Such officers control the creation and release of a clinical risk/hazard and mitigation log that users need to maintain. These logs are a bit like the manual you get when you buy a vehicle, showing you what to do when things go wrong: like a tyre puncture.
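To make the hazard log idea concrete, here is a minimal sketch in Python. The standards define a process rather than a file format, so every field name and the severity x likelihood scoring below are illustrative assumptions:

```python
# A minimal sketch of a DCB 0129/0160-style hazard log. The standards
# define a process, not a file format, so the field names and the
# severity x likelihood scoring below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Hazard:
    hazard_id: str
    description: str   # what could go wrong
    harm: str          # who could be hurt, and how
    severity: int      # 1 (minor) .. 5 (catastrophic)
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    mitigation: str    # the design change or training that reduces the risk

    @property
    def risk_score(self) -> int:
        # A simple severity x likelihood matrix, as commonly used in
        # clinical risk management.
        return self.severity * self.likelihood

log = [
    Hazard("HAZ-001",
           "Triage model misses a deteriorating patient",
           "Delayed escalation of care",
           severity=4, likelihood=2,
           mitigation="Clinician reviews every 'low risk' output; service can be switched off"),
    Hazard("HAZ-002",
           "Model output displayed against the wrong patient record",
           "Treatment decision based on another patient's data",
           severity=5, likelihood=1,
           mitigation="Patient ID check before display; mismatch blocks the screen"),
]

# The riskiest hazards surface first, like the worst faults in a manual.
for h in sorted(log, key=lambda h: h.risk_score, reverse=True):
    print(h.hazard_id, h.risk_score, "-", h.mitigation)
```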
A similar standard is the NICE Digital Health Technology Evidence Framework (NICE DHT). In essence, it says that for a particular pathway, one should ensure evidence exists about a particular digital health technology, e.g. AI. The level of evidence relates to whether the AI aims to have a clinical outcome or not. There are three tiers in NICE DHT. The lowest tier (Tier 1) has evidence requirements that can take up to six months to gather. The highest (Tier 3) can take between two and five years to compile. These standards are comparable to the measures the government places on any medical provider. The main difference is that, in practice, NICE Tier 1 evidence can be gathered relatively simply; higher tiers, however, call for a collaboration between the NHS, academia and industry.
Other relevant standards are covered in the Government's Digital Technology Assessment Criteria (DTAC); this operates a bit like an MOT testing schedule. Unlike an MOT, DTAC is applicable to new AI as well as old AI.
Commercialisation
The commercial strategy for selling/buying AI should depend upon your power. If you are involved in designing and developing AI, then you are in a position of strength. You can negotiate a good deal. If you wait till after the AI is developed, then your power is reduced. Using the ambulance design analogy: If your design is built with your parts, you would negotiate a good deal and licence its use. So it is with AI. But in this case, the parts are data you share to create the AI and your time used to interpret the insights and predictions the AI makes.
It is appropriate to suggest that NICE DHT Tier 1 AI be released under a licence that does not preclude or prejudice its use: e.g. an MIT licence, which enables the private sector to embody the AI in their solutions and sell it, or a copyleft licence like the AGPL, which prevents it being re-sold as a closed product. This is a bit like allowing another ambulance service to use your ambulance design but precluding them from selling it back to you.
NICE DHT Tier 2-3 AI is best disseminated under a non-exclusive Lambert agreement, because such AI projects often involve partnerships with academia and industry. Lambert agreements are template documents freely available from the Intellectual Property Office's website. This is much like saying: you can sell the ambulance design, but my health board gets some revenue back from each deal.
Over and above the clinical risk of bringing AI into use, there is a cost risk. The principal cost risk is the time and effort associated with bringing a NICE Tier 1-3 solution to bear within a pathway. For a NICE DHT Tier 1 service, this can be around £0.5 million; for a Tier 2-3 product, it could be between £2 and £5 million.
AI Portfolio Monitoring
AI Portfolio Monitoring is, in essence, a job checklist. It guides you to decide which jobs to do first. It is a good idea to have a set of criteria for assessing the risk of an ambulance failing. So it is with AI. The CarefulAI Criteria are what we use with NHS customers to prioritise AI jobs. The riskier the status, the more critical it is to address the AI issue. A direct analogy is an MOT. Suppose your vehicle is at risk: it will fail its MOT. If a vehicle raises some concerns for an MOT assessor, they will list a series of advisories in your MOT assessment. The main difference with the CarefulAI Criteria is that customers and staff are encouraged to report an AI system's risk status. This is akin to asking an ambulance driver or paramedic to assess their own vehicle for its potential MOT status.
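The CarefulAI Criteria themselves are not reproduced in this letter, so the Python sketch below only shows the shape of the idea: each AI job carries an MOT-like status, staff and customers can downgrade it, and the riskiest jobs rise to the top of the list. The statuses and jobs are invented purely for illustration:

```python
# A minimal sketch of MOT-style portfolio prioritisation. The real
# CarefulAI Criteria are not reproduced here; the statuses, jobs and
# ordering below are invented to show the shape of the idea.

RISK_ORDER = {"fail": 0, "advisory": 1, "pass": 2}   # lower = fix first

portfolio = [
    {"job": "Retrain discharge-letter summariser", "status": "advisory"},
    {"job": "Sepsis alert model drift check",      "status": "fail"},
    {"job": "Clinic booking assistant review",     "status": "pass"},
]

def report(job_name: str, new_status: str) -> None:
    """Let a member of staff or a patient downgrade a job's status,
    like a paramedic reporting a fault before the MOT is due."""
    for job in portfolio:
        if job["job"] == job_name and RISK_ORDER[new_status] < RISK_ORDER[job["status"]]:
            job["status"] = new_status

report("Clinic booking assistant review", "advisory")

# The riskiest jobs rise to the top of the work list.
for job in sorted(portfolio, key=lambda j: RISK_ORDER[j["status"]]):
    print(job["status"].upper().ljust(9), job["job"])
```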
How funding sources affect AI deployment
Staff and clinicians involved in the health and social care sector are advised to take an interest in how a potential AI is being funded. Diverse funding streams carry a range of advantages and disadvantages. Government agencies expect long-term returns on public money invested. Businesses can have the same expectation; however, firms often require a return on investment within two years, in the form of profitable income. Venture capitalists have much higher expectations: this can be a 10x return on their investment within two years.
With AI solutions that fit into the NICE Tier 1 evidence framework, the evidence of fitness for purpose can be self-reported, with no need for independent verification. These types of application attract all types of funding, as the return on investment can be within one year.
Where more evidence is needed, the research and development process for a particular type of AI is longer. Lengthy development processes depend on more significant funds. Developments in AI that require Tier 2+ evidence tend to be funded by university research and development grants. Where the evidence they provide enables the private sector to deploy a solution within around one year, they will also attract private sector funding.
_____________________________________________________________
A STRATEGIC FRAMEWORK FOR AI IN THE NHS
Theme: Person-Centred Care
Person-centred care refers to care that is people-focused, promotes independence and autonomy, and provides choice and control, based on a collaborative team philosophy.
There is a possibility that AI could be used to understand people's needs and views and to help build relationships with family members, e.g. via voice interfaces. Humans must therefore be trained to put such information accurately into context against holistic, spiritual, pastoral and religious dimensions. Consequently, feedback must be gathered from service users and staff on their confidence in the support provided by an AI system.
Theme: Staying Healthy
The principle of staying healthy is to ensure that people in Wales are well informed to manage their health and wellbeing. In the main, it is assumed that staff's role is to nudge good human behaviour, by protecting people from harming themselves or by making them aware of good lifestyle choices. The care system is particularly interested in inequalities amongst communities that subsequently place a high demand on the public care system.
AI is being deployed to nudge human behaviours that affect the length and quality of human life, such as smoking cessation, regular activity and exercise, socialisation, education, etc. This could be manifested as nudges staff provide to people based on service users' self-reported activity, or as automated nudges from electronic devices service users have access to, e.g. smartwatches and phones.
At this time, 'wellbeing advice' from staff and AI solutions is an unregulated field. In Wales, healthcare staff aim to deliver clinical care pertinent to their speciality, advise service users, and signpost them to support positive health behaviours. This is embodied in the technique of 'making every contact count'. It is therefore appropriate to suggest that reports on any AI system's fitness for purpose account for the frequency at which wellbeing advice is provided alongside focused clinical care and advice.
Theme: Safe Care
The principle of safe care is to ensure that people in Wales are protected from harm and supported to protect themselves from known harm. The people best placed to advise on known harm in practice are clinical and technical people who have had safety officer training. In UK law, AI system design and implementation need to have an associated clinical/technical safety plan and hazard log (e.g. DCB0129 and DCB0160). System suppliers are responsible for DCB0129, and system users are responsible for DCB0160. The mitigations in a risk register should be manifested in system design or training material.
If a system is making decisions with no person in the decision-making loop, it is likely to be classed as a medical device. In this case, users should expect such a device to have evidence of compliance with UK MHRA and potentially UK MDR. The process required to demonstrate proof of compliance is lengthy and can take between 2 and 5 years.
MHRA and MDR compliance is not sufficient as stand-alone assurances of safety. The fitness for purpose for users and clinical situations still needs to be evaluated by health and social care practitioners and should not be overlooked.
Theme: Effective Care
The principle of effective care is that people receive the proper care and support as locally as possible and are enabled to contribute to making that care successful.
People have a right not to be subject to a decision based solely on automated processing (ref Article 22 of GDPR), so it is wise to ask if they wish AI to decide on their care before using it.
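A minimal sketch of what honouring Article 22 looks like in software: before an AI acts on a solely automated decision, check consent and otherwise keep a person in the loop. The function and field names below are hypothetical, not from any real system:

```python
# A minimal sketch of honouring Article 22 in a care pathway: before an
# AI decision is applied solely automatically, check consent, otherwise
# keep a person in the loop. Names here are hypothetical.

def decide_care_step(patient: dict, model_decision: str) -> str:
    # Article 22 of GDPR: a right not to be subject to a decision based
    # solely on automated processing.
    if patient.get("consents_to_automated_decisions"):
        return model_decision
    return "refer_to_clinician"   # a person stays in the decision loop

print(decide_care_step({"consents_to_automated_decisions": False}, "discharge"))
# -> refer_to_clinician
```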
AI systems that automate the storage or dissemination of clinical information (e.g. voice recognition systems) can be expected to spread. The business cases for supporting service users within their homes, promptly and remotely, will be compelling.
Theme: Dignified Care
The principle of dignified care is that people in Wales are treated with dignity and respect by others. We must protect fundamental human rights to dignity, privacy and informed choice at all times, and the care provided must take account of the individual's needs, abilities and wishes.
One of the main features of this standard is that staff and service users' voices are heard. Service users have, on average, two generations of family who have grown up with human-to-human interaction and advice and see this as the norm; however, there has been an accelerating trend of people taking advice from search engines and via video. As people, we can train AI systems to undertake active listening, motivational interviewing and the delivery of advice with compassion and empathy.
AI methods to automatically understand the empathy, rapport and trust within such interactions are being developed.
Theme: Timely Care
To ensure the best possible outcome, people's conditions should be diagnosed promptly and treated according to clinical need. The timely provision of appropriate advice can have a significant impact on service users' health outcomes. AI systems are being deployed to give service users faster access to care services, e.g. automated self-booking systems and triage systems.
Theme: Individual Care
The principle of individual care is that people are treated as individuals, reflecting their own needs and responsibilities. This care is manifested by staff respecting service users' needs and wishes, their chosen advocates, and providing support when people have access issues, e.g. sensory impairment, disability, etc.
AI that understands the needs of individuals can be as simple as mobile phone technology that knows when people are near the phone and can be contacted or translates one language into another. In the main, this class of AI interprets a service users' needs and represents them. Such AI can be provided by a service, e.g. a booking system, or the service user, e.g., a communication app or an automated personal health record.
Theme: Staff and Resources
The principle is that people in Wales can find information about how their NHS is resourced and how effectively it uses resources. This goes beyond clinical and technical safety, governance and leadership. It extends to providing staff with a sense of agency to improve services. A health service must determine the workforce requirements to deliver high-quality, safe care and support. As discussed in the Topol Review, a better understanding among staff of the strengths and weaknesses of AI is an issue that needs to be addressed.
One of the critical challenges clinicians face when considering AI is what to focus resources on. Too often, they are faced with AI purchasing decisions with no framework for understanding the value, fitness for purpose or time-to-market issues associated with AI. Supporting staff to make decisions that synthesise service development requirements (identified through co-production between healthcare teams and service users) with evidence-informed AI applications gives the best potential to optimise resources.
_____________________________________________________________
GOVERNANCE AND AI
Classifiers and prediction tools are usually embedded in hardware and software used in health and social care. The providers of that hardware and software are governed under two practices: contract, and health guidance/law. In the UK, this guidance is provided by NICE and the MHRA. The acceptable margin of error for a classifier/prediction algorithm within a situation is in the hands of the supplier who uses the algorithm and the contract they have with the NHS.
In situations where such AI technologies do not have a clinical outcome (for example, booking people onto a training course), the risk of harm is low. This means that the health and social care system does not try and police the development and implementation of such technologies. Conversely, where the potential for risk to humans is high, policing is rigorous.
Depending upon where the risk to humans exists, this policing comes as requirements to adhere to standards and independent assessment. In some cases, this mandates the provision of extensive evidence and research to support compliance with government agency guidance or law. The governance of standard adherence is nation-specific. For example, AI technology that complies with UK regulation cannot automatically claim readiness for deployment in the United States, and vice versa.
The important questions to consider about an AI technology centre on its purpose and the evidence behind it. In the United Kingdom, the NICE guidelines on evidence are an excellent way of understanding what information is needed, based on the purpose of the AI you are faced with.
The role of the Care Quality Commission
If an AI supplier is providing software as a service, and that service is a clinical service, it is most likely that they will need to register with the Care Quality Commission. A directory of registered suppliers is available.
The Role of Clinical Safety Officers
An effective way of understanding the impact of technologies on clinicians, users and clinical practice is to comply with, and self-certify against, the Data Coordination Board's standards using Clinical Safety Officers: in particular, DCB0129 and DCB0160.
These standards are useful because they address the subject of risk of harm and the mitigation of it. In consideration of the risk of harm, AI users find themselves considering the implications of system design on any person or persons that may be affected by it. This opens up conversations about the use of data, inferences, how they are created and the implications of making decisions relating to AI use. If a supplier is to self-certify against DCB0160, they need to follow very similar procedures to DCB0129.
In the absence of a CE mark, both of these Standards are helpful. It could be argued that even with the existence of a CE Mark, evidence of an AI supplier and user using both of these standards illustrates a commitment to the safe use of AI in health and social care. In UK law, they are a mandatory requirement of designing, providing, and monitoring an IT system within the health and social care setting.
To self-certify against DCB 0129/0160, a Clinical Safety Officer needs to be in place. Clinical Safety Officers are clinicians who have been trained in the effective implementation of these standards. The cost of such training is modest.
The value of a CE mark
If a product has CE marking, understanding its intended use is very important. The duty of care for how technology is used in health and social care sits with the user, not the supplier. Therefore, it is important not to assume that because a supplier has a CE mark, the technology is certified for the purpose you wish to apply it to. Which firms carry the CE marking, and for what purpose, can often be found on the MHRA website.
Data Classification and Protection
Data about people and their lives has never been more readily available. Historically, people could only analyse coded data (e.g. diagnoses, symptoms, test and procedure results); nowadays, they can easily cross-reference this with patient-reported data. Analysis of data on people's lives, clinical and social encounters, family and work history and genetics is now possible. We can analyse these data sources to predict health and social care trends and individual outcomes. Data needs to be classed and protected as a valuable asset.
The role of IRAS and HRA
On occasions when the deployment of a Tier 2+ AI is expected, it is most likely that formal research will be required. This often takes the form of local trials. To ensure that we can effectively report the success and failure of such trials, it is worthwhile registering them with IRAS and the HRA. Registration is a requirement of academic research applications, and of coverage within publications in the health and social care space.
When to start evidence gathering for MHRA
A challenge at this time is the paucity of authorised bodies able to independently show compliance against medical device legislation. Suppliers of AI technology can expect to wait over a year to gain access to a notified body capable of confirming compliance. Suppose a supplier of AI technology waits until such a body is available before gathering evidence of appropriateness. In that case, the time it takes to bring the technology to market could be devastating to the business case for its use.
This is important because what constitutes technology that needs to be governed by the MHRA and NICE can be expected to change and become more encompassing over the next couple of years, when medical device legislation begins to cover all but a few health and social care areas.
The cost of compliance
In summary: if the AI has an impact on the outcomes of health and social care, it is most likely to fall into one of the NICE DHT categories. The burden of evidence in each category increases steeply as you move from Tier 1 to Tier 3. Tier 1 products are self-certified. In practice, unless you already have a CE mark for Tier 2+ technologies, the AI technology needs to provide evidence to what is known as a notified body. This notified body will indicate whether the AI complies with the evidence required. The overarching body in the United Kingdom that manages the demand for the provision of evidence is the MHRA.
In 2020, the cost of compliance associated with providing this evidence ranged from approximately £0.5 million for a Tier 1 product to £2-5 million for Tier 2 and 3 products. For a Tier 1 product, the timescale to evidence compliance can be between six and 12 months; for a Tier 2 or 3 product, it can be between two and five years.
The cost of compliance is an important subject for all staff in health and social care. This is not because they incur such costs directly. The primary reason is that suppliers of AI-enabled technologies need to show evidence of compliance, and the time and effort of health and social care staff in the creation/collation of that evidence needs to be considered when developing or accepting a business case.
Reducing the cost of compliance
In academia, the technical time and cost associated with developing AI for just one part of a pathway can be three years and £0.5-1 million. The cost of testing, validating and supplying the technology often falls to the part of the supply chain nearest to the end-users of the research. This is often too late. Many research and development programmes are instigated without the involvement of end-users and safety officers from the outset. A common pitfall is to apply for, or be given, funding to produce a proof of concept without addressing from the outset the technical and clinical risks associated with the potential implementation of the AI. If research is to be realised as a developed product, gathering compliance data from the outset is helpful to researchers and the clients they seek to support.
Who should lead AI Development?
There are many different types of people involved in the research and development of AI solutions: academic researchers; specialists in particular technologies such as text, image and sound analysis; mathematical modellers; data scientists; and computer scientists.
Research should be led by the health and social care sector staff potentially affected by its deployment. They are often very aware of the actual problems that need to be solved, and of which problems endure. The time commitment needed to design, develop and provide supporting evidence for AI solutions means that short-term problems are not an optimal focus for AI technology.
Safely developing AI user business cases in a Sandbox
Over and above the general requirements for the provision of IT systems within health and social care, perhaps the most critical facet of using AI is the effective use of a sandbox. These are places where data is put but has no way of being cross-referenced to identify members of the public. With this data in either closed or open repositories, AI providers can test and prove the efficacy of their solutions. The efficacy of a solution is best considered in-situ, with mirrors of the systems used in practice.
Simply validating the accuracy and appropriateness of an algorithm outside a mirror of the pathway in which it is to be used is erroneous. Algorithms behave differently when they are linked together in a system. Algorithms also behave differently within different computer systems: they are often vehicle dependent. If a user is to effectively understand the validity of using an AI system within a pathway, it is more meaningful to test it in practice with real-world data, within the specific pathway, with the specific technology used in that pathway.
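Here is a toy Python illustration of why in-pathway testing matters: the same model passes an isolated check but behaves differently once a mirror of the upstream system sits in front of it. All the components are invented stand-ins:

```python
# A toy illustration (all components invented) of why in-pathway
# testing matters: a model that passes an isolated check fails once a
# mirror of the upstream system sits in front of it.

def upstream_transcription(raw_note: str) -> str:
    # A stand-in for a real pathway step, e.g. a voice system that
    # lowercases and truncates notes before the model sees them.
    return raw_note.lower()[:50]

def toy_model(note: str) -> bool:
    # Flags urgent cases; imagine it was validated on clean, full notes.
    return "URGENT" in note or "chest pain" in note

raw = "URGENT review needed: possible myocardial infarction"

print(toy_model(raw))                          # True: isolated test passes
print(toy_model(upstream_transcription(raw)))  # False: the pathway changed the input
```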
Procurement
The Role of Procurement
It could be argued that unless procurement departments take an active role in the design and development of AI, changes to existing customs and practices will not happen. An erroneous assumption is that you can buy an AI product that will work independently in a pathway. A correct assumption is that, at best, one is buying integrated assets and services to which one can and should apply KPIs. These KPIs, when related to health and social care services, can be tied to service outputs. Each solution is likely to have different KPIs, depending upon the application of the AI technology in practice. The best people to set these KPIs are safety officers and end-users.
Continuous monitoring
Being locked into, or out of, technology when the rate of change of AI is so great is unwise. Short-term contracts with very specific KPIs are advisable in this situation. The fact that ISO standards and medical device legislation recommend continuous monitoring is helpful to AI system buyers. Using continuous monitoring with the option of contract extensions is a more effective way of engaging AI suppliers than committing to one particular technology platform or a long-term development contract.
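A minimal sketch of how continuous monitoring can feed a short-term contract decision: extend only if the KPI held across the whole review window. The metric, floor and six-month window are illustrative assumptions, not terms from any real NHS contract:

```python
# A minimal sketch: continuous monitoring feeding a short-term contract
# decision. The metric, floor and six-month window are illustrative
# assumptions, not terms from any real NHS contract.

monthly_sensitivity = [0.96, 0.97, 0.95, 0.93, 0.96, 0.97]  # live audit results
KPI_FLOOR = 0.95                                            # agreed contractual floor

breaches = [m for m in monthly_sensitivity if m < KPI_FLOOR]
if not breaches:
    print("KPI held all window: offer a short contract extension.")
else:
    print(f"{len(breaches)} breach(es) this window: renegotiate or retender.")
```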
Sellers of AI and how to engage them
Sellers to the health and social care sector sometimes present AI as a solution to unidentified problems. Such offerings should not be the primary concern of people in health and social care. Organisations that seek to promote AI for problems that are not enduring should be avoided. In addition, clinicians should not be drawn into validating individual models for suppliers. The reason is that whilst an AI model can be fit for purpose in one case, when such models are joined together they may behave differently from how they operated when they were first validated.
If staff in the healthcare sector are drawn into discussions with existing technology suppliers, it is better to focus on understanding whether an AI model or technology is fit for purpose in a pathway. Testing whether AI is appropriate for a pathway should be done with data that reflects real-world situations, and with people who would be directly affected by the AI's deployment. Having a safe area where technology is tested within a pathway can significantly reduce the burden on staff. In essence, it places the burden of proof of fitness for purpose on suppliers, not on the health and social care system.
Managing risk
An important feature of developing an AI-enabled pathway is to understand any potential clinical risks arising. The types of clinical risk arising are usually associated with how users interact with the AI. Other risks are generally covered by guidance notes and standards that technology providers will have access to. Asking a supplier to evidence their compliance with the DTAC guidance is a useful process to go through before engaging engineers.
An effective way of managing the technical and clinical risks associated with an AI solution is to engage the services of a Clinical Safety Officer. If your organisation is deploying IT, a Clinical Safety Officer will already exist. As previously mentioned, this is a legal requirement under DCB0129 and DCB0160.
Involving such an officer early in design, development and research reduces the cost of potential implementation. In particular, the process of managing a risk register enables researchers and developers to gather information on what makes AI fit for purpose. If one also prepares a risk register with potential end-users, many of the difficulties associated with implementing AI technology are reduced. Gathering such evidence from the outset of a research and development project will increase its value. We can expect that all AI will eventually need to mitigate risks and evidence real-world benefits, irrespective of whether it is classified as a medical device at this time.
Supplier standards
Suppliers who do not comply with, or are not working towards, ISO 13485 / IEC 62304 cannot be expected to provide the continuing real-world evidence of fitness required by software-in-a-medical-device legislation. Organisations that comply with such standards can develop technology that achieves CE status. The acceptability of a CE mark will not be diminished following Brexit; however, suppliers of AI technology will need to register their devices, as appropriate, with the MHRA.
Being an Effective Data Guardian
Suppose you are faced with an AI supplier who is developing its software using data for which you are a guardian. In that case, it is important to consider the implications of its use against the Data Protection Act and GDPR, both of which put a duty of care on the health and social care system. This duty is to make sure that where technology is applied to decisions about the public, they are aware of and agree to those decisions, and have a right not to have algorithmic decisions made about them. How data is processed by AI software and hardware should mirror the data privacy terms of service of the health and social care provider using it. If it does not, the contract for its use should stipulate which privacy agreement takes precedence over the other, and how.
Managing Innovation
Most frequently, data is required to design and test a model that makes a classification or a prediction. Research and development organisations are the primary producers of such models. Currently, most AI models are developed by people who have a PhD in a particular technical discipline. As PhDs are most often found within academia, universities often lead the development of new and emerging AI. Users should direct research around the systems and pathways they and their organisation are committed to using. When such innovations become proven, or become commodities, MSc Computer Science students are frequently used to build the AI systems used within pathways.
Using Real-world data to validate pathways of AI, not just AI models
The data needed to prove the efficacy of AI as a model will most likely change when the AI is embedded in a system. How much the data may change over time is worth considering early in the innovation process. This is particularly important if one considers joining models together, as their performance may change when they sit in a pathway of systems. It is for this reason that users are encouraged to validate models against 'real world' data within end-to-end testing of AI-enabled pathways.
_____________________________________________________________
AI's POTENTIAL IMPACT ON THE NHS
People in the AI Value Chain
AI depends on mathematics, statistics, computer science, computing power and a human's will to learn. Humans involved in AI have many titles and skills. They include:
- Front line workers (nurses, doctors, allied health professionals, administrative staff etc.) who gather, analyse, and control data relating to human interactions and use of management systems
- Life scientists who manage and research laboratory and clinical data
- Epidemiologists & statisticians who look at public health
- Data scientists who look at data analytics, modelling and systems analysis
- Technology experts skilled in developing and implementing electronic patient/hospital records systems and specific data types (images, text, coded data, voice etc.)
- Management who use methodologies and communication tools to encourage the use of systems
Optimising Organisational Outcomes
The NHS and social care sector has over 1.5 million staff, spread across myriad providers and programmes, each with its own systems. This means that data is heavily siloed. In practice, few organisations have data of sufficient quantity and quality to make centralised AI systems a realistic option. The potential for federated AI development is therefore significant, while integrating data for centralised AI model development remains a real challenge.
The opportunities to improve workflows, predictive analytics, visualisations, data search, data tracking, the curation of data, and model development and deployment are apparent. All of these could improve the efficiency and effectiveness of existing and emerging health and social care outcomes. The opportunities to manage resources flexibly, dynamically and in real time will increase with the use of AI.
An expanded data and computer science service
This will require the increased use of containerisation of AI models and pathway systems, and the development of microservices, cybersecurity systems, image processing, natural language processing, voice model development and curation, graphical processing, web development, etc. Delivering these with agile techniques, by people trained to understand complex and black-box algorithms, will be necessary across various data sources. It will be increasingly essential to track data from source, to re-use and cross-reference health and public domain data, to create and curate ground-truth data for pathways, and to generate and track insights as far as they affect pathways.
The changing world of bioinformatics
The cost of generating and analysing biological data has reduced significantly. The ability to scale and manage data often sits in research establishments and universities, with government-funded state-of-the-art infrastructure, cloud computing, security testing and students to analyse health and social care themes. The opportunities for local clinicians and carers to direct how data is analysed, tagged and structured for local health outcomes are clear. In Wales, this has manifested itself in the local data analytics groups within the National Data Repository project.
Researchers
Speeding up Life Science, Biomedical and Service Analytics
AI has the potential to accelerate academic research and optimise the scientific process. The ability to swiftly analyse and visualise biomedical data is improving triage and decision making. Swifter sharing of this data has made pan-regional decision-making more effective. Coupled with developments in cybersecurity, data sharing inside and outside of health and social care has become more accessible and safer. The ease and speed with which one can obtain and analyse data and draw insights improve the management of service and health outcomes. Pathway management and the systems within it create a myriad of data sources that one can use to enhance the efficient, safe, and compliant delivery and development of health and social care services.
Increasing the importance of data stewardship
The health and social care system has historically relied upon clinical trials and survey data. Increasingly it is using structured electronic health record data to predict and classify health outcomes. Due to the increasing accuracy and commoditisation of natural language tools, linking structured data to inferences from text using text analytics will become widespread. Cross-referencing these with dispensing records and service funding planning will increase the capability of value-based healthcare analysis.
Cross-referencing such data with analysis of patient discussions has the potential to improve the understanding of the effectiveness of interactions, and to open opportunities for more effective triage. The move towards cross-referencing health data with real-world social data (wearables etc.) is common practice in private healthcare economies and can be expected to gain momentum in the UK. Sensitive data stewardship and privacy-protecting techniques will be increasingly demanded by government agencies tasked with information governance. Increased due diligence around the use of online monitoring can be expected, particularly when techniques used in the advertising sector for population and retail consumer analysis become widespread in the monitoring of prescription compliance and dispensing. Formal assurance of methods and techniques in this domain can be expected when they affect services with a clinical outcome.
_____________________________________________________________
THE NHS' POTENTIAL IMPACT ON AI
The impact of compliance regulation on AI users
The amount of effort needed to apply AI in health and social care varies. When planning to use AI, it is useful to segment suppliers in terms of where they are in the cycle of showing evidence of compliance, where they want to be, and therefore the data demands that will be placed on your use of the technology.
Evidence of compliance often relates to showing a statistical link between a technology's purpose and the outcomes associated with that technology's use. For technologies at NICE DHT Tier 3 and above that have not yet been classified as such, the supplier will want to show a comparison of the technology against existing customer practice. In some cases, this may involve formal research and potentially randomised controlled trials. In the latter case, the use of the technology should not be considered before a research/evidence proposal is lodged with the MHRA and HRA.
The impact of users on AI
AI technology will need to comply with the standard operating procedures of internal departments, e.g. procurement, information technology, informatics, information governance, intellectual property, health and safety etc. A common error when applying AI is to assume that because it may be free, offsite, etc., it does not have an associated cost. This cost may manifest itself in internal departments. Understanding what this cost is before implementing AI in a pathway is highly recommended.
AI will have an increasing impact on the use of human resources within a health and social care setting. The opportunities to automate decision-making are becoming apparent to people involved in managing people and digital transformation. An understanding of the human cost and benefit of implementing AI will become increasingly important. It is advisable to begin discussions on the use of AI with people responsible for these activities before decisions are made to bring technologies into practice.
Users' Roles in Medical Device Regulation (MDR) Compliance
An important facet of MDR is the continuous monitoring in place to ensure compliance. User-defined KPIs will be useful when applying AI that is subject to medical device regulations. The main facet of this compliance is evidence that users understand an AI system is fit for purpose and safe: much like with the pharmaceutical Yellow Card Scheme. In Wales, managers can give users and those affected by AI a sense of agency over AI's use if they continually measure its success in a pathway against the tenets of the Welsh Care Quality Framework.
This evidence should be provided by clinical/social care end-users and approved by their management, senior management team and the board. Procurement departments should provide advice on how training the AI can be used to decrease the cost of its use. They should be mindful of opportunities to mirror end-user KPIs in contractual KPIs.
Depending upon this evidence, it is expected that health and social care organisations can prioritise the resources targeted at AI ideas, products, development and research.
The resources targeted at AI's use will depend on the evidence needed to prove its value in a pathway. When developing a portfolio of AI projects, this is an important consideration. The time taken to get to user acceptance testing relates directly to the costs of bringing AI to market. Understanding this cost will directly affect plans to share risk with those in the AI supply chain, e.g. academic establishments and AI firms.
_____________________________________________________________
WHERE TO LEARN MORE
This depends on your role in the future, Stubbsie.
If you are a member of a health and social care organisation, you are most likely best advised to concentrate on identifying enduring problems with your management team. This group should then approach your IT/information teams to understand what pipeline of projects they are prepared to support; their standard operating procedures for IT, informatics, IG, cyber security etc. will help ensure that AI has a home when it arrives within a health and social care system.
If you are not part of a health and social care organisation and still want to do things in AI, the ramblings above should help you navigate your way around some of the important subjects. But if you are keen to 'get stuck in', bear in mind the following five take-aways as you learn more.
1. Understanding the value of AI
It is helpful to think of AI as a technology used to classify or predict outcomes, e.g. cancer as shown on an X-ray, or the need to send a person a reminder. Ask yourself: what do I want to predict or classify, and how wrong can that prediction or classification afford to be? When you understand this, you can better understand the value of AI in your work.
Once you have answered these questions, you need to ask yourself: what data do I have access to that will enable me to reach that conclusion? This is because AI is not artificial intelligence; it is a set of techniques for looking at data and changing how systems operate based on that data.
For example, with cancer identification on an X-ray, the AI will analyse the pixels in the image and conclude what the image represents based on what it has been trained to identify.
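To see 'how wrong you can be' in numbers, here is a small worked example in Python. The counts and error costs are invented; the point is that a classifier's value depends on what each kind of mistake costs in the pathway, not on accuracy alone:

```python
# A small worked example of 'how wrong you can be'. The counts and
# costs are invented purely for illustration.

true_positives  = 90    # cancers correctly flagged on the X-ray
false_negatives = 10    # cancers missed
false_positives = 40    # healthy patients flagged for follow-up
true_negatives  = 860   # healthy patients correctly cleared

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")

# A missed cancer costs far more than an unnecessary follow-up.
COST_MISS, COST_FALSE_ALARM = 100.0, 1.0   # relative costs, assumed
expected_cost = false_negatives * COST_MISS + false_positives * COST_FALSE_ALARM
print(f"expected error cost: {expected_cost}")
```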
2. The Importance of Data
The development of AI is entirely dependent on data. Gathering the data that enables an AI model to make predictions or classifications fit for purpose is a highly governed area. The Data Protection Act and GDPR place significant restrictions on what data can be used and for what purpose. Consequently, before embarking on use, research or development, it is wise to understand the implications and impact of AI on a use case.
The information governance teams who act as guardians of public data will want to consider evidence in data protection impact assessments (DPIAs) and compliance with the Data Protection Act, GDPR and ICO guidelines.
Significant public monies are spent trying to bring data into extensive central data resources. This is a very difficult task in practice, and it assumes that models need to be created centrally and distributed nationally.
That assumption needs to be tested, because it is often acceptable to build models locally and deploy them nationally. This technique, federated machine learning, has several advantages for information governance: not least that data does not need to leave the boundaries of a health board's firewall to be useful in developing AI.
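A minimal sketch of the federated idea, assuming a toy one-parameter 'model' so the mechanics stay visible: each board fits locally, and only parameters and sample counts cross the firewall for a weighted average. Real systems aggregate full model weight sets in the same spirit:

```python
# A minimal sketch of federated learning with a toy one-parameter
# 'model'. Each board fits locally; only parameters and sample counts
# cross the firewall for a weighted average.

local_datasets = {            # each list stays behind its board's firewall
    "board_a": [2.1, 1.9, 2.0],
    "board_b": [2.4, 2.6],
    "board_c": [1.8, 2.0, 2.2, 2.0],
}

def fit_locally(data: list[float]) -> tuple[float, int]:
    # Each site 'trains' alone; here training is just taking a mean.
    return sum(data) / len(data), len(data)

updates = [fit_locally(d) for d in local_datasets.values()]
total_samples = sum(n for _, n in updates)
global_model = sum(p * n for p, n in updates) / total_samples
print(f"global parameter: {global_model:.3f}")   # -> 2.111
```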
3. How AI is deployed
AI technology is commonly deployed within hardware and software. Increasingly, it is deployed as a service. Users of the technology gain a licence to use it under terms. Consideration of those terms is essential, particularly where the technology can adapt itself based on the data it is given access to.
4. Unlocking IP
To ensure that private sector funding does not lock health and social care organisations out of other potential developments, it is vital to have a formal agreement with private sector funders. Effective agreements are generally associated with the standard operating procedures of hosts in the health and social care sector. However, most hosts do not have access to guidance on managing the associated intellectual property. In this case, it is useful to refer to the Lambert agreements on the IPO website.
The choice of agreement will depend upon the board within an organisation. Many AI developments are conducted under Lambert Agreement 4.
5. Licensing AI's use
Within the health and social care system of the United Kingdom, a common practice is to require that purchased technology is provided under an open-source licence, ensuring users are not locked into particular technologies or into their support and development. This said, your ability to do this will very much depend upon whether you can influence how the AI is made. If an organisation is providing data to train an AI, it can affect the terms of the AI licence.
The success or failure of AI in the NHS is dependent upon people who have empathy and curiosity. People who will lead the design, development and deployment of AI, not wait until it is presented to them; people who prefer action rather than talk; people who care about people, not kudos. People like you.
I have broken the themes you could consider into:
- Big Issues
- A Framework for Assessing AI in Clinical Terms
- Governance
- AI's potential impact on the NHS
- The NHS' potential impact on AI
- Where to learn more, and five take-aways to bear in mind when you are learning more.
Good luck
___________________________________________________
BIG ISSUES
AI in the NHS
AI is going to change the NHS. But the acceptable level of change depends on one's risk perspective, motivations and resources. It is much like making a change to a moving ambulance when the difference can be as big as replacing the paramedics with robots or as little as washing the vehicle. It is possible. But this is going to be problematic: because who wants to slow down an ambulance.
Risk Mitigation
Using the Ambulance analogy. There are many changes one could make to an ambulance. Some are very risky, like swapping a paramedic with a robot. Switching an ambulance with a new type of ambulance is less risky. Cleaning an ambulance has even less risk. This said: you would do none of this if the ambulance was in motion.
Suppose you are going to make a change to an ambulance. It is probably best done in a workshop. Us AI folk have a fancy name for them. We call them sandboxes. Places where you can tinker, and if you break something, it does not matter.
Certain people need to be confident that a change to an ambulance is safe. They usually have a checklist and lists of hurdles suppliers of kit need to comply with. Us AI folk have similar people. We call them Clinical Safety Officers, who work to a standard. Their checklists are called DCB's, DHT and DTAC.
Much like with an ambulance: you have a checklist to sign an ambulance out of the workshop. Us AI folk have similar lists. We call them software licences and agreements like AGPL and Lambert agreements (when non-NHS folk use our stuff).
It is a terrible idea if you tinker with all your ambulances at the same time in your workshop, so you have to have some way of choosing what to tinker with and why your job list. We have the same kind of list. But we have a posh name for it. We call it a portfolio, which we manage.
Models of Adoption
Models for adopting AI in the NHS vary. In the main. It tends to happen by stealth. An AI supplier will propose a non-AI system. They then gather user data and systems data. They draw insights and then sell those insights back to the NHS as a service. This is akin to having a speed monitoring system on your ambulance that monitors how fast you go on/off a 'blue light'—then having the data sold to the ambulance service as insight. It tends to go down badly.
We have model of adoption, we call the 2022 framework.
In it, the NHS (the AI Funder) is prepared to engage us and accepts specific roles concerning the management of the AI development process. This is akin to an ambulance station taking control of a new vehicle fully aware of its value and maintenance schedule. In this framework, AI Users (typically NHS staff and sometimes the public) have various roles. Most importantly, they (not just NHS management) understand the value of the AI they are involved with developing and manage its deployment and continuous monitoring. In our model, they can also switch the AI we develop off. Much like giving a paramedic the power to say whether they will or will not travel in the ambulance given to them. A key role as an AI supplier is to be transparent about the management and protection of data.
We are working with other suppliers that use the data and insights we use and generate. Ensure that a data protection and GDPR statement is in place before we begin an AI development and secure it is up to date. Guiding the NHS buyers and users through compliance procedures and reducing the technical risks and associated costs of using systems and methods, the AI we develop depends upon. In practice, this role is akin to an ambulance supplier posting engineering staff to your station and working with your teams to make sure their vehicles are fit for purpose and nasty surprises minimised.
On the subject of nasty surprises: what are they, I hear you ask. Well. Would you believe it? Some people think AI is a humanoid-looking robot, and they fear putting their lives in the hands of such creatures! It is often better to describe AI as what it is, i.e. advanced computerised guessing, which is regularly linked to a computerised system or systems used to deliver help in a care pathway. It is much like the ABS that stops an ambulance from skidding when the driver brakes hard.
Other nasty surprises occur when the AI gets something wrong (and it always will), and you cannot switch it off or adapt it without a high cost. This is often in the NHS's direct control. It is better to buy AI under a service KPI than as a product under a long-term licence. It is wise to think of AI as a system that can and should change over time; the longer one uses AI, the wiser it should become. But much like people, this is not always the case. So contracting with AI suppliers as one would with short-term contract labour (with well-defined KPIs) is good practice.
Empathising with the Issues Facing People
To be a good manager of an ambulance station, you need to understand the roles and responsibilities of people and good practice. As and when you come across people involved in the design, development and implementation of AI, the following descriptions of personas may help you appreciate what those people need to know and the issues they face.
Staff and Patients
AI Sandboxes
These are safe places where you play with AI to see if and how it can be broken, with real data and real users. It is much like when somebody gives the ambulance service a new vehicle to test before buying one. No technology survives contact with people and data, and ideally these sandboxes should have both. As with an ambulance, if you can get access to the people who specify, develop, test, make, supply and buy them, you can influence what is produced. The same can be said of AI. The most effective scenario is that local health board management, clinical and IT teams set up their own AI sandboxes with academics and local firms to drive AI developments.
If you were not involved in the design of an ambulance you would end up using, you would be circumspect about that ambulance's design. So it is with AI. AI developments should be clinically led, with pre-defined key performance indicators (KPIs). These developments would have support from the end-user to board level. They would also have Intellectual Property (IP) and Information Governance (IG) agreements in place, and a model explaining how data is shared in line with Information Commissioner's Office (ICO) guidelines. Not having these agreements and models in place would be like saying to you, 'Thanks for your help designing this fab new ambulance. You and your health board are not going to receive a penny, and copyright in your ideas is not going to be attributed to you or your health board' (which is against the law).
AI Standards
Imagine you were designing a new type of vehicle that had never been tested. You would probably end up developing your own standards, or applying existing ones to this new vehicle where appropriate. This is akin to what is happening at the moment in AI. AI compliance is a unique and changing field, so standards are changing rapidly. Organisations like the government, the MHRA and NICE have different evidence criteria and standards, and they are still deciding what fits.
There is some good news. Standards that directly affect AI are DCB0129 and DCB0160. These require that all IT projects have a trained clinical safety officer in place. Such officers control the creation and release of a clinical risk/hazard and mitigation log that users need to maintain. These logs are a bit like the manual you get when you buy a vehicle, showing you what to do when things go wrong: like a tyre puncture.
A similar standard is the NICE Digital Health Technology Evidence Framework (NICE DHT). In essence, it says that for a particular pathway, one should ensure evidence exists about a particular digital health technology, i.e. AI. The level of evidence relates to whether the AI aims to have a clinical outcome or not. There are three tiers in NICE DHT. The lowest tier (Tier 1) has evidence requirements that can take up to six months to gather. The highest (Tier 3) can take between two and five years to compile. These standards are comparable to the measures the government places on the medical provider. The main difference is that, in practice, NICE Tier 1 evidence can be gathered relatively simply; Tier 2-3 evidence, however, usually requires a collaboration between the NHS, academia and industry.
Other relevant standards are covered in the Government's Digital Technology Assessment Criteria (DTAC); this operates a bit like an MOT testing schedule. Unlike an MOT, DTAC applies to new AI as well as old AI.
Commercialisation
The commercial strategy for selling/buying AI should depend upon your negotiating power. If you are involved in designing and developing AI, then you are in a position of strength and can negotiate a good deal. If you wait until after the AI is developed, your power is reduced. Using the ambulance design analogy: if a design is built with your parts, you would negotiate a good deal and licence its use. So it is with AI. But in this case, the parts are the data you share to create the AI and your time used to interpret the insights and predictions the AI makes.
It is appropriate to suggest that NICE DHT Tier 1 AI be released under an open-source licence: either one that does not preclude or prejudice its use, e.g. the MIT licence, which enables the private sector to embody the AI in their solutions and sell it, or one like the AGPL that does not allow re-sale. This is a bit like allowing another ambulance service to use your ambulance design but precluding them from selling it back to you.
NICE DHT Tier 2-3 AI is best disseminated under a non-exclusive Lambert agreement, because such AI projects often involve partnerships with academia and industry. Lambert agreements are template documents freely available from the Intellectual Property Office's website. This is much like saying: you can sell the ambulance design, but my health board gets some revenue back from each deal.
Over and above the clinical risk of bringing AI into use, there is a cost risk. The principal cost risk is the time and effort associated with bringing a NICE Tier 1-3 solution to bear within a pathway. For a NICE DHT Tier 1 service, this can be around £0.5 million; for a Tier 2-3 product, it could be between £2 million and £5 million.
AI Portfolio Monitoring
AI Portfolio Monitoring is, in essence, a job checklist. It guides you in deciding which jobs to do first. It is a good idea to have a set of criteria for assessing the risk of an ambulance failing. So it is with AI. Below are the CarefulAI Criteria we use with NHS customers to prioritise AI jobs. The riskier the status, the more critical it is to address the AI issue. A direct analogy is an MOT. If your vehicle is at risk, it will fail its MOT; if an MOT assessor has some concerns, they will list a series of advisories in your MOT assessment. With the CarefulAI Criteria, the main difference is that customers and staff are encouraged to report an AI system's risk status. This is akin to asking an ambulance driver or paramedic to assess their vehicle for its potential MOT status.
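To make the idea concrete, here is a minimal sketch of how such a prioritisation list could be encoded. The statuses, weights and project names are illustrative assumptions, not the actual CarefulAI Criteria; the point is simply that the worst status reported by anyone, staff and patients included, pushes a job up the list.

```python
# A minimal sketch of risk-based portfolio prioritisation, in the spirit of
# the criteria described above. Statuses and scores are hypothetical.
from dataclasses import dataclass, field

# Higher score = riskier = address first (an MOT fail beats an advisory).
STATUS_SCORES = {"fail": 3, "advisory": 2, "pass": 1}

@dataclass
class AIProject:
    name: str
    assessor_status: str                               # formal assessor's view
    user_reports: list = field(default_factory=list)   # staff/patient-reported statuses

    def risk_score(self) -> int:
        # Staff and service users can escalate risk: take the worst status seen.
        statuses = [self.assessor_status] + self.user_reports
        return max(STATUS_SCORES[s] for s in statuses)

portfolio = [
    AIProject("triage chatbot", "pass", user_reports=["advisory"]),
    AIProject("x-ray classifier", "advisory", user_reports=["fail"]),
    AIProject("appointment reminders", "pass"),
]

# The riskiest projects rise to the top of the job list.
for project in sorted(portfolio, key=AIProject.risk_score, reverse=True):
    print(project.name, project.risk_score())
```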
How funding sources affect AI deployment
Staff and clinicians in the health and social care sector are advised to take an interest in how a potential AI is being funded. Diverse funding streams come with a range of advantages and disadvantages. Government agencies expect long-term returns on public money invested. Businesses can have the same expectation; however, firms often require a return on investment within two years in the form of profitable income. Venture capitalists have much higher expectations: this can be a 10x return on their investment within two years.
With AI solutions that can fit into the NICE Tier 1 evidence framework: the evidence of fitness for purpose can be self-reported, with no need for independent verification. These types of applications attract all types of funding as the return on investment can be within one year.
Where more evidence is needed, the research and development process for a particular type of AI is longer. Lengthy development processes depend on more significant funds. Developments in AI that require Tier 2+ evidence tend to be funded by university research and development grants. When the evidence they provide enables the private sector to deploy a solution within around one year, they will attract private sector funding.
_____________________________________________________________
A STRATEGIC FRAMEWORK FOR AI IN THE NHS
Theme: Person-Centred Care
Person-centred care refers to a people-focused process that promotes independence and autonomy and provides choice and control, based on a collaborative team philosophy.
There is a possibility that AI could be used to understand people's needs and views and help build relationships with family members, e.g. via voice interfaces. Humans can then be trained to put such information accurately into context by comparison with holistic, spiritual, pastoral and religious dimensions. Consequently, feedback must be gathered from service users and staff on their confidence in the support provided by an AI system.
Theme: Staying Healthy
The principle of staying healthy is to ensure that people in Wales are well informed so they can manage their own health and wellbeing. In the main, it is assumed that staff's role is to nudge notions of good human behaviour, by protecting people from harming themselves or making them aware of good lifestyle choices. The care system is particularly interested in inequalities amongst communities that subsequently place a high demand on the public care system.
AI is being deployed to nudge human behaviours that affect the length and quality of human life, such as smoking cessation, regular activity and exercise, socialisation, education, etc. This could manifest as nudges staff provide to people based on service users' self-reported activity, or as automated nudges from electronic devices service users have access to, e.g. smartwatches and phones.
At this time, 'wellbeing advice' from staff and AI solutions is an unregulated field. In Wales, healthcare staff aim to deliver clinical care pertinent to their speciality, advise service users, and signpost them to support positive health behaviours. This is embodied in the technique of 'making every contact count'. It is, therefore, appropriate to suggest that reports on any AI system's fitness for purpose account for the frequency at which wellbeing advice is provided alongside focused clinical care and advice.
Theme: Safe care
The principle of safe care is to ensure that people in Wales are protected from harm and supported to protect themselves from known harm. The people best placed to advise on known harm in practice are clinical and technical staff who have had safety officer training. In UK law, AI system design and implementation need an associated clinical/technical safety plan and hazard log (e.g. DCB0129 and DCB0160). System suppliers are responsible for DCB0129, and system users are responsible for DCB0160. The mitigations in a risk register should be manifested in system design or training material.
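For readers who like to see the shape of the thing, here is a minimal sketch of a hazard log entry as a data structure, in the spirit of the DCB0129/DCB0160 logs described above. The field names and the severity-times-likelihood scoring are illustrative assumptions, not the standards' actual templates.

```python
# A minimal sketch of a clinical hazard log entry. Fields and scoring are
# illustrative assumptions, not the DCB standards' actual templates.
from dataclasses import dataclass

@dataclass
class HazardLogEntry:
    hazard_id: str
    description: str    # what could go wrong for a patient or member of staff
    cause: str          # why the AI might produce the hazard
    severity: int       # 1 (minor) .. 5 (catastrophic)
    likelihood: int     # 1 (rare) .. 5 (almost certain)
    mitigation: str     # manifested in system design or training material
    owner: str          # the Clinical Safety Officer accountable for sign-off

    @property
    def risk_rating(self) -> int:
        # A simple severity x likelihood matrix, as many risk registers use.
        return self.severity * self.likelihood

entry = HazardLogEntry(
    hazard_id="HAZ-001",
    description="Triage model under-scores chest-pain symptoms",
    cause="Skewed historical triage data",
    severity=5, likelihood=2,
    mitigation="Clinician review of all low-scored chest-pain cases",
    owner="Clinical Safety Officer",
)
print(entry.hazard_id, "risk rating:", entry.risk_rating)
```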
If a system is making decisions with no person in the decision-making loop, it is likely to be classed as a medical device. In this case, users should expect such a device to have evidence of compliance with UK MHRA and potentially UK MDR. The process required to demonstrate proof of compliance is lengthy and can take between 2 and 5 years.
MHRA and MDR compliance are not sufficient as stand-alone assurances of safety. Fitness for purpose for particular users and clinical situations still needs to be evaluated by health and social care practitioners and should not be overlooked.
Theme: Effective Care
The principle of effective care is that people receive the proper care and support as locally as possible and are enabled to contribute to making that care successful.
People have a right not to be subject to a decision based solely on automated processing (ref Article 22 of GDPR), so it is wise to ask if they wish AI to decide on their care before using it.
AI systems that automate the storage or dissemination of clinical information (e.g. voice recognition systems) can be expected to spread: the business cases for supporting service users promptly and remotely within their homes will be compelling.
Theme: Dignified care
The principle of dignified care is that people in Wales are treated with dignity and respect by others. We must protect fundamental human rights to dignity, privacy and informed choice at all times, and the care provided must take account of the individual's needs, abilities and wishes.
One of the main features of this standard is that staff and service users' voices are heard. Service users have, on average, two generations of family who have grown up with human-to-human interaction and advice and see this as the norm; however, there has been an accelerating trend of people taking advice from search engines and via video. As people, we can train AI systems to undertake active listening, motivational interviewing, and the delivery of advice with compassion and empathy.
AI methods to automatically understand the empathy, rapport and trust within such interactions are being developed.
Theme: Timely Care
To ensure the best possible outcome, people's conditions should be diagnosed promptly and treated according to clinical need. The timely provision of appropriate advice can have a significant impact on service users' health outcomes. AI systems are being deployed to give service users faster access to care services, e.g. automated self-booking systems and triage systems.
Theme: Individual care
The principle of individual care is that people are treated as individuals, reflecting their own needs and responsibilities. This care is manifested by staff respecting service users' needs and wishes, their chosen advocates, and providing support when people have access issues, e.g. sensory impairment, disability, etc.
AI that understands the needs of individuals can be as simple as mobile phone technology that knows when people are near the phone and can be contacted, or that translates one language into another. In the main, this class of AI interprets a service user's needs and represents them. Such AI can be provided by a service, e.g. a booking system, or by the service user, e.g. a communication app or an automated personal health record.
Theme: Staff and resources
The principle is that people in Wales can find information about how their NHS is resourced and how effectively it uses those resources. This goes beyond clinical and technical safety, governance and leadership. It extends to providing staff with a sense of agency to improve services. A health service must determine the workforce requirements to deliver high-quality, safe care and support. As discussed in the Topol Review, a better understanding among staff of the strengths and weaknesses of AI is an issue that needs to be addressed.
One of the critical challenges clinicians face when considering AI is what to focus resources on. Too often, they are faced with AI purchasing decisions with no framework for understanding the value, fitness for purpose or time-to-market issues associated with AI. Supporting staff to make decisions that synthesise service development requirements (identified through co-production between healthcare teams and service users) with evidence-informed AI applications gives the best potential to optimise resources.
_____________________________________________________________
GOVERNANCE AND AI
Classifiers and prediction tools are usually embedded in hardware and software used in health and social care. The providers of that hardware and software are governed under two practices: contract, and health guidance/law. In the UK, this guidance is provided by NICE and the MHRA. The acceptable margin of error for a classifier/prediction algorithm in a given situation is in the hands of the supplier who uses the algorithm and the contract they have with the NHS.
In situations where such AI technologies do not have a clinical outcome (for example, booking people onto a training course), the risk of harm is low. This means that the health and social care system does not try to police the development and implementation of such technologies. Conversely, where the potential risk to humans is high, policing is rigorous.
Depending upon where the risk to humans exists, this policing comes in the form of requirements to adhere to standards and independent assessment. In some cases, this mandates the provision of extensive evidence and research to support compliance with government agency guidance or law. The governance of standards adherence is nation-specific. For example, AI technology that complies with UK regulation cannot automatically claim readiness for deployment in the United States, and vice versa.
Important questions about AI technology to consider include:
- How much error or inaccuracy is acceptable? (How wrong can you afford to be?)
- What are the possible consequences of inaccuracies or errors?
- What mitigating steps can be taken to reduce the impact or seriousness of any acceptable error?
- What evidence is necessary to demonstrate that the AI is fit for its purpose?
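As a rough illustration of how those questions combine, here is a minimal sketch of an 'acceptable error' check for a classifier. The thresholds and rates are hypothetical figures a care team would set for their own pathway, not published limits.

```python
# A minimal sketch encoding the questions above: how wrong can you afford to
# be, and is the residual risk mitigated? All numbers are hypothetical.

def fit_for_purpose(false_negative_rate: float,
                    false_positive_rate: float,
                    max_fn_rate: float,
                    max_fp_rate: float,
                    mitigation_in_place: bool) -> bool:
    """True only if errors fit the agreed budget AND a mitigation exists."""
    within_budget = (false_negative_rate <= max_fn_rate
                     and false_positive_rate <= max_fp_rate)
    # Even within budget, acceptable errors need a mitigating step,
    # e.g. human review of the cases the AI waves through.
    return within_budget and mitigation_in_place

# A missed diagnosis (false negative) is far more consequential than a false
# alarm, so its budget is far tighter.
print(fit_for_purpose(false_negative_rate=0.01, false_positive_rate=0.12,
                      max_fn_rate=0.02, max_fp_rate=0.15,
                      mitigation_in_place=True))   # True: within budget, mitigated
```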
In the United Kingdom, NICE guidelines on evidence are an excellent way of understanding what information is needed based on the purpose of the AI you are faced with.
The role of the Care Quality Commission
If an AI supplier is providing software as a service, and that service is a clinical service, it is most likely that they will need to register with the Care Quality Commission. A directory of registered suppliers is available.
The Role of Clinical Safety Officers
An effective way of understanding the impact of technologies on clinicians, users and clinical practice is to comply with, and self-certify against, the Data Coordination Board's standards using Clinical Safety Officers; in particular, DCB0129 and DCB0160.
These standards are useful because they address the subject of risk of harm and the mitigation of it. In consideration of the risk of harm, AI users find themselves considering the implications of system design on any person or persons that may be affected by it. This opens up conversations about the use of data, inferences, how they are created and the implications of making decisions relating to AI use. If a supplier is to self-certify against DCB0160, they need to follow very similar procedures to DCB0129.
In the absence of a CE mark, both of these Standards are helpful. It could be argued that even with the existence of a CE Mark, evidence of an AI supplier and user using both of these standards illustrates a commitment to the safe use of AI in health and social care. In UK law, they are a mandatory requirement of designing, providing, and monitoring an IT system within the health and social care setting.
To self-certify against DCB0129/0160, a Clinical Safety Officer needs to be in place. Clinical Safety Officers are clinicians who have been trained in the effective implementation of these standards. The cost of such training is meagre.
The value of a CE mark
If a product has CE marking, understanding its intended use is very important. The duty of care for how technology is used in health and social care sits with the user, not the supplier. Therefore, it is important not to assume that because a supplier has a CE mark, the technology is certified for the purpose you wish to apply it to. Which firms carry the CE marking, and for what purpose, can often be found on the MHRA website.
Data Classification and Protection
Data about people and their lives has never been more readily available. Historically, people could only analyse coded data (e.g. diagnoses, symptoms, test and procedure results); nowadays, they can easily cross-reference this with patient-reported data. Analysis of data on people's lives, clinical and social encounters, family and work history, and genetics is now possible. We can analyse these data sources to predict health and social care trends and individual outcomes. Data needs to be classified and protected as a valuable asset.
The role of IRAS and HRA
On occasions when the deployment of a Tier 2+ AI is expected, it is most likely that formal research will be required. This often takes the form of local trials. To ensure that the success and failure of such trials can be effectively reported, it is worthwhile registering them with IRAS and the HRA. Registration is a requirement for academic research applications and for coverage in publications within the health and social care space.
When to start evidence gathering for MHRA
A challenge at this time is the paucity of authorised bodies able to independently demonstrate compliance against medical device legislation. Suppliers of AI technology can expect to wait over a year to gain access to a notified body capable of confirming compliance. If a supplier of AI technology waits until such a body is available before gathering evidence of appropriateness, the resulting delay in bringing the technology to market could be devastating to the business case for its use.
This is important because what constitutes technology that needs to be governed by the MHRA and NICE can be expected to change and become more encompassing over the next couple of years, when medical device legislation begins to cover all but a few health and social care areas.
The cost of compliance
In synopsis: if the AI has an impact on the outcomes of health and social care, it is most likely to fall into one of the NICE DHT categories. The burden of evidence in each category increases steeply as you move from Tier 1 to Tier 3. Tier 1 products are self-certified. In practice, unless you already have a CE mark, for Tier 2+ technologies the AI supplier needs to provide evidence to what is known as a notified body, which will indicate whether the AI meets the evidence requirements. The overarching body in the United Kingdom that manages the demand for the provision of evidence is the MHRA.
In 2020, the cost of compliance associated with providing this evidence ranged from approximately £0.5 million for a Tier 1 product to £2-5 million for Tier 2 and 3 products. For a Tier 1 product, the timescale to evidence compliance can be between six and twelve months; for a Tier 2 or 3 product, it can be between two and five years.
The cost of compliance is an important subject for all staff in health and social care. This is not because they incur such costs directly. The primary reason is that suppliers of AI-enabled technologies need to show evidence of compliance, and the time and effort of health and social care staff in the creation/collation of that evidence needs to be considered when developing or accepting a business case.
Reducing the cost of compliance
In academia, the technical time and cost associated with developing AI for just one part of a pathway can be three years and £0.5-1 million. The cost of testing, validating and supplying technology often falls to the supply chain nearest the end-users of the research. This is often too late. Many research and development programmes are instigated without the involvement of end-users and safety officers from the outset. A common pitfall is to apply for, or be given, funding to produce a proof of concept without addressing the technical and clinical risks associated with the potential implementation of the AI from the outset. If research is to be realised as a developed product, gathering compliance data from the outset is helpful to researchers and the clients they seek to support.
Who should lead AI Development?
There are many different types of people involved in the research and development of AI solutions: academic researchers, specialists in particular technologies (such as text, image and sound analysis), mathematical modellers, data scientists and computer scientists.
Research should be led by health and social care sector staff potentially affected by its deployment. They are often very aware of the actual problems that need to be solved and which of those problems are enduring. The time commitment needed to design, develop and provide supporting evidence for AI solutions means that short-term problems are not an optimal focus for AI technology.
Safely developing AI user business cases in a Sandbox
Over and above the general requirements for the provision of IT systems within health and social care, perhaps the most critical facet of using AI is the effective use of a sandbox. These are places where data is held in a form that cannot be cross-referenced to identify members of the public. With this data in either closed or open repositories, AI providers can test and prove the efficacy of their solutions. The efficacy of a solution is best proved in situ, with mirrors of the systems used in practice.
Simply validating the accuracy and appropriateness of an algorithm outside a mirror of the pathway in which it is to be used is erroneous. The reason for this is that algorithms behave differently when they are linked together in a system. Algorithms also behave differently within different computer systems; they are often vehicle-dependent. If a user is to properly understand the validity of using an AI system within a pathway, it is more meaningful to test it in practice, with real-world data, within the specific pathway and with the specific technology used in that pathway.
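A minimal sketch of that distinction, under invented data: the same 'model' scores perfectly when validated alone, yet the end-to-end pathway, with a badly wired downstream step, does not. The pathway steps and cases are hypothetical placeholders.

```python
# A minimal sketch contrasting isolated model validation with end-to-end
# pathway validation. Steps and cases are invented placeholders.

def model_accuracy(model, labelled_cases) -> float:
    """Validation in isolation: model output vs. ground truth."""
    hits = sum(model(case["input"]) == case["label"] for case in labelled_cases)
    return hits / len(labelled_cases)

def pathway_accuracy(pathway_steps, labelled_cases) -> float:
    """End-to-end validation: each case flows through every linked step
    (e.g. preprocessing, the AI model, a downstream triage rule), and only
    the final pathway decision is scored."""
    def run(case):
        data = case["input"]
        for step in pathway_steps:
            data = step(data)
        return data
    hits = sum(run(case) == case["label"] for case in labelled_cases)
    return hits / len(labelled_cases)

# Toy pathway: a threshold "model" plus a downstream rule that silently
# suppresses positives, changing end-to-end behaviour.
model = lambda x: x >= 0.5
cases = [{"input": 0.55, "label": True}, {"input": 0.40, "label": False}]
print(model_accuracy(model, cases))                   # model alone: 1.0
triage_rule = lambda flag: flag and False             # badly wired step
print(pathway_accuracy([model, triage_rule], cases))  # in the pathway: 0.5
```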
Procurement
The Role of Procurement
It could be argued that unless procurement departments take an active role in the design and development of AI, changes to existing customs and practices will not happen. An erroneous assumption is that you can buy an AI product that will work independently in a pathway. A correct assumption is that, at best, one is buying integrated assets and services to which one can and should apply KPIs. When related to health and social care services, these KPIs can be tied to outputs. Each solution is likely to have different KPIs depending upon the application of the AI technology in practice. The best people to set these KPIs are safety officers and end-users.
Continuous monitoring
Being locked into or out of technology when the rate of change in AI is so great is unwise. Short-term contracts with very specific KPIs are advisable in this situation. The fact that ISO standards and medical device legislation recommend continuous monitoring is helpful to AI system buyers. Using continuous monitoring with the option of contract extensions is a more effective way of engaging AI suppliers than committing to one particular technology platform or a long-term development contract.
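Here is a minimal sketch of what KPI-driven continuous monitoring with a contract-extension decision might look like. The KPI names and thresholds are invented for illustration; in practice they would be set by safety officers and end-users, as noted above.

```python
# A minimal sketch of a KPI-gated contract review. KPI names and thresholds
# are hypothetical; the monitoring data is what MDR/ISO-style continuous
# monitoring would already be collecting.

KPIS = {
    "triage_sensitivity": 0.95,    # minimum acceptable
    "uptime": 0.99,                # minimum acceptable
    "mean_response_seconds": 3.0,  # maximum acceptable
}

def review_contract(observed: dict) -> str:
    """Return 'extend' only if every KPI in the service contract is met."""
    met = (observed["triage_sensitivity"] >= KPIS["triage_sensitivity"]
           and observed["uptime"] >= KPIS["uptime"]
           and observed["mean_response_seconds"] <= KPIS["mean_response_seconds"])
    return "extend" if met else "renegotiate or exit"

print(review_contract({"triage_sensitivity": 0.96, "uptime": 0.995,
                       "mean_response_seconds": 2.1}))   # extend
```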
Sellers of AI and how to engage them
Sellers to the health and social care sector sometimes present AI as a solution to unidentified problems. Such pitches should not be the primary concern of people in health and social care, and organisations that promote AI for problems that are not enduring should be avoided. In addition, clinicians should not be drawn into validating individual models for suppliers. The reason is that whilst an AI model can be fit for purpose in one case, when such models are joined together they may behave differently from how they operated when first validated.
If staff in the healthcare sector are drawn into discussions with existing technology suppliers, it is better to focus on understanding whether an AI model/technology is fit for purpose in a pathway. Testing whether AI is appropriate for a pathway should be done with data that reflects real-world situations and with people who would be directly affected by the AI's deployment. Having a safe area where technology is tested within a pathway can significantly reduce the burden on staff. In essence, it places the burden of proof of fitness for purpose on suppliers, not on the health and social care system.
Managing risk
An important feature of developing an AI-enabled pathway is to understand any potential clinical risks arising. The clinical risks that arise are usually associated with how users interact with the AI. Other risks are generally covered by guidance notes and standards that technology providers will have access to. Asking a supplier to evidence their compliance with the DTAC guidance procedure is a useful step before engaging engineers.
An effective way of managing the technical and clinical risks associated with an AI solution is to engage the services of a clinical safety officer. If your organisation is deploying IT, a clinical safety officer will exist. As previously mentioned, this is a legal requirement under DCB0129 and DCB0160.
Involving such an officer early on in design, development and research reduces the cost of potential implementation. In particular, the process of managing a risk register enables researchers and developers to gather information on what makes AI fit for purpose. If one also prepared the risk register with potential end-users, many of the difficulties associated with implementing AI technology would be reduced. Gathering such evidence from the outset of a research and development project will increase its value. We can expect that all AI will eventually need to mitigate risks and evidence real-world benefits, irrespective of whether it is classified as a medical device at this time.
Supplier standards
Suppliers who do not comply with, or who are not working towards, ISO 13485 and IEC 62304 cannot be expected to provide the continuing real-world evidence of fitness required of software under medical device legislation. Organisations that comply with such standards can develop technology that achieves CE status. The acceptability of a CE mark will not be diminished following Brexit; however, suppliers of AI technology will need to register their devices, as appropriate, with the MHRA.
Being an Effective Data Guardian
If you are faced with an AI supplier developing its software using data for which you are a guardian, it is important to consider the implications of its use against the Data Protection Act and GDPR, both of which put a duty of care on the health and social care system. This duty is to make sure that where technology is applied to decisions about the public, people are aware of and agree to those decisions and have a right not to have algorithmic decisions made about them. How data is processed by AI software and hardware should mirror the data privacy terms of service of the health and social care provider using it. If it does not, then the contract for its use should stipulate which privacy agreement takes precedence over the other, and how.
Managing Innovation
Most frequently, data is required to design and test a model that makes a classification or a prediction. Research and development organisations are the primary producers of such models. Currently, most AI models are developed by people who have a PhD in a particular technical discipline. As PhDs are most often found within academia, universities often lead the development of new and emerging AI. Users should direct research around the systems and pathways they and their organisation are committed to using. When such innovations become proven, or commodities, MSc computer science students are frequently used to build the AI systems used within pathways.
Using Real-world data to validate pathways of AI, not just AI models
The data needed to prove the efficacy of an AI model will most likely change when the AI is embedded in a system. Paying attention to how much the data may change over time is worth considering early in the innovation process. This is particularly important if one considers joining models together, as their performance may change when they sit in a pathway of systems. It is for this reason that users are encouraged to validate models against real-world data within end-to-end testing of AI-enabled pathways.
_____________________________________________________________
AI's POTENTIAL IMPACT ON THE NHS
People in the AI Value Chain
AI depends on mathematics, statistics, computer science, computing power and a human's will to learn. Humans involved in AI have many titles and skills. They include:
- Front line workers (nurses, doctors, allied health professionals, administrative staff etc.) who gather, analyse, and control data relating to human interactions and use of management systems
- Life Scientists who manage and research laboratory facility and clinical data
- Epidemiologists & statisticians who look at public health
- Data scientists who look at data analytics, modelling and systems analysis
- Technology experts skilled in developing and implementing electronic patient/hospital records systems and specific data types (images, text, coded data, voice etc.)
- Management who use methodologies and communication tools to engender systems use
Optimising Organisational Outcomes
The NHS and social care sector has over 1.5 million staff and a myriad of providers and programmes, each with their own systems, which means that data is heavily siloed. In practice, few organisations have data of sufficient quantity and quality to make centralised AI systems a realistic option. The potential for federated AI development is therefore significant, and integrating data for AI model development is a realistic current challenge.
The opportunities to improve workflows, predictive analytics, visualisations, data search, data tracking, data curation, and model development and deployment are apparent, all of which could improve the efficiency and effectiveness of existing and emerging health and social care outcomes. The opportunities to manage resources flexibly, dynamically and in real time will increase with the use of AI.
An expanded data and computer science service
This will require the increased use of containerisation of AI models and pathway systems, and the development of microservices, cybersecurity systems, image processing, natural language processing, voice model development and curation, graphical processing, web development, etc. Delivering these using agile techniques, by people trained to understand complex and black-box algorithms, will be necessary across various data sources. It will be increasingly essential to track data from its source, re-use and cross-reference health and public-domain data, create and curate ground-truth data for pathways, and generate and track insights as far as they affect pathways.
The changing world of bioinformatics
The cost of generating and analysing biological data has reduced significantly. The ability to scale and manage data often sits in research establishments and universities, with government-funded state-of-the-art infrastructures, cloud computing, security testing and students to analyse health and social care themes. The opportunities for local clinicians and carers to direct how data is analysed, tagged and structured for local health outcomes are clear. In Wales, this has manifested itself in the local data analytics groups within the National Data Repository project.
Researchers
Speeding up Life Science, Biomedical and Service Analytics
AI has the potential to accelerate academic research and optimise the scientific process. The ability to swiftly analyse and visualise biomedical data is improving triage and decision making. Swifter sharing of this data has made pan-regional decision-making more effective. Coupled with developments in cybersecurity, data sharing inside and outside of health and social care has become more accessible and safer. The ease and speed with which one can obtain and analyse data and draw insights improve the management of service and health outcomes. Pathway management and the systems within it create a myriad of data sources that one can use to enhance the efficient, safe, and compliant delivery and development of health and social care services.
Increasing the importance of data stewardship
The health and social care system has historically relied upon clinical trials and survey data. Increasingly it is using structured electronic health record data to predict and classify health outcomes. Due to the increasing accuracy and commoditisation of natural language tools, linking structured data to inferences from text using text analytics will become widespread. Cross-referencing these with dispensing records and service funding planning will increase the capability of value-based healthcare analysis.
Cross-referencing such data with analysis of patient discussions has the potential to improve the understanding of the effectiveness of interactions and opportunities for more effective triage. The move towards cross-referencing health data with real-world social data (wearables etc.) is common practice in private healthcare economies. It can be expected to gain momentum in the UK. Sensitive data stewardship and privacy-protecting techniques will be increasingly demanded by government agencies tasked with information governance. Increased due diligence around the use of online monitoring can be expected, particularly when techniques used in the advertising sector for population and retail consumer analysis become widespread in the monitoring of prescription compliance and dispensing. The formal assurance of methods and techniques in this domain can be expected when they affect services with a clinical outcome.
_____________________________________________________________
THE NHS' POTENTIAL IMPACT ON AI
The impact of compliance regulation on AI users
The amount of effort needed to apply AI in health and social care varies. When planning to use AI, it is useful to segment suppliers in terms of where they are in the cycle of showing evidence of compliance, where they want to be, and therefore the data demands that will be placed on your use of the technology.
Evidence of compliance often relates to showing a statistical link between a technology's purpose and the outcomes associated with that technology's use. For NICE DHT Tier 3 (and above) technologies that have not yet been classified as such, the supplier will want to show a comparison of the technology against existing customer practice. In some cases, this may involve formal research and potentially randomised control trials. In the latter case, use of the technology should not be considered before a research/evidence proposal is lodged with the MHRA and HRA.
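As a rough illustration of 'showing a statistical link', here is a minimal sketch comparing outcome rates in an AI-assisted pathway against existing practice with a two-proportion z-test. The counts are invented; a real Tier 3 claim would rest on a properly designed, potentially randomised, trial.

```python
# A minimal sketch of linking a technology's use to outcomes: a two-sided
# two-proportion z-test. All counts are invented for illustration.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return z, p_value

# e.g. 180/200 good outcomes with the technology vs 150/200 under existing
# practice: a large z and small p suggest the difference is not chance.
z, p = two_proportion_z(180, 200, 150, 200)
print(f"z = {z:.2f}, p = {p:.4f}")
```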
The impact of users on AI
AI technology will need to comply with the standard operating procedures of internal departments, e.g. procurement, information technology, informatics, information governance, intellectual property, health and safety, etc. A common error when applying AI is to assume that because it may be free, offsite, etc., it does not have an associated cost. This cost may manifest itself in internal departments. Understanding what this cost is before implementing AI in a pathway is highly recommended.
AI will have an increasing impact on the use of human resources within a health and social care setting. The opportunities to automate decision-making are becoming apparent to people involved in managing people and digital transformation. An understanding of the human cost and benefit of implementing AI will become increasingly important. It is advisable to begin discussions on the use of AI with people responsible for these activities before decisions are made to bring technologies into practice.
Users' Roles in Medical Device Regulation (MDR) Compliance
An important facet of MDR is the continuous monitoring in place to ensure compliance. User-defined KPIs will be useful when applying AI that is subject to medical device regulations. The main facet of this compliance is evidence that users understand an AI system is fit for purpose and safe: much like with the pharmaceutical Yellow Card Scheme. In Wales, managers can give users and those affected by AI a sense of agency over AI's use if they continually measure its success in a pathway against the tenets of the Welsh Care Quality Framework.
This evidence should be provided by clinical/social care end-users and approved by their management, senior management team and the board. Procurement departments should provide advice on how training the AI can be used to decrease the cost of its use. They should be mindful of opportunities to mirror end-user KPIs in contractual KPIs.
Depending upon the answers to the questions raised above, health and social care organisations can be expected to prioritise the resources targeted at AI ideas, products, development and research.
The resources targeted at AI's use will depend on the evidence needed to prove its value in a pathway. When developing a portfolio of AI projects, this is an important consideration. The time taken to get to User Acceptance Testing relates directly to the costs of bringing AI to market. Understanding this cost will directly impact plans to share risk with those in the AI supply chain, e.g. academic establishments and AI firms.
_____________________________________________________________
WHERE TO LEARN MORE
This depends on your role in the future, Stubbsie.
If you are a member of a health and social care organisation, you are most likely best advised to concentrate on identifying enduring problems with your management team. This group should then approach your IT/information teams to understand what pipeline of projects they are prepared to support; the standard operating procedures for IT, informatics, IG, cyber security, etc. will help ensure that AI has a home when it arrives within a health and social care system.
If you are not part of a health and social care organisation and still want to do things in AI, the ramblings above should help you navigate your way around some of the important subjects. But if you are keen to 'get stuck in', bear in mind the following five take-aways as you learn more.
1. Understanding the value of AI
It is helpful to think of AI as a technology used to classify or predict outcomes, e.g. cancer as shown on an X-ray, or the need to send a person a reminder. When you understand how wrong you can afford to be in that prediction or classification, you can better understand the value of AI in your work.
Once you have answered these questions, you need to ask yourself, what data have I got access to that will enable me to reach that conclusion? This is because AI is not artificial intelligence; it is a set of techniques to look at data and change how systems operate based on that data.
For example, with cancer identification on an X-ray, AI will analyse the pixels in the image and conclude what the image represents, based on what it has been trained to identify.
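Sticking with that example, here is a minimal sketch of how 'how wrong can you be' is usually quantified: sensitivity (cancers caught) and specificity (healthy people correctly cleared) from a confusion matrix. The counts are invented for illustration.

```python
# A minimal sketch of quantifying classifier error for the X-ray example.
# The counts below are invented, not real clinical figures.

def rates(true_positive, false_negative, true_negative, false_positive):
    sensitivity = true_positive / (true_positive + false_negative)  # cancers caught
    specificity = true_negative / (true_negative + false_positive)  # healthy cleared
    return sensitivity, specificity

# Out of 1,000 scans: 95 cancers flagged, 5 missed; 850 healthy cleared,
# 50 recalled unnecessarily.
sens, spec = rates(true_positive=95, false_negative=5,
                   true_negative=850, false_positive=50)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
# A missed cancer and an unnecessary recall have very different consequences,
# which is why the acceptable error budget differs for each kind of mistake.
```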
2. The Importance of Data
The development of AI is entirely dependent on data. Gathering the data that enables an AI model to make predictions or classifications fit for purpose is a highly governed area. The Data Protection Act and GDPR place significant restrictions on what data can be used and for what purpose. Consequently, before embarking on the use, research or development of AI, it is wise to understand its implications and impact on a use case.
The information governance teams who act as guardians of public data will want to consider evidence in data protection impact assessments (DPIAs) and compliance with the Data Protection Act, GDPR and ICO guidelines.
Significant public monies are spent trying to bring data into extensive data resources. This is a very difficult task in practice and assumes that models need to be created centrally and distributed nationally.
This assumption needs to be tested, as it is often acceptable to build models locally and deploy them nationally. This technique, federated machine learning, has several advantages for information governance; not least that data does not need to leave the boundaries of a health board's firewall to be useful in developing AI.
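For the curious, here is a minimal sketch of the federated idea: each health board fits a model on data that never leaves its firewall, and only the fitted parameters are pooled centrally. A plain-Python linear fit stands in for the model; the boards and numbers are invented.

```python
# A minimal sketch of federated averaging: local data stays local, and only
# model parameters are shared and averaged. Data here is invented.

def fit_local(xs, ys):
    """Least-squares slope/intercept computed locally, inside the firewall."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Each board fits on its own data; only (slope, intercept) pairs are pooled.
board_a = fit_local([1, 2, 3, 4], [2.1, 4.2, 5.9, 8.1])
board_b = fit_local([1, 2, 3, 4], [1.9, 3.8, 6.2, 7.9])

# The central service averages parameters, never seeing the raw records.
federated = tuple(sum(w) / 2 for w in zip(board_a, board_b))
print("federated model (slope, intercept):", federated)
```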
3. How AI is deployed
AI technology is commonly deployed within hardware and software; increasingly, it is deployed as a service. Users of the technology gain a licence to use it under terms. Consideration of those terms is essential, particularly where the technology can adapt itself based on the data it is given access to.
4. Unlocking IP
To ensure that private sector funding does not lock health and social care organisations out of other potential developments, it is vital to have a formal agreement with private sector funders. Effective agreements are generally associated with the standard operating procedures of hosts in the health and social care sector. However, most hosts do not have access to guidance on managing the associated intellectual property. In this case, it is useful to refer to the Lambert agreements on the IPO website.
The choice of agreement will depend upon the board within an organisation. Many AI developments are conducted under Lambert Agreement 4.
5. Licensing AI's use
Within the health and social care system of the United Kingdom, a common practice is to require that purchased technology is provided under an open-source licence, ensuring users are not locked into particular technologies or the support and development thereof. This said, your ability to do this will very much depend upon whether you can influence how the AI is made. If an organisation is providing data to train AI, it can affect the terms of the AI licence.