CarefulAI's Mental Health Work
CarefulAI was initially set up by Prof. Joseph Connor in Wales, the birthplace of the NHS, to fill a gap in the provision of safer AI services in the NHS.
Joseph identified this gap in 2017 while head of AI Innovation at NHS England (the NHS being the world's largest healthcare provider), designing mental health triage chatbots for depression and anxiety, and triage systems for the NHS, with Microsoft technology.
These AI agent and triage approaches have since been developed further and deployed in the NHS. They also form the bedrock of many private-sector mental health triage systems that process millions of mental health enquiries per annum.
In 2018, it became clear that the sector was unwilling to invest time and money in automating suicide prevention. So CarefulAI did.
CarefulAI subsequently took the suicide ontology originally designed for the Samaritans (the UK's largest crisis hotline) by Cardiff University, expanded it with partners, and validated it with clinical safety officers so that it became regulatory-compliant for use in the NHS. The ontology has been free to use in the UK under an AGPLv3 licence. The original work was later built upon, with self-harm language added to the ontology. The resulting service, called SISDA, is used daily around the world, triggering 4,000+ investigations per day (as of 2025).
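For illustration, a minimal sketch of how ontology-based language screening of this kind can work is shown below. The categories, phrases, and function names are assumptions made for the example; they are not the SISDA ontology itself, which is far larger and clinically validated.

```python
import re

# Hypothetical excerpt of a risk-language ontology: category -> phrases.
# Illustrative only; not the SISDA ontology.
ONTOLOGY = {
    "suicidal_ideation": ["end my life", "no reason to go on"],
    "self_harm": ["hurt myself", "cutting"],
}

def screen_message(text: str) -> list[str]:
    """Return the ontology categories whose phrases appear in the text."""
    lowered = text.lower()
    hits = []
    for category, phrases in ONTOLOGY.items():
        if any(re.search(r"\b" + re.escape(p) + r"\b", lowered) for p in phrases):
            hits.append(category)
    return hits

# Any non-empty result would trigger an alert for further screening.
print(screen_message("Some days I feel there is no reason to go on."))
# -> ['suicidal_ideation']
```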
Early warning language associated with cognitive dissonance has since been provided as a separate service for use in chat agents to assist early interventions.
CarefulAI methods are currently deployed on Google, IBM, Microsoft, and AWS cloud platforms for screening email, text, and chat interfaces.
As a self-hosted service, SISDA is deployed by the majority of the world's largest healthcare providers and by providers of digital mental health services. Each year it helps signpost millions of people to help.
SISDA has identified that 13% of enquiries to low-acuity depression and anxiety mental health services, and 3% of enquiries to ADHD and ASD services, contain language that needs further screening. But there is a problem associated with SISDA's use, and it is a global one: there are not enough people to screen SISDA alerts.
Consequently, CarefulAI has developed and is testing 'AI as a Judge' techniques with the digital mental health community.
Coupling 'AI as a Judge' with SISDA has reduced SISDA false positives by 31%. However, this reduction can fall to 3% depending upon the prompts used by a mental health service provider. To manage this issue, CarefulAI developed an approach called PLIM, which actively involves clinical safety officers in prompt-LLM design. PLIM has subsequently been rolled out by the UK government as general best practice. When it is used, false positives fall to 5%, which increases the satisfaction of the agencies tasked with following up suicidal ideation and self-harm alerts.
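A minimal sketch of how an 'AI as a Judge' stage can sit behind a SISDA-style screen is shown below. The judge prompt, the ESCALATE/DISMISS protocol, and the judge_alert helper are illustrative assumptions, not CarefulAI's PLIM-reviewed implementation.

```python
from typing import Callable

# Illustrative judge prompt; a production prompt would be designed with
# clinical safety officers under a PLIM-style process.
JUDGE_PROMPT = """You are reviewing a message flagged by a language screen
for possible suicidal ideation or self-harm. Answer with one word:
ESCALATE if a human screener should review it, or DISMISS if it is a
clear false positive (e.g. song lyrics, an idiom, a third-party report).

Flagged categories: {categories}
Message: {message}
"""

def judge_alert(message: str, categories: list[str],
                llm: Callable[[str], str]) -> bool:
    """Return True if the alert should be passed to a human screener.

    `llm` is a stand-in for whichever hosted model a service provider
    uses; it takes a prompt and returns the model's text reply.
    """
    prompt = JUDGE_PROMPT.format(categories=", ".join(categories),
                                 message=message)
    verdict = llm(prompt)
    # Fail safe: anything other than an explicit DISMISS is escalated.
    return verdict.strip().upper() != "DISMISS"

# Demo with a mock LLM that escalates everything (the cautious default).
print(judge_alert("I can't see a way out.", ["suicidal_ideation"],
                  llm=lambda prompt: "ESCALATE"))  # -> True
```

The fail-safe default, escalating unless the judge explicitly dismisses, reflects the priority in this setting: a judge stage should only ever remove clear false positives, never suppress genuine risk.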
The challenge with PLIM is that it requires AI engineering teams to have access to clinical safety officers. To decrease this dependency, CarefulAI created Cris, an AI agent trained to act as an adviser in clinical risk management (in line with DCB 0129).
If you chat with Clare, you will see that CarefulAI frameworks are used across the digital mental health and wellbeing sector to enable mental health service providers to scale their digital offerings. For example:
Tri: A method of triaging anxiety, depression, ADHD, OCD, BPD, PTSD, and phobias.
Sunrise: An agent to explore mood improvements.
Emo: An agent that encourages users to explore their emotions.
CarerCare: A method of signposting unpaid carers to resilience support.
Will: An agent that encourages users to explore the five ways to wellbeing method.
Crit: An agent that encourages critical thinking.
At this time, CarefulAI is at the centre of AI risk assessment and mitigation in the mental health domain. Working across the sector with customers, payors, providers, standards bodies, and regulators, we recognise the pressure on each of them and on the AI sector. For this reason, we are petitioning for a new approach to prompt and LLM use in mental health.
The benefits of this approach:
Increased accuracy of mental health screening
Decreased cost of managing mental health enquiries
Learnings from prompt-LLM development shared globally
The approach is shown below.