Gratis Data Science Work
As part of its commitment to social responsibility in data science, CarefulAI is helping to build the foundation of safe, scalable mental health AI.
CarefulAI delivers clinically validated AI for mental health triage and suicide-risk detection. The company was founded by Prof. Joseph Connor, former Head of AI Innovation at NHS England, after identifying a critical gap: mental health services needed scalable, reliable AI to detect high-risk language — and no one was building it.
We developed SISDA, a suicide and self-harm detection ontology originally based on work for the Samaritans and expanded with clinical safety officers to become NHS-compliant. Released under AGPLv3, SISDA is now used globally, triggering more than 4,000 investigations every day, and is deployed by many of the world’s largest healthcare providers.
But SISDA revealed a systemic problem: there are not enough human reviewers to handle high-risk alerts. So we built AI as a Judge, reducing SISDA false positives by up to 31%, and created PLIM, a prompt-safety framework now adopted by the UK government. To further scale safe deployment, we built Cris, an AI adviser aligned with clinical risk standards.
Today, CarefulAI powers safe screening and triage across Google, IBM, Microsoft and AWS platforms and supports a suite of mental-health AI agents used throughout the digital wellbeing sector.
We’re now investing in PLIM+, a new global Prompt-LLM experience-sharing network for safe LLM use in mental health: one that increases accuracy, reduces operational cost, and shares safety learnings worldwide.
Details are shown below