SISDA V3
Innovation:
This project will develop an AI-powered conversational agent to provide personalised support for at-risk individuals. It leverages cutting-edge NLP models such as GPT-3, fine-tuned on counselling dialogues, to enable natural conversations.
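As a concrete illustration of the dialogue-model component, the sketch below fine-tunes an open causal language model on a counselling-dialogue corpus using Hugging Face Transformers. It is a minimal sketch under stated assumptions: the model name, data file and hyperparameters are placeholders, not the project's actual assets (GPT-3 itself would be fine-tuned through its provider's API rather than locally).

```python
# Illustrative sketch only: fine-tuning an open dialogue model on counselling
# transcripts. Model choice, file path and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # stand-in for the production language model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical corpus: one counselling dialogue per line of plain text.
dataset = load_dataset("text", data_files={"train": "counselling_dialogues.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sisda-dialogue-model",
                           per_device_train_batch_size=4,
                           num_train_epochs=3),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```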
Going beyond text alone, it incorporates multimodal analysis of speech, language and facial cues, through partnerships with leaders such as Affectiva, to better assess mental state and suicide risk. Profiling of personality, trauma history and demographics allows highly tailored conversations for enhanced engagement, drawing on recommendation techniques developed at Microsoft Research.
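One way the multimodal assessment could work is late fusion: per-modality scores combined by a simple classifier into an overall risk estimate. The sketch below is illustrative only; the feature names, toy values and logistic-regression combiner are assumptions for exposition, not the project's finalised fusion architecture.

```python
# Illustrative late-fusion sketch: combine per-modality scores into one risk
# estimate. All values below are toy placeholders, not real patient data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-session features: [text_risk_score, voice_arousal, facial_negative_affect]
X_train = np.array([
    [0.82, 0.55, 0.70],
    [0.10, 0.20, 0.15],
    [0.65, 0.40, 0.60],
    [0.05, 0.30, 0.10],
])
y_train = np.array([1, 0, 1, 0])  # 1 = elevated risk as judged by clinicians (toy labels)

fusion_model = LogisticRegression().fit(X_train, y_train)

new_session = np.array([[0.74, 0.50, 0.66]])
print("Estimated risk:", fusion_model.predict_proba(new_session)[0, 1])
```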
Most existing suicide prevention tools rely on simple risk checklists or offer only scripted, one-way chatbot interactions. Our solution is powered by deep learning on real conversations to provide two-way, empathetic dialogue.
This level of contextual understanding and personalisation is lacking in current tools. Our solution aims to fill these gaps and make a measurable impact on suicide prevention, integrating with the electronic care record where one exists.
Methodology:
We will employ a human-centred, participatory methodology with extensive engagement of partners and stakeholders throughout the project.
Clinicians, mental health experts and, crucially, survivors of suicide attempts will be involved in co-design workshops to identify key needs and define evaluation metrics. Conversational flows will be refined through focus groups with target minority communities to ensure cultural competence.
Training data will undergo comprehensive auditing by external bias experts to minimise harms. Rigorous clinical evaluation will include a randomised controlled trial with 300 participants to demonstrate improved risk detection over standard clinical assessments.
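For orientation, a power check for a 300-participant, two-arm trial might look like the sketch below; the effect size and significance level are illustrative assumptions rather than figures from the trial protocol.

```python
# Illustrative power check for a 300-participant RCT (150 per arm).
# Cohen's d = 0.35 and alpha = 0.05 are assumptions for exposition only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
power = analysis.solve_power(effect_size=0.35, nobs1=150, alpha=0.05,
                             ratio=1.0, alternative="two-sided")
print(f"Power with 150 participants per arm: {power:.2f}")
```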
Once the system is deployed at partner hospitals and online forums, usage data, user feedback and outcomes will be continually monitored to identify any limitations and iteratively improve it.
The project plan provides details on milestones, work packages and timelines for developing the solution. Our team has successfully delivered 8 healthcare AI projects and is experienced in responsible and transparent development of such societally sensitive technologies.
Fairness and Ethics:
Involvement of marginalised communities and advocates throughout the project will help address structural inequities perpetuated in existing tools. Bias testing frameworks will enable systematic evaluation and minimisation of biases across attributes such as race, gender identity, age and culture.
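One possible shape for such bias testing is sketched below using Fairlearn's MetricFrame to compare risk-detection recall across demographic groups. The choice of library and the toy labels are assumptions for illustration, not the project's finalised testing framework.

```python
# Illustrative bias check: compare risk-detection recall across groups.
# Labels and group assignments below are toy placeholders, not real data.
from fairlearn.metrics import MetricFrame
from sklearn.metrics import recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 1]                  # clinician-confirmed risk labels (toy)
y_pred = [1, 0, 0, 1, 0, 1, 0, 0]                  # model risk flags (toy)
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]  # e.g. self-reported demographic group

frame = MetricFrame(metrics={"recall": recall_score},
                    y_true=y_true, y_pred=y_pred,
                    sensitive_features=group)
print(frame.by_group)                               # recall per group
print("Max gap:", frame.difference(method="between_groups"))
```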
Data practices, transparency and accountability mechanisms will be audited by external experts. In addition, extensive consultations with ethicists and civil society groups will inform consent, privacy and accountability policies.
Partners like Samaritans will support engaging underserved groups in participatory design.
Data diversity will be improved by sourcing conversations in minority languages and contexts through partnerships with international groups like ReachOut.
Such robust participation and auditing will enhance the solution's equity and acceptability.