DioSim
DioSim, developed in 2017 by Professor Joseph Connor and his data scientists, was designed to help healthcare providers quantify the time and energy expended during two-party dialogues to achieve specific outcomes. Research has shown that optimal results typically emerge from natural, friendly conversations between two participants (human-human or human-AI), while stressful interactions yield less efficient outcomes.
Faced with limited access to real-world healthcare conversation data, the team creatively utilised two distinct BBC sources: The Listening Project, which provided examples of friendly dialogues, and Andrew Marr Political Interviews, which represented more stressful conversational exchanges. The team identified distinctive metrics to differentiate between these two types of recorded conversations by applying acoustic analysis, video analytics, and natural language understanding techniques.
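The acoustic side of that analysis might look something like the sketch below. It is illustrative only: the prosodic features chosen (pitch variability, energy, voicing) and the libraries used (librosa, numpy) are assumptions for the purpose of the example, not the actual DioSim pipeline or its published metrics.

```python
# Illustrative sketch only: these features are assumed proxies for
# conversational stress, not the DioSim metrics themselves.
import librosa
import numpy as np

def acoustic_stress_features(audio_path: str) -> dict:
    """Extract simple prosodic proxies that could help separate relaxed
    dialogue (e.g. The Listening Project) from more adversarial exchanges
    (e.g. political interviews)."""
    y, sr = librosa.load(audio_path, sr=None, mono=True)

    # Fundamental frequency (pitch) track; wide variability and higher
    # medians are often associated with raised vocal effort.
    f0, voiced_flag, _ = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C7"),
        sr=sr,
    )

    # Short-term energy and zero-crossing rate as rough proxies for
    # loudness and speaking tempo.
    rms = librosa.feature.rms(y=y)[0]
    zcr = librosa.feature.zero_crossing_rate(y)[0]

    return {
        "pitch_median_hz": float(np.nanmedian(f0)),
        "pitch_variability_hz": float(np.nanstd(f0)),
        "voiced_ratio": float(np.mean(voiced_flag)),
        "rms_mean": float(np.mean(rms)),
        "zcr_mean": float(np.mean(zcr)),
    }
```

Features of this kind, extracted from both corpora and combined with video and language features, could then be fed to any standard classifier to separate friendly from stressful recordings.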
Data Features of High and Low Stress
Building on Professor Connor's 2017 DioSim research, CarefulAI now focuses on dialogue analysis within therapeutic settings. Its primary objective is to establish comprehensive benchmarks for comparing therapeutic relationships in human-to-human and agent-to-human sessions, providing a systematic framework for evaluating the effectiveness and development of therapy delivered through traditional human counselling versus AI-facilitated approaches.
Future Directions
The next development phase focuses on expanding DioSim's capabilities by creating and deploying specialised conversational agents. The research agenda encompasses:
1. Agent Development and Implementation
- Construction of DioSim-based conversational agents
- Integration with existing therapeutic frameworks and protocols
2. Performance Evaluation Framework
- Implementation of established DioSim metrics for human-agent interactions
- Integration of specialised assessment tools, including:
  - SISDA library & API for crisis language detection and response timing
  - CogDiss library & API for identifying and addressing cognitive distortions
- Development of additional performance measures focusing on response timing and intervention effectiveness
3. Testing Environment Creation
- Design and implementation of a controlled testing environment
- Development of standardised scenarios for consistent evaluation
- Creation of protocols for measuring agent performance against established benchmarks
- Integration of real-time monitoring and analysis capabilities
The ultimate goal is to establish a comprehensive evaluation framework that objectively compares human and AI-driven therapeutic interventions while ensuring adherence to clinical standards and best practices.
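As a purely hypothetical illustration of what such a framework might record, the sketch below defines a session-level metrics structure and a simple cohort comparison. Every field name, the 0-to-1 stress scale, and the crisis and distortion counts are placeholders introduced for this example; the SISDA and CogDiss APIs are not reproduced here, only the kind of per-session counts they might contribute.

```python
# Hypothetical sketch of session records an evaluation framework could compare.
# All names and scales are illustrative assumptions, not DioSim, SISDA or
# CogDiss interfaces.
from dataclasses import dataclass
from statistics import mean
from typing import Dict, List

@dataclass
class SessionMetrics:
    delivered_by: str                # "human" or "agent"
    duration_minutes: float
    mean_response_latency_s: float   # time taken to respond to the client
    stress_index: float              # 0 (relaxed) to 1 (highly stressed), assumed scale
    crisis_flags: int                # crisis-language events detected (e.g. via SISDA)
    distortion_flags: int            # cognitive distortions identified (e.g. via CogDiss)

def compare_cohorts(sessions: List[SessionMetrics]) -> Dict[str, dict]:
    """Summarise human-led and agent-led sessions against shared benchmarks."""
    summary: Dict[str, dict] = {}
    for cohort in ("human", "agent"):
        subset = [s for s in sessions if s.delivered_by == cohort]
        if not subset:
            continue
        summary[cohort] = {
            "mean_stress_index": mean(s.stress_index for s in subset),
            "mean_response_latency_s": mean(s.mean_response_latency_s for s in subset),
            "crisis_flags_per_session": mean(s.crisis_flags for s in subset),
            "distortion_flags_per_session": mean(s.distortion_flags for s in subset),
        }
    return summary
```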
Developed in collaboration with a therapeutic community of practice: