Research and Development
We implement the principles of Responsible AI in our research and development. Our primary focus is creating AI methods to manage the safe design and development of AI systems; our secondary objective is creating methods that enable humans to deploy AI safely. Progress can be segmented into three areas.
Design Risk Tools Research
- CALMS: a method of mitigating bias and risk of harm in Large Language Models
- PLIM: Prompt Language Model Improvement Method
- HAL: trained to guide users about potential hallucination in LLM dialogue
- DioSim: a cost-effective way of modelling two-party dialogue agents (therapy case study)
- Bev: a COM-B avatar agent
- Amira: a dialogue agent designed to develop a user's empathy skills
- DTxAI: a method of reducing risk in Digital Therapeutics AI
- AutoPrevis: a method to automate pre-visualisations using GenAI tools
- ComputeNovelty: user-driven novelty and safety design in open-ended systems
- PriorArt: a method to manage prior art and copyright infringements
- AutoESG: trained to automate ESG evidence gathering and reporting
- Public Interest AI
Deployment Risk Tools Research
- Midas: a selection of AI agents designed to improve the investment readiness of digital health firms
- Atlas: personalised AI learning and development for the creative, transport, agriculture, and construction sectors
- UXAI: NHS AI agent design and testing
- ASRI: using chatbots in adult ADHD self-reporting information gathering
- PromptMH: a community of practice dedicated to making GenAI use safe in mental health
- Arun: increasing the reach of VR-based well-being support
Assurance Risk Tool Research
- Framework For Change: a socio-technical framework for AI development
- PRIDAR: the first and most popular framework for visualising AI risk for investors
- Responsible AI: real-world implementation challenges and solutions
- AutoRCT: saving time and money in randomised controlled trials
- Model Transparency, Design and Reporting
- PRIDAR v BS 30440 v other standards: thematic comparison
- Compliance Team Scoping: an example based on the need for BS 30440 compliance
- BS 42001 v BS 30440: evidence comparison
- Eli: trained to elicit discussion about AI risks in line with the IDEA Protocol (PoC in 2025)