About Us
CarefulAI is focused on:
- AI Assurance
- AI Safety Research
- AI for Good
AI Assurance
- Our AI safety tool PRIDAR is used by suppliers and buyers to choose between AI investments. It is embodied in BS 30440 (the UK standard for the validation of AI in healthcare) and is the basis of similar standards development in regulator-led market segments.
- UXAI and Model Feedback are used by groups to validate the alignment of AI inputs and outputs with human custom and practice.
- AutoDeclare is being developed by CarefulAI and Notified Bodies as a method of ensuring continuing compliance with multiple standards.
The IP of each of these tools is protected. Tool use is controlled via License Builder.
AI Safety Research
Research services are focused on:
- Explainable AI, e.g. transparent policy, design, model labels, and model feedback.
- AI aligned and validated by users via UXAI.
- Ethical language models (e.g. LLMs) and text-to-speech (TTS) interfaces, e.g. ARDAR.
AI for Good
CarefulAI was born out of a series of 'AI for Good' initiatives. This remains an important facet of our work; for example, we:
- Supervise UCL IXN MSc Computer Science student projects.
- Mitigate the effects of loneliness in carers by providing a networking tool.
- Protect creativity rights via PriorArtAI.
The IP arising from this work is also provided free and open source in the UK via Terms.