CarefulAI

About Us

CarefulAI is focussed on:

AI Assurance

  1. Our AI Safety Tool PRIDAR is used by suppliers and buyers to choose between AI investments.

    This is embodied in BS 30440 (the UK standard for the validation of AI in healthcare) and is the basis of similar standards development in regulator-led market segments.

  2. UXAI and Model Feedback are used by groups to validate the alignment of AI outputs and inputs with human custom and practice.
  3. AutoDeclare is being developed by CarefulAI and Notified Bodies as a method of ensuring continuing compliance with multiple standards.

The IP in each of these tools is protected. Tool use is controlled via License Builder.


AI Safety Research

Research services are focussed on:

  1. explainable AI, e.g. transparent policy, design, model labels, and model feedback.

  2. AI aligned and validated by users via UXAI.

  3. Ethical Language Models (e.g. LLMs) and Text-To-Speech (TTS) interfaces e.g. ARDAR. 

AI for Good

CarefulAI was born out of a series of 'AI for Good' initiatives. This remains an important facet of our work; for example, we:

  1. Supervise UCL IXN MSc Computer Science student projects.

  2. Mitigate the effects of loneliness in carers by providing a networking tool.

  3. Protect creativity rights via PriorArtAI.

IP arising from this work is also provided free and open source in the UK via Terms.

London Office

Customer Quotes

'PRIDAR was the first and remains easiest to use AI Risk Management Framework.' Sector Accreditation Lead

'I love your mantra. AI is Artificial Ignorance until it is validated by people for people.' Governance Manager

'You deliver technically complex subjects beautifully.' AI Programme Manager

