What Do AI Compliance Customers Want?
The Road to AutoDeclare: Automation of Compliance Activities
SITUATION
In 2022, advances in natural language processing (NLP) led to the widespread use of large language models (LLMs). Several LLMs are available from major developers such as Google and OpenAI, e.g. ChatGPT. With these models it is possible to generate everything from simple essays to code, e.g. with OpenAI Codex (which powers Microsoft's GitHub 'Copilot'), 'Tabnine', Google's 'T5', Carnegie Mellon University's 'Polycoder' and 'Cogram'. GPTs offer significant potential to improve the trustworthiness of AI.
For example, researchers like CarefulAI are now working on GPTs that can take organisational value statements, such as CarefulAI's own:
- 'AI that is validated for people, by people, will be trusted and therefore have an effect more quickly'
- 'Users should set targets for accuracy, sensitivity and specificity, these should take priority over academic and industrial priorities.'
- 'AI needs to be trusted, so the motives behind its design, and its fitness for purpose should be made transparent, and privacy protected.'
- 'AI is only as good as it can be proven to be at any point in time'
and use them to build AI systems that embody and work to those values.
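As a purely illustrative sketch (the source does not describe CarefulAI's actual method), one simple way an LLM can "take" value statements is to compose them into a system prompt that constrains its behaviour. The `build_system_prompt` and `ask_llm` names below are assumptions for the purpose of example; `ask_llm` is a placeholder for whichever LLM client is used.

```python
# Illustrative sketch only: composing organisational value statements into an
# LLM system prompt. This is an assumed approach, not CarefulAI's documented
# method; ask_llm is a placeholder for a real LLM client/API.
VALUE_STATEMENTS = [
    "AI that is validated for people, by people, will be trusted and therefore have an effect more quickly.",
    "Users should set targets for accuracy, sensitivity and specificity; these take priority over academic and industrial priorities.",
    "The motives behind an AI system's design, and its fitness for purpose, should be made transparent, and privacy protected.",
    "AI is only as good as it can be proven to be at any point in time.",
]

def build_system_prompt(values: list[str]) -> str:
    """Turn value statements into instructions the model must work to."""
    numbered = "\n".join(f"{i + 1}. {v}" for i, v in enumerate(values))
    return (
        "You are an assistant helping to design an AI system. "
        "Every recommendation you make must be consistent with these values:\n"
        f"{numbered}\n"
        "If a request conflicts with a value, say so and explain which one."
    )

def ask_llm(system_prompt: str, user_message: str) -> str:
    """Placeholder for a real LLM call (e.g. a chat-completion request)."""
    raise NotImplementedError("Wire this to your LLM provider of choice.")

if __name__ == "__main__":
    # Inspect the value-grounded instructions before sending them to a model.
    print(build_system_prompt(VALUE_STATEMENTS))
```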
PROBLEM
Humans and regulators will always want to validate compliance with best practice around trustworthy AI.
As a consequence, established AI trustworthiness frameworks like PRIDAR from CarefulAI, and new ones like BSI30440, ALTAI, and Plot 4ai, along with guidance from governments and regulators, will still be used in AI compliance auditing.
But, hitherto, auditing against standards and frameworks has been led by human subject matter experts, and there are not enough of them to meet demand, despite AI compliance auditing being a $2.8 billion industry.
Also, the backlog to start auditing against frameworks has been quoted at two years in some industries, e.g. the software as a medical device sector.
IMPLICATIONS
Being late to market because of a lack of human auditors of AI products can cause businesses and sectors to fail.
Also, in the near future code will change quickly with the assistance of AI, and safety auditing needs to remain relevant and be continuous, not static.
So human-led auditing, it could be argued, is no longer safe on its own.
NEXT STEPS
As a leader in the validation of AI trustworthiness, CarefulAI discovered in 2022 that between 40% and 88% of the auditable evidence of an AI system's trustworthiness exists in an AI supplier's codebase and system design.
Normally one needs to be an expert in ML/AI to understand it, because it is not well described (commented on) in the codebase. In 2023, AutoDeclare was brought to market to enable AI suppliers to show compliance with a range of standards in a live and dynamic way.
In short, AutoDeclare adds Trustworthiness annotations into code that relate to regulatory and standards compliance. These can be read and understood by humans, or by systems.
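To make the idea concrete, the sketch below shows one way such annotations could be expressed in Python: a decorator attaches a machine-readable compliance record to a function and registers it for later enumeration. The decorator name, fields and clause identifiers are illustrative assumptions, not AutoDeclare's actual annotation format.

```python
# Illustrative sketch only: the decorator name, tag fields and standard
# references below are assumptions for the purpose of example, not
# AutoDeclare's actual syntax.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class TrustAnnotation:
    """Machine-readable record linking a piece of code to a compliance claim."""
    standard: str   # e.g. a named framework such as PRIDAR or BS 30440
    clause: str     # the clause or control the code is claimed to satisfy
    rationale: str  # human-readable explanation for auditors

REGISTRY: List[Tuple[str, TrustAnnotation]] = []  # collected evidence for auditing

def trust_annotation(standard: str, clause: str, rationale: str) -> Callable:
    """Decorator that attaches a TrustAnnotation to a function and registers it."""
    def wrap(func: Callable) -> Callable:
        note = TrustAnnotation(standard, clause, rationale)
        REGISTRY.append((func.__qualname__, note))
        func.__trust_annotation__ = note  # readable by tooling at runtime
        return func
    return wrap

@trust_annotation(
    standard="PRIDAR",  # hypothetical mapping, for illustration only
    clause="performance-monitoring",
    rationale="Accuracy targets are recomputed on every release candidate.",
)
def evaluate_model(predictions, labels):
    """Compute a metric that the declared targets are audited against."""
    true_positive = sum(1 for p, l in zip(predictions, labels) if p and l)
    return true_positive / max(1, sum(labels))

# A compliance system (or a human reviewer) can enumerate the declared evidence:
for name, note in REGISTRY:
    print(f"{name}: {note.standard} / {note.clause} - {note.rationale}")
```

Because the annotation lives next to the code it describes, a human reviewer sees the compliance claim in context, while an automated auditor can walk the registry to produce a live report each time the codebase changes.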
If you are interested in working with or observing CarefulAI, researchers, regulators, developers and AI platform leads as they work to bring the principles of AutoDeclare to market:
Get in Touch