CarefulAI created AutoDeclare in response to Thomson Reuters' 2021 Cost of Compliance research. Its calls to action included:
AutoDeclare Standards Compliance
AutoDeclare's data flows
The user experience is as follows:
- The AutoDeclare user uploads their evidence
- Typically, evidence consists of Standard Operating Procedures (e.g. a PDF) relating to the standard they wish to comply with
- Once the user presses Upload, AutoDeclare converts the documents to text
- When the user presses Submit, AutoDeclare reads the evidence and compares it against each clause in the chosen standard
- It then returns 'Yes' if there is evidence, together with what that evidence is, or 'No' if evidence does not exist
- AutoDeclare does this for every clause of the chosen standard, and calculates an overall level of compliance
Below are some screenshots of AutoDeclare in use with BS 30440, the standard for the validation of AI in healthcare.
This is the evidence upload screen...
The typical output per clause is shown below...
For a standard with 30 sections, the SOP compliance review process takes 5 minutes.
The user is also given an indication of the size of the task to achieve compliance with the standard, and with related standards.
AutoDeclare Code Safety
Below is an example of AutoDeclare's annotation of a supplier's code, within their code editor.
In the case shown, it is an audit of compliance with BS 30440: the standard for the validation of AI in healthcare.
AutoDeclare LLM Safety
Errors in GenAI content creation are common, and they can have a devastating effect on regulatory compliance. Consequently, it is important to use AutoDeclare as an early warning system in LLM QMS approaches such as CarefulAI's CLAMs and PLIM, shown below.
For AI systems and software, the areas where savings are made are shown below.
AutoDeclare the Validity of LLM Outputs
AutoDeclare combats hallucinations through rigorous post-generation verification. LLMs often produce convincing but inaccurate content, so AutoDeclare retrieves evidence to verify each claim: it removes unsupported statements, adds qualifying language to partial matches, corrects inaccuracies with sourced information, and inserts citations for transparency. This process preserves fluent generation while ensuring factual accuracy, and lets users easily distinguish verified facts from speculative content. The result is more reliable AI-generated information with clear evidence trails.
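The verification pass described above can be sketched as a filter over generated claims. This is a minimal illustrative sketch, assuming a caller-supplied `evidence_lookup` function; the names and the hedging phrase are invented for illustration and are not AutoDeclare's actual API.

```python
# Illustrative post-generation verification pass (hypothetical names).
# evidence_lookup(claim) is assumed to return a tuple:
#   ("full" | "partial" | "none", source_id_or_None)

def verify_claims(claims: list[str], evidence_lookup) -> list[str]:
    verified = []
    for claim in claims:
        support, source = evidence_lookup(claim)
        if support == "none":
            continue  # remove unsupported statements
        if support == "partial":
            # add qualifying language to partial matches
            claim = "It appears that " + claim[0].lower() + claim[1:]
        verified.append(f"{claim} [{source}]")  # insert a citation for transparency
    return verified
```

In a real system, `evidence_lookup` would be the retrieval step that searches the uploaded evidence, and fully correcting an inaccurate claim (rather than hedging or dropping it) would need an additional rewrite step.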
Download a standard today and let AutoDeclare help you focus your compliance efforts
Standards providers include: