Accelerating Safe AI: Using AI Methods
Accelerating Safe AI Investment
- PRIDAR - the first and most popular framework for visualising AI supply chain risk for investors
- Responsible AI - real-world implementation challenges and solutions
- Dora - an adviser on the Digital Operational Resilience Act
- Elara - an AI governance consultant specialising in AI agents
- Alina - an ISO 42001 implementation consultant
- Elin - the AI auditor
- Tracy - promoting traceability in AI systems
- Ann - an adviser on the EU AI Act
- Red - helps people understand a red team's focus, purpose and deployment
- AutoDeclare - transforms unstructured content into strategic intelligence
- AutoRCT - saving time and money in randomised controlled trials
- Robyn - an AI Agent designed to help you evaluate your thinking around a value proposition
- Atlas - an AI Agent that creates a personalised AI learning and development plan
- ComputeNovelty - a method of computing novelty for safe design in open-ended systems
- PriorArt - a method to manage Prior Art and Copyright infringements
- PLIM - a Prompt Language Model Improvement Method to reduce errors in LLMs
- CALMS - a method of mitigating bias and risk of harm in large language models
- InsightScholar - a method of deploying privacy-protecting, hallucination-reducing LLMs
- EmuteToken - a method of securing and managing the provenance of data and tokens used in LLMs
- Lara - a tool to understand the risks and limitations of LLM-based chatbots
- DioSim - a cost-effective way of modelling two-party dialogue agents (therapy case study)
- The CriticalAIonAI podcast