Accelerating Safe AI: Using AI Methods
Accelerating Safe AI Investment
- PRIDAR - the first and most popular framework for visualising AI supply chain risk for investors
- Responsible AI - real-world implementation challenges and solutions
- Riley - the Edge Case Risk Analyst
- Ethan: AI Governance Adviser
- Atlas - personalised AI learning and development planning Agent
- Alina: ISO 42001 Implementation Consultant
- Dora advises on the Digital Operational Resilience Act
- Elara AI Governance Consultant specialising in AI Agents
- Elin the AI Auditor
- Tracy - promoting traceability in AI systems
- Ann - an adviser on the EU AI Act
- Red aims to help people understand a Red Team's focus, purpose and deployment
- Yell - AI side-effect agent
- AutoDeclare transforms unstructured content into strategic intelligence
- AutoRCT - saving time and money in Randomised Control Trials
- Strat - an AI agent designed to help you build a route-to-market plan
- Robyn - an AI Agent designed to help you evaluate your thinking around a value proposition
- Val - an AI agent designed to help you calculate the value of your AI innovations
- Alex - an AI agent designed to help you plan an exit for your investment
- ComputedNovelty - a method of computing novelty
- PriorArt - a method to manage Prior Art and Copyright infringements
- PLIM - a Prompt Language Model Improvement Method to reduce errors in LLMs
- CALMS - a method of mitigating bias and risk of harm in large language models
- InsightScholar - a method of deploying privacy-protecting, hallucination-reducing LLMs
- EmuteToken - a method of securing and managing the provenance of data and tokens used in LLMs
- FCR - a method of validating reasoning in LLMs
- RMC - a method of enabling LLMs to have a greater impact via personalised communication
- Lara - a tool to understand the risks and limitations of LLM-based chatbots
- DioSim - a cost-effective way of modelling two-party dialogue agents (Therapy Case Study)
- The CriticalAIonAI podcast
Featured by