Preface
I fear the message below may come across wrongly. Please take it as sincere encouragement: if six people on a small island on the edge of the Atlantic can do impactful ChatGPT deployment planning, then OpenAI and Microsoft can.
You have got this!
In response to a request for context around CarefulAI's focus and services
Ah. Context….
I can give you some. But understanding CarefulAI presupposes an understanding of UK and EU politics, government agencies and their supply chains, and our background, motives and biases. Better covered in a call, I think. DM me a number.
But in the meantime...
CarefulAI was originally set up to do ethical AI model design within the risk-averse environment of the NHS.
While waiting for LLMs to arrive, we built credibility by doing tech-for-good work in NLP: for example, researching and deploying a text-based suicide-mitigation tool (SISDA) via clinician-led alignment. See Model Labels on the site for more information. We enjoy this work; SISDA helps clinicians mitigate risk for around 100,000 people globally.
However (and I think it is a sign of the times), our most popular model is a 'sniff test' for people considering the deployment of ML/AI. That model, PRIDAR, is now embedded in NHS and financial-services compliance processes across EMEA. It is the framework behind many standards and is referenced in some, e.g. BS 30440.
It now appears we are a leading adviser to the sector, LPs and government departments on processes for de-risking AI deployment.
But this was not our aim. The team would prefer to be making ethical LMs rather than guiding deployment. Hence our move towards showing evidence of compliance with standards at the code and data level using AI, e.g. AutoDeclare.
I think it is fair to say we are on a journey…
As for my Ask of OpenAI
As a maker of ML/AI, we have taken mitigating steps over the past two years to enable the effective deployment of LMs in Europe: a risk-averse continent. OpenAI's plans and methods, by contrast, are not transparent.
OpenAI launched ChatGPT as an A/B test. Now it is a product. It appears people either want to use LMs or stop them; there is no middle ground. One needs to accept that OpenAI's ChatGPT has become the 'hoover' of the AI sector: for many people, ChatGPT is simply the word for LMs. So when ChatGPT catches a cold, the sector catches flu. And you appear to have a cold. OpenAI appears to have no road map to ethical deployment: just a backdoor route to deployment embedded in meeting and document tools (e.g. Microsoft's).
Please recognise that in the EU there needs to be an ethical road map. LMs' impact on providers and users will be significant, and it needs to be managed. If OpenAI does not implement an ethical road map, I fear civil unrest will arise and a major backlash against LMs will follow, to the detriment of society. Hence why I reached out to you.
In response to your ask: what can 'I' do…
Well. You can, via OpenAI, engage LPs and government in the development of an ethical road map. Piggyback on our work, as you need to do this quickly. If OpenAI does not, I fear it will be OpenAI, not Microsoft, that takes the first backlash; then the LM and AI industry will feel the pain too.
You could start by improving your deployment guidance and case studies. For example, you could encourage people to start a conversation about making their own plans for the use of ChatGPT (and other language models).
At a recent clinically led NHS tech event, I asked people if they had a plan for using ChatGPT. One in 56 did.
I then proposed that they needed one, and asked six people who did not know each other, or me, to challenge me on the subject.
Within an hour they had decided their organisations needed a plan. Then they, not I, used PRIDAR themes to surface questions each person could ask of themselves and their teams. They categorised these questions and built and tested ChatGPT prompts to address the issues arising from each one.
Eight hours later they presented at the same event and converted 31 more people to planning for ChatGPT. Within eight weeks they could have the 1.5 million people in their organisation ready to discuss the use of language models.
But
At the same event, the 'planners' concluded that they were doing OpenAI's job.
They felt that enabling planning for the deployment of ChatGPT is something you should do. The best quote was:
'On the subject of enabling ChatGPT planning with users: if Yoda ran OpenAI, he would say to his staff (and Microsoft),
"Do or do not. There is no try."'
They have a point.
What I Would Like OpenAI to Do
Do the industry and society a favour. Crack on with encouraging ChatGPT planning work. Give examples of good practice in planning, and encourage risk-mitigating prompt design.
You have got this!
All the best
Joseph Connor
[email protected]