UXAI
Aligning the user experience of an AI tool with good practice is important. We call our approach to this UXAI. It is best explained by example:
UXAI and Large Language Models (LLMs)
LLMs are made available across the Internet and in some cases are embedded in other tools, e.g. Microsoft Office.
Via LLMs people can more easily answer questions about content, draft emails, write code, create conversational agents, tutor on a range of subjects, translate languages and even simulate characters for video games.
Prior to using a particular LLM, it is wise for end users to validate its fitness for purpose for themselves, their professions and their employers. This can be done in many ways: via workshops, events or online.
UXAI Alignment involves groups of six or more people developing a plan to optimise the benefits and mitigate the risks associated with an LLM. This is done as follows.
After describing:
- how to use the LLM
- issues around effective use of the LLM
- how the LLM retains data to learn from the user
People are then encouraged to test the LLM. This is not done with personally identifiable data. It is also done in a manner that protects the privacy of the testers and of the organisations for whom they work. Tests are conducted (see the sketch after this list):
- Without prompts
- With prompts
- With prompts and changes to other parameters, e.g. temperature
- With and without third-party interfaces
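A minimal sketch of how the prompt and temperature cases above might be scripted is shown below. It assumes the OpenAI Python client purely for illustration; the model name, prompts and temperature values are placeholders, and any LLM API that exposes a temperature parameter could be used in the same way. No personal or organisational data appears in the test inputs.

```python
# Minimal sketch of scripted LLM tests: baseline (no system prompt),
# with a system prompt, and with a changed temperature.
# Assumes the OpenAI Python client (pip install openai); model name,
# prompts and temperature values are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question, system_prompt=None, temperature=1.0):
    """Send one test question, optionally with a system prompt and temperature."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
        temperature=temperature,
    )
    return response.choices[0].message.content

question = "Summarise the key risks of using an LLM to draft workplace documents."
prompt = "You are a cautious information governance adviser."

baseline  = ask(question)                                        # without a prompt
prompted  = ask(question, system_prompt=prompt)                  # with a prompt
high_temp = ask(question, system_prompt=prompt, temperature=1.5) # changed temperature

for label, answer in [("baseline", baseline), ("prompted", prompted), ("high temperature", high_temp)]:
    print(f"--- {label} ---\n{answer}\n")
```

The third-party interface case is tested in the same way, by routing the same questions through the interface in question and comparing the answers.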
People are then encouraged to develop questions they need to raise with their professions and employers. These are then discussed, and the groups are asked to rank the importance of the themes arising from the questions created.
To prioritise these themes, PRIDAR is used. An example of a risk register is shown below.
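As an illustrative sketch only, a theme-based register might be recorded as below; the fields, scores and example themes are assumptions drawn from the points above and are not the PRIDAR format itself.

```python
# Illustrative sketch only: fields, scores and example themes are assumptions,
# not the PRIDAR format. Themes are drawn from the workshop points above.
themes = [
    {"theme": "Data entered into the LLM is retained and used for training",
     "likelihood": 4, "impact": 5,
     "mitigation": "No personal or organisational data in prompts; check the provider's retention settings"},
    {"theme": "Output used without checks on its accuracy",
     "likelihood": 3, "impact": 4,
     "mitigation": "Human review of all LLM output before use"},
    {"theme": "Questions for professions and employers left unanswered",
     "likelihood": 2, "impact": 3,
     "mitigation": "Assign an owner and a date to each open question"},
]

# Rank themes by a simple likelihood x impact score, highest priority first
for row in sorted(themes, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    score = row["likelihood"] * row["impact"]
    print(f'{score:>2}  {row["theme"]}  ->  {row["mitigation"]}')
```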
Testers then develop a plan to mitigate these risks.
UXAI attendees are then actively encouraged to run their own UXAI events, and encourage others to do the same.
CarefulAI PRIDAR users can save the output of their staff's deliberations around LLMs using the form below.
UXAI and Suicide Prevention AI
SISDA alerts clinicians to suicidal-ideation language in text (see Model Feedback). Without SISDA, clinicians would need to look at thousands of self-referral requests each week.
Prior to SISDA being used, it is validated by mental health clinicians as fit for purpose. This ensures its use is aligned with good practice. For background on SISDA, please see its model label under 'Motives and Models'.
In SISDA's case, the users are clinicians. The intent being considered is suicidal ideation in text.
UXAI Alignment involves groups of three or more senior clinicians validating the fitness for purpose of SISDA's outputs.
They do this by annotating the output of an AI approach with one of the following labels:
- To be Validated
- Evidence of possible Intent
- Campaigns relating to the Intent
- Flippant Reference to the Intent
- Information or Support for the Intent
- Memorial or Condolence for the Intent
- Reporting of the Intent
- None of the above
For example:
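A minimal sketch, in Python, of how such an annotation might be recorded is given below; the label set is taken from the list above, while the field names, identifiers and example values are illustrative only.

```python
# Minimal sketch: the labels come from the list above; everything else
# (field names, identifiers, example values) is illustrative only.
from dataclasses import dataclass
from datetime import datetime, timezone

LABELS = [
    "To be Validated",
    "Evidence of possible Intent",
    "Campaigns relating to the Intent",
    "Flippant Reference to the Intent",
    "Information or Support for the Intent",
    "Memorial or Condolence for the Intent",
    "Reporting of the Intent",
    "None of the above",
]

@dataclass
class Annotation:
    referral_id: str    # identifier of the flagged text, never the text itself
    clinician_id: str   # the senior clinician making the judgement
    label: str          # one of LABELS
    annotated_at: str   # UTC timestamp

def annotate(referral_id: str, clinician_id: str, label: str) -> Annotation:
    """Record one clinician's label for one piece of SISDA-flagged text."""
    if label not in LABELS:
        raise ValueError(f"Unknown label: {label!r}")
    return Annotation(referral_id, clinician_id, label,
                      datetime.now(timezone.utc).isoformat())

# Example: three senior clinicians label the same flagged item
reviews = [
    annotate("referral-001", "clinician-A", "Evidence of possible Intent"),
    annotate("referral-001", "clinician-B", "Evidence of possible Intent"),
    annotate("referral-001", "clinician-C", "To be Validated"),
]
for review in reviews:
    print(review)
```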