AI, work and health
A transcript of tech investor David Sacks using ChatGPT-4 to research a blog on a points-based ‘give-to-get’ model for crowdsourcing startups convinced me to trial ChatGPT in my non-clinical workstreams. I know colleagues are doing the same: asking ChatGPT to prepare a slide deck or summarise an abstract, or using dialogue to generate and refine ideas. I have also trialled using AI to generate visuals, such as those for this blog post.
ChatGPT has increased my productivity. Its natural-language responses remove a ‘layer’ from the workload and offer a more advanced starting point for many tasks. I have not used ChatGPT for end-to-end admin yet, but this video demonstrates the potential functionality: using plugins, ChatGPT can make a dinner reservation, suggest a meal plan and place an online grocery order accordingly.
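For colleagues who would rather script a task like abstract summarisation than work through the chat interface, a minimal sketch is shown below. It assumes the OpenAI Python SDK; the model name and the abstract placeholder are illustrative only, not a prescribed setup.

```python
# A minimal sketch of asking a chat model to summarise an abstract.
# Assumes the OpenAI Python SDK (v1.x) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

abstract = """<paste the abstract to summarise here>"""

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": "You are a concise scientific summariser."},
        {"role": "user", "content": f"Summarise this abstract in three bullet points:\n\n{abstract}"},
    ],
)

print(response.choices[0].message.content)
```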
Job displacement, and the consequences this has for wider social inequality, is a definite risk of AI. But what about the risks to humans working with and alongside AI? Here are a few that I have come across so far while working with ChatGPT. Given that human/AI interaction at work will become more widespread, employers need to upskill and adapt alongside AI to identify and contain these risks to their employees and their organisations.
1) Hallucination
ChatGPT can generate responses that are false, or false in the context of the input provided by the user. Recognising hallucination is difficult. ChatGPT does not caveat its responses with a degree of conviction the way a human might. Everything is presented convincingly, and the only way to reliably separate reality from hallucination is to check responses manually. The Microsoft AI platform Bing Chat links responses to sources, which is a helpful starting point.
2) Bias
There is a risk of AI carrying and perpetuating biases from the data it is trained on. Humans have biases too (e.g. diagnostic bias, something we train ourselves to detect and counter in general practice). Developing insight into our own biases is challenging enough, but easier than developing insight into bias within AI models, where the decision-making process is opaque and everything is presented with conviction. As well as the risk of bias being carried forward into work, algorithms that personalise content for an individual according to their inputs can create a ‘filter bubble’, which carries mental health risks where users are shown harmful or inappropriate content.
3) Confidentiality
Unless we use a downloadable model run on a secure server, our interactions with AI may be used to train new models. This means work, inputs and conversations are shared with the AI…and anyone else using it. User control over data is evolving: OpenAI has introduced the ability to turn off chat history in ChatGPT, informing users that conversations started while chat history is disabled will not be used to train and improve its models.
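By contrast, a downloadable model run locally keeps prompts and outputs on your own hardware. A minimal sketch using the Hugging Face transformers library is below; the model named is only an example of an openly downloadable one, not a recommendation.

```python
# A minimal sketch of running a downloadable model locally, so prompts and
# outputs never leave the machine. Assumes the Hugging Face transformers
# library is installed; the model name is only an example.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # example open model

prompt = "Draft a polite email declining a meeting invitation."
result = generator(prompt, max_new_tokens=100)

print(result[0]["generated_text"])
```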
4) Opacity
The current consensus is that AI models are only as intelligent as the data they are trained on and can only take action based on inputs. On the other hand, ‘capability overhang’, where AI models have power and capabilities unknown to their creators at release, is a risk to anyone using them and to society more widely, particularly because it is difficult to trace the decision making behind each response. Experts are worried about humans losing control over this rapidly evolving technology.
Deploying AI alongside a clinical workforce
There are many use cases for AI in healthcare. The potential of AI in clinical decision making is exciting, given the implications for access to healthcare and for outcomes. While confidentiality is a bottleneck to more widespread innovation, hallucination, bias and opacity are serious limitations. Any human biases will be perpetuated, making it difficult for human users working alongside AI to detect and challenge suboptimal decision making. Perhaps some degree of ongoing blinded human triangulation will be mandatory in some settings, even once models making decisions on clinical data are validated and live. Lower-hanging fruit for AI alongside a clinical workforce may be the operational and administrative aspects of healthcare, where the stakes of decision making are lower and the risks more contained. Deploying AI alongside a clinical workforce in non-clinical domains could still have significant implications for service user experience, access and outcomes.
This article was researched with the help of ChatGPT.