Working Well with AI: The Role of Occupational Health
Together with Prof Kaveh Asanati and Dr Advait Sarkar, I write in the latest issue of Occupational Medicine on why understanding and managing human interaction with generative AI should be a priority for occupational health (OH) professionals.
This article, which has been covered in mainstream media this morning, is an attempt to bridge two worlds that do not yet talk to each other enough: occupational health and technology. It brings together OH clinicians and academics with technology researchers and practitioners. These kinds of multidisciplinary collaborations and forums are essential if the specialty is to remain current and future-focused. Just as important, bringing occupational health expertise to more mainstream audiences matters for the future of sustainable, healthy and productive work.
I first came across Dr Advait Sarkar and colleagues’ work when my academic GP colleague Dipesh Gopal forwarded me a paper from Microsoft and Carnegie Mellon University examining how knowledge workers’ cognitive processes change when working with AI.
Many organizations are now rolling out internal AI platforms. Few are thinking about AI’s longer-term impact on workforce skills and the potential consequences for quality of work. Two key risks stand out:
Skill decay in human workers, and
Uncritical use of AI outputs.
These vulnerabilities may overlap and reinforce one another.
Going back to Dr Sarkar’s original work at Microsoft, here are some key insights useful to occupational health practice:
1. AI stewardship
The study introduced the concept of stewardship: employees should not be passive recipients of AI outputs, but active guides who brief, monitor, and refine AI-assisted work.
This framing keeps humans clearly accountable for quality and accuracy. We already know that role ambiguity increases risk, contributing to work falling through the cracks and to work-related stress. Organizations that embrace stewardship may find that clearer human–AI role definitions improve both productivity and work quality.
2. AI and skills: strengthening or weakening cognition
The research showed that AI shifts cognitive effort across tasks, e.g. from information gathering to information verification. This is useful for organizations aiming to optimize skills in AI-enabled roles.
However, the study did not examine whether reduced engagement in certain cognitive activities leads to skill decay over time. This remains a critical question. Organizations interested in retaining human capability should be actively seeking to protect against this risk.
3. The confidence paradox: Trust in AI versus trust in self
When workers trust AI more, they engage in less critical thinking.
When workers trust their own expertise, they engage in more critical thinking.
This creates a high-risk scenario: if an employee has low confidence in their own ability but high trust in AI, they may accept AI outputs with minimal scrutiny.
To reduce over-reliance and decision-making errors, organizations should:
Train employees on cognitive bias and decision-making blind spots, including how confidence affects trust in technology
Promote a culture of human oversight, particularly where accountability continues to sit with people rather than systems
In a Society of Occupational Medicine editorial published this week, we explore these and other risks of AI on workforce health in greater depth, alongside the many potential upsides of AI for work and health. These include opportunities to scale occupational health services and extend their reach to more of the working-age population.
Lara’s take
Critical thinking and the critical use of resources are central to high-quality work. In clinical medicine, for example, we are trained to recognize and mitigate diagnostic bias. This is notoriously difficult. Gaining insight into one bias can easily introduce another; for example, a clinician who realizes they tend to under-investigate may begin to over-investigate instead.
In coaching, part of the value lies in helping clients challenge internal narratives that may contain long-standing, unexamined biases. Often, the real “value add” comes from an external perspective that can call out bias, something that is hard to generate internally when you are immersed in the situation.
We are only beginning to understand the risks of bias in how human workforces use generative AI. These risks operate on two levels:
How AI redistributes human cognitive effort over time, potentially reshaping how we think
How critically we engage with AI, influenced by our beliefs about both the technology and ourselves




