Thoughts on AI Scribes for Clinical Documentation in NZ
Artificial intelligence has been used in healthcare for many years, but it is the new “generative” scribes that are being adopted most quickly. There is significant interest from clinicians in NZ, with many trying Nabla, Heidi and similar products to reduce the time practitioners spend writing up their clinical documentation.
These AI assistants are designed to ease the workload on medical practitioners by capturing audio recordings of a consultation between a practitioner and a patient, then using a combination of speech-to-text transcription and large language models (LLMs) to produce a structured medical note for the patient’s records. Some AI assistants now claim to go further, offering single-click, instant second opinions on diagnosis and treatment, or surfacing past patient notes and lab reports during the consult.
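For readers curious about what sits under the hood, the sketch below illustrates the two-step pipeline just described: transcription of the consultation audio, followed by an LLM prompt that turns the transcript into a structured draft note for the clinician to review. It is a minimal illustration only – the function names are hypothetical and the transcription and LLM calls are stubbed; it is not any vendor’s actual code.

```python
# Minimal sketch of an ambient-scribe pipeline: speech-to-text, then LLM
# summarisation into a structured draft note. Function names, prompt wording
# and returned values are illustrative assumptions, not a vendor's product.
from dataclasses import dataclass


@dataclass
class DraftNote:
    subjective: str
    objective: str
    assessment: str
    plan: str


def transcribe_consultation(audio_path: str) -> str:
    """Placeholder for a speech-to-text engine transcribing consult audio."""
    # A real scribe would stream the recorded consultation to a transcription
    # service; here we return a hard-coded sample so the sketch runs offline.
    return "Patient reports three days of sore throat and mild fever."


def summarise_to_note(transcript: str) -> DraftNote:
    """Placeholder for an LLM call that structures the transcript into a note."""
    prompt = (
        "Summarise the following GP consultation transcript into a SOAP note. "
        "Do not invent findings that are not in the transcript.\n\n" + transcript
    )
    # A real scribe would send `prompt` to an LLM API and parse the response;
    # this stub just returns fixed text to show the shape of the output.
    _ = prompt
    return DraftNote(
        subjective="Sore throat and mild fever for three days.",
        objective="(captured from the consult audio)",
        assessment="(LLM-drafted impression - clinician must review)",
        plan="(LLM-drafted plan - clinician must review)",
    )


if __name__ == "__main__":
    transcript = transcribe_consultation("consult_example.wav")
    note = summarise_to_note(transcript)
    print(note)  # The clinician reviews and edits before it enters the record.
```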
These LLMs are nothing new, and nor are the speech-to-text engines, which have been around in products like Dragon for over 30 years. What is new is the ease with which a start-up anywhere in the world, with only a few developers, can build its own models or plug into existing LLMs and make “AI” scribes to sell, or give away free, in NZ and around the world.
With AI technology developing so rapidly, providers proliferating and the buzz around them plain to see, practices and healthcare professionals need to ask some tough questions.
Most start-ups do not have the wealth of clinical experience that established players and worldwide vendors have built up over many years. Do practices know who is supplying the service? What of their history, backing and longevity? Have clinicians investigated the potential privacy and misdiagnosis risks before implementing a specific AI scribe?
An Otago University survey conducted in February and March 2024 gathered insights from nearly 200 GPs on their experiences with GenAI scribe tools used for clinical notes. Fewer than 60% of these Kiwi GPs had asked for patient consent before using an AI scribe during consultations. The survey also highlighted patient consent and data privacy as critical issues, with only 65% of the surveyed GPs having thoroughly reviewed the terms and conditions or EULA (end user licence agreement) of the tools before use.
Obtaining explicit patient consent before using an AI scribe or digital assistant in a consultation is essential. But with many scribes used in NZ storing their data overseas, we must also ask questions about data sovereignty, and about whether the depth, breadth and appropriateness of the data used to train these models matches our Kiwi market.
Mutaz Shegewi, a senior research director with IDC’s Worldwide Healthcare Provider Digital Strategies group, is reported in a recent Computerworld article as saying: “Clinicians may also become de-skilled as over-reliance on the outputs of AI diminishes critical thinking.” He adds: “Large-scale deployments will likely raise issues concerning patient data privacy and regulatory compliance. The risk for bias, inherent in any AI model, is also huge and might harm underrepresented populations.”
AI is known to have its own statistical biases, with sex and racial biases well documented within LLMs. GenAI models are inherently prone to bias because they are trained on datasets that may disproportionately represent certain populations or scenarios.
In a recent article for eHealthNews by Rebecca McBeth, bioethicist and lawyer Rochelle Style states: “New Zealand has very poor health literacy, and probably even poorer AI health literacy. So, there is an issue about being able to first know what you are consenting to and giving patients real choice as once these tools become embedded, what will happen if people do not want their consultations recorded and summarised?”
In the same article Style said, “I still think there is a bit of puffery in what some of the platforms are claiming …”, adding that there are potential legal risks if clinicians do not carefully review all documentation.
It is undeniable that the wheels of change are in motion; Pandora’s box has been opened. And like so many technologies before it, it is up to users to determine how AI will be used and whether the results serve the greater good. As with anything new, professionals must do their own research and pick partners carefully, looking at longevity in the industry, investment in R&D and a history of success. We must also work out how to use these tools effectively, counsel patients and clinicians on safe, informed use, and stay educated on developments.
From our perspective at Voice Power NZ Ltd, we believe in the ‘fundamental theorem of informatics’ – the assumption that the combination of human and computer should outperform either on their own. This holds true for AI assistants and scribes that listen in the background and produce full clinical notes. They are not a complete solution, as they miss the vital step of the clinician’s own reflective narrative.
Voice Power staff have been at the forefront of AI-driven Dragon speech recognition in NZ healthcare for 20 years: in the “noughties” (2000–2010) implementing systems for radiologists throughout the country, in the “teens” (2010–2019) for pathologists, GPs and specialists, and since 2020 seeing huge growth among psychiatrists, psychologists and mental health workers.
Over the decades we have empowered clinicians to create clear, actionable notes, letters and reports in their own words and drawing on their own experience – documenting at the speed of sound, with their own knowledge driving the clinical narrative.
Nuance Dragon solutions have millions of users across more than 10,000 healthcare organisations, with proven savings and ROI case studies in NZ and throughout the world. Microsoft bought Nuance, the maker of Dragon, in 2021 for a record $20 billion and has since combined it with its major investment in OpenAI (the maker of ChatGPT) to produce the award-winning Dragon® Ambient eXperience™ (DAX).
DAX Copilot combines proven AI and large language models (LLMs) and is backed by Microsoft’s scale, strength and security. It is award-winning AI built on rich clinical data sets, anchored in 1B+ minutes of medical dictation and 15M+ ambient encounter annotated notes annually. It is established on the same platform on which the existing Dragon Medical One (widely used throughout NZ and Australia) sits, automating clinical tasks and notes, all from one vendor, on one platform (Microsoft Azure).
DAX Copilot will be coming to APAC and NZ in the future, and we will let customers know as soon as dates are confirmed. Microsoft has already opened its first hyperscale data centre in New Zealand, helping to ensure sensitive data is kept on our shores. The extra time before DAX deploys here also helps ensure data sovereignty, and that the data used to train the models matches our regional and specifically Kiwi needs.
In the meantime, we encourage clinicians to try available AI scribes with care; but always remember, it’s your voice, your words that patients and downstream healthcare professionals need to hear.
David Shepherd
Director – Voice Power NZ Ltd
See Nuance – DAX Copilot Demo Video here
Australia & NZ – DMO and DAX Copilot future
Articles I drew upon that are worth a good read:
22/11/24 – Blog: Would you like a second opinion with your AI scribe? by Kate McDonald, Pulse+IT
3/12/24 – Will AI help doctors decide whether you live or die? by Lucas Mearian, Computerworld
12/12/24 – Microsoft opens first hyperscale data centre in New Zealand