by Mina Karabit March 15, 2023 3 min read

There is no doubt that ChatGPT has been having its moment in the spotlight. It is a controversial artificial intelligence (AI) chatbot that relies on a large language model: by analyzing vast amounts of sample text, it learns to predict which words should go together in a sentence. Think of it as a more advanced version of your phone's predictive text function, one that promises to transform how we produce text, search the web, educate ourselves, and conduct business.
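For readers curious about the mechanics, the sketch below shows the predictive idea at its most basic: a "bigram" model that guesses the next word purely from how often word pairs appear in a sample text. The corpus and word choices here are invented for illustration, and real large language models rely on neural networks trained on billions of words rather than raw counts, but the core intuition (predict the next word from what came before) is the same.

```python
from collections import Counter, defaultdict

# A toy corpus, invented for illustration. Real language models train
# on billions of words with neural networks, not raw counts.
corpus = (
    "the patient was prescribed metformin for diabetes . "
    "the patient was advised to monitor blood sugar . "
    "the doctor was pleased with the results ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("patient"))  # -> 'was', the most frequent continuation
```

Even at this toy scale, the model produces fluent-looking continuations without understanding anything, which is exactly the strength and the weakness discussed below.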

One of ChatGPT’s latest achievements is that it nearly passed the US Medical Licensing Exam. A human candidate typically prepares for 300 to 400 hours to take this three-part exam, which covers subjects ranging from basic science to bioethics. Most often, the candidate takes the first part at the end of their second year of medical school, with the second and third parts taken in the fourth year of medical school and after the first year of residency, respectively. Without any specialized training or reinforcement from its “human handlers,” ChatGPT performed at or near the passing threshold on all three parts of the exam. So even though ChatGPT does not “know” anything, it can construct plausible-sounding sentences by analyzing massive quantities of online material, and it used that ability to answer the questions posed by the licensing exam.

Although it has done exceedingly well thus far, ChatGPT is not ready to replace your medical professional. Work still needs to be done to improve the reliability of large language models. Because the bot’s output is simply plausible-sounding text, the most probable phrasing sometimes leads to wildly ridiculous conclusions, and, of course, at other times it sounds brilliant.

Even though ChatGPT is not poised to replace medical professionals, it could become a tool within healthcare, assisting with medical education and even clinical decision-making. For example, ChatGPT could serve as a brainstorming aid, allowing clinicians to enter a list of symptoms and generate differential diagnoses. How patients and clinicians act on the information will be key, especially as ChatGPT is far from perfect and will make mistakes (something it admits freely).
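To make the brainstorming idea concrete, here is a minimal, hypothetical sketch of what such a tool might look like in code, using the openai Python package's chat interface. The symptom list, prompt wording, and API key placeholder are all assumptions for illustration; any real clinical tool would need rigorous validation, privacy safeguards, and professional oversight before use.

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder; keep real keys out of code

# Hypothetical symptom list entered by a clinician.
symptoms = ["fatigue", "increased thirst", "frequent urination"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": ("You are a brainstorming aid for clinicians. "
                     "Suggest differential diagnoses to consider, noting "
                     "that a qualified professional must review them.")},
        {"role": "user", "content": "Symptoms: " + ", ".join(symptoms)},
    ],
)

# The output is a starting point for discussion, not a diagnosis.
print(response.choices[0].message.content)
```

Note that the output here is only a list of possibilities to discuss; the clinician remains responsible for the decision, which is precisely the point made above.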

Take a simplified example: you can ask, “What is the best treatment for diabetes?” and the bot might respond with the name of the diabetes drug metformin. That answer is not necessarily the best treatment available; the technology offers it because, after scanning massive amounts of online data, it “learns” that the word “metformin” often appears alongside the words “diabetes treatment.” Once the technology detects a strong statistical association between the two, it presents one as the answer to the other. This is not the same as a reasoned clinical response. A clinician may well conclude that a patient’s history and other relevant background make them a better candidate for a different medication, or for a different regimen altogether that does not rely on medication.
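The sketch below illustrates that purely statistical association with a handful of invented sample sentences: the system "prefers" metformin simply because the word shows up most often near "diabetes treatment," not because it has weighed anything about the patient.

```python
import re
from collections import Counter

# Invented sample sentences; a real system would scan billions of documents.
documents = [
    "metformin is a common first-line diabetes treatment",
    "metformin remains a standard diabetes treatment for many patients",
    "insulin is another diabetes treatment option",
    "lifestyle changes can reduce blood pressure",
]

drugs = {"metformin", "insulin"}

# Count how often each drug name appears in a sentence that also
# contains both "diabetes" and "treatment".
cooccurrence = Counter()
for doc in documents:
    words = set(re.findall(r"[a-z]+", doc.lower()))
    if {"diabetes", "treatment"} <= words:
        cooccurrence.update(drugs & words)

# "metformin" wins on frequency alone -- no clinical reasoning involved.
print(cooccurrence.most_common())  # [('metformin', 2), ('insulin', 1)]
```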

So it can be dangerous when a layperson relies on information from ChatGPT to decide which medications to stop taking or which supplements to start. Without expert human input, i.e., a physician’s, the layperson could take actions detrimental to their health. However, if physician and patient use the tool collaboratively, there could be opportunities for better decision-making and fewer mistakes. No healthcare professional can read the sheer quantity of research published in medical journals, textbooks, and elsewhere; artificial intelligence can, in a matter of seconds to minutes, and can therefore help healthcare professionals stay current.

What remains to be seen (and will be the subject of a future post) is the extent to which artificial intelligence tools like ChatGPT will be regulated, and by whom. There is also the question of how these tools will change the nature of legal cases where a professional relies on the technology, and who bears the liability when things go wrong. With the rapid rise of this technology, we can expect that these questions will need answers sooner rather than later.

Note: like artificial intelligence, our blog post is not a substitute for legal advice tailored to your situation. Please contact us to see if we are able to assist you.

To learn more about Wise Health Law and our services, please contact us!


