Dr ChatGPT will see you now

6 min read
First Published: Aug 2023


The rapid development of artificial intelligence (AI) is a disruptive force, and the healthcare sector is not immune. ChatGPT and other AI-powered chatbots, such as Google's Bard and Med-PaLM 2, use a class of deep learning model called the large language model (LLM) to generate conversation-style content. LLMs have the power to change the healthcare sector for the better by enabling more efficient and effective healthcare practices, ultimately enhancing the quality of medical services and outcomes.

How are large language models relevant to healthcare?

LLMs hold significant relevance in the healthcare landscape, offering physicians a range of benefits that can enhance their work and interactions within the medical field.

While we’ll get to the specific benefits shortly, it’s important to note the ability of LLMs to increase efficiency, reducing the time needed for many tasks. A 2023 meta-analysis found that 43% of emergency department healthcare workers suffered from burnout. This phenomenon reverberates throughout healthcare, culminating in adverse outcomes such as diminished patient satisfaction, increased medical errors, elevated physician turnover rates, and reduced overall productivity.

Here, AI emerges as a potential salve. By automating specific tasks and procedures, AI can effectively reclaim precious time within a healthcare professional's day, allowing for more meaningful patient interactions and the possibility of attaining a healthier work-life balance.

AI is here to stay and, as with every other industry, the choice is to learn to understand and utilise these tools or to get left behind. In a survey of 1,000 Americans and an additional 500 healthcare professionals, 25% of the general public participants stated that they were more likely to talk to an AI chatbot than attend therapy. While 53% of participants in this survey felt that AI cannot replace doctors, 25% said that they would not visit a doctor who refuses to utilise AI.

The benefits of LLMs

Patient Education

LLMs have the potential to significantly enhance patient education by providing clear and accessible medical information. LLMs can generate patient-friendly explanations of complex medical concepts, procedures, and treatment options with their advanced language capabilities. These explanations can be tailored to suit individual patient needs and comprehension levels, making medical information more understandable and engaging.

Symptom Assessment

LLMs can assist healthcare providers in accurately interpreting and analysing patient-reported symptoms. By offering guided prompts and suggesting relevant questions for a more comprehensive symptom profile, LLMs can help patients articulate their symptoms more effectively.

K Health is an innovative AI-powered platform that connects patients to medical professionals through a chatbot interface. K Health predicts and delivers the most relevant follow-up questions, ensuring a comprehensive symptom assessment. By simply inputting a symptom, such as 'stomach ache,' the chatbot engages patients in a natural conversation, asking a series of relevant health-related questions. The information is then distilled into a concise summary, along with up to five potential diagnoses, which is then shared with the doctor or nurse. K Health is trained on specific health datasets, supporting its medical accuracy and its utility as a reliable and efficient diagnostic aid.
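The flow described above can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration of the general pattern (symptom in, follow-up questions, summary out), not K Health's actual system; the symptom list, questions, and function names are all assumptions.

```python
# Hypothetical sketch of a symptom-triage chat flow: a symptom maps to
# follow-up questions, and the answers are distilled into a short
# summary for the clinician. Not K Health's actual implementation.

FOLLOW_UPS = {
    "stomach ache": [
        "How long have you had the pain?",
        "Where exactly is the pain located?",
        "Any nausea, vomiting, or fever?",
    ],
}

def next_question(symptom, answered):
    """Return the next unanswered follow-up question, or None when done."""
    for q in FOLLOW_UPS.get(symptom.lower(), []):
        if q not in answered:
            return q
    return None

def summarise(symptom, answered):
    """Distil the Q&A exchange into a concise summary for the doctor or nurse."""
    lines = [f"Presenting symptom: {symptom}"]
    for q, a in answered.items():
        lines.append(f"- {q} {a}")
    return "\n".join(lines)
```

In a production system, an LLM would both choose the next question and write the summary; the fixed dictionary here simply stands in for that step.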

Utilising LLMs in symptom assessment streamlines the diagnostic process and contributes to more accurate and timely medical evaluations, ultimately improving patient care and treatment outcomes.

Mental Health Assistance

LLMs can provide individuals with an accessible and private space to discuss their emotional well-being. Through conversation-style interactions, LLMs can help users express their thoughts and feelings, providing a preliminary outlet for emotional release. LLMs can also offer general coping strategies and mindfulness techniques. While LLMs are not a replacement for professional mental health services, they can bridge gaps in access to care and provide immediate support during crises.

Patient Personas

AI-powered chatbots can assist doctors in refining their communication skills across various patient scenarios by simulating conversations with distinct patient personas. This allows physicians to practise tailored discussions. For instance, by asking the chatbot to imitate a parent or guardian who is dismissive of a diagnosis, a doctor can practise this difficult conversation. This practice mitigates the potential for mistakes and difficult interactions, ultimately leading to improved outcomes for both healthcare professionals and patients.
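In practice, this kind of role-play is set up by giving the chatbot a structured instruction describing the persona. A minimal sketch follows; the field names and wording are illustrative assumptions, not a standard prompt format.

```python
# Illustrative sketch: composing a role-play instruction so a chatbot
# simulates a patient persona for communication practice. The persona
# fields and wording are hypothetical.

def persona_prompt(role, attitude, scenario):
    """Assemble an instruction telling the chatbot to stay in character."""
    return (
        f"You are role-playing as a {role} who is {attitude}. "
        f"Scenario: {scenario} "
        "Stay in character and respond as this person would; "
        "do not break role or give medical advice."
    )
```

The resulting string would be sent to the chatbot as its opening instruction, after which the doctor practises the conversation turn by turn.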

Efficient Documentation

Text generation technology such as LLMs offers a valuable solution to the time-consuming task of creating patient notes and handling correspondence with both patients and insurance companies. Physicians have noted that time spent on documentation is a contributor to burnout among medical professionals. Medical practitioners can input key information and LLMs can generate comprehensive notes and communications, saving them precious time. Doctors will then need to review these outputs for accuracy.
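The "input key information" step usually means assembling the clinician's details into a structured prompt before it reaches the model. A minimal sketch, under the assumption of a simple key/value input form; the template wording and field names are hypothetical.

```python
# Hedged sketch: turning key clinical details into a prompt for LLM
# note drafting. The template and field names are illustrative
# assumptions, not a recognised documentation standard.

NOTE_TEMPLATE = (
    "Draft a concise clinical note from the following details.\n"
    "Flag any missing information rather than inventing it.\n\n{details}"
)

def build_note_prompt(fields):
    """Format key/value inputs into a note-drafting prompt."""
    details = "\n".join(f"{k}: {v}" for k, v in fields.items())
    return NOTE_TEMPLATE.format(details=details)
```

The instruction to flag gaps rather than invent content reflects the point above: the model's draft is a starting point, and the doctor remains responsible for reviewing it.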

Ethical Limitations

The question of entrusting one's health to AI models raises concerns, especially when these technologies are developed and managed by profit-oriented corporations. AI models are directed and programmed according to the desires of the developer, and decisions made within the algorithms may be influenced by various factors, potentially compromising the objective medical needs of patients. Ensuring that LLMs are not influenced by external factors like advertisements, sponsors, or political agendas requires rigorous safeguards.


The integration of sensitive and protected health information into LLMs introduces significant data privacy concerns. The potential for patient data to become accessible to developer company employees, vulnerability to hacking, and the lack of transparency surrounding how these companies manage and store input information raises concerns about the use of these models for sensitive data. To navigate this issue, institutions will need to establish rigorous protocols for de-identifying data and securing informed consent from patients.
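De-identification in practice means stripping or masking identifiers before text ever reaches an external model. The sketch below shows the idea with a few regular expressions; real protocols (such as HIPAA's Safe Harbor method) cover far more identifier types, and the patterns here are simplified assumptions for demonstration only.

```python
import re

# Illustrative de-identification pass applied before patient text is
# sent to an external LLM. The patterns below are simplified
# assumptions; real protocols cover many more identifier types.

PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US-style SSN
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email address
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),         # dd/mm/yyyy date
]

def deidentify(text):
    """Replace obvious identifiers with placeholder tokens."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text
```

Regex masking alone is not sufficient for clinical data (names and free-text identifiers slip through), which is why institutional protocols and informed consent remain essential.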


LLMs generate text based on their training data. This training data typically comes from well-funded institutions in high-income, English-speaking countries, leading to a hidden bias in the representation of medical knowledge. This bias encompasses factors such as race, sex, language, and culture, resulting in a skewed perspective of health and disease.

How much should patients know about doctors using LLMs?

The extent to which patients should be informed about doctors using LLMs is a complex ethical consideration. When LLMs are used with a patient's personal data, transparency is paramount. Patients should be provided with a general understanding of how LLMs are used to assist in medical decision-making, without overwhelming them with technical details. Patients should also have the option to express any concerns they might have about AI involvement in their treatment. Striking the right balance between informing patients without causing undue stress or confusion requires clear communication that emphasises the collaborative nature of AI-assisted healthcare and maintains patient trust in the doctor-patient relationship.

What if doctors rely on LLMs or other AI instead of research and guidelines?

While AI can offer valuable insights and recommendations, it should be viewed as a complementary tool rather than a replacement for human clinical judgement and expertise. LLMs generate text based on probabilities, not actual understanding. This means that these models can often produce erroneous yet plausible content. Ideally, doctors should use AI technologies as tools to augment their decision-making process, integrating AI recommendations with their own clinical judgement, research, and guidelines. This balanced approach ensures that patients receive the best possible care by leveraging both the advantages of AI and the expertise of human healthcare professionals.


Large language models and other AI are rapidly changing the landscape of the medical sector. While these new technologies have the potential to improve many aspects of healthcare, they also introduce ethical complexities that require thoughtful examination. Achieving equilibrium between the pursuit of technological growth and the ethical dimensions, as well as safeguarding patient privacy, is of utmost importance.

Beth Howe
Medical Writer
Bachelor's in Biomedical Sciences, Bachelor's in Biochemistry and Molecular Biology
Beth Howe is a passionate medical writer and member of the Australasian Medical Writers Association. With a degree from Victoria University of Wellington, she began her career during the COVID-19 pandemic, aiming to combat misinformation with factual scientific communication. Specialising in transforming complex research into accessible content, Beth's work spans from research manuscripts to informative health articles.


Copyright Rx Communications Ltd