AI tools such as ChatGPT claim to be on the brink of revolutionising healthcare. What potential do they have, and do the benefits outweigh the risks?

The technological advances that have defined the past two years have arguably been those in generative artificial intelligence (AI). For something that only became a household name in the past year, generative AI has made waves in almost every imaginable industry. It has already proven its potential in identifying new antibodies that can help address infectious diseases (Callaway, 2023) and in designing novel drugs for diseases such as fibrosis and cancer (Buntz, 2023). Some estimates suggest it will grow the global economy by 7% over the next decade – healthcare industries included (Sherwood, 2023).

Among the biggest sources of excitement and anxiety in AI have been large language models (LLMs), such as LLaMA by Meta (the company behind Facebook), PaLM by Google and OpenAI’s GPT models, which power its ChatGPT bot. Users type in prompts written in everyday language and receive answers back in kind. Each LLM performs differently, but many are designed to handle everything from translating huge bodies of text and summarising articles to holding conversations with users and even writing creative content based on prompts. LLMs are capable of feats of reasoning and conversation far beyond what basic chatbots can manage.

ChatGPT is the best-known LLM technology. It hit 100 million active users just two months after launching in November 2022, making it the fastest-growing consumer application to date (Hu, 2023). Its developer, OpenAI, is now valued at almost $30bn (Reuters, 2023). However, this rapid adoption and growth has alarmed some observers: an open letter, signed in March by the likes of Elon Musk and Apple co-founder Steve Wozniak, called for a pause on the training of powerful AI systems – citing ‘profound risks to society and humanity’ (Future of Life, 2023).

Natural language processing – the field of AI research behind LLMs – is rapidly evolving, and its long-term impacts on the healthcare sector remain to be seen. It is likely that LLMs will have profound consequences on every stage of healthcare delivery, from the education of podiatry students to the services patients receive. So what could the future look like?

ChatGPT in practice

Tom Goom is a running specialist and works as a physiotherapist at Body Rehab Studios in Hove. Tom has been experimenting with ChatGPT in his practice and says that AI is often best implemented in time-consuming tasks that practice owners can get bogged down in, including drafting emails, providing feedback, and writing blog posts or newsletters.

He says that by experimenting with different prompts, podiatrists can generate responses tailored to their needs. However, vague and generic prompts will often lead to poor results. Tom says: ‘It’s almost like a conversation, where you start by saying, “I’d like this, please.” When it produces something, you can go, “No, actually, I’d like it a little bit more like this,” until it creates something you want.’

Through some trial and error, Tom has created a template prompt that makes ChatGPT generate training programmes, which he can reuse for different patients. ‘You want to give ChatGPT a role – for example, by saying, “You’re an expert running coach” – and give as much detail as you can.’ Details may include time goals, starting distances or frequency of runs per week.
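
For podiatrists comfortable with a little scripting, the same approach can be automated. The sketch below shows how a reusable, role-based prompt template might look using OpenAI’s Python library; the coaching role, patient details and model choice are illustrative assumptions rather than Tom’s actual template.

```python
# A minimal sketch of a reusable, role-based prompt template.
# The wording and patient details are illustrative, not a clinical template.
import openai

openai.api_key = "YOUR_API_KEY"  # in practice, load this from a secure location


def training_programme_prompt(goal, current_distance_km, runs_per_week):
    """Build a detailed prompt from a patient's goal and starting point."""
    return (
        "You're an expert running coach. "
        f"Design a weekly training programme for a patient whose goal is {goal}. "
        f"They currently run {current_distance_km} km comfortably and can train "
        f"{runs_per_week} times per week. Present the plan week by week."
    )


response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": training_programme_prompt(
            goal="a sub-60-minute 10k in 12 weeks",
            current_distance_km=5,
            runs_per_week=3,
        ),
    }],
)
print(response["choices"][0]["message"]["content"])
```

Parameterising the prompt in this way keeps the ‘expert running coach’ role constant while letting the details change from patient to patient – and the output still needs a professional’s eye before it reaches anyone.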

He informs his patients that he uses AI to create their plans, and discusses the process of how he designs his prompts, using their feedback to adjust the programme. Patients are often intrigued, Tom finds. However, he advises carefully checking ChatGPT’s outputs and warns that healthcare professionals should be cautious of trusting AI-generated responses – especially in high-risk situations, such as in the treatment of complex or severe injuries.

Tom also says podiatrists should be aware that AI could change the dynamic between patients and healthcare professionals. Just like ‘Dr Google’, patients may use this new technology for research and arrive with preconceptions about diagnoses or ideal treatment plans. Professionals still have an irreplaceable role in healthcare: ‘ChatGPT doesn’t provide the same level of care, support, empathy, guidance and problem-solving that many health professionals have.’

Mastering your own data

LLMs could drastically advance the tools that patients have at their disposal. HealthGPT is an experimental iOS app developed by researchers from Stanford University that plugs OpenAI’s GPT model into CardinalKit: a suite of tools that can retrieve an individual’s health information from devices such as their phone, as well as medical records.

Though still in the early stages of development, it should allow patients to easily query their own health information. The current issue, says Dr Vishnu Ravi, physician and lead developer for CardinalKit, is that patients and doctors have access to large amounts of data but no easy way to identify what is relevant to their condition.

He gives the example of someone reporting leg pain: ‘There’s probably a bunch of data sitting on your phone that can clue the doctor into the actual reason for what’s going on, but maybe [the patient] doesn’t know that.’ HealthGPT allows users to quickly type in questions about their symptoms, and then CardinalKit retrieves the data and uses AI to generate a response based on what it sees.

Such technology could benefit people with complicated medical histories that need recounting to every new healthcare professional they interact with. The tool could also help them monitor their conditions with more agency, Vishnu says. ‘On the healthcare professionals’ side, imagine a future where you have an electronic health record filled with years of data points. [With LLMs,] you can ask for a quick summary of this person’s care for their diabetes over the last three years.’ This could allow podiatrists to comb through huge amounts of information in a matter of minutes, rather than hours. 
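
HealthGPT’s own code is not reproduced here, but the retrieve-then-summarise pattern Vishnu describes can be sketched in a few lines of Python. The sample records, field names and prompt wording below are invented for illustration; they are not drawn from HealthGPT, CardinalKit or any real patient data.

```python
# A rough sketch of the retrieve-then-summarise pattern: gather structured
# health data, then ask the model to summarise only what it has been given.
# The records and field names here are invented for illustration.
import json

import openai

openai.api_key = "YOUR_API_KEY"

# In a real tool these records would be retrieved from a device or health record.
records = [
    {"date": "2021-03-02", "hba1c_percent": 7.9, "note": "metformin dose increased"},
    {"date": "2022-09-14", "hba1c_percent": 7.4, "note": "annual foot check: no ulceration"},
    {"date": "2023-05-01", "hba1c_percent": 7.1, "note": "reports improved footwear comfort"},
]

prompt = (
    "Summarise this patient's diabetes care over the period covered, "
    "using only the records provided:\n" + json.dumps(records, indent=2)
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response["choices"][0]["message"]["content"])
```

In practice, the privacy and GDPR considerations discussed later in this article would need to be addressed before any real patient data was sent to a cloud-hosted model.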

A powerful tool

Dr Oliver Aalami, vascular surgeon and director of the Stanford University Byers Center for Biodesign – the university arm that developed CardinalKit – does not believe that AI-powered technologies such as HealthGPT will replace professional healthcare for the vast majority of patients. Instead, he sees them as complementary. ‘You can get more precise, personalised insights as a patient [from HealthGPT] that you can present to your doctor,’ says Oliver. ‘As a patient, you can get a fine-tuned story.’

Tom points out that it is important for professionals to understand the range of tools that patients may have access to. ‘Patients have been googling their symptoms and going on Facebook groups and public forums, and even reading research papers, for years. Our patients are more and more informed. Which is a good thing as long as the information is useful and accurate,’ he says.

However, Tom – along with the HealthGPT team – cautions that LLM technologies are imperfect. They sometimes suffer from ‘hallucinations’: when AI generates responses that are inaccurate or nonsensical. The outputs can seem credible to an untrained eye but contain major inaccuracies that diverge from the text they have been trained on.

These hallucinations can manifest as outright falsehoods or as subtly misleading statements. Tom asked ChatGPT to summarise the evidence and recommendations from a 2016 paper he published on proximal hamstring tendinopathy. ‘It made quite a few mistakes,’ he says. ‘It recommended things we don’t usually recommend, like stretching, which tends to make it worse. The problem is that if you don’t have some knowledge of the topic, you might not spot those mistakes.’

Hence, Tom says, he does not use ChatGPT in situations he deems high-risk. Oliver adds that keeping humans in the loop, to review results and ensure they are reasonable, is a safety net for using AI in healthcare.

‘One thing we like about GPT-4 [the latest version of the GPT language model] is that they have guardrails in place to not provide outright medical advice,’ Vishnu says. For example, when asked how it could be used in a podiatry business, ChatGPT adds a disclaimer that it ‘is an AI language model and should not be used as a substitute for professional medical advice or diagnosis’ and will suggest you ‘consult with a licensed podiatrist for any foot-related concerns’.

Patient privacy

One significant concern when LLMs are used by patients, healthcare professionals or developers of medical tools is patient data security. The main framework for data use and patient protection is the Europe-wide General Data Protection Regulation (GDPR).

Silvia Stefanelli is a lawyer and co-owner of Studio Legale Stefanelli & Stefanelli. An expert in healthcare law and data protection, she says that GDPR already contains appropriate rules to protect patients’ data when AI is involved. GDPR requires that patient data be processed in an accurate, fair and transparent manner: Silvia says this means healthcare professionals have a duty to ensure that any patient data given to an AI is accurate, and that the AI itself is capable of producing accurate results.

If an AI system is used to infer data about people, this process must be fair. In other words, it must avoid inadvertently discriminating against patients on the basis of protected characteristics, such as race. Some generative AI systems have shown bias in their results: for example, an earlier model of GPT made gender-based assumptions, including that doctors were men and nurses were women (Kotek, 2023).

Systems are only ever as good as the data that they are trained on. Future apps utilising LLM technologies will have to ensure that their models are fine-tuned using an inclusive body of healthcare literature – especially if one day they may be used for patient care (Paulus and Kent, 2020). For now, healthcare professionals should research whether the LLM they are using has protections against such bias.

Finally, GDPR’s rules around transparency mean that patients must be informed about how their data is being processed and stored. Podiatrists not only have an ethical obligation to inform their patients if their data is being processed using LLMs, but also a legal one.

Secure storage

Data security also presents a problem. Because LLMs often need a lot of processing power, models such as GPT-4 send users’ prompts – possibly containing personal health data – to the cloud. Cloud storage providers must have adequate capabilities to protect against data breaches or cyber attacks, but they aren’t infallible. A bug in ChatGPT allowed active users to view the payment information of others during a nine-hour window in March. OpenAI rapidly patched the bug and stated that the company was committed to protecting user privacy (OpenAI, 2023).

Silvia says that podiatrists who want to use ChatGPT should ensure that the cloud provider has adequate capabilities to protect against data loss – for example, from hacker attacks – and must verify where data is stored. She says that if the data is transferred outside the EU, its storage and processing must still abide by GDPR protections.

In the case of a data breach, it is the data controller – possibly an individual podiatrist – who has legal liability and could be fined for mishandling that data. ‘Even if the data has been transferred to ChatGPT [at the time a breach occurred], the data controller is liable for it,’ Silvia says.

OpenAI stores user data for 30 days before deleting it; the company also makes it clear that human AI trainers review conversations to improve the company’s systems and ensure content compliance. Users can opt out of this in their account settings. With these factors in mind, it is important that healthcare professionals take measures to not share confidential data when using LLMs with generic settings.

The HealthGPT team are currently experimenting with ways of making their suite of medical tools run locally on a patient or healthcare professional’s device, so that it’s more secure. ‘The ultimate goal is that we want to make apps in a very privacy-preserving manner,’ says Vishnu.

There is little doubt that LLMs will affect every stakeholder in healthcare delivery, from researchers to patients and professionals (both prospective and current). AI technologies are changing at an incredible pace, along with governmental and institutional efforts to regulate their use. The current uses for LLMs, and their impacts on podiatry, are likely to shift rapidly over the next few years.

Understanding accessible LLMs such as ChatGPT could become invaluable to staying on top of advancements in the space, says Tom. Professionals do not have to be particularly technologically knowledgeable to get started, he adds: ‘The only way to learn is to experiment a little bit.’

Call to action

Familiarise yourself with the College’s standards for managing patient data: bit.ly/RCPod-patient-data-guidance

Watch a series of webinars about LLMs and academia: bit.ly/QAA-ChatGPT

Education in the age of AI

The training and education of new podiatrists is also being impacted by AI. Ben Bullen, the College’s head of education and professional development, says LLMs have the potential to allow students to cheat in examinations or assessments, and universities are taking this seriously. In April, essay-checking software Turnitin – used by universities to detect plagiarism – announced that it could now identify student work written by AI with 98% confidence (Henebery, 2023). 

‘What we’re doing is being very deliberate in our assessment methodology,’ Ben says. ‘For example, asking knowledge-based questions in an open-book format could tempt students to use AI and not apply their own knowledge. We need to be careful about what assessment we use because some are associated with issues of academic integrity. We have to be practical here. We need to think about what the aim of education is and move away from an idea where it’s only about imparting knowledge.’

Instilling an attitude in future podiatrists that goes beyond caring about achieving grades or academic progression will be integral, he adds.

HCPC-registered podiatrists undertaking postgraduate study should also be aware of the potential implications of academic misconduct, including abuse of AI and plagiarism. Universities have internal procedures to investigate suspected cases of academic misconduct. If an HCPC registrant has an academic misconduct investigation upheld against them, this must be reported to the HCPC’s fitness-to-practise department.

What are large language models (LLMs) in AI?

Large language models (LLMs) are built on neural networks: sets of algorithms loosely modelled on the way the human brain learns. They are composed of multiple layers of interconnected nodes that act like artificial neurons.

The exact composition differs between LLMs, but they are all designed to process and understand a massive repository of text. OpenAI’s GPT models, for example, were trained on a mixture of publicly available text data from the internet, plus data created by its human trainers.

LLMs can be trained and optimised to perform specific tasks such as summarising articles, answering questions and writing stories. ChatGPT has been tuned to have human-like conversations with users, who can ask it questions and prompt it to answer within specific constraints.
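
To make that conversational framing concrete, the sketch below shows how a multi-turn exchange with a GPT model is typically represented when scripting against OpenAI’s Python library: a growing list of messages, each tagged with a role, with constraints expressed in plain language. The prompts themselves are purely illustrative.

```python
# A minimal sketch of a constrained, multi-turn conversation with a GPT model.
# The prompts are illustrative; an OpenAI API key is needed to run this.
import openai

openai.api_key = "YOUR_API_KEY"

messages = [
    {"role": "user", "content": "Explain plantar fasciitis in exactly three sentences."},
]

reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
answer = reply["choices"][0]["message"]["content"]
print(answer)

# Keeping the history in the message list lets follow-up prompts refer back
# to earlier answers, which is what makes the exchange feel like a conversation.
messages.append({"role": "assistant", "content": answer})
messages.append({"role": "user", "content": "Now rewrite that for a ten-year-old."})

follow_up = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(follow_up["choices"][0]["message"]["content"])
```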

Five potential uses for ChatGPT in podiatry

  1. Virtual assistants: patients could use the chatbot to schedule appointments and access health information remotely
  2. Instant translation: ChatGPT currently supports 50 languages, including Chinese, Japanese and Arabic
  3. Summarising patient records: however, podiatrists must treat confidential data with the utmost caution
  4. Disease surveillance: ChatGPT can monitor and summarise global health data from online databases
  5. Medical writing: advanced chatbots can write in natural language and could help with writing blog posts or private case notes

Words: Jacklin Kwan

The Podiatrist, July/August 2023
