Technology companies are now introducing artificial intelligence chatbots specifically designed to answer medical questions.
OpenAI recently introduced a program called ChatGPT Health, which can draw on patients’ medical records and data to respond to health-related queries.
What can these bots do?
Companies say these programs are not a replacement for doctors but are meant to assist patients. They can explain complex test reports, help patients prepare for doctor appointments, and highlight important trends hidden in medical records.
Dr Robert Wachter, an expert at the University of California, says these tools are better than a Google search because they offer more personalized information. He advises patients to give chatbots as much detail as possible in order to receive better responses.
Experts say that for symptoms such as chest pain, difficulty breathing, or severe headache, patients should seek immediate medical attention rather than relying on chatbots.
Privacy concerns:
Experts have warned that medical information shared with chatbots is not protected under the same laws that apply to doctors and hospitals.
OpenAI and Anthropic claim that they keep users’ medical information separate and do not use it to train their models.
Flaws in research:
A study by the University of Oxford found that although AI chatbots can achieve 95 percent accuracy in written case scenarios, problems arise during real interactions with people.
Patients often fail to give chatbots the necessary information and struggle to distinguish good AI-generated advice from bad.
Dr Wachter says that users can build confidence in a chatbot’s answer by consulting multiple bots. He himself enters the same question into both ChatGPT and Google Gemini, and feels more confident when the two give similar answers.
Experts say these tools should be used with caution and sound judgment, and that important health-related decisions should not rely solely on chatbots.
