AI in Healthcare: Hopes and Risks

Studies show chatbots like ChatGPT can pass medical exams but struggle with human interaction. Experts warn of risks and advise caution.

As hundreds of millions of people increasingly turn to chatbots for advice, technology companies are developing specialized programs to answer health-related questions. In January, OpenAI launched ChatGPT Health, a program that analyzes medical records and wearable-device data to answer health questions, and Anthropic offers similar features to users of its Claude chatbot. Both companies emphasize that these tools are not a substitute for specialized medical care and should not be used for diagnosis.

The advantage of these chatbots is that they offer more personalized information than regular internet searches, taking the user's medical context into account.

Despite the enthusiasm surrounding artificial intelligence, however, independent testing of the technology is still in its early stages. Studies indicate that programs like ChatGPT can pass advanced medical exams, yet they often falter when interacting with humans. A recent Oxford University study of 1,300 participants found that people who used AI-powered chatbots to investigate hypothetical health conditions made no better decisions than those who relied on regular internet searches or their own judgment. When given comprehensive written medical scenarios, the chatbots correctly identified the underlying condition in 95% of cases. "The problem wasn't that; the problem was interacting with real participants, who often don't provide enough detail," said lead researcher Adam Mehdi of the Oxford Internet Institute.

One of the primary concerns about these programs is privacy. Information shared with AI companies is not protected by HIPAA, the law that safeguards sensitive medical data. The companies assure users that their data is stored separately and not used to train models, but experts still advise caution.

Independent studies on the accuracy of these technologies also remain limited, and some responses can be misleading, making it hard for users to distinguish correct information from incorrect. Experts suggest providing as much detail as possible to get better responses, consulting multiple programs to cross-check answers, maintaining skepticism, and avoiding complete reliance on AI for any health decision, whether simple or critical. They also warn against turning to chatbots in emergencies such as chest pain or shortness of breath, where immediate medical attention is required.