15.01.2026 07:33

Doctors see a role for artificial intelligence in healthcare, but not necessarily as a chatbot

Artificial intelligence is increasingly appearing in conversations about the future of healthcare, but doctors are divided on its role. While many see great potential for AI to improve access to care and system efficiency, they are much more hesitant about using it as a healthcare advisor.

Dr. Sina Bari, a surgeon and leading healthcare AI expert at iMerit, told TechCrunch about a patient who was misled by ChatGPT: after he suggested a medication, the patient pushed back and produced a transcript of a ChatGPT conversation stating that the drug carried a 45 percent risk of pulmonary embolism.

Upon closer inspection, Dr. Bari found that the figure referred to a narrow subset of tuberculosis patients and did not apply to his patient. He said the example illustrates the risks of uncritically relying on general-purpose AI chatbots for health advice.

Why ChatGPT Health Still Inspires Optimism

Despite such cases, Dr. Bari welcomed the new ChatGPT Health feature with more enthusiasm than concern. He believes it is important that the already widespread practice of discussing health with AI chatbots be formalized and handled with appropriate safeguards to protect data and users.

ChatGPT Health allows users to have health conversations in a separate, more private environment, with OpenAI emphasizing that it doesn't use these conversations to train its models. Users can also upload health records or sync data from apps like Apple Health and MyFitnessPal for more personalized responses.

Privacy and regulatory issues

The possibility of connecting health data to AI raises new questions. Itai Schwartz, co-founder of MIND, warns about the transfer of sensitive data from HIPAA-compliant systems to non-HIPAA providers. He says it will be interesting to see how regulators respond to such practices.

Despite concerns, the numbers show that AI chatbots have already become an important source of information: more than 230 million people use ChatGPT every week for health-related questions.

AI as an aid to doctors, not a replacement

According to Dr. Nigam Shah, a professor of medicine at Stanford and chief data scientist at Stanford Health Care, a bigger problem than AI's incorrect answers is the long wait times in the healthcare system. In the US, patients can wait three to six months to see a primary care doctor.

In this context, Dr. Shah sees greater potential for AI on the provider side. Administrative tasks can consume up to half of a doctor's working time, which directly reduces the number of patients they can see.

Stanford is developing a system called ChatEHR that integrates directly into electronic health records, giving doctors faster access to key patient information. Early adopters say the system reduces time spent searching for information and allows for more direct contact with patients.

Anthropic is also developing AI tools for healthcare professionals and insurance companies. It recently introduced Claude for Healthcare, a solution aimed at reducing the time spent on administrative procedures such as insurance prior authorizations.

The inevitable tension between technology and medicine

As Dr. Bari points out, there is a fundamental tension in the introduction of artificial intelligence into healthcare: doctors have a primary responsibility to patients, while technology companies also act in the interests of shareholders.

"Patients expect us to be cautious and skeptical," says Dr. Bari. "This caution is key to their protection."
