31/10/2023
Artificial intelligence (AI) is steadily making its way into healthcare, particularly in helping medical practitioners reach precise diagnoses. Dr. Michael Mansour, based at Massachusetts General Hospital, is an early proponent of AI, harnessing its capabilities to streamline medical information retrieval. He specializes in invasive fungal infections in transplant patients.
A pivotal tool in this realm is UpToDate, often likened to a "Google for doctors": it offers expert-authored medical content to more than 2 million users across 44,000 healthcare organizations in 190 countries. Dr. Mansour relies on UpToDate when patients present with perplexing symptoms, cross-referencing their condition against potential diagnoses.
Mansour offers a hypothetical case: "If I meet a patient who is visiting from Hawaii," and that patient's symptoms make him worry about an infection acquired back home, he types "Hawaii" and "infection" into the tool.
However, today's AI tools struggle to furnish highly tailored information. This is where generative AI enters the fray. An experimental version of UpToDate, currently under testing, harnesses generative AI to supply doctors with context-aware, highly specific answers. Wolters Kluwer Health, the company behind UpToDate, aims to make interactions between doctors and the database more interactive and conversational, reminiscent of consulting a seasoned clinician.
"If you have a question, it can maintain the context of your question," says Dr. Peter Bonis, chief medical officer for Wolters Kluwer Health. "And saying, 'Oh, I meant this,' or 'What about that?' And it knows what you're talking about and can guide you through, in much the same way that you might ask a master clinician to do that."
Wolters Kluwer Health currently offers the AI-enhanced program as a beta version for rigorous testing. Dr. Bonis emphasizes that it must be completely reliable before widespread release.
During initial tests, Dr. Bonis encountered errors of the kind known in the world of large language models as "hallucinations." In one case, he asked about a topic outside his own expertise and the program cited a journal article that, on further investigation, did not exist in that journal. When Dr. Bonis questioned the AI, it acknowledged making the information up.
Even so, with ongoing refinement, the medical community recognizes AI's vast potential to aid diagnosis, and it is already employed as a radiological tool, significantly assisting with CT scans and X-rays. OpenEvidence, developed by experts from Harvard University, MIT, and Cornell University, uses AI to analyze recent medical research studies and deliver synthesized information to users.
This advancement in AI technology signifies a promising future for healthcare, offering improved diagnostic tools and supporting medical professionals in their decision-making processes.
Dr. June-Ho Kim, who directs a program on primary care innovation at Ariadne Labs, a partnership between Brigham and Women's Hospital and the Harvard T.H. Chan School of Public Health, says of the prep work doctors do before seeing patients: "It's a time-consuming and very haphazard process." He adds: "You could see a large language model that's able to digest that and produce kind of natural language summaries of it being incredibly useful."
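Neither Dr. Kim's program nor any vendor pipeline is described in detail here; as a rough sketch only, the summarization he imagines could look like concatenating the scattered prep documents and asking a model for a brief overview. The directory name, model, and prompt below are invented for illustration:

```python
# Rough sketch of LLM summarization of pre-visit prep material.
# The directory, model name, and prompt are invented for illustration;
# this is not Ariadne Labs' or any vendor's actual pipeline.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Gather the scattered material: referral letters, notes, prior results.
docs = [p.read_text() for p in Path("prep_materials").glob("*.txt")]
combined = "\n\n---\n\n".join(docs)

reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Summarize these clinical documents into a short, "
                    "plain-language overview for the treating physician."},
        {"role": "user", "content": combined},
    ],
)
print(reply.choices[0].message.content)
```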
What Dr. Kim is saying is that AI technology may also help primary care physicians care for patients without needing the assistance of specialists. "It will free up specialist time to focus on the more complex cases that they need to really weigh in on, rather than the ones that could be answered through a few questions," he says.
A research study featured in the Journal of Medical Internet Research in August scrutinized the diagnostic proficiency of the widely used ChatGPT program. Researchers administered 36 clinical scenarios to ChatGPT and observed an accuracy rate of 77% in providing final diagnoses. However, when presented with more constrained patient data derived from initial interactions with physicians, the diagnostic accuracy of ChatGPT diminished to 60%.
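The paper's protocol boils down to scoring model answers against known diagnoses across a fixed set of vignettes. A toy version of that accuracy calculation might look like the following; the vignettes, prompt, and crude string-match grader are placeholders, not the study's actual materials or method:

```python
# Toy sketch of vignette-based diagnostic accuracy scoring, loosely in the
# spirit of the JMIR study. Vignettes, prompt, and the string-match grader
# are placeholders; the real study used 36 standardized clinical vignettes.
from openai import OpenAI

client = OpenAI()

vignettes = [
    # (clinical scenario, accepted final diagnosis) -- invented examples
    ("55-year-old with crushing substernal chest pain radiating to the "
     "left arm, diaphoresis, ST elevation on ECG.", "myocardial infarction"),
    ("3-year-old with barking cough, inspiratory stridor, low-grade fever.",
     "croup"),
]

correct = 0
for scenario, truth in vignettes:
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": "Give the single most likely final diagnosis: "
                              + scenario}],
    )
    answer = reply.choices[0].message.content
    if truth.lower() in answer.lower():  # crude grader for the sketch
        correct += 1

print(f"Accuracy: {correct / len(vignettes):.0%}")
```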
Dr. Marc Succi of Mass General Brigham, one of the paper's authors, says that AI for healthcare still needs improvement. "We've drilled down on specific parts of the clinical visit where it needs to improve before it is ready for prime time," he says.
Succi believes that AI will eventually prove to be a trusted medical tool, much like the stethoscope. He says: "AI won't replace doctors, but doctors who use AI will replace doctors who do not."
Mansour, the Massachusetts General Hospital specialist in fungal infections in transplant patients, hopes that AI will give him more time to engage with patients, restoring the doctor-patient relationship. "Instead of spending those extra minutes searching things, you could allow me to go and talk to that person about their diagnosis, about what to expect for management," he says.
---------------------------------------
Explore the latest feature of DrAid™ for Patient EMR Analytics: it generates automatic medical reports and provides smart patient-record search and summarization based on ChatGPT (generative AI), enhancing patient outcomes with personalized treatment recommendations, clinical decision support, and real-time alerts for timely interventions.
Learn more about how DrAid™ for Patient EMR Analytics transforms the way doctors work!