Figure 2. Chart-constrained LLM recommendations


LLMs can aggregate progress notes, laboratory values and radiological reports to assist clinicians in information synthesis.


Moreover, there may be a future in which LLM capabilities expand to include clinical management suggestions. We demonstrate one example use case in the setting of a common IR consult for acute gastrointestinal hemorrhage (Figure 2), wherein each additional note and lab value updates the LLM's recommendations to the care team. Finally, LLMs could also support the advocacy of healthcare professionals and patients alike, by assisting in drafting appeals for denied preauthorization requests or contesting denials of insurance claims for care deemed medically necessary.
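The pattern sketched in Figure 2 can be made concrete. The following is a minimal, hypothetical illustration (our assumption, not the authors' implementation): chart entries accumulate in a small container, and a constrained prompt is rebuilt whenever a note or lab value is added; call_llm is a placeholder for whichever chat-completion API is actually used.

# Illustrative sketch only: rebuild a chart-constrained prompt each time
# a note or lab value is added, as in Figure 2. call_llm() is a
# hypothetical placeholder for a real chat-completion API call.
from dataclasses import dataclass, field

@dataclass
class ChartContext:
    notes: list[str] = field(default_factory=list)
    labs: list[str] = field(default_factory=list)

    def to_prompt(self, consult_question: str) -> str:
        # Constrain the model to the supplied chart data only.
        notes = "\n".join(f"- {n}" for n in self.notes) or "- none"
        labs = "\n".join(f"- {v}" for v in self.labs) or "- none"
        return (
            "You are assisting an IR care team. Use ONLY the chart data "
            "below and flag missing information.\n"
            f"Progress notes:\n{notes}\nLaboratory values:\n{labs}\n"
            f"Consult question: {consult_question}"
        )

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Swap in a real chat-completion call.")

chart = ChartContext()
chart.notes.append("ED note: melena x2 days, hypotensive despite fluids.")
chart.labs.append("Hemoglobin 7.9 g/dL (baseline 12.1)")
# Each new note or lab triggers an updated recommendation request:
# print(call_llm(chart.to_prompt("Acute GI bleed: embolization candidate?")))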


Postprocedure care

AI has previously been used, though not widely implemented clinically, for surveillance of complications. Similar deployment of LLM chatbots would be feasible; however, LLMs can state and perpetuate medical inaccuracies.10 As such, LLM-based postprocedure chatbots may still be far from clinical use. However, plug-in tools are currently being developed that restrict an LLM's information access to one or more specified sources, which may improve chatbot accuracy.
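To illustrate the source-restriction idea (the plug-in tools themselves are not detailed here, so the sketch below is a hedged stand-in), a chatbot can be limited to excerpts retrieved from a single approved aftercare document and told to refuse anything the document does not cover. Simple keyword overlap substitutes for a production retriever, and all names are hypothetical.

# Hypothetical sketch of a source-restricted postprocedure chatbot: the
# model may only see excerpts from one approved document. Keyword overlap
# stands in for a real retriever (embeddings, vector store, etc.).
def overlap(passage: str, question: str) -> int:
    # Number of lowercase words shared by passage and question.
    return len(set(passage.lower().split()) & set(question.lower().split()))

def retrieve(passages: list[str], question: str, k: int = 2) -> list[str]:
    ranked = sorted(passages, key=lambda p: overlap(p, question), reverse=True)
    return [p for p in ranked[:k] if overlap(p, question) > 0]

def build_prompt(excerpts: list[str], question: str) -> str:
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(excerpts))
    return (
        "Answer using ONLY the numbered excerpts. If they do not answer "
        "the question, say so and advise calling the care team.\n"
        f"{context}\nPatient question: {question}"
    )

approved_source = [
    "Mild groin soreness for 1-2 days after embolization is expected.",
    "Call the clinic for fever over 101F, an expanding bruise, or numbness.",
]
question = "Is soreness at the groin normal?"
excerpts = retrieve(approved_source, question)
if excerpts:
    print(build_prompt(excerpts, question))  # prompt is then sent to the LLM
else:
    print("Out of scope for this chatbot; please contact the clinic.")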


LLM integration into electronic health records may become a reality with the recent announcement of co-innovation between OpenAI and Epic Healthcare Systems.


Ease of communication

A common cause of stress for patients is the difficulty of interpreting radiological reports. Patients are notified when a radiological report is uploaded to their chart, often before their physician has had the chance to follow up and elaborate on the findings. LLMs show great utility in this space, with the ability to simplify the radiologist's medical jargon into a patient-friendly report: LLMs can effectively translate radiological reports to an 8th-grade reading level or below.11
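An 8th-grade target of this kind is commonly checked with a readability formula. The sketch below (our addition, not from the article) scores a simplified report with the standard Flesch-Kincaid grade-level formula, using a crude vowel-group heuristic to count syllables; in practice the LLM would be re-prompted until the score falls at or below the target.

# Rough Flesch-Kincaid grade-level check for an LLM-simplified report.
# The syllable counter is a crude vowel-group heuristic, good enough to
# flag text that is clearly above a target reading level.
import re

def count_syllables(word: str) -> int:
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Flesch-Kincaid grade level:
    #   0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

simplified = ("The scan shows a small blocked blood vessel in your liver. "
              "Your care team will talk with you about treatment choices.")
grade = fk_grade(simplified)
print(f"Estimated grade level: {grade:.1f}")
assert grade <= 8, "Ask the LLM to simplify the report further."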


Given the accessibility of medical documents in patient charts, LLM simplification of radiology reports potentially empowers patients to better understand their conditions, reduces patient anxiety and decreases unscheduled calls from patients. Similarly, LLMs can generate specialty-specific radiological report summarizations for referring providers.


Limitations

Data privacy


Although OpenAI has taken steps to promote data privacy and website security, it does not follow HIPAA-compliant practices.12 OpenAI's current privacy policy acknowledges collecting uncensored personal data submitted to GPT services and using that data to train its LLMs. In its current state, GPT is not ready to be integrated into hospital systems and poses a threat to personal health information. However, OpenAI is compliant with the California Consumer Privacy Act and the EU's General Data Protection Regulation, demonstrating a commitment to data security and, hopefully, a path toward HIPAA-compliant practices.13


GPT hallucinations

A current shortcoming of GPT is its production of hallucinations: responses that seem entirely reasonable but are fabricated, a limitation of being trained on a fixed dataset that runs only up until September 2021. GPT does not actually know what is right and wrong; it simply relays information it was trained on. Fortunately, LLM frameworks such as LangChain

