Developing A Question Answering Model for Critical Care Medicine

: Health, Research
: 2025

Critical care providers must make high-stakes clinical decisions on a daily basis. Making research-informed decisions is challenging because of the sheer volume of research published in the field every day, often with conflicting conclusions about the same clinical questions. This project aims to develop a medical question answering model, leveraging large language models (LLMs), to support the decision-making process of physicians in critical care medicine.

The project team developed a model that retrieves research papers relevant to a clinical question and synthesizes a summary of the retrieved documents. The underlying approach was knowledge-graph-based retrieval augmented generation (RAG), which enabled effective information storage and retrieval. An LLM then synthesized the retrieved information into the model's response, helping physicians grasp an overview of the research papers and their stances on the clinical question of interest. The effectiveness of the model was demonstrated on a custom dataset curated by the medical doctors on the team.
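The project description does not detail the retrieval mechanics, but the core idea of knowledge-graph-based retrieval can be illustrated with a minimal sketch: link each paper to the medical entities it mentions, then rank papers by how many entities from the question they share. The entity names, paper IDs, and function names below are illustrative assumptions, not the project's actual implementation.

```python
from collections import defaultdict

# Toy knowledge graph: medical entity -> set of paper IDs that discuss it.
graph = defaultdict(set)

def index_paper(paper_id, entities):
    """Link a paper node to each entity node it mentions."""
    for entity in entities:
        graph[entity].add(paper_id)

def retrieve(question_entities):
    """Rank papers by how many of the question's entities they match."""
    scores = defaultdict(int)
    for entity in question_entities:
        for paper_id in graph.get(entity, set()):
            scores[paper_id] += 1
    # Sort by descending match count, then by ID for a stable order.
    return sorted(scores, key=lambda p: (-scores[p], p))

# Index two hypothetical papers.
index_paper("paper_A", {"sepsis", "vasopressors"})
index_paper("paper_B", {"sepsis", "fluid resuscitation"})

# A question about sepsis and vasopressors ranks paper_A first (2 matches vs 1).
print(retrieve({"sepsis", "vasopressors"}))
```

In a full RAG pipeline, the ranked papers' text would then be placed into the LLM's prompt so the generated summary is grounded in the retrieved documents rather than the model's parametric memory.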

Mentor: Yue Jiang