vlievin / medical-reasoning

Medical reasoning using large language models


Note: This repository (medical-reasoning) is a snapshot of the code used to produce our results up to December 2022, including the Codex 5-shot CoT results for version 2 of our paper. The repository is now archived and will not receive further updates, which keeps the published results reproducible. For our latest research using open-source large language models, please refer to med-chain, the new official repository for ongoing work: https://github.com/MotzWanted/med-chain

Medical Reasoning using GPT-3.5

Official legacy repository for the paper "Can large language models reason about medical questions?"


Abstract

Although large language models (LLMs) often produce impressive outputs, it remains unclear how they perform in real-world scenarios requiring strong reasoning skills and expert domain knowledge. We set out to investigate whether GPT-3.5 (Codex and InstructGPT) can be applied to answer and reason about difficult real-world-based questions. We utilize two multiple-choice medical exam question datasets (USMLE and MedMCQA) and a medical reading comprehension dataset (PubMedQA). We investigate multiple prompting scenarios: Chain-of-Thought (CoT, think step-by-step), zero- and few-shot (prepending the question with question-answer exemplars) and retrieval augmentation (injecting Wikipedia passages into the prompt). For a subset of the USMLE questions, a medical expert reviewed and annotated the model's CoT. We found that InstructGPT can often read, reason and recall expert knowledge. Failures are primarily due to a lack of knowledge and to reasoning errors, and trivial guessing heuristics are observed, e.g., too often predicting labels A and D on USMLE. Sampling and combining many completions overcomes some of these limitations. Using 100 samples, Codex 5-shot CoT not only gives close to well-calibrated predictive probabilities but also achieves human-level performance on the three datasets: USMLE: 60.2%, MedMCQA: 57.5% and PubMedQA: 78.2%.

CoT Samples

Samples of generated CoTs for the USMLE, MedMCQA and PubMedQA datasets can be accessed here.

More samples will be made available through ThoughtSource ⚡.

Setup

Install poetry
# note: the legacy get-poetry.py script has been deprecated; install.python-poetry.org is the current official installer
curl -sSL https://install.python-poetry.org | python3 -
Install dependencies
poetry install
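
To check that the dependencies resolved correctly, you can list the packages installed in the environment:

# lists all packages installed by poetry install
poetry show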
Set up Elasticsearch
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.14.1-linux-x86_64.tar.gz
tar -xzf elasticsearch-7.14.1-linux-x86_64.tar.gz

To run Elasticsearch, navigate to the elasticsearch-7.14.1 directory in the terminal and run ./bin/elasticsearch.
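
Once the node is up, a quick sanity check from another terminal (assuming the default port 9200) is to query the cluster root endpoint, which returns cluster and version information as JSON:

# should return a JSON document with cluster name and version 7.14.1
curl -X GET "http://localhost:9200/"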

Running one experiment

Use poetry run to execute commands inside the Poetry environment.

poetry run experiment <args>
# Example
poetry run experiment engine=code dataset.name=medqa_us dataset.subset=10
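
The key=value arguments are configuration overrides (Hydra-style). As a rough sketch, other values could be substituted; the engine=instruct value and the omission of dataset.subset below are assumptions for illustration, not documented options:

# hypothetical: swap the engine (value is an assumption)
poetry run experiment engine=instruct dataset.name=medqa_us dataset.subset=10
# hypothetical: omit dataset.subset to run on the full split (assumption)
poetry run experiment engine=code dataset.name=medqa_us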

Running a group of experiments

Groups of experiments are defined as poe (poethepoet) tasks in pyproject.toml; a sketch of the format follows the list below.

poetry run poe medqa_test
poetry run poe medmcqa_valid
poetry run poe pubmedqa_test
poetry run poe mmlu_test_code
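
As referenced above, poethepoet tasks are declared under [tool.poe.tasks] in pyproject.toml. A minimal sketch is shown below; the task body is an assumption, and the real definitions live in this repository's pyproject.toml:

[tool.poe.tasks]
# hypothetical definition; a plain string is run as a command in the poetry environment
medqa_test = "experiment engine=code dataset.name=medqa_us"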

Citation

@misc{lievin2022can,
  doi = {10.48550/ARXIV.2207.08143},
  url = {https://arxiv.org/abs/2207.08143},
  author = {Liévin, Valentin and Hother, Christoffer Egeberg and Winther, Ole},
  keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), FOS: Computer and information sciences, I.2.1; I.2.7},
  title = {Can large language models reason about medical questions?},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}


License: Apache License 2.0


Languages

HTML 98.1%, Python 1.9%