GLoRE

A benchmark for evaluating the logical reasoning of LLMs

For more information, please refer to our arXiv preprint.

Datasets included:

We are working on incorporating more logical reasoning datasets!

This repository is compatible with the OpenAI Evals library. Please download the Evals package first, then put the contents of this repository's Data and evals folders into evals/evals/registry/data/<name_of_your_eval>/ and evals/evals/registry/evals/, respectively.

e.g., evals/evals/registry/data/logiqa/logiqa.jsonl and evals/evals/registry/evals/logiqa.yaml
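As a minimal sketch, the copy step might look like the following. It assumes the OpenAI Evals repository is cloned at ../evals and that this repository keeps the JSONL files under Data/ and the YAML configs under evals/, using the logiqa files as an example; adjust the paths to your own setup.

# assumes OpenAI Evals is cloned at ../evals; paths are illustrative
mkdir -p ../evals/evals/registry/data/logiqa
cp Data/logiqa/logiqa.jsonl ../evals/evals/registry/data/logiqa/
cp evals/logiqa.yaml ../evals/evals/registry/evals/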

Setting Up

pip install evals

Evaluating OpenAI Models

  1. Export your OpenAI API key to the environment:

export OPENAI_API_KEY=<your_key>

  2. Run the eval:

oaieval <model_name> <data_name>

e.g., oaieval gpt-3.5-turbo logiqa

Contribute Your Own Dataset

We welcome contributions of new datasets to GLoRE.

Please feel free to open an issue in this repository or email us (address provided in the paper) to let us know.

We recommend converting your dataset into the GLoRE format using the scripts in example_scripts/, which contains example conversion scripts for .csv, .tsv, and .json inputs; a sketch of such a conversion is shown below. If you run into trouble during format conversion, we are also happy to handle it for you.
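For illustration, a CSV-to-JSONL conversion into the OpenAI Evals sample format might look roughly like the sketch below. The column names (context, question, options, label) and the prompt wording are assumptions; the scripts in example_scripts/ are the reference implementation.

# Hypothetical CSV -> JSONL conversion sketch; column names are assumptions.
import csv
import json

def convert_csv_to_jsonl(csv_path, jsonl_path):
    with open(csv_path, newline="", encoding="utf-8") as fin, \
         open(jsonl_path, "w", encoding="utf-8") as fout:
        for row in csv.DictReader(fin):
            # Each output line is one sample in the OpenAI Evals chat format:
            # a list of input messages plus the ideal (gold) answer.
            sample = {
                "input": [
                    {"role": "system", "content": "Answer the multiple-choice question with a single option letter."},
                    {"role": "user", "content": row["context"] + "\n" + row["question"] + "\n" + row["options"]},
                ],
                "ideal": row["label"],
            }
            fout.write(json.dumps(sample, ensure_ascii=False) + "\n")

convert_csv_to_jsonl("my_dataset.csv", "my_dataset.jsonl")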

How to Cite

@misc{liu2023glore,
      title={GLoRE: Evaluating Logical Reasoning of Large Language Models}, 
      author={Hanmeng Liu and Zhiyang Teng and Ruoxi Ning and Jian Liu and Qiji Zhou and Yue Zhang},
      year={2023},
      eprint={2310.09107},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
