
🍄 UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation

What does this repository include?
  • UHGEval: an unconstrained hallucination evaluation benchmark.
  • Eval Suite: a user-friendly evaluation framework for hallucination tasks, which also supports other benchmarks such as HalluQA and HaluEval.

Resources: ACL Anthology paper · arXiv paper · UHGEvalDataset on Hugging Face · eval-suite on PyPI · Apache-2.0 license

Quick Start

# Install Eval Suite
conda create -n uhg python=3.10
conda activate uhg
pip install eval-suite

# Run evaluations with OpenAI Compatible API
eval_suite eval openai \
    --model_name gpt-4o \
    --api_key your_api_key \
    --base_url https://api.openai.com/v1 \
    --evaluators ExampleQAEvaluator UHGSelectiveEvaluator

# Or run evaluations with Hugging Face Transformers
eval_suite eval huggingface \
    --model_name_or_path Qwen/Qwen2-0.5B-Instruct \
    --apply_chat_template \
    --evaluators ExampleQAEvaluator UHGSelectiveEvaluator

# After evaluation, you can gather statistics of the evaluation results
eval_suite stat

# List all available evaluators
eval_suite list

# Get help
eval_suite --help

Tip

  • Refer to demo.ipynb for more detailed examples.
  • Run export HF_ENDPOINT=https://hf-mirror.com to use the Chinese mirror if you cannot connect to Hugging Face.
  • SiliconFlow provides free API keys for many models; you can apply for one at https://siliconflow.cn/pricing.

UHGEval

UHGEval is a large-scale benchmark designed for evaluating hallucination in professional Chinese content generation. It builds on unconstrained text generation and hallucination collection, incorporating both automatic annotation and manual review.

UHGEvalDataset. UHGEval provides two versions of the dataset: the full version with 5,141 items and a concise version with 1,000 items for more efficient evaluation. Below is an example item from UHGEvalDataset.

Example
{
    "id": "num_000432",
    "headLine": "(社会)江苏首次评选消费者最喜爱的百种绿色食品",
    "broadcastDate": "2015-02-11 19:46:49",
    "type": "num",
    "newsBeginning": "  新华社南京2月11日电(记者李响)“民以食为天,食以安为先”。江苏11日发布“首届消费者最喜爱的绿色食品”评选结果,老山蜂蜜等100种食品获得消费者“最喜爱的绿色食品”称号。",
    "hallucinatedContinuation": "江苏是全国绿色食品生产最发达的省份之一。",
    "generatedBy": "InternLM_20B_Chat",
    "annotations": [
        "江苏<sep>合理",
        "全国<sep>合理",
        "绿色食品生产<sep>合理",
        "发达<sep>不合理,没有事实证明江苏是全国绿色食品生产发达的省份,但可以确定的是,江苏在绿色食品生产上有积极的实践和推动",
        "省份<sep>合理",
        "之一<sep>不合理,没有具体的事实证据表明江苏是全国绿色食品生产发达的省份之一"
    ],
    "realContinuation": "61家获奖生产企业共同签署诚信公约,共建绿色食品诚信联盟。",
    "newsRemainder": "61家获奖生产企业共同签署诚信公约,共建绿色食品诚信联盟。这是江苏保障食品安全、推动绿色食品生产的重要举措。\n  此次评选由江苏省绿色食品协会等部门主办,并得到江苏省农委、省委农工办、省工商局、省地税局、省信用办、省消协等单位大力支持。评选历时4个多月,经企业报名、组委会初筛、消费者投票等层层选拔,最终出炉的百强食品榜单由消费者亲自票选得出,网络、短信、报纸及现场投票共310多万份票数,充分说明了评选结果的含金量。\n  食品安全一直是社会关注的热点。此次评选过程中,组委会工作人员走街头、进超市,邀请媒体、消费者、专家深入产地开展绿色食品基地行,除了超市选购外,还搭建“诚信购微信商城”“中国移动MO生活绿色有机馆”等线上销售平台,开创江苏绿色食品“评展销”结合新局面。评选不仅宣传了江苏绿色品牌食品,更推动了省内绿色食品市场诚信体系的建立,为江苏绿色食品走向全国搭建了权威的平台。\n  江苏省农委副主任李俊超表示,绿色食品消费是当前社会重要的消费趋势。本次评选不仅为社会培育了食品安全诚信文化,也提高了消费者对食品质量和标识的甄别能力,实现了消费者和生产企业的“双赢”。\n  与会企业表示,能够入选“首届江苏消费者最喜爱的绿色食品”是消费者的信任和支持,他们将以此荣誉作为企业发展的新起点,严把食品质量关,推介放心安全的绿色品牌食品,促进产业稳定健康发展。(完)"
}
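
The sketch below (not part of Eval Suite) shows one way to inspect a locally saved copy of UHGEvalDataset in Python; the file name uhgeval_concise.json is a placeholder, and the field names and the <sep>-delimited annotation format follow the example item above.

# Inspect one UHGEvalDataset item from a local JSON file (placeholder name).
import json

with open("uhgeval_concise.json", encoding="utf-8") as f:
    items = json.load(f)

item = items[0]
print(item["headLine"], "-", item["type"])
print("Hallucinated continuation:", item["hallucinatedContinuation"])
print("Real continuation:", item["realContinuation"])

# Each annotation has the form "<keyword><sep><judgement>", where the judgement
# may be followed by a reason.
for annotation in item["annotations"]:
    keyword, judgement = annotation.split("<sep>", 1)
    print(f"  {keyword}: {judgement}")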

Evaluation Methods. UHGEval offers a variety of evaluation methods, including discriminative evaluation, generative evaluation, and selective evaluation.

| Evaluator | Metric | Description |
| --- | --- | --- |
| UHGDiscKeywordEvaluator | Average Accuracy | Given a keyword, the LLM determines whether it contains hallucination. |
| UHGDiscSentenceEvaluator | Average Accuracy | Given a sentence, the LLM determines whether it contains hallucination. |
| UHGGenerativeEvaluator | BLEU-4, ROUGE-L, kwPrec, BertScore | Given a continuation prompt, the LLM generates a continuation. |
| UHGSelectiveEvaluator | Accuracy | Given hallucinated text and unhallucinated text, the LLM selects the realistic text. |
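
To make the selective setting concrete, here is a schematic Python sketch of the idea behind UHGSelectiveEvaluator, not its actual implementation: the LLM is shown the real and the hallucinated continuation in random order and must pick the factually correct one, and accuracy is the fraction of items answered correctly. The prompt wording and the ask_llm callable are assumptions for illustration.

# Schematic selective evaluation: choose the real continuation over the
# hallucinated one. `ask_llm` is any callable mapping a prompt to an answer.
import random

def selective_prompt(beginning: str, option_a: str, option_b: str) -> str:
    return (
        f"News beginning: {beginning}\n"
        f"Continuation A: {option_a}\n"
        f"Continuation B: {option_b}\n"
        "Which continuation is factually correct? Answer with A or B."
    )

def selective_accuracy(items, ask_llm) -> float:
    correct = 0
    for item in items:
        options = [item["realContinuation"], item["hallucinatedContinuation"]]
        random.shuffle(options)  # randomize order to avoid position bias
        answer = ask_llm(selective_prompt(item["newsBeginning"], options[0], options[1]))
        chosen = options[0] if answer.strip().upper().startswith("A") else options[1]
        correct += chosen == item["realContinuation"]
    return correct / len(items)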

Eval Suite

To facilitate evaluation, we have developed a user-friendly evaluation framework called Eval Suite. It currently supports common hallucination evaluation benchmarks, so the same LLM can be evaluated comprehensively with a single command, as shown in the Quick Start section; a scripted variant is sketched after the table below.

| Benchmark | Evaluator | More Information |
| --- | --- | --- |
| C-Eval | CEvalEvaluator | src/eval_suite/benchs/ceval |
| ExampleQA | ExampleQAEvaluator | src/eval_suite/benchs/exampleqa |
| HalluQA | HalluQAMCEvaluator | src/eval_suite/benchs/halluqa |
| HaluEval | HaluEvalDialogEvaluator, HaluEvalQAEvaluator, HaluEvalSummaEvaluator | src/eval_suite/benchs/halueval |
| UHGEval | UHGDiscKeywordEvaluator, UHGDiscSentenceEvaluator, UHGGenerativeEvaluator, UHGSelectiveEvaluator | src/eval_suite/benchs/uhgeval |
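
As a scripted variant of the single-command workflow, the following sketch drives the eval_suite CLI from Python with subprocess, combining evaluators from several benchmarks in one run. It uses only commands and flags shown in the Quick Start; the model name, API key, and base URL are placeholders.

# Run evaluators from several benchmarks in one eval_suite invocation,
# then gather statistics (placeholders for model and API settings).
import subprocess

evaluators = [
    "CEvalEvaluator",
    "HalluQAMCEvaluator",
    "HaluEvalQAEvaluator",
    "UHGSelectiveEvaluator",
]

subprocess.run(
    [
        "eval_suite", "eval", "openai",
        "--model_name", "gpt-4o",
        "--api_key", "your_api_key",
        "--base_url", "https://api.openai.com/v1",
        "--evaluators", *evaluators,
    ],
    check=True,
)

subprocess.run(["eval_suite", "stat"], check=True)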

Learn More

Citation

@inproceedings{liang-etal-2024-uhgeval,
    title = "{UHGE}val: Benchmarking the Hallucination of {C}hinese Large Language Models via Unconstrained Generation",
    author = "Liang, Xun  and
      Song, Shichao  and
      Niu, Simin  and
      Li, Zhiyu  and
      Xiong, Feiyu  and
      Tang, Bo  and
      Wang, Yezhaohui  and
      He, Dawei  and
      Peng, Cheng  and
      Wang, Zhonghao  and
      Deng, Haiying",
    editor = "Ku, Lun-Wei  and
      Martins, Andre  and
      Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.288",
    doi = "10.18653/v1/2024.acl-long.288",
    pages = "5266--5293",
}

TODOs

  • feat: vLLM offline inference benchmarking
  • feat(benchs): add TruthfulQA benchmark