
Pollice Verso at SemEval-2024 Task 6: The Roman Empire Strikes Back

Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024), pages 1529–1536. Mexico City, Mexico: Association for Computational Linguistics, June 2024

Abstract

We present an intuitive approach for hallucination detection in LLM outputs that is modeled after how humans would approach this task. We engage several LLM "experts" to independently assess whether a response is hallucinated. For this, we select recent and popular LLMs smaller than 7B parameters. By analyzing the log probabilities of tokens that signal a positive or negative judgment, we can estimate the likelihood of hallucination. Additionally, we enhance the performance of our "experts" by automatically refining their prompts using the recently introduced OPRO framework. Furthermore, we ensemble the replies of the different experts in a uniform or weighted manner, which builds a quorum from the expert replies. Overall, this leads to accuracy improvements of up to 10.6 percentage points over the challenge baseline. We show that a Zephyr 3B model is well suited for the task. Our approach can be applied to both the model-agnostic and model-aware subtasks without modification, and it is flexible and easily extendable to related tasks.
