The Struggles of Large Language Models with Zero- and Few-Shot (Extended) Metaphor Detection

DOI:

https://doi.org/10.21248/jlcl.38.2025.287

Keywords:

LLMs, metaphor, extended metaphor, figurative language, zero-shot, few-shot

Abstract

Extended metaphor is the use of multiple metaphoric words that express the same domain mapping. Although detecting extended metaphor would provide valuable insight for computational metaphor processing, the task has been largely neglected. We fill this gap with a series of zero- and few-shot experiments on the detection of all linguistic metaphors, and specifically of extended metaphors, using LLaMa and GPT models. We find that no model achieved satisfactory performance on either task, and that LLaMa in particular showed problematic overgeneralization tendencies. Moreover, our error analysis shows that LLaMa is not sufficiently able to construct the domain mappings relevant for metaphor understanding.

Published

2025-07-08

How to Cite

Reimann, S., & Scheffler, T. (2025). The Struggles of Large Language Models with Zero- and Few-Shot (Extended) Metaphor Detection. Journal for Language Technology and Computational Linguistics, 38(2), 97–109. https://doi.org/10.21248/jlcl.38.2025.287