The Struggles of Large Language Models with Zero- and Few-Shot (Extended) Metaphor Detection
DOI:
https://doi.org/10.21248/jlcl.38.2025.287

Keywords:
LLMs, metaphor, extended metaphor, figurative language, zero-shot, few-shot

Abstract
Extended metaphor is the use of multiple metaphoric words that express the same domain mapping. Although detecting extended metaphor would provide valuable insight for computational metaphor processing, the task has so far been largely neglected. We fill this gap with a series of zero- and few-shot experiments on the detection of all linguistic metaphors, and specifically of extended metaphors, using LLaMa and GPT models. We find that no model achieves satisfactory performance on either task, and that LLaMa in particular shows problematic overgeneralization tendencies. Moreover, our error analysis shows that LLaMa is not sufficiently able to construct the domain mappings relevant for metaphor understanding.
License
Copyright (c) 2025 Sebastian Reimann, Tatjana Scheffler

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.