Speaker Attribution in German Parliamentary Debates with QLoRA-adapted Large Language Models
DOI: https://doi.org/10.21248/jlcl.37.2024.244

Keywords: large language models, German, speaker attribution, semantic role labeling

Abstract
The growing body of political texts opens up new opportunities for rich insights into political dynamics and ideologies but also increases the workload for manual analysis. Automated speaker attribution, which detects who said what to whom in a speech event and is closely related to semantic role labeling, is an important processing step for computational text analysis. We study the potential of the large language model family Llama 2 to automate speaker attribution in German parliamentary debates from 2017-2021. We fine-tune Llama 2 with QLoRA, an efficient training strategy, and observe that our approach achieves competitive performance in the GermEval 2023 Shared Task on Speaker Attribution in German News Articles and Parliamentary Debates. Our results shed light on the capabilities of large language models in automating speaker attribution, revealing a promising avenue for the computational analysis of political discourse and the development of semantic role labeling systems.
Published
- 2024-03-03 (version 2)
- 2024-02-29 (version 1)
License
Copyright (c) 2024 Tobias Bornheim, Niklas Grieger, Patrick Gustav Blaneck, Stephan Bialonski
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.