AI Explainability in Classifying Political Speeches and Interviews
DOI:
https://doi.org/10.21248/jlcl.39.2026.262

Keywords:
stance classification, computational discourse analysis, SetFit, explainable AI, SHAP

Abstract
This study applies explainable AI techniques to identify the linguistic features involved in classifying speeches and interviews in political discourse, a field where transparency is particularly sensitive. Using a feature-based Linguistic Rule-Based Model (LRBM), logistic regression, Transformer-based models, and SHAP values, we make the predictions of BERT models in this binary natural language processing (NLP) classification task more interpretable. The study explores the role that recognizable linguistic features play in both feature-based and neural models; specifically, it examines the extent to which BERT models depend on linguistic structures for their predictions, using NER-based anonymization to reduce reliance on thematic context. Building on findings from classic and modern linguistic literature, and beyond improving the interpretability of neural models, the study identifies globally important “political discourse features” that distinguish speeches from interviews: the frequencies of nominalizations, discourse markers, personal pronouns, and interjections.
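To illustrate the kind of analysis the abstract describes, the sketch below trains a logistic regression on the four named discourse features and computes exact SHAP values for it. For a linear model over (approximately) independent features, the SHAP value of feature i reduces to w_i · (x_i − E[x_i]) (Lundberg & Lee, 2017), so no SHAP library is needed. The feature values and weights here are invented for illustration and are not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-100-token frequencies (illustrative values only):
# [nominalizations, discourse markers, personal pronouns, interjections]
X = np.array([
    [4.1, 1.2, 2.0, 0.1],   # speech-like document
    [3.8, 1.0, 2.5, 0.0],   # speech-like document
    [1.2, 3.5, 6.1, 1.4],   # interview-like document
    [0.9, 4.0, 5.8, 1.9],   # interview-like document
])
y = np.array([0, 0, 1, 1])  # 0 = speech, 1 = interview

clf = LogisticRegression().fit(X, y)
w = clf.coef_[0]
background_mean = X.mean(axis=0)  # SHAP background distribution

def linear_shap(x):
    # Exact SHAP values for a linear model: phi_i = w_i * (x_i - E[x_i])
    return w * (x - background_mean)

phi = linear_shap(X[0])
# Sanity check: contributions sum to this document's logit minus the
# logit of the background mean (the SHAP "base value").
logit = X[0] @ w + clf.intercept_[0]
base = background_mean @ w + clf.intercept_[0]
assert np.isclose(phi.sum(), logit - base)
```

In the paper's setting, the same decomposition would be read per feature: a large negative contribution from, say, interjection frequency pushes a document toward the "speech" class, giving a global ranking of the discourse features once aggregated over the corpus.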
License
Copyright (c) 2026 Juan Francisco Reyes

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.