AI Explainability in Classifying Political Speeches and Interviews

DOI:

https://doi.org/10.21248/jlcl.39.2026.262

Keywords:

stance classification, computational discourse analysis, SetFit, explainable AI, SHAP

Abstract

This study applies explainable AI techniques to identify the linguistic features involved in classifying speeches and interviews in political discourse, a domain where transparency is particularly important. Using a feature-based Linguistic Rule-Based Model (LRBM), logistic regression, Transformer-based models, and SHAP values, we make the predictions of BERT models in this binary natural language processing (NLP) classification task more interpretable. The study explores the role that recognizable linguistic features play in both feature-based and neural models. Specifically, it examines the extent to which BERT models depend on linguistic structures for their predictions, using NER anonymization to reduce reliance on thematic context. Building on findings from classic and modern linguistic literature, the study not only improves the interpretability of neural models but also identifies important global “political discourse features” that distinguish speeches from interviews: the frequencies of nominalizations, discourse markers, personal pronouns, and interjections.
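As a minimal sketch of the attribution idea behind the abstract: for a linear model (such as the logistic regression mentioned above) with independent features, the SHAP value of feature i reduces exactly to w_i * (x_i - E[x_i]). The feature names follow the abstract, but all weights, background means, and the example document are invented for illustration; the paper's actual pipeline uses trained models and the `shap` library.

```python
# Hypothetical illustration of SHAP attribution for a linear classifier.
# Weights and background means below are INVENTED, not taken from the paper.

FEATURES = ["nominalization_freq", "discourse_marker_freq",
            "personal_pronoun_freq", "interjection_freq"]

# Invented logistic-regression weights (positive pushes toward "speech").
weights = {"nominalization_freq": 1.8, "discourse_marker_freq": -0.9,
           "personal_pronoun_freq": -1.2, "interjection_freq": -2.1}

# Invented background means over a hypothetical training corpus.
means = {"nominalization_freq": 0.030, "discourse_marker_freq": 0.045,
         "personal_pronoun_freq": 0.060, "interjection_freq": 0.004}

def linear_shap(x):
    """Exact SHAP values for a linear model with independent features:
    phi_i = w_i * (x_i - E[x_i])."""
    return {f: weights[f] * (x[f] - means[f]) for f in FEATURES}

# One invented document: high nominalization rate, few interjections,
# i.e. a profile the abstract associates with speeches rather than interviews.
doc = {"nominalization_freq": 0.050, "discourse_marker_freq": 0.020,
       "personal_pronoun_freq": 0.030, "interjection_freq": 0.001}

phi = linear_shap(doc)
for f, v in sorted(phi.items(), key=lambda kv: -abs(kv[1])):
    print(f"{f:24s} {v:+.4f}")
```

For Transformer models no such closed form exists, which is why approximate explainers are needed; the linear case is useful mainly as a sanity check on what the attributions should mean.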

Published

2026-03-31

How to Cite

Reyes, J. F. (2026). AI Explainability in Classifying Political Speeches and Interviews. Journal for Language Technology and Computational Linguistics, 39(1), 33–72. https://doi.org/10.21248/jlcl.39.2026.262

Section

Research articles