Measuring the Contributions of Vision and Text Modalities
DOI: https://doi.org/10.21248/jlcl.38.2025.261
Keywords: vision and language, interpretability, explainability
Abstract
This dissertation investigates multimodal transformers that jointly process image and text to generate outputs for various tasks, such as answering questions about images. It develops methods to assess how effectively vision and language models combine, understand, utilize, and explain information from these two modalities. The dissertation advances the field in three ways: (i) by measuring specific, task-independent capabilities of vision and language models; (ii) by interpreting these models to quantify the extent to which they use and integrate information from both modalities; and (iii) by evaluating their ability to provide self-consistent explanations of their outputs to users.
Published: 2025-02-27
Versions:
- 2025-02-27 (2)
- 2025-02-27 (1)
License
Copyright (c) 2025 Letitia Parcalabescu

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.