Document Type
Article
Publication Date
2025
DOI
10.3390/make7040116
Publication Title
Machine Learning and Knowledge Extraction
Volume
7
Issue
4
Article Number
116
Abstract
(1) Background: Comprehensive conceptual models can result in complex artifacts, consisting of many concepts that interact through multiple mechanisms. This complexity can be acceptable and even expected when generating rich models, for instance to support ensuing analyses that identify central concepts or decompose models into parts that can be managed by different actors. However, complexity can become a barrier when the conceptual model is used directly by individuals. A ‘transparent’ model can support learning among stakeholders (e.g., in group model building) and motivate the adoption of specific interventions (i.e., using a model as an evidence base). Although advances in graph-to-text generation with Large Language Models (LLMs) have made it possible to transform conceptual models into textual reports consisting of coherent and faithful paragraphs, turning a large conceptual model into a very lengthy report would only displace the challenge. (2) Methods: We experimentally examine the implications of two possible approaches: asking the text generator to simplify the model, via either abstractive (LLM-based) or extractive summarization, or simplifying the model through graph algorithms and then generating the complete text. (3) Results: We find that the two approaches achieve similar scores on text-based evaluation metrics, including readability and overlap scores (ROUGE, BLEU, METEOR), but faithfulness can be lower when the text generator decides what counts as an interesting fact and is tasked with creating a story. These automated metrics capture textual properties, but they do not assess actual user comprehension, which would require an experimental study with human readers. (4) Conclusions: Our results suggest that graph algorithms may be preferable for supporting modelers in translating scientific models into text while minimizing hallucinations.
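The abstract contrasts two routes from a causal map to text. The sketch below illustrates the second route (simplify the graph first, then verbalize), under explicit assumptions: the causal map is a toy networkx DiGraph, the centrality measure is PageRank, and the cutoff k is arbitrary. It is a minimal illustration of the general technique, not the paper's actual simplification algorithms.

```python
import networkx as nx

# Toy causal map: nodes are concepts, signed edge weights are causal influences.
causal_map = nx.DiGraph()
causal_map.add_weighted_edges_from([
    ("stress", "overeating", 1.0),
    ("overeating", "obesity", 1.0),
    ("obesity", "stress", 1.0),
    ("exercise", "obesity", -1.0),
    ("sleep quality", "stress", -1.0),
])

def simplify(graph: nx.DiGraph, k: int = 3) -> nx.DiGraph:
    """Keep the k most central concepts and the causal links among them."""
    # weight=None treats the graph as unweighted; signed weights would
    # otherwise interfere with PageRank's normalization.
    centrality = nx.pagerank(graph, weight=None)
    top = sorted(centrality, key=centrality.get, reverse=True)[:k]
    return graph.subgraph(top).copy()

reduced = simplify(causal_map)
print(sorted(reduced.edges(data="weight")))
```

Likewise, the overlap metrics named in the results can be illustrated with common packages (rouge-score for ROUGE, nltk for BLEU); METEOR, also mentioned, additionally requires NLTK's WordNet data and is omitted here. The sentences are toy examples, not the paper's data.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

reference = "Stress increases overeating, and overeating increases obesity."
candidate = "Stress raises overeating, which in turn raises obesity."

# ROUGE-1 and ROUGE-L precision/recall/F-scores against the reference text.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
print(scorer.score(reference, candidate))

# Sentence-level BLEU with smoothing (short texts have sparse n-gram overlap).
smooth = SmoothingFunction().method1
print(sentence_bleu([reference.split()], candidate.split(),
                    smoothing_function=smooth))
```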
Rights
© 2025 by the authors.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution 4.0 International (CC BY 4.0) License.
Data Availability
Article states: "Our source code is provided on an open third-party repository at https://osf.io/whqkd, accessed on 26 September 2025."
Original Publication Citation
Gandee, T. J., & Giabbanelli, P. J. (2025). Faithful narratives from complex conceptual models: Should modelers or large language models simplify causal maps? Machine Learning and Knowledge Extraction, 7(4), Article 116. https://doi.org/10.3390/make7040116
ORCID
0000-0001-6816-355X (Giabbanelli)
Repository Citation
Gandee, Tyler J. and Giabbanelli, Philippe J., "Faithful Narratives from Complex Conceptual Models: Should Modelers or Large Language Models Simplify Causal Maps?" (2025). VMASC Publications. 150.
https://digitalcommons.odu.edu/vmasc_pubs/150
Included in
Artificial Intelligence and Robotics Commons, Electrical and Computer Engineering Commons, Theory and Algorithms Commons