Document Type
Conference Paper
Publication Date
2024
Publication Title
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Pages
2787-2797
Conference Name
2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), May 20-25, 2024, Torino, Italy
Abstract
Hypothesis formulation and testing are central to empirical research. A strong hypothesis is a best guess based on existing evidence and informed by a comprehensive view of relevant literature. However, with the exponential increase in the number of scientific articles published annually, manual aggregation and synthesis of evidence related to a given hypothesis is a challenge. Our work explores the ability of current large language models (LLMs) to discern evidence in support or refutation of specific hypotheses based on the text of scientific abstracts. We share a novel dataset for the task of scientific hypothesis evidencing using community-driven annotations of studies in the social sciences. We compare the performance of LLMs to several state-of-the-art methods and highlight opportunities for future research in this area. Our dataset is shared with the research community: https://github.com/Sai90000/ScientificHypothesisEvidencing.git.
Rights
© 2024 ELRA Language Resources Association.
Published under a Creative Commons Attribution 4.0 International License (CC BY 4.0).
Data Availability
Article states: "Our dataset is shared with the research community: https://github.com/Sai90000/ScientificHypothesisEvidencing.git."
Original Publication Citation
Koneru, S., Wu, J., & Rajtmajer, S. (2024). Can large language models discern evidence for scientific hypotheses? Case studies in the social sciences. In N. Calzolari, M.-Y. Kan, V. Hoste, A. Lenci, S. Sakti, & N. Xue (Eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) (pp. 2787-2797). European Language Resources Association. https://aclanthology.org/2024.lrec-main.248/
Repository Citation
Koneru, S., Wu, J., & Rajtmajer, S. (2024). Can large language models discern evidence for scientific hypotheses? Case studies in the social sciences. In N. Calzolari, M.-Y. Kan, V. Hoste, A. Lenci, S. Sakti, & N. Xue (Eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) (pp. 2787-2797). European Language Resources Association. https://aclanthology.org/2024.lrec-main.248/
ORCID
0000-0003-0173-4463 (Wu)
Included in
Artificial Intelligence and Robotics Commons, Scholarly Communication Commons, Scholarly Publishing Commons