Self-Attention Mechanisms as Representations for Gene Interaction Networks in Hypothesis-Driven Gene-Based Transformer Genomics AI Models

Document Type

Conference Paper

Publication Date

2024

DOI

10.1609/aaaiss.v4i1.31813

Publication Title

Proceedings of the AAAI Symposium Series

Volume

4

Issue

1

Pages

334-336

Conference Name

2024 AAAI Fall Symposia, November 7-9, 2024, Arlington, Virginia

Abstract

In this position paper, we propose a framework for hypothesis-driven genomic AI in which self-attention mechanisms in gene-based transformer models represent gene interaction networks. With genes treated as tokens, hypotheses can be introduced into these models as attention masks. Encoding prior knowledge in such masks can bridge the gap between genotypic data and phenotypic observations. By using attention masks as hypotheses that guide model fitting, the proposed framework can assess competing hypotheses and determine which best explains the experimental observations. This framework can enhance the interpretability and predictive power of genomic AI, advancing personalized medicine and promoting healthcare equity.
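The core idea above can be illustrated with a minimal sketch: treat each gene as a token and turn a hypothesized gene interaction network (an adjacency matrix) into a boolean attention mask, so that attention weight flows only between gene pairs the hypothesis allows. All names, the 4-gene toy network, and the NumPy single-head attention below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def masked_self_attention(X, W_q, W_k, W_v, mask):
    """Single-head self-attention over gene tokens with a hypothesis mask.

    X:    (n_genes, d) gene-token embeddings
    mask: (n_genes, n_genes) boolean adjacency of the hypothesized
          gene interaction network (True = interaction allowed)
    """
    d_k = W_k.shape[1]
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(d_k)
    # Block attention between gene pairs the hypothesis does not connect.
    scores = np.where(mask, scores, -1e9)
    # Numerically stable softmax over each row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Hypothetical 4-gene network: e.g. gene 0 interacts with genes 1 and 2.
adj = np.array([[1, 1, 1, 0],
                [1, 1, 0, 0],
                [1, 0, 1, 1],
                [0, 0, 1, 1]], dtype=bool)

rng = np.random.default_rng(0)
n_genes, d = 4, 8
X = rng.standard_normal((n_genes, d))
W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))

out, attn = masked_self_attention(X, W_q, W_k, W_v, adj)
# attn is zero wherever the hypothesis forbids an interaction.
```

Under this view, fitting the model with alternative masks and comparing fit to the phenotypic data is what lets competing network hypotheses be evaluated against each other.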

Rights

Copyright © 2024, Association for the Advancement of Artificial Intelligence. All rights reserved.

"In the returned rights section of the AAAI copyright form, authors are specifically granted back the right to use their own papers for noncommercial uses, such as inclusion in their dissertations or the right to deposit their own papers in their institutional repositories, provided there is proper attribution."

Original Publication Citation

Qin, H. (2024). Self-attention mechanisms as representations for gene interaction networks in hypothesis-driven gene-based transformer genomics AI models. Proceedings of the AAAI Symposium Series, 4(1), 334-336. https://doi.org/10.1609/aaaiss.v4i1.31813
