Document Type
Article
Publication Date
11-2020
DOI
10.1186/s12859-020-3504-z
Publication Title
BMC Bioinformatics
Volume
21
Issue
Suppl 6
Pages
1-12
Abstract
Background: Protein fold recognition is one of the fundamental problems in structural bioinformatics. In this paper, we design a novel deep learning architecture, DeepFrag-k, which identifies fold-discriminative features at the fragment level to improve the accuracy of protein fold recognition. DeepFrag-k is composed of two stages: the first stage employs a multi-modal Deep Belief Network (DBN) to predict the potential structural fragments of a given sequence, represented as a fragment vector; the second stage uses a deep convolutional neural network (CNN) to classify the fragment vector into its corresponding fold.
Results: Our results show that DeepFrag-k yields 92.98% accuracy in predicting the top-100 most popular fragments, which can be used to generate discriminative fragment feature vectors to improve protein fold recognition.
Conclusions: There is a set of fragments that can serve as structural “keywords” distinguishing between major protein folds. The deep learning architecture in DeepFrag-k is able to accurately identify these fragments as structural features to improve protein fold recognition.
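The two-stage pipeline described in the abstract can be outlined as follows. This is only an illustrative NumPy sketch of the data flow: the 100-fragment vocabulary comes from the abstract, while the input feature dimension, number of fold classes, kernel size, and all weights are hypothetical stand-ins, and the simple softmax/convolution layers stand in for the authors' multi-modal DBN and deep CNN, not their actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FRAGMENTS = 100   # top-100 most popular fragments (from the abstract)
N_FOLDS = 10        # hypothetical number of fold classes
SEQ_FEATURES = 64   # hypothetical per-sequence feature dimension


def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()


def stage1_fragment_vector(seq_features, W):
    """Stage 1 stand-in: map sequence features to a probability
    distribution over the 100 candidate structural fragments
    (the paper uses a multi-modal DBN here)."""
    return softmax(W @ seq_features)


def stage2_fold_classifier(frag_vec, conv_kernel, W_out):
    """Stage 2 stand-in: a 1-D convolution over the fragment vector
    followed by a linear read-out (the paper uses a deep CNN)."""
    conv = np.convolve(frag_vec, conv_kernel, mode="valid")
    relu = np.maximum(conv, 0.0)
    return softmax(W_out @ relu)


# Hypothetical random parameters, for shape-checking only.
W1 = rng.normal(size=(N_FRAGMENTS, SEQ_FEATURES))
kernel = rng.normal(size=5)
W_out = rng.normal(size=(N_FOLDS, N_FRAGMENTS - 4))  # "valid" conv shrinks 100 -> 96

seq = rng.normal(size=SEQ_FEATURES)          # per-sequence input features
frag_vec = stage1_fragment_vector(seq, W1)   # (100,) fragment distribution
fold_probs = stage2_fold_classifier(frag_vec, kernel, W_out)  # (10,) fold scores
```

The key design point the sketch illustrates is the decoupling: stage 1 compresses a variable protein sequence into a fixed-length, fold-discriminative fragment vector, which stage 2 can then classify like any fixed-size feature input.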
Original Publication Citation
Elhefnawy, W., Li, M., Wang, J., & Li, Y. (2020). DeepFrag-k: A fragment-based deep learning approach for protein fold recognition. BMC Bioinformatics, 21(Supplement 6), 1-12, Article 203. https://doi.org/10.1186/s12859-020-3504-z
Repository Citation
Elhefnawy, W., Li, M., Wang, J., & Li, Y. (2020). DeepFrag-k: A fragment-based deep learning approach for protein fold recognition. BMC Bioinformatics, 21(Supplement 6), 1-12, Article 203. https://doi.org/10.1186/s12859-020-3504-z
Comments
© The Author(s). 2020 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated in a credit line to the data.