Paper Title
Is Transfer Learning Necessary for Protein Landscape Prediction?
Paper Authors
Paper Abstract
Recently, there has been great interest in learning how to best represent proteins, specifically with fixed-length embeddings. Deep learning has become a popular tool for protein representation learning as a model's hidden layers produce potentially useful vector embeddings. TAPE introduced a number of benchmark tasks and showed that semi-supervised learning, via pretraining language models on a large protein corpus, improved performance on downstream tasks. Two of the tasks (fluorescence prediction and stability prediction) involve learning fitness landscapes. In this paper, we show that CNN models trained solely using supervised learning both compete with and sometimes outperform the best models from TAPE that leverage expensive pretraining on large protein datasets. These CNN models are sufficiently simple and small that they can be trained in a Google Colab notebook. We also find that, on the fluorescence task, linear regression outperforms both our models and the TAPE models. The benchmarking tasks proposed by TAPE are excellent measures of a model's ability to predict protein function and should be used going forward. However, we believe it is important to add simple-model baselines to put the performance of the semi-supervised models reported so far into perspective.
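To make the abstract's claim concrete, the snippet below is a minimal sketch of the kind of small, supervised CNN baseline described above: a 1D convolution over one-hot encoded amino-acid sequences, followed by a global max pool and a linear regression head. This is an illustrative assumption, not the authors' exact architecture; the hyperparameters, the helper names (`one_hot_encode`, `SimpleCNNRegressor`), and the toy training step are all hypothetical, and PyTorch is used only because TAPE ships PyTorch models.

```python
# A minimal sketch (illustrative, not the paper's exact architecture):
# a small supervised 1D CNN that regresses a fitness score directly from
# a one-hot encoded protein sequence. Hyperparameters are assumptions.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 canonical residues
AA_TO_IDX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot_encode(seq: str, max_len: int) -> torch.Tensor:
    """Encode a protein sequence as a (20, max_len) one-hot tensor."""
    x = torch.zeros(len(AMINO_ACIDS), max_len)
    for i, aa in enumerate(seq[:max_len]):
        x[AA_TO_IDX[aa], i] = 1.0
    return x

class SimpleCNNRegressor(nn.Module):
    """Conv over sequence positions -> global max pool -> linear head."""
    def __init__(self, n_filters: int = 128, kernel_size: int = 5):
        super().__init__()
        self.conv = nn.Conv1d(len(AMINO_ACIDS), n_filters,
                              kernel_size, padding=kernel_size // 2)
        self.head = nn.Linear(n_filters, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.conv(x))     # (batch, n_filters, max_len)
        h = h.max(dim=-1).values         # pool over positions -> (batch, n_filters)
        return self.head(h).squeeze(-1)  # scalar prediction per sequence

# One supervised training step on a toy example; the sequence fragment and
# label are dummies (TAPE's fluorescence task uses ~238-residue GFP variants
# with log-fluorescence labels).
model = SimpleCNNRegressor()
x = torch.stack([one_hot_encode("MSKGEELFTG", max_len=238)])
y = torch.tensor([3.5])
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
```

A model of this size has on the order of only tens of thousands of parameters and requires no pretraining corpus, which is the sense in which the abstract notes such baselines train comfortably in a Google Colab notebook.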