arXiv submission date: 2026-02-25
📄 Abstract - From Words to Amino Acids: Does the Curse of Depth Persist?

Protein language models (PLMs) have become widely adopted as general-purpose models, demonstrating strong performance in protein engineering and de novo design. Like large language models (LLMs), they are typically trained as deep transformers with next-token or masked-token prediction objectives on massive sequence corpora and are scaled by increasing model depth. Recent work on autoregressive LLMs has identified the Curse of Depth: later layers contribute little to the final output predictions. These findings naturally raise the question of whether a similar depth inefficiency also appears in PLMs, where many widely used models are not autoregressive, and some are multimodal, accepting both protein sequence and structure as input. In this work, we present a depth analysis of six popular PLMs across model families and scales, spanning three training objectives, namely autoregressive, masked, and diffusion, and quantify how layer contributions evolve with depth using a unified set of probing- and perturbation-based measurements. Across all models, we observe consistent depth-dependent patterns that extend prior findings on LLMs: later layers depend less on earlier computations and mainly refine the final output distribution, and these effects are increasingly pronounced in deeper models. Taken together, our results suggest that PLMs exhibit a form of depth inefficiency, motivating future work on more depth-efficient architectures and training methods.
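The perturbation-based layer-contribution idea can be illustrated with a toy sketch: ablate one residual layer at a time and measure how much the final representation changes. This is only a minimal illustration under assumed simplifications (a random residual stack in NumPy standing in for a transformer, and 1 minus cosine similarity as the impact score); it is not the paper's actual measurement suite.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a stack of residual layers h -> h + tanh(h @ W_i),
# standing in for transformer blocks (hypothetical, for illustration).
DIM, DEPTH = 16, 8
weights = [rng.normal(scale=0.2, size=(DIM, DIM)) for _ in range(DEPTH)]

def forward(x, skip=None):
    """Run the residual stack, optionally skipping one layer."""
    h = x
    for i, W in enumerate(weights):
        if i == skip:
            continue
        h = h + np.tanh(h @ W)  # residual block
    return h

x = rng.normal(size=DIM)
full = forward(x)

def layer_impact(i):
    """Impact of layer i: 1 - cosine similarity between the full
    output and the output with layer i ablated."""
    out = forward(x, skip=i)
    cos = out @ full / (np.linalg.norm(out) * np.linalg.norm(full))
    return 1.0 - cos

impacts = [layer_impact(i) for i in range(DEPTH)]
for i, s in enumerate(impacts):
    print(f"layer {i}: impact {s:.4f}")
```

In this framing, a low impact score for a late layer means ablating it barely moves the final representation, which is the kind of depth inefficiency the paper probes with its unified measurements.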

Top-level tags: biology, model evaluation, machine learning
Detailed tags: protein language models, model depth, transformer analysis, layer contributions, depth inefficiency

From Words to Amino Acids: Does the Curse of Depth Persist?


1️⃣ One-sentence summary

This paper finds that in protein language models, the deeper layers contribute only a limited amount to the final predictions, exhibiting a "Curse of Depth" similar to that observed in large language models, and suggests that future work should design more depth-efficient model architectures.

Source: arXiv:2602.21750