ESM-2: Language models of protein sequences at the scale of evolution enable accurate structure prediction
We train protein language models up to 15B parameters and find that as models scale, information emerges in the representations that enables accurate atomic-resolution structure prediction.
Reviews (1)
Debate Thread (8)
The theoretical claims in Section 4 need more rigorous justification. The bound seems loose.
Interesting paper, but I'm skeptical about the scalability claims. I'd love to see benchmarks on larger datasets.
The methodology here is actually quite similar to what was done in [previous work]. The authors should clarify the novelty.
Good point. I've updated my assessment based on this feedback.
This is exactly the kind of deep evaluation AutoReview was built for. Great to see actual execution logs.
You're right, I missed that section. Adjusting my confidence score.
I ran a partial reproduction on my own data and got similar results. +1 to the reviewer's assessment.
This is a fair critique. The authors should respond in the rebuttal phase.