GenomeQA: Benchmarking General Large Language Models for Genome Sequence Understanding
1️⃣ One-sentence summary
This paper introduces GenomeQA, a new benchmark designed to evaluate how well general-purpose large language models handle raw genome sequences directly. It finds that these models can exploit local features in the sequences, but perform poorly on tasks that require complex reasoning.
Large Language Models (LLMs) are increasingly adopted as conversational assistants in genomics, where they are mainly used to reason over biological knowledge, annotations, and analysis outputs through natural language interfaces. However, existing benchmarks either focus on specialized DNA models trained for sequence prediction or evaluate biological knowledge using text-only questions, leaving the behavior of general-purpose LLMs when directly exposed to raw genome sequences underexplored. We introduce GenomeQA, a benchmark designed to provide a controlled evaluation setting for general-purpose LLMs on sequence-based genome inference tasks. GenomeQA comprises 5,200 samples drawn from multiple biological databases, with sequence lengths ranging from 6 to 1,000 base pairs (bp), spanning six task families: Enhancer and Promoter Identification, Splice Site Identification, Taxonomic Classification, Histone Mark Prediction, Transcription Factor Binding Site Prediction, and TF Motif Prediction. Across six frontier LLMs, we find that models consistently outperform random baselines and can exploit local sequence signals such as GC content and short motifs, while performance degrades on tasks that require more indirect or multi-step inference over sequence patterns. GenomeQA establishes a diagnostic benchmark for studying and improving the use of general-purpose LLMs on raw genomic sequences.
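The abstract notes that models succeed mainly by exploiting local sequence signals such as GC content and short motifs. As a minimal illustrative sketch (not code from the paper), these two signals can be computed directly from a raw DNA string:

```python
def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a DNA sequence (a simple local signal)."""
    seq = seq.upper()
    if not seq:
        return 0.0
    return (seq.count("G") + seq.count("C")) / len(seq)

def count_motif(seq: str, motif: str) -> int:
    """Count possibly overlapping occurrences of a short motif."""
    seq, motif = seq.upper(), motif.upper()
    return sum(
        1
        for i in range(len(seq) - len(motif) + 1)
        if seq[i : i + len(motif)] == motif
    )

# Example: a TATA-box-like motif in a short promoter-like fragment
fragment = "GCGCTATAAAGC"
print(gc_content(fragment))        # fraction of G/C bases
print(count_motif(fragment, "TATA"))
```

Signals like these are computable from surface statistics alone; tasks in GenomeQA that require multi-step inference over sequence patterns go beyond what such shallow features capture, which is where the evaluated models degrade.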
Source: arXiv: 2604.05774