AmharicIR+Instr: A Two-Dataset Resource for Neural Retrieval and Instruction Tuning
1️⃣ One-sentence summary
This paper releases a two-dataset resource for the low-resource language Amharic: a query-document triplet dataset for training and evaluating neural retrieval models, and a prompt-response pair dataset for instruction-following text generation, aiming to support retrieval and generative-model research for the language.
Neural retrieval and GPT-style generative models rely on large, high-quality supervised data, which remains scarce for low-resource languages such as Amharic. We release an Amharic resource consisting of two datasets that support research on (i) neural retrieval-ranking and (ii) instruction-following text generation. The retrieval-ranking dataset contains 1,091 manually verified query-positive-negative document triplets drawn from diverse Amharic sources and constructed to support contrastive training and benchmarking of neural retrievers (e.g., DPR, ColBERT-style late interaction, and SPLADE-style sparse neural retrieval). Triplets are created through a combination of expert-curated queries, web-derived queries, and LLM-assisted generation, with positive and negative documents selected from the web or synthesized by LLMs and then validated by native speakers. The instruction prompt-response dataset comprises 6,285 Amharic prompt-response pairs spanning multiple domains and instruction types, generated with several LLMs and refined through manual review and correction for grammaticality, relevance, fluency, and factual plausibility. We release both datasets with standardized splits and formats (CSV, JSON, JSONL) to enable reproducible work on Amharic retrieval, ranking, and generative modelling. The datasets also come with a construction methodology that can be generalized to other low-resource languages.
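To illustrate the triplet structure described above, here is a minimal sketch of parsing one JSONL record into a (query, positive, negative) tuple for contrastive training. The field names (`query`, `positive`, `negative`) are assumptions for illustration; the released schema may differ.

```python
import json

# A hypothetical JSONL record mimicking the query-positive-negative
# triplet format described in the abstract. Field names are assumed,
# not taken from the released dataset.
sample_jsonl = (
    '{"query": "የኢትዮጵያ ዋና ከተማ ማን ናት?", '
    '"positive": "አዲስ አበባ የኢትዮጵያ ዋና ከተማ ናት።", '
    '"negative": "ናይሮቢ የኬንያ ዋና ከተማ ናት።"}\n'
)

def load_triplets(text):
    """Parse JSONL lines into (query, positive, negative) tuples."""
    triplets = []
    for line in text.splitlines():
        if not line.strip():
            continue  # skip blank lines
        rec = json.loads(line)
        triplets.append((rec["query"], rec["positive"], rec["negative"]))
    return triplets

triplets = load_triplets(sample_jsonl)
print(len(triplets))  # → 1
```

Tuples in this form feed directly into standard triplet or InfoNCE contrastive losses, where the query is the anchor and the positive/negative documents are the contrasted pair.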
Source: arXiv: 2602.09914