arXiv submission date: 2026-02-24
📄 Abstract - SibylSense: Adaptive Rubric Learning via Memory Tuning and Adversarial Probing

Designing aligned and robust rewards for open-ended generation remains a key barrier to RL post-training. Rubrics provide structured, interpretable supervision, but scaling rubric construction is difficult: expert rubrics are costly, prompted rubrics are often superficial or inconsistent, and fixed-pool discriminative rubrics can saturate and drift, enabling reward hacking. We present SibylSense, an inference-time learning approach that adapts a frozen rubric generator through a tunable memory bank of validated rubric items. Memory is updated via verifier-based item rewards measured by reference-candidate answer discriminative gaps from a handful of examples. SibylSense alternates memory tuning with a rubric-adversarial policy update that produces rubric-satisfying candidate answers, shrinking discriminative gaps and driving the rubric generator to capture new quality dimensions. Experiments on two open-ended tasks show that SibylSense yields more discriminative rubrics and improves downstream RL performance over static and non-adaptive baselines.
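The abstract describes memory updates driven by verifier-based item rewards, measured as the discriminative gap between a rubric item's score on a reference answer and on a policy-generated candidate. A minimal sketch of that idea, assuming a simple item-level `score_fn` and a gap `threshold` (both hypothetical; the paper's actual verifier and scoring details are not given here):

```python
# Hedged sketch of the memory-tuning step described in the abstract.
# All names and thresholds are illustrative assumptions, not the paper's API.

def discriminative_gap(score_fn, item, reference, candidate):
    """Gap between a rubric item's score on the reference answer
    and its score on a policy-generated candidate answer."""
    return score_fn(item, reference) - score_fn(item, candidate)

def tune_memory(memory, proposed_items, examples, score_fn, threshold=0.2):
    """Validate proposed rubric items against a handful of
    (reference, candidate) examples; keep discriminative ones."""
    for item in proposed_items:
        gaps = [discriminative_gap(score_fn, item, ref, cand)
                for ref, cand in examples]
        avg_gap = sum(gaps) / len(gaps)
        if avg_gap >= threshold:
            memory.append((item, avg_gap))  # validated item enters the bank
    # Drop items whose gap has collapsed, e.g. after the adversarial
    # policy update produces candidates that already satisfy them.
    return [(it, g) for it, g in memory if g >= threshold]
```

In the full loop sketched by the abstract, this tuning step would alternate with a policy update that raises candidate scores, shrinking the gaps and forcing the frozen rubric generator's memory to shift toward new quality dimensions.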

Top-level tags: llm model training agents
Detailed tags: reinforcement learning reward design rubric learning memory tuning adversarial probing

SibylSense: Adaptive Rubric Learning via Memory Tuning and Adversarial Probing


1️⃣ One-sentence summary

This paper proposes SibylSense, a method that dynamically learns and refines evaluation rubrics at inference time. Through an updatable memory bank, it generates scoring criteria that are more precise and harder to game, improving AI performance on open-ended generation tasks.

Source: arXiv 2602.20751