arXiv submission date: 2026-04-23
📄 Abstract - On Reasoning Behind Next Occupation Recommendation

In this work, we develop a novel reasoning approach to enhance the performance of large language models (LLMs) in future occupation prediction. In this approach, a reason generator first derives a "reason" for a user from his/her past education and career history. The reason summarizes the user's preferences and serves as the input to an occupation predictor that recommends the user's next occupation. This two-step occupation prediction approach is, however, non-trivial, as LLMs are not aligned with career paths or with the unobserved reasons behind each occupation decision. We therefore propose to fine-tune LLMs to improve their reasoning and occupation prediction performance. We first derive high-quality oracle reasons, as measured by factuality, coherence, and utility criteria, using an LLM-as-a-Judge. These oracle reasons are then used to fine-tune small LLMs to perform reason generation and next occupation prediction. Our extensive experiments show that: (a) our approach effectively enhances LLMs' accuracy in next occupation prediction, making them comparable to fully supervised methods and superior to unsupervised methods; (b) a single LLM fine-tuned to perform both reason generation and occupation prediction outperforms two LLMs fine-tuned to perform the tasks separately; and (c) next occupation prediction accuracy depends on the quality of the generated reasons. Our code is available at this https URL.
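The pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: every function name, the scoring threshold, and the stub return values are hypothetical stand-ins for what would be calls to fine-tuned LLMs in practice.

```python
def generate_reason(history: list[str]) -> str:
    """Stub for the reason generator: summarizes a user's education and
    career history into a preference statement (an LLM call in practice)."""
    return f"Prefers roles building on: {', '.join(history)}"


def judge_reason(reason: str) -> dict[str, float]:
    """Stub for the LLM-as-a-Judge step: scores a candidate reason on the
    factuality / coherence / utility criteria named in the abstract."""
    return {"factuality": 1.0, "coherence": 1.0, "utility": 1.0}


def is_oracle_quality(scores: dict[str, float], threshold: float = 0.8) -> bool:
    """A reason is kept as an 'oracle reason' only if every criterion
    clears the (hypothetical) threshold."""
    return all(score >= threshold for score in scores.values())


def predict_next_occupation(history: list[str], reason: str) -> str:
    """Stub for the occupation predictor: conditions on the history plus
    the generated reason (an LLM call in practice)."""
    return "senior data scientist"  # placeholder prediction


# Two-step inference: generate a reason, filter it, then predict.
history = ["BSc Computer Science", "data analyst", "ML engineer"]
reason = generate_reason(history)
if is_oracle_quality(judge_reason(reason)):
    print(predict_next_occupation(history, reason))
```

At training time, only reasons passing the judge filter would be used as oracle targets for fine-tuning the small reason-generation and prediction models; at inference time, the generator and predictor run in sequence as above.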

Top tags: llm machine learning agents
Detailed tags: reasoning occupation prediction fine-tuning llm-as-a-judge career modeling

On Reasoning Behind Next Occupation Recommendation


1️⃣ One-sentence summary

This paper proposes a two-step reasoning approach in which a large language model first generates a reason for a user's occupation choice and then predicts the next occupation based on it; by fine-tuning small models and using an AI judge to select high-quality reasons, the approach significantly improves occupation prediction accuracy.

Source: arXiv 2604.21204