arXiv submission date: 2026-05-13
📄 Abstract - Children's English Reading Story Generation via Supervised Fine-Tuning of Compact LLMs with Controllable Difficulty and Safety

Large Language Models (LLMs) are widely applied in educational practices, such as for generating children's stories. However, the generated stories are often too difficult for children to read, and the operational cost of LLMs hinders their widespread adoption in educational settings. We used an existing expert-designed children's reading curriculum and its corresponding generated stories from GPT-4o and Llama 3.3 70B to design different experiments for fine-tuning three 8B-parameter LLMs, which then generated new English reading stories that were subjected to quantitative and qualitative evaluation. Our method prioritizes controllability over scale, enabling educators to target reading levels and error patterns with a compact, affordable model. Our evaluation results show that with appropriate fine-tuning designs, children's English reading stories generated by 8B LLMs perform better on difficulty-related metrics than those from zero-shot GPT-4o and Llama 3.3 70B, with almost no discernible safety issues. Such fine-tuned LLMs could be more broadly used by teachers, parents, and children in classrooms and at home to generate engaging English reading stories aligned with children's interests, with controllable difficulty and safety.

Top-level tags: llm education natural language processing
Detailed tags: story generation reading difficulty controllable generation safety fine-tuning

Children's English Reading Story Generation via Supervised Fine-Tuning of Compact LLMs with Controllable Difficulty and Safety


1️⃣ One-Sentence Summary

By supervised fine-tuning three compact 8B-parameter large language models on an existing expert-designed children's reading curriculum and the corresponding stories generated by GPT-4o and Llama 3.3 70B, this paper enables these small models to generate children's English reading stories with controllable difficulty and high safety, even outperforming the zero-shot output of the larger models, offering schools and families a low-cost, practical solution.
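The paper evaluates generated stories on "difficulty-related metrics." As a minimal illustration of one standard readability metric of this kind (the paper's exact evaluation metrics are not specified here), the sketch below computes the Flesch-Kincaid Grade Level using a simple vowel-group syllable heuristic:

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, subtracting a silent final 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1  # treat trailing 'e' as silent
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    Lower scores indicate text readable by younger children."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

For example, a short simple sentence such as "The cat sat on the mat." scores well below a sentence built from long academic words, which is the kind of gap such metrics are meant to capture when comparing model outputs against a target reading level.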

Source: arXiv 2605.13709