arXiv submission date: 2026-05-05
📄 Abstract - Distribution-Free Pretraining of Classification Losses via Evolutionary Dynamics

We propose Evolutionary Dynamic Loss (EDL), a framework that learns a transferable classification loss in the probability space using unlimited synthetic prediction-label pairs, without accessing real samples during the main loss pretraining stage. EDL parameterizes the loss as a lightweight network and is trained with a semantics-free ranking-consistency objective that assigns larger penalties for more erroneous predictions. To robustly explore the space of loss functions, we optimize EDL via an evolutionary strategy and introduce chaotic mutation to improve exploration under noisy fitness evaluations. Experiments on CIFAR-10 with ResNet backbones show that EDL can serve as a drop-in replacement for cross-entropy and achieves competitive or improved accuracy, while ablation studies confirm that chaotic mutation yields faster convergence and better synthetic pretraining metrics than standard Gaussian mutation.
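The abstract describes parameterizing the loss as a lightweight network over probability-space inputs and training it with a semantics-free ranking-consistency objective on unlimited synthetic prediction-label pairs. A minimal sketch of that idea, assuming a tiny MLP and Dirichlet-sampled synthetic predictions (the architecture, sizes, and sampling scheme are illustrative assumptions, not the paper's exact design):

```python
import numpy as np

rng = np.random.default_rng(0)

class LossNet:
    """Hypothetical lightweight loss network: a tiny MLP mapping a
    (prediction, one-hot label) pair in probability space to a scalar loss.
    Hidden width and init are illustrative choices."""
    def __init__(self, n_classes=10, hidden=16):
        self.W1 = rng.normal(0, 0.5, (2 * n_classes, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.5, hidden)

    def __call__(self, p, y):
        h = np.tanh(np.concatenate([p, y]) @ self.W1 + self.b1)
        return float(h @ self.W2)  # scalar loss value

def ranking_consistency_fitness(net, n_classes=10, n_pairs=200):
    """Semantics-free objective on purely synthetic pairs: for a random
    label, the learned loss should penalize a worse prediction more than
    a better one. Returns the fraction of correctly ranked pairs."""
    correct = 0
    for _ in range(n_pairs):
        y = np.eye(n_classes)[rng.integers(n_classes)]
        # Two synthetic predictions; 'good' concentrates mass on the true class.
        good = rng.dirichlet(np.ones(n_classes) + 5 * y)
        bad = rng.dirichlet(np.ones(n_classes))
        if (good * y).sum() < (bad * y).sum():
            good, bad = bad, good  # ensure 'good' really is the better one
        correct += net(bad, y) > net(good, y)
    return correct / n_pairs
```

No real samples appear anywhere in this fitness evaluation, which is what makes the pretraining stage distribution-free: only the label-conditional ranking structure is supervised.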

Top-level tag: machine learning model training
Detailed tags: loss function learning, evolutionary strategy, classification, pretraining, synthetic data

Distribution-Free Pretraining of Classification Losses via Evolutionary Dynamics


1️⃣ One-sentence summary

This paper proposes a method called EDL that, without relying on real data, uses an evolutionary algorithm to automatically learn a lightweight classification loss function that can serve as a drop-in replacement for the traditional cross-entropy loss, improving model accuracy and speeding up training convergence.
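The loss network above is optimized by an evolutionary strategy with chaotic mutation rather than gradients. A common way to realize chaotic mutation is the logistic map; the sketch below pairs it with a toy (1+1)-ES on a stand-in fitness. The map choice, step size, and loop structure are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def logistic_map(x):
    # Classic chaotic sequence generator; r = 4 gives fully chaotic dynamics on (0, 1).
    return 4.0 * x * (1.0 - x)

def chaotic_mutate(theta, chaos_state, sigma=0.1):
    """Perturb parameters with a chaotic sequence instead of Gaussian noise.
    Mapping the logistic-map value into [-1, 1] is an illustrative choice."""
    new_state = logistic_map(chaos_state)
    return theta + sigma * (2.0 * new_state - 1.0), new_state

def toy_fitness(theta):
    # Stand-in for the noisy synthetic ranking-consistency fitness (higher is better).
    return -np.sum(theta ** 2)

theta = np.full(5, 2.0)
state = np.full(5, 0.37)  # per-parameter chaos state, away from fixed points
best = toy_fitness(theta)
for _ in range(500):
    cand, state = chaotic_mutate(theta, state)  # state advances even on rejection
    f = toy_fitness(cand)
    if f > best:  # (1+1)-ES: keep the candidate only if it improves fitness
        theta, best = cand, f
```

Because the chaos state advances on every iteration regardless of acceptance, the mutation sequence never repeats itself the way a stuck deterministic perturbation would, which is one intuition for why chaotic mutation can explore better under noisy fitness evaluations.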

Source: arXiv: 2605.03722