Lightweight User-Personalization Method for Closed Split Computing
1️⃣ One-Sentence Summary
This paper proposes SALT, a lightweight adaptive framework that deploys a tiny client-side adapter to refine intermediate data. Without modifying the original model or adding communication overhead, it effectively improves the inference performance of Split Computing systems across scenarios such as user personalization, unreliable communication, and privacy protection.
Split Computing enables collaborative inference between edge devices and the cloud by partitioning a deep neural network into an edge-side head and a server-side tail, reducing latency and limiting exposure of raw input data. However, inference performance often degrades in practical deployments due to user-specific data distribution shifts, unreliable communication, and privacy-oriented perturbations, especially in closed environments where model architectures and parameters are inaccessible. To address this challenge, we propose SALT (Split-Adaptive Lightweight Tuning), a lightweight adaptation framework for closed Split Computing systems. SALT introduces a compact client-side adapter that refines intermediate representations produced by a frozen head network, enabling effective model adaptation without modifying the head or tail networks or increasing communication overhead. By modifying only the training conditions, SALT supports multiple adaptation objectives, including user personalization, communication robustness, and privacy-aware inference. Experiments using ResNet-18 on CIFAR-10 and CIFAR-100 show that SALT achieves higher accuracy than conventional retraining and fine-tuning while significantly reducing training cost. On CIFAR-10, SALT improves personalized accuracy from 88.1% to 93.8% while reducing training latency by more than 60%. SALT also maintains over 90% accuracy under 75% packet loss and preserves high accuracy (about 88% at σ = 1.0) under noise injection. These results demonstrate that SALT provides an efficient and practical adaptation framework for real-world Split Computing systems.
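The core idea in the abstract, a trainable adapter inserted between a frozen head and a frozen tail, refining intermediate features without changing their shape, can be sketched in a toy NumPy example. This is not the paper's implementation (SALT's actual networks are ResNet-18 halves and its adapter design is not reproduced here); the head, tail, shapes, and the "feature shift" personalization objective below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen head (edge side) and tail (server side) -- toy stand-ins for the
# two halves of a split network. Neither is updated during adaptation.
W_head = rng.standard_normal((4, 4))
W_tail = rng.standard_normal((2, 4))

def head(x):
    return np.tanh(x @ W_head.T)   # intermediate features sent to the server

def tail(z):
    return z @ W_tail.T            # server-side logits

# SALT-style adapter: a tiny trainable residual map on the intermediate
# features. Its output has the same shape as its input, so the volume of
# data sent to the server is unchanged.
A = np.zeros((4, 4))
b = np.zeros(4)

def adapt(z):
    return z + z @ A.T + b

# Toy "user personalization" condition (an assumption for this sketch):
# the user's local data shifts the intermediate features by a constant
# bias, which the adapter must learn to undo.
shift = 0.5 * rng.standard_normal(4)
x = rng.standard_normal((64, 4))
z_clean = head(x)
z_user = z_clean + shift           # features observed on the user's data
target = tail(z_clean)             # outputs we want to recover

loss_init = float(np.mean((tail(z_user) - target) ** 2))

# Train ONLY the adapter: gradients flow through the frozen tail weights
# but never update them, mirroring the closed-system constraint.
lr = 0.02
for _ in range(1000):
    err = tail(adapt(z_user)) - target       # (64, 2)
    grad_z = err @ W_tail                    # backprop through frozen tail
    A -= lr * (grad_z.T @ z_user) / len(x)
    b -= lr * grad_z.mean(axis=0)

loss_final = float(np.mean((tail(adapt(z_user)) - target) ** 2))
```

The key property the sketch demonstrates is that adaptation touches only the adapter parameters (`A`, `b`): the head and tail stay byte-for-byte frozen, and `adapt(z)` preserves the feature shape, so no extra communication is introduced. Swapping the training condition (here, a feature shift) for packet loss or noise injection would target the other objectives the abstract lists.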
Source: arXiv:2603.14958