📄 Abstract - Prime Once, then Reprogram Locally: An Efficient Alternative to Black-Box Service Model Adaptation
Adapting closed-box service models (i.e., APIs) for target tasks typically relies on reprogramming via Zeroth-Order Optimization (ZOO). However, this standard strategy requires extensive, costly API calls and often suffers from slow, unstable optimization. Furthermore, we observe that this paradigm faces new challenges with modern APIs (e.g., GPT-4o): these models can be less sensitive to the input perturbations ZOO relies on, hindering performance gains. To address these limitations, we propose an Alternative efficient Reprogramming approach for Service models (AReS). Instead of direct, continuous closed-box optimization, AReS initiates a single-pass interaction with the service API to prime an amenable local pre-trained encoder. This priming stage trains only a lightweight layer on top of the local encoder, making it highly receptive to the subsequent glass-box (white-box) reprogramming stage performed directly on the local model. Consequently, all subsequent adaptation and inference rely solely on this local proxy, eliminating all further API costs. Experiments demonstrate AReS's effectiveness where prior ZOO-based methods struggle: on GPT-4o, AReS achieves a +27.8% gain over the zero-shot baseline, a setting in which ZOO-based methods provide little to no improvement. Broadly, across ten diverse datasets, AReS outperforms state-of-the-art methods (+2.5% for VLMs, +15.6% for standard VMs) while reducing API calls by over 99.99%. AReS thus provides a robust and practical solution for adapting modern closed-box models.
Prime Once, then Reprogram Locally: An Efficient Alternative to Black-Box Service Model Adaptation
1️⃣ One-Sentence Summary
This paper proposes a new method called AReS, which calls an external AI service (e.g., GPT-4o) only once to "prime" a local model; all subsequent task adaptation and inference then run on that local model, delivering large performance gains while almost entirely avoiding expensive, slow API calls.
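The two-stage pipeline described above can be sketched in miniature. This is a toy illustration, not the paper's implementation: `service_api` stands in for the closed-box model (queried exactly once), `local_encoder` for the frozen pre-trained encoder, least squares for training the lightweight priming layer, and hand-written backprop for the glass-box reprogramming stage. All names and dimensions are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_FEAT, N_CLS = 8, 16, 3

W_api = rng.normal(size=(D_IN, N_CLS))      # hidden inside the service
def service_api(x):                          # closed box: returns logits only
    return x @ W_api

W_enc = rng.normal(size=(D_IN, D_FEAT))      # frozen local encoder
def local_encoder(x):
    return np.tanh(x @ W_enc)

# Stage 1 -- priming: a single API call on a probe batch, then fit a
# lightweight head so that encoder + head mimics the service outputs.
X_probe = rng.normal(size=(64, D_IN))
Y_api = service_api(X_probe)                 # the ONLY API call
F_probe = local_encoder(X_probe)
head, *_ = np.linalg.lstsq(F_probe, Y_api, rcond=None)

def local_proxy(x):                          # all later work uses this proxy
    return local_encoder(x) @ head

# Stage 2 -- local glass-box reprogramming: learn an additive input
# perturbation `delta` with exact gradients through the local proxy
# (no zeroth-order estimation, no further API calls).
X_task = rng.normal(size=(32, D_IN))
y_task = rng.integers(0, N_CLS, size=32)
onehot = np.eye(N_CLS)[y_task]
delta = np.zeros(D_IN)

def loss_and_grad(d):
    F = np.tanh((X_task + d) @ W_enc)
    logits = F @ head
    logits -= logits.max(axis=1, keepdims=True)
    P = np.exp(logits); P /= P.sum(axis=1, keepdims=True)
    loss = -np.log(P[np.arange(len(y_task)), y_task] + 1e-12).mean()
    G_logits = (P - onehot) / len(y_task)    # backprop by hand
    G_z = (G_logits @ head.T) * (1.0 - F**2) # tanh derivative
    return loss, (G_z @ W_enc.T).sum(axis=0)

loss0, _ = loss_and_grad(delta)
for _ in range(300):
    _, g = loss_and_grad(delta)
    delta -= 0.05 * g
loss_final, _ = loss_and_grad(delta)
```

The key contrast with ZOO is in Stage 2: because the proxy is local, the perturbation is optimized with exact gradients rather than noisy zeroth-order estimates, and the expensive service is never queried again after the single priming pass.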