arXiv submission date: 2026-04-07
📄 Abstract - Visual prompting reimagined: The power of the Activation Prompts

Visual prompting (VP) has emerged as a popular method to repurpose pretrained vision models for adaptation to downstream tasks. Unlike conventional model fine-tuning techniques, VP introduces a universal perturbation directly into the input data to facilitate task-specific fine-tuning rather than modifying model parameters. However, there exists a noticeable performance gap between VP and conventional fine-tuning methods, highlighting an unexplored realm in theory and practice to understand and advance the input-level VP to reduce its current performance gap. Towards this end, we introduce a generalized concept, termed activation prompt (AP), which extends the scope of the input-level VP by enabling universal perturbations to be applied to activation maps within the intermediate layers of the model. By using AP to revisit the problem of VP and employing it as an analytical tool, we demonstrate the intrinsic limitations of VP in both performance and efficiency, revealing why input-level prompting may lack effectiveness compared to AP, which exhibits a model-dependent layer preference. We show that AP is closely related to normalization tuning in convolutional neural networks and vision transformers, although each model type has distinct layer preferences for prompting. We also theoretically elucidate the rationale behind such a preference by analyzing global features across layers. Through extensive experiments across 29 datasets and various model architectures, we provide a comprehensive performance analysis of AP, comparing it with VP and parameter-efficient fine-tuning baselines. Our results demonstrate AP's superiority in both accuracy and efficiency, considering factors such as time, parameters, memory usage, and throughput.
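The core idea of an activation prompt, as described above, is to add a single learnable perturbation to an intermediate activation map while the pretrained weights stay frozen. A minimal sketch of that mechanism, using a hypothetical two-layer toy network in NumPy (the weights, shapes, and `forward` helper are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" weights of a toy two-layer network (4 -> 8 -> 3).
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((3, 8))

def forward(x, activation_prompt=None):
    """Forward pass; optionally add a universal perturbation to the
    intermediate activation map (the activation-prompt idea)."""
    h = np.maximum(W1 @ x, 0.0)       # intermediate activation (ReLU)
    if activation_prompt is not None:
        h = h + activation_prompt     # same perturbation for every input
    return W2 @ h

x = rng.standard_normal(4)
delta = np.zeros(8)                   # the learnable prompt; only delta is trained
baseline = forward(x)
prompted = forward(x, activation_prompt=delta)
assert np.allclose(baseline, prompted)  # a zero prompt leaves the model unchanged
```

In contrast, input-level VP would add the perturbation to `x` before the first layer; AP generalizes this by choosing which layer's activation receives the prompt, which is where the paper's layer-preference analysis applies.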

Top-level tags: computer vision, model training, machine learning
Detailed tags: visual prompting, activation prompts, fine-tuning, vision transformers, parameter efficiency

Visual prompting reimagined: The power of the Activation Prompts


1️⃣ One-sentence summary

This paper proposes a new method called the activation prompt, which adds universal perturbations to the activation maps of a model's intermediate layers. It significantly improves the performance and efficiency of visual prompting, outperforming conventional input-level visual prompting and parameter-efficient fine-tuning methods, and its advantages are validated across a variety of models and datasets.

Source: arXiv 2604.06440