Paper Title
RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning
Paper Authors
Paper Abstract
Prompting has shown impressive success in enabling large pretrained language models (LMs) to perform diverse NLP tasks, especially when only few downstream data are available. Automatically finding the optimal prompt for each task, however, is challenging. Most existing work resorts to tuning soft prompt (e.g., embeddings) which falls short of interpretability, reusability across LMs, and applicability when gradients are not accessible. Discrete prompt, on the other hand, is difficult to optimize, and is often created by "enumeration (e.g., paraphrasing)-then-selection" heuristics that do not explore the prompt space systematically. This paper proposes RLPrompt, an efficient discrete prompt optimization approach with reinforcement learning (RL). RLPrompt formulates a parameter-efficient policy network that generates the desired discrete prompt after training with reward. To overcome the complexity and stochasticity of reward signals by the large LM environment, we incorporate effective reward stabilization that substantially enhances the training efficiency. RLPrompt is flexibly applicable to different types of LMs, such as masked (e.g., BERT) and left-to-right models (e.g., GPTs), for both classification and generation tasks. Experiments on few-shot classification and unsupervised text style transfer show superior performance over a wide range of existing finetuning or prompting methods. Interestingly, the resulting optimized prompts are often ungrammatical gibberish text; and surprisingly, those gibberish prompts are transferrable between different LMs to retain significant performance, indicating LM prompting may not follow human language patterns.
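The abstract describes the method at a high level: a parameter-efficient policy network samples discrete prompt tokens, and the policy is trained with a stabilized reward computed by the frozen task LM. Below is a minimal, illustrative sketch of that idea using REINFORCE with a running-mean baseline as a simple stand-in for reward stabilization. The names (PromptPolicy, task_reward), the toy vocabulary, and the placeholder reward are assumptions made for illustration only and are not the paper's actual implementation.

# Minimal sketch: a small policy network samples discrete prompt tokens and is
# updated with REINFORCE on a (stabilized) reward. In RLPrompt the reward would
# come from querying the frozen task LM with the generated prompt; here a toy
# placeholder reward keeps the example self-contained.

import torch
import torch.nn as nn

VOCAB_SIZE = 50   # toy prompt vocabulary (the real setting uses the LM's vocabulary)
PROMPT_LEN = 5    # number of discrete prompt tokens to generate
HIDDEN = 64
BOS_ID = VOCAB_SIZE  # extra id used to start generation


class PromptPolicy(nn.Module):
    """Small autoregressive policy that emits a sequence of prompt-token ids."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE + 1, HIDDEN)  # +1 for BOS
        self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, VOCAB_SIZE)

    def sample(self):
        """Sample one prompt; return its token ids and total log-probability."""
        tok = torch.tensor([[BOS_ID]])
        hidden, log_prob, tokens = None, torch.tensor(0.0), []
        for _ in range(PROMPT_LEN):
            out, hidden = self.rnn(self.embed(tok), hidden)
            dist = torch.distributions.Categorical(logits=self.head(out[:, -1]))
            sampled = dist.sample()                       # shape: (1,)
            log_prob = log_prob + dist.log_prob(sampled).sum()
            tokens.append(sampled.item())
            tok = sampled.unsqueeze(0)                    # feed back as next input
        return tokens, log_prob


def task_reward(prompt_tokens):
    """Placeholder reward. In RLPrompt this is the downstream task signal
    (e.g., a classification score) obtained from the frozen LM given the prompt."""
    return sum(prompt_tokens) / (PROMPT_LEN * VOCAB_SIZE)


policy = PromptPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
baseline = 0.0  # running-mean baseline: a simple stand-in for reward stabilization

for step in range(200):
    tokens, log_prob = policy.sample()
    reward = task_reward(tokens)
    baseline = 0.9 * baseline + 0.1 * reward      # smooth the noisy reward signal
    loss = -(reward - baseline) * log_prob        # REINFORCE with baseline
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In this sketch the prompt is a sequence of token ids rather than embeddings, which is what makes the optimized prompt discrete, interpretable as text, and potentially transferable across LMs, as the abstract notes.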