Paper Title
A Word is Worth A Thousand Dollars: Adversarial Attack on Tweets Fools Stock Predictions
Paper Authors
Paper Abstract
More and more investors and machine learning models rely on social media (e.g., Twitter and Reddit) to gather real-time information and sentiment to predict stock price movements. Although text-based models are known to be vulnerable to adversarial attacks, whether stock prediction models have similar vulnerabilities is underexplored. In this paper, we experiment with a variety of adversarial attack configurations to fool three stock prediction victim models. We address the task of adversarial generation by solving combinatorial optimization problems with semantic and budget constraints. Our results show that the proposed attack method can achieve consistent success rates and cause significant monetary loss in a trading simulation by simply concatenating a perturbed but semantically similar tweet.
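To make the attack setting concrete, the sketch below illustrates one simplified reading of it: a greedy, budget-constrained word-substitution search that perturbs a copy of an existing tweet and concatenates it to the victim model's input. The names used here (concat_attack, predict_up_prob, candidate_words, toy_model) are illustrative assumptions, and the greedy loop stands in for the paper's actual combinatorial optimization under semantic and budget constraints; it is not the authors' implementation.

```python
from typing import Callable, Dict, List


def concat_attack(
    tweets: List[str],
    predict_up_prob: Callable[[List[str]], float],  # victim model: P(price goes up | tweets)
    candidate_words: Dict[str, List[str]],          # semantically similar substitutes per word
    budget: int = 1,                                # max word swaps in the injected tweet
) -> List[str]:
    """Copy one real tweet, swap up to `budget` words for semantically similar
    candidates, and append the copy so the predicted up-probability drops most."""
    adv = tweets[-1].split()                        # start the injected tweet from a real one
    for _ in range(budget):
        base = predict_up_prob(tweets + [" ".join(adv)])
        best_drop, best_edit = 0.0, None
        for i, tok in enumerate(adv):
            for sub in candidate_words.get(tok.lower(), []):
                trial = adv[:i] + [sub] + adv[i + 1:]
                drop = base - predict_up_prob(tweets + [" ".join(trial)])
                if drop > best_drop:
                    best_drop, best_edit = drop, (i, sub)
        if best_edit is None:                       # no swap lowers the score; stop early
            break
        adv[best_edit[0]] = best_edit[1]
    return tweets + [" ".join(adv)]


if __name__ == "__main__":
    # Toy stand-in for a victim model: bullish keywords raise the up-probability.
    def toy_model(tweets: List[str]) -> float:
        words = " ".join(tweets).lower().split()
        score = sum(w in ("beats", "soars", "strong") for w in words)
        score -= sum(w in ("misses", "falls", "weak") for w in words)
        return min(1.0, max(0.0, 0.5 + 0.1 * score))

    synonyms = {"beats": ["meets", "tops"], "strong": ["steady", "solid"]}
    original = ["$AAPL beats estimates", "strong iPhone demand"]
    print(concat_attack(original, toy_model, synonyms, budget=1))
```

In this toy run, the injected tweet starts as a copy of a real tweet and at most `budget` words are swapped for candidates from a small synonym list, so the appended text stays semantically close to genuine content while lowering the victim's predicted up-probability.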