Paper Title
Side-channel attack analysis on in-memory computing architectures
Paper Authors
Paper Abstract
In-memory computing (IMC) systems have great potential for accelerating data-intensive tasks such as deep neural networks (DNNs). As DNN models are generally highly proprietary, neural network architectures become valuable targets for attacks. In IMC systems, since the whole model is mapped on chip and weight memory reads can be restricted, the pre-mapped DNN model acts as a ``black box'' for users. However, the localized and stationary weight and data patterns may expose IMC systems to other attacks. In this paper, we propose a side-channel attack methodology for IMC architectures. We show that it is possible to extract model architectural information from power trace measurements without any prior knowledge of the neural network. We first develop a simulation framework that can emulate the dynamic power traces of IMC macros. We then perform side-channel leakage analysis to reverse engineer model information, such as the stored layer type, layer sequence, output channel/feature size, and convolution kernel size, from the power traces of the IMC macros. Based on the extracted information, full networks can potentially be reconstructed without any knowledge of the neural network. Finally, we discuss potential countermeasures for building IMC systems that offer resistance to these model extraction attacks.
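To make the idea concrete, below is a minimal sketch (not taken from the paper) of how per-layer activity on an IMC macro could show up in a dynamic power trace and be segmented by an observer. It assumes a hypothetical toy power model in which each output activation takes one macro cycle and per-cycle power scales with the number of active crossbar rows; the layer configurations, power scale, and thresholds are all illustrative assumptions.

```python
# Illustrative sketch only: a toy model of how per-layer activity on an IMC
# macro might appear in a dynamic power trace, and how an observer could
# segment that trace to recover coarse layer parameters. The power model,
# layer configurations, and thresholds are hypothetical, not from the paper.
import numpy as np


def simulate_trace(layers, idle_cycles=50, noise=0.01, seed=0):
    """Generate a synthetic power trace for a sequence of conv layers.

    Toy assumption: each output activation takes one macro cycle, and the
    per-cycle power scales with the number of active crossbar rows
    (kernel_h * kernel_w * in_channels).
    """
    rng = np.random.default_rng(seed)
    chunks = []
    for layer in layers:
        cycles = layer["out_h"] * layer["out_w"] * layer["out_ch"]
        active_rows = layer["k"] * layer["k"] * layer["in_ch"]
        power = 0.01 * active_rows  # arbitrary units
        chunks.append(power + noise * rng.standard_normal(cycles))
        chunks.append(np.zeros(idle_cycles))  # idle gap between layers
    return np.concatenate(chunks)


def segment_layers(trace, threshold=0.05, min_len=10):
    """Split the trace into active segments separated by idle gaps."""
    active = trace > threshold
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            if i - start >= min_len:
                segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(trace)))
    return segments


if __name__ == "__main__":
    # Hypothetical three-layer CNN front end mapped onto IMC macros.
    layers = [
        {"k": 3, "in_ch": 3,  "out_ch": 16, "out_h": 32, "out_w": 32},
        {"k": 3, "in_ch": 16, "out_ch": 32, "out_h": 16, "out_w": 16},
        {"k": 1, "in_ch": 32, "out_ch": 64, "out_h": 16, "out_w": 16},
    ]
    trace = simulate_trace(layers)
    for n, (s, e) in enumerate(segment_layers(trace)):
        seg = trace[s:e]
        # Segment length tracks the output feature volume; mean amplitude
        # tracks kernel_size^2 * input_channels.
        print(f"segment {n}: cycles={e - s}, mean power={seg.mean():.3f} a.u.")
```

Under these assumptions, the length of each active segment correlates with the output feature volume and its amplitude with the kernel size and input channel count, which is the kind of architectural leakage the abstract refers to; the paper's actual simulation framework and analysis are more detailed than this sketch.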