Paper Title
Disentangling private classes through regularization
Paper Authors
Paper Abstract
Deep learning models are nowadays broadly deployed to solve an incredibly large variety of tasks. However, little attention has been devoted to the associated legal aspects. In 2016, the European Union approved the General Data Protection Regulation (GDPR), which entered into force in 2018. Its main rationale is to protect the privacy and data of its citizens within the operation of the so-called "Data Economy". As data is the fuel of modern Artificial Intelligence, it is argued that the GDPR can be partly applicable to a series of algorithmic decision-making tasks before a more structured AI Regulation enters into force. In the meantime, AI should not allow undesired information leakage deviating from the purpose for which it is created. In this work we propose DisP, an approach for deep learning models that disentangles the information related to classes we wish to keep private from the data processed by the AI. In particular, DisP is a regularization strategy that de-correlates, at training time, the features belonging to the same private class, thereby hiding the information of private-class membership. Our experiments on state-of-the-art deep learning models show the effectiveness of DisP, minimizing the risk of extraction for the classes we desire to keep private.
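To make the de-correlation idea concrete, below is a minimal PyTorch sketch of a regularizer that penalizes pairwise feature similarity among samples sharing the same private class. This is an illustration of the general technique the abstract describes, not the paper's exact formulation; the function name `disentangling_penalty` and the weight `lambda_p` are hypothetical.

```python
# A minimal sketch (assumed formulation, not DisP's exact loss): penalize
# pairwise cosine similarity between features of samples that share a
# private class, so private-class membership is harder to extract.
import torch
import torch.nn.functional as F

def disentangling_penalty(features: torch.Tensor,
                          private_labels: torch.Tensor) -> torch.Tensor:
    """features: (batch, dim) activations from an intermediate layer.
    private_labels: (batch,) integer id of each sample's private class."""
    z = F.normalize(features, dim=1)        # unit-norm features
    sim = z @ z.t()                         # (batch, batch) cosine similarities
    same = private_labels.unsqueeze(0) == private_labels.unsqueeze(1)
    same.fill_diagonal_(False)              # ignore self-similarity
    if same.sum() == 0:                     # no same-private-class pairs in batch
        return features.new_zeros(())
    # Mean squared similarity over same-private-class pairs: pushing it
    # toward zero de-correlates features that share a private class.
    return (sim[same] ** 2).mean()

# Hypothetical usage at training time, with lambda_p as a trade-off weight:
# loss = task_loss + lambda_p * disentangling_penalty(feats, private_labels)
```

In this sketch the penalty is simply added to the task loss, so the network keeps learning its primary objective while same-private-class features are pushed apart; the actual schedule and loss form used by DisP may differ.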