Paper Title
Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest
Paper Authors
Paper Abstract
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse today. The opaque nature of the algorithms these platforms use to curate content raises societal questions. Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes. However, existing auditing methods face fundamental limitations because they function independently of the platforms. Concerns about potential harm have prompted legislative proposals in both the U.S. and the E.U. to mandate a new form of auditing in which vetted external researchers get privileged access to social media platforms. Unfortunately, to date there have been no concrete technical proposals for providing such auditing, because auditing at scale risks disclosing users' private data and platforms' proprietary algorithms. We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation. Our first contribution is to enumerate the challenges existing auditing methods face in implementing these policies at scale. Second, we suggest that limited, privileged access to relevance estimators is the key to enabling generalizable platform-supported auditing by external researchers. Third, we show that platform-supported auditing need not risk user privacy or disclose platforms' business interests, by proposing an auditing framework that protects against these risks. For a particular fairness metric, we show that ensuring privacy imposes only a small constant-factor increase (6.34x as an upper bound, and 4x for typical parameters) in the number of samples required for accurate auditing. Our technical contributions, combined with ongoing legal and policy efforts, can enable public oversight of how social media platforms affect individuals and society by moving past the privacy-vs-transparency hurdle.
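
To make the constant-factor claim above concrete, the following is a minimal, hypothetical Python sketch, not the paper's own mechanism or analysis: it assumes the platform answers each binary relevance query via randomized response, a standard local differential-privacy mechanism, and works out how many extra samples an auditor would then need to reach the same estimation accuracy. The function names and the choice of mechanism are illustrative assumptions; at epsilon = ln 3, for instance, the inflation works out to 4x, the same order as the constant factors quoted in the abstract, though the paper's actual parameters and mechanism may differ.

# Illustrative sketch only -- not taken from the paper. It assumes the platform
# perturbs each binary relevance-estimator answer with randomized response (a
# standard local differential-privacy mechanism) and computes how the audit's
# required sample size would then grow by a constant factor.
import math

def rr_keep_prob(epsilon: float) -> float:
    """Probability that randomized response reports the true bit."""
    return math.exp(epsilon) / (math.exp(epsilon) + 1.0)

def sample_inflation_factor(epsilon: float) -> float:
    """Multiplicative increase in the samples needed to estimate a proportion
    to the same accuracy once answers pass through randomized response: the
    debiased estimator's variance scales by roughly ((e^eps+1)/(e^eps-1))^2."""
    e = math.exp(epsilon)
    return ((e + 1.0) / (e - 1.0)) ** 2

if __name__ == "__main__":
    for eps in (math.log(3), 1.0, 0.5):
        print(f"epsilon={eps:.2f}: keep-true prob={rr_keep_prob(eps):.3f}, "
              f"sample inflation ~{sample_inflation_factor(eps):.2f}x")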