Paper Title

Efficient Federated Learning on Knowledge Graphs via Privacy-preserving Relation Embedding Aggregation

Paper Authors

Kai Zhang, Yu Wang, Hongyi Wang, Lifu Huang, Carl Yang, Xun Chen, Lichao Sun

Paper Abstract

Federated learning (FL) can be essential in knowledge representation, reasoning, and data mining applications over multi-source knowledge graphs (KGs). A recent study, FedE, first proposed an FL framework that shares entity embeddings of KGs across all clients. However, sharing entity embeddings in FedE incurs severe privacy leakage: a known entity embedding can be used to infer whether a specific relation between two entities exists in a private client. In this paper, we introduce a novel attack method that aims to recover the original data from the embedding information, which we then use to evaluate the vulnerabilities of FedE. Furthermore, we propose a Federated learning paradigm with privacy-preserving Relation embedding aggregation (FedR) to tackle the privacy issue in FedE. In addition, sharing relation embeddings significantly reduces the communication cost, since the shared payload is much smaller than that of entity embeddings. We conduct extensive experiments to evaluate FedR with five different KG embedding models on three datasets. Compared to FedE, FedR achieves similar utility on the link prediction task, with significant improvements in privacy preservation and communication efficiency.
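To make the two ideas in the abstract concrete, below is a minimal NumPy sketch, not the authors' implementation: it shows how a TransE-style score over leaked entity embeddings could reveal whether a relation plausibly holds between two entities, and how a server could aggregate only relation embeddings with a FedAvg-style weighted average. The function names (`infer_relation`, `aggregate_relation_embeddings`), the threshold, and the choice of TransE scoring are illustrative assumptions; the paper's actual attack and aggregation details are in the full text.

```python
# Conceptual sketch only (assumed names and scoring; not the paper's code).
import numpy as np

def transe_score(h, r, t):
    """TransE-style score: smaller ||h + r - t|| means a more plausible triple."""
    return np.linalg.norm(h + r - t)

def infer_relation(h_emb, t_emb, candidate_relations, threshold=1.0):
    """Hypothetical inference attack: given known entity embeddings (h, t),
    score every candidate relation and flag the ones that look plausible."""
    return [name for name, r_emb in candidate_relations.items()
            if transe_score(h_emb, r_emb, t_emb) < threshold]

def aggregate_relation_embeddings(client_updates, client_weights):
    """FedAvg-style weighted average of per-client relation embedding matrices
    of shape (num_relations, dim). Only relations are shared, so the payload is
    small whenever the relation vocabulary is much smaller than the entity set."""
    total = sum(client_weights)
    return sum(w * upd for w, upd in zip(client_weights, client_updates)) / total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 8
    # Toy attack: with leaked entity embeddings, score candidate relations.
    h, t = rng.normal(size=dim), rng.normal(size=dim)
    relations = {"born_in": t - h,                 # constructed so h + r ≈ t
                 "works_for": rng.normal(size=dim)}
    print("plausible relations:", infer_relation(h, t, relations))
    # Toy aggregation: server averages relation embeddings from two clients.
    updates = [rng.normal(size=(4, dim)) for _ in range(2)]
    print("aggregated shape:", aggregate_relation_embeddings(updates, [3, 1]).shape)
```

In this toy run the constructed relation "born_in" is flagged as plausible from the entity embeddings alone, which is the kind of membership leakage the paper attributes to entity sharing, while the aggregation step exchanges only a (num_relations, dim) matrix.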
