Paper Title


Active Learning on Attributed Graphs via Graph Cognizant Logistic Regression and Preemptive Query Generation

Authors

Florence Regol, Soumyasundar Pal, Yingxue Zhang, Mark Coates

Abstract


Node classification in attributed graphs is an important task in multiple practical settings, but it can often be difficult or expensive to obtain labels. Active learning can improve the achieved classification performance for a given budget on the number of queried labels. The best existing methods are based on graph neural networks, but they often perform poorly unless a sizeable validation set of labelled nodes is available in order to choose good hyperparameters. We propose a novel graph-based active learning algorithm for the task of node classification in attributed graphs; our algorithm uses graph cognizant logistic regression, equivalent to a linearized graph convolutional neural network (GCN), for the prediction phase and maximizes the expected error reduction in the query phase. To reduce the delay experienced by a labeller interacting with the system, we derive a preemptive querying system that calculates a new query during the labelling process, and to address the setting where learning starts with almost no labelled data, we also develop a hybrid algorithm that performs adaptive model averaging of label propagation and linearized GCN inference. We conduct experiments on five public benchmark datasets, demonstrating a significant improvement over state-of-the-art approaches, and illustrate the practical value of the method by applying it to a private microwave link network dataset.
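The pipeline the abstract describes — smoothing node features with powers of the normalized adjacency matrix (a linearized GCN), fitting logistic regression on the labelled nodes, and scoring candidate queries by expected error reduction — can be sketched as follows. This is a minimal illustrative sketch on a toy two-community graph, not the authors' implementation: all function names are hypothetical, and the expected-error proxy used here (the expected sum of 1 − max class probability over the remaining unlabelled nodes, averaged over the candidate's predicted label distribution) is one common instantiation of expected error reduction, which may differ in detail from the paper's estimator.

```python
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def normalized_adjacency(A):
    # Standard GCN propagation matrix: S = D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def fit_logreg(X, y, n_classes, lr=0.5, steps=500):
    # Plain multinomial logistic regression trained by gradient descent
    W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]  # one-hot targets
    for _ in range(steps):
        W -= lr * X.T @ (softmax(X @ W) - Y) / len(y)
    return W

def eer_scores(X_hat, labelled, y_lab, candidates, n_classes):
    # Query phase: for each candidate node, average a post-query error
    # proxy over its current predicted label distribution (lower expected
    # error => higher score).
    W = fit_logreg(X_hat[labelled], np.array(y_lab), n_classes)
    P = softmax(X_hat @ W)
    scores = {}
    for v in candidates:
        rest = [u for u in candidates if u != v]
        exp_err = 0.0
        for c in range(n_classes):
            W_c = fit_logreg(X_hat[labelled + [v]], np.array(y_lab + [c]),
                             n_classes)
            P_c = softmax(X_hat[rest] @ W_c)
            exp_err += P[v, c] * (1.0 - P_c.max(axis=1)).sum()
        scores[v] = -exp_err
    return scores

# Toy graph: two disconnected edges (two communities), 2-D node features
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])

# Prediction phase: linearized GCN = smooth features with S^k,
# then ordinary logistic regression on the labelled nodes
S = normalized_adjacency(A)
X_hat = np.linalg.matrix_power(S, 2) @ X

labelled, y_lab = [0, 2], [0, 1]
W = fit_logreg(X_hat[labelled], np.array(y_lab), n_classes=2)
preds = softmax(X_hat @ W).argmax(axis=1)  # labels for all nodes

# Query phase: pick the unlabelled node with the best EER score
scores = eer_scores(X_hat, labelled, y_lab, candidates=[1, 3], n_classes=2)
query = max(scores, key=scores.get)
```

In the paper's preemptive variant, the next query is computed while the labeller annotates the current one, hiding the retraining cost from the user; the sketch above computes it synchronously for clarity.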
