Paper Title

Learning and Strongly Truthful Multi-Task Peer Prediction: A Variational Approach

Paper Authors

Grant Schoenebeck, Fang-Yi Yu

Paper Abstract

Peer prediction mechanisms incentivize agents to truthfully report their signals even in the absence of verification by comparing agents' reports with those of their peers. In the detail-free multi-task setting, agents respond to multiple independent and identically distributed tasks, and the mechanism does not know the prior distribution of agents' signals. The goal is to provide an $ε$-strongly truthful mechanism where truth-telling rewards agents "strictly" more than any other strategy profile (with $ε$ additive error), and to do so while requiring as few tasks as possible. We design a family of mechanisms with a scoring function that maps a pair of reports to a score. The mechanism is strongly truthful if the scoring function is "prior ideal," and $ε$-strongly truthful as long as the scoring function is sufficiently close to the ideal one. This reduces the above mechanism design problem to a learning problem -- specifically learning an ideal scoring function. We leverage this reduction to obtain the following three results. 1) We show how to derive good bounds on the number of tasks required for different types of priors. Our reduction applies to myriad continuous signal space settings. This is the first peer-prediction mechanism on continuous signals designed for the multi-task setting. 2) We show how to turn a soft-predictor of an agent's signals (given the other agents' signals) into a mechanism. This allows the practical use of machine learning algorithms that give good results even when many agents provide noisy information. 3) For finite signal spaces, we obtain $ε$-strongly truthful mechanisms on any stochastically relevant prior, which is the maximal possible prior. In contrast, prior work only achieves a weaker notion of truthfulness (informed truthfulness) or requires stronger assumptions on the prior.
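To make the mechanism family concrete, below is a minimal sketch, in Python, of the generic multi-task payment structure the abstract describes: a learned scoring function maps a pair of reports to a score, and an agent's payment is the score on a shared "bonus" task minus the score of reports drawn from two distinct "penalty" tasks. This is only an illustration of the multi-task template under assumed names (`pay_agent`, `score`); it is not the paper's exact variational construction, and the scoring function is assumed to have been learned separately.

```python
import random
from typing import Callable, Dict, Hashable

# Hypothetical sketch of a multi-task peer-prediction payment rule.
# `score` stands in for a learned scoring function S(report_i, report_j);
# in the paper's framework, payments are (approximately) strongly truthful
# when S is sufficiently close to the prior-ideal scoring function.

def pay_agent(
    reports_i: Dict[int, Hashable],                 # task index -> agent i's report
    reports_j: Dict[int, Hashable],                 # task index -> peer j's report
    score: Callable[[Hashable, Hashable], float],   # learned scoring function S
    rng: random.Random,
) -> float:
    shared = sorted(set(reports_i) & set(reports_j))
    assert len(shared) >= 3, "need one bonus task and two distinct penalty tasks"
    bonus, pen_i, pen_j = rng.sample(shared, 3)
    # Bonus term: score the two agents' reports on the same task.
    # Penalty term: score reports taken from different tasks, which estimates
    # the score under independent (uncorrelated) reports.
    return score(reports_i[bonus], reports_j[bonus]) - score(reports_i[pen_i], reports_j[pen_j])
```

As a sanity check, plugging in an indicator score (1 if the two reports match, 0 otherwise) recovers a simple agreement-style payment; the paper's contribution is characterizing and learning the ideal scoring function that makes this template $ε$-strongly truthful.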
