Paper Title
SYNC: A Copula based Framework for Generating Synthetic Data from Aggregated Sources
Paper Authors
Paper Abstract
A synthetic dataset is a data object that is generated programmatically; creating a single such dataset from multiple sources can be valuable when direct collection is difficult or costly. Although this is a fundamental step for many data science tasks, no efficient, standard framework exists. In this paper, we study a specific synthetic data generation task called downscaling, a procedure to infer high-resolution, harder-to-collect information (e.g., individual-level records) from many low-resolution, easy-to-collect sources, and propose a multi-stage framework called SYNC (Synthetic Data Generation via Gaussian Copula). Given low-resolution datasets, the central idea of SYNC is to fit a Gaussian copula model to each low-resolution dataset in order to correctly capture dependencies and marginal distributions, and then sample from the fitted models to obtain the desired high-resolution subsets. Predictive models are then used to merge the sampled subsets into one, and finally, the sampled dataset is scaled according to low-resolution marginal constraints. We make four key contributions in this work: 1) we propose a novel framework for generating individual-level data from aggregated data sources by combining state-of-the-art machine learning and statistical techniques; 2) we perform simulation studies to validate SYNC's performance as a synthetic data generation algorithm; 3) we demonstrate its value as a feature engineering tool, and as an alternative to data collection when gathering is difficult, through two real-world datasets; 4) we release an easy-to-use implementation of the framework, for reproducibility and scalability at the production level, that easily incorporates new data.
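The core fit-then-sample step of SYNC can be illustrated with a minimal Gaussian-copula sketch. This is not the paper's implementation; it is a generic illustration (on hypothetical toy data, using NumPy/SciPy) of the idea the abstract describes: map each marginal to normal scores via its empirical CDF, estimate the correlation of those scores, then sample correlated normals and invert the empirical marginals to obtain synthetic rows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical toy dataset standing in for one low-resolution source:
# two correlated columns with non-normal marginals.
n = 1000
x = rng.gamma(shape=2.0, scale=1.5, size=n)
y = 0.6 * x + rng.normal(0.0, 1.0, size=n)
data = np.column_stack([x, y])

def fit_gaussian_copula(data):
    """Estimate the Gaussian copula correlation matrix.

    Each column is mapped to uniforms via its empirical CDF (ranks),
    then to standard normal scores; the correlation matrix of those
    scores parameterizes the copula.
    """
    n_rows, _ = data.shape
    u = stats.rankdata(data, axis=0) / (n_rows + 1)  # empirical CDF values in (0, 1)
    z = stats.norm.ppf(u)                            # normal scores
    return np.corrcoef(z, rowvar=False)

def sample_gaussian_copula(corr, data, size, rng):
    """Draw synthetic rows: sample correlated normals, push them through
    the normal CDF to uniforms, then invert each empirical marginal by
    taking quantiles of the observed column."""
    d = corr.shape[0]
    z = rng.multivariate_normal(np.zeros(d), corr, size=size)
    u = stats.norm.cdf(z)
    return np.column_stack(
        [np.quantile(data[:, j], u[:, j]) for j in range(d)]
    )

corr = fit_gaussian_copula(data)
synthetic = sample_gaussian_copula(corr, data, size=2000, rng=rng)
```

The synthetic sample preserves both the empirical marginals (by construction, each column's values are quantiles of the observed column) and the rank dependence captured by the copula correlation. SYNC additionally merges subsets sampled from multiple sources with predictive models and rescales to the low-resolution marginal constraints, steps not shown here.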