Paper Title

Self-Training with Purpose Preserving Augmentation Improves Few-shot Generative Dialogue State Tracking

Paper Authors

Jihyun Lee, Chaebin Lee, Yunsu Kim, Gary Geunbae Lee

Paper Abstract

In dialogue state tracking (DST), labeling the dataset involves considerable human labor. We propose a new self-training framework for few-shot generative DST that utilizes unlabeled data. Our self-training method iteratively improves the model through pseudo-labeling and employs Purpose Preserving Augmentation (PPAug) to prevent overfitting. Compared to the baseline, we increase performance in the 10% few-shot setting by approximately 4% on MultiWOZ 2.1 and improve slot recall on unseen values by 8.34%.
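The iterative pseudo-labeling loop at the heart of self-training can be sketched in a few lines. This is a minimal illustration of the generic technique, not the authors' DST model or PPAug implementation: a toy 1-D nearest-centroid classifier stands in for the generative model, and a confidence margin decides which unlabeled points earn pseudo-labels.

```python
# Toy self-training sketch: train -> pseudo-label confident unlabeled
# examples -> retrain. All names here are illustrative, not from the paper.

def train(labeled):
    """'Train' by computing one centroid per class from (x, y) pairs."""
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    """Return (label, confidence): confidence is the distance margin
    between the two closest centroids (0 means fully ambiguous)."""
    dists = sorted((abs(x - c), y) for y, c in centroids.items())
    best, runner_up = dists[0], dists[1]
    return best[1], runner_up[0] - best[0]

def self_train(labeled, unlabeled, threshold=1.0, rounds=3):
    """Iteratively move confident unlabeled points into the labeled set."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        centroids = train(labeled)
        still_unlabeled = []
        for x in unlabeled:
            y, conf = predict(centroids, x)
            if conf >= threshold:      # accept only confident pseudo-labels
                labeled.append((x, y))
            else:                      # low-margin points stay unlabeled
                still_unlabeled.append(x)
        unlabeled = still_unlabeled
    return train(labeled)

# A few labeled seeds plus an unlabeled pool near the two clusters.
seed = [(0.0, "a"), (10.0, "b")]
pool = [0.5, 1.0, 9.0, 9.5, 5.2]
model = self_train(seed, pool)
```

Note that the ambiguous point (5.2) never crosses the confidence threshold and is never pseudo-labeled; filtering by confidence is what keeps label noise from compounding across rounds. The paper's PPAug augmentation addresses the complementary risk of overfitting to the small labeled set.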
