Title

Non-intrusive Load Monitoring based on Self-supervised Learning

Authors

Shuyi Chen, Bochao Zhao, Mingjun Zhong, Wenpeng Luan, Yixin Yu

Abstract

Deep learning models for non-intrusive load monitoring (NILM) tend to require a large amount of labeled data for training. However, it is difficult to generalize the trained models to unseen sites due to differing load characteristics and appliance operating patterns between data sets. To address such problems, self-supervised learning (SSL) is proposed in this paper, where no labeled appliance-level data from the target data set or house is required. Initially, only the aggregate power readings from the target data set are required to pre-train a general network via a self-supervised pretext task that maps aggregate power sequences to derived representations. Then, supervised downstream tasks are carried out for each appliance category to fine-tune the pre-trained network, transferring the features learned in the pretext task. Utilizing labeled source data sets enables the downstream tasks to learn how each load is disaggregated, by mapping the aggregate to labels. Finally, the fine-tuned network is applied to load disaggregation for the target sites. For validation, multiple experimental cases are designed based on three publicly accessible data sets: REDD, UK-DALE, and REFIT. In addition, state-of-the-art neural networks are employed to perform the NILM task in the experiments. Based on the NILM results in various cases, SSL generally outperforms zero-shot learning, improving load disaggregation performance without any sub-metering data from the target data sets.
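The three-stage pipeline in the abstract (pretext pre-training on unlabeled target aggregates, supervised fine-tuning on a labeled source set, then disaggregation of the target site) can be sketched as below. This is a minimal toy illustration, not the paper's implementation: the data is synthetic, the "network" is a plain linear model trained by gradient descent, and the moving-average pretext target is an assumed stand-in for the paper's derived representations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data; shapes and values are illustrative assumptions.
W = 16                                        # window length
X_target = rng.random((200, W))               # unlabeled aggregate, target set
X_source = rng.random((100, W))               # aggregate from labeled source set
y_source = X_source @ rng.random(W) * 0.1     # synthetic appliance-level labels

def moving_average(x, k=4):
    """Derived representation used as the self-supervised pretext target."""
    kernel = np.ones(k) / k
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, x)

def train_linear(X, Y, w=None, lr=0.01, epochs=200):
    """Least-squares gradient descent; stands in for network training."""
    if w is None:
        w = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(epochs):
        grad = X.T @ (X @ w - Y) / len(X)
        w -= lr * grad
    return w

# 1) Pretext task: map aggregate windows to their derived representation,
#    using only unlabeled aggregates from the target data set.
w_pre = train_linear(X_target, moving_average(X_target))

# 2) Downstream task: fine-tune on the labeled source set, transferring the
#    pretext weights as initialization (crudely collapsed to one output head
#    per appliance; a real network would reuse whole layers instead).
w_init = w_pre.mean(axis=1, keepdims=True)
w_fine = train_linear(X_source, y_source[:, None], w=w_init.copy())

# 3) Apply the fine-tuned model to disaggregate the target site.
pred = X_target @ w_fine
print(pred.shape)  # prints (200, 1)
```

The key design point the sketch mirrors is that step 1 never touches appliance labels, so only aggregate readings are ever needed from the target site.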
