Paper Title

Multimodal sensor data fusion for in-situ classification of animal behavior using accelerometry and GNSS data

Paper Authors

Reza Arablouei, Ziwei Wang, Greg J. Bishop-Hurley, Jiajun Liu

Paper Abstract

In this paper, we examine the use of data from multiple sensing modes, i.e., accelerometry and global navigation satellite system (GNSS), for classifying animal behavior. We extract three new features from the GNSS data, namely, distance from water point, median speed, and median estimated horizontal position error. We combine the information available from the accelerometry and GNSS data via two approaches. The first approach is based on concatenating the features extracted from the data of both sensors and feeding the concatenated feature vector into a multi-layer perceptron (MLP) classifier. The second approach is based on fusing the posterior probabilities predicted by two MLP classifiers. The input to each classifier is the set of features extracted from the data of one sensing mode. We evaluate the performance of the developed multimodal animal behavior classification algorithms using two real-world datasets collected via smart cattle collar tags and ear tags. The leave-one-animal-out cross-validation results show that both approaches improve the classification performance appreciably compared with using data of only one sensing mode. This is more notable for the infrequent but important behaviors of walking and drinking. The algorithms developed based on both approaches require few computational and memory resources and are hence suitable for implementation on the embedded systems of our collar tags and ear tags. However, the multimodal animal behavior classification algorithm based on posterior probability fusion is preferable to the one based on feature concatenation as it delivers better classification accuracy, has less computational and memory complexity, is more robust to sensor data failure, and enjoys better modularity.
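
To make the two fusion approaches concrete, below is a minimal Python sketch (not the authors' implementation) built on scikit-learn MLPs. The `gnss_features` helper, the synthetic placeholder data, the hidden-layer sizes, and the product-rule combination of the two posterior probability vectors are all assumptions made for this example; the abstract only states that features are concatenated in the first approach and posterior probabilities are fused in the second.

```python
# Minimal sketch of the two multimodal fusion approaches using scikit-learn MLPs.
import numpy as np
from sklearn.neural_network import MLPClassifier

def gnss_features(lat, lon, ehpe, t, water_point):
    """Three GNSS features for one data window: distance from the water point,
    median speed, and median estimated horizontal position error (assumed to
    mirror the three features named in the abstract)."""
    r = 6371000.0  # Earth radius in metres
    la, lo = np.radians(np.median(lat)), np.radians(np.median(lon))
    wla, wlo = np.radians(water_point[0]), np.radians(water_point[1])
    # Haversine distance from the window's median position to the water point.
    a = np.sin((wla - la) / 2) ** 2 + np.cos(la) * np.cos(wla) * np.sin((wlo - lo) / 2) ** 2
    dist_water = 2.0 * r * np.arcsin(np.sqrt(a))
    # Median speed from consecutive fixes (small-step planar approximation).
    step = r * np.hypot(np.diff(np.radians(lat)), np.cos(la) * np.diff(np.radians(lon)))
    speed = step / np.diff(t)
    return np.array([dist_water, np.median(speed), np.median(ehpe)])

# Synthetic placeholder features and labels, only so the sketch runs end to end.
rng = np.random.default_rng(0)
n = 200
X_acc = rng.normal(size=(n, 9))    # accelerometry window features (placeholder)
X_gnss = rng.normal(size=(n, 3))   # the three GNSS features above (placeholder)
y = rng.integers(0, 5, size=n)     # behavior classes as integer labels (placeholder)

# Approach 1: concatenate the features of both modalities and train a single MLP.
clf_concat = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
clf_concat.fit(np.hstack([X_acc, X_gnss]), y)

# Approach 2: one MLP per modality; fuse the predicted posterior probabilities.
clf_acc = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X_acc, y)
clf_gnss = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500).fit(X_gnss, y)
# Product-rule fusion is an illustrative choice; the combination rule is an assumption here.
posterior = clf_acc.predict_proba(X_acc) * clf_gnss.predict_proba(X_gnss)
y_pred = clf_acc.classes_[np.argmax(posterior, axis=1)]
```

Keeping a separate classifier per modality, as in the second approach, is what underlies the robustness and modularity advantages claimed in the abstract: if one sensor stream fails, the classifier for the remaining modality can still produce a prediction on its own.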
