Title
DouFu: A Double Fusion Joint Learning Method For Driving Trajectory Representation
Authors
Abstract
Driving trajectory representation learning is of great significance for various location-based services, such as driving pattern mining and route recommendation. However, previous representation generation approaches have rarely addressed three challenges: 1) how to represent the intricate semantic intentions of mobility inexpensively; 2) complex and weak spatial-temporal dependencies due to the sparsity and heterogeneity of trajectory data; 3) route selection preferences and their correlation to driving behavior. In this paper, we propose a novel multimodal fusion model, DouFu, for joint learning of trajectory representations, which applies multimodal learning and an attention fusion module to capture the internal characteristics of trajectories. We first design movement, route, and global features generated from the trajectory data and urban functional zones, and then analyze them with an attention encoder or a feed-forward network, respectively. The attention fusion module incorporates route features with movement features to create a better spatial-temporal embedding. Combined with the global semantic feature, DouFu produces a comprehensive embedding for each trajectory. We evaluate the representations generated by our method and other baseline models on classification and clustering tasks. Empirical results show that DouFu outperforms other models under most learning algorithms, such as linear regression and support vector machines, by more than 10%.
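The abstract outlines a three-branch pipeline: an attention encoder over movement features, a feed-forward network over route features, an attention fusion step that lets movement states attend over route segments, and a final concatenation with a global semantic feature. Below is a minimal NumPy sketch of that data flow; all weights, dimensions, and feature tensors are illustrative placeholders, not the paper's actual architecture or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention: queries attend over key/value pairs
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

d = 16
movement = rng.normal(size=(20, d))   # per-point movement features (hypothetical)
route = rng.normal(size=(8, d))       # per-segment route features (hypothetical)
global_feat = rng.normal(size=(d,))   # trip-level global semantic feature

# attention encoder: self-attention over the movement sequence
mov_enc = attention(movement, movement, movement)

# feed-forward network over route features (random weights for illustration)
W1 = rng.normal(size=(d, d))
W2 = rng.normal(size=(d, d))
route_enc = np.maximum(route @ W1, 0) @ W2  # ReLU MLP

# attention fusion: movement states attend over encoded route segments,
# then pool into a single spatial-temporal embedding
fused = attention(mov_enc, route_enc, route_enc).mean(axis=0)

# final trajectory embedding: fused spatial-temporal + global semantics
embedding = np.concatenate([fused, global_feat])
print(embedding.shape)  # (32,)
```

The sketch only shows the shape of the joint-learning fusion; the actual model would learn these projections end-to-end on downstream objectives.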