Paper Title

AOL: Adaptive Online Learning for Human Trajectory Prediction in Dynamic Video Scenes

Paper Authors

Manh Huynh, Gita Alaghband

Paper Abstract

We present a novel adaptive online learning (AOL) framework for predicting human movement trajectories in dynamic video scenes. Our framework learns and adapts to changes in the scene environment and generates the best network weights for different scenarios. It can be applied to existing prediction models to improve their performance: it adjusts dynamically as it encounters scene changes and applies the best trained weights when predicting the next locations. We demonstrate this by integrating our framework with two existing prediction models: LSTM [3] and Future Person Location (FPL) [1]. Furthermore, we analyze the number of network weight sets needed for optimal performance and show that real-time operation is achievable with a fixed number of networks by using a least recently used (LRU) strategy to maintain the most recently trained network weights. Extensive experiments show that our framework increases the prediction accuracy of LSTM and FPL by approximately 17% and 28% on average, and by up to approximately 50% for FPL in the worst case, while running in real time (20 fps).
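The abstract describes keeping a fixed number of trained network weight sets under a least recently used (LRU) eviction policy and selecting the best set for the current scene. Below is a minimal Python sketch of such a pool, offered only as an illustration of the LRU idea: the names (`LRUWeightPool`, `select_best`, `score_fn`) and the error-based selection rule are assumptions, not the authors' implementation.

```python
from collections import OrderedDict

class LRUWeightPool:
    """Fixed-size pool of trained network weights.

    A hypothetical sketch of the LRU strategy the abstract mentions:
    when the pool is full, the least recently used weight set is evicted
    so only the most recently trained/used weights are kept.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.pool = OrderedDict()  # scene key -> network weights

    def get(self, key):
        # Re-inserting the entry marks it as most recently used.
        weights = self.pool.pop(key)
        self.pool[key] = weights
        return weights

    def put(self, key, weights):
        if key in self.pool:
            self.pool.pop(key)
        elif len(self.pool) >= self.capacity:
            # Evict the least recently used weight set.
            self.pool.popitem(last=False)
        self.pool[key] = weights

    def select_best(self, score_fn):
        # Pick the weights with the lowest recent prediction error,
        # as estimated by a caller-supplied scoring function.
        best_key = min(self.pool, key=lambda k: score_fn(self.pool[k]))
        return best_key, self.get(best_key)
```

In use, the predictor would call `select_best` each frame to choose weights for the next-location prediction, fine-tune them online as the scene changes, and `put` the updated weights back, keeping the pool size, and hence per-frame cost, bounded for real-time operation.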
