Paper Title
ReF -- Rotation Equivariant Features for Local Feature Matching
Paper Authors
Paper Abstract
Sparse local feature matching is pivotal for many computer vision and robotics tasks. To improve their invariance to challenging appearance conditions and viewing angles, and hence their usefulness, existing learning-based methods have primarily focused on data augmentation-based training. In this work, we propose an alternative, complementary approach that centers on inducing bias in the model architecture itself to generate `rotation-specific' features using Steerable E2-CNNs, which are then group-pooled to achieve rotation-invariant local features. We demonstrate that this high-performance, rotation-specific coverage from the steerable CNNs can be expanded to all rotation angles by combining it with augmentation-trained standard CNNs, which have broader coverage but are often inaccurate, thus creating a state-of-the-art rotation-robust local feature matcher. We benchmark our proposed methods against existing techniques on HPatches and a newly proposed UrbanScenes3D-Air dataset for visual place recognition. Furthermore, we present a detailed analysis of the performance effects of ensembling, robust estimation, network architecture variations, and the use of rotation priors.
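The core idea of the abstract — computing "rotation-specific" responses and then collapsing them with group pooling to obtain a rotation-invariant descriptor — can be illustrated with a minimal toy sketch. This is not the authors' E2-CNN implementation: the per-orientation correlation with a random template is a hypothetical stand-in for the per-orientation feature channels a steerable CNN would produce, restricted here to the four cardinal rotations so the invariance is exact.

```python
import numpy as np

def rotation_specific_features(patch, template):
    # "Rotation-specific" responses: correlate the patch with the same
    # template at each of the 4 cardinal rotations (a toy stand-in for
    # the per-orientation channels of a steerable E2-CNN).
    return [float(np.sum(np.rot90(patch, k) * template)) for k in range(4)]

def group_pool(features):
    # Group pooling: collapse the rotation axis with a max, yielding a
    # descriptor invariant to (cardinal) rotations of the input patch.
    return max(features)

rng = np.random.default_rng(0)
patch = rng.standard_normal((8, 8))
template = rng.standard_normal((8, 8))

# The pooled descriptor is identical for the patch and its rotated copy:
d1 = group_pool(rotation_specific_features(patch, template))
d2 = group_pool(rotation_specific_features(np.rot90(patch), template))
assert abs(d1 - d2) < 1e-9
```

Rotating the input merely permutes the set of rotation-specific responses, so any permutation-invariant pooling (max, mean) over the group axis yields the same descriptor — the same mechanism, over a finer discretization of rotations, underlies the group pooling described above.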