Back to the Feature: Learning Robust Camera Localization from Pixels to Pose
Camera pose estimation in known scenes can be improved by focusing on learning robust and invariant visual features while leaving geometric estimation to principled algorithms.
Our approach leverages direct alignment of multiscale deep features, framing camera localization as a metric learning problem while also enhancing sparse feature matching accuracy.
Inspired by direct image alignment [22, 26, 27, 63, 90, 91] and learned image representations for outlier rejection [42], we advocate that end-to-end visual localization algorithms should prioritize representation learning.
Freed from regressing the pose itself, the network can focus on extracting suitable features, yielding accurate and scene-agnostic localization.
PixLoc achieves localization by aligning query and reference images based on the known 3D structure of the scene.
Motivation: In absolute pose and scene coordinate regression from a single image, a deep neural network learns to:
i) Recognize the approximate location in a scene,
ii) Recognize robust visual features tailored to this scene, and
iii) Regress accurate geometric quantities like pose or coordinates.
Given CNNs' ability to learn generalizable features, i) and ii) do not need to be scene-specific, and i) is already addressed by image retrieval.
On the other hand, iii) can be effectively handled by classical geometry, using feature matching [19, 20, 28] or image alignment [4, 26, 27, 51] combined with a 3D representation of the scene.
Therefore, the key is to focus on learning robust and generalizable features, enabling scene-agnostic pose estimation that is tightly constrained by geometry.
The challenge lies in defining what constitutes good features for localization. We address this by making the geometric estimation differentiable and supervising only the final pose estimate.
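To make this concrete, below is a minimal sketch of what "supervising only the final pose" could look like, assuming a differentiable geometric solver has already produced an estimate (R_est, t_est). The `pose_loss` helper and its form (geodesic rotation error plus translation distance) are illustrative choices, not necessarily the paper's exact training loss.

```python
import torch

def pose_loss(R_est, t_est, R_gt, t_gt):
    """Supervise only the final pose estimate (hypothetical helper).

    R_est, t_est come from a differentiable geometric solver, so
    minimizing this loss trains the upstream feature extractor end
    to end. The loss form here (rotation geodesic angle plus
    translation distance) is an illustrative choice.
    """
    cos = (torch.trace(R_est.T @ R_gt) - 1.0) / 2.0
    rot_err = torch.acos(cos.clamp(-1.0, 1.0))  # angle between rotations
    return rot_err + torch.norm(t_est - t_gt)
```

Because gradients flow through the solver, the loss never needs to supervise intermediate features directly.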
Section 3.1: Localization as Image Alignment
Image Representation: Sparse alignment is performed over learned feature representations, utilizing CNNs' ability to extract hierarchical features at multiple levels.
The features are L2-normalized along channels to enhance robustness and generalization across datasets.
This representation, inspired by past work on handcrafted and learned features for camera tracking [22, 52, 63, 85, 90, 93], is robust to significant illumination and viewpoint changes and provides meaningful gradients, so alignment can succeed even from inaccurate initial poses.
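As an illustration, here is a minimal PyTorch sketch of a multi-level encoder with channel-wise L2 normalization; the `FeaturePyramid` class, its depth, and its channel sizes are hypothetical choices, not the paper's architecture.

```python
import torch.nn as nn
import torch.nn.functional as F

class FeaturePyramid(nn.Module):
    """Hypothetical multi-level CNN encoder; depths and channel
    sizes are illustrative, not the paper's architecture."""

    def __init__(self, channels=(32, 64, 128)):
        super().__init__()
        blocks, in_ch = [], 3
        for out_ch in channels:
            blocks.append(nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, 3, padding=1),
            ))
            in_ch = out_ch
        self.levels = nn.ModuleList(blocks)

    def forward(self, image):
        feats, x = [], image
        for level in self.levels:
            x = level(x)
            # L2-normalize along the channel dimension: this bounds
            # feature distances and helps generalization across datasets.
            feats.append(F.normalize(x, p=2, dim=1))
        return feats  # ordered fine to coarse (1/2, 1/4, 1/8 resolution)
```

In a coarse-to-fine alignment scheme, one would start from the last (coarsest) level and refine the pose progressively on the finer ones.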
Direct Alignment: The geometric optimization seeks the pose (R, t) that aligns the query image to the references given the known 3D scene structure: deep features sampled at the projections of sparse 3D points into the query are compared against the corresponding reference features, and the resulting feature-metric residuals are minimized with a damped, Levenberg-Marquardt-style optimizer.
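The sketch below illustrates one such damped update step, assuming a pinhole camera and a left-multiplicative se(3) perturbation. The `residuals` and `lm_step` helpers are hypothetical: they use autograd for the Jacobian where a real implementation would use analytic Jacobians, and visibility and image-bounds checks are omitted for brevity.

```python
import torch
import torch.nn.functional as F
from torch.autograd.functional import jacobian

def skew(w):
    """Skew-symmetric matrix such that skew(w) @ v == cross(w, v)."""
    zero = torch.zeros_like(w[0])
    return torch.stack([
        torch.stack([zero, -w[2],  w[1]]),
        torch.stack([w[2],  zero, -w[0]]),
        torch.stack([-w[1], w[0],  zero]),
    ])

def residuals(xi, R0, t0, pts3d, f_ref, feat_q, K):
    """Feature-metric residuals for a pose perturbation xi = (w, v).

    pts3d: (N, 3) scene points; f_ref: (N, C) reference features;
    feat_q: (1, C, H, W) query feature map; K: (3, 3) intrinsics.
    """
    w, v = xi[:3], xi[3:]
    R = torch.matrix_exp(skew(w)) @ R0       # left-perturbed rotation
    t = t0 + v
    p_cam = pts3d @ R.T + t                  # points in query camera frame
    p_img = p_cam @ K.T
    uv = p_img[:, :2] / p_img[:, 2:3]        # pinhole projection to pixels
    H, W = feat_q.shape[-2:]
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                        2 * uv[:, 1] / (H - 1) - 1], dim=-1)
    f_q = F.grid_sample(feat_q, grid[None, None], align_corners=True)
    f_q = f_q[0, :, 0].T                     # (N, C) sampled query features
    return (f_q - f_ref).reshape(-1)         # stacked residual vector

def lm_step(R0, t0, pts3d, f_ref, feat_q, K, weights, lam=1e-2):
    """One damped (Levenberg-Marquardt-style) update of the 6-DoF pose."""
    xi0 = torch.zeros(6)
    r = residuals(xi0, R0, t0, pts3d, f_ref, feat_q, K)
    J = jacobian(lambda xi: residuals(xi, R0, t0, pts3d, f_ref, feat_q, K),
                 xi0)                        # (N*C, 6) via autograd
    Wt = weights.repeat_interleave(f_ref.shape[1])  # per-residual weights
    H_lm = J.T @ (Wt[:, None] * J) + lam * torch.eye(6)
    delta = torch.linalg.solve(H_lm, -(J.T @ (Wt * r)))
    R = torch.matrix_exp(skew(delta[:3])) @ R0
    return R, t0 + delta[3:]
```

Iterating this step across the feature pyramid, coarse to fine, refines the pose from a rough initialization.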
Visual Priors: Combining pointwise uncertainties predicted for the query and reference images into per-residual weights lets the network learn which regions are reliable, for example under domain shift, in the spirit of aleatoric uncertainty [36].
This weighting adapts to a range of conditions and improves pose accuracy across them.
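Here is a minimal sketch of such a weighting, assuming an `UncertaintyHead` that predicts a positive pointwise uncertainty map and a simple product-form combination rule; both are illustrative choices, not the paper's exact formulation.

```python
import torch.nn as nn
import torch.nn.functional as F

class UncertaintyHead(nn.Module):
    """Hypothetical 1x1-conv head predicting a positive pointwise
    uncertainty map from a feature map."""

    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats):
        return F.softplus(self.conv(feats))  # softplus keeps values > 0

def residual_weights(u_query, u_ref):
    """Combine per-point uncertainties sampled from the query and
    reference maps into per-residual weights in (0, 1]. The product
    form is an illustrative choice: a residual is down-weighted if
    either image deems the point unreliable (e.g. under domain shift).
    """
    return 1.0 / ((1.0 + u_query) * (1.0 + u_ref))
```

In this sketch, the output of `residual_weights` would be passed as the `weights` argument of the earlier `lm_step` helper.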
Experiments: The refinement improves performance on RobotCar Night, where motion blur makes sparse keypoint detection difficult, but yields no improvement on RobotCar Day and is detrimental on Aachen at the 0.25 m threshold, potentially because the accuracy of the ground-truth poses or camera intrinsics is itself limited at that scale.
The overall difficulty of the Oxford RobotCar dataset may also contribute to these results.