Abstract
Autonomous driving holds great potential to transform road safety and traffic efficiency by minimizing human error and reducing congestion. A key challenge in realizing this potential is the accurate estimation of steering angles, which is essential for effective vehicle navigation and control. Recent breakthroughs in deep learning have made it possible to estimate steering angles directly from raw camera inputs. However, the limited availability of navigation data can hinder optimal feature learning, degrading the system's performance in complex driving scenarios. In this paper, we propose a shared encoder trained on multiple computer vision tasks critical for urban navigation, such as depth, pose, and 3D scene flow estimation, as well as semantic, instance, panoptic, and motion segmentation. By incorporating the diverse visual information that humans use during navigation, this unified encoder might enhance steering angle estimation. To achieve effective multi-task learning within a single encoder, we introduce a multi-scale feature network for pose estimation that improves depth learning. Additionally, we employ knowledge distillation from a multi-backbone model pretrained on these navigation tasks to stabilize training and boost performance. Our findings demonstrate that a shared backbone trained on diverse visual tasks can provide broad perception capabilities. While our performance in steering angle estimation is comparable to existing methods, the integration of human-like perception through multi-task learning holds significant potential for advancing autonomous driving systems.
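The knowledge distillation mentioned above can be illustrated with a minimal feature-matching loss, in which the shared encoder (student) imitates the intermediate features of per-task teacher backbones. This is a hedged sketch with an assumed L2-on-normalized-features formulation and illustrative tensor shapes; it is not the paper's exact distillation objective.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_feats, teacher_feats):
    """Match student features to frozen teacher features at each stage
    (L2 on L2-normalized, flattened feature maps). Illustrative only."""
    loss = torch.zeros(())
    for f_s, f_t in zip(student_feats, teacher_feats):
        f_s = F.normalize(f_s.flatten(1), dim=1)
        f_t = F.normalize(f_t.flatten(1), dim=1)
        loss = loss + F.mse_loss(f_s, f_t)
    return loss / len(student_feats)

# Toy usage: two feature stages; teacher features are detached (frozen).
student = [torch.randn(2, 64, 32, 32), torch.randn(2, 128, 16, 16)]
teacher = [torch.randn(2, 64, 32, 32).detach(),
           torch.randn(2, 128, 16, 16).detach()]
loss = distillation_loss(student, teacher)
```

In practice such a loss would be added, with a weighting coefficient, to the task losses during joint training.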
Model architecture
Model Overview
Fig. 1: Overview of our multi-task training strategy. Let \(I_s\), \(I_t\), and \(I_1, I_2, \ldots, I_{16}\) represent the source image, target image, and 16 sequential images, respectively. Their corresponding features, denoted as \(f_s\), \(f_t\), and \(f_1, f_2, \ldots, f_{16}\), are extracted using a shared encoder. These features can be concatenated when necessary for subsequent processing.
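The weight-sharing scheme of Fig. 1 can be sketched as follows: a single encoder processes \(I_s\), \(I_t\), and the sequential frames, and the resulting features are concatenated for heads that need multiple views (e.g., pose). The tiny convolutional encoder and channel sizes below are placeholders for illustration, not the Swin-based backbone used in the paper.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Toy stand-in for the shared backbone (the paper uses a Swin
    Transformer); layers and channel counts are illustrative."""
    def __init__(self, out_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

encoder = SharedEncoder()
I_s = torch.randn(1, 3, 128, 128)          # source image
I_t = torch.randn(1, 3, 128, 128)          # target image
f_s, f_t = encoder(I_s), encoder(I_t)      # same weights for every input
pose_input = torch.cat([f_s, f_t], dim=1)  # concatenated features for the pose head
```

Because all tasks read from the same encoder, gradients from every task head update one set of backbone weights.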
Fig. 2: Simplified architecture of our model: (a) Depth network using target image features \(f_t\) to output depth \(\mathbf{d}_t\). (b) Multi-scale pose network using source and target image features \(f_s, f_t\) to output the relative pose \(\mathbf{T}_{t \rightarrow s}\). (c) 3D scene flow \(\mathbf{F}_C\) and motion mask \(\mathbf{M}\) networks using the RGB images and features \(f_s, f_t\). (d) Segmentation network outputting panoptic, instance, and semantic segmentations. (e) Loss computation \(L_{\text{ssup}}\) for the joint training of depth, pose, 3D scene flow, and motion mask segmentation. We denote the rigid flow \(\mathbf{F}_R\), the independent flow \(\mathbf{F}_I\), the final flow, and the sampled target image \(\hat{\mathbf{I}}_t\).
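For the self-supervised loss in Fig. 2(e), the rigid flow \(\mathbf{F}_R\) is the pixel displacement induced by the predicted depth \(\mathbf{d}_t\) and relative pose \(\mathbf{T}_{t \rightarrow s}\) under a pinhole camera model: back-project each pixel with \(\mathbf{d}_t\), transform it by \(\mathbf{T}_{t \rightarrow s}\), and re-project with the intrinsics \(K\). The sketch below shows this standard construction in NumPy; the intrinsics and depth values are made up, and the full loss (which also combines the independent flow \(\mathbf{F}_I\) through the motion mask \(\mathbf{M}\) before sampling \(\hat{\mathbf{I}}_t\)) is not reproduced here.

```python
import numpy as np

def rigid_flow(depth, K, T):
    """Rigid flow F_R from depth d_t and relative pose T_{t->s}
    (4x4 homogeneous matrix) under a pinhole model. Illustrative sketch."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)  # homogeneous pixels
    cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)             # back-project to 3D
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
    proj = K @ (T @ cam_h)[:3]                                      # transform + re-project
    uv = proj[:2] / np.clip(proj[2:], 1e-6, None)
    return (uv - pix[:2]).reshape(2, H, W)                          # F_R = p' - p

# Sanity check: an identity pose yields zero rigid flow, so the sampled
# image equals the target and the photometric loss vanishes.
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 32.0],
              [0.0,   0.0,  1.0]])
F_R = rigid_flow(np.full((64, 64), 5.0), K, np.eye(4))
```

The final flow used for sampling \(\hat{\mathbf{I}}_t\) would then blend \(\mathbf{F}_R\) and \(\mathbf{F}_I\) according to the motion mask before computing the photometric term of \(L_{\text{ssup}}\).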
Fig. 3: Our shared encoder architecture based on the Swin Transformer [1].
Fig. 4: Pose decoder architecture details.
Fig. 5: Depth decoder architecture details. The depth decoder is based on the TransDSSL architecture [2].
Fig. 6: 3D Scene Flow and Motion mask decoder architecture details.