Improving 2D Feature Representations by 3D-Aware Fine-Tuning

Yuanwen Yue, Anurag Das, Francis Engelmann, Siyu Tang, Jan Eric Lenssen
ETH Zurich · Max Planck Institute for Informatics · Google
ECCV 2024

Feature visualization. We show the PCA features and simple K-Means clustering results.

TL;DR: We propose 3D-aware fine-tuning to improve 2D foundation features. Our method starts by lifting 2D image features (e.g. DINOv2) (b) into a 3D representation. We then fine-tune the 2D foundation model using the 3D-aware features (c). We demonstrate that the fine-tuned features (d) yield improved performance on downstream tasks such as semantic segmentation and depth estimation across a variety of datasets with simple linear probing (right). Feature maps are visualized using principal component analysis (PCA).

Abstract

Current visual foundation models are trained purely on unstructured 2D data, limiting their understanding of the 3D structure of objects and scenes. In this work, we show that fine-tuning on 3D-aware data improves the quality of emerging semantic features. We design a method to lift semantic 2D features into an efficient 3D Gaussian representation, which allows us to re-render them for arbitrary views. Using the rendered 3D-aware features, we design a fine-tuning strategy that transfers this 3D awareness into a 2D foundation model. We demonstrate that models fine-tuned in this way produce features that readily improve downstream performance on semantic segmentation and depth estimation through simple linear probing. Notably, although fine-tuned on a single indoor dataset, the improvement transfers to a variety of other indoor datasets and to out-of-domain datasets. We hope our study encourages the community to consider injecting 3D awareness when training 2D foundation models.

Method

We present a two-stage pipeline. In the first stage, we lift 2D foundation features (e.g. DINOv2) into 3D-aware features by training a 3D Gaussian representation for each scene. In the second stage, we use the features rendered from multiple scenes to fine-tune the 2D foundation model, as sketched below.
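For concreteness, here is a minimal PyTorch sketch of the second stage. It is our illustration, not the paper's implementation: TinyBackbone and finetune_step are hypothetical names, the rendered 3D-aware feature maps are treated as precomputed targets from the first stage, and the MSE feature loss is an assumption.

import torch
import torch.nn.functional as F

class TinyBackbone(torch.nn.Module):
    """Stand-in for a 2D foundation model (e.g. DINOv2): images -> dense feature map."""
    def __init__(self, dim: int = 64):
        super().__init__()
        # Patch-embedding-like projection: 16x16 patches -> dim-channel features.
        self.net = torch.nn.Conv2d(3, dim, kernel_size=16, stride=16)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # (B, dim, H/16, W/16)

def finetune_step(model, optimizer, images, rendered_feats):
    """One stage-2 step: regress the model's 2D features onto the 3D-aware
    feature maps rendered from the per-scene Gaussian representation (stage 1)."""
    pred = model(images)
    # Match the spatial resolution of the rendered targets before comparing.
    pred = F.interpolate(pred, size=rendered_feats.shape[-2:], mode="bilinear")
    loss = F.mse_loss(pred, rendered_feats)  # distance choice is an assumption here
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = TinyBackbone()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
images = torch.randn(2, 3, 224, 224)   # dummy posed views from training scenes
targets = torch.randn(2, 64, 14, 14)   # dummy rendered 3D-aware feature maps
print(finetune_step(model, opt, images, targets))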



Universality

Our 3D-aware fine-tuning is model-agnostic and applies to a variety of 2D vision models, e.g. DINOv2, DINOv2-reg, CLIP, MAE, and DeiT-III.
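Because the fine-tuning loss only needs dense patch features, swapping the backbone amounts to swapping the feature extractor. The sketch below illustrates this; dense_features is a hypothetical helper, and the torch.hub / timm model identifiers are common examples that may differ across library versions.

import torch

def dense_features(name: str, images: torch.Tensor) -> torch.Tensor:
    """Return a (B, C, H', W') patch-feature map from a chosen ViT backbone."""
    if name.startswith("dinov2"):
        # torch.hub ids from the official DINOv2 repo ("_reg" = with registers).
        hub_id = "dinov2_vits14_reg" if name.endswith("reg") else "dinov2_vits14"
        model = torch.hub.load("facebookresearch/dinov2", hub_id)
        tokens = model.forward_features(images)["x_norm_patchtokens"]
    else:
        import timm  # e.g. CLIP / MAE / DeiT-III ViTs; exact ids vary by timm version
        model = timm.create_model(name, pretrained=True)
        # Drop class/register tokens, keeping only the patch tokens.
        tokens = model.forward_features(images)[:, model.num_prefix_tokens:]
    b, n, c = tokens.shape
    h = w = int(n ** 0.5)  # assumes a square patch grid
    return tokens.transpose(1, 2).reshape(b, c, h, w)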



Downstream Evaluation

We evaluate on downstream semantic segmentation and depth estimation with linear probing. The 3D-aware fine-tuning is performed only on a single indoor dataset, ScanNet++. We first evaluate on the ScanNet++ validation set and then move on to the other indoor datasets ScanNet and NYUd. To investigate the generalization ability of the fine-tuned features, we also perform out-of-domain evaluation on the general-domain datasets ADE20K and Pascal VOC as well as the outdoor dataset KITTI. Our method brings improvements on all datasets.
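A minimal linear-probing sketch for segmentation, under our own assumptions rather than the paper's exact protocol: the backbone stays frozen, and a single 1x1 convolution (a per-pixel linear layer) is trained on its dense features. For depth estimation, the same probe with a single-channel regression output and an L1/L2 loss would play the analogous role.

import torch
import torch.nn.functional as F

NUM_CLASSES = 20  # placeholder label-set size

class LinearProbe(torch.nn.Module):
    """A single linear (1x1 conv) layer on top of frozen dense features."""
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.head = torch.nn.Conv2d(feat_dim, num_classes, kernel_size=1)

    def forward(self, feats: torch.Tensor, out_size) -> torch.Tensor:
        # Per-pixel linear classification, upsampled to label resolution.
        return F.interpolate(self.head(feats), size=out_size, mode="bilinear")

feats = torch.randn(2, 384, 32, 32)                   # frozen (fine-tuned) backbone features
labels = torch.randint(0, NUM_CLASSES, (2, 256, 256)) # dummy segmentation labels
probe = LinearProbe(feat_dim=384, num_classes=NUM_CLASSES)
opt = torch.optim.SGD(probe.parameters(), lr=1e-2)

loss = F.cross_entropy(probe(feats, labels.shape[-2:]), labels)
opt.zero_grad()
loss.backward()
opt.step()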

Citation

@inproceedings{yue2024fit3d,
  title     = {{Improving 2D Feature Representations by 3D-Aware Fine-Tuning}},
  author    = {Yue, Yuanwen and Das, Anurag and Engelmann, Francis and Tang, Siyu and Lenssen, Jan Eric},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = {2024}
}

Acknowledgment

Francis Engelmann is partially supported by an ETH AI Center postdoctoral research fellowship and an ETH Zurich Career Seed Award. This project was also partially supported by the Saarbrücken Research Center for Visual Computing, Interaction and AI.