DeepScene

Semantic Segmentation using Deep Convolutional Neural Networks

DeepScene contains our unimodal AdapNet++ and multimodal SSMA models trained on various datasets. Select a dataset and a corresponding model to load from the drop-down box below, then click Random Example to see live segmentation results. The code for these models is available in our GitHub repository.

Note: No pre-computation is performed for these images; every click is treated as a fresh upload. Inference time may vary depending on the current server load and the number of users.
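For offline use, the released models can be run in much the same way. Below is a minimal single-image inference sketch in TensorFlow; the file name frozen_model.pb and the tensor names data:0 and softmax:0 are illustrative assumptions, so check the repository documentation for the actual export format and names.

# Minimal single-image inference sketch (TensorFlow 1.x style API).
# ASSUMPTIONS: a frozen graph "frozen_model.pb" with input tensor "data:0"
# and output tensor "softmax:0" -- illustrative names, not the repo's own.
import numpy as np
import tensorflow.compat.v1 as tf
from PIL import Image

tf.disable_eager_execution()

graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

# Load and resize an input image, then add a batch dimension: (1, H, W, 3).
image = np.asarray(Image.open("example.png").resize((768, 384)), np.float32)
batch = image[None, ...]

with tf.Session(graph=graph) as sess:
    probs = sess.run("softmax:0", feed_dict={"data:0": batch})
labels = np.argmax(probs, axis=-1)[0]  # per-pixel class indices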


Input Image / Segmentation Output

Class legend: Picture, Bathtub, Counter, Door, Floor, Wall, Sofa, Unlabeled, Bed, Cabinet, Window, Other Furniture, Shower Curtain, Desk, Bookshelf, Table, Chair, Toilet, Curtain, Refrigerator, Sink

Datasets

Overview


The Freiburg Forest dataset was collected using our Viona autonomous mobile robot platform equipped with cameras for capturing multi-spectral and multi-modal images. The dataset can be used to evaluate perception algorithms for tasks such as segmentation, detection, and classification. All scenes were recorded at 20 Hz with a camera resolution of 1024x768 pixels. The data was collected on three different days to capture sufficient variability in lighting conditions, as shadows and sun angles play a crucial role in the quality of the acquired images. The robot traversed about 4.7 km each day. We provide manually annotated pixel-wise ground-truth segmentation masks for six classes: Obstacle, Trail, Sky, Grass, Vegetation, and Void.
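The ground-truth masks encode each class as a distinct color. A minimal sketch for converting such a mask into integer class IDs follows; the RGB palette below is purely illustrative, so consult the dataset documentation for the actual class colors.

# Convert a color-encoded ground-truth mask to integer class IDs.
# ASSUMPTION: the RGB palette below is illustrative, not the official one.
import numpy as np
from PIL import Image

PALETTE = {
    (0, 0, 0):       0,  # Void
    (170, 170, 170): 1,  # Trail
    (0, 255, 0):     2,  # Grass
    (102, 102, 51):  3,  # Vegetation
    (0, 120, 255):   4,  # Sky
    (255, 0, 0):     5,  # Obstacle
}

def mask_to_ids(path):
    rgb = np.asarray(Image.open(path).convert("RGB"))
    ids = np.zeros(rgb.shape[:2], dtype=np.uint8)
    for color, class_id in PALETTE.items():
        ids[np.all(rgb == color, axis=-1)] = class_id
    return ids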

For each spectrum/modality, we provide one zip file containing all the sequences. Each sequence is a continuous stream of camera frames. All the multi-spectral images are in the PNG format and the depth images are in the 16-bit TIFF format. For the evaluations reported in the paper, we provide two text files containing the train and test splits. If you would like to contribute to the annotations, please contact us. More details and evaluations can be found in our papers listed under Publications.
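A minimal loading sketch under these conventions is shown below; the directory layout and file names are illustrative assumptions, since the extracted zip files define the actual structure.

# Load one frame per modality plus the train split.
# ASSUMPTION: the paths below are illustrative; the extracted zip files
# define the actual directory layout and file names.
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("rgb/b1-00001.png"))      # 8-bit PNG, (H, W, 3)
depth = np.asarray(Image.open("depth/b1-00001.tif"))  # 16-bit TIFF, (H, W)
print(rgb.shape, depth.dtype)                         # depth stays 16-bit

# The split files list one frame identifier per line.
with open("train.txt") as f:
    train_ids = [line.strip() for line in f if line.strip()]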

BibTeX


Please cite our work if you use the Freiburg Forest Dataset or report results based on it.

@InProceedings{valada16iser,
author = {Abhinav Valada and Gabriel Oliveira and Thomas Brox and Wolfram Burgard},
title = {Deep Multispectral Semantic Scene Understanding of Forested Environments using Multimodal Fusion},
booktitle = {International Symposium on Experimental Robotics (ISER)},
year = {2016},
}

License Agreement


The data is provided for non-commercial use only. By downloading the data, you accept the license agreement, which can be downloaded here. If you report results based on the Freiburg Forest dataset, please consider citing the paper mentioned above.

Downloads


Freiburg Forest Raw

The raw dataset contains over 15,000 images of unstructured forest environments, captured at 20 Hz using our Viona autonomous robot platform equipped with a Bumblebee2 stereo vision camera.

Freiburg Forest Multi-Modal/Spectral Annotated

The dataset contains the following multi-modal/spectral images with ground-truth annotations: RGB, Depth, NIR, NRG, NDVI, EVI, and their variants. Pixel-level annotations are provided for six semantic classes: Trail, Grass, Vegetation, Obstacle, Sky, and Void.
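The vegetation indices are derived from the near-infrared and visible channels using the standard formulas NDVI = (NIR - Red) / (NIR + Red) and EVI = 2.5 (NIR - Red) / (NIR + 6 Red - 7.5 Blue + 1). A minimal sketch of the computation, assuming channels already normalized to [0, 1]:

# Compute NDVI and EVI from co-registered NIR and RGB channels.
# Standard vegetation-index formulas; inputs assumed normalized to [0, 1].
import numpy as np

def ndvi(nir, red, eps=1e-6):
    # Normalized Difference Vegetation Index.
    return (nir - red) / (nir + red + eps)

def evi(nir, red, blue, eps=1e-6):
    # Enhanced Vegetation Index (coefficients G=2.5, C1=6, C2=7.5, L=1).
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0 + eps)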

Code

A TensorFlow implementation of this project (AdapNet++, SSMA, AdapNet, CMoDE) can be found in our GitHub repository for academic use; it is released under the GPLv3 license. For any commercial purpose, please contact the authors.

Video Demos

Qualitative Unimodal AdapNet++ Results

RGB input and corresponding segmentation output examples are shown for: Cityscapes, Synthia, SUN RGB-D, ScanNet v2, and Freiburg Forest.

Qualitative Multimodal SSMA Results

Examples with two input modalities and the fused segmentation output are shown for: Synthia-Rain, Synthia-Winter, Synthia-Night, Synthia-Fog, Cityscapes, SUN RGB-D, ScanNet v2, and Freiburg Forest.

Publications

  • Abhinav Valada, Rohit Mohan, Wolfram Burgard
    Self-Supervised Model Adaptation for Multimodal Semantic Segmentation
    International Journal of Computer Vision (IJCV), Special Issue: Deep Learning for Robotic Vision, 128(5):1239-1285, 2019.

  • Abhinav Valada, Johan Vertens, Ankit Dhall, Wolfram Burgard
    AdapNet: Adaptive Semantic Segmentation in Adverse Environmental Conditions
    Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Singapore, 2017.

  • Abhinav Valada, Ankit Dhall, Wolfram Burgard
    Convoluted Mixture of Deep Experts for Robust Semantic Segmentation
    IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Workshop, State Estimation and Terrain Perception for All Terrain Mobile Robots, Daejeon, Korea, 2016.

  • Abhinav Valada, Gabriel L. Oliveira, Thomas Brox, Wolfram Burgard
    Deep Multispectral Semantic Scene Understanding of Forested Environments using Multimodal Fusion
    International Symposium on Experimental Robotics (ISER), Tokyo, Japan, 2016.

  • Abhinav Valada, Gabriel L. Oliveira, Thomas Brox, Wolfram Burgard
    Robust Semantic Segmentation using Deep Fusion
    Robotics: Science and Systems (RSS) Workshop, Limits and Potentials of Deep Learning in Robotics, Ann Arbor, USA, 2016.

  • Gabriel L. Oliveira, Abhinav Valada, Wolfram Burgard, Thomas Brox
    Deep Learning for Human Part Discovery in Images
    Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 2016.