The 1st International Workshop on Dynamic Scene Reconstruction

Reconstruction of general dynamic scenes is motivated by potential applications in film and broadcast production, together with the ultimate goal of automatic understanding of real-world scenes from distributed camera networks. With recent advances in hardware and the advent of virtual and augmented reality, dynamic scene reconstruction is being applied to increasingly complex scenes, with applications in entertainment, games, film, the creative industries and AR/VR/MR. We welcome contributions to this workshop in the form of oral presentations and posters. Suggested topics include, but are not limited to:

  • Dynamic 3D reconstruction from single, stereo or multiple views
  • Multi-modal dynamic scene modelling (RGBD, LIDAR, 360 video, light-field)
  • 4D reconstruction and modelling
  • 3D segmentation and recognition
  • 3D/4D data acquisition, representation, compression and transmission
  • Scene analysis and understanding
  • Structure-from-motion, camera calibration and pose estimation
  • Geometry processing
  • Computational photography
  • Appearance and reflectance modelling
  • Scene modelling in the wild
  • Applications of dynamic scene reconstruction (virtual/augmented/mixed reality, character animation, free-viewpoint video, relighting, medical imaging, creative content production, animal welfare, HCI, sports)

The objectives for this workshop are to:

  • create a common portal for papers and datasets in dynamic scene reconstruction
  • discuss techniques to capture high-quality ground truth for dynamic scenes
  • develop an evaluation framework for dynamic scene reconstruction

Submission

Instructions

Papers are limited to 8 pages in the CVPR format (see the main conference author guidelines). All papers will be reviewed under a double-blind policy and will be selected based on relevance, significance and novelty of results, technical merit, and clarity of presentation.

All papers should be submitted through the CMT Submission Site.

Important Dates

Action                       Date
Paper submission deadline    March 15, 2019 (extended from March 10)
Notification to authors      April 01, 2019
Camera-ready deadline        April 08, 2019

Speakers

Keynote speakers: Andrew Fitzgibbon, Christian Theobalt, Matthias Nießner, Angjoo Kanazawa

Program

Time            Session
09:00 - 09:10   Introduction
09:10 - 10:00   Keynote - Andrew Fitzgibbon
10:00 - 10:20   Coffee Break
10:20 - 11:20   Oral presentations:
                • Revealing Scenes by Inverting Structure from Motion Reconstructions
                  (Francesco Pittaluga, Sanjeev J. Koppal, Sing Bing Kang, Sudipta N. Sinha)
                • What Do Single-View 3D Reconstruction Networks Learn?
                  (Maxim Tatarchenko, Stephan R. Richter, René Ranftl, Zhuwen Li, Vladlen Koltun, Thomas Brox)
                • Jumping Manifolds: Geometry Aware Dense Non-Rigid Structure from Motion
                  (Suryansh Kumar)
11:20 - 12:10   Keynote - Christian Theobalt
12:20 - 13:20   Lunch Break and Poster Session
13:20 - 14:10   Keynote - Matthias Nießner
14:10 - 15:10   Oral presentations:
                • 3D Human Pose Estimation From Multi Person, Multi Camera 360 Scenes
                  (Matthew Shere, Hansung Kim, Adrian Hilton)
                • Live Reconstruction of Large-Scale Dynamic Outdoor Worlds
                  (Ondrej Miksik, Vibhav Vineet)
                • FML: Face Model Learning from Videos
                  (Ayush Tewari, Florian Bernard, Pablo Garrido, Gaurav Bharaj, Mohamed Elgharib, Hans-Peter Seidel, Patrick Perez, Michael Zollhofer, Christian Theobalt)
15:10 - 15:30   Coffee Break
15:30 - 16:20   Keynote - Angjoo Kanazawa
16:20 - 16:30   Close

Organizers