The 3rd International Workshop on Dynamic Scene Reconstruction

Reconstruction of general dynamic scenes is motivated by potential applications in film and broadcast production, together with the ultimate goal of automatic understanding of real-world scenes from distributed camera networks. With recent advances in hardware and the advent of virtual and augmented reality, dynamic scene reconstruction is being applied to increasingly complex scenes, with applications in entertainment, games, film, the creative industries and AR/VR/MR. We welcome contributions to this workshop in the form of oral presentations and posters. Suggested topics include, but are not limited to:

  • Dynamic 3D reconstruction from single, stereo or multiple views
  • Learning-based methods in dynamic scene reconstruction and understanding
  • Multi-modal dynamic scene modelling (RGBD, LIDAR, 360 video, light fields)
  • 4D reconstruction and modelling
  • 3D/4D data acquisition, representation, compression and transmission
  • Scene analysis and understanding in 2D and 3D
  • Structure from motion, camera calibration and pose estimation
  • Digital humans: motion and performance capture, bodies, faces, hands
  • Geometry processing
  • Computational photography
  • Appearance and reflectance modelling
  • Scene modelling in the wild, moving cameras, handheld cameras
  • Applications of dynamic scene reconstruction (VR/AR, character animation, free-viewpoint video, relighting, medical imaging, creative content production, animal tracking, HCI, sports)

The objectives of this workshop are to:

  • Bring together leading experts in the field of general dynamic scene reconstruction to help propel the field forward.
  • Create and maintain an online database of datasets and papers.
  • Accelerate research progress in dynamic scene reconstruction to match the requirements of real-world applications, by identifying the key challenges and ways to address them through a panel discussion between experts, presenters and attendees.

Submission

We welcome submissions from both industry and academia, including interdisciplinary work and work from those outside of the mainstream computer vision community.

Instructions

Papers are limited to 8 pages in the CVPR format (see the main conference author guidelines). All papers will be reviewed under a double-blind policy. Papers will be selected based on relevance, significance and novelty of results, technical merit, and clarity of presentation.

Submission website

Important Dates

Action                      Date
Paper submission deadline   March 26, 2021 (extended from March 12)
Notification to authors     April 2, 2021
Camera ready deadline       April 16, 2021

Speakers

Professor Lourdes Agapito’s research has consistently focused on inferring 3D information from video acquired by a single moving camera. Her early research focused on static scenes, but her attention soon turned to the much more challenging problem of estimating the 3D shape of non-rigid objects (Non-Rigid Structure from Motion, NR-SFM) and of complex dynamic scenes in which an unknown number of objects may be moving, and possibly deforming, independently. Her research group investigates all theoretical and practical aspects of NR-SFM: deformable tracking; dense optical flow estimation and non-rigid video registration; 3D reconstruction of deformable and articulated structure; and dense 3D modelling of non-rigid dynamic scenes.

Talk Title: TBC

Dr Hao Li is best known for his work on dynamic geometry processing and data-driven techniques for making 3D human digitization and facial animation accessible to the masses. During his PhD, he co-created the first real-time, markerless system for performance-driven facial animation based on depth sensors, which won the best paper award at the ACM SIGGRAPH / Eurographics Symposium on Computer Animation in 2009. His recent research focuses on combining techniques from deep learning and computer graphics to facilitate the creation of 3D avatars and to enable truly immersive face-to-face communication and telepresence in virtual reality. In 2015, Dr Li founded Pinscreen, Inc. in Los Angeles, which introduced a technology that can generate realistic 3D avatars of a person, including their hair, from a single photograph. Given the ease of generating and manipulating digital faces, he has also been raising public awareness about the threat of manipulated videos such as deepfakes.

Talk Title: TBC

Program

Time TBC

Time            Session
08:30 - 08:40   Welcome and Introduction
08:40 - 09:25   Keynote 1
09:25 - 10:25   Paper Session (15 mins each)
10:30 - 10:50   Break
10:50 - 11:35   Keynote 2
11:35 - 12:05   Panel Discussion
12:05 - 12:15   Close

Organizers