The 3rd International Workshop on Dynamic Scene Reconstruction
Reconstruction of general dynamic scenes is motivated by potential applications in film and broadcast production, together with the ultimate goal of automatically understanding real-world scenes from distributed camera networks. With recent advances in hardware and the advent of virtual and augmented reality, dynamic scene reconstruction is being applied to increasingly complex scenes, with applications in entertainment, games, film, the creative industries and AR/VR/MR. We welcome contributions to this workshop in the form of oral presentations and posters. Suggested topics include, but are not limited to:
- Dynamic 3D reconstruction from single, stereo or multiple views
- Learning-based methods in dynamic scene reconstruction and understanding
- Multi-modal dynamic scene modelling (RGBD, LIDAR, 360 video, light fields)
- 4D reconstruction and modelling
- 3D/4D data acquisition, representation, compression and transmission
- Scene analysis and understanding in 2D and 3D
- Structure from motion, camera calibration and pose estimation
- Digital humans: motion and performance capture, bodies, faces, hands
- Geometry processing
- Computational photography
- Appearance and reflectance modelling
- Scene modelling in the wild, moving cameras, handheld cameras
- Applications of dynamic scene reconstruction (VR/AR, character animation, free-viewpoint video, relighting, medical imaging, creative content production, animal tracking, HCI, sports)
The objectives for this workshop are to:
- Bring together leading experts in the field of general dynamic scene reconstruction to help propel the field forward.
- Create and maintain an online database of datasets and papers.
- Accelerate research progress in dynamic scene reconstruction to match the requirements of real-world applications by identifying the challenges, and ways to address them, through a panel discussion between experts, presenters and attendees.
Speakers
Professor Lourdes Agapito’s research has consistently focused on inferring 3D information from video acquired by a single moving camera. Prof Agapito’s early research focused on static scenes, but her attention soon turned to the much more challenging problem of estimating the 3D shape of non-rigid objects (Non-Rigid Structure from Motion, NR-SFM) and of complex dynamic scenes in which an unknown number of objects may be moving, and possibly deforming, independently. Her research group investigates all theoretical and practical aspects of NR-SFM: deformable tracking; dense optical flow estimation and non-rigid video registration; 3D reconstruction of deformable and articulated structure; and dense 3D modelling of non-rigid dynamic scenes.
Hao Li is CEO and Co-Founder of Pinscreen, a startup that builds cutting-edge AI-driven virtual avatar technologies. He is also a Distinguished Fellow of the Computer Vision Group at UC Berkeley. Before that, he was an Associate Professor of Computer Science at the University of Southern California and the director of the Vision and Graphics Lab at the USC Institute for Creative Technologies. Hao's work in Computer Graphics and Computer Vision focuses on digitizing humans and capturing their performances for immersive communication, telepresence in virtual worlds, and entertainment. His research involves the development of novel deep learning, data-driven, and geometry processing algorithms. He is known for his seminal work in avatar creation, facial animation, hair digitization and dynamic shape processing, as well as his recent efforts to prevent the spread of malicious deepfakes. He was previously a visiting professor at Weta Digital, a research lead at Industrial Light & Magic / Lucasfilm, and a postdoctoral fellow at Columbia and Princeton Universities. He was named one of MIT Technology Review's 35 innovators under 35 in 2013 and has received the Google Faculty Award, the Okawa Foundation Research Grant, and the Andrew and Erna Viterbi Early Career Chair. He won the Office of Naval Research (ONR) Young Investigator Award in 2018 and was named to the DARPA ISAT Study Group in 2019. In 2020, he won the ACM SIGGRAPH Real-Time Live! “Best in Show” award. Hao obtained his PhD at ETH Zurich and his MSc at the University of Karlsruhe (TH).
Program
| Time (PT) | Session |
| --- | --- |
| 08:30 - 08:40 | Welcome and Introduction (Video) |
| 08:40 - 09:25 | Keynote 1: Lourdes Agapito (Video) |
| 09:25 - 10:25 | Paper Session, 10 mins each (Video): Dynamic Appearance Modelling from Minimal Cameras; Consistent 3D Human Shape from Repeatable Action; Temporal Consistency Loss for High Resolution Textured and Clothed 3D Human Reconstruction from Monocular Video; Super-Resolution Appearance Transfer for 4D Human Performances; Editable Free-viewpoint Video Using a Layered Neural Representation; Papers Q&A |
| 10:30 - 10:50 | Break |
| 10:50 - 11:35 | Keynote 2: Hao Li (Video) |
| 11:35 - 12:05 | Panel Discussion (Video) |
| 12:05 - 12:15 | Close and Best Paper (Video) |
Organizers
Submission
We welcome submissions from both industry and academia, including interdisciplinary work and work from those outside of the mainstream computer vision community.
Instructions
Papers are limited to 8 pages in the CVPR format (see the main conference author guidelines). All papers will be reviewed under a double-blind policy. Papers will be selected based on relevance, significance and novelty of results, technical merit, and clarity of presentation.
Important Dates
| Action | Date |
| --- | --- |
| Paper submission deadline | |
| Notification to authors | |
| Camera ready deadline | |