DynaVis 2023 Recap

The 4th International Workshop on Dynamic Scene Reconstruction

Reconstruction of general dynamic scenes is motivated by potential applications in film and broadcast production, together with the ultimate goal of automatically understanding real-world scenes captured by distributed camera networks. With recent advances in hardware and the advent of virtual and augmented reality, dynamic scene reconstruction is being applied to increasingly complex scenes, with applications in entertainment, games, film, the creative industries, and AR/VR/MR. We welcome contributions to this workshop in the form of oral presentations and posters. Suggested topics include, but are not limited to:

  • Dynamic 3D reconstruction from single, stereo or multiple views
  • Learning-based methods in dynamic scene reconstruction and understanding
  • Multi-modal dynamic scene modelling (RGBD, LiDAR, 360° video, light fields)
  • 4D reconstruction and modelling
  • 3D/4D data acquisition, representation, compression and transmission
  • Scene analysis and understanding in 2D and 3D
  • Structure from motion, camera calibration and pose estimation
  • Digital humans: motion and performance capture, bodies, faces, hands
  • Geometry processing
  • Computational photography
  • Appearance and reflectance modelling
  • Scene modelling in the wild, moving cameras, handheld cameras
  • Applications of dynamic scene reconstruction (VR/AR, character animation, free-viewpoint video, relighting, medical imaging, creative content production, animal tracking, HCI, sports)

The objectives of this workshop are to:

  • Bring together leading experts in the field of general dynamic scene reconstruction to help propel the field forward.
  • Create and maintain an online database of datasets and papers.
  • Accelerate research progress in dynamic scene reconstruction to match the requirements of real-world applications, by identifying the challenges and ways to address them through a panel discussion between experts, presenters and attendees.

Speakers

Fatma Güney is an Assistant Professor at Koç University in Istanbul. She received her Ph.D. from MPI in Germany and worked as a postdoctoral researcher at VGG in Oxford. She is a recipient of multiple outstanding reviewer awards at CVPR and ICCV. Her research interests include 3D computer vision and representation learning from video sequences.

Gerard Pons-Moll is a Professor in the Department of Computer Science at the University of Tübingen. He is also the head of the Emmy Noether independent research group "Real Virtual Humans" and a senior researcher at the Max Planck Institute for Informatics (MPII) in Saarbrücken, Germany. His research lies at the intersection of computer vision, computer graphics and machine learning. It has produced some of the most advanced statistical human body models of pose, shape, soft tissue and clothing (currently used in a number of applications in industry and research), as well as algorithms to track and reconstruct 3D people models from images, video, depth, and IMUs. His most recent interests span human-scene interaction and 3D scene and object representation learning.


Capturing and modelling human behavior with neural fields
Abstract: Capturing and modelling 3D humans and the objects they interact with from consumer-grade sensors such as RGB and RGBD cameras is extremely challenging due to heavy occlusions, noise and complex interactions. Solving this problem clearly requires learned models of human shape and pose, and of the space of possible interactions. In this talk, I will show that neural fields are a powerful paradigm for learning such models from data, and that they can be combined with classical model-based fitting techniques to obtain the best of both worlds. I will also show that joint reasoning about humans and objects/scenes is crucial for capture and animation, and that object pose can even be inferred from human pose alone.

Program


Time (PT)      Session

14:00 – 14:10  Welcome and Introduction
14:10 – 14:25  Invited Talk: Fatma Güney
14:25 – 15:10  Paper Session 1 (15 mins each)

CAMM: Building Category-Agnostic and Animatable 3D Models from Monocular Videos
Tianshu Kuai, Akash Karthikeyan, Yash Kant, Ashkan Mirzaei, Igor Gilitschenski

Unbiased 4D: Monocular 4D Reconstruction with a Neural Deformation Model
Erik C.M. Johnson, Marc Habermann, Soshi Shimada, Vladislav Golyanik, Christian Theobalt

Robust Monocular 3D Human Motion with Lasso-Based Differential Kinematics
Abed Malti


15:10 – 15:45  Coffee Break
15:45 – 16:30  Keynote 2: Gerard Pons-Moll
16:30 – 17:15  Paper Session 2 (15 mins each)

CAT-NeRF: Constancy-Aware Tx²Former for Dynamic Body Modeling
Haidong Zhu, Zhaoheng Zheng, Wanrong Zheng, Ram Nevatia

DynamicStereo: Consistent Dynamic Depth from Stereo Videos
Nikita Karaev, Ignacio Rocco, Benjamin Graham, Natalia Neverova, Andrea Vedaldi, Christian Rupprecht
Invited from CVPR 2023


DynIBaR: Neural Dynamic Image-Based Rendering
Zhengqi Li, Qianqian Wang, Forrester Cole, Richard Tucker, Noah Snavely
Invited from CVPR 2023 (Award Candidate)


17:15 – 17:20  Closing Remarks

Organizers

Program Committee

The organising committee would like to express their gratitude to the program committee, who provided their time and expertise to ensure that accepted papers are of high quality, suitable for DynaVis, and of sound scientific merit.

Akin Caliskan, Flawless AI
Fabian Prada, Meta
Franziska Mueller, Google
Helge Rhodin, UBC
Marco Pesavento, University of Surrey
Timur Bagautdinov, Meta Reality Labs

Submission

We welcome submissions from both industry and academia, including interdisciplinary work and work from those outside the mainstream computer vision community.

Instructions

Papers are limited to 8 pages in the CVPR format (see the main conference author guidelines). All papers will be reviewed under a double-blind policy. Papers will be selected based on relevance, significance and novelty of results, technical merit, and clarity of presentation.

Submission website

Important Dates

Action                     Date
Paper submission deadline  March 17, 2023
Notification to authors    March 29, 2023
Camera-ready deadline      April 5, 2023