Speaker: Diego Thomas
Date: March 31st, 2014
Place: room 102, Faculty of Science Bldg. 7, Hongo Campus, The University of Tokyo
The generation of fine 3D models from RGB-D measurements is of wide interest to the computer vision community, with various potential applications. For example, 3D models of real scenes can be used in serious games or for indoor space organisation, while 3D models of humans (avatars) can be used for remote user interaction in virtual environments. With recent inexpensive depth sensors such as the Microsoft Kinect camera or the Asus Xtion Pro camera (also called RGB-D cameras), capturing depth information in indoor environments has become an easy task.
This new set-up opens new possibilities for 3D modeling, and several software systems have already been proposed to perform live 3D reconstruction using RGB-D cameras. We propose a new flexible 3D surface representation based on a set of parametric surface patches that has a low memory footprint and nevertheless achieves accurate generation of 3D models from RGB-D image sequences. Projecting a scene or an object onto different parametric surface patches significantly reduces the size of the 3D representation, allowing us to generate textured 3D models with lower memory requirements while remaining accurate and easy to update with live RGB-D measurements. Experimental results on two different scenarios (indoor scene reconstruction and 3D face reconstruction) confirm the effectiveness of the proposed representation, showing accurate generation of 3D models.
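To give a rough intuition for why projecting measurements onto parametric patches saves memory, consider the simplest patch: a plane. 3D points near the plane can be stored as a dense 2D grid of scalar offsets along the patch normal (a "bump image"), one float per cell instead of three floats per point or a full 3D voxel grid. The sketch below is only an illustration of this idea under simplifying assumptions (a single planar patch with hand-picked axes, nearest-cell assignment, no texture); the function names and parameters are hypothetical, not the speaker's actual implementation.

```python
import numpy as np

def project_to_patch(points, origin, u_axis, v_axis, normal, grid_res, cell_size):
    """Project 3D points onto a planar patch and store their offsets
    along the patch normal in a 2D grid (a "bump image").

    points:    (N, 3) array of 3D points
    origin:    a point on the patch plane
    u_axis, v_axis, normal: orthonormal patch frame
    """
    rel = points - origin
    u = rel @ u_axis          # in-plane coordinates
    v = rel @ v_axis
    h = rel @ normal          # signed offset from the patch plane
    iu = np.floor(u / cell_size).astype(int)
    iv = np.floor(v / cell_size).astype(int)
    grid = np.full((grid_res, grid_res), np.nan)  # NaN marks empty cells
    mask = (iu >= 0) & (iu < grid_res) & (iv >= 0) & (iv < grid_res)
    grid[iv[mask], iu[mask]] = h[mask]
    return grid

def patch_to_points(grid, origin, u_axis, v_axis, normal, cell_size):
    """Reconstruct 3D points from the bump image (at cell centres)."""
    iv, iu = np.nonzero(~np.isnan(grid))
    u = (iu + 0.5) * cell_size
    v = (iv + 0.5) * cell_size
    h = grid[iv, iu]
    return origin + np.outer(u, u_axis) + np.outer(v, v_axis) + np.outer(h, normal)
```

A live system in this spirit would fuse each new RGB-D frame by updating the affected grid cells (e.g. averaging offsets), which is far cheaper than updating a volumetric representation of the same region.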