members:steinbrf [2014/02/18 17:00] (current) steinbrf
{{page>

=== Research Interests ===
Correspondence Problems, Segmentation,

=== Brief Bio ===
Frank Steinbrücker received his Bachelor's degree.
Since September 2008 he has been a Ph.D. student in the Research Group for Computer Vision,
Image Processing and Pattern Recognition at the University of Bonn, headed by
[[cremers|Professor Daniel Cremers]].

=== Visual Odometry ===
At ICCV 2011 we published a method for estimating camera poses from RGB-D images.
In the video below, the Kinect camera moves through a static scene and the camera poses are accurately estimated.

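The core operation behind this kind of dense RGB-D pose estimation can be sketched as follows: each pixel is lifted to 3-D using its depth, moved by a candidate rigid-body motion, and re-projected into the second frame; the pose is then chosen so that the photometric mismatch between the two frames is small. A minimal sketch of the warp, where the pinhole intrinsics and all function names are illustrative assumptions, not taken from the paper:

```python
# Hypothetical pinhole intrinsics (fx, fy, cx, cy) -- illustrative values,
# not figures from the publication.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def backproject(u, v, z):
    """Lift pixel (u, v) with depth z to a 3-D point in camera coordinates."""
    return ((u - CX) * z / FX, (v - CY) * z / FY, z)

def project(p):
    """Project a 3-D point back onto the image plane."""
    x, y, z = p
    return (FX * x / z + CX, FY * y / z + CY)

def transform(R, t, p):
    """Apply a rigid-body motion (rotation R as 3x3 nested lists, translation t)."""
    x, y, z = p
    return tuple(R[i][0] * x + R[i][1] * y + R[i][2] * z + t[i] for i in range(3))

def warp(u, v, z, R, t):
    """Warp pixel (u, v) of the first frame into the second frame.

    Dense RGB-D odometry searches for the (R, t) that makes the photometric
    error I1(u, v) - I2(warp(u, v, z, R, t)) small over all pixels."""
    return project(transform(R, t, backproject(u, v, z)))
```

With the identity motion the warp maps every pixel onto itself, which is a quick sanity check for an implementation along these lines.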
=== Dense Mapping of Large RGB-D Sequences ===
In our ICCV 2013 publication we describe a method for the volumetric fusion of large RGB-D sequences. The video below shows the mesh visualization of our office floor, a scene computed from more than 24,000 RGB-D images captured with the Asus Xtion sensor. The reconstruction ran at more than 200 Hz on a GTX 680. The finest resolution was 5 mm, and the entire scene, including color, fit into approximately 2.5 GB of GPU RAM.

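A back-of-the-envelope calculation shows why fitting such a scene into roughly 2.5 GB is nontrivial: a dense voxel grid at 5 mm over a whole office floor would be far larger. The floor extent and per-voxel byte count below are illustrative assumptions, not figures from the paper:

```python
# Back-of-the-envelope: a dense voxel grid for an office floor at 5 mm.
VOXEL = 0.005               # 5 mm voxel edge length, as in the text
extent = (20.0, 20.0, 3.0)  # assumed floor dimensions in metres (illustrative)

voxels = 1
for side in extent:
    voxels *= int(side / VOXEL)   # 4000 * 4000 * 600 voxels

bytes_per_voxel = 8               # e.g. distance + weight + color (assumed packing)
dense_bytes = voxels * bytes_per_voxel

print(voxels)                     # 9,600,000,000 voxels
print(dense_bytes / 2**30)        # about 71.5 GiB -- far above the ~2.5 GB reported
```

Under these assumptions a dense grid would need tens of gigabytes, which is why a memory-efficient representation is needed to stay near the reported 2.5 GB.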
While the method published at ICCV 2013 required a GPU to run in real time, in our paper published at ICRA 2014 we demonstrated that the mapping part of dense volumetric RGB-D image fusion also works on a single standard CPU core at camera speed.

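The per-voxel update at the heart of this kind of volumetric fusion is only a clamped weighted running average, which helps explain why it is cheap enough for a single CPU core. A minimal sketch, with the truncation threshold and all names as illustrative assumptions rather than details from the papers:

```python
TRUNC = 0.05  # truncation band in metres (assumed value)

def fuse(voxel, observed_dist, obs_weight=1.0):
    """Fold one new signed-distance observation into a voxel.

    voxel is a (D, W) pair of accumulated distance and weight; the observed
    distance is clamped to the truncation band, then folded in as a
    weighted running average. Returns the updated (D, W) pair."""
    d = max(-TRUNC, min(TRUNC, observed_dist))
    D, W = voxel
    W_new = W + obs_weight
    D_new = (D * W + d * obs_weight) / W_new
    return (D_new, W_new)
```

Each depth image then amounts to one such constant-time update per visited voxel, with no iteration or optimization involved, which is the kind of workload a single CPU core can sustain at camera frame rates.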
====== Publications ======