q-Space Novelty Detection with Variational Autoencoders (bibtex)
by A. Vasilev, V. Golkov, M. Meissner, I. Lipp, E. Sgarlata, V. Tomassini, D. K. Jones and D. Cremers
Reference:
q-Space Novelty Detection with Variational Autoencoders (A. Vasilev, V. Golkov, M. Meissner, I. Lipp, E. Sgarlata, V. Tomassini, D. K. Jones and D. Cremers), In MICCAI 2019 International Workshop on Computational Diffusion MRI, 2019. 
Bibtex Entry:
@inproceedings{Vasilev-et-al-2018,
 author = {A. Vasilev and V. Golkov and M. Meissner and I. Lipp and E. Sgarlata and V. Tomassini and D. K. Jones and D. Cremers},
 title = {{q}-{S}pace Novelty Detection with Variational Autoencoders},
 year = {2019},
 booktitle = {{MICCAI} 2019 International Workshop on Computational Diffusion {MRI}},
 eprint = {1806.02997},
 eprinttype = {arXiv},
 keywords = {deep learning, novelty detection, anomaly detection, neural networks, medical imaging, magnetic resonance imaging, diffusion MRI},
 award = {Oral Presentation},
}

PhD Thesis

Mobile manipulation robots are envisioned to provide many useful services, both in domestic environments and in industrial settings. Examples include domestic service robots that carry out large parts of the housework, and versatile industrial assistants that provide automation, transportation, inspection, and monitoring services. The challenge in these applications is that the robots have to function under changing, real-world conditions, deal with considerable amounts of noise and uncertainty, and operate without the supervision of an expert. Current robotic systems, in contrast, are typically custom-tailored to specific applications in well-defined environments and therefore cannot cope robustly with changing situations. This thesis presents novel learning techniques that enable mobile manipulation robots, i.e., mobile platforms with one or more robotic manipulators, to autonomously adapt to new or changing situations. The approaches developed in this thesis cover four topics: (1) learning the robot's kinematic structure and properties using actuation and visual feedback, (2) learning about articulated objects in the environment in which the robot is operating, (3) using tactile feedback to augment visual perception, and (4) learning novel manipulation tasks from human demonstrations.

In the first part of this thesis, we present novel approaches to learning a robot's own body schema from scratch using visual self-observation. This allows manipulation robots to calibrate themselves automatically and to adapt their body schemata autonomously, for example after hardware failures or during tool use. In the second part, we extend the developed framework to learning about articulated objects, such as doors and drawers, with which service robots often need to interact. The presented algorithms enable robots to learn accurate kinematic models of articulated objects, which in turn allow them to interact with these objects robustly. In the third part, we provide approaches that allow manipulation robots to make use of tactile perception, an ability known to play an important role in human object manipulation. The main contributions in this part are approaches to identifying objects and to perceiving aspects of their internal state. With these, a manipulation robot can verify that it has grasped the correct object and, for example, discriminate full from empty bottles. Finally, we present an integrated system that allows human operators to intuitively teach a robot novel manipulation tasks by demonstration.
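
The kinematic models for articulated objects mentioned above lend themselves to a compact illustration. The following is only a minimal sketch, not the thesis' actual algorithm: it fits a rotational (circle) and a prismatic (line) model to synthetic 2-D observations of a door handle and selects between them with the Bayesian information criterion (BIC); the function names, parameter counts, and data are all invented for this example.

import numpy as np

# Hedged sketch (not the thesis' method): choose between a rotational and
# a prismatic joint model for observed 2-D handle positions via BIC.

def fit_prismatic(points):
    # Fit a line through the observations; return residual variance and a
    # rough parameter count (point on line + direction).
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)           # principal direction
    direction = vt[0]
    residuals = centered - np.outer(centered @ direction, direction)
    return np.var(residuals), 4

def fit_rotational(points):
    # Fit a circle by algebraic least squares:
    # x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2).
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    sol, *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    cx, cy = sol[0] / 2, sol[1] / 2
    r = np.sqrt(sol[2] + cx**2 + cy**2)
    residuals = np.hypot(x - cx, y - cy) - r
    return np.var(residuals), 3

def bic(points, fit):
    # BIC = -2 log-likelihood + k log n, assuming Gaussian noise with
    # maximum-likelihood variance.
    var, k = fit(points)
    n = len(points)
    var = max(var, 1e-12)                        # guard against a perfect fit
    log_lik = -0.5 * n * (np.log(2 * np.pi * var) + 1)
    return -2 * log_lik + k * np.log(n)

# Noisy observations of a handle swinging on a 0.8 m arc (synthetic).
angles = np.linspace(0.0, 1.2, 30)
obs = np.column_stack([0.8 * np.cos(angles), 0.8 * np.sin(angles)])
obs += np.random.default_rng(0).normal(scale=0.005, size=obs.shape)

scores = {"rotational": bic(obs, fit_rotational),
          "prismatic": bic(obs, fit_prismatic)}
print(min(scores, key=scores.get))               # prints "rotational"

With observations from a drawer sliding along a straight axis, the same comparison would favor the prismatic model; this conveys the flavor of Bayesian model comparison, though not its concrete form in the thesis.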

All techniques developed in the thesis are based on probabilistic learning and inference. They have been implemented and evaluated on real robots as well as in simulation. Extensive experiments have been conducted to analyze and validate the properties of the developed algorithms and to demonstrate a significant increase in robustness, adaptability, and utility of mobile manipulation robots in everyday life.
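
As a toy instance of such probabilistic inference, consider the bottle example from above. The sketch below uses invented numbers and feature names rather than the thesis' model: independent tactile readings of a scalar grasp-deformation feature are combined by Bayes' rule under class-conditional Gaussians, yielding a posterior over "full" versus "empty".

import numpy as np

# Hedged sketch: Bayesian discrimination of full vs. empty bottles from a
# scalar tactile feature. All statistics and priors are invented.
classes = {
    "full":  {"mean": 0.2, "std": 0.05, "prior": 0.5},  # stiff: deforms little
    "empty": {"mean": 0.6, "std": 0.10, "prior": 0.5},  # compliant: deforms more
}

def log_gaussian(x, mean, std):
    return -0.5 * ((x - mean) / std) ** 2 - np.log(std * np.sqrt(2 * np.pi))

def posterior(readings):
    # Accumulate log-likelihoods of independent readings, then normalize.
    log_post = {c: np.log(p["prior"]) for c, p in classes.items()}
    for x in readings:
        for c, p in classes.items():
            log_post[c] += log_gaussian(x, p["mean"], p["std"])
    z = np.logaddexp(*log_post.values())  # normalizer (two classes)
    return {c: float(np.exp(lp - z)) for c, lp in log_post.items()}

print(posterior([0.22, 0.18, 0.25]))  # posterior concentrates on "full"

After only a few consistent readings the posterior becomes nearly certain, which is the appeal of accumulating tactile evidence probabilistically.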

Additional material (videos, datasets, software)

Body Schema Learning: bootstrapping and life-long adaptation of the body schema and kinematic models, and robust control in the presence of hardware failures
Learning Kinematic Models for Articulated Objects: robust model estimation, Bayesian model comparison and selection, visual perception of articulated objects, and robust control with mobile manipulation robots
Imitation Learning: learning task descriptions from human demonstrations
Tactile Sensing for Manipulation Robots: recognizing objects and estimating their internal state using tactile sensors


Find all the videos of my PhD thesis on YouTube.


Corresponding Publication