Thesis etd-01022018-192300
Thesis type
Perfezionamento (PhD)
Author
KIM, JAESEOK
URN
etd-01022018-192300
Title
Mobile manipulation tasks in the domestic environment using a service robot
Scientific-disciplinary sector
ING-IND/34
Course of study
ENGINEERING - Biorobotics
Committee
Supervisor: Prof. CAVALLO, FILIPPO
Keywords
- Deep neural network
- Learning from Demonstration
- Shared autonomy
Defense date
30/06/2018
Availability
full
Abstract
Mobile manipulation is one of the most researched fields of service robotics and remains a promising research area. Many service robots have been developed, or are being developed, for the domestic environment. The main objective of these service robots is to support human life, particularly for elderly and disabled people, in activities of daily living (ADL). In particular, these robots have been studied as solutions for manipulation tasks such as grasping objects, washing dishes, and cleaning tables. However, robots are still slow compared to human actions and cannot easily adapt to the human environment, even though their computation can be faster than the human brain. This thesis therefore targets the design of robotic systems able to help with manipulation tasks in a domestic environment. As benchmarks, we chose the challenging tasks of grasping multiple objects in a highly cluttered environment and cleaning dirt from a table.
In the first part of the thesis, the problem of how to grasp multiple objects in a highly cluttered environment is addressed. To perform the task effectively, robotic systems based on Human-Robot Interaction (HRI) were developed by combining the cognitive skills of a human operator with autonomous robot behaviors. We present techniques for integrating HRI into assistive mobile manipulation in the domestic environment. In particular, mobile manipulation for grasping multiple objects is considered for variable and unknown table heights. Three strategies were developed and used with motion planning to grasp an object selected in the environment, and a combination of the three strategies was chosen according to the table height. Two intuitive interfaces, a visual interface in rviz and a voice user interface with speech recognition, helped the user decide on and select the desired object. We validated the manipulation tasks with the three strategies on the domestic robot Doro.
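The abstract does not specify how the strategy combination is chosen; as a minimal, hypothetical Python sketch, height-based selection might look like the following (the thresholds, strategy names, and `select_strategy` helper are all assumptions for illustration, not taken from the thesis):

```python
# Minimal sketch of height-based grasping strategy selection.
# The strategy names and height thresholds are invented for
# illustration; they are not the thesis's actual values.

LOW_TABLE_MAX = 0.45   # metres; assumed upper bound for "low" tables
HIGH_TABLE_MIN = 0.85  # metres; assumed lower bound for "high" tables

def select_strategy(table_height_m: float) -> str:
    """Pick a grasping strategy from an estimated table height."""
    if table_height_m < LOW_TABLE_MAX:
        return "strategy_low"    # e.g. grasp with the torso lowered
    if table_height_m > HIGH_TABLE_MIN:
        return "strategy_high"   # e.g. grasp with the arm raised
    return "strategy_mid"        # default for mid-height tables

if __name__ == "__main__":
    for h in (0.40, 0.70, 0.95):
        print(f"table height {h:.2f} m -> {select_strategy(h)}")
```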
In the second part of the thesis, the problem of how to clean dirt from a table is addressed. To perform the cleaning tasks, Learning from Demonstration (LfD) was implemented, an approach to the problem of generating intelligent behavior in autonomous robots. In LfD, a human demonstrator interacts with the robot and provides a set of examples of the intended robot behavior while performing a given task. The variability in such demonstrations can be coped with by using a Gaussian Mixture Model (GMM), which encodes that variability while providing appropriate generalization abilities. However, these probabilistic models need to be extended in order to extrapolate the demonstrations to different task parameters such as movement locations, amplitudes, or orientations. One such method is the task-parameterized GMM, in which an auxiliary set of reference frames is defined that helps achieve invariance to such parameters. Still, while such frames can be automatically extracted from sensors in well-controlled environments by carefully choosing appropriate image features, cleaning tasks such as dust sweeping or table wiping typically require hand-placed markers to define the task parametrization. In this work, the reference frames attached to the dirt are instead extracted automatically from robot camera images, using a deep neural network previously trained on images taken during human demonstrations of a cleaning task. This approach has two main benefits: on the one hand, it takes the human completely out of the loop while performing complex tasks; on the other, the network can identify the specific task to be performed from the image alone, thus also enabling automatic task selection from a set of previously demonstrated tasks. We describe results obtained with the iCub humanoid robot in the context of a cleaning task.
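The task-parameterized GMM named above has a standard formulation in the LfD literature (the abstract itself gives no equations): each mixture component $i$ is encoded in $P$ candidate reference frames, and for a new situation described by frame transforms $\{A_{t,j}, b_{t,j}\}$ the component is retrieved in the global frame as a product of linearly transformed Gaussians:

```latex
% Standard TP-GMM retrieval step: component i, stored per frame j as
% (mu_i^(j), Sigma_i^(j)), mapped into the global frame through the
% current transforms {A_{t,j}, b_{t,j}} and fused by Gaussian product.
\[
\hat{\Sigma}_{t,i} = \Big( \sum_{j=1}^{P}
  \big( A_{t,j}\, \Sigma_i^{(j)} A_{t,j}^{\top} \big)^{-1} \Big)^{-1},
\qquad
\hat{\mu}_{t,i} = \hat{\Sigma}_{t,i} \sum_{j=1}^{P}
  \big( A_{t,j}\, \Sigma_i^{(j)} A_{t,j}^{\top} \big)^{-1}
  \big( A_{t,j}\, \mu_i^{(j)} + b_{t,j} \big).
\]
```

Because each frame contributes with its local precision, a frame in which the demonstrations were consistent dominates the retrieved motion, which is what makes the choice of reference frames (here, the dirt frames predicted by the network) central to the task.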
File
File name | Size |
---|---|
Jaeseok_...ion_1.pdf | 30.21 Mb |