^ Authors | Herzog Michael, Kratzer Fabian & Mayer Anjela |
^ Supervisor | Isabel Ehrenberger |
^ Duration | approx. 90 hours |
^ Presentation date | |
- | |||
- | <note tip> | ||
- | You can find more information following the next links: | ||
- | * Wiki page ** formatting **: [[https:// | ||
- | * Creation of a **video-podcast ** [[http:// | ||
- | </ | ||
- | |||
===== Introduction/Motivation =====
Motion in robotics is a difficult task due to the complexity of robot kinematics, dynamics, and the environment. Robot motion must be planned beforehand: a trajectory is generated which contains sequential positions of the robot over time, and afterwards these positions are mapped onto the joint angles of the robot [3].
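The two steps above — generating a trajectory of positions over time, then mapping each position onto joint angles — can be sketched for a simple planar 2-link arm. This is a minimal illustration, not part of the frameworks discussed here; the arm, link lengths, and the closed-form inverse kinematics are assumptions chosen for simplicity.

```python
import numpy as np

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Inverse kinematics for a planar 2-link arm (elbow-up solution).

    Maps one Cartesian position (x, y) onto the joint angles (q1, q2).
    """
    r2 = x**2 + y**2
    # Law of cosines gives the elbow angle.
    c2 = (r2 - l1**2 - l2**2) / (2 * l1 * l2)
    c2 = np.clip(c2, -1.0, 1.0)          # guard against numerical drift
    q2 = np.arccos(c2)
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return q1, q2

# A trajectory: sequential end-effector positions of the robot over time.
t = np.linspace(0.0, 1.0, 5)
xs = 1.0 + 0.5 * t                       # straight line from (1.0, 0.5) ...
ys = np.full_like(t, 0.5)                # ... to (1.5, 0.5)

# Afterwards, each position is mapped onto the joint angles.
joint_trajectory = [two_link_ik(x, y) for x, y in zip(xs, ys)]
```

For a real humanoid the mapping is far more involved (redundant kinematics, dynamics, collision constraints), which is why dedicated planning and retargeting frameworks are used.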
===== Podcast =====
{{ : }}
===== Theoretical Basics =====
=== Optical Motion Capture ===
Optical methods utilize several cameras with different perspectives on the subject. The spatial position is determined by recognizing features of the subject in the images and triangulating their positions from the different camera perspectives.
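The triangulation step can be sketched with the standard linear (DLT) method: each camera observation constrains the 3D point, and the point is recovered as the null vector of the stacked constraint matrix. The projection matrices and the point below are made-up values for illustration, not taken from any specific capture system.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two camera views.

    P1, P2: 3x4 camera projection matrices.
    uv1, uv2: the feature's image coordinates (u, v) in each view.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two simple cameras: one at the origin, one shifted along the x axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both views to obtain image coordinates.
X_true = np.array([0.5, 0.2, 4.0])
p1 = P1 @ np.append(X_true, 1.0)
p2 = P2 @ np.append(X_true, 1.0)
uv1 = p1[:2] / p1[2]
uv2 = p2[:2] / p2[2]

X = triangulate(P1, P2, uv1, uv2)   # recovers approximately X_true
```

Real systems use many more than two cameras and must first solve the correspondence problem (which feature in one image matches which in another) before triangulating.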
- | {{ : | + | {{ : |
== Marker-based motion capture ==
The MMM (Master Motor Map), developed at the H²T institute at KIT [2], is a framework which contains a generalized reference model of the human body and provides tools to handle sampled data and make them available to output modules. Sampled data, such as data from motion tracking, are converted to a human reference model of one meter height.

The reference model uses markers at the same relative positions on the body as on the captured human. This means that regardless of who is captured, every conversion ends up with the same reference model. The model itself is based on statistical data of the human body; the individual properties of the captured person, such as leg length or weight, are converted to match the reference model. Data converted to the MMM can also be saved in the Motion Database (DB) for later use. The MMM representation of the captured motion can be converted to any robot; for that, a converter tool using the provided interfaces is needed. After the mapping, some parameters may need slight adaptation, e.g. for the motors, because the technical properties of the robot model can differ slightly from those of the real robot. Figure 2 illustrates this workflow and the various interfaces. As illustrated by the arrows pointing from the captures to the MMM, once the captured data is converted, it can either be further processed by any of the converters shown at the bottom or be saved to the Motion DB for later use. This approach is very flexible, as several methods to obtain samples and to post-process the calculated model can be used.
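The core normalization idea — mapping every captured subject onto a unit-height reference model — can be sketched as a simple rescaling of marker positions by the subject's height. This is only a conceptual sketch with a hypothetical function name; the actual MMM framework additionally uses statistical body-segment data and per-segment scaling.

```python
import numpy as np

def to_reference_model(markers, subject_height):
    """Scale captured marker positions to a one-meter reference model.

    markers: array of shape (frames, n_markers, 3), positions in meters.
    subject_height: the captured person's height in meters.
    """
    # Uniform scaling: every subject maps onto the same unit-height model.
    return markers / subject_height

# One frame with two markers, captured from a 1.8 m tall subject.
captured = np.array([[[0.0, 0.0, 1.8],     # marker at the top of the head
                      [0.3, 0.0, 0.9]]])   # marker at hip height
normalized = to_reference_model(captured, subject_height=1.8)
# The head marker now sits at height 1.0 in the reference model.
```

Because every recording ends up in the same normalized representation, motions from different people become interchangeable and can be retargeted to any robot through the converter interfaces.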
{{ : }}
===== Experimental Setup =====
{{ : }}
Figure 5: Top: the recorded human with the sponge in his hand. Bottom: the movement mapped to the MMM model.
===== Difficulties =====