^ Authors | Herzog Michael, Kratzer Fabian & Mayer Anjela |
^ Supervisor | Isabel Ehrenberger |
^ Time spent | approx. 90 hours |
^ Presentation date | 08.07.2018 |
===== Introduction/Motivation =====
Motion in robotics is a difficult task because of the complexity of robot kinematics, dynamics and the environment. When it comes to robot motion, the desired movement has to be planned beforehand: a trajectory is generated which contains sequential positions of the robot over time, and these positions are afterwards mapped onto the joint angles of the robot [3].
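To illustrate this mapping, the following minimal sketch (not part of the project) represents a trajectory as time-stamped Cartesian positions of the end effector and converts each position into joint angles using the closed-form inverse kinematics of a planar two-link arm. The link lengths and waypoints are made-up example values.

<code python>
import numpy as np

def ik_two_link(x, y, l1=0.4, l2=0.3):
    """Map a Cartesian position (x, y) onto the joint angles (q1, q2)
    of a planar two-link arm (elbow-down solution)."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    c2 = np.clip(c2, -1.0, 1.0)          # keep the target inside the reachable workspace
    q2 = np.arccos(c2)
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return q1, q2

# A trajectory: sequential Cartesian positions of the end effector over time.
times = np.linspace(0.0, 2.0, 5)                   # time stamps in seconds
waypoints = [(0.5, 0.1 + 0.1 * t) for t in times]  # hand positions in metres

# The positions are then mapped onto the joint angles of the (hypothetical) robot.
for t, (x, y) in zip(times, waypoints):
    q1, q2 = ik_two_link(x, y)
    print(f"t={t:.2f} s  q1={np.degrees(q1):6.1f} deg  q2={np.degrees(q2):6.1f} deg")
</code>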
===== Podcast =====
  
{{ :m3_seminar:m3_seminar_2018:projects_kit:M3Video_small1.mp4 |}}
  
===== Theoretical Basics =====
=== Optical Motion Capture ===
Optical methods utilize several cameras with different perspectives on the subject. The spatial position is determined by recognizing the subject's features and triangulating their positions within the images from the different perspectives. Accordingly, the complete motion is derived from the captured sequence of images. For example, figure 1 illustrates two projections, X1 and X2, of the position X on the image planes of two different cameras. With the given coordinates of X1 and X2, the original position of X can be determined with the help of epipolar geometry. Optical motion capture methods can further be classified into marker-based and marker-less tracking.
{{ :m3_seminar:m3_seminar_2018:projects_kit:projections.png?400 }} Figure 1: Position X mapped to the image planes of two different perspectives [4]
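The triangulation step shown in figure 1 can be pictured with a small numerical sketch (not part of the seminar material). It uses linear (DLT) triangulation: given two hypothetical camera projection matrices P1 and P2 and the image points x1 and x2, the 3D position X is recovered as the least-squares solution of a homogeneous linear system. The camera parameters and the marker position are invented example values.

<code python>
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point X from its two
    image projections x1 and x2, given the camera projection matrices."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)      # least-squares solution of A X = 0
    X = Vt[-1]
    return X[:3] / X[3]              # back to inhomogeneous coordinates

# Two hypothetical cameras: same intrinsics K, second camera shifted 1 m along x.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known marker position into both images, then recover it again.
X_true = np.array([0.3, 0.2, 4.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))   # approximately [0.3, 0.2, 4.0]
</code>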
  
== Marker-based motion capture ==
The MMM developed at the H²T Institute at KIT [2] is a framework which contains a generalized reference model of the human body and provides tools to handle sampled data and to make them available to output modules. Sampled data such as data from motion tracking are converted to a human reference model of one meter height. The reference model uses markers at the same relative positions on the body as the captured human. That means that regardless of who is captured, every conversion ends up with the same reference model. The model itself is based on statistical data of the human body. The individual properties of the captured person, such as leg length or weight, are converted to match the reference model. Data that are converted to the MMM can also be saved in the Motion Database (DB) for later use. The MMM representation of the captured motion can be converted to any arbitrary robot. For that, a converter tool using the provided interfaces is needed. After the mapping, some parameters may need slight adaptation, e.g. for the motors, because the technical properties of the robot model can differ slightly from those of the real robot. Figure 2 illustrates the workflow explained above and the various interfaces. As illustrated by the arrows pointing from the captures to the MMM, as soon as the captured data is converted, it can either be processed further by the converters shown at the bottom or be saved to the Motion DB and used later. This approach is very flexible, as several methods to obtain samples and to post-process the calculated model can be used. Furthermore, it can be extended and thus adapted to new techniques or requirements.
  
{{ :m3_seminar:m3_seminar_2018:projects_kit:mmm.png?400 }} Figure 2: The MMM framework [1]
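The normalization to the one-meter reference model can be pictured with a simplified sketch. The actual MMM converter relies on a statistical body model with per-segment properties, so the uniform scaling below only illustrates the idea; the marker names and coordinates are hypothetical.

<code python>
import numpy as np

def normalize_to_reference(frames, subject_height, reference_height=1.0):
    """Rescale recorded marker positions of one subject so that they fit a
    reference model of 1 m height, independent of who was captured."""
    scale = reference_height / subject_height
    return [{name: scale * np.asarray(position) for name, position in frame.items()}
            for frame in frames]

# One frame of hypothetical marker data for a 1.80 m tall subject (positions in metres).
recording = [
    {"LSHO": (0.20, 1.50, 0.00),   # left shoulder marker
     "LELB": (0.45, 1.20, 0.05),   # left elbow marker
     "LWRA": (0.60, 0.95, 0.10)},  # left wrist marker
]

reference_frames = normalize_to_reference(recording, subject_height=1.80)
print(reference_frames[0]["LWRA"])  # the same marker, expressed on the 1 m reference model
</code>

In the framework described above, the motion expressed on the reference model would then be stored in the Motion DB or handed to a robot-specific converter.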
  
===== Experimental Setup =====
{{ :m3_seminar:m3_seminar_2018:projects_kit:recording.jpg?250 }} {{ :m3_seminar:m3_seminar_2018:projects_kit:mmm.jpg?201 }}
  
Figure 5: Top: the recorded human with the sponge in his hand. Bottom: the movement is mapped to the MMM model.
  
===== Difficulties =====
  
===== Summary and Conclusion =====
In our lab session we performed the whole process of motion tracking. First, we selected scenarios, then we set up the cameras and the environment. The next step was to carefully calibrate the cameras. Then, we recorded three scenarios three times each. Afterwards, we post-processed the recordings with the help of the VICON software. Here, we spent some time fixing trajectories as described above. Finally, we converted the post-processed recording to the MMM model. The steps from recording a movement to the ready-to-use MMM model are numerous and took us several hours. Even though we took care of proper marker placement and made sure that the movement could be fully recorded, it turned out in the end that some markers were hidden or placed incorrectly. In order to obtain high quality recordings, experience is helpful to avoid such problems. Additionally, placing cameras on the floor would have improved the quality of our recordings.
===== References =====
[1] Master Motor Map: MMMCore Documentation. (2018, May 6). From: https://mmm.humanoids.kit.edu/index.html
[10] Magnetic Motion Capture Systems. (n.d.). From: http://metamotion.com/motion-capture/magnetic-motion-capture-1.htm
  
[11] Michael Gleicher. 1999. Animation from observation: Motion capture and motion editing. SIGGRAPH Comput. Graph. 33, 4 (November 1999), 51-54. DOI=http://dx.doi.org/10.1145/345370.345409
  
[12] Motion Capture Sensor Systems. (2012, August 10). From: http://www.azosensors.com/article.aspx?ArticleID=43