^ Authors | Herzog Michael, Kratzer Fabian & Mayer Anjela |
^ Supervisor | Isabel Ehrenberger |
^ Working time | approx. 90 hours |
^ Presentation date | 08.07.2018 |
===== Introduction/Motivation =====

===== Podcast =====
  
{{ :m3_seminar:m3_seminar_2018:projects_kit:M3Video_small1.mp4 |}}
  
===== Theoretical Basics =====

=== Optical Motion Capture ===
Optical methods utilize several cameras with different perspectives on the subject. The spatial position is determined by recognizing features of the subject in the images and triangulating their positions from the different perspectives. The complete motion is then derived from the captured sequence of images. For example, figure 1 illustrates two projections, X1 and X2, of the position X on the image planes of two different cameras. Given the coordinates of X1 and X2, the original position X can be recovered with the help of epipolar geometry (see the triangulation sketch below the figure). Optical motion capture methods can further be classified into marker-based and marker-less tracking.
{{ :m3_seminar:m3_seminar_2018:projects_kit:projections.png?400 }} Figure 1: Position X mapped to the image planes of two different perspectives [4]
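
The triangulation step can be made concrete with a short sketch. The following is a minimal example assuming two calibrated cameras with known 3x4 projection matrices; the names ''P1'', ''P2'', ''x1'', ''x2'' and the direct linear transformation (DLT) approach are illustrative, not taken from the project:

<code python>
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a 3D point from its projections
    x1, x2 in two cameras with 3x4 projection matrices P1, P2."""
    # Each projection x ~ P X yields two linear constraints on the
    # homogeneous point X (from the cross product of x and P X).
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A X = 0 in the least-squares sense via SVD; the null
    # vector is the last right-singular vector.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two hypothetical cameras: identity intrinsics, the second one
# shifted 1 m along the x axis (a simple stereo rig).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))  # approx. [0.3, -0.2, 4.0]
</code>

With noisy feature detections the two viewing rays generally do not intersect exactly; the SVD solution returns the algebraic least-squares compromise, which is one reason why using more than two cameras improves accuracy.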
  
== Marker-based motion capture ==
The MMM, developed at the H²T Institute at KIT [2], is a framework which contains a generalized reference model of the human body and provides tools to handle sampled data and to make them available to output modules. Sampled data, such as data from motion tracking, are converted to a human reference model of one meter height. The reference model places the markers at the same relative positions on the body as on the captured human, so regardless of who is captured, every conversion ends up with the same reference model. The model itself is based on statistical data of the human body; the individual properties of the captured person, such as leg length or weight, are scaled to match the reference model (a minimal sketch of this normalization is given below the figure). Data converted to the MMM can also be saved in the Motion Database (DB) for later use. The MMM representation of the captured motion can be converted to any robot. For that, a converter tool using the provided interfaces is needed. After the mapping, some parameters, e.g. for the motors, may need slight adaptation, because the technical properties of the robot model can differ slightly from those of the real robot. Figure 2 illustrates the workflow explained above and the various interfaces. As illustrated by the arrows pointing from the capture methods to the MMM, as soon as the captured data is converted, it can either be processed further by any of the converters shown at the bottom or saved to the Motion DB for later use. This approach is very flexible, as several methods to obtain samples and to post-process the computed model can be used. Furthermore, it can be extended and thus adapted to new techniques or requirements.
  
{{ :m3_seminar:m3_seminar_2018:projects_kit:mmm.png?400 }} Figure 2: The MMM framework [1]
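
The core idea of the normalization to the one-meter reference model can be illustrated with a minimal sketch. This is not the actual MMM API; the function name, array shapes, and marker count are assumptions chosen for illustration:

<code python>
import numpy as np

def normalize_to_reference(markers, subject_height, reference_height=1.0):
    """Uniformly scale captured marker trajectories so that the
    subject matches the reference model height (1 m in the MMM).

    markers: array of shape (frames, n_markers, 3), in meters.
    """
    scale = reference_height / subject_height
    # Uniform scaling keeps the markers at the same *relative*
    # positions on the body, so every subject maps onto the same
    # normalized reference model.
    return np.asarray(markers) * scale

# Hypothetical recording: 100 frames, 56 markers, subject 1.80 m tall.
captured = np.random.rand(100, 56, 3) * 1.8
reference = normalize_to_reference(captured, subject_height=1.8)
</code>

In the actual framework the conversion additionally maps the motion onto the statistical kinematic model (joint angles and segment lengths), so the uniform scaling above only conveys the idea of a subject-independent representation.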
  
===== Experimental Setup =====