Frequently Asked Questions
This page contains answers to some frequently (or less frequently) asked questions about the Motion Database. If your question is not answered here, please contact us.
- Which types of data are offered by the KIT Whole-Body Human Motion Database?
- Do I need an account to access the KIT Whole-Body Human Motion Database? How can I register?
- How to download content from the KIT Whole-Body Human Motion Database?
- How to access the KIT Whole-Body Human Motion Database programmatically?
- How is access to the content in the KIT Whole-Body Human Motion Database controlled?
- How can I cite the KIT Whole-Body Human Motion Database?
- What is the Master Motor Map (MMM) framework and where can I find its code and documentation?
- What is the Motion Description Tree?
- How does the "Advanced MDT search term" filter work?
- Why do some videos not offer a preview video on the webpage?
- How is data from the CMU Graphics Lab Motion Capture Database contained within the KIT Whole-Body Human Motion Database?
Which types of data are offered by the KIT Whole-Body Human Motion Database?
The major types of data available in the database are:
- MMM Motions (*.xml): Human motion represented on the Master Motor Map (MMM) reference model, a well-specified kinematic and dynamic model of the human body. For every timestep (100 Hz), the root location, root rotation and joint angle values of the reference model are given. These files also include the motion of environmental objects. For more information about the MMM framework, see the corresponding question below.
- C3D Files (*.c3d): Raw recordings (100 Hz) from the Vicon motion capture system in the industry-standard C3D file format.
- Video Files (*.avi): Complementary video recordings of the captured motions. Publicly available videos are anonymized, e.g. have their audio track removed and subjects' faces blurred.
- Information about Subjects: Body height and weight, segment lengths according to the Anthropometric Data Table, gender, age.
- Information about Objects: 3D models (Blender and Simox), images.
Depending on the type of motion capture experiment, there may be additional data available, e.g. measurements from force sensors or an inertial measurement unit.
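For example, a downloaded C3D file can be inspected with any generic C3D reader. The following sketch uses the open-source ezc3d Python library; this is only an illustration, not something provided by the database, and the file name is a placeholder:

import ezc3d

# Load a motion capture recording downloaded from the database
# (replace the placeholder file name with an actual download).
c3d_file = ezc3d.c3d("motion_recording.c3d")

points = c3d_file["data"]["points"]                            # shape: (4, n_markers, n_frames)
labels = c3d_file["parameters"]["POINT"]["LABELS"]["value"]    # marker names
rate = c3d_file["parameters"]["POINT"]["RATE"]["value"][0]     # capture rate in Hz

print(f"{points.shape[2]} frames at {rate} Hz, first markers: {labels[:5]}")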
Do I need an account to access the KIT Whole-Body Human Motion Database? How can I register?
Downloading files from the KIT Whole-Body Human Motion Database requires an account.
Registration is free and only takes a few seconds.
Browsing the database content to see what is available, including previews of video recordings, is possible without a login (some data may be hidden for privacy protection, e.g. non-anonymized video recordings, subject names).
To create an account, fill out the registration form.
To change your account information, such as your e-mail address or password, you can edit your user profile after logging in.
Please note that we have disabled the login via the H²T project management system (Redmine). If you need access, please create a local account via this webpage or contact us.
How to download content from the KIT Whole-Body Human Motion Database?
All content related to specific motions or objects can be downloaded individually via the web interface.
We are currently working on providing exports of parts of the KIT Whole-Body Human Motion Database including relevant meta information.
We also provide downloads for individual datasets:
- The KIT Motion-Language Dataset (external website),
- The Extended KIT Bimanual Manipulation Dataset,
- or other datasets.
Please remember to cite the respective dataset.
If you need any assistance, feel free to contact us.
How to access the KIT Whole-Body Human Motion Database programmatically?
Please note that due to changes to our web server, the API is deactivated for an indefinite period of time.
You can find out how to download content from our database in the previous answer.
How is access to the content in the KIT Whole-Body Human Motion Database controlled?
In general, we aim to make content freely available to the whole scientific community.
Some files, however, need to be protected for certain reasons, e.g. video recordings of motions that have not yet been anonymized and would allow the identification of human subjects.
On a more technical level, users in the KIT Whole-Body Human Motion Database are associated with a number of user groups.
These groups determine which protected files the user can access and which database entries they can edit.
When logged in, you can see the groups your account is assigned to on the user profile page.
For every file uploaded to a database entry (motion, subject or object), the uploader can select whether the file is public or protected.
For every database entry, groups can be selected for two different levels of access:
- Read protected groups: Users in one of the "read protected groups" can download files marked as protected that are associated with this database entry.
- Write groups: Users in one of the "write groups" can alter the database entry, i.e. edit it (including the assigned groups), delete it, and upload, edit or delete associated files. Users in one of the "write groups" can additionally download protected files, just like members of the "read protected groups".
In addition to the group-based permission system, the user that created a database entry always retains full read and write access.
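The following Python sketch only illustrates these access rules; the class and function names are illustrative and do not reflect the database's actual implementation:

from dataclasses import dataclass, field

@dataclass
class Entry:
    owner: str                                   # user who created the entry
    read_protected_groups: set = field(default_factory=set)
    write_groups: set = field(default_factory=set)

def can_download_protected(user, user_groups, entry):
    # The entry's creator always keeps full access; otherwise, membership in a
    # "read protected group" or a "write group" grants access to protected files.
    return (user == entry.owner
            or bool(user_groups & entry.read_protected_groups)
            or bool(user_groups & entry.write_groups))

def can_edit(user, user_groups, entry):
    # Only the creator or members of a "write group" may modify the entry.
    return user == entry.owner or bool(user_groups & entry.write_groups)

# Example: a protected entry readable by "lab_members", editable by "curators"
entry = Entry(owner="alice", read_protected_groups={"lab_members"}, write_groups={"curators"})
print(can_download_protected("bob", {"lab_members"}, entry))   # True
print(can_edit("bob", {"lab_members"}, entry))                 # False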
How can I cite the KIT Whole-Body Human Motion Database?
If you are using the KIT Whole-Body Human Motion Database in work that leads to a publication, we kindly ask you to cite one of the following papers:
If you are using specific motions corresponding to a published dataset, please cite the corresponding dataset, such as the KIT Bimanual Manipulation Dataset:
@INPROCEEDINGS {KrebsMeixner2021,
author = {Franziska Krebs and Andre Meixner and Isabel Patzer and Tamim Asfour},
title = {The {KIT} {B}imanual {M}anipulation {D}ataset},
booktitle = {IEEE/RAS International Conference on Humanoid Robots (Humanoids)},
pages = {499--506},
year = {2021}
}
[KrebsMeixner2021 - BibTeX]
[KrebsMeixner2021 - PDF]
Otherwise, if you are using the motion database as a whole, or motion recordings not corresponding to a published dataset, please cite one of the following papers:
@ARTICLE {Mandery2016b,
author = {Christian Mandery and \"Omer Terlemez and Martin Do and Nikolaus Vahrenkamp and Tamim Asfour},
title = {Unifying Representations and Large-Scale Whole-Body Motion Databases for Studying Human Motion},
pages = {796--809},
volume = {32},
number = {4},
journal = {IEEE Transactions on Robotics},
year = {2016},
}
@INPROCEEDINGS {Mandery2015a,
author = {Christian Mandery and \"Omer Terlemez and Martin Do and Nikolaus Vahrenkamp and Tamim Asfour},
title = {The KIT Whole-Body Human Motion Database},
booktitle = {International Conference on Advanced Robotics (ICAR)},
pages = {329--336},
year = {2015},
}
[Mandery2016b - BibTeX]
[Mandery2016b - PDF]
[Mandery2015a - BibTeX]
[Mandery2015a - PDF]
What is the Master Motor Map (MMM) framework and where can I find its code and documentation?
The Master Motor Map (MMM) is a conceptual framework for the perception, visualization, reproduction and recognition of human motion, designed to decouple motion capture data from further post-processing tasks such as execution on a humanoid robot.
The MMM framework has been developed in our lab at KIT and is freely available on GitLab under the GNU General Public License.
In addition to raw C3D motion capture data (which can be used without MMM by all kinds of motion processing tools), the KIT Whole-Body Human Motion Database also provides the motions converted to the MMM reference model in the XML-based MMM motion format.
MMM consists of two packages:
- MMMCore contains the data structures, kinematic models and code for reading and writing motion data.
- MMMTools contains tools for the visualization, reproduction and recognition of motion, e.g. the converters used to transfer raw MoCap motions to the MMM reference model.
The documentation can be found at mmm.humanoids.kit.edu and a discussion of the core ideas and principles of MMM is provided in the following paper:
@inproceedings{Terlemez2014a,
author = {Oemer Terlemez and Stefan Ulbrich and Christian Mandery and Martin Do and Nikolaus Vahrenkamp and Tamim Asfour},
title = {Master Motor Map (MMM) - Framework and Toolkit for Capturing, Representing, and Reproducing Human Motion on Humanoid Robots},
booktitle = {IEEE/RAS International Conference on Humanoid Robots (Humanoids)},
pages = {894--901},
year = {2014}
}
[Terlemez2014a - BibTeX]
[Terlemez2014a - PDF]
What is the Motion Description Tree?
The Motion Description Tree (MDT) consists of a hierarchical structure of tags that can be used to describe human motion.
Every motion entry in the database can be assigned one or more of the "motion descriptions" available in the MDT.
When filtering for a motion description using the filter panel in the right bar, only motions contained in one of the selected subtrees are considered.
Additionally, the "Advanced MDT search term" filter can be used to construct more complex search queries (see next question).
How does the "Advanced MDT search term" filter work?
The "Advanced MDT search term" filter allows to filter for motions based on their classification in the Motion Description Tree.
If a search term is provided, the simpler "Motion descriptions" filter is ignored.
Search terms consist of queries chained using the logical operators "x AND y", "x OR y" and "NOT(x)".
These search terms can be of an (almost) arbitrary length.
Examples:
"run AND forward": Returns all running motions directed forwards.
"carry AND drop": Returns all motions where an object is carried and dropped (a specific object may also be included in the search by using the object filter).
"run OR (walk AND NOT(slow))": Returns all motions where the subject is running or walking, but not slow.
Motion description tags that contain spaces must be written within quotation marks when used within a search term (e.g.: "hand stand").
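The following Python sketch illustrates the intended semantics of such search terms applied to a motion's set of MDT tags; it is only an illustration, not the database's actual query engine, and the example motions and tags are made up:

# Each motion is represented here by its set of MDT tags.
motions = {
    "motion_01": {"run", "forward"},
    "motion_02": {"walk", "slow"},
    "motion_03": {"walk", "forward"},
}

def matches(tags, term):
    # term is either a plain tag string or a nested tuple:
    # ("AND", a, b), ("OR", a, b) or ("NOT", a).
    if isinstance(term, str):
        return term in tags
    op = term[0]
    if op == "AND":
        return matches(tags, term[1]) and matches(tags, term[2])
    if op == "OR":
        return matches(tags, term[1]) or matches(tags, term[2])
    if op == "NOT":
        return not matches(tags, term[1])
    raise ValueError(f"unknown operator: {op}")

# Parsed form of the search term "run OR (walk AND NOT(slow))"
query = ("OR", "run", ("AND", "walk", ("NOT", "slow")))
print([name for name, tags in motions.items() if matches(tags, query)])
# -> ['motion_01', 'motion_03']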
Why do some videos not offer a preview video on the webpage?
Video previews that are shown on the motion list and the motion detail page use the excellent VP8 codec from Google in a WebM container.
They should work in any major browser (Firefox, Chrome/Chromium, Opera) except Internet Explorer.
Preview videos share the same access restrictions as their corresponding (full-resolution) video files.
Therefore, if a video file is not accessible to you (e.g. because you are not logged in and the video is not yet properly anonymized), you will not be able to see its preview either.
Additionally, preview videos are generated once daily, which is why they are not shown for very recently uploaded videos.
Of course, you can still download a video file to inspect its content in this case.
How is data from the CMU Graphics Lab Motion Capture Database contained within the KIT Whole-Body Human Motion Database?
Since June 2016, we have integrated motion recordings from the CMU Graphics Lab Motion Capture Database as a subset of our motion database.
These motions can be found by filtering the list of motions for the "Carnegie Mellon University (CMU)" institution.
The motion recordings are provided as C3D files and as the corresponding MMM representations (see above).
However, some important limitations and differences from the rest of our data should be noted when working with this data:
- The CMU data does not contain information about objects with which the human subject is interacting.
- The CMU recordings use a slightly different marker set, which is described here (in contrast to our KIT reference marker set here).
- Data imported from the CMU database is not labeled according to our Motion Description Tree and only contains the imported free-text description (this may change in the future).
- The date of the recordings is not available and has been set arbitrarily to 2010-01-01 in our database.
- The CMU data does not provide information about subjects. Therefore, for every motion experiment, a separate "dummy subject" has been created in our database. These "dummy subjects" do not contain anthropometric measurements, and the subject height is estimated only from the head markers in the initial pose (see the sketch after this list).
- Some recordings are missing some of the markers defined in the CMU marker set and have been skipped.
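As a rough illustration of how such a height estimate could be obtained from the initial pose: the marker-name matching, the ezc3d library, the millimetre units and the floor at z = 0 are all assumptions for this sketch, not necessarily the exact procedure used for the import.

import ezc3d

# Hypothetical illustration: estimate subject height from the head markers
# in the first frame of a CMU recording (placeholder file name).
c3d_file = ezc3d.c3d("cmu_recording.c3d")
points = c3d_file["data"]["points"]                          # (4, n_markers, n_frames)
labels = c3d_file["parameters"]["POINT"]["LABELS"]["value"]

# Assumes at least one marker label contains "HEAD".
head_idx = [i for i, label in enumerate(labels) if "HEAD" in label.upper()]
head_z = points[2, head_idx, 0]                              # z-coordinates, first frame
estimated_height_mm = head_z.max()                           # floor assumed at z = 0, units in mm
print(f"Estimated subject height: {estimated_height_mm / 1000:.2f} m")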
Acknowledgments: Motion data from the CMU Graphics Lab Motion Capture Database was obtained from mocap.cs.cmu.edu.
This database was created with funding from NSF EIA-0196217.