Have you ever suddenly become aware of a weird position you’ve been sitting in for the past half hour? It’s happened to all of us, and we can’t begin to comprehend the endless different positions our bodies can be in at any given time. Even common positions like standing, sitting, squatting, or lying down are still vague descriptions — our arms and legs can be positioned in a variety of ways during any of those poses. Nowadays, video game consoles can record your movements, assess them in real time, and determine whether you made the right moves with your body to pass the level. That’s a lot of complex processing for a task that seems effortless with today’s technology, and it wouldn’t be possible without what we refer to as human pose estimation.
In this article, you’ll receive a comprehensive overview of everything related to human pose estimation with deep learning, including:
- What is human pose estimation?
- Frequently asked questions
- Deep learning methods
- Use cases and applications
- Key takeaways
What is human pose estimation?
Human pose estimation (HPE), also known as pose tracking, is a computer vision problem that aims to identify and localize human joints in a given image or video and, from them, construct a full picture of an individual’s stance. The result is what is known as a human pose skeleton — a graphical representation of a human being in either 2D or 3D, depending on the method. Each joint (shoulders, knees, elbows, etc.) is assigned a coordinate and referred to as a key point. When those key points are connected to one another via straight lines, we get a skeleton graph, whose vertices represent the joints and whose edges represent the physical connection (often a bone) between them. Human pose estimation takes traditional object detection to another level of precision by not limiting a detected person to merely a bounding box.
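The skeleton graph described above can be sketched as a small data structure: key points as named coordinates (the vertices) and joint pairs (the edges). The joint names and edge list below are illustrative, loosely following common conventions such as COCO’s, not any fixed standard.

```python
import math

# Sketch: a 2D human pose skeleton as a graph. Coordinates are toy pixel values.
keypoints = {
    "left_shoulder": (120, 80),
    "right_shoulder": (180, 82),
    "left_elbow": (100, 140),
    "right_elbow": (200, 142),
    "left_hip": (130, 200),
    "right_hip": (170, 200),
}

# Edges connect physically adjacent joints (roughly, bones).
skeleton = [
    ("left_shoulder", "right_shoulder"),
    ("left_shoulder", "left_elbow"),
    ("right_shoulder", "right_elbow"),
    ("left_shoulder", "left_hip"),
    ("right_shoulder", "right_hip"),
    ("left_hip", "right_hip"),
]

def bone_lengths(keypoints, skeleton):
    """Euclidean length of each edge in the skeleton graph."""
    lengths = {}
    for a, b in skeleton:
        (xa, ya), (xb, yb) = keypoints[a], keypoints[b]
        lengths[(a, b)] = math.hypot(xb - xa, yb - ya)
    return lengths

lengths = bone_lengths(keypoints, skeleton)
print(round(lengths[("left_hip", "right_hip")], 1))  # 40.0
```

Once a pose is in this form, downstream tasks (comparing poses, checking joint angles) reduce to simple geometry on the graph.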
Pose estimation can be achieved either in 2D or 3D, with the primary difference lying in the desired type of output. With a 2D output, we receive a stick-figure or skeleton representation of the various key points on the body, while with 3D human pose estimation, we receive the key points in 3D space, with the option of a three-dimensional figure instead of its 2D projection. More often than not, the 2D pose is estimated first, and the 3D version is then lifted from it.
Other types of pose estimation
Human pose estimation is actually a subset of something bigger — pose estimation in general. Besides detecting and identifying the key points of an entire figure, there are types of pose estimation that pinpoint more precise elements of an image. A few of the most notable ones aside from HPE are:
- Object pose estimation (detection of real-life objects, including their distance, size, and angle; also referred to as 6D object pose estimation).
- Face pose estimation (detection of facial features and movements: lips, eyes, nose, cheeks, etc.).
- Head pose estimation (detection and identification of head/face angles).
Frequently asked questions
What are the limitations of human pose estimation?
When individuals are in poses that are difficult to discern because of the angle, lighting, occlusion, or clothing, there can be discrepancies in the identification of the key points. Make sure your model is trained on a set of images diverse enough to cover most of your use cases without introducing bias.
Can HPE be done for multiple people close together?
Yes, that is known as multi-person pose estimation, which identifies and labels more than one person in a given visual. It is more complicated to do, though, especially when people’s body parts overlap. There are two common approaches. The first is the top-down approach, which means detecting the individuals first and then estimating the parts and pose for each one. The bottom-up approach, on the other hand, detects all the separate body parts in the image first and then groups them into their corresponding individuals.
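The two pipelines can be contrasted in a short sketch. The detector, pose-estimator, and grouping functions here are hypothetical stand-ins, just to show the opposite order of operations; real systems replace them with trained networks.

```python
# Sketch: top-down vs. bottom-up multi-person pose estimation.
# All detector/grouping functions below are toy stand-ins, not real models.

def top_down(image, detect_people, estimate_pose):
    """Detect person boxes first, then estimate a pose inside each box."""
    return [estimate_pose(image, box) for box in detect_people(image)]

def bottom_up(image, detect_parts, group_parts):
    """Detect all body parts in the whole image, then group them by person."""
    parts = detect_parts(image)   # e.g. every elbow/knee/shoulder found
    return group_parts(parts)     # assign parts to individuals

# Toy stand-ins to show the data flow for two people in one frame:
image = "frame_0"
detect_people = lambda img: [(0, 0, 50, 100), (60, 0, 110, 100)]  # two boxes
estimate_pose = lambda img, box: {"box": box, "keypoints": []}
detect_parts = lambda img: [("elbow", 0), ("elbow", 1), ("knee", 0), ("knee", 1)]
group_parts = lambda parts: [
    [p for p in parts if p[1] == person] for person in {p[1] for p in parts}
]

print(len(top_down(image, detect_people, estimate_pose)))  # 2 poses
print(len(bottom_up(image, detect_parts, group_parts)))    # 2 grouped people
```

Top-down scales with the number of detected people, while bottom-up runs part detection once and pays its cost in the grouping step, which is why the latter is popular for crowded real-time scenes.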
What is 6D pose estimation?
6D pose estimation is the same as object pose estimation and refers to recovering an object’s full pose: its 3D position and 3D orientation, six degrees of freedom in total. This is vital for robotics in order to provide an in-depth view of the surroundings, ensuring that the robot can use its “hands” to grab the right object when prompted. For that reason, 6D pose estimation includes determining whether the object has a texture, what angle it is at, how far away it is, and so on.
Deep learning methods
Now that we have an in-depth understanding of what pose estimation is, we can observe just how human pose estimation works via a handful of the most notable deep learning approaches.
OpenPose is deemed one of the most accurate and popular methods for real-time multi-person HPE. It is a bottom-up approach that relies on convolutional neural networks (CNNs) as its primary architecture, detecting the separate body parts first before pairing them. OpenPose tackles the difficulty of multi-person pose estimation with two branches: one predicts a confidence map for each body part, while the other predicts Part Affinity Fields (PAFs), a set of flow fields that encodes unstructured pairwise relationships between body parts. Subsequent stages then refine these predictions.
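The PAF idea can be illustrated with a toy example: the field stores, at each pixel, a unit vector pointing along a limb, and a candidate pair of joints is scored by sampling the field along the segment between them and measuring how well it aligns with the segment’s direction. The field values and sampling scheme below are simplified toy versions of what the real method computes.

```python
import math

def paf_score(joint_a, joint_b, paf, samples=10):
    """Average dot product between the PAF and the a->b direction."""
    (xa, ya), (xb, yb) = joint_a, joint_b
    dx, dy = xb - xa, yb - ya
    norm = math.hypot(dx, dy) or 1.0
    ux, uy = dx / norm, dy / norm          # unit vector along the candidate limb
    total = 0.0
    for i in range(samples):
        t = i / (samples - 1)
        px, py = int(xa + t * dx), int(ya + t * dy)
        fx, fy = paf.get((px, py), (0.0, 0.0))   # field is zero off the limb
        total += fx * ux + fy * uy
    return total / samples

# Toy field pointing rightward along y=0, i.e. a horizontal "limb":
paf = {(x, 0): (1.0, 0.0) for x in range(11)}
aligned = paf_score((0, 0), (10, 0), paf)      # candidate follows the field
misaligned = paf_score((0, 0), (0, 10), paf)   # candidate perpendicular to it
print(aligned > misaligned)  # True
```

Pairs with high scores are kept and assembled into per-person skeletons, which is how the bottom-up grouping step resolves which elbow belongs to which shoulder.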
You’ve certainly heard of Mask R-CNN in another context related to deep learning. Specifically, it is a notable architecture for executing tasks like instance segmentation and object detection. It can be extended to carry out HPE tasks as well, following a top-down approach: the network first detects people, then predicts the body parts within each detected region, and the outputs of both steps are combined at the end.
DeepCut is yet another one of the models for pose estimation that has a bottom-up approach for multi-person pose estimation. The aim of this approach is to simultaneously execute object detection and pose estimation. The authors of DeepCut visualize the process in the following order:
1. Identify all potential body parts in the supplied image.
2. Classify them as head, hands, legs, and so on.
3. Separate the body parts belonging to each individual to form the pose sticks.
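The three steps above can be sketched with toy data. In the real method, detection, classification, and partitioning are solved jointly as a single graph-partitioning (integer linear programming) problem; here the stages are simply run one after another, with a naive proximity rule standing in for the partitioning step.

```python
# 1. Candidate body-part detections: (x, y, raw detector label). Toy values.
candidates = [(10, 10, "head"), (12, 40, "hand"), (80, 12, "head"), (82, 44, "leg")]

# 2. Classify each candidate into a part class (identity mapping in this toy).
def classify(candidates):
    return [{"pos": (x, y), "part": label} for x, y, label in candidates]

# 3. Partition parts into individuals — here naively by horizontal proximity,
#    a stand-in for the joint graph-partitioning of the real algorithm.
def partition(parts, gap=30):
    parts = sorted(parts, key=lambda p: p["pos"][0])
    people, current = [], [parts[0]]
    for part in parts[1:]:
        if part["pos"][0] - current[-1]["pos"][0] > gap:
            people.append(current)
            current = []
        current.append(part)
    people.append(current)
    return people

people = partition(classify(candidates))
print(len(people))                        # 2 individuals
print(sorted(p["part"] for p in people[0]))  # parts assigned to the first person
```

The payoff of solving everything jointly, as DeepCut does, is that ambiguous detections can be resolved by how well they fit into a whole person rather than in isolation.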
Last, but certainly not least, we have DeepPose — a method proposed by researchers at Google that is based on deep neural networks (DNNs). It treats pose estimation as a regression problem: a network of convolutional, pooling, and fully connected layers predicts the coordinates of all the key points directly, and a cascade of further DNN regressors refines them into a high-precision final output. The approach quickly became known as state-of-the-art, since it was the first to capture the key points with a cascade of DNN regressors.
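The cascade idea can be sketched as follows: an initial regressor predicts coordinates from the whole image, and each later stage predicts a correction based on the current estimate. The “regressors” below are hypothetical toy functions that simply move halfway toward a known target, standing in for trained networks.

```python
def make_stage(target, step=0.5):
    """Stand-in for a trained DNN regressor: moves halfway to the target."""
    def regress(estimate):
        return tuple(e + step * (t - e) for e, t in zip(estimate, target))
    return regress

def cascade(initial, stages):
    """Run the initial estimate through each refinement stage in turn."""
    estimate = initial
    for stage in stages:
        estimate = stage(estimate)   # each stage refines the previous output
    return estimate

true_wrist = (100.0, 60.0)
stages = [make_stage(true_wrist) for _ in range(4)]
final = cascade((0.0, 0.0), stages)
print(final)  # (93.75, 56.25), converging toward (100.0, 60.0)
```

Each stage in the real method sees a higher-resolution crop around the current estimate, which is why the cascade recovers precision that a single whole-image regression cannot.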
Use cases and applications
Human pose estimation showing up in technological developments all around us is certainly not news. Let’s see how many of these notable applications you’ve heard of.
Athlete pose detection in sports
From professional athletics to amateur competitions, the web gets lavishly stuffed with sports content every day. It comes as no surprise, then, that sports represent an ample opportunity to leverage the power of AI for real-life decisions pertaining to professional teams. Current computer vision applications can track and estimate player posture and detect shots in games like baseball and tennis, recommending moves and enhancing the training experience. By analyzing every mini-movement of a player, human pose detection provides models with data-backed insights to make predictions on.
Computer-generated imagery (CGI)
Did you know that human pose estimation has immensely aided in perfecting CGI in movies? We’ve all seen the behind-the-scenes images and videos of actors running around in green motion-capture suits, covered in dozens of those white marker spheres. By tracking all of the actors’ movements, down to their facial expressions, it becomes possible to render graphics that are “fitted” to match the actor’s exact movements, attaching an entirely different face or character to those movements in the film.
Robotics
Human pose estimation substantially boosts speed and accuracy in the field of robotics, especially when a robot is tasked with mimicking human behavior. The robot can be equipped with the hardware needed to detect the position and orientation of a human trainer and mimic their movements directly, so that it can later perform them autonomously. As mentioned above, 6D pose estimation gives the robot a sense of vision and an understanding of its surroundings, which it needs whenever it is tasked with gripping, moving, or interacting with objects.
Augmented reality (AR)
AR is yet another everyday use of human pose estimation, one that relies on matching real-world actions down to every last detail. Just think of how much precision is required when you use Apple’s AR animals on your smartphone: the features of the 3D model need to be aligned with your facial features, all through a camera. What is currently an optimal way to solve that? Vision-based pose estimation technologies.
Key takeaways
Human pose estimation with deep learning brings computer vision applications yet another step closer to accurately assessing and mimicking human behavior. We’ve arguably finessed single-person HPE, and the current challenge data scientists aim to perfect is multi-person HPE. That is where top-down and bottom-up approaches, via notable methods like OpenPose, DeepCut, and other deep learning models, come to play an important role. By using readily available human pose datasets or conducting live robot training with a human trainer, we can expect the quality and variety of HPE-based applications on the market to reach new heights in the coming years.