We’re surrounded by thousands of objects at any given moment, and our eyes trace their boundaries in three-dimensional space in real time. Computer vision has evolved to not only detect and label objects in a given visual but also precisely outline their entire form, no matter how unique the shape, thanks to image segmentation. With that, we’re able to significantly expand the accuracy and precision of image annotation tasks and apply them to innovative technological advancements, including, for example, medical imaging, AI for autonomous vehicles, agriculture, and many more. To understand the significance of image segmentation in those use cases, we must have a steady grasp of what the process entails and how it differs from other forms of image annotation.
In this article, you’ll gain insight into:
- What is image segmentation?
- Types of image segmentation
- Image segmentation vs object detection & others
- Image segmentation techniques
- Image segmentation with deep learning
- Key takeaways
What is image segmentation?
Image segmentation is a vital computer vision task that expands on the premise of object detection. We will cover the significant similarities and differences between image segmentation and object detection (and other related processes) in a moment. Before that, it is necessary to determine what the task of image segmentation involves exactly. As the name suggests, image segmentation essentially means dividing an image into fragments and assigning a label to each of them. This occurs at the pixel level in order to define the precise outline of each object within the frame and its class. Those outlines, otherwise known as the output, are highlighted with one or more colors, depending on the type of segmentation.
To streamline image segmentation in machine learning, the system is trained on sets of data, whether manually collected or open-source datasets, so that it can accurately identify and label the visuals it receives as input.
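To make the pixel-level idea concrete, a segmentation mask can be pictured as a grid the same size as the image, where every cell holds a class label. A minimal illustration (the class names and label values here are made up for the example):

```python
# A segmentation mask assigns a class label to every pixel.
# Illustrative labels: 0 = background, 1 = "cat", 2 = "dog".
mask = [
    [0, 0, 1, 1],
    [0, 1, 1, 0],
    [2, 2, 0, 0],
    [2, 2, 0, 0],
]

# Count how many pixels belong to each class.
counts = {}
for row in mask:
    for label in row:
        counts[label] = counts.get(label, 0) + 1

print(counts)  # {0: 8, 1: 4, 2: 4}
```

Because every pixel is labeled, the object's exact outline is recoverable from the mask itself, with no surrounding background included.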
Types of image segmentation
Image segmentation can further be divided into two primary categories: instance segmentation and semantic segmentation. The latter is what comes to mind for many when thinking of image segmentation. It refers to the most foundational definition of image segmentation that we discussed above: the identification, grouping, and labeling of the pixels in a visual that form a whole object. How does instance segmentation differ, then? Objects and their boundaries are similarly detected in the image; however, each new object is labeled as a separate instance, even within the same class. For example, there are billions of people in the world, but no two are exactly the same. For the sake of simplification, let’s look at four people in one image: all of them are human, but all of them are different individuals of varying height, race, age, gender, and so on. Where the output of a semantic segmentation process identifies all of them as “people”, instance segmentation defines each individual as a new instance within that general category.
Instance segmentation takes into account the variation of objects in the segmentation process. That subtlety between semantic and instance segmentation makes a noteworthy difference. The primary limitation of semantic segmentation is that it isn’t optimal in cases where more precise labeling is necessary. For variations among species of animals or plants, it is vital that the AI distinguishes those differences in order to establish accurate labeling with minimal discrepancies.
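One way to see the difference in output: in a semantic mask, every "person" pixel shares the same label, while an instance mask additionally separates the distinct objects. A hedged sketch below splits a semantic mask into instances via connected-component flood fill; real instance-segmentation models work very differently, and this only illustrates the two output formats (it would also merge two objects that touch):

```python
from collections import deque

def instances_from_semantic(mask, cls):
    """Split one class of a semantic mask into instance ids
    using 4-connected flood fill."""
    h, w = len(mask), len(mask[0])
    inst = [[0] * w for _ in range(h)]  # 0 = not this class
    next_id = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] == cls and inst[y][x] == 0:
                next_id += 1                       # new object found
                inst[y][x] = next_id
                queue = deque([(y, x)])
                while queue:                       # spread the id to neighbors
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1,cx),(cy+1,cx),(cy,cx-1),(cy,cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                           and mask[ny][nx] == cls and inst[ny][nx] == 0:
                            inst[ny][nx] = next_id
                            queue.append((ny, nx))
    return inst, next_id

# Semantic mask: 1 = "person"; two separate people in the frame.
semantic = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]
instance_mask, n = instances_from_semantic(semantic, cls=1)
print(n)  # 2 distinct person instances
```

Semantic segmentation stops at `semantic` (everything is "person"); instance segmentation also tells you there are two of them.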
Image segmentation vs object detection & others
Image segmentation is often conflated with other image annotation processes such as image classification, localization, and object detection. While seemingly similar, all of them are innately different. Let’s briefly define the characteristics of each now that we have established what image segmentation is.
- Image classification — A single class is assigned to an image, commonly for the main object that is portrayed in it. If an image contains a cat, the image will be classified as ‘cat’. We do not know the precise location of the cat in the image, nor can we identify its limits within the visual as with object localization, detection, or segmentation.
- Object detection — The objects within an image or video are detected, marked with a bounding box, and labeled. The primary difference we will notice between object detection and image segmentation is the final output. With object detection, the key signifier is the bounding box which draws a square or rectangle around the limits of each object. With image segmentation, the entire and precise outline of the object is considered, without the presence of any background content.
- Localization — With image/object localization, we are able to identify the location of the main subject of an image. However, localization typically does not assign a class to the localized object, and it considers only the main subject rather than all of the objects present in a given frame.
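The difference between the tasks above comes down to how much each one outputs: a single label, a box plus a label, or a full per-pixel mask. A bounding box can always be derived from a mask, while the reverse is not true, which is one way to see why segmentation is the more precise annotation. A small sketch (the object name is illustrative):

```python
def box_from_mask(mask):
    """Tightest bounding box (x_min, y_min, x_max, y_max)
    around the non-zero pixels of a binary mask."""
    coords = [(x, y) for y, row in enumerate(mask)
                     for x, v in enumerate(row) if v]
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return min(xs), min(ys), max(xs), max(ys)

# Per-pixel mask of a single object (1 = object pixel).
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
]

classification = "cat"                     # one label, no location
detection = (*box_from_mask(mask), "cat")  # coarse box + label
segmentation = mask                        # exact pixel outline

print(detection)  # (1, 1, 3, 3, 'cat')
```

Note that the 3×3 box covers 9 pixels while the mask marks only 6: the box necessarily includes background, the mask does not.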
Image segmentation techniques
There are many techniques for achieving image segmentation, both classical and more modern. Each technique proposes a unique approach to producing the final output for an image or video. A few of the most common image segmentation techniques include, but aren’t limited to, region-based segmentation, edge detection segmentation, thresholding, and clustering. Let’s take a closer look at how some of them function.
With the first technique, region-based segmentation, similarities are detected among the pixels of segments in direct proximity to one another. Pixels nearest to each other are more likely to be part of the same object, which is why this technique analyzes the similarities and differences of adjacent pixels to determine the boundaries of a given object. One shortcoming of this technique is that lighting and contrast within the image may lead to inaccurate definition of object boundaries.
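A minimal sketch of the region-based idea, using simple region growing from a seed pixel: neighbors are absorbed into the region while their intensity stays within a tolerance of the seed's. (Real implementations handle multiple seeds, color images, lighting normalization, and so on; this is only the core loop.)

```python
from collections import deque

def region_grow(image, seed, tol=10):
    """Grow a region from `seed`, absorbing 4-neighbors whose
    intensity is within `tol` of the seed pixel's intensity."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    base = image[sy][sx]
    region = {(sy, sx)}
    queue = deque([(sy, sx)])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y-1,x),(y+1,x),(y,x-1),(y,x+1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region \
               and abs(image[ny][nx] - base) <= tol:
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

# Tiny grayscale image: a bright object (~200) on a dark background (~50).
image = [
    [ 50,  52, 199, 201],
    [ 51, 198, 200, 202],
    [ 49,  50, 197,  53],
]
obj = region_grow(image, seed=(0, 2), tol=10)
print(len(obj))  # 6 pixels grown into the object region
```

The shortcoming mentioned above shows up directly here: if lighting makes part of the object fall outside the tolerance, the grown region stops short of the true boundary.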
Aiming to resolve the shortcomings of region-based techniques, edge detection algorithms emphasize object edges in order to achieve reliable results. That is done by determining and classifying certain pixels as “edge pixels” before anything else. Edge detection works best for visuals with objects that have clearly defined outlines, and it is simpler to implement for regular use than other, more time-consuming techniques.
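A hedged sketch of the "edge pixels first" idea using Sobel gradients: pixels where the intensity changes sharply, measured by the gradient magnitude, are classified as edges before anything else. (Production edge detectors such as Canny add smoothing, non-maximum suppression, and hysteresis on top of this.)

```python
def sobel_edges(image, thresh=100):
    """Mark interior pixels whose Sobel gradient magnitude
    exceeds `thresh` as edge pixels (1), others as 0."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * image[y+j-1][x+i-1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * image[y+j-1][x+i-1]
                     for j in range(3) for i in range(3))
            if (gx * gx + gy * gy) ** 0.5 > thresh:
                edges[y][x] = 1
    return edges

# A vertical step edge: dark half next to bright half.
image = [[0, 0, 255, 255] for _ in range(4)]
edges = sobel_edges(image)
print(edges[1])  # [0, 1, 1, 0] -- edge pixels sit on the dark/bright boundary
```

Once the edge pixels are found, the closed contours they form can be used to delimit the objects.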
Thresholding is arguably the least complex technique for image segmentation. Essentially, it converts the original image into a black-and-white one, resulting in a binary image or binary map. Typically, pixels are valued at either 0 or 1: 0 is assigned to the background, and anything above the chosen threshold, in the foreground, is valued at 1. This is an optimal technique for images where the background and foreground have considerable contrast and must be separated.
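The thresholding description above fits in a few lines: every pixel at or below the threshold becomes background (0) and everything above it becomes foreground (1), yielding a binary map.

```python
def threshold(image, t=128):
    """Binarize a grayscale image: 1 for foreground (> t), 0 for background."""
    return [[1 if pixel > t else 0 for pixel in row] for row in image]

# Dark background with a bright object in the middle.
image = [
    [ 12,  20,  15],
    [ 18, 240, 235],
    [ 10, 230,  25],
]
binary = threshold(image)
print(binary)  # [[0, 0, 0], [0, 1, 1], [0, 1, 0]]
```

The hard part in practice is choosing `t`; methods like Otsu's pick it automatically from the image histogram, but the output format is the same binary map.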
Image segmentation with deep learning
However, it doesn’t end there. Just as deep learning takes already high-performing processes and elevates them in terms of accuracy and speed, the same applies to image segmentation. Image segmentation with deep learning has arguably become the most accurate approach to the task in recent years. If you are looking into executing image segmentation for machine learning, it is not necessary to dive too deep into the complex architectures of segmentation with deep learning; a basic understanding of how they function is enough.
The key elements of an image segmentation architecture are the encoder and the decoder. Features of the image are extracted via filters and compressed in the pooling layers, and a final output is produced as a segmentation mask. This is also known as the convolutional encoder-decoder architecture. Another notable model is U-Net, a convolutional neural network that resembles a ‘U’ shape when the architecture is visualized. It is composed of two parts: the downsampling and upsampling stages, otherwise referred to as the contracting and expanding paths respectively. The significance of U-Net is the accuracy and speed it achieves for image segmentation by reusing the feature maps from the contracting path, via skip connections, while expanding the representation back into a fully segmented output image. The most prominent use of the U-Net architecture is image segmentation for medical imaging.
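The encoder-decoder data flow can be sketched without any actual neural network: the encoder pools the image down to a smaller representation, the decoder upsamples it back to full resolution, and a U-Net-style skip connection reuses the encoder's same-resolution feature map. The toy example below uses max pooling, nearest-neighbor upsampling, and an elementwise average as stand-ins for the learned convolutional layers; it shows only the shape of the computation, not a trainable model.

```python
def max_pool2(x):
    """Encoder step: downsample a 2D grid by taking the max of each 2x2 block."""
    return [[max(x[y][xx], x[y][xx+1], x[y+1][xx], x[y+1][xx+1])
             for xx in range(0, len(x[0]), 2)]
            for y in range(0, len(x), 2)]

def upsample2(x):
    """Decoder step: upsample a 2D grid by nearest-neighbor repetition."""
    out = []
    for row in x:
        wide = [v for v in row for _ in (0, 1)]  # repeat each value twice
        out.append(wide)
        out.append(list(wide))                   # repeat each row twice
    return out

def skip_merge(decoded, skip):
    """U-Net-style skip connection: fuse the decoder output with the
    encoder's same-resolution feature map (here, an elementwise average)."""
    return [[(a + b) / 2 for a, b in zip(dr, sr)]
            for dr, sr in zip(decoded, skip)]

features = [
    [0, 0, 8, 8],
    [0, 0, 8, 8],
    [2, 2, 0, 0],
    [2, 2, 0, 0],
]
encoded = max_pool2(features)          # contracting path: 4x4 -> 2x2
decoded = upsample2(encoded)           # expanding path:   2x2 -> 4x4
fused = skip_merge(decoded, features)  # reuse the encoder's feature map
print(encoded)  # [[0, 8], [2, 0]]
```

The skip connection is what restores the fine spatial detail lost during pooling, which is exactly why U-Net produces sharp segmentation boundaries.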
Today, thanks to image segmentation, we are able to detect the location, class, and accurate limits of objects within images and videos. The two prominent types of image segmentation are semantic and instance segmentation; the latter gives us the opportunity to explore new avenues of accurate, detailed segmentation for variation within images. That is particularly valuable for many everyday use cases, ranging from satellite imagery to machine vision in agricultural automation. A process we carry out effortlessly with our eyes becomes, when implemented in the world of AI, revolutionary for technology.