Data labeling, or image annotation when the data is an image, is the first and arguably most important step in the AI and machine learning (ML) lifecycle. High-quality labeling is necessary for training AI and ML algorithms and sets the foundation for developing accurate and reliable models.

For ML teams, a huge amount of time is spent cleaning and developing datasets. This is because, for commercial ML models, each dataset must be processed and labeled in a way that tailors it to its specific commercial application; there is no one-size-fits-all approach.

To build novel datasets that are accurate, dependable, and fit for their intended purpose, ML experts can choose from several annotation types to classify the objects that appear in an image. Segmentation is one such method.

What is image segmentation?

Computer vision involves three common tasks: classification, object detection, and image segmentation. The goal of classification is to identify which objects exist in an image, while object detection goes further by locating individual objects with bounding boxes.

While image annotation is a huge undertaking no matter which method is used, image segmentation is one of the most labor-intensive labeling methods. This is because image segmentation involves partitioning an image into multiple classes for an ML model to analyze. It’s a delicate process that requires pixel-level precision to improve the accuracy of ML models for critical applications such as self-driving cars and medical equipment.

Purpose of image segmentation

Image segmentation breaks images down to the pixel level and assigns every pixel in the image to a specific object class. With image segmentation, multiple individual regions can belong to a single object class, such as ‘car’ or ‘traffic light’, with multiple occurrences of a class given an ID, such as ‘car 1’ and ‘car 2’.
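
As a toy illustration of what this means in practice, segmentation labels are often stored as per-pixel masks. The sketch below uses plain NumPy arrays with made-up values; it is not a representation of any particular platform’s format:

```python
import numpy as np

# Toy 4x4 example (values made up for illustration).
# Semantic mask: every pixel gets a class ID, e.g. 0 = background, 1 = 'car'.
semantic = np.array([
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 0],
])

# Instance mask: each occurrence of a class gets its own ID,
# so the two regions above become 'car 1' and 'car 2'.
instance = np.array([
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
    [2, 2, 0, 0],
])

# Every pixel belongs to exactly one class and at most one instance.
assert semantic.shape == instance.shape
```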

While image segmentation is a critical process for ensuring high accuracy, it’s also very prone to human error. Errors might include overlapping pixels between annotations or annotations with pixels that go beyond their intended boundary. Segmentation errors, such as invalid annotations, are problematic because they can invalidate the whole segmentation process, leading to wasted time and unnecessary expenses.

Detecting invalid annotations with validation scripts

There are a few methods that can be employed to detect invalid annotations and irregularities. At Tasq, we use many of these. Here are two examples:

The first way we detect invalid annotations is to look at the relationships between them (see the sketch after this list), for example:

  • Two classes of segmentation that must not overlap
  • A class that must be present in an image whenever another class is present
  • Two classes that must be adjacent
  • An annotation that must overlap or be contained by a second annotation
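
As a rough sketch of what such relationship checks can look like, assume each annotation has been rasterized to a binary NumPy mask; the helper names here are our own illustration, not the actual scripts:

```python
import cv2
import numpy as np

def masks_overlap(a: np.ndarray, b: np.ndarray) -> bool:
    """Flag two classes that must not overlap but share pixels."""
    return bool(np.logical_and(a, b).any())

def mask_contained(child: np.ndarray, parent: np.ndarray) -> bool:
    """Check that every pixel of `child` lies inside `parent`."""
    return not np.logical_and(child, np.logical_not(parent)).any()

def masks_adjacent(a: np.ndarray, b: np.ndarray) -> bool:
    """Check that `b` touches `a` without overlapping it."""
    # Grow `a` by one pixel; adjacency means the grown mask meets `b`.
    dilated = cv2.dilate(a.astype(np.uint8), np.ones((3, 3), np.uint8))
    return bool(np.logical_and(dilated, b).any()) and not masks_overlap(a, b)
```

Boolean mask operations like these are cheap enough to run over every pair of annotations in a batch.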

The second way we do this is by checking simple rules that every annotation must comply with (see the sketch after this list), for example:

  • An annotation must be continuous
  • An annotation must not have any holes
  • An annotation must touch the edge of the image
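
These per-annotation rules can likewise be sketched with OpenCV primitives, again assuming binary masks and hypothetical helper names:

```python
import cv2
import numpy as np

def is_continuous(mask: np.ndarray) -> bool:
    """A continuous annotation forms exactly one connected component."""
    n_labels, _ = cv2.connectedComponents(mask.astype(np.uint8))
    return n_labels == 2  # one background label plus one foreground label

def has_holes(mask: np.ndarray) -> bool:
    """Holes show up as child contours in OpenCV's RETR_CCOMP hierarchy."""
    _, hierarchy = cv2.findContours(
        mask.astype(np.uint8), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE
    )
    if hierarchy is None:
        return False
    # hierarchy[0][i][3] is the parent index; >= 0 marks an inner (hole) contour.
    return any(h[3] >= 0 for h in hierarchy[0])

def touches_edge(mask: np.ndarray) -> bool:
    """True if the annotation reaches the border of the image."""
    return bool(mask[0, :].any() or mask[-1, :].any()
                or mask[:, 0].any() or mask[:, -1].any())
```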

To execute these methods, we turn to old-school automation in the form of Python scripts, supplemented with an assortment of third-party tools and libraries such as OpenCV and Matplotlib. Used together with our internal methods and resources, these tools make it possible to quickly create custom validation scripts that detect segmentation errors efficiently and accurately.
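
By way of a hypothetical example, the output of checks like those above can be collected into an error mask per image and rendered with Matplotlib so a reviewer can see exactly which pixels were flagged:

```python
import matplotlib.pyplot as plt
import numpy as np

def show_errors(image: np.ndarray, error_mask: np.ndarray) -> None:
    """Paint flagged pixels red on top of the annotated RGB image."""
    overlay = image.copy()
    overlay[error_mask.astype(bool)] = [255, 0, 0]
    plt.figure(figsize=(8, 8))
    plt.imshow(overlay)
    plt.title(f"{int(error_mask.sum())} flagged pixels")
    plt.axis("off")
    plt.show()
```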

Fixing invalid annotations

When it comes to fixing invalid annotations, we employ either a manual or automatic process depending on the nature of the error.

For example, if an error arises because a certain class of segmentation must be contained within a different class, a library like NumPy can be used to determine which pixels of the contained annotation exceed the boundary of the parent annotation and remove them.
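
Assuming the same binary-mask representation as in the sketches above, such a fix can reduce to a single intersection (again an illustration, not the exact tooling):

```python
import numpy as np

def clip_to_parent(child: np.ndarray, parent: np.ndarray) -> np.ndarray:
    """Drop child pixels that fall outside the parent annotation."""
    # The intersection keeps only the pixels the containment rule allows.
    return np.logical_and(child, parent)

# Usage: fixed_child = clip_to_parent(child_mask, parent_mask)
```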

In a case where two annotations overlap, however, there’s no simple way to determine which annotation should be changed without examining the annotated image. In this scenario, the annotated image must be manually reviewed and rectified by a human operator.

1,000 errors found in 450 images

In a recent polygon segmentation project comprising over 36,000 annotations across 450 images, we found around 500 errors just by examining the relationships between the different annotations. In some cases, the margins involved were smaller than 2 pixels, making them almost invisible to the human eye.

At Tasq, we achieve results like these very quickly by integrating our powerful validation scripts directly into the review process, all thanks to our robust data annotation platform.

Want to learn more about how the Tasq platform can take your data labeling project to the next level? Sign up for a 30-minute demo today!