
Image Quality Assessment

Author: Yossi Motro

In supervised machine learning, the quality of a model is directly correlated with the quality of the data used to train it.

We have powerful machine learning algorithms for many problems that, given enough data, can achieve unprecedented performance. However, an abundance of versatile, high-quality data is not easily attained, especially for problems that require human intervention: having people directly involved in the data acquisition process is both costly and time-consuming.

The presence of unwanted outliers in the data can significantly reduce a model’s accuracy or, worse, result in a biased model that leads to incorrect classification. Detecting and eliminating outliers is therefore crucial for creating high-quality training datasets.

Challenges in Image Dataset Authenticity Evaluation

It is difficult to assign absolute scores in many subjective evaluation tasks. For example, scoring the intensity of a smile is difficult, and the scores can vary greatly depending on how the question is framed. Because estimating such a wide range of subjective scores is hard, workaround methods have been widely explored. Approaches based on learning-to-rank are now recognised as a promising solution: they provide a framework for ordering target samples relative to one another rather than assigning absolute scores. Returning to our example, sorting images by degree of smile is simpler than assigning a smile score to each image.
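To make this concrete, below is a minimal sketch of a pairwise learning-to-rank setup in PyTorch. The scoring network, feature dimension, and margin are illustrative assumptions, not our production pipeline; the point is that the loss only needs to know which of two images ranks higher (e.g., is "more smiling"), not their absolute scores.

```python
import torch
import torch.nn as nn

# Hypothetical scoring network: maps image features to a single scalar score.
# The 512-dim feature input is an illustrative assumption.
scorer = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 1))

# MarginRankingLoss only needs relative order: target = +1 means the
# first input should score higher than the second, -1 the opposite.
loss_fn = nn.MarginRankingLoss(margin=0.1)

def ranking_step(feats_a, feats_b, a_ranks_higher):
    """One training step on a pair of images.

    feats_a, feats_b: (batch, 512) feature tensors for the two images.
    a_ranks_higher:   (batch,) tensor of +1/-1 pairwise labels.
    """
    score_a = scorer(feats_a).squeeze(-1)
    score_b = scorer(feats_b).squeeze(-1)
    return loss_fn(score_a, score_b, a_ranks_higher)

# Toy usage with random features: image A is labelled as ranking higher.
loss = ranking_step(torch.randn(4, 512), torch.randn(4, 512), torch.ones(4))
loss.backward()
```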

Metrics for Image Quality Assessment (IQA) take an arbitrary image as input and produce a quality score. The goal of these metrics is to quantitatively represent the human perception of quality. These metrics are commonly used to assess the performance of algorithms in computer vision fields such as image compression, image transmission, and image processing.

Image processing algorithms are fine-tuned to achieve a high level of detail, enhanced realism, and a sense of depth in images. These algorithms were previously configured by hand.

Automatic image quality assessment methods have evolved over time, greatly simplifying the work of algorithm designers. Two such methods are the Structural Similarity Index (SSIM) and the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE). SSIM is a full-reference IQA method, while BRISQUE is a no-reference method: the algorithm only receives the distorted image whose quality is being evaluated, with no pristine reference image to compare against. Despite these methods, developing metrics that fully capture perceived image quality remains a challenge.
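As a quick illustration, the snippet below computes SSIM between a reference image and a distorted copy using scikit-image. The file names are placeholders. Note that SSIM needs the pristine reference; a no-reference metric like BRISQUE would operate on the distorted image alone.

```python
from skimage import io, img_as_float
from skimage.metrics import structural_similarity as ssim

# Placeholder paths: any reference image and a distorted version of it.
reference = img_as_float(io.imread("reference.png", as_gray=True))
distorted = img_as_float(io.imread("distorted.png", as_gray=True))

# SSIM is a full-reference metric: it compares the distorted image
# against the pristine reference and returns a score where 1.0
# means the two images are identical.
score = ssim(reference, distorted, data_range=1.0)
print(f"SSIM: {score:.4f}")
```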

Automatic image quality assessment is a core problem for many image processing applications, such as image/video compression and reconstruction. These applications fuel the development of computational quality metrics, which predict the level of impairment as perceived by a human observer. Such metrics must be trained on ground-truth data gathered in subjective quality assessment experiments. However, it is not always widely recognised that data from different quality assessment experiments may be scaled differently, resulting in significantly different quality scores for the same content.

Supporting authenticity experiments raised a new challenge:

We don’t teach or instruct labelers how to evaluate the authenticity of an image. But this leads to inconsistency in the ratings that different labelers give the same image. One labeler may rate a specific image a 9 and another a 7. Although both ratings seemed correct to the respective labelers, the discrepancy raises questions.

But again, clients generally use this process to measure the quality of their latest version, so we expect similar results from different groups of labelers. We solve this by normalizing the results.

Our Approach

In the literature, one strategy for evaluating image quality divides images into the following categories based on their quality: Excellent, Good, Fair, Poor, and Bad. We can also use a method for assessing image distortion that converts those same five categories into a continuous range of 0 to 10, where 0 is the worst level, 10 is the best, and every two points form one interval. We gather multiple opinions in order to achieve a better distribution of visual aesthetics in images.
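Here is a minimal sketch of that categorical-to-continuous mapping, assuming each category owns one two-point interval on the 0–10 scale and taking the interval midpoint as the default continuous score:

```python
# Each quality category owns a two-point interval on the 0-10 scale.
CATEGORY_INTERVALS = {
    "Bad":       (0, 2),
    "Poor":      (2, 4),
    "Fair":      (4, 6),
    "Good":      (6, 8),
    "Excellent": (8, 10),
}

def category_to_score(category: str) -> float:
    """Map a categorical rating to a continuous score (interval midpoint)."""
    low, high = CATEGORY_INTERVALS[category]
    return (low + high) / 2

print(category_to_score("Good"))  # 7.0
```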

We set up a small set of calibration images that are presented only to qualified labelers at the beginning of their work, always in the same order. Qualified labelers are those who have passed our assessments, meaning they fully understand the task and are capable of doing it. The tests include evaluating language proficiency if the task involves assessing text data, and labelers must be attentive and diligent enough to mark all objects accurately. Only after they pass our training process do we allow them to move on to calibration, which is important for subjective tasks like smile intensity (and less so for objective tasks like finding and classifying objects in an image).

Then, based on the labelers’ results for these images, we normalize the rest of the scores as a post-processing step. This ensures we get consistent results.

Post-processing works by dividing the labelers into four groups (a minimal code sketch follows the list):

Unreliable users. The standard deviation of their answers is 0 (they always give the same score) – we exclude them.

Optimists. Their scores always fall between 3 and 10 – we normalize them toward the middle (decrease all their scores).

Pessimists. Their scores are always lower than 8, going down to 0 – we normalize them toward the middle (increase all their scores).

Normal. Their scores are distributed uniformly across the scale – no adjustment is needed.
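Below is a minimal sketch of this grouping and normalization in NumPy. The thresholds and the shift-toward-the-middle rule are illustrative readings of the list above, not the exact production logic:

```python
import numpy as np

def normalize_labeler(scores, scale_mid=5.0):
    """Classify a labeler by their score distribution and shift biased
    raters toward the middle of the 0-10 scale. Thresholds illustrative."""
    scores = np.asarray(scores, dtype=float)

    # Unreliable: zero variance (always the same answer) -> exclude.
    if scores.std() == 0:
        return None

    if scores.min() >= 3:   # optimist: never uses the low end of the scale
        return np.clip(scores - (scores.mean() - scale_mid), 0, 10)
    if scores.max() < 8:    # pessimist: never uses the high end
        return np.clip(scores + (scale_mid - scores.mean()), 0, 10)
    return scores           # roughly uniform: leave untouched

# Toy usage with three hypothetical labelers.
ratings = {"alice": [9, 8, 10, 9], "bob": [2, 4, 3, 5], "carol": [7, 7, 7, 7]}
for name, s in ratings.items():
    print(name, normalize_labeler(s))
```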

Subjective & Objective Evaluation

The question arises of how to measure image quality and determine whether an image meets a specific requirement. To address this, an effective image quality assessment (IQA) system must be established.

The sufficiency and accuracy of the information an image carries are heavily influenced by its quality. However, images are inevitably distorted during acquisition, compression, processing, transmission, and display.

IQA methods are currently divided into subjective and objective evaluation methods. The goal of objective IQA is to create mathematical models that accurately and automatically predict image quality as a typical human observer would. Because human observers are the ultimate users of most multimedia applications, subjective testing is the most reliable method for assessing image quality.

There are two types of rating experiments:

(i) categorical, in which the subject selects a category under which a condition falls, and

(ii) cardinal, in which the subject assigns a number to the condition. (By contrast, a big drawback of ranking experiments is the large number of potential pairwise comparisons.)

After that, the average of the combined scores from all subjects is calculated; this is called the Mean Opinion Score (MOS).
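For completeness, a minimal MOS computation, averaging the ratings all subjects gave a single test condition (the ratings shown are made-up example values):

```python
def mean_opinion_score(ratings):
    """Mean Opinion Score: the arithmetic mean of all subjects' ratings
    for one test condition."""
    return sum(ratings) / len(ratings)

# Five subjects rating the same image on a 0-10 scale.
print(mean_opinion_score([7, 8, 6, 9, 7]))  # 7.4
```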

Objective image quality evaluations are quite useful. They can provide feedback for optimizing denoising algorithms, support early evaluation and preprocessing of image data for computer vision tasks, and even indirectly reflect the quality of the capture equipment.

Conclusion

As digital image technologies are increasingly used in medical imaging, retail monitoring, automotive applications, and other industries, precise quality assessment methods become essential. Many processes, such as compression, transmission, display, and acquisition, can impact image quality. As a result, accurate image quality measurement becomes necessary in many image-based applications.

Tasq.ai is a leading AI platform for enterprises to develop and deploy deep learning object tracking applications with a single end-to-end solution. Get in touch with our team via our Contact Us page to learn more.

