Scientists from the University of Shanghai for Science and Technology in China have reported the development of a novel image denoising method for use in computer vision applications. Currently in the pre-proof stage of publication, the paper will appear in the journal Displays.
Study: Kronecker component with robust low-rank dictionary for image denoising. Image Credit: Gorodenkoff/Shutterstock.com
Computer Vision and Image Denoising
Computer vision is a field of artificial intelligence concerned with developing systems that enable computers to accurately derive meaningful information from visual inputs such as videos and digital images. Computers then use this information to act and make decisions. AI enables a computer to think, whereas computer vision allows it to see images, observe, and understand visual inputs.
Humans have an advantage over machines thanks to a lifetime of context, which gives them the ability to distinguish between objects and infer information about them, such as movement, distance, and whether something is wrong with an object. Machines, in turn, have the advantage of being able to process images much faster and in far larger quantities, but their suite of sensors, cameras, and processing elements must first be trained using algorithms and data analysis.
Image denoising has become a key research topic in computer vision. This is a low-level image processing technique that is crucial for tasks including segmentation, detection, classification, and recognition. In recent years, there has been an intense research focus in the field of computer vision on developing robust, reliable, and efficient image denoising techniques.
Principal Component Analysis (PCA) has emerged as a prime candidate for providing enhanced image denoising capabilities. This technique works on the basis that high-dimensional data can be embedded in a low-dimensional linear subspace. Many techniques have been developed based on PCA, with sparse and robust PCA being particularly notable. Sparse PCA has some limitations, which the use of robust PCA in denoising can overcome.
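The low-dimensional-subspace idea behind PCA denoising can be illustrated with a minimal sketch (not the paper's method): noisy data is projected onto its leading principal components, discarding the noise energy that falls outside the assumed signal subspace. The function name `pca_denoise` and the synthetic data are illustrative assumptions.

```python
import numpy as np

def pca_denoise(patches, rank):
    """Project noisy data onto its leading principal components.

    `patches` is an (n_samples, n_features) matrix of vectorized image
    patches; `rank` is the assumed dimension of the clean signal subspace.
    """
    mean = patches.mean(axis=0)
    centered = patches - mean
    # The SVD exposes the principal directions; truncating to `rank`
    # keeps the low-dimensional subspace and discards the rest as noise.
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    s[rank:] = 0.0
    return U @ np.diag(s) @ Vt + mean

# Synthetic demo: noisy observations of data lying in a rank-2 subspace.
rng = np.random.default_rng(0)
basis = rng.normal(size=(2, 64))
clean = rng.normal(size=(200, 2)) @ basis
noisy = clean + 0.1 * rng.normal(size=clean.shape)
denoised = pca_denoise(noisy, rank=2)
print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))  # True
```

Only the noise components inside the two-dimensional subspace survive the projection; the noise spread across the remaining 62 dimensions is removed, which is why the denoised estimate is closer to the clean data.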
The Study
The research team has developed a robust, novel image denoising technique that transforms the sparse representation models typically used for two-dimensional images into a more robust model suited to 3D image analysis. The technique works by utilizing the Kronecker product and the mode-n product to separate the sparse model’s dictionary into two separate dictionaries. The authors have termed this model the RKCA model, which stands for Robust Kronecker Component Analysis.
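The payoff of the Kronecker-product separation can be sketched with a standard linear-algebra identity (an illustration of the separable-dictionary idea, not the authors' algorithm): one large dictionary D = A ⊗ B acting on a vectorized 2D signal is equivalent to two small dictionaries acting on each dimension of the signal separately, so the large dictionary never has to be stored. The matrix sizes below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two small dictionaries, one per image dimension (illustrative sizes).
A = rng.normal(size=(8, 5))   # acts along the columns
B = rng.normal(size=(6, 4))   # acts along the rows
X = rng.normal(size=(4, 5))   # sparse-code matrix for one 2D signal

# A single monolithic dictionary would be the 48 x 20 Kronecker product
# A ⊗ B; the separable form (B @ X @ A.T) gives the identical result
# without ever materializing it (vec is column-major here).
big = np.kron(A, B) @ X.flatten(order="F")
small = (B @ X @ A.T).flatten(order="F")
print(np.allclose(big, small))  # True: the two forms agree
```

This identity, (A ⊗ B) vec(X) = vec(B X Aᵀ), is what makes splitting one dictionary into two per-dimension dictionaries both cheaper and better matched to the row/column structure of images.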
Furthermore, the model used Tucker factorization to decompose the 3D data into two dictionaries and a sparse matrix, with the two low-rank dictionaries produced using a Frobenius norm constraint. To capture the dictionaries’ low-rank property, the team introduced the nuclear norm into their novel denoising model.
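The standard way a nuclear-norm penalty "captures" low rank in optimization is through singular-value thresholding, its proximal operator. The sketch below shows that mechanism in isolation (a generic illustration, not the paper's full model); the function name `svt` and the threshold value are assumptions.

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: the proximal operator of the
    nuclear norm. Shrinking singular values toward zero and zeroing
    the small ones promotes a low-rank estimate."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(2)
# A genuinely rank-3 matrix plus small dense noise.
low_rank = rng.normal(size=(30, 3)) @ rng.normal(size=(3, 30))
noisy = low_rank + 0.05 * rng.normal(size=(30, 30))
estimate = svt(noisy, tau=1.0)
print(np.linalg.matrix_rank(estimate))  # far below the full rank of 30
```

Because the noise singular values all sit below the threshold while the signal's do not, the thresholded estimate recovers the underlying low-rank structure.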
Moreover, building on the original model developed in the research, the team improved it by combining the nuclear norm and the Frobenius norm. This improved, more efficient denoising model was termed KCRD, which stands for Kronecker Component with Robust Low-Rank Dictionary. By exploiting the complementary properties of the two norms, the KCRD model developed in the study produces an improved low-rank dictionary. An augmented Lagrange multiplier method was used to optimize the models.
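To give a feel for how an augmented Lagrange multiplier optimizes a robust low-rank model, here is a minimal sketch of the classic low-rank-plus-sparse split (robust PCA), which underlies this family of methods; it is not the authors' RKCA/KCRD solver. The function names, the choice of the penalty weight `mu`, and the iteration count are all illustrative assumptions.

```python
import numpy as np

def svt(M, tau):
    # Proximal step for the nuclear norm (low-rank part).
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    # Proximal step for the l1 norm (sparse part).
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca_alm(Y, lam=None, iters=300):
    """Split Y into low-rank L plus sparse S by alternating the two
    proximal steps and updating the Lagrange multiplier Z."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(Y.shape))
    mu = Y.size / (4.0 * np.abs(Y).sum())   # common heuristic choice
    L = np.zeros_like(Y); S = np.zeros_like(Y); Z = np.zeros_like(Y)
    for _ in range(iters):
        L = svt(Y - S + Z / mu, 1.0 / mu)
        S = soft(Y - L + Z / mu, lam / mu)
        Z += mu * (Y - L - S)                # multiplier update
    return L, S

rng = np.random.default_rng(3)
# Rank-2 ground truth corrupted by 5% large sparse outliers.
L0 = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 40))
S0 = np.zeros((40, 40))
mask = rng.random((40, 40)) < 0.05
S0[mask] = rng.normal(scale=10.0, size=mask.sum())
L, S = rpca_alm(L0 + S0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))  # small relative error
```

Each pass shrinks singular values for the low-rank component, soft-thresholds entries for the sparse component, and then nudges the multiplier so the two parts add back up to the data, which is the essence of the augmented Lagrangian approach.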
The authors confirmed the efficacy and competitiveness of the two proposed denoising models in several experiments. The results demonstrated that the novel models possess superior denoising performance compared with other contemporary and proposed denoising techniques.
Simulations performed by the researchers demonstrated that the novel methods can denoise color images, efficiently restoring them; even severely damaged images could be adequately reconstructed using the models. The experiments showed that the Frobenius norm is more robust to severe noise owing to its greater stability, whilst the nuclear norm, though less stable, performs better at low ranks.
Future opportunities were identified in the research based on the results of using the novel denoising models. The authors have stated that future studies will apply the models to different tasks to assess their potential. These will include hyperspectral image restoration and background elimination.
Another promising area of research identified by the authors is combining linear analysis with deep learning. Whilst bringing these two fields together will prove challenging for future research, it presents significant opportunities for improving image denoising in computer vision, as complex problems faced by traditional linear analysis can be addressed using automated neural network coding.
Overall, the research has presented the development of a novel, robust mathematical model for developing improved image denoising capabilities for computer vision, which requires further testing to fully explore its potential.
Further Reading
Zhang, L. & Liu, C. (2022) Kronecker component with robust low-rank dictionary for image denoising. Displays, 102194. sciencedirect.com
Disclaimer: The views expressed here are those of the author expressed in their private capacity and do not necessarily represent the views of AZoM.com Limited T/A AZoNetwork the owner and operator of this website. This disclaimer forms part of the Terms and conditions of use of this website.