References

1. Y. C. Zhang, Z. R. Shen, and R. S. Jiao, “Segment anything model for medical image segmentation: Current applications and future directions,” Comput. Biol. Med., vol. 171, p. 108238, 2024.
2. Y. Xu, R. Quan, W. Xu, Y. Huang, X. Chen, and F. Liu, “Advances in medical image segmentation: A comprehensive review of traditional, deep learning and hybrid approaches,” Bioengineering, vol. 11, no. 10, p. 1034, 2024.
3. I. Hussain, H. Ali, M. S. Khan, S. Niu, and L. Rada, “Robust region-based active contour models via local statistical similarity and local similarity factor for intensity inhomogeneity and high noise image segmentation,” Inverse Probl. Imag., vol. 16, no. 5, pp. 1113–1136, 2022.
4. A. O. Panhwar, A. A. Sathio, N. M. Shah, and S. Memon, “A scheme based on deep learning for fruit classification,” Mehran Univ. Res. J. Eng. Technol., vol. 44, no. 1, pp. 8–19, 2025.
5. I. Hussain, J. Muhammad, and R. Ali, “Enhanced global image segmentation: Addressing pixel inhomogeneity and noise with average convolution and entropy-based local factor,” Int. J. Knowl. Innov. Stud., vol. 1, no. 2, pp. 116–126, 2023.
6. M. S. Khan, “A region-based fuzzy logic approach for enhancing road image visibility in foggy conditions,” Mechatron. Intell. Transp. Syst., vol. 3, no. 4, pp. 212–222, 2024.
7. P. A. Prabha, M. Bharathwaj, K. Dinesh, and G. H. Prashath, “Defect detection of industrial products using image segmentation and saliency,” J. Phys. Conf. Ser., vol. 1916, p. 012165, 2021.
8. Y. Funama, S. Oda, M. Kidoh, D. Sakabe, and T. Nakaura, “Effect of image quality on myocardial extracellular volume quantification using cardiac computed tomography: A phantom study,” Acta Radiol., vol. 63, no. 2, pp. 159–165, 2022.
9. J. Meng, Y. Li, H. Liang, and Y. Ma, “Single-image dehazing based on two-stream convolutional neural network,” J. Artif. Intell. Technol., vol. 2, no. 3, pp. 100–110, 2022.
10. Q. Li, H. Wang, B. Y. Li, T. Yanghua, and J. Li, “IIE-SegNet: Deep semantic segmentation network with enhanced boundary based on image information entropy,” IEEE Access, vol. 9, pp. 40612–40622, 2021.
11. Y. L. Qi, Z. Yang, W. H. Sun, M. Lou, J. Lian, W. W. Zhao, X. Y. Deng, and Y. D. Ma, “A comprehensive overview of image enhancement techniques,” Arch. Comput. Methods Eng., vol. 29, pp. 583–607, 2022.
12. S. P. Premnath and J. A. Renjit, “Image restoration model using Jaya-Bat optimization-enabled noise prediction map,” IET Image Process., vol. 15, no. 9, pp. 1926–1939, 2021.
13. M. Ghulyani and M. Arigovindan, “Fast roughness minimizing image restoration under mixed Poisson–Gaussian noise,” IEEE Trans. Image Process., vol. 30, pp. 134–149, 2021.
14. C. C. Jiang, L. Hu, and G. T. Jia, “The application of deep learning in image deblurring,” in CAIBDA ’24: Proceedings of the 2024 4th International Conference on Artificial Intelligence, Big Data and Algorithms, New York, USA, 2024, pp. 1075–1079.
15. Y. W. Xiang, H. Zhou, C. Y. Li, F. W. Sun, Z. B. Li, and Y. Q. Xie, “Deep learning in motion deblurring: Current status, benchmarks and future prospects,” Vis. Comput., 2024.
16. M. Aoi and T. Goto, “Deblurred image quality improvement by learning based deblurring method utilizing ConvNeXt-V2,” in DMIP ’24: Proceedings of the 2024 7th International Conference on Digital Medicine and Image Processing, Osaka, Japan, 2024, pp. 62–66.
17. K. Chen and Y. J. Liu, “Efficient image deblurring networks based on diffusion models,” arXiv preprint arXiv:2401.05907, 2024.
18. N. Varghese, A. N. Rajagopalan, and Z. A. Ansari, “Real-time large-motion deblurring for gimbal-based imaging systems,” IEEE J. Sel. Top. Signal Process., vol. 18, no. 3, pp. 346–357, 2024.
19. L. Zhai, Y. Wang, S. Cui, and Y. Zhou, “A comprehensive review of deep learning-based real-world image restoration,” IEEE Access, vol. 11, pp. 21049–21067, 2023.
20. S. M. Zhao, S. K. Oh, J. Y. Kim, Z. W. Fu, and W. Pedrycz, “Motion blurred image restoration framework based on parameter estimation and fuzzy radial basis function neural networks,” Pattern Recognit., vol. 132, p. 108983, 2022.
21. J. Peng, B. Luo, L. Xu, J. Yang, C. Zhang, and Z. Pei, “Blind image deblurring via minimizing similarity between fuzzy sets on image pixels,” IEEE Trans. Circuits Syst. Video Technol., vol. 34, no. 11, pp. 11851–11873, 2024.
22. A. C. Bovik, The Essential Guide to Image Processing. Academic Press, 2009.
Open Access | Research article

Advanced Image Restoration Through CIPFS-Integrated Mathematical Transformations

Zakir Husain1*, Kai Siong Yow2
1 Department of Mathematics, University of Peshawar, 25120 Peshawar, Pakistan
2 Department of Mathematics and Statistics, Faculty of Science, Universiti Putra Malaysia, 43400 Serdang, Malaysia
Acadlore Transactions on AI and Machine Learning | Volume 4, Issue 1, 2025 | Pages 50-61
Received: 02-02-2025, Revised: 03-15-2025, Accepted: 03-21-2025, Available online: 03-27-2025

Abstract:

The restoration of blurred images remains a critical challenge in computational image processing, necessitating advanced methodologies capable of reconstructing fine details while mitigating structural degradation. In this study, an innovative image restoration framework was introduced, employing Complex Interval Pythagorean Fuzzy Sets (CIPFSs) integrated with mathematically structured transformations to achieve enhanced deblurring performance. The proposed methodology begins with the geometric correction of pixel-level distortions induced by blurring. A key innovation lies in the incorporation of CIPFS-based entropy, which is combined with local statistical energy to enable robust blur estimation and adaptive correction. Unlike traditional fuzzy logic-based approaches, CIPFS facilitates more expressive modeling of uncertainty by leveraging complex interval-valued membership functions, enabling nuanced differentiation of blur intensity across image regions. A fuzzy inference mechanism guides the refinement process, ensuring that localized corrections are applied adaptively to degraded regions while leaving undistorted areas unaffected. To preserve edge integrity, a geometric step function is applied to reinforce structural boundaries and suppress over-smoothing artifacts. In the final restoration phase, structural consistency is enforced through normalization and regularization to ensure coherence with the original image context. Experimental validation demonstrates that the proposed model delivers superior image clarity, improved edge sharpness, and fewer visual artifacts than state-of-the-art deblurring methods, along with enhanced robustness against varying blur patterns and noise intensities, indicating strong generalization potential. By unifying the expressive power of CIPFS with analytically driven restoration strategies, this approach constitutes a significant advance in image deblurring and restoration under uncertainty.

Keywords: Blurred image restoration, CIPFSs, Fuzzy logic, Mathematical transformations, Image clarity enhancement, Fuzzy edge enhancement, Image processing

1. Introduction

Image processing plays a crucial role in various computer vision applications, including medical diagnostics, environmental monitoring, and industrial automation [1], [2], [3], [4]. One of the key tasks in these domains is defect image restoration, which is essential for ensuring accuracy and reliability in subsequent analyses. In real-world scenarios, however, images often suffer from degradations such as noise, blur, occlusions, and distortions, which can significantly affect the performance of image processing systems [5].

When it comes to defect detection, image blur caused by factors like motion, incorrect focus, or environmental conditions can obscure critical details, making it challenging to identify surface defects, scratches, or misalignments. Atmospheric phenomena such as fog and haze introduce additional difficulties in fields like satellite imaging and autonomous driving, where visibility is of utmost importance [6], [7]. Poor image quality can lead to serious issues, such as missed defects during industrial inspections or incorrect diagnoses in medical imaging due to low-resolution scans [7], [8]. In satellite-based Earth monitoring, distortions caused by atmospheric factors, including fog and clouds, can block vital environmental information, impeding disaster response and climate research [9], [10]. As such, enhancing image restoration techniques is essential for improving the accuracy of these applications.

Traditional image processing techniques, such as median filtering, Gaussian smoothing, and edge detection, have been extensively used to address image degradation [11], [12]. While effective in specific cases, these methods often struggle with complex distortions that do not follow a uniform pattern. For instance, when an image is affected by multiple degradations, such as mixed Gaussian blur and salt-and-pepper noise, conventional methods often fail to adapt [13]. Moreover, these techniques assume a uniform noise distribution, which is rarely the case in real-world scenarios. These limitations emphasize the need for more advanced methods that can handle uncertainty and variations in image defects.

The field of image deblurring has seen significant progress with the introduction of deep learning models that show promising results. For instance, Jiang et al. [14] developed a Convolutional Neural Network (CNN)-based method that effectively reduces motion blur. However, the model’s reliance on large-scale annotated datasets limits its performance on unseen blur types. Similarly, Xiang et al. [15] evaluated deep learning models for motion deblurring and found that while these models perform well on synthetic datasets, their real-world performance is hindered by unpredictable variations in lighting, noise, and blur patterns. Aoi and Goto [16] improved structural detail restoration by incorporating self-attention mechanisms into a ConvNeXt-V2-based model. However, the high computational demands of this approach make it unsuitable for real-time applications. Chen and Liu [17] proposed a diffusion-based deblurring method that iteratively refines image details, producing high-quality results but with increased computational cost. Varghese et al. [18] developed a real-time deblurring model for gimbal-based imaging systems, which performs well in structured motion blur cases but struggles with non-uniform blur patterns.

In recent years, advanced techniques like wavelet transforms, non-local means filtering, and deep learning-based methods have been explored for defect restoration. Zhai et al. [19] reviewed deep learning-based image restoration methods, categorizing them into CNNs, Generative Adversarial Networks (GANs), Transformers, and Multilayer Perceptrons (MLPs). These methods effectively restore degraded images by learning complex features, with CNNs excelling in spatial feature extraction and GANs improving image realism through adversarial training. While GANs and CNNs offer significant efficiency in image restoration and enhanced visual quality, they have limitations. These models require large, high-quality datasets, which may not always be available for real-world degradation types. Furthermore, CNNs struggle with handling multiple types of degradation, and GANs face challenges such as training instability and high computational requirements. Real-time applications also need to balance the quality of restoration with computational efficiency.

A common challenge across deep learning-based methods is their reliance on vast annotated datasets for supervised training. Many models depend on paired blurred and sharp images, which are difficult to acquire for real-world applications. Synthetic datasets often fail to capture the full range of real-world blur patterns, resulting in diminished performance on unseen data. Another limitation is the interpretability of these models, as their deblurring processes function as “black boxes,” making it difficult to control or understand their decision-making mechanisms.

To address these limitations, this study proposes a novel CIPFS-based blurred image processing model, which integrates the principles of fuzzy logic and mathematical image processing with CIPFS for enhanced image restoration. The proposed model offers a flexible framework that adapts to varying degrees of blur and uncertainty without relying on extensive training datasets. The key contributions of this study include the formulation of fuzzy membership functions for blur classification, a rule-based fuzzy correction mechanism, and a mathematical fusion strategy for edge-aware restoration. By utilizing CIPFS, the model’s ability to handle different types of uncertainty was enhanced and multiple fuzzy sets were effectively combined for a more accurate restoration. This approach improves adaptability to various blur types and ensures better generalization to real-world images. This research holds significant potential for applications in surveillance, autonomous navigation, and remote sensing, where clear and accurate image reconstruction is essential.

The proposed model integrates CIPFS, mathematical transformations, and edge enhancement to achieve superior image restoration performance. The selection of these specific techniques is driven by their ability to effectively handle uncertainty, preserve structural details, and enhance image clarity, which conventional approaches often fail to achieve. CIPFS provides a robust mathematical framework for modeling complex uncertainties in image data, outperforming traditional fuzzy sets in capturing intricate variations. Mathematical transformations, particularly in pixel coordinate correction and CIPFS-weighted filtering, ensure precise spatial alignment and noise suppression. Edge enhancement using CIPFS-based techniques prevents boundary leakage and maintains structural integrity, which is critical for high-quality image restoration. Other alternatives, such as conventional wavelet-based deblurring or histogram equalization, were not considered due to their limitations in handling non-uniform noise and intensity variations.

The rest of this study is organized as follows: Section 2 presents related work, discussing previous methods in image restoration and their limitations. Section 3 introduces the proposed mathematical framework, detailing the integration of fuzzy logic with mathematical image processing techniques. Section 4 presents the experimental results, including the evaluation metrics and performance comparison of the proposed model with existing methods. Finally, Section 5 concludes this study, summarizing the key findings and suggesting future directions for research.

2. Related Work

Image defect restoration has been the subject of considerable research, with various techniques proposed to address the challenges posed by image degradation. Among these, fuzzy logic has emerged as a promising tool due to its ability to effectively manage uncertainty in image restoration tasks. Zhao et al. [20] introduced a framework using fuzzy radial basis function (RBF) neural networks for motion-blurred image restoration. This approach estimates the blur parameters and adapts to dynamic environments, making it more flexible than traditional methods. The model uses fuzzy logic to handle the uncertainty inherent in motion blur, allowing it to adjust its restoration process to different blur conditions. Although this framework offers significant advantages over conventional methods, the accuracy of blur parameter estimation is crucial to its performance. Inaccurate determination of these parameters can result in degraded restoration results, undermining the overall effectiveness of the model. Additionally, the fuzzy-RBF network requires large amounts of labeled training data, which can be a limitation in real-world applications where such datasets are not readily available. This constraint reduces the model’s generalizability and its ability to handle unseen types of blur or real-world variations in image quality.

Similarly, the Blind Image Deblurring Framework (BLIF) proposed by Peng et al. [21] addresses the challenge of blind image deblurring through a novel approach based on minimizing the similarity between fuzzy sets on image pixels. By utilizing fuzzy set theory, this method models uncertainty in the image and improves restoration accuracy by distinguishing between sharp and blurred regions. BLIF is effective in handling multiple blur types, including motion blur and defocus blur, and excels in restoring fine image details. The method is particularly effective for images with varying blur conditions, showcasing its versatility compared to traditional deblurring methods. However, one limitation of the BLIF model is its reliance on similarity metrics within fuzzy sets, which can result in significant computational overhead when processing high-resolution images or images with complex, non-uniform blur patterns. As the method processes pixel-wise fuzzy sets, it becomes computationally expensive, particularly in the case of large images. Furthermore, while BLIF shows promising results, its generalization capability to real-world scenarios remains uncertain. The model was primarily tested on synthetic datasets, and its performance under real-world conditions—where images are often subject to noise, varying illumination, and more complex distortions—remains a challenge. This gap in generalization underscores the need for more adaptable models that can handle diverse image quality variations in practical applications.

To overcome these limitations, a CIPFS-based image restoration model was proposed in this study, which integrates the power of fuzzy logic with mathematical image processing techniques using CIPFS. The CIPFS-based model is designed to address the uncertainty in both blur and noise by combining multiple fuzzy sets in a way that enhances the image restoration process while reducing computational overhead. Unlike the fuzzy-RBF network or BLIF, the CIPFS model can effectively manage the complexity of high-resolution images and handle real-world scenarios with varying illumination and noise. It offers a flexible framework that adapts to different types of blur and uncertainty without the need for extensive labeled datasets, making it more practical for real-world applications. Moreover, the CIPFS-based approach enhances computational efficiency by minimizing unnecessary computations, providing an effective balance between performance and speed.

While existing models like fuzzy-RBF and BLIF have demonstrated effectiveness in their specific domains, the proposed model introduces a comprehensive framework that enhances accuracy and robustness in image segmentation and deblurring through the integration of CIPFS. Unlike conventional methods that struggle with noise, intensity variations, and boundary preservation, the proposed approach systematically transforms and corrects pixel coordinates using CIPFS, ensuring precise spatial representation. The incorporation of CIPFS in fuzzy logic-based deblurring, particularly through CIPFS-weighted filtering and mathematical integration for blur restoration, significantly improves clarity while reducing artifacts. Furthermore, the proposed model employs CIPFS-based edge enhancement for blur correction, leading to sharper boundaries and better structural preservation. The normalization and structural refinement process ensures consistency in segmentation results across different imaging conditions. Through these advancements, the proposed model consistently outperforms traditional approaches in terms of segmentation accuracy, robustness against intensity fluctuations, and resilience to boundary leakage, establishing a new benchmark in image restoration and medical image analysis.

3. Mathematical Framework of the Proposed Model

This section presents the CIPFS-based image restoration model, a novel approach that integrates CIPFS with advanced mathematical image processing techniques to address the challenges of image degradation. The proposed model leverages the flexibility of fuzzy logic to effectively handle uncertainty in image quality. The model is designed to restore images affected by a wide range of degradations, including motion blur, defocus, noise, and varying illumination conditions, without the need for large-scale annotated datasets. This approach combines the benefits of fuzzy set theory with efficient mathematical fusion strategies, offering a robust framework that adapts to diverse types of blur and noise while maintaining high computational efficiency. By employing CIPFS, the model captures complex relationships between blurred and sharp regions in an image, ensuring improved restoration performance and generalization to real-world scenarios.

3.1 Transformation of Pixel Coordinates Using CIPFS

Considering a blurred image with a resolution of $B_m \times B_n$, where, $B_m$ and $B_n$ indicate the height and width of the pixel grid, the Cartesian coordinate transformation was extended to accommodate uncertainty using CIPFS. The transformation is defined as:

$B_m \cdot x_{i j}=x_{i j}-z_{i j}+\mu_{i j} {e^{i \theta_{i j}}}, \quad B_n \cdot y_{i j}=y_{i j}-z_{i j}+\nu_{i j} {e^{i \phi_{i j}}}$
(1)

where, $x_{i j}, y_{i j}$, and $z_{i j}$ correspond to pixel locations along the $x$, $y$, and $z$ dimensions, respectively, and $\mu_{i j}$ and $\nu_{i j}$ represent the complex interval membership and non-membership degrees in the CIPFS framework. The angles $\theta_{i j}$ and $\phi_{i j}$ model phase uncertainties in the coordinate transformations.

The CIPFS-based Euclidean distance of a pixel from the center of the image is given by:

$d_{x, y, z}=\sqrt{x_{i j}^2+y_{i j}^2+z_{i j}^2+\left(\pi_{i j} e^{i \gamma_{i j}}\right)^2}$
(2)

where, $\pi_{i j}$ is the hesitation degree in the CIPFS model, representing additional uncertainty in blur estimation.

To model the projection process for image restoration, the modified relationship is given by:

$\tan \Theta_k=\frac{q_k+\mu_k e^{i \delta_k}}{Z_k+\nu_k e^{i \eta_k}}$
(3)

where, $q_k$ denotes the linear separation between a pixel in the blurred image and its restored location, and $Z_k$ is the total pixel count in the blurred image, both adjusted by their respective complex interval fuzzy parameters.
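As a concrete illustration, Eqs. (1) and (2) can be sketched in NumPy. For simplicity, each complex interval term is treated as a single complex number $\mu e^{i\theta}$ rather than a full interval; the function names and sample values below are illustrative, not part of the paper's formulation.

```python
import numpy as np

def cipfs_transform(x, y, z, mu, nu, theta, phi):
    """CIPFS-augmented coordinate shift of Eq. (1).

    x, y, z    : real pixel coordinates
    mu, nu     : membership / non-membership magnitudes in [0, 1]
    theta, phi : phase angles of the complex interval terms
    Returns the complex-valued transformed coordinates (X, Y).
    """
    X = x - z + mu * np.exp(1j * theta)
    Y = y - z + nu * np.exp(1j * phi)
    return X, Y

def cipfs_distance(x, y, z, pi_h, gamma):
    """CIPFS-based distance of Eq. (2), with hesitation term pi_h * e^{i*gamma}."""
    return np.sqrt(x**2 + y**2 + z**2 + (pi_h * np.exp(1j * gamma)) ** 2 + 0j)
```

With all fuzzy magnitudes set to zero, both functions collapse to the ordinary Cartesian shift and Euclidean distance, which is a useful sanity check.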

3.2 Correction of Pixel Coordinates Using CIPFS

To mitigate distortions introduced by blurring effects, pixel coordinate corrections incorporate CIPFS elements as follows:

$X=\lambda_0+R_1+\mu_X e^{i \theta_X}, \quad Y=\lambda_1+R_2+\nu_Y e^{i \phi_Y}, \quad Z=\lambda_2+R_3+\pi_Z e^{i \gamma_Z}$
(4)

where, $\lambda_0, \lambda_1$, and $\lambda_2$ are correction parameters derived through CIPFS-based membership functions, and $R_1, R_2$, and $R_3$ are the radii of angles formed by light interaction with the imaging lens, adjusted by their complex interval uncertainties.

By incorporating CIPFS-based corrections, pixel displacements induced by the blur kernel were adjusted more effectively, ensuring robust image reconstruction.

3.3 Integration of CIPFS in Fuzzy Logic-Based Deblurring

To address uncertainty in blur estimation, complex interval Pythagorean fuzzy logic was applied. Adaptive membership functions were designed to incorporate complex interval values, preserving sharp edges while minimizing distortions.

Fuzzy membership functions quantify the degree of blurriness at each pixel. The blur levels were classified into high, medium, and low using the following CIPFS-based function:

$\mu_{\text {blur }}(x)= \begin{cases}1+\mu_x e^{i \theta_x}, & \text { if } x \geq T_h \\ \frac{x-T_l}{T_h-T_l}+\nu_x e^{i \phi_x}, & \text { if } T_l<x<T_h \\ 0+\pi_x e^{i \gamma_x}, & \text { if } x \leq T_l\end{cases}$
(5)

where, $T_h$ and $T_l$ are intensity thresholds defining the blur range, and $\mu_x$, $\nu_x$, and $\pi_x$ represent the CIPFS-based membership, non-membership, and hesitation degrees, respectively, each carrying an associated phase component $\left(\theta_x, \phi_x, \gamma_x\right)$. This enhances uncertainty handling in blur estimation and correction.
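The piecewise membership of Eq. (5) can be sketched as a small Python function. The default zero magnitudes reduce it to an ordinary linear ramp between the thresholds; any name not appearing in Eq. (5) is an assumption of this sketch.

```python
import numpy as np

def mu_blur(x, T_l, T_h, mu_x=0.0, nu_x=0.0, pi_x=0.0,
            theta=0.0, phi=0.0, gamma=0.0):
    """CIPFS blur membership of Eq. (5).

    x        : local blur indicator at a pixel
    T_l, T_h : low / high intensity thresholds defining the blur range
    mu_x, nu_x, pi_x : membership, non-membership, and hesitation
    magnitudes, each with an associated phase (theta, phi, gamma).
    """
    if x >= T_h:                       # high blur branch
        return 1.0 + mu_x * np.exp(1j * theta)
    if x <= T_l:                       # low blur branch
        return 0.0 + pi_x * np.exp(1j * gamma)
    # medium blur: linear ramp plus complex non-membership term
    return (x - T_l) / (T_h - T_l) + nu_x * np.exp(1j * phi)
```
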

By leveraging CIPFS, this refined fuzzy deblurring approach yields several benefits: a) Enhanced uncertainty modeling. Complex interval values improve robustness against variations in blur intensity. b) Edge preservation. Complex hesitancy components refine pixel-wise adjustments, reducing over-smoothing. c) Higher image quality, i.e., enhanced restoration accuracy with adaptive correction parameters.

This transformation and correction framework improves deblurring precision, making it particularly effective in handling complex image distortions.

3.3.1 Fuzzy rule-based mechanism with CIPFS

A rule-based fuzzy system determines how to correct blur at each pixel by associating input blur levels and spatial relationships with corrective measures. The proposed model leverages CIPFS to represent the degrees of blur correction with enhanced flexibility and robustness. Some example rules include:

• Rule 1: If the blur level is high and the pixel is close to an edge, a strong correction is applied based on CIPFS-based similarity measures.

• Rule 2: If the blur level is moderate and the pixel is at a medium distance from an edge, a moderate correction is applied using CIPFS-modulated weights.

• Rule 3: If the blur level is low and the pixel is far from an edge, a weak correction governed by CIPFS confidence values is applied.

These rules integrate expert knowledge with the CIPFS principles to dynamically adapt to variations in blur intensity and spatial image characteristics.
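One minimal way to encode such a rule base is a lookup from linguistic labels to correction weights. The labels and numeric weights below are purely illustrative placeholders, not values from the paper; in the full model each weight would be a CIPFS-modulated quantity rather than a crisp number.

```python
def rule_strength(blur_level, edge_distance):
    """Toy encoding of the three fuzzy rules: map (blur level,
    distance-to-edge) labels to a crisp correction weight in [0, 1].
    The weights 0.9 / 0.5 / 0.1 are illustrative assumptions."""
    table = {
        ("high", "near"):  0.9,  # Rule 1: strong correction
        ("medium", "mid"): 0.5,  # Rule 2: moderate correction
        ("low", "far"):    0.1,  # Rule 3: weak correction
    }
    return table.get((blur_level, edge_distance), 0.3)  # fallback weight
```
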

3.3.2 CIPFS aggregation and defuzzification

Following the application of rules, the fuzzy outputs are aggregated into a composite complex interval Pythagorean fuzzy set representing the corrective modifications needed to restore the image. Each rule contributes based on its corresponding complex-valued membership function and weight.

Defuzzification translates the CIPFS-derived fuzzy set into a definitive correction value, utilizing a modified centroid method:

$C=\frac{\int_x x \cdot \mu_C(x) \cdot \nu_C(x) d x}{\int_x \mu_C(x) \cdot \nu_C(x) d x}$
(6)

where, $\mu_C(x)$ and $\nu_C(x)$ represent the complex-valued membership and non-membership functions of the aggregated CIPFS, respectively.

This process ensures that the computed correction value maintains a balance between correction robustness and preservation of fine details. By integrating multiple corrective actions and their associated fuzzy memberships, the centroid method in CIPFS produces an adjusted pixel value that adheres to the extended Pythagorean fuzzy logic principles while restoring image clarity.
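Numerically, the modified centroid of Eq. (6) can be approximated over sampled membership values. Taking real parts so that the defuzzified correction is a real number is one reasonable reading of Eq. (6), not necessarily the paper's exact implementation.

```python
import numpy as np

def defuzzify_centroid(x, mu, nu):
    """Discrete approximation of the modified centroid in Eq. (6):
    each sample x_i is weighted by mu(x_i) * nu(x_i); real parts are
    taken so the result is a real correction value."""
    w = np.real(mu) * np.real(nu)
    return np.sum(x * w) / np.sum(w)
```

For uniform weights the centroid reduces to the plain mean of the sample points, which serves as a quick correctness check.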

3.3.3 CIPFS-weighted filtering

A CIPFS-weighted filtering technique was utilized to determine correction intensity based on the blur level at each pixel. This approach blends information from both the original and CIPFS-corrected images, effectively restoring blurred sections while preserving unaltered regions. The corrected pixel intensity, $I_{\text{corrected}}(x, y)$, is computed as:

$I_{\text {corrected}}(x, y)=\mu_C(x, y) \cdot \nu_C(x, y) \cdot I_{\text {CIPFS}}(x, y)+\left(1-\mu_C(x, y) \cdot \nu_C(x, y)\right) \cdot I_{\text {original}}(x, y)$
(7)

where, $\mu_C(x, y)$ and $\nu_C(x, y)$ represent the membership and non-membership degrees of CIPFS for the blur degree at pixel $(x, y)$; $I_{\mathrm{CIPFS}}(x, y)$ denotes the intensity after applying CIPFS-based fuzzy corrections; and $I_{\text {original}}(x, y)$ corresponds to the unprocessed intensity.

For pixels in heavily blurred regions where $\mu_C(x, y)\cdot\nu_C(x, y)\approx 0.8$, the restoration depends primarily on $I_{\mathrm{CIPFS}}(x, y)$. Conversely, for less blurred areas where $\mu_C(x, y) \cdot\nu_C(x, y)\approx 0.2$, the correction mainly retains $I_{\text {original}}(x, y)$.

By employing this approach, the restoration process remains localized, preserving clarity in sharp regions while effectively correcting blurred areas. The CIPFS framework ensures adaptive and robust image restoration, mitigating diverse blurring artifacts without over-processing unblurred regions.
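The blend of Eq. (7) is a per-pixel convex combination controlled by the weight $w = \mu_C \cdot \nu_C$; a minimal NumPy sketch (real-valued weights assumed for simplicity):

```python
import numpy as np

def cipfs_blend(I_original, I_cipfs, mu, nu):
    """Per-pixel blend of Eq. (7): the weight w = mu * nu decides how
    much of the CIPFS-corrected image replaces the original intensity.
    Works elementwise on arrays of matching shape."""
    w = mu * nu
    return w * I_cipfs + (1.0 - w) * I_original
```

With $w \approx 0.8$ the corrected intensity dominates (heavy blur), while $w \approx 0.2$ mostly retains the original pixel, matching the behavior described above.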

3.4 CIPFS-Mathematical Transformation Integration for Blur Restoration

The proposed methodology leverages a combination of CIPFS and mathematical transformations to improve the restoration of blurred images. This hybrid approach capitalizes on the extended flexibility of CIPFS in handling uncertainty and noise while utilizing precise mathematical adjustments to refine image clarity. The result is an effective restoration technique tailored to address varying levels of blur.

During the restoration process, pixel coordinates were corrected based on a combination of CIPFS-based membership functions and mathematical transformation rules, formulated as follows:

$X=\left(\lambda_0+R_1\right) \cdot \mu_C(x, y) \cdot \nu_C(x, y), \quad Y=\left(\lambda_1+R_2\right) \cdot \mu_C(x, y) \cdot \nu_C(x, y)$
(8)

where, $\lambda_0$ and $\lambda_1$ represent correction factors for the $X$ and $Y$ coordinates, respectively. The terms $R_1$ and $R_2$ introduce mathematical adjustments to compensate for distortions caused by blurring. The function $\mu_C(x, y) \cdot$$\nu_C(x, y)$ determines the complex-valued blur intensity at pixel $(x,y)$, with values ranging from 0 (no blur) to 1 (severe blur).

For areas with significant blur $\left(\mu_C(x, y) \cdot \nu_C(x, y) \approx 1\right)$, the correction terms $\left(\lambda_i+R_i\right)$ are applied extensively to enhance restoration. Conversely, for regions with minimal blur $\left(\mu_C(x, y) \cdot \nu_C(x, y) \approx 0\right)$, these adjustments remain limited to preserve the original image structure.

This adaptive fusion technique ensures that restoration corrections are applied selectively, refining sharpness in highly degraded regions while maintaining the integrity of less-affected parts of the image.
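Eq. (8) can be sketched directly: the correction terms $(\lambda_i + R_i)$ are scaled by the local blur weight, so strongly blurred pixels receive the full correction and sharp pixels are left nearly unchanged. Real-valued weights are assumed here for simplicity.

```python
def fused_coordinates(lam0, lam1, R1, R2, mu, nu):
    """Adaptive coordinate fusion of Eq. (8): correction terms
    (lambda_i + R_i) scaled by the blur weight w = mu * nu."""
    w = mu * nu
    return (lam0 + R1) * w, (lam1 + R2) * w
```
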

3.5 CIPFS-Based Edge Enhancement for Blur Correction

To restore edges compromised due to blurring, a CIPFS-based edge enhancement approach was employed. This method utilizes gradient computations and fuzzy classification within the complex interval Pythagorean fuzzy framework to highlight edges while suppressing noise. The process follows these key steps:

Step 1: Computation of gradient magnitudes

The intensity gradient of the corrected image was derived using Sobel operators:

$G(x, y)=\sqrt{\left(\frac{\partial I_{\text {corrected}}(x, y)}{\partial x}\right)^2+\left(\frac{\partial I_{\text {corrected}}(x, y)}{\partial y}\right)^2}$
(9)

where, $I_{\text{corrected}}(x, y)$ represents the pixel intensity after the initial CIPFS-based correction.
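Eq. (9) is the standard Sobel gradient magnitude; a direct (unoptimized) NumPy sketch with reflect padding at the image borders:

```python
import numpy as np

def sobel_gradient_magnitude(I):
    """Gradient magnitude of Eq. (9) via 3x3 Sobel kernels.
    I is a 2-D grayscale array; borders use reflect padding."""
    Kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    Ky = Kx.T  # vertical Sobel kernel
    P = np.pad(I.astype(float), 1, mode="reflect")
    H, W = I.shape
    Gx = np.zeros((H, W))
    Gy = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            win = P[i:i + 3, j:j + 3]
            Gx[i, j] = np.sum(win * Kx)
            Gy[i, j] = np.sum(win * Ky)
    return np.sqrt(Gx**2 + Gy**2)
```

A constant image yields zero gradient everywhere, and a horizontal ramp of unit slope yields magnitude 8 in the interior (the Sobel kernel's built-in weighting), both easy sanity checks.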

Step 2: CIPFS membership for edge classification

The gradient values were categorized into different edge strengths (weak, moderate, and strong) using a complex interval Pythagorean fuzzy membership function:

$\mu_{\mathrm{edge}}(G)=\left(\frac{\left(G-G_l\right)}{\left(G_h-G_l\right)}+j \cdot \frac{\left(G-G_m\right)}{\left(G_h-G_l\right)}\right)$
(10)

where, $G_h, G_l$, and $G_m$ define the upper, lower, and mid thresholds for classifying edge intensities. The imaginary component introduces additional adaptability, capturing variations in edge intensity more effectively than traditional fuzzy methods.

Step 3: Edge enhancement via CIPFS rules

A CIPFS rule-based enhancement was applied to refine edge details:

$I_{\text {enhanced}}(x, y)=I_{\text {corrected}}(x, y)+\alpha \cdot \Re\left\{\mu_{\text {edge }}(G(x, y))\right\} \cdot G(x, y)$
(11)

where, $\alpha$ determines the intensity of enhancement, and $\Re\left\{\mu_{\text {edge}}(G(x, y))\right\}$ extracts the real part of the CIPFS membership function, ensuring robust edge reinforcement while minimizing artifacts.
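Eq. (11) then reduces to a one-line update; the value of `alpha` below is illustrative only.

```python
import numpy as np

def enhance_edges(I_corr, G, mu_edge, alpha=0.2):
    """Eq. (11): reinforce edges by adding alpha * Re{mu_edge} * G to
    the corrected image. alpha here is an illustrative choice."""
    return (np.asarray(I_corr, dtype=float)
            + alpha * np.real(mu_edge) * np.asarray(G, dtype=float))
```

Because only the real part of the membership is used, pixels with zero real membership pass through unchanged, which is what keeps artifacts low in flat regions.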

3.6 Normalization and Structural Refinement Using CIPFS

To maintain consistency with the original image, the restored output was normalized to fit within the intensity range of the unblurred reference image:

$I_{\text{restored}}(x, y) = \frac{I_{\text{enhanced}}(x, y) - \min(I_{\text{enhanced}})}{\max(I_{\text{enhanced}}) - \min(I_{\text{enhanced}})} \cdot \max(I_{\text{ground\_truth}})$
(12)

where, $\min(I_{\text{enhanced}})$ and $\max(I_{\text{enhanced}})$ represent the minimum and maximum intensities of the enhanced image, and $\max(I_{\text{ground\_truth}})$ is the peak intensity of the reference image.
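Eq. (12) is a standard min-max rescaling; a direct sketch follows, with a guard for flat images added as our own safety assumption.

```python
import numpy as np

def normalize_to_reference(I_enh, ref_max):
    """Eq. (12): rescale the enhanced image to [0, ref_max], where
    ref_max is the peak intensity of the reference image."""
    I_enh = np.asarray(I_enh, dtype=float)
    lo, hi = I_enh.min(), I_enh.max()
    if hi == lo:
        # Guard (our assumption): a flat image maps to ref_max everywhere.
        return np.full_like(I_enh, ref_max)
    return (I_enh - lo) / (hi - lo) * ref_max
```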

A CIPFS-based structural similarity constraint was incorporated to ensure the restored image remains faithful to the original:

$\left\|I_{\text{restored}}-I_{\text{ground\_truth}}\right\|_2 \rightarrow \min \left(\Im\left\{\mu_{\text{blur}}(x, y)\right\}\right)$
(13)

where, $\Im\left\{\mu_{\text {blur }}(x, y)\right\}$ represents the imaginary component of the CIPFS membership function, ensuring that the similarity measure dynamically adapts to varying blur levels.

This approach effectively restores sharpness in blurred regions while preserving finer details in areas with minimal degradation. By dynamically adjusting corrections based on the CIPFS-based blur intensity, the method minimizes artifacts and ensures a visually coherent and structurally accurate image restoration.

The proposed model, which integrates fuzzy logic with multiple mathematical transformations, demonstrates significant potential for restoring blurred images by effectively addressing uncertainties and variations in blur intensity. However, it relies on handcrafted fuzzy rules, which restricts its adaptability and scalability to diverse or novel degradation scenarios. Because the predefined rules are based on specific blur-level classifications and expert experience, the model is difficult to generalize to complex or unknown degradation forms that may arise in practical applications.

The model's computational complexity, particularly in high-resolution image processing, compounds this limitation: the rigid rule-based framework may not scale efficiently to large datasets or real-time processing demands. This cost stems from the operations involved, including pixel coordinate transformations, fuzzy membership calculations, rule-based corrections, and edge enhancement. Addressing these limitations in future work will be critical to enhancing the model's robustness and applicability across a broader range of image restoration tasks.

To validate the efficacy of the proposed approach, it was compared against conventional image restoration methods using Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Natural Image Quality Evaluator (NIQE), Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), and Mean Opinion Score (MOS) metrics, with statistical tests confirming significant performance improvements. The results demonstrate that the proposed model not only enhances visual clarity but also provides greater robustness against noise and intensity variations, making it a reliable solution for high-precision image restoration tasks.

4. Experimental Results

To evaluate the performance of the proposed fuzzy-mathematical fusion model in restoring blurred images, a series of experiments were conducted. The primary objective was to assess the model’s ability to handle different types of blur, including Gaussian blur, motion blur, and uniform blur, under various real-world conditions. To ensure a comprehensive analysis of robustness, test images were intentionally blurred, simulating real-world degradation scenarios.

A dataset of 50 images, each with a resolution of 255×255, was carefully curated from publicly accessible sources such as the Berkeley Dataset and the Google Open Images Dataset. The dataset was designed to include a diverse selection of real-world images rather than entirely synthetic ones, thereby improving the study’s practical applicability. The selected images encompass natural landscapes, urban environments, medical scans, remote sensing images, and architectural structures, ensuring a well-rounded evaluation of the model’s effectiveness in different contexts. The dataset was curated by prioritizing images that naturally exhibit a range of textures, edges, and intensity variations, which are critical factors in evaluating image restoration models.

To further enhance the study’s validity, real-world distortions were simulated by applying Gaussian, motion, and uniform blur at different intensity levels, rather than relying on artificial or uniform degradation. This ensures that the test conditions closely resemble practical scenarios encountered in fields such as medical imaging, surveillance, and remote sensing. Additionally, the rationale for selecting this dataset over others is based on its diversity, availability, and relevance to the problem domain. By incorporating a broad spectrum of image types and degradation levels, the proposed model’s capability to restore blurred images was evaluated under challenging and varied conditions, reinforcing the credibility of the study.

The parameter setup for the proposed CIPFS-based image restoration model involves essential components for optimal performance. The membership functions for blur estimation were defined by $\mu_{i j}=0.8, \nu_{i j}=0.2$, and $\pi_{i j}=0.1$, with phase components $\theta_{i j}=0.3$ and $\phi_{i j}=0.5$. The correction parameters for pixel displacement were $\lambda_0=0.5, \lambda_1=0.7$, and $\lambda_2=0.6$, with correction radii $R_1=1.2, R_2=1.5$, and $R_3=1.3$, which contribute to effective blur correction, ensuring high-quality restoration. These values were chosen for their role in enhancing the model’s ability to restore sharpness while managing uncertainty in blurred regions.
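For reproducibility, the reported parameter values can be collected in a single configuration structure. The grouping and key names below are ours, not the paper's; the numeric values are those stated above.

```python
# Parameter setup reported in the text, gathered in one place.
# (Illustrative grouping; key names are our own convention.)
CIPFS_PARAMS = {
    "membership": {"mu": 0.8, "nu": 0.2, "pi": 0.1},   # mu_ij, nu_ij, pi_ij
    "phase": {"theta": 0.3, "phi": 0.5},               # theta_ij, phi_ij
    "lambda": [0.5, 0.7, 0.6],                         # lambda_0..lambda_2
    "radius": [1.2, 1.5, 1.3],                         # R_1..R_3
}

# Sanity check of the Pythagorean constraint mu^2 + nu^2 <= 1.
m = CIPFS_PARAMS["membership"]
assert m["mu"] ** 2 + m["nu"] ** 2 <= 1.0
```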

The experiments were conducted on a system running MATLAB R2019, configured with 8 GB RAM and Windows 10. Image processing for the 255×255 dataset was performed under this setup, ensuring smooth execution of the proposed restoration model. To provide a fair and objective evaluation, the model’s performance was benchmarked against two well-established image restoration methods, RBF [20] and BLIF [21], both of which are widely utilized in image enhancement tasks. The effectiveness of the proposed model was assessed using multiple quantitative metrics, including PSNR, SSIM, NIQE, BRISQUE, and MOS [22]. PSNR measures image reconstruction accuracy by comparing the restored image with the original, where higher values indicate superior quality. SSIM evaluates structural integrity preservation, with higher scores reflecting improved detail retention. NIQE and BRISQUE, which are no-reference quality assessment metrics, estimate perceptual quality, where lower scores correspond to clearer images. MOS, a subjective evaluation metric, represents human perception of image quality, with higher scores denoting greater visual appeal. The performance of the proposed approach was analyzed by comparing the restored images with ground truth across various blur conditions, validating its overall efficacy.

Figure 1 provides a comprehensive visual comparison of the image deblurring capabilities of the proposed approach against the competing models, RBF and BLIF. The first column presents the original blurred images, which suffer from significant loss of detail and sharpness. The second and third columns depict the deblurred outputs generated by RBF and BLIF, respectively. Although these models improve image clarity to some extent, they struggle to recover intricate details, particularly in regions with complex textures and edges. In contrast, the fourth column showcases the results produced by the proposed model, which effectively restores fine details, preserves texture consistency, and enhances overall sharpness. As summarized in Table 1, the proposed method achieves a higher PSNR of 34.2±0.7 dB, outperforming RBF (30.5±1.1 dB) and BLIF (29.1±1.3 dB). These results validate the robustness and effectiveness of the fuzzy logic-based approach in mitigating image blurring.

Figure 1. Visual comparison of image deblurring results
Table 1. Performance comparison of the proposed model and competing approaches

| Metric       | Proposed Model | RBF          | BLIF         | p-value              |
|--------------|----------------|--------------|--------------|----------------------|
| PSNR (dB)    | 34.2 ± 0.7     | 30.5 ± 1.1   | 29.1 ± 1.3   | p < 0.01 (ANOVA)     |
| SSIM         | 0.97 ± 0.01    | 0.90 ± 0.03  | 0.86 ± 0.04  | p < 0.01 (ANOVA)     |
| NIQE         | 2.7 ± 0.4      | 4.0 ± 0.6    | 4.5 ± 0.7    | p < 0.01 (t-test)    |
| BRISQUE      | 19.5 ± 2.0     | 28.7 ± 2.6   | 31.2 ± 2.9   | p < 0.01 (t-test)    |
| MOS          | 5.4 ± 0.2      | 4.2 ± 0.3    | 3.8 ± 0.4    | p < 0.05 (Wilcoxon)  |
| CPU time (s) | 4.3 ± 0.5      | 3.4 ± 0.4    | 2.9 ± 0.3    | -                    |

Figure 2. Visual comparison of combined image deblurring and denoising results with Gaussian noise (variance = 0.01)

Figure 2 evaluates the combined performance of deblurring and denoising in the presence of Gaussian noise with a variance of 0.01. The first column displays images affected by both blurring and noise, leading to a noticeable degradation in visual quality. The second and third columns present the outputs from RBF and BLIF, respectively. Although these methods attempt to reduce noise and enhance clarity, they introduce artifacts and fail to fully restore fine image details. In contrast, the fourth column illustrates the output of the proposed model, which efficiently removes noise while recovering details and sharpness, producing a visually more natural result. A quantitative assessment, as shown in Table 1, indicates that the proposed approach achieves the highest SSIM score of 0.97±0.01, surpassing RBF (0.90±0.03) and BLIF (0.86±0.04). This demonstrates the model's effectiveness in handling complex tasks that require simultaneous deblurring and denoising. However, this improved performance comes at a computational cost: the proposed model requires 4.3±0.5 seconds per image due to the inclusion of fuzzy constraints, compared to 3.4±0.4 seconds for RBF and 2.9±0.3 seconds for BLIF.

PSNR is a commonly used metric for assessing image reconstruction quality, measuring the difference between an enhanced image and its original counterpart. It is computed using the following formula:

$\textit{PSNR}=10 \cdot \log _{10}\left(\frac{\textit{MAX}^2}{\textit{MSE}}\right)$
(14)

where, MSE represents the mean squared error, and MAX is the maximum possible pixel intensity. A higher PSNR value indicates a better-quality restoration. The proposed method achieved an average PSNR of 34.2±0.7, outperforming RBF (30.5±1.1) and BLIF (29.1±1.3). Analysis of Variance (ANOVA) testing confirmed statistical significance with a p-value below 0.01 (Table 1).
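Eq. (14) translates directly into code; the sketch below assumes 8-bit intensities (MAX = 255) as the default.

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """PSNR of Eq. (14): 10 * log10(MAX^2 / MSE) between the restored
    image and its reference. Identical images give infinite PSNR."""
    reference = np.asarray(reference, dtype=float)
    restored = np.asarray(restored, dtype=float)
    mse = np.mean((reference - restored) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(max_val ** 2 / mse)
```

As a sanity check, an image compared against itself gives infinite PSNR, and an error equal to the full dynamic range gives 0 dB.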

SSIM evaluates image quality by comparing structural integrity, contrast, and brightness between an enhanced image and its reference. A higher SSIM value suggests better detail preservation. The proposed approach achieved an SSIM score of 0.97±0.01, outperforming RBF (0.90±0.03) and BLIF (0.86±0.04). Statistical analysis using ANOVA indicated a significant difference, with a p-value below 0.01. NIQE is a no-reference metric that quantifies image quality based on statistical modeling of natural images. A lower NIQE value corresponds to better perceptual quality. The proposed method obtained a NIQE score of 2.7±0.4, notably lower than RBF (4.0±0.6) and BLIF (4.5±0.7). A paired t-test confirmed this improvement as statistically significant, with a p-value below 0.01. BRISQUE evaluates image quality by analyzing deviations from expected natural image characteristics. A lower BRISQUE value signifies better quality. The proposed model achieved the lowest BRISQUE score of 19.5±2.0, outperforming RBF (28.7±2.6) and BLIF (31.2±2.9). The improvement was statistically significant, as confirmed by a paired t-test with a p-value below 0.01. MOS is a subjective image quality evaluation metric based on human perception, where higher scores indicate better visual quality. The proposed model achieved the highest MOS rating of 5.4±0.2, surpassing RBF (4.2±0.3) and BLIF (3.8±0.4). A Wilcoxon signed-rank test yielded a p-value below 0.05, confirming that the perceived improvement in quality is statistically significant (Table 1).

As shown in Figure 3, the proposed model consistently outperforms the competing approaches, demonstrating superior image quality restoration while maintaining computational efficiency. Specifically, the proposed model outperforms RBF and BLIF across multiple image quality metrics, achieving the highest PSNR (34.2 dB) and SSIM (0.97) values, indicating superior reconstruction quality and structural preservation. It also records the lowest NIQE (2.7) and BRISQUE (19.5) scores, reflecting enhanced perceptual quality. Additionally, the highest MOS (5.4) confirms its superiority in subjective evaluation. Despite its improved performance, the proposed method maintains a reasonable processing time of 4.3 seconds, compared to 3.4 seconds (RBF) and 2.9 seconds (BLIF). Statistical validation through ANOVA, t-tests, and the Wilcoxon signed-rank test confirms the significance of these improvements, establishing the proposed approach as the most effective for image enhancement.

To ensure the scalability and applicability of the proposed CIPFS-based model in real-time imaging environments, its computational efficiency was analyzed and potential optimization strategies were explored in this study. While the model demonstrates superior restoration quality by effectively handling uncertainty and preserving structural details, its computational complexity results in an average processing time of 4.3 seconds, which is higher than RBF (3.4 seconds) and BLIF (2.9 seconds). This additional processing time is attributed to the robust uncertainty modeling and precise structural preservation facilitated by the CIPFS framework. To mitigate this latency and enhance real-time applicability, future optimizations could focus on parallel processing, Graphics Processing Unit (GPU) acceleration, and memory-efficient algorithmic refinements. Additionally, adaptive fuzzy rule pruning could be explored to dynamically adjust the complexity of fuzzy inference based on image characteristics, further reducing execution time. Hardware-based acceleration, such as Field-Programmable Gate Array (FPGA) implementation, could also be considered to improve the model’s efficiency in time-sensitive applications. These enhancements will ensure that the proposed approach maintains its accuracy and robustness while achieving the necessary computational efficiency for real-time and large-scale imaging scenarios.

Figure 3. Performance comparison of the proposed model with RBF and BLIF across multiple evaluation metrics

5. Conclusions

An innovative complex interval Pythagorean fuzzy-mathematical fusion model for image restoration was proposed in this study, which effectively addresses the challenges of deblurring and noise reduction. The model demonstrated exceptional performance in handling various blur types such as Gaussian, motion, and uniform blur, as well as combinations of blur and noise. When tested on a dataset of 50 grayscale images, the proposed model consistently outperformed existing methods, including RBF and BLIF, in terms of key metrics. Specifically, it achieved an average PSNR of 34.2±0.7 dB, SSIM of 0.97±0.01, NIQE of 2.7±0.4, BRISQUE of 19.5±2.0, and MOS of 5.4±0.2. These outcomes highlight the model’s ability to preserve fine details, reduce distortions, and enhance perceptual quality compared to competing techniques. The use of fuzzy logic within the framework enabled adaptive adjustments that preserved edge structures while minimizing overcorrection and artifacts. However, the model has certain limitations. Its computational demand remains a concern, especially for high-resolution or real-time applications, as the average processing time is 4.3 seconds, which is longer than the 3.4 seconds for RBF and 2.9 seconds for BLIF. This additional processing time is primarily due to the enhanced robustness and precision provided by the CIPFS-based framework. While this trade-off ensures superior restoration quality, further optimization is necessary to improve computational efficiency. Future efforts could explore parallel processing techniques, memory-efficient algorithms, and hardware acceleration using GPUs or FPGAs to reduce execution time without compromising performance. Moreover, the model makes certain assumptions regarding blur types and noise distributions, which may limit its performance in more complex or diverse scenarios. Specifically, its effectiveness in handling mixed degradations, varying illumination conditions, and occlusions has not been extensively tested.

Beyond image deblurring, the proposed fuzzy-mathematical fusion model has significant potential for a wide range of real-world applications requiring adaptive noise reduction and structural preservation. It could be highly beneficial in remote sensing, where satellite images are often degraded by motion blur and atmospheric distortions. Similarly, in surveillance systems, enhancing blurred footage could improve object recognition and event analysis. The model also holds promise for forensic imaging, where restoring tampered or degraded images is crucial, and industrial quality control, where enhancing images captured under dynamic conditions can improve defect detection.

Future work could focus on further optimizing computational efficiency, incorporating data-driven learning for enhanced adaptability, extending the model’s application to higher-resolution color images, and validating its performance on diverse real-world datasets to strengthen its practical implementation. By expanding the applications of this fuzzy logic-based restoration method, the proposed model can significantly contribute to effective and adaptive image enhancement across various fields.

Data Availability

The data used to support the research findings are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References
1.
Y. C. Zhang, Z. R. Shen, and R. S. Jiao, “Segment anything model for medical image segmentation: Current applications and future directions,” Comput. Biol. Med., vol. 171, p. 108238, 2024. [Google Scholar] [Crossref]
2.
Y. Xu, R. Quan, W. Xu, Y. Huang, X. Chen, and F. Liu, “Advances in medical image segmentation: A comprehensive review of traditional, deep learning and hybrid approaches,” Bioengineering, vol. 11, no. 10, p. 1034, 2024. [Google Scholar] [Crossref]
3.
I. Hussain, H. Ali, M. S. Khan, S. Niu, and L. Rada, “Robust region-based active contour models via local statistical similarity and local similarity factor for intensity inhomogeneity and high noise image segmentation,” Inverse Probl. Imag., vol. 16, no. 5, pp. 1113–1136, 2022. [Google Scholar] [Crossref]
4.
A. O. Panhwar, A. A. Sathio, N. M. Shah, and S. Memon, “A scheme based on deep learning for fruit classification,” Mehran Univ. Res. J. Eng. Technol., vol. 44, no. 1, pp. 8–19, 2025. [Google Scholar] [Crossref]
5.
I. Hussain, J. Muhammad, and R. Ali, “Enhanced global image segmentation: Addressing pixel inhomogeneity and noise with average convolution and entropy-based local factor,” Int. J. Knowl. Innov. Stud., vol. 1, no. 2, pp. 116–126, 2023. [Google Scholar] [Crossref]
6.
M. S. Khan, “A region-based fuzzy logic approach for enhancing road image visibility in foggy conditions,” Mechatron. Intell. Transp. Syst., vol. 3, no. 4, pp. 212–222, 2024. [Google Scholar] [Crossref]
7.
P. A. Prabha, M. Bharathwaj, K. Dinesh, and G. H. Prashath, “Defect detection of industrial products using image segmentation and saliency,” J. Phys. Conf. Ser., vol. 1916, p. 012165, 2021. [Google Scholar] [Crossref]
8.
Y. Funama, S. Oda, M. Kidoh, D. Sakabe, and T. Nakaura, “Effect of image quality on myocardial extracellular volume quantification using cardiac computed tomography: A phantom study,” Acta Radiol., vol. 63, no. 2, pp. 159–165, 2022. [Google Scholar] [Crossref]
9.
J. Meng, Y. Li, H. Liang, and Y. Ma, “Single-image dehazing based on two-stream convolutional neural network,” J. Artif. Intell. Technol., vol. 2, no. 3, pp. 100–110, 2022. [Google Scholar] [Crossref]
10.
Q. Li, H. Wang, B. Y. Li, T. Yanghua, and J. Li, “IIE-SegNet: Deep semantic segmentation network with enhanced boundary based on image information entropy,” IEEE Access, vol. 9, pp. 40612–40622, 2021. [Google Scholar] [Crossref]
11.
Y. L. Qi, Z. Yang, W. H. Sun, M. Lou, J. Lian, W. W. Zhao, X. Y. Deng, and Y. D. Ma, “A comprehensive overview of image enhancement techniques,” Arch. Comput. Methods Eng., vol. 29, pp. 583–607, 2022. [Google Scholar] [Crossref]
12.
S. P. Premnath and J. A. Renjit, “Image restoration model using Jaya-Bat optimization-enabled noise prediction map,” IET Image Process., vol. 15, no. 9, pp. 1926–1939, 2021. [Google Scholar] [Crossref]
13.
M. Ghulyani and M. Arigovindan, “Fast roughness minimizing image restoration under mixed Poisson–Gaussian noise,” IEEE Trans. on Image Process., vol. 30, pp. 134–149, 2021. [Google Scholar] [Crossref]
14.
C. C. Jiang, L. Hu, and G. T. Jia, “The application of deep learning in image deblurring,” in CAIBDA’ 24: Proceedings of the 2024 4th International Conference on Artificial Intelligence, Big Data and Algorithms, New York, USA, 2024, pp. 1075–1079. [Google Scholar] [Crossref]
15.
Y. W. Xiang, H. Zhou, C. Y. Li, F. W. Sun, Z. B. Li, and Y. Q. Xie, “Deep learning in motion deblurring: Current status, benchmarks and future prospects,” Vis. Comput., 2024. [Google Scholar] [Crossref]
16.
M. Aoi and T. Goto, “Deblurred image quality improvement by learning based deblurring method utilizing ConvNeXt-V2,” in DMIP’ 24: Proceedings of the 2024 7th International Conference on Digital Medicine and Image Processing, Osaka, Japan, 2024, pp. 62–66. [Google Scholar] [Crossref]
17.
K. Chen and Y. J. Liu, “Efficient image deblurring networks based on diffusion models,” arXiv preprint, arXiv:2401.05907, 2024. [Google Scholar] [Crossref]
18.
N. Varghese, A. N. Rajagopalan, and Z. A. Ansari, “Real-time large-motion deblurring for gimbal-based imaging systems,” IEEE J. Sel. Top. Signal Process., vol. 18, no. 3, pp. 346–357, 2024. [Google Scholar] [Crossref]
19.
L. Zhai, Y. Wang, S. Cui, and Y. Zhou, “A comprehensive review of deep learning-based real-world image restoration,” IEEE Access, vol. 11, pp. 21049–21067, 2023. [Google Scholar] [Crossref]
20.
S. M. Zhao, S. K. Oh, J. Y. Kim, Z. W. Fu, and W. Pedrycz, “Motion blurred image restoration framework based on parameter estimation and fuzzy radial basis function neural networks,” Pattern Recognit., vol. 132, p. 108983, 2022. [Google Scholar] [Crossref]
21.
J. Peng, B. Luo, L. Xu, J. Yang, C. Zhang, and Z. Pei, “Blind image deblurring via minimizing similarity between fuzzy sets on image pixels,” IEEE Trans. Circuits Syst. Video Technol., vol. 34, no. 11, pp. 11851–11873, 2024. [Google Scholar] [Crossref]
22.
A. C. Bovik, The Essential Guide to Image Processing. Academic Press, 2009. [Google Scholar]

Cite this:
Husain, Z. & Yow, K. S. (2025). Advanced Image Restoration Through CIPFS-Integrated Mathematical Transformations. Acadlore Trans. Mach. Learn., 4(1), 50-61. https://doi.org/10.56578/ataiml040105
©2025 by the author(s). Published by Acadlore Publishing Services Limited, Hong Kong. This article is available for free download and can be reused and cited, provided that the original published version is credited, under the CC BY 4.0 license.