Open Access
Research article

A Robust Road Image Defogging Framework Integrating Pythagorean Fuzzy Aggregation, Gaussian Mixture Models, and Level-Set Segmentation

Luqman Alam*
Abdus Salam School of Mathematical Sciences, Government College University Lahore, 54600 Lahore, Pakistan
Mechatronics and Intelligent Transportation Systems | Volume 3, Issue 4, 2024 | Pages 254-263
Received: 10-31-2024 | Revised: 12-09-2024 | Accepted: 12-19-2024 | Available online: 12-30-2024

Abstract:

Foggy road conditions present substantial challenges to road monitoring and autonomous driving systems, as existing defogging techniques often fail to accurately recover structural details, manage dense fog, and mitigate artifacts. In response, a novel defogging model is proposed, incorporating Pythagorean fuzzy aggregation, Gaussian Mixture Models (GMM), and the level-set method, aimed at overcoming these limitations. Unlike conventional methods that depend on fixed priors or oversimplified haze models, the proposed framework leverages the advantages of Pythagorean fuzzy aggregation to enhance contrast and detail restoration, GMM to estimate fog density robustly, and the level-set method for precise edge preservation. The performance of the model is quantitatively assessed, revealing a Peak Signal-to-Noise Ratio (PSNR) of up to 37.1 dB and a Structural Similarity Index (SSIM) of 0.96, which significantly outperforms existing defogging techniques. Statistical analyses further confirm the robustness of the approach, with a p-value of less than 0.001 for key performance metrics. Additionally, the model demonstrates an execution time of 0.07 seconds, indicating its suitability for real-time road monitoring applications. Qualitative assessments highlight the model's ability to restore natural road colors and maintain high structural fidelity, even under conditions of dense fog. This work provides a promising advancement over current methods, with potential applications in autonomous driving, traffic surveillance, and smart transportation systems.

Keywords: Image defogging, Pythagorean aggregation, GMM, Level-set method, Haze removal, Real-time processing, Structural fidelity, Statistical analysis

1. Introduction

Image defogging, or dehazing, is a critical preprocessing step in computer vision that aims to restore the quality of images degraded by atmospheric conditions such as fog, haze, and smog [1], [2], [3], [4]. These atmospheric phenomena reduce visibility, contrast, and color fidelity by scattering light, thereby degrading the overall image quality. Effective defogging is essential in applications such as autonomous driving, outdoor photography, remote sensing, and surveillance systems, where clear visibility is vital for performance and safety.

The atmospheric scattering model forms the foundation for most defogging techniques [5], [6]. This model describes the process of light attenuation and scattering, leading to the formation of foggy images. Mathematically, the hazy image $I(x)$ can be expressed as:

$ I(x)=J(x) t(x)+A(1-t(x)) $

where, $J(x)$ represents the scene radiance, $t(x)$ is the transmission map indicating the proportion of unscattered light, and $A$ denotes the atmospheric light. Accurate estimation of $t(x)$ and $A$ is critical for restoring the true scene radiance $J(x)$. The transmission map is depth-dependent, meaning that objects further from the camera appear more obscured due to greater scattering.
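
To make the model concrete, the following NumPy sketch shows both the forward synthesis of a hazy image and the inversion used by restoration methods. It is an illustration of the equation above, not the implementation evaluated in this paper, and the clamping threshold t_min is an assumption made to keep the division numerically stable:

```python
import numpy as np

def synthesize_haze(J, t, A):
    """Forward model I(x) = J(x) * t(x) + A * (1 - t(x)).

    J : (H, W, 3) scene radiance in [0, 1]
    t : (H, W) transmission map in (0, 1]
    A : scalar (or length-3) atmospheric light in [0, 1]
    """
    t = t[..., np.newaxis]            # broadcast t over the color channels
    return J * t + A * (1.0 - t)

def recover_radiance(I, t, A, t_min=0.1):
    """Inversion J(x) = (I(x) - A) / t(x) + A, with t clamped from below
    (an assumption) so dense-fog pixels do not blow up numerically."""
    t = np.maximum(t, t_min)[..., np.newaxis]
    return np.clip((I - A) / t + A, 0.0, 1.0)
```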

Over the years, various defogging techniques have been proposed, which can be broadly categorized into enhancement-based methods, restoration-based methods, and deep learning-based methods [7], [8], [9], [10], [11], [12], [13], [14]. Enhancement-based approaches improve the visual quality of images without explicitly modeling the degradation process. Methods such as histogram equalization and its adaptive variants are commonly used to enhance contrast. However, these methods often fail to recover the actual scene radiance, leading to artifacts and an unnatural appearance. Retinex-based methods, inspired by human visual perception, separate illumination from reflectance to improve visibility. While effective in some scenarios, these methods struggle with color distortion and may amplify noise. For instance, Tan [8] introduced the idea of improving visibility in bad weather from a single image, which also falls under enhancement-based techniques.

Restoration-based methods aim to reconstruct the original scene radiance using physical models of haze formation. The dark channel prior (DCP), proposed by He et al. [7], is one of the most prominent methods in this category. It assumes that in most non-sky regions, at least one color channel has very low intensity. By estimating the transmission map and atmospheric light, DCP has demonstrated significant improvements in image restoration. However, it suffers from halo artifacts in sky regions and requires additional post-processing steps for optimal results. Other restoration methods include polarization-based techniques, which utilize multiple polarized images to estimate depth information [15], and fusion-based approaches, which combine multiple images captured under different conditions [9].
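
The core of DCP is compact in code. The sketch below, assuming an RGB image normalized to [0, 1], computes the dark channel as a channel-wise minimum followed by a local minimum filter, then forms the coarse transmission estimate $t = 1-\omega \cdot \text{dark}(I/A)$. The patch size of 15 and $\omega = 0.95$ are common choices in the DCP literature, not values taken from this paper:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(I, patch=15):
    """Per-pixel minimum over color channels, then minimum over a local patch."""
    min_rgb = I.min(axis=2)
    return minimum_filter(min_rgb, size=patch)

def estimate_transmission(I, A, omega=0.95, patch=15):
    """Coarse DCP transmission t = 1 - omega * dark_channel(I / A);
    omega < 1 deliberately leaves a trace of haze for depth perception."""
    return 1.0 - omega * dark_channel(I / A, patch)
```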

Deep learning has revolutionized the field of image defogging, offering robust solutions for complex scenarios. Convolutional Neural Networks (CNNs), such as DehazeNet [16], learn feature representations to map hazy images to their haze-free counterparts. Generative Adversarial Networks (GANs), such as CycleGAN, improve performance even more by allowing image-to-image translation without paired training datasets [14]. Despite their effectiveness, deep learning-based methods face challenges such as computational complexity, the need for extensive training data, and the generation of potential artifacts. Ren et al. [12] proposed using multi-scale CNNs for single-image dehazing, further advancing this approach. Lai and Ren [17] used fuzzy logic for image modeling to address similar challenges.

The proposed model integrates Pythagorean aggregation, GMM, and the level-set method, each addressing distinct challenges in road defogging while complementing one another. Pythagorean aggregation effectively handles uncertainty and vagueness in foggy images by robustly combining pixel intensity values, significantly enhancing contrast and preserving fine details across varying fog densities. This adaptive framework ensures that the defogging process avoids artifacts commonly introduced by traditional methods. GMM further enhances the model’s performance by accurately classifying intensity-based regions within the image, leveraging probabilistic distributions to target fog-affected areas with precision. This step ensures that haze is removed in a context-aware manner, maintaining the natural gradients and preventing overprocessing. The level-set method refines the output by incorporating edge-preserving constraints, enabling the recovery of road boundaries, lane markings, and other critical structural details often lost in foggy conditions. By preserving both geometric and intensity-based features, the method ensures structural fidelity while restoring visual clarity.

Furthermore, the model incorporates a multiscale fusion approach inspired by Zhang and Wu [9], which combines global and local characteristics across spatial scales. This technique ensures that the defogging process enhances both overall visibility and fine texture details, effectively addressing the limitations of traditional methods that either focus solely on large-scale improvements or overlook subtle local variations. Together, these components form a synergistic framework: Pythagorean aggregation serves as the foundation for enhancing visibility and contrast, GMM provides a targeted and adaptive estimation of fog density, and the level-set method ensures the final output is both visually coherent and structurally accurate.

In addition, the model is designed with computational efficiency in mind, achieving a processing time of 0.07 seconds per frame, making it highly suitable for real-time applications such as autonomous driving, traffic monitoring, and smart transportation systems. This combination of advanced techniques not only resolves limitations in existing defogging approaches, such as uneven haze distribution and structural blurring, but also ensures superior performance metrics, including enhanced clarity, artifact-free results, and natural color restoration. The proposed model represents a significant advancement in addressing the challenges of road defogging, particularly under dense fog conditions.

2. Related Work

In recent years, image defogging and enhancement techniques have gained significant attention, particularly in the context of image dehazing. A notable contribution in this domain is the work by Garg et al. [18]. This study introduces a novel image defogging algorithm based on the dark channel prior, which leverages statistical insights from haze-free outdoor images to estimate haze thickness and restore high-quality images. In foggy or smoggy conditions, image quality is often compromised, affecting tasks such as image segmentation and target detection. The method improves visibility and contrast, making it particularly effective for outdoor monitoring systems.

Despite its effectiveness, the dark channel prior-based method by Garg et al. has some limitations. It assumes a predominantly hazy scene, which may not apply to all images, especially those with complex backgrounds or varying weather conditions. The algorithm may also struggle with uneven haze thickness and extreme fog conditions, leading to inaccurate haze estimation. Additionally, the method can be computationally expensive, particularly for high-resolution images or real-time applications, requiring optimization for faster processing.

Mao et al. [19] introduced a method for single-image defogging using multi-exposure image fusion (M-EIF), which enhances image details by combining multiple exposures through the following formula:

$ I_{\text {fused }}(x)=\sum_{i=1}^N w_i I_i(x) $

where, $I_i(x)$ represents the input images taken at different exposure levels, and $w_i$ are the corresponding weights assigned to each exposure. This technique improves visibility by merging useful information from several images. The multi-exposure approach depends on having a series of images captured at different exposures, which may not be practical in real-time or rapidly changing conditions. Additionally, the method's effectiveness can be compromised by motion blur or if the images are not properly aligned.
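
A minimal sketch of this weighted fusion, assuming the exposures are already registered to a common grid and normalizing the weights to sum to one (the normalization is our assumption; the formula itself leaves the weights free):

```python
import numpy as np

def fuse_exposures(images, weights):
    """Per-pixel fusion I_fused(x) = sum_i w_i * I_i(x).

    images  : list of N arrays of identical shape (H, W, 3) in [0, 1]
    weights : length-N sequence of exposure weights
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize so the fused image stays in range
    stack = np.stack(images, axis=0)     # (N, H, W, 3)
    return np.tensordot(w, stack, axes=1)
```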

Khan [20] proposed a notable method for improving the visibility of road images under foggy conditions. This approach utilizes a context-aware fuzzy transmission map adjustment (C-AFTM) to enhance visibility across varying fog densities. Unlike traditional methods that rely on a uniform map, this model employs fuzzy logic to adjust the transmission map based on local fog density and contextual factors. By segmenting the image into regions and applying edge detection and texture analysis, the model preserves critical road details effectively. Additionally, proximity-based adjustments near high-intensity regions, such as streetlights, help maintain brightness. This approach outperforms traditional methods in terms of brightness, contrast, and detail retention.

The model depends on accurate image segmentation, which may be challenging in complex or dynamic environments. It may also struggle in extremely dense fog or fluctuating lighting conditions. The computational complexity of the fuzzy logic adjustments can make it less suitable for real-time applications.

3. Proposed Model

This paper introduces a novel approach to road defogging that integrates the Pythagorean fuzzy aggregation operator with the GMM and the level-set function. This method leverages the power of fuzzy logic to model the uncertainty and vagueness in fog detection, while GMM is used for intensity-based classification of foggy and clear regions. The proposed model is particularly effective in improving the visibility of fog-affected road images.

3.1 Pythagorean Fuzzy Set

A Pythagorean fuzzy set (PFS) $A_i$ is defined as a pair of membership and non-membership functions, which satisfy the condition:

$ \mu_i^2(x)+\nu_i^2(x) \leq 1 $

where, $\mu_i(x)$ represents the degree of membership of a pixel $x$ in the fuzzy set, indicating the likelihood of it being part of a foggy region, and $\nu_i(x)$ represents the degree of non-membership of the pixel $x$, indicating the likelihood of it being part of a clear region. For each pixel $x$, the membership function is modeled using the GMM.
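
As a quick numerical illustration with hypothetical membership values, pairing each membership with the complementary non-membership introduced in Section 3.2 keeps each pair on or inside the Pythagorean boundary:

```python
import numpy as np

mu = np.array([0.9, 0.5, 0.1])               # hypothetical "foggy" memberships
nu = np.sqrt(1.0 - mu**2)                    # complementary non-membership (Section 3.2)
assert np.all(mu**2 + nu**2 <= 1.0 + 1e-12)  # the PFS condition is satisfied
```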

3.2 GMM for Fog Detection

GMM is employed to model the intensity distribution of the road image affected by fog. The GMM assumes that the pixel intensity $x_i$ follows a mixture of several Gaussian distributions:

$ P\left(x_i \mid \mu_k, \sigma_k\right)=\frac{1}{\sqrt{2 \pi \sigma_k^2}} \exp \left(-\frac{\left(x_i-\mu_k\right)^2}{2 \sigma_k^2}\right) $

where, $\mu_k$ is the mean intensity of the $k$-th Gaussian component, $\sigma_k$ is the standard deviation of the $k$-th Gaussian component, and $x_i$ is the intensity of pixel $i$ in the image. Each pixel intensity $x_i$ is assigned a degree of membership to each Gaussian component. The membership function $\mu_k\left(x_i\right)$ for a pixel $x_i$ belonging to the $k$-th Gaussian component is given by:

$ \mu_k\left(x_i\right)=\frac{1}{\sqrt{2 \pi \sigma_k^2}} \exp \left(-\frac{\left(x_i-\mu_k\right)^2}{2 \sigma_k^2}\right) $

The non-membership function $\nu_i(x)$ is complementary to the membership function $\mu_i(x)$. It is defined as:

$ \nu_i(x)=\sqrt{1-\mu_i^2(x)} $

where, $\mu_i(x)$ is the membership value for the foggy region. This relationship ensures that the membership and non-membership values together satisfy the Pythagorean fuzzy set constraint.
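
A sketch of this computation follows, using the foggy-component parameters reported in Section 5 ($\mu_k = 120$, $\sigma_k = 25$). One caveat: the raw Gaussian density is not bounded by 1, so it is rescaled here by its peak value before the Pythagorean complement is taken; this peak normalization is our assumption, made so that the constraint $\mu^2 + \nu^2 \leq 1$ holds:

```python
import numpy as np

def gaussian_pdf(x, mu_k, sigma_k):
    """P(x | mu_k, sigma_k) as in the equation above."""
    return np.exp(-(x - mu_k) ** 2 / (2.0 * sigma_k ** 2)) / np.sqrt(2.0 * np.pi * sigma_k ** 2)

def fog_membership(intensities, mu_k=120.0, sigma_k=25.0):
    """Membership/non-membership of each pixel w.r.t. the foggy component."""
    p = gaussian_pdf(np.asarray(intensities, dtype=float), mu_k, sigma_k)
    mu = p / gaussian_pdf(mu_k, mu_k, sigma_k)   # peak-normalize into [0, 1] (assumption)
    nu = np.sqrt(1.0 - mu ** 2)                  # complementary non-membership
    return mu, nu
```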

3.3 Pythagorean Fuzzy Aggregation Operator

In order to combine the fuzzy membership functions from the different Gaussian components, we use the Pythagorean fuzzy aggregation operator. The aggregated membership function $\mu_{\text {agg}}\left(x_i\right)$ is defined as:

$ \mu_{\mathrm{agg}}\left(x_i\right)=\left(\sum_{k=1}^n w_k \mu_k^2\left(x_i\right)\right)^{1 / 2} $

where, $w_k$ is the weight associated with the $k$-th Gaussian component, reflecting the importance of the component in the overall fog detection process. The weight $w_k$ can be computed using the likelihood of each Gaussian component or can be assigned based on prior knowledge about the fog distribution. $\mu_k\left(x_i\right)$ is the membership value for the $k$-th Gaussian component for the pixel $x_i$ and $n$ is the total number of Gaussian components in the mixture model.
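
A sketch of the aggregation step, assuming the weights sum to one (the weights $w_1 = 0.7$, $w_2 = 0.3$ used in Section 5 satisfy this), which guarantees that the aggregated membership also stays in [0, 1]:

```python
import numpy as np

def pythagorean_aggregate(memberships, weights):
    """mu_agg(x_i) = (sum_k w_k * mu_k(x_i)^2)^(1/2).

    memberships : (n, H, W) stack of per-component membership maps in [0, 1]
    weights     : length-n component weights, assumed to sum to 1
    """
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    return np.sqrt((w * np.asarray(memberships) ** 2).sum(axis=0))
```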

Similarly, the aggregated non-membership function $v_{\text {agg}}\left(x_i\right)$ is computed as:

$ \nu_{\mathrm{agg}}\left(x_i\right)=\sqrt{1-\mu_{\mathrm{agg}}^2\left(x_i\right)} $

This aggregated non-membership function gives the degree to which each pixel belongs to a non-foggy (clear) region. The final step in the road defogging process is the classification of road segments as either foggy or clear using the level-set function. The level-set function $\Phi\left(x_i\right)$ is defined as:

$ \Phi\left(x_i\right)= \begin{cases}1, & \text { if } \mu_{\mathrm{agg}}\left(x_i\right) \geq \alpha \\ 0, & \text { if } \mu_{\mathrm{agg}}\left(x_i\right)<\alpha\end{cases} $

where, $\alpha$ is a threshold value. If the aggregated membership function $\mu_{\mathrm{agg}}\left(x_i\right)$ is greater than or equal to this threshold, the pixel is classified as foggy (1). Otherwise, it is classified as clear (0).

The level set function helps segment the road image into clear and foggy regions, providing a binary classification for defogging.
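
In code, this classification step is a single threshold on the aggregated membership map; the minimal sketch below uses $\alpha = 0.5$, the value adopted in Section 5:

```python
import numpy as np

def level_set_mask(mu_agg, alpha=0.5):
    """Phi(x_i): 1 where mu_agg >= alpha (foggy), 0 otherwise (clear)."""
    return (mu_agg >= alpha).astype(np.uint8)
```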

4. Statistical Analysis for Proposed Model

The mathematical calculations in the context of defogging (or image dehazing) involve evaluating the quality of defogged images using metrics such as PSNR, SSIM, and statistical tests. These calculations help quantify the performance of the proposed defogging model against other competing models.

4.1 PSNR

PSNR is used to assess the quality of the defogged image, where a higher PSNR indicates better quality. The PSNR for an image is calculated as:

$ \operatorname{PSNR}=10 \cdot \log_{10}\left(\frac{MAX_I^2}{MSE}\right) $

where, $MAX_I$ is the maximum pixel value of the image (e.g., 255 for 8-bit images), and $MSE$ is the Mean Squared Error between the original (fog-free) image $I_{\text{orig}}$ and the defogged image $I_{\text{defog}}$, defined as:

$ MSE=\frac{1}{N} \sum_{i=1}^N\left(I_{\text{orig}}(i)-I_{\text{defog}}(i)\right)^2 $

where, $N$ is the total number of pixels in the image.
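
A direct implementation of the two formulas above (illustrative; the perfect-reconstruction case $MSE = 0$ is mapped to infinite PSNR):

```python
import numpy as np

def psnr(original, defogged, max_i=255.0):
    """PSNR = 10 * log10(MAX_I^2 / MSE), with MSE averaged over all pixels."""
    mse = np.mean((np.asarray(original, float) - np.asarray(defogged, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_i ** 2 / mse)
```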

4.2 SSIM

SSIM measures the perceptual quality of the defogged image by considering luminance, contrast, and structure. It is given by the formula:

$ \operatorname{SSIM}(x, y)=\frac{\left(2 \mu_x \mu_y+C_1\right)\left(2 \sigma_{x y}+C_2\right)}{\left(\mu_x^2+\mu_y^2+C_1\right)\left(\sigma_x^2+\sigma_y^2+C_2\right)} $

where, $\mu_x$ and $\mu_y$ are the average pixel intensities of the original and defogged images, $\sigma_x^2$ and $\sigma_y^2$ are the variances of the original and defogged images, $\sigma_{x y}$ is the covariance between the original and defogged images, $C_1$ and $C_2$ are constants that help stabilize the division with weak denominators.
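
The sketch below evaluates this formula once over the whole image, with the conventional choices $C_1 = (0.01 \cdot MAX_I)^2$ and $C_2 = (0.03 \cdot MAX_I)^2$ (an assumption; the paper does not specify them). Library implementations such as structural_similarity in skimage.metrics instead average SSIM over local windows:

```python
import numpy as np

def ssim_global(x, y, max_i=255.0):
    """Single-window SSIM over the full image."""
    C1, C2 = (0.01 * max_i) ** 2, (0.03 * max_i) ** 2
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / ((mx**2 + my**2 + C1) * (vx + vy + C2))
```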

4.3 Paired t-Test for Defogging Model Comparison

The paired t-test is used to compare the performance of the proposed defogging model against competing models. The test statistic is calculated as:

$ t=\frac{\bar{d}}{s_d / \sqrt{n}} $

where, $\bar{d}$ is the mean of the differences between the paired samples (i.e., the difference in PSNR or SSIM values between the proposed defogging model and another model), $s_d$ is the standard deviation of these differences, $n$ is the number of pairs (number of test images).

The p-value associated with the t-test is used to determine if the observed difference in performance is statistically significant.
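
With SciPy, the paired test is a single call; the per-image PSNR values below are hypothetical placeholders, not the study's measurements:

```python
from scipy import stats

psnr_proposed = [35.1, 36.0, 35.7, 34.9, 35.6]   # hypothetical paired scores
psnr_caftm    = [28.4, 29.1, 28.9, 28.2, 28.8]
t_stat, p_value = stats.ttest_rel(psnr_proposed, psnr_caftm)
```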

4.4 One-Way ANOVA for Model Comparison

One-way ANOVA is applied to compare the performance of multiple defogging models across various metrics. The F-statistic is calculated as:

$ F=\frac{\text {Between-group variability}}{\text {Within-group variability}} $

where, Between-group variability measures how much the means of the different models differ from the overall mean. Within-group variability measures the variability within each group of model outputs.

A large F-value and a small p-value indicate that at least one of the models performs significantly differently from the others.
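
Analogously, SciPy's one-way ANOVA compares the score distributions of the three models; the SSIM values here are again hypothetical:

```python
from scipy import stats

ssim_proposed = [0.95, 0.94, 0.96, 0.95]   # hypothetical per-image SSIM
ssim_caftm    = [0.85, 0.84, 0.86, 0.85]
ssim_mao      = [0.89, 0.88, 0.90, 0.89]
f_stat, p_value = stats.f_oneway(ssim_proposed, ssim_caftm, ssim_mao)
```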

4.5 Wilcoxon Signed-Rank Test for Defogging Efficiency Comparison

The Wilcoxon Signed-Rank Test is a non-parametric test used to compare the execution time of the proposed defogging model with competing models. It is calculated as:

$ W=\sum(\text {signed ranks of execution time differences}) $

where, the differences between execution times of the proposed and competing models are ranked, and the sign of each difference is retained. The p-value obtained from this test indicates whether there is a significant difference in execution time between the models.
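
The paired non-parametric comparison is likewise one call; the execution times below are hypothetical placeholders:

```python
from scipy import stats

time_proposed = [0.08, 0.07, 0.09, 0.08, 0.07]   # hypothetical seconds per image
time_caftm    = [0.11, 0.12, 0.10, 0.11, 0.12]
w_stat, p_value = stats.wilcoxon(time_proposed, time_caftm)
```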

5. Experiments

To validate the performance of the proposed Pythagorean Aggregation-Based Road Defogging Model, experiments were conducted using road images affected by fog, sourced from publicly available datasets such as the RESIDE dataset and Foggy Driving dataset. These datasets were chosen for their diversity, representing a wide range of fog densities, road types, and environmental conditions, including urban streets, highways, and rural roads. The dataset includes over 500 images, with resolutions ranging from 640×480 to 1920×1080 pixels, ensuring a comprehensive evaluation of the defogging model. Prior to applying the proposed model, the images underwent preprocessing steps that included resizing to a consistent resolution of 800×600 pixels to standardize the input dimensions and histogram equalization to normalize intensity distributions. These steps ensured consistent conditions for testing the model's effectiveness. The experiments were carried out on a computational setup utilizing MATLAB R2015a on a high-performance CPU with 8 GB of RAM and Windows 10 (64-bit). This setup enabled efficient processing and evaluation of the model across varying fog conditions.

The parameter settings of the proposed model were chosen based on extensive experimental evaluations and were optimized to ensure robust performance. The Pythagorean fuzzy set membership function $\mu_i(x)$ and non-membership function $\nu_i(x)$ adhered to the constraint $\mu_i^2(x)+\nu_i^2(x) \leq 1$, which is fundamental to the properties of Pythagorean fuzzy sets. These functions were derived by analyzing the distribution of pixel intensity values in the foggy images, ensuring an accurate representation of fog density. For the GMM, the mean intensity of each Gaussian component $\mu_k$ was set to 120, and the standard deviation $\sigma_k$ was set to 25 for the foggy regions. These values were selected based on statistical analysis of pixel intensity distributions in the training set, where foggy regions consistently exhibited intensity clusters within this range. This configuration allowed the model to accurately identify and segment fog-affected areas, even under varying lighting conditions. The weights of the Pythagorean fuzzy aggregation operator $w_1$ and $w_2$ were set to 0.7 and 0.3, respectively, reflecting the observed dominance of foggy regions in the test images. These weights were optimized through iterative testing, ensuring that the aggregation process effectively enhanced contrast while preserving fine details. The threshold parameter $\alpha$ in the level-set function was empirically set to 0.5 to classify road segments as either foggy or clear. This value was chosen to balance sensitivity and specificity, enabling the level-set method to maintain structural fidelity and accurately delineate road boundaries.

Figure 1 illustrates the effectiveness of the proposed fog removal model through a side-by-side comparison of a foggy image, the defogged result, and the ground truth image. The quantitative metrics include a PSNR of 35.4 dB and an SSIM of 0.95, indicating high-quality restoration and near-perfect preservation of structural details in the defogged output. Additionally, the execution time of 0.08 seconds underscores the computational efficiency of the model, making it viable for real-time applications.

Figure 1. The foggy image, quantitative visualization (PSNR = 35.4, SSIM = 0.95, and execution time = 0.08 seconds), the defogging result of the proposed model, and the ground truth

The visual results confirm the capability of the proposed model to restore clarity and contrast effectively. The foggy image is dominated by reduced visibility and diminished scene details due to scattering effects. In contrast, the defogged result showcases a marked improvement, with enhanced contrast, restored object visibility, and colors closely resembling the ground truth image. This suggests that the model successfully mitigates the effects of haze, resulting in an output that aligns closely with the ideal scene representation.

Figure 2 provides a comparative analysis of the proposed model against competing approaches, denoted as C-AFTM and Mao's model. The results are demonstrated using a sequence of images, including the input foggy image, outputs from C-AFTM and Mao's model, and the output of the proposed model. Column 1 exhibits severe fog effects, characterized by low visibility, muted colors, and blurred details, underscoring the challenging conditions addressed in this evaluation. In column 2, the defogged results from C-AFTM show some improvement in clarity, but do not fully restore finer details or achieve optimal contrast. The output often retains residual haze and subdued colors, indicating limited effectiveness in handling dense fog scenarios. In column 3, the results from Mao's model demonstrate a moderate enhancement over C-AFTM, with better contrast and restored details. However, issues such as over-enhancement or noise artifacts are apparent, detracting from the overall quality of the restored image. In conclusion, the proposed model significantly outperforms both C-AFTM and Mao's model. The defogged images exhibit superior clarity, with well-preserved details and natural color tones. The model effectively eliminates haze, restores sharpness, and achieves a balanced enhancement without introducing artifacts or over-saturation.

Figure 2. Comparative evaluation of defogging model performance

In Figure 2, the first column represents the input foggy images, the second column shows the results of C-AFTM [20], the third column presents the results of Mao's model [19], and the final column displays the defogging results of the proposed model, demonstrating superior clarity and detail preservation.

The comparative evaluation highlights the robustness and efficiency of the proposed model in addressing fog-related distortions. Its superior performance, both quantitatively and qualitatively, suggests that it is well-suited for applications requiring high-quality scene restoration under challenging atmospheric conditions.

The statistical analysis presented in Table 1 compares the performance of the proposed model with two competing models (C-AFTM and Mao's model) across three experiments. The following metrics were analyzed: PSNR, SSIM, and Execution Time. Various statistical tests, including the paired t-test, one-way ANOVA, and Wilcoxon Signed-Rank Test, were applied to evaluate the significance of differences between the models.

Table 1. Statistical analysis (PSNR, SSIM, and Execution Time) across three experiments
Metric | Proposed Model | C-AFTM | Mao's Model | Statistical Test | p-value
Experiment 1
PSNR (dB) | 33.5 | - | - | - | -
SSIM | 0.92 | - | - | - | -
Execution Time (s) | 0.10 | - | - | - | -
Experiment 2
PSNR (dB) | 35.4 | 28.7 | 31.2 | Paired t-test (Proposed vs. C-AFTM/Mao) | p < 0.001 (both)
SSIM | 0.95 | 0.85 | 0.89 | One-way ANOVA (F = 45.6) | p < 0.001
Execution Time (s) | 0.08 | 0.11 | 0.09 | Wilcoxon Signed-Rank Test | p = 0.03 (vs. C-AFTM), p = 0.12 (vs. Mao)
Experiment 3
PSNR (dB) | 37.1 | 29.2 | 32.5 | Paired t-test (Proposed vs. C-AFTM/Mao) | p < 0.001 (both)
SSIM | 0.96 | 0.86 | 0.90 | One-way ANOVA (F = 50.8) | p < 0.001
Execution Time (s) | 0.07 | 0.12 | 0.10 | Wilcoxon Signed-Rank Test | p = 0.02 (vs. C-AFTM), p = 0.09 (vs. Mao)

5.1 PSNR (dB)

PSNR is a commonly used metric to evaluate image reconstruction quality, where a higher PSNR indicates better quality. In Experiment 2, the proposed model achieved a PSNR of 35.4 dB, outperforming both C-AFTM (28.7 dB) and Mao's model (31.2 dB). A paired t-test was conducted to compare the proposed model with both competing models, resulting in a p-value less than 0.001, indicating statistically significant differences. This suggests that the proposed model provides superior quality compared to the C-AFTM and Mao's model.

In Experiment 3, the proposed model achieved a PSNR of 37.1 dB, again surpassing both C-AFTM (29.2 dB) and Mao's model (32.5 dB). Similar to Experiment 2, a paired t-test yielded a p-value less than 0.001, confirming the statistical significance of the proposed model's performance.

5.2 SSIM

The SSIM metric evaluates the structural similarity between the original and processed images. A higher SSIM indicates better preservation of the structural features in the image. In Experiment 2, the proposed model achieved an SSIM of 0.95, significantly outperforming C-AFTM (0.85) and Mao's model (0.89). A one-way ANOVA was performed, with an F-statistic of 45.6 and a p-value less than 0.001, indicating that the differences between the models are statistically significant.

In Experiment 3, the proposed model again outperformed both C-AFTM and Mao's model, with an SSIM of 0.96 compared to 0.86 and 0.90, respectively. The one-way ANOVA test yielded an F-statistic of 50.8 and a p-value less than 0.001, further supporting the superiority of the proposed model.

5.3 Execution Time

Execution time is a critical metric for evaluating the efficiency of image processing algorithms. In Experiment 2, the proposed model had an execution time of 0.08 seconds, which was faster than C-AFTM (0.11 seconds) and comparable to Mao's model (0.09 seconds). A Wilcoxon Signed-Rank Test was used to compare the execution times, yielding a p-value of 0.03 for the comparison between the proposed model and C-AFTM, indicating a statistically significant difference. However, no significant difference was found between the proposed model and Mao's model (p = 0.12).

In Experiment 3, the proposed model achieved an even faster execution time of 0.07 seconds, outperforming both C-AFTM (0.12 seconds) and Mao's model (0.10 seconds). The Wilcoxon Signed-Rank Test showed a p-value of 0.02 when comparing the proposed model to C-AFTM, indicating a statistically significant difference. However, the difference between the proposed model and Mao's model was not statistically significant (p = 0.09).

The performance of the proposed defogging model was compared with two competing models, namely C-AFTM and Mao's model, using standard metrics derived from the confusion matrix. The following metrics were used to evaluate the models:

Accuracy: The proportion of correctly classified pixels (foggy and clear) out of the total number of pixels.

$ \text { Accuracy }=\frac{T P+T N}{T P+T N+F P+F N} $

Precision: The proportion of correctly classified foggy pixels out of all pixels predicted as foggy.

$ \text { Precision }=\frac{T P}{T P+F P} $

Recall (Sensitivity): The proportion of correctly classified foggy pixels out of all actual foggy pixels.

$ \text { Recall }=\frac{T P}{T P+F N} $

F1-Score: The harmonic mean of precision and recall.

$ \text { F1-Score }=2 \times \frac{\text { Precision } \times \text { Recall }}{\text { Precision }+ \text { Recall }} $
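
All four metrics follow directly from the pixel-level confusion counts; a small illustrative helper:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 from counts of true/false
    positives (foggy pixels) and negatives (clear pixels)."""
    accuracy  = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f1        = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```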

The following table summarizes the results obtained for the proposed model and the competing models based on these metrics.

Table 2. Performance metrics of proposed model vs. competing models
Model | Accuracy | Precision | Recall | F1-Score
Proposed model | 0.945 | 0.960 | 0.935 | 0.947
C-AFTM model | 0.875 | 0.890 | 0.860 | 0.875
Mao's model | 0.835 | 0.850 | 0.810 | 0.829

The results in Table 2 demonstrate that the proposed model outperforms the competing methods across all metrics. The proposed model achieves the highest accuracy of 94.5%, indicating its superior ability to classify both foggy and clear pixels correctly. With a precision of 96.0%, it effectively minimizes false positives, outperforming the C-AFTM model (89.0%) and Mao's model (85.0%). The recall value of 93.5% highlights the model’s ability to detect foggy regions accurately, showing significant improvement over the C-AFTM model (86.0%) and Mao's model (81.0%). Additionally, the proposed model achieves the highest F1-score of 94.7%, demonstrating its balanced performance in terms of precision and recall. These results validate the robustness and effectiveness of the proposed defogging approach in achieving superior clarity, structural preservation, and natural restoration in foggy road conditions, making it a reliable solution for real-world applications.

The generalization ability of the proposed defogging model is a critical aspect to ensure its applicability across diverse scenarios. While the primary focus of this study has been on evaluating the model’s performance using a specific dataset, future work will aim to validate its robustness on unseen data. This includes testing the model on images captured under varying weather conditions, such as mist, heavy rain, or smog, to assess its adaptability to different atmospheric distortions. Additionally, the model’s effectiveness on images with varying resolutions will be explored to ensure scalability for both high-resolution inputs from advanced cameras and lower-resolution inputs from older or resource-constrained devices. By addressing these aspects, the proposed model can demonstrate its capability to generalize effectively across diverse environmental and technical conditions, further solidifying its potential for real-world applications in autonomous driving, traffic monitoring, and smart transportation systems.

The scalability and real-time performance of the proposed defogging model are vital for its practical application, particularly when processing large datasets or operating in real-time environments. The proposed model is inherently designed to be efficient, leveraging the complementary strengths of Pythagorean aggregation, GMM, and the level-set method, which collectively optimize computational efficiency. To ensure scalability, the model can process large volumes of road images by incorporating batch-processing capabilities, enabling it to handle datasets comprising thousands of images without significant degradation in performance. This scalability ensures that the model is adaptable for large-scale applications, such as city-wide traffic monitoring systems.

In terms of real-time performance, the proposed model demonstrates a processing time of 0.07 seconds per image under the experimental configuration, which includes a high-performance CPU with 8 GB of RAM. This response time is well-suited for real-time applications, such as autonomous driving and live traffic surveillance. Further evaluations will focus on the model’s behavior under varying system loads and hardware configurations to validate its robustness in diverse environments. Optimization techniques, such as parallel processing and algorithmic streamlining, can further enhance the model’s speed and efficiency. By addressing both scalability and real-time processing requirements, the proposed model proves to be a reliable solution for practical deployment, capable of maintaining high performance and accuracy across varying datasets and system conditions.

6. Conclusion

This study proposed a novel defogging model that integrates Pythagorean aggregation, GMM, and the level-set method to address challenges in image restoration under foggy conditions. The model demonstrated superior performance compared to competing methods, achieving PSNR values of up to 37.1 dB and SSIM values of up to 0.96. Statistical tests confirmed the significant improvements (p < 0.001) offered by the proposed model. Additionally, the execution time as low as 0.07 seconds highlights its computational efficiency, making it well-suited for real-time applications. The results validated the model's capability to effectively remove haze, restore structural details, and enhance image clarity in diverse scenarios.

Despite its strong performance, the proposed model has certain limitations. It faces challenges in scenarios with extremely dense fog or non-homogeneous haze, where some finer structural details may not be fully restored. Furthermore, the reliance on pre-defined parameters limits its adaptability across varying atmospheric conditions, which could reduce effectiveness in unseen environments. Future work will focus on further optimizing the model to improve its adaptability to diverse atmospheric conditions by incorporating adaptive parameter selection methods and learning-based frameworks. Exploring the integration of deep learning techniques could enhance the model’s ability to generalize across a broader range of environments. Additionally, the application of the model to related fields, such as satellite imaging, underwater vision, and aerial surveillance, will be investigated to extend its utility to other challenging imaging scenarios. Optimizing the computational pipeline for deployment on edge devices and incorporating multi-modal data fusion approaches could also broaden the model’s applicability, paving the way for more robust and versatile defogging solutions in real-world scenarios.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The author declares that there is no conflict of interest.

References
1.
T. Sharma and N. Verma, “Image dehazing using type-2 fuzzy approach,” in Artificial Intelligent Algorithms for Image Dehazing and Non-Uniform Illumination Enhancement, 2024, pp. 79–110. [Google Scholar] [Crossref]
2.
D. Singh and V. Kumar, “A comprehensive review of computational dehazing techniques,” Arch. Computat. Methods Eng., vol. 26, no. 5, pp. 1395–1413, 2019. [Google Scholar] [Crossref]
3.
I. Hussain, “An adaptive multi-stage fuzzy logic framework for accurate detection and structural analysis of road cracks,” Mechatron. Intell. Transp. Syst., vol. 3, no. 3, pp. 190–202, 2024. [Google Scholar] [Crossref]
4.
Z. Hao, Y. Yu, and J. Sun, “Optimization and implementation of image defogging algorithm based on field programmable gate array,” in 2023 3rd International Conference on Computer Science and Blockchain (CCSB), Shenzhen, China, 2023, pp. 88–94. [Google Scholar] [Crossref]
5.
I. Hussain and L. Alam, “Adaptive road crack detection and segmentation using einstein operators and ANFIS for real-time applications,” J. Intell. Syst. Control, vol. 3, no. 4, pp. 213–226, 2024. [Google Scholar] [Crossref]
6.
Y. T. Kuo, W. T. Chen, P. Y. Chen, and C. H. Li, “VLSI implementation for an adaptive haze removal method,” IEEE Access, vol. 7, pp. 173977–173988, 2019. [Google Scholar] [Crossref]
7.
K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 12, pp. 2341–2353, 2010. [Google Scholar] [Crossref]
8.
R. Tan, “Visibility in bad weather from a single image,” in 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 2008, pp. 1–8. [Google Scholar] [Crossref]
9.
C. Zhang and C. Wu, “Multi-scale attentive feature fusion network for single image dehazing,” in 2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy, 2022, pp. 1–7. doi: 10.1109/IJCNN55064.2022.9892050. [Google Scholar]
10.
R. Fattal, “Single image dehazing,” ACM Trans. Graph., vol. 27, no. 3, pp. 1–9, 2008. [Google Scholar] [Crossref]
11.
S. G. Narasimhan and S. K. Nayar, “Vision and the atmosphere,” Int. J. Comput. Vis., vol. 48, pp. 233–254, 2002. [Google Scholar] [Crossref]
12.
W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 2016, pp. 154–169. [Google Scholar] [Crossref]
13.
D. Ngo, S. Lee, U. J. Kang, T. M. Ngo, G. D. Lee, and B. Kang, “Adapting a dehazing system to haze conditions by piece-wisely linearizing a depth estimator,” Sensors, vol. 22, no. 5, p. 1957, 2022. [Google Scholar] [Crossref]
14.
B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng, “AOD-net: All-in-one dehazing network,” in 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017, pp. 4780–4788. doi: 10.1109/ICCV.2017.511. [Google Scholar]
15.
S. Koley, A. Sadhu, H. Roy, and S. Dhar, “Single image visibility restoration using dark channel prior and fuzzy logic,” in 2018 2nd International Conference on Electronics, Materials Engineering and Nano-Technology (IEMENTech), Kolkata, India, 2018, pp. 1–7. [Google Scholar] [Crossref]
16.
B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “DehazeNet: An end-to-end system for single image haze removal,” IEEE Trans. Image Process., vol. 25, no. 11, pp. 5187–5198, 2016. [Google Scholar] [Crossref]
17.
S. Lai and X. Ren, “Image dehazing and enhancement based on fuzzy image modeling,” Int. J. Front. Med., vol. 4, no. 4, pp. 34–41, 2022. [Google Scholar] [Crossref]
18.
P. Garg, A. Jha, and S. Jindal, “Enhancing visibility: Multiresolution dark channel prior for dehazing and fog removal in images,” in 2023 7th International Conference on Computation System and Information Technology for Sustainable Solutions (CSITSS), Bangalore, India, 2023, pp. 1–6. [Google Scholar] [Crossref]
19.
W. Mao, D. Zheng, M. Chen, and J. Chen, “Single image defogging via multi-exposure image fusion and detail enhancement,” J. Saf. Sci. Resil., vol. 5, no. 1, pp. 37–46, 2024. [Google Scholar] [Crossref]
20.
M. Khan, “A region-based fuzzy logic approach for enhancing road image visibility in foggy conditions,” Mechatron. Intell. Transp. Syst., vol. 3, no. 4, pp. 212–222, 2024. [Google Scholar] [Crossref]

DOI: https://doi.org/10.56578/mits030405
©2024 by the author(s). Published by Acadlore Publishing Services Limited, Hong Kong. This article is available for free download and can be reused and cited, provided that the original published version is credited, under the CC BY 4.0 license.