An Efficient Road Image Dehazing Model Based on Entropy-Weighted Gaussian Mixture Model and Level Set Refinement for Autonomous Driving Applications

Muhammad Zeeshan Naeem*
Department of Mathematics, Qurtuba University of Science and Information Technology, 25000 Peshawar, Pakistan
Mechatronics and Intelligent Transportation Systems | Volume 4, Issue 1, 2025 | Pages 16-27
Received: 01-01-2025, Revised: 02-06-2025, Accepted: 02-13-2025, Available online: 02-19-2025

Abstract:

Foggy road conditions present significant challenges for road monitoring systems and autonomous driving, as conventional defogging techniques often fail to accurately recover fine details of road structures, particularly under dense fog conditions, and may introduce undesirable artifacts. Furthermore, these methods typically lack the ability to dynamically adjust transmission maps, leading to imprecise differentiation between foggy and clear areas. To address these limitations, a novel approach to image dehazing is proposed, which combines an entropy-weighted Gaussian Mixture Model (EW-GMM) with Pythagorean fuzzy aggregation (PFA) and a level set refinement technique. The method enhances the performance of existing models by adaptively adjusting the influence of each Gaussian component based on entropy, with greater emphasis placed on regions exhibiting higher uncertainty, thereby enabling more accurate restoration of foggy images. The EW-GMM is further refined using PFA, which integrates fuzzy membership functions with entropy-based weights to improve the distinction between foggy and clear regions. A level set method is subsequently applied to smooth the transmission map, reducing noise and preserving critical image details. This process is guided by an energy functional that accounts for spatial smoothness, entropy-weighted components, and observed pixel intensities, ensuring a more robust and accurate dehazing effect. Experimental results demonstrate that the proposed model outperforms conventional methods in terms of feature similarity, image quality, and cross-correlation, while significantly reducing execution time. The results highlight the efficiency and robustness of the proposed approach, making it a promising solution for real-time image processing applications, particularly in the context of road monitoring and autonomous driving systems.

Keywords: Image defogging, Fuzzy aggregation, Entropy, Level-set method, Real-time processing, Road monitoring, Statistical analysis

1. Introduction

Image defogging, or dehazing, plays a crucial role in enhancing the visibility and quality of road images compromised by environmental conditions such as fog, haze, and smog. These atmospheric phenomena scatter light, leading to a significant reduction in contrast, color fidelity, and overall image clarity [1], [2], [3], [4]. Such degradation presents substantial challenges in critical applications, including road monitoring, autonomous driving, and traffic surveillance, where high-quality visual data is indispensable for ensuring safety and informed decision-making. The restoration of clear, unobstructed road images is essential not only for improving driver visibility but also for optimizing transportation systems and safeguarding public well-being.

A key concept in numerous defogging methods is the atmospheric scattering model, which characterizes the interaction of light with atmospheric particles, leading to image haziness [5], [6]. Variables such as the distance between the camera and the object, as well as the concentration of atmospheric particles, significantly influence the degree of image degradation. Accurately understanding and modeling these factors is pivotal for recovering the scene's true radiance, which reflects the original, unobstructed image. This is especially critical in applications such as road navigation and control systems, where high-quality imagery is essential for obstacle detection and precise environmental analysis. In addition to road-based applications, the restoration of clear imagery has broader ramifications in fields such as remote sensing, aerial imaging, and environmental monitoring.

Several approaches have been proposed to tackle the problem of road image defogging, with methods categorized into enhancement-based, restoration-based, and deep learning-based techniques [7], [8], [9], [10], [11], [12], [13], [14]. Enhancement-based techniques aim to improve image quality visually without explicitly modeling the degradation process. Methods such as histogram equalization and adaptive contrast adjustment can enhance visual appeal but often fail to recover the true scene radiance, resulting in unnatural or distorted outputs. Retinex-based approaches, inspired by the human visual system, attempt to separate lighting from reflectance to improve visibility. However, these methods can struggle with issues like color distortion and noise enhancement. For instance, Tan [8] proposed a method for improving visibility in adverse weather using a single image, but this approach has limitations in accurately restoring scene details. Moreover, such methods may not perform well in dynamic conditions where haze density fluctuates spatially and temporally.

Restoration-based methods aim to reconstruct the original scene radiance by modeling the physical haze formation process. The dark channel prior (DCP) is a well-known technique in this category, which assumes that at least one color channel in most non-sky regions will have low intensity. While DCP has demonstrated effective defogging performance, it often introduces halo artifacts, especially in sky areas, and requires additional post-processing to achieve optimal results. Other restoration methods, such as polarization-based approaches, use multiple polarized images for depth estimation, and fusion-based methods combine images captured under varying conditions [9], [15]. These techniques aim to overcome the shortcomings of enhancement-based methods but can introduce computational complexity and dependency on specific imaging conditions. For example, fusion methods require images taken at different exposures, which may not always be feasible in real-time applications.

Road defogging models have gained significant attention due to their crucial role in enhancing visibility and safety in adverse weather conditions. Shi and Song [16] proposed Defog YOLO, a deep-learning-based approach for road object detection in foggy weather. Their model integrates a dehazing module with the YOLO object detection framework, significantly improving the visibility of road scenes before detecting objects. The model effectively restores object details lost due to dense fog, achieving high accuracy in detecting vehicles and pedestrians. However, one of its limitations is the increased computational complexity, making it less suitable for real-time applications on resource-constrained devices. Additionally, while the model performs well in synthetic fog scenarios, its effectiveness in varying real-world conditions, such as nighttime fog, requires further evaluation.

Another notable study by Choi et al. [17] introduced a fog detection mechanism to improve the effectiveness of de-fogging algorithms for road driving images. Their method utilizes image-based analysis techniques to differentiate foggy scenes from clear ones, optimizing the subsequent de-fogging process. In their follow-up study in 2018, Jeong et al. [18] proposed a Fast Fog Detection method, which significantly reduces the processing time of fog recognition using enhanced feature extraction techniques. These models excel in detecting and differentiating varying fog densities, ensuring that the right level of dehazing is applied. However, their performance is limited in extremely dense fog conditions where image contrast is severely degraded, and their reliance on prior assumptions about fog distribution may limit adaptability in unpredictable weather scenarios.

Singh and Kumar [19] introduced a Gain Coefficient-Based Trilateral Filter for defogging road images, focusing on preserving edge details while enhancing image clarity. The filter adaptively adjusts image contrast and sharpness, resulting in improved scene perception. Similarly, Guo et al. [20] developed a Fast Defogging and Restoration Assessment Approach, which prioritizes real-time processing by balancing computational efficiency and image quality restoration. These approaches achieve notable success in improving image contrast and retaining texture details. However, their limitations include a dependency on specific parameter tuning, which may require adjustments for different fog densities and lighting conditions. Additionally, they may struggle with color distortion, affecting the natural appearance of road scenes after defogging.

Despite the advances in road image defogging techniques, many existing models still face significant limitations in accurately differentiating between clear and foggy regions. These limitations arise primarily due to the static nature of component weights, which do not adapt to the inherent uncertainty present in road images. As a result, traditional methods often fail to dynamically adjust to varying uncertainties in the image, leading to suboptimal performance, particularly in regions with high variability, noise, or complex lighting conditions. Moreover, many approaches struggle to refine the transmission map effectively and ensure smoothness across the image, making it challenging to obtain consistent and accurate results in real-world scenarios such as autonomous driving, intelligent transportation systems, and road monitoring, where precise visibility and edge definition are crucial.

To overcome these limitations, this paper introduces a novel road image defogging approach that integrates an EW-GMM with PFA and level set refinement. This dynamic weighting mechanism allows the model to adjust the influence of each component based on its uncertainty, focusing more on regions with lower uncertainty and less on those with higher uncertainty. The model achieves this through a probabilistic distribution framework that represents pixel intensities as a weighted sum of Gaussian distributions, where the weights are determined based on the entropy of each component. Additionally, the incorporation of PFA enhances the robustness of the model by refining the transmission map, giving greater importance to more certain regions while dynamically adjusting for uncertain areas. The framework is further optimized using a level set method, which ensures spatial smoothness and improves the accuracy of foggy region delineation. By addressing the challenges of uncertainty handling, spatial coherence, and dynamic weighting, the proposed model provides a more effective solution for road image defogging and segmentation tasks, particularly in complex, foggy driving environments.

The proposed model utilizes a probabilistic mixture approach to model pixel intensities in an image. The key components of the model are as follows:

Probabilistic Mixture Model for Pixel Intensities: A mixture of Gaussian distributions is used to represent different regions of the image, such as clear and foggy areas, ensuring accurate modeling of road scenes.

Entropy-Based Weighting for Gaussian Components: The weights of each Gaussian component are dynamically adjusted based on their entropy. Components with higher entropy (more uncertainty) receive smaller weights, while those with lower entropy (less uncertainty) receive larger weights, allowing for more precise defogging in critical areas such as road edges and lane markings.

Impact of High and Low Entropy: The model adjusts the influence of each Gaussian component by prioritizing low-entropy (high-certainty) areas, such as vehicle contours and road signs, while reducing the influence of high-entropy (low-certainty) regions affected by dense fog.

Fuzzy Aggregation with Pythagorean Fuzzy Logic: Fuzzy aggregation is employed to integrate multiple fuzzy sets derived from the image, considering the entropy-based weights of the Gaussian components. This ensures that more certain components contribute significantly to the refined transmission map, resulting in improved contrast and visibility for road scenes.

Level Set Refinement for Smoothing: The transmission map is further refined using a level set method, which smooths the map and reduces noise, particularly in uncertain regions. The level set method minimizes an energy functional that balances smoothness and intensity accuracy, ensuring a more natural and visually coherent defogged road image.

This approach effectively adjusts for uncertainty in pixel intensities and refines road images through fuzzy aggregation and level set techniques, leading to enhanced visibility and improved scene perception for autonomous driving and intelligent transportation applications.

The structure of the paper is as follows: Section 2 reviews previous defogging techniques, discussing their strengths and limitations. Section 3 introduces the proposed defogging model, elaborating on the integration of fuzzy logic with advanced image processing methods for enhanced fog removal. Section 4 presents the experimental results, including a comprehensive analysis of evaluation metrics and a comparison of the performance of the proposed defogging model with existing approaches. Finally, Section 5 concludes the paper, summarizing the key findings and suggesting future research directions for improving fog removal techniques.

2. Related Work

In recent years, significant advancements have been made in the field of image defogging and enhancement, particularly for improving visibility and quality under foggy conditions. A notable contribution in this area is the work by Sabitha and Eluri [21], who proposed an innovative image defogging and enhancement method based on the Retinex algorithm, designed to address the common image degradation caused by fog: low contrast, poor visibility, and blurred details. Their approach effectively mitigates the blurring of image details that often occurs in foggy weather, providing a clearer and more accurate representation of the scene. The Retinex-based method not only improves overall image quality but also enhances fine details, making it suitable for a wide range of applications, such as surveillance and autonomous driving. However, the method's performance declines under heavy fog, where it does not consistently achieve the desired clarity and detail restoration.

Image dehazing has been a significant challenge in computer vision, particularly in poor visibility conditions such as haze, fog, or mist. The Cross-Entropy Deep Learning Neural Network (CE-DLNN) and the Guided Multi-Model Adaptive Network (GMAN) have been explored for dehazing, with a study by Sabitha and Eluri [21] comparing their performance. The results show that GMAN outperforms CE-DLNN in terms of PSNR, SSIM, and MAE, producing clearer and more detailed images. Additionally, combining GMAN with CE-DLNN enhances the performance of both methods, suggesting the benefit of integrating multiple models for improved dehazing. The mathematical formulation of this model can be defined as follows:

$ I_{\mathrm{final}}(x, y)=\beta_1 \cdot\left(\frac{I_{\mathrm{hazy}}(x, y)-A}{\max \left(t(x, y), t_0\right)}+A\right)+\beta_2 \cdot f_{\mathrm{CE}}\left(I_{\mathrm{hazy}}(x, y) ; \theta\right) $

where, $I_{\text {hazy}}(x, y)$ is the input hazy image, $A$ is the atmospheric light, $t(x, y)$ is the transmission map estimated by GMAN, $t_0$ is a small constant to avoid division by zero, and $f_{\mathrm{CE}}\left(I_{\text {hazy}}(x, y) ; \theta\right)$ represents the dehazed output from CE-DLNN. The weighting factors $\beta_1$ and $\beta_2$ satisfy $\beta_1+\beta_2=1$, ensuring a balanced integration of the two models' outputs.
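For illustration, this blending rule reduces to a few lines of NumPy. The sketch below assumes the GMAN transmission map and the CE-DLNN prediction are supplied externally (neither network is reproduced here), and the default weights are placeholders rather than values reported in the cited study.

```python
import numpy as np

def blend_dehaze(I_hazy, t_map, A, f_ce_output, beta1=0.6, beta2=0.4, t0=0.1):
    """Combine physical restoration (GMAN transmission) with the CE-DLNN output.

    I_hazy:      hazy image as a float array in [0, 1]
    t_map:       transmission map t(x, y) from GMAN (broadcastable to I_hazy)
    A:           atmospheric light (scalar or per-channel)
    f_ce_output: dehazed image predicted by CE-DLNN
    """
    assert abs(beta1 + beta2 - 1.0) < 1e-9, "beta1 + beta2 must equal 1"
    t = np.maximum(t_map, t0)             # clamp to avoid division by zero
    J_physical = (I_hazy - A) / t + A     # scene radiance from the scattering model
    return beta1 * J_physical + beta2 * f_ce_output
```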

However, these approaches have limitations, including the need for substantial computational resources and large datasets for training. While GMAN performs well in dehazing, its ability to handle extreme weather conditions like heavy rain or snow has not been fully explored, and further research is needed to test the scalability and generalization of these models for real-time applications.

The image defogging method proposed by Li and Xu [22] demonstrates significant advancements in addressing the challenges of reduced visibility in haze-affected road traffic scenarios. By integrating the DCP algorithm with image region segmentation, the model effectively enhances image details, resulting in superior visual quality and improved target detection capabilities. This makes the method particularly valuable for intelligent transportation systems, as it contributes to safer driving conditions and more reliable traffic monitoring. The mathematical function representing the image dehazing process based on the improved DCP method can be expressed as:

$ J(x)=\frac{I(x)-A}{\max \left(1-\omega \cdot \min _{y \in \Omega(x)}\left(\min _{c \in\{r, g, b\}} I^c(y)\right), t_0\right)}+A $

where, $J(x)$ is the dehazed image (scene radiance) at pixel $x, I(x)$ is the observed hazy image, and $A$ represents the atmospheric light estimated from the brightest pixels in the dark channel. The term $\omega$ is a weighting parameter controlling the extent of haze removal, and $\min _{c \in\{r, g, b\}} I^c(y)$ represents the minimum intensity across RGB channels at pixel $y$. The operator $\min _{y \in \Omega(x)}$ calculates the minimum value within a local patch $\Omega(x)$ centered at pixel $x$, while $t_0$ is a small constant used to avoid division by zero. This equation integrates all steps, including dark channel computation, transmission estimation, and radiance recovery, into a single concise representation.
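A compact NumPy/SciPy sketch of this pipeline is given below, assuming an RGB input scaled to [0, 1]. The patch size and the brightest-0.1% rule for estimating $A$ follow common DCP practice and are assumptions, not details taken from the cited study.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dcp_dehaze(I, omega=0.95, patch=15, t0=0.1):
    """Dark-channel-prior dehazing following the equation above.

    I: hazy RGB image, float array of shape (H, W, 3) in [0, 1].
    """
    # Dark channel: per-pixel minimum over RGB, then a local minimum filter
    dark = minimum_filter(I.min(axis=2), size=patch)

    # Atmospheric light A: mean colour of the brightest 0.1% dark-channel pixels
    n = max(1, dark.size // 1000)
    rows, cols = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = I[rows, cols].mean(axis=0)

    # Transmission estimate with haze-retention factor omega, clamped by t0
    t = np.maximum(1.0 - omega * dark, t0)

    # Radiance recovery J(x) = (I(x) - A) / t(x) + A
    return (I - A) / t[..., None] + A
```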

However, the approach is not without limitations. The reliance on the DCP algorithm can lead to artifacts in bright regions or areas with minimal haze, which may compromise the overall image quality in specific scenarios. Additionally, the inclusion of region segmentation increases computational complexity, potentially limiting the method’s applicability in real-time traffic systems. Future work should focus on optimizing the algorithm to address these challenges while maintaining its effectiveness in diverse traffic scenarios.

3. The Proposed Mathematical Approach

The proposed mathematical framework models the pixel intensities of an image through a probabilistic mixture approach, where the pixel intensity $I(x)$ is described by a mixture of probability distributions. Each distribution in the mixture represents a specific region of the image, such as clear or foggy areas. The overall distribution of pixel intensities is represented as the sum of weighted individual distributions. These weights are determined dynamically using the entropy of each component, allowing the model to adjust the influence of each component based on its uncertainty. The probability distribution model is formulated as follows:

$ P(I(x))=\sum_{k=1}^K w_k \pi_k \mathcal{N}\left(I(x) \mid \mu_k, \sigma_k^2\right) $

where, $P(I(x))$ is the probability distribution of pixel intensity $I(x)$ in our proposed model. $w_k$ is the weight of the $k$-th Gaussian component; it reflects the importance of that component in modeling the pixel intensities and is dynamically adjusted based on the entropy of the component. $\pi_k$ is the prior probability, indicating how likely the $k$-th Gaussian component is to represent a given pixel intensity. $\mathcal{N}\left(I(x) \mid \mu_k, \sigma_k^2\right)$ is the Gaussian distribution with mean $\mu_k$ and variance $\sigma_k^2$.

In the proposed model, the pixel intensities are modeled as a weighted sum of Gaussian distributions. Each Gaussian component corresponds to different regions of intensity distributions in the image, such as clear or foggy regions.
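The mixture density can be evaluated directly from the stated formula. The sketch below is a minimal NumPy rendering, with component parameters passed as length-K arrays; the vectorised broadcasting over pixel intensities is an implementation choice, not part of the model.

```python
import numpy as np

def ewgmm_pdf(I, mu, var, prior, weight):
    """Evaluate P(I(x)) = sum_k w_k * pi_k * N(I(x) | mu_k, sigma_k^2).

    I:      array of pixel intensities (any shape)
    mu:     length-K array of component means mu_k
    var:    length-K array of component variances sigma_k^2
    prior:  length-K array of prior probabilities pi_k
    weight: length-K array of entropy-based weights w_k
    """
    I = np.asarray(I, dtype=float)[..., None]  # add a component axis for broadcasting
    gauss = np.exp(-0.5 * (I - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)
    return (weight * prior * gauss).sum(axis=-1)
```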

3.1 Weights Based on Entropy in EW-GMM

In our proposed approach, we introduce the concept of entropy to determine the weight $w_k$ for each Gaussian component. This weight is defined based on the entropy $H_k$ of each Gaussian component, which quantifies the uncertainty or spread of the component. We propose to dynamically adjust the weight of each component using the entropy term as follows:

$ w_k=1-H_k $

where, $H_k$ is the entropy of the $k$-th Gaussian component and defined as follows:

$ H_k=\frac{1}{2}\left(1+\log \left(2 \pi \sigma_k^2\right)\right) $

where, $\sigma_k^2$ represents the variance of the $k$-th Gaussian component. The underlying idea behind this formulation is that Gaussian components with higher uncertainty (i.e., higher entropy) are assigned smaller weights, while components with lower uncertainty (i.e., lower entropy) are given larger weights. This adjustment enables the model to place greater emphasis on more certain regions and less on uncertain ones.
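As a worked sketch of this weighting, note that the differential entropy $H_k$ can exceed 1 for large variances (e.g., $\sigma_k = 25$ gives $H_k \approx 4.6$), so a practical implementation must decide how to keep $w_k = 1 - H_k$ usable. The clipping and renormalisation below are assumptions, not steps stated in the text.

```python
import numpy as np

def entropy_weights(var):
    """Entropy-based weights w_k = 1 - H_k for Gaussian components.

    var: length-K array of component variances sigma_k^2.
    H_k = 0.5 * (1 + log(2 * pi * sigma_k^2)) is the differential entropy
    of a Gaussian. Because H_k may exceed 1, raw weights are clipped at
    zero and renormalised here (an implementation assumption).
    """
    H = 0.5 * (1.0 + np.log(2.0 * np.pi * np.asarray(var, dtype=float)))
    w = np.clip(1.0 - H, 0.0, None)   # discard negative weights
    s = w.sum()
    return w / s if s > 0 else np.full(len(w), 1.0 / len(w))
```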

3.2 Impact of High and Low Entropy in Our Proposed Model

In our proposed model, the entropy term plays a crucial role in determining how much influence each Gaussian component will have on the final result. Specifically:

High Entropy ($H_k$): A Gaussian component with high entropy means it is more spread out, and thus represents a region with more uncertainty about the pixel intensities. In our model, such components are given smaller weights (since $w_k=1-H_k$), meaning they will contribute less to the final output.

Low Entropy ($H_k$): A Gaussian component with low entropy indicates a more concentrated distribution, meaning there is less uncertainty about the pixel intensities in that region. In our model, these components are given larger weights, meaning they will have a greater influence on the final result.

To further refine the transmission map and ensure more accurate fog removal, we propose using PFA. This approach combines fuzzy logic with the EW-GMM by aggregating multiple fuzzy sets derived from the image data. The fuzzy aggregation incorporates the weights based on the entropy values of the Gaussian components, and it is formulated as:

$ t^{\prime}(x)=\frac{\sum_{k=1}^K w_k \mu_k(x)}{\sum_{k=1}^K w_k} $

where, $t^{\prime}(x)$ is the refined transmission map after fuzzy aggregation. $\mu_k(x)$ is the membership function for the $k$-th component, representing the degree of belonging of pixel $x$ to the $k$-th component.

This fuzzy aggregation helps to enhance the segmentation by giving higher importance to more certain Gaussian components and dynamically adjusting the weight for uncertain components.
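The aggregation itself is a weighted average and is straightforward to vectorise. The sketch below assumes the per-component membership maps $\mu_k(x)$ have already been computed (for instance, as posterior responsibilities of the Gaussian components, though the paper does not prescribe a specific form).

```python
import numpy as np

def aggregate_transmission(memberships, weights):
    """Entropy-weighted fuzzy aggregation t'(x) = sum_k w_k mu_k(x) / sum_k w_k.

    memberships: array of shape (K, H, W) holding mu_k(x) for each component
    weights:     length-K array of entropy-based weights w_k
    """
    w = np.asarray(weights, dtype=float)[:, None, None]  # broadcast over pixels
    return (w * memberships).sum(axis=0) / w.sum()
```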

3.3 Level Set Refinement

After the fuzzy aggregation, we further refine the transmission map using a level set method. This step helps to smooth the transmission map and reduce noise, especially in uncertain regions. The minimized energy functional for the level set refinement is defined as:

$ \min _{t^{\prime}(x)} \mathcal{E}\left(t^{\prime}(x)\right)=\int\left(\left(\nabla t^{\prime}(x)\right)^2+\lambda \sum_{k=1}^K w_k\left(\mu_k(x)-t^{\prime}(x)\right)^2+\lambda^{\prime}\left(I(x)-t^{\prime}(x)\right)^2\right) d x $

where, $\nabla t^{\prime}(x)$ represents the gradient of the transmission map, capturing spatial smoothness. The terms $\mu_k(x)$ are the membership values of the Gaussian components at each pixel location, as defined in the fuzzy aggregation step. $\lambda$ and $\lambda^{\prime}$ are regularization parameters controlling the influence of the entropy-weighted term and the intensity error term, and $I(x)$ is the observed image intensity.

The level set refinement enhances the accuracy of the transmission map by considering both the entropy-weighted Gaussian components and the fuzzy membership functions. It helps to smooth the map and better delineate the boundaries between different regions in the image.
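One simple way to minimise this energy functional is gradient descent on its Euler-Lagrange equation, updating $t'$ with a discrete Laplacian (the smoothness term) plus the two weighted data terms. The step size and iteration count below are assumptions; a production implementation would likely use a converged variational solver instead.

```python
import numpy as np

def refine_transmission(t, memberships, weights, I, lam=0.5, lam_p=0.5,
                        step=0.1, iters=200):
    """Gradient descent on E(t') = ∫ |∇t'|² + λ Σ w_k(μ_k - t')² + λ'(I - t')² dx.

    t:           initial transmission map (H, W), e.g. the fuzzy-aggregated t'(x)
    memberships: (K, H, W) membership maps μ_k(x)
    weights:     length-K entropy-based weights w_k
    I:           observed image intensity (H, W), scaled to [0, 1]
    """
    t = t.astype(float).copy()
    w = np.asarray(weights, dtype=float)[:, None, None]
    for _ in range(iters):
        p = np.pad(t, 1, mode="edge")  # replicate borders for the Laplacian
        lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * t
        data = (w * (memberships - t)).sum(axis=0)   # entropy-weighted data term
        t += step * (lap + lam * data + lam_p * (I - t))
    return np.clip(t, 0.0, 1.0)
```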

The PFA model offers significant advantages for defogging by effectively handling uncertainty and improving visibility in degraded road images. Unlike traditional fuzzy models, PFA incorporates both membership $\mu$ and non-membership $v$ degrees, allowing for a more precise representation of foggy and clear regions. Its entropy-based weighting mechanism dynamically adjusts the influence of each component, prioritizing regions with lower uncertainty and reducing the impact of noise and intensity variations. This leads to more accurate transmission map estimation, ensuring a smoother and more adaptive defogging process. Additionally, PFA provides better resilience against noise and blur, making it particularly effective in challenging environments with varying fog densities. By leveraging these strengths, the proposed approach achieves enhanced visibility, improved edge definition, and more consistent dehazing results for road monitoring and autonomous driving applications.

4. Experimental Setup and Parameter Configuration

To assess the effectiveness of the proposed EW-GMM for road defogging, which incorporates fuzzy aggregation and level set refinement, a series of experiments were conducted using fog-affected road images. These images were obtained from publicly available datasets, including the RESIDE dataset and the Foggy Driving dataset, selected for their diverse representation of fog densities, road types, and environmental conditions. The datasets included more than 300 images, ensuring a comprehensive evaluation of the proposed model. Before applying the model, the images underwent a preprocessing stage to standardize input conditions and enhance image clarity. First, all images were resized to a standardized resolution of 256×256 pixels to maintain uniform dimensions and ensure consistent feature extraction across different image sizes. Next, adaptive histogram equalization was applied to normalize intensity distributions, enhancing contrast between foggy and clear regions. To further refine the images, a Gaussian smoothing filter was used to suppress unwanted noise, preventing random intensity fluctuations from affecting the defogging process. Additionally, a Laplacian filter was applied to enhance edge details, ensuring better distinction between road structures and foggy backgrounds. These preprocessing steps significantly improved the robustness of the model against varying levels of fog intensity. The experiments were executed on a system equipped with MATLAB 2019, a high-performance CPU, 8 GB of RAM, and Windows 10 (64-bit), ensuring efficient processing for evaluating the model under diverse foggy conditions.

The parameter settings for the EW-GMM-based model were determined based on extensive experimentation and fine-tuning. Specifically, the GMM components were adjusted to match the intensity distribution characteristics of foggy road images. In the model, the mean intensity $\mu_k$ for the Gaussian components was set to 120, and the standard deviation $\sigma_k$ was set to 25 for foggy regions, based on the statistical analysis of pixel intensity distributions from the training set. The weights for the PFA operator, $w_1$ and $w_2$, were set to 0.7 and 0.3, respectively, based on the relative importance of clear and foggy regions in the image. For the level set refinement, the threshold parameter $\alpha$ was empirically set to 0.5. These values were chosen since foggy areas typically exhibit intensity clusters in this range, allowing the model to effectively identify fog-affected regions and remove fog from the image.

Figure 1 illustrates a foggy image with added noise, alongside the corresponding defogging result produced by the proposed model and the ground truth image. This highlights the capability of the proposed method to restore the image clarity and reduce the noise effectively while closely matching the ground truth. Figure 2 provides a comparative analysis of defogging performance using input foggy images. The first column displays the original foggy images, while the second and third columns showcase the defogging results from Sabitha and Eluri [21] and Li and Xu [22], respectively. The fourth column features the outcomes of the proposed model, demonstrating its ability to deliver improved clarity and detail preservation compared to the other methods. These visual results emphasize the proposed model’s superior performance in handling foggy conditions and preserving image quality.

Figure 1. The foggy image with noise, result of the proposed model and the ground truth
Figure 2. Comparison of defogging techniques

In Figure 2, the first column contains the input foggy images, while the second column illustrates the outcomes achieved by Sabitha and Eluri [21]. The third column showcases the results obtained by Li and Xu [22], and the final column features the defogged images produced by the proposed model, which exhibit enhanced clarity and superior detail preservation.

Figure 3 provides a detailed comparative analysis of the performance of the competing models and the proposed model. The first column shows the input foggy images with a noise level of 0.1, serving as the baseline for evaluation. The second column illustrates the defogging results obtained using the method of Sabitha and Eluri [21], highlighting its effectiveness in partially improving visibility. The third column displays the outcomes produced by the model of Li and Xu [22], which shows further enhancement in clarity compared to the former. The final column presents the results achieved by the proposed model, which demonstrate significantly improved clarity and superior detail preservation. This comparison underscores the proposed model’s ability to handle challenging conditions with fog and noise while maintaining high image quality and visual fidelity.

Figure 3. Comparative performance evaluation of defogging models

In Figure 3, the first column displays the input foggy images with a noise level of 0.1. The second column presents the defogging results obtained by Sabitha and Eluri [21], followed by the third column, which shows the outcomes achieved by Li and Xu [22]. Finally, the last column highlights the defogging results produced by the proposed model.

Figure 4 demonstrates the performance of various models in handling blurred images. The first column presents the input blurred image, serving as the baseline for comparison. The second column depicts the defogging results achieved by Sabitha and Eluri [21], which show moderate improvement in visibility but limited detail recovery. The third column illustrates the outcomes from the model of Li and Xu [22], offering enhanced sharpness and clarity compared to the previous method. Finally, the fourth column highlights the results of the proposed model, showcasing its exceptional ability to handle blurred images effectively. The proposed model not only restores clarity but also preserves intricate details, further emphasizing its robustness and adaptability in processing both foggy and blurred images. This demonstrates its superior performance across a range of challenging visual conditions.

Figure 4. Comparison of blurred image restoration results using different models

In Figure 4, the provided blurred image is shown in the first column, followed by the results of Sabitha and Eluri [21] in the second column, Li and Xu [22] in the third column, and the proposed model in the fourth column, respectively.

4.1 Statistical Analysis for Proposed Model

The mathematical calculations in the context of defogging (or image dehazing) involve evaluating the quality of defogged images using various metrics, such as Mean Squared Error (MSE), Feature Similarity Index (FSIM), Universal Image Quality Index (UIQI), and statistical tests. These calculations help quantify the performance of the proposed defogging model against other competing models.

4.1.1 MSE

MSE is used to assess the difference between the original (fog-free) image and the defogged image. A lower MSE indicates better quality. The MSE is calculated as:

$ \mathrm{MSE}=\frac{1}{N} \sum_{i=1}^N\left(I_{\text {orig}}(i)-I_{\text {defog }}(i)\right)^2 $

where, $I_{\text {orig }}$ is the original (fog-free) image, $I_{\text {defog}}$ is the defogged image, $N$ is the total number of pixels in the image.
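As a minimal sketch, the metric reduces to a single NumPy expression:

```python
import numpy as np

def mse(orig, defog):
    """Mean squared error between the fog-free and defogged images."""
    orig, defog = np.asarray(orig, float), np.asarray(defog, float)
    return float(np.mean((orig - defog) ** 2))
```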

4.1.2 FSIM

FSIM evaluates the similarity between the original and defogged images based on low-level features such as edges and gradients. The FSIM is given by:

$ \operatorname{FSIM}\left(I_{\text {orig}}, I_{\text {defog}}\right)=\frac{\sum_{i=1}^N \operatorname{similarity}\left(I_{\text {orig}}(i), I_{\text {defog}}(i)\right)}{N} $

where, similarity $(\cdot)$ is a feature similarity measure based on edge and gradient information, $N$ is the total number of pixels in the image.
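Since the paper uses a simplified, pixel-averaged form of FSIM, the sketch below approximates the per-pixel similarity with a gradient-magnitude similarity term; the standard FSIM additionally uses phase congruency and weighted pooling, which are omitted here as an assumption.

```python
import numpy as np

def fsim_simplified(orig, defog, c=1e-3):
    """Pixel-averaged gradient-magnitude similarity as a simplified FSIM."""
    def grad_mag(img):
        gy, gx = np.gradient(np.asarray(img, dtype=float))
        return np.hypot(gx, gy)
    g1, g2 = grad_mag(orig), grad_mag(defog)
    sim = (2.0 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)  # per-pixel similarity
    return float(sim.mean())
```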

4.1.3 UIQI

UIQI is another important metric that evaluates the perceptual quality of defogged images by considering luminance, contrast, and structure, similar to SSIM, but with a more robust approach to image distortion. The UIQI is given by:

$ \mathrm{UIQI}\left(I_{\text {orig}}, I_{\text {defog}}\right)=\frac{\left(2 \mu_{\text {orig}} \mu_{\text {defog}}+C_1\right)\left(2 \sigma_{\text {orig,defog}}+C_2\right)}{\left(\mu_{\text {orig}}^2+\mu_{\text {defog}}^2+C_1\right)\left(\sigma_{\text {orig}}^2+\sigma_{\text {defog}}^2+C_2\right)} $

where, $\mu_{\text {orig}}$ and $\mu_{\text {defog}}$ are the mean intensities of the original and defogged images, $\sigma_{\text {orig}}^2$ and $\sigma_{\text {defog}}^2$ are the variances of the original and defogged images, $\sigma_{\text {orig,defog}}$ is the covariance between the original and defogged images, and $C_1$ and $C_2$ are constants used to stabilize the denominator.
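A direct, whole-image rendering of this formula is shown below; the canonical UIQI is computed over sliding windows and then averaged, so the global statistics and stabilising constants here are simplifying assumptions.

```python
import numpy as np

def uiqi(orig, defog, c1=1e-4, c2=9e-4):
    """Global UIQI-style index from means, variances, and covariance."""
    x = np.asarray(orig, dtype=float).ravel()
    y = np.asarray(defog, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```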

4.1.4 Paired t-test for defogging model comparison

The paired t-test is used to compare the performance of the proposed defogging model against competing models. The test statistic is calculated as:

$ t=\frac{\bar{d}}{s_d / \sqrt{n}} $

where, $\bar{d}$ is the mean of the differences between the paired samples (i.e., the difference in MSE, FSIM, or UIQI values between the proposed defogging model and another model), $s_d$ is the standard deviation of these differences, $n$ is the number of pairs (number of test images).

$ p=2 \times P(T \geq|t|) $

where, $P(T \geq|t|)$ represents the probability of obtaining a test statistic at least as extreme as the observed $t$-value under the null hypothesis. The p-value associated with the paired t-test is used to determine whether the observed difference in performance is statistically significant.
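In practice the test is a one-liner with SciPy; the per-image scores below are illustrative placeholders, not values from the study.

```python
from scipy import stats

# Illustrative per-image FSIM scores for the same test images (placeholders)
proposed  = [0.95, 0.94, 0.96, 0.95, 0.93, 0.96]
competing = [0.85, 0.86, 0.84, 0.87, 0.85, 0.86]

t_stat, p_value = stats.ttest_rel(proposed, competing)  # two-sided paired t-test
print(f"t = {t_stat:.3f}, p = {p_value:.4g}")
```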

4.1.5 One-way ANOVA for model comparison

One-way ANOVA is applied to compare the performance of multiple defogging models across various metrics. The F-statistic is calculated as:

$ F=\frac{\text {Between-group variability}}{\text {Within-group variability}} $

where, between-group variability measures how much the means of the different models differ from the overall mean, within-group variability measures the variability within each group of model outputs.

$ p=P\left(F \geq F_{\text {obs}}\right) $

where, $F_{\text {obs}}$ is the observed F-statistic, and the probability is computed from the F-distribution with the appropriate degrees of freedom. A large F-value and a small p-value indicate that at least one of the models performs significantly differently from the others.
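With per-image scores grouped by model, SciPy's `f_oneway` computes both quantities; again, the numbers below are placeholders.

```python
from scipy import stats

# Illustrative per-image FSIM scores grouped by model (placeholders)
ours    = [0.95, 0.94, 0.96, 0.95, 0.94]
sabitha = [0.85, 0.86, 0.84, 0.87, 0.85]
li      = [0.89, 0.90, 0.88, 0.89, 0.90]

f_stat, p_value = stats.f_oneway(ours, sabitha, li)  # one-way ANOVA
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
```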

4.1.6 Wilcoxon Signed-Rank (WSR) test for defogging efficiency comparison

The WSR Test is a non-parametric test used to compare the execution time of the proposed defogging model with competing models. It is calculated as:

$ W=\sum(\text {signed ranks of execution time differences}) $

where, the differences between execution times of the proposed and competing models are ranked, and the sign of each difference is retained. The p-value obtained from this test indicates whether there is a significant difference in execution time between the models. The p-value is defined as follows:

$ p=2 \times P\left(W \geq W_{\text {obs}}\right) $

where, $P\left(W \geq W_{\text {obs}}\right)$ is derived from the null distribution of the Wilcoxon signed-rank statistic.
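A sketch with SciPy's signed-rank implementation, again with placeholder timings rather than measured values:

```python
from scipy import stats

# Illustrative per-image execution times in seconds (placeholders)
proposed  = [2.1, 2.3, 2.0, 2.4, 2.2, 2.1, 2.3]
competing = [7.2, 7.6, 7.1, 7.5, 7.4, 7.3, 7.7]

w_stat, p_value = stats.wilcoxon(proposed, competing)  # signed-rank test on paired differences
print(f"W = {w_stat}, p = {p_value:.4g}")
```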

4.1.7 Execution time comparison (seconds)

The efficiency of the proposed defogging model can also be assessed by measuring its execution time on a test set of images. The average execution time is compared to that of other defogging models using statistical tests such as the paired t-test and the WSR Test to assess if the proposed model is significantly faster without compromising image quality.

4.1.8 Image fidelity comparison

Image fidelity can be measured using a set of perceptual metrics such as Normalized Cross-Correlation (NCC). NCC is used to assess how closely the defogged image matches the original image in terms of pixel-level correspondence. The NCC is calculated as:

$ N C C=\frac{\sum_{i=1}^N\left(I_{\text {orig}}(i)-\mu_{\text {orig}}\right)\left(I_{\text {defog}}(i)-\mu_{\text {defog}}\right)}{\sqrt{\sum_{i=1}^N\left(I_{\text {orig}}(i)-\mu_{\text {orig}}\right)^2 \sum_{i=1}^N\left(I_{\text {defog}}(i)-\mu_{\text {defog}}\right)^2}} $

where, $\mu_{\text {orig }}$ and $\mu_{\text {defog}}$ are the mean pixel values of the original and defogged images, respectively.

The closer the NCC value is to 1, the better the image fidelity.
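The computation mirrors a Pearson correlation over the flattened pixel arrays, as in this minimal sketch:

```python
import numpy as np

def ncc(orig, defog):
    """Normalized cross-correlation between two images (1 means a perfect match)."""
    x = np.asarray(orig, dtype=float).ravel()
    y = np.asarray(defog, dtype=float).ravel()
    x, y = x - x.mean(), y - y.mean()
    return float(np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y)))
```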

The performance comparison between the proposed defogging model and the competing models (Sabitha's model and Li's model) is presented in Table 1. The results demonstrate that the proposed model consistently outperforms the existing methods across various quality metrics, execution time, and statistical significance tests.

Table 1. Comparison of the proposed defogging model with competing Sabitha's model and Li's model

Metric                  | Our Model   | Sabitha's Model | Li's Model
------------------------|-------------|-----------------|------------
MSE                     | 0.023       | 0.045           | 0.039
FSIM                    | 0.95 ± 0.01 | 0.85 ± 0.02     | 0.89 ± 0.02
UIQI                    | 0.92 ± 0.02 | 0.88 ± 0.03     | 0.84 ± 0.04
NCC                     | 0.98 ± 0.01 | 0.93 ± 0.02     | 0.91 ± 0.03
Execution time (s)      | 2.2 ± 0.3   | 7.4 ± 0.5       | 6.5 ± 0.4
Paired t-test (p-value) | < 0.01
WSR (p-value)           | < 0.05
ANOVA (p-value)         | < 0.01

MSE: The proposed model achieved the lowest MSE value of 0.023, significantly better than Sabitha (0.045) and Li (0.039). This indicates that the defogged images generated by the proposed model are closer to the original (fog-free) images, showcasing its ability to minimize reconstruction errors.

FSIM: The FSIM of the proposed model (0.95±0.01) is significantly higher than that of Sabitha (0.85±0.02) and Li (0.89±0.02). This demonstrates the superior preservation of structural and perceptual features, such as edges and gradients, by the proposed model.

UIQI: The proposed model also exhibited a higher UIQI (0.92±0.02) compared to Sabitha (0.88±0.03) and Li (0.84±0.04). This highlights its robustness in maintaining luminance, contrast, and structure, ensuring visually pleasing results.

NCC: With an NCC of 0.98±0.01, the proposed model achieved closer pixel-level correspondence with the original images compared to Sabitha (0.93±0.02) and Li (0.91±0.03). This underscores the high fidelity of the defogged images produced by the proposed approach.

Execution time: The proposed model was also the fastest, with an average execution time of 2.2±0.3 seconds, outperforming Sabitha (7.4±0.5 seconds) and Li (6.5±0.4 seconds). This efficiency is particularly advantageous for real-time applications, where rapid processing is essential.

The statistical tests further validate the superiority of the proposed model. The paired t-test results (p-value < 0.01) confirm significant improvements in metrics like FSIM and UIQI over competing models. Similarly, the WSR test (p-value < 0.05) indicates that the proposed model's execution time is significantly shorter. Finally, the one-way ANOVA (p-value < 0.01) demonstrates that the observed differences among the models are statistically significant.

The combination of lower MSE, higher FSIM, UIQI, and NCC, and reduced execution time demonstrates the overall effectiveness of the proposed defogging model (see Figure 5). The statistical significance of the results further supports its robustness and reliability. These findings suggest that the proposed model is well-suited for applications requiring high-quality defogging with efficient processing, such as autonomous driving, remote sensing, and surveillance systems.

Figure 5. Comparative performance evaluation of defogging models

The proposed model outperforms Sabitha's model and Li's model with the lowest MSE, highest FSIM and UIQI values, and the shortest execution time, demonstrating superior image quality and computational efficiency.

5. Conclusion

In this work, a novel image defogging model has been proposed, integrating EW-GMM with PFA for road image defogging. The model dynamically adjusts the influence of each Gaussian component based on entropy, thereby improving the differentiation between foggy and clear regions. This dynamic weighting mechanism enhances the model's flexibility, making it more effective in addressing various fog densities and uncertainties inherent in road images. The incorporation of PFA further refines the transmission map, improving image contrast while preserving important structural details. Additionally, the level set refinement method smooths the transmission map, improving the distinction between foggy and clear regions, ensuring accurate fog removal and clearer road boundaries.

Despite its promising performance, the model is limited in extremely dense fog conditions or in the presence of non-homogeneous haze, where finer structural details may not be fully restored. Furthermore, the reliance on predefined parameters constrains the adaptability of the model in dynamic atmospheric conditions. Future work will focus on overcoming these limitations by incorporating adaptive parameter selection techniques and learning-based frameworks to enhance the model's adaptability. The potential integration of deep learning techniques will be explored to improve the generalization of the model across diverse environments. Moreover, efforts will be directed towards extending the applicability of the model for real-time defogging in autonomous vehicles, satellite imaging, and aerial surveillance, broadening its use in challenging imaging scenarios.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The author declares no conflict of interest.

References
1.
A. H. Alomari and E. Abu Lebdeh, “Smart real-time vehicle detection and tracking system using road surveillance cameras,” J. Transp. Eng. Part A Syst., vol. 148, no. 10, p. 04022076, 2022. [Google Scholar] [Crossref]
2.
M. Fan, J. Liu, and J. Yu, “Highway obstacle recognition based on improved YOLOv7 and defogging algorithm,” in IoT as a Service: 9th EAI International Conference, IoTaaS 2023, Nanjing, China, 2023, pp. 22–34. [Google Scholar] [Crossref]
3.
J. Zhang, “Traffic sign defogging based on conditional adversarial neural network pix2pixHD,” in 2023 3rd International Conference on Neural Networks, Information and Communication Engineering (NNICE), Guangzhou, China, 2023, pp. 61–65. [Google Scholar] [Crossref]
4.
Z. Hao, Y. Yu, and J. Sun, “Optimization and implementation of image defogging algorithm based on field programmable gate array,” in 2023 3rd International Conference on Computer Science and Blockchain (CCSB), Shenzhen, China, 2023, pp. 88–94. [Google Scholar] [Crossref]
5.
Y. Liu, J. Tian, W. Zheng, and L. Yin, “Spatial and temporal distribution characteristics of haze and pollution particles in China based on spatial statistics,” Urban Clim., vol. 41, p. 101031, 2022. [Google Scholar] [Crossref]
6.
Y. T. Kuo, W. T. Chen, P. Y. Chen, and C. H. Li, “VLSI implementation for an adaptive haze removal method,” IEEE Access, vol. 7, pp. 173977–173988, 2019. [Google Scholar] [Crossref]
7.
D. Engin, A. Genç, and H. Kemal Ekenel, “Cycle-dehaze: Enhanced CycleGAN for single image dehazing,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 2018, pp. 938–946. [Google Scholar] [Crossref]
8.
R. T. Tan, “Visibility in bad weather from a single image,” in 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 2008, pp. 1–8. [Google Scholar] [Crossref]
9.
H. Zhang, V. Sindagi, and V. M. Patel, “Multi-scale single image dehazing using perceptual pyramid deep network,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 2018, pp. 1015–1024. [Google Scholar] [Crossref]
10.
M. Sarkar, P. R. Sarkar, U. Mondal, and D. Nandi, “Empirical wavelet transform-based fog removal via dark channel prior,” IET Image Process., vol. 14, no. 6, pp. 1170–1179, 2020. [Google Scholar] [Crossref]
11.
X. Qin, Z. Wang, Y. Bai, X. Xie, and H. Jia, “FFA-Net: Feature fusion attention network for single image dehazing,” Proc. AAAI Conf. Artif. Intell., vol. 34, no. 7, pp. 11908–11915, 2020. [Google Scholar] [Crossref]
12.
F. Qu, “Image defogging algorithm based on physical prior and contrast learning,” in 5th International Conference on Artificial Intelligence and Advanced Manufacturing (AIAM 2023), Brussels, Belgium, 2023, pp. 208–216. [Google Scholar] [Crossref]
13.
D. Ngo, S. Lee, U. J. Kang, T. M. Ngo, G. D. Lee, and B. Kang, “Adapting a dehazing system to haze conditions by piece-wisely linearizing a depth estimator,” Sensors, vol. 22, no. 5, p. 1957, 2022. [Google Scholar] [Crossref]
14.
C. Yang and Y. Li, “Polarization image defogging based on detail recovery generative adversarial network,” in Third International Conference on Signal Image Processing and Communication (ICSIPC 2023), Kunming, China, 2023, pp. 86–91. [Google Scholar] [Crossref]
15.
S. Liu, L. Qi, and T. Cao, “Research on traffic sign image recognition algorithms under complex weather conditions,” in Quality, Reliability, Security and Robustness in Heterogeneous Systems: 19th EAI International Conference, QShine 2023, Shenzhen, China, 2023, pp. 109–119. [Google Scholar] [Crossref]
16.
X. Shi and A. Song, “Defog YOLO for road object detection in foggy weather,” Comput. J., vol. 67, no. 11, pp. 3115–3127, 2024. [Google Scholar] [Crossref]
17.
K. Y. Choi, K. M. Jeong, and B. C. Song, “Fog detection for de-fogging of road driving images,” in 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 2017, pp. 1–6. [Google Scholar] [Crossref]
18.
K. Jeong, K. Choi, D. Kim, and B. C. Song, “Fast fog detection for de-fogging of road driving images,” IEICE Trans. Inf. Syst., vol. 101, no. 2, pp. 473–480, 2018. [Google Scholar] [Crossref]
19.
D. Singh and V. Kumar, “Defogging of road images using gain coefficient-based trilateral filter,” J. Electron. Imaging, vol. 27, no. 1, p. 013004, 2018. [Google Scholar] [Crossref]
20.
F. Guo, H. Peng, and J. Tang, “Fast defogging and restoration assessment approach to road scene images,” J. Inf. Sci. Eng., vol. 32, no. 3, pp. 677–702, 2016. [Google Scholar] [Crossref]
21.
C. Sabitha and S. Eluri, “Improving dehazing results for different weather conditions using guided multi-model adaptive network (GMAN) and cross-entropy deep learning neural network (CE-DLNN),” i-Manag. J. Comput. Sci., vol. 11, no. 2, pp. 1–11, 2023. [Google Scholar] [Crossref]
22.
C. Li and Y. Xu, “Image defogging method for road traffic in haze days,” J. Phys. Conf. Ser., vol. 2035, no. 1, p. 012024, 2021. [Google Scholar] [Crossref]

©2025 by the author(s). Published by Acadlore Publishing Services Limited, Hong Kong. This article is available for free download and can be reused and cited, provided that the original published version is credited, under the CC BY 4.0 license.