The accurate segmentation of visual data into semantically meaningful regions remains a critical task across diverse domains, including medical diagnostics, satellite imagery interpretation, and automated inspection systems, where precise object delineation is essential for subsequent analysis and decision-making. Conventional segmentation techniques often suffer from limitations such as sensitivity to noise, intensity inhomogeneity, and weak boundary definition, resulting in reduced performance under complex imaging conditions. Although fuzzy set-based approaches have been proposed to improve adaptability under uncertainty, they frequently fail to maintain a balance between segmentation precision and robustness. To address these challenges, a novel segmentation framework was developed based on Pythagorean Fuzzy Sets (PyFSs) and local averaging, offering enhanced performance in uncertain and heterogeneous visual environments. By incorporating both membership and non-membership degrees, PyFSs allow a more flexible representation of uncertainty compared to classical fuzzy models. A local average intensity function was introduced, wherein the contribution of each pixel was adaptively weighted according to its PyFS membership degree, improving resistance to local intensity variations. An energy functional was formulated by integrating PyFS-driven intensity constraints, local statistical deviation measures, and regularization terms, ensuring precise boundary localization through level set evolution. Convexity of the energy formulation was analytically demonstrated to guarantee the stability of the optimization process. Experimental evaluations revealed that the proposed method consistently outperforms existing fuzzy and non-fuzzy segmentation algorithms, achieving superior accuracy in applications such as medical image analysis and natural scene segmentation. These results underscore the potential of PyFS-based models as a powerful and generalizable solution for uncertainty-resilient image segmentation in real-world applications.
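As a point of reference for the PyFS machinery, the sketch below illustrates the two core ingredients in Python: the Pythagorean constraint relating membership mu and non-membership nu (with the residual interpreted as hesitancy), and a local average intensity in which each pixel's contribution is weighted by its membership degree. The window size and function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def hesitancy(mu, nu):
    """PyFS hesitancy pi = sqrt(1 - mu^2 - nu^2), defined when mu^2 + nu^2 <= 1."""
    return np.sqrt(np.clip(1.0 - mu**2 - nu**2, 0.0, None))

def pyfs_local_average(image, mu, window=5):
    """Local average intensity with each pixel weighted by its PyFS
    membership degree, damping the influence of uncertain pixels."""
    weighted = uniform_filter(mu * image, size=window)  # windowed mean of mu * I
    norm = uniform_filter(mu, size=window)              # windowed mean of mu
    return weighted / (norm + 1e-8)                     # membership-weighted local average
```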
Accurate prediction of soil fertility and soil organic carbon (SOC) plays a critical role in precision agriculture and sustainable soil management. However, the high spatiotemporal variability inherent in soil properties, compounded by the prevalence of noisy data in real-world conditions, continues to pose significant modeling challenges. To address these issues, a robust hybrid deep learning model, termed RTCNet, was developed by integrating Recurrent Neural Networks (RNNs), Transformer architectures, and Convolutional Neural Networks (CNNs) into a unified predictive framework. Within RTCNet, a one-dimensional convolutional layer was employed for initial feature extraction, followed by MaxPooling for dimensionality reduction, while sequential dependencies were captured using RNN layers. A multi-head attention mechanism was embedded to enhance the representation of inter-variable relationships, thereby improving the model’s ability to handle complex soil data patterns. RTCNet was benchmarked against two conventional models: an Artificial Neural Network (ANN) optimized with a Genetic Algorithm (GA), and a Transformer-CNN hybrid model. Under noise-free conditions, RTCNet achieved the lowest Mean Squared Error (MSE) of 0.1032 and Mean Absolute Error (MAE) of 0.1852. Notably, under increasing noise levels, RTCNet consistently maintained stable performance, whereas the comparative models exhibited significant degradation. These findings underscore RTCNet’s superior resilience and adaptability, affirming its utility in field-scale agricultural applications where sensor noise, data sparsity, and environmental fluctuations are prevalent. The demonstrated robustness and predictive accuracy of RTCNet position it as a valuable tool for optimizing nutrient management strategies, enhancing SOC monitoring, and supporting informed decision-making in sustainable farming systems.
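The layer ordering described for RTCNet can be sketched in Keras as below; the layer widths, the SimpleRNN choice, and the single regression head are illustrative assumptions rather than the published configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_rtcnet_like(seq_len, n_features):
    """Sketch of a Conv1D -> MaxPooling -> RNN -> multi-head attention
    regressor mirroring the pipeline described in the abstract."""
    inp = layers.Input(shape=(seq_len, n_features))
    x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inp)  # initial feature extraction
    x = layers.MaxPooling1D(pool_size=2)(x)                       # dimensionality reduction
    x = layers.SimpleRNN(64, return_sequences=True)(x)            # sequential dependencies
    x = layers.MultiHeadAttention(num_heads=4, key_dim=16)(x, x)  # inter-variable relationships
    x = layers.GlobalAveragePooling1D()(x)
    out = layers.Dense(1)(x)                                      # SOC / fertility target
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])  # matches the reported MSE/MAE metrics
    return model
```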
Enhancing the sharpness of blurred images remains a critical and persistent problem in image restoration and processing, requiring precise techniques to recover lost details and improve visual clarity. This study proposes a novel model that combines the strengths of fuzzy systems with mathematical transformations to address the complexities of blurred image restoration. The model operates through a multi-stage framework, beginning with pixel coordinate transformations and corrections that account for geometric distortions caused by blurring. Fuzzy logic is employed to handle uncertainties in blur estimation, utilizing membership functions to categorize blur levels and a rule-based system to adapt corrective actions dynamically. The fusion of fuzzy logic and mathematical transformations ensures localized and adaptive corrections, effectively restoring sharpness in blurred regions while preserving regions with minimal distortion. Additionally, fuzzy edge enhancement is introduced to emphasize edges and suppress noise, further improving image quality. The final restoration stage applies normalization and structural constraints to ensure the output aligns with the original unblurred image. Experimental results demonstrate the framework's ability to restore clarity, preserve fine details, and minimize artifacts, making it a robust solution for diverse blurring scenarios. The proposed approach represents a significant advance in blurred image restoration, combining the adaptability of fuzzy logic with the precision of mathematical computations to achieve superior results.
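As an illustration of the rule-based stage, the sketch below grades a normalized blur estimate into low/medium/high levels with triangular membership functions and defuzzifies them into a corrective gain by weighted averaging; the breakpoints and gain values are hypothetical, not the paper's tuned rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9),
                              (c - x) / (c - b + 1e-9)), 0.0, 1.0)

def correction_gain(blur_metric):
    """Rule base: LOW blur -> mild correction, MEDIUM -> moderate,
    HIGH -> strong; defuzzified by a weighted average of the gains."""
    low = tri(blur_metric, -0.4, 0.0, 0.4)   # shoulder at 0
    med = tri(blur_metric, 0.2, 0.5, 0.8)
    high = tri(blur_metric, 0.6, 1.0, 1.4)   # shoulder at 1
    w = np.array([low, med, high])
    gains = np.array([0.2, 0.6, 1.0])        # hypothetical corrective gains
    return float((w * gains).sum() / (w.sum() + 1e-9))

print(correction_gain(0.5))   # -> 0.6 (moderate correction)
```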
Quantum-enhanced sensing has emerged as a transformative technology with the potential to surpass classical sensing modalities in precision and sensitivity. This study explores the advancements and applications of quantum-enhanced sensing, emphasizing its capacity to bridge fundamental physics and practical implementations. The current progress in experimental demonstrations of quantum-enhanced sensing systems was reviewed, focusing on breakthroughs in metrology and the development of physically realizable sensor architectures. Two practical implementations of quantum-enhanced sensors based on trapped ions were proposed. The first design utilizes Ramsey interferometry with spin-squeezed atomic ensembles, employing laser-induced spin-exchange interactions to reconstruct the sensing Hamiltonian. This approach enables measurement rates to scale with the number of sensing atoms, achieving sensitivity enhancements beyond the standard quantum limit (SQL). The second implementation introduces mean-field interactions mediated by coupled optical cavities that share coherent atomic probes, enabling the realization of high-performance sensing systems. Both sensor systems were demonstrated to be feasible on state-of-the-art ion-trap platforms, offering promising benchmarks for future applications in metrology and imaging. Particular attention was given to the integration of quantum-enhanced sensing with complementary imaging technologies, which continues to gain traction in medical imaging and other fields. The mutual reinforcement of quantum and complementary technologies is increasingly supported by significant investments from governmental, academic, and commercial entities. The ongoing pursuit of improved measurement resolution and imaging fidelity underscores the interdependence of these innovations, advancing the transition of quantum-enhanced sensing from fundamental research to widespread practical use.
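The sensitivity enhancement cited above is conventionally quantified against the standard quantum limit. As a brief, standard statement (not specific to the proposed designs): for N uncorrelated probe atoms, Ramsey interferometry yields a phase uncertainty of 1/sqrt(N); spin squeezing with Wineland parameter xi < 1 reduces this by the factor xi, and full entanglement would approach the Heisenberg limit of 1/N.

```latex
\Delta\phi_{\mathrm{SQL}} = \frac{1}{\sqrt{N}}
\quad\longrightarrow\quad
\Delta\phi_{\mathrm{squeezed}} = \frac{\xi}{\sqrt{N}},\ \xi < 1
\quad\longrightarrow\quad
\Delta\phi_{\mathrm{HL}} = \frac{1}{N}
```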
Accurate detection of road cracks is essential for maintaining infrastructure integrity, ensuring road safety, and preventing costly structural damage. However, challenges such as varying illumination conditions, noise, irregular crack patterns, and complex background textures often hinder reliable detection. To address these issues, a novel Fuzzy-Powered Multi-Scale Optimization (FMSO) model was proposed, integrating adaptive fuzzy operators, multi-scale level set evolution, Dynamic Graph Energy Minimization (GEM), and Hybrid Swarm Optimization (HSO). The FMSO model employs multi-resolution segmentation, entropy-based fuzzy weighting, and adaptive optimization strategies to enhance detection accuracy, while adaptive fuzzy operators mitigate the impact of illumination variations. Multi-scale level set evolution refines crack boundaries with high precision, and GEM effectively separates cracks from intricate backgrounds. Furthermore, HSO dynamically optimizes segmentation parameters, ensuring improved accuracy. The model was rigorously evaluated using multiple benchmark datasets, with performance metrics including accuracy, precision, recall, and F1-score. Experimental results demonstrate that the FMSO model surpasses existing methods, achieving superior accuracy, enhanced precision, and higher recall. Notably, the model effectively reduces false positives while maintaining sensitivity to fine crack details. The integration of fuzzy logic and multi-scale optimization techniques renders the FMSO model highly adaptable to varying road conditions and imaging environments, making it a robust solution for infrastructure maintenance. This approach not only advances the field of road crack detection but also provides a scalable framework for addressing similar challenges in other domains of image analysis and pattern recognition.
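One plausible reading of the entropy-based fuzzy weighting step is sketched below: the normalized histogram entropy of a local patch serves as a weight, so that textured (potentially cracked) regions dominate the multi-scale aggregation. The bin count, the [0, 1] intensity range, and the normalization are assumptions for illustration only.

```python
import numpy as np

def entropy_fuzzy_weight(patch, bins=32):
    """Normalized Shannon entropy of a patch's intensity histogram,
    returned in [0, 1]; assumes patch intensities normalized to [0, 1]."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / (hist.sum() + 1e-12)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / np.log2(bins))

# High-entropy (crack-like texture) vs. low-entropy (smooth asphalt) patches
rng = np.random.default_rng(0)
print(entropy_fuzzy_weight(rng.random((16, 16))))        # close to 1
print(entropy_fuzzy_weight(np.full((16, 16), 0.5)))      # 0
```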
This study presents a novel image restoration method, designed to enhance defective fuzzy images, by utilizing the Fuzzy Einstein Geometric Aggregation Operator (FEGAO). The method addresses the challenges posed by non-linearity, uncertainty, and complex degradation in defective images. Traditional image enhancement approaches often struggle with the imprecision inherent in defect detection. In contrast, FEGAO employs the Einstein t-norm and t-conorm for non-linear aggregation, which refines pixel coordinates and improves the accuracy of feature extraction. The proposed approach integrates several techniques, including pixel coordinate extraction, regional intensity refinement, multi-scale Gaussian correction, and a layered enhancement framework, thereby ensuring superior preservation of details and minimization of artifacts. Experimental evaluations demonstrate that FEGAO outperforms conventional methods in terms of image resolution, edge clarity, and noise robustness, while maintaining computational efficiency. Comparative analysis further underscores the method’s ability to preserve fine details and reduce uncertainty in defective images. This work offers significant advancements in image restoration by providing an adaptive, efficient solution for defect detection, machine vision, and multimedia applications, establishing a foundation for future research in fuzzy logic-based image processing under degraded conditions.
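The Einstein t-norm and t-conorm underlying FEGAO have standard closed forms; the sketch below evaluates them for two membership degrees in [0, 1].

```python
def einstein_tnorm(a, b):
    """Einstein product: T(a, b) = ab / (1 + (1 - a)(1 - b))."""
    return (a * b) / (1.0 + (1.0 - a) * (1.0 - b))

def einstein_tconorm(a, b):
    """Einstein sum: S(a, b) = (a + b) / (1 + ab)."""
    return (a + b) / (1.0 + a * b)

# Aggregating two membership degrees
print(einstein_tnorm(0.7, 0.8))    # ~0.528
print(einstein_tconorm(0.7, 0.8))  # ~0.962
```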
Blackhole attacks represent a significant threat to the security of communication networks, particularly in emerging architectures such as Mobile Ad Hoc Networks (MANETs). These attacks obscure malicious behavior and evade conventional detection methods owing to their loosely defined signatures and their capacity to bypass traditional filtering mechanisms. This study investigates the application of machine learning techniques, specifically Support Vector Machine (SVM), Convolutional Neural Network (CNN), and Decision Tree (DT), for the detection and mitigation of blackhole attacks in MANETs. SVM was chosen for its efficacy in handling high-dimensional data, CNN for its ability to learn complex, nonlinear hierarchical features, and DT for its interpretability. Simulations conducted in MATLAB 2023a examined network configurations with node densities of 50, 100, 250, and 500 nodes to assess the performance of these classifiers against conventional detection approaches. The results demonstrated that both SVM and CNN achieved near-perfect detection accuracy of 100% across all network configurations, outperforming traditional methods. The findings underscore the potential of these machine learning models to enhance the precision of blackhole attack detection, thereby improving network security. Future research is recommended to explore the scalability and training efficiency of these models, particularly through the integration of advanced techniques such as model fusion and deep learning architectures. This study contributes to the growing body of literature on radar wave radio (RWR)-based and machine learning-based attack detection and highlights the potential of artificial intelligence (AI) solutions to transform traditional emitter identification methods, offering significant improvements to network protection systems.
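A minimal sketch of the classification setup with scikit-learn stand-ins follows; the feature set (e.g., packet-drop ratio, forwarding delay, route-reply rate) and the synthetic labels are purely illustrative, not the study's simulation data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical per-node features: packet-drop ratio, forwarding delay,
# route-reply rate; label 1 = blackhole node, 0 = benign.
rng = np.random.default_rng(0)
X = rng.random((500, 3))
y = (X[:, 0] > 0.8).astype(int)   # stand-in labels for illustration

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for clf in (SVC(kernel="rbf"), DecisionTreeClassifier(max_depth=5)):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, accuracy_score(y_te, clf.predict(X_te)))
```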
Open-source intelligence in aerospace technology often contains lengthy text and numerous technical terms, which can affect classification accuracy. To enhance the precision of classifying such intelligence, a classification algorithm integrating the Bidirectional Encoder Representations from Transformers (BERT) and Extreme Gradient Boosting (XGBoost) models was proposed. Initially, key features within the intelligence were extracted through the deep structure of the BERT model. Subsequently, the XGBoost model was utilised to replace the final output layer of BERT, applying the extracted features for classification. To verify the algorithm's effectiveness, comparative experiments were conducted against prominent language models such as Text Recurrent Convolutional Neural Network (TextRCNN) and Deep Pyramid Convolutional Neural Network (DPCNN). Experimental results demonstrate that, for open-source intelligence classification in aerospace technology, this algorithm achieved accuracy improvements of 1.9% and 2.2% over the TextRCNN and DPCNN models, respectively, confirming the algorithm's efficacy in relevant classification tasks.
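The two-stage design can be sketched with Hugging Face Transformers and XGBoost: BERT's pooled [CLS] vectors serve as features for an XGBoost classifier in place of BERT's own output layer. The checkpoint name, hyperparameters, and the two-document corpus are placeholder assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModel
from xgboost import XGBClassifier

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint
bert = AutoModel.from_pretrained("bert-base-uncased").eval()

def bert_features(texts):
    """[CLS] embeddings from BERT's deep structure, used as XGBoost inputs."""
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc)
    return out.last_hidden_state[:, 0, :].numpy()   # one [CLS] vector per document

train_texts = ["orbital launch vehicle test", "satellite communication payload"]  # placeholder corpus
train_labels = [0, 1]                                                             # placeholder classes

clf = XGBClassifier(n_estimators=200, max_depth=6)  # replaces BERT's output layer
clf.fit(bert_features(train_texts), train_labels)
```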
In the field of jurisprudence, judgment element extraction has become a crucial aspect of legal judgment prediction research. The introduction of pre-trained language models has provided significant momentum for the advancement of Natural Language Processing (NLP) technologies, with the Bidirectional Encoder Representations from Transformers (BERT) model being particularly notable for its ability to enhance semantic understanding through unsupervised learning. A fusion model combining BERT with an attention mechanism-based Recurrent Convolutional Neural Network (RCNN) was utilized in this study for multi-label classification tasks, with the aim of further extracting contextual features from legal texts. The dataset was derived from the "China Legal Research Cup" judgment element extraction competition and covers three case types (divorce, labor, and lending disputes), each divided into 20 label categories. Four comparative experiments were conducted to investigate how placing the attention mechanism at different positions affects model performance. In addition, previous models were reviewed and their strengths analyzed; the results obtained from replicating and optimizing those models demonstrate promising performance in legal instrument classification.
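The attention component whose position the experiments vary can be sketched generically: additive attention scores each time step of the RCNN output and pools a context vector into 20-way multi-label logits. This is a schematic stand-in under assumed dimensions, not the competition model.

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Additive attention over time steps: score each hidden state,
    then form a weighted sum used for multi-label classification."""
    def __init__(self, hidden_dim, n_labels=20):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)
        self.out = nn.Linear(hidden_dim, n_labels)

    def forward(self, h):                              # h: (batch, seq_len, hidden_dim)
        alpha = torch.softmax(self.score(h), dim=1)    # attention weights per time step
        ctx = (alpha * h).sum(dim=1)                   # pooled context vector
        return self.out(ctx)                           # one logit per label

criterion = nn.BCEWithLogitsLoss()                     # sigmoid per label for multi-label targets
pool = AttentionPooling(hidden_dim=256)
logits = pool(torch.randn(4, 100, 256))                # e.g. RCNN outputs for 4 texts
loss = criterion(logits, torch.randint(0, 2, (4, 20)).float())
```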
The complexity and variability of Internet traffic data present significant challenges in feature extraction and selection, often resulting in ineffective abnormal traffic monitoring. To address these challenges, an improved Bidirectional Long Short-Term Memory (BiLSTM) network-based approach for Internet abnormal traffic monitoring was proposed. In this method, a constrained minimum collection node coverage strategy was first applied to optimize the selection of collection nodes, ensuring comprehensive data coverage across network nodes while minimizing resource consumption. The collected traffic dataset was then transformed to enhance data validity. To enable more robust feature extraction, a combined Convolutional Neural Network (CNN) and BiLSTM model was employed, allowing for a comprehensive analysis of data characteristics. Additionally, an attention mechanism was incorporated to weigh the significance of attribute features, further enhancing classification accuracy. The final traffic monitoring results were produced through a softmax classifier, demonstrating that the proposed method yields a high monitoring accuracy with a low false positive rate of 0.2, an Area Under the Curve (AUC) of 0.95, and an average monitoring latency of 5.7 milliseconds (ms). These results indicate that the method provides an efficient and rapid response to Internet traffic anomalies, with a marked improvement in monitoring performance and resource efficiency.
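A compact PyTorch sketch of the described pipeline (Conv1d feature extraction, BiLSTM, attention weighting of attribute features, softmax output) follows; the dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """Conv1d -> BiLSTM -> attention -> softmax, mirroring the
    monitoring pipeline described in the abstract."""
    def __init__(self, n_features, n_classes, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2))
        self.bilstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                              # x: (batch, seq_len, n_features)
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.bilstm(z)
        a = torch.softmax(self.attn(h), dim=1)         # weight attribute features
        ctx = (a * h).sum(dim=1)
        return torch.log_softmax(self.fc(ctx), dim=-1) # softmax classifier output

model = CNNBiLSTM(n_features=8, n_classes=2)
print(model(torch.randn(4, 50, 8)).shape)              # (4, 2)
```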
The traditional K-means clustering algorithm suffers from unstable clustering results and low efficiency owing to the random selection of initial cluster centres. To address these limitations, an improved K-means clustering algorithm based on adaptive guided differential evolution (AGDE-KM) was proposed. First, adaptive operators were designed to enhance global search capability in the early stages and accelerate convergence in later stages. Second, a multi-mutation strategy with weighted coefficients was introduced to exploit the advantages of different mutation strategies during different evolutionary phases, balancing global and local search capabilities and expediting convergence. Third, a Gaussian perturbation crossover operation based on the best individual in the current population was proposed, providing individuals with superior evolution directions while preserving population diversity across dimensions, thereby preventing the algorithm from becoming trapped in local optima. The optimal solution obtained at termination was used as the initial cluster centres, replacing the centres randomly selected by traditional K-means. The proposed algorithm was evaluated on public datasets from the UCI repository, including Vowel, Iris, and Glass, as well as a synthetic dataset (Jcdx). Compared with traditional K-means, the sum of squared errors (SSE) was reduced by 5.65%, 19.59%, 13.31%, and 6.1%, respectively, and clustering time was decreased by 83.03%, 81.33%, 77.47%, and 92.63%, respectively. Experimental results demonstrate that the improved algorithm substantially accelerates convergence and strengthens optimisation capability, improving clustering effectiveness, efficiency, and stability.
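The overall scheme, evolution-optimized centres used to seed K-means, can be sketched with SciPy's standard differential evolution standing in for the paper's adaptive guided variant (the adaptive operators, multi-mutation strategy, and Gaussian perturbation crossover are not reproduced here).

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.cluster import KMeans

def sse(flat_centres, X, k):
    """Sum of squared errors of assigning each point to its nearest centre."""
    C = flat_centres.reshape(k, X.shape[1])
    d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).sum()

def de_initialised_kmeans(X, k):
    """Optimize k centres by DE, then run K-means from that initialization
    instead of random centres."""
    bounds = [(X[:, j].min(), X[:, j].max()) for j in range(X.shape[1])] * k
    res = differential_evolution(sse, bounds, args=(X, k), seed=0, maxiter=100)
    init = res.x.reshape(k, X.shape[1])
    return KMeans(n_clusters=k, init=init, n_init=1).fit(X)
```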
The classification of fruit ripeness and detection of defects are critical processes in the agricultural industry to minimize losses during commercialization. This study evaluated the performance of three Convolutional Neural Network (CNN) architectures—Extreme Inception Network (XceptionNet), Wide Residual Network (Wide ResNet), and Inception Version 4 (Inception V4)—in predicting the ripeness and quality of tomatoes. A dataset comprising 2,589 images of beef tomatoes was assembled from Golden Fingers Farms and Ranches Limited, Abuja, Nigeria. The samples were categorized into six classes representing five progressive ripening stages and a defect class, based on the United States Department of Agriculture (USDA) colour chart. To enhance the dataset's size and diversity, image augmentation through geometric transformations was employed, increasing the dataset to 3,000 images. Fivefold cross-validation was conducted to ensure a robust evaluation of the models' performance. The Wide ResNet model demonstrated superior performance, achieving an average accuracy of 97.87%, surpassing the 96.85% and 96.23% achieved by XceptionNet and Inception V4, respectively. These findings underscore the potential of Wide ResNet as an effective tool for accurately detecting ripeness levels and defects in tomatoes. The comparative analysis highlights the effectiveness of deep learning (DL) techniques in addressing challenges in agricultural automation and quality assessment. The proposed methodology offers a scalable solution for implementing automated ripeness and defect detection systems, with significant implications for reducing waste and improving supply chain efficiency.
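A transfer-learning sketch of the strongest model: torchvision's Wide ResNet-50-2 with its ImageNet head replaced by a six-class layer (five USDA ripening stages plus a defect class), alongside geometric augmentation in the spirit of the dataset expansion step. The specific transforms and input size are assumptions.

```python
import torch.nn as nn
from torchvision import models, transforms

# Pretrained Wide ResNet-50-2 with a six-class head for fine-tuning
model = models.wide_resnet50_2(weights=models.Wide_ResNet50_2_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 6)

# Geometric augmentation (flips and rotations) to expand the training set
train_tf = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
```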
The traditional channel scheduling methods in short-range wireless communication networks are often constrained by fixed rules, resulting in inefficient channel resource utilization and unstable data communication. To address these limitations, a novel multi-channel scheduling approach, based on a Q-learning feedback mechanism, was proposed. The architecture of short-range wireless communication networks was analyzed, focusing on the core network system and wireless access network structures. The network channel nodes were optimized by deploying Dijkstra's algorithm in conjunction with an undirected graph representation of the communication nodes within the network. Multi-channel state characteristic parameters were computed, and a channel state prediction model was constructed to forecast the state of the network channels. The Q-learning feedback mechanism was employed to implement multi-channel scheduling, leveraging the algorithm’s reinforcement learning capabilities and framing the scheduling process as a Markov decision-making problem. Experimental results demonstrate that this method achieved a maximum average packet loss rate of 0.03 and a network throughput of up to 4.5 Mbps, indicating high channel resource utilization efficiency. Moreover, in low-traffic conditions, communication delay remained below 0.4 s, and in high-traffic scenarios, it varied between 0.26 and 0.4 s. These outcomes suggest that the proposed approach enables efficient and stable transmission of communication data, maintaining both low packet loss and high throughput.
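The scheduling core reduces to a tabular Q-learning update over discretised channel states; in the sketch below the environment is a stub, so the state/action sizes and the reward are placeholders for the paper's channel-state prediction model.

```python
import numpy as np

n_states, n_channels = 8, 4            # hypothetical discretised channel states / channels
Q = np.zeros((n_states, n_channels))
alpha, gamma, eps = 0.1, 0.9, 0.1      # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

def step(state, action):
    """Environment stub: a real system would reward high throughput and
    penalise packet loss on the chosen channel."""
    reward = rng.random() - 0.5 * (action == 0)
    return reward, int(rng.integers(n_states))

state = 0
for _ in range(10_000):
    # epsilon-greedy channel selection
    action = int(rng.integers(n_channels)) if rng.random() < eps else int(Q[state].argmax())
    reward, nxt = step(state, action)
    # Q-learning feedback update (Markov decision formulation)
    Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
    state = nxt
```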