This study evaluates the potential of nitrogen and hydrogen as alternative working fluids in air conditioning systems to improve thermal comfort and optimize energy efficiency, using computational fluid dynamics (CFD) simulations. A controlled indoor environment measuring 6 m $\times$ 4.5 m $\times$ 3 m was simulated, with nitrogen and hydrogen tested at inlet velocities of 0.7 m/s, 0.8 m/s, 0.9 m/s, 1.0 m/s, and 1.1 m/s, and an inlet temperature fixed at 293 K (20℃). The analysis focused on the impact of these gases on room and outlet temperatures to assess airflow distribution, heat transfer, and thermal comfort compared to traditional air-based systems. Results indicated that nitrogen improved airflow uniformity and facilitated heat transfer but exhibited limitations in effectively reducing room temperature due to its thermal properties. In contrast, hydrogen demonstrated stable outlet temperatures across all velocities, benefiting from its higher thermal conductivity; however, room temperatures showed significant variation, particularly at higher inlet velocities. Temperature prediction errors in the CFD model ranged from 0.003% to 2.78%, suggesting high accuracy yet underscoring the need for refinement in simulation methods. The findings highlight the promise of nitrogen and hydrogen in optimizing air conditioning system performance but emphasize the necessity for further investigation into the practical implications, specifically regarding operational safety, energy efficiency, and environmental impacts.
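The quoted prediction errors are percentage relative errors between simulated and reference temperatures; a trivial sketch of that calculation, assuming the conventional definition (the abstract does not spell out its formula):

```python
def relative_error_pct(predicted, reference):
    """Percentage relative error between a CFD-predicted temperature
    and a reference value (both in kelvin)."""
    return abs(predicted - reference) / reference * 100.0
```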
Job scheduling for a single machine (JSSM) remains a core challenge in manufacturing and service operations, where optimal job sequencing is essential to minimize flow time, reduce delays, prioritize high-value tasks, and enhance overall system efficiency. This study addresses JSSM by developing a hybrid solution aimed at balancing multiple performance objectives and minimizing overall processing time. Eight established scheduling rules were examined through a comprehensive simulation based on randomly generated scenarios, each defined by three parameters: processing time, customer weight, and job due date. Performance was evaluated using six key metrics: flow time, total delay, number of delayed jobs, maximum delay, average delay of delayed jobs, and average weight of delayed jobs. A multi-criteria decision-making (MCDM) framework was applied to identify the most effective scheduling rule. This framework combines two approaches: the Analytic Hierarchy Process (AHP), used to assign relative importance to each criterion, and the Evaluation based on Distance from Average Solution (EDAS) method, applied to rank the scheduling rules. AHP weights were determined by surveying expert assessments, whose averaged responses formed a consensus on priority ranking. Results indicate that the Earliest Due Date (EDD) rule consistently outperformed other rules, likely due to the high weighting of delay-sensitive criteria within the AHP, which positions EDD favourably in scenarios demanding stringent adherence to deadlines. Following this initial rule-based scheduling phase, an optimization stage was introduced, involving four Tabu Search (TS) techniques: job swapping, block swapping, job insertion, and block insertion. The TS optimization yielded marked improvements, particularly in scenarios with high job volumes, significantly reducing delays and improving performance metrics across all criteria. 
The adaptability of this hybrid MCDM framework is highlighted as a primary contribution, with demonstrated potential for broader application. By adjusting weights, criteria, or search parameters, the proposed method can be tailored to diverse real-time scheduling challenges across different sectors. This integration of rule-based scheduling with metaheuristic search underscores the efficacy of hybrid approaches for complex scheduling problems.
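The AHP-weighted EDAS ranking described above can be sketched as follows. This is a minimal illustration under hypothetical data, not the authors' implementation: in EDAS, each alternative is scored by its weighted positive and negative distances from the average solution.

```python
def edas_rank(matrix, weights, cost_criteria=frozenset()):
    """Score alternatives (rows) with the EDAS method.

    matrix: decision matrix, one row per alternative (scheduling rule).
    weights: criterion weights (e.g. from AHP), summing to 1.
    cost_criteria: column indices where smaller values are better.
    """
    n_alt = len(matrix)
    n_crit = len(weights)
    # Average solution per criterion.
    avg = [sum(row[j] for row in matrix) / n_alt for j in range(n_crit)]
    sp, sn = [0.0] * n_alt, [0.0] * n_alt
    for i, row in enumerate(matrix):
        for j, w in enumerate(weights):
            pda = max(0.0, row[j] - avg[j]) / avg[j]  # above average: good...
            nda = max(0.0, avg[j] - row[j]) / avg[j]  # ...below average: bad
            if j in cost_criteria:                    # ...unless smaller is better
                pda, nda = nda, pda
            sp[i] += w * pda
            sn[i] += w * nda
    max_sp, max_sn = max(sp) or 1.0, max(sn) or 1.0
    # Appraisal score in [0, 1]; higher means a better-ranked rule.
    return [(sp[i] / max_sp + 1.0 - sn[i] / max_sn) / 2.0 for i in range(n_alt)]
```

For the scheduling criteria listed above (flow time, delays), all columns are cost-type, so every index would appear in `cost_criteria`.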
The integration of artificial intelligence (AI) and robotics into the warehouse management system (WMS) has substantially advanced supply chain (SC) operations, offering notable improvements in efficiency, accuracy, and economic resilience. In warehousing environments, AI algorithms and robotized systems enable rapid and precise product retrieval from storage while optimizing routing and packaging, thereby reducing order preparation time and enhancing delivery reliability. The implementation of these advanced technologies also results in fewer errors, improved customer satisfaction, and streamlined SC processes, empowering organizations to better manage inventory and respond swiftly to fluctuating market demands. Such innovations allow for reduced operating costs, enhanced productivity, and increased sustainability. Autonomous mobile robots (AMRs), automated guided vehicles (AGVs), and drones, among other cutting-edge solutions, are increasingly incorporated into the WMS to minimize physical labor and mitigate workplace injuries. Despite these benefits, considerable challenges remain, including the high initial costs and requisite technical expertise for ongoing maintenance. The integration of new AI and robotic technologies into pre-existing systems necessitates careful evaluation, substantial employee training, and process adaptation. Nonetheless, these technologies play a crucial role in fostering environmentally and socially sustainable operations within warehouses and broader SCs, contributing to reduced carbon emissions and the elimination of hazardous tasks for human workers. This study aims to identify the most effective AI and robotic technologies for a sustainable WMS, with recommendations tailored to maximize SC value through automation. A detailed examination of existing warehouse practices is essential to pinpoint areas where automation can yield the most substantial impact and deliver long-term resilience and value for SCs.
To better understand the competitive dynamics between e-commerce platforms and traditional retail outlets, a Stackelberg game model was developed. The Non-dominated Sorting Genetic Algorithm II (NSGA-II) was then employed to determine the Pareto solution set for this multi-objective optimization problem. The findings reveal that: a) The effect of consumer reference quality can lead enterprises to adjust their strategy levels downwards, potentially resulting in profit loss under certain conditions. b) When the influence of competitive intensity on market demand is minimal, enterprise profits decline in both centralized and cost-sharing decision-making frameworks, with the more significant detriment observed in the cost-sharing mode; conversely, when the influence is substantial, greater competitive intensity can significantly increase overall system profits. c) The model's validity was confirmed through the application of NSGA-II.
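NSGA-II obtains the Pareto solution set through fast non-dominated sorting; a minimal sketch of that sorting step for minimization objectives (illustrative only, omitting NSGA-II's crowding-distance and genetic operators):

```python
def non_dominated_sort(points):
    """Fast non-dominated sorting, the core of NSGA-II, for
    minimization objectives. Returns a list of fronts, each a list of
    point indices; front 0 is the Pareto-optimal set."""
    n = len(points)

    def dominates(a, b):
        # a dominates b: no worse in every objective, better in at least one.
        return all(x <= y for x, y in zip(a, b)) and a != b

    dominated_by = [0] * n                # how many points dominate i
    dominating = [[] for _ in range(n)]   # indices that i dominates
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominating[i].append(j)
            elif dominates(points[j], points[i]):
                dominated_by[i] += 1
    fronts = [[i for i in range(n) if dominated_by[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominating[i]:       # peel off the current front
                dominated_by[j] -= 1
                if dominated_by[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]                    # drop the trailing empty front
```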
The increasing demand for electricity, coupled with the limitations of centralised power generation, has necessitated the transition towards smart grid technologies as a critical evolution of traditional power systems. The smart grid represents a significant transformation from the conventional grid, offering a pathway towards modernising energy infrastructure. This review aims to present a comprehensive analysis of the advantages and challenges of smart grid implementation, particularly within the context of the Kurdistan Region of Iraq. Key benefits such as improved grid intelligence, enhanced reliability, and sustainability were highlighted. However, several challenges were identified, including cybersecurity risks, regulatory complexities, and issues of interoperability, which collectively pose obstacles to widespread adoption. Furthermore, the review examines the current energy network in the Kurdistan region and proposes a framework for integrating smart grid technologies. Strategies for addressing the identified challenges were discussed, emphasising the importance of overcoming these barriers to facilitate the region's transition to a more advanced and efficient energy infrastructure.
The rapid advancement of technology has correspondingly escalated the sophistication of cyber threats. In response, the integration of artificial intelligence (AI) into cybersecurity (CS) frameworks has been recognized as a crucial strategy to bolster defenses against these evolving challenges. This analysis scrutinizes the effects of AI implementation on CS effectiveness, focusing on a case study involving company XYZ's adoption of an AI-driven threat detection system. The evaluation centers on several pivotal metrics, including False Positive Rate (FPR), Detection Accuracy (DA), Mean Time to Detect (MTTD), and Operational Efficiency (OE). Findings from this study illustrate a marked reduction in false positives, enhanced DA, and more streamlined security operations. The integration of AI has demonstrably fortified CS resilience and expedited incident response capabilities. Such improvements not only underscore the potential of AI-driven solutions to significantly enhance CS measures but also highlight their necessity in safeguarding digital assets within a continuously evolving threat landscape. The implications of these findings are profound, suggesting that leveraging AI technologies is imperative for effectively mitigating cyber threats and ensuring robust digital security in contemporary settings.
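Two of the metrics above follow directly from the confusion matrix and the detection timeline; a small sketch assuming the conventional definitions (the study does not publish its exact formulas):

```python
def detection_metrics(tp, fp, tn, fn):
    """False Positive Rate and Detection Accuracy from a confusion
    matrix of alert outcomes (conventional definitions)."""
    fpr = fp / (fp + tn)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return fpr, accuracy

def mean_time_to_detect(incidents):
    """MTTD: average latency over (onset_time, detected_time) pairs."""
    return sum(detected - onset for onset, detected in incidents) / len(incidents)
```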
The burgeoning application of artificial intelligence (AI) technologies for the diagnosis and detection of defects has marked a significant area of interest among researchers in recent years. This study presents a fuzzy logic-based approach to identify failures within industrial systems, with a focus on operational anomalies in a real-world context: the Amor Benamor facility in the Al-Fajjouj region, Guelma, Algeria. The analysis began with the Activity-Based Costing (ABC) method to identify the critical machinery within the K short-dough production line. Subsequently, a detailed failure tree analysis was conducted on the pressing machine, enabling the deployment of a fuzzy logic approach for detecting failures in the dough cutter of the K production line press. The effectiveness of the proposed method was validated with authentic, real-time data from the facility where the study took place. The results underscore the efficacy of the fuzzy logic approach in enhancing fault detection within industrial systems, offering substantial implications for the advancement of defect diagnosis methodologies. The study advocates for the integration of fuzzy logic principles in the operational oversight of industrial machinery, aiming to mitigate potential failures and optimize production efficiency.
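The fuzzy detection step can be illustrated with a single Mamdani-style rule; the input variables and membership ranges below are hypothetical, not taken from the plant:

```python
def tri(x, a, b, c):
    """Triangular membership function over [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def failure_risk(vibration, temperature):
    """One illustrative rule: 'IF vibration is high AND temperature is
    high THEN failure risk is high', with min() as the fuzzy AND.
    Ranges are hypothetical placeholders for plant-specific tuning."""
    vib_high = tri(vibration, 4.0, 8.0, 12.0)     # e.g. mm/s
    temp_high = tri(temperature, 60.0, 90.0, 120.0)  # e.g. deg C
    return min(vib_high, temp_high)
```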
In the realm of Wireless Sensor Networks (WSNs), energy efficiency emerges as a paramount concern due to the inherent limitations in the energy capacity of sensor nodes. The extension of network lifespan is critically dependent on the strategic selection of Cluster Heads (CHs), a process that necessitates a nuanced approach to optimize communication, resource allocation, and network performance overall. This study proposes a novel methodology for CH selection, integrating Multiple Criteria Decision Making (MCDM) with the K-Means algorithm to facilitate a more discerning aggregation and forwarding of data to the network sink. Central to this approach is the application of the Einstein Weighted Averaging Aggregation (EWA) operator, which introduces a layer of sophistication in handling the uncertainties inherent in WSN deployments. The efficiency of CH selection is vital, as CHs serve as pivotal nodes within the network, their selection and operational efficiency directly influencing the network's energy consumption and data processing capabilities. By employing a meticulously designed clustering process via the K-Means algorithm and selecting CHs based on a comprehensive set of parameters, including, but not limited to, residual energy and node proximity, this methodology seeks to substantially enhance the energy efficiency of WSNs. Comparative analysis with the Low-Energy Adaptive Cluster Hierarchy (LEACH)-Fuzzy Clustering (FC) algorithm underscores the efficacy of the proposed approach, demonstrating a 15% improvement in network lifespan. This advancement not only ensures optimal utilization of limited resources but also promotes the sustainability of WSN deployments, a critical consideration for the widespread application of these networks in various fields. 
The findings of this study underscore the significance of adopting sophisticated, algorithmically driven strategies for CH selection, highlighting the potential for significant enhancements in WSN longevity through methodical, data-informed decision-making processes.
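The CH scoring step can be sketched as a weighted trade-off between residual energy and proximity to the cluster centroid; the weights below are hypothetical, and the full method additionally applies K-Means clustering and the EWA aggregation operator:

```python
import math

def select_cluster_head(nodes, centroid, w_energy=0.6, w_proximity=0.4):
    """Choose the cluster head as the node maximizing a weighted mix of
    normalized residual energy (higher is better) and closeness to the
    cluster centroid (nearer is better).

    nodes: list of ((x, y), residual_energy) tuples; weights illustrative.
    """
    max_e = max(e for _, e in nodes) or 1.0
    max_d = max(math.dist(p, centroid) for p, _ in nodes) or 1.0

    def score(node):
        pos, energy = node
        return (w_energy * energy / max_e
                + w_proximity * (1.0 - math.dist(pos, centroid) / max_d))

    return max(nodes, key=score)
```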
This study introduces an advanced technology for risk analysis in investment projects within the extractive industry, specifically focusing on innovative mining ventures. The research primarily investigates various determinants influencing project risks, including production efficiency, cost, informational content, resource potential, organizational structure, external environmental influences, and environmental impacts. In addressing the research challenge, system-cognitive models from the Eidos intellectual framework are employed. These models quantitatively reflect the informational content observed across different gradations of descriptive scales, predicting the transition of the modelled object into a state corresponding to specific class gradations. A comprehensive analysis of strengths, weaknesses, opportunities and threats (SWOT) has been conducted, unveiling the dynamic interplay of development factors against the backdrop of threats and opportunities within mineral deposit exploitation projects. This analysis facilitates the identification of critical problem areas, bottlenecks, prospects, and risks, with due regard to environmental factors. The application of this novel intelligent technology significantly streamlines the development process for mining investment projects, guiding the selection of ventures that promise enhanced production efficiency, cost reduction, and minimized environmental harm. By leveraging an intelligent, systematic framework for risk analysis, this research contributes valuable insights into optimizing investment decisions in the mining sector, emphasizing sustainability and economic viability.
In the realm of high-definition surveillance for dense traffic environments, the accurate detection and classification of vehicles remain paramount challenges, often hindered by missed detections and inaccuracies in vehicle type identification. Addressing these issues, an enhanced version of the You Only Look Once v5s (YOLOv5s) algorithm is presented, wherein the conventional network structure is optimally modified through the partial integration of the Swin Transformer V2. This innovative approach leverages the convolutional neural networks' (CNNs) proficiency in local feature extraction alongside the Swin Transformer V2's capability in global representation capture, thereby creating a symbiotic system for improved vehicle detection. Furthermore, the introduction of the Similarity-based Attention Module (SimAM) within the CNN framework plays a pivotal role, dynamically refocusing the feature map to accentuate local features critical for accurate detection. An empirical evaluation of this augmented YOLOv5s algorithm demonstrates a significant uplift in performance metrics, evidencing an average detection precision (mAP@0.5:0.95) of 65.7%. Specifically, in the domain of vehicle category identification, a notable increase in the true positive rate by 4.48% is observed, alongside a reduction in the false negative rate by 4.11%. The culmination of these enhancements through the integration of the Swin Transformer and SimAM within the YOLOv5s framework marks a substantial advancement in the precision of vehicle type recognition and the reduction of target miss detection in densely populated traffic flows. The methodology's success underscores the efficacy of this integrated approach in overcoming the prevalent limitations of existing vehicle detection algorithms under complex surveillance scenarios.
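SimAM's re-weighting is parameter-free; a single-channel sketch following the published SimAM energy formulation (not this paper's code), in which activations far from the channel mean receive larger attention weights:

```python
import math

def simam(channel, lam=1e-4):
    """SimAM-style parameter-free attention on a single 2-D channel:
    each activation is scaled by the sigmoid of its 'energy', which
    grows with its squared deviation from the channel mean."""
    flat = [v for row in channel for v in row]
    n = len(flat)
    mean = sum(flat) / n
    var = sum((v - mean) ** 2 for v in flat) / (n - 1)

    def weight(v):
        energy = (v - mean) ** 2 / (4.0 * (var + lam)) + 0.5
        return 1.0 / (1.0 + math.exp(-energy))  # sigmoid gating

    return [[v * weight(v) for v in row] for row in channel]
```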
To address the rate matching issue between high-bandwidth, high-sampling-rate analog-to-digital converters (ADCs) and low-bandwidth, low-sampling-rate baseband processors, the key technology of digital downconversion is introduced. This approach shifts the intermediate-frequency signal down to the vicinity of baseband, laying a foundation for subsequent Digital Signal Processor (DSP) analysis and processing. In an innovative application of the Coordinate Rotation Digital Computer (CORDIC) algorithm for the Numerically Controlled Oscillator (NCO) in a pipelined design, the phase differences of five parallel signals are measured, facilitating real-time parallel processing of the phase and amplitude relationships of multiple signals. The Field Programmable Gate Array (FPGA) design and implementation of the digital mixer module and filter bank for digital downconversion have been accomplished. A test board for the direction-finding application of five digital downconversion channels has been constructed, with the FMQL45T900 as its core. The correctness of the direction-finding data has been validated through practical application, demonstrating a significant reduction in power consumption compared to methods documented in other literature, thereby enhancing overall efficiency. The digital downconversion technology based on the CORDIC algorithm is applicable in various fields, including military communications, broadcasting, and radar navigation systems.
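The CORDIC rotation mode behind such an NCO computes sine and cosine using only shifts, adds, and a small arctangent table, which is what makes one pipelined FPGA stage per iteration natural; a floating-point sketch (the hardware version uses fixed-point arithmetic):

```python
import math

ITERS = 16
ANGLES = [math.atan(2.0 ** -i) for i in range(ITERS)]
# Each micro-rotation grows the vector; this constant compensates.
GAIN = math.prod(math.cos(a) for a in ANGLES)

def cordic_sin_cos(theta):
    """Rotation-mode CORDIC for |theta| <= pi/2: drive the residual
    angle z to zero with micro-rotations of +/- atan(2^-i); in hardware
    each multiply by 2^-i is a bit shift."""
    x, y, z = 1.0, 0.0, theta
    for i in range(ITERS):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return x * GAIN, y * GAIN  # (cos(theta), sin(theta))
```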
The transition from traditional production activities to a manufacturing-dominated economy has been a hallmark of industrial evolution, culminating in the advent of the fourth industrial revolution. This phase is characterized by the seamless integration of digital advancements across all sectors of global industry, heralding significant strides in meeting the evolving demands of markets and consumers. The concept of the smart factory stands at the forefront of this transformation, embedding sustainability, which is defined as economic viability, environmental stewardship, and social responsibility, into its core principles. This research focuses on the critical role of autonomous material handling technologies within these smart manufacturing environments, emphasizing their contribution to enhancing industrial productivity. The automation of material handling, propelled by the exigencies of reducing material damage, minimizing human intervention in repetitive tasks, and mitigating errors and service delays, is increasingly viewed as indispensable for achieving sustainable industrial operations. The employment of artificial intelligence (AI) in material handling not only offers substantial benefits in terms of operational efficiency and sustainability but also introduces specific challenges that must be navigated to align with the smart factory paradigm. By examining the integration of autonomous material handling solutions, traditionally epitomized by the utilization of forklifts in industrial settings, this study delineates the essential benchmarks for their implementation, ensuring compatibility with the overarching objectives of smart manufacturing systems. Through this lens, the paper articulates the dual imperative of aligning material handling technologies with environmental and social sustainability criteria, while also ensuring their economic feasibility.