Evaluating renewable energy policies is crucial for fostering sustainable development, particularly within the European Union (EU), where energy management must account for economic, environmental, and social criteria. A stable framework is proposed that integrates multiple perspectives by synthesizing the rankings derived from four widely recognized Multi-Criteria Decision Analysis (MCDA) methods—Measurement of Alternatives and Ranking according to Compromise Solution (MARCOS), Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), Stable Preference Ordering Towards Ideal Solution (SPOTIS), and Multi-Objective Optimization by Ratio Analysis (MOORA). This approach addresses the inherent variability in individual MCDA techniques by applying Copeland’s compromise method, ensuring a consensus ranking that reflects the balanced performance of renewable energy systems across 16 EU countries. To further enhance the reliability of the framework, the Stochastic Identification of Weights (SITW) approach is employed, optimizing the criteria weights and strengthening the consistency of the evaluation process. The results reveal a strong alignment between the rankings generated by individual MCDA methods and the compromise rankings, particularly among the highest-performing alternatives. This alignment highlights the stability of the framework, enabling the identification of critical drivers of renewable energy policy performance—most notably energy efficiency and environmental sustainability. The compromise approach proves effective in balancing multiple, sometimes conflicting perspectives, offering policymakers a structured tool for informed decision-making in the complex domain of energy management. The findings contribute to the development of advanced frameworks for decision-making by demonstrating that compromise rankings can offer robust solutions while maintaining methodological consistency. Furthermore, this framework provides valuable insights into the complex dynamics of renewable energy performance evaluation. Future research should explore the applicability of this methodology beyond the EU context, incorporating additional dimensions such as social, technological, and institutional factors, and addressing the dynamic evolution of energy policies. This framework offers a solid foundation for refining policy evaluation strategies, supporting sustainable energy management efforts in diverse geographic regions.
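To make the compromise step concrete, the following minimal sketch (with a hypothetical set of rank vectors rather than the study's actual MARCOS/TOPSIS/SPOTIS/MOORA results) shows how Copeland's method can aggregate several MCDA rankings into a single consensus order.

```python
import numpy as np

def copeland_compromise(rankings):
    """Aggregate several rank vectors (1 = best) into a Copeland compromise ranking.

    rankings: array of shape (n_methods, n_alternatives), each row one method's ranking.
    Returns the compromise ranking (1 = best) over the alternatives.
    """
    rankings = np.asarray(rankings, dtype=float)
    n_alt = rankings.shape[1]
    score = np.zeros(n_alt)
    for i in range(n_alt):
        for j in range(n_alt):
            if i == j:
                continue
            wins = np.sum(rankings[:, i] < rankings[:, j])    # methods preferring i over j
            losses = np.sum(rankings[:, i] > rankings[:, j])  # methods preferring j over i
            if wins > losses:
                score[i] += 1    # pairwise win
            elif wins < losses:
                score[i] -= 1    # pairwise loss
    # Higher Copeland score -> better compromise position
    order = np.argsort(-score)
    compromise_rank = np.empty(n_alt, dtype=int)
    compromise_rank[order] = np.arange(1, n_alt + 1)
    return compromise_rank

# Hypothetical rankings of five countries produced by four MCDA methods
example = [
    [1, 2, 3, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 3, 2, 4, 5],
    [1, 2, 4, 3, 5],
]
print(copeland_compromise(example))  # [1 2 3 4 5]
```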
The integration of Electric Vehicles (EVs) into modern power grids presents both challenges and opportunities. This study investigates the influence of slack bus compensation on the stability of voltage levels within these grids, particularly as EV penetration increases. A comprehensive simulation framework is developed to model various grid configurations, accounting for different scenarios of EV load integration. Historical charging data is meticulously analysed to predict future load patterns, indicating that heightened levels of EV integration lead to a notable decrease in voltage stability. Specifically, voltage levels were observed to decline from 230 V to 210 V under conditions of 100% EV penetration, necessitating an increase in slack bus compensation from 0 MW to 140 MW to sustain system balance. Advanced machine learning techniques are employed to forecast real-time load demands, significantly reducing both Mean Absolute Error (MAE) and Root Mean Square Error (RMSE), thereby optimising slack bus performance. The results underscore the critical role of real-time load forecasting and automated control strategies in addressing the challenges posed by EV integration into power grids. Furthermore, the study demonstrates that intelligent systems, coupled with machine learning, can enhance power flow management and bolster grid stability, ultimately improving operational efficiency in the distribution of energy. Future research will focus on refining machine learning models through the utilisation of more granular data sets and exploring decentralized control methodologies, such as federated learning, thereby providing valuable insights for grid operators as the adoption of EVs continues to expand.
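As a rough illustration of the forecasting and error-evaluation step, the sketch below fits a simple autoregressive least-squares model to synthetic charging-load data and reports MAE and RMSE; the data, lag length, and model are assumptions, not the grid data or machine-learning models used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly charging load (kW): daily pattern plus noise -- illustrative only.
hours = np.arange(24 * 60)
load = 50 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

def lagged_features(series, n_lags=24):
    """Build a simple autoregressive design matrix from the previous n_lags values."""
    X = np.column_stack([series[i:-(n_lags - i)] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

X, y = lagged_features(load)
split = int(0.8 * len(y))
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

# Ordinary least squares as a stand-in for the study's machine-learning forecaster
coef, *_ = np.linalg.lstsq(np.c_[np.ones(len(X_train)), X_train], y_train, rcond=None)
pred = np.c_[np.ones(len(X_test)), X_test] @ coef

mae = np.mean(np.abs(y_test - pred))
rmse = np.sqrt(np.mean((y_test - pred) ** 2))
print(f"MAE = {mae:.2f} kW, RMSE = {rmse:.2f} kW")
```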
Bibliometric analysis is a quantitative research method employed to measure and assess the impact, structure, and trends within academic publications. It aims to uncover patterns, connections, and research gaps either within a specific field or across interdisciplinary domains. This study utilizes bibliometric methods to investigate research gaps within the digital business domain, focusing on qualitative insights identified in existing literature. A systematic literature review (SLR) approach is adopted to ensure a rigorous synthesis of relevant studies. The analysis follows three key phases: data collection, bibliometric evaluation, and data visualization. Through these phases, trends, thematic gaps, and areas for future exploration are identified, offering a clearer understanding of the evolution and direction of digital business research. The insights derived are intended to inform sustainable business practices, with implications for environmentally conscious business models, value-driven marketing strategies, and the integration of sustainable operations. Moreover, the findings highlight potential avenues for enhanced technological innovation and interdisciplinary collaboration in digital business. This study provides a robust framework for scholars seeking to explore uncharted areas within digital business and offers actionable guidance on key research themes requiring further investigation. The use of bibliometric tools ensures comprehensive coverage of existing literature and fosters the development of a coherent research agenda aligned with emerging trends in the field.
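As a small illustration of the bibliometric evaluation phase, the sketch below counts keyword frequencies and keyword co-occurrences from a handful of records; the records and keywords are invented for demonstration and do not come from the reviewed corpus.

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword lists harvested during the data-collection phase
records = [
    ["digital business", "sustainability", "business model"],
    ["digital business", "value-driven marketing"],
    ["sustainability", "business model", "innovation"],
    ["digital business", "innovation"],
]

# Keyword frequency and pairwise co-occurrence, the raw material for trend maps
keyword_freq = Counter(k for rec in records for k in rec)
co_occurrence = Counter(frozenset(pair) for rec in records
                        for pair in combinations(sorted(set(rec)), 2))

print(keyword_freq.most_common(3))
print(co_occurrence.most_common(3))
```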
Container-based virtualization has emerged as a leading alternative to traditional cloud-based architectures due to its lower overhead, enhanced scalability, and adaptability. Kubernetes, one of the most widely adopted open-source container orchestration platforms, facilitates dynamic resource allocation through the Horizontal Pod Autoscaler (HPA). This auto-scaling mechanism enables efficient deployment and management of microservices, allowing for rapid development of complex SaaS applications. However, recent studies have identified several vulnerabilities in auto-scaling systems, including brute force attacks, Denial-of-Service (DoS) attacks, and YOYO attacks, which have led to significant performance degradation and unexpected downtimes. In response to these challenges, a novel approach is proposed to ensure uninterrupted deployment and enhanced resilience against such attacks. By leveraging Helm for deployment automation, Prometheus for metrics collection, and Grafana for real-time monitoring and visualisation, this framework improves the Quality of Service (QoS) in Kubernetes clusters. A primary focus is placed on achieving optimal resource utilisation while meeting Service Level Objectives (SLOs). The proposed architecture dynamically scales workloads in response to fluctuating demands and strengthens security against autoscaling-specific attacks. An on-premises implementation using Kubernetes and Docker containers demonstrates the feasibility of this approach by mitigating performance bottlenecks and preventing downtime. The contribution of this research lies in the ability to enhance system robustness and maintain service reliability under malicious conditions without compromising resource efficiency. This methodology ensures seamless scalability and secure operations, making it suitable for enterprise-level microservices and cloud-native applications.
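For orientation, the following sketch re-implements the publicly documented Horizontal Pod Autoscaler scaling rule (desired replicas = ceil(current replicas × current metric / target metric), with a default 10% tolerance band); the replica limits and metric values are illustrative and this is not the proposed framework's code.

```python
import math

def hpa_desired_replicas(current_replicas, current_metric, target_metric,
                         min_replicas=1, max_replicas=10, tolerance=0.1):
    """Approximate the HPA scaling rule: desired = ceil(current * metric / target),
    clamped to [min_replicas, max_replicas] and skipped when the ratio lies
    within the default 10% tolerance band."""
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas                       # within tolerance: no scaling
    desired = math.ceil(current_replicas * ratio)
    return max(min_replicas, min(max_replicas, desired))

# Example: average CPU at 160% of the 80% target while running 4 pods -> scale to 8
print(hpa_desired_replicas(current_replicas=4, current_metric=160, target_metric=80))
```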
The traditional manufacturing sector in China is increasingly challenged by rising labour costs and the diminishing demographic advantage. These issues exacerbate existing inefficiencies, such as limited value addition, high resource consumption, prolonged production cycles, inconsistent product quality, and inadequate automation. To address these challenges, a production scheduling framework is proposed, guided by three key objectives: the prioritisation of high-value orders, the reduction of total processing time, and the earliest possible completion of all orders. This study introduces a multi-objective constrained greedy model designed to optimise scheduling by balancing these objectives through maximum weight allocation, shortest processing time selection, and adherence to the earliest deadlines. The proposed approach incorporates comprehensive reward and penalty factors to account for deviations in performance, thus fostering a balance between operational efficiency and product quality. By implementing the optimised scheduling strategy, it is anticipated that significant improvements will be achieved in production efficiency, workforce motivation, product quality, and organisational reputation. The enhanced operational outcomes are expected to strengthen the core competitiveness of enterprises, particularly within the increasingly complex landscape of pull production systems. This research offers valuable insights for manufacturers seeking to transition towards more efficient, automated, and customer-centric production models, addressing both short-term operational challenges and long-term strategic objectives.
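The following minimal sketch illustrates a greedy ordering in the spirit of the three stated rules (highest value first, then shortest processing time, then earliest deadline) together with a simple reward/penalty score; the orders, weights, and scoring constants are assumptions rather than the study's model.

```python
from dataclasses import dataclass

@dataclass
class Order:
    name: str
    value: float        # order value / weight (maximise)
    proc_time: float    # processing time in hours (minimise)
    deadline: float     # due time in hours from now

def greedy_schedule(orders, late_penalty=1.0, early_reward=0.1):
    """Greedy heuristic: sort by highest value, then shortest processing time,
    then earliest deadline; score completions with a simple reward/penalty
    (illustrative, not the study's exact objective function)."""
    sequence = sorted(orders, key=lambda o: (-o.value, o.proc_time, o.deadline))
    t, score = 0.0, 0.0
    for o in sequence:
        t += o.proc_time
        if t > o.deadline:
            score -= late_penalty * (t - o.deadline)    # penalise tardiness
        else:
            score += early_reward * (o.deadline - t)    # reward early completion
    return sequence, score

orders = [Order("A", 10, 3, 8), Order("B", 8, 1, 4), Order("C", 10, 2, 5)]
seq, score = greedy_schedule(orders)
print([o.name for o in seq], round(score, 2))   # ['C', 'A', 'B'] -1.4
```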
Logistics performance plays a pivotal role in fostering economic growth and enhancing global competitiveness. This study aims to evaluate the logistics performance of G8 nations through multi-criteria decision-making (MCDM) models. Standard Deviation (SD) has been applied to determine the weights of evaluation criteria, while the Alternative Ranking Order Method Accounting for Two-Step Normalization (AROMAN) has been employed to rank the countries based on their performance. The findings indicate that Timeliness emerges as the most critical factor influencing logistics efficiency. Among the G8 nations, Germany achieves the highest logistics performance, reflecting the robustness of its logistical infrastructure and operational efficiency. The results reinforce the premise that logistics performance is instrumental to both international trade and economic competitiveness. Nations demonstrating strong logistical capabilities are better positioned to excel in global markets, while those with underdeveloped logistics systems may face increased economic vulnerabilities. Enhancing logistical frameworks, including infrastructure and systems, is therefore essential for nations striving to improve their global standing. The insights presented underscore the importance of strategic investment in logistics infrastructure as a key policy instrument for enhancing economic resilience and international trade potential.
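The sketch below shows the Standard Deviation (SD) weighting step on a small, invented matrix of logistics scores; the AROMAN ranking itself is not reproduced here, and the country scores are illustrative only.

```python
import numpy as np

def sd_weights(matrix):
    """Standard-deviation (SD) objective weighting: min-max normalise each
    criterion column, then weight criteria in proportion to their standard
    deviation across alternatives."""
    X = np.asarray(matrix, dtype=float)
    col_min, col_max = X.min(axis=0), X.max(axis=0)
    norm = (X - col_min) / (col_max - col_min)
    sd = norm.std(axis=0, ddof=1)
    return sd / sd.sum()

# Hypothetical LPI-style scores (rows: countries; columns: criteria such as
# customs, infrastructure, timeliness) -- illustrative values only.
scores = [[3.5, 4.0, 4.2],
          [3.9, 4.3, 4.5],
          [3.2, 3.6, 3.9]]
print(sd_weights(scores).round(3))
```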
The efficiency of utility vehicle fleets in municipal waste management plays a crucial role in enhancing the sustainability and effectiveness of non-hazardous waste disposal systems. This research investigates the operational performance of a local utility company's vehicle fleet, with a specific focus on waste separation at the source and its implications for meeting environmental standards in Europe and beyond. The study aims to identify the most efficient vehicle within the fleet, contributing to broader goals of environmental preservation and waste reduction, with a long-term vision of achieving "zero waste". Efficiency was evaluated using Data Envelopment Analysis (DEA), where key input parameters included fuel costs, regular maintenance expenses, emergency repair costs, and the number of minor accidents or damages. The output parameter was defined as the vehicle's working hours. Following the DEA results, the Criteria Importance Through Intercriteria Correlation (CRITIC) method was employed to assign weightings to the criteria, ensuring an accurate reflection of their relative importance. The Measurement of Alternatives and Ranking according to Compromise Solution (MARCOS) method was then applied to rank the vehicles based on their overall efficiency. The analysis, conducted over a five-year period (2019-2023), demonstrated that Vehicle 3 (MAN T32-J-339) achieved the highest operational efficiency, particularly in 2020. These findings underscore the potential for optimising fleet performance in waste management systems, contributing to a cleaner urban environment and aligning with global sustainability objectives. The proposed model provides a robust framework for future applications in similar municipal settings, supporting the transition towards more eco-friendly waste management practices.
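As an illustration of the weighting stage, the following sketch computes CRITIC weights for a small, invented fleet data set; the criteria, values, and benefit/cost directions are assumptions, and the DEA and MARCOS steps are not shown.

```python
import numpy as np

def critic_weights(matrix, benefit):
    """CRITIC objective weighting: min-max normalise each criterion
    (direction-aware), then combine each criterion's standard deviation with
    its conflict against the others, C_j = sigma_j * sum_k (1 - r_jk)."""
    X = np.asarray(matrix, dtype=float)
    norm = np.empty_like(X)
    for j in range(X.shape[1]):
        lo, hi = X[:, j].min(), X[:, j].max()
        norm[:, j] = (X[:, j] - lo) / (hi - lo) if benefit[j] else (hi - X[:, j]) / (hi - lo)
    sigma = norm.std(axis=0, ddof=1)
    r = np.corrcoef(norm, rowvar=False)
    info = sigma * (1 - r).sum(axis=1)
    return info / info.sum()

# Hypothetical fleet data (rows: vehicles; columns: fuel cost, repair cost,
# working hours) -- all values invented for illustration.
data = [[12000, 1500, 1800],
        [13500, 1100, 2000],
        [11000, 2100, 1600]]
print(critic_weights(data, benefit=[False, False, True]).round(3))
```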
This study investigates the application of Multi-Criteria Decision Analysis (MCDA) methods to the classification of research papers within a Systematic Literature Review (SLR). Distinctions are drawn between compensatory and non-compensatory MCDA approaches, which, despite their distinctiveness, have often been applied interchangeably, leading to a need for clarification in their usage. To address this, the methods of Entropy Weight Method (EWM), Analytic Hierarchy Process (AHP), and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) were utilized to determine the parameters for ranking papers within an SLR portfolio. The source of this ranking comprised publications from three major databases: Scopus, ScienceDirect, and Web of Science. From an initial yield of 267 articles, a final portfolio of 90 articles was established, highlighting not only the compensatory and non-compensatory classifications but also identifying methods that incorporate features of both. This nuanced categorization reveals the complexity and necessity of selecting an appropriate MCDA method based on the dataset characteristics, which may exhibit attributes of both approaches. The analysis further illuminated the geographical distribution of publications, leading contributors, thematic areas, and the prevalence of specific MCDA methods. This study underscores the importance of methodological precision in the application of MCDA to systematic reviews, providing a refined framework for evaluating academic literature.
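The sketch below illustrates the Entropy Weight Method (EWM) on a small, hypothetical matrix of paper-screening scores; the criteria and values are invented, and the AHP and TOPSIS steps are not reproduced.

```python
import numpy as np

def entropy_weights(matrix):
    """Entropy Weight Method (EWM): column-normalise the decision matrix into
    proportions, compute each criterion's Shannon entropy, and weight criteria
    by their degree of diversification (1 - entropy)."""
    X = np.asarray(matrix, dtype=float)
    P = X / X.sum(axis=0)
    k = 1.0 / np.log(X.shape[0])
    logs = np.where(P > 0, np.log(P), 0.0)
    entropy = -k * (P * logs).sum(axis=0)
    d = 1 - entropy
    return d / d.sum()

# Hypothetical screening scores (rows: papers; columns: citation count,
# journal quartile score, topical relevance) -- values are illustrative.
papers = [[120, 4, 0.9],
          [35, 3, 0.7],
          [260, 4, 0.6],
          [15, 2, 0.8]]
print(entropy_weights(papers).round(3))
```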
In the rapidly evolving mobile technology market, the abundance of options, fluctuating features, diverse price points, and extensive specifications make selecting the optimal mobile phone a formidable task for consumers. This complexity is further exacerbated by the ambiguity and uncertainty inherent in consumer preferences. This study deploys fuzzy hypersoft sets (FHSS) in conjunction with machine learning techniques to build a decision support system (DSS) that refines the mobile phone selection process. The proposed framework harnesses the synergy between FHSS and machine learning to navigate the multifaceted nature of consumer choices and the attributes of the available alternatives, offering a structured approach that maximizes consumer satisfaction while accommodating multiple determinants. The integration of FHSS is pivotal in managing the inherent ambiguity and uncertainty of consumer preferences, providing a comprehensive decision-making apparatus amid a wide range of choices. The study presents an easy-to-navigate framework, supported by Python code and algorithms, to streamline the selection process, yielding a personalized and engaging approach to mobile phone selection in an ever-evolving technological landscape. Professional terminology is applied consistently throughout the study to ensure clarity and precision. This work contributes to the existing literature by offering a novel framework that combines the principles of fuzzy set (FS) theory with advanced computational techniques, thereby facilitating a nuanced decision-making process in the realm of mobile phone selection.
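A minimal sketch of the fuzzy-hypersoft-style scoring idea is given below, assuming invented attribute-value tuples, importance weights, and membership degrees; it is not the study's implementation.

```python
# Minimal fuzzy-hypersoft-style scoring sketch (illustrative assumptions only).
# Each phone receives a fuzzy membership degree in [0, 1] for every combination
# of attribute values the buyer cares about; a weighted mean aggregates them.

preferences = {                      # attribute-value tuple -> importance weight
    ("battery", "large"): 0.4,
    ("camera", "high-res"): 0.35,
    ("price", "mid-range"): 0.25,
}

memberships = {                      # phone -> membership degree per tuple
    "Phone A": {("battery", "large"): 0.9, ("camera", "high-res"): 0.6, ("price", "mid-range"): 0.8},
    "Phone B": {("battery", "large"): 0.5, ("camera", "high-res"): 0.95, ("price", "mid-range"): 0.4},
    "Phone C": {("battery", "large"): 0.7, ("camera", "high-res"): 0.7, ("price", "mid-range"): 0.9},
}

def score(phone):
    """Weighted aggregation of the phone's membership degrees."""
    degrees = memberships[phone]
    return sum(w * degrees[attr] for attr, w in preferences.items())

for phone in sorted(memberships, key=score, reverse=True):
    print(f"{phone}: {score(phone):.3f}")
```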
In general, a stable and robust system should not respond to its inputs with excessive sensitivity (unless such sensitivity is consciously intended), as this reduces efficiency. As with other techniques and methodologies, when the results of MCDM (Multi-Criteria Decision Making) methods are excessively affected by changes in the input parameters, this behaviour is identified through sensitivity analysis. Oversensitivity is generally regarded as a problem across the MCDM family, which comprises more than 200 methods according to the current literature. MCDM methods are not sensitive to weight coefficients alone; they can also be sensitive to many other calculation parameters, such as the data type, normalization, fundamental equation, threshold value, and preference function. Many studies assess the degree of sensitivity simply by monitoring whether the ranking position of the best alternative changes. However, this is insufficient for understanding the nature of sensitivity, and more evidence is undoubtedly needed to gain insight into this matter. Observing the holistic change of all alternatives, rather than of a single alternative, provides the researcher with more reliable and generalizable evidence about the degree of sensitivity of the system. In this study, we assigned a fixed reference point to measure sensitivity with a more robust approach: we took the distance to the fixed point as the base reference while observing the changing MCDM results, and we calculated sensitivity to normalization in addition to sensitivity to weight coefficients. Furthermore, past MCDM studies tend to treat the existing data as the sole basis for sensitivity analysis and generalize from it readily. To show that the model proposed in this study is not a coincidence, an exploratory validation was performed, in addition to the graphics card selection problem, on another problem with a different set of data, alternatives, and criteria. We comparatively measured sensitivity using the relationship between MCDM-based performance and the static reference point, and we statistically measured sensitivity using four weighting methods and seven normalization techniques with the PROBID method. The striking result, confirmed by 56 different MCDM ranking findings, is this: in general, if the sensitivity of an MCDM method is high, its relationship to the fixed reference point is low; conversely, if the sensitivity is low, a high correlation with the reference point is produced. In short, uncontrolled hypersensitivity disrupts not only the ranking but also, as expected, external relations.
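The following sketch conveys the core idea of relating normalization-dependent MCDM scores to a fixed reference: several common normalizations feed a simple weighted-sum score whose Spearman correlation with a static reference value is reported. The decision matrix, weights, and reference definition are assumptions, not the PROBID-based experiment of the study.

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative decision matrix (rows: alternatives; columns: benefit criteria)
X = np.array([[250, 8, 16],
              [300, 12, 8],
              [280, 10, 12],
              [320, 6, 24]], dtype=float)
weights = np.array([0.4, 0.3, 0.3])

normalisations = {
    "min-max": lambda m: (m - m.min(0)) / (m.max(0) - m.min(0)),
    "vector":  lambda m: m / np.linalg.norm(m, axis=0),
    "sum":     lambda m: m / m.sum(0),
    "max":     lambda m: m / m.max(0),
}

# Fixed reference: here simply the raw weighted sum of the un-normalised data,
# standing in for the study's static reference-point distance.
reference = X @ weights

for name, f in normalisations.items():
    scores = f(X) @ weights                        # simple weighted-sum MCDM score
    rho, _ = spearmanr(scores, reference)
    print(f"{name:8s} correlation with fixed reference: {rho:+.2f}")
```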
In the evolution of blockchain technology, the traditional single-chain structure has faced significant challenges, including low throughput, high latency, and limited scalability. This paper focuses on leveraging multichain sharding technology to overcome these constraints and introduces a high-performance carbon cycle supply data sharing method based on a blockchain multichain framework. The aim is to address the difficulties encountered in traditional carbon data processing. The proposed method involves partitioning a consortium chain into multiple subchains and constructing a unique “child/parent” chain architecture, enabling cross-chain data access and significantly increasing throughput. Furthermore, the scheme enhances the security and processing capacity of subchains by dynamically increasing the number of validator broadcasting nodes and implementing parallel node operations within subchains. This approach effectively solves the problems of low throughput in single-chain blockchain networks and the challenges of cross-chain data sharing, realizing more efficient and scalable blockchain applications.
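To convey the child/parent chain idea, the sketch below anchors the latest block hash of a child (sub)chain on a parent chain so that cross-chain lookups can be verified; this is a conceptual toy under assumed data structures, not the consortium-chain implementation described in the paper.

```python
import hashlib
import json
import time

def block_hash(block):
    """Deterministic hash of a block's JSON representation."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class SubChain:
    """A child chain holding carbon-supply records for one domain."""
    def __init__(self, name):
        self.name, self.blocks = name, []

    def append(self, record):
        block = {"chain": self.name, "height": len(self.blocks),
                 "timestamp": time.time(), "record": record,
                 "prev": block_hash(self.blocks[-1]) if self.blocks else "0" * 64}
        self.blocks.append(block)
        return block_hash(block)

class ParentChain:
    """Parent chain anchoring the latest block hash of every child chain,
    which is what makes cross-chain data access verifiable."""
    def __init__(self):
        self.anchors = {}

    def anchor(self, subchain):
        self.anchors[subchain.name] = block_hash(subchain.blocks[-1])

    def verify(self, subchain):
        return self.anchors.get(subchain.name) == block_hash(subchain.blocks[-1])

emissions = SubChain("emissions")
emissions.append({"plant": "P1", "tCO2": 12.4})
parent = ParentChain()
parent.anchor(emissions)
print(parent.verify(emissions))   # True
```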