Bibliometric analysis is a quantitative research method employed to measure and assess the impact, structure, and trends within academic publications. It aims to uncover patterns, connections, and research gaps either within a specific field or across interdisciplinary domains. This study utilizes bibliometric methods to investigate research gaps within the digital business domain, focusing on qualitative insights identified in existing literature. A systematic literature review (SLR) approach is adopted to ensure a rigorous synthesis of relevant studies. The analysis follows three key phases: data collection, bibliometric evaluation, and data visualization. Through these phases, trends, thematic gaps, and areas for future exploration are identified, offering a clearer understanding of the evolution and direction of digital business research. The insights derived are intended to inform sustainable business practices, with implications for environmentally conscious business models, value-driven marketing strategies, and the integration of sustainable operations. Moreover, the findings highlight potential avenues for enhanced technological innovation and interdisciplinary collaboration in digital business. This study provides a robust framework for scholars seeking to explore uncharted areas within digital business and offers actionable guidance on key research themes requiring further investigation. The use of bibliometric tools ensures comprehensive coverage of existing literature and fosters the development of a coherent research agenda aligned with emerging trends in the field.
Container-based virtualization has emerged as a leading alternative to traditional virtual machine-based cloud architectures due to its lower overhead, enhanced scalability, and adaptability. Kubernetes, one of the most widely adopted open-source container orchestration platforms, facilitates dynamic resource allocation through the Horizontal Pod Autoscaler (HPA). This auto-scaling mechanism enables efficient deployment and management of microservices, allowing for rapid development of complex SaaS applications. However, recent studies have identified several vulnerabilities in auto-scaling systems, including brute force attacks, Denial-of-Service (DoS) attacks, and Yo-Yo attacks, which have led to significant performance degradation and unexpected downtimes. In response to these challenges, a novel approach is proposed to ensure uninterrupted deployment and enhanced resilience against such attacks. By leveraging Helm for deployment automation, Prometheus for metrics collection, and Grafana for real-time monitoring and visualisation, this framework improves the Quality of Service (QoS) in Kubernetes clusters. A primary focus is placed on achieving optimal resource utilisation while meeting Service Level Objectives (SLOs). The proposed architecture dynamically scales workloads in response to fluctuating demands and strengthens security against autoscaling-specific attacks. An on-premises implementation using Kubernetes and Docker containers demonstrates the feasibility of this approach by mitigating performance bottlenecks and preventing downtime. The contribution of this research lies in its ability to enhance system robustness and maintain service reliability under malicious conditions without compromising resource efficiency. This methodology ensures seamless scalability and secure operations, making it suitable for enterprise-level microservices and cloud-native applications.
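For readers unfamiliar with how the HPA decides when to add or remove pods, the following minimal Python sketch reproduces the documented replica-count formula (desired = ceil(current × currentMetric / targetMetric), clamped to the min/max bounds and subject to a tolerance band); the parameter values are illustrative and the sketch is not part of the proposed framework.

```python
import math

def hpa_desired_replicas(current_replicas: int,
                         current_metric: float,
                         target_metric: float,
                         min_replicas: int = 1,
                         max_replicas: int = 10,
                         tolerance: float = 0.1) -> int:
    """Reproduce the core Horizontal Pod Autoscaler calculation:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to the [min, max] replica bounds."""
    ratio = current_metric / target_metric
    # Within the tolerance band the HPA leaves the replica count unchanged.
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas
    desired = math.ceil(current_replicas * ratio)
    return max(min_replicas, min(max_replicas, desired))

# Example: 4 pods at 180m CPU against a 100m target scale out to 8 pods.
print(hpa_desired_replicas(4, 180, 100))  # -> 8
```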
The traditional manufacturing sector in China is increasingly challenged by rising labour costs and the diminishing demographic advantage. These issues exacerbate existing inefficiencies, such as limited value addition, high resource consumption, prolonged production cycles, inconsistent product quality, and inadequate automation. To address these challenges, a production scheduling framework is proposed, guided by three key objectives: the prioritisation of high-value orders, the reduction of total processing time, and the earliest possible completion of all orders. This study introduces a multi-objective constrained greedy model designed to optimise scheduling by balancing these objectives through maximum weight allocation, shortest processing time selection, and adherence to the earliest deadlines. The proposed approach incorporates comprehensive reward and penalty factors to account for deviations in performance, thus fostering a balance between operational efficiency and product quality. By implementing the optimised scheduling strategy, it is anticipated that significant improvements will be achieved in production efficiency, workforce motivation, product quality, and organisational reputation. The enhanced operational outcomes are expected to strengthen the core competitiveness of enterprises, particularly within the increasingly complex landscape of pull production systems. This research offers valuable insights for manufacturers seeking to transition towards more efficient, automated, and customer-centric production models, addressing both short-term operational challenges and long-term strategic objectives.
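The abstract does not specify the exact scheduling rules, so the sketch below is a hypothetical illustration of a greedy sequencing policy in the spirit of the three stated objectives: highest order weight first, ties broken by shortest processing time and earliest deadline, with a simple deadline-based reward/penalty term. The order data, tie-breaking rule, and scoring are assumptions made only for illustration.

```python
from dataclasses import dataclass

@dataclass
class Order:
    name: str
    weight: float          # order value / priority
    processing_time: float
    deadline: float

def greedy_schedule(orders: list[Order]) -> tuple[list[str], float]:
    """Toy greedy sequencing: prefer the highest weight, break ties by the
    shortest processing time, then by the earliest deadline; accumulate a
    simple reward or penalty depending on whether the deadline is met."""
    sequence = sorted(orders, key=lambda o: (-o.weight, o.processing_time, o.deadline))
    clock, score, plan = 0.0, 0.0, []
    for o in sequence:
        clock += o.processing_time
        score += o.weight if clock <= o.deadline else -o.weight  # reward or penalty
        plan.append(o.name)
    return plan, score

orders = [Order("A", 5, 2, 6), Order("B", 8, 4, 4), Order("C", 3, 1, 10)]
print(greedy_schedule(orders))  # -> (['B', 'A', 'C'], 16.0)
```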
Logistics performance plays a pivotal role in fostering economic growth and enhancing global competitiveness. This study aims to evaluate the logistics performance of G8 nations through multi-criteria decision-making (MCDM) models. Standard Deviation (SD) has been applied to determine the weights of evaluation criteria, while the Alternative Ranking Order Method Accounting for Two-Step Normalization (AROMAN) has been employed to rank the countries based on their performance. The findings indicate that Timeliness emerges as the most critical factor influencing logistics efficiency. Among the G8 nations, Germany achieves the highest logistics performance, reflecting the robustness of its logistical infrastructure and operational efficiency. The results reinforce the premise that logistics performance is instrumental to both international trade and economic competitiveness. Nations demonstrating strong logistical capabilities are better positioned to excel in global markets, while those with underdeveloped logistics systems may face increased economic vulnerabilities. Enhancing logistical frameworks, including infrastructure and systems, is therefore essential for nations striving to improve their global standing. The insights presented underscore the importance of strategic investment in logistics infrastructure as a key policy instrument for enhancing economic resilience and international trade potential.
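As an illustration of the SD weighting step, the following Python sketch assigns each criterion a weight proportional to the standard deviation of its min-max normalised column; the use of min-max normalisation and the country scores are assumptions, and the AROMAN ranking step is not reproduced here.

```python
import numpy as np

def sd_weights(matrix: np.ndarray) -> np.ndarray:
    """Standard Deviation (SD) objective weighting: min-max normalise each
    criterion column, then set each weight proportional to the column's
    standard deviation."""
    col_min, col_max = matrix.min(axis=0), matrix.max(axis=0)
    norm = (matrix - col_min) / np.where(col_max > col_min, col_max - col_min, 1)
    sd = norm.std(axis=0, ddof=0)
    return sd / sd.sum()

# Hypothetical scores of four countries on three LPI-style criteria.
X = np.array([[3.9, 4.1, 4.2],
              [3.5, 3.8, 4.0],
              [4.2, 4.0, 4.3],
              [3.7, 3.6, 3.9]])
print(sd_weights(X).round(3))
```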
The efficiency of utility vehicle fleets in municipal waste management plays a crucial role in enhancing the sustainability and effectiveness of non-hazardous waste disposal systems. This research investigates the operational performance of a local utility company's vehicle fleet, with a specific focus on waste separation at the source and its implications for meeting environmental standards in Europe and beyond. The study aims to identify the most efficient vehicle within the fleet, contributing to broader goals of environmental preservation and waste reduction, with a long-term vision of achieving "zero waste". Efficiency was evaluated using Data Envelopment Analysis (DEA), where key input parameters included fuel costs, regular maintenance expenses, emergency repair costs, and the number of minor accidents or damages. The output parameter was defined as the vehicle's working hours. Following the DEA results, the Criteria Importance Through Intercriteria Correlation (CRITIC) method was employed to assign weightings to the criteria, ensuring an accurate reflection of their relative importance. The Measurement of Alternatives and Ranking according to Compromise Solution (MARCOS) method was then applied to rank the vehicles based on their overall efficiency. The analysis, conducted over a five-year period (2019-2023), demonstrated that Vehicle 3 (MAN T32-J-339) achieved the highest operational efficiency, particularly in 2020. These findings underscore the potential for optimising fleet performance in waste management systems, contributing to a cleaner urban environment and aligning with global sustainability objectives. The proposed model provides a robust framework for future applications in similar municipal settings, supporting the transition towards more eco-friendly waste management practices.
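A minimal sketch of the CRITIC weighting step is given below: each criterion's weight combines its contrast (standard deviation) with its conflict with the other criteria (one minus the pairwise correlation). The vehicle figures are hypothetical, and the DEA and MARCOS stages of the study are not reproduced.

```python
import numpy as np

def critic_weights(matrix: np.ndarray) -> np.ndarray:
    """CRITIC weighting: a criterion's weight grows with its contrast
    (standard deviation) and with its conflict with the other criteria
    (1 - Pearson correlation), after min-max normalisation."""
    col_min, col_max = matrix.min(axis=0), matrix.max(axis=0)
    norm = (matrix - col_min) / np.where(col_max > col_min, col_max - col_min, 1)
    sigma = norm.std(axis=0, ddof=0)
    corr = np.corrcoef(norm, rowvar=False)
    info = sigma * (1 - corr).sum(axis=0)   # information content C_j
    return info / info.sum()

# Hypothetical yearly figures for four vehicles: fuel cost, repair cost, working hours.
X = np.array([[5200, 800, 1900],
              [4800, 650, 2100],
              [5600, 900, 1750],
              [5000, 700, 2000]], dtype=float)
print(critic_weights(X).round(3))
```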
This study investigates the application of Multi-Criteria Decision Analysis (MCDA) methods to the classification of research papers within a Systematic Literature Review (SLR). Distinctions are drawn between compensatory and non-compensatory MCDA approaches which, despite their differences, have often been applied interchangeably, creating a need to clarify their usage. To address this, the Entropy Weight Method (EWM), the Analytic Hierarchy Process (AHP), and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) were used to determine the parameters for ranking papers within an SLR portfolio. The ranking drew on publications from three major databases: Scopus, ScienceDirect, and Web of Science. From an initial yield of 267 articles, a final portfolio of 90 articles was established, highlighting not only the compensatory and non-compensatory classifications but also identifying methods that incorporate features of both. This nuanced categorization reveals the complexity and necessity of selecting an appropriate MCDA method based on the characteristics of the dataset, which may exhibit attributes of both approaches. The analysis further illuminated the geographical distribution of publications, leading contributors, thematic areas, and the prevalence of specific MCDA methods. This study underscores the importance of methodological precision in the application of MCDA to systematic reviews, providing a refined framework for evaluating academic literature.
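To make the EWM-TOPSIS pipeline concrete, the sketch below derives entropy weights from a small, hypothetical paper-scoring matrix and ranks the alternatives by the TOPSIS closeness coefficient; the criteria and scores are illustrative assumptions, and the AHP step is omitted.

```python
import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    """Entropy Weight Method: criteria whose values are more dispersed
    across alternatives carry more information and receive larger weights.
    Assumes strictly positive entries."""
    P = X / X.sum(axis=0)
    k = 1.0 / np.log(len(X))
    entropy = -k * (P * np.log(P)).sum(axis=0)
    d = 1 - entropy
    return d / d.sum()

def topsis(X: np.ndarray, w: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """TOPSIS closeness coefficient: distance to the ideal and anti-ideal
    solutions after vector normalisation and weighting."""
    V = w * X / np.linalg.norm(X, axis=0)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)

# Hypothetical paper-scoring matrix: citations, journal rank, recency (all benefit criteria).
X = np.array([[120, 0.9, 2021], [40, 0.7, 2023], [300, 0.8, 2019]], dtype=float)
w = entropy_weights(X)
print(topsis(X, w, benefit=np.array([True, True, True])).round(3))
```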
In the dynamic landscape of mobile technology, where options proliferate amid fluctuating features, diverse price points, and a wide range of specifications, selecting the optimal mobile phone becomes a formidable task for consumers. This complexity is further exacerbated by the ambiguity and uncertainty inherent in consumer preferences. This study deploys fuzzy hypersoft sets (FHSS) in conjunction with machine learning techniques to build a decision support system (DSS) that refines the mobile phone selection process. The proposed framework harnesses the synergy between FHSS and machine learning to navigate the multifaceted nature of consumer choices and the attributes of the available alternatives, offering a structured approach that aims to maximize consumer satisfaction while accommodating multiple determinants. The integration of FHSS is pivotal in managing the inherent ambiguity and uncertainty of consumer preferences, providing a comprehensive decision-making apparatus amid a large set of choices. The study presents an easy-to-navigate framework, supported by Python code and algorithms, to improve the selection process, yielding a personalized and engaging approach to mobile phone selection in an ever-evolving technological landscape. This study contributes to the extant literature by offering a novel framework that melds the principles of fuzzy set (FS) theory with advanced computational techniques, thereby facilitating a nuanced decision-making process in the realm of mobile phone selection.
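Since the study is described as being supported by Python code, a toy illustration of a fuzzy hypersoft-style evaluation is sketched below: each phone receives a fuzzy membership grade for every (attribute, sub-attribute) pair, and the grades are combined into a single satisfaction score. The attribute names, grades, weights, and the simple weighted-mean aggregation are hypothetical and do not reproduce the paper's actual algorithm or its machine learning component.

```python
# Toy fuzzy hypersoft-style evaluation: membership grades per (attribute, sub-attribute) pair.
memberships = {
    "PhoneA": {("camera", "low-light"): 0.8, ("battery", "fast-charge"): 0.6, ("price", "mid-range"): 0.9},
    "PhoneB": {("camera", "low-light"): 0.5, ("battery", "fast-charge"): 0.9, ("price", "mid-range"): 0.7},
}
weights = {("camera", "low-light"): 0.5, ("battery", "fast-charge"): 0.3, ("price", "mid-range"): 0.2}

def score(grades: dict) -> float:
    """Aggregate fuzzy grades into one satisfaction score (weighted mean)."""
    return sum(weights[k] * v for k, v in grades.items())

ranking = sorted(memberships, key=lambda p: score(memberships[p]), reverse=True)
print(ranking, [round(score(memberships[p]), 2) for p in ranking])
# -> ['PhoneA', 'PhoneB'] with scores [0.76, 0.66]
```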
In general, a stable and robust system should not respond in an overly sensitive or input-dependent manner (unless such behaviour is consciously designed and intended), as this reduces efficiency. As in other techniques, approaches, and methodologies, when the results of MCDM methods are excessively affected by changes in the input parameters, this behaviour is identified through sensitivity analyses. Oversensitivity is generally regarded as a problem across the MCDM (Multi-Criteria Decision Making) family of methods, which comprises more than 200 members according to the current literature. MCDM methods are not sensitive to weight coefficients alone; they can also be sensitive to many other calculation parameters, such as data type, normalization, fundamental equation, threshold value, and preference function. Many studies that attempt to gauge the degree of sensitivity simply monitor whether the ranking position of the best alternative changes. However, this is insufficient for understanding the nature of sensitivity, and more evidence is undoubtedly needed to gain insight into the matter. Observing the holistic change of all alternatives, rather than of a single alternative, provides the researcher with more reliable and generalizable evidence about the degree of sensitivity of the system. In this study, we assigned a fixed reference point to measure sensitivity with a more robust approach: the distance to this fixed point served as the base reference while the changing MCDM results were observed. We calculated sensitivity to normalization, not just sensitivity to weight coefficients. In addition, past MCDM studies accept the existing data as the only basis for sensitivity analysis and generalize readily. To show that the model proposed in this study is not a coincidence, an exploratory validation was performed on a second problem with a different set of data, alternatives, and criteria, in addition to the graphics card selection problem. We comparatively measured sensitivity using the relationship between MCDM-based performance and the static reference point, statistically assessing sensitivity with four weighting methods and seven normalization techniques applied to the PROBID method. The striking result, confirmed by 56 different MCDM ranking findings, is that, in general, if the sensitivity of an MCDM method is high, its relationship to the fixed reference point is low; conversely, if the sensitivity is low, a high correlation with the reference point is obtained. In short, uncontrolled hypersensitivity disrupts not only the ranking but also external relations, as expected.
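The measurement idea can be illustrated with a small Python sketch: several common normalisation techniques are applied to the same hypothetical decision matrix, each resulting score vector is correlated (Spearman) with scores derived from the distance to a fixed reference point, and the correlations are compared. A plain weighted sum stands in for PROBID, a single weight vector stands in for the four weighting methods, and the data and choice of reference point are assumptions made only for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

def normalise(X, method):
    """A few common MCDM normalisation techniques (benefit criteria assumed)."""
    if method == "vector":
        return X / np.linalg.norm(X, axis=0)
    if method == "max":
        return X / X.max(axis=0)
    if method == "minmax":
        return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    if method == "sum":
        return X / X.sum(axis=0)
    raise ValueError(method)

# Hypothetical graphics-card data: higher is better for every column.
X = np.array([[1800, 12, 7.1], [2100, 10, 8.3], [1600, 16, 6.9], [1950, 12, 7.8]], dtype=float)
w = np.array([0.5, 0.3, 0.2])   # one fixed weight vector; the study varies four weighting methods

# Fixed reference point: the ideal (column-wise maximum) of the raw data;
# alternatives are scored by their scaled distance to it (smaller = better, hence the minus sign).
ref = X.max(axis=0)
ref_scores = -np.linalg.norm((X - ref) / X.max(axis=0), axis=1)

for method in ["vector", "max", "minmax", "sum"]:
    scores = (w * normalise(X, method)).sum(axis=1)   # simple weighted sum as a stand-in for PROBID
    rho, _ = spearmanr(scores, ref_scores)
    print(f"{method:7s}  rank correlation with fixed reference point: {rho:+.2f}")
```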
In the evolution of blockchain technology, the traditional single-chain structure has faced significant challenges, including low throughput, high latency, and limited scalability. This paper focuses on leveraging multichain sharding technology to overcome these constraints and introduces a high-performance carbon cycle supply data sharing method based on a blockchain multichain framework. The aim is to address the difficulties encountered in traditional carbon data processing. The proposed method involves partitioning a consortium chain into multiple subchains and constructing a unique “child/parent” chain architecture, enabling cross-chain data access and significantly increasing throughput. Furthermore, the scheme enhances the security and processing capacity of subchains by dynamically increasing the number of validator broadcasting nodes and implementing parallel node operations within subchains. This approach effectively solves the problems of low throughput in single-chain blockchain networks and the challenges of cross-chain data sharing, realizing more efficient and scalable blockchain applications.
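Purely as a toy illustration of the "child/parent" idea (not the paper's implementation), the sketch below has a parent chain keep a registry of subchains and route cross-chain read requests to the subchain holding the requested carbon-supply record; all names, the key scheme, and the in-memory ledgers are hypothetical stand-ins for consortium-chain nodes.

```python
# Toy child/parent chain routing: the parent chain maps key prefixes to subchains (shards).
class SubChain:
    def __init__(self, name):
        self.name, self.ledger = name, {}

    def put(self, key, record):
        self.ledger[key] = record

class ParentChain:
    def __init__(self):
        self.registry = {}            # key prefix -> subchain

    def register(self, prefix, subchain):
        self.registry[prefix] = subchain

    def cross_chain_read(self, key):
        shard = self.registry[key.split(":")[0]]
        return shard.name, shard.ledger.get(key)

parent = ParentChain()
emissions, logistics = SubChain("emissions-subchain"), SubChain("logistics-subchain")
parent.register("em", emissions)
parent.register("lg", logistics)
emissions.put("em:factory-7", {"tCO2e": 41.3})
print(parent.cross_chain_read("em:factory-7"))
```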
The cold chain industry plays a pivotal role in ensuring the quality and safety of temperature-sensitive products throughout their journey from production to consumption. Central to this process is the effective monitoring of temperature fluctuations, which directly impacts product integrity. With an array of temperature monitoring devices available in the market, selecting the most suitable option becomes a critical task for organizations operating within the cold chain. This paper presents a comprehensive analysis of seven prominent temperature monitoring devices utilized in the cold chain industry. Through a systematic evaluation process, each device is rigorously assessed across six key criteria groups: price, accuracy, usability, monitoring and reporting capabilities, flexibility, and capability. A total of 23 independent metrics are considered within these criteria, providing a holistic view of each device's performance. Building upon this analysis, a robust decision support model is proposed to facilitate the selection process for organizations. The model integrates the findings from the evaluation, allowing stakeholders to make informed decisions based on their specific requirements and priorities. Notably, the Chemical Time Temperature Integrators (CTTI) emerge as the top-ranked device, demonstrating superior performance across multiple criteria. The implications of this research extend beyond device selection, offering valuable insights for enhancing cold chain efficiency and product quality. By leveraging the decision support model presented in this study, organizations can streamline their temperature monitoring processes, mitigate risks associated with temperature excursions, and ultimately optimize their cold chain operations. This study serves as a foundation for further research in the field of cold chain management, paving the way for advancements in temperature monitoring technology and strategies. Future studies may explore additional criteria or expand the analysis to include a broader range of devices, contributing to ongoing efforts aimed at improving cold chain sustainability and reliability.
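The abstract does not name the underlying aggregation model, so the following sketch illustrates a generic two-level scoring scheme consistent with the described structure: metric scores are first averaged within each of the six criteria groups, and the group scores are then combined with group weights. The group weights, the second device name, and all scores are hypothetical, chosen only to show the mechanics.

```python
# Minimal two-level scoring sketch: per-group scores combined with group weights.
group_weights = {"price": 0.20, "accuracy": 0.25, "usability": 0.15,
                 "monitoring": 0.20, "flexibility": 0.10, "capability": 0.10}

# Each device: per-group scores already averaged over that group's metrics (0-1 scale).
devices = {
    "CTTI":       {"price": 0.9, "accuracy": 0.8, "usability": 0.7,
                   "monitoring": 0.8, "flexibility": 0.7, "capability": 0.8},
    "DataLogger": {"price": 0.6, "accuracy": 0.9, "usability": 0.8,
                   "monitoring": 0.7, "flexibility": 0.6, "capability": 0.7},
}

def overall(scores):
    """Weighted sum of group scores."""
    return sum(group_weights[g] * s for g, s in scores.items())

for name, scores in sorted(devices.items(), key=lambda kv: overall(kv[1]), reverse=True):
    print(f"{name:10s} {overall(scores):.3f}")
```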
In multi-criteria decision-making (MCDM), accurately quantifying qualitative data and simulating real-world scenarios remains a significant challenge, particularly in the presence of inherent imprecision and incompleteness of information. Fuzzy logic, recognized for its capacity to model uncertainty and ambiguity, has emerged as a pivotal theory in decision-making processes. This study introduces an enhancement to the Defining Interrelationships Between Ranked Criteria II (DIBR II) method, employing triangular fuzzy numbers with variable confidence intervals to determine the criteria weight coefficients, which are essential for assessing the significance of criteria and their impact on final decisions. The enhanced method, hereafter referred to as Fuzzy-DIBR II (F-DIBR II), is elaborated through a comprehensive description of its algorithmic steps, underscored by a numerical example that highlights its potential. F-DIBR II is validated through a comparative analysis against the traditional DIBR II approach, with particular emphasis on its application within the Fuzzy Complex Proportional Assessment (COPRAS) framework for evaluating sustainable mobility measures. This focal point not only reaffirms the necessity of integrating fuzzy logic into the DIBR II methodology but also validates its practical applicability in addressing real-world issues. The contributions of this research extend beyond theoretical enhancements of fuzzy theory within the MCDM landscape, offering tangible implications for the application of F-DIBR II in sustainable mobility analyses.
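As a minimal illustration of the fuzzy arithmetic underlying such an extension, the sketch below defines triangular fuzzy numbers with approximate addition and multiplication and a centroid defuzzification; the significance ratio and weight values are hypothetical, and the actual F-DIBR II algorithmic steps are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class TFN:
    """Triangular fuzzy number (l, m, u) with standard approximate arithmetic."""
    l: float
    m: float
    u: float

    def __add__(self, other):
        return TFN(self.l + other.l, self.m + other.m, self.u + other.u)

    def __mul__(self, other):
        return TFN(self.l * other.l, self.m * other.m, self.u * other.u)

    def defuzzify(self) -> float:
        # Centroid defuzzification, one common choice among several.
        return (self.l + self.m + self.u) / 3

# Hypothetical fuzzy significance ratio between two adjacent criteria, in the
# spirit of the pairwise comparisons used by DIBR II-style weighting chains.
r12 = TFN(1.1, 1.3, 1.5)        # criterion 1 judged slightly more important than criterion 2
w2 = TFN(0.28, 0.30, 0.32)      # hypothetical fuzzy weight of criterion 2
w1 = r12 * w2                   # fuzzy weight of criterion 1 implied by the ratio
print(round(w1.defuzzify(), 3), round((w1 + w2).defuzzify(), 3))  # -> 0.393 0.693
```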