Volume 3, Issue 2, 2024

Abstract

The cold chain industry plays a pivotal role in ensuring the quality and safety of temperature-sensitive products throughout their journey from production to consumption. Central to this process is the effective monitoring of temperature fluctuations, which directly impacts product integrity. With an array of temperature monitoring devices available in the market, selecting the most suitable option becomes a critical task for organizations operating within the cold chain. This paper presents a comprehensive analysis of seven prominent temperature monitoring devices utilized in the cold chain industry. Through a systematic evaluation process, each device is rigorously assessed across six key criteria groups: price, accuracy, usability, monitoring and reporting capabilities, flexibility, and capability. A total of 23 independent metrics are considered within these criteria, providing a holistic view of each device's performance. Building upon this analysis, a robust decision support model is proposed to facilitate the selection process for organizations. The model integrates the findings from the evaluation, allowing stakeholders to make informed decisions based on their specific requirements and priorities. Notably, the Chemical Time Temperature Integrator (CTTI) emerges as the top-ranked device, demonstrating superior performance across multiple criteria. The implications of this research extend beyond device selection, offering valuable insights for enhancing cold chain efficiency and product quality. By leveraging the decision support model presented in this study, organizations can streamline their temperature monitoring processes, mitigate risks associated with temperature excursions, and ultimately optimize their cold chain operations. This study serves as a foundation for further research in the field of cold chain management, paving the way for advancements in temperature monitoring technology and strategies. Future studies may explore additional criteria or expand the analysis to include a broader range of devices, contributing to ongoing efforts aimed at improving cold chain sustainability and reliability.
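As a rough illustration of how such a criteria-based decision support model can aggregate scores into a ranking, the sketch below applies a simple weighted-sum aggregation over the six criteria groups. The device names other than CTTI, the weights, and the normalized scores are hypothetical placeholders rather than the study's evaluation data, and the weighted sum stands in for whichever aggregation the full model uses.

```python
# Minimal sketch of a weighted-sum decision support model for selecting a
# temperature monitoring device. Weights and scores are illustrative only.

# Assumed stakeholder priorities over the six criteria groups (sum to 1).
CRITERIA_WEIGHTS = {
    "price": 0.20,
    "accuracy": 0.20,
    "usability": 0.15,
    "monitoring_reporting": 0.20,
    "flexibility": 0.10,
    "capability": 0.15,
}

# Hypothetical per-criterion scores, already normalized to [0, 1] with
# higher meaning better (cost-type criteria such as price are inverted).
DEVICE_SCORES = {
    "CTTI":            {"price": 0.9, "accuracy": 0.8, "usability": 0.8,
                        "monitoring_reporting": 0.7, "flexibility": 0.8, "capability": 0.7},
    "RFID logger":     {"price": 0.6, "accuracy": 0.7, "usability": 0.7,
                        "monitoring_reporting": 0.9, "flexibility": 0.7, "capability": 0.8},
    "USB data logger": {"price": 0.8, "accuracy": 0.9, "usability": 0.6,
                        "monitoring_reporting": 0.5, "flexibility": 0.6, "capability": 0.6},
}

def rank_devices(scores, weights):
    """Return (device, weighted score) pairs sorted best-first."""
    totals = {
        device: sum(weights[c] * value for c, value in crit.items())
        for device, crit in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for device, total in rank_devices(DEVICE_SCORES, CRITERIA_WEIGHTS):
    print(f"{device}: {total:.3f}")
```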

Abstract


In the evolution of blockchain technology, the traditional single-chain structure has faced significant challenges, including low throughput, high latency, and limited scalability. This paper focuses on leveraging multichain sharding technology to overcome these constraints and introduces a high-performance carbon cycle supply data sharing method based on a blockchain multichain framework. The aim is to address the difficulties encountered in traditional carbon data processing. The proposed method involves partitioning a consortium chain into multiple subchains and constructing a unique “child/parent” chain architecture, enabling cross-chain data access and significantly increasing throughput. Furthermore, the scheme enhances the security and processing capacity of subchains by dynamically increasing the number of validator broadcasting nodes and implementing parallel node operations within subchains. This approach effectively solves the problems of low throughput in single-chain blockchain networks and the challenges of cross-chain data sharing, realizing more efficient and scalable blockchain applications.
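To make the "child/parent" chain idea concrete, the following is a highly simplified sketch in which a parent chain keeps a registry of subchains and routes cross-chain reads to them. The class names, data fields, and hashing scheme are illustrative assumptions rather than the paper's implementation, and consensus, validator broadcasting nodes, and parallel execution are omitted.

```python
# Highly simplified sketch of a "child/parent" multichain layout in which a
# parent chain keeps a subchain registry and answers cross-chain read requests.
# Names and structures are illustrative; validators and consensus are omitted.
import hashlib
import json
import time


class SubChain:
    """A child chain that stores batches of carbon supply records in blocks."""

    def __init__(self, chain_id):
        self.chain_id = chain_id
        self.blocks = []

    def append_block(self, records):
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {"prev": prev_hash, "time": time.time(), "records": records}
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        self.blocks.append(block)
        return block["hash"]


class ParentChain:
    """Registry of subchains; cross-chain access is routed through it."""

    def __init__(self):
        self.registry = {}

    def register(self, subchain):
        self.registry[subchain.chain_id] = subchain

    def cross_chain_read(self, chain_id, block_hash):
        chain = self.registry[chain_id]
        return next((b for b in chain.blocks if b["hash"] == block_hash), None)


# Usage: two subchains ingest record batches independently; the parent chain
# resolves a read against one of them by chain id and block hash.
parent = ParentChain()
emissions, offsets = SubChain("carbon-emissions"), SubChain("carbon-offsets")
parent.register(emissions)
parent.register(offsets)
block_hash = emissions.append_block([{"plant": "X", "tCO2e": 12.4}])
print(parent.cross_chain_read("carbon-emissions", block_hash)["records"])
```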

Abstract

In general, a stable and robust system should not respond to its inputs in an overly sensitive or dependent way (unless such behavior is deliberately designed), as this reduces efficiency. In Multi-Criteria Decision Making (MCDM) methods, as in other techniques and methodologies, the situation in which results are excessively affected by changes in the input parameters is identified through sensitivity analyses. Oversensitivity is generally regarded as a problem in the MCDM methodology family, which, according to the current literature, has more than 200 members. MCDM methods are not sensitive to weight coefficients alone; they can also be sensitive to many other calculation parameters, such as data type, normalization, fundamental equation, threshold value, and preference function. Many studies attempting to gauge the degree of sensitivity simply monitor whether the ranking position of the best alternative changes. However, this gives an incomplete picture of the nature of sensitivity, and more evidence is undoubtedly needed to gain insight into the matter. Observing the holistic change of all alternatives, rather than a single alternative, provides the researcher with more reliable and generalizable evidence, information, or assumptions about the degree of sensitivity of the system. In this study, we assigned a fixed reference point to measure sensitivity with a more robust approach: we took the distance to this fixed point as the base reference while observing the changing MCDM results. We calculated sensitivity to normalization, not just sensitivity to weight coefficients. In addition, past MCDM studies tend to treat the existing data as the only basis for sensitivity analysis and generalize readily from it. To show that the model proposed in this study is not a coincidence, we performed an exploratory validation, in addition to the graphics card selection problem, on another problem with a different set of data, alternatives, and criteria. We comparatively measured sensitivity using the relationship between MCDM-based performance and the static reference point, and we statistically measured sensitivity with four weighting methods and seven normalization techniques using the PROBID method. The striking result, confirmed across 56 different MCDM ranking findings, is this: in general, if the sensitivity of an MCDM method is high, its relationship to the fixed reference point is low; conversely, if the sensitivity is low, a high correlation with the reference point is produced. In short, uncontrolled hypersensitivity disrupts not only the ranking but also external relations, as expected.
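As a small, hypothetical illustration of the kind of normalization-sensitivity check described above, the sketch below ranks the same decision matrix under three normalization techniques and correlates each resulting ranking with a fixed reference ranking. A plain weighted-sum aggregation stands in for PROBID, and the matrix, weights, and reference ranking are invented for the example.

```python
# Sketch: measure sensitivity to normalization against a fixed reference
# ranking. A weighted-sum aggregation stands in for PROBID; the decision
# matrix, weights, and reference ranking are hypothetical.
import numpy as np

X = np.array([            # 4 alternatives x 3 benefit-type criteria
    [3500, 12.0, 8.5],
    [2900, 10.0, 9.1],
    [4100,  8.0, 7.2],
    [3300, 11.0, 8.0],
], dtype=float)
w = np.array([0.4, 0.35, 0.25])
reference_ranks = np.array([2, 1, 4, 3])   # fixed external reference ranking


def norm_vector(M):
    return M / np.linalg.norm(M, axis=0)


def norm_minmax(M):
    return (M - M.min(axis=0)) / (M.max(axis=0) - M.min(axis=0))


def norm_sum(M):
    return M / M.sum(axis=0)


def ranks_from_scores(scores):
    """Rank 1 = best (highest aggregated score)."""
    order = np.argsort(-scores)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(scores) + 1)
    return ranks


def spearman(r1, r2):
    d = r1 - r2
    n = len(r1)
    return 1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1))


for name, normalize in [("vector", norm_vector), ("min-max", norm_minmax), ("sum", norm_sum)]:
    ranks = ranks_from_scores(normalize(X) @ w)
    print(f"{name:8s} ranks={ranks} rho_vs_reference={spearman(ranks, reference_ranks):.2f}")
```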

Abstract


In the dynamic landscape of mobile technology, where options proliferate and are compounded by fluctuating features, diverse price points, and a plethora of specifications, selecting the optimal mobile phone becomes a formidable task for consumers. This complexity is further exacerbated by the intrinsic ambiguity and uncertainty characterizing consumer preferences. Addressed herein is the deployment of fuzzy hypersoft sets (FHSS) in conjunction with machine learning techniques to forge a decision support system (DSS) that refines the mobile phone selection process. The proposed framework harnesses the synergy between FHSS and machine learning to navigate the multifaceted nature of consumer choices and the attributes of the available alternatives, thereby offering a structured approach aimed at maximizing consumer satisfaction while accommodating various determinants. The integration of FHSS is pivotal in managing the inherent ambiguity and uncertainty of consumer preferences, providing a comprehensive decision-making apparatus amidst a plethora of choices. The study presents an easy-to-navigate framework, supported by Python code and algorithms, to improve the selection process. This methodology engenders a personalized and engaging avenue for mobile phone selection in an ever-evolving technological epoch. Professional terminology is applied consistently throughout this discourse and in subsequent sections of the study, underscoring the meticulous approach adopted to ensure clarity and precision. This study contributes to the extant literature by offering a novel framework that melds the principles of fuzzy set (FS) theory with advanced computational techniques, thereby facilitating a nuanced decision-making process in the realm of mobile phone selection.
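A minimal, hypothetical sketch of the FHSS idea behind such a system is given below: several disjoint attributes each have their own value sets, tuples of attribute values map to fuzzy memberships of candidate phones, and a user's preferences over the values are aggregated into a score. The phone names, attribute partitions, membership grades, and the min-based aggregation are illustrative assumptions, not the study's actual framework or code.

```python
# Minimal, hypothetical sketch of fuzzy hypersoft set (FHSS) scoring for
# phone selection. All names, memberships, and preferences are illustrative.
from itertools import product

phones = ["Phone A", "Phone B", "Phone C"]

# Attribute-value sets (the "hypersoft" part: several disjoint attributes).
attributes = {
    "price":   ["budget", "flagship"],
    "camera":  ["standard", "pro"],
    "battery": ["normal", "long-life"],
}

# FHSS mapping: each tuple of attribute values -> fuzzy set over the phones.
# Only a few tuples are populated here; unlisted tuples default to 0 membership.
fhss = {
    ("budget", "standard", "long-life"): {"Phone A": 0.8, "Phone B": 0.4, "Phone C": 0.3},
    ("flagship", "pro", "normal"):       {"Phone A": 0.2, "Phone B": 0.9, "Phone C": 0.7},
    ("flagship", "pro", "long-life"):    {"Phone A": 0.1, "Phone B": 0.8, "Phone C": 0.9},
}

# The user's fuzzy preference for each attribute value (how much it matters).
preference = {"budget": 0.9, "flagship": 0.3, "standard": 0.5,
              "pro": 0.6, "normal": 0.4, "long-life": 0.8}

def score(phone):
    """Aggregate memberships weighted by the user's interest in each tuple."""
    total = 0.0
    for combo in product(*attributes.values()):
        weight = min(preference[v] for v in combo)   # t-norm (min) over the tuple
        total += weight * fhss.get(combo, {}).get(phone, 0.0)
    return total

for p in sorted(phones, key=score, reverse=True):
    print(f"{p}: {score(p):.3f}")
```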

Open Access
Research article
A Novel Approach for Systematic Literature Reviews Using Multi-Criteria Decision Analysis
Vilmar Steffen,
Maiquiel Schmidt de Oliveira,
Flavio Trojan
Available online: 05-22-2024

Abstract

This study investigates the application of Multi-Criteria Decision Analysis (MCDA) methods to the classification of research papers within a Systematic Literature Review (SLR). Distinctions are drawn between compensatory and non-compensatory MCDA approaches, which, despite their distinctiveness, have often been applied interchangeably, leading to a need for clarification in their usage. To address this, the methods of Entropy Weight Method (EWM), Analytic Hierarchy Process (AHP), and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) were utilized to determine the parameters for ranking papers within an SLR portfolio. The source of this ranking comprised publications from three major databases: Scopus, ScienceDirect, and Web of Science. From an initial yield of 267 articles, a final portfolio of 90 articles was established, highlighting not only the compensatory and non-compensatory classifications but also identifying methods that incorporate features of both. This nuanced categorization reveals the complexity and necessity of selecting an appropriate MCDA method based on the dataset characteristics, which may exhibit attributes of both approaches. The analysis further illuminated the geographical distribution of publications, leading contributors, thematic areas, and the prevalence of specific MCDA methods. This study underscores the importance of methodological precision in the application of MCDA to systematic reviews, providing a refined framework for evaluating academic literature.
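As an illustration of how entropy-derived weights and TOPSIS can combine to rank papers, the sketch below applies the Entropy Weight Method to a hypothetical papers-by-criteria matrix and then computes TOPSIS closeness coefficients. The criteria, values, and benefit-type assumption are invented for the example, and the AHP weighting step described in the abstract is omitted.

```python
# Sketch of an EWM + TOPSIS pipeline for ranking papers in an SLR portfolio.
# The decision matrix (papers x criteria such as citations, year, relevance)
# is hypothetical, and all criteria are treated as benefit-type.
import numpy as np

X = np.array([
    [120, 2021, 0.8],
    [ 35, 2023, 0.9],
    [260, 2018, 0.6],
    [ 80, 2022, 0.7],
], dtype=float)

# --- Entropy Weight Method (EWM): objective criteria weights ---
P = X / X.sum(axis=0)                               # column-wise proportions
E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))   # entropy per criterion
w = (1 - E) / (1 - E).sum()                         # diversification -> weights

# --- TOPSIS: closeness to ideal and anti-ideal solutions ---
R = X / np.linalg.norm(X, axis=0)                   # vector normalization
V = R * w                                           # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)          # benefit-type ideal points
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)

for i in np.argsort(-closeness):
    print(f"Paper {i + 1}: closeness = {closeness[i]:.3f}")
```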