Optimizing Railway Train Selection in Pakistan Using Confidence-Driven Intuitionistic Fuzzy Methods with Einstein-Based Operators
Abstract:
The optimization of railway train selection in Pakistan has become increasingly critical due to rapid population growth and rising travel demands. Despite efforts by the Railway Transport (RT) Department to enhance efficiency, productivity, and safety through policy reforms and infrastructure advancements, persistent challenges such as outdated technology, infrastructure bottlenecks, frequent delays, and inadequate maintenance continue to hinder progress. Addressing these issues is imperative to ensure sustainable, efficient, and resilient railway operations. Given the multifaceted and uncertain nature of railway system modeling and management, decision-making (DM) processes necessitate robust methodologies capable of handling imprecise and ambiguous data. In this study, an innovative DM framework is introduced, leveraging intuitionistic fuzzy sets (IFSs) as an advanced extension of fuzzy sets (FSs) to manage uncertainty and hesitation in complex scenarios. By employing Einstein t-norm and t-conorm-based operators, novel operational laws for intuitionistic fuzzy credibility numbers (IFCNs) are proposed. Three key aggregation techniques—Confidence Intuitionistic Fuzzy Credibility Einstein Weighted Averaging (CIFCEWA), Confidence Intuitionistic Fuzzy Credibility Einstein Ordered Weighted Averaging (CIFCEOWA), and Confidence Intuitionistic Fuzzy Credibility Einstein Hybrid Weighted Averaging (CIFCEHWA) operators—are developed to provide a structured approach for processing and analyzing intuitionistic fuzzy data. To evaluate the practical applicability and reliability of the proposed methodology, a structured DM algorithm is formulated and validated using a real-world railway train selection case study. The incorporation of confidence levels within the IFCN framework enhances DM precision by quantifying the degree of certainty, thereby reducing risk and improving reliability. The findings demonstrate that the proposed approach effectively addresses the inherent uncertainties in railway selection processes, leading to more informed and strategic planning. Furthermore, the applicability of IFCNs extends beyond railway systems, offering valuable insights for domains such as artificial intelligence, financial DM, management science, and engineering, where uncertainty plays a pivotal role.
1. Introduction
RT in Pakistan is a significant means of linking different cities, towns, and remote areas. In Pakistan, railways provide an essential service for moving goods and people. Although the system faces problems such as ageing infrastructure and a lack of modernization, efforts are being made to improve it. With renovation and investment, railways can support economic growth and better local connections, and they have the potential to play a much larger role in the country's development. Railway transportation plays a vital role in strengthening the economy and improving connections among places. In 2022, China's railways played a significant role in the transport system, carrying 1,472.9 billion passengers along with 1.512 billion tons of goods, while India's railways transported 6.48 billion passengers and 3,643.8 billion tons of goods. In Pakistan, the railway system played a vital role in 2022, transporting 35.7 million people and 8 million tons of goods. These figures show that the railway system is a key driver of transport and economic growth. However, railway accidents in Pakistan have risen steadily, from 117 in 2013 to 147 in 2020. This rise highlights the urgent need for better railway infrastructure and improved safety.
RT is the backbone of Pakistan's economy, playing a vital role in connecting cities, industries, and trade hubs across the country. It enables the efficient movement of goods and passengers, reducing costs and boosting economic growth. As a lifeline for commerce and connectivity, it remains crucial for national development. Railway operations, however, involve considerable uncertainty arising from factors such as ageing infrastructure, financial mismanagement, lack of modernization, overcrowding, and poor service quality. Incorporating uncertainty-handling tools, including FS theory, into RT processes can enhance DM, improve resource management, and help operators and stakeholders navigate the inherent complexities and uncertainties of the railway domain. A comprehensive strategy for RT focuses on improving efficiency, safety, and connectivity. This includes modernizing infrastructure, using advanced technologies for smoother operations, and adopting eco-friendly practices. Enhancing the customer experience through better services and affordable pricing is crucial. Expanding rail networks to connect remote areas boosts economic growth and accessibility. Strong policies and investments ensure sustainable and reliable RT for all.
Risk assessments are commonly carried out using three key methods: qualitative, quantitative, and semi-quantitative. The semi-quantitative method is often considered the most suitable because it combines the strengths of the other two, drawing on both expert opinion and data to assess risks [1], [2], [3], [4]. This method is structured yet flexible and is useful in several fields, such as engineering, healthcare, and finance. FS theory [5] provides a powerful framework for improving the DM process and resource management in RT by addressing the fuzziness and uncertainty inherent in the system. Its ability to handle vague and imperfect information enables more accurate modeling of complex railway dynamics. By accommodating uncertain variables such as weather conditions, demand fluctuations, and maintenance schedules, FS theory provides decision-makers with a robust tool for managing challenges. Its flexibility extends beyond RT, finding applications in diverse fields requiring nuanced data interpretation. The FS model therefore offers a valuable structure for modeling and addressing vagueness in RT systems, and its application can lead to more effective DM, improved resource utilization, and enhanced overall efficiency in the RT department. Fuzzy systems are used in many areas because they handle hesitation and unclear information well. However, their effectiveness may be limited where the data involve both satisfactory and unsatisfactory, or even contradictory, information simultaneously, requiring supplementary methods for comprehensive analysis. Atanassov [6] introduced IFSs as an extension of FSs. While an FS uses a membership degree to represent the membership of an element in a set, IFSs extend this concept by introducing a non-membership degree. In an IFS, each element can be presented as a pair $\left( \mu ,v \right)$, where $\mu$ stands for membership and $v$ for non-membership, with $\mu +v\le 1$.
Aggregation operators are essential in the DM process as they combine multiple inputs into a single representative value, enabling clear and balanced assessments and evaluations. By synthesizing diverse data, they support informed and effective choices in various fields. IFS has become a valuable tool for handling imprecision and hesitation in DM, making it widely used in various real-world problems [7], [8], [9], [10]. Hung and Chen [11] proposed a TOPSIS method incorporating entropy weight to solve the DM process under the IF environment, enhancing the precision of MADM. Xu [12] extended this by exploring fuzzy GDM problems, where attribute values are represented as intuitionistic fuzzy numbers (IFNs) and weighted according to diverse preference structures. Recent advancements include the introduction of Dombi norms, prioritizing variability in IFS applications [13]. Liu et al. [14] used Dombi operations with IF information to solve MAGDM problems by developing the Dombi Bonferroni mean method. Huang et al. [15] introduced a model called the Z-cloud rough number-based BWM-MABAC, which helps appraise various choices. Similarly, Xiao et al. [16] presented a DM model using q-rung orthopair fuzzy numbers (q-ROFN) with new scoring approaches, useful for selecting manufacturers. Huang et al. [17] proposed a failure mode and effect analysis method using T-spherical fuzzy numbers to enhance decision accuracy. Xiao et al. [18] developed the TODIM method, while Ye et al. [19] proposed a DM algorithm to handle vagueness and uncertainty in the decision process. Qiyas et al. [20] introduced a TOPSIS method based on credibility numbers. Yahya et al. [21], [22] explored the use of Frank techniques to analyze S-box structures, which play a key role in image encryption; they also investigated medical diagnosis by applying the fuzzy credibility Dombi Bonferroni mean operator to improve the accuracy of results. Qiyas et al. [23] further contributed by proposing an extended GRA approach that supports MCGDM based on geometric techniques using IFCNs. Rahman et al. [24], [25] added to this field by developing various methods that enhance DM in different scenarios. Together, these works highlight the power of fuzzy credibility concepts and advanced aggregation methods in solving complex problems across fields like encryption, medical diagnosis, and group DM. Wang and Liu [26], [27] presented averaging and geometric operators based on IFNs. Ozlu and Akta [28] introduced the correlation coefficient for the r, s, t-spherical hesitant fuzzy model and applied it to real-world problems. Later, Ozlu [29], [30], [31], [32], [33] introduced numerous innovative ideas in FS theory, expanding its applications. These contributions include Aczel–Alsina methods and probabilistic hesitant FSs. Moreover, he presented a single-valued neutrosophic type-2 hesitant fuzzy model and a picture type-2 hesitant fuzzy model, further refining uncertainty modeling. Another significant advancement was the introduction of bipolar-valued complex hesitant fuzzy Dombi approaches, enhancing DM frameworks. These developments have greatly improved fuzzy logic, offering more robust tools for handling imprecise and complex information.
Building on a previous study [20], which used credibility numbers to develop Dombi operational laws and corresponding methods, this research formulates Einstein operational laws for credibility numbers under a confidence level. The confidence level plays an important role in the DM process. Based on these laws, we present a series of Einstein operators that offer greater flexibility than the Dombi methods. Specifically, we present three operators, namely CIFCEWA, CIFCEOWA, and CIFCEHWA. By incorporating confidence levels, our approach ensures more flexible and reliable aggregation. This innovation contributes to more effective decision analysis in ambiguous environments.
The paper is structured as follows: Section 2 offers a review of existing approaches related to this work. Section 3 presents the Einstein norms and some related results. In Section 4, we introduce three new methods, namely CIFCEWA, CIFCEOWA, and CIFCEHWA, providing advanced tools for better analysis in DM. A real-life application is established in Section 5, showcasing the practicality of the proposed approaches. Section 6 presents a detailed practical example, illustrating the effectiveness and robustness of the methods. Section 7 compares the new approaches with some existing ones, highlighting their strengths and weaknesses. Section 8 presents the benefits and limitations of the newly developed approaches. Together, these sections provide a clear understanding of how the new methods perform and where they can be improved. Finally, Section 9 concludes the paper by summarizing the main findings, contributions, and implications, while also suggesting directions for future study and possible applications of the proposed approaches.
2. Preliminaries
This section introduces key concepts including FS, IFS, score, and accuracy measures using IFNs, helping to understand how imprecision and uncertainty are managed in real-life problems.
Definition 1: Let $F$ be a FS; mathematically, it can be defined over a universal set $U$ as [5]:
$F=\left\{ \left\langle u,{{\mu }_{F}}\left( u \right) \right\rangle \mid u\in U \right\}$, where ${{\mu }_{F}}\left( u \right):U\to \left[ 0,1 \right]$ is called the degree of membership.
Definition 2: An IFS $I$ can be described as a collection of functions that map a universal set $U$ to the closed interval [0, 1], defined as [6]: $I=\left\{ \left. u,{{x}_{I}}\left( u \right),{{r}_{I}}\left( u \right) \right|u\in U \right\}$, where ${{x}_{I}}(u):U\to [ 0,1]$ and ${{r}_{I}}(u):U\to [ 0,1]$ respectively denote the degrees of membership and non-membership, under the condition $0\le {{x}_{I}}(u)+{{r}_{I}}(u)\le 1$.
The score and accuracy functions provide a way to evaluate and rank IFNs. The score represents the overall preference tendency of an IFN, while the accuracy measures how precise and balanced it is between membership and non-membership. These functions support DM by quantifying the reliability and favorability of an IFN.
Definition 3: Let $\alpha =\left( x,r \right)$ be an IFN; then its score and accuracy can be presented mathematically as [26]: $scor\left( \alpha \right)=x-r$ and $acor\left( \alpha \right)=x+r$, where $scor\left( \alpha \right)\in [-1,1]$ and $acor\left( \alpha \right)\in [ 0,1]$, respectively.
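To make the ranking rule concrete, the following minimal Python sketch computes the score and accuracy of an IFN and uses them to compare two numbers; the function names are illustrative and not part of the cited definition.

```python
def ifn_score(x: float, r: float) -> float:
    """Score of an IFN (x, r): overall preference tendency, in [-1, 1]."""
    return x - r


def ifn_accuracy(x: float, r: float) -> float:
    """Accuracy of an IFN (x, r): amount of information carried, in [0, 1]."""
    return x + r


# Compare two IFNs: rank by score first, break ties by accuracy.
a, b = (0.6, 0.2), (0.5, 0.1)
print(ifn_score(*a), ifn_score(*b))        # both roughly 0.4 -> tie on score
print(ifn_accuracy(*a), ifn_accuracy(*b))  # 0.8 > 0.6 -> a ranks above b
```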
3. Einstein Operational Laws under IFCNs
The Einstein T-norm and T-conorm are fundamental ideas in fuzzy logic, offering powerful tools for modeling interactions between FSs. These norms are based on specific operational laws that enable the systematic combination of fuzzy information, making them integral to DM processes under hesitation. These operators provide a flexible framework for integrating multiple sources of information while preserving the inherent fuzziness of the data. Consequently, the Einstein norms contribute significantly to building robust systems capable of addressing complex real-world problems where ambiguity plays a critical role. Their application spans various domains, including artificial intelligence, data fusion, and computational intelligence.
Definition 4: Let $p$ and $q$ be any two real numbers in $\left[ 0,1 \right]$; then the Einstein t-norm $T$ and the Einstein t-conorm $S$ can be mathematically defined as:
\[T\left( p,q \right)=\frac{pq}{1+\left( 1-p \right)\left( 1-q \right)},\qquad S\left( p,q \right)=\frac{p+q}{1+pq}\]
where $T$ represents the Einstein t-norm (Einstein product) and $S$ represents the Einstein t-conorm (Einstein sum).
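For readers who prefer executable notation, a small Python sketch of these two norms is given below; the function names are ours and purely illustrative.

```python
def einstein_tnorm(p: float, q: float) -> float:
    """Einstein t-norm: T(p, q) = pq / (1 + (1 - p)(1 - q))."""
    return (p * q) / (1 + (1 - p) * (1 - q))


def einstein_tconorm(p: float, q: float) -> float:
    """Einstein t-conorm: S(p, q) = (p + q) / (1 + pq)."""
    return (p + q) / (1 + p * q)


# Boundary behaviour of any t-norm/t-conorm pair: T(p, 1) = p and S(p, 0) = p.
print(einstein_tnorm(0.6, 0.7))    # 0.42 / 1.12 = 0.375
print(einstein_tconorm(0.6, 0.7))  # 1.3 / 1.42, about 0.915
```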
Definition 5: Let ${{\alpha }_{j}}=\left( \left\langle {{x}_{j}},{{r}_{j}} \right\rangle ,\left\langle {{\xi }_{j}},{{\zeta }_{j}} \right\rangle \right)\left( j=1,2 \right)$ be IFCNs, then
i) $ {{\alpha }_{1}}\oplus {{\alpha }_{2}}\text{ }=\text{ }\left( \left\langle \frac{{{x}_{1}}+{{x}_{2}}}{1+{{x}_{1}}{{x}_{2}}},\frac{{{r}_{1}}+{{r}_{2}}}{1+{{r}_{1}}{{r}_{2}}} \right\rangle ,\left\langle \frac{{{\xi }_{1}}{{\xi }_{2}}}{1+\left( 1-{{\xi }_{1}} \right)\left( 1-{{\xi }_{2}} \right)},\frac{{{\zeta }_{1}}{{\zeta }_{2}}}{1+\left( 1-{{\zeta }_{1}} \right)\left( 1-{{\zeta }_{2}} \right)} \right\rangle \right)$
ii) $ {{\alpha }_{1}}\otimes {{\alpha }_{2}}\text{ }=\text{ }\left( \left\langle \frac{{{x}_{1}}{{x}_{2}}}{1+\left( 1-{{x}_{1}} \right)\left( 1-{{x}_{2}} \right)},\frac{{{r}_{1}}{{r}_{2}}}{1+\left( 1-{{r}_{1}} \right)\left( 1-{{r}_{2}} \right)} \right\rangle ,\left\langle \frac{{{\xi }_{1}}+{{\xi }_{2}}}{1+{{\xi }_{1}}{{\xi }_{2}}},\frac{{{\zeta }_{1}}+{{\zeta }_{2}}}{1+{{\zeta }_{1}}{{\zeta }_{2}}} \right\rangle \right)$
iii) $ \hbar \left( \alpha \right)\text{ }=\text{ }\left( \left\langle \frac{{{\left( 1+x \right)}^{\hbar }}-{{\left( 1-x \right)}^{\hbar }}}{{{\left( 1+x \right)}^{\hbar }}+{{\left( 1-x \right)}^{\hbar }}},\frac{{{\left( 1+r \right)}^{\hbar }}-{{\left( 1-r \right)}^{\hbar }}}{{{\left( 1+r \right)}^{\hbar }}+{{\left( 1-r \right)}^{\hbar }}} \right\rangle ,\left\langle \frac{2{{\xi }^{\hbar }}}{{{\left( 2-\xi \right)}^{\hbar }}+{{\xi }^{\hbar }}},\frac{2{{\zeta }^{\hbar }}}{{{\left( 2-\zeta \right)}^{\hbar }}+{{\zeta }^{\hbar }}} \right\rangle \right)$
iv) $ {{\left( \alpha \right)}^{\hbar }}\text{ }=\text{ }\left( \left\langle \frac{2{{x}^{\hbar }}}{{{\left( 2-x \right)}^{\hbar }}+{{x}^{\hbar }}},\frac{2{{r}^{\hbar }}}{{{\left( 2-r \right)}^{\hbar }}+{{r}^{\hbar }}} \right\rangle ,\left\langle \frac{{{\left( 1+\xi \right)}^{\hbar }}-{{\left( 1-\xi \right)}^{\hbar }}}{{{\left( 1+\xi \right)}^{\hbar }}+{{\left( 1-\xi \right)}^{\hbar }}},\frac{{{\left( 1+\zeta \right)}^{\hbar }}-{{\left( 1-\zeta \right)}^{\hbar }}}{{{\left( 1+\zeta \right)}^{\hbar }}+{{\left( 1-\zeta \right)}^{\hbar }}} \right\rangle \right)$
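The operational laws above translate directly into code. The sketch below implements the Einstein sum (law i) and the scalar multiple (law iii) for IFCNs represented as nested pairs; the data layout and function names are assumptions of this illustration, and the last two lines print the same IFCN (up to rounding), numerically illustrating the distributivity property proved in Theorem 2 below.

```python
from typing import Tuple

IFCN = Tuple[Tuple[float, float], Tuple[float, float]]  # (<x, r>, <xi, zeta>)


def ifcn_add(a: IFCN, b: IFCN) -> IFCN:
    """Einstein sum of two IFCNs (law i of Definition 5)."""
    (x1, r1), (xi1, z1) = a
    (x2, r2), (xi2, z2) = b
    return (
        ((x1 + x2) / (1 + x1 * x2), (r1 + r2) / (1 + r1 * r2)),
        ((xi1 * xi2) / (1 + (1 - xi1) * (1 - xi2)),
         (z1 * z2) / (1 + (1 - z1) * (1 - z2))),
    )


def ifcn_scalar(h: float, a: IFCN) -> IFCN:
    """Einstein scalar multiple h * a (law iii of Definition 5), with h > 0."""
    (x, r), (xi, z) = a

    def grow(t: float) -> float:
        # ((1 + t)^h - (1 - t)^h) / ((1 + t)^h + (1 - t)^h)
        return ((1 + t) ** h - (1 - t) ** h) / ((1 + t) ** h + (1 - t) ** h)

    def shrink(t: float) -> float:
        # 2 t^h / ((2 - t)^h + t^h)
        return 2 * t ** h / ((2 - t) ** h + t ** h)

    return ((grow(x), grow(r)), (shrink(xi), shrink(z)))


a = ((0.47, 0.38), (0.43, 0.51))
b = ((0.52, 0.35), (0.44, 0.49))
print(ifcn_scalar(2.0, ifcn_add(a, b)))
print(ifcn_add(ifcn_scalar(2.0, a), ifcn_scalar(2.0, b)))  # same result up to rounding
```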
Theorem 1: Let ${{\alpha }_{j}}=\left( \left\langle {{x}_{j}},{{r}_{j}} \right\rangle ,\left\langle {{\xi }_{j}},{{\zeta }_{j}} \right\rangle \right)\left( j=1,2 \right)$ be IFCNs, then
i) ${{\alpha }_{1}}\cup {{\alpha }_{2}}={{\alpha }_{2}}\cup {{\alpha }_{1}}$
ii) ${{\alpha }_{1}}\cap {{\alpha }_{2}}={{\alpha }_{2}}\cap {{\alpha }_{1}}$
Proof:
i) Since ${{\alpha }_{j}}=\left( \left\langle {{x}_{j}},{{r}_{j}} \right\rangle ,\left\langle {{\xi }_{j}},{{\zeta }_{j}} \right\rangle \right)\left( j=1,2 \right)$ are IFCNs, then
\[\begin{aligned} \alpha_1 \cup \alpha_2 & =\left(\left[\max \left\{x_1, x_2\right\}, \max \left\{r_1, r_2\right\}, \min \left\{\xi_1, \xi_2\right\}, \min \left\{\zeta_1, \zeta_2\right\}\right]\right) \\ & =\left(\left[\max \left\{x_2, x_1\right\}, \max \left\{r_2, r_1\right\}, \min \left\{\xi_2, \xi_1\right\}, \min \left\{\zeta_2, \zeta_1\right\}\right]\right) \\ & =\alpha_2 \cup \alpha_1\end{aligned}\]
ii) Again, we have
\[\begin{aligned} \alpha_1 \cap \alpha_2 & =\left(\left[\min \left\{x_1, x_2\right\}, \min \left\{r_1, r_2\right\}, \max \left\{\xi_1, \xi_2\right\}, \max \left\{\zeta_1, \zeta_2\right\}\right]\right) \\ & =\left(\left[\min \left\{x_2, x_1\right\}, \min \left\{r_2, r_1\right\}, \max \left\{\xi_2, \xi_1\right\}, \max \left\{\zeta_2, \zeta_1\right\}\right]\right) \\ & =\alpha_2 \cap \alpha_1 \end{aligned}\]
Theorem 2: Let ${{\alpha }_{j}}=\left( \left\langle {{x}_{j}},{{r}_{j}} \right\rangle ,\left\langle {{\xi }_{j}},{{\zeta }_{j}} \right\rangle \right)\left( j=1,2 \right)$ be IFCNs and $\hbar > 0$, then
i) ${{\alpha }_{1}}\oplus {{\alpha }_{2}}={{\alpha }_{2}}\oplus {{\alpha }_{1}}$
ii) ${{\alpha }_{1}}\otimes {{\alpha }_{2}}={{\alpha }_{2}}\otimes {{\alpha }_{1}}$
iii) $\hbar \left( {{\alpha }_{1}}\oplus {{\alpha }_{2}} \right)=\hbar \left( {{\alpha }_{1}} \right)\oplus \hbar \left( {{\alpha }_{2}} \right)$
iv) ${{\left( {{\alpha }_{1}}\otimes {{\alpha }_{2}} \right)}^{\hbar }}={{\left( {{\alpha }_{1}} \right)}^{\hbar }}\otimes {{\left( {{\alpha }_{2}} \right)}^{\hbar }}$
Proof: Since ${{\alpha }_{j}}=\left( \left\langle {{x}_{j}},{{r}_{j}} \right\rangle ,\left\langle {{\xi }_{j}},{{\zeta }_{j}} \right\rangle \right)\left( j=1,2 \right)$ are IFCNs, then we have:
i) By using Definition 5, we have
\[\begin{aligned} & {{\alpha }_{1}}\oplus {{\alpha }_{2}}\text{ }=\text{ }\left( \left\langle \frac{{{x}_{1}}+{{x}_{2}}}{1+{{x}_{1}}{{x}_{2}}},\frac{{{r}_{1}}+{{r}_{2}}}{1+{{r}_{1}}{{r}_{2}}} \right\rangle ,\left\langle \frac{{{\xi }_{1}}{{\xi }_{2}}}{1+\left( 1-{{\xi }_{1}} \right)\left( 1-{{\xi }_{2}} \right)},\frac{{{\zeta }_{1}}{{\zeta }_{2}}}{1+\left( 1-{{\zeta }_{1}} \right)\left( 1-{{\zeta }_{2}} \right)} \right\rangle \right) \\ & \text{ }=\text{ }\left( \left\langle \frac{{{x}_{2}}+{{x}_{1}}}{1+{{x}_{2}}{{x}_{1}}},\frac{{{r}_{2}}+{{r}_{1}}}{1+{{r}_{2}}{{r}_{1}}} \right\rangle ,\left\langle \frac{{{\xi }_{2}}{{\xi }_{1}}}{1+\left( 1-{{\xi }_{2}} \right)\left( 1-{{\xi }_{1}} \right)},\frac{{{\zeta }_{2}}{{\zeta }_{1}}}{1+\left( 1-{{\zeta }_{2}} \right)\left( 1-{{\zeta }_{1}} \right)} \right\rangle \right) \\ & \text{ =}{{\alpha }_{2}}\oplus {{\alpha }_{1}} \\ \end{aligned}\]
ii) Again, by Definition 5, we have:
\[\begin{aligned} & {{\alpha }_{1}}\otimes {{\alpha }_{2}}\text{ }=\text{ }\left( \left\langle \frac{{{x}_{1}}{{x}_{2}}}{1+\left( 1-{{x}_{1}} \right)\left( 1-{{x}_{2}} \right)},\frac{{{r}_{1}}{{r}_{2}}}{1+\left( 1-{{r}_{1}} \right)\left( 1-{{r}_{2}} \right)} \right\rangle ,\left\langle \frac{{{\xi }_{1}}+{{\xi }_{2}}}{1+{{\xi }_{1}}{{\xi }_{2}}},\frac{{{\zeta }_{1}}+{{\zeta }_{2}}}{1+{{\zeta }_{1}}{{\zeta }_{2}}} \right\rangle \right) \\ & \text{ }=\text{ }\left( \left\langle \frac{{{x}_{2}}{{x}_{1}}}{1+\left( 1-{{x}_{2}} \right)\left( 1-{{x}_{1}} \right)},\frac{{{r}_{2}}{{r}_{1}}}{1+\left( 1-{{r}_{2}} \right)\left( 1-{{r}_{1}} \right)} \right\rangle,\left\langle \frac{{{\xi }_{2}}+{{\xi }_{1}}}{1+{{\xi }_{2}}{{\xi }_{1}}},\frac{{{\zeta }_{2}}+{{\zeta }_{1}}}{1+{{\zeta }_{2}}{{\zeta }_{1}}} \right\rangle \right) \\ & \text{ }={{\alpha }_{2}}\otimes {{\alpha }_{1}}\text{ } \\ \end{aligned}\]
iii) Since $\hbar > 0$, by Definition 5 we have:
\[\begin{aligned}
& \hbar \left( {{\alpha }_{1}}\oplus {{\alpha }_{2}} \right) \\
& =\left( \left\langle \frac{\prod\limits_{j=1}^{2}{{{\left( 1+{{x}_{j}} \right)}^{\hbar }}}-\prod\limits_{j=1}^{2}{{{\left( 1-{{x}_{j}} \right)}^{\hbar }}}}{\prod\limits_{j=1}^{2}{{{\left( 1+{{x}_{j}} \right)}^{\hbar }}}+\prod\limits_{j=1}^{2}{{{\left( 1-{{x}_{j}} \right)}^{\hbar }}}},\frac{\prod\limits_{j=1}^{2}{{{\left( 1+{{r}_{j}} \right)}^{\hbar }}}-\prod\limits_{j=1}^{2}{{{\left( 1-{{r}_{j}} \right)}^{\hbar }}}}{\prod\limits_{j=1}^{2}{{{\left( 1+{{r}_{j}} \right)}^{\hbar }}}+\prod\limits_{j=1}^{2}{{{\left( 1-{{r}_{j}} \right)}^{\hbar }}}} \right\rangle ,\left\langle \frac{2\prod\limits_{j=1}^{2}{\xi _{j}^{\hbar }}}{\prod\limits_{j=1}^{2}{{{\left( 2-{{\xi }_{j}} \right)}^{\hbar }}}+\prod\limits_{j=1}^{2}{\xi _{j}^{\hbar }}},\frac{2\prod\limits_{j=1}^{2}{\zeta _{j}^{\hbar }}}{\prod\limits_{j=1}^{2}{{{\left( 2-{{\zeta }_{j}} \right)}^{\hbar }}}+\prod\limits_{j=1}^{2}{\zeta _{j}^{\hbar }}} \right\rangle \right) \\
& =\left( \left\langle \frac{{{\left( 1+{{x}_{1}} \right)}^{\hbar }}-{{\left( 1-{{x}_{1}} \right)}^{\hbar }}}{{{\left( 1+{{x}_{1}} \right)}^{\hbar }}+{{\left( 1-{{x}_{1}} \right)}^{\hbar }}},\frac{{{\left( 1+{{r}_{1}} \right)}^{\hbar }}-{{\left( 1-{{r}_{1}} \right)}^{\hbar }}}{{{\left( 1+{{r}_{1}} \right)}^{\hbar }}+{{\left( 1-{{r}_{1}} \right)}^{\hbar }}} \right\rangle ,\left\langle \frac{2\xi _{1}^{\hbar }}{{{\left( 2-{{\xi }_{1}} \right)}^{\hbar }}+\xi _{1}^{\hbar }},\frac{2\zeta _{1}^{\hbar }}{{{\left( 2-{{\zeta }_{1}} \right)}^{\hbar }}+\zeta _{1}^{\hbar }} \right\rangle \right)\oplus \\
& \quad \left( \left\langle \frac{{{\left( 1+{{x}_{2}} \right)}^{\hbar }}-{{\left( 1-{{x}_{2}} \right)}^{\hbar }}}{{{\left( 1+{{x}_{2}} \right)}^{\hbar }}+{{\left( 1-{{x}_{2}} \right)}^{\hbar }}},\frac{{{\left( 1+{{r}_{2}} \right)}^{\hbar }}-{{\left( 1-{{r}_{2}} \right)}^{\hbar }}}{{{\left( 1+{{r}_{2}} \right)}^{\hbar }}+{{\left( 1-{{r}_{2}} \right)}^{\hbar }}} \right\rangle ,\left\langle \frac{2\xi _{2}^{\hbar }}{{{\left( 2-{{\xi }_{2}} \right)}^{\hbar }}+\xi _{2}^{\hbar }},\frac{2\zeta _{2}^{\hbar }}{{{\left( 2-{{\zeta }_{2}} \right)}^{\hbar }}+\zeta _{2}^{\hbar }} \right\rangle \right) \\
& =\hbar \left( {{\alpha }_{1}} \right)\oplus \hbar \left( {{\alpha }_{2}} \right) \\
\end{aligned}\]
iv) Again, by Definition 5, where $\hbar > 0$, we have:
\[\begin{aligned}
& {{\left( {{\alpha }_{1}}\otimes {{\alpha }_{2}} \right)}^{\hbar }} \\
& =\left( \left\langle \frac{2\prod\limits_{j=1}^{2}{x_{j}^{\hbar }}}{\prod\limits_{j=1}^{2}{{{\left( 2-{{x}_{j}} \right)}^{\hbar }}}+\prod\limits_{j=1}^{2}{x_{j}^{\hbar }}},\frac{2\prod\limits_{j=1}^{2}{r_{j}^{\hbar }}}{\prod\limits_{j=1}^{2}{{{\left( 2-{{r}_{j}} \right)}^{\hbar }}}+\prod\limits_{j=1}^{2}{r_{j}^{\hbar }}} \right\rangle ,\left\langle \frac{\prod\limits_{j=1}^{2}{{{\left( 1+{{\xi }_{j}} \right)}^{\hbar }}}-\prod\limits_{j=1}^{2}{{{\left( 1-{{\xi }_{j}} \right)}^{\hbar }}}}{\prod\limits_{j=1}^{2}{{{\left( 1+{{\xi }_{j}} \right)}^{\hbar }}}+\prod\limits_{j=1}^{2}{{{\left( 1-{{\xi }_{j}} \right)}^{\hbar }}}},\frac{\prod\limits_{j=1}^{2}{{{\left( 1+{{\zeta }_{j}} \right)}^{\hbar }}}-\prod\limits_{j=1}^{2}{{{\left( 1-{{\zeta }_{j}} \right)}^{\hbar }}}}{\prod\limits_{j=1}^{2}{{{\left( 1+{{\zeta }_{j}} \right)}^{\hbar }}}+\prod\limits_{j=1}^{2}{{{\left( 1-{{\zeta }_{j}} \right)}^{\hbar }}}} \right\rangle \right) \\
& =\left( \left\langle \frac{2x_{1}^{\hbar }}{{{\left( 2-{{x}_{1}} \right)}^{\hbar }}+x_{1}^{\hbar }},\frac{2r_{1}^{\hbar }}{{{\left( 2-{{r}_{1}} \right)}^{\hbar }}+r_{1}^{\hbar }} \right\rangle ,\left\langle \frac{{{\left( 1+{{\xi }_{1}} \right)}^{\hbar }}-{{\left( 1-{{\xi }_{1}} \right)}^{\hbar }}}{{{\left( 1+{{\xi }_{1}} \right)}^{\hbar }}+{{\left( 1-{{\xi }_{1}} \right)}^{\hbar }}},\frac{{{\left( 1+{{\zeta }_{1}} \right)}^{\hbar }}-{{\left( 1-{{\zeta }_{1}} \right)}^{\hbar }}}{{{\left( 1+{{\zeta }_{1}} \right)}^{\hbar }}+{{\left( 1-{{\zeta }_{1}} \right)}^{\hbar }}} \right\rangle \right)\otimes \\
& \quad \left( \left\langle \frac{2x_{2}^{\hbar }}{{{\left( 2-{{x}_{2}} \right)}^{\hbar }}+x_{2}^{\hbar }},\frac{2r_{2}^{\hbar }}{{{\left( 2-{{r}_{2}} \right)}^{\hbar }}+r_{2}^{\hbar }} \right\rangle ,\left\langle \frac{{{\left( 1+{{\xi }_{2}} \right)}^{\hbar }}-{{\left( 1-{{\xi }_{2}} \right)}^{\hbar }}}{{{\left( 1+{{\xi }_{2}} \right)}^{\hbar }}+{{\left( 1-{{\xi }_{2}} \right)}^{\hbar }}},\frac{{{\left( 1+{{\zeta }_{2}} \right)}^{\hbar }}-{{\left( 1-{{\zeta }_{2}} \right)}^{\hbar }}}{{{\left( 1+{{\zeta }_{2}} \right)}^{\hbar }}+{{\left( 1-{{\zeta }_{2}} \right)}^{\hbar }}} \right\rangle \right) \\
& ={{\left( {{\alpha }_{1}} \right)}^{\hbar }}\otimes {{\left( {{\alpha }_{2}} \right)}^{\hbar }} \\
\end{aligned}\]
4. Aggregation Operators under IFCNs
In this section, we develop three advanced Einstein methods that utilize credibility numbers under a given confidence level. These approaches, namely the CIFCEWAA operator, CIFCEOWAA operator, and CIFCEHWAA operator, provide robust tools for handling uncertain information in DM processes.
Definition 6: Let ${{\alpha }_{j}}=\left( \left\langle {{x}_{j}},{{r}_{j}} \right\rangle ,\left\langle {{\xi }_{j}},{{\zeta }_{j}} \right\rangle \right)\left( 1\le j\le n \right)$ be a finite collection of IFCNs, and let $w=\left( {{w}_{1}},{{w}_{2}},...,{{w}_{n}} \right)$ be their weight vector with conditions ${{w}_{j}}\in \left[ 0,1 \right]$ and $\sum\limits_{j=1}^{n}{{{w}_{j}}=1}$. Moreover, let ${\hat{\lambda}_{j}}\left( 1\le j\le n \right)$ be the corresponding confidence levels with ${\hat{\lambda}_{j}}\in \left[ 0,1 \right]$; then the CIFCEWAA operator can be written as:
\[\begin{aligned} & \text{CIFCEWA}{{\text{A}}_{w}}\left( \left\langle {{\alpha }_{1}},{\hat{\lambda}_{1}} \right\rangle ,\left\langle {{\alpha }_{2}},{\hat{\lambda}_{2}} \right\rangle ,...,\left\langle {{\alpha }_{n}},{\hat{\lambda}_{n}} \right\rangle \right) \\ & =\left( \begin{aligned} & \left\langle \frac{\prod\limits_{j=1}^{n}{{{\left( 1+{{x}_{j}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}-\prod\limits_{j=1}^{n}{{{\left( 1-{{x}_{j}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}}{\prod\limits_{j=1}^{n}{{{\left( 1+{{x}_{j}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}+\prod\limits_{j=1}^{n}{{{\left( 1-{{x}_{j}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}},\frac{\prod\limits_{j=1}^{n}{{{\left( 1+{{r}_{j}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}-\prod\limits_{j=1}^{n}{{{\left( 1-{{r}_{j}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}}{\prod\limits_{j=1}^{n}{{{\left( 1+{{r}_{j}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}+\prod\limits_{j=1}^{n}{{{\left( 1-{{r}_{j}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}} \right\rangle , \\ & \left\langle \frac{2\prod\limits_{j=1}^{n}{{{\left( {{\xi }_{j}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}}{\prod\limits_{j=1}^{n}{{{\left( 2-{{\xi }_{j}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}+\prod\limits_{j=1}^{n}{{{\left( {{\xi }_{j}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}}},\frac{2\prod\limits_{j=1}^{n}{{{\left( {{\zeta }_{j}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}}{\prod\limits_{j=1}^{n}{{{\left( 2-{{\zeta }_{j}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}+\prod\limits_{j=1}^{n}{{{\left( {{\zeta }_{j}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}}} \right\rangle \\ \end{aligned} \right) \\ \end{aligned}\]
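A compact Python sketch of the CIFCEWAA operator follows; the data layout (an IFCN as a nested pair) and the function name are assumptions of this illustration, and `math.prod` requires Python 3.8 or later.

```python
import math
from typing import List, Tuple

IFCN = Tuple[Tuple[float, float], Tuple[float, float]]  # (<x, r>, <xi, zeta>)


def cifcewaa(alphas: List[IFCN], weights: List[float], lambdas: List[float]) -> IFCN:
    """Confidence IFC Einstein weighted averaging (Definition 6).

    `weights` sum to 1 and every confidence level in `lambdas` lies in [0, 1];
    each argument alpha_j enters with the combined exponent lambda_j * w_j."""
    exps = [l * w for l, w in zip(lambdas, weights)]

    def grow(values: List[float]) -> float:
        plus = math.prod((1 + v) ** e for v, e in zip(values, exps))
        minus = math.prod((1 - v) ** e for v, e in zip(values, exps))
        return (plus - minus) / (plus + minus)

    def shrink(values: List[float]) -> float:
        two = math.prod((2 - v) ** e for v, e in zip(values, exps))
        val = math.prod(v ** e for v, e in zip(values, exps))
        return 2 * val / (two + val)

    xs, rs = [a[0][0] for a in alphas], [a[0][1] for a in alphas]
    xis, zs = [a[1][0] for a in alphas], [a[1][1] for a in alphas]
    return ((grow(xs), grow(rs)), (shrink(xis), shrink(zs)))
```

In Step 2 of Section 6, for instance, each cell of the collective matrix would correspond to one call of this function over the four experts' IFCNs for that cell, with the expert weights passed as `weights` and the tabulated confidence levels as `lambdas`.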
The above-mentioned CIFCEWAA operator focuses solely on the weighted vector, ensuring that each criterion's importance is directly considered in the aggregation process. In contrast, the CIFCEOWAA operator considers only the ordered positions of the arguments, emphasizing the ranking of criteria rather than their individually assigned weights. This distinction allows CIFCEWAA to reflect individual importance, while CIFCEOWAA captures attitudinal characteristics in DM.
Definition 7: Let ${{\alpha }_{j}}=\left( \left\langle {{x}_{j}},{{r}_{j}} \right\rangle ,\left\langle {{\xi }_{j}},{{\zeta }_{j}} \right\rangle \right)\left( 1\le j\le n \right)$ be IFCNs, $w=\left( {{w}_{1}},{{w}_{2}},...,{{w}_{n}} \right)$ be their weight vector with ${{w}_{j}}\in \left[ 0,1 \right]$ and $\sum\limits_{j=1}^{n}{{{w}_{j}}=1}$, and ${\hat{\lambda}_{j}}\left( 1\le j\le n \right)$ be the corresponding confidence levels with ${\hat{\lambda}_{j}}\in \left[ 0,1 \right]$. Let $\left( \delta \left( 1 \right),\delta \left( 2 \right),...,\delta \left( n \right) \right)$ be the permutation of $\left( 1,2,...,n \right)$ for which ${{\alpha }_{\delta \left( j \right)}}$ is the $j$-th largest of the ${{\alpha }_{j}}$; then the CIFCEOWAA operator can be expressed as:
\[\begin{aligned} & \text{CIFCEOWA}{{\text{A}}_{w}}\left( \left\langle {{\alpha }_{1}},{\hat{\lambda}_{1}} \right\rangle ,\left\langle {{\alpha }_{2}},{\hat{\lambda}_{2}} \right\rangle ,...,\left\langle {{\alpha }_{n}},{\hat{\lambda}_{n}} \right\rangle \right) \\ & =\left( \begin{aligned} & \left\langle \frac{\prod\limits_{j=1}^{n}{{{\left( 1+{{x}_{\delta \left( j \right)}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}-\prod\limits_{j=1}^{n}{{{\left( 1-{{x}_{\delta \left( j \right)}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}}{\prod\limits_{j=1}^{n}{{{\left( 1+{{x}_{\delta \left( j \right)}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}+\prod\limits_{j=1}^{n}{{{\left( 1-{{x}_{\delta \left( j \right)}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}},\frac{\prod\limits_{j=1}^{n}{{{\left( 1+{{r}_{\delta \left( j \right)}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}-\prod\limits_{j=1}^{n}{{{\left( 1-{{r}_{\delta \left( j \right)}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}}{\prod\limits_{j=1}^{n}{{{\left( 1+{{r}_{\delta \left( j \right)}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}+\prod\limits_{j=1}^{n}{{{\left( 1-{{r}_{\delta \left( j \right)}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}} \right\rangle , \\ & \left\langle \frac{2\prod\limits_{j=1}^{n}{{{\left( {{\xi }_{\delta \left( j \right)}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}}{\prod\limits_{j=1}^{n}{{{\left( 2-{{\xi }_{\delta \left( j \right)}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}+\prod\limits_{j=1}^{n}{{{\left( {{\xi }_{\delta \left( j \right)}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}}},\frac{2\prod\limits_{j=1}^{n}{{{\left( {{\zeta }_{\delta \left( j \right)}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}}{\prod\limits_{j=1}^{n}{{{\left( 2-{{\zeta }_{\delta \left( j \right)}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}+\prod\limits_{j=1}^{n}{{{\left( {{\zeta }_{\delta \left( j \right)}} \right)}^{{\hat{\lambda}_{j}}{{w}_{j}}}}}}} \right\rangle \\ \end{aligned} \right) \\ \end{aligned}\]
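The ordered variant differs from the previous sketch only in a preliminary sorting step. In the hedged sketch below, the arguments are ordered by the score form used later in Section 6, each confidence level is assumed to travel with its own argument during reordering, and `cifcewaa` from the previous sketch is reused.

```python
def cifceowaa(alphas, weights, lambdas):
    """CIFCEOWAA sketch: reorder the (alpha, lambda) pairs, then Einstein-average.

    The positional weights w_j are applied to the descending-ordered arguments
    (Definition 7). Reuses `cifcewaa` from the sketch above."""
    def score(a):  # ordering score, in the form used in Section 6
        (x, r), (xi, z) = a
        return 0.25 * (x - r - xi - z)

    ordered = sorted(zip(alphas, lambdas), key=lambda p: score(p[0]), reverse=True)
    return cifcewaa([a for a, _ in ordered], weights, [l for _, l in ordered])
```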
The CIFCEHWAA operator is an advanced method that unifies and extends the ideas behind the CIFCEWAA and CIFCEOWAA approaches. By integrating elements of both the weighted averaging and the ordered weighted averaging approaches, it offers a hybrid methodology for handling IFCNs under varying confidence levels. Thus, the CIFCEHWAA operator is more flexible than the existing CIFCEWAA and CIFCEOWAA operators.
Definition 8: Let ${{\alpha }_{j}}=\left( \left\langle {{x}_{j}},{{r}_{j}} \right\rangle ,\left\langle {{\xi }_{j}},{{\zeta }_{j}} \right\rangle \right)\left( j=1,2,...,n \right)$ be IFCNs, and let $\varphi =\left( {{\varphi }_{1}},{{\varphi }_{2}},...,{{\varphi }_{n}} \right)$ and $w=\left( {{w}_{1}},{{w}_{2}},...,{{w}_{n}} \right)$ be the associated weight vector and the weight vector, respectively, where the components of each vector lie in $\left[ 0,1 \right]$ and sum to 1; then the CIFCEHWAA operator can be written as:
\[\begin{aligned} & \text{CIFCEHWA}{{\text{A}}_{\varphi ,w}}\left( \left\langle {{\alpha }_{1}},{{\hat{\lambda}}_{1}} \right\rangle ,\left\langle {{\alpha }_{2}},{{\hat{\lambda}}_{2}} \right\rangle ,...,\left\langle {{\alpha }_{n}},{{\hat{\lambda}}_{n}} \right\rangle \right) \\ & =\left( \begin{aligned} & \left\langle \frac{\prod\limits_{j=1}^{n}{{{\left( 1+{{x}_{{{{\dot{\alpha }}}_{\delta \left( j \right)}}}} \right)}^{{{\hat{\lambda}}_{j}}{{w}_{j}}}}}-\prod\limits_{j=1}^{n}{{{\left( 1-{{x}_{{{{\dot{\alpha }}}_{\delta \left( j \right)}}}} \right)}^{{{\hat{\lambda}}_{j}}{{w}_{j}}}}}}{\prod\limits_{j=1}^{n}{{{\left( 1+{{x}_{{{{\dot{\alpha }}}_{\delta \left( j \right)}}}} \right)}^{{{\hat{\lambda}}_{j}}{{w}_{j}}}}}+\prod\limits_{j=1}^{n}{{{\left( 1-{{x}_{{{{\dot{\alpha }}}_{\delta \left( j \right)}}}} \right)}^{{{\hat{\lambda}}_{j}}{{w}_{j}}}}}},\frac{\prod\limits_{j=1}^{n}{{{\left( 1+{{r}_{{{{\dot{\alpha }}}_{\delta \left( j \right)}}}} \right)}^{{{\hat{\lambda}}_{j}}{{w}_{j}}}}}-\prod\limits_{j=1}^{n}{{{\left( 1-{{r}_{{{{\dot{\alpha }}}_{\delta \left( j \right)}}}} \right)}^{{{\hat{\lambda}}_{j}}{{w}_{j}}}}}}{\prod\limits_{j=1}^{n}{{{\left( 1+{{r}_{{{{\dot{\alpha }}}_{\delta \left( j \right)}}}} \right)}^{{{\hat{\lambda}}_{j}}{{w}_{j}}}}}+\prod\limits_{j=1}^{n}{{{\left( 1-{{r}_{{{{\dot{\alpha }}}_{\delta \left( j \right)}}}} \right)}^{{{\hat{\lambda}}_{j}}{{w}_{j}}}}}} \right\rangle , \\ & \left\langle \frac{2\prod\limits_{j=1}^{n}{{{\left( {{\xi }_{{{{\dot{\alpha }}}_{\delta \left( j \right)}}}} \right)}^{{{\hat{\lambda}}_{j}}{{w}_{j}}}}}}{\prod\limits_{j=1}^{n}{{{\left( 2-{{\xi }_{{{{\dot{\alpha }}}_{\delta \left( j \right)}}}} \right)}^{{{\hat{\lambda}}_{j}}{{w}_{j}}}}+\prod\limits_{j=1}^{n}{{{\left( {{\xi }_{{{{\dot{\alpha }}}_{\delta \left( j \right)}}}} \right)}^{{{\hat{\lambda}}_{j}}{{w}_{j}}}}}}},\frac{2\prod\limits_{j=1}^{n}{{{\left( {{\zeta }_{\delta \left( j \right)}} \right)}^{{{\hat{\lambda}}_{j}}{{w}_{j}}}}}}{\prod\limits_{j=1}^{n}{{{\left( 2-{{\zeta }_{{{{\dot{\alpha }}}_{\delta \left( j \right)}}}} \right)}^{{{\hat{\lambda}}_{j}}{{w}_{j}}}}+\prod\limits_{j=1}^{n}{{{\left( {{\zeta }_{{{{\dot{\alpha }}}_{\delta \left( j \right)}}}} \right)}^{{{\hat{\lambda}}_{j}}{{w}_{j}}}}}}} \right\rangle \\ \end{aligned} \right) \\ \end{aligned}\]
where ${{\dot{\alpha }}_{\delta \left( j \right)}}$ denotes the $j$-th largest of the weighted values ${{\dot{\alpha }}_{j}}=n{{\varphi }_{j}}{{\alpha }_{j}}\left( j=1,2,...,n \right)$, computed with the scalar multiplication of Definition 5, and $n$ acts as the balancing coefficient.
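Under these conventions, a hedged sketch of the hybrid operator only needs to rescale the arguments before handing them to the ordered operator; it reuses `ifcn_scalar` and `cifceowaa` from the earlier sketches, and the variable names are illustrative.

```python
def cifcehwaa(alphas, assoc_weights, weights, lambdas):
    """CIFCEHWAA sketch: rescale each argument by n * phi_j, then order and average.

    `assoc_weights` (phi) rescale the arguments via the Einstein scalar law;
    `weights` (w) act on the ordered positions (Definition 8). Reuses
    `ifcn_scalar` and `cifceowaa` from the sketches above."""
    n = len(alphas)
    scaled = [ifcn_scalar(n * phi, a) for phi, a in zip(assoc_weights, alphas)]
    return cifceowaa(scaled, weights, lambdas)
```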
5. Real World Applications
The DM process is a rational process for choosing the best option from multiple alternatives. This process is fundamental to human behavior, influencing personal choices and professional strategies. An effective DM process requires a balance between rational analysis and intuitive judgement, often drawing on both personal experience and logical reasoning. It is important to adapt and refine decisions based on feedback and changing circumstances, as this supports continuous improvement. While individual decisions play a significant role, seeking advice or input from others can provide valuable perspectives and enhance the DM process. In complex situations, especially in business or organizational contexts, advanced methods such as intuitionistic fuzzy information are employed to handle vagueness and fuzziness. These approaches integrate credibility numbers with the new CIFCEWAA, CIFCEOWAA, and CIFCEHWAA methods to improve decision outcomes. By leveraging such approaches, experts can address ambiguity effectively, ensuring more informed and reliable choices. This comprehensive technique underscores the importance of blending intuition, structured methodologies, and adaptability in achieving optimal decisions.
In this algorithm, we consider a set of alternatives $\left\{ {{\aleph }_{i}}\mid i=1,2,...,m \right\}$ and a set of attributes or criteria $\left\{ {{\Im }_{j}}\mid j=1,2,...,n \right\}$ with weight vector $w=\left( {{w}_{1}},{{w}_{2}},...,{{w}_{n}} \right)$. Let $\left\{ {{D}_{r}}\mid r=1,2,...,k \right\}$ be a group of experts whose weights are $\omega =\left( {{\omega }_{1}},{{\omega }_{2}},...,{{\omega }_{k}} \right)$. The algorithm includes the following key steps:
Step 1: We begin by collecting the insights of the decision-makers in individual decision matrices, each of which arranges and appraises the alternatives against the attributes. This structured approach allows a clear comparison of the alternatives and ensures that all relevant factors are considered before informed decisions are made.
Step 2: All individual decision matrices are merged into a unified collective decision matrix that provides a complete overview of the factors being considered. This consolidated matrix allows a holistic comparison of all alternatives and attributes, ensuring a more efficient and accurate DM process by synthesizing and organizing the data into one logical structure.
Step 3: Compute the preference value of each alternative by applying one of the proposed operators to the corresponding row of the collective decision matrix, ensuring that all relevant criteria are integrated into a comprehensive evaluation.
Step 4: Compute the score value of each preference value, assigning a numerical value to each alternative that reflects its relative utility and importance within the given context.
Step 5: Rank the alternatives by sorting their score values in descending order; the alternative with the highest score is selected as the best option. A compact sketch of Steps 1 through 5 is given below.
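The following Python sketch wires the steps together; it reuses the `cifcewaa` sketch from Section 4, the index layout of the inputs is an assumption of this illustration, and the score form is the one applied in Section 6.

```python
def rank_alternatives(expert_matrices, expert_weights, expert_confidences,
                      criterion_weights):
    """Steps 1-5 of the algorithm, reusing the `cifcewaa` sketch above.

    expert_matrices[k][i][j] holds expert k's IFCN for alternative i under
    criterion j; expert_confidences[k][i][j] holds the matching confidence level."""
    n_alt, n_crit = len(expert_matrices[0]), len(expert_matrices[0][0])

    # Step 2: fuse the expert matrices cell by cell into one collective matrix.
    collective = [[cifcewaa([m[i][j] for m in expert_matrices],
                            expert_weights,
                            [c[i][j] for c in expert_confidences])
                   for j in range(n_crit)] for i in range(n_alt)]

    # Step 3: aggregate each row over the criteria into one preference value.
    # Confidence levels are taken as 1 here, since they were already absorbed
    # in Step 2 (an assumption of this sketch).
    prefs = [cifcewaa(collective[i], criterion_weights, [1.0] * n_crit)
             for i in range(n_alt)]

    # Steps 4-5: score each preference value and rank in descending order.
    def score(a):
        (x, r), (xi, z) = a
        return 0.25 * (x - r - xi - z)

    return sorted(range(n_alt), key=lambda i: score(prefs[i]), reverse=True)
```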
6. Practical Example
Railway trains (RT) offer various benefits, making them an efficient and sustainable mode of transportation. They are known for their high capacity, transporting large numbers of passengers or goods at once, which reduces congestion on roads and lowers the environmental impact of traffic. Trains are also highly reliable and punctual, offering scheduled services that minimize delays. Moreover, they deliver greater energy efficiency than other modes of transport, such as cars and airplanes, contributing to reduced fuel consumption and emissions. With their ability to connect cities and remote areas, trains support economic growth and tourism and offer a safer travel option. The Ministry of Railways (MOR) in Pakistan is focused on enhancing transportation systems, with a particular interest in developing Tanis transport as a viable option for the future. As the country strives to improve its infrastructure, selecting the most suitable mode of transport is critical for cost-effectiveness, sustainability, and efficiency. Tanis transport, with its potential for faster, smoother, and more environmentally friendly travel, could complement the existing railway network and reduce road congestion. By integrating such systems, the Ministry aims to create a more reliable and modern transport network for Pakistan's growing population. For this purpose, the government of Pakistan has formed a committee of four specialists (D1, D2, D3, D4) whose weights are $\omega =\left( 0.2,0.3,0.4,0.1 \right)$. There are many trains in Pakistan, but in the initial selection the committee considers only the following four trains:
Green Line Express (${{\aleph }_{1}}$): Inaugurated in 2015, the Green Line Express is Pakistan Railways' flagship service, operating daily between Karachi and Islamabad. Covering a distance of approximately 1,521 km in about 21 hours, it offers various classes, including Economy, AC Standard, AC Business, and a Parlour Car.
Khyber Mail (${{\aleph }_{2}}$): Established in 1920, Khyber Mail is one of Pakistan's oldest and most prestigious passenger trains. It operates daily between Karachi and Peshawar, covering a distance of 1,764 km in approximately 32 hours. The train offers AC Sleeper, AC Business, AC Standard, and Economy Class accommodations, providing sleeping and catering facilities.
Karakoram Express (${{\aleph }_{3}}$): Launched on 14 August 2002, the Karakoram Express runs daily between Karachi and Lahore. It covers a distance of 1,241 km in about 17 hours and 45 minutes. The train offers AC Business, AC Standard, and Economy Class accommodations, with sleeping and catering services available.
Pak Business Express (${{\aleph }_{4}}$): Introduced in 2012, the Pak Business Express operates daily between Karachi and Lahore, covering a distance of 1,214 km. The journey takes approximately 19 hours and 30 minutes. The train offers Economy Class, AC Standard, and AC Business accommodations, with sleeping and catering facilities.
To select the most suitable train, the following criteria are considered, with weights $w=\left( 0.2,0.3,0.4,0.1 \right)$:
Train Class and Facilities (${{\Im }_{1}}$): These trains offer a variety of classes, such as Economy, AC Standard, and AC Sleeper, providing different comfort levels and pricing options for passengers.
Punctuality and Reliability (${{\Im }_{2}}$): Punctuality and reliability are demonstrated by a strong track record of minimal delays and cancellations, ensuring commitments are consistently met on time.
Safety and Maintenance Standards (${{\Im }_{3}}$): Safety and maintenance standards ensure that train coaches, tracks, and safety features are regularly inspected and maintained to meet required safety protocols, minimizing risks and ensuring smooth operations.
Ticket Pricing and Affordability (${{\Im }_{4}}$): Ticket pricing should reflect value for money, offering a competitive alternative to other modes of transport in terms of cost, convenience, and service.
Step 1: This step collects the evaluations of all the experts, as shown in Table 1, Table 2, Table 3, and Table 4.
${{\Im }_{1}}$ | ${{\Im }_{2}}$ | ${{\Im }_{3}}$ | ${{\Im }_{4}}$ | |
${{\aleph }_{1}}$ | $\left\langle\binom{\langle 0.47,0.38\rangle}{\langle 0.43,0.51\rangle}, 0.6\right\rangle$ | $\left\langle\binom{\langle 0.52,0.35\rangle}{\langle 0.44,0.49\rangle}, 0.8\right\rangle$ | $\left\langle\binom{\langle 0.48,0.34\rangle}{\langle 0.42,0.41\rangle}, 0.9\right\rangle$ | $\left\langle\binom{\langle 0.56,0.34\rangle}{\langle 0.43,0.51\rangle}, 0.8\right\rangle$ |
${{\aleph }_{2}}$ | $\left\langle\binom{\langle 0.48,0.41\rangle}{\langle 0.45,0.44\rangle}, 0.4\right\rangle$ | $\left\langle\binom{\langle 0.47,0.38\rangle}{\langle 0.43,0.51\rangle}, 0.5\right\rangle$ | $\left\langle\binom{\langle 0.52,0.35\rangle}{\langle 0.44,0.49\rangle}, 0.8\right\rangle$ | $\left\langle\binom{\langle 0.48,0.34\rangle}{\langle 0.42,0.41\rangle}, 0.9\right\rangle$ |
${{\aleph }_{3}}$ | $\left\langle\binom{\langle 0.56,0.34\rangle}{\langle 0.43,0.51\rangle}, 0.8\right\rangle$ | $\left\langle\binom{\langle 0.48,0.41\rangle}{\langle 0.45,0.44\rangle}, 0.4\right\rangle$ | $\left\langle\binom{\langle 0.47,0.38\rangle}{\langle 0.43,0.51\rangle}, 0.6\right\rangle$ | $\left\langle\binom{\langle 0.52,0.35\rangle}{\langle 0.44,0.49\rangle}, 0.3\right\rangle$ |
${{\aleph }_{4}}$ | $\left\langle\binom{\langle 0.53,0.39\rangle}{\langle 0.49,0.43\rangle}, 0.5\right\rangle$ | $\left\langle\binom{\langle 0.55,0.34 \rangle }{\langle 0.43,0.51\rangle},0.4\right\rangle$ | $\left\langle\binom{\langle 0.48,0.41\rangle}{\langle 0.45,0.44\rangle}, 0.5\right\rangle$ | $\left\langle\binom{\langle 0.47,0.38\rangle}{\langle 0.43,0.51\rangle}, 0.7\right\rangle$ |
$\Im_1$ | $\Im_2$ | $\Im_3$ | $\Im_4$ | |
$\aleph_1$ | $\left\langle \binom{\langle 0.53,0.39\rangle}{\langle 0.49,0.43\rangle}, 0.5 \right\rangle$ | $\left\langle \binom{\langle 0.55,0.34\rangle}{\langle 0.43,0.51\rangle}, 0.4 \right\rangle$ | $\left\langle \binom{\langle 0.48,0.41\rangle}{\langle 0.45,0.44\rangle}, 0.5 \right\rangle$ | $\left\langle \binom{\langle 0.47,0.38\rangle}{\langle 0.43,0.51\rangle}, 0.7 \right\rangle$ |
$\aleph_2$ | $\left\langle \binom{\langle 0.48,0.41\rangle}{\langle 0.45,0.44\rangle}, 0.4 \right\rangle$ | $\left\langle \binom{\langle 0.47,0.38\rangle}{\langle 0.43,0.51\rangle}, 0.5 \right\rangle$ | $\left\langle \binom{\langle 0.52,0.35\rangle}{\langle 0.44,0.49\rangle}, 0.8 \right\rangle$ | $\left\langle \binom{\langle 0.48,0.34\rangle}{\langle 0.42,0.41\rangle}, 0.9 \right\rangle$ |
$\aleph_3$ | $\left\langle \binom{\langle 0.52,0.35\rangle}{\langle 0.44,0.49\rangle}, 0.8 \right\rangle$ | $\left\langle \binom{\langle 0.48,0.41\rangle}{\langle 0.45,0.44\rangle}, 0.4 \right\rangle$ | $\left\langle \binom{\langle 0.42,0.35\rangle}{\langle 0.46,0.49\rangle}, 0.9 \right\rangle$ | $\left\langle \binom{\langle 0.52,0.35\rangle}{\langle 0.44,0.49\rangle}, 0.3 \right\rangle$ |
$\aleph_4$ | $\left\langle \binom{\langle 0.56,0.34\rangle}{\langle 0.43,0.51\rangle}, 0.4 \right\rangle$ | $\left\langle \binom{\langle 0.45,0.45\rangle}{\langle 0.45,0.44\rangle}, 0.7 \right\rangle$ | $\left\langle \binom{\langle 0.47,0.38\rangle}{\langle 0.43,0.51\rangle}, 0.6 \right\rangle$ | $\left\langle \binom{\langle 0.56,0.35\rangle}{\langle 0.44,0.46\rangle}, 0.6 \right\rangle$ |
$\Im_1$ | $\Im_2$ | $\Im_3$ | $\Im_4$ | |
$\aleph_1$ | $\left\langle \binom{\langle 0.49,0.37\rangle}{\langle 0.49,0.41\rangle}, 0.3 \right\rangle$ | $\left\langle \binom{\langle 0.52,0.35\rangle}{\langle 0.44,0.49\rangle}, 0.5 \right\rangle$ | $\left\langle \binom{\langle 0.51,0.34\rangle}{\langle 0.43,0.51\rangle}, 0.2 \right\rangle$ | $\left\langle \binom{\langle 0.49,0.34\rangle}{\langle 0.36,0.44\rangle}, 0.9 \right\rangle$ |
$\aleph_2$ | $\left\langle \binom{\langle 0.52,0.35\rangle}{\langle 0.44,0.49\rangle}, 0.8 \right\rangle$ | $\left\langle \binom{\langle 0.55,0.33\rangle}{\langle 0.44,0.49\rangle}, 0.5 \right\rangle$ | $\left\langle \binom{\langle 0.47,0.38\rangle}{\langle 0.43,0.51\rangle}, 0.5 \right\rangle$ | $\left\langle \binom{\langle 0.48,0.41\rangle}{\langle 0.45,0.44\rangle}, 0.5 \right\rangle$ |
$\aleph_3$ | $\left\langle \binom{\langle 0.47,0.38\rangle}{\langle 0.43,0.51\rangle}, 0.6 \right\rangle$ | $\left\langle \binom{\langle 0.53,0.35\rangle}{\langle 0.45,0.45\rangle}, 0.5 \right\rangle$ | $\left\langle \binom{\langle 0.48,0.41\rangle}{\langle 0.45,0.44\rangle}, 0.4 \right\rangle$ | $\left\langle \binom{\langle 0.52,0.35\rangle}{\langle 0.44,0.49\rangle}, 0.8 \right\rangle$ |
$\aleph_4$ | $\left\langle \binom{\langle 0.48,0.41\rangle}{\langle 0.45,0.44\rangle}, 0.5 \right\rangle$ | $\left\langle \binom{\langle 0.49,0.34\rangle}{\langle 0.36,0.44\rangle}, 0.5 \right\rangle$ | $\left\langle \binom{\langle 0.45,0.45\rangle}{\langle 0.45,0.44\rangle}, 0.7 \right\rangle$ | $\left\langle \binom{\langle 0.53,0.35\rangle}{\langle 0.44,0.49\rangle}, 0.3 \right\rangle$ |
$\Im_1$ | $\Im_2$ | $\Im_3$ | $\Im_4$ | |
$\aleph_1$ | $\left\langle \binom{\langle 0.54,0.44\rangle}{\langle 0.43,0.51\rangle}, 0.6 \right\rangle$ | $\left\langle \binom{\langle 0.45,0.41\rangle}{\langle 0.45,0.44\rangle}, 0.3 \right\rangle$ | $\left\langle \binom{\langle 0.47,0.38\rangle}{\langle 0.43,0.51\rangle}, 0.5 \right\rangle$ | $\left\langle \binom{\langle 0.48,0.41\rangle}{\langle 0.45,0.44\rangle}, 0.5 \right\rangle$ |
$\aleph_2$ | $\left\langle \binom{\langle 0.47,0.38\rangle}{\langle 0.43,0.51\rangle}, 0.5 \right\rangle$ | $\left\langle \binom{\langle 0.47,0.38\rangle}{\langle 0.43,0.51\rangle}, 0.6 \right\rangle$ | $\left\langle \binom{\langle 0.55,0.36\rangle}{\langle 0.44,0.46\rangle}, 0.4 \right\rangle$ | $\left\langle \binom{\langle 0.43,0.34\rangle}{\langle 0.45,0.44\rangle}, 0.3 \right\rangle$ |
$\aleph_3$ | $\left\langle \binom{\langle 0.48,0.41\rangle}{\langle 0.45,0.44\rangle}, 0.4 \right\rangle$ | $\left\langle \binom{\langle 0.42,0.35\rangle}{\langle 0.46,0.49\rangle}, 0.9 \right\rangle$ | $\left\langle \binom{\langle 0.47,0.38\rangle}{\langle 0.43,0.51\rangle}, 0.6 \right\rangle$ | $\left\langle \binom{\langle 0.53,0.35\rangle}{\langle 0.45,0.45\rangle}, 0.5 \right\rangle$ |
$\aleph_4$ | $\left\langle \binom{\langle 0.52,0.35\rangle}{\langle 0.44,0.49\rangle}, 0.8 \right\rangle$ | $\left\langle \binom{\langle 0.52,0.35\rangle}{\langle 0.44,0.49\rangle}, 0.8 \right\rangle$ | $\left\langle \binom{\langle 0.55,0.33\rangle}{\langle 0.44,0.49\rangle}, 0.5 \right\rangle$ | $\left\langle \binom{\langle 0.48,0.34\rangle}{\langle 0.42,0.41\rangle}, 0.9 \right\rangle$ |
Step 2: In Step 2, all individual tables (Table 1, Table 2, Table 3, and Table 4) are combined into a single matrix using the CIFCEWAA operator. This process involves incorporating the weight vector $\omega =\left( 0.2,0.3,0.4,0.1 \right)$ to aggregate the data effectively. The resulting consolidated matrix is presented as Table 5, ensuring a unified representation of the combined information.
$\Im_1$ | $\Im_2$ | $\Im_3$ | $\Im_4$ | |
$\aleph_1$ | $\left\langle \binom{\langle 0.54,0.35\rangle}{\langle 0.43,0.33\rangle} \right\rangle$ | $\left\langle \binom{\langle 0.56,0.32\rangle}{\langle 0.40,0.54\rangle} \right\rangle$ | $\left\langle \binom{\langle 0.47,0.38\rangle}{\langle 0.48,0.41\rangle} \right\rangle$ | $\left\langle \binom{\langle 0.47,0.38\rangle}{\langle 0.43,0.51\rangle} \right\rangle$ |
$\aleph_2$ | $\left\langle \binom{\langle 0.52,0.37\rangle}{\langle 0.46,0.38\rangle} \right\rangle$ | $\left\langle \binom{\langle 0.57,0.30\rangle}{\langle 0.40,0.38\rangle} \right\rangle$ | $\left\langle \binom{\langle 0.52,0.35\rangle}{\langle 0.44,0.49\rangle} \right\rangle$ | $\left\langle \binom{\langle 0.38,0.46\rangle}{\langle 0.52,0.37\rangle} \right\rangle$ |
$\aleph_3$ | $\left\langle \binom{\langle 0.38,0.46\rangle}{\langle 0.52,0.37\rangle} \right\rangle$ | $\left\langle \binom{\langle 0.48,0.41\rangle}{\langle 0.45,0.44\rangle} \right\rangle$ | $\left\langle \binom{\langle 0.56,0.32\rangle}{\langle 0.40,0.54\rangle} \right\rangle$ | $\left\langle \binom{\langle 0.52,0.35\rangle}{\langle 0.44,0.49\rangle} \right\rangle$ |
$\aleph_4$ | $\left\langle \binom{\langle 0.56,0.32\rangle}{\langle 0.40,0.54\rangle} \right\rangle$ | $\left\langle \binom{\langle 0.48,0.41\rangle}{\langle 0.45,0.44\rangle} \right\rangle$ | $\left\langle \binom{\langle 0.47,0.38\rangle}{\langle 0.43,0.51\rangle} \right\rangle$ | $\left\langle \binom{\langle 0.57,0.30\rangle}{\langle 0.40,0.38\rangle} \right\rangle$ |
Step 3: Using the CIFCEWAA approach again with $w=\left( 0.2,0.3,0.4,0.1 \right)$, we obtain the preference values: ${{r}_{1}}:\left( \left\langle 0.60,0.34 \right\rangle ,\left\langle 0.53,0.44 \right\rangle \right)$, ${{r}_{2}}:\left( \left\langle 0.58,0.36 \right\rangle ,\left\langle 0.49,0.45 \right\rangle \right)$, ${{r}_{3}}:\left( \left\langle 0.53,0.37 \right\rangle ,\left\langle 0.47,0.34 \right\rangle \right)$, ${{r}_{4}}:\left( \left\langle 0.49,0.38 \right\rangle ,\left\langle 0.46,0.43 \right\rangle \right)$.
Step 4: Calculating the score values of the preference values, we have
\[scor\left( {{r}_{1}} \right)=\frac{1}{4}\left[ 0.60-0.34-0.53-0.44 \right]=-0.17\]
\[scor\left( {{r}_{2}} \right)=\frac{1}{4}\left[ 0.58-0.36-0.49-0.45 \right]=-0.20\]
\[scor\left( {{r}_{3}} \right)=\frac{1}{4}\left[ 0.53-0.37-0.47-0.34 \right]=-0.16\]
\[scor\left( {{r}_{4}} \right)=\frac{1}{4}\left[ 0.49-0.38-0.46-0.43 \right]=-0.19\]
Step 5: The ranking is ${{\aleph }_{3}}>{{\aleph }_{1}}>{{\aleph }_{4}}>{{\aleph }_{2}}$. Thus, the most suitable train is the Karakoram Express.
Table 6 presents the score values of the alternatives under the three proposed operators, Table 7 summarizes the corresponding rankings, and Figure 1 visualizes these rankings.

CIFCEWAA | CIFCEOWAA | CIFCEHWAA | |
$\aleph_1$ | -0.17 | -0.21 | -0.18 |
$\aleph_2$ | -0.20 | -0.25 | -0.23 |
$\aleph_3$ | -0.16 | -0.15 | -0.17 |
$\aleph_4$ | -0.19 | -0.23 | -0.21 |
Operators | Score Functions | Ranking |
CIFCEWAA | $\operatorname{scor}\left(r_3\right) > \operatorname{scor}\left(r_1\right) > \operatorname{scor}\left(r_4\right) > \operatorname{scor}\left(r_2\right)$ | $\aleph_3 > \aleph_1 > \aleph_4 > \aleph_2$ |
CIFCEOWAA | $\operatorname{scor}\left(r_3\right) > \operatorname{scor}\left(r_1\right) > \operatorname{scor}\left(r_4\right) > \operatorname{scor}\left(r_2\right)$ | $\aleph_3> \aleph_1 > \aleph_4> \aleph_2$ |
CIFCEHWAA | $\operatorname{scor}\left(r_3\right) > \operatorname{scor}\left(r_1\right) > \operatorname{scor}\left(r_4\right) > \operatorname{scor}\left(r_2\right)$ | $\aleph_3> \aleph_1 > \aleph_4> \aleph_2$ |
7. Comparative and Sensitivity Analysis
To validate the proposed research, we compare the novel methods with some existing methods under IF-information. While several existing studies have explored operators using IFNs based on specific operational laws, direct comparisons are challenging because of differences in data representation under IFC-information. As a result, the credibility terms of the membership degree (MD) and non-membership degree (NMD) are excluded from consideration, ensuring a fair and meaningful evaluation of the proposed methods.
Methods | Score $\left(r_1\right)$ | Score $\left(r_2\right)$ | Score $\left(r_3\right)$ | Score $\left(r_4\right)$ | Ranking |
Wang and Liu [26] | -0.15 | -0.25 | -0.11 | -0.22 | $\aleph_3 > \aleph_1 > \aleph_4 > \aleph_2$ |
Wang and Liu [27] | -0.17 | -0.27 | -0.14 | -0.21 | $\aleph_3 > \aleph_1 > \aleph_4 > \aleph_2$ |
Qiyas et al. [23] | -0.23 | -0.27 | -0.21 | -0.37 | $\aleph_3 > \aleph_1 > \aleph_4 > \aleph_2$ |
Yahya et al. [22] | -0.18 | -0.33 | -0.10 | -0.25 | $\aleph_3 > \aleph_1 > \aleph_4 > \aleph_2$ |
Qiyas et al. [20] | -0.19 | -0.37 | -0.13 | -0.28 | $\aleph_3 > \aleph_1 > \aleph_4 > \aleph_2$ |
CIFCEWAA (proposed) | -0.17 | -0.20 | -0.16 | -0.19 | $\aleph_3 > \aleph_1 > \aleph_4 > \aleph_2$ |
CIFCEOWAA (proposed) | -0.21 | -0.25 | -0.15 | -0.23 | $\aleph_3 > \aleph_1 > \aleph_4 > \aleph_2$ |
CIFCEHWAA (proposed) | -0.18 | -0.23 | -0.17 | -0.21 | $\aleph_3 > \aleph_1 > \aleph_4 > \aleph_2$ |
The ranking results in Table 8 reveal notable differences between the proposed method and the existing approaches, emphasizing the sensitivity of ranking orders to changes in credibility degrees. While all approaches yield the same ranking order, the new method exhibits smaller differences among the alternative scores, whereas the existing approaches show significantly larger variations. This indicates that the proposed method provides a more balanced and stable evaluation of the different choices. The smaller score differences enhance the reliability of DM by reducing the risk of extreme variations, whereas the larger score disparities of the existing approaches may lead to unreliable selections. Hence, the proposed approach ensures a more precise and consistent ranking, making it a superior tool for selecting the best alternative. Consequently, the method significantly improves the reliability of IFC-information and of the resulting decisions.
8. Advantages and Disadvantages of the Proposed Methods
Our proposed aggregation operators, utilizing credibility numbers, demonstrate practical value in real-life scenarios by enhancing DM, improving precision, and effectively managing hesitation and uncertainty in complex environments. These methods offer greater flexibility, reliability, and adaptability in assessing alternatives, making them particularly useful for selecting the optimal option. However, several challenges remain, including the difficulty of interpreting results and the lack of standardized evaluation criteria. Additionally, these methods often involve high computational complexity and require the collection of accurate data, which can hinder their practical implementation and affect overall reliability.
9. Conclusion
Trains play a vital role in Pakistan's economy by serving as a cost-effective and reliable means of transporting goods and people across the country. They facilitate trade by connecting industrial hubs with ports and remote areas, boosting economic activity. Additionally, the railway network supports employment and contributes to regional development. However, this sector faces diverse challenges, including rapid technological advancements, shifting market demands, regulatory complexities, and the need for sustainable practices. Fuzzy logic models have gained significant traction in the railway sector due to their ability to handle the uncertainties and imprecise information inherent in RT systems. These models provide a flexible and adaptive framework for DM, resource management, and precise control of RT systems. In this study, we have introduced innovative intuitionistic fuzzy Einstein methods based on credibility numbers, marking a significant advancement in DM processes. Among the key contributions, we introduced several novel techniques, including the CIFCEWAA, CIFCEOWAA, and CIFCEHWAA operators, each designed to enhance the accuracy and reliability of aggregated decisions in complex, uncertain environments. These methods play a vital role in DM processes by consolidating information obtained from multiple experts or criteria. Essentially, aggregation operators contribute to the integration of diverse information, aiding decision-makers in making well-rounded and informed choices. For instance, in an MCDM context, they can be employed to aggregate the preferences of different decision-makers or criteria. The novel model is elucidated through a practical example concerned with selecting the most suitable option from a range of alternatives. Finally, the effectiveness and efficiency of the proposed model are substantiated through a comparative and sensitivity analysis, which demonstrates how well the method performs relative to other approaches.
Moreover, this work can be expanded into numerous areas, such as advanced mathematical methods and different set theories. It can be applied to Hamacher methods, complex logarithmic methods, and complex interval-valued techniques. Additionally, it extends to complex power approaches, Dombi methods, and their interval-valued versions. These approaches help in solving complex DM and computational problems. Overall, the research opens new possibilities for mathematical modeling and real-world applications.
The data used to support the research findings are available from the corresponding author upon request.
The authors declare no conflict of interest.
