Evaluating Supply Chain Efficiency Under Uncertainty: An Integration of Rough Set Theory and Data Envelopment Analysis
Abstract:
The evaluation of supply chain (SC) efficiency in the presence of uncertainty presents significant challenges due to the multi-criteria nature of SC performance and the inherent ambiguities in both input and output data. This study proposes an innovative framework that combines Rough Set Theory (RST) with Data Envelopment Analysis (DEA) to address these challenges. By employing rough variables, the framework captures uncertainty in the measurement of inputs and outputs, defining efficiency intervals that reflect the imprecision of real-world data. In this approach, rough sets are used to model the vagueness and granularity of the data, while DEA is applied to assess the relative efficiency of decision-making units (DMUs) within the SC. The effectiveness of the proposed model is demonstrated through case studies that highlight its capacity to handle ambiguous and incomplete data. The results reveal the model’s superiority in providing actionable insights for identifying inefficiencies and areas for improvement within the SC, thus offering a more robust and flexible evaluation framework compared to traditional methods. Moreover, this integrated approach allows decision-makers to assess the efficiency of SC more effectively, taking into account the uncertainty and complexity inherent in the data. These findings contribute significantly to the field of supply chain management (SCM) by offering an enhanced tool for performance assessment that is both comprehensive and adaptable to varying operational contexts.
1. Introduction
Global market integration has turned SCM into a compelling and widely discussed topic [1]. An efficient SC offers numerous benefits, such as cost reduction, expanded market presence and revenue growth, and the cultivation of sustainable customer relationships [2]. Moreover, scholars have shown that measuring SC performance can enhance overall performance [3]. SC efficiency relies on the integrated performance of all its members, making the management of overall efficiency a complex and challenging task [4].
Given that performance pertains to predetermined parameters and measurement signifies the capability to monitor events and activities meaningfully, performance measurement is described as the procedure of evaluating the effectiveness and efficiency of actions through quantification [5]. Several tools for measuring performance include the performance measurement questionnaire/matrix, criteria for designing measurement systems, the balanced scorecard (BSC), and computer-aided manufacturing (CAM) methods [6]. Nevertheless, using these approaches reveals several limitations, including an absence of strategic direction, a preference for local optimization at the expense of continuous improvement, and a limitation in delivering adequate insights into competitors [7].
Typically, the SC efficiency, often viewed as a series of basic business functions, is calculated through the ratio of revenue to the SC total operational cost [8]. Even so, with the growing demand for fast delivery and quick order fulfillment, emerging trends have surfaced. As a result, alongside traditional financial measures, other significant criteria, like customer satisfaction, should also be considered [9]. The rise of diverse performance measures (PMs) has added complexity and sophistication to efficiency assessment. An effective performance evaluation tool should go beyond quantitative analysis, incorporating qualitative insights to ensure alignment with the organization's strategic objectives [10].
This study primarily aims to present a practical application of DEA to assess SC performance. To achieve this objective, the paper is structured as follows: a brief overview of measuring SC performance with traditional tools is given first, followed by a review of the DEA and RDEA literature, including their concepts and applications in the SC. Next, the methodology is explained, and RDEA models are extended to assess SC efficiency and their organizational applications. The final section presents the study outcomes.
2. Background
In the performance evaluation process, selecting the appropriate PMs is a crucial task, as they serve as the basis for management actions and improvement solutions. Naturally, these measures differ across various fields [11].
The literature review indicates a focus on cost-based PMs. The cost metric is easy to understand, which is why managers have traditionally favoured it [12]. However, the rigidity and misalignment with strategic focus prompted scholars to pursue more comprehensive measures that integrate both qualitative and quantitative elements in SC evaluation. Beamon introduced three measures: those related to 1) resources, 2) output, and 3) flexibility [13]. Expanding on these measures results in a novel framework for evaluating SC that assesses performance at the operational, tactical, and strategic levels [14].
Pittiglio, Rabin, Todd, and McGrath (PRTM) developed the first universal set of PMs, known as the primary comprehensive method for world-class SCM. The key factors for SC excellence in PRTM are asset management, logistics and cost, flexibility and responsiveness, and delivery performance. The PRTM concept was later expanded, leading to the development of the SC Operations Reference (SCOR) model by the SC Council [15]. The SCOR model is structured across four levels based on the framework of plan, source, make, and deliver [16].
Although various researchers and findings have contributed to the study of performance measurement, some gaps remain in certain areas. One notable gap is the absence of valid measurement criteria and effective methodologies for consolidating various PMs into a single index. Many methodologies overlook the differing relative importance of these measures across different firms. Additionally, there was no all-encompassing measure of overall SC performance available for comparing performance with other industry players [17].
While SCOR offers numerous advantages, its application appears somewhat rigid and requires further improvement. As SC networks grow, the SCOR model must become more adaptable and capable of providing an effective platform for measuring these complexities. For instance, the bullwhip effect is a phenomenon that can impact the entire SC, and at times it lies beyond managers' control; yet SCOR offers only deterministic performance metrics that managers can control. Therefore, SCOR needs to be more adaptable in order to synchronize its various elements. Additionally, a review of the literature reveals that previous research has overlooked collaborative relationships in projects involving joint decision-making.
Measurement tools can generally be categorized into two types, each using different methods for evaluation. Gap-based techniques are typically employed for PM in parametric analysis. Some of these include "RADAR" and "SPIDER" diagrams, as well as the "Z" chart, which are highly graphical, making them simple to comprehend. Nevertheless, they are less effective when an analyst has to integrate various elements into a single comprehensive view.
Another parametric method commonly used in various fields is ratio analysis, which computes output-to-input ratios to assess relative efficiency. However, different ratios yield distinct interpretations, and synthesizing the complete set of ratios into a single comprehensive assessment can be difficult.
The Analytical Hierarchy Process (AHP) is another method applied in PM to analyze data. It leverages subjective opinions to transform various weighted scores into a single composite score [18]. This strategy offers significant managerial insights for quantifying metrics, although it is highly subjective.
Statistical techniques, such as regression, are valuable instruments to a certain degree. They are parametric metrics that can yield significant correlations for decision-makers (DMs). Regression can examine just one output at a time; if additional criteria are included, the procedure must be reiterated. Furthermore, it can solely account for average values, which may not accurately represent real-world situations. This approach presupposes that all enterprises exhibit uniform performance in the amalgamation of their input components [19].
One of the most commonly used non-parametric tools providing a comprehensive framework for PM is the BSC [20]. The BSC considers several critical areas, including process, product, market development, and customer [21]. It links a firm’s strategic goals to a set of measures. However, the BSC lacks the mathematical logic to establish relationships among various indicators, making it challenging to compare inter- and intra-unit performance within the firm. Therefore, while the BSC provides valuable insights, it is not sufficient on its own for evaluation. To make more informed judgments, the use of parametric methods is recommended alongside the BSC.
DEA is another evaluation tool, accommodating both qualitative and quantitative measures, allowing DMs to make informed judgments about resource utilization efficiency. It utilizes the efficient frontier, as proposed by Farrell [22]. DEA calculates efficiency by dividing the weighted aggregate of outputs by the weighted aggregate of inputs, considering multiple inputs and outputs. The resulting efficiencies vary from 0 to 1, with 1 denoting the most efficient DMU. Although DEA mitigates certain shortcomings of alternative methods, it possesses its own disadvantages:
The accessibility of data is essential for rendering DEA results significant, and DEA has several notable limitations:

- All inputs and outputs of a DMU are required for trustworthy results; acquiring certain data can be difficult, and in some instances companies may be reluctant to disclose it.
- The number of DMUs under comparison must not fall below a certain threshold, as an insufficient number of DMUs results in a proliferation of efficient DMUs.
- DEA presumes that all DMUs within an organization possess identical strategic goals and objectives, rendering it inappropriate to compare DMUs that differ in these respects.
- Although DEA offers adequate rankings for efficient DMUs, it seldom examines the root causes of inefficiency or proposes remedies for enhancement.

The interplay of advantages and disadvantages of each tool complicates the selection of the most suitable option for performance assessment. Nonetheless, DEA possesses numerous features that set it apart from alternative techniques. The subsequent section elucidates these qualities to illustrate why it is regarded as an excellent instrument for assessing SC performance.
DEA assesses DMUs qualitatively and quantitatively, accommodating various inputs and outputs. A DMU is a decision-making unit, used to compare various firms or to evaluate the effectiveness of an individual firm over time. DEA has attracted considerable interest, prompting numerous scholars to investigate and create diverse models. These models primarily vary in orientation, disposability, diversification, returns to scale, and the measurements employed.
The fundamental principle of assessing efficiency in DEA is the efficient frontier function, which facilitates the distinction between efficient and inefficient units. The examination of ineffective units encompasses two facets. Initially, it can ascertain the utmost input level necessary to produce a specified quantity of outputs, referred to as the minimal principle of efficiency. Secondly, it can ascertain the maximum output level attainable for a specified quantity of inputs, known as the maximal principle of efficiency. The principal advantage of the DEA model utilized in this study is its capacity to amalgamate several inputs and outputs into a singular summary metric, facilitating the identification of the most efficient unit.
Let $X_{ij}$, $i = 1, 2, \ldots, m$, and $Y_{rj}$, $r = 1, 2, \ldots, s$, be the $i$-th input and $r$-th output of the $j$-th DMU, $j = 1, 2, \ldots, n$. The DEA model is used to measure the relative efficiency of $DMU_o$ under the constant returns to scale assumption, known as the CCR (Charnes, Cooper, and Rhodes) model [23].
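In its standard multiplier form (the linearized version of the weighted output-to-input ratio), the input-oriented CCR model for $DMU_o$ is:

\[ \max\ \theta_o = \sum_{r=1}^{s} u_r Y_{ro} \quad \text{s.t.} \quad \sum_{i=1}^{m} v_i X_{io} = 1, \quad \sum_{r=1}^{s} u_r Y_{rj} - \sum_{i=1}^{m} v_i X_{ij} \leq 0, \ j = 1, \ldots, n, \quad u_r, v_i \geq 0. \]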
The dual model is shown as follows:
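In standard envelopment form, this dual reads:

\[ \min\ \theta \quad \text{s.t.} \quad \sum_{j=1}^{n} \lambda_j X_{ij} \leq \theta X_{io}, \ i = 1, \ldots, m, \quad \sum_{j=1}^{n} \lambda_j Y_{rj} \geq Y_{ro}, \ r = 1, \ldots, s, \quad \lambda_j \geq 0. \]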
The BCC input-oriented value-based model presented below can be utilized to evaluate efficiencies [24].
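Its standard multiplier form differs from the CCR model only by the free variable $u_0$, which captures variable returns to scale:

\[ \max\ \sum_{r=1}^{s} u_r Y_{ro} - u_0 \quad \text{s.t.} \quad \sum_{i=1}^{m} v_i X_{io} = 1, \quad \sum_{r=1}^{s} u_r Y_{rj} - \sum_{i=1}^{m} v_i X_{ij} - u_0 \leq 0, \ j = 1, \ldots, n, \quad u_r, v_i \geq 0, \ u_0 \ \text{free}. \]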
The dual model is as follows:
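In standard envelopment form, the BCC dual adds the convexity constraint $\sum_{j} \lambda_j = 1$ to the CCR dual:

\[ \min\ \theta \quad \text{s.t.} \quad \sum_{j=1}^{n} \lambda_j X_{ij} \leq \theta X_{io}, \ i = 1, \ldots, m, \quad \sum_{j=1}^{n} \lambda_j Y_{rj} \geq Y_{ro}, \ r = 1, \ldots, s, \quad \sum_{j=1}^{n} \lambda_j = 1, \quad \lambda_j \geq 0. \]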
An evaluation tool should possess certain characteristics to be effective and appropriate. Some scholars argue that simplicity and clarity are important factors to consider when selecting a tool [25], [26], [27]. Furthermore, such a tool must be dependable, producing output results that are sufficiently realistic to facilitate the decision-making process.
DEA is a standardized, transparent, and robust approach that meets all the requirements for evaluation and offers additional features that make it an ideal tool for SC benchmarking. It can process multiple elements without the need to specify relationships among PMs [23]. The efficient frontier concept used in DEA serves as a reliable empirical standard of excellence, and it can simultaneously analyze both qualitative and quantitative measures. Additionally, DEA does not require priority estimates, which enhances the acceptability of its results. It offers insights into both efficient and inefficient DMUs, and its considerable versatility facilitates seamless integration with various analytical methodologies, including statistical analysis and multi-criteria decision-making procedures [28], [29], [30].
3. Rough DEA Model (RDEA)
Although DEA has been utilized across multiple domains, the majority of research presumes that the input and output parameters of the SC are deterministic. In practical situations, elements such as allocations, shipping expenses, demand fluctuations, and the geographical positions of consumers and facilities are frequently changing. Consequently, unpredictability must be taken into account when assessing SC performance. Due to the considerable influence of uncertainty on the SC, numerous academics have investigated the principles and applications of stochastic DEA and fuzzy DEA. For instance, for situations where PMs are uncertain, a stochastic chance-constrained DEA model is proposed [31]. In addition, many researchers discussed diverse applications of incorporating uncertainty to evaluate SC performance [32], [33], [34], [35], [36], [37].
In practical decision-making, we frequently face a turbulent and ambiguous environment. Pawlak's RST and Liu's rough variable notion can be utilized to tackle this difficulty [38]. A literature study indicates that, although numerous researchers have investigated stochastic DEA and fuzzy DEA, works on RDEA are scarce. The integration of DEA with RST constitutes a compelling domain for further exploration. This research presents a DEA model that integrates rough set parameters and illustrates its applicability in evaluating real-world SC performance.
A rough set is defined by two sets that delineate the approximation of its upper and lower limits, represented as $(\underset{\scriptscriptstyle-}{X}, \bar{X})$. To mitigate uncertainty, imprecise variables must be transformed into definitive values. This article employs the $\alpha$-optimistic and $\alpha$-pessimistic value operators to address this problem. By establishing the trust level $\alpha$ for imprecise variables, the CCR-DEA model can be converted into a pair of maximum and minimum programming problems [39]. In this regard, consider $\xi = \left(\left[a, b\right], \left[c, d\right]\right)$ as a rough variable, where $c \leq a < b \leq d$. Accordingly, the optimistic value of $\xi$ is computed as follows:
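Based on Liu's trust measure for rough variables, the $\alpha$-optimistic value $\xi_{\sup}(\alpha) = \sup\{ r : \text{Tr}\{\xi \geq r\} \geq \alpha \}$ takes the standard piecewise form:

\[ \xi_{\sup}(\alpha) = \begin{cases} (1-2\alpha)d + 2\alpha c, & \text{if } \alpha \leq \dfrac{d-b}{2(d-c)} \\[2mm] 2(1-\alpha)d + (2\alpha-1)c, & \text{if } \alpha \geq \dfrac{2d-a-c}{2(d-c)} \\[2mm] \dfrac{d(b-a) + b(d-c) - 2\alpha(b-a)(d-c)}{(b-a)+(d-c)}, & \text{otherwise.} \end{cases} \]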
The pessimistic value can be obtained as follows [39]:
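Correspondingly, the $\alpha$-pessimistic value $\xi_{\inf}(\alpha) = \inf\{ r : \text{Tr}\{\xi \leq r\} \geq \alpha \}$ is:

\[ \xi_{\inf}(\alpha) = \begin{cases} (1-2\alpha)c + 2\alpha d, & \text{if } \alpha \leq \dfrac{a-c}{2(d-c)} \\[2mm] 2(1-\alpha)c + (2\alpha-1)d, & \text{if } \alpha \geq \dfrac{b+d-2c}{2(d-c)} \\[2mm] \dfrac{c(b-a) + a(d-c) + 2\alpha(b-a)(d-c)}{(b-a)+(d-c)}, & \text{otherwise.} \end{cases} \]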
Having the optimistic and pessimistic values, the RDEA model can be formulated as a pair of programs whose optimal values give the lower and upper limits of each DMU's efficiency interval.
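These two programs are models (7) and (8) in the primary article. A plausible reconstruction, in the spirit of interval DEA and writing $x^{\sup(\alpha)}$, $x^{\inf(\alpha)}$ for the $\alpha$-optimistic and $\alpha$-pessimistic values of each rough datum (for $\alpha > 0.5$ the optimistic value is the smaller of the two), is the following: the upper limit evaluates $DMU_o$ under its most favourable data and the other DMUs under their least favourable data,

\[ \theta_o^{\inf(\alpha)} = \max \sum_{r=1}^{s} u_r y_{ro}^{\inf(\alpha)} \quad \text{s.t.} \quad \sum_{i=1}^{m} v_i x_{io}^{\sup(\alpha)} = 1, \quad \sum_{r=1}^{s} u_r y_{ro}^{\inf(\alpha)} - \sum_{i=1}^{m} v_i x_{io}^{\sup(\alpha)} \leq 0, \quad \sum_{r=1}^{s} u_r y_{rj}^{\sup(\alpha)} - \sum_{i=1}^{m} v_i x_{ij}^{\inf(\alpha)} \leq 0 \ (j \neq o), \quad u_r, v_i \geq 0, \]

while the lower limit $\theta_o^{\sup(\alpha)}$ is obtained by interchanging the $\sup(\alpha)$ and $\inf(\alpha)$ values throughout.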
Here, for any DMU, intervals with lower and upper limits have been established. To rank the DMUs, the concept of efficiency maximum loss (that a DM might undergo) has been applied, which is given as follows:
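Consistent with the ranking expression quoted from the primary article later in this section, the efficiency maximum loss of $DMU_i$ can be written as

\[ r_i = \max \left[ \max_{j \neq i} \theta_j^{\inf(\alpha)} - \theta_i^{\sup(\alpha)},\ 0 \right], \]

where $\left[ \theta_i^{\sup(\alpha)},\ \theta_i^{\inf(\alpha)} \right]$ is the efficiency interval of $DMU_i$.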
Accordingly, the efficiency interval that satisfies the following condition is selected as the optimal efficiency interval according to the minimax regret criterion.
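That is, the interval $A_{i^*}$ is selected for which

\[ r_{i^*} = \min_{1 \leq i \leq n} r_i. \]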
Xu et al. [39] proposed a procedure for ranking efficiency intervals, which is outlined as follows:
Step 1: The efficiency maximum loss is computed using Eq. (10), and the efficiency interval with the smallest maximum loss is selected. Let ${{A}_{i1}}$ denote the selected interval.
Step 2: ${{A}_{i1}}$ is excluded, and the efficiency maximum loss is recalculated. The efficiency interval with the smallest maximum loss is then chosen from the remaining (n-1) intervals. Let ${{A}_{i2}}$ denote the selected interval.
Step 3: ${{A}_{i2}}$ is excluded, and the efficiency maximum loss is recalculated. The efficiency interval with the smallest maximum loss is selected from the remaining (n-2) intervals. Let ${{A}_{i3}}$ denote the selected interval.
Step 4: The process is repeated until only one efficiency interval, ${{A}_{in}}$, remains. The final ranking is therefore:
\[A_{i1}>A_{i2}>\ldots >A_{in}\]
Consequently, to rank the DMUs, the efficiency maximum losses for all intervals are computed, with the DMU exhibiting the smallest maximum loss being deemed the most efficient.
The model described in Section 3 can be utilized to evaluate SC efficiency in the presence of imprecise variables. In SC performance evaluation, some measures and indices are constantly changing; hence, uncertainty must be considered in the evaluation problem, and for such measures only the upper and lower limits can be approximated. On this basis, the authors of [39] created a model for evaluating SC efficiency that accounts for uncertainty in different variables, and RDEA was employed to assess the operational efficiency of SC networks in the furniture manufacturing industry. Although the model is innovative, reviewing it reveals some basic problems. This article aims not only to discuss that work but also to propose a proper model to replace it. Before doing so, we describe how the framework of the model is characterized.
Based on existing research on index systems and the principles of DEA methodology, an evaluation index system has been established for the SC performance evaluation problem. These measures are shown in Table 1.
Factor | Index | Measure | Unit |
Input | Cost | Direct Cost | 10,000 Yuan |
Input | Cost | Operation Costs | 10,000 Yuan |
Input | Cost | Transaction Expenses | 10,000 Yuan |
Input | Time | Order Lead Time | Day |
Input | Human Resource | Total Volume of Employees | Person |
Output | Flexibility | Product Flexibility | No Dimension |
Output | Flexibility | Delivery Flexibility | 1/Day |
Output | Financial | Sales Volume | 10,000 Yuan |
Output | Financial | Net Profit | 10,000 Yuan |
Output | Service | Order Fulfillment Rate | % |
Output | Service | Percentage of On-Time Delivery | % |
Based on the literature on real SC performance evaluation problems, these measures were categorized into two groups representing the system's inputs and outputs; the problem thus contains five inputs and six outputs. The SC performance evaluation problem considered in the paper was drawn from the six largest furniture manufacturers in the western region of China. Since the problem parameters could not be treated as precisely known, the cost indices are regarded as rough variables and the other indices as deterministic. The rough variables, each of which can be approximated by an interval, are as follows:
· Direct cost
· Operational cost
· Transactional cost
Despite these innovative aspects, a closer review of the article reveals several fundamental problems. The primary aim here is to discuss these problems; the next section presents an appropriate model to replace it.
The results presented in the original article are incorrect. By following the procedure outlined in the proposed model, the following results would be obtained. These results were generated using the GAMS software, as shown in Table 2.
DMU | Efficiency Interval |
1 | [1.00,1.00] |
2 | [1.00,1.00] |
3 | [1.00,1.00] |
4 | [1.00,1.00] |
5 | [1.00,1.00] |
6 | [1.00,1.00] |
As shown, all the DMUs are classified as efficient, indicating that the model fails to distinguish between efficient and inefficient DMUs. This issue is, however, inherent in the methodology: DEA cannot differentiate between efficient and inefficient DMUs when the number of DMUs is low relative to the number of inputs and outputs. It has been demonstrated that if the condition shown below is not satisfied, the results derived from the DEA model cannot be considered reliable. To address this issue, several solutions have been proposed in the literature, although these were not mentioned by the authors. One alternative is a simple weighted scoring approach; other potential solutions include multi-objective DEA and goal programming. However, due to the lack of sufficient data regarding these measures in South China, these alternatives could not be implemented.
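A commonly cited rule of thumb (e.g., in Cooper, Seiford, and Tone's DEA texts), which we take to be the condition intended here, is

\[ n \geq \max \{ m \times s,\ 3(m + s) \}, \]

where $n$ is the number of DMUs, $m$ the number of inputs, and $s$ the number of outputs. With $n = 6$, $m = 5$, and $s = 6$ in this problem, both $m \times s = 30$ and $3(m + s) = 33$ far exceed $n$, so the condition clearly fails.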
Based on the DEA literature and the classic CCR model, several conditions must be satisfied in order to construct the production possibility set (PPS). These conditions are as follows:
- The observed activities $(x_j, y_j)$, $j = 1, \ldots, n$, belong to the PPS.
- Constant returns to scale
- Convexity
- Plausibility
However, the model proposed by the authors does not satisfy one of the necessary conditions. Specifically, the first output (product flexibility) lacks a defined dimension, and the fifth output (order fulfillment rate) and the sixth output (percentage of on-time delivery) are expressed as percentages, which violates the convexity rule. To illustrate this, consider the fraction of defective goods to total produced goods as one of the efficiency measures, and note that convexity fails whenever the two factories' defect ratios differ. Suppose that in the first factory, 100 goods are produced, of which 5 are defective, and in the second factory, 200 goods are produced, with 30 defective. If a new factory is to be constructed using weights of 0.2 and 0.8 from the first and second factories, respectively, the following calculation is made:
\[ \text{New factory: } \begin{cases} \text{Number of produced goods} = 0.2(100) + 0.8(200) = 180 \\ \text{Number of defective goods} = 0.2(5) + 0.8(30) = 25 \end{cases} \]
In the reconstructed factory, the fraction of defective goods to total produced goods would be $25/180 \approx 0.139$. However, if the convexity axiom is applied directly to the factory rates, the following result is obtained:

\[ \text{New factory} = 0.2\left(\frac{5}{100}\right) + 0.8\left(\frac{30}{200}\right) = \frac{13}{100} \]

This result does not equal $\frac{25}{180}$, which demonstrates that the convexity axiom is not applicable to indices that are ratios of two primary indices.
Therefore, for the construction of the PPS, the convexity rule cannot be considered and must be omitted [40].
In fact, it is not feasible to apply the BCC DEA model under these conditions. By omitting the convexity rule, the FDH model is derived as follows:
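In its standard input-oriented form, the FDH model keeps the convexity-free dominance structure by restricting the intensity variables to binary values:

\[ \min\ \theta \quad \text{s.t.} \quad \sum_{j=1}^{n} \lambda_j X_{ij} \leq \theta X_{io}, \ i = 1, \ldots, m, \quad \sum_{j=1}^{n} \lambda_j Y_{rj} \geq Y_{ro}, \ r = 1, \ldots, s, \quad \sum_{j=1}^{n} \lambda_j = 1, \quad \lambda_j \in \{0, 1\}. \]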
In this model, the convexity condition is not applied to all inputs and outputs. However, it can be stated that this condition does not hold only for $\text{Y}_1, \text{Y}_5, \text{and}~ \text{Y}_6$. Therefore, the model should be designed in such a way that the convexity condition applies exclusively to certain indicators. The customized model is presented in Section 3.2.
Let ${{X}_{O}}$ and ${{Y}_{O}}$ represent the input and output vectors of $DM{{U}_{O}}$, respectively. According to the DEA literature, if $DM{{U}_{O}}$ is deemed inefficient, then its input and output will satisfy the following conditions:
\[ \left( X_o,\ Y_o \right) \longrightarrow \left( \theta_o^{*} X_o - S^{-*},\ Y_o + S^{+*} \right) \]
The fifth output (order fulfillment rate) and the sixth output (percentage of on-time delivery) are expressed as percentages. However, there is no constraint ensuring that $Y_o + S^{+*}$ remains below 1. To address this issue, two additional constraints need to be introduced for the fifth and sixth outputs:

\[ 0 \leq Y_5 + S_5^{+*} \leq 1 \]

\[ 0 \leq Y_6 + S_6^{+*} \leq 1 \]
The first output, "product flexibility", has no dimension. We surmise that, to quantify this measure, the authors used a rating scale from 1 to 9. If so, then a constraint similar to that in Section 3.1.3 must clearly be added to prevent this measure from rising above 9 or falling below 1. The corresponding constraint is:

\[ 1 \leq Y_1 + S_1^{+*} \leq 9 \]
Since we have no information about this index in the main article, we ignore this restriction.
The fifth input, "human resource", is a measure that must be treated as an integer. Thus, if human resources cannot be employed part-time, an integrality constraint must be attached to the model. In this case, due to lack of information, we assume this index can be non-integer.
Finally, we must mention that in the primary article, the following was proposed to rank the efficient and inefficient DMUs:

\[ \min_i \left\{ r_i \right\} = \min_i \left\{ \max \left[ \max_{j \neq i} \left( \theta_j^{\inf(\alpha)} \right) - \theta_i^{\sup(\alpha)},\ 0 \right] \right\} \]
Now consider the case in which, for several DMUs, both the upper and lower limit values equal 1. Since the logic of this formula is based on efficiency loss, such DMUs incur no loss of efficiency: the analysis assigns them an inefficiency of 0, making it impossible to rank the efficient DMUs.

In view of the points mentioned above, it is straightforward to formulate a new DEA model that avoids the problems cited.
First, we must construct the PPS without the convexity assumption; the new PPS is derived as follows. For simplicity and without loss of generality, consider the set X to contain the inputs 1 to m. This input vector can then be represented in the form $X = \left[ X^{C} \;\; X^{NC} \right]$, where $X^{C}$ is the vector of the first $m'$ components of $X$ (for which convexity is assumed) and $X^{NC}$ is the vector of the remaining components (for which convexity is not assumed).

Similarly, we assume the set Y contains the outputs 1 to r. Accordingly, each output vector can be written in the form $Y = \left[ Y^{C} \;\; Y^{NC} \right]$. The minimal PPS satisfying the aforesaid axioms is:
\[ \text{PPS} = \left\{ (X, Y) \;\middle|\; X^{C} \geq \sum_{j=1}^{n} \lambda_j X_j^{C},\;\; Y^{C} \leq \sum_{j=1}^{n} \lambda_j Y_j^{C},\;\; \lambda_j > 0 \Rightarrow \left( Y^{NC} \leq Y_j^{NC},\; X^{NC} \geq X_j^{NC} \right),\;\; \sum_{j=1}^{n} \lambda_j = 1,\;\; \lambda_j \geq 0,\; j = 1, \ldots, n \right\} \]
The PPS will now be used to construct the model, as outlined by Podinovski [41]. Generally, the framework of the model is as follows:
To convert the above model into a mixed integer linear programming (MILP) formulation, we proceed as follows. For every $j = 1, \ldots, n$, consider the binary variable $\delta_j$ [41]. First, one must establish the correspondence between the variables $\delta_j$ and $\lambda_j$: $\delta_j = 1$ if $\lambda_j > 0$, and $\delta_j = 0$ if $\lambda_j = 0$. To express this in linear form, a minimum positive value $\underset{\scriptscriptstyle-}{\lambda}$ for each variable $\lambda_j$ is introduced, which, for all practical purposes, is negligibly small and effectively equivalent to zero [41]. Accordingly, this paper uses the value $\underset{\scriptscriptstyle-}{\lambda} = 0.01$. The two-sided inequality $\underset{\scriptscriptstyle-}{\lambda}\delta_j \leq \lambda_j \leq \delta_j$ guarantees the correspondence mentioned above, and the model can then be reformulated as an MILP.
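To make the linearization concrete, the following minimal Python sketch (using the open-source PuLP library, not the GAMS/WIN QSB setups used for the reported results) builds the input-oriented model over the selective-convexity PPS for a toy problem; the data, the convex/non-convex index split, and the big-M value are illustrative assumptions.

```python
# Sketch: input-oriented DEA over a selective-convexity PPS, linearized with
# binary variables delta_j (delta_j = 1  <=>  lambda_j > 0), as described above.
# Toy data only; the index split and big-M value are illustrative assumptions.
import pulp

XC = [520, 560, 620]        # convex input (e.g., cost)
YC = [2358, 1947, 1475]     # convex output (e.g., net profit)
YNC = [0.90, 0.92, 0.93]    # non-convex output (e.g., a percentage measure)
n, o = 3, 1                 # n DMUs; evaluate the DMU with index o
LAM_MIN, BIG_M = 0.01, 1e4  # lambda lower bound and big-M constant

prob = pulp.LpProblem("selective_convexity_DEA", pulp.LpMinimize)
theta = pulp.LpVariable("theta", lowBound=0)
lam = [pulp.LpVariable(f"lam_{j}", lowBound=0) for j in range(n)]
delta = [pulp.LpVariable(f"delta_{j}", cat="Binary") for j in range(n)]
prob += theta  # objective: radial contraction of the convex inputs

# Two-sided linking inequality: LAM_MIN * delta_j <= lam_j <= delta_j
for j in range(n):
    prob += lam[j] <= delta[j]
    prob += lam[j] >= LAM_MIN * delta[j]
prob += pulp.lpSum(lam) == 1

# Convexity is kept for the C-indices ...
prob += pulp.lpSum(lam[j] * XC[j] for j in range(n)) <= theta * XC[o]
prob += pulp.lpSum(lam[j] * YC[j] for j in range(n)) >= YC[o]

# ... but not for the NC-index: any DMU entering the reference set
# (delta_j = 1) must itself dominate DMU_o on it; the big-M term
# switches the requirement off when delta_j = 0.
for j in range(n):
    prob += YNC[o] - YNC[j] <= BIG_M * (1 - delta[j])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("theta* =", pulp.value(theta))
```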
Then, we will transform the model into two mixed integer programming problems to find the efficiency interval upper and lower limits.
So, the upper limit for this model can be calculated as:
And the lower limit can be computed by utilizing the following model:
We must prevent the fifth and sixth outputs (order fulfillment rate and percentage of on-time delivery, respectively) from being increased beyond 1. However, since models (14) and (15) are designed as mixed integer programs in which the non-convex outputs are benchmarked only against observed values, which never exceed 1, this restriction is redundant in these models.
4. Example
Now that we have constructed an adequate model for evaluating SC efficiency under uncertainty, we can apply the modified model to the data from the primary article to solve the problem. Here, the WIN QSB software was used. The results are listed in Table 3.
DMU | Efficiency Interval |
1 | [1.00,1.00] |
2 | [1.00,1.00] |
3 | [1.00,1.00] |
4 | [1.00,1.00] |
5 | [1.00,1.00] |
6 | [1.00,1.00] |
Because the number of DMUs is low, all of the DMUs are reported as efficient. Since there is not enough information about the inputs and outputs, we cannot even use weight-restricted methods. Notably, the proposed formula is unable to rank these efficient DMUs.

Now, suppose the DMUs have one input and two outputs. If the number of inputs and outputs is low compared with the number of DMUs, the model can provide acceptable results. Let the input be direct costs $(X_1)$ and the outputs be product flexibility $(Y_1)$ and sales volume and net profit $(Y_2)$. The data are presented in Table 4.
DMUs | $X_1$ | $Y_1$ | $Y_2$ |
1 | ([1650,1775],[1545,1867]) | 0.97 | 2358 |
2 | ([2040,2240],[1915,2460]) | 0.85 | 1947 |
3 | ([1980,2080],[1700,2180]) | 0.88 | 1475 |
4 | ([1760,1840],[1650,1900]) | 0.90 | 1865 |
5 | ([2120,2210],[1920,2300]) | 0.87 | 2170 |
6 | ([1940,2010],[1880,2100]) | 0.85 | 1540 |
As seen here, the first input is rough, and the first output is expressed as a percentage, so the convexity axiom does not hold for this output. The second output is deterministic. Now, we use the main model expressed by Xu et al. [39]. Applying the transformation technique presented in this article, we convert the rough variable in Table 4 into interval data. Consider the trust level $\alpha = 0.9$. We put the $\alpha$ value into the RDEA models (7) and (8). Taking $DMU_2$ for instance, under trust level $\alpha = 0.9$, the corresponding maximum programming model will be:

Accordingly, under trust level $\alpha = 0.9$, the minimum programming problem of $DMU_2$ will be:

Solving these two models yields the optimal solutions of (16) and (17). We obtain $\theta^{\inf(\alpha)} = 0.7804$ and $\theta^{\sup(\alpha)} = 0.5999$, so the efficiency interval of $DMU_2$ is $[0.5999,\ 0.7804]$. The efficiency intervals of the other DMUs can be obtained in the same way, as presented in Table 5.
DMU | Efficiency Interval |
1 | [1.00,1.00] |
2 | [0.59,0.78] |
3 | [0.70,0.91] |
4 | [0.81,0.98] |
5 | [0.67,0.83] |
6 | [0.69,0.82] |
The method described in Section 3 is employed to compute the efficiency maximum loss for all DMUs and rank the SC networks. Initially, it is evident that $DMU_1$ exhibits the smallest maximum efficiency loss, so it ranks first; its interval is then excluded and the ranking approach repeated for the remaining DMUs. The resulting ranking is: $DMU_1 > DMU_4 > DMU_3 > DMU_6 > DMU_5 > DMU_2$.
Now, we use the model expressed in the previous sections. Suppose the trust level is $\alpha = 0.9$ and $\underset{\scriptscriptstyle-}{\lambda} = 0.01$. We put $\alpha$ and $\underset{\scriptscriptstyle-}{\lambda}$ into the RDEA models (14) and (15). Taking $DMU_2$ for example, the corresponding maximum programming model of $DMU_2$ will be:

Accordingly, under trust level $\alpha = 0.9$, the minimum programming problem of $DMU_2$ will be:

Solving the two programming problems above yields the optimal solutions of (18) and (19). We obtain $\theta^{\inf(\alpha)} = 0.7354$ and $\theta^{\sup(\alpha)} = 0.5652$, so the efficiency interval of $DMU_2$ is $[0.5652,\ 0.7354]$. The efficiency intervals of the other DMUs can be obtained in the same way, as presented in Table 6.
DMU | Efficiency Interval |
1 | [1.00,1.00] |
2 | [0.57,0.74] |
3 | [0.48,0.63] |
4 | [0.69,0.84] |
5 | [0.67,0.83] |
6 | [0.51,0.61] |
The approach outlined in Section 3 was employed to determine the efficiency maximum loss for each DMU, facilitating the ranking of the SC networks.

Obviously, $DMU_1$ has the smallest efficiency maximum loss. Removing its interval and repeating the procedure yields the ranking: $DMU_1 > DMU_4 > DMU_5 > DMU_2 > DMU_6 > DMU_3$.

As seen, both the efficiency scores and the rankings differ between the Xu et al. model [39] and our model.
5. A Related Case Study
One of the issues with the primary article is the limited number of DMUs relative to the measures. Therefore, we aimed to apply the modified model with a larger number of DMUs, specifically in dairy manufacturing companies that produce a variety of dairy products. A total of 23 product SCs have been identified.
A key factor in an effective performance measurement system is the selection of PMs, as these metrics guide managers in the decision-making process. Since PMs are chosen based on the specific context of the field being studied, they can vary across different situations. A review of the literature on SC performance measurement reveals a strong emphasis on cost-based measures. While their simplicity and ease of understanding make them appealing to managers, their rigidity and lack of alignment with strategic goals led researchers to develop new frameworks to address these limitations. One such effort resulted in the creation of the SC Operations Reference (SCOR) model by the SC Council [42]. This model incorporates key concepts of process measurement, providing a standardized framework for improvement. Additionally, the SCOR model includes key performance areas, such as delivery performance, production flexibility, and order fulfillment rate. The SCOR model operates at four levels of SCM. Level 1 includes top-level metrics—process, plan, source, make, and deliver—which span all parts of the company. Level 2 focuses on process categories and serves as a platform for implementing operations strategies. Level 3 targets process element levels, which are crucial for defining a company's sustainability in the competitive market. Level 4 addresses the implementation phase and outlines several steps for aligning management with changing business conditions. In this paper, the primary objective is to assess the SC performance of 23 networks that operate within the same industry but exhibit different characteristics.
Based on the measures proposed by this model and the nature of the field being surveyed, the most related metrics have been selected, which are as follows:
Input:
- Human resource
- Delay
- Direct and operation costs
Output
- Net profit
- Order fulfillment rate
- Amount of sale
- Percentage of on-time delivery
In fact, a list of these measures was sent to several experts in the field, each with at least 5 years of experience in the industry, who were asked to choose the most relevant metrics from the list. Finally, a list of 9 measures was derived, containing 3 inputs and 6 outputs; from these, the experts selected 3 inputs and 4 outputs. These inputs and outputs are:

1) Human resource (X1): The people who operate and staff an organization, in contrast to its material and financial resources. In this paper, the human resource for each DMU is the number of workers needed to run a machine. This number should be calculated regularly using standard industrial engineering tools. Each worker imposes a range of expenses on the organization, so controlling this measure provides a relevant input measure for a product.
2) Lead time (X2): Lead time refers to the total order cycle time, i.e., the duration between receiving a customer's order and delivering the goods. It plays a crucial role in providing a competitive advantage and has a direct impact on customer satisfaction. Bottlenecks, fluctuations in the volume of orders handled, and inefficient processes cause variations in activity completion time; hence, in this paper, lead time is considered a rough variable.

3) Direct and Operation Costs (X3): Direct cost is the sum of direct labour, raw material, and machine costs, while operation cost comprises financial and administrative costs. Several elements usually affect cost, so a precise value cannot easily be assigned to it; this leads us to use a model that handles such imprecision and makes the study's results more reliable. In the present study, rough variables were utilized to deal with the inaccuracy of cost. Thus, to assess SC performance under uncertainty, our assumptions are: first, delay and cost are rough variables; second, the other indices are deterministic.
4) Net Profit (Y1): Net profit is one of the measures that experts usually select in order to define the financial output of a company. It is easy to understand and welcomed by managers. Net profit represents how much money a company has earned from selling one unit of a certain product and is calculated as revenue minus expense.
5) Order fulfilment rate (Y2): The main task of an SC is not only to provide goods to customers; managers should also measure the rate at which orders are fulfilled. This measure is part of a broader metric, customer satisfaction. The interview is a readily available and simple tool for exploring customers' perceptions; their needs, the level of service received versus their expectations, product availability, and similar questions are the main areas an interview should cover.

6) Amount of sales (Y3): The amount of sales can reflect different aspects of a company, such as marketing performance and customer satisfaction, which is why it provides a useful measure of an organization's performance as a whole. However, several elements usually affect this amount, and a definite value cannot easily be assigned to it for different time intervals; for example, it changes across the months of the year. This reason, among many others, guides us to use a model that can overcome this problem and make the study's results more reliable. In the present study, we have utilized rough variables to deal with the imprecision in the amount of sales.
7) Percentage of on-time Delivery (Y4): The delivery channel, vehicle scheduling, and warehouse location are critical aspects of any standard delivery distribution method. The selection of appropriate channels, scheduling, and location strategies can enhance delivery performance. Timely delivery is a critical component of delivery performance that ascertains the occurrence of an optimal delivery and serves as a benchmark for customer service quality.
To establish the rough variables, we developed an approach that requires four data points to define two intervals, $([a,b],[c,d])$, where $c \leq a < b \leq d$. The upper approximation of a rough variable is delineated by the interval of the corresponding data from two distinct years, such as 2009 and 2010. If the sales in 2009 are lower than those in 2010 for the same season, let $p = \frac{1}{2}(a + b)$. The lower approximation of the rough variable can then be expressed as an interval around $p$, written $[p - i, p + i]$, where $i$ reflects the distribution features of the sales data.
The transformation approach described in the preceding section converts the rough variables into interval data. With the trust level $\alpha$ set to 0.9, the $\alpha$-pessimistic and $\alpha$-optimistic values of each rough variable can be computed and then utilized in the RDEA model. Letting $\theta^{\sup(\alpha)}$ and $\theta^{\inf(\alpha)}$ be the optimal solutions of the RDEA model, the efficiency interval of $DMU_0$ can then be calculated.
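As an illustration of this transformation step, the following Python sketch (ours; the function names are hypothetical) implements the piecewise $\alpha$-optimistic and $\alpha$-pessimistic operators given in Section 3 and applies them to the lead-time rough variable of DMU1 from Table 7:

```python
# Sketch of the rough-to-interval transformation, implementing the piecewise
# alpha-optimistic / alpha-pessimistic operators of Section 3. Function names
# are ours, not from the original study.

def alpha_optimistic(a, b, c, d, alpha):
    """alpha-optimistic value of the rough variable ([a, b], [c, d]), c <= a < b <= d."""
    if alpha <= (d - b) / (2 * (d - c)):
        return (1 - 2 * alpha) * d + 2 * alpha * c
    if alpha >= (2 * d - a - c) / (2 * (d - c)):
        return 2 * (1 - alpha) * d + (2 * alpha - 1) * c
    return (d * (b - a) + b * (d - c) - 2 * alpha * (b - a) * (d - c)) / ((b - a) + (d - c))

def alpha_pessimistic(a, b, c, d, alpha):
    """alpha-pessimistic value of the rough variable ([a, b], [c, d])."""
    if alpha <= (a - c) / (2 * (d - c)):
        return (1 - 2 * alpha) * c + 2 * alpha * d
    if alpha >= (b + d - 2 * c) / (2 * (d - c)):
        return 2 * (1 - alpha) * c + (2 * alpha - 1) * d
    return (c * (b - a) + a * (d - c) + 2 * alpha * (b - a) * (d - c)) / ((b - a) + (d - c))

# Lead time of DMU1 in Table 7: ([10, 14], [8, 18]) at trust level 0.9.
print(alpha_optimistic(10, 14, 8, 18, 0.9))   # 10.0 -> favourable (small) input value
print(alpha_pessimistic(10, 14, 8, 18, 0.9))  # 16.0 -> unfavourable (large) input value
```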
The data are given in Table 7. By solving the two programs above for each DMU, the efficiency intervals of the DMUs can be derived.
DMUs | Amount of Sale | Percentage of On-Time Delivery | Order Fulfillment Rate | Net Profit | Human Resource | Lead Time (a) | Lead Time (b) | Lead Time (c) | Lead Time (d) | Direct and Operation Costs (a) | Direct and Operation Costs (b) | Direct and Operation Costs (c) | Direct and Operation Costs (d) |
DMU1 | 231,033 | 90% | 80% | 500 | 83 | 10 | 14 | 8 | 18 | 520 | 560 | 450 | 600 |
DMU2 | 181,240 | 92% | 85% | 400 | 64 | 12 | 18 | 10 | 20 | 560 | 600 | 500 | 650 |
DMU3 | 358,827 | 93% | 80% | 500 | 121 | 14 | 20 | 13 | 25 | 620 | 720 | 560 | 780 |
DMU4 | 6,452,260 | 89% | 80% | 200 | 667 | 20 | 25 | 18 | 29 | 710 | 730 | 680 | 760 |
DMU5 | 276,945 | 85% | 92% | 500 | 85 | 18 | 21 | 16 | 25 | 565 | 585 | 470 | 620 |
DMU6 | 620,343 | 80% | 95% | 700 | 827 | 8 | 12 | 5 | 15 | 620 | 685 | 560 | 720 |
DMU7 | 1,196,595 | 94% | 98% | 800 | 2,272 | 8 | 12 | 5 | 15 | 540 | 570 | 520 | 610 |
DMU8 | 2,511,640 | 96% | 87% | 200 | 223 | 12 | 15 | 10 | 20 | 600 | 680 | 590 | 700 |
DMU9 | 4,846,640 | 92% | 87% | 300 | 408 | 11 | 18 | 9 | 20 | 710 | 750 | 590 | 800 |
DMU10 | 2,778,270 | 86% | 87% | 200 | 261 | 15 | 18 | 14 | 26 | 720 | 760 | 700 | 10 |
DMU11 | 989,850 | 87% | 90% | 200 | 255 | 20 | 25 | 18 | 30 | 650 | 700 | 560 | 750 |
DMU12 | 609,138 | 95% | 90% | 200 | 124 | 11 | 18 | 10 | 25 | 610 | 695 | 570 | 750 |
DMU13 | 23,524 | 92% | 92% | 500 | 314 | 19 | 22 | 15 | 26 | 580 | 610 | 540 | 10 |
DMU14 | 145,640 | 90% | 92% | 250 | 186 | 8 | 10 | 5 | 12 | 710 | 750 | 680 | 810 |
DMU15 | 14,868 | 91% | 92% | 650 | 239 | 5 | 9 | 4 | 12 | 510 | 570 | 490 | 80 |
DMU16 | 279,075 | 85% | 90% | 400 | 593 | 10 | 12 | 8 | 15 | 500 | 540 | 450 | 90 |
DMU17 | 1,186,665 | 87% | 90% | 150 | 71 | 15 | 18 | 12 | 25 | 600 | 650 | 480 | 90 |
DMU18 | 263,270 | 98% | 90% | 1,000 | 245 | 11 | 18 | 10 | 25 | 450 | 580 | 400 | 50 |
DMU19 | 27,572 | 94% | 90% | 700 | 78 | 10 | 12 | 8 | 15 | 610 | 680 | 550 | 700 |
DMU20 | 4,574,400 | 93% | 87% | 300 | 275 | 10 | 12 | 8 | 15 | 490 | 550 | 400 | 610 |
DMU21 | 143,968 | 88% | 95% | 600 | 355 | 8 | 10 | 5 | 12 | 710 | 750 | 680 | 780 |
DMU22 | 95,932 | 89% | 90% | 350 | 121 | 12 | 18 | 10 | 24 | 450 | 600 | 400 | 620 |
DMU23 | 8,089 | 92% | 90% | 900 | 46 | 11 | 18 | 9 | 20 | 520 | 610 | 480 | 690 |
Based on the proposed definition, the next step is to rank these SC networks. Specifically, the efficiency maximum loss for each DMU can be calculated, and the DMU with the smallest efficiency maximum loss will represent the most efficient SC. The underlying logic of this evaluation is that a DMU is considered efficient if it is capable of producing the maximum possible outputs; otherwise, it is deemed inefficient.
Following the definition reveals that DMUs 5, 6, 7, 8, 12, 13, 14, 15, 17, 18, 19, 20, 21, and 23 are efficient, because their upper and lower limits both equal 1; in other words, their efficiency maximum loss is 0. The formula, however, cannot rank these efficient DMUs, so the proposed definition does not provide broad applicability for ranking intervals. For the remaining DMUs, the formula yields the following results:
$DM{{U}_{4}}>DM{{U}_{9}}>DM{{U}_{10}}>DM{{U}_{11}}>DM{{U}_{1}}>DM{{U}_{2}}>DM{{U}_{3}}>DM{{U}_{16}}>DM{{U}_{22}}.$
6. Conclusions
This study critically reviewed the work by Xu et al. [39], identifying several significant limitations in their proposed model for SC performance evaluation, including incorrect results, absence of the convexity axiom, insufficient characterization of output variables, inadequate handling of the human resources index, and inability to rank efficient DMUs. To address these issues, a modified framework was developed, incorporating adjustments to overcome the identified shortcomings. The proposed model excludes the convexity condition for specific variables (Y1, Y5, and Y6) to account for the unique characteristics of percentage-based measures, preventing unrealistic increases beyond a value of 1. These adjustments resulted in the development of two enhanced models, denoted as models (14) and (15), which provide a more accurate and adaptable approach to performance evaluation under uncertain conditions. Despite these advancements, the model's effectiveness is constrained by the limited number of DMUs compared to the number of measures, which hinders its ability to distinguish between efficient and inefficient DMUs. Additionally, insufficient data quality and quantity further limit its practical application. To overcome these challenges, it is recommended that future studies prioritize gathering a more robust dataset with an adequate number of DMUs relative to measures. Finally, by applying models (7), (8), (14), and (15) under varying conditions, this study demonstrated that the results differ significantly, emphasizing the importance of model selection in performance evaluation. The findings contribute to the advancement of SC performance assessment methodologies, offering a more robust framework for handling uncertainty and complexity in real-world applications.
The data used to support the research findings are available from the corresponding author upon request.
The authors declare no conflict of interest.