DV-Hop Positioning Method Based on Multi-Strategy Improved Sparrow Search Algorithm
Abstract:
In order to address the problem of large positioning errors in non-ranging positioning algorithms for wireless sensor networks (WSNs), this study proposes a Distance Vector-Hop (DV-Hop) positioning method based on a multi-strategy improved sparrow search algorithm (SSA). The method first introduces circle chaotic mapping, an adaptive weighting factor, Gaussian mutation and an opposition-based learning strategy to improve the iteration speed and optimization accuracy of the sparrow algorithm, and then uses the improved SSA to estimate the positions of unknown nodes. Experimental results show that, compared with the original method, the improved DV-Hop algorithm significantly improves positioning accuracy.
1. Introduction
A WSN is a self-organizing network formed by multiple nodes equipped with small or miniature sensors, and accurate node localization is key to WSN applications [1], [2]. Current WSN localization algorithms are usually divided into two main categories: ranging-based and non-ranging localization algorithms [3], [4], [5]. The latter are characterized by low energy consumption, low implementation cost and no need for additional hardware support, while still meeting the needs of many applications. Non-ranging positioning algorithms are therefore of greater research importance.
The DV-Hop localization algorithm, a classical non-ranging localization algorithm, has the advantages of simple steps and a large localization coverage area, but it is strongly influenced by the network topology and suffers from large localization errors. In response, many scholars have proposed improvements. Zhou et al. [6] improved the hop-count and location-estimation steps by using information received from different anchor nodes for hop-count correction and calculating the coordinates of the unknown node with weighted least squares. Zhao and Ma [7] proposed a refined hop-count method that divides the communication radius into two levels to bring the estimated inter-node distance closer to the true value. Shi et al. [8] corrected the average hop distance between beacon nodes based on the similarity coefficients of multi-hop shortest paths between nodes and optimized the node initialization locations using a simulated annealing algorithm. Li [9] first reduced the hop-count error by selecting beacon nodes, then weighted the hop distances of anchor nodes according to distance, and finally optimized the node position estimates using a particle swarm algorithm. Chu and Lv [10] proposed a node localization method based on differential evolution; however, differential evolution is highly sensitive to its control parameters, and if they are not chosen appropriately, population diversity is easily lost during evolution and the optimal solution cannot be found. Jia et al. [11] used RSSI measurements and error correction to improve the DV-Hop algorithm, which improved the positioning accuracy of unknown nodes to a certain extent, but the results are still not ideal.
The above schemes improve positioning accuracy to some extent, but limitations remain. To further improve the positioning accuracy of the DV-Hop algorithm, this study proposes a DV-Hop positioning method based on a modified SSA (MSSADV-Hop). First, the ability of the SSA to escape local optima and its global optimization accuracy are improved through multiple strategies. The improved SSA is then used to find the optimal value of the node localization objective function in the localization stage, reducing the localization error. Finally, the validity of the improved method is verified through simulation experiments.
2. Relevant Theories
The positioning process of the DV-Hop algorithm is divided into the following three steps:
Step 1: Obtaining the minimum number of hops. Each anchor node sends a packet to the surrounding nodes by means of a broadcast, which contains information such as coordinates and hop count. At the end of the broadcast, all nodes obtain the minimum number of hops according to the update mechanism.
Step 2: Estimating the average hop distance. By combining the position information of any two anchor nodes with the minimum hop count between them, the average hop distance can be obtained through Eq. (1). When unknown node $u$ receives the average hop size $HopSize_i$ from anchor node $i$, the estimated distance between the two is calculated using Eq. (2).
Step 3: The coordinates of the unknown nodes can be estimated using the trilateration method.
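As an illustrative sketch rather than the paper's exact implementation, the three steps above can be coded as follows; the helper names (`min_hops`, `dv_hop_estimate`), the use of per-anchor average hop sizes, and the linearized least-squares trilateration are assumptions for the example:

```python
import numpy as np

def min_hops(positions, comm_radius):
    """Step 1: all-pairs minimum hop counts via Floyd-Warshall on the
    connectivity graph (edge = pair of nodes within the communication radius)."""
    n = len(positions)
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=2)
    hops = np.where(d <= comm_radius, 1.0, np.inf)
    np.fill_diagonal(hops, 0.0)
    for k in range(n):
        hops = np.minimum(hops, hops[:, [k]] + hops[[k], :])
    return hops

def dv_hop_estimate(anchors, hops_ua, hops_aa):
    """Steps 2-3: estimate an unknown node's position from anchor coordinates,
    unknown-to-anchor hop counts and anchor-to-anchor hop counts."""
    m = len(anchors)
    # Average hop size per anchor: total true anchor-anchor distance / total hops
    dist_aa = np.linalg.norm(anchors[:, None, :] - anchors[None, :, :], axis=2)
    hop_size = np.array([dist_aa[i].sum() / hops_aa[i].sum() for i in range(m)])
    # Estimated unknown-to-anchor distances
    d_est = hop_size * hops_ua
    # Trilateration: subtract the last anchor's circle equation to linearize,
    # then solve the resulting system in the least-squares sense
    A = 2.0 * (anchors[:-1] - anchors[-1])
    b = (d_est[-1] ** 2 - d_est[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(anchors[-1] ** 2))
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xy
```

With four anchors at the corners of a square and a symmetric hop pattern, the estimate lands at the center, as expected for a consistent system.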
The SSA is a heuristic optimization algorithm based on the foraging behaviour of sparrows in nature, proposed by Xue and Shen [12] in 2020. During foraging, the individuals of the sparrow population are divided into discoverers, followers and vigilantes, each with its own foraging duty. The algorithm has the advantages of few parameters, simple implementation, easy tuning, strong search ability and fast convergence, and is widely used in solving optimization problems [13], [14].
3. DV-Hop Positioning Method Based on Multi-strategy Improved SSA
Since there is a large error between the unknown node coordinates calculated by the original DV-Hop algorithm and their true values, this study introduces an SSA into the original localization algorithm to solve for the optimal unknown node coordinates using iterated sparrow particles. The distance between the unknown node and each beacon node is calculated and subtracted from the corresponding node distance obtained by the DV-Hop algorithm, and the difference between the two is converted into a fitness function. The position of the unknown node is obtained by finding the particle with the smallest fitness value [15]. The fitness function constructed in this study is as follows:
where $f(x, y)$ represents the fitness value of particle $(x, y)$, $M$ represents the number of beacon nodes, $x_i$ and $y_i$ represent the horizontal and vertical coordinates of beacon node $i$, respectively, and $d_i$ represents the distance between the unknown node and beacon node $i$.
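A minimal sketch of this fitness function; whether the mean or the sum is taken, and whether residuals are absolute or squared, is not specified here, so the form below is an assumption:

```python
import numpy as np

def fitness(p, beacons, d_est):
    """Fitness of a candidate position p = (x, y): mean absolute difference
    between candidate-to-beacon distances and the DV-Hop distance estimates d_i."""
    dists = np.linalg.norm(beacons - np.asarray(p), axis=1)
    return float(np.mean(np.abs(dists - d_est)))
```

A sparrow particle with fitness 0 would match every estimated distance exactly; the optimizer searches for the particle minimizing this value.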
Chaotic maps are often used to improve population initialization due to their non-linearity, randomness, ergodicity and fractal nature. This strategy can enhance population diversity and mitigate shortcomings such as a small global search range and a tendency to fall into local optima [16]. Commonly used chaotic maps include the Logistic, Tent, Sine, Singer and Circle maps. The chaotic sequence distribution is shown in Figure 1.
In order to make the initial solution of the sparrow population more uniformly distributed, the circle chaotic mapping with higher stability was chosen in this study. The expression of this chaotic mapping is shown below.
where, $a$ and $b$ are the influence factors. It has been shown experimentally that the distribution of particles is more uniform when $a$ is taken as 0.5 and $b$ as 0.2.
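A sketch of circle-map population initialization, assuming the common parameterization x_{k+1} = (x_k + b - a/(2π)·sin(2π·x_k)) mod 1 with a = 0.5 and b = 0.2; the scaling of chaotic values into the search bounds is an implementation assumption:

```python
import numpy as np

def circle_map_init(pop_size, dim, lb, ub, a=0.5, b=0.2, seed=0):
    """Initialize a population with the circle chaotic map
    x_{k+1} = (x_k + b - a/(2*pi) * sin(2*pi*x_k)) mod 1,
    then scale the chaotic values from [0, 1) into [lb, ub]."""
    rng = np.random.default_rng(seed)
    x = rng.random(dim)                      # random seed value per dimension
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        x = np.mod(x + b - a / (2 * np.pi) * np.sin(2 * np.pi * x), 1.0)
        pop[i] = lb + x * (ub - lb)
    return pop
```

Because consecutive individuals are generated by iterating the map, the initial population spreads over the search space more evenly than purely random sampling.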
In the traditional sparrow algorithm, the discoverer may fail to search the initial space sufficiently, causing the solution to fall into a local optimum and the required optimization accuracy to be missed. To extend the discoverer's search range, this study first lets the previous generation of discoverers and the global optimal position jointly act on the current discoverer position. A dynamic weighting factor is then introduced into the discoverer's iterative formula [17]. This strategy allows the algorithm to search more widely in early iterations, avoiding premature convergence to a local optimum. As the number of iterations increases, the weighting factor adaptively decreases and the algorithm searches deeply around the optimal position, improving its optimization accuracy.
The dynamic weighting factors $\omega$ and the formulae for the new discoverer positions are as follows:
where $f_{i, j}^t$ is the fitness value of the $j$-th dimension of individual $i$ at the current iteration $t$, $f_{i, g}^t$ is the global best fitness value at this time, $Q$ is a random number obeying a normal distribution, and $L$ denotes a $1 \times d$ matrix with all elements equal to 1.
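Since the weight and position-update formulas are given only in outline here, the following is an illustrative sketch: a weight that decays from w_max to w_min over the iterations, pulling discoverers from broad exploration toward the global best. The linear decay law and the normally distributed step term are assumptions, not the paper's exact equation:

```python
import numpy as np

def discoverer_update(pop, best, t, T, w_max=0.9, w_min=0.4, seed=None):
    """Illustrative discoverer update with an adaptively decaying weight:
    wide exploration early, fine search around the global best late."""
    rng = np.random.default_rng(seed)
    w = w_max - (w_max - w_min) * t / T          # weight decays with iteration t
    Q = rng.standard_normal(pop.shape)           # normally distributed step
    return w * pop + (1.0 - w) * best + Q * (best - pop) / (T + 1)
```

Early on (w near w_max) discoverers mostly keep their own positions; late in the run they cluster around the best solution for a finer local search.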
A Gaussian mutation operator is introduced to perturb the global optimal solution obtained at each iteration; this prevents the algorithm from falling into local optima and premature convergence, while also maintaining the diversity of individuals in the population [18]. The Gaussian probability density function is shown below.
The formula for the specific mutation operation is shown below.
where $x_{i, j}^t$ is the value of the parameter after the normal algorithm iteration, $Gauss\left(x_{i, j}^{t+1}\right)$ is the value after mutation, and $N(0,1)$ denotes a random number drawn from a normal distribution with mean 0 and standard deviation 1.
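A sketch of the mutation step, assuming the common form Gauss(x) = x · (1 + N(0, 1)):

```python
import numpy as np

def gaussian_mutation(x, rng=None):
    """Perturb the current global best: Gauss(x) = x * (1 + N(0, 1)),
    where N(0, 1) is standard-normal noise drawn per dimension."""
    if rng is None:
        rng = np.random.default_rng()
    return x * (1.0 + rng.standard_normal(np.shape(x)))
```

The multiplicative form keeps the perturbation proportional to the magnitude of the current best, so large coordinates are explored on a correspondingly larger scale.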
Introducing an elite opposition-based learning strategy allows the algorithm to discard opposite solutions whose fitness values are inferior to the original solution and retain the better one, thereby expanding the search range and improving the ability to find the optimum [19]. The strategy is expressed by the following formula:
where $n_j$ denotes a $j$-dimensional random number obeying a $(0,1)$ standard uniform distribution, $x_{j, g}(t)$ denotes the global optimal solution in the $j$-th dimension at iteration $t$, and $x_{j,g}^{\prime}(t)$ denotes the generated opposite solution.
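A sketch of the elite opposition-based step, assuming the form x'_j = n_j·(lb_j + ub_j) − x_j with n_j ~ U(0, 1); whether the bounds are the static search bounds or dynamic elite bounds, and the clipping used to keep the opposite solution feasible, are assumptions here:

```python
import numpy as np

def opposition_solution(best, lb, ub, rng=None):
    """Generate the opposite of the global best: x'_j = n_j * (lb + ub) - x_j,
    with n_j drawn from a standard uniform distribution per dimension."""
    if rng is None:
        rng = np.random.default_rng()
    n = rng.random(np.shape(best))
    return np.clip(n * (lb + ub) - best, lb, ub)  # keep the result in bounds
```

The opposite solution lands on the far side of the search interval from the current best, which is what widens the effective search range.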
To further improve the algorithm's optimization capability, this study adopts a dynamic selection strategy in which the Gaussian mutation strategy and the opposition-based learning strategy are executed alternately with a certain probability to dynamically update the target position. Which strategy performs the target position update is determined by the selection probability $p$, which is calculated as follows:
Specifically, the Gaussian mutation strategy is selected if $rand < p$, and the opposition-based learning strategy otherwise. Although the two perturbation strategies above improve the algorithm's ability to escape local optima to some extent, there is no guarantee that the new solution is better than the original one. Therefore, after the perturbation, whether to update the target position is determined by comparing the fitness values of the two.
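The two perturbations and the greedy acceptance test can be combined as in the sketch below; the decay law p = 1 − t/T for the selection probability is an assumption, since only the role of p is described here:

```python
import numpy as np

def perturb_best(best, fit, lb, ub, t, T, rng=None):
    """Choose Gaussian mutation with probability p (assumed here to decay as
    p = 1 - t/T) and opposition-based learning otherwise, then keep the
    perturbed solution only if its fitness improves (greedy acceptance)."""
    if rng is None:
        rng = np.random.default_rng()
    p = 1.0 - t / T                          # assumed selection probability
    if rng.random() < p:
        cand = best * (1.0 + rng.standard_normal(np.shape(best)))  # Gaussian
    else:
        cand = rng.random(np.shape(best)) * (lb + ub) - best       # opposition
    cand = np.clip(cand, lb, ub)
    return cand if fit(cand) < fit(best) else best
```

Because a candidate is kept only when its fitness is lower, the target position never gets worse, while the alternating perturbations still let it escape local optima.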
The specific steps of the MSSADV-Hop are as follows:
Step 1: Deploy a number of WSN nodes in the target area and estimate the minimum number of hops between the anchor nodes and the unknown nodes, as well as the average distance per hop.
Step 2: Set the parameters of the improved sparrow algorithm, such as the population size, the proportions of the three roles (discoverer, follower and vigilante), the maximum number of iterations, the alarm value, the safety value and the dimension. Then initialize the population using the circle chaotic map.
Step 3: Calculate and rank the fitness of the individual sparrows to select the current best and worst fitness values, as well as the corresponding positions.
Step 4: From the individuals with better fitness values, select some as discoverers and update their positions according to Formula (6); then update the positions of the followers and vigilantes according to the original algorithm.
Step 5: Perturb the optimally positioned individual using the Gaussian mutation and opposition-based learning strategies to produce a new solution.
Step 6: Decide whether to keep the perturbed position by comparing the fitness values of the old and new solutions.
Step 7: If the maximum number of iterations has not been reached, return to Step 4. Otherwise, end the iteration and take the optimal solution of the objective function as the position of the unknown node.
The exact process is shown in Figure 2.
4. Influence of Experimental Environment Parameters on Positioning Error
To verify the effectiveness of the MSSADV-Hop, the traditional DV-Hop algorithm, the DV-Hop based on Particle Swarm Optimization (PSODV-Hop) [20], the DV-Hop optimized by the original SSA (SSADV-Hop) and the proposed MSSADV-Hop were compared to investigate the localization performance of each algorithm under different node densities, anchor node densities and communication radii. To reduce the error caused by experimental randomness, the average of 100 runs of each algorithm was taken. The localization performance of the algorithms is compared through the average error over all nodes, which is calculated as follows:
where $i$ denotes the experiment index, $M$ denotes the number of unknown nodes, $\left(X_d, Y_d\right)$ and $\left(x_d, y_d\right)$ denote the estimated and actual positions of unknown node $d$, respectively, and $\overline{\text{error}}$ denotes the average error of all unknown nodes over 100 experiments.
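A sketch of this error metric; normalizing the mean Euclidean error by the communication radius R is a common convention in DV-Hop studies and is assumed here:

```python
import numpy as np

def mean_localization_error(est, true, comm_radius):
    """Average localization error over all unknown nodes, normalized by the
    communication radius: mean_d ||(X_d, Y_d) - (x_d, y_d)|| / R."""
    err = np.linalg.norm(est - true, axis=1)   # per-node Euclidean error
    return float(err.mean() / comm_radius)
```

In a full experiment this value would additionally be averaged over the 100 independent runs.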
The experimental parameters are shown in Table 1, and the effect of different node densities on positioning is shown in Figure 3.
| Parameter | Experimental Environment 1 | Experimental Environment 2 |
| --- | --- | --- |
| Network area range | 100 m × 100 m | 200 m × 200 m |
| Total number of nodes | 80, 120, 160, 200, 240 | 100, 150, 200, 250, 300 |
| Anchor node density (%) | 20 | 30 |
| Communication radius (m) | 30 | 50 |
| Number of experiments | 100 | 100 |
As can be seen from Figure 3, the overall localization error of the four algorithms gradually decreases as the node density increases, with the largest drop occurring at low node densities. After the number of nodes exceeds 200, the localization error curve of each algorithm levels off. This is because, within a fixed sensing area, once the node density exceeds a certain proportion, interference between nodes increases and the signal-to-noise ratio decreases, which limits further gains in positioning accuracy. Compared with the other algorithms, the localization error of the MSSADV-Hop is always the lowest at every node density.
The simulation parameters are shown in Table 2, and the effect of different anchor node densities on positioning is shown in Figure 4.
| Parameter | Experimental Environment 3 | Experimental Environment 4 |
| --- | --- | --- |
| Network area range | 100 m × 100 m | 200 m × 200 m |
| Total number of nodes | 100 | 200 |
| Anchor node density (%) | 10, 15, 20, 25, 30 | 20, 25, 30, 35, 40 |
| Communication radius (m) | 30 | 50 |
| Number of experiments | 100 | 100 |
As can be seen from Figure 4, the localization error of the four algorithms shows an overall decreasing trend as the anchor node density increases. This is because, as the number of anchor nodes increases, the average hop-distance error for each node decreases and the distance between the unknown node and the anchor nodes is estimated more accurately. Compared with the other three algorithms, the MSSADV-Hop always has the smallest positioning error under the same conditions.
The experimental parameters are shown in Table 3, and the effect of different communication radii on positioning is shown in Figure 5.
| Parameter | Experimental Environment 5 | Experimental Environment 6 |
| --- | --- | --- |
| Network area range | 100 m × 100 m | 200 m × 200 m |
| Total number of nodes | 100 | 200 |
| Anchor node density (%) | 20 | 30 |
| Communication radius (m) | 20, 25, 30, 35, 40 | 40, 45, 50, 55, 60 |
| Number of experiments | 100 | 100 |
As can be seen from Figure 5, the overall localization error of the four algorithms tends to decrease as the communication radius increases. This is because, when the communication radius is small, there are few anchor nodes within the communication range of an unknown node, so the node cannot accurately estimate its own position. Across all communication radii tested, the MSSADV-Hop consistently achieves lower positioning errors than the other three algorithms under the same conditions.
5. Conclusion
To address the low positioning accuracy of the traditional DV-Hop algorithm, this study proposes a DV-Hop positioning method based on a multi-strategy improved SSA to optimize the error-prone positioning steps. The method first uses circle chaotic mapping to improve the distribution of the initial sparrow population. A dynamic weighting factor is then introduced into the discoverer's position iteration formula, which widens the global search range at the beginning of the iteration and enables finer local search at the end. Opposition-based learning and Gaussian mutation are added as position perturbations to improve the algorithm's ability to escape local optima. Finally, the improved sparrow algorithm is applied to the calculation of the unknown node coordinates, making the localization more accurate. The experimental results show that the MSSADV-Hop proposed in this study significantly improves localization accuracy and exhibits good localization performance.
Author Contributions: Conceptualization, W.L. and K.J.; methodology, W.L., K.J., J.B. and Y.L.; software, J.B. and K.J.; validation, W.L. and Y.L.; investigation, W.L. and J.B.; data curation, W.L. and K.J.; writing---original draft preparation, K.J. and J.B.; writing---review and editing, K.J. All authors have read and agreed to the published version of the manuscript.
Data Availability Statement: The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest: The authors declare no conflict of interest.