Advanced Analysis of Blast Pile Fragmentation in Open-Pit Mining Utilizing 3D Point Cloud Technology
Abstract:
An innovative approach utilizing 3D laser scanning technology has been introduced in open-pit mining for capturing spatial data of blast piles. RANSAC for plane fitting and DBSCAN for clustering are applied to outline rock block contours accurately. Quick calculation of rock block volumes and maximum particle sizes is enabled through 3D convex hulls and Oriented Bounding Boxes (OBB). Delaunay triangulation of 3D point cloud data is used to create a detailed mesh model for precise volume estimation of blast piles. Indoor testing revealed relative errors of approximately 4.61% for block volumes and 4.75% for particle sizes, while field applications showed an average rock block identification accuracy of 80.4%, increasing with block size. Estimated versus actual blast pile volumes showed a relative error of 4.85%, with computational errors for the pile's height, forward throw distance, and lateral extent being 2.92%, 3.91%, and 4.29%, respectively.
1. Introduction
Blast fragmentation has been established as a critical metric for the assessment of blasting outcomes, serving as a foundation for the adjustment and refinement of subsequent blasting parameters. In situ blasting scenarios often encounter challenges with oversized blast fragmentation, necessitating mechanical crushing or secondary blasting. Such situations negatively impact subsequent loading and transportation processes, leading to reduced operational efficiency. On the contrary, excessively fine blast fragmentation, indicative of superfluous explosive usage, not only escalates production costs but also complicates loading and beneficiation due to the prevalence of fine materials. Hence, the accurate identification of blast fragmentation sizes is a pivotal area of investigation within the blasting field [1].
The methodologies for blast fragmentation identification are broadly classified into direct and indirect methods. The direct measurement technique, primarily grounded in field statistics [2], is recognized for its precision. However, it is often deemed labor-intensive and inefficient, especially in complex mining environments where it also poses significant safety risks. In contrast, indirect methods, which largely leverage photographic image processing, have evolved considerably. Sereshki et al. [3] introduced a Matlab-based algorithm designed for the automatic delineation of rock particle boundaries. Furthermore, Babaeian et al. [4] applied the Split-Desktop software for image analysis, obtaining metrics such as fragmentation distribution, uniformity index, and average size of fractured rocks. They formulated new criteria for identifying adhesion bodies, employing eXtreme Gradient Boosting (XGBoost) theory and methodologies, thereby effectively segregating adhesion zones within rock blocks. In a similar vein, Shan [5] utilized the K-means clustering approach for image categorization, which demonstrated substantial efficacy in image segmentation.
The limitations of 2D image recognition, notably its inability to capture depth information, render it challenging for accurately calculating the particle size of rock blocks. Conversely, 3D laser scanning technology, as an advanced real-scene scanning method, swiftly acquires extensive, dense data points of the target object, thus creating a realistic replication. Onederra et al. [6] were pioneers in using high-resolution 3D laser scanning technology for quantifying the "entire pile" fragmentation resulting from full-scale production blasting. Data gathered from production blasts at the Esperanza mine demonstrated the viability of comprehensive automated 3D data analysis at sufficient resolution. Liu et al. [7] incorporated Monte Carlo simulation as a statistical estimation approach for Blast Fragmentation Measurement and Prediction (BFMP). In controlled settings, the congruence of statistical estimations from combined 2D image processing and 3D laser scanning with physical water tank measurements was examined, revealing a closer alignment of 3D laser scanning results with actual physical measurements. Engin et al. [8] collected point cloud data of blasted rock piles through ground laser scanning and transitioned to a computer environment for 3D modeling and surface reconstruction. Their process included identifying fragments within the pile and determining their size distribution.
In the realm of blast pile evaluation, characteristics are predominantly defined by spatial and morphological parameters. These parameters include, but are not limited to, the volume, as well as geometric shape attributes such as the forward throw distance, the lateral extent, and the elevation of the blast pile. Hudaverdi et al. [9] utilized hierarchical cluster analysis for classifying blasting data into similarity groups and applied discriminant analysis for group membership testing, further employing multiple regression analysis to devise a predictive model for average particle size estimation of slag piles. Ge et al. [10] implemented 3D laser scanning for scanning landslide deposits, calculating both volume and block distribution of the accumulations. Bamford et al. [11] developed a Deep Neural Network (DNN) architecture to predict characteristic sizes of rock fragments from 2D images of slag piles. Hudaverdi et al. [12] introduced a comprehensive multivariate analysis approach for blast fragment prediction, evaluating several blasts conducted across diverse mines and rock layers globally. Bamford et al. [13] utilized drones for monitoring in three phases: pre-blasting, post-blasting, and post-clearance, focusing on surveying pit walls, predicting in-situ block size distribution, and assessing blast-induced damage through a Digital Elevation Model (DEM).
Despite these advancements, the focus of most research, both domestic and international, has been predominantly on predicting the morphology of blast piles and their influencing factors, with less emphasis on the computation of characteristic blast information. Hence, this study leverages 3D laser scanning technology to explore blast pile distribution characteristics. Initial stages involve preprocessing of point cloud data and identifying rock block contours using a combination of segmentation and clustering algorithms. This is followed by employing Delaunay's algorithm for 3D modeling of blast pile point cloud data. The blast pile's volume is then calculated using projection methods, and the intelligent extraction of key spatial parameters, height, forward throw distance, and lateral span, is achieved via coordinate methods. This approach aspires to provide methodological and technical support for optimizing blasting designs in open-pit mines.
2. Acquisition and Processing of 3D Laser Scanning Point Cloud Data
The core principles of 3D laser scanning technology encompass four primary areas: distance measurement, angular displacement, scanning, and orientation [14]. Distance measurement techniques are categorized into triangulation, pulse, and phase methods. The pulse method, predominantly used in a majority of contemporary 3D laser scanners, is characterized by its ability to measure extensive distances and its applicability in both indoor and outdoor environments. For medium-range measurements, the phase method is preferred due to its heightened accuracy. The triangulation method, suitable for short-range indoor measurements, is recognized for its superior precision. In the pulse measurement process, a series of pulse signals are emitted using a pulsed fiber laser, and a highly sensitive Avalanche Photo Diode (APD) detector captures the echo reflected from the target. The distance to the object is determined by computing the time difference of the laser signal's round-trip. Illustrated in subgraph (a) of Figure 1, the laser transmitter emits a pulse signal that, after reflecting off the target, reaches the receiver. Given that the measured distance is $S$, the speed of light is $C$, and the round-trip time of the laser signal is $t$, the distance is calculated as $S=\frac{1}{2} \times C \times t$.
In the device's internal mechanism, as depicted in subgraph (b) of Figure 1, the angle $\alpha$ between the emitted laser beam and the horizontal direction, and the angle between the beam and the vertical direction, are recorded, along with the time $t$ taken from emission to reception. Using time $t$ and the speed of light $C$, the scanner's distance to the object is measured. The point cloud data is centered on the scanner's origin, with the internal setup designating one horizontal direction as the X-axis, the perpendicular horizontal direction as the Y-axis, and the axis perpendicular to the horizontal plane as the Z-axis. These conventions allow for the precise determination of any point's coordinates.
The acquisition of comprehensive, low-noise, high-precision initial point cloud data is paramount for the subsequent processing of data obtained through 3D laser scanning technology.
The fundamental process for onsite scanning to acquire 3D point cloud data is depicted in Figure 2.
An initial step in collecting 3D point cloud data involves conducting an onsite survey. This phase encompasses understanding the scope of scanning, the environmental conditions, and identifying any obstacles like trees around the object. This information forms the basis for the overall scanning strategy.
Post consideration of scanning range and obstructions such as trees, the placement of scanner sites is critical. The primary objective is to comprehensively capture the object's surface point cloud. Subsequent to site selection, the positioning of survey targets relative to the site is established. Following these steps, several crucial scanning parameters are set: scanning angle, interval between sampling points, camera settings, and target recognition. If a transformation to geodetic coordinates is necessary, employing a total station or GPS for 3D measurement of target coordinates is required. This series of tasks constitutes a complete scan at a single site. When multiple sites are established, the same process is replicated for each. Advancements in scanning technology now often necessitate augmenting point cloud data with actual color information, thus necessitating the use of internal or external digital cameras for object photography. The entire procedure is encapsulated in the workflow diagram presented in Figure 3.
The selection of sites for scanning equipment and the resultant quality of point cloud data are interdependent, influencing not only the completeness of the point cloud but also the efficiency of scanning and subsequent tasks like point cloud stitching. The site selection process for scanning equipment typically adheres to the following steps, contingent upon the object's geometry and the surrounding environment:
Step 1: On-site exploration
On-site exploration is a crucial preliminary step in the site selection process, yielding an in-depth understanding of scanning site conditions. This stage primarily involves determining the scanning range, angles, and pre-setting of equipment sites. It facilitates a reduction in scanning workload and aids in acquiring clearer point cloud data. In the context of post-processing, it also diminishes the point cloud stitching workload.
Step 2: Geometry of the object being scanned
Objects being scanned are generally categorized into three types based on their geometry. The isolated type, illustrated in subgraph (a) of Figure 4, usually necessitates a minimum of four scanning sites encircling the object to garner comprehensive point cloud data. Notably, point cloud data from these sites may present substantial stitching errors, potentially leading to non-closure issues, thereby necessitating meticulous identification and minimization of error sources. The convex type, depicted in subgraph (b) of Figure 4, often exhibits significant angular turns in the middle section. For such scanning scenarios, a minimum of three sites is required to ensure the acquisition of quality point cloud data. It is crucial to maximize the overlap between two sites since areas with large angular deviations tend to have lower point cloud quality, and greater overlap leads to enhanced stitching accuracy. The third category, a combination of convex and concave forms, as shown in subgraph (c) of Figure 4, typically demands a greater number of sites due to multiple angles, thereby avoiding scanning blind spots. In these cases, maintaining a consistent scanning distance across all sites is recommended to ensure uniformity in data collection.
In the process of acquiring point cloud coordinates, 3D laser scanning technology concurrently captures the point cloud's grayscale values, reflecting the varying intensities of the laser's reflection. However, the integration of color information into the point cloud is dependent on images from external digital cameras. This process involves aligning pixel points from digital images with the point cloud, thereby enriching the point cloud data with color.
The extraction of color information from point clouds significantly enhances the clarity of object textures and other details, crucial for the accurate recognition of geometrical features in rock masses. The source of color information in point clouds is digital cameras, differentiated into built-in and external types. Built-in cameras, with predetermined focal lengths and parameters, facilitate the scanner's software in automatically aligning the captured digital images with the point cloud. In contrast, external cameras, selected based on specific user requirements, necessitate manual alignment of their output images with the point cloud.
The grayscale values of point clouds differ based on the intensity of reflected laser light, bearing a direct relation to the material properties of the object. This variance in grayscale values enables the classification of the object's composition. For instance, within the point cloud data of a slope, there is a discernible difference in grayscale values between rock and vegetation, as exemplified in Figure 5.
The determination of the sampling point interval, influenced not only by the scanning device but also by the scanning objective, requires careful adjustment. For terrain scanning aimed at generating contour lines or cross-sectional views, a larger sampling interval is feasible, typically in the centimeter range. Conversely, for slope deformation monitoring, a millimeter-level sampling interval is necessary to ensure high precision in the resultant slope point cloud data. Generally, a smaller sampling interval correlates with increased precision of the point cloud data, thus enabling a broader spectrum of analysis. However, an overly small interval may prolong fieldwork and reduce data processing efficiency. Optimal setting of the sampling interval should strike a balance among scanning duration, data processing time, and the precision of the data.
Subsequent to the field operations detailed previously, 3D point cloud data of the object is obtained. Nevertheless, this data typically comprises an array of noise points, including airborne dust, vegetation, and various obstructions. These elements necessitate removal in later processing stages. Additional procedures, such as point cloud stitching and coordinate transformation, are integral to the advanced stages of point cloud processing. Summarily, the post-processing of point cloud data encompasses several key steps: Preprocessing (including noise reduction and vegetation removal), stitching of point cloud data, and coordinate transformation. This is depicted in the flowchart presented in Figure 6. Upon completion of these processes, the point cloud data becomes suitable for subsequent analytical applications.
In the operational phase of 3D laser scanners, a proprietary scanning system is engaged, typically positioning the origin of the coordinate system at the scanner's center. This positioning methodology yields point cloud data within a relative coordinate system. However, for applications where an understanding of the object's true geodetic coordinates is essential, a transformation of the point cloud data from relative to absolute coordinates becomes imperative. During the acquisition of field data, the scanner is often employed in conjunction with a total station or GPS. The latter is tasked with the measurement of the absolute coordinates of reference targets. The transformation of point cloud coordinates from the relative system of the scanner to the absolute geodetic system is effectuated utilizing Eq. (2):
where, $X_1, Y_1, Z_1$ represent the coordinates within the geodetic coordinate system, with the X-axis corresponding to the east direction, and the Y-axis to the north. $X_0, Y_0, Z_0$ denote the coordinates within the scanner's own system. $\delta_x, \delta_y, \delta_z$ are the distances for coordinate translation. S represents the scale factor, equating to 1 when both coordinate systems possess the same dimensions. $\alpha_1, \alpha_2, \alpha_3$ symbolize the angles between the axes of the absolute and scanner's coordinate systems. $R$ denotes the coordinate transformation matrix, as derived from Eq.(3):
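Neither equation is reproduced above; based on these definitions, Eqs. (2) and (3) presumably take the standard seven-parameter similarity (Helmert) form, sketched here as a reconstruction:

$$
\begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix}
= S \cdot R \begin{bmatrix} X_0 \\ Y_0 \\ Z_0 \end{bmatrix}
+ \begin{bmatrix} \delta_x \\ \delta_y \\ \delta_z \end{bmatrix}
$$

with Eq. (3) expressing $R$ as the product of three elementary rotations through $\alpha_1, \alpha_2, \alpha_3$ about the coordinate axes; the exact ordering and axis assignment of these rotations cannot be recovered from the text.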
It is crucial to note that for the research focus of this study, specifically the structural analysis of rock masses, the absolute coordinate values of the point cloud data are not predominantly required. Only the determination of the direction of true north is necessary, thereby necessitating only the local coordinate system. Upon leveling the scanner, the angle $\alpha_2$ between the scanner's Y-axis and true north is measured, subsequently simplifying Eq. (3) to Eq. (4):
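With the scanner leveled, only the rotation about the vertical axis remains, so Eq. (4) presumably reduces to a single rotation through $\alpha_2$ (the sign convention here is an assumption):

$$
R=\begin{bmatrix}
\cos \alpha_2 & -\sin \alpha_2 & 0 \\
\sin \alpha_2 & \cos \alpha_2 & 0 \\
0 & 0 & 1
\end{bmatrix}
$$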
Despite the numerous advantages of 3D laser scanning technology over conventional measurement methods, it is not devoid of errors. These errors primarily originate from intrinsic limitations of the instruments, external field conditions, and inaccuracies during data processing. Sometimes, such errors can lead to significant discrepancies between the point cloud model and the actual object. Errors in 3D laser scanning technology are generally categorized into field errors and office errors. Field errors predominantly include errors inherent to the instrument, environmental errors, and reflection errors from the scanning surface. Office errors primarily consist of errors in point cloud stitching and in matching point clouds from different periods.
Field errors are inaccuracies encountered during the initial data collection phase for point cloud data. The complex nature of field working environments contributes to multiple factors influencing these errors. This section categorizes and analyzes them.
Instrument error refers to the discrepancy between measured results and actual outcomes due to the instrument's inherent issues, usually involving range and angle measurement errors.
a. Range measurement error
Distance measurement in 3D laser scanners is executed by receiving reflected laser light. However, the emitted laser forms a spot on the target object, enlarging with increasing distance. Theoretically, the laser measures the distance from the spot's center to the scanner's center. Distance measurement in scanners, though, is typically based on the first reflected point, which introduces increased uncertainty with larger spots. This error, escalating with distance, is termed proportional error. Furthermore, the fixed distance between the laser emitter and reflector in the device also induces error, known as fixed error. This instrument error can be formulated by Eq. (5):
where, $\sigma_s$ is the distance-related error, $\sigma_{fixed}$ signifies the fixed error of the instrument, $\sigma_{proportional}$ represents the proportional error associated with range measurement, and $S$ is the distance.
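Eq. (5) itself is not reproduced above; from the definitions of the fixed and proportional components, it presumably combines them in the form

$$
\sigma_s=\sigma_{fixed}+\sigma_{proportional} \cdot S
$$

A root-sum-square combination, $\sigma_s=\sqrt{\sigma_{fixed}^2+\left(\sigma_{proportional} \cdot S\right)^2}$, is an equally common convention; the exact form used in the original cannot be recovered from the text.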
b. Angle measurement error
Angle measurement errors in 3D laser scanning chiefly encompass horizontal and vertical angle errors. Contributing factors include vibrations of the reflective mirror, unevenness of the mirror surface, and control errors in the scanning motor's uneven rotation. Additionally, the size of the laser spot also impacts the angle measurement error. This error is calculated as per Eq. (6), with Figure 7 demonstrating the principle:
c. Multipath error
Under typical conditions, the device receives laser light reflected only from a single object. However, in the presence of obstructions, if the laser strikes the junction between the obstruction and the object (as depicted in Figure 8), two beams of reflected laser light may be received. This interference leads to inaccuracies in distance measurement, known as multipath error.
Subsequent to the acquisition of point cloud data, processing tasks encompass the stitching of point cloud data, filtering, and the alignment of point clouds from different periods. The primary sources of error in this phase are identified in the stitching of point clouds and their registration from diverse periods.
a. Point cloud stitching error
The process of gathering point cloud data for an object frequently necessitates the use of multiple scanning stations, each providing a scan from a unique angle. Given that each scanning station's data is centered around its respective scanner, the resulting point clouds do not share a unified coordinate system. This necessitates the alignment of point cloud data from each station into a singular coordinate system, a process known as point cloud stitching. There are generally three approaches to stitching: aligning based on feature points between point clouds, utilizing targets established during scanning, or employing transformations and stitching based on rear sight point coordinates established at each scanning station. Irrespective of the chosen method, stitching errors are unavoidable, though the use of targets typically results in minimal errors. Contemporary scanning equipment often achieves stitching inaccuracies within a 2 mm threshold.
b. Registration error of point clouds from different periods
For applications such as deformation analysis using point cloud data, a prevalent method involves comparing two distinct sets of point clouds, either via point-to-model or model-to-model comparisons. In either scenario, aligning the two point clouds is imperative. Standard practice involves matching based on distinctive points or surfaces. However, for deformation monitoring where precision is paramount, this approach often incurs substantial error. Utilizing targets provided by the scanning equipment for alignment is recommended to enhance accuracy.
3. Methodology for Extracting Blast Pile Characteristics from Point Cloud Data
In the acquisition of point cloud data from blast piles, the presence of noise is an inevitable consequence influenced by factors such as scanning equipment, environmental conditions, and the inherent characteristics of the blast pile itself. Consequently, the initial phase in point cloud data processing is dedicated to noise reduction. This study integrates radius filtering and statistical filtering to optimize denoising for specific field data environments. Radius filtering, noted for its simplicity [15], entails the construction of a $k-d$ tree structure within the point cloud data to establish topological relationships between disordered points. Here, a threshold $D$ is defined for the quantity of neighboring points within a predetermined range $R$, and the count of neighboring points for each individual point is calculated. Points meeting this threshold are retained, effectively addressing isolated, drifting, and redundant points in the point cloud data.
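As a concrete illustration of the radius filter just described, the following minimal Python sketch (not the authors' implementation; it assumes NumPy and SciPy, and the array `points` holding the XYZ coordinates is hypothetical) retains only points with at least $D$ neighbors inside radius $R$:

```python
import numpy as np
from scipy.spatial import cKDTree

def radius_filter(points, R, D):
    """Keep points that have at least D neighbours within radius R."""
    tree = cKDTree(points)                       # k-d tree over the (N, 3) cloud
    # For each point, indices of all neighbours within radius R
    # (the point itself is included in its own list).
    neighbour_lists = tree.query_ball_point(points, r=R)
    counts = np.array([len(idx) - 1 for idx in neighbour_lists])
    return points[counts >= D]
```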
The elimination of mixed points, characterized by their sparse spatial distribution and varying distances from the target point cloud, poses a significant challenge in the denoising process. Sole reliance on radius filtering is insufficient for their removal, as each noise point, though minimal, contributes certain informational content. The application of statistical filtering [16] targets the sparsity characteristic of these mixed points. It involves calculating the average distance of each point to its nearest $k$ points. Given that points within a blast pile are closely packed, their inter-point average distances are relatively small. Consequently, points surrounded by larger inter-point distances are identified as sparsely distributed, characterizing them as mixed points. This process entails statistical analysis on the neighborhoods of each point, presupposing that the point distances in the blast pile point cloud conform to a Gaussian distribution, shaped by mean $\mu$ and standard deviation $\sigma$. The distance between the n-th point, denoted as $P_n\left(X_n, Y_n, Z_n\right)$, and any other point $P_m\left(X_m, Y_m, Z_m\right)$ is calculated as per Eq. (7):
Eq. (8) is utilized to compute the mean $\mu$ of these per-point average neighbor distances over the entire point cloud, with the standard deviation $\sigma$ determined by Eq. (9):
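The equations themselves are not reproduced above; from the definitions given, they presumably take the standard form (a reconstruction, with $\bar{d}_n$ denoting the average distance from $P_n$ to its $k$ nearest neighbors and $N$ the number of points):

$$
d_{nm}=\sqrt{\left(X_n-X_m\right)^2+\left(Y_n-Y_m\right)^2+\left(Z_n-Z_m\right)^2}
$$

$$
\mu=\frac{1}{N} \sum_{n=1}^{N} \bar{d}_n, \qquad \sigma=\sqrt{\frac{1}{N} \sum_{n=1}^{N}\left(\bar{d}_n-\mu\right)^2}
$$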
In the implementation of the algorithm, only the number of nearest neighbors $k$ and the standard deviation multiplier $m$ are input. A point is preserved if its average distance to its nearest $k$ points falls within the range $(\mu-m \sigma, \mu+m \sigma)$. Figure 9 displays the results both before and after the noise reduction process.
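For completeness, the statistical filter can be written in a few lines of Python (a sketch, not the authors' code; it assumes NumPy/SciPy and reuses the hypothetical `points` array from the radius-filtering example):

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_filter(points, k, m):
    """Keep points whose mean k-nearest-neighbour distance lies in (mu - m*sigma, mu + m*sigma)."""
    tree = cKDTree(points)
    # Query k + 1 neighbours because the nearest neighbour of each point is itself.
    dists, _ = tree.query(points, k=k + 1)
    mean_dists = dists[:, 1:].mean(axis=1)    # drop the zero self-distance
    mu, sigma = mean_dists.mean(), mean_dists.std()
    keep = (mean_dists > mu - m * sigma) & (mean_dists < mu + m * sigma)
    return points[keep]
```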
Post-noise reduction, the ensuing step involves the stitching of point cloud data. The inherent limitations in scanning angle and area during the 3D laser scanning of blast piles necessitate multiple scans from various angles and stations, as capturing the entire point cloud in a single scan is often unfeasible. Each scanning operation, centered around the scanner, establishes a unique coordinate system for a comprehensive 360-degree horizontal scan. To amalgamate multi-station point cloud data effectively, the scanned data is unified within the same coordinate system. This stitching process is facilitated by utilizing the GPS positioning system to assign absolute coordinates to each scanning station, thereby integrating point cloud data from diverse angles and stations into a cohesive whole. In complex mining environments, field scanning can be prone to errors due to signal issues or incorrect station setup. To mitigate such errors and enhance stitching precision, the Generalized Iterative Closest Point (GICP) algorithm is employed for refined stitching post-GPS coordinate alignment.
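For illustration, the fine registration step after the GPS-based coarse alignment can be reproduced with a standard ICP routine; the sketch below uses Open3D's point-to-point ICP as a simplified stand-in for GICP (the station file names and the 0.05 m correspondence distance are assumptions, and `init_guess` represents the GPS-derived coarse transformation):

```python
import numpy as np
import open3d as o3d

# Hypothetical station files; the GPS alignment supplies the coarse transform.
source = o3d.io.read_point_cloud("station_2.ply")
target = o3d.io.read_point_cloud("station_1.ply")
init_guess = np.eye(4)   # replace with the GPS-derived coarse alignment

result = o3d.pipelines.registration.registration_icp(
    source, target, 0.05, init_guess,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
source.transform(result.transformation)   # apply the refined alignment to the source cloud
```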
The substantial data volume generated from each scan of the on-site blast pile point cloud poses challenges for computational efficiency in subsequent stages. Given the intrinsic properties of 3D point clouds, feature vectors within the same scan exhibit resistance to density variations. Consequently, variations in the local surface sampling density of the scan data do not influence the feature vector values. This implies that a reduction in point cloud density does not detrimentally affect research outcomes. The employed downsampling method is voxel-based, aiming to uniformly diminish the number of sampling points. It involves the construction of a voxel grid [17] encompassing the entire point cloud data set. This extensive voxel grid is then divided into smaller, equally-sized cubic sections based on pre-set parameters. The centroid of the point cloud in each cube is computed, allowing for the elimination of extraneous points and substitution of the cube's point cloud data with its centroid. This process ensures the preservation of the blast pile point cloud's geometric characteristics even while reducing point cloud density, thereby achieving a filtering effect. The substantial reduction in data volume ensuing from this approach markedly enhances program execution speed. Figure 10 illustrates the impact of voxel downsampling on the blast pile point cloud data.
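A compact NumPy-only version of this voxel-grid downsampling is sketched below (an illustration, not the authors' implementation; `voxel_size` denotes the preset edge length of the cubic cells):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Replace all points that fall in the same cubic voxel by their centroid."""
    # Integer voxel index of every point along x, y and z.
    idx = np.floor((points - points.min(axis=0)) / voxel_size).astype(np.int64)
    # Group points by voxel and average them to obtain one centroid per occupied cell.
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True,
                                   return_counts=True)
    inverse = inverse.ravel()
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)    # accumulate point coordinates per voxel
    return centroids / counts[:, None]
```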
The preprocessing of point cloud data still leaves numerous ground points, not pertinent to the study, within the field data. These extraneous points can hinder the speed and precision of feature computation, blending seamlessly with the blast pile point cloud. To address this issue, the RANSAC algorithm is employed for preliminary segmentation of the blast pile point cloud data, effectively removing ground points to augment computational efficiency and accuracy. The essence of the RANSAC algorithm is a two-step iterative process: first, a random subset of points from the blast pile point cloud is selected to estimate model parameters [18], followed by the computation of corresponding model parameters; second, the remainder of the data is evaluated against the model derived in the first step, classifying points that fit the model as inliers and the rest as outliers.
The optimal model for plane fitting of the point cloud data is attained after $K$ iterations. To optimize the exclusion of ground points from the blast pile point cloud, a detailed analysis of RANSAC's parameters is imperative, ensuring more accurate fitting results. The number of iterations $K$ can be preset according to the volume of point cloud data. In point cloud space, any three non-collinear points determine a plane, usually expressed by the general equation $A x+B y+C z+D=0$.
Assuming $P_1\left(x_1, y_1, z_1\right), P_2\left(x_2, y_2, z_2\right)$, and $P_3\left(x_3, y_3, z_3\right)$ as any three points from the point cloud dataset, a plane equation of the form $A x+B y+C z+D=0$ is constructed based on these data points, where $A, B$, and $C$ denote the plane's normal vector. The plane equation is derived using the coordinates of these points:
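From the three sampled points, the plane coefficients and the point-to-plane distance used for inlier testing presumably follow the standard construction (a reconstruction based on the definitions above):

$$
(A, B, C)=\left(P_2-P_1\right) \times\left(P_3-P_1\right), \qquad D=-\left(A x_1+B y_1+C z_1\right)
$$

$$
d_i=\frac{\left|A x_i+B y_i+C z_i+D\right|}{\sqrt{A^2+B^2+C^2}}
$$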
The algebraic distance of the remaining points to the plane determined by these three points is then computed (as per Eq. (7)). Points are deemed to lie on the same plane if their distance falls below a threshold value; otherwise, they are marked as outliers. Once the count of points on the same plane surpasses a certain quantity, the plane is preserved. The iterative process halts after $K$ iterations or when no unmarked points remain.
Figure 11 illustrates a rock block laid flat in the laboratory, with its outline preliminarily identified through RANSAC plane fitting.
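A minimal sketch of this RANSAC plane segmentation is given below (not the authors' implementation; it assumes NumPy, and `points`, `K`, and `dist_thresh` are illustrative names). Ground points fall on the fitted plane (the inliers) and can be discarded before clustering:

```python
import numpy as np

def ransac_plane(points, K, dist_thresh, seed=0):
    """Return a boolean inlier mask for the dominant plane found in K iterations."""
    rng = np.random.default_rng(seed)
    best_mask, best_count = None, 0
    for _ in range(K):
        # 1. Randomly sample three points and build a candidate plane.
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:              # nearly collinear sample, try again
            continue
        normal /= norm
        d = -normal @ p1
        # 2. Score the candidate by counting points within dist_thresh of it.
        mask = np.abs(points @ normal + d) < dist_thresh
        if mask.sum() > best_count:
            best_count, best_mask = int(mask.sum()), mask
    return best_mask

# Usage: strip ground points before clustering.
# ground = ransac_plane(points, K=500, dist_thresh=0.02)
# blocks_only = points[~ground]
```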
In the point cloud data of blast piles, rock blocks are densely arranged, with some overlapping, posing a challenge for segmentation. Nevertheless, it is observed that the point cloud density in areas of depressions and gaps between rock blocks is significantly lower compared to regions with rock blocks. This observation leads to the application of the DBSCAN algorithm for the clustering analysis of the blast pile point cloud, facilitating the delineation of rock block contours. DBSCAN effectively clusters spatial data points based on density [19], employing the concept of neighborhood to represent the spatial data point set's density. This algorithm does not require preset cluster numbers, can detect clusters of arbitrary shapes, and inherently possesses a noise-filtering capability, clarifying the spatial distribution of rock blocks within the blast pile. The parameters $Eps$ and $MinPts$ in DBSCAN, defining the compactness of point cloud distribution, are crucial: $Eps$ sets the radius for clustering point clouds, while $MinPts$ determines the threshold for the number of point cloud samples in the neighborhood with radius $Eps$. Laboratory-laid rock block point cloud analysis, as depicted in Figure 12, demonstrates recognition results with varying $Eps$ and $MinPts$ values, illustrated in Figure 13.
Iterative testing established the parameter variation patterns of the DBSCAN clustering algorithm. A higher $Eps$ value with fixed $MinPts$ leads to under-segmentation in rock block point clouds, while a lower $Eps$ value results in over-segmentation. Conversely, with a fixed $Eps$ value, a larger $MinPts$ value causes over-segmentation, and a smaller $MinPts$ value leads to under-segmentation. Identifying the appropriate $Eps$ and $MinPts$ values is essential for optimal clustering outcomes and requires multiple trials in varying blast pile point cloud scenarios.
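For reference, the clustering step can be reproduced with the DBSCAN implementation in scikit-learn; the sketch below (an illustration, not the authors' code; `points` is the ground-free cloud from the previous step) returns one point set per candidate rock block:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_blocks(points, eps, min_pts):
    """Group the non-ground cloud into candidate rock blocks.

    Points labelled -1 by DBSCAN (the low-density gaps and depressions
    between blocks) are treated as noise and discarded.
    """
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(points)
    return [points[labels == lab] for lab in set(labels) if lab != -1]
```

As a starting range, the field trials in Section 4 found $Eps$ values of 0.04-0.045 and $MinPts$ of 35-40 effective, although suitable values must be re-tuned for each blast pile scenario.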
Following clustering, the 3D point cloud model of the rock block is reconstructed using the 3D convex hull method. The rock block's point cloud model encompasses a collection of surface scan points. Combining RANSAC plane fitting and DBSCAN clustering with 3D convex hull calculations facilitates the volume computation of the rock block. The principle involves selecting any four non-coplanar points [20] within the rock block point cloud, forming an initial tetrahedron as the base of the convex hull. After establishing the initial convex hull, other points' spatial relation to this hull is assessed. Points within the initial hull are bypassed, while exterior points are utilized to form new convex hulls, continuing until all points in the blast pile point cloud data are evaluated. The 3D reconstruction resulting in the convex hull (subgraph (a) of Figure 14) allows for the direct calculation of the rock block's longest particle diameter using the bounding box method. The bounding box (subgraph (b) of Figure 14), a simplistic geometric space, encapsulates any shape or complexity of the point cloud data. The OBB method [21] is used to compute the rock block particle size, particularly focusing on the main axis of the OBB.
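The volume and particle-size computation can be sketched as follows (an illustration under stated assumptions: SciPy's QuickHull-based ConvexHull supplies the hull volume, and a PCA-aligned bounding box is used here as a common approximation of the OBB; `block` is one cluster from the previous step):

```python
import numpy as np
from scipy.spatial import ConvexHull

def block_metrics(block):
    """Return (volume, particle size) for one rock block point cluster."""
    hull = ConvexHull(block)                 # 3D convex hull of the surface points
    # Principal axes of the cluster from its covariance eigenvectors.
    centred = block - block.mean(axis=0)
    _, _, axes = np.linalg.svd(np.cov(centred.T))
    extents = centred @ axes.T               # coordinates in the box-aligned frame
    sizes = extents.max(axis=0) - extents.min(axis=0)
    return hull.volume, sizes.max()          # hull volume, longest box side
```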
The morphology of blast piles, characterized as standard irregular objects, poses significant challenges for manual measurement. Utilizing 3D laser scanning technology, comprehensive spatial data of field blast piles is captured, revealing the surface's spatial characteristics. The process involves generating dense triangular meshes from preprocessed directed point cloud data, employing triangulation methods [22], and subsequently accomplishing 3D reconstruction to form the blast pile's 3D model. The Delaunay algorithm, prevalently used for mesh triangulation [23], is applied to the blast pile's point cloud data. The volume of the resulting blast pile model is calculated using projection methods.
The underlying principle of the projection method for volume calculation is the projection of each triangular element onto a plane. Each projected triangle and its corresponding surface triangle bound a polyhedron, and the summation of these polyhedrons' volumes yields the model's total volume. Although the blast pile surface is irregular, the projection plane can be determined precisely from the model, and the contour line of the blast pile typically exhibits a downward curvature with minimal intersections and overlaps. An orthographic projection is employed to project the blast pile surface onto the XOY plane and, through 3D rotation, a spatial prism is formed between the irregular triangular mesh surface, the projection plane, and the intermediate elevation (Figure 15). The cumulative volume of these triangular prisms gives the blast pile's total volume.
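A minimal sketch of the projection-method volume computation follows (an illustration, not the authors' code; it assumes SciPy's 2D Delaunay triangulation of the XY projection, and the reference elevation `base_z`, e.g. the bench floor, is a hypothetical stand-in for the projection plane described above):

```python
import numpy as np
from scipy.spatial import Delaunay

def pile_volume(points, base_z):
    """Sum the prisms between each surface triangle and the plane z = base_z."""
    tri = Delaunay(points[:, :2])            # triangulate the XY projection
    volume = 0.0
    for simplex in tri.simplices:
        p1, p2, p3 = points[simplex]
        # Area of the projected triangle from the 2D cross product.
        area = 0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                         - (p2[1] - p1[1]) * (p3[0] - p1[0]))
        mean_height = (p1[2] + p2[2] + p3[2]) / 3.0 - base_z
        volume += area * mean_height         # volume of one triangular prism
    return volume
```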
The extraction of 3D parameters is critical for analyzing the blast pile's spatial morphology [24]. Key 3D characteristics include the forward throw distance, lateral extent, and height, with the lateral extent being predominantly influenced by the lateral extent of the blasting area. Upon obtaining the blast pile's 3D model, with the blasting design's free face as the XOZ plane and the forward throw distance along the Y-axis, the Cartesian coordinate system is used to compute the model's length, width, and height. Eq. (14) is employed to determine the maximum and minimum values of $X, Y, Z$ in the model, thereby extracting the 3D characteristics of the blast pile:
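Eq. (14) is not reproduced above; with the coordinate convention just described, it presumably reduces to differences of the extreme coordinate values (the symbol names below are illustrative):

$$
H=Z_{\max }-Z_{\min }, \qquad L=Y_{\max }-Y_{\min }, \qquad W=X_{\max }-X_{\min }
$$

where $H$ is the blast pile height, $L$ the forward throw distance along the Y-axis, and $W$ the lateral extent.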
The same method is utilized to calculate the bench height pre-blasting and, in conjunction with post-blasting blast pile height, to ascertain the blast pile's rise height:
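The rise height presumably follows as the difference between the post-blast pile height and the pre-blast bench height (a reconstruction, with illustrative symbols):

$$
\Delta H = H_{pile} - H_{bench}
$$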
Short forward throw distances and high rise heights post-blasting often suggest low looseness and suboptimal blasting results. Moreover, excessive rise height poses a safety hazard during loading operations and impedes ore transport efficiency. In contrast, extensive forward throw distances coupled with low rise heights indicate excessive explosive usage [25], leading to inefficiencies in cost. The deduced 3D characteristics of the blast pile provide a benchmark for evaluating blasting effects and serve as feedback for optimizing blasting parameters.
4. Result Analysis
The effectiveness of blasting is influenced by both the properties of the rock and the parameters of the blast, resulting in variability in blasting outcomes. To ascertain the generalizability of the blast fragmentation recognition algorithm to diverse blast pile distributions, rock blocks of different sizes were mixed and arrayed into various forms in a laboratory setting, maintaining a consistent stacking height of approximately 15 cm. Representative morphologies of blast piles were then chosen from these configurations for analysis. Figure 16 presents intermediate process images derived from the point cloud data processing methodology delineated in Section 2.
In the examination of Figure 16, it is evident that the majority of rock blocks are accurately identified, particularly larger rocks which exhibit a higher rate of recognition. Nonetheless, instances of over-segmentation and under-segmentation are noted in regions where rocks adhere to each other. Optimization of these results is achievable through the adjustment of the $Eps$ and $MinPts$ parameters. Table 1 collates the measurement parameters of rock block identification against actual particle size values. A series of repetitive identifications on 25 distinct stacked blast piles within the laboratory setting revealed a tendency of the OBB algorithm to overestimate rock block particle sizes, with an average relative error approximating 7.62% in comparison to actual measurements.
Rock Block No. | Direct Measurement (m) | Algorithm Calculation (m) | Relative Error (%) | Rock Block No. | Direct Measurement (m) | Algorithm Calculation (m) | Relative Error (%) |
1 | 0.159 | 0.166 | 4.40 | 16 | 0.080 | 0.083 | 3.75 |
2 | 0.152 | 0.155 | 1.97 | 17 | 0.076 | 0.080 | 5.26 |
3 | 0.137 | 0.141 | 2.91 | 18 | 0.110 | 0.113 | 2.73 |
4 | 0.145 | 0.152 | 4.83 | 19 | 0.118 | 0.120 | 1.69 |
5 | 0.136 | 0.140 | 2.94 | 20 | 0.147 | 0.142 | 3.40 |
6 | 0.123 | 0.126 | 2.43 | 21 | 0.123 | 0.127 | 3.25 |
7 | 0.110 | 0.112 | 1.82 | 22 | 0.092 | 0.099 | 7.61 |
8 | 0.115 | 0.120 | 4.35 | 23 | 0.095 | 0.098 | 3.16 |
9 | 0.120 | 0.116 | 3.33 | 24 | 0.132 | 0.135 | 2.27 |
10 | 0.121 | 0.124 | 2.48 | 25 | 0.115 | 0.122 | 6.08 |
11 | 0.098 | 0.094 | 4.08 | 26 | 0.124 | 0.128 | 3.23 |
12 | 0.099 | 0.101 | 2.02 | 27 | 0.122 | 0.126 | 3.28 |
13 | 0.143 | 0.146 | 2.10 | 28 | 0.126 | 0.131 | 3.97 |
14 | 0.133 | 0.135 | 1.50 | 29 | 0.128 | 0.135 | 5.47 |
15 | 0.140 | 0.143 | 2.14 | 30 | 0.133 | 0.138 | 3.76 |
Subsequent to these laboratory tests, the applicability and precision of the method described in this paper were verified. The volume of blast piles calculated via this method was contrasted with both actual measurements and computational outcomes from Python's VTK library. The method exhibited an average volumetric error of about 4.67%, while direct computation through the VTK library indicated an average error margin of approximately 8.03%. Consequently, the volume measurements conducted in this study demonstrate greater accuracy and applicability. Figure 17 compares the calculated forward throw distance and height of the blast pile using the coordinate method against actual values. To bolster credibility, Principal Component Analysis (PCA) was also employed to calculate corresponding parameters for comparative analysis. The findings indicate that the average error in parameter measurements using this method stands at 3.12%, as opposed to a 5.02% error rate when using the PCA algorithm directly. Hence, the 3D characteristic values of the blast pile computed through the proposed method exhibit heightened precision.
The field test was carried out at the Anqian Phase II Open-pit Mine Rock Stripping Project. The primary ore types in the mine comprised hematite and magnetite, supplemented by minor quantities of limonite and siderite. Characterized by high rock hardness and complex joint and fracture structures, the blasting operation encompassed 87 blast holes within a rocky terrain. The parameters included a hole spacing of 6.6 m, row spacing of 4.2 m, inter-hole delay of 42 ms, inter-row delay of 17 ms, hole depths ranging from 12.3 m to 13.9 m, a maximum charge per hole of 168 kg, and a total charge weight of 13,272 kg. Holes were placed in a staggered (plum blossom) pattern and detonated sequentially. To comprehensively analyze the distribution characteristics of the blast pile, 3D laser scanning was conducted on the blasting area both prior to and subsequent to the blasting operation, facilitating the creation of a 3D model of the blasting area. Subgraph (a) of Figure 18 illustrates the 3D model established through this technique. Following the established processing procedure, the data for the reconstructed convex hull were obtained, as depicted in subgraph (b) of Figure 18. Subsequently, the parameters pertinent to the blast pile were calculated.
Post-blasting operations involved the direct measurement of the blast pile's height, forward throw distance, and span of the blast area, with the actual volume of the blast pile recorded via shovel transport data. These measurements were juxtaposed with the results computed through the method delineated in this paper ( Table 2). The blast pile volume deduced via the projection method exhibited a close alignment with the actual shovel transport data, manifesting a relative error of 4.85%. The calculated parameters of the blast pile's height, forward throw distance, and span, using the coordinate method, showed relative errors of 2.92%, 3.91%, and 4.29%, respectively. These findings affirm the method's accuracy and its enhanced speed and safety over conventional field measurement techniques.
Owing to the expansive surface area of the blast pile, direct calculation of rock block characteristics entails substantial computational demands and presents challenges in verification. Post-acquisition of the 3D point cloud of the surface rocks of the blast pile, five segments of rock block point clouds were selected for analysis and validation. Parameter trials ascertained that optimal recognition effects were achieved with $Eps$ values of 0.04-0.045 and $MinPts$ ranging between 35-40. Given the field challenges in directly measuring the volume of rock blocks, particle size was adopted as the representative characteristic for blast fragmentation. Reflecting field conditions, rock block particle sizes were segmented into four length intervals: 0-0.5 m, 0.5-1 m, 1-1.5 m, and 1.5-2 m. Table 3 outlines the accuracy statistics of rock block particle size identification within various size ranges from the point cloud data of blast piles from six typical on-site areas. P0-0.5 is designated to represent the accuracy rate in identifying the particle sizes of rock blocks that fall within the 0-0.5 m range. The table reveals that larger rock blocks demonstrate higher identification accuracy, with an average site identification accuracy of 80.4%.
In accordance with the operational context of Anqian Mine, the calculated rock block particle sizes were classified as follows: 0-0.5 m as small blocks, 0.5-1 m as medium blocks, 1-1.2 m as large blocks, and above 1.2 m as blocks necessitating secondary crushing. The large block rate for this blasting was 4.9%, indicative of effective blasting. For transportation efficiency, rock blocks exceeding 1.2 m required mechanical crushing.
Blast Pile Parameter | Volume (m³) | Blast Pile Height (m) | Forward Throw Distance (m) | Blast Area Span (m) |
Field measurement results | 30810.69 | 14.35 | 36.63 | 55.5 |
Scanning calculation results | 32305.20 | 14.62 | 37.41 | 56.8 |
Relative error (%) | 4.85 | 2.92 | 3.91 | 4.29 |
Blast Pile Area | P0-0.5 | P0.5-1.0 | P1.0-1.5 | P1.5-2 |
1 | 76.9% | 78.3% | 80.2% | 91.8% |
2 | 77.3% | 79.6% | 76.6% | 89.6% |
3 | 71.2% | 74.7% | 81.9% | 87.8% |
4 | 74.1% | 78.3% | 80.6% | 88.5% |
5 | 72.9% | 76.5% | 79.6% | 91.2% |
6 | 75.5% | 80.9% | 78.3% | 88.3% |
5. Conclusions
Employing 3D laser scanning technology, this study rigorously examines the distribution characteristics of blast piles, formulates a methodology for their characterization, and substantiates the efficacy of this approach through both laboratory and field testing. The salient conclusions are as follows:
A pioneering approach for computing blast fragmentation characteristics, rooted in point cloud data, has been developed. This method extensively utilizes the RANSAC plane fitting and DBSCAN clustering algorithms, facilitating the delineation of blast fragmentation contours through the strategic adjustment of $Eps$ and $MinPts$ parameters. Rock block volumes are computed employing the convex hull algorithm, while the bounding box length serves as an indicator of block particle size.
A technique for extracting 3D features of blast piles has been established. This involves applying the Delaunay algorithm to triangulate the 3D point cloud data of blast piles, thereby generating a mesh model of the pile. The projection method is utilized to calculate the model's volume, and the coordinate method is employed to determine key 3D parameters, including the height, forward throw distance, and lateral extent of the blast pile.
The utility of the proposed methodology is affirmed through the analysis of point cloud data from laboratory-arranged blast piles and field blast piles at an open-pit mine bench blasting site. Laboratory tests reveal a relative error in rock block volume under stacked conditions of approximately 4.61%, and a 4.75% relative error in particle size. In field applications, the method demonstrates an average rock block identification accuracy of about 80.4%, which increases with the size of the rock blocks. The calculated volume of the blast pile shows a relative error of 4.85%, and the errors in computing its height, forward throw distance, and lateral extent are 2.92%, 3.91%, and 4.29%, respectively. These findings on blast pile distribution characteristics provide insightful benchmarks for the assessment of blasting outcomes and the refinement of blasting parameters.
Data is available on request due to privacy restrictions.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.