Volume 2, Issue 3, 2024

Abstract

The accurate estimation of the age of orange trees is a critical task in orchard management, providing valuable insights into tree growth, yield prediction, and the implementation of optimal agricultural practices. Traditional methods, such as counting growth rings, are precise but labor-intensive and invasive, requiring tree cutting or core sampling, which makes them impractical for large-scale application. A novel non-invasive system based on fuzzy logic, combined with linear regression analysis, has been developed to estimate the age of orange trees from easily measurable parameters, namely trunk diameter and height. The fuzzy inference system (FIS) offers an adaptive, intuitive, and accurate model for age estimation by incorporating these key variables. Furthermore, a multiple linear regression analysis revealed a statistically significant correlation between the predictor variables (trunk diameter and height) and tree age. The regression coefficients for diameter (p = 0.0134) and height (p = 0.0444) demonstrated strong relationships with tree age, and an R-squared value of 0.9800 indicated a high degree of model fit. These results validate the effectiveness of the proposed system, highlighting the potential of combining fuzzy logic and regression techniques to achieve precise and scalable age estimation. The model provides a valuable tool for orchard managers, agronomists, and environmental scientists, offering an efficient method for monitoring tree health, optimizing fruit production, and promoting sustainable agricultural practices.
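As a minimal sketch of the regression component only, the snippet below fits age against trunk diameter and height by ordinary least squares and reports the R-squared statistic. The measurements are hypothetical placeholders, since the study's dataset is not reproduced here, and the fuzzy inference stage is omitted.

import numpy as np

# Hypothetical measurements (trunk diameter in cm, height in m, age in years);
# placeholders only, not the paper's data.
X = np.array([
    [8.0,  2.1],
    [12.5, 2.9],
    [17.0, 3.6],
    [22.4, 4.2],
    [28.1, 4.9],
])
age = np.array([3.0, 5.0, 8.0, 11.0, 14.0])

# Fit age ~ b0 + b1*diameter + b2*height by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, age, rcond=None)

# R-squared from residual and total sums of squares.
pred = A @ coef
ss_res = np.sum((age - pred) ** 2)
ss_tot = np.sum((age - age.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print("intercept, b_diameter, b_height:", np.round(coef, 3))
print("R^2:", round(r2, 4))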

Abstract


Digital ink Chinese character recognition (DICCR) systems have predominantly been developed using datasets of handwriting from native writers. However, the handwriting of foreign students, who possess distinct writing habits and often make errors or deviations from standard forms, poses a unique challenge to recognition systems. To address this issue, a robust and adaptable approach is proposed, utilizing a residual network augmented with multi-scale dilated convolutions. The proposed architecture incorporates convolutional kernels of varying scales, which facilitate the extraction of contextual information from different receptive fields. Additionally, the use of dilated convolutions with varying dilation rates allows the model to capture long-range dependencies and short-range features concurrently. This strategy mitigates the gridding effect commonly associated with dilated convolutions, thereby enhancing feature extraction. Experiments conducted on a dataset of digital ink Chinese characters (DICCs) written by foreign students demonstrate the efficacy of the proposed method in improving recognition accuracy. The results indicate that the network is capable of more effectively handling the non-standard writing styles often encountered in such datasets. This approach offers significant potential for error extraction and automatic evaluation of Chinese character writing, especially in the context of non-native learners.
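The multi-scale dilated residual design can be illustrated with a short sketch. The block below is an approximation of the general technique, not the authors' exact network: the branch count, kernel size, and dilation rates (1, 2, 3) are illustrative assumptions, with mixed rates chosen to avoid the gridding effect the abstract mentions.

import torch
import torch.nn as nn

class MultiScaleDilatedResBlock(nn.Module):
    """Illustrative residual block with parallel dilated convolutions."""

    def __init__(self, channels: int, dilations=(1, 2, 3)):
        super().__init__()
        # Parallel 3x3 branches with different dilation rates see different
        # receptive fields, capturing short- and long-range context at once.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        # A 1x1 convolution fuses the multi-scale branches back together.
        self.fuse = nn.Conv2d(channels * len(dilations), channels,
                              kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        out = self.bn(self.fuse(multi_scale))
        return self.relu(out + x)  # residual shortcut

# Example: a batch of 64x64 feature maps with 32 channels.
block = MultiScaleDilatedResBlock(32)
print(block(torch.randn(4, 32, 64, 64)).shape)  # torch.Size([4, 32, 64, 64])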

Open Access
Research article
Development and Evaluation of a Parallel K-means Algorithm for Big Data Analysis in Google MapReduce Environment
Junwei Zhao,
Xuexu Yuan,
Qingtao Hou,
Hanyu Gao,
Chunyu Gao,
Yuanyuan Zhang
Available online: 08-22-2024

Abstract

The challenge of executing iterative big data analysis algorithms within the Google Cloud MapReduce environment has been addressed by developing a parallel K-means algorithm capable of leveraging the distributed computing power of the platform. Traditional K-means, which requires iterative steps, is adapted into a parallel version using MapReduce to enhance computational efficiency. The parallel algorithm is structured into multiple super-steps: work within each super-step executes in parallel, while the super-steps themselves run sequentially. Each super-step corresponds to one iteration of the serial K-means algorithm, with parallel computation carried out at each node to compute each new cluster center as the mean of its assigned samples. Experimental evaluations have demonstrated that the parallel K-means algorithm performs effectively and accurately. Notably, for a dataset of 450 water samples, a parallel speedup factor of 20.8 was achieved, significantly reducing the time required for data analysis. This reduction in processing time is critical in time-sensitive applications, such as coal mine rescue operations, where quick decision-making is essential. The results indicate that the proposed parallel K-means algorithm is both a feasible and efficient solution for handling large-scale datasets within cloud environments, providing substantial benefits in computational speed and practical application.
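The super-step structure maps naturally onto map and reduce phases. The single-process sketch below only illustrates that data flow under assumed details (Euclidean distance, a random toy stand-in for the 450-sample water dataset); in the actual deployment the map phase runs in parallel across MapReduce nodes.

import numpy as np
from collections import defaultdict

def mapper(point, centers):
    """Map phase: assign a sample to its nearest current cluster center."""
    k = int(np.argmin(np.linalg.norm(centers - point, axis=1)))
    return k, point

def reducer(assigned_points):
    """Reduce phase: the new center is the mean of the cluster's points."""
    return np.mean(assigned_points, axis=0)

def kmeans_superstep(data, centers):
    """One super-step = one serial K-means iteration, expressed as map/reduce."""
    groups = defaultdict(list)
    for point in data:                      # would run in parallel per split
        k, p = mapper(point, centers)
        groups[k].append(p)                 # shuffle: group points by cluster key
    return np.array([reducer(groups[k]) if groups[k] else centers[k]
                     for k in range(len(centers))])

# Toy stand-in for the 450-sample dataset: 450 points, 3 features, 3 clusters.
rng = np.random.default_rng(0)
data = rng.normal(size=(450, 3))
centers = data[rng.choice(len(data), size=3, replace=False)]
for _ in range(10):                         # super-steps run sequentially
    centers = kmeans_superstep(data, centers)
print(np.round(centers, 3))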