Acadlore Transactions on AI and Machine Learning (ATAIML)
ISSN (print): 2957-9562
ISSN (online): 2957-9570
Current Issue: 2024, Vol. 3

Acadlore Transactions on AI and Machine Learning (ATAIML) aims to spearhead the academic exploration of artificial intelligence, machine learning, and deep learning, along with their associated disciplines. Underscoring the pivotal role of AI and machine learning innovations in shaping the modern technological landscape, ATAIML strives to decode the complexities of current methodologies and applications in the AI domain. Published quarterly by Acadlore, the journal typically releases its four issues in March, June, September, and December each year.

  • Professional Service - Every article submitted undergoes an intensive yet swift peer review and editing process, adhering to the highest publication standards.

  • Prompt Publication - Thanks to our proficiency in orchestrating the peer-review, editing, and production processes, all accepted articles see rapid publication.

  • Open Access - Every published article is instantly accessible to a global readership, allowing for uninhibited sharing across various platforms at any time.

Editors-in-Chief (2)
Andreas Pester
British University in Egypt, Egypt
andreas.pester@bue.edu.eg | website
Research interests: Differential Equations; LabVIEW; MATLAB; Educational Technology; Blended Learning; M-Learning; Deep Learning
Zhuang Wu
Capital University of Economics and Business, China
wuzhuang@cueb.edu.cn | website
Research interests: Decision Optimization and Management; Computational Intelligence; Intelligent Information Processing; Big Data; Online Public Opinion; Image Processing and Visualization

Aims & Scope

Aims

Acadlore Transactions on AI and Machine Learning (ATAIML) emerges as a pivotal platform at the intersection of artificial intelligence, machine learning, and their multifaceted applications. Recognizing the profound potential of these disciplines, the journal endeavors to unravel the complexities underpinning AI and ML theories, methodologies, and their tangible real-world implications.

In a world advancing at digital light-speed, ATAIML posits that AI and ML reshape industries at their core. From the expansion of reality to the birth of synthetic data and the intricate design of graph neural networks, such advancements are at the forefront of innovation. With a mission to chronicle these paradigm shifts, ATAIML aims to serve as a beacon for researchers, professionals, and enthusiasts eager to fathom the vast horizons of AI and ML in the modern age.

Furthermore, ATAIML highlights the following features:

  • Every publication benefits from prominent indexing, ensuring widespread recognition.

  • A distinguished editorial team upholds unparalleled quality and broad appeal.

  • Seamless online discoverability of each article maximizes its global reach.

  • An author-centric and transparent publication process enhances submission experience.

Scope

ATAIML's expansive scope encompasses, but is not limited to:

  • AI-Integrated Sensory Technologies: Insights into AI's role in amplifying and harmonizing sensory data.

  • Symbiosis of AI and IoT: The collaborative dance between artificial intelligence and the Internet of Things and their cumulative impact on contemporary society.

  • Mixed Realities Shaped by AI: Probing the AI-crafted mixed-reality realms and their implications.

  • Sustainable AI Innovations: A focus on 'Green AI' and its instrumental role in shaping a sustainable future.

  • Synthetic Data in the AI Era: A deep dive into the rise and relevance of synthetic data and its AI-driven generation.

  • Graph Neural Paradigms: Exploration of the nuances of graph-centric neural networks and their evolutionary trajectory.

  • Interdisciplinary AI Applications: Delving into AI's intersections with fields such as psychology, fashion, and the arts.

  • Moral and Ethical Dimensions of AI: A comprehensive study of the ethical landscapes carved by AI's advancements and the corresponding legal challenges.

  • Diverse Learning Methodologies: Exploration of revolutionary learning techniques ranging from Bayesian paradigms to statistical approaches in ML.

  • Emergent AI Narratives: Spotlight on cutting-edge AI technologies, foundational standards, computational attributes, and their transformative use cases.

  • Holistic Integration: Emphasis on multi-disciplinary submissions that combine insights from varied fields, offering a holistic perspective on AI and ML's global resonance.

Recent Articles

Abstract


Swarm intelligence (SI) has emerged as a transformative approach in solving complex optimization problems by drawing inspiration from collective behaviors observed in nature, particularly among social animals and insects. Ant Colony Optimization (ACO), a prominent subclass of SI algorithms, models the foraging behavior of ant colonies to address a range of challenging combinatorial problems. Originally introduced in 1992 for the Traveling Salesman Problem (TSP), ACO employs artificial pheromone trails and heuristic information to probabilistically guide solution construction. The artificial ants within ACO algorithms engage in a stochastic search process, iteratively refining solutions through the deposition and evaporation of pheromone levels based on previous search experiences. This review synthesizes the extensive body of research that has since advanced ACO from its initial ant system (AS) model to sophisticated algorithmic variants. These advances have both significantly enhanced ACO's practical performance across various application domains and contributed to a deeper theoretical understanding of its mechanics. The focus of this study is placed on the behavioral foundations of ACO, as well as on the metaheuristic frameworks that enable its versatility and robustness in handling large-scale, computationally intensive tasks. Additionally, this study highlights current limitations and potential areas for future exploration within ACO, aiming to facilitate a comprehensive understanding of this dynamic field of swarm-based optimization.
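The construction and pheromone-update loop described above can be sketched in a few dozen lines. This is a minimal Ant System on a toy TSP instance, not one of the tuned variants the review surveys; the parameter values (alpha, beta, rho, colony size, iteration count) are illustrative defaults, not recommendations from the study.

```python
import math
import random

def aco_tsp(coords, n_ants=8, n_iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Minimal Ant System sketch for the TSP.

    alpha/beta weight pheromone vs. heuristic desirability (1/distance);
    rho is the evaporation rate applied before each deposition step.
    """
    rng = random.Random(seed)
    n = len(coords)
    # pairwise distances; tiny floor guards against duplicate coordinates
    dist = [[math.dist(coords[i], coords[j]) or 1e-9 for j in range(n)]
            for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]  # pheromone trails, uniform start
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                # stochastic construction: pheromone^alpha * heuristic^beta
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporation, then deposition proportional to tour quality
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour, length in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best_tour, best_len
```

On a four-city square, repeated construction and reinforcement quickly concentrate pheromone on the perimeter tour.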

Abstract


Radar warning receivers (RWRs) are critical for swiftly and accurately identifying potential threats in complex electromagnetic environments. Numerous methods have been developed over the years, with recent advances in artificial intelligence (AI) significantly enhancing RWR capabilities. This study presents a machine learning-based approach for emitter identification within RWR systems, leveraging a comprehensive radar signal library. Key parameters such as signal frequency, pulse width, pulse repetition frequency (PRF), and beam width were extracted from pulsed radar signals and utilized in various machine learning algorithms. The preprogramming phase of RWRs was optimized through the application of multiple classification algorithms, including k-Nearest Neighbors (KNN), Decision Tree (DT), an ensemble learning method, Support Vector Machine (SVM), and Artificial Neural Network (ANN). These algorithms were compared against conventional methods to evaluate their performance. The machine learning models demonstrated a high degree of accuracy, achieving over 95% in training phases and exceeding 99% in test simulations. The findings highlight the superiority of machine learning algorithms in terms of speed and precision when compared to traditional approaches. Furthermore, the flexibility of machine learning techniques to adapt to diverse problem sets underscores their potential as a preferred solution for future RWR applications. This study suggests that the integration of machine learning into RWR emitter identification not only enhances the operational efficiency of electronic warfare (EW) systems but also represents a significant advancement in the field. The increasing relevance of machine learning in recent years positions it as a promising tool for addressing complex signal processing challenges in EW.
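As a toy illustration of library-based emitter identification, the sketch below matches an intercepted pulse's (frequency, pulse width, PRF) tuple to the nearest entry of a small signal library, with per-feature scaling so no unit dominates the distance. The emitter modes and parameter values are invented for illustration, not taken from the study's radar signal library.

```python
# Hypothetical emitter library: label -> (frequency MHz, pulse width us, PRF kHz)
LIBRARY = {
    "search_radar": (2900.0, 1.2, 0.3),
    "track_radar":  (9400.0, 0.5, 4.0),
    "nav_radar":    (9410.0, 0.8, 1.5),
}

def identify(pulse, library=LIBRARY):
    """Return the library label whose parameters are closest to the pulse."""
    dims = len(pulse)
    # scale each feature by the library's spread so units are comparable
    spans = []
    for d in range(dims):
        vals = [entry[d] for entry in library.values()]
        spans.append(max(vals) - min(vals) or 1.0)

    def dist(ref):
        return sum(((pulse[d] - ref[d]) / spans[d]) ** 2 for d in range(dims))

    return min(library, key=lambda name: dist(library[name]))
```

A KNN classifier as used in the study generalizes this single-nearest-entry lookup to a vote over the k closest library pulses.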

Abstract


Maintaining wheat moisture content within a safe range is of critical importance for ensuring the quality and safety of wheat. High-precision, rapid detection of wheat moisture content is a key factor in enabling effective control processes. A microwave detection system based on metasurface lens antennas was proposed in this study, which facilitates accurate, non-invasive, and contactless measurement of wheat moisture content. The system measures the attenuation characteristics of wheat with varying moisture content in the frequency range of 23.5 GHz to 24.5 GHz. A linear regression equation (coefficient of determination R² = 0.9946) was established using the actual moisture content measured by the standard drying method, and was used as the prediction model for wheat moisture. A total of 72 wheat samples were selected for moisture content prediction, yielding a root mean square error (RMSE) of 0.193%, a mean absolute error (MAE) of 0.16%, and a maximum relative error (MRE) of 5.25%. The results indicate that the proposed microwave detection system, based on metasurface lens antennas, provides an effective method for detecting wheat moisture content.
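The prediction model above is an ordinary least-squares line from measured attenuation to moisture content, evaluated with RMSE, MAE, and MRE. The sketch below shows those computations on made-up attenuation-moisture pairs; the paper's coefficients come from its drying-method reference measurements.

```python
def fit_line(x, y):
    """Least-squares fit: returns (intercept a, slope b) for y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

def metrics(y_true, y_pred):
    """RMSE, MAE, and maximum relative error, as reported in the abstract."""
    n = len(y_true)
    rmse = (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mre = max(abs(t - p) / t for t, p in zip(y_true, y_pred))
    return rmse, mae, mre
```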

Abstract


Leaf diseases pose a significant threat to global agricultural productivity, impacting both crop yields and quality. Traditional detection methods often rely on expert knowledge, are labor-intensive, and can be time-consuming. To address these limitations, a novel framework was developed for the segmentation and detection of leaf diseases, incorporating complex fuzzy set (CFS) theory and advanced spatial averaging and difference techniques. This approach leverages the Hue, Saturation, and Value (HSV) color model, which offers superior contrast and visual clarity, to effectively distinguish between healthy and diseased regions in leaf images. The HSV space was utilized due to its ability to enhance the visibility of subtle disease patterns. CFSs were introduced to manage the inherent uncertainty and imprecision associated with disease characteristics, enabling a more accurate delineation of affected areas. Spatial techniques further refine the segmentation, improving detection precision and robustness. Experimental validation on diverse datasets demonstrates the proposed method’s high accuracy across a variety of plant diseases, highlighting its reliability and adaptability to real-world agricultural conditions. Moreover, the framework enhances interpretability by offering insights into the progression of disease, thus supporting informed decision-making for crop protection and management. The proposed model shows considerable potential in practical agricultural applications, where it can assist farmers and agronomists in timely and accurate disease identification, ultimately improving crop management practices.
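As a rough illustration of why the HSV space helps here, the sketch below converts RGB pixels to HSV with Python's standard colorsys module and flags pixels whose hue falls outside a green band as potentially diseased. The band limits and saturation floor are illustrative thresholds, not the paper's complex-fuzzy-set rules.

```python
import colorsys

def diseased_mask(rgb_pixels, green_band=(1 / 6, 1 / 2)):
    """Flag pixels as diseased when their hue leaves the healthy green band.

    green_band is in colorsys hue units (0-1), i.e. roughly 60-180 degrees;
    this crisp threshold stands in for the paper's fuzzy-set membership.
    """
    mask = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        # low saturation (greyish) pixels are also treated as non-healthy
        healthy = green_band[0] <= h <= green_band[1] and s > 0.2
        mask.append(not healthy)
    return mask
```

A pure green pixel stays unflagged while a brown lesion-like pixel is flagged, which is the crisp analogue of the fuzzy delineation described above.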

Abstract


Accurately predicting whether bank users will opt for time deposit products is critical for optimizing marketing strategies and enhancing user engagement, ultimately improving a bank’s profitability. Traditional predictive models, such as linear regression and Logistic Regression (LR), are often limited in their ability to capture the complex, time-dependent patterns in user behavior. In this study, a hybrid approach that combines Long Short-Term Memory (LSTM) neural networks and a stacked ensemble learning framework is proposed to address these limitations. Initially, LSTM models were employed to extract temporal features from two distinct bank marketing datasets, thereby capturing the sequential nature of user interactions. These extracted features were subsequently input into several base classifiers, including Random Forest (RF), Support Vector Machine (SVM), and k-Nearest Neighbour (KNN), to conduct initial classifications. The outputs of these classifiers were then integrated using a LR model for final decision-making through a stacking ensemble method. The experimental evaluation demonstrates that the proposed LSTM-stacked model outperforms traditional models in predicting user time deposits on both datasets, providing robust predictive performance. The results suggest that leveraging temporal feature extraction with LSTM and combining it with ensemble techniques yields superior prediction accuracy, thereby offering a more sophisticated solution for banks aiming to enhance their marketing efficiency.
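The stacking step can be sketched compactly: the probability outputs of the base classifiers become the features of a logistic-regression meta-learner. The base-model outputs below are hard-coded stand-ins for RF/SVM/KNN predictions on LSTM-extracted features; the base models and the LSTM itself are outside this sketch.

```python
import math

def train_meta(base_probs, labels, lr=0.5, epochs=500):
    """Train a logistic-regression meta-learner by plain SGD on log-loss.

    base_probs: rows of per-base-model P(deposit); returns (weights, bias).
    """
    k = len(base_probs[0])
    w, b = [0.0] * k, 0.0
    for _ in range(epochs):
        for x, y in zip(base_probs, labels):
            p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            g = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(x, w, b):
    """Final stacked decision: sigmoid of the weighted base outputs."""
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b))) >= 0.5
```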

Abstract


Rice is a staple food for a significant portion of the global population, particularly in countries where it constitutes the primary source of sustenance. Accurate classification of rice varieties is critical for enhancing both agricultural yield and economic outcomes. Traditional classification methods are often inefficient, leading to increased costs, higher misclassification rates, and time loss. To address these limitations, automated classification systems employing machine learning (ML) algorithms have gained attention. However, when raw data is inadequately organized or scattered, classification accuracy can decline. To improve data organization, normalization processes are often employed. Despite its widespread use, the specific contribution of normalization to classification performance requires further validation. In this study, a dataset comprising two rice varieties produced in Turkey, Osmancik and Cammeo, was utilized to evaluate the impact of normalization on classification outcomes. The k-Nearest Neighbor (KNN) algorithm was applied to both normalized and non-normalized datasets, and their respective performances were compared across various training and testing ratios. The normalized dataset achieved a classification accuracy of 0.950, compared to 0.921 for the non-normalized dataset. This approximately 3% improvement demonstrates the positive effect of data normalization on classification accuracy. These findings underscore the importance of incorporating normalization in ML models for rice classification to optimize performance and accuracy.
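A two-sample toy example makes the normalization effect concrete: with features on very different scales (say, grain area in pixels and eccentricity), the large-scale feature dominates the Euclidean distance until min-max scaling is applied. The measurements below are invented, not the actual Osmancik/Cammeo features.

```python
def min_max(rows):
    """Column-wise min-max scaling of a list of feature rows to [0, 1]."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    span = [max(c) - min(c) or 1.0 for c in cols]  # avoid division by zero
    return [[(v - l) / s for v, l, s in zip(r, lo, span)] for r in rows]

def nn_label(train, labels, q):
    """1-NN: label of the training row closest to query q (squared Euclidean)."""
    d = [sum((a - b) ** 2 for a, b in zip(row, q)) for row in train]
    return labels[d.index(min(d))]
```

Without scaling, the area column swamps the eccentricity column; after joint min-max scaling the eccentricity difference decides the neighbor, which is the mechanism behind the ~3% accuracy gap reported above.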

Open Access
Research article
Integrating Long Short-Term Memory and Multilayer Perceptron for an Intelligent Public Affairs Distribution Model
Hong Fang, Minjing Peng, Xiaotian Du, Baisheng Lin, Mingjun Jiang, Jieyi Hu, Zhenjiang Long, Qiaoxian Hu | Available online: 08-01-2024

Abstract


In the realm of urban public affairs management, the necessity for accurate and intelligent distribution of resources has become increasingly imperative for effective social governance. This study, drawing on crime data from Chicago in 2022, introduces a novel approach to public affairs distribution by employing Long Short-Term Memory (LSTM), Multilayer Perceptron (MLP), and their integration. By extensively preprocessing textual, numerical, boolean, temporal, and geographical data, the proposed models were engineered to discern complex interrelations among multidimensional features, thereby enhancing their capability to classify and predict public affairs events. Comparative analysis reveals that the hybrid LSTM-MLP model exhibits superior prediction accuracy over the individual LSTM or MLP models, evidencing enhanced proficiency in capturing intricate event patterns and trends. The effectiveness of the model was further corroborated through a detailed examination of training and validation accuracies, loss trajectories, and confusion matrices. This study contributes a robust methodology to the field of intelligent public affairs prediction and resource allocation, demonstrating significant practical applicability and potential for widespread implementation.

Abstract


Sentiment analysis, a crucial component of natural language processing (NLP), involves the classification of subjective information by extracting emotional content from textual data. This technique plays a significant role in the movie industry by analyzing public opinions about films. The present research addresses a gap in the literature by conducting a comparative analysis of various machine learning algorithms for sentiment analysis in film reviews, utilizing a dataset from Kaggle comprising 50,000 reviews. Classifiers such as Logistic Regression, Multinomial Naive Bayes, Linear Support Vector Classification (LinearSVC), and Gradient Boosting were employed to categorize the reviews into positive and negative sentiments. The emphasis was placed on specifying and comparing these classifiers in the context of film review sentiment analysis, highlighting their respective advantages and disadvantages. The dataset underwent thorough preprocessing, including data cleaning and the application of stemming techniques to enhance processing efficiency. The performance of the classifiers was rigorously evaluated using metrics such as accuracy, precision, recall, and F1-score. Among the classifiers, LinearSVC demonstrated the highest accuracy at 90.98%. This comprehensive evaluation not only identified the most effective classifier but also elucidated the contextual efficiencies of various algorithms. The findings indicate that LinearSVC excels at accurately classifying sentiments in film reviews, thereby offering new insights into public opinions on films. Furthermore, the extended comparison provides a step-by-step guide for selecting the most suitable classifier based on dataset characteristics and context, contributing valuable knowledge to the existing literature on the impact of different machine learning approaches on sentiment analysis outcomes in the movie industry.
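A minimal Multinomial Naive Bayes classifier, one of the algorithms compared above, illustrates the bag-of-words pipeline; the winning LinearSVC needs a numerical stack, so the simpler probabilistic classifier is sketched instead. The six mini-reviews are invented stand-ins for the Kaggle corpus, and bare whitespace tokenization replaces the paper's cleaning and stemming steps.

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Count word occurrences per class and class priors."""
    counts = {c: Counter() for c in set(labels)}
    priors = Counter(labels)
    for text, c in zip(docs, labels):
        counts[c].update(text.lower().split())
    vocab = {w for ctr in counts.values() for w in ctr}
    return counts, priors, vocab

def classify(text, counts, priors, vocab):
    """Pick the class maximizing log P(class) + sum log P(word | class)."""
    n = sum(priors.values())
    best, best_lp = None, -math.inf
    for c, ctr in counts.items():
        total = sum(ctr.values())
        lp = math.log(priors[c] / n)
        for w in text.lower().split():
            if w in vocab:  # Laplace smoothing handles unseen (class, word) pairs
                lp += math.log((ctr[w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = c, lp
    return best
```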

Open Access
Research article
DNA-Level Enhanced Vigenère Encryption for Securing Color Images
Abdelhakim Chemlal, Hassan Tabti, Hamid El Bourakkadi, Rrghout Hicham, Abdellatif Jarjar, Abdellhamid Benazzi | Available online: 06-25-2024

Abstract

This study presents the development of a novel method for color image encryption, leveraging an enhanced Vigenère algorithm. The conventional Vigenère cipher is augmented with substantial substitution tables derived from widely used chaotic maps in the cryptography domain, including the logistic map and the A.J. map. These enhancements incorporate new confusion and diffusion functions integrated into the substitution tables. Following the Vigenère encryption process, a transition to deoxyribonucleic acid (DNA) notation is implemented, controlled by a pseudo-random crossover matrix. This matrix facilitates a genetic crossover specifically adapted for image encryption. Simulations conducted on a variety of images of diverse formats and sizes demonstrate the robustness of this approach against differential and frequency-based attacks. The substantial size of the encryption key significantly enhances the system's security, providing strong protection against brute-force attacks.
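The two stages named above, a Vigenère pass over image bytes followed by DNA notation, can be sketched at byte level. The chaotic-map substitution tables and pseudo-random crossover matrix of the actual scheme are replaced here by a fixed toy key and a plain 2-bits-per-base encoding, so this shows only the data flow, not the cryptographic strength.

```python
BASES = "ACGT"  # 2 bits per base: A=00, C=01, G=10, T=11

def vigenere(data, key, sign=1):
    """Byte-level Vigenère: add (sign=1) or subtract (sign=-1) the repeating key."""
    return bytes((b + sign * key[i % len(key)]) % 256 for i, b in enumerate(data))

def to_dna(data):
    """Encode each byte as four DNA letters, most significant bit pair first."""
    return "".join(BASES[(b >> s) & 3] for b in data for s in (6, 4, 2, 0))

def from_dna(seq):
    """Invert to_dna: four letters back into one byte."""
    out = []
    for i in range(0, len(seq), 4):
        b = 0
        for ch in seq[i:i + 4]:
            b = (b << 2) | BASES.index(ch)
        out.append(b)
    return bytes(out)
```

Encryption is the Vigenère pass followed by to_dna; decryption inverts both, which is what makes the scheme's pheromone-free, purely keyed round trip checkable.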
Open Access
Research article
Characterization and Risk Assessment of Cyber Security Threats in Cloud Computing: A Comparative Evaluation of Mitigation Techniques
Oludele Awodele, Chibueze Ogbonna, Emmanuel O. Ogu, Johnson O. Hinmikaiye, Jide E. T. Akinsola | Available online: 05-15-2024

Abstract


Advancements in information technology have significantly enhanced productivity and efficiency through the adoption of cloud computing, yet this adoption has also introduced a spectrum of security threats. Effective cybersecurity mitigation strategies are imperative to minimize the impact on cloud infrastructure and ensure reliability. This study seeks to categorize and assess the risk levels of cybersecurity threats in cloud computing environments, providing a comprehensive characterization based on eleven major causes, including natural disasters, loss of encryption keys, unauthorized login access, and others. Fuzzy set theory was used to analyze uncertainties and model threats, which were identified, prioritized, and categorized according to their impact on cloud infrastructure. A high level of data loss was revealed in five key features, such as encryption key compromise and unauthorized login access, while a lower impact was observed in unknown cloud storage and exposure to sensitive data. Seven threat features, including encryption key loss and operating system failure, were found to significantly contribute to data breaches. In contrast, others, such as virtual machine sharing and impersonation, exhibited lower risk levels. A comparative analysis of threat mitigation techniques determined Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service and Elevation of Privilege (STRIDE) as the most effective methodology with a score of 59, followed by Quality Threat Modeling Methodology (QTMM) (57), Common Vulnerability Scoring System (CVSS) (51), Process for Attack Simulation and Threat Analysis (PASTA) (50), and Persona non-Grata (PnG) (47). Attack Tree and Hierarchical Threat Modeling Methodology (HTMM) each achieved 46, while Linkability, Identifiablility, Nonrepudiation, Detectability, Disclosure of Information, Unawareness and Noncompliance (LINDDUN) scored 45.
These findings underscore the value of fuzzy set theory in tandem with threat modeling to categorize and assess cybersecurity risks in cloud computing. STRIDE is recommended as an effective modeling technique for cloud environments. This comprehensive analysis provides critical insights for organizations and security experts, empowering them to proactively address recurring threats and minimize disruptions to daily operations.

Abstract


Dental implants (DIs) are prone to failure due to uncommon mechanical complications and fractures. Precise identification of implant fixture systems from periapical radiographs is imperative for accurate diagnosis and treatment, particularly in the absence of comprehensive medical records. Existing methods predominantly leverage spatial features derived from implant images using convolutional neural networks (CNNs). However, texture images exhibit distinctive patterns detectable as strong energy at specific frequencies in the frequency domain, a characteristic that motivates this study to employ frequency-domain analysis through a novel multi-branch spectral channel attention network (MBSCAN). High-frequency data obtained via a two-dimensional (2D) discrete cosine transform (DCT) are exploited to retain phase information and broaden the application of frequency-domain attention mechanisms. Fine-tuning of the multi-branch spectral channel attention (MBSCA) parameters is achieved through the modified aquila optimizer (MAO) algorithm, optimizing classification accuracy. Furthermore, pre-trained CNN architectures such as Visual Geometry Group (VGG) 16 and VGG19 are harnessed to extract features for classifying intact and fractured DIs from panoramic and periapical radiographs. The dataset comprises 251 radiographic images of intact DIs and 194 images of fractured DIs, meticulously selected from a pool of 21,398 DIs examined across two dental facilities. The proposed model has exhibited robust accuracy in detecting and classifying fractured DIs, particularly when relying exclusively on periapical images. The MBSCA-MAO scheme has demonstrated exceptional performance, achieving a classification accuracy of 95.7% with precision, recall, and F1-score values of 95.2%, 94.3%, and 95.6%, respectively. Comparative analysis indicates that the proposed model significantly surpasses existing methods, showcasing its superior efficacy in DI classification.
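The frequency-domain intuition above, that smooth image content concentrates its energy at low frequencies, can be checked with a direct 2-D DCT-II on a small block. This is a pure-Python O(n⁴) loop for illustration; production pipelines would use an FFT-based transform, and the attention weighting and optimizer of the paper's MBSCA-MAO scheme are not sketched here.

```python
import math

def dct2(block):
    """Direct 2-D DCT-II of a square block (orthonormal scaling)."""
    n = len(block)

    def a(k):  # orthonormalization factor
        return math.sqrt((1 if k == 0 else 2) / n)

    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[m][k]
                    * math.cos(math.pi * (2 * m + 1) * u / (2 * n))
                    * math.cos(math.pi * (2 * k + 1) * v / (2 * n))
                    for m in range(n) for k in range(n))
            out[u][v] = a(u) * a(v) * s
    return out
```

For a constant 8x8 block, all energy lands in the DC coefficient and every AC coefficient vanishes, which is the property the frequency-domain channel attention exploits.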

Abstract


Named Entity Recognition (NER), a pivotal task in information extraction, is aimed at identifying named entities of various types within text. Traditional NER methods, however, often fall short in providing sufficient semantic representation of text and preserving word order information. Addressing these challenges, a novel approach is proposed, leveraging dual Graph Neural Networks (GNNs) based on multi-feature fusion. This approach constructs a co-occurrence graph and a dependency syntax graph from text sequences, capturing textual features from a dual-graph perspective to overcome the oversight of word interdependencies. Furthermore, Bidirectional Long Short-Term Memory Networks (BiLSTMs) are utilized to encode text, addressing the issues of neglecting word order features and the difficulty in capturing contextual semantic information. Additionally, to enable the model to learn features across different subspaces and the varying degrees of information significance, a multi-head self-attention mechanism is introduced for calculating internal dependency weights within feature vectors. The proposed model achieves F1-scores of 84.85% and 96.34% on the CCKS-2019 and Resume datasets, respectively, marking improvements of 1.13 and 0.67 percentage points over baseline models. The results affirm the effectiveness of the presented method in enhancing performance on the NER task.
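The internal dependency weights mentioned above can be shown in single-head form: scaled dot-product scores between feature vectors are softmax-normalized into attention weights. Multi-head attention repeats this with separate learned projections per head; the vectors here are illustrative, not model features.

```python
import math

def attention_weights(queries, keys):
    """Scaled dot-product attention weights: softmax(q . k / sqrt(d)) per query."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        m = max(scores)                      # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        out.append([e / z for e in exps])
    return out
```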
