Information Dynamics and Applications (IDA)
ISSN (print): 2958-1486
ISSN (online): 2958-1494

Information Dynamics and Applications (IDA) is a peer-reviewed, open-access journal devoted to the dynamic nature and diverse applications of information technology and its related fields. The journal explores both the underlying principles and the practical impacts of information technology, bridging theoretical research with real-world applications. Beyond the traditional aspects of the field, IDA also covers emerging trends and innovations. Published quarterly by Acadlore, the journal typically releases its four issues in March, June, September, and December each year.

  • Professional Service - Every article submitted undergoes an intensive yet swift peer review and editing process, adhering to the highest publication standards.

  • Prompt Publication - Thanks to our proficiency in orchestrating the peer-review, editing, and production processes, all accepted articles see rapid publication.

  • Open Access - Every published article is instantly accessible to a global readership, allowing for uninhibited sharing across various platforms at any time.

Editors-in-Chief (2)
Balamurugan Balusamy
Shiv Nadar University, India
balamurugan.balusamy@snu.edu.in | website
Research interests: Big Data; Network Security; Cloud Computing; Blockchain; Data Science; Engineering Education
Gengxin Sun
Qingdao University, China
sungengxin@qdu.edu.cn | website
Research interests: Big Data; Artificial Intelligence; Complex Networks

Aims & Scope

Aims

Information Dynamics and Applications (IDA), as an international open-access journal, stands at the forefront of exploring the dynamics and expansive applications of information technology. This fully refereed journal delves into the heart of interdisciplinary research, focusing on critical aspects of information processing, storage, and transmission. With a commitment to advancing the field, IDA serves as a crucible for original research, encompassing reviews, research papers, short communications, and special issues on emerging topics. The journal particularly emphasizes innovative analytical and application techniques in various scientific and engineering disciplines.

IDA aims to provide a platform where detailed theoretical and experimental results can be published without constraints on length, encouraging comprehensive disclosure for reproducibility. The journal prides itself on the following attributes:

  • Every publication benefits from prominent indexing, ensuring widespread recognition.

  • A distinguished editorial team upholds unparalleled quality and broad appeal.

  • Seamless online discoverability of each article maximizes its global reach.

  • An author-centric and transparent publication process enhances submission experience.

Scope

The scope of IDA is diverse and expansive, encompassing a wide range of topics within the realm of information technology:

  • Artificial Intelligence (AI) and Machine Learning (ML): Investigating the latest developments in AI and ML, and their applications across various industries.

  • Digitalization and Data Science: Exploring the transformation brought about by digital technologies and the analytical power of data science.

  • Signal Processing and Simulation Optimization: Advancements in the field of signal processing, including audio, video, and communication signal processing, and the development of optimization techniques for simulations.

  • Social Networking and Ubiquitous Computing: Research on the impact of social media on society and the pervasiveness of computing in everyday life.

  • Industrial Engineering and Information Architecture: Studies on the integration of information technology in industrial engineering and the structuring of information systems.

  • Internet of Things (IoT): Delving into the connected world of IoT and its implications for smart cities, healthcare, and more.

  • Data Mining, Storage, and Manipulation: Techniques and innovations in extracting valuable insights from large data sets, and the management of data storage and manipulation.

  • Database Management and Decision Support Systems: Exploring advanced database management systems and the development of decision support systems.

  • Enterprise Systems and E-Commerce: The evolution and future of enterprise resource planning systems and the impact of e-commerce on global markets.

  • Knowledge-Based Systems and Robotics: The intersection of knowledge-based systems with robotics and automation.

  • Cybersecurity and Software as a Service (SaaS): Cutting-edge research in cybersecurity and the growing trend of SaaS in business and consumer applications.

  • Supply Chain Management and Systems Analysis: Innovations in supply chain management driven by information technology, and systems analysis in complex IT environments.

  • Quantum Computing and Optimization: The role of quantum computing in solving complex problems and its future potential.

  • Virtual and Augmented Reality: Exploring the implications of virtual and augmented reality technologies in education, training, entertainment, and more.

Articles
Recent Articles

Abstract

The complexity and variability of Internet traffic data present significant challenges in feature extraction and selection, often resulting in ineffective abnormal traffic monitoring. To address these challenges, an improved Bidirectional Long Short-Term Memory (BiLSTM) network-based approach for Internet abnormal traffic monitoring was proposed. In this method, a constrained minimum collection node coverage strategy was first applied to optimize the selection of collection nodes, ensuring comprehensive data coverage across network nodes while minimizing resource consumption. The collected traffic dataset was then transformed to enhance data validity. To enable more robust feature extraction, a combined Convolutional Neural Network (CNN) and BiLSTM model was employed, allowing for a comprehensive analysis of data characteristics. Additionally, an attention mechanism was incorporated to weigh the significance of attribute features, further enhancing classification accuracy. The final traffic monitoring results were produced through a softmax classifier, demonstrating that the proposed method yields a high monitoring accuracy with a low false positive rate of 0.2, an Area Under the Curve (AUC) of 0.95, and an average monitoring latency of 5.7 milliseconds (ms). These results indicate that the method provides an efficient and rapid response to Internet traffic anomalies, with a marked improvement in monitoring performance and resource efficiency.
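As a rough, self-contained illustration of the architecture this abstract describes (not the authors' code), the sketch below wires a CNN front end, a BiLSTM, and a simple additive attention layer into one Keras classifier. The sequence length, feature count, and all layer widths are assumptions made for the demo.

```python
# Hedged sketch: CNN + BiLSTM + attention for traffic classification.
# SEQ_LEN, N_FEATURES, and layer sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, N_FEATURES, N_CLASSES = 100, 41, 2  # assumed dimensions

inputs = layers.Input(shape=(SEQ_LEN, N_FEATURES))
# 1D convolution extracts local patterns from each traffic window
x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling1D(pool_size=2)(x)
# BiLSTM models temporal dependencies in both directions
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
# Additive attention weighs each time step before pooling
scores = layers.Dense(1, activation="tanh")(x)
weights = layers.Softmax(axis=1)(scores)
x = layers.Multiply()([x, weights])
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(N_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```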

Abstract


The traditional K-means clustering algorithm has unstable clustering results and low efficiency due to the random selection of initial cluster centres. To address the limitations, an improved K-means clustering algorithm based on adaptive guided differential evolution (AGDE-KM) was proposed. First, adaptive operators were designed to enhance global search capability in the early stages and accelerate convergence in later stages. Second, a multi-mutation strategy with a weighted coefficient was introduced to leverage the advantages of different mutation strategies during various evolutionary phases, balancing global and local search capabilities and expediting convergence. Third, a Gaussian perturbation crossover operation was proposed based on the best individual in the current population, providing individuals with superior evolution directions while preserving population diversity across dimensions, thereby avoiding the local optima of the algorithm. The optimal solution output at the end of the algorithm implementation was used as the initial cluster centres, replacing the cluster centres randomly selected by the traditional K-means clustering algorithm. The proposed algorithm was evaluated on public datasets from the UCI repository, including Vowel, Iris, and Glass, as well as a synthetic dataset (Jcdx). The sum of squared errors (SSE) was reduced by 5.65%, 19.59%, 13.31%, and 6.1%, respectively, compared to traditional K-means. Additionally, clustering time was decreased by 83.03%, 81.33%, 77.47%, and 92.63%, respectively. Experimental results demonstrate that the proposed improved algorithm significantly enhances convergence speed and optimisation capability, significantly improving the clustering effectiveness, efficiency, and stability.
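The core idea, evolving good initial centres before running K-means, can be prototyped in a few lines. The sketch below uses SciPy's stock differential evolution in place of the paper's adaptive guided variant, with the Iris dataset and k = 3 as demo assumptions.

```python
# Hedged sketch of the AGDE-KM idea: evolve initial cluster centres by
# minimising the SSE, then seed K-means with them. Stock differential
# evolution stands in for the paper's adaptive guided variant.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X = load_iris().data
k, d = 3, X.shape[1]

def sse(flat_centres):
    """Sum of squared errors to the nearest of the k candidate centres."""
    centres = flat_centres.reshape(k, d)
    dists = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return dists.min(axis=1).sum()

# One (min, max) bound per centre coordinate, taken from the data range
bounds = [(X[:, j].min(), X[:, j].max()) for _ in range(k) for j in range(d)]
result = differential_evolution(sse, bounds, seed=0, maxiter=200)
init_centres = result.x.reshape(k, d)

# Replace K-means' random initialisation with the evolved centres
km = KMeans(n_clusters=k, init=init_centres, n_init=1, random_state=0).fit(X)
print("SSE with evolved initialisation:", km.inertia_)
```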

Open Access
Research article
Detection of Fruit Ripeness and Defectiveness Using Convolutional Neural Networks
Joshua S. Mommoh,
James L. Obetta,
Samuel N. John,
Kennedy Okokpujie,
Osemwegie N. Omoruyi,
Ayokunle A. Awelewa
Available online: 09-22-2024

Abstract


The classification of fruit ripeness and detection of defects are critical processes in the agricultural industry to minimize losses during commercialization. This study evaluated the performance of three Convolutional Neural Network (CNN) architectures—Extreme Inception Network (XceptionNet), Wide Residual Network (Wide ResNet), and Inception Version 4 (Inception V4)—in predicting the ripeness and quality of tomatoes. A dataset comprising 2,589 images of beef tomatoes was assembled from Golden Fingers Farms and Ranches Limited, Abuja, Nigeria. The samples were categorized into six classes representing five progressive ripening stages and a defect class, based on the United States Department of Agriculture (USDA) colour chart. To enhance the dataset's size and diversity, image augmentation through geometric transformations was employed, increasing the dataset to 3,000 images. Fivefold cross-validation was conducted to ensure a robust evaluation of the models' performance. The Wide ResNet model demonstrated superior performance, achieving an average accuracy of 97.87%, surpassing the 96.85% and 96.23% achieved by XceptionNet and Inception V4, respectively. These findings underscore the potential of Wide ResNet as an effective tool for accurately detecting ripeness levels and defects in tomatoes. The comparative analysis highlights the effectiveness of deep learning (DL) techniques in addressing challenges in agricultural automation and quality assessment. The proposed methodology offers a scalable solution for implementing automated ripeness and defect detection systems, with significant implications for reducing waste and improving supply chain efficiency.
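A hedged sketch of this transfer-learning setup follows: geometric augmentation plus a frozen ImageNet backbone with a six-class head. Keras' built-in Xception stands in for the paper's exact XceptionNet, Wide ResNet, and Inception V4 models; the image size and head layers are assumptions.

```python
# Hedged sketch: augmentation + frozen pretrained backbone + 6-class head.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 6  # five progressive ripening stages + one defect class

augment = tf.keras.Sequential([   # geometric augmentation, as in the paper
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False            # freeze pretrained features initially

inputs = layers.Input(shape=(299, 299, 3))
x = augment(inputs)
x = tf.keras.applications.xception.preprocess_input(x)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

The paper's fivefold cross-validation would wrap this in a loop over folds (e.g., `sklearn.model_selection.KFold`), rebuilding the model for each split.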

Abstract

The traditional channel scheduling methods in short-range wireless communication networks are often constrained by fixed rules, resulting in inefficient channel resource utilization and unstable data communication. To address these limitations, a novel multi-channel scheduling approach, based on a Q-learning feedback mechanism, was proposed. The architecture of short-range wireless communication networks was analyzed, focusing on the core network system and wireless access network structures. The network channel nodes were optimized by deploying Dijkstra's algorithm in conjunction with an undirected graph representation of the communication nodes within the network. Multi-channel state characteristic parameters were computed, and a channel state prediction model was constructed to forecast the state of the network channels. The Q-learning feedback mechanism was employed to implement multi-channel scheduling, leveraging the algorithm’s reinforcement learning capabilities and framing the scheduling process as a Markov decision-making problem. Experimental results demonstrate that this method achieved a maximum average packet loss rate of 0.03 and a network throughput of up to 4.5 Mbps, indicating high channel resource utilization efficiency. Moreover, in low-traffic conditions, communication delay remained below 0.4 s, and in high-traffic scenarios, it varied between 0.26 and 0.4 s. These outcomes suggest that the proposed approach enables efficient and stable transmission of communication data, maintaining both low packet loss and high throughput.
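Framing scheduling as a Markov decision process comes down to a tabular Q-learning loop of the following shape. The states, actions, and reward model below are toy assumptions standing in for the paper's channel-state formulation.

```python
# Toy tabular Q-learning for channel selection; the environment is invented.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_CHANNELS = 8, 4        # assumed discretised states / channel actions
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = np.zeros((N_STATES, N_CHANNELS))

def step(state, channel):
    """Toy environment: reward is higher when the chosen channel is 'good'."""
    reward = 1.0 if (state + channel) % N_CHANNELS == 0 else -0.1
    return reward, rng.integers(N_STATES)

state = 0
for _ in range(10_000):
    # epsilon-greedy channel selection
    if rng.random() < EPSILON:
        action = rng.integers(N_CHANNELS)
    else:
        action = int(Q[state].argmax())
    reward, next_state = step(state, action)
    # Q-learning update: move Q(s,a) toward reward + gamma * max_a' Q(s',a')
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max()
                                 - Q[state, action])
    state = next_state

print("Learned channel preference per state:", Q.argmax(axis=1))
```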
Open Access
Research article
Enhanced Defect Detection in Insulator Iron Caps Using Improved YOLOv8n
Qiming Zhang,
Ying Liu,
Song Tang,
Kui Kang
Available online: 09-04-2024

Abstract


To address the challenges in detecting surface defects on insulator iron caps, particularly the complex backgrounds that hinder accurate identification, an improved defect detection algorithm based on YOLOv8n (You Only Look Once version 8 nano) was proposed. The C2f convolutional layers in both the backbone and neck networks were replaced by the C2f-Spatial and Channel Reconstruction Convolution (SCConv) convolutional network, which strengthens the model's capacity to extract detailed surface defect features. Additionally, a Convolutional Block Attention Module (CBAM) was incorporated after the Spatial Pyramid Pooling - Fast (SPPF) layer, enhancing the extraction of deep feature information. Furthermore, the original feature fusion method in YOLOv8n was replaced with a Bidirectional Feature Pyramid Network (BiFPN), significantly improving the detection accuracy. Extensive experiments conducted on a self-constructed dataset demonstrated the effectiveness of this approach, with improvements of 2.7% and 2.9% in mAP@0.5 and mAP@0.95, respectively. The results confirm that the proposed algorithm exhibits strong robustness and superior performance in detecting insulator iron cap defects under varied conditions.
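For orientation, fine-tuning a stock YOLOv8n with the Ultralytics API looks like the sketch below. The paper's architectural changes (C2f-SCConv, CBAM, BiFPN) would live in a custom model YAML, only indicated here as a hypothetical file, and the dataset config name is an assumption.

```python
# Hedged sketch: baseline YOLOv8n fine-tuning with Ultralytics.
from ultralytics import YOLO

# The modified architecture would be declared in a custom config, e.g.:
# model = YOLO("yolov8n-scconv-cbam-bifpn.yaml")  # hypothetical config file
model = YOLO("yolov8n.pt")  # stock nano model as the baseline

model.train(
    data="insulator_caps.yaml",  # assumed dataset config (paths, class names)
    epochs=100,
    imgsz=640,
)
metrics = model.val()  # reports mAP@0.5 and mAP@0.5:0.95 on the val split
```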

Open Access
Research article
Optimizing Energy Storage and Hybrid Inverter Performance in Smart Grids Through Machine Learning
Kavitha Hosakote Shankara,
Mallikarjunaswamy Srikantaswamy,
Sharmila Nagaraju
Available online: 08-24-2024

Abstract


The effective integration of renewable energy sources (RES), such as solar and wind power, into smart grids is essential for advancing sustainable energy management. Hybrid inverters play a pivotal role in the conversion and distribution of this energy, but conventional approaches, including Static Resource Allocation (SRA) and Fixed Threshold Inverter Control (FTIC), frequently encounter inefficiencies, particularly in managing fluctuating renewable energy inputs and adapting to variable load demands. These inefficiencies lead to increased energy loss and a reduction in overall system performance. In response to these challenges, the Optimized Energy Storage and Hybrid Inverter Management Algorithm (OESHIMA) has been developed, employing machine learning for real-time data analysis and decision-making. By continuously monitoring energy production, storage capacity, and consumption patterns, OESHIMA dynamically optimizes energy allocation and inverter operations. Comparative analysis demonstrates that OESHIMA enhances energy efficiency by 0.25% and reduces energy loss by 0.20% when benchmarked against conventional methods. Furthermore, the algorithm extends the lifespan of energy storage systems by 0.15%, contributing to both sustainable and cost-efficient energy management within smart grids. These findings underscore the potential of OESHIMA in addressing the limitations of traditional energy management systems (EMSs) while improving hybrid inverter performance in the context of renewable energy integration.
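The monitor-forecast-dispatch loop the abstract describes can be caricatured as follows. Every signal, capacity, and decision rule below is invented for the demo; OESHIMA's actual algorithm is not reproduced in this listing.

```python
# Illustrative forecast-then-dispatch loop (toy data, assumed capacities).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
hours = np.arange(24 * 30)
load = 5 + 2 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.3, hours.size)
solar = np.clip(3 * np.sin(2 * np.pi * (hours % 24 - 6) / 24), 0, None)

# Forecast next-hour load from the previous 24 hours
X = np.stack([load[i - 24:i] for i in range(24, load.size)])
y = load[24:]
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X[:-24], y[:-24])

battery, capacity = 5.0, 10.0  # kWh, assumed storage
for i in range(load.size - 24, load.size):
    predicted = model.predict(load[i - 24:i].reshape(1, -1))[0]
    surplus = solar[i] - predicted
    if surplus > 0:                 # charge with excess renewables
        battery = min(capacity, battery + surplus)
    else:                           # discharge before importing from the grid
        battery -= min(battery, -surplus)
    print(f"hour {i}: forecast {predicted:.2f} kW, battery {battery:.2f} kWh")
```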

Open Access
Research article
DV-Hop Positioning Method Based on Multi-Strategy Improved Sparrow Search Algorithm
Wenli Lei,
Jinping Han,
Jiawei Bao,
Kun Jia
Available online: 06-29-2024

Abstract

In order to address the problem of large positioning errors in non-ranging positioning algorithms for wireless sensor networks (WSN), this study proposes a Distance Vector-Hop (DV-Hop) positioning method based on the multi-strategy improved sparrow search algorithm (SSA). The method first introduces circle chaotic mapping, adaptive weighting factor, Gaussian variation and an inverse learning strategy to improve the iteration speed and optimization accuracy of the sparrow algorithm, and then uses the improved SSA to estimate the position of the unknown node. Experimental results show that, compared with the original method, the improved DV-Hop algorithm has significantly improved the positioning accuracy.
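A compact DV-Hop illustration follows: hop counts from anchor nodes give estimated distances, which are then turned into a position fix. Ordinary least squares stands in for the paper's improved sparrow search optimiser, and the randomly generated, assumed-connected network layout is a demo assumption.

```python
# Hedged DV-Hop sketch; least_squares replaces the improved SSA search step.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n_nodes, n_anchors, comm_range = 60, 6, 25.0
pos = rng.uniform(0, 100, (n_nodes, 2))
anchors = np.arange(n_anchors)  # the first nodes act as anchors

# Hop counts via BFS over the connectivity graph
adj = np.linalg.norm(pos[:, None] - pos[None, :], axis=2) <= comm_range
hops = np.full((n_anchors, n_nodes), np.inf)
for a in anchors:
    hops[a, a], frontier, h = 0, {a}, 0
    while frontier:
        h += 1
        frontier = {j for i in frontier for j in np.flatnonzero(adj[i])
                    if hops[a, j] == np.inf}
        for j in frontier:
            hops[a, j] = h

# Average hop distance per anchor, from known anchor-to-anchor geometry
hop_dist = np.array([
    sum(np.linalg.norm(pos[a] - pos[b]) for b in anchors if b != a) /
    sum(hops[a, b] for b in anchors if b != a) for a in anchors])

unknown = n_anchors  # estimate one unknown node as an example
d_est = hop_dist * hops[:, unknown]

# Solve for the position minimising range residuals (SSA would search here)
fit = least_squares(
    lambda p: np.linalg.norm(pos[:n_anchors] - p, axis=1) - d_est,
    x0=np.array([50.0, 50.0]))
print("true:", pos[unknown], "estimated:", fit.x)
```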

Abstract

The persistent emergence of software vulnerabilities necessitates the development of effective detection methodologies. Machine learning (ML) and deep learning (DL) offer promising avenues for automating feature extraction; however, their efficacy in vulnerability detection remains insufficiently explored. This study introduces the Multi-Deep Software Automation Detection Network (MDSADNet) to enhance binary and multi-class software classification. Unlike traditional one-dimensional Convolutional Neural Networks (CNNs), MDSADNet employs a novel two-dimensional multi-scale convolutional process to capture both intra-data and inter-data $n$-gram features. Experimental evaluations conducted on binary and multi-class datasets demonstrate MDSADNet's superior performance in software automation classification. Furthermore, the Mantis Search Algorithm (MSA), inspired by the foraging and mating behaviors of mantises, was incorporated to optimize MDSADNet’s hyperparameters. This optimization process was structured into three distinct stages: sexual cannibalism, prey pursuit, and prey assault. The model's validation involved performance metrics such as F1-score, recall, accuracy, and precision. Comparative analyses with state-of-the-art DL and ML models highlight MDSADNet's enhanced classification capabilities. The results indicate that MDSADNet significantly outperforms existing models, achieving higher accuracy and robustness in detecting software vulnerabilities.
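The multi-scale two-dimensional convolution idea can be sketched as parallel branches with different kernel sizes, capturing n-gram-like patterns at several granularities before a shared classifier head. Input shape and branch sizes are assumptions; the published MDSADNet architecture is not reproduced here.

```python
# Hedged sketch of a multi-scale 2D CNN for binary vulnerability detection.
import tensorflow as tf
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(32, 32, 1))  # assumed 2D encoding of a sample
branches = []
for k in (2, 3, 5):  # kernel sizes act like different n-gram widths
    b = layers.Conv2D(32, kernel_size=k, padding="same",
                      activation="relu")(inputs)
    branches.append(layers.GlobalMaxPooling2D()(b))

x = layers.Concatenate()(branches)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(2, activation="softmax")(x)  # vulnerable or not

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The paper's Mantis Search Algorithm would tune hyperparameters such as the kernel sizes and branch widths chosen by hand above.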
Open Access
Research article
Enhancing Pneumonia Diagnosis with Transfer Learning: A Deep Learning Approach
Rashmi Ashtagi,
Nitin Khanapurkar,
Abhijeet R. Patil,
Vinaya Sarmalkar,
Balaji Chaugule,
H. M. Naveen
Available online: 06-16-2024

Abstract

The significant impact of pneumonia on public health, particularly among vulnerable populations, underscores the critical need for early detection and treatment. This research leverages the National Institutes of Health (NIH) chest X-ray dataset, employing a comprehensive exploratory data analysis (EDA) to examine patient demographics, X-ray perspectives, and pixel-level evaluations. A pre-trained Visual Geometry Group (VGG) 16 model is integrated into the proposed architecture, emphasizing the synergy between robust machine learning techniques and EDA insights to enhance diagnostic accuracy. Rigorous data preparation methods are utilized to ensure dataset reliability, addressing missing data and sanitizing demographic information. The study not only provides valuable insights into pneumonia-related trends but also establishes a foundation for future advancements in medical diagnostics. Detailed results are presented, including disease distribution, model performance metrics, and clinical implications, highlighting the potential of machine learning models to support accurate and timely clinical decision-making. This integration of advanced technologies into traditional healthcare practices is expected to improve patient outcomes. Future directions include enhancing model sensitivity, incorporating diverse datasets, and collaborating with medical professionals to validate and implement the system in clinical settings. These efforts are anticipated to revolutionize pneumonia diagnosis and broader medical diagnostics. This work offers comprehensive code for developing and optimizing deep learning (DL) models for medical image classification, focusing on pneumonia detection in X-ray images. The code outlines the construction of the model using pre-trained architectures such as VGG16, detailing essential preparation steps including image augmentation and metadata parsing. Tools for data separation, generator creation, and callback training for monitoring are provided. Additionally, the code facilitates performance assessment through various metrics, including the receiver operating characteristic (ROC) curve and F1-score. By providing a systematic framework, this research aims to accelerate the development process for researchers in medical image processing and expedite the creation of accurate diagnostic tools.
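A condensed version of the VGG16 transfer-learning pipeline the paper outlines is sketched below; the directory layout, image size, and head layers are assumptions, not the study's exact configuration.

```python
# Hedged sketch: frozen VGG16 backbone + binary pneumonia head.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16, vgg16

base = VGG16(include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # keep ImageNet features frozen

inputs = layers.Input(shape=(224, 224, 3))
x = vgg16.preprocess_input(inputs)       # VGG-style channel preprocessing
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(128, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # pneumonia vs. normal

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])

# Assumed directory layout: chest_xrays/train/{NORMAL,PNEUMONIA}/...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xrays/train", image_size=(224, 224), batch_size=32,
    label_mode="binary")
model.fit(train_ds, epochs=5)
```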
Open Access
Research article
Advancements in Image Recognition: A Siamese Network Approach
Jiaqi Du,
Wanshu Fu,
Yi Zhang,
Ziqi Wang
Available online: 06-13-2024

Abstract

In the realm of computer vision, image recognition serves as a pivotal task with extensive applications in intelligent security, autonomous driving, and robotics. Traditional methodologies for image recognition often grapple with computational inefficiencies and diminished accuracy in complex scenarios and extensive datasets. To address these challenges, an algorithm utilizing a siamese network architecture has been developed. This architecture leverages dual interconnected neural network submodules for the efficient extraction and comparison of image features. The effectiveness of this siamese network-based algorithm is demonstrated through its application to various benchmark datasets, where it consistently outperforms conventional approaches in terms of accuracy and processing speed. By employing weight-sharing techniques and optimizing neural network pathways, the proposed algorithm enhances the robustness and efficiency of image recognition tasks. The advancements presented in this study not only contribute to the theoretical understanding but also offer practical solutions, underscoring the significant potential and applicability of siamese networks in advancing image recognition technologies.
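A minimal siamese pair in the sense the abstract describes looks like this: two inputs pass through one shared, weight-tied embedding network, and the distance between embeddings drives a similarity score. Input size and layer widths are illustrative assumptions.

```python
# Hedged sketch of a weight-sharing siamese network in Keras.
import tensorflow as tf
from tensorflow.keras import layers, models

def make_embedder():
    """Shared submodule: both branches reuse these exact weights."""
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64),
    ])

embedder = make_embedder()
a = layers.Input(shape=(28, 28, 1))
b = layers.Input(shape=(28, 28, 1))
ea, eb = embedder(a), embedder(b)  # weight sharing: same module, two calls

# L1 distance between embeddings feeds a similarity head
dist = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([ea, eb])
out = layers.Dense(1, activation="sigmoid")(dist)

siamese = models.Model([a, b], out)
siamese.compile(optimizer="adam", loss="binary_crossentropy",
                metrics=["accuracy"])
```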

Abstract

Traditional methods for keyword extraction predominantly rely on statistical relationships between words, neglecting the cohesive structure of the extracted keyword set. This study introduces an enhanced method for keyword extraction, utilizing the Watts-Strogatz model to construct a word network graph from candidate words within the text. By leveraging the characteristics of small-world networks (SWNs), i.e., short average path lengths and high clustering coefficients, the method ascertains the relevance between words and their impact on sentence cohesion. A comprehensive weight for each word is calculated through a linear weighting of features including part of speech, position, and Term Frequency-Inverse Document Frequency (TF-IDF), subsequently improving the impact factors of the TextRank algorithm for obtaining the final weight of candidate words. This approach facilitates the extraction of keywords based on the final weight outcomes. Through uncovering the deep hidden structures of feature words, the method effectively reveals the connectivity within the word network graph. Experiments demonstrate superiority over existing methods in terms of precision, recall, and F1-measure.
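The pipeline can be roughed out with networkx: build a word co-occurrence graph, read off the small-world statistics (clustering coefficient, average path length), and rank words with PageRank, which is the core of TextRank. The tokenisation and toy text are assumptions; the paper additionally weights candidates by part of speech, position, and TF-IDF before ranking.

```python
# Hedged sketch: word graph, small-world statistics, PageRank ranking.
import networkx as nx

text = ("small world networks have short average path lengths and high "
        "clustering coefficients which makes word networks cohesive")
words = text.split()

# Co-occurrence edges within a sliding window of three words
G = nx.Graph()
for i in range(len(words)):
    for j in range(i + 1, min(i + 3, len(words))):
        G.add_edge(words[i], words[j])

print("clustering coefficient:", nx.average_clustering(G))
if nx.is_connected(G):
    print("average path length:", nx.average_shortest_path_length(G))

ranks = nx.pagerank(G)  # TextRank's core computation
print(sorted(ranks, key=ranks.get, reverse=True)[:5])
```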

Abstract


The decentralised nature of cryptocurrency, coupled with its potential for significant financial returns, has elevated its status as a sought-after investment opportunity on a global scale. Nonetheless, the inherent unpredictability and volatility of the cryptocurrency market present considerable challenges for investors aiming to forecast price movements and secure profitable investments. In response to this challenge, the current investigation was conducted to assess the efficacy of three Machine Learning (ML) algorithms, namely, Gradient Boosting (GB), Random Forest (RF), and Bagging, in predicting the daily closing prices of six major cryptocurrencies, namely, Binance, Bitcoin, Ethereum, Solana, USD, and XRP. The study utilised historical price data spanning from January 1, 2015 to January 26, 2024 for Bitcoin, from January 1, 2018 to January 26, 2024 for Ethereum and XRP, from January 1, 2021 to January 26, 2024 for Solana, and from January 1, 2019 to January 26, 2024 for USD. A novel approach was adopted wherein the lagging prices of the cryptocurrencies were employed as features for prediction, as opposed to the conventional method of using opening, high, and low prices, which are not predictive in nature. The data set was divided into a training set (80%) and a testing set (20%) for the evaluation of the algorithms. The performance of these ML algorithms was systematically compared using a suite of metrics, including R2, adjusted R2, Mean Square Error (MSE), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). The findings revealed that the GB algorithm exhibited superior performance in predicting the prices of Bitcoin and Solana, whereas the RF algorithm demonstrated greater efficacy for Ethereum, USD, and XRP. This comparative analysis underscores the relative advantages of RF over GB and Bagging algorithms in the context of cryptocurrency price prediction. The outcomes of this study not only contribute to the existing body of knowledge on the application of ML algorithms in financial markets but also provide actionable insights for investors navigating the volatile cryptocurrency market.
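The lagged-feature formulation the study adopts is easy to make concrete: earlier closing prices predict today's close, with a chronological 80/20 split. The synthetic series below stands in for the real price histories, and the number of lags is an assumption.

```python
# Hedged sketch: lagged closes as features for a gradient-boosting regressor.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(42)
prices = np.cumsum(rng.normal(0, 1, 1500)) + 100  # stand-in closing prices

LAGS = 5
X = np.stack([prices[i - LAGS:i] for i in range(LAGS, prices.size)])
y = prices[LAGS:]

split = int(0.8 * len(y))  # chronological 80/20 split, as in the study
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)
print("R2:", r2_score(y_test, pred))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
```

Swapping in `RandomForestRegressor` or `BaggingRegressor` reproduces the study's three-way comparison.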

Open Access
Research article
Enhancing 5G LTE Communications: A Novel LDPC Decoder for Next-Generation Systems
Divyashree Yamadur Venkatesh,
Komala Mallikarjunaiah,
Mallikarjunaswamy Srikantaswamy,
Ke Huang
Available online: 03-21-2024

Abstract


The advent of fifth-generation (5G) long-term evolution (LTE) technology represents a critical leap forward in telecommunications, enabling unprecedented high-speed data transfer essential for today’s digital society. Despite the advantages, the transition introduces significant challenges, including elevated bit error rate (BER), diminished signal-to-noise ratio (SNR), and the risk of jitter, undermining network reliability and efficiency. In response, a novel low-density parity check (LDPC) decoder optimized for 5G LTE applications has been developed. This decoder is tailored to significantly reduce BER and improve SNR, thereby enhancing the performance and reliability of 5G communications networks. Its design accommodates advanced switching and parallel processing capabilities, crucial for handling complex data flows inherent in contemporary telecommunications systems. A distinctive feature of this decoder is its dynamic adaptability in adjusting message sizes and code rates, coupled with the augmentation of throughput via reconfigurable switching operations. These innovations allow for a versatile approach to optimizing 5G networks. Comparative analyses demonstrate the decoder’s superior performance relative to the quasi-cyclic low-density check code (QCLDC) method, evidencing marked improvements in communication quality and system efficiency. The introduction of this LDPC decoder thus marks a significant contribution to the evolution of 5G networks, offering a robust solution to the pressing challenges faced by next-generation communication systems and establishing a new standard for high-speed wireless connectivity.
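To illustrate the kind of iterative parity-check decoding an LDPC decoder performs, the toy hard-decision bit-flipping decoder below corrects a single bit error. The tiny parity-check matrix is a textbook-style example, not the 5G LTE code or the soft-decision algorithm from the paper.

```python
# Toy hard-decision bit-flipping LDPC decoding (illustrative H matrix).
import numpy as np

H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

def bit_flip_decode(received, max_iters=20):
    r = received.copy()
    for _ in range(max_iters):
        syndrome = H @ r % 2
        if not syndrome.any():
            return r  # all parity checks satisfied
        # flip the bit involved in the most failed checks
        fails = H.T @ syndrome
        r[np.argmax(fails)] ^= 1
    return r

codeword = np.zeros(7, dtype=int)          # all-zero codeword is always valid
noisy = codeword.copy()
noisy[2] ^= 1                              # inject a single bit error
print("decoded:", bit_flip_decode(noisy))  # recovers the all-zero codeword
```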
