Volume 3, Issue 1, 2024
Open Access
Research article
Optimizing Misinformation Control: A Cloud-Enhanced Machine Learning Approach
Muhammad Daniyal Baig,
Waseem Akram,
Hafiz Burhan Ul Haq,
Hassan Zahoor Rajput,
Muhammad Imran
Available online: 01-24-2024

Abstract

The digital age has witnessed the rampant spread of misinformation, significantly impacting the medical and financial sectors. This phenomenon, fueled by various sources, contributes to public distress and information warfare, necessitating robust countermeasures. In response, a novel model has been developed, integrating cloud computing with advanced machine learning techniques. This model prioritizes the identification and mitigation of false information through optimized classification strategies. Utilizing diverse datasets for predictive analysis, the model employs state-of-the-art algorithms, including K-Nearest Neighbors (KNN) and Random Forest (RF), to enhance accuracy and efficiency. A distinctive feature of this approach is the implementation of cloud-empowered transfer learning, providing a scalable and optimized solution to address the challenges posed by the vast, yet often unreliable, information available online. By harnessing the potential of cloud computing and machine learning, this model offers a strategic approach to combating the prevalent issue of misinformation in the digital world.
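
As a rough illustration of the classification stage described above, the following sketch pairs TF-IDF text features with the two named classifiers, KNN and RF, in scikit-learn. The toy corpus, label encoding, and hyperparameters are assumptions for demonstration, not the authors' datasets or settings, and the cloud-empowered transfer learning component is not reproduced here.

```python
# Minimal sketch of misinformation classification with KNN and Random Forest.
# Corpus, labels, and hyperparameters are illustrative assumptions.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Toy corpus; 1 marks misinformation, 0 marks reliable text (assumed encoding).
texts = [
    "officials confirm the quarterly report",
    "miracle cure that doctors do not want you to know",
    "central bank publishes audited figures",
    "secret chip hidden inside every vaccine dose",
    "study replicated across three independent labs",
    "forward this message or lose your savings tonight",
    "regulator releases inspection results",
    "one weird trick erases all your debt instantly",
]
labels = [0, 1, 0, 1, 0, 1, 0, 1]

X = TfidfVectorizer(stop_words="english").fit_transform(texts)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=42, stratify=labels
)

for model in (KNeighborsClassifier(n_neighbors=3),
              RandomForestClassifier(n_estimators=200, random_state=42)):
    model.fit(X_train, y_train)
    print(type(model).__name__, accuracy_score(y_test, model.predict(X_test)))
```
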
Open Access
Research article
Enhancing Image Captioning and Auto-Tagging Through an FCLN with Faster R-CNN Integration
Shalaka Prasad Deore,
Taibah Sohail Bagwan,
Prachiti Sunil Bhukan,
Harsheen Tejindersingh Rajpal,
Shantanu Bharat Gade
Available online: 02-02-2024

Abstract


In the realm of automated image captioning, which entails generating descriptive text for images, the fusion of Natural Language Processing (NLP) and computer vision techniques is paramount. This study introduces the Fully Convolutional Localization Network (FCLN), a novel approach that concurrently addresses localization and description tasks within a singular forward pass. It maintains spatial information and avoids detail loss, streamlining the training process with consistent optimization. The foundation of FCLN is laid by a Convolutional Neural Network (CNN), adept at extracting salient image features. Central to this architecture is a Localization Layer, pivotal in precise object detection and caption generation. The FCLN architecture amalgamates a region detection network, reminiscent of Faster Region-CNN (R-CNN), with a captioning network. This synergy enables the production of contextually meaningful image captions. The incorporation of the Faster R-CNN framework facilitates region-based object detection, offering precise contextual understanding and inter-object relationships. Concurrently, a Long Short-Term Memory (LSTM) network is employed for generating captions. This integration yields superior performance in caption accuracy, particularly in complex scenes. Evaluations conducted on the Microsoft Common Objects in Context (MS COCO) test server affirm the model's superiority over existing benchmarks, underscoring its efficacy in generating precise and context-rich image captions.
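The caption-generation half of such a pipeline can be sketched as follows. This is a minimal illustration, not the authors' FCLN: pooled region features, which in the full model would come from the Faster R-CNN style localization layer, condition an LSTM that emits caption tokens. All dimensions and the random inputs are placeholder assumptions.

```python
# Sketch of an LSTM caption decoder conditioned on one region feature vector.
# The region features here are random stand-ins for Faster R-CNN output.
import torch
import torch.nn as nn

class RegionCaptioner(nn.Module):
    def __init__(self, feat_dim=256, embed_dim=128, hidden_dim=256, vocab_size=1000):
        super().__init__()
        self.project = nn.Linear(feat_dim, embed_dim)  # map region feature into word space
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, region_feat, captions):
        # region_feat: (B, feat_dim) pooled feature of one detected region
        # captions:    (B, T) token ids of the ground-truth caption (teacher forcing)
        start = self.project(region_feat).unsqueeze(1)   # (B, 1, E) acts as the first input
        words = self.embed(captions[:, :-1])             # shift right: predict next token
        out, _ = self.lstm(torch.cat([start, words], dim=1))
        return self.head(out)                            # (B, T, vocab) logits

# Toy usage with random region features standing in for detector output.
model = RegionCaptioner()
feats, caps = torch.randn(4, 256), torch.randint(0, 1000, (4, 12))
print(model(feats, caps).shape)  # torch.Size([4, 12, 1000])
```
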

Open Access
Review article
A Comparative Review of Internet of Things Model Workload Distribution Techniques in Fog Computing Networks
Nandini Gowda Puttaswamy,
Anitha Narasimha Murthy,
Houssem Degha
Available online: 03-17-2024

Abstract


In the realm of fog computing (FC), a vast array of intelligent devices collaborates within an intricate network, a synergy that, while promising, has not been without its challenges. These challenges, including data loss, difficulties in workload distribution, a lack of parallel processing capabilities, and security vulnerabilities, have necessitated the exploration and deployment of a variety of solutions. Among these, software-defined networks (SDN), double-Q learning algorithms, service function chains (SFC), and virtual network functions (VNF) stand out as significant. An exhaustive survey has been conducted to explore workload distribution methodologies within Internet of Things (IoT) architectures in FC networks. This investigation is anchored in a parameter-centric analysis, aiming to enhance the efficiency of data transmission across such networks. It delves into the architectural framework, pivotal pathways, and applications to identify bottlenecks and forge the most effective communication channels for IoT devices under substantial workload conditions. The findings of this research are anticipated to guide the selection of superior simulation tools, validate datasets, and refine strategies for data propagation. This, in turn, is expected to facilitate optimal power consumption and enhance outcomes in data transmission and propagation across multiple dimensions. The rigorous exploration detailed herein not only illuminates the complexities of workload distribution in FC networks but also charts a course towards more resilient and efficient IoT ecosystems.
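For readers unfamiliar with the problem the surveyed techniques address, the sketch below shows perhaps the simplest workload-distribution baseline: greedy assignment of each IoT task to the currently least-loaded fog node. The task loads and node count are invented for illustration; no specific surveyed method (SDN, double-Q learning, SFC, or VNF) is implemented here.

```python
# Greedy least-loaded assignment of IoT tasks to fog nodes, kept in a min-heap
# keyed by accumulated load. Loads and node count are illustrative assumptions.
import heapq

def distribute(tasks, node_count):
    """Assign each task (a load value) to the currently least-loaded node."""
    heap = [(0.0, node) for node in range(node_count)]  # (accumulated load, node id)
    heapq.heapify(heap)
    assignment = {}
    for task_id, load in enumerate(tasks):
        node_load, node = heapq.heappop(heap)   # node with the smallest load so far
        assignment[task_id] = node
        heapq.heappush(heap, (node_load + load, node))
    return assignment

print(distribute([3.0, 1.0, 2.5, 0.5, 2.0], node_count=2))
# {0: 0, 1: 1, 2: 1, 3: 0, 4: 0}
```
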

Open Access
Research article
Enhancing 5G LTE Communications: A Novel LDPC Decoder for Next-Generation Systems
Divyashree Yamadur Venkatesh,
Komala Mallikarjunaiah,
Mallikarjunaswamy Srikantaswamy,
Ke Huang
Available online: 03-21-2024

Abstract


The advent of fifth-generation (5G) long-term evolution (LTE) technology represents a critical leap forward in telecommunications, enabling unprecedented high-speed data transfer essential for today’s digital society. Despite the advantages, the transition introduces significant challenges, including elevated bit error rate (BER), diminished signal-to-noise ratio (SNR), and the risk of jitter, undermining network reliability and efficiency. In response, a novel low-density parity check (LDPC) decoder optimized for 5G LTE applications has been developed. This decoder is tailored to significantly reduce BER and improve SNR, thereby enhancing the performance and reliability of 5G communications networks. Its design accommodates advanced switching and parallel processing capabilities, crucial for handling complex data flows inherent in contemporary telecommunications systems. A distinctive feature of this decoder is its dynamic adaptability in adjusting message sizes and code rates, coupled with the augmentation of throughput via reconfigurable switching operations. These innovations allow for a versatile approach to optimizing 5G networks. Comparative analyses demonstrate the decoder’s superior performance relative to the quasi-cyclic low-density parity check code (QCLDC) method, evidencing marked improvements in communication quality and system efficiency. The introduction of this LDPC decoder thus marks a significant contribution to the evolution of 5G networks, offering a robust solution to the pressing challenges faced by next-generation communication systems and establishing a new standard for high-speed wireless connectivity.
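The iterative parity-check decoding at the heart of any LDPC decoder can be illustrated with a minimal hard-decision bit-flipping sketch. The small (7,4) Hamming parity-check matrix below stands in for a real 5G LTE code construction, and nothing here reflects the paper's switching, parallel-processing, or rate-adaptation machinery.

```python
# Hard-decision bit-flipping decoding: repeatedly flip the bit involved in the
# most failed parity checks until the syndrome is zero or iterations run out.
import numpy as np

def bit_flip_decode(H, received, max_iters=20):
    word = received.copy()
    for _ in range(max_iters):
        syndrome = H.dot(word) % 2
        if not syndrome.any():
            return word                     # all parity checks satisfied
        failures = H.T.dot(syndrome)        # failed checks each bit touches
        word[np.argmax(failures)] ^= 1      # flip the most suspicious bit
    return word

# (7,4) Hamming parity-check matrix as a toy stand-in for an LDPC code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
received = np.array([1, 1, 1, 0, 0, 0, 1])  # codeword 1110000 with bit 6 flipped
print(bit_flip_decode(H, received))          # [1 1 1 0 0 0 0]
```
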

Abstract


The decentralised nature of cryptocurrency, coupled with its potential for significant financial returns, has elevated its status as a sought-after investment opportunity on a global scale. Nonetheless, the inherent unpredictability and volatility of the cryptocurrency market present considerable challenges for investors aiming to forecast price movements and secure profitable investments. In response to this challenge, the current investigation was conducted to assess the efficacy of three Machine Learning (ML) algorithms, namely, Gradient Boosting (GB), Random Forest (RF), and Bagging, in predicting the daily closing prices of six major cryptocurrencies: Binance, Bitcoin, Ethereum, Solana, USD, and XRP. The study utilised historical price data spanning from January 1, 2015 to January 26, 2024 for Bitcoin, from January 1, 2018 to January 26, 2024 for Ethereum and XRP, from January 1, 2021 to January 26, 2024 for Solana, and from January 1, 2019 to January 26, 2024 for USD. A novel approach was adopted wherein the lagged prices of the cryptocurrencies were employed as features for prediction, as opposed to the conventional method of using opening, high, and low prices, which are not predictive in nature. The data set was divided into a training set (80%) and a testing set (20%) for the evaluation of the algorithms. The performance of these ML algorithms was systematically compared using a suite of metrics, including R², adjusted R², Mean Square Error (MSE), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). The findings revealed that the GB algorithm exhibited superior performance in predicting the prices of Bitcoin and Solana, whereas the RF algorithm demonstrated greater efficacy for Ethereum, USD, and XRP. This comparative analysis underscores the relative advantages of RF over GB and Bagging algorithms in the context of cryptocurrency price prediction. The outcomes of this study not only contribute to the existing body of knowledge on the application of ML algorithms in financial markets but also provide actionable insights for investors navigating the volatile cryptocurrency market.
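A hedged sketch of the experimental setup described above: lagged closing prices as features, an 80/20 chronological split, the three named regressors, and the reported error metrics. The synthetic price series and the number of lags are stand-in assumptions for the historical per-coin data.

```python
# Lagged-price regression with GB, RF, and Bagging; synthetic prices stand in
# for the per-coin historical data described in the abstract.
import numpy as np
from sklearn.ensemble import (BaggingRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 500)) + 100   # synthetic closing prices

lags = 5                                          # assumed number of lag features
X = np.column_stack([prices[i:len(prices) - lags + i] for i in range(lags)])
y = prices[lags:]                                 # next-day close to predict

split = int(0.8 * len(X))                         # chronological 80/20 split
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

for model in (GradientBoostingRegressor(random_state=0),
              RandomForestRegressor(random_state=0),
              BaggingRegressor(random_state=0)):
    pred = model.fit(X_train, y_train).predict(X_test)
    rmse = mean_squared_error(y_test, pred) ** 0.5
    print(f"{type(model).__name__}: R2={r2_score(y_test, pred):.3f} "
          f"RMSE={rmse:.3f} MAE={mean_absolute_error(y_test, pred):.3f}")
```

Splitting chronologically rather than randomly matters here: shuffling daily prices would leak future information into the training set and overstate every model's accuracy.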
