Volume 4, Issue 1, 2025
Open Access
Research article
Advanced Tanning Detection Through Image Processing and Computer Vision
Sayak Mukhopadhyay,
Janmejay Gupta,
Akshay Kumar
Available online: 01-20-2025

Abstract


This study introduces an advanced approach to the automated detection of skin tanning, leveraging image processing and computer vision techniques to accurately assess tanning levels. A method was proposed in which skin tone variations were analyzed by comparing a reference image with a current image of the same subject. This approach establishes a reliable framework for estimating tanning levels through a sequence of image preprocessing, skin segmentation, dominant color extraction, and tanning assessment. The hue-saturation-value (HSV) color space was employed to quantify these variations, with particular emphasis placed on the saturation component, which is identified as a critical factor for tanning detection. This novel focus on the saturation component offers a robust and objective alternative to traditional visual assessment methods. Additionally, the potential integration of machine learning techniques to enhance skin segmentation and improve image analysis accuracy was explored. The proposed framework was positioned within an Internet of Things (IoT) ecosystem for real-time monitoring of sun safety, providing a practical application for both individual and public health contexts. Experimental results demonstrate the efficacy of the proposed method in distinguishing various tanning levels, thereby offering significant advancements in the fields of cosmetic dermatology, public health, and preventive medicine. These findings suggest that the integration of image processing, computer vision, and machine learning can provide a powerful tool for the automated assessment of skin tanning, with broad implications for real-time health monitoring and the prevention of overexposure to ultraviolet (UV) radiation.
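
As a concrete illustration of the pipeline described above (preprocessing, HSV-based skin segmentation, and comparison of the saturation component between a reference and a current image of the same subject), a minimal sketch in Python with OpenCV is given below. The HSV skin thresholds, file names, and the mapping from saturation shift to tanning level are illustrative assumptions, not values taken from the study.

import cv2
import numpy as np

def mean_skin_saturation(path):
    bgr = cv2.imread(path)                      # load image (BGR order)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)  # convert to HSV color space
    # Rough skin segmentation via an HSV range (assumed thresholds).
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([25, 180, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    sat = hsv[:, :, 1]                          # saturation channel
    return float(sat[mask > 0].mean())          # mean S over skin pixels only

# Tanning is assessed from the saturation shift between the two images;
# the shift-to-tanning-level calibration would be determined empirically.
s_ref = mean_skin_saturation("reference.jpg")
s_cur = mean_skin_saturation("current.jpg")
print(f"Saturation shift: {s_cur - s_ref:.1f}")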

Abstract


Drought, a complex natural phenomenon with profound global impacts, including the depletion of water resources, reduced agricultural productivity, and ecological disruption, has become a critical challenge in the context of climate change. Effective drought prediction models are essential for mitigating these adverse effects. This study investigates the contribution of various data preprocessing steps—specifically class imbalance handling and dimensionality reduction techniques—to the performance of machine learning models for drought prediction. The Synthetic Minority Over-sampling Technique (SMOTE) and NearMiss under-sampling methods were employed to address class imbalance within the dataset. Additionally, Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) were applied for dimensionality reduction, aiming to improve computational efficiency while retaining essential features. Decision tree algorithms were trained on the preprocessed data to assess the impact of these preprocessing techniques on model accuracy, precision, recall, and F1-score. The results indicate that the SMOTE-based sampling approach significantly enhances the overall performance of the drought prediction model, particularly in terms of accuracy and robustness. Furthermore, the combination of SMOTE, PCA, and LDA demonstrates a substantial improvement in model reliability and generalizability. These findings underscore the critical importance of carefully selecting and applying appropriate data preprocessing techniques to address class imbalances and reduce feature space, thus optimizing the performance of machine learning models in drought prediction. This study highlights the potential of preprocessing strategies in improving the predictive capabilities of models, providing valuable insights for future research in climate-related prediction tasks.
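
To make the preprocessing chain concrete, the following sketch (using scikit-learn and imbalanced-learn) applies SMOTE over-sampling, then PCA and LDA, before training a decision tree. The synthetic data, split, and component counts are stand-in assumptions rather than the study's dataset or settings; NearMiss (imblearn.under_sampling.NearMiss) can be substituted for SMOTE in the same slot for the under-sampling variant.

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Imbalanced stand-in data (90/10 class split) in place of the drought dataset.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # balance classes

pca = PCA(n_components=10).fit(X_res)                          # unsupervised reduction
lda = LinearDiscriminantAnalysis().fit(pca.transform(X_res), y_res)  # supervised reduction

clf = DecisionTreeClassifier(random_state=0)
clf.fit(lda.transform(pca.transform(X_res)), y_res)

y_pred = clf.predict(lda.transform(pca.transform(X_te)))
print(classification_report(y_te, y_pred))  # accuracy, precision, recall, F1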

Abstract


This study investigates the recognition of seven primary human emotions—contempt, anger, disgust, surprise, fear, happiness, and sadness—based on facial expressions. A transfer learning approach was employed, utilizing three pre-trained convolutional neural network (CNN) architectures: AlexNet, VGG16, and ResNet50. The system was structured to perform facial expression recognition (FER) by incorporating three key stages: face detection, feature extraction, and emotion classification using a multiclass classifier. The proposed methodology was designed to enhance pattern recognition accuracy through a carefully structured training pipeline. Furthermore, the performance of the transfer learning models was compared using a multiclass support vector machine (SVM) classifier, and extensive testing was planned on large-scale datasets to further evaluate detection accuracy. This study addresses the challenge of spontaneous FER, a critical research area in human-computer interaction, security, and healthcare. A key contribution of this study is the development of an efficient feature extraction method, which facilitates FER with minimal reliance on extensive datasets. The proposed system demonstrates notable improvements in recognition accuracy compared to traditional approaches, significantly reducing misclassification rates. It is also shown to require less computational time and resources, thereby enhancing its scalability and applicability to real-world scenarios. The approach outperforms conventional techniques, including SVMs with handcrafted features, by leveraging the robust feature extraction capabilities of transfer learning. This framework offers a scalable and reliable solution for FER tasks, with potential applications in healthcare, security, and human-computer interaction. Additionally, because the system can function effectively without a caregiver present, it offers significant assistance to individuals with disabilities in expressing their emotional needs. This research contributes to the growing body of work on facial emotion recognition and paves the way for future advancements in artificial intelligence-driven emotion detection systems.
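
The transfer-learning stage can be illustrated with a short sketch: a pre-trained ResNet50 (one of the three backbones considered) serves as a frozen feature extractor, and a multiclass SVM classifies the seven emotions. Face detection, dataset loading, and the training variables are omitted or assumed; this is an outline of the general approach, not the authors' exact configuration.

import torch
from torchvision.models import resnet50, ResNet50_Weights
from sklearn.svm import SVC

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.fc = torch.nn.Identity()     # drop the ImageNet head: outputs 2048-d features
model.eval()

preprocess = weights.transforms()  # resizing/normalization expected by the backbone

@torch.no_grad()
def extract_features(faces):       # faces: list of 3xHxW image tensors (cropped faces)
    batch = torch.stack([preprocess(f) for f in faces])
    return model(batch).numpy()

# Multiclass SVM over the extracted features (one-vs-one in scikit-learn);
# train_faces/train_labels and test_faces are assumed to exist.
# svm = SVC(kernel="linear").fit(extract_features(train_faces), train_labels)
# predictions = svm.predict(extract_features(test_faces))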

Abstract


Stance, a critical discourse marker, reflects the expression of attitudes, feelings, evaluations, or judgments by speakers or writers toward a topic or other participants in a conversation. This study investigates the manifestation of stance in the discourse of four prominent artificial intelligence (AI) chatbots—ChatGPT, Gemini, MetaAI, and Bing Copilot—focusing on three dimensions: interpersonal stance (how chatbots perceive one another), epistemic stance (their relationship to the topic of discussion), and style stance (their communicative style). Through a systematic analysis, it is revealed that these chatbots employ various stance markers, including hedging, self-mention, power dominance, alignment, and face-saving strategies. Notably, the use of face-saving framing by AI models, despite their lack of a genuine “face,” highlights the distinction between authentic interactional intent and the reproduction of linguistic conventions. This suggests that stance in AI discourse is not a product of subjective intent but rather an inherent feature of natural language. Moreover, this study extends the discourse by examining stance as a feature of chatbot-to-chatbot communication rather than human-AI interactions, thereby bridging the gap between human linguistic behaviors and AI tendencies. It is concluded that stance is not an extraneous feature of discourse but an integral and unavoidable aspect of language use: if chatbots must use language, then pragmatic features such as stance are inevitable, and chatbots will replicate them. Ultimately, this raises a broader question: Is it even possible for a chatbot to produce language devoid of stance? The implications of this research underscore the intrinsic connection between language use and pragmatic features, suggesting that stance is an inescapable component of any linguistic output, including that of AI systems.
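
The study's analysis is qualitative, but as a purely illustrative sketch (not the authors' procedure), stance-marker detection could be operationalized as a simple lexicon count over chatbot transcripts. The marker lists below are invented examples; a real analysis would use validated marker inventories.

import re
from collections import Counter

# Assumed example lexicons for three of the marker types discussed above.
STANCE_MARKERS = {
    "hedging":      ["might", "may", "perhaps", "arguably", "it seems"],
    "self_mention": ["i think", "in my view", "as an ai"],
    "alignment":    ["i agree", "as you said", "building on"],
}

def count_markers(text):
    text = text.lower()
    counts = Counter()
    for category, phrases in STANCE_MARKERS.items():
        for phrase in phrases:
            counts[category] += len(re.findall(r"\b" + re.escape(phrase) + r"\b", text))
    return counts

print(count_markers("I think this might work, but as an AI I may be wrong."))
# Counter({'hedging': 2, 'self_mention': 2, 'alignment': 0})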
