Continuous improvement in service quality assurance, grounded in customer satisfaction, is critical for loading and unloading activities at dry bulk ports. Many ports are now adopting and refining various methods in response to the advances of Industry 4.0. This research aims to develop and implement Adaptive DMAIC 4.0, whose key advantages include IoT-based real-time monitoring, predictive data analytics, and process automation. Current measurements place the process at Six Sigma level 3 (DPMO = 11,800). While the Cp value of 1.19 indicates a stable process, the Cpk value of $0.76 < 1$ reveals remaining issues that require systematic, continuous improvement. To enhance process performance, the average loading/unloading time should be kept closer to the target midpoint of 1.5 minutes per bulk unit, creating a more balanced distribution. This adjustment would raise the Cpk value toward the minimum standard of $\geq 1.33$, ensuring consistently efficient operations. In theory, implementing the DMAIC 4.0 framework will establish a system that is more resilient to internal and external disruptions, enables sustained performance improvement, and drives toward zero defects and Six Sigma capability. In practice, this approach significantly enhances loading and unloading performance, boosting capacity, operational capability, and TKBM professionalism while eliminating human error.
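As a worked illustration of the capability indices cited above, the following minimal sketch computes Cp and Cpk from simulated cycle times; the specification limits, mean, and standard deviation are illustrative assumptions, not the study's measurements.

```python
import numpy as np

# Illustrative spec limits for loading/unloading cycle time (minutes per bulk
# unit); the 1.0-2.0 band gives the 1.5 min midpoint target cited above.
LSL, USL = 1.0, 2.0
# Simulated cycle times with an off-center mean (assumed values, not study data).
times = np.random.default_rng(0).normal(1.68, 0.14, 1000)

mu, sigma = times.mean(), times.std(ddof=1)
cp = (USL - LSL) / (6 * sigma)                # potential capability: spread only
cpk = min(USL - mu, mu - LSL) / (3 * sigma)   # actual capability: spread + centering
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
# Cp near 1.19 with Cpk near 0.76 reproduces the pattern above: the spread is
# acceptable, but the off-center mean must move toward 1.5 to reach Cpk >= 1.33.
```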
This study develops and validates a framework for evaluating the success of welfare-oriented digital platforms, with a focus on Thailand’s national pension system. The framework integrates the Information Systems Success Model (ISSM) and the Technology Acceptance Model (TAM), with trust as a socio-technical construct, to evaluate stability, usability, and trustworthiness in aging societies. Data were collected through a survey of 400 elderly citizens and analyzed using structural equation modeling (SEM) in the Jamovi software. The findings were supplemented by a thematic analysis of open-ended responses, which provided context for anomalies such as instability in use, fraud risk, and usability issues. System quality increased perceived ease of use but decreased perceived usefulness when instability occurred. Trust increased usefulness but did not predict behavioral intention. Ease of use unexpectedly decreased intention. User satisfaction, rather than actual use, emerged as the strongest predictor of net benefits. These findings underscore that the observed adoption anomalies stem from structural and institutional barriers rather than user reluctance. The study rethinks adoption constructs as indicators of system success, thereby extending the ISSM-TAM integration. It provides policymakers and system architects with a means to diagnose problems and to develop welfare information systems for aging societies that are more resilient, trustworthy, and accessible.
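To make the hypothesized structure concrete, here is a minimal sketch of a comparable SEM path model using Python's semopy package (the study itself used Jamovi); the construct names, item columns, and the file `survey_items.csv` are hypothetical.

```python
import pandas as pd
import semopy

# Lavaan-style description: latent constructs measured by survey items, with
# structural paths mirroring the ISSM-TAM relations described above.
MODEL = """
SystemQuality =~ sq1 + sq2 + sq3
Trust =~ tr1 + tr2 + tr3
EaseOfUse =~ pe1 + pe2 + pe3
Usefulness =~ pu1 + pu2 + pu3
Intention =~ bi1 + bi2
EaseOfUse ~ SystemQuality
Usefulness ~ SystemQuality + Trust + EaseOfUse
Intention ~ Usefulness + EaseOfUse + Trust
"""

df = pd.read_csv("survey_items.csv")  # hypothetical item-level responses
model = semopy.Model(MODEL)
model.fit(df)
print(model.inspect())  # path coefficients, standard errors, p-values
```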
In contexts where public safety is of paramount importance, the capacity to detect violent situations through audio monitoring has become increasingly indispensable. This paper proposes a hybrid audio-text violence detection system that combines text-based information with frequency-based features to improve accuracy and reliability. The system comprises two core models: a frequency-based Random Forest (RF) classifier and a natural language processing (NLP) model, Bidirectional Encoder Representations from Transformers (BERT). The RF classifier was trained on Mel-Frequency Cepstral Coefficients (MFCCs) and other spectral features, whereas BERT identified violent content in transcribed speech. The BERT model was improved through task-specific fine-tuning on a curated violence-related text dataset, with class-weighting strategies applied to address category imbalance. This adaptation enhanced its ability to capture subtle violent language patterns beyond general-purpose embeddings. Furthermore, a meta-learner ensemble based on the eXtreme Gradient Boosting (XGBoost) classifier combined the probability outputs of the two base models. This ensemble strategy differs from conventional fusion techniques that depend on a single modality, either NLP or audio. The XGBoost fusion model leveraged the strengths of both base models to improve classification accuracy and robustness by forming a better decision boundary. The proposed system is supported by a Graphical User Interface (GUI) for real-time analysis in applications such as smart cities, emergency response, and security monitoring. The proposed XGBoost ensemble model attained an overall accuracy of 97.37%, demonstrating the efficacy of machine learning-based decision fusion.
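A minimal sketch of the stacked fusion described above follows: base-model class probabilities become meta-features for an XGBoost learner. The MFCC matrix and the BERT probabilities are random stand-ins for the real feature pipelines.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
X_mfcc = rng.normal(size=(500, 13))   # stand-in for MFCC/spectral features
p_bert = rng.uniform(size=500)        # stand-in for BERT P(violent | transcript)
y = rng.integers(0, 2, 500)           # 1 = violent, 0 = non-violent

X_tr, X_te, p_tr, p_te, y_tr, y_te = train_test_split(
    X_mfcc, p_bert, y, test_size=0.2, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Meta-features: one probability per base model. In practice the RF column
# should come from out-of-fold predictions to avoid training-set leakage.
Z_tr = np.column_stack([rf.predict_proba(X_tr)[:, 1], p_tr])
Z_te = np.column_stack([rf.predict_proba(X_te)[:, 1], p_te])

meta = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
meta.fit(Z_tr, y_tr)
print("fused accuracy:", meta.score(Z_te, y_te))
```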
Efficient light rail transit (LRT) systems are crucial for sustainable urban mobility; however, unforeseen departure delays continue to be a major hurdle, undermining operational reliability and passenger satisfaction. This study establishes a data-driven framework for forecasting departure delays by combining static GTFS schedules with real-time GTFS operational data from the Canberra LRT system. The dataset included 15,538 records with 42 attributes, spanning from 28 August 2020 to 13 August 2022. A stringent preprocessing pipeline was implemented, encompassing temporal feature engineering and feature selection based on mutual information. The Random Forest regressor with feature engineering and selection (RFR-FEFS) attained the highest predictive performance on the test set ($R^2$ = 0.94, MAE = 2.93, MSE = 34.32). The high accuracy indicates the model’s efficacy, yet it necessitates careful evaluation of potential overfitting and its generalizability beyond the examined system. Ablation experiments were performed to assess the impact of various feature groups by omitting temporal, spatial, or operational attributes. The findings indicate that the exclusion of temporal features decreased $R^2$ to 0.90, the exclusion of spatial features reduced it to 0.93, and the exclusion of operational features resulted in the most significant decline to 0.23. These findings affirm that all three feature categories contribute distinctly and synergistically to model performance. This research illustrates the capability of integrating diverse GTFS data with sophisticated machine learning techniques to attain precise LRT delay forecasts. Nevertheless, the framework was validated solely on one system and time frame; future research should investigate its transferability to other cities and integrate supplementary contextual data, including meteorological conditions and incident reports, to improve robustness and practical applicability.
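The following minimal sketch illustrates an RFR-FEFS-style pipeline under stated assumptions: the file `lrt_gtfs_merged.csv`, the column names, and the choice of k = 15 selected features are hypothetical, and all remaining columns are assumed numeric after preprocessing.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectKBest, mutual_info_regression
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("lrt_gtfs_merged.csv", parse_dates=["scheduled_departure"])

# Temporal feature engineering from the scheduled timestamp.
df["hour"] = df["scheduled_departure"].dt.hour
df["day_of_week"] = df["scheduled_departure"].dt.dayofweek
df["is_peak"] = df["hour"].isin([7, 8, 16, 17]).astype(int)

X = df.drop(columns=["delay_seconds", "scheduled_departure"])  # numeric features
y = df["delay_seconds"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Keep the features sharing the most mutual information with the delay target.
selector = SelectKBest(mutual_info_regression, k=15).fit(X_tr, y_tr)
rfr = RandomForestRegressor(n_estimators=300, random_state=0)
rfr.fit(selector.transform(X_tr), y_tr)

pred = rfr.predict(selector.transform(X_te))
print(f"R2={r2_score(y_te, pred):.2f}  MAE={mean_absolute_error(y_te, pred):.2f}")
```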
Modern image processing systems deployed on embedded and heterogeneous platforms face increasing pressure to deliver high performance under strict energy and real-time constraints. The rapid growth in image resolution and frame rates has significantly amplified computational demand, making uniform full-precision processing increasingly inefficient. This paper presents a significance-driven adaptive approximate computing framework that reduces energy consumption by tailoring computational precision and resource allocation to the spatial importance of image content. We introduce a statistical importance metric that captures local structural variability using low-complexity deviation-based analysis on luminance information. The metric serves as a lightweight proxy for identifying regions that are more sensitive to approximation errors, enabling differentiated processing without the overhead of semantic or perceptual saliency models. Based on this importance classification, the proposed framework dynamically orchestrates heterogeneous CPU–GPU resources, applies variable kernel sizes, and exploits dynamic voltage and frequency scaling (DVFS) to reclaim timing slack for additional energy savings. The framework is validated through two complementary case studies: (i) a heterogeneous software implementation for adaptive convolution filtering on an Odroid XU-4 embedded platform, and (ii) a hardware-level approximate circuit allocation approach using configurable-precision arithmetic units. Experimental results demonstrate energy reductions of up to 60\% compared to uniform-precision baselines, while maintaining acceptable visual quality. Image quality is evaluated using both PSNR and the perceptually motivated SSIM metric, confirming that the proposed approach preserves structural fidelity despite aggressive approximation.
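As a concrete sketch of the deviation-based importance metric, the block classifier below flags high-variability luminance regions for full-precision processing; the block size and threshold are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np

def importance_map(luma: np.ndarray, block: int = 16, thresh: float = 12.0):
    """Flag each block as high-importance (True) or safely approximable (False)."""
    h, w = luma.shape
    out = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            tile = luma[by*block:(by+1)*block, bx*block:(bx+1)*block]
            out[by, bx] = tile.std() > thresh  # high deviation => error-sensitive
    return out

# Synthetic test image: a smooth luminance ramp with one textured patch.
luma = np.tile(np.linspace(0, 255, 256, dtype=np.float32), (256, 1))
luma[96:160, 96:160] += np.random.default_rng(1).normal(0, 30, (64, 64))

# Smooth blocks (low std) can be routed to reduced-precision kernels or a
# DVFS-throttled core; only the textured patch keeps full precision.
mask = importance_map(luma)
print(f"{mask.mean():.0%} of blocks flagged as high-importance")
```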
The rapid urbanization and economic development in China have led to increasing demand for utility infrastructure such as water, gas, and communication networks, exacerbating urban challenges like land scarcity and congestion. Previous studies have highlighted the potential of underground space development as a means to address these issues. Underground utility tunnel construction has been identified as a key solution for efficient pipeline maintenance and the advancement of smart city initiatives. However, as the scale of such projects continues to grow, so does the associated risk. Traditional risk assessment frameworks have often overlooked the significance of intelligent operation and maintenance (O&M) in the context of the digital transformation of infrastructure. This study proposes an updated risk assessment approach that integrates smart O&M into the evaluation framework, reflecting the adoption of technologies such as Building Information Modeling (BIM), digital twins, and big data in construction processes. The Analytic Hierarchy Process (AHP), expert consultations, questionnaire surveys, and fuzzy evaluation methods are applied to identify and assess risks in an underground utility tunnel project in Q City. The results indicate that the overall risk level of the project is above average, with the most significant risks occurring during the construction and operational phases. Risk mitigation measures have been proposed for the identified high-risk areas, tailored to the specific characteristics of the project. This study underscores the importance of incorporating smart operation and information technology risks into traditional risk management frameworks. The findings emphasize the need for a paradigm shift in the risk management of underground utility tunnel projects, particularly in light of the ongoing digital transformation of infrastructure. Such an approach would enhance the safety and efficiency of project management across the entire life cycle of the tunnel system.
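For reference, the AHP step can be sketched as follows: criterion weights are the normalized principal eigenvector of a pairwise-comparison matrix, checked with Saaty's consistency ratio. The 3x3 judgment matrix below is illustrative, not the study's survey data.

```python
import numpy as np

# Illustrative pairwise comparisons, e.g., construction vs design vs O&M risk.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)           # principal eigenvalue (lambda_max)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # normalized priority weights

n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)  # consistency index
RI = 0.58                             # Saaty's random index for n = 3
print("weights:", np.round(w, 3), " CR =", round(CI / RI, 3))  # CR < 0.1 is acceptable
```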
Industrial symbiosis (IS) is a strategic framework for collaboration among companies through innovative partnerships aimed at optimizing resource utilization, reducing environmental impact, and promoting sustainable development in line with circular economy principles. This study conducted a systematic literature review (SLR) and a quantitative analysis of the effectiveness of IS tools in resource management. Publications from January 2020 to December 2024 were retrieved from established databases, including SpringerLink, ScienceDirect, EBSCO, and DOAJ, with a focus on industrial engineering, environmental management, circular economy, sustainable development, resource conservation, and recycling. Advanced methodologies, including the Fuzzy Analytic Hierarchy Process (FAHP) and the Decision-Making Trial and Evaluation Laboratory (DEMATEL), were applied to evaluate four key dimensions, i.e., Decision-Making (DMD), Geographical Location (GLD), Strategic Planning (SD), and Lean Manufacturing (LMD), along with 21 subcriteria. The results indicated that DMD and GLD functioned as causal dimensions influencing SD and LMD, while alternatives such as Intelligent Waste Recycling Systems (IWRS) and Life Cycle Assessment (LCA) were found to be highly efficient in resource utilization. The identification of dominant relationships via a threshold value of α = 0.58 highlighted strategic leverage points for implementing sustainable manufacturing practices. These findings emphasize that effective decision-making, combined with strategic planning based on geographical considerations and the application of technological tools, is critical for optimizing resources, enhancing environmental protection, and fostering economic and social development, thereby providing clear guidance for implementing IS strategies in industrial settings.
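A minimal sketch of the DEMATEL computation behind the causal analysis above: the direct-influence matrix is normalized, the total-relation matrix $T = D(I - D)^{-1}$ is derived, and links above the threshold α are retained. The 4x4 matrix is an illustrative stand-in for the aggregated expert judgments.

```python
import numpy as np

# Direct-influence matrix over (DMD, GLD, SD, LMD); illustrative scores 0-4.
X = np.array([[0, 3, 2, 2],   # DMD
              [2, 0, 3, 2],   # GLD
              [1, 1, 0, 3],   # SD
              [1, 1, 2, 0]])  # LMD

D = X / X.sum(axis=1).max()                     # normalize by the largest row sum
T = D @ np.linalg.inv(np.eye(4) - D)            # total-relation matrix

r, c = T.sum(axis=1), T.sum(axis=0)
print("prominence (r+c):", np.round(r + c, 2))  # overall importance
print("relation   (r-c):", np.round(r - c, 2))  # > 0 marks a causal dimension
alpha = 0.58                                    # threshold value from the study
print("dominant links:\n", (T > alpha).astype(int))
```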
Recent literature has explored the nexus between macroeconomic policy uncertainty (MPU) and the environment in the context of the Sustainable Development Goals (SDGs). This study contributes to the literature by exploring the possible positive and negative environmental effects of MPU. It reviewed 117 research articles published from 2020 to 2025 to understand the multifaceted association between MPU and environmental sustainability, considering sectoral and spatial dynamics, asymmetric effects, and heterogeneous responses across countries and regions. The findings suggested that the relationship is complex and varies with the economic sector, emissions source, policy regime, and geographical location. MPU slowed the transition from the first to the second phase of the Environmental Kuznets Curve (EKC). In the short run, MPU can reduce emissions through temporary economic slowdowns; in the long run, however, it can degrade environmental performance by delaying green investments, increasing fossil fuel reliance, and weakening institutional effectiveness. Sectoral analyses revealed that MPU raised emissions in the energy and industrial sectors and reduced them in the agricultural sector. While strong institutional quality helped mitigate emissions, weak institutions aggravated environmental problems. This review suggests that policymakers should design adaptive, sector-sensitive, and regionally coordinated environmental strategies to protect the environment from macroeconomic policy volatility.
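For readers unfamiliar with the EKC mechanics referenced above, a typical reduced-form specification augmented with an MPU term is sketched below; the functional form and variable names are illustrative, not drawn from the reviewed studies.

```latex
% Illustrative EKC regression with an MPU interaction (assumed form):
\begin{equation}
  \ln E_{it} = \beta_0 + \beta_1 \ln Y_{it} + \beta_2 (\ln Y_{it})^2
             + \beta_3\, \mathrm{MPU}_{it}
             + \beta_4\, (\mathrm{MPU}_{it} \times \ln Y_{it})
             + \gamma' Z_{it} + \mu_i + \varepsilon_{it}
\end{equation}
% E: emissions; Y: income per capita; Z: controls; mu_i: country effects.
% beta_1 > 0 with beta_2 < 0 traces the inverted U; a positive beta_4 flattens
% the downturn, one channel by which MPU can slow the phase-one-to-phase-two
% transition.
```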
In recent years, humanitarian logistics has received much attention from practitioners and researchers because of the significant damage caused by natural disasters worldwide. This case study investigated the potential of leveraging social media data to enhance the effectiveness of humanitarian logistics in Vietnam after the disaster caused by Typhoon Yagi. The research examined public sentiment about the disaster response efforts, pinpointed critical relief needs, and assessed the performance of various machine learning models in classifying disaster-related social media content. Data were sourced from multiple platforms, preprocessed, and then categorized by damage type, required relief supplies, and sentiment label. Machine learning models were then applied to analyze the negative impacts of the disaster. The analysis revealed that housing and transportation were the primary sources of negative public sentiment, indicating significant unmet needs in these areas. In contrast, responses relating to cash assistance, food, and medical support were generally more positive. A comparative evaluation of 12 machine learning models showed that conventional algorithms such as Random Forest, Support Vector Machine, and Logistic Regression outperformed deep learning models on sentiment classification tasks. These findings shed light on the value of social media as a real-time indicator of public perception and logistical effectiveness. Incorporating sentiment analysis into disaster response planning can therefore support more adaptive, timely, and community-informed decision-making by governments and humanitarian organizations.
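A minimal sketch of the classical-model comparison follows: TF-IDF features feed Random Forest, SVM, and Logistic Regression classifiers under cross-validation. The texts and labels are tiny illustrative stand-ins for the study's labeled social media corpus.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["roads flooded, no way to reach the shelter",
         "grateful for the quick cash assistance",
         "house roof gone, still waiting for tarpaulins",
         "medical team arrived fast, thank you"] * 50
labels = [0, 1, 0, 1] * 50            # 0 = negative, 1 = positive sentiment

for name, clf in [("RF",  RandomForestClassifier(random_state=0)),
                  ("SVM", LinearSVC()),
                  ("LR",  LogisticRegression(max_iter=1000))]:
    pipe = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
    acc = cross_val_score(pipe, texts, labels, cv=5).mean()  # 5-fold accuracy
    print(f"{name}: {acc:.2f}")
```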