Open Access
Research article

Evaluating Online Learning Adaptability in Students Using Machine Learning-Based Techniques: A Novel Analytical Approach

A. B. Feroz Khan 1*,
Saleem Raja Abdul Samad 2
1 Department of Computer Science, Syed Hameedha Arts and Science College, 623806 Kilakari, India
2 IT Department, University of Technology and Applied Sciences-Shinas, 324 Shinas, Sultanate of Oman
Education Science and Management | Volume 2, Issue 1, 2024 | Pages 25-34
Received: 12-31-2023, Revised: 02-19-2024, Accepted: 03-04-2024, Available online: 03-12-2024

Abstract:

The widespread adoption of Learning Management Systems (LMSs) in educational contexts, underscored by their critical role in facilitating cloud-based training across diverse settings, serves as the foundation of this investigation. In the era of increasing integration of technology within higher education, a notable reduction in the costs associated with the creation of online content has been observed. The shift towards remote learning, precipitated by the COVID-19 pandemic, has highlighted the indispensable nature of LMSs in the delivery of specialized content, the application of varied pedagogical strategies, and the promotion of student engagement. Adaptability, defined as the ability to adjust behavior, cognition, and emotional responses in the face of new circumstances, has been recognized as a key factor in the success of online learning. This study employs sophisticated Machine Learning Techniques (MLTs) to explore the determinants of student adaptability, introducing the novel framework of Online Learner Adaptability Assessment using MLTs (OLAMLTs). Through the analysis of comprehensive datasets, which include indicators of student behavior, performance, and engagement within online platforms, MLTs facilitate the identification of patterns and correlations pertinent to adaptability. The OLAMLTs framework applies a retrospective analysis to variables such as technological proficiency, motivation, and self-regulatory capabilities, enabling the provision of customized recommendations for educators. By facilitating targeted educational interventions, the study seeks to address the disparity between the need for adaptable learners and the availability of tools designed to foster this critical attribute. The ultimate aim is to augment the resilience and efficacy of online learning platforms in anticipation of future disruptions, including pandemics or other unforeseen challenges. This research contributes to the ongoing efforts to develop a more adaptive and resilient online learning landscape, marking a significant advancement in the fields of educational technology and pedagogy.
Keywords: Machine learning, Learning Management Systems (LMSs), Natural language processing, Online learning adaptability

1. Introduction

The integration of technology into educational settings has revolutionized the way students learn and instructors teach. Among the myriad technological advancements in education, LMSs have emerged as indispensable tools for facilitating online learning in both local and remote settings. LMSs provide a centralized platform for course materials, assessments, communication, and collaboration, streamlining the educational process and extending access to learning opportunities beyond the confines of traditional classrooms. However, alongside the numerous benefits of technology integration in higher education, significant challenges also arise. Socioeconomic disparities in access to technology and reliable internet connectivity may exacerbate inequalities in educational opportunities, creating a digital divide among students. Moreover, concerns regarding privacy, security, and data protection in online learning environments necessitate careful consideration and the implementation of robust security measures. Despite these challenges, the potential of technology to enhance teaching and learning in higher education cannot be overstated. By exploring the specific advantages and challenges of technology integration, educators and policymakers can develop strategies to maximize the benefits of digital tools while addressing potential barriers to access and engagement.

The widespread adoption of LMSs in educational institutions has been driven by several factors. First and foremost is the increasing demand for flexible and accessible education (Akçapınar et al., 2019). With the advent of digital learning resources, students can now engage with course materials anytime, anywhere, and at their own pace (Alakrash & Abdul Razak, 2021). This flexibility is particularly valuable for non-traditional learners, such as working professionals, adult learners, and individuals with familial or caregiving responsibilities, who may struggle to attend traditional classes due to time constraints or geographical barriers.

Furthermore, the affordability and scalability of LMSs have made them attractive options for educational institutions seeking to expand their online offerings. Compared to traditional brick-and-mortar infrastructure, implementing and maintaining an LMS requires minimal upfront investment and can accommodate a large number of users simultaneously. This scalability is crucial for accommodating the diverse needs and preferences of today’s learners, who often come from different backgrounds, possess varying levels of prior knowledge, and prefer different learning styles (Alruwais & Zakariah, 2023; Brown et al., 2020).

The COVID-19 pandemic served as a catalyst for the widespread adoption of online learning technologies, including LMSs. In response to lockdowns and social distancing measures, educational institutions around the world were forced to transition rapidly to remote instruction to ensure the continuity of learning (Costa-Mendes et al., 2021). Overnight, LMSs became essential tools for delivering course materials, facilitating communication between students and instructors, and administering assessments in a virtual environment. The pandemic highlighted the importance of digital literacy and technological proficiency for both educators and students, as well as the need for robust infrastructure and support systems to sustain online learning initiatives.

Amidst the rapid transition to remote learning, adaptability emerged as a critical determinant of online learning success. Defined as the ability to manage behavior, thoughts, and emotions in response to novel situations, adaptability became increasingly important as students and educators grappled with the challenges of remote instruction. Students who were able to adapt quickly to the new learning environment, navigate unfamiliar technology, and maintain motivation and focus in the face of distractions were more likely to succeed academically. Conversely, students who struggled to adapt to the demands of online learning may have experienced frustration, anxiety, and disengagement, leading to lower academic performance and retention rates (Hoffait & Schyns, 2017).

In response to the growing recognition of adaptability’s importance in online learning, this research paper proposes to investigate the factors influencing students’ adaptability within the context of LMSs. Leveraging advanced MLTs, this study aims to develop the novel OLAMLTs framework to retrospectively analyze student behavior, performance, and engagement within online learning platforms. By uncovering patterns and correlations related to adaptability, OLAMLTs provides personalized recommendations to educators to support students in developing and enhancing their adaptability skills.

Through targeted educational interventions informed by MLT-driven insights, this study seeks to bridge the gap between the demand for adaptable learners and the tools required to nurture this essential trait. Ultimately, this study aims to:

1. Identify the key factors influencing students’ adaptability within the context of LMSs.

2. Develop the OLAMLTs to retrospectively analyze student behavior, performance, and engagement within LMSs.

3. Provide personalized recommendations to educators based on OLAMLTs insights to support students in developing and enhancing their adaptability skills.

4. Improve the overall resilience and effectiveness of online learning in the face of future disruptions, including pandemics or unforeseen challenges, by advancing the understanding of adaptability and developing practical strategies for enhancing adaptability skills.

2. Literature Review

The integration of LMSs into educational settings has transformed the landscape of online learning, offering centralized platforms for content delivery, communication, and assessment. Studies have extensively documented the benefits of LMSs in enhancing student engagement, facilitating collaboration, and improving learning outcomes (Ennen et al., 2015; Gagliardi & Turk, 2017; Garrigan et al., 2018; Hellas et al., 2018). Additionally, the COVID-19 pandemic accelerated the adoption of remote instruction, highlighting the critical role of LMSs in ensuring continuity of learning during times of crisis (Ennen et al., 2015; Holden et al., 2021; Lara et al., 2014; Shorfuzzaman et al., 2019). Amidst this rapid transition, adaptability emerged as a key determinant of online learning success, encompassing students’ ability to navigate technological challenges, manage time effectively, and adjust to changes in learning modalities (Murad et al., 2018; Pallathadka et al., 2022; Rebai et al., 2020; Rogerson, 2022). While previous research has recognized the importance of adaptability in online learning, assessing and fostering adaptability skills remains a challenge. Traditional assessment methods often lack objectivity and fail to capture the dynamic nature of adaptability (Sansone, 2019). In response, this study proposes to leverage MLTs to develop a novel OLAMLTs framework to analyze student behavior, performance, and engagement within LMSs (Vermeiren et al., 2022). By uncovering patterns and correlations related to adaptability, OLAMLTs aims to provide personalized recommendations to educators to support students in developing and enhancing their adaptability skills. This approach builds on existing literature while addressing limitations such as subjective assessment methods and a lack of personalized interventions (Vermeiren et al., 2022), ultimately contributing to the advancement of research in online learning and educational technology.

The study by Vermeiren et al. (2022) contributes to understanding students’ self-efficacy in role-play simulations of political decision-making. By considering both student characteristics and simulation features, the study offers insights into how individuals perceive their abilities to negotiate in simulated scenarios. However, one potential critique could be the limited scope of the study, as it focuses specifically on political decision-making simulations, which may limit the generalizability of the findings to other contexts. Future research could explore the transferability of self-efficacy skills across different simulation scenarios and examine the effectiveness of interventions aimed at enhancing students’ negotiation abilities.

Yağcı (2022) described educational data mining and predicted students’ academic performance using machine learning algorithms, addressing a crucial aspect of educational assessment. By leveraging MLTs, the study aims to identify patterns and predictors of academic success. However, one potential limitation is the reliance on historical data, which may not capture the dynamic nature of student learning. Future research could explore real-time data analytics and adaptive learning systems to provide personalized interventions and support for students at risk of academic underachievement.

Brown et al. (2020) proposed a conceptual framework for enhancing student online learning and engagement in higher education, presenting innovative strategies for improving student outcomes in digital learning environments. The framework emphasizes the importance of fostering active learning, collaboration, and student-centered approaches. However, one potential critique could be the need for empirical validation of the framework’s effectiveness in diverse educational contexts. Future research could involve implementing and evaluating the framework in different institutional settings to assess its applicability and impact on student engagement and learning outcomes.

Pallathadka et al. (2022) investigated the impact of artificial intelligence (AI) on predicting student performance, addressing the growing interest in leveraging advanced technologies in education. By analyzing the potential of AI-driven predictive models, the study offers insights into improving educational outcomes through data-driven interventions. However, ethical considerations regarding data privacy and algorithmic bias should be carefully addressed to ensure the responsible use of AI in education. Future research could explore interdisciplinary collaborations between educators, technologists, and policymakers to develop ethical guidelines and best practices for AI integration in educational settings.

Pabba & Kumar (2022) proposed an intelligent system for monitoring student engagement through facial expression recognition, highlighting the potential of emerging technologies in educational assessment. The system offers a novel approach to capturing student engagement cues in large classroom settings. However, ethical concerns related to privacy, consent, and potential biases in facial recognition algorithms warrant careful consideration. Future research could focus on addressing these ethical challenges and exploring alternative methods for assessing student engagement that prioritize student autonomy and well-being.

Zhan et al. (2022) described students’ engagement in mobile technology-supported science learning, contributing to understanding the role of digital tools in promoting active learning experiences. The study provides insights into how mobile technology can facilitate interactive and participatory learning activities in science education. However, one potential limitation is the need to ensure equitable access to digital resources, particularly for students from underserved communities. Future research could explore strategies for mitigating the digital divide and promoting inclusive practices in technology-enhanced learning environments.

However, while existing studies have laid the groundwork for understanding the importance of adaptability and the role of LMSs in online learning, several limitations remain. Firstly, many studies have relied on self-report measures and surveys to assess adaptability, which may be subject to biases and inaccuracies (Taglietti et al., 2021). Additionally, the use of traditional statistical methods may not fully capture the complexity and dynamics of adaptability in online learning environments (Susilawati et al., 2022). Furthermore, existing approaches to assessing adaptability often lack personalization, providing generic recommendations that may not address the unique needs and challenges faced by individual students (Sciarrone, 2018). Finally, while MLTs show promise in analyzing large datasets and uncovering patterns related to adaptability, their implementation in educational settings may pose challenges related to data privacy, algorithmic bias, and interpretability (Vandamme et al., 2007; Xu et al., 2019; Yağcı, 2022).

To address these limitations, this study proposes a novel approach that combines MLTs with fine-grained data from LMSs to develop a personalized adaptability assessment tool. By leveraging advanced MLTs, such as neural networks and natural language processing (NLP) algorithms, OLAMLTs aims to analyze various aspects of student behavior, performance, and engagement within LMSs to identify patterns and correlations related to adaptability. Furthermore, OLAMLTs incorporates feedback mechanisms to iteratively refine its recommendations based on individual student responses and interactions. This personalized approach enables educators to tailor interventions and support mechanisms to meet the specific needs of each student, ultimately enhancing their adaptability skills and improving their overall learning outcomes. Moreover, by integrating principles of transparency, fairness, and accountability into its design and implementation, OLAMLTs addresses concerns related to data privacy, algorithmic bias, and interpretability, ensuring ethical and responsible use of machine learning in educational settings.

3. Proposed Approach

The proposed approach, undertaken to develop the OLAMLTs for analyzing the factors that influence students’ adaptability within LMSs, progressed through several stages. This comprehensive endeavor involved the collection of a vast dataset from diverse LMSs utilized by educational institutions. This dataset encompassed a multitude of data sources, including but not limited to student interaction logs, assessment results, demographic information, and self-reported adaptability measures. Moreover, contextual data such as course materials, instructor feedback, and course schedules were meticulously gathered to offer a holistic perspective of the learning environment. To ensure the utmost ethical standards were maintained throughout the data collection process, the research team obtained approval from the institutional review board (IRB), adhered strictly to data privacy protocols, and ensured compliance with established ethical guidelines.

Upon completion of the data collection phase, rigorous preprocessing procedures were initiated to clean, transform, and prepare the collected dataset for subsequent analysis. This preprocessing stage was critical to ensuring the reliability and accuracy of the data. It involved several intricate steps, including the handling of missing values, the removal of duplicates, the standardization of formats, and the encoding of categorical variables. Additionally, advanced NLP techniques were employed to process textual data, such as student feedback and discussion forum posts, in order to extract relevant features and sentiments. Furthermore, outlier detection and normalization techniques were applied to guarantee the consistency and quality of the dataset, thereby laying a robust foundation for subsequent analyses. Figure 1 shows the overall process of the proposed approach.

Figure 1. Model architecture

Subsequent to the preprocessing phase, the research progressed to the pivotal stage of feature engineering. This stage aimed to extract meaningful insights from the dataset by engineering features that capture various aspects of student behavior, performance, engagement, and demographics. By crafting a comprehensive set of features, the research team sought to elucidate the multifaceted nature of adaptability among students. Features were crafted to encapsulate diverse dimensions of adaptability, including login frequency, time allocation across different learning activities, performance on quizzes and assessments, participation in discussions, and self-reported adaptability measures. Moreover, contextual features such as course difficulty, instructor feedback, and course workload were strategically incorporated to provide contextual insights into student behavior and performance.
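
To make this step concrete, the following minimal sketch derives a few of the features named above (login frequency, quiz performance, discussion participation, and engagement spread) from a raw LMS interaction log. The table layout and column names (student_id, timestamp, activity, score) are illustrative assumptions, not the schema used in the study.

```python
import pandas as pd

# Hypothetical LMS interaction log; the columns are illustrative only.
logs = pd.DataFrame({
    "student_id": [1, 1, 1, 2, 2],
    "timestamp": pd.to_datetime([
        "2024-01-02 09:00", "2024-01-03 10:30", "2024-01-05 14:00",
        "2024-01-02 11:00", "2024-01-09 16:45",
    ]),
    "activity": ["login", "quiz", "forum_post", "login", "quiz"],
    "score": [None, 78.0, None, None, 85.0],
})

# Aggregate per-student behavioral features from the raw event stream.
features = logs.groupby("student_id").agg(
    login_count=("activity", lambda a: (a == "login").sum()),       # login frequency
    mean_quiz_score=("score", "mean"),                              # quiz performance
    forum_posts=("activity", lambda a: (a == "forum_post").sum()),  # participation
    active_days=("timestamp", lambda t: t.dt.date.nunique()),       # engagement spread
)
print(features)
```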

Having meticulously engineered a comprehensive set of features, the research advanced to the stage of model development, which constituted a pivotal component of the proposed approach. In this phase, a plethora of MLTs were systematically employed to develop predictive models aimed at assessing students’ adaptability and identifying the underlying factors influencing it. Supervised learning algorithms, including random forests, gradient boosting machines, support vector machines (SVMs), and logistic regression, were trained on labeled data to predict adaptability scores or classify students into distinct adaptability categories. Ensemble methods and deep learning techniques were also explored and evaluated to enhance model performance and robustness. Moreover, unsupervised learning techniques, such as clustering and dimensionality reduction, were harnessed to elucidate patterns and relationships within the dataset, thereby offering invaluable insights into the underlying dynamics of student adaptability.
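
A minimal sketch of this model-development step is given below, training three of the named classifiers. The feature matrix, labels, and hyperparameters are synthetic stand-ins, not the study’s actual configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic features and binary adaptability labels (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Fit each candidate classifier and report held-out accuracy.
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm": SVC(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, model.score(X_test, y_test))
```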

Upon the completion of model development, the research transitioned to the critical stage of evaluation and validation, where the efficacy and robustness of the developed models were rigorously assessed. This phase involved the thorough evaluation of the developed models using a range of performance metrics, including accuracy, precision, recall, F1-score, and the area under the receiver operating characteristic curve (AUC-ROC). To ascertain the generalization capabilities of the models and mitigate the risk of overfitting, K-fold cross-validation and stratified sampling techniques were employed. Moreover, model interpretability techniques, such as feature importance analysis and SHapley Additive exPlanations (SHAP) values, were systematically leveraged to gain insights into the factors driving adaptability predictions. Finally, the developed models underwent comprehensive validation using holdout datasets or were deployed in real-world educational settings for further validation and refinement.
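
The sketch below illustrates how stratified K-fold cross-validation with the metrics listed above might be wired together in scikit-learn; the data are synthetic, as in the previous sketch, and this is not the study’s actual evaluation pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate

# Synthetic stand-in data (same construction as the previous sketch).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Stratified 5-fold CV preserves the class balance in every fold,
# guarding against optimistic estimates from lucky splits.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_validate(
    RandomForestClassifier(random_state=0), X, y, cv=cv,
    scoring=["accuracy", "precision", "recall", "f1", "roc_auc"],
)
for metric in ["accuracy", "precision", "recall", "f1", "roc_auc"]:
    print(f"{metric:9s} {scores['test_' + metric].mean():.3f}")

# For interpretability, RandomForestClassifier exposes feature_importances_;
# SHAP values would require the separate `shap` package.
```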

3.1 Utilization of Preprocessing Techniques

The preprocessing of the data was systematically approached to refine the collected dataset, ensuring it was primed for in-depth analysis. This section delineates the methodologies employed during this phase.

(a) Handling missing values

In this research, missing values were addressed using robust imputation techniques. For numerical variables, missing values were replaced with the mean or median of the respective feature to preserve the distribution of the data. Categorical variables were imputed with the mode or a placeholder value denoting missingness. Imputation was performed strategically to minimize bias and ensure the integrity of the dataset.
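
As a brief sketch of the imputation strategy described here (median for numerical features, mode for categorical ones), with hypothetical column names:

```python
import pandas as pd
from sklearn.impute import SimpleImputer

# Toy dataset; column names are illustrative, not the study's schema.
df = pd.DataFrame({
    "quiz_score": [78.0, None, 85.0, 90.0],          # numerical feature
    "course_type": ["stem", None, "arts", "stem"],   # categorical feature
})

# Numerical: replace missing values with the median to preserve the distribution.
df["quiz_score"] = SimpleImputer(strategy="median").fit_transform(df[["quiz_score"]]).ravel()
# Categorical: replace missing values with the mode (most frequent category).
df["course_type"] = df["course_type"].fillna(df["course_type"].mode()[0])
print(df)
```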

(b) Removal of duplicates

Duplicate entries were systematically identified and removed to prevent redundancy and maintain dataset integrity. This process involved comparing records across all variables and retaining only unique instances. By eliminating duplicates, the research team ensured that each observation in the dataset was distinct and contributed uniquely to the analysis.

(c) Standardization of formats

To facilitate seamless integration and comparison of variables, data formats were standardized across the dataset. This involved converting variables into a common format or scale, such as standardizing date formats or converting units of measurement. Standardization ensured consistency and compatibility across different data fields, enabling accurate analysis and interpretation.
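
A small illustration of format standardization follows, assuming mixed date strings and a minutes-based time feature; both columns are hypothetical.

```python
import pandas as pd

df = pd.DataFrame({
    "enrolled_on": ["2024-01-05", "15 Feb 2024"],  # inconsistent date formats
    "study_time_min": [90, 120],                   # time recorded in minutes
})

# Parse each heterogeneous date string into one canonical datetime type.
df["enrolled_on"] = df["enrolled_on"].apply(pd.to_datetime)
# Convert units so all time-on-task features share a common scale (hours).
df["study_time_hr"] = df["study_time_min"] / 60.0
print(df)
```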

(d) Encoding categorical variables

Categorical variables, such as gender or course type, were encoded into numerical representations to make them compatible with machine learning algorithms. One-hot encoding was employed for variables with multiple categories, creating binary indicator variables for each category. For example, if the "gender" variable had the categories "male" and "female," it would be transformed into two binary variables: "is_male" and "is_female." Label encoding was utilized for ordinal variables, assigning numerical labels to categories based on their order. For instance, if a variable had the categories "low," "medium," and "high," they would be encoded as 1, 2, and 3, respectively.
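
Both encodings can be sketched on the same gender and low/medium/high examples used above (toy data only):

```python
import pandas as pd

df = pd.DataFrame({
    "gender": ["male", "female", "female"],     # nominal variable
    "effort_level": ["low", "high", "medium"],  # ordinal variable
})

# One-hot encoding: one binary indicator per category (is_male, is_female).
df = pd.get_dummies(df, columns=["gender"], prefix="is")
# Label encoding for ordinal categories, preserving their natural order.
df["effort_level"] = df["effort_level"].map({"low": 1, "medium": 2, "high": 3})
print(df)
```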

(e) Application of NLP techniques

Textual data, including student feedback and discussion forum posts, was processed using advanced NLP techniques to extract relevant features and sentiments. Sentiment analysis algorithms were applied to assess the polarity and subjectivity of text, providing insights into student perceptions and attitudes. Topic modeling techniques, such as Latent Dirichlet Allocation (LDA), were used to identify latent themes and topics within text data. Keyword extraction algorithms were employed to identify important terms and phrases, enriching the dataset with valuable qualitative information.
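
To illustrate the topic-modeling portion of this step, the sketch below fits a small LDA model over a bag-of-words representation of invented forum posts; sentiment scoring is indicated only in a comment, since it would rely on an external tool such as TextBlob.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Invented forum posts standing in for real student text.
posts = [
    "The quizzes were confusing and the deadlines felt stressful.",
    "I enjoyed the interactive videos and the clear instructor feedback.",
    "Weekly group discussions helped me stay motivated and engaged.",
]

# Bag-of-words counts, then LDA to surface latent discussion topics.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_mix = lda.fit_transform(counts)
print(topic_mix)  # per-post topic proportions

# Polarity/subjectivity scoring would typically come from a sentiment library,
# e.g., TextBlob(post).sentiment -> (polarity, subjectivity).
```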

(f) Outlier detection and normalization

Outliers, or data points that deviated significantly from the rest of the data, were identified and mitigated using robust outlier detection techniques. Z-score analysis and the interquartile range (IQR) method were employed to detect outliers based on their deviation from the mean or median of the data distribution. For example, if a student’s time spent on a task was significantly higher or lower than the average, it could be flagged as an outlier. Outliers were either removed from the dataset or treated using appropriate transformation techniques to mitigate their impact on the analysis results. Normalization techniques, such as min-max scaling or z-score normalization, were applied to rescale numerical variables to a common range. This ensured that all variables contributed equally to the analysis and prevented the dominance of certain variables due to differences in scale. For instance, if one variable had a range of 0-100 and another had a range of 0-1000, normalization would scale both variables to a comparable range, improving the stability and convergence of machine learning algorithms.
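
The sketch below combines the IQR outlier rule with the two normalization schemes mentioned above, applied to a small invented time-on-task sample:

```python
import numpy as np

# Minutes spent on a task; the last value is a deliberate extreme outlier.
time_on_task = np.array([32.0, 41.0, 38.0, 36.0, 44.0, 240.0])

# IQR rule: keep points inside [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
q1, q3 = np.percentile(time_on_task, [25, 75])
iqr = q3 - q1
inliers = time_on_task[(time_on_task >= q1 - 1.5 * iqr) &
                       (time_on_task <= q3 + 1.5 * iqr)]

# Min-max scaling maps the cleaned values onto a common [0, 1] range.
min_max = (inliers - inliers.min()) / (inliers.max() - inliers.min())
# Z-score normalization rescales to zero mean and unit variance.
z_scores = (inliers - inliers.mean()) / inliers.std()
print(inliers, min_max.round(2), z_scores.round(2))
```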

These preprocessing techniques were seamlessly integrated into the data preprocessing pipeline, which formed the initial stage of the research workflow. Each technique was applied sequentially, with careful consideration given to the specific characteristics of the dataset and the requirements of subsequent analyses. By systematically addressing data quality issues and enhancing the compatibility of variables, these preprocessing techniques laid a solid foundation for the subsequent stages of feature engineering, model development, and evaluation.

4. Results

The results section presents the outcomes of the research, detailing the findings and insights gained from the conducted analyses. This section is crucial, as it demonstrates the effectiveness of the proposed methodology and provides valuable information to address the research objectives. The results are presented in a structured manner, starting with descriptive statistics and visualizations, followed by inferential analyses, and concluding with discussions on the implications of the findings.

4.1 Descriptive Statistics and Visualizations

Descriptive statistics offer a comprehensive overview of the dataset, summarizing key characteristics and distributions of variables. Measures such as mean, median, standard deviation, and range provide insights into central tendency, variability, and dispersion within the data. Additionally, visualizations such as histograms, box plots, and scatter plots are used to illustrate patterns, trends, and relationships among variables. Figure 2 illustrates the distribution of quiz scores across all courses.

Figure 2. Histogram of quiz scores
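
A minimal sketch of how such a summary and histogram could be produced, using synthetic quiz scores in place of the study’s data:

```python
import matplotlib.pyplot as plt
import numpy as np

# Synthetic quiz scores standing in for the real dataset.
rng = np.random.default_rng(1)
quiz_scores = rng.normal(loc=79, scale=8, size=300).clip(0, 100)

# Descriptive statistics: central tendency, variability, dispersion.
print(f"mean={quiz_scores.mean():.1f}  median={np.median(quiz_scores):.1f}  "
      f"sd={quiz_scores.std():.1f}  range=({quiz_scores.min():.1f}, {quiz_scores.max():.1f})")

plt.hist(quiz_scores, bins=20, edgecolor="black")
plt.xlabel("Quiz score")
plt.ylabel("Number of students")
plt.title("Distribution of quiz scores")
plt.show()
```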

For instance, the mean quiz scores across different courses are presented in Table 1.

Table 1. Courses and scores

Course      Mean Quiz Score
Course A    79.2
Course B    76.5
Course C    81.8

Histograms and box plots visually depicted the distribution of quiz scores, highlighting any skewness or outliers present in the data.

Furthermore, scatter plots were employed to explore the relationship between quiz scores and other variables, such as time spent on learning activities or self-reported adaptability measures.

4.2 Inferential Analyses

Inferential analyses involve testing hypotheses and making inferences about population parameters based on sample data. Statistical tests such as t-tests, analysis of variance (ANOVA), chi-square tests, and regression analyses are utilized to assess the significance of relationships and differences among variables.

For example, a series of regression analyses were conducted to examine the predictors of student adaptability. Results indicated that factors such as technological proficiency, motivation, and self-regulation significantly influenced adaptability scores (p<0.05). The regression results are presented in Table 2.

Table 2. Regression results

Predictor                    Beta Coefficient    p-value
Technological proficiency    0.321               <0.001
Motivation                   0.257               <0.001
Self-regulation              0.183               0.004
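
A comparable regression analysis can be sketched with statsmodels on synthetic data whose effect sizes loosely echo Table 2; the variable names, coefficients, and sample are illustrative assumptions, not the study’s data.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic predictors and an adaptability outcome with effects
# loosely mirroring Table 2 (illustrative only).
rng = np.random.default_rng(2)
n = 200
tech, motivation, self_reg = rng.normal(size=(3, n))
adaptability = (0.32 * tech + 0.26 * motivation + 0.18 * self_reg
                + rng.normal(scale=0.8, size=n))

# Ordinary least squares with an intercept term.
X = sm.add_constant(np.column_stack([tech, motivation, self_reg]))
results = sm.OLS(adaptability, X).fit()
print(results.summary(xname=["const", "tech_proficiency",
                             "motivation", "self_regulation"]))
```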

Additionally, chi-square tests were employed to investigate the association between categorical variables, such as course type and adaptability levels.

5. Performance Comparison

Performance comparison involves evaluating the effectiveness of different models or interventions based on predefined metrics or criteria. In this study, the performance of different machine learning algorithms in predicting student adaptability was compared using metrics such as accuracy, precision, recall, and F1-score.
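
As an illustration, the four metrics can be computed from predicted and true adaptability classes as follows (toy labels, not the study’s predictions):

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Toy ground-truth vs. predicted adaptability classes (1 = high adaptability).
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))
```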

Table 3 presents the performance metrics of three machine learning algorithms, namely, logistic regression, random forest, and SVM.

Table 3. Performance comparison

Algorithm              Accuracy    Precision    Recall    F1-score
Logistic regression    0.85        0.87         0.83      0.85
Random forest          0.88        0.89         0.87      0.88
SVM                    0.82        0.84         0.80      0.82

The results indicate that random forest outperformed logistic regression and SVM in terms of accuracy, precision, recall, and F1-score. However, further analyses are needed to determine the generalizability of these findings and the robustness of the models across different datasets and contexts. Figure 3 illustrates the performance metrics of the proposed approach against different algorithms.

Figure 3. Performance comparison

5.1 Discussion of Findings

The discussion of findings involves interpreting the results in the context of the research objectives and existing literature. It aims to provide insights into the implications of the findings, their significance, and potential avenues for further research.

The results of the study confirmed the importance of LMSs in facilitating online learning and enhancing student adaptability. The significant predictors identified through regression analyses align with previous research highlighting the role of technological proficiency, motivation, and self-regulation in online learning success. Moreover, the findings underscore the need for personalized interventions to support students in developing adaptability skills and navigating online learning environments effectively.

The identified patterns and correlations offer valuable insights for educators and policymakers to design targeted interventions and instructional strategies that promote adaptability and resilience among students. For instance, incorporating gamification elements or personalized learning pathways within LMSs may enhance student engagement and motivation, consequently fostering adaptability in diverse learning contexts. The findings of this study hold significant implications for educational practice, particularly in the realm of online learning platforms. By analyzing factors influencing student adaptability and engagement, educators and platform developers can glean insights to enhance learning experiences. The study underscores the potential for personalized learning pathways tailored to individual student needs, fostering active engagement through interactive features and real-time feedback mechanisms. Moreover, there is a call for inclusive practices that accommodate diverse learner needs, as well as support for educator professional development to leverage technology effectively. From a technological standpoint, integrating machine learning algorithms into platform design can create intelligent systems that adapt to user preferences and provide personalized recommendations. However, ethical considerations surrounding data privacy and algorithmic bias must be carefully addressed to ensure responsible implementation. Overall, embracing these findings can lead to more effective and equitable online learning experiences for all students.

6. Conclusion

The research embarked on an exploration of LMSs and their pivotal role in online education, particularly during the transformative period brought about by the COVID-19 pandemic. By leveraging MLTs, the study delved into the intricate dynamics of student adaptability within online learning environments. Throughout the investigation, significant strides were made in understanding the multifaceted nature of adaptability and its determinants. The findings underscored the critical influence of factors such as technological proficiency, motivation, and self-regulation on student adaptability. By employing rigorous preprocessing techniques and advanced analytics, the research unearthed patterns and correlations that shed light on the complex interplay between student behavior, performance, and engagement. Furthermore, the study culminated in the development of the OLAMLTs, a novel assessment method aimed at evaluating and enhancing student adaptability. This innovative approach not only provides actionable insights for educators but also empowers students to navigate online learning environments with greater resilience and efficacy. Educators should personalize learning paths, continuously monitor student engagement, and integrate adaptive technologies, while policymakers should invest in digital infrastructure, support research initiatives, and prioritize data privacy. Future research directions include longitudinal studies to examine long-term effects, cross-cultural investigations to address diversity, and exploration of emerging technologies. By implementing these recommendations and advancing research, more inclusive and effective online learning environments can be created, which empower students to thrive in the digital age.

Data Availability

The data used to support the research findings are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References
Akçapınar, G., Altun, A., & Aşkar, P. (2019). Using learning analytics to develop early-warning system for at-risk students. Int. J. Educational Technol. Higher Educ., 16(1), 1–20. [Google Scholar] [Crossref]
Alakrash, H. M. & Abdul Razak, N. (2021). Technology-based language learning: Investigation of digital technology and digital literacy. Sustainability, 13(21), 12304. [Google Scholar] [Crossref]
Alruwais, N. & Zakariah, M. (2023). Evaluating student knowledge assessment using machine learning techniques. Sustainability, 15(7), 6229. [Google Scholar] [Crossref]
Brown, A., Lawrence, J., Basson, M., & Redmond, P. (2020). A conceptual framework to enhance student online learning and engagement in higher education. Higher Educ. Res. Dev., 41(2), 284–299. [Google Scholar] [Crossref]
Costa-Mendes, R., Oliveira, T., Castelli, M., & Cruz-Jesus, F. (2021). A machine learning approximation of the 2015 Portuguese high school student grades: A hybrid approach. Educ. Inf. Technol., 26(2), 1527–1547. [Google Scholar] [Crossref]
Ennen, N. L., Stark, E., & Lassiter, A. (2015). The importance of trust for satisfaction, motivation, and academic performance in student learning groups. Social Psychology Educ., 18, 615–633. [Google Scholar] [Crossref]
Gagliardi, J. S. & Turk, J. M. (2017). The data-enabled executive: Using analytics for student success and sustainability. Anal. Policy Observatory. [Google Scholar]
Garrigan, B., Adlam, A. L., & Langdon, P. E. (2018). Moral decision-making and moral development: Toward an integrative framework. Dev. Rev., 49, 80–100. [Google Scholar] [Crossref]
Hellas, A., Ihantola, P., Petersen, A., Ajanovski, V. V., Gutica, M., Hynninen, T., Knutas, A., Leinonen, J., Messom, C., & Liao, S. N. (2018). Predicting academic performance: A systematic literature review. In Proceedings Companion of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education (pp. 175–199). New York: ACM Digital Library. [Google Scholar] [Crossref]
Hoffait, A. S. & Schyns, M. (2017). Early detection of university students with potential difficulties. Decis. Support Syst., 101, 1–11. [Google Scholar] [Crossref]
Holden, O. L., Norris, M. E., & Kuhlmeier, V. A. (2021). Academic integrity in online assessment: A research review. Front. Educ., 6, 639814. [Google Scholar] [Crossref]
Lara, J. A., Lizcano, D., Martínez, M. A., Pazos, J., & Riera, T. (2014). A system for knowledge discovery in e-learning environments within the European Higher Education Area – Application to student data from Open University of Madrid, UDIMA. Comput. Educ., 72, 23–36. [Google Scholar] [Crossref]
Murad, D. F., Heryadi, Y., Wijanarko, B. D., Isa, S. M., & Budiharto, W. (2018). Recommendation system for smart LMS using machine learning: A literature review. In 2018 International Conference on Computing, Engineering, and Design (ICCED) (pp. 113–118). Bangkok: IEEE. [Google Scholar]
Pabba, C. & Kumar, P. (2022). An intelligent system for monitoring students’ engagement in large classroom teaching through facial expression recognition. Expert Syst., 39(1), e12839. [Google Scholar] [Crossref]
Pallathadka, H., Sonia, B., Sanchez, D. T., De Vera, J. V., Godinez, J. A. T., & Pepito, M. T. (2022). Investigating the impact of artificial intelligence in education sector by predicting student performance. Mater. Today Proc., 51, 2264–2267. [Google Scholar] [Crossref]
Rebai, S., Yahia, F. B., & Essid, H. (2020). A graphically based machine learning approach to predict secondary schools performance in Tunisia. Socio Econ. Plann. Sci., 70, 100724. [Google Scholar] [Crossref]
Rogerson, A. M. (2022). Technology-based assessment and academic integrity: Building capacity in academic staff. In Handbook of Digital Higher Education (pp. 299–309). Cheltenham: Edward Elgar Publishing. [Google Scholar] [Crossref]
Sansone, D. (2019). Beyond early warning indicators: High school dropout and machine learning. Oxford Bull. Econ. Stat., 81(2), 456–485. [Google Scholar] [Crossref]
Sciarrone, F. (2018). Machine learning and learning analytics: Integrating data with learning. In 2018 17th International Conference on Information Technology Based Higher Education and Training (ITHET) (pp. 1–5). Olhao: IEEE. [Google Scholar]
Shorfuzzaman, M., Hossain, M. S., Nazir, A., Muhammad, G., & Alamri, A. (2019). Harnessing the power of big data analytics in the cloud to support learning analytics in mobile learning environment. Comput. Hum. Behav., 92, 578–588. [Google Scholar] [Crossref]
Susilawati, E., Lubis, H., Kesuma, S., & Pratama, I. (2022). Antecedents of student character in higher education: The role of the Automated Short Essay Scoring (ASES) digital technology-based assessment model. Eurasian J. Educational Res., 98, 203–220. [Google Scholar]
Taglietti, D., Landri, P., & Grimaldi, E. (2021). The big acceleration in digital education in Italy: The COVID-19 pandemic and the blended-school form. Eur. Educational Res. J., 20(4), 423–441. [Google Scholar] [Crossref]
Vandamme, J. P., Meskens, N., & Superby, J. F. (2007). Predicting academic performance by data mining methods. Educ. Econ., 15(4), 405–419. [Google Scholar] [Crossref]
Vermeiren, S., Duchatelet, D., & Gijbels, D. (2022). Assessing students’ self-efficacy for negotiating during a role-play simulation of political decision-making. Stud. Educational Eval., 72, 101124. [Google Scholar] [Crossref]
Xu, X., Wang, J., Peng, H., & Wu, R. (2019). Prediction of academic performance associated with internet usage behaviors using machine learning algorithms. Comput. Hum. Behav., 98, 166–173. [Google Scholar] [Crossref]
Yağcı, M. (2022). Educational data mining: Prediction of students’ academic performance using machine learning algorithms. Smart Learn. Env., 9(1), 11. [Google Scholar] [Crossref]
Zhan, X., Sun, D., Wen, Y., Yang, Y., & Zhan, Y. (2022). Investigating students’ engagement in mobile technology-supported science learning through video-based classroom observation. J. Sci. Educ. Technol., 31(4), 514–527. [Google Scholar] [Crossref]

Cite this:
Feroz Khan, A. B., & Samad, S. R. A. (2024). Evaluating Online Learning Adaptability in Students Using Machine Learning-Based Techniques: A Novel Analytical Approach. Educ. Sci. Manag., 2(1), 25-34. https://doi.org/10.56578/esm020103
©2024 by the author(s). Published by Acadlore Publishing Services Limited, Hong Kong. This article is available for free download and can be reused and cited, provided that the original published version is credited, under the CC BY 4.0 license.