Exploring the Impact of ChatGPT on Mathematics Performance: The Influential Role of Student Interest
Abstract:
This investigation examines the influence of ChatGPT on mathematics achievement, with a specific focus on the moderating role of students’ interest in mathematics. A sample of 250 students, encompassing undergraduates pursuing a Bachelor of Science and postgraduates engaged in Master of Philosophy and Doctor of Philosophy programs in Mathematics Education at Akenten Appiah-Menka University of Skills Training and Entrepreneurial Development (AAMUSTED), Kumasi-Ghana, was selected through random sampling. Employing a quantitative methodology, data were collected via structured questionnaires and analyzed using AMOS software (Version 23) to test the hypothesized relationships. The findings revealed that student interest in mathematics significantly and positively correlates with the use of ChatGPT, as evidenced by a p-value of less than 1%. Conversely, ChatGPT’s direct influence on mathematics achievement was found to be negative and statistically significant, with a p-value of less than 1%. Furthermore, a direct, positive, and statistically significant relationship between students’ interest in mathematics and their achievement in the subject was observed, with a p-value of less than 1%. Notably, the study identified a statistically significant positive moderation effect of students’ interest on the association between ChatGPT usage and mathematics achievement, underscored by a p-value of less than 1%. The findings advocate for a cautious integration of ChatGPT in mathematics education, emphasizing that reliance on artificial intelligence should complement, not replace, traditional learning modalities. Additionally, it is suggested that future research might benefit from employing surveys or self-evaluation tools beyond questionnaires to gather data. This study contributes to the existing body of knowledge by highlighting the nuanced role of student interest in leveraging technology-enhanced learning tools for academic success in mathematics.

1. Introduction
Research in the field of mathematics education, as documented by Rameli & Kosnin (2016), has highlighted numerous challenges faced by students in their mathematical studies. Among these challenges, difficulties in grasping abstract mathematical concepts such as algebraic variables, functions, and geometric theorems have been prominently identified. The struggle with understanding and manipulating mathematical symbols and notation further exacerbates these conceptual challenges. In addition to conceptual difficulties, procedural challenges also pose significant obstacles for learners: difficulties with basic arithmetic operations, such as subtraction, multiplication, and division, have been shown to negatively impact students’ interest in mathematics. Furthermore, fear and anxiety associated with mathematics learning can significantly impede students’ capacity to learn and excel in mathematics. The perceptions students hold towards mathematics learning often result in diminished confidence and motivation, which in turn adversely affects their interest and performance in the subject.
In response to these challenges, the scholarly community has embarked on an exploration of remedies aimed at invigorating students’ interest and enhancing their performance in mathematics. Gómez-García et al. (2020) highlight the pivotal role of access to essential technology and learning resources in bolstering students’ mathematical interest and performance. Similarly, Pitsia et al. (2017) assert that motivation exerts a positive influence on students’ interest and their achievements in mathematics. Complementing these factors, Silinskas and Kikas (2019) underscore the significance of parental involvement in students’ mathematical education and the crucial impact of classroom management on students’ mathematical outcomes.
Extensive research has been conducted on the influence of ChatGPT on students’ interest in mathematics and their performance within the subject. The investigation by Dao & Le (2023) into the impact of ChatGPT on mathematical reasoning and problem-solving in the context of the Vietnamese National High School Graduate Examination revealed that ChatGPT holds potential as an effective instrument for the facilitation of mathematics education. Furthermore, the incorporation of ChatGPT into mathematics pedagogy has been significantly associated with an enhancement in students’ mathematical interest, as demonstrated by Supriyadi & Kuncoro (2023). This association is indicative of ChatGPT’s role in fostering personalized learning experiences, supporting blended learning environments, and advancing computational thinking, data literacy, and statistical competencies. Wardat et al. (2023) have posited that the efficacy of ChatGPT solutions in the realm of mathematics education predominantly hinges upon the complexity of the equations, the nature of the input data, and the specificity of the instructions provided to ChatGPT. Ali et al. (2023) further contended that interactive tools in mathematics education, such as ChatGPT, possess the capacity to significantly elevate students’ motivation towards the subject. In examining the contribution of ChatGPT to students’ interest in mathematics, Zafrullah et al. (2023) conducted a study on the impact of ChatGPT on students’ learning interest within mathematics education. Their findings disclosed that a substantial majority of students experienced a “very good” influence on their learning interest attributable to ChatGPT, with the overall “learning experience” rating at 81.29%, and the facets of “interest”, “enjoyment”, “self-efficacy”, and “learning goals” recording percentages of 81.04%, 78.62%, and 81.04%, respectively. These outcomes suggest a positive correlation between the utilization of ChatGPT and students’ interest in mathematics education.
Within the ambit of mathematics educational research, the examination of mathematics interest as a moderating variable in the nexus between ChatGPT utilization and students’ mathematical performance remains scant. This research endeavors to bridge this lacuna by evaluating the moderating influence of mathematics interest on the relationship between the adoption of ChatGPT as a technology-assisted learning tool and students’ performance in mathematics. ChatGPT, while serving as a linchpin for augmenting students’ interest in mathematics, is not posited as a replacement for conventional classroom instruction. It functions as an expansive online repository, offering unfettered access to educational resources across various domains, including Science, Technology, Engineering, and Mathematics (STEM), as well as in fields like management and accounting education, as elucidated by Nguyen et al. (2023).
Notwithstanding the beneficial impact of ChatGPT on mathematics learning, concerns have been raised regarding its potential to engender student disengagement from traditional educational paradigms, manifesting in diminished interest in classroom-based instruction and social interactions. While technology has the capacity to elevate students’ interest in mathematics, its integration should be part of a holistic educational strategy that encompasses practical exercises, group discussions, and dynamic interactions between teachers and students. The extent to which ChatGPT utilization fosters interest in mathematics is subject to variation, influenced by factors such as individual learning preferences, prior technological experiences, and intrinsic motivation. Ensuring ethical and appropriate use of ChatGPT by students is imperative.
The conceptual framework delineates the interrelationships among the variables under investigation. As depicted in Figure 1, the employment of ChatGPT in the domain of mathematics education is posited as the independent variable, exerting an influence on students’ mathematics performance. Mathematics interest is conceptualized as a moderating variable, mediating the relationship between the utilization of ChatGPT in mathematics education and students’ mathematics performance. Mathematics performance is thereby identified as the dependent variable, its outcomes predicted by the application of ChatGPT in mathematics learning.
The objectives of this research are articulated with the aim to:
- Assess the impact of ChatGPT utilization on students’ mathematics performance.
- Examine the influence of ChatGPT on students’ interest in mathematics.
- Evaluate the relationship between students’ interest in mathematics and their mathematics performance.
- Investigate the moderating role of mathematics interest in the relationship between ChatGPT utilization and students’ mathematics performance.
It is hypothesized (H0A) that the employment of ChatGPT within mathematics education does not exert a significant influence on students’ mathematics performance. Conversely, the alternative hypothesis (H1A) posits that a significant effect of ChatGPT on mathematics performance is observable.
The null hypothesis (H0B) suggests that ChatGPT’s implementation in the learning process has no significant impact on students’ interest in mathematics. The corresponding alternative hypothesis (H1B) contends that ChatGPT significantly influences students’ mathematics interest.
Regarding the effect of students’ mathematics interest on their performance, the null hypothesis (H0C) asserts that there is no significant impact. On the other hand, the alternative hypothesis (H1C) argues for a significant influence of students’ mathematics interest on their performance.
Finally, it is hypothesized (H0D) that the moderating role of students’ interest in mathematics insignificantly influences the relationship between ChatGPT usage and mathematics performance. The alternative hypothesis (H1D), however, proposes that students’ mathematics interest significantly moderates this connection.
2. Methodology
A purely quantitative research design was adopted, utilizing a structured questionnaire for data collection. Quantitative research methodologies are dedicated to the collection and analysis of numerical data. They are favored for their ability to provide precise, objective, and standardized measurements of variables. This approach is particularly advantageous when the objective is to quantify variables or evaluate the impact of specified factors on research outcomes.
The study population comprised all students engaged in mathematics education at AAMUSTED, Kumasi-Tanoso, Ghana, totaling 667 students. The selection of AAMUSTED students as the study population was informed by the prevalence of ChatGPT usage among these students for academic purposes. Additionally, the relevance of the study to the academic community at AAMUSTED is underscored by the involvement of a postgraduate student from the university as the corresponding author.
From the aggregate population of 667 students, a sample of 250 students was selected utilizing the Yamane & Sato (1967) formula for sample size determination. This formula is particularly beneficial in survey research, facilitating the determination of an adequate sample size to achieve desired levels of accuracy and confidence in the findings. The formula is defined as follows:

$n = \frac{N}{1 + N(e)^2}$

where n represents the sample size, N denotes the total population (667 in this case), and e signifies the margin of error, or level of precision (set at 0.05). Substituting these values yields n = 667 / (1 + 667 × 0.05²) ≈ 250.

The choice of e = 0.05 aligns with conventional practice in survey research and corresponds to a 95% confidence level; the same 0.05 threshold was adopted as the alpha level for statistical hypothesis testing, delineating the probability of incurring a Type I error, which occurs when a true null hypothesis is erroneously rejected.
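To make the calculation reproducible, a minimal sketch in Python is shown below; the function name is illustrative, and the values N = 667 and e = 0.05 are those reported above.

```python
# Minimal sketch of the Yamane sample-size formula n = N / (1 + N * e^2).
# N = 667 (population) and e = 0.05 (margin of error) are taken from the text;
# the function name is illustrative.
def yamane_sample_size(population: int, error: float = 0.05) -> int:
    return round(population / (1 + population * error ** 2))

print(yamane_sample_size(667))  # -> 250
```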
Demographic information regarding the study participants is summarized in Table 1. Among the sampled students, males constituted 76%, while females accounted for 23.2% of the sample. A marginal proportion of 0.8% opted not to disclose their gender. In terms of academic standing, first-year students formed the largest cohort, representing 36% of the sample, followed by third-year students at 20%, second-year students at 17.2%, fourth-year students at 16.8%, and graduate students comprising 10% of the sample.
| Demographics | Frequency | Percentage (%) |
|---|---|---|
| Gender | 250 | 100.0 |
| Male | 190 | 76.0 |
| Female | 58 | 23.2 |
| Prefer not to say | 2 | 0.8 |
| Level of education | 250 | 100.0 |
| 1st year | 90 | 36.0 |
| 2nd year | 43 | 17.2 |
| 3rd year | 50 | 20.0 |
| 4th year | 42 | 16.8 |
| Graduate student | 25 | 10.0 |
The selection process for participants in this study was conducted through a combination of purposive, stratified, and simple random sampling techniques. In purposive sampling, a non-probability approach is utilized, whereby participants are deliberately chosen based on specific criteria relevant to the research objectives. This method is particularly advantageous when targeting a demographic segment possessing characteristics or expertise essential to the study. For the purpose of this investigation, individuals enrolled in mathematics education programs at AAMUSTED, Kumasi-Tanoso, Ghana, were specifically targeted.
Furthermore, the student body was organized into distinct strata based on academic level: Bachelor of Science in Mathematics Education and postgraduate studies (Master of Philosophy and Doctor of Philosophy in Mathematics Education). This stratification facilitated the employment of stratified sampling, ensuring the representation of each subgroup within the study. Subsequently, within these strata, participants were selected through simple random sampling, guaranteeing every student an equal opportunity for selection.
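A minimal sketch of this two-stage selection is given below, assuming a pandas DataFrame `students` with a `level` column holding the stratum labels; the column name, labels, and fixed seed are illustrative assumptions rather than details taken from the study.

```python
import pandas as pd

def stratified_random_sample(students: pd.DataFrame, n_total: int,
                             stratum_col: str = "level", seed: int = 1) -> pd.DataFrame:
    """Draw a proportional simple random sample within each stratum (e.g. BSc, MPhil, PhD)."""
    frac = n_total / len(students)                      # sampling fraction applied to every stratum
    return (students.groupby(stratum_col, group_keys=False)
                    .apply(lambda g: g.sample(frac=frac, random_state=seed)))

# Example: sample = stratified_random_sample(students, n_total=250)
```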
Data was gathered using a structured questionnaire, a tool that enables the standardized collection of information. Questionnaires ensure uniformity in the data collection process, as the same questions are presented to all respondents. The questionnaire was segmented into four sections: the first section aimed to collect demographic information, such as gender and educational level, from the respondents. The subsequent sections contained statements related to the primary variables under investigation (usage of ChatGPT, mathematics interest, and mathematics performance).
Responses were measured using a five-point Likert scale, ranging from 1 (strongly agree) to 5 (strongly disagree). Items measuring the main variables were adapted from existing literature to assess their applicability and to verify the consistency of findings across different contexts or populations. Specifically, the items related to mathematics interest and performance were adapted from the works of Arthur et al. (2022) and Kalpana & Malathi (2019), respectively. This approach not only contributes to the validation of the measurement items but also facilitates the examination of the constructs’ generalizability.
Data analysis was executed using SPSS (Version 23) and AMOS (Version 23) software. Initially, responses gathered from the participants were coded and inputted into SPSS (Version 23). Subsequently, a descriptive analysis focusing on frequencies was conducted to identify any missing responses. Following this preliminary step, further analytical procedures were undertaken, including Exploratory Factor Analysis (EFA) to explore the underlying factor structure of the questionnaire items, reliability analysis to assess the consistency of the measures, discriminant validity to evaluate the distinctiveness of the constructs, and path analysis to examine the structural relationships specified in the research hypotheses.
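As an illustration of the reliability step, the sketch below recomputes Cronbach’s alpha directly from the item responses; it assumes a pandas DataFrame holding the Likert scores for one construct and is intended only to mirror what SPSS reports, not to reproduce it exactly.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one scale: alpha = k/(k-1) * (1 - sum(item variances)/var(total))."""
    k = items.shape[1]                              # number of items in the scale
    item_var_sum = items.var(axis=0, ddof=1).sum()  # sum of individual item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Example (column names follow the questionnaire labels):
# cronbach_alpha(df[["ChatGPT3", "ChatGPT4", "ChatGPT5", "ChatGPT6", "ChatGPT7", "ChatGPT8"]])
```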
3. Data Analysis
EFA was employed as a statistical method to discern the latent structures within the dataset by examining the relationships among variables, utilizing SPSS (Version 23) for analysis. The procedure aimed to elucidate the underlying factor structure of a set of measurement items, with attention focused on the rotated component matrix, the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy, Bartlett’s Test of Sphericity, and the total variance explained (TVE). The rotated component matrix revealed the factor loadings for each measurement item on the identified factors, where a higher loading signifies a stronger association between an item and a factor. Varimax rotation facilitated the simplification of the factor structure, enhancing interpretability. Table 2 shows the results for the Exploratory Factor Analysis (EFA).
Rotated Component Matrix

| Measurement Items | Component 1 | Component 2 | Component 3 |
|---|---|---|---|
| ChatGPT3 | .790 | | |
| ChatGPT4 | .718 | | |
| ChatGPT5 | .844 | | |
| ChatGPT6 | .703 | | |
| ChatGPT7 | .780 | | |
| ChatGPT8 | .768 | | |
| MINT1 | | .765 | |
| MINT3 | | .740 | |
| MINT4 | | .806 | |
| PERF2 | | | .843 |
| PERF3 | | | .724 |
| PERF4 | | | .809 |
| PERF5 | | | .665 |

| KMO and Bartlett’s Test | | |
|---|---|---|
| TVE | | 82.345% |
| KMO Measure of Sampling Adequacy | | .942 |
| Bartlett’s Test of Sphericity | Approx. Chi-Square | 1015.111 |
| | Df | 78 |
| | Sig. | .000 |
The KMO measure of sampling adequacy was utilized to evaluate the data’s appropriateness for EFA. A KMO value approaching 1 is indicative of high suitability for factor analysis. The obtained KMO measure, recorded at .942, signifies that the dataset is exceptionally conducive to EFA. Furthermore, Bartlett’s Test of Sphericity was applied to ascertain the significance of correlations among variables, a prerequisite for conducting EFA. The test yielded a highly significant result (p<0.001), with an approximate chi-square value of 1015.111 across 78 degrees of freedom, thereby affirming the data’s compatibility with EFA. Additionally, the analysis of TVE offered insights into the proportion of variance within the dataset accounted for by the identified components. The calculated TVE at 82.3% suggests that a considerable portion of the dataset’s variance is comprehensively explained by the components identified through the analysis.
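The same diagnostics can be reproduced outside SPSS. The sketch below uses the open-source factor_analyzer package on a DataFrame `items` containing only the Likert-scale item columns; its output will approximate, not exactly match, the values in Table 2.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer, calculate_kmo, calculate_bartlett_sphericity

def run_efa(items: pd.DataFrame, n_factors: int = 3) -> pd.DataFrame:
    chi2, p = calculate_bartlett_sphericity(items)   # tests whether correlations differ from identity
    _, kmo_total = calculate_kmo(items)              # overall sampling adequacy (closer to 1 is better)
    print(f"Bartlett chi2={chi2:.3f}, p={p:.4f}; KMO={kmo_total:.3f}")

    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
    fa.fit(items)
    tve = fa.get_factor_variance()[2][-1]            # cumulative proportion of variance explained
    print(f"Total variance explained: {tve:.1%}")
    return pd.DataFrame(fa.loadings_, index=items.columns)  # rotated loadings, cf. Table 2
```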
Confirmatory Factor Analysis (CFA) was performed utilizing AMOS (Version 23) to evaluate the suitability of the factor structure proposed on the basis of the EFA results. CFA, a rigorous statistical method, aims to verify the extent to which a predetermined factor structure accurately captures the relationships among measured variables. This analysis involved the indicators retained from the EFA, confirming the significance of their factor loadings, which substantiates the model’s validity.
For optimal model fitness, the stipulated criteria include a CMIN/DF ratio less than 3 or ideally below 1, a Comparative Fit Index (CFI) exceeding .95, a Root Mean Square Residual (RMR) below .08, a Root Mean Square Error of Approximation (RMSEA) under .06, and a Probability of Close Fit (PClose) demonstrating statistical insignificance (PClose>.05), as advocated by Amoako et al. (2022). The model fitness outcomes delineated in Table 3, corroborated by the results presented in Figure 2, conform to the benchmarks recommended by Amoako et al. (2022), thereby indicating a satisfactory fit of the CFA model to the data.

The measurement items ChatGPT3, ChatGPT4, ChatGPT5, ChatGPT6, ChatGPT7, and ChatGPT8 serve as questionnaire queries designed to elicit students’ perspectives on utilizing ChatGPT. Additionally, INT1, INT3, and INT4 are items that gauge students’ interest in mathematics. PERF2, PERF3, PERF4, and PERF5, conversely, assess students’ performance in mathematics.
| Overall model fit indices: CMIN=56.096; DF=60; P-value=.619; CMIN/DF=.935; TLI=1.005; CFI=1.000; GFI=.907; RMR=.039; RMSEA=.000; PClose=.896 | Std. Factor Loadings |
|---|---|
| ChatGPT: CA=.947; CR=.945; AVE=.707 | |
| ChatGPT3: I love using ChatGPT to learn mathematics | .832 |
| ChatGPT4: ChatGPT improves my problem-solving ability | .883 |
| ChatGPT5: I can learn mathematics without attending lectures with the help of ChatGPT | .871 |
| ChatGPT6: ChatGPT is user friendly | .831 |
| ChatGPT7: ChatGPT expresses complex mathematics problems in simple terms | .885 |
| ChatGPT8: I see ChatGPT as an online library | .859 |
| INT: CA=.921; CR=.921; AVE=.797 | |
| INT1: My interest in mathematics increases when using ChatGPT | .861 |
| INT3: I love to study mathematics | .929 |
| INT4: Mathematics is fun when using ChatGPT | .886 |
| PERF: CA=.922; CR=.926; AVE=.758 | |
| PERF2: My performance in mathematics is better than in any other course | .863 |
| PERF3: I am able to solve some of the mathematics questions in the lecturer’s hand-out without any assistance from peers | .828 |
| PERF4: I used ChatGPT to solve complex mathematics questions given to us as an assignment by the lecturer | .941 |
| PERF5: I am capable of achieving success in math | .845 |
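For readers without access to AMOS, a minimal sketch of the same three-factor CFA using the open-source semopy package is shown below; the DataFrame `df` is assumed to contain the retained item columns, and the resulting fit indices will differ somewhat from the AMOS output reported in Table 3.

```python
import semopy

# Measurement model implied by the EFA: three correlated latent factors.
CFA_DESC = """
ChatGPT =~ ChatGPT3 + ChatGPT4 + ChatGPT5 + ChatGPT6 + ChatGPT7 + ChatGPT8
INT     =~ INT1 + INT3 + INT4
PERF    =~ PERF2 + PERF3 + PERF4 + PERF5
"""

def run_cfa(df):
    model = semopy.Model(CFA_DESC)
    model.fit(df)
    print(semopy.calc_stats(model).T)        # chi-square, DoF, CFI, TLI, RMSEA, GFI, ...
    return model.inspect(std_est=True)       # standardized factor loadings, cf. Table 3
```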
In structural equation modeling (SEM), the assessment of the validity and reliability of the model’s variables is imperative to ensure accurate representation of the relationships between components. The Composite Reliability (CR), Average Variance Extracted (AVE), Maximum Shared Variance (MSV), and maximal reliability (MaxR(H)) were calculated to evaluate the model’s variables, which include PERF (mathematics performance), INT (mathematics interest), and ChatGPT (an AI chatbot used for mathematics education). Discriminant validity was assessed using an add-in for AMOS (Version 23), which involves comparing the square root of the AVE with the inter-construct correlations. Discriminant validity is established when the smallest value of the square root of the AVE surpasses the highest inter-construct correlation, as posited by Dogbe et al. (2019).
| Variables | CR | AVE | MSV | MaxR(H) | ChatGPT | PERF | INTE |
|---|---|---|---|---|---|---|---|
| ChatGPT | 0.945 | 0.741 | 0.707 | 0.956 | 0.861 | | |
| PERF | 0.926 | 0.758 | 0.659 | 0.939 | 0.811*** | 0.870 | |
| INTE | 0.921 | 0.797 | 0.707 | 0.928 | 0.841*** | 0.812*** | 0.893 |
The analysis in Table 4 indicated that the smallest value for the square root of the AVE was 0.861, and the highest intercorrelated value was 0.841. This finding supports the distinctiveness of the proposed factors, affirming they represent different constructs within the dataset. Convergent validity, which assesses the extent to which a measure correlates positively with alternative measures of the same construct, is deemed achieved when the Cronbach’s alpha and the AVE values meet or exceed thresholds of 0.7 and 0.6, respectively. According to Shrestha (2021), the attainment of these thresholds within this study confirms the achievement of convergent validity.
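The CR and AVE figures can be recomputed directly from the standardized loadings in Table 3; the short sketch below does so for the ChatGPT construct, and the square root of the AVE is the quantity used in the Fornell-Larcker comparison described above. Small rounding differences from the reported values are expected.

```python
import numpy as np

def cr_ave(loadings):
    """Composite reliability and average variance extracted from standardized loadings."""
    lam = np.asarray(loadings)
    error_var = (1 - lam ** 2).sum()                       # summed indicator error variances
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + error_var)
    ave = (lam ** 2).mean()
    return cr, ave

cr, ave = cr_ave([0.832, 0.883, 0.871, 0.831, 0.885, 0.859])  # ChatGPT loadings from Table 3
print(f"CR={cr:.3f}, AVE={ave:.3f}, sqrt(AVE)={ave ** 0.5:.3f}")
```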
4. Research Results
The analysis of direct effects within the SEM framework was conducted to elucidate the relationships among the variables specified in the model. Utilizing AMOS (Version 23), the estimated direct effects, along with their standard errors (S.E), critical ratios (C.R), and p-values, were calculated for the interactions among the study variables, including INT, PERF, ChatGPT, and ChatGPT_INT (the interaction between ChatGPT usage and mathematics interest). Additionally, control variables, namely the gender and level of education of the respondents, were incorporated into the analysis. Figure 3 presents the path diagram whose numerical estimates are reported in Table 5.
| Direct Effect | Std. Estimate | S.E | C.R | P-value |
|---|---|---|---|---|
| Gender $\rightarrow$ PERF | .027 | .068 | .391 | .696 |
| Level of Education $\rightarrow$ PERF | .019 | .027 | .720 | .472 |
| ChatGPT $\rightarrow$ INT | .839 | .111 | 7.558 | *** |
| ChatGPT $\rightarrow$ PERF | -.757 | .122 | -6.213 | *** |
| INT $\rightarrow$ PERF | .442 | .109 | 4.063 | *** |
| ChatGPT_INT $\rightarrow$ PERF | .206 | .011 | 19.593 | *** |
In the results presented in Table 5, the effect of respondents’ gender on mathematics performance was observed to be positive yet statistically insignificant ($\beta$=.027; C.R=.391; p=.696). Similarly, the level of education exhibited a positive but statistically insignificant effect on mathematics performance ($\beta$=.019; C.R.=.720; p=.472).
Further analysis elucidated a direct negative impact of ChatGPT usage on mathematics performance (Table 5), which was statistically significant, with a p-value below 1% ($\beta$=-0.757, C.R=-6.213). This finding suggests a noticeable decline in students’ mathematical abilities with increased reliance on ChatGPT, leading to the acceptance of hypothesis H1A, which posited a significant effect of ChatGPT on mathematics performance.
Conversely, ChatGPT was found to exert a direct positive influence on mathematics interest (INT), with statistical significance indicated by a p-value below 1% ($\beta$=0.839, C.R.=7.558). This indicates a substantial increase in students’ interest in mathematics with frequent ChatGPT usage, substantiating the acceptance of hypothesis H1B, which stated that ChatGPT significantly affects students’ mathematics interest.
Additionally, the positive and statistically significant direct effect of mathematics interest on mathematics performance, with a p-value below 1% ($\beta$=0.442, C.R=4.063), implies that students’ performance in mathematics improves as their interest in the subject increases. This led to the acceptance of hypothesis H1C, affirming that students’ mathematics interest significantly influences their mathematics performance.
The interaction between ChatGPT usage and mathematics interest was also analyzed, revealing a direct positive and statistically significant effect on mathematics performance, as evidenced by a C.R of 19.593 and a p-value below 1% ($\beta$=0.206, S.E=0.011). This underscores that the synergy between ChatGPT usage and mathematics interest markedly enhances mathematics performance, validating the acceptance of hypothesis H1D, which proposed that students’ mathematics interest significantly moderates the relationship between ChatGPT usage and mathematics performance.
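A minimal sketch of an equivalent moderated path model is given below; it assumes item means are used as composite scores and that the interaction term is the product of the mean-centred composites, so its coefficients will only approximate the AMOS estimates in Table 5 (column names such as `Gender` and `Level` are illustrative).

```python
import pandas as pd
import semopy

def fit_moderation(df: pd.DataFrame):
    # Composite scores as item means (an approximation of the latent constructs).
    d = pd.DataFrame({
        "ChatGPT": df.filter(like="ChatGPT").mean(axis=1),
        "INT":     df[["INT1", "INT3", "INT4"]].mean(axis=1),
        "PERF":    df.filter(like="PERF").mean(axis=1),
        "Gender":  df["Gender"],
        "Level":   df["Level"],
    })
    # Interaction (moderation) term from mean-centred composites.
    d["ChatGPT_INT"] = (d["ChatGPT"] - d["ChatGPT"].mean()) * (d["INT"] - d["INT"].mean())

    desc = """
    INT  ~ ChatGPT
    PERF ~ ChatGPT + INT + ChatGPT_INT + Gender + Level
    """
    model = semopy.Model(desc)
    model.fit(d)
    return model.inspect(std_est=True)   # path estimates, cf. Table 5
```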
Figure 4 illustrates the two-way interaction effect, examining the interplay between the independent variable (ChatGPT) and the dependent variable (mathematics performance), as moderated by mathematics interest. It was observed that mathematics interest mitigates the negative impact of ChatGPT on mathematics performance.
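The shape of this interaction can be visualized from the standardized estimates in Table 5 alone; the sketch below plots predicted performance against standardized ChatGPT usage at low and high interest (±1 SD), which is the conventional way a two-way interaction plot such as Figure 4 is constructed.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_interaction(b_chatgpt=-0.757, b_int=0.442, b_interaction=0.206):
    """Simple-slopes plot using the standardized estimates reported in Table 5."""
    x = np.linspace(-1, 1, 50)                                 # standardized ChatGPT usage
    for z, label in [(-1, "Low interest (-1 SD)"), (1, "High interest (+1 SD)")]:
        y = b_chatgpt * x + b_int * z + b_interaction * x * z  # predicted (standardized) performance
        plt.plot(x, y, label=label)
    plt.xlabel("ChatGPT usage (standardized)")
    plt.ylabel("Predicted mathematics performance")
    plt.legend()
    plt.show()

plot_interaction()
```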
5. Discussion
Within the cohort of 250 students from the Mathematics Education program at AAMUSTED, Kumasi, it was determined that the application of ChatGPT in mathematics education exhibits a negative and statistically significant influence on students’ mathematics performance. This finding indicates that reliance on ChatGPT for solving mathematics problems, without a corresponding explanation of the solution processes, does not foster improvement in students’ mathematical competencies. Previous investigations into the impact of ChatGPT on students’ mathematics performance have predominantly concentrated on basic and secondary school students, leaving a gap in research concerning tertiary education students. This study, therefore, enriches the literature by exploring the implications of ChatGPT usage among university students studying mathematics education. Contrary to the assertion by Dao & Le (2023) that ChatGPT harbors the potential to enhance students’ learning without directly influencing their performance in mathematical reasoning and problem-solving, findings from Wardat et al. (2023) suggest that ChatGPT can improve students’ mathematical abilities and capabilities by offering them foundational mathematical ideas.
Additionally, the investigation unveiled that the impact of ChatGPT usage on mathematics interest is not only directly positive but also statistically significant, aligning with findings from other studies such as Ali et al. (2023), who observed that motivation in mathematics learning is positively affected by ChatGPT usage. Furthermore, Zafrullah et al. (2023) identified ChatGPT as a motivational tool that amplifies students’ interest in mathematics, supporting the notion that ChatGPT can predict the rate of interest among students in the discipline, as highlighted by Woodhouse & Charlesworth (2023).
The outcomes of this investigation have elucidated a significant direct positive relationship between students’ interest in mathematics and their performance in the subject. Research conducted by Arthur et al. (2022), involving a sample of 373 first-year undergraduate students, revealed that an elevated interest in mathematics markedly improves students’ performance in the discipline. Similarly, Arhin & Yanney (2020) observed that students’ performance in mathematics is positively correlated with their interest in the subject, indicating that enhanced academic achievement in mathematics is associated with increased subject interest. This finding is corroborated by Wong & Wong (2019), who concluded that students’ performance in mathematics improves with a growing interest in studying the subject.
Furthermore, the current study disclosed that the moderating role of mathematics interest in the dynamic between ChatGPT usage and students’ mathematics performance is both positive and statistically significant. The analysis indicates that while the utilization of ChatGPT negatively impacts students’ mathematics performance, the presence of mathematics interest significantly ameliorates this effect. This suggests that the potential for ChatGPT to positively influence students’ performance in mathematics is contingent upon its effect on stimulating students’ interest in the subject.
6. Conclusion
The investigation has elucidated that the deployment of ChatGPT exerts a substantial positive effect on students’ interest in mathematics, as evidenced by the highly significant direct relationship between ChatGPT utilization and mathematics interest. The direct effect of ChatGPT usage on mathematics performance, by contrast, was negative and statistically significant, whereas the interaction between ChatGPT usage and mathematics interest exerted a positive and significant effect on performance, suggesting that gains in mathematics achievement attributable to ChatGPT arise only when students’ interest in the subject is engaged. The profound positive impact of mathematics interest on mathematical performance was underscored by the highly significant direct association between these variables. These findings furnish valuable insights into the dynamics affecting mathematics interest and performance, enhancing the comprehension of the interactions among variables within the SEM framework and their consequent effects on one another.
7. Limitations and Recommendations
Limitations of the study include a focus on immediate outcomes, such as short-term gains in mathematics proficiency, potentially obscuring the long-term effects of ChatGPT integration on students’ mathematics interest and performance. An assumption of universal access to ChatGPT and comparable technologies among students may overlook disparities in technology access or utilization across different student demographics, which could influence the results. Additionally, the study may not fully account for variations in students’ familiarity and comfort with using ChatGPT, nor does it consider the potential impact of teacher factors, such as instructional support and strategies, on students’ mathematics interest and performance.
Recommendations arising from the study advocate for a balanced approach to ChatGPT’s utilization in mathematics education, cautioning against over-reliance on the tool. It suggests the incorporation of surveys or self-evaluation tools as alternative or supplementary data collection methods to questionnaires.
The authors stated that the study was approved by the Mathematics Education Research Ethics Committee of AAMUSTED. Written informed consent was obtained from the Heads of Department and lecturers, as well as from the students.
The data used to support the findings of this study are available from the corresponding author upon request.
The authors declare that there was no conflict of interest for the study.