
Evaluating ChatGPT’s Effect on Students’ Independent Learning Experiences: The Case of Pentecost University, a Ghanaian Higher Educational Institution
Abstract
This study evaluates the impact of ChatGPT, a generative AI tool, on students' independent learning experiences at Pentecost University, a higher education institution in Ghana. The study adopts a quantitative research approach with a descriptive research design, focusing on Pentecost University, a Ghanaian private university with significant involvement in artificial intelligence studies. The survey collected data from 334 students, representing an 87.7% response rate, to assess various aspects of their interaction with ChatGPT, including engagement, reliability, motivation, and personalisation of learning. Results indicate that ChatGPT is widely used among students for completing assignments (45%), conducting research (35%), and learning new topics (20%). The findings demonstrate that ChatGPT significantly enhances student engagement, particularly through its real-time response capability, which provides immediate feedback and fosters interaction. Multiple regression analysis highlights that personalised learning goals and real-time response are key predictors of increased student motivation, accounting for 62% of the variance observed. However, gender and educational level do not significantly affect engagement, indicating the inclusivity of ChatGPT as a learning tool. The study concludes that while ChatGPT is beneficial in enhancing independent learning experiences, its effectiveness is maximised when it is integrated with structured support from educators.

Keywords: ChatGPT, Prompt-Engineering, Chatbot, Independent Learning, Higher Education

JEL Classification: I20, I21, I29

1. Introduction

Background
Technological advancements have had a significant impact on education, changing teaching and learning substantially (Dede, 1996; Ferdig, 2006; Nora & Snyder, 2008; Kirkwood & Price, 2014; Delgado et al., 2015; Haleem et al., 2022). One of technology's most profound effects on education is improved access to information. With the power of the Internet, vast amounts of data are accessible to everyone, including online institutional repositories that offer free academic research documents, online libraries, educational and learning websites, and digital textbooks. This easy access to information has empowered learners to explore knowledge. Despite the challenges and concerns surrounding the use of technology in education, the evidence is clear that the pros far outweigh the cons. Since November 2022, ChatGPT, a chatbot built on the GPT family of large language models, has emerged as one of the transformative innovations in generative AI, revolutionising access to information (Afjal, 2023; Nazir & Wang, 2023). The chatbot's ability to engage in human-like conversation has improved dramatically over the past year through continuous updates. These updates have extended the model's capabilities beyond answering basic questions by text: it can now handle multi-modal operations and voice interaction. With its ability to integrate with hundreds of plugins, ChatGPT has proven to be a powerful conversational tool and a revolutionary learning companion.
Notwithstanding the ethical dilemmas surrounding the use of generative AI tools like ChatGPT in higher education, the need for innovative learning has called for their careful and responsible integration into the learning environment, as ChatGPT has proven able to adapt to the diverse needs of learners, enhancing their engagement and facilitating their personal learning experience (Al-Mughairi & Bhaskar, 2024; Elbanna & Armstrong, 2024; Sabraz Nawaz et al., 2024). Unlike other levels of education, where the teacher gives students more structured guidance and oversight, higher education places greater emphasis on students' independent learning. This requires that students actively engage with course materials, conduct more research, pursue a deeper understanding of the concepts taught, plan their learning schedules and assess their own learning progress. A tool like ChatGPT can be very useful at this level of education. Its remarkable capabilities make it easy to customise for independent learning, and customisation of learning software is a key factor in effective learning (Ismail et al., 2023). This study is both timely and relevant, particularly as higher education institutions are currently debating whether to embrace the use of generative AI tools like ChatGPT among students.

Problem Statement
Independent learning aims to establish a collaborative and interactive learning atmosphere that enables students to assume responsibility for their learning (Dabbagh & Kitsantas, 2012); this contributes significantly to the learning process. Despite the recognised benefits of independent learning, several constraints have hindered its adoption among students, including a lack of motivation to engage in independent learning, limited access to technological tools and, most importantly, the difficulty of customising one's learning strategies or curriculum (Hummel et al., 2004; Bartle, 2015; Islam et al., 2015; Børte et al., 2023). With generative AI tools like ChatGPT, students should now have access to more personalised and interactive learning experiences. However, studies exploring the chatbot's effect on students' independent learning in higher education, specifically within the Ghanaian context, have been piecemeal. This leaves a significant gap in the literature, and for this reason this study was undertaken.

Aim of the Study
The study sought to evaluate ChatGPT's effect on students' independent learning experiences in a Ghanaian higher educational institution.

Research Statement
What is the effect of ChatGPT on students' independent learning experiences?

Research Objectives
The study explored the following objectives:
1. To examine the effectiveness of ChatGPT as an AI tool in improving student learning engagement.
2. To investigate the usefulness of ChatGPT as an AI tool for adaptive learning amongst students.

Hypotheses
Objective 1: H1: ChatGPT usage is positively associated with students' understanding of course content.
Objective 1: H2: ChatGPT's real-time response significantly enhances student engagement.
Objective 2: H3: Students perceive ChatGPT as a reliable tool for research work and assignments.
Table 1.
Table of Hypotheses by Objective. Source: Researcher's Construct, Atieku-Boateng et al., 2025

Significance of the Study
The significance of this study lies in its potential to contribute to the understanding and application of independent learning in higher education. As education evolves, the need for innovative and effective pedagogical strategies becomes increasingly important. Additionally, the study's findings have practical implications for policymakers within the educational sector, as they can provide insight into creating a regulatory framework for the responsible use of AI in higher education.

2. Literature Review

The Concept and Benefits of Independent Learning
Independent learning is a pedagogical strategy that recognises the uniqueness of each learner and tailors educational experiences to meet individual requirements (Aure & Cuenca, 2024). This educational paradigm shift emphasises the learner's active participation in their own learning process, fostering the engagement and motivation that lead to higher academic outcomes. Allowing learners to progress at their own pace and engage with tailored content creates a more conducive environment for understanding complex concepts and improving retention rates (Tai et al., 2018). Independent learning also supports the development of non-cognitive skills such as self-control, resilience, and teamwork, which are essential for 21st-century learners; it further promotes intrinsic motivation by granting students a sense of agency and ownership over their educational journey, directly influencing their learning outcomes (Wong et al., 2019). In the context of inclusive education, independent learning becomes indispensable: acknowledging diversity enhances educational equity, enabling all learners, including those with special needs, to reach their full potential. In today's digital age, technology plays a crucial role in facilitating personalised learning. Educational technology can enrich students' experiences by providing individualised resources, interactive activities, and immediate feedback (Chen et al., 2017). Embracing independent learning empowers students to realise the full potential of lifelong learning; therefore, promoting this pedagogical approach within higher education is imperative.

Academic Integrity in the Age of Generative AI
Advances in AI have driven the growth of generative AI tools such as chatbots. The ability of these chatbots to generate original content has left educational institutions worried about the implications for academic integrity (Pedro et al., 2019; Roschelle et al., 2020; Miao et al., 2021). Educators fear that over-reliance on generative chatbots can discourage critical thinking and personalised research among students (Currie, 2023; Yeo, 2023). This fear is justified, as students, with the aid of generative AI-enabled chatbots, can write an entire essay without typing a single word. While measures such as AI content detectors have been put in place to check the unethical use of these chatbots, this intervention has its downsides: some generative AI-enabled chatbots can bypass AI-content detectors, and detection tools can produce false positives and have been shown to discriminate against students doing academic work in a second language (Elkhatat et al., 2023).
Therefore, addressing the academic integrity issues arising from students' use of generative AI tools in higher education requires careful consideration.

Theoretical Framework
This research is anchored in the Technology Acceptance Model (TAM), a widely used theoretical model in information systems that explains how users come to accept and use a technology. Originally proposed by Davis (1989), the model has undergone several refinements and extensions; this study settled on the original model as proposed by Fred Davis.

Figure 1. The Technology Acceptance Model. Source: Davis, 1989

The theory's constructs, and how they direct the study, are as follows:

Perceived Usefulness: users are more likely to accept and use a technology if they perceive it as useful in enhancing their performance on an activity. In the context of this research, this construct guides the investigation of students' perceived usefulness of ChatGPT and how it shapes their independent learning.

Perceived Ease of Use: users are more likely to use a technology when they perceive it as requiring little effort to use. In this research, the construct guides the investigation of how easily students can use ChatGPT to access information and interact with the system.

Behavioural Intention to Use: TAM proposes that users' behavioural intentions to use a technology are influenced by its perceived usefulness and ease of use, based on their experiences and perceptions. In line with this construct, the study investigated students' intentions to continue using ChatGPT for independent learning activities. The researchers believe that understanding students' behavioural intentions can provide insights into the sustainability of integrating ChatGPT into the educational environment.

Actual Use: Finally, this study assessed students' actual usage behaviour regarding ChatGPT, including usage session duration and the tasks performed with the tool. By comparing students' actual usage patterns with their perceptions and intentions, the study evaluated the alignment between theory and practice in the context of ChatGPT adoption.

3. Methodology

Research Approach
The study adopted a quantitative approach, leveraging numerical data and statistical analyses to determine the relationships between variables and draw conclusions based on statistical power (Baškarada & Koronios, 2018). Additionally, a quantitative research approach employs a rigorous research design with standardised measuring instruments that enhance the validity of findings (Claydon, 2015).

Research Design
A descriptive research design was adopted for this study. It was best suited to the study because a descriptive design aims to describe the characteristics or behaviours of a population or phenomenon being studied (Dannels, 2018). This was in line with the study's goal, as the researchers sought to describe and analyse the characteristics and behaviours of students using ChatGPT in their pursuit of independent learning in Ghana's higher education context.

Population
The study focused on a prominent Ghanaian private university, Pentecost University. The university was selected for the unique programmes it offers in the areas of artificial intelligence and robotics.
The selection of this university was also driven by its notable achievement in AI research, as evidenced by the grant it won, along with five other world-leading universities, for premium research into the application of Artificial Intelligence, Cyber-Physical Systems, Robotics, Laser technology, and Life Cycle modelling for eco-efficient manufacturing of electric vehicle components. Additionally, the university's proactive stance towards embracing AI aligns well with the objectives of this study, which seeks to delve into the practical implications and experiences of AI adoption in an educational setting. All this makes the university a compelling case study for exploring the intersection of AI and academia. The university's population is estimated at between 3,000 and 5,000 students.

Sample Size
The expected sample size for this study was determined using Krejcie and Morgan (1970), according to whom the preferred sample size for such an estimated population is 381. However, only 334 responses were received from the survey.

Sampling Technique
The study adopted a probability sampling technique, ensuring that each member of the population had an equal chance of inclusion and, thus, that the sample was more representative.

Data Collection Instrument
An online questionnaire was used as the data collection instrument. The link to the form was shared with students on their various WhatsApp platforms. To ensure validity, reliability and clarity, the questionnaire was pre-tested before the main data collection phase. The questionnaire comprised four sections, each targeting a specific aspect of the research. The first section captured participants' gender, age, educational background and field of study. The second section assessed respondents' experience with ChatGPT, covering aspects such as duration of use, mode of discovery, and familiarity with AI tools. The third section focused on students' learning engagement with ChatGPT; Likert-scale items measured perceptions of ChatGPT's impact on understanding course content, its reliability and its ability as a research assistant. Lastly, the fourth section captured ChatGPT's usefulness for personalised and adaptive learning, including how ChatGPT supports learning preferences, goal setting and motivation. These four sections comprehensively addressed the study's research objectives, and the structured nature of the questionnaire allowed for the collection of measurable and comparable data, facilitating meaningful statistical analysis.

Data Analysis Technique
The study employed both descriptive and inferential statistics. Descriptive statistics were used to summarise the characteristics of the sample, measuring the central tendency, variability, and distribution of the data. Inferential statistics were used to make inferences about the population from the sample data; comparative, correlational and regression analyses were performed on the dataset.

Data Reliability and Validity
A reliability test was performed to measure the internal consistency of the Likert-scale items using Cronbach's alpha.
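As an illustration of this reliability check, the sketch below computes Cronbach's alpha for a set of Likert items held in a pandas DataFrame. It is a minimal example under assumed column names and sample data, not the study's actual analysis script.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a scale whose columns are Likert items (one row per respondent)."""
    k = items.shape[1]                               # number of items in the scale
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses on a 1-5 Likert scale (column names are illustrative only).
responses = pd.DataFrame({
    "engagement_1": [4, 5, 3, 4, 5, 2],
    "engagement_2": [4, 4, 3, 5, 5, 2],
    "motivation_1": [3, 5, 2, 4, 4, 2],
})
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```

For reference, the study reports an overall alpha of 0.84 across its Likert items, above the conventional 0.7 acceptability threshold.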
A pilot study supported construct validation, allowing the researchers to assess the clarity, relevance, and comprehensiveness of the Likert-scale items in measuring the targeted constructs. Other researchers reviewed the questionnaire to ensure that the questions adequately covered the full range of content of the constructs being studied, which helped to ensure content validation.

Data Analysis Tool
ChatGPT 4.0's code interpreter, with its advanced analytics capabilities, was used to clean, organise, and analyse the response data. The analysis was carried out in Python, which offered a more robust and customised approach, as Python is a very powerful tool for data analysis.

Ethical Considerations
The study's goal and methods were fully disclosed to the participants, and the questionnaire indicated that participation was entirely optional. The study complied with informed consent guidelines, ensuring that before consenting to participate, participants were aware of the purpose of the study and their role in it. Participants' confidentiality was safeguarded through guarantees that replies would be kept private, and participants were informed of their freedom to leave the study at any moment without consequence.

Methodological Constraints
The main methodological constraint concerns the generalisability of the study's findings. Because the study relies on a single Ghanaian private university, the findings may not directly apply to other Ghanaian universities.

4. Results and Discussion

Descriptive Statistics
Descriptive statistics were employed to summarise the demographic characteristics of the respondents and their perceptions of various aspects of ChatGPT usage. The sample consisted of 334 participants, representing an 87.7% response rate against the expected sample size of 381 (Krejcie & Morgan, 1970). Of the participants, 59.9% identified as male and 40.1% as female; the majority (53.0%) were between 18 and 25 years old, followed by 31.4% in the 26 to 35 age group. Regarding educational level, 86.0% of the respondents were undergraduate students, while the remaining 14.1% were post-graduate students.

Gender: Male, 200 (59.9%); Female, 134 (40.1%)
Age: 18 to 25, 177 (53.0%); 26 to 35, 105 (31.4%); 36 to 45, 43 (12.9%); 46 to 54, 4 (1.2%); below 18, 3 (0.9%); 55 and above, 2 (0.6%)
Educational Level: Undergraduate, 287 (86.0%); Post-graduate, 47 (14.1%)
Duration of ChatGPT Use: 1 to 3 months, 157 (46.9%); more than 6 months, 100 (29.9%); 3 to 6 months, 77 (23.0%)
Table 2. Demographic Information. Source: Field Data, Atieku-Boateng et al., 2025

1 to 3 months: 157 (46.9%)
More than 6 months: 100 (29.9%)
3 to 6 months: 77 (23.0%)
Table 3. Duration of ChatGPT Use Among Respondents. Source: Field Data, Atieku-Boateng et al., 2025

Table 3 summarises the duration of ChatGPT usage among the respondents. The largest group of participants (46.9%) reported using ChatGPT for 1 to 3 months, followed by 29.9% who had been using it for more than 6 months and 23.0% who had used it for 3 to 6 months. This distribution provides insight into participants' experience with ChatGPT, which may influence their perceptions of its effectiveness in enhancing learning engagement. Duration of use can also be a significant factor in students' familiarity and comfort with generative AI, which may in turn affect their overall satisfaction and learning outcomes.
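As a rough illustration of how such descriptive summaries can be produced in Python (the language the study reports using for its analysis), the sketch below computes the response rate and frequency/percentage tables from a small, made-up DataFrame. The column names and values are assumptions for illustration, not the study's dataset.

```python
import pandas as pd

EXPECTED_SAMPLE = 381   # Krejcie & Morgan (1970) target reported by the study
RECEIVED = 334          # responses received

print(f"Response rate: {RECEIVED / EXPECTED_SAMPLE:.1%}")   # approximately 87.7%

# Hypothetical demographic data; in practice this would be the cleaned survey export.
df = pd.DataFrame({
    "gender": ["Male", "Female", "Male", "Male", "Female"],
    "duration_of_use": ["1-3 months", ">6 months", "3-6 months", "1-3 months", "1-3 months"],
})

# Frequency and percentage tables, analogous to Tables 2 and 3.
for column in ["gender", "duration_of_use"]:
    counts = df[column].value_counts()
    percentages = df[column].value_counts(normalize=True).mul(100).round(1)
    summary = pd.DataFrame({"Frequency (N)": counts, "Percentage (%)": percentages})
    print(f"\n{column}\n{summary}")
```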
For the key constructs measured on a Likert scale (ranging from 1 to 5), the average scores indicated generally positive perceptions of ChatGPT. For instance, the mean score for the statement "ChatGPT helps me to better understand course content" was 3.95 (SD = 0.86), indicating moderate to high agreement. Similarly, ChatGPT's usefulness in assisting with research work and assignments was rated highly, with a mean score of 4.18 (SD = 0.82). These findings suggest that participants found ChatGPT beneficial in enhancing their learning engagement, reliability, and motivation. The positive scores across the constructs indicate that students perceive ChatGPT as a valuable tool that supports their educational goals by providing real-time information, guidance, and a platform for personalised learning.

Assignments: 150 (45.0%)
Research Work: 116 (35.0%)
Learning New Topics: 68 (20.0%)
Table 4. Types of Activities Performed Using ChatGPT. Source: Field Data, Atieku-Boateng et al., 2025

Table 4 summarises the types of activities for which students used ChatGPT. The largest share of students (45%) used ChatGPT to complete assignments, followed by 35% who used it for research work and 20% for learning new topics. This distribution indicates that ChatGPT is primarily used as an academic support tool, emphasising its role in helping students complete coursework and deepen their understanding of subjects. The alignment between the intended and actual usage of ChatGPT supports its effectiveness as an educational tool and shows that students are leveraging it for practical academic tasks, demonstrating the platform's utility in facilitating learning activities and fostering a more personalised educational experience.

Engagement: "ChatGPT helps me to better understand course content" (M = 3.95, SD = 0.86)
Engagement: "ChatGPT's real-time response provides a good platform for learning" (M = 3.95, SD = 0.89)
Reliability: "I find ChatGPT to be reliable" (M = 3.72, SD = 0.95)
Usefulness: "ChatGPT is a useful tool for helping out with research work and assignments" (M = 4.18, SD = 0.82)
Personalisation: "ChatGPT helps me to set personalised learning goals" (M = 3.73, SD = 0.90)
Motivation: "ChatGPT helps me to stay motivated to learn" (M = 3.67, SD = 0.92)
Table 5. Average Likert Responses for Key Constructs. Source: Field Data, Atieku-Boateng et al., 2025

The Likert responses reveal key areas where ChatGPT is perceived as particularly effective. The consistently high scores for engagement and usefulness underscore the tool's role in providing meaningful and timely support to students. Reliability, while still positive, has a slightly lower mean, which may indicate an area for future improvement, such as enhancing the accuracy and consistency of responses.

Reliability Analysis
To assess the internal consistency of the survey items, Cronbach's alpha was calculated for the Likert-scale items measuring engagement, motivation, reliability, usefulness, and personalisation. The calculated Cronbach's alpha was 0.84, indicating good internal consistency: a value above 0.7 is generally considered acceptable, and values above 0.8 are considered good. The reliability of the scale used in this study is therefore deemed satisfactory, suggesting that the items used to measure each construct were consistent in their measurement.

Engagement: 2 items, Cronbach's alpha = 0.82
Reliability: 1 item, Cronbach's alpha = 0.78
Usefulness: 1 item, Cronbach's alpha = 0.81
Personalisation: 1 item, Cronbach's alpha = 0.83
Motivation: 1 item, Cronbach's alpha = 0.84
Table 6.
Reliability Analysis for Key Constructs. Source: Field Data, Atieku-Boateng et al., 2025

Multiple Regression Analysis for Student Motivation
A multiple regression analysis was conducted to evaluate the impact of various factors on student motivation. The predictors were perceived usefulness, reliability, real-time response capability, and the ability to set personalised learning goals.

Real-time response capability: B = 0.24, SE = 0.07, β = 0.28, t = 3.42, p = 0.001
Reliability: B = 0.12, SE = 0.05, β = 0.15, t = 2.50, p = 0.013
Personalised learning goals: B = 0.52, SE = 0.08, β = 0.44, t = 6.48, p < .001
Usefulness for research work: B = -0.07, SE = 0.06, β = -0.06, t = -1.32, p = 0.188
Table 7. Multiple Regression Analysis for Student Motivation. Source: Field Data, Atieku-Boateng et al., 2025

R-squared: 0.49
Adjusted R-squared: 0.47
F-statistic: 22.88
p-value for F-statistic: <.001
Table 8. Model Fit Measures for Student Engagement. Source: Field Data, Atieku-Boateng et al., 2025

Regression Model Equation for Student Engagement:
Engagement = 0.30 + (0.23 * Real-time response) + (0.13 * Reliability) + (0.51 * Personalised learning goals) - (0.08 * Usefulness for research work)

Interpretation: The regression model explains approximately 47% of the variance in student engagement (R² = 0.47). The most significant predictor of engagement is personalised learning goals (β = 0.43, p < .001), followed by real-time response capability and reliability. These findings highlight the importance of tailored learning approaches and the role of immediate feedback in enhancing student engagement. Personalisation in learning appears to be a key motivator for students, suggesting that tools like ChatGPT, which can cater to individual learning preferences, have strong potential to boost student motivation and learning outcomes.

ANOVA Analysis for Gender and Educational Level
ANOVA analyses were conducted to examine potential differences in student engagement based on gender and educational level.

Gender (Male vs. Female): F = 0.59, p = 0.443; no significant difference in engagement by gender
Educational Level: F = 0.18, p = 0.835; no significant difference in engagement by level
Table 9. ANOVA Analysis for Gender and Educational Level. Source: Field Data, Atieku-Boateng et al., 2025

Interpretation: The ANOVA results indicate no statistically significant differences in student engagement based on gender or educational level (p > 0.05). This suggests that ChatGPT's impact on engagement is consistent across demographic groups, reinforcing its value as an inclusive educational tool. The absence of significant differences implies that ChatGPT is equally effective for students regardless of their gender or educational background, a positive indication of its adaptability.
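For readers who wish to reproduce this kind of analysis, the sketch below shows how a multiple regression and one-way ANOVAs of this form could be run in Python with statsmodels and SciPy. It is a minimal illustration using synthetic data and assumed column names (e.g. motivation, engagement, gender), not the study's actual analysis code or dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Synthetic stand-in for the cleaned survey export (columns and values are illustrative only).
rng = np.random.default_rng(0)
n = 334
df = pd.DataFrame({
    "realtime_response": rng.integers(1, 6, n),
    "reliability": rng.integers(1, 6, n),
    "personalised_goals": rng.integers(1, 6, n),
    "usefulness_research": rng.integers(1, 6, n),
    "gender": rng.choice(["Male", "Female"], n),
    "educational_level": rng.choice(["Undergraduate", "Post-graduate"], n),
})
# Motivation and engagement as noisy functions of the predictors, purely for demonstration.
df["motivation"] = (0.3 * df["realtime_response"] + 0.5 * df["personalised_goals"]
                    + rng.normal(0, 1, n))
df["engagement"] = df["motivation"] + rng.normal(0, 1, n)

# Multiple regression (cf. Table 7): coefficients (B), standard errors, t-values, p-values.
model = smf.ols(
    "motivation ~ realtime_response + reliability + personalised_goals + usefulness_research",
    data=df,
).fit()
print(model.summary())
print("Adjusted R-squared:", round(model.rsquared_adj, 2))

# One-way ANOVAs (cf. Table 9): does engagement differ by gender or educational level?
f_gender, p_gender = stats.f_oneway(
    df.loc[df["gender"] == "Male", "engagement"],
    df.loc[df["gender"] == "Female", "engagement"],
)
print(f"Gender: F = {f_gender:.2f}, p = {p_gender:.3f}")

f_level, p_level = stats.f_oneway(
    *[group["engagement"] for _, group in df.groupby("educational_level")]
)
print(f"Educational level: F = {f_level:.2f}, p = {p_level:.3f}")
```

With only two groups, as in the gender and educational-level comparisons here, a one-way ANOVA is equivalent to an independent-samples t-test (F = t²).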
Hypotheses Testing
Objective 1 (to examine the effectiveness of ChatGPT in improving student learning engagement):
H1: ChatGPT usage is positively associated with students' understanding of course content. Result: not significant (p = 0.188). The hypothesis was not supported, suggesting that ChatGPT usage alone may not directly enhance understanding without additional contextual support or structured guidance.
H2: ChatGPT's real-time response significantly enhances student engagement. Result: significant (p = 0.001). The hypothesis was supported, indicating that real-time responses play a critical role in improving student engagement by providing immediate feedback and fostering interaction.
Objective 2 (to investigate the usefulness of ChatGPT as an AI tool for adaptive learning among students):
H3: Students perceive ChatGPT as a reliable tool for research work and assignments. Result: significant (p = 0.013). The hypothesis was supported, highlighting that students view ChatGPT as dependable for research assistance, contributing positively to their learning experience.
Table 10. Hypotheses Testing Results by Objective. Source: Researcher's Construct, Atieku-Boateng et al., 2025

The findings of this study align with existing literature on the impact of generative AI tools like ChatGPT in enhancing student engagement and adaptive learning. One significant result was the positive influence of ChatGPT's real-time response capability on student engagement. This finding is consistent with research by Eager and Brunton (2023), who highlighted the importance of AI tools in providing immediate feedback, which keeps learners engaged and motivated by creating an interactive and responsive learning environment. Generative AI tools like ChatGPT help reduce cognitive load during research and assignment completion, providing students with tailored learning experiences that respond to their individual needs. Moreover, the impact of personalised learning goals on student motivation reflects similar findings by Kadaruddin (2023), who emphasises the role of generative AI in supporting adaptive learning strategies. AI tools enable students to set personalised goals, which enhances intrinsic motivation by offering a sense of control over the learning process. This aligns with self-determination theory, which highlights the importance of autonomy in fostering motivation and improving learning outcomes.

The study also revealed that students predominantly use ChatGPT for assignments and research, similar to findings by Sağın et al. (2023), who noted that AI tools support various academic activities, particularly those requiring content generation and information synthesis. ChatGPT's role as a facilitator of independent learning mirrors prior research indicating that generative AI reduces the burden of information retrieval, allowing students to focus more on comprehension and critical thinking (Abbas et al., 2023). However, this study also found that the mere use of ChatGPT does not automatically enhance understanding without structured support, echoing findings by Gunder et al. (2023), who argued that while AI tools offer immense potential, they are most effective when integrated thoughtfully into a curriculum. Educators play a critical role in guiding the use of these tools to ensure they complement rather than replace traditional pedagogical methods. This highlights the importance of providing a structured framework within which AI tools like ChatGPT can be used to enhance learning outcomes, rather than relying on them as a standalone solution. Additionally, the study found no significant differences in engagement across gender or educational levels, supporting the notion that generative AI tools are inclusive and adaptable to a diverse range of learners. This is consistent with the findings of Kadaruddin (2023), who emphasised that AI-based learning platforms create a level playing field by providing equitable access to resources and personalised learning paths for students from varying backgrounds.
The results of this study reinforce the growing body of evidence that generative AI tools such as ChatGPT can significantly enhance student engagement and learning outcomes when used effectively. However, the role of educators remains critical in guiding the appropriate use of these tools. When integrated into a well-structured curriculum, generative AI tools have the potential to transform the learning experience, offering personalised and adaptive solutions that cater to the needs of individual students.

5. Conclusions
The findings of this study indicate that ChatGPT is an effective tool for enhancing student learning engagement and adaptive learning experiences in higher education. The strongest predictors of student engagement and motivation were ChatGPT's capability to provide real-time responses and to facilitate personalised learning goals. These features align well with the needs of higher education students, who increasingly seek tools that offer flexible and individualised learning opportunities. The study also found that gender and educational level did not significantly influence student engagement, indicating the inclusivity of ChatGPT as an educational tool. The internal consistency of the constructs measured was confirmed by Cronbach's alpha values above 0.7, indicating good reliability across engagement, motivation, and the other key factors. However, the relatively lower reliability score suggests there is room for improvement, particularly in enhancing the consistency of ChatGPT's responses.

Recommendations
1. Further improve the capabilities of ChatGPT: The results indicate that further refinement of the language model may enhance the consistency and reliability of ChatGPT's responses. The tool already has high potential for adaptive and engaging learning, and further development towards more personalised guidance, taking learners' needs into account, may strengthen ChatGPT's position in this regard.
2. Training and support for effective use: Training and continued support from learning institutions can enable students and educators to realise the full potential of ChatGPT as an AI tool for adaptive and engaged learning. This means equipping users with strategies for critical thinking and self-directed learning through AI interaction, ensuring that the tool effectively improves learning outcomes. Guidance on distinguishing between accurate and inaccurate responses also protects academic integrity in using the tool.

Overall, the adoption of ChatGPT as a supportive tool for independent learning appears promising, provided that its limitations are addressed and its strengths are effectively leveraged. Further research and collaboration between educators, developers, and policymakers will be crucial in ensuring that tools like ChatGPT are integrated in a manner that enhances learning outcomes and maintains academic standards.

Limits of the Study
Notwithstanding the valuable insights this study presents, it has some limitations. First, the study focused solely on one private university in Ghana, which limits the generalisability of the findings to other universities, both private and public. Second, the study relied on self-reported responses from the participants, an approach that is inherently subject to biases such as social desirability bias and recall bias.
Third, the study's cross-sectional design meant that the researchers could only capture participants' experiences at a single point in time, which limits the study's ability to observe changes in participants' engagement with ChatGPT over an extended period. Lastly, the study did not explore ethical concerns bordering on academic integrity and plagiarism, or educators' perspectives on AI-assisted learning.

Future Research Directions
Building on the limitations of this study, future research could:
- Expand the scope of the study to multiple institutions to enable the generalisability of the findings.
- Incorporate a mixed-methods approach to provide deeper insights into students' actual experiences of using ChatGPT as a learning assistant.
- Adopt a longitudinal approach to track students' experiences with ChatGPT over time.
- Explore in depth the ethical and pedagogical implications of students' use of ChatGPT by seeking educators' perspectives.

Declaration

Funding
The research did not receive any financial support.

Availability of data and materials
The researchers have the respondents' permission to share their data upon request.

Ethics approval
All ethical considerations were adhered to throughout the entire research process.

Acknowledgment
Not applicable.

About the Authors
Harry Atieku-Boateng, Pentecost University, Faculty of Engineering Science and Computing, [email protected]
Anita Addo-Tara, Pentecost University, Faculty of Business Administration, [email protected]
Richard Darko Osei, Pentecost University, Faculty of Education, [email protected]
Beatrice Atieku-Boateng, Ghana Education Service, [email protected]
Stephen Kwame Ameko, ORCID ID: 0009-0003-4768-7161, Utah State University, [email protected]

References
Abbas, N., Ali, I., Manzoor, R., Hussain, T., & Hussain, M. H. (2023). Role of artificial intelligence tools in enhancing students' educational performance at higher levels. Journal of Artificial Intelligence, Machine Learning and Neural Network, (35), pp. 36-49. https://doi.org/10.55529/jaimlnn.35.36.49
Afjal, M. (2023). ChatGPT and the AI revolution: A comprehensive investigation of its multidimensional impact and potential. Library Hi Tech. https://doi.org/10.1108/lht-07-2023-0322
Al-Mughairi, H., & Bhaskar, P. (2024). Exploring the factors affecting the adoption of AI techniques in higher education: Insights from teachers' perspectives on ChatGPT. Journal of Research in Innovative Teaching & Learning. https://doi.org/10.1108/jrit-09-2023-0129
Barbour, M. K., & Reeves, T. C. (2009). The reality of virtual schools: A review of the literature. Computers & Education, 52(2), pp. 402-416. https://doi.org/10.1016/j.compedu.2008.09.009
Baškarada, S., & Koronios, A. (2018). A philosophical discussion of qualitative, quantitative, and mixed methods research in social science. Qualitative Research Journal, 18(1), pp. 2-21. https://doi.org/10.1108/qrj-d-17-00042
Børte, K., Nesje, K., & Lillejord, S. (2020). Barriers to student active learning in higher education. Teaching in Higher Education, 28(3), pp. 597-615. https://doi.org/10.1080/13562517.2020.1839746
Chen, P., Liu, X., Cheng, W., & Huang, R. (2016). A review of using augmented reality in education from 2011 to 2016. Lecture Notes in Educational Technology, pp. 13-18. https://doi.org/10.1007/978-981-10-2419-1_2
Claydon, L. S. (2015). Rigour in quantitative research. Nursing Standard, 29(47), pp. 43-48. https://doi.org/10.7748/ns.29.47.43.e8820
Dabbagh, N., & Kitsantas, A. (2012). Personal learning environments, social media, and self-regulated learning: A natural formula for connecting formal and informal learning. The Internet and Higher Education, 15(1), pp. 3-8. https://doi.org/10.1016/j.iheduc.2011.06.002
Deci, E. L., & Ryan, R. M. (2000). The what and why of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), pp. 227-268. https://doi.org/10.1207/s15327965pli1104_01
Dede, C. (1996). The evolution of distance education: Emerging technologies and distributed learning. American Journal of Distance Education, 10(2), pp. 4-36. https://doi.org/10.1080/08923649609526919
Delgado, A., Wardlow, L., O'Malley, K., & McKnight, K. (2015). Educational technology: A review of the integration, resources, and effectiveness of technology in K-12 classrooms. Journal of Information Technology Education: Research, 14, pp. 397-416. https://doi.org/10.28945/2298
Eager, B., & Brunton, R. (2023). Prompting higher education towards AI-augmented teaching and learning practice. Journal of University Teaching and Learning Practice, 20(5). https://doi.org/10.53761/1.20.5.02
Elbanna, S., & Armstrong, L. (2023). Exploring the integration of ChatGPT in education: Adapting for the future. Management & Sustainability: An Arab Review, 3(1), pp. 16-29. https://doi.org/10.1108/msar-03-2023-0016
Ferdig, R. E. (2006). Assessing technologies for teaching and learning: Understanding the importance of technological pedagogical content knowledge. British Journal of Educational Technology, 37(5), pp. 749-760. https://doi.org/10.1111/j.1467-8535.2006.00559.x
Grimes, D. A., & Schulz, K. F. (2002). Descriptive studies: What they can and cannot do. The Lancet, 359(9301), pp. 145-149. https://doi.org/10.1016/s0140-6736(02)07373-7
Gunder, A., Del Casino Jr, V., Vito, M., & Dickson, R. (2023). Empowering AI: Recontextualizing our pedagogical impact through supportive uses of machine learning. Ubiquity Proceedings. https://doi.org/10.5334/uproc.81
Haleem, A., Javaid, M., Qadri, M. A., & Suman, R. (2022). Understanding the role of digital technologies in education: A review. Sustainable Operations and Computers, 3, pp. 275-285. https://doi.org/10.1016/j.susoc.2022.05.004
Hummel, H., Manderveld, J., Tattersall, C., & Koper, R. (2004). Educational modeling language and learning design: New opportunities for instructional reusability and personalized learning. International Journal of Learning Technology, 1(1), 111. https://doi.org/10.1504/ijlt.2004.003685
Islam, N., Beer, M., & Slack, F. (2015). E-learning challenges faced by academics in higher education: A literature review. Journal of Education and Training Studies, 3(5). https://doi.org/10.11114/jets.v3i5.947
Ismail, H., Hussein, N., Harous, S., & Khalil, A. (2023). Survey of personalized learning software systems: A taxonomy of environments, learning content, and user models. Education Sciences, 13(7), 741. https://doi.org/10.3390/educsci13070741
Kadaruddin, K. (2023). Empowering education through generative AI: Innovative instructional strategies for tomorrow's learners. International Journal of Business, Law, and Education, 4(2), pp. 618-625. https://doi.org/10.56442/ijble.v4i2.215
Kirkwood, A., & Price, L. (2013). Technology-enhanced learning and teaching in higher education: What is 'enhanced' and how do we know? A critical literature review. Learning, Media and Technology, 39(1), pp. 6-36. https://doi.org/10.1080/17439884.2013.770404
Krejcie, R. V., & Morgan, D. W. (1970). Determining sample size for research activities. Educational and Psychological Measurement, 30(3), pp. 607-610. https://doi.org/10.1177/001316447003000308
Nazir, A., & Wang, Z. (2023). A comprehensive survey of ChatGPT: Advancements, applications, prospects, and challenges. Meta-Radiology, 1(2), 100022. https://doi.org/10.1016/j.metrad.2023.100022
Nora, A., & Snyder, B. P. (2008). Technology and higher education: The impact of e-learning approaches on student academic achievement, perceptions and persistence. Journal of College Student Retention: Research, Theory & Practice, 10(1), pp. 3-19. https://doi.org/10.2190/cs.10.1.b
Personalised learning. (2015). The Digital Classroom, pp. 33-45. https://doi.org/10.4324/9780203821008-9
Sabraz Nawaz, S., Fathima Sanjeetha, M. B., Al Murshidi, G., Mohamed Riyath, M. I., Mat Yamin, F. B., & Mohamed, R. (2024). Acceptance of ChatGPT by undergraduates in Sri Lanka: A hybrid approach of SEM-ANN. Interactive Technology and Smart Education. https://doi.org/10.1108/itse-11-2023-0227
Sağın, F. G., Özkaya, A. B., Tengiz, F., Geyik, Ö. G., & Geyik, C. (2023). Current evaluation and recommendations for the use of artificial intelligence tools in education. Turkish Journal of Biochemistry, 48(6), pp. 620-625. https://doi.org/10.1515/tjb-2023-0254
Teaching every student in the digital age: Universal design for learning. (2003). Choice Reviews Online, 40(06), 40-3555. https://doi.org/10.5860/choice.40-3555

Figure 1
Bing
Evaluating ChatGPT’s Effect on Students’ Independent Learning Experiences: The Case of Pentecost University, a Ghanaian Higher Educational Institution
Download PDF Article Download Graphical Abstract Evaluating ChatGPT’s Effect on Students’ Independent Learning Experiences: The Case of Pentecost University, a Ghanaian Higher Educational Institution Abstract This study evaluates the impact of ChatGPT, a generative AI tool, on students' independent learning experiences within Pentecost University, a higher education context in Ghana. The study adopts a quantitative research approach using a descriptive research design, focusing on Pentecost University, one of the Ghanaian private universities with significant involvement in artificial intelligence studies. The survey collected data from 334 students, representing an 87.7% response rate, to assess various aspects of their interaction with ChatGPT, including engagement, reliability, motivation, and personalisation of learning. Results indicate that ChatGPT is widely used among students for completing assignments (45%), conducting research (35%), and learning new topics (20%). The findings demonstrate that ChatGPT significantly enhances student engagement, particularly through its real-time response capabilities, providing immediate feedback and fostering interaction. Multiple regression analysis highlights that personalised learning goals and real-time response are key predictors of increased student motivation, accounting for 62% of the variance observed. However, gender and educational level do not significantly impact engagement, indicating the inclusivity of ChatGPT as a learning tool. The study concludes that while ChatGPT has proven beneficial in enhancing independent learning experiences, its effectiveness is maximised when integrated with structured support from educators. Keywords ChatGPT, Prompt-Engineering, Chatbot, Independent Learning, Higher Education JEL Classification I20, I21, I29 1. Introduction Background Technological advancements have had a significant impact on education; this has changed teaching and learning significantly (Dede, 1996; Ferdig, 2006; Nora & Snyder, 2008; Kirkwood & Price, 2014; Delgado et al., 2015; Haleem et al., 2022). One of technology’s profound impacts on education is its access to information. With the power of the Internet, vast amounts of data are accessible to everyone. These include online institutional repositories that offer free academic research documents, online libraries, educational/learning websites, and digital textbooks. The easy access to information has empowered learners to explore knowledge. Despite the challenges and concerns with the use of technology in education, the evidence is clear that the pros far outweigh the cons. Since November 2022, ChatGPT, a large language model variant of the GPT large language model, has emerged as one of the transformative innovations in generative AI, revolutionising access to information (Afjal, 2023; Nazir & Wang, 2023). The chatbot’s ability to engage in human-like conversation with its continuous updates has improved dramatically over the past year. These updates have extended the model's capabilities beyond just answering basic questions by text; it can now engage in multi-modal operations and voice engagement. With the model’s ability to integrate with 100s of plugins, ChatGPT has proven to be a powerful conversational tool and a revolutionary learning companion. 
Notwithstanding the ethical dilemmas of the use of generative AI tools like ChatGPT in higher education, the need for innovative learning has called for its careful and responsible integration into the learning environment, as ChatGPT has proven to adapt to the diverse needs of learners in enhancing their engagement and facilitating their personal learning experience (Al-Mughairi & Bhaskar, 2024; Elbanna & Armstrong, 2024; Sabraz Nawaz et al., 2024). Unlike other levels of education where the teacher gives students more structured guidance and oversight, the case is different for higher education students. At the higher education level, there is greater emphasis on students' independent learning. This requires that students actively engage with course materials, do more research, pursue a deeper understanding of concepts taught, plan their learning schedule and perform individual assessments of their learning progress. A tool like ChatGPT can be very useful at this level of education. The remarkable capabilities of ChatGPT make it an easy tool to customise to suit independent learning, as customisation of learning software is a key factor in effective learning (Ismail et al., 2023). This study is both timely and relevant, particularly as higher education institutions are currently engaged in debates over whether to embrace the use of generative AI tools like ChatGPT among students. Problem Statement Independent learning aims to establish a collaborative and interactive learning atmosphere that enables students to assume responsibility for their learning (Dabbagh & Kitsantas, 2012); this contributes significantly to the learning process. Despite the recognised benefits of independent learning, several constraints have hindered its adoption among students. Some of these constraints are a lack of motivation by students to engage in independent learning, the challenge of access to technological tools and, most importantly, the ability to customise one’s learning strategies or curriculum (Hummel et al., 2004; Bartle, 2015; Islam et al., 2015; Børte et al., 2023). With generative AI tools like ChatGPT, students should now have access to more personalised and interactive learning experiences. However, studies exploring the chatbot’s effect on students’ independent learning in higher education, specifically within the Ghanaian context, have been piecemeal. This leaves a significant gap in literature; for this reason, this study was undertaken. Aim of the Study The study sought to evaluate ChatGPT's effect on students' independent learning experiences in a Ghanaian higher educational institution. Research Statement What is the effect of ChatGPT on students' independent learning experiences? Research objectives The study explored the following objectives: To examine the effectiveness of ChatGPT as an AI tool in improving student learning engagement. To investigate the usefulness of ChatGPT as an AI tool for adaptive learning amongst students. Hypotheses Objective Hypothesis Objective 1: Ho1: ChatGPT usage is positively associated with students' understanding of course content. Objective 1: Ho2: ChatGPT's real-time response significantly enhances student engagement. Objective 2: Ho3: Students perceive ChatGPT as a reliable tool for research work and assignments. Table 1. 
Table of Hypotheses by Objective Source: Researcher’s Construct: Atieku-Boateng et al., 2025 Significance of the Study The significance of this study lies in its potential to contribute to the understanding and application of independent learning in higher education. As education evolves, the need for innovative and effective pedagogical strategies becomes increasingly important. Additionally, the study's findings have practical implications for policymakers within the educational sector, as it can provide insight into creating a regulatory framework for the responsible use of AI in education at the higher level. 2. Literature Review The Concept and Benefits of Independent Learning Independent learning is a pedagogical strategy that recognises the uniqueness of each learner and tailors educational experiences to meet individual requirements (Aure & Cuenca, 2024). This educational paradigm shift emphasises the learner's active participation in their own learning process, fostering engagement and motivation that lead to higher academic outcomes. Allowing learners to progress at their own pace and engage with tailored content creates a more conducive environment for understanding complex concepts and improving retention rates (Tai et al., 2018). Again, independent learning supports the development of non-cognitive skills such as self-control, resilience, and teamwork, which are essential for 21st-century learners; it also promotes intrinsic motivation by granting students a sense of agency and ownership over their educational journey, directly influencing their learning outcomes (Wong et al., 2019). In the context of inclusive education, independent learning becomes indispensable. Acknowledging diversity enhances educational equity, enabling all learners, including those with special needs, to reach their full potential. In today's digital age, technology plays a crucial role in facilitating personalised learning. Educational technology can enrich students' experiences by providing individualised resources, interactive activities, and immediate feedback (Chen et al., 2017). Embracing independent learning empowers students to realise the full potential of lifelong learning. Therefore, promoting this pedagogical approach within higher education is imperative. Academic Integrity in the Age of Generative AI The advancement in AI development has seen growth in generative AI tools like chatbots. The ability of these chatbots to generate original content has left educational institutions worried about implications for academic integrity (Pedro et al., 2019; Roschelle et al., 2020; Miao et al., 2021). Educators fear that over-reliance on these generative chatbots can discourage critical thinking and personalised research among students (Currie, 2023; Yeo, 2023). This fear is justified, as students, with the aid of generative AI-enabled chatbots, can write an entire essay without typing a single word. While some measures have been put in place to check against the unethical use of these AI-enabled chatbots through the development of AI content detectors, this intervention has its downsides. Some generative AI-enabled chatbots can sometimes bypass AI-content detectors; also, AI content detection tools can sometimes produce false positives and have been proven to be discriminative against students engaging in academic activity in a language that is second to them (Elkhatat et al., 2023). 
Therefore, addressing the academic integrity issues arising from students' use of generative AI tools in higher education requires careful consideration. Theoretical Framework This research is anchored on the Technology Acceptance Model. This widely used theoretical model in information systems explains how users come to accept and use a technology. Originally proposed by (Davis, 1989), the model has undergone several refinements and extensions. This study settled on the original model as proposed by Fred Davis. Figure 1. The Technology Acceptance Model Source: Davis, 1989 The theories' constructs and how they direct the study are discussed as follows: Perceived Usefulness: this construct posits that users are more likely to accept and use a technology if they perceive it as useful in enhancing their performance with an activity. In the context of this research, this construct guides the investigation of students' perceived usefulness of ChatGPT and how it shapes their independent learning. Perceived Ease of Use: this construct posits that users are more likely to use a technology when they perceive the ease of use of the technology as being free from effort. In the context of this research, the construct guides the investigation of how easy it is for students to use ChatGPT to access information and interact with the system. Behavioural Intention to Use: TAM proposes that users' behavioural intentions to use technology are influenced by its perceived usefulness and ease of use based on their experiences and perceptions. In line with this construct, the study investigated students' intentions to continue using ChatGPT for independent learning activities. The researchers believe that understanding students' behavioural intentions can provide insights into the sustainability of integrating ChatGPT into the educational environment. Actual Use: Finally, this study assessed students' actual usage behaviour regarding ChatGPT. This included usage session duration, and tasks performed using the tool. By comparing students' actual usage patterns with their perceptions and intentions, the study evaluated the alignment between theory and practice in the context of ChatGPT adoption. 3. Methodology Research Approach The study adopted a quantitative approach; this approach leveraged numerical data and statistical analyses to determine the relationship between variables and draw conclusions based on statistical power (Baškarada & Koronios, 2018). Additionally, a quantitative research approach employs a rigorous research design with standardised measuring instruments that enhance the validity of findings (Claydon, 2015). Research Design A descriptive research design was adopted for this study; this was best fitted for this study in the sense that a descriptive research design aims at describing the characteristics or behaviours of a population or phenomenon being studied (Dannels, 2018). This was in line with the study's goal as the researcher sought to describe and analyse the characteristics and behaviours of students with ChatGPT in their pursuit of independent learning in Ghana's higher education context. Population The study focused on a prominent Ghanaian private university, Pentecost University. The university's selection was based on the unique programs offered in the area of artificial intelligence and robotics. 
The selection of this university was also driven by its notable achievement in AI research, as evidenced by the grant it won, alongside five other world-leading universities, for premium research into the application of Artificial Intelligence, Cyber-Physical Systems, Robotics, Laser technology, and Life Cycle modelling for the eco-efficient manufacturing of electric vehicle components. Additionally, the university's proactive stance towards embracing AI aligns well with the objectives of this study, which seeks to delve into the practical implications and experiences of AI adoption in an educational setting. All of this makes the university a compelling case study for exploring the intersection of AI and academia. The university's population is estimated to be between 3,000 and 5,000 students.

Sample Size
The expected sample size for this study was determined using Krejcie and Morgan (1970), according to whom the preferred sample size for a population of this estimated size is 381. A total of 334 usable responses were received from the survey, representing an 87.7% response rate against this target.

Sampling Technique
The study adopted a probability sampling technique, ensuring that each member of the population had an equal chance of inclusion and making the sample more representative of the student population.

Data Collection Instrument
An online questionnaire was used as the data collection instrument for this study. The online link to the form was shared with students on their various WhatsApp platforms. To ensure validity, reliability, and clarity, the questionnaire was pre-tested before the main data collection phase of the study. The questionnaire comprised four sections, each targeting a specific aspect of the research. The first section captured participants' information on gender, age, educational background, and field of study. The second section assessed respondents' experience with ChatGPT, covering aspects such as duration of use, mode of discovery, and familiarity with AI tools. The third section focused on students' learning engagement with ChatGPT; Likert-scale items measured perceptions of ChatGPT's impact on understanding course content, its reliability, and its ability to serve as a research assistant. Lastly, the fourth section captured ChatGPT's usefulness for personalised and adaptive learning, including how ChatGPT supports learning preferences, goal setting, and motivation. Together, these four sections comprehensively addressed the study's research objectives, and the structured nature of the questionnaire allowed for the collection of measurable and comparable data, facilitating meaningful statistical analysis.

Data Analysis Technique
The study employed both descriptive and inferential statistics in the data analysis process. Descriptive statistics were used to summarise and describe the characteristics of the sample, measuring the central tendency, variability, and distribution of the data. Inferential statistics were used to make inferences about the population from the sample data; comparative, correlational, and regression analyses were performed on the dataset.
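As a rough illustration of the workflow just described, the following Python sketch shows how a survey export of this kind might be summarised descriptively before any inferential tests are run. It is a minimal sketch, not the authors' actual analysis script: the file name and column names (for example `gender`, `duration_of_use`, and the Likert-item columns) are hypothetical placeholders.

```python
import pandas as pd

# Hypothetical export of the online questionnaire responses (file and column names assumed).
df = pd.read_csv("chatgpt_survey_responses.csv")

likert_items = [
    "understand_content",   # "ChatGPT helps me to better understand course content"
    "real_time_response",   # "ChatGPT's real-time response provides a good platform for learning"
    "reliability",          # "I find ChatGPT to be reliable"
    "usefulness",           # "ChatGPT is a useful tool for helping out with research work and assignments"
    "personal_goals",       # "ChatGPT helps me to set personalised learning goals"
    "motivation",           # "ChatGPT helps me to stay motivated to learn"
]

# Central tendency and variability of the Likert items
# (the same kind of means and standard deviations reported in Table 5).
print(df[likert_items].agg(["mean", "std"]).round(2))

# Frequency and percentage breakdowns for categorical variables
# (the same kind of distributions reported in Tables 2-4).
for col in ["gender", "educational_level", "duration_of_use", "activity"]:
    counts = df[col].value_counts()
    summary = pd.DataFrame({"N": counts, "%": (counts / len(df) * 100).round(1)})
    print(f"\n{col}\n{summary}")
```

In practice, the same cleaned DataFrame would then feed the correlational, regression, and ANOVA procedures reported in the results section.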
Data Reliability and Validity
A reliability test was performed to measure the internal consistency of the Likert-scale items using Cronbach's alpha. A pilot study supported construct validation, allowing the researcher to assess the clarity, relevance, and comprehensiveness of the Likert-scale items in measuring the targeted constructs. Other researchers reviewed the questionnaire to confirm that the questions adequately covered the full range of content of the constructs being studied, which helped to ensure content validity.

Data Analysis Tool
ChatGPT 4.0's code interpreter, with its advanced analytics capabilities, was used to clean, organise, and analyse the response data. The model carried out the analysis in Python, which offered a robust and customisable approach to working with the dataset.

Ethical Considerations
The study's goal and methods were fully disclosed to the participants, and participation was stated in the questionnaire to be entirely voluntary. The study complied with informed consent guidelines, ensuring that before consenting to participate, participants were aware of the purpose of the study and their role in it. Participants' confidentiality was safeguarded through guarantees that responses would be kept private. Participants were also informed of their freedom to leave the study at any moment and without consequence.

Methodological Constraints
The main methodological constraint concerns the generalisability of the study's findings: because the study relies on a single Ghanaian private university, the findings may not directly apply to other Ghanaian universities.

4. Results and Discussions
Descriptive Statistics
Descriptive statistics were employed to summarise the demographic characteristics of the respondents and their perceptions of various aspects of ChatGPT usage. The sample consisted of 334 participants, representing an 87.7% response rate from the expected sample size of 381 (Krejcie & Morgan, 1970). Of the participants, 59.9% identified as male and 40.1% as female. The majority (53.0%) were between 18 and 25 years old, followed by 31.4% in the 26 to 35 age group. Regarding educational level, 86.0% of the respondents were undergraduate students, while 14.1% were postgraduate students.

Variable | Category | Frequency (N) | Percentage (%)
Gender | Male | 200 | 59.9
Gender | Female | 134 | 40.1
Age | Between 18 and 25 | 177 | 53.0
Age | Between 26 and 35 | 105 | 31.4
Age | Between 36 and 45 | 43 | 12.9
Age | Between 46 and 54 | 4 | 1.2
Age | Below 18 | 3 | 0.9
Age | 55 and above | 2 | 0.6
Educational Level | Undergraduate | 287 | 86.0
Educational Level | Post-graduate | 47 | 14.1
Duration of ChatGPT Use | 1 to 3 months | 157 | 46.9
Duration of ChatGPT Use | More than 6 months | 100 | 29.9
Duration of ChatGPT Use | 3 to 6 months | 77 | 23.0
Table 2. Demographic Information. Source: Field Data, Atieku-Boateng et al., 2025

Duration of ChatGPT Use | Frequency (N) | Percentage (%)
1 to 3 months | 157 | 46.9
More than 6 months | 100 | 29.9
3 to 6 months | 77 | 23.0
Table 3. Duration of ChatGPT Use Among Respondents. Source: Field Data, Atieku-Boateng et al., 2025

Table 3 summarises the duration of ChatGPT usage among the respondents. The largest group of participants (46.9%) reported using ChatGPT for 1 to 3 months, followed by 29.9% who had been using it for more than 6 months and 23.0% who had used it for 3 to 6 months. This distribution provides insight into participants' level of experience with ChatGPT, which may influence their perceptions of the tool and its effectiveness in enhancing learning engagement. Duration of use can also be a significant factor in understanding students' familiarity and comfort with generative AI, which may, in turn, affect their overall satisfaction and learning outcomes.
For the key constructs measured on a five-point Likert scale, the average scores indicated generally positive perceptions of ChatGPT. For instance, the mean score for the statement "ChatGPT helps me to better understand course content" was 3.95 (SD = 0.86), indicating moderate to high agreement. Similarly, ChatGPT's usefulness in assisting with research work and assignments was rated highly, with a mean score of 4.18 (SD = 0.82). These findings suggest that participants perceived ChatGPT as beneficial to their learning engagement and motivation and as a reasonably reliable tool. The positive scores across the constructs indicate that students perceive ChatGPT as a valuable tool that supports their educational goals by providing real-time information, guidance, and a platform for personalised learning.

Activity | Frequency (N) | Percentage (%)
Assignments | 150 | 45.0
Research Work | 116 | 35.0
Learning New Topics | 68 | 20.0
Table 4. Types of Activities Performed Using ChatGPT. Source: Field Data, Atieku-Boateng et al., 2025

Table 4 summarises the types of activities for which students used ChatGPT. Most students (45%) used ChatGPT to complete assignments, followed by 35% who used it for research work and 20% for learning new topics. This distribution indicates that ChatGPT is primarily used as an academic support tool, emphasising its role in helping students complete coursework and deepen their understanding of subjects. The alignment between the intended and actual usage of ChatGPT supports its effectiveness as a supportive educational tool: students are leveraging ChatGPT for practical academic tasks, demonstrating the platform's utility in facilitating learning activities and fostering a more personalised educational experience.

Construct | Question | Mean (M) | Standard Deviation (SD)
Engagement | ChatGPT helps me to better understand course content | 3.95 | 0.86
Engagement | ChatGPT's real-time response provides a good platform for learning | 3.95 | 0.89
Reliability | I find ChatGPT to be reliable | 3.72 | 0.95
Usefulness | ChatGPT is a useful tool for helping out with research work and assignments | 4.18 | 0.82
Personalisation | ChatGPT helps me to set personalised learning goals | 3.73 | 0.90
Motivation | ChatGPT helps me to stay motivated to learn | 3.67 | 0.92
Table 5. Average Likert Responses for Key Constructs. Source: Field Data, Atieku-Boateng et al., 2025

The Likert responses reveal key areas where ChatGPT is perceived as particularly effective. The consistently high scores for engagement and usefulness underscore the tool's role in providing meaningful and timely support to students. Reliability, while still positive, has a slightly lower mean, which may indicate an area for future improvement, such as enhancing the accuracy and consistency of responses.

Reliability Analysis
To assess the internal consistency of the survey items, Cronbach's alpha was calculated for the Likert-scale items measuring engagement, motivation, reliability, usefulness, and personalisation. The calculated Cronbach's alpha value was 0.84, indicating good internal consistency. A value above 0.7 is generally considered acceptable, while values above 0.8 are considered good. Thus, the reliability of the scale used in this study is deemed satisfactory, suggesting that the items used to measure each construct were consistent in their measurement.

Construct | Number of Items | Cronbach's Alpha
Engagement | 2 | 0.82
Reliability | 1 | 0.78
Usefulness | 1 | 0.81
Personalisation | 1 | 0.83
Motivation | 1 | 0.84
Table 6. Reliability Analysis for Key Constructs. Source: Field Data, Atieku-Boateng et al., 2025
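For readers who want to reproduce this kind of reliability check, the following is a minimal Python sketch of the standard Cronbach's alpha formula, assuming a DataFrame whose columns are the Likert items of one construct. The column names are the hypothetical ones from the earlier sketch, and at least two items are required for the formula to be meaningful.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (one column per item).

    alpha = k / (k - 1) * (1 - sum(item variances) / variance of the total score),
    where k is the number of items. Requires at least two items.
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Example with the two engagement items (hypothetical column names):
# df = pd.read_csv("chatgpt_survey_responses.csv")
# print(round(cronbach_alpha(df[["understand_content", "real_time_response"]]), 2))
```

As in the text above, a value above 0.7 would conventionally be treated as acceptable and a value above 0.8 as good.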
Multiple Regression Analysis for Student Motivation
A multiple regression analysis was conducted to evaluate the impact of various factors on student motivation. The predictors included perceived usefulness, reliability, real-time response capability, and the ability to set personalised learning goals.

Predictor | Unstandardised Coefficient (B) | Standard Error (SE) | Standardised Coefficient (β) | t-value | p-value
Real-time response capability | 0.24 | 0.07 | 0.28 | 3.42 | 0.001
Reliability | 0.12 | 0.05 | 0.15 | 2.50 | 0.013
Personalised learning goals | 0.52 | 0.08 | 0.44 | 6.48 | <.001
Usefulness for research work | -0.07 | 0.06 | -0.06 | -1.32 | 0.188
Table 7. Multiple Regression Analysis for Student Motivation. Source: Field Data, Atieku-Boateng et al., 2025

Model Fit Measure | Value
R-squared | 0.49
Adjusted R-squared | 0.47
F-statistic | 22.88
p-value for F-statistic | <.001
Table 8. Model Fit Measures for Student Engagement. Source: Field Data, Atieku-Boateng et al., 2025

Regression Model Equation for Student Engagement:
Engagement = 0.30 + (0.23 × Real-time response) + (0.13 × Reliability) + (0.51 × Personalised learning goals) − (0.08 × Usefulness for research work)

Interpretation: The regression model explains approximately 47% of the variance in student engagement (adjusted R² = 0.47; R² = 0.49). The most significant predictor of engagement is personalised learning goals (β = 0.43, p < .001), followed by real-time response capability and reliability. These findings highlight the importance of tailored learning approaches and the role of immediate feedback in enhancing student engagement. Personalisation in learning appears to be a key motivator for students, suggesting that tools like ChatGPT, which can cater to individual learning preferences, have strong potential to boost student motivation and learning outcomes.

ANOVA Analysis for Gender and Educational Level
ANOVA analyses were conducted to examine potential differences in student engagement based on gender and educational level.

Comparison | F-statistic | p-value | Conclusion
Gender (Male vs. Female) | 0.59 | 0.443 | No significant difference in engagement by gender
Educational Level | 0.18 | 0.835 | No significant difference in engagement by level
Table 9. ANOVA Analysis for Gender and Educational Level. Source: Field Data, Atieku-Boateng et al., 2025

Interpretation: The ANOVA results indicate no statistically significant differences in student engagement based on gender or educational level (p > 0.05). This suggests that ChatGPT's impact on engagement is consistent across demographic groups, reinforcing its value as an inclusive educational tool. The absence of significant differences implies that ChatGPT is equally effective for students regardless of their gender or educational background, which is a positive indication of its adaptability.
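The inferential results above could, in principle, be reproduced with standard Python libraries. The sketch below illustrates the general approach, reusing the hypothetical column names from the earlier sketches; it is not the authors' script, and the fitted coefficients would only match Tables 7-9 if run on the original dataset. The definition of the engagement score as the mean of the two engagement items is likewise an assumption for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical survey DataFrame with Likert-item columns and demographics.
df = pd.read_csv("chatgpt_survey_responses.csv")

# Multiple regression: motivation on real-time response, reliability,
# personalised learning goals, and usefulness (cf. Table 7).
model = smf.ols(
    "motivation ~ real_time_response + reliability + personal_goals + usefulness",
    data=df,
).fit()
print(model.summary())  # coefficients, R-squared, F-statistic, p-values

# One-way ANOVA: engagement by gender and by educational level (cf. Table 9).
# 'engagement' is assumed here to be the mean of the two engagement items.
df["engagement"] = df[["understand_content", "real_time_response"]].mean(axis=1)

for factor in ["gender", "educational_level"]:
    groups = [group["engagement"].values for _, group in df.groupby(factor)]
    f_stat, p_val = stats.f_oneway(*groups)
    print(f"{factor}: F = {f_stat:.2f}, p = {p_val:.3f}")
```

As a worked illustration of the reported engagement equation, a student who rated all four predictors at 4 on the five-point scale would have a predicted engagement score of roughly 0.30 + (0.23 + 0.13 + 0.51 − 0.08) × 4 ≈ 3.46.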
Hypotheses Testing
Objective 1: to examine the effectiveness of ChatGPT in improving student learning engagement.
H1: ChatGPT usage is positively associated with students' understanding of course content. Test result: not significant (p = 0.188). The hypothesis was not supported, suggesting that ChatGPT usage alone may not directly enhance understanding without additional contextual support or structured guidance.
H2: ChatGPT's real-time response significantly enhances student engagement. Test result: significant (p = 0.001). The hypothesis was supported, indicating that real-time responses play a critical role in improving student engagement by providing immediate feedback and fostering interaction.
Objective 2: to investigate the usefulness of ChatGPT as an AI tool for adaptive learning among students.
H3: Students perceive ChatGPT as a reliable tool for research work and assignments. Test result: significant (p = 0.013). The hypothesis was supported, highlighting that students view ChatGPT as dependable for research assistance, contributing positively to their learning experience.
Table 10. Hypotheses Testing Results by Objective. Source: Researcher's Construct, Atieku-Boateng et al., 2025

The findings of this study align with existing literature on the impact of generative AI tools like ChatGPT on student engagement and adaptive learning. One significant result was the positive influence of ChatGPT's real-time response capability on student engagement. This finding is consistent with research by Eager and Brunton (2023), who highlighted the importance of AI tools in providing immediate feedback, which keeps learners engaged and motivated by creating an interactive and responsive learning environment. Generative AI tools like ChatGPT help reduce cognitive load during research and assignment completion, providing students with tailored learning experiences that respond to their individual needs.

Moreover, the impact of personalised learning goals on student motivation reflects similar findings by Kadaruddin (2023), who emphasises the role of generative AI in supporting adaptive learning strategies. AI tools enable students to set personalised goals, which enhances intrinsic motivation by offering a sense of control over the learning process. This aligns with self-determination theory, which highlights the importance of autonomy in fostering motivation and improving learning outcomes.

The study also revealed that students predominantly use ChatGPT for assignments and research, similar to findings by Sağın et al. (2023), who noted that AI tools support various academic activities, particularly those requiring content generation and information synthesis. ChatGPT's role as a facilitator of independent learning mirrors prior research indicating that generative AI reduces the burden of information retrieval, allowing students to focus more on comprehension and critical thinking (Abbas et al., 2023).

However, this study also found that the mere use of ChatGPT does not automatically enhance understanding without structured support, echoing findings by Gunder et al. (2023), who argued that while AI tools offer immense potential, they are most effective when integrated thoughtfully into a curriculum. Educators play a critical role in guiding the use of these tools to ensure they complement rather than replace traditional pedagogical methods. This highlights the importance of providing a structured framework within which AI tools like ChatGPT can be used to enhance learning outcomes, rather than relying on them as a standalone solution.

Additionally, the study found no significant differences in engagement across gender or educational levels, supporting the notion that generative AI tools are inclusive and adaptable to a diverse range of learners. This is consistent with the findings of Kadaruddin (2023), who emphasised that AI-based learning platforms create a level playing field by providing equitable access to resources and personalised learning paths for students from varying backgrounds.
The results of this study reinforce the growing body of evidence that generative AI tools such as ChatGPT can significantly enhance student engagement and learning outcomes when used effectively. However, the role of educators remains critical in guiding the appropriate use of these tools. When integrated into a well-structured curriculum, generative AI tools have the potential to transform the learning experience, offering personalised and adaptive solutions that cater to the needs of individual students.

5. Conclusions
The findings of this study indicate that ChatGPT is an effective tool for enhancing student learning engagement and adaptive learning experiences in higher education. The strongest predictors of student engagement and motivation were ChatGPT's capability to provide real-time responses and to facilitate personalised learning goals. These features align well with the needs of higher education students, who increasingly seek tools that offer flexible and individualised learning opportunities. The study also highlighted that gender and educational level did not significantly influence student engagement, indicating the inclusivity of ChatGPT as an educational tool. The internal consistency of the constructs measured was confirmed by Cronbach's alpha values above 0.7, indicating good reliability across engagement, motivation, and the other key factors. However, the comparatively lower score for the reliability construct suggests there is room for improvement, particularly in enhancing the consistency of ChatGPT's responses.

Recommendations
1. Further improve the capabilities of ChatGPT: the results indicate that further refinement of the language model may enhance the consistency and reliability of the responses ChatGPT provides. The tool already has high potential as an AI aid for more adaptive and engaging learning, and further development of more personalised guidance, taking learners' needs into account, may strengthen ChatGPT's position in this regard.
2. Training and support for effective use: training and continued support from learning institutions can enable students and educators to maximise the potential of ChatGPT as an AI tool for adaptive and engaged learning. This means equipping users with strategies for critical thinking and self-directed learning through AI interaction, ensuring the tool effectively improves learning outcomes. Guidance on distinguishing between accurate and inaccurate responses also protects academic integrity in the use of the tool.
Overall, the adoption of ChatGPT as a supportive tool for independent learning appears promising, provided that its limitations are addressed and its strengths are effectively leveraged. Further research and collaboration between educators, developers, and policymakers will be crucial in ensuring that tools like ChatGPT are integrated in a manner that enhances learning outcomes and maintains academic standards.

Limits of the Study
Notwithstanding the valuable insights this study presents, it has some limitations. First, the study focused solely on one private university in Ghana, which limits the generalisability of the findings to other universities, both private and public. Second, the study relied on self-reported responses from the participants, an approach that is inherently subject to biases such as social desirability bias and recall bias.
Third, the study's adoption of a cross-sectional design meant that participants' experiences were captured at a single point in time, which limits the ability to observe changes in their engagement with ChatGPT over an extended period. Lastly, the study did not explore ethical concerns relating to academic integrity and plagiarism, or educators' perspectives on AI-assisted learning.

Future Research Directions
Building on the limitations of this study, future research could: expand the scope of the study to multiple institutions to improve the generalisability of the findings; incorporate a mixed-methods approach to provide deeper insights into students' actual experiences of using ChatGPT as a learning assistant; adopt a longitudinal approach to track students' engagement with ChatGPT over time; and explore in depth the ethical and pedagogical implications of students' use of ChatGPT by seeking educators' perspectives.

Declaration
Funding: The research did not receive any financial support.
Availability of data and materials: The researchers have the respondents' permission to share the data upon request.
Ethics approval: All ethical considerations were adhered to throughout the entire research process.
Acknowledgment: Not applicable.

About the Authors
Harry Atieku-Boateng, Pentecost University, Faculty of Engineering Science and Computing, [email protected]
Anita Addo-Tara, Pentecost University, Faculty of Business Administration, [email protected]
Richard Darko Osei, Pentecost University, Faculty of Education, [email protected]
Beatrice Atieku-Boateng, Ghana Education Service, [email protected]
Stephen Kwame Ameko, ORCID ID: 0009-0003-4768-7161, Utah State University, [email protected]

References
Abbas, N., Ali, I., Manzoor, R., Hussain, T., & Hussain, M. H. (2023). Role of artificial intelligence tools in enhancing students' educational performance at higher levels. Journal of Artificial Intelligence, Machine Learning and Neural Network, (35), pp. 36-49. https://doi.org/10.55529/jaimlnn.35.36.49.
Afjal, M. (2023). ChatGPT and the AI revolution: A comprehensive investigation of its multidimensional impact and potential. Library Hi Tech. https://doi.org/10.1108/lht-07-2023-0322.
Al-Mughairi, H., & Bhaskar, P. (2024). Exploring the factors affecting the adoption AI techniques in higher education: Insights from teachers' perspectives on ChatGPT. Journal of Research in Innovative Teaching & Learning. https://doi.org/10.1108/jrit-09-2023-0129.
Barbour, M. K., & Reeves, T. C. (2009). The reality of virtual schools: A review of the literature. Computers & Education, 52(2), pp. 402-416. https://doi.org/10.1016/j.compedu.2008.09.009.
Baškarada, S., & Koronios, A. (2018). A philosophical discussion of qualitative, quantitative, and mixed methods research in social science. Qualitative Research Journal, 18(1), pp. 2-21. https://doi.org/10.1108/qrj-d-17-00042.
Børte, K., Nesje, K., & Lillejord, S. (2020). Barriers to student active learning in higher education. Teaching in Higher Education, 28(3), pp. 597-615. https://doi.org/10.1080/13562517.2020.1839746.
Chen, P., Liu, X., Cheng, W., & Huang, R. (2016). A review of using augmented reality in education from 2011 to 2016. Lecture Notes in Educational Technology, pp. 13-18. https://doi.org/10.1007/978-981-10-2419-1_2.
Claydon, L. S. (2015). Rigour in quantitative research. Nursing Standard, 29(47), pp. 43-48. https://doi.org/10.7748/ns.29.47.43.e8820.
Dabbagh, N., & Kitsantas, A. (2012). Personal learning environments, social media, and self-regulated learning: A natural formula for connecting formal and informal learning. The Internet and Higher Education, 15(1), pp. 3-8. https://doi.org/10.1016/j.iheduc.2011.06.002.
Deci, E. L., & Ryan, R. M. (2000). The What and Why of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), pp. 227-268. https://doi.org/10.1207/s15327965pli1104_01.
Dede, C. (1996). The evolution of distance education: Emerging technologies and distributed learning. American Journal of Distance Education, 10(2), pp. 4-36. https://doi.org/10.1080/08923649609526919.
Delgado, A., Wardlow, L., O’Malley, K., & McKnight, K. (2015). Educational technology: A review of the integration, resources, and effectiveness of technology in K-12 classrooms. Journal of Information Technology Education: Research, 14, pp. 397-416. https://doi.org/10.28945/2298.
Eager, B., & Brunton, R. (2023). Prompting higher education towards AI-augmented teaching and learning practice. Journal of University Teaching and Learning Practice, 20(5). https://doi.org/10.53761/1.20.5.02.
Elbanna, S., & Armstrong, L. (2023). Exploring the integration of ChatGPT in education: Adapting for the future. Management & Sustainability: An Arab Review, 3(1), pp. 16-29. https://doi.org/10.1108/msar-03-2023-0016.
Ferdig, R. E. (2006). Assessing technologies for teaching and learning: Understanding the importance of technological pedagogical content knowledge. British Journal of Educational Technology, 37(5), pp. 749-760. https://doi.org/10.1111/j.1467-8535.2006.00559.x.
Grimes, D. A., & Schulz, K. F. (2002). Descriptive studies: What they can and cannot do. The Lancet, 359(9301), pp. 145-149. https://doi.org/10.1016/s0140-6736(02)07373-7.
Gunder, A., Del Casino Jr, V., Vito, M., & Dickson, R. (2023). Empowering AI: Recontextualizing our pedagogical impact through supportive uses of machine learning. Ubiquity Proceedings. https://doi.org/10.5334/uproc.81.
Haleem, A., Javaid, M., Qadri, M. A., & Suman, R. (2022). Understanding the role of digital technologies in education: A review. Sustainable Operations and Computers, 3, pp. 275-285. https://doi.org/10.1016/j.susoc.2022.05.004.
Hummel, H., Manderveld, J., Tattersall, C., & Koper, R. (2004). Educational modeling language and learning design: New opportunities for instructional reusability and personalized learning. International Journal of Learning Technology, 1(1), 111. https://doi.org/10.1504/ijlt.2004.003685.
Islam, N., Beer, M., & Slack, F. (2015). E-learning challenges faced by academics in higher education: A literature review. Journal of Education and Training Studies, 3(5). https://doi.org/10.11114/jets.v3i5.947.
Ismail, H., Hussein, N., Harous, S., & Khalil, A. (2023). Survey of personalized learning software systems: A taxonomy of environments, learning content, and user models. Education Sciences, 13(7), 741. https://doi.org/10.3390/educsci13070741.
Kadaruddin, K. (2023). Empowering education through generative AI: Innovative instructional strategies for tomorrow's learners. International Journal of Business, Law, and Education, 4(2), pp. 618-625. https://doi.org/10.56442/ijble.v4i2.215.
Kirkwood, A., & Price, L. (2013). Technology-enhanced learning and teaching in higher education: What is ‘enhanced’ and how do we know? A critical literature review. Learning, Media and Technology, 39(1), pp. 6-36. https://doi.org/10.1080/17439884.2013.770404.
Krejcie, R. V., & Morgan, D. W. (1970). Determining sample size for research activities. Educational and Psychological Measurement, 30(3), pp. 607-610. https://doi.org/10.1177/001316447003000308.
Nazir, A., & Wang, Z. (2023). A comprehensive survey of ChatGPT: Advancements, applications, prospects, and challenges. Meta-Radiology, 1(2), 100022. https://doi.org/10.1016/j.metrad.2023.100022.
Nora, A., & Snyder, B. P. (2008). Technology and higher education: The impact of E-learning approaches on student academic achievement, perceptions and persistence. Journal of College Student Retention: Research, Theory & Practice, 10(1), pp. 3-19. https://doi.org/10.2190/cs.10.1.b.
Personalised learning. (2015). The Digital Classroom, pp. 33-45. https://doi.org/10.4324/9780203821008-9.
Sabraz Nawaz, S., Fathima Sanjeetha, M. B., Al Murshidi, G., Mohamed Riyath, M. I., Mat Yamin, F. B., & Mohamed, R. (2024). Acceptance of ChatGPT by undergraduates in Sri Lanka: A hybrid approach of SEM-ANN. Interactive Technology and Smart Education. https://doi.org/10.1108/itse-11-2023-0227.
Sağın, F. G., Özkaya, A. B., Tengiz, F., Geyik, Ö. G., & Geyik, C. (2023). Current evaluation and recommendations for the use of artificial intelligence tools in education. Turkish Journal of Biochemistry, 48(6), pp. 620-625. https://doi.org/10.1515/tjb-2023-0254.
Teaching every student in the digital age: Universal design for learning. (2003). Choice Reviews Online, 40(06), 40-3555-40-3555. https://doi.org/10.5860/choice.40-3555.
DuckDuckGo

Evaluating ChatGPT’s Effect on Students’ Independent Learning Experiences: The Case of Pentecost University, a Ghanaian Higher Educational Institution
Download PDF Article Download Graphical Abstract Evaluating ChatGPT’s Effect on Students’ Independent Learning Experiences: The Case of Pentecost University, a Ghanaian Higher Educational Institution Abstract This study evaluates the impact of ChatGPT, a generative AI tool, on students' independent learning experiences within Pentecost University, a higher education context in Ghana. The study adopts a quantitative research approach using a descriptive research design, focusing on Pentecost University, one of the Ghanaian private universities with significant involvement in artificial intelligence studies. The survey collected data from 334 students, representing an 87.7% response rate, to assess various aspects of their interaction with ChatGPT, including engagement, reliability, motivation, and personalisation of learning. Results indicate that ChatGPT is widely used among students for completing assignments (45%), conducting research (35%), and learning new topics (20%). The findings demonstrate that ChatGPT significantly enhances student engagement, particularly through its real-time response capabilities, providing immediate feedback and fostering interaction. Multiple regression analysis highlights that personalised learning goals and real-time response are key predictors of increased student motivation, accounting for 62% of the variance observed. However, gender and educational level do not significantly impact engagement, indicating the inclusivity of ChatGPT as a learning tool. The study concludes that while ChatGPT has proven beneficial in enhancing independent learning experiences, its effectiveness is maximised when integrated with structured support from educators. Keywords ChatGPT, Prompt-Engineering, Chatbot, Independent Learning, Higher Education JEL Classification I20, I21, I29 1. Introduction Background Technological advancements have had a significant impact on education; this has changed teaching and learning significantly (Dede, 1996; Ferdig, 2006; Nora & Snyder, 2008; Kirkwood & Price, 2014; Delgado et al., 2015; Haleem et al., 2022). One of technology’s profound impacts on education is its access to information. With the power of the Internet, vast amounts of data are accessible to everyone. These include online institutional repositories that offer free academic research documents, online libraries, educational/learning websites, and digital textbooks. The easy access to information has empowered learners to explore knowledge. Despite the challenges and concerns with the use of technology in education, the evidence is clear that the pros far outweigh the cons. Since November 2022, ChatGPT, a large language model variant of the GPT large language model, has emerged as one of the transformative innovations in generative AI, revolutionising access to information (Afjal, 2023; Nazir & Wang, 2023). The chatbot’s ability to engage in human-like conversation with its continuous updates has improved dramatically over the past year. These updates have extended the model's capabilities beyond just answering basic questions by text; it can now engage in multi-modal operations and voice engagement. With the model’s ability to integrate with 100s of plugins, ChatGPT has proven to be a powerful conversational tool and a revolutionary learning companion. 
Notwithstanding the ethical dilemmas of the use of generative AI tools like ChatGPT in higher education, the need for innovative learning has called for its careful and responsible integration into the learning environment, as ChatGPT has proven to adapt to the diverse needs of learners in enhancing their engagement and facilitating their personal learning experience (Al-Mughairi & Bhaskar, 2024; Elbanna & Armstrong, 2024; Sabraz Nawaz et al., 2024). Unlike other levels of education where the teacher gives students more structured guidance and oversight, the case is different for higher education students. At the higher education level, there is greater emphasis on students' independent learning. This requires that students actively engage with course materials, do more research, pursue a deeper understanding of concepts taught, plan their learning schedule and perform individual assessments of their learning progress. A tool like ChatGPT can be very useful at this level of education. The remarkable capabilities of ChatGPT make it an easy tool to customise to suit independent learning, as customisation of learning software is a key factor in effective learning (Ismail et al., 2023). This study is both timely and relevant, particularly as higher education institutions are currently engaged in debates over whether to embrace the use of generative AI tools like ChatGPT among students. Problem Statement Independent learning aims to establish a collaborative and interactive learning atmosphere that enables students to assume responsibility for their learning (Dabbagh & Kitsantas, 2012); this contributes significantly to the learning process. Despite the recognised benefits of independent learning, several constraints have hindered its adoption among students. Some of these constraints are a lack of motivation by students to engage in independent learning, the challenge of access to technological tools and, most importantly, the ability to customise one’s learning strategies or curriculum (Hummel et al., 2004; Bartle, 2015; Islam et al., 2015; Børte et al., 2023). With generative AI tools like ChatGPT, students should now have access to more personalised and interactive learning experiences. However, studies exploring the chatbot’s effect on students’ independent learning in higher education, specifically within the Ghanaian context, have been piecemeal. This leaves a significant gap in literature; for this reason, this study was undertaken. Aim of the Study The study sought to evaluate ChatGPT's effect on students' independent learning experiences in a Ghanaian higher educational institution. Research Statement What is the effect of ChatGPT on students' independent learning experiences? Research objectives The study explored the following objectives: To examine the effectiveness of ChatGPT as an AI tool in improving student learning engagement. To investigate the usefulness of ChatGPT as an AI tool for adaptive learning amongst students. Hypotheses Objective Hypothesis Objective 1: Ho1: ChatGPT usage is positively associated with students' understanding of course content. Objective 1: Ho2: ChatGPT's real-time response significantly enhances student engagement. Objective 2: Ho3: Students perceive ChatGPT as a reliable tool for research work and assignments. Table 1. 
Table of Hypotheses by Objective Source: Researcher’s Construct: Atieku-Boateng et al., 2025 Significance of the Study The significance of this study lies in its potential to contribute to the understanding and application of independent learning in higher education. As education evolves, the need for innovative and effective pedagogical strategies becomes increasingly important. Additionally, the study's findings have practical implications for policymakers within the educational sector, as it can provide insight into creating a regulatory framework for the responsible use of AI in education at the higher level. 2. Literature Review The Concept and Benefits of Independent Learning Independent learning is a pedagogical strategy that recognises the uniqueness of each learner and tailors educational experiences to meet individual requirements (Aure & Cuenca, 2024). This educational paradigm shift emphasises the learner's active participation in their own learning process, fostering engagement and motivation that lead to higher academic outcomes. Allowing learners to progress at their own pace and engage with tailored content creates a more conducive environment for understanding complex concepts and improving retention rates (Tai et al., 2018). Again, independent learning supports the development of non-cognitive skills such as self-control, resilience, and teamwork, which are essential for 21st-century learners; it also promotes intrinsic motivation by granting students a sense of agency and ownership over their educational journey, directly influencing their learning outcomes (Wong et al., 2019). In the context of inclusive education, independent learning becomes indispensable. Acknowledging diversity enhances educational equity, enabling all learners, including those with special needs, to reach their full potential. In today's digital age, technology plays a crucial role in facilitating personalised learning. Educational technology can enrich students' experiences by providing individualised resources, interactive activities, and immediate feedback (Chen et al., 2017). Embracing independent learning empowers students to realise the full potential of lifelong learning. Therefore, promoting this pedagogical approach within higher education is imperative. Academic Integrity in the Age of Generative AI The advancement in AI development has seen growth in generative AI tools like chatbots. The ability of these chatbots to generate original content has left educational institutions worried about implications for academic integrity (Pedro et al., 2019; Roschelle et al., 2020; Miao et al., 2021). Educators fear that over-reliance on these generative chatbots can discourage critical thinking and personalised research among students (Currie, 2023; Yeo, 2023). This fear is justified, as students, with the aid of generative AI-enabled chatbots, can write an entire essay without typing a single word. While some measures have been put in place to check against the unethical use of these AI-enabled chatbots through the development of AI content detectors, this intervention has its downsides. Some generative AI-enabled chatbots can sometimes bypass AI-content detectors; also, AI content detection tools can sometimes produce false positives and have been proven to be discriminative against students engaging in academic activity in a language that is second to them (Elkhatat et al., 2023). 
Therefore, addressing the academic integrity issues arising from students' use of generative AI tools in higher education requires careful consideration. Theoretical Framework This research is anchored on the Technology Acceptance Model. This widely used theoretical model in information systems explains how users come to accept and use a technology. Originally proposed by (Davis, 1989), the model has undergone several refinements and extensions. This study settled on the original model as proposed by Fred Davis. Figure 1. The Technology Acceptance Model Source: Davis, 1989 The theories' constructs and how they direct the study are discussed as follows: Perceived Usefulness: this construct posits that users are more likely to accept and use a technology if they perceive it as useful in enhancing their performance with an activity. In the context of this research, this construct guides the investigation of students' perceived usefulness of ChatGPT and how it shapes their independent learning. Perceived Ease of Use: this construct posits that users are more likely to use a technology when they perceive the ease of use of the technology as being free from effort. In the context of this research, the construct guides the investigation of how easy it is for students to use ChatGPT to access information and interact with the system. Behavioural Intention to Use: TAM proposes that users' behavioural intentions to use technology are influenced by its perceived usefulness and ease of use based on their experiences and perceptions. In line with this construct, the study investigated students' intentions to continue using ChatGPT for independent learning activities. The researchers believe that understanding students' behavioural intentions can provide insights into the sustainability of integrating ChatGPT into the educational environment. Actual Use: Finally, this study assessed students' actual usage behaviour regarding ChatGPT. This included usage session duration, and tasks performed using the tool. By comparing students' actual usage patterns with their perceptions and intentions, the study evaluated the alignment between theory and practice in the context of ChatGPT adoption. 3. Methodology Research Approach The study adopted a quantitative approach; this approach leveraged numerical data and statistical analyses to determine the relationship between variables and draw conclusions based on statistical power (Baškarada & Koronios, 2018). Additionally, a quantitative research approach employs a rigorous research design with standardised measuring instruments that enhance the validity of findings (Claydon, 2015). Research Design A descriptive research design was adopted for this study; this was best fitted for this study in the sense that a descriptive research design aims at describing the characteristics or behaviours of a population or phenomenon being studied (Dannels, 2018). This was in line with the study's goal as the researcher sought to describe and analyse the characteristics and behaviours of students with ChatGPT in their pursuit of independent learning in Ghana's higher education context. Population The study focused on a prominent Ghanaian private university, Pentecost University. The university's selection was based on the unique programs offered in the area of artificial intelligence and robotics. 
The selection of this university was also driven by the school’s notable achievement in AI research, as evidenced by its winning grant for its premium research into the application of Artificial Intelligence, Cyber-Physical Systems, Robotics, Laser technology, and Life Cycle modelling for eco-efficient manufacturing of electric vehicle components among with five other world-leading universities. Additionally, the university's proactive stance towards embracing AI aligns well with the objectives of this study, which seeks to delve into the practical implications and experiences of AI adoption in an educational setting. All these make the university a compelling case study for exploring the intersection of AI and academia. The university’s population is estimated to be between 3000 - 5000 students. Sample Size A total of 344 responses were collected from the survey conducted. The expected sample size for this study relied on (Krejcie & Morgan, 1970). According to Krejcie and Morgan (1970), the preferred sample size for such an estimated population is 381. However, only 334 responses were received. Sampling Technique The study adopted a probability sampling technique, ensuring that each member of the population had an equal chance of inclusion. Thus, the sampled population was more representative. Data Collection Instrument An online questionnaire was used as the data collection instrument for this study. The online link to the form was shared with students on their various WhatsApp platforms. To ensure validity, reliability and clarity, the questionnaire was pre-tested before the main data collection phase of the study. The questionnaire was comprised of four sections, each targeting a specific aspect of the research. Participants’ information on gender, age, educational background and field of study was captured in the first section of the questionnaire. Assessment of respondents’ experience with ChatGPT was captured in section two of the questionnaire. Specific information covered aspects such as the duration of use, mode of discovery, and familiarity with AI tools. Students’ learning engagement with ChatGPT was the focus in the third section of the questionnaire. Likert-scale items were used to measure the perceptions of ChatGPT’s impact on understanding course content, reliability and the chatbot ability as a research assistant. Lastly, the fourth section captured ChatGPT’s usefulness in personalised and adaptive learning, it further captured how ChatGPT supports learning preferences, goal setting and motivation. The inclusion of these four sections was crucial as they comprehensively addressed the study’s research objectives. The structured nature of the questionnaire allowed for the collection of measurable and comparable data, facilitating meaningful statistical analysis. Data Analysis Technique The study employed both descriptive and inferential statistics in the data analysis process. Descriptive statistics were employed to summarise and describe the characteristics of the sample; the central tendency, variability, and distribution of the data were measured. Inferential statistics were employed to make inferences about the population with the sample data, and comparative analysis, correlational analysis and regression analysis were performed on the sample dataset. Data Reliability and Validity A reliability test was performed to measure the internal consistency of the Likert scale items using Cronbach’s alpha. 
A pilot study ensured construct validation, and the researcher could assess the clarity, relevance, and comprehensiveness of the Likert scale items when measuring the targeted construct. Other researchers reviewed the questionnaire to ensure that the questions adequately covered the full range of content or aspects of the construct being studied; this helped to ensure content validation. Data Analysis Tool ChatGPT 4.0’s code interpreter, with advanced analytics, was used to clean, organise, and analyse the response data. The model used Python as the program for the analysis, which offered a more robust and customised approach, as Python is a very powerful tool for data analysis. Ethical Considerations The study's goal and methods were fully disclosed to the participants, and participation was indicated within the questionnaire as entirely optional. The study complied with informed consent guidelines, ensuring that before consenting to participate, participants were aware of the purpose of the study and their role in it. Participants' confidentiality was safeguarded through guarantees that replies would be kept private. Participants were also informed of their freedom to leave the study at any moment and without consequence. Methodological Constraints The main methodological constraints concern the ability to generalise the study’s findings. The findings may not directly apply to other Ghanaian universities because the study relies on a single Ghanaian private university. 4. Results and Discussions Descriptive Statistics Descriptive statistics were employed to summarise the demographic characteristics of the respondents and their perceptions of various aspects of ChatGPT usage. The sample consisted of 334 participants, representing an 87.7% response rate from the expected sample size of 381 (Krejcie & Morgan, 1970). With 59.9% identifying as male and 40.1% as female, the majority of the participants (53.0%) were between the ages of 18 to 25, followed by 31.4% in the 26 to 35 age group. Regarding educational level, 86.0% of the respondents were undergraduate students, while the remaining 14.1% were post-graduate students. Variable Category Frequency (N) Percentage (%) Gender Male 200 59.9% Female 134 40.1% Age Between 18 - 25 177 53.0% Between 26 - 35 105 31.4% Between 36 - 45 43 12.9% Between 46 - 54 4 1.2% Below 18 3 0.9% 55 and above 2 0.6% Educational Level Undergraduate 287 86.0% Post-graduate 47 14.1% Duration of ChatGPT Use 1 to 3 months 157 46.9% More than 6 months 100 29.9% 3 to 6 months 77 23.0% Table 2. Demographic Information Source: Field Data: Atieku-Boateng et al., 2025 Duration of ChatGPT Use Frequency (N) Percentage (%) 1 to 3 months 157 46.9% More than 6 months 100 29.9% 3 to 6 months 77 23.0% Table 3. Duration of ChatGPT Use Among Respondents Source: Field Data: Atieku-Boateng et al., 2025 Table 3 summarises the duration of ChatGPT usage among the respondents. The majority of participants (46.9%) reported using ChatGPT for 1 to 3 months, followed by 29.9% who have been using it for more than 6 months, and 23.0% who have used it for 3 to 6 months. This distribution provides insight into the experience level of the participants with ChatGPT, which may influence their perceptions and effectiveness in enhancing learning engagement. The duration of use can also be a significant determinant in understanding the familiarity and comfort level of students with generative AI, which may, in turn, impact their overall satisfaction and learning outcomes. 
For the key constructs measured on a Likert scale (ranging from 1 to 5), the average scores indicated generally positive perceptions of ChatGPT. For instance, the mean score for the statement ChatGPT helps me to better understand course content was 3.95 (SD = 0.86), indicating moderate to high agreement. Similarly, ChatGPT's usefulness in assisting with research work and assignments was rated highly, with a mean score of 4.18 (SD = 0.82). These findings suggest that participants found ChatGPT beneficial in enhancing their learning engagement, reliability, and motivation. The positive scores across various constructs indicate that students perceive ChatGPT as a valuable tool that supports their educational goals by providing real-time information, guidance, and a platform for personalised learning. Activity Frequency (N) Percentage (%) Assignments 150 45.0% Research Work 116 35.0% Learning New Topics 68 20.0% Table 4. Types of Activities Performed Using ChatGPT Source: Field Data: Atieku-Boateng et al., 2025 Table 4 summarises the types of activities that students used ChatGPT for. The majority of students (45%) used ChatGPT to complete assignments, followed by 35% who used it for research work and 20% for learning new topics. This distribution indicates that ChatGPT is primarily used as an academic support tool, emphasising its role in aiding students in completing coursework and enhancing their understanding of subjects. The alignment between the intended and actual usage of ChatGPT validates its effectiveness as a supportive educational tool. It highlights that students are leveraging ChatGPT for practical academic tasks, demonstrating the platform's utility in facilitating learning activities and fostering a more personalised educational experience. Construct Question Mean (M) Standard Deviation (SD) Engagement ChatGPT helps me to better understand course content 3.95 0.86 Engagement ChatGPT's real-time response provides a good platform for learning 3.95 0.89 Reliability I find ChatGPT to be reliable 3.72 0.95 Usefulness ChatGPT is a useful tool for helping out with research work and assignments 4.18 0.82 Personalisation ChatGPT helps me to set personalised learning goals 3.73 0.90 Motivation ChatGPT helps me to stay motivated to learn 3.67 0.92 Table 5. Average Likert Responses for Key Constructs Source: Field Data: Atieku-Boateng et al., 2025 The Likert responses reveal key areas where ChatGPT is perceived as particularly effective. The consistently high scores in engagement and usefulness underscore the tool’s role in providing meaningful and timely support to students. Reliability, while still positive, has a slightly lower mean, which may indicate areas for future improvement, such as enhancing the accuracy and consistency of responses. Reliability Analysis To assess the internal consistency of the survey items, Cronbach's alpha was calculated for the Likert scale items measuring engagement, motivation, reliability, usefulness, and personalisation. The calculated Cronbach's alpha value was 0.84, indicating good internal consistency. A value above 0.7 is generally considered acceptable, while values above 0.8 are considered good. Thus, the reliability of the scale used in this study is deemed satisfactory, suggesting that the items used to measure each construct were consistent in their measurement. Construct Number of Items Cronbach's Alpha Engagement 2 0.82 Reliability 1 0.78 Usefulness 1 0.81 Personalisation 1 0.83 Motivation 1 0.84 Table 6. 
Reliability Analysis for Key Constructs Source: Field Data: Atieku-Boateng et al., 2025 Multiple Regression Analysis for Student Motivation A multiple regression analysis was conducted to evaluate the impact of various factors on student motivation. The predictors included perceived usefulness, reliability, real-time response capability, and the ability to set personalised learning goals. Predictor Unstandardised Coefficient (B) Standard Error (SE) Standardised Coefficient (β) t-value p-value Real-time response capability 0.24 0.07 0.28 3.42 0.001 Reliability 0.12 0.05 0.15 2.50 0.013 Personalised learning goals 0.52 0.08 0.44 6.48 <.001 Usefulness for research work -0.07 0.06 -0.06 -1.32 0.188 Table 7. Multiple Regression Analysis for Student Motivation Source: Field Data: Atieku-Boateng et al., 2025 Model Fit Measure Value R-squared 0.49 Adjusted R-squared 0.47 F-statistic 22.88 p-value for F-statistic <.001 Table 8. Model Fit Measures for Student Engagement Source: Field Data: Atieku-Boateng et al., 2025 Regression Model Equation for Student Engagement: Engagement = 0.30 + (0.23 * Real-time response) + (0.13 * Reliability) + (0.51 * Personalised learning goals) - (0.08 * Usefulness for research work) Interpretation: The regression model explains approximately 47% of the variance in student engagement (R² = 0.47). The most significant predictor of engagement is personalised learning goals (β = 0.43, p < .001), followed by real-time response capability and reliability. These findings highlight the importance of tailored learning approaches and the role of immediate feedback in enhancing student engagement. Personalisation in learning appears to be a key motivator for students, suggesting that tools like ChatGPT, which can cater to individual learning preferences, have a strong potential to boost student motivation and learning outcomes. ANOVA Analysis for Gender and Educational Level ANOVA analyses were conducted to examine potential differences in student engagement based on gender and educational level. Comparison F-statistic p-value Conclusion Gender (Male vs. Female) 0.59 0.443 No significant difference in engagement by gender Educational Level 0.18 0.835 No significant difference in engagement by level Table 9. ANOVA Analysis for Gender and Educational Level Source: (Field Data: Atieku-Boateng et al., 2025) Interpretation: The ANOVA results indicate no statistically significant differences in student engagement based on gender or educational level (p > 0.05). This suggests that ChatGPT's impact on engagement is consistent across different demographic groups, reinforcing its value as an inclusive educational tool. The absence of significant differences implies that ChatGPT is equally effective for students regardless of their gender or educational background, which is a positive indication of its adaptability. Hypotheses Testing Objective Hypothesis Statement Test Result p-value Conclusion Objective 1: to examine the effectiveness of ChatGPT in improving student learning engagement. H1 ChatGPT usage is positively associated with students' understanding of course content. Not Significant 0.188 The hypothesis was not supported, suggesting that ChatGPT usage alone may not directly enhance understanding without additional contextual support or structured guidance. Objective 1: to examine the effectiveness of ChatGPT in improving student learning engagement. H2 ChatGPT's real-time response significantly enhances student engagement. 
Hypotheses Testing

Objective | Hypothesis | Statement | Test Result | p-value | Conclusion
Objective 1: to examine the effectiveness of ChatGPT in improving student learning engagement. | H1 | ChatGPT usage is positively associated with students' understanding of course content. | Not Significant | 0.188 | The hypothesis was not supported, suggesting that ChatGPT usage alone may not directly enhance understanding without additional contextual support or structured guidance.
Objective 1: to examine the effectiveness of ChatGPT in improving student learning engagement. | H2 | ChatGPT's real-time response significantly enhances student engagement. | Significant | 0.001 | The hypothesis was supported, indicating that real-time responses play a critical role in improving student engagement by providing immediate feedback and fostering interaction.
Objective 2: to investigate the usefulness of ChatGPT as an AI tool for adaptive learning among students. | H3 | Students perceive ChatGPT as a reliable tool for research work and assignments. | Significant | 0.013 | The hypothesis was supported, highlighting that students view ChatGPT as dependable for research assistance, contributing positively to their learning experience.
Table 10. Hypotheses Testing Results by Objective
Source: Researcher's Construct: Atieku-Boateng et al., 2025

The findings of this study align with existing literature on the impact of generative AI tools like ChatGPT on student engagement and adaptive learning. One significant result was the positive influence of ChatGPT's real-time response capability on student engagement. This is consistent with Eager and Brunton (2023), who highlighted the importance of AI tools in providing immediate feedback, which keeps learners engaged and motivated by creating an interactive and responsive learning environment. Generative AI tools like ChatGPT help reduce cognitive load during research and assignment completion, providing students with tailored learning experiences that respond to their individual needs.

Moreover, the impact of personalised learning goals on student motivation reflects similar findings by Kadaruddin (2023), who emphasises the role of generative AI in supporting adaptive learning strategies. AI tools enable students to set personalised goals, which enhances intrinsic motivation by offering a sense of control over the learning process. This aligns with self-determination theory, which highlights the importance of autonomy in fostering motivation and improving learning outcomes.

The study also revealed that students predominantly use ChatGPT for assignments and research, similar to findings by Sağın et al. (2023), who noted that AI tools support various academic activities, particularly those requiring content generation and information synthesis. ChatGPT's role as a facilitator of independent learning mirrors prior research indicating that generative AI reduces the burden of information retrieval, allowing students to focus more on comprehension and critical thinking (Abbas et al., 2023).

However, this study also found that the mere use of ChatGPT does not automatically enhance understanding without structured support, echoing Gunder et al. (2023), who argued that while AI tools offer immense potential, they are most effective when integrated thoughtfully into a curriculum. Educators play a critical role in guiding the use of these tools so that they complement rather than replace traditional pedagogical methods. This highlights the importance of a structured framework within which AI tools like ChatGPT can be used to enhance learning outcomes, rather than relying on them as a standalone solution.

Additionally, the study found no significant differences in engagement across gender or educational level, supporting the notion that generative AI tools are inclusive and adaptable to a diverse range of learners. This is consistent with Kadaruddin (2023), who emphasised that AI-based learning platforms create a level playing field by providing equitable access to resources and personalised learning paths for students from varying backgrounds.
The results of this study reinforce the growing body of evidence that generative AI tools such as ChatGPT can significantly enhance student engagement and learning outcomes when used effectively. However, the role of educators remains critical in guiding the appropriate use of these tools. When integrated into a well-structured curriculum, generative AI tools have the potential to transform the learning experience, offering personalised and adaptive solutions that cater to the needs of individual students.

5. Conclusions

The findings of this study indicate that ChatGPT is an effective tool for enhancing student learning engagement and adaptive learning experiences in higher education. The strongest predictors of student engagement and motivation were ChatGPT's real-time responses and its support for personalised learning goals. These features align well with the needs of higher education students, who increasingly seek tools that offer flexible and individualised learning opportunities. The study also found that gender and educational level did not significantly influence student engagement, indicating the inclusivity of ChatGPT as an educational tool. The internal consistency of the constructs measured was confirmed by Cronbach's alpha values above 0.7, indicating good reliability across engagement, motivation, and the other key factors. However, the relatively lower score for the reliability construct suggests there is room for improvement, particularly in the consistency of ChatGPT's responses.

Recommendations

1. Further improve the capabilities of ChatGPT: The results indicate that further refinement of the language model could enhance the consistency and reliability of its responses. ChatGPT already shows high potential as an AI tool for adaptive and engaging learning, and further development of personalised guidance that takes learners' needs into account would strengthen this position.

2. Training and support for effective use: Training and continued support from learning institutions can enable students and educators to realise ChatGPT's full potential as a tool for adaptive and engaged learning. This means equipping users with strategies for critical thinking and self-directed learning during AI interaction, ensuring that the tool genuinely improves learning outcomes. Guidance on distinguishing accurate from inaccurate responses also protects academic integrity when the tool is used.

Overall, the adoption of ChatGPT as a supportive tool for independent learning appears promising, provided that its limitations are addressed and its strengths are effectively leveraged. Further research and collaboration between educators, developers, and policymakers will be crucial to ensuring that tools like ChatGPT are integrated in a manner that enhances learning outcomes and maintains academic standards.

Limits of the Study

Notwithstanding the valuable insights this study presents, it has some limitations. First, the study focused solely on one private university in Ghana, which limits the generalisability of the findings to other universities, both private and public. Second, the study relied on self-reported responses from the participants, an approach that is inherently subject to biases such as social desirability bias and recall bias.
Third, the study's cross-sectional design meant that the researchers could only capture participants' experiences at a single point in time, which limits the ability to observe changes in participants' engagement with ChatGPT over an extended period. Lastly, the study did not explore ethical concerns that border on academic integrity and plagiarism, nor did it capture educators' perspectives on AI-assisted learning.

Future Research Directions

Building on the limitations of this study, future research can:
- Expand the scope of the study to multiple institutions to improve the generalisability of the findings.
- Incorporate a mixed-methods approach to provide deeper insights into students' actual experiences of using ChatGPT as a learning assistant.
- Adopt a longitudinal approach to track students' engagement with ChatGPT over time.
- Explore in depth the ethical and pedagogical implications of students' use of ChatGPT by seeking educators' perspectives.

Declaration

Funding: The research did not receive any financial support.
Availability of data and materials: The researchers have the respondents' permission to share their data upon request.
Ethics approval: All ethical considerations were adhered to throughout the entire research process.
Acknowledgment: Not applicable.

About the Authors

Harry Atieku-Boateng, Pentecost University, Faculty of Engineering Science and Computing, [email protected]
Anita Addo-Tara, Pentecost University, Faculty of Business Administration, [email protected]
Richard Darko Osei, Pentecost University, Faculty of Education, [email protected]
Beatrice Atieku-Boateng, Ghana Education Service, [email protected]
Stephen Kwame Ameko, ORCID ID: 0009-0003-4768-7161, Utah State University, [email protected]

References

Abbas, N., Ali, I., Manzoor, R., Hussain, T., & Hussain, M. H. (2023). Role of artificial intelligence tools in enhancing students' educational performance at higher levels. Journal of Artificial Intelligence, Machine Learning and Neural Network, (35), pp. 36-49. https://doi.org/10.55529/jaimlnn.35.36.49.
Afjal, M. (2023). ChatGPT and the AI revolution: A comprehensive investigation of its multidimensional impact and potential. Library Hi Tech. https://doi.org/10.1108/lht-07-2023-0322.
Al-Mughairi, H., & Bhaskar, P. (2024). Exploring the factors affecting the adoption of AI techniques in higher education: Insights from teachers' perspectives on ChatGPT. Journal of Research in Innovative Teaching & Learning. https://doi.org/10.1108/jrit-09-2023-0129.
Barbour, M. K., & Reeves, T. C. (2009). The reality of virtual schools: A review of the literature. Computers & Education, 52(2), pp. 402-416. https://doi.org/10.1016/j.compedu.2008.09.009.
Baškarada, S., & Koronios, A. (2018). A philosophical discussion of qualitative, quantitative, and mixed methods research in social science. Qualitative Research Journal, 18(1), pp. 2-21. https://doi.org/10.1108/qrj-d-17-00042.
Børte, K., Nesje, K., & Lillejord, S. (2020). Barriers to student active learning in higher education. Teaching in Higher Education, 28(3), pp. 597-615. https://doi.org/10.1080/13562517.2020.1839746.
Chen, P., Liu, X., Cheng, W., & Huang, R. (2016). A review of using augmented reality in education from 2011 to 2016. Lecture Notes in Educational Technology, pp. 13-18. https://doi.org/10.1007/978-981-10-2419-1_2.
Claydon, L. S. (2015). Rigour in quantitative research. Nursing Standard, 29(47), pp. 43-48. https://doi.org/10.7748/ns.29.47.43.e8820.
Dabbagh, N., & Kitsantas, A. (2012). Personal learning environments, social media, and self-regulated learning: A natural formula for connecting formal and informal learning. The Internet and Higher Education, 15(1), pp. 3-8. https://doi.org/10.1016/j.iheduc.2011.06.002.
Deci, E. L., & Ryan, R. M. (2000). The What and Why of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), pp. 227-268. https://doi.org/10.1207/s15327965pli1104_01.
Dede, C. (1996). The evolution of distance education: Emerging technologies and distributed learning. American Journal of Distance Education, 10(2), pp. 4-36. https://doi.org/10.1080/08923649609526919.
Delgado, A., Wardlow, L., O’Malley, K., & McKnight, K. (2015). Educational technology: A review of the integration, resources, and effectiveness of technology in K-12 classrooms. Journal of Information Technology Education: Research, 14, pp. 397-416. https://doi.org/10.28945/2298.
Eager, B., & Brunton, R. (2023). Prompting higher education towards AI-augmented teaching and learning practice. Journal of University Teaching and Learning Practice, 20(5). https://doi.org/10.53761/1.20.5.02.
Elbanna, S., & Armstrong, L. (2023). Exploring the integration of ChatGPT in education: Adapting for the future. Management & Sustainability: An Arab Review, 3(1), pp. 16-29. https://doi.org/10.1108/msar-03-2023-0016.
Ferdig, R. E. (2006). Assessing technologies for teaching and learning: Understanding the importance of technological pedagogical content knowledge. British Journal of Educational Technology, 37(5), pp. 749-760. https://doi.org/10.1111/j.1467-8535.2006.00559.x.
Grimes, D. A., & Schulz, K. F. (2002). Descriptive studies: What they can and cannot do. The Lancet, 359(9301), pp. 145-149. https://doi.org/10.1016/s0140-6736(02)07373-7.
Gunder, A., Del Casino Jr, V., Vito, M., & Dickson, R. (2023). Empowering AI: Recontextualizing our pedagogical impact through supportive uses of machine learning. Ubiquity Proceedings. https://doi.org/10.5334/uproc.81.
Haleem, A., Javaid, M., Qadri, M. A., & Suman, R. (2022). Understanding the role of digital technologies in education: A review. Sustainable Operations and Computers, 3, pp. 275-285. https://doi.org/10.1016/j.susoc.2022.05.004.
Hummel, H., Manderveld, J., Tattersall, C., & Koper, R. (2004). Educational modeling language and learning design: New opportunities for instructional reusability and personalized learning. International Journal of Learning Technology, 1(1), 111. https://doi.org/10.1504/ijlt.2004.003685.
Islam, N., Beer, M., & Slack, F. (2015). E-learning challenges faced by academics in higher education: A literature review. Journal of Education and Training Studies, 3(5). https://doi.org/10.11114/jets.v3i5.947.
Ismail, H., Hussein, N., Harous, S., & Khalil, A. (2023). Survey of personalized learning software systems: A taxonomy of environments, learning content, and user models. Education Sciences, 13(7), 741. https://doi.org/10.3390/educsci13070741.
Kadaruddin, K. (2023). Empowering education through generative AI: Innovative instructional strategies for tomorrow's learners. International Journal of Business, Law, and Education, 4(2), pp. 618-625. https://doi.org/10.56442/ijble.v4i2.215.
Kirkwood, A., & Price, L. (2013). Technology-enhanced learning and teaching in higher education: What is ‘enhanced’ and how do we know? A critical literature review. Learning, Media and Technology, 39(1), pp. 6-36. https://doi.org/10.1080/17439884.2013.770404.
Krejcie, R. V., & Morgan, D. W. (1970). Determining sample size for research activities. Educational and Psychological Measurement, 30(3), pp. 607-610. https://doi.org/10.1177/001316447003000308.
Nazir, A., & Wang, Z. (2023). A comprehensive survey of ChatGPT: Advancements, applications, prospects, and challenges. Meta-Radiology, 1(2), 100022. https://doi.org/10.1016/j.metrad.2023.100022.
Nora, A., & Snyder, B. P. (2008). Technology and higher education: The impact of E-learning approaches on student academic achievement, perceptions and persistence. Journal of College Student Retention: Research, Theory & Practice, 10(1), pp. 3-19. https://doi.org/10.2190/cs.10.1.b.
Personalised learning. (2015). The Digital Classroom, pp. 33-45. https://doi.org/10.4324/9780203821008-9.
Sabraz Nawaz, S., Fathima Sanjeetha, M. B., Al Murshidi, G., Mohamed Riyath, M. I., Mat Yamin, F. B., & Mohamed, R. (2024). Acceptance of ChatGPT by undergraduates in Sri Lanka: A hybrid approach of SEM-ANN. Interactive Technology and Smart Education. https://doi.org/10.1108/itse-11-2023-0227.
Sağın, F. G., Özkaya, A. B., Tengiz, F., Geyik, Ö. G., & Geyik, C. (2023). Current evaluation and recommendations for the use of artificial intelligence tools in education. Turkish Journal of Biochemistry, 48(6), pp. 620-625. https://doi.org/10.1515/tjb-2023-0254.
Teaching every student in the digital age: Universal design for learning. (2003). Choice Reviews Online, 40(06), 40-3555-40-3555. https://doi.org/10.5860/choice.40-3555.

Gallery
Figure 1
General Meta Tags (8)
- title: Research and Education
- charset: UTF-8
- viewport: width=device-width,initial-scale=1,maximum-scale=1,user-scalable=yes
- HandheldFriendly: true
- generator: WordPress 5.6.14
Open Graph Meta Tags (7)
- og:image:secure_url: https://researchandeducation.ro/wp-content/uploads/2025/02/Graphical_Abstract_Boateng_Tara_Osei_Boateng_Ameko_REd_No11.jpg
- og:image:type
- og:image:width: 1280
- og:image:height: 720
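The Open Graph entries listed above are ordinary <meta> elements in the page head, so they can be read programmatically. A minimal sketch, assuming the requests and beautifulsoup4 packages are installed and using the site URL only as an illustration:

import requests
from bs4 import BeautifulSoup

# Illustrative target: any page that exposes Open Graph tags, such as the article page on this site.
url = "https://researchandeducation.ro"
html = requests.get(url, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

# Collect every <meta property="og:..."> element into a property -> content mapping.
og_tags = {
    tag["property"]: tag.get("content", "")
    for tag in soup.find_all("meta", attrs={"property": True})
    if tag["property"].startswith("og:")
}

for prop, content in og_tags.items():
    print(f"{prop}: {content[:80]}")  # long values such as og:description are truncated for display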
Link Tags (17)
- EditURI: https://researchandeducation.ro/xmlrpc.php?rsd
- alternate: https://researchandeducation.ro/feed
- alternate: https://researchandeducation.ro/comments/feed
- apple-touch-icon: https://researchandeducation.ro/wp-content/uploads/2022/06/cropped-Red_icon-180x180.png
- dns-prefetch: //fonts.googleapis.com
Links (121)
- http://creativecommons.org/licenses/by-nc-sa/4.0
- http://researchandeducation.ro/termeni-si-conditii
- http://www.specificfeeds.com/widgets/emailSubscribeEncFeed/dGo0QXFvZ2s2TktNNnloY2pMbXpTOEpVUm5PZlNmUXF0WG8yRWMycDlSRmY5dDN6SXFQdXFHaDNnZjVITG9HWGZteFZtT3c2emRXdzA1Uk94aEowQ2xhUkxTWUJEMDN1VCttN28vczdOdlg4NEljRUpPMm41U2p2ZzAyTnovTmN8emsxV2lqWjJzRkFEWUF4L1ZDWSszWXpnYm9PWmZHV0tHS1ArcTBLN1cwST0=/OA==
- https://creativecommons.org/licenses/by-nc-sa/4.0
- https://researchandeducation.ro