ISSN: 2789-7036 (Print)

Journal of Pundra University of Science & Technology

Editorial
DOI:
Artificial Intelligence in the Classroom: Mapping Usage Patterns and Predicting Dependency Among University Students Using Machine Learning
Hoque M. H. E. 1* , Mahbub M. 2

* Corresponding Author: Hoque M. H. E.
Abstract
Artificial intelligence (AI) is becoming a regular part of student life in universities. It is therefore essential to understand how undergraduates use these technologies and how dependent on them they have become. This study investigates AI usage patterns and dependency levels among undergraduate students in Bangladesh and develops machine learning models to predict high-risk dependency. The results showed that almost all students are familiar with AI tools; among these, ChatGPT, Google Bard, and Grammarly were the most popular across departments. What distinguishes this study is that it did not rely solely on survey analysis, but also applied machine learning to predict levels of dependency. Students were grouped into Low, Medium, and High dependency categories using a combined scoring system. Four predictive models were tested: Artificial Neural Network (ANN), Random Forest, XGBoost, and Logistic Regression. Among these, the ANN performed best with 89% accuracy and an F1-score of 0.88, followed by Random Forest (87%), XGBoost (84%), and Logistic Regression (76%). The findings further show that younger students, first- and second-year undergraduates, and students with average academic results are more likely to depend heavily on AI. This trend suggests that adoption of AI tools will continue to grow over time, in parallel with advances in their capabilities. The strongest predictors of dependency were how much time students spent on AI, the type of tools they used, and their purpose of use (for example, writing, coding, or preparing for exams). In sum, this study highlights who relies most on AI and offers universities guidance for promoting responsible use that enhances learning while preserving independence, critical thinking, and integrity.
Keywords
AI Dependency, Student Survey, Higher Education, Predictive Modeling, Academic Integrity.
Introduction
Artificial intelligence (AI) is spreading rapidly in higher education, changing how students access information, complete assignments, and interact with learning materials. Tools such as large language models (LLMs), tutoring systems, and writing assistants are now widely used by students for text generation, coding support, and data analysis. Popular platforms like ChatGPT, Google Gemini, and GitHub Copilot help students with research, writing, and problem-solving across different fields. Although AI offers many benefits, such as accessibility, speed, and personalized learning, its rapid adoption has raised concerns. Researchers and educators note that excessive use may reduce students' originality, critical thinking, and ability to work independently. A recent survey found that more than half (51.6%) of university students believe AI negatively affects their capacity to think critically and solve problems without assistance [9]. Teachers have also reported a decline in analytical skills, with students often accepting AI-generated answers without checking their accuracy or logic. The availability of AI has also raised concerns about academic dishonesty, as some students may use these tools to complete assignments, essays, or exams without proper authorization or acknowledgment [4]. This practice threatens academic integrity and undermines the purpose of higher education, which is to develop independent, creative, and ethical learners. At Pundra University, initial observations suggest that AI use varies across disciplines: students in technical and business programs, such as Computer Science and Engineering (CSE) and Business Administration (BBA), reported higher use of AI tools than students in humanities or civil engineering. However, frequent use does not always mean dependency.
The way AI is used, for example for brainstorming, editing, or generating full content, plays a major role in determining whether it is a helpful tool or a substitute for learning [1]. To study this issue, we apply machine learning methods to predict AI dependency among undergraduate students. Using survey data on department, frequency of use, purpose of use, and learning behaviors, we trained and tested multiple classification models. Results show that Random Forest achieved 86% accuracy in predicting dependency levels, with usage frequency, academic discipline, and purpose of use being the most important predictors. The significance of this study lies in its ability to guide proactive university strategies. By identifying students at risk of over-dependence, institutions can introduce targeted measures such as AI literacy programs, ethical guidelines, and updated teaching practices to ensure responsible use. This research adds to the growing discussion on human-AI collaboration in education by presenting a data-driven framework for monitoring and managing student behavior. To align with this aim, the specific research objectives of the study are outlined below:
- Measure AI tool usage patterns among undergraduates at Pundra University.
- Explore the relationship between AI dependency, academic performance, and critical thinking.
- Develop and validate a machine learning model to predict high dependency risk.
- Recommend strategies for integrating AI into curricula without reducing independent learning.
While this study offers timely insights into AI dependency in the Bangladeshi higher education context, it is important to acknowledge its scope limitations upfront. The data are drawn exclusively from undergraduate students at Pundra University, a private institution in northern Bangladesh, and the sample size (n = 230) reflects a single-site, convenience-based recruitment strategy.
As such, the findings should be interpreted as exploratory and context-specific rather than nationally representative. Nevertheless, this focused approach enables a granular, data-driven analysis of AI behaviors in an underrepresented region, laying the groundwork for future multi-institutional validation.
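The pipeline described above, that is, combining survey responses into a dependency score, binning students into Low/Medium/High tertiles, and training a Random Forest to predict the level from survey features, can be sketched as follows. This is a minimal illustrative sketch only: the column names, the score weighting, and the synthetic data are assumptions for demonstration, not the study's actual instrument or dataset.

```python
# Illustrative sketch of a dependency-classification pipeline.
# All feature names, weights, and data here are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 230  # matches the reported sample size

# Synthetic stand-in for the survey data (assumed encodings).
df = pd.DataFrame({
    "hours_per_week": rng.integers(0, 30, n),  # time spent on AI tools
    "dept": rng.integers(0, 4, n),             # encoded department
    "purpose": rng.integers(0, 3, n),          # writing / coding / exam prep
    "year": rng.integers(1, 5, n),             # year of study
})

# Combined dependency score (assumed weighting), split into tertiles.
score = df["hours_per_week"] + 5 * df["purpose"]
df["dependency"] = pd.qcut(score, 3, labels=["Low", "Medium", "High"])

X = df.drop(columns="dependency")
y = df["dependency"]
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")

# Feature importances indicate which survey variables drive predictions,
# mirroring the paper's ranking of usage frequency and purpose of use.
for name, imp in sorted(
    zip(X.columns, clf.feature_importances_), key=lambda t: -t[1]
):
    print(name, round(imp, 3))
```

On real survey data the same structure applies, with the synthetic frame replaced by the coded questionnaire responses and the score replaced by the study's combined scoring system.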