Artificial intelligence (AI) is spreading rapidly in higher education, changing how
students access information, complete assignments, and interact with learning materials.
Tools such as large language models (LLMs), tutoring systems, and writing assistants are
now widely used by students for text generation, coding support, and data analysis.
Popular platforms like ChatGPT, Google Gemini, and GitHub Copilot help students with
research, writing, and problem-solving across different fields.
Although AI offers many benefits, such as accessibility, speed, and personalized learning,
its rapid adoption has raised concerns. Researchers and educators note that excessive
use may reduce students’ originality, critical thinking, and ability to work independently.
A recent survey found that more than half (51.6%) of university students believe AI
negatively affects their capacity to think critically and solve problems without
assistance.9 Teachers have also reported a decline in analytical skills, with students often
accepting AI-generated answers without checking their accuracy or logic.
The availability of AI has also raised concerns about academic dishonesty, as some
students may use these tools to complete assignments, essays, or exams without proper
authorization or acknowledgment.4 This practice threatens academic integrity and
undermines the purpose of higher education, which is to develop independent, creative,
and ethical learners.
At Pundra University, initial observations suggest that AI use varies across disciplines.
Students in technical and business programs, such as Computer Science and Engineering
(CSE) and Business Administration (BBA), reported higher use of AI tools compared to
students in humanities or civil engineering. However, frequent use does not always mean
dependency. The way AI is used (for example, brainstorming, editing, or generating full
content) plays a major role in determining whether it serves as a helpful tool or a
substitute for learning.1
To study this issue, we applied machine learning methods to predict AI dependency among
undergraduate students. Using survey data on department, frequency of use, purpose of
use, and learning behaviors, we trained and tested multiple classification models. Results
show that a Random Forest classifier achieved 86% accuracy in predicting dependency
levels, with usage frequency, academic discipline, and purpose of use emerging as the
most important predictors.
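As a rough illustration of this pipeline, the sketch below trains a Random Forest on survey-style data with scikit-learn and reads off feature importances. The file name and column names (department, usage_frequency, purpose, dependency_level) are hypothetical placeholders, not the study's actual survey fields.

```python
# Minimal sketch of the classification pipeline described above.
# All file and column names are illustrative assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("survey_responses.csv")  # hypothetical survey export

categorical = ["department", "purpose"]
numeric = ["usage_frequency"]

X = df[categorical + numeric]
y = df["dependency_level"]  # e.g., low / medium / high

# One-hot encode categorical survey answers; pass numeric columns through.
pre = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
    remainder="passthrough",
)

model = Pipeline([
    ("pre", pre),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Map the forest's importances back to the encoded feature names.
names = model.named_steps["pre"].get_feature_names_out()
importances = model.named_steps["rf"].feature_importances_
for name, imp in sorted(zip(names, importances), key=lambda t: -t[1])[:5]:
    print(f"{name}: {imp:.3f}")
```

In a setup like this, importances for the one-hot columns of a categorical variable would typically be aggregated per variable before concluding, for instance, that usage frequency outranks academic discipline.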
The significance of this study lies in its ability to guide proactive university strategies.
By identifying students at risk of over-dependence, institutions can introduce targeted
measures such as AI literacy programs, ethical guidelines, and updated teaching
practices to ensure responsible use. This research adds to the growing discussion on
human-AI collaboration in education by presenting a data-driven framework for
monitoring and managing student behavior. To align with this aim, the specific research
objectives of the study are outlined below:
1. Measure AI tool usage patterns among undergraduates at Pundra University.
2. Explore the relationship between AI dependency, academic performance, and critical thinking.
3. Develop and validate a machine learning model to predict high dependency risk.
4. Recommend strategies for integrating AI into curricula without reducing independent learning.
While this study offers timely insights into AI dependency in the Bangladeshi higher
education context, it is important to acknowledge its scope limitations upfront. The data
are drawn exclusively from undergraduate students at Pundra University, a private
institution in northern Bangladesh, and the sample size (n = 230) reflects a single-site,
convenience-based recruitment strategy. As such, the findings should be interpreted as
exploratory and context-specific rather than nationally representative. Nevertheless, this
focused approach enables a granular, data-driven analysis of AI behaviors in an
underrepresented region, laying the groundwork for future multi-institutional validation.