At one elite college, over 80% of students now use AI – but it’s not all about outsourcing their work

Survey shows students rapidly picked up chatbots but, perhaps surprisingly, more often used them to augment their learning rather than to hand off work.

Author: Germán Reyes on Aug 18, 2025
Source: The Conversation
Students have quickly incorporated the likes of ChatGPT into their work, but little research is available on how they're using generative AI. Photo by Alejandra Villa Loarca/Newsday RM via Getty Images

Over 80% of Middlebury College students use generative AI for coursework, according to a recent survey I conducted with my colleague and fellow economist Zara Contractor. This is one of the fastest technology adoption rates on record, far outpacing the 40% adoption rate among U.S. adults – and it happened less than two years after ChatGPT’s public launch.

Although we surveyed only one college, our results align with similar studies, providing an emerging picture of the technology’s use in higher education.

Between December 2024 and February 2025, we surveyed over 20% of Middlebury College’s student body, or 634 students, to better understand how students are using artificial intelligence, and published our results in a working paper that has not yet gone through peer review.

What we found challenges the panic-driven narrative around AI in higher education and instead suggests that institutional policy should focus on how AI is used, not whether it should be banned.

Not just a homework machine

Contrary to alarming headlines suggesting that “ChatGPT has unraveled the entire academic project” and “AI Cheating Is Getting Worse,” we discovered that students primarily use AI to enhance their learning rather than to avoid work.

When we asked students about 10 different academic uses of AI – from explaining concepts and summarizing readings to proofreading, creating programming code and, yes, even writing essays – explaining concepts topped the list. Students frequently described AI as an “on-demand tutor,” a resource that was particularly valuable when office hours weren’t available or when they needed immediate help late at night.

We grouped AI uses into two types: “augmentation” to describe uses that enhance learning, and “automation” for uses that produce work with minimal effort. We found that 61% of the students who use AI employ these tools for augmentation purposes, while 42% use them for automation tasks like writing essays or generating code.
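Note that the two shares sum to more than 100% because the groups overlap: a student who both asks for concept explanations and generates code is counted in both. A minimal sketch of this overlapping-category tally, using hypothetical respondents rather than the survey's actual data:

```python
# Hypothetical responses: each student may report multiple AI uses,
# so a single student can land in both categories at once.
AUGMENTATION = {"explain concepts", "summarize readings", "proofread"}
AUTOMATION = {"write essays", "generate code"}

responses = [
    {"explain concepts", "generate code"},  # counted in BOTH groups
    {"summarize readings"},
    {"write essays"},
    {"explain concepts", "proofread"},
]

n = len(responses)
# A student counts toward a group if any reported use intersects it.
aug_share = sum(bool(r & AUGMENTATION) for r in responses) / n
auto_share = sum(bool(r & AUTOMATION) for r in responses) / n
print(f"augmentation: {aug_share:.0%}, automation: {auto_share:.0%}")
# → augmentation: 75%, automation: 50% (combined total exceeds 100%)
```

Because the categories are not mutually exclusive, the shares are best read as "fraction of AI-using students who report at least one use of this type," not as a partition of students.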

Even when students used AI to automate tasks, they showed judgment. In open-ended responses, students told us that when they did automate work, it was often during crunch periods like exam week, or for low-stakes tasks like formatting bibliographies and drafting routine emails, not as their default approach to completing meaningful coursework.


Of course, Middlebury is a small liberal arts college with a relatively large portion of wealthy students. What about everywhere else? To find out, we analyzed data from other researchers covering over 130 universities across more than 50 countries. The results mirror our Middlebury findings: Globally, students who use AI tend to be more likely to use it to augment their coursework, rather than automate it.

But should we trust what students tell us about how they use AI? An obvious concern with survey data is that students might underreport uses they see as inappropriate, like essay writing, while overreporting legitimate uses like getting explanations. To verify our findings, we compared them with data from AI company Anthropic, which analyzed real usage of its chatbot, Claude, by accounts registered with university email addresses.

Anthropic’s data shows that “technical explanations” represent a major use, matching our finding that students most often use AI to explain concepts. Similarly, Anthropic found that designing practice questions, editing essays and summarizing materials account for a substantial share of student usage, which aligns with our results.

In other words, our self-reported survey data matches actual AI conversation logs.

Why it matters

As writer and academic Hua Hsu recently noted, “There are no reliable figures for how many American students use A.I., just stories about how everyone is doing it.” These stories tend to emphasize extreme examples, like a Columbia student who used AI “to cheat on nearly every assignment.”

But these anecdotes can conflate widespread adoption with universal cheating. Our data confirms that AI use is indeed widespread, but students primarily use it to enhance learning, not replace it. This distinction matters: By painting all AI use as cheating, alarmist coverage may normalize academic dishonesty, making responsible students feel naive for following rules when they believe “everyone else is doing it.”

Moreover, this distorted picture provides biased information to university administrators, who need accurate data about actual student AI usage patterns to craft effective, evidence-based policies.

What’s next

Our findings suggest that extreme policies like blanket bans or unrestricted use carry risks. Prohibitions may disproportionately harm students who benefit most from AI’s tutoring functions while creating unfair advantages for rule breakers. But unrestricted use could enable harmful automation practices that may undermine learning.

Instead of one-size-fits-all policies, our findings lead me to believe that institutions should focus on helping students distinguish beneficial AI uses from potentially harmful ones. Unfortunately, research on AI’s actual learning impacts remains in its infancy – no studies I’m aware of have systematically tested how different types of AI use affect student learning outcomes, or whether AI impacts might be positive for some students but negative for others.

Until that evidence is available, everyone interested in how this technology is changing education must use their best judgment to determine how AI can foster learning.

Germán Reyes does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
