Student cheating dominates talk of generative AI in higher ed, but universities and tech companies face ethical questions of their own
A sociologist who researches AI’s impact on work and education argues there are ethical dimensions to generative AI that institutions are not considering.

Debates about generative artificial intelligence on college campuses have largely centered on student cheating. But focusing on cheating overlooks a larger set of ethical concerns that higher education institutions face, from the use of copyrighted material in large language models to student privacy.
As a sociologist who teaches about AI and studies the impact of this technology on work, I am well acquainted with research on the rise of AI and its social consequences. And when one looks at ethical questions from multiple perspectives – those of students, higher education institutions and technology companies – it is clear that the burden of responsible AI use should not fall entirely on students’ shoulders.
More broadly, I argue that responsibility begins with the companies behind this technology and must also be shouldered by higher education institutions themselves.
To ban or not to ban generative AI
Let’s start where some colleges and universities did: banning generative AI products, such as ChatGPT, partly over student academic integrity concerns.
While there is evidence that students inappropriately use this technology, banning generative AI ignores research indicating it can improve college students’ academic achievement. Studies have also shown generative AI may have other educational benefits, such as for students with disabilities. Furthermore, higher education institutions have a responsibility to make students ready for AI-infused workplaces.
Given generative AI’s benefits and its widespread student use, many colleges and universities today have integrated generative AI into their curricula. Some higher education institutions have even provided students free access to these tools through their school accounts. Yet I believe these strategies involve additional ethical considerations and risks.
As with previous waves of technology, the adoption of generative AI can exacerbate inequalities in education, given that not all students will have access to the same technology. If schools encourage generative AI use without providing students with free access, there will be a divide between students who can pay for a subscription and those who use free tools.
On top of this, students using free tools have few privacy guarantees in the U.S. When they use these tools – even for a prompt as simple as “Hey ChatGPT, can you help me brainstorm a paper idea?” – students are producing potentially valuable data that companies can use to improve their models. By contrast, paid versions can offer more data protections and clearer privacy guidelines.
Higher education institutions can address equity concerns and help protect student data by seeking licenses with vendors that address student privacy. These licenses can provide students with free access to generative AI tools and specify that student data is not to be used to train or improve models. However, they are not panaceas.
Who’s responsible now?
In “Teaching with AI,” José Antonio Bowen and C. Edward Watson argue that higher education institutions need to rethink their approach to academic integrity. I agree with their assessment, but for ethical reasons not covered in their book: Integrating generative AI into the curriculum through vendor agreements involves higher education institutions recognizing tech companies’ transgressions and carefully considering the implications of owning student data.
To begin, I find the practice of penalizing students for “stealing” words from large language models to write papers ethically difficult to reconcile with tech companies’ automated “scraping” of websites, such as Wikipedia and Reddit, without citation. Big tech companies have used copyrighted material – some of it allegedly taken from piracy websites – to train the large language models that power chatbots. Although the two actions – asking a chatbot to write an essay versus training it on copyrighted material – are not exactly the same, both carry a component of ethical responsibility. For technology companies, an ethical issue such as this is typically raised only in lawsuits.
For institutions of higher education, I think these issues should be raised prior to signing AI vendor licenses. As a Chronicle of Higher Education article suggests, colleges and universities should vet AI model outputs as they would student papers. If they have not done so prior to signing vendor agreements, I see little basis for them to pursue traditional “academic integrity” violations for alleged student plagiarism. Instead, higher education institutions should consider changes to their academic integrity policies.
Then there is the issue of how student data is handled under AI vendor agreements. One likely source of student concern is whether their school, as the commercial customer and owner of the data, logs their interactions with identifying information and can use those logs to pursue academic integrity charges or other disciplinary matters.
The solution to this is simple: Higher education institutions can prominently display the terms and conditions of such agreements to members of their community. If colleges and universities are unwilling to do so, or if their leaders don’t understand the terms themselves, then maybe institutions need to rethink their AI strategies.
The above data privacy issues take on new meaning given how generative AI is currently being used – sometimes as a “companion” with which people share highly personal information. OpenAI estimates that about 70% of ChatGPT consumer usage is for nonwork purposes. OpenAI’s CEO, Sam Altman, acknowledges that people are turning to ChatGPT for “deeply personal decisions that include life advice, coaching and support.”
Although the long-term effects of using chatbots as companions or confidants are unknown, the recent case of a teen who died by suicide while interacting with ChatGPT is a tragic reminder of generative AI’s risks and of the importance of ensuring people’s personal safety along with their privacy.
Explicit statements that generative AI should be used only for academic purposes could help mitigate the risk of students forming potentially damaging emotional attachments to chatbots. So, too, could reminders about campus mental health and other resources. Training students and faculty on these matters and more can help promote personally responsible AI use.
But colleges and universities cannot skirt their own responsibilities. At some point, higher education institutions may find that such responsibility is too heavy a cross to bear and that their risk-mitigation strategies are essentially Band-Aids on a systemic problem.
Jeffrey C. Dixon is a faculty representative on the College of the Holy Cross Institutional Review of Artificial Intelligence Task Force.