Higher education has fallen on hard times. Present political pressures and federal grant pullbacks meant to refocus academia’s priorities are compounding the financial struggles many campuses have faced given unreliable state funding. Yet, one investor group remains supportive: technology. Why? To boost generative artificial intelligence (AI) sales.
Tech companies large and small seek favor in academia’s executive suites and boardrooms so they might influence key decisions on the purchase of AI technologies and AI use policies. But when corporate interests have an outsized role in governance, campuses can be diverted from serving the public good.
We are educators at San Diego State University (SDSU), a large public research university belonging to one of the first statewide systems to have embraced AI. We serve as core members of the team leading the largest survey worldwide on how students, staff, and faculty engage with and think about AI. The survey data, plus focus groups and interviews, are helping SDSU craft an intentional, equitable, and ethically informed response to the promises and challenges this new technology entails.
We found that involving faculty, staff, and students in decisions regarding AI processes and policies is essential. Further, because of the documented divergence between AI developers’ and educators’ views of AI’s potential harms, educators must work closely with vendors both to develop business plans and to ensure the quality and integrity of AI models. Educators can also contribute expert testimony and data to support AI legislation, and there is much we can do in the classroom as well.
General Guidance on Generative AI
Our research suggests several ways that educators can better manage generative AI. First, we need more consistent approaches to AI within majors to increase students’ sense of coherence. Second, we must help students learn to recognize what legitimate AI use looks like and level up to become “AI collaborators” instead of “appropriators.” We must also advocate for increased funding for faculty and staff to learn how to engage with AI.
The challenges these recommendations address are already at play, given higher education’s reliance on commercial, data-driven platforms to mediate how students learn, socialize, and measure success. Prioritizing profits in higher education means corporate interests trump those of students, faculty, and staff, influencing how we design and implement initiatives in our research, classrooms, and programs. AI’s amplification of this pattern provides a loud wake-up call.
We must refuse to let big AI set education’s agenda. We must insist on various contractual protections from AI vendors and collaborate to reduce algorithmic biases and other harms that irresponsibly deployed AI entails. Doing so requires that colleges, universities, and faculty build mutually beneficial relationships with industry partners. Opting out is no longer an option.
Action Points
1. Each major needs a plan
The claim that generative AI improves student outcomes is not backed by evidence. Yet, AI has been introduced into our classrooms at a speed and scale that has exhausted IT, risk management, and other administrative divisions.
Moreover, instructors often don’t know what to do with these now ubiquitous, fast-changing AI tools and struggle to cope with their implications for the classroom. Some educators have taken a triage-like approach, updating assignments piecemeal as time allows. Others are ignoring AI as either irrelevant or unethical. Still others would prefer being handed a policy to stand behind.
Given higher education’s long tradition of trusting faculty to make their own decisions, leadership at our own university has been understandably hesitant to take a firm stance on what’s allowable or desired. Yet, as one anonymous professor survey respondent said, “Everything feels fragmented,” and another said, “The guidelines [that do exist] are vague, and I’m not sure whether to encourage exploration or restrict use until there’s more clarity.” Another conveyed, “There’s so much contradictory information. One workshop says embrace AI, another says proceed with caution.”
Such confusion is in no way unique or limited to our university. We hear similar reports from various campuses nationwide.
The lack of clarity affects students, who often want to do their own work and to learn in the process. Anonymous student survey respondents said things like “I try not to use [AI] because I don’t really want to rely on it,” noting that doing so would be “cheating myself out of an education.” Fewer than one in five (17.3%) agreed with the statement that “I am comfortable submitting a prompt to an AI like ChatGPT and turning in the answer it provides.”
But knowing what qualifies as one’s own work can be a challenge. One student wondered if using ChatGPT is “learning or cheating.” Another suggested, “Every professor says something different.” This appeal from our 2023 data remains iconic: “Please just tell us what to do, and be clear about it.” Again, similar observations seem commonplace throughout higher education.
At our own university, backed by our data, the faculty senate voted to require an AI statement on every syllabus. This seems to have gone a long way toward quelling the disquiet.
Although most students expect discipline-to-discipline differences, within-major inconsistencies remain a serious problem for undergraduates. When AI-related instructions and information are allowed to vary too dramatically within a given program, students’ “sense of coherence” can be dangerously undermined.
Gaining consensus can be challenging in part because higher education prides itself on free thinking and debate. In many cases, intentional, systematic, whole-curriculum overhauls are in order. But they are difficult to achieve because updating a major is a complex task, and AI keeps changing on top of that. The result is delayed decisions among institutional leaders as well as among good educators whose scholarly training places value on grounding all actions in careful, methodical, thorough investigation.
That said, these challenges have not stopped AI innovators. Educators must step up and help shape how AI is introduced and implemented into learning and scholarship. We must offer students a well-considered compass to navigate today’s new frontiers and devise intentional responses at not just the course but the curricular level.
2. Students need models
Coherent AI literacy instruction requires a commitment to modeling good AI hygiene. This means more than just teaching students how to avoid plagiarism or other forms of academic dishonesty. Instead of focusing solely on cheating, which may result in false accusations, we must reduce students’ overreliance on AI.
One way to do this is by increasing the sense of meaning that a given assignment might provoke. For instance, by tying an assignment more tightly to “real life” or to events that interest a typical undergraduate, we make cheating less tempting. Good pedagogy also aligns each assignment explicitly with specific course aims.
Another solution is to emphasize the learning process rather than its immediate product, which helps students internalize lessons more deeply. For instance, rather than simply assigning an essay, an instructor might craft multiple small assignments tied to the steps needed to develop the essay. In tandem, students might be encouraged to document the revisions they make along the way or to discuss how and why an AI tool’s feedback was or was not useful in advancing their thinking. Such an approach makes students less likely to outsource work to AI and therefore protects against damage to skill development, subject-matter mastery, and later on-the-job performance.
We also learned from supplemental interview research conducted in 2025 that help from teachers is imperative. Student participants fell into two distinct clusters. Some are appropriators, or shortcutters, who let AI do the work. Appropriators outsource cognitive effort, prioritize efficiency over intellectual growth, and appropriate AI’s output as their own. Nonetheless, they understand themselves as using AI responsibly, even when essentially just copying and submitting AI-provided answers.
Appropriators stand in contrast to a second group of students, collaborators, who engage AI tools to sharpen their own thinking. Collaborators use AI as peer reviewers, tutors, or thought partners and vet these outputs carefully.
Whether students become collaborators depends on how professors treat AI in the classroom. The most effective professors explicitly conveyed to students, in personalized terms, the value of intellectual effort for strengthening critical thinking skills when using AI as a training partner. By modeling this kind of self-aware, responsible AI hygiene, good professors help ensure that more students leave college as AI collaborators than as AI-dependent appropriators.
3. Work must be funded
Modeling responsible AI use is not easy. We need broader institutional and public support and a boost in funding to better assess how to integrate AI safely and constructively into classrooms and curriculums. In the words of one faculty member, AI’s emergence means “we need to completely overhaul the way we teach.” This will take a lot of time. And, because time has not been built into the schedule — overhauling one’s course is on top of everything else — it “means working evenings/weekends and missing sleep. I will do it, though, because it is … our job.”
This dedication to students and the profession notwithstanding, institutions need to budget for the time and labor it takes teachers to integrate AI productively into their courses. Our campus is piloting several compensation schemes. But financing these adequately is difficult, considering the time it takes to gain experience using relevant AI tools and to rethink teaching and assessment as well as administrative strategies. This is why more funding from the government and AI vendors is clearly needed.
Addressing AI in the classroom also requires that we address some structural problems. We need smaller class sizes, lighter course loads, and reduced administrative and service expectations. As things stand now, these pressures have cut into the time available to spend with students and pushed educators toward product-focused, easily graded assignments like term papers. Real learning happens over time, through the process of thinking; however, assessing such learning is itself time- and thus resource-intensive. The AI revolution, and its impact on the strategies students use to attain good grades in the absence of strong teacher guidance, demonstrates the crucial importance of attending to and incentivizing this part of the equation.
Sitting this one out is not an option
AI companies have been more than happy to offer students free accounts, provide on-campus trainings, give faculty or campus units grants to conduct studies on topics that serve their interests, and offer contractual incentives to universities that choose their platforms. In doing so, and in habituating consumers to their products, these companies increase their chances for locking in long-term customers or brand loyalists. It’s safe to say that their plans for profit and control over governance are succeeding. But free lunch is never free.
And AI’s penetration is not just happening on college campuses. By March 2025, a majority of states had moved toward incorporating AI into their K-12 education systems, often with the backing of AI companies themselves. ChatGPT’s makers are among many incentivizing uptake through various forms of support. However, as a new RAND report bluntly states, “Professional development for teachers, training for students on how to use AI in education, and school and district policies lag.”
As one student surveyed said, “There’s anxiety about doing things ‘the right way’ when no one is explaining what that means.” Many students prioritize responsible use and worry about bias, misinformation, data privacy, growing digital divides, climate concerns, intensified grade expectations, the potential for robots to replace teachers, and even the erosion of their critical thinking and creative powers. One noted that we “should have a voice before those systems shape education.”
Based on our findings, we agree that faculty, staff, and students must have a voice in this debate. Yet, that voice must be claimed.
Education professionals must engage in the future of AI directly and materially. Teachers must provide guidance and generative energy as AI’s co-producers rather than simply acting as critics.
But teachers cannot, and should not, bear this burden alone. Institutions must develop systems and policies, through executive decisions, that support individual instructors rather than leaving the work entirely on teachers’ shoulders. Even if the guidance education professionals can offer is provisional, it is up to us to ensure that the ship of higher education is headed in the right direction. Legislation and campus-industry partnerships that ensure the integrity and pedagogic value of products brought to market will clearly help.
In the meantime, campus members need more guided experience using and thinking about (and with) AI. This will ensure that AI is used for problem-solving in intentional, creative, and ethical ways.


