- A recent Wiley report reveals that 68 percent of instructors expect generative AI to negatively impact academic integrity.
- Nearly half of students (47 percent) believe AI has made cheating easier, with 35 percent attributing this specifically to ChatGPT.
- The survey involved 850 instructors and over 2,000 students, and did not define “cheating” specifically, leading to varied interpretations.
- Concerns about AI’s potential for misuse have been ongoing since ChatGPT’s release in November 2022; nearly half of university provosts have expressed concern about its threat to academic integrity, including 26 percent who are significantly concerned.
- Despite initial bans, many institutions have relaxed restrictions on AI tools as technology and attitudes have evolved.
- 56 percent of professors reported no significant impact on cheating in the past year, but 68 percent anticipate a negative effect over the next three years.
- Over half of students noted that increased proctoring and stricter rules have made cheating more difficult.
- Students who dislike AI tools often cite cheating as a primary concern, whereas faculty are more worried about AI’s impact on critical thinking.
- A significant number of students mistrust AI or fear it might lead instructors to suspect cheating.
- Vanderbeek suggests incentives for early work completion, randomized exams, and tools for detecting suspicious behavior as strategies to maintain academic integrity.
Main AI News:
As generative artificial intelligence (AI) continues to evolve, it presents both opportunities and challenges in educational settings. While AI tools offer valuable support, such as generating rubrics and study guides, there is growing apprehension about their potential to facilitate academic dishonesty.
A recent report from Wiley, shared exclusively with Inside Higher Ed, highlights that a significant majority of instructors (68 percent) anticipate a negative impact on academic integrity due to generative AI. The same report's survey of more than 2,000 students reveals that nearly half (47 percent) believe AI has made cheating easier than it was a year earlier, with 35 percent attributing that shift specifically to ChatGPT.
Lyssa Vanderbeek, Wiley’s vice president of courseware, acknowledged these concerns, noting that while academic integrity issues are longstanding, the rapid advancement and accessibility of generative AI have exacerbated the situation. Vanderbeek emphasized that this challenge is not new but rather a continuation of existing issues in academic settings.
The survey, which involved 850 instructors and more than 2,000 students, did not provide a specific definition of “cheating,” leading to varying interpretations—from fact-checking assignments to using AI to write entire papers. Vanderbeek observed a shift towards more open discussions in classrooms about what constitutes cheating and how to seek help constructively.
When ChatGPT emerged in November 2022, it sparked immediate concerns among academics about its potential for misuse. An Inside Higher Ed survey earlier this year found that nearly half of university provosts were worried about generative AI’s threat to academic integrity, with 26 percent expressing significant concern.
Although many institutions initially banned AI tools, a number have since relaxed these restrictions as both the technology and perceptions have evolved. The reaction parallels past fears about new technologies, such as Wikipedia in 2001 or calculators in the 1970s.
According to the Wiley survey, 56 percent of professors did not believe AI had impacted cheating in the past year, but 68 percent anticipated a negative impact on academic integrity within the next three years. Meanwhile, over half of students (56 percent) noted that stricter rules and increased proctoring have made cheating more challenging. Proctoring practices, which expanded during remote learning, have largely persisted as classes have returned to in-person formats.
Among students who expressed strong reservations about generative AI, its role in facilitating cheating was the most frequently cited concern, named by 33 percent. In contrast, only 14 percent of faculty identified cheating as a significant drawback of AI, while 37 percent cited its detrimental effect on critical thinking.
Vanderbeek was surprised by the number of students who mistrust AI tools—36 percent indicated a lack of trust as a reason for non-use, while 37 percent feared that using AI might lead instructors to suspect them of cheating. Previous surveys, including Inside Higher Ed’s 2024 provosts’ survey, have shown that student adoption of generative AI significantly exceeds faculty use—45 percent of students reported using AI in their classes over the past year, compared to just 15 percent of instructors.
To address these challenges, Vanderbeek suggests three key strategies for maintaining academic integrity: incentivizing early completion of work, implementing randomized exam questions to deter online searching, and providing instructors with tools to detect suspicious activities, such as plagiarized content or submissions from international IP addresses.
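To make the randomized-exam idea concrete, below is a minimal sketch of how per-student question randomization could work. It assumes a simple question bank grouped into interchangeable variants; the names (`QUESTION_BANK`, `build_exam`) are illustrative only and are not drawn from Wiley's courseware or any specific proctoring product.

```python
import hashlib
import random

# Hypothetical question bank: each slot holds interchangeable variants
# that test the same learning objective.
QUESTION_BANK = [
    ["What is 7 * 8?", "What is 6 * 9?", "What is 12 * 4?"],
    ["Define 'academic integrity'.", "Give one example of plagiarism."],
    ["In what year was ChatGPT released?", "Which company released ChatGPT?"],
]

def build_exam(student_id: str, exam_seed: str) -> list[str]:
    """Deterministically pick one variant per slot and shuffle the order.

    Seeding the RNG with (exam_seed, student_id) makes each student's
    paper reproducible for grading while differing between students.
    """
    digest = hashlib.sha256(f"{exam_seed}:{student_id}".encode()).hexdigest()
    rng = random.Random(digest)
    exam = [rng.choice(variants) for variants in QUESTION_BANK]
    rng.shuffle(exam)
    return exam

if __name__ == "__main__":
    for student in ("alice", "bob"):
        print(student, build_exam(student, "midterm-2024"))
```

Because each paper is derived from a deterministic seed, an instructor can regenerate any student's exact exam for grading, while students comparing answers, or searching questions online, encounter different variants in a different order.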
“The takeaway is that there is still a lot to learn,” Vanderbeek concluded. “We view this as an opportunity to explore how generative AI might enhance learning experiences that are currently beyond our reach.”
Conclusion:
The growing concerns about AI’s impact on academic integrity reflect a broader challenge in adapting educational practices to new technologies. As generative AI tools become more prevalent, educational institutions must navigate the delicate balance between leveraging these technologies for enhanced learning and preventing misuse. The market for academic integrity solutions is likely to expand, with a focus on developing tools and strategies that address the evolving landscape of AI in education. Institutions may invest more in technologies and practices that ensure fairness and prevent dishonesty while fostering open discussions about ethical use of AI tools.