New study: Fears of widespread cheating due to AI chatbots in schools were overblown

TL;DR:

  • Research from Stanford University challenges fears of increased cheating in schools due to AI chatbots like ChatGPT.
  • Surveys across 40 US high schools show that cheating rates have remained consistent despite the presence of AI chatbots.
  • Pew Research Center’s survey reveals that many teenagers are unaware of ChatGPT, and a minority have used it for schoolwork.
  • The data suggests that AI chatbots are not currently disrupting the educational landscape as anticipated.
  • High school students who have used AI chatbots primarily use them for generating ideas or editing assignments.
  • The findings underscore the need to focus on responsible and critical use of AI tools in education.

Main AI News:

In December of last year, the arrival of an AI chatbot called ChatGPT in high school and college classrooms prompted widespread concern about potential mass cheating across the United States. To mitigate the threat of bot-fueled plagiarism, educational authorities in major public school districts such as Los Angeles, Seattle, and New York City swiftly barred ChatGPT from school-issued laptops and Wi-Fi networks. However, it appears that the initial alarm may have been exaggerated, particularly in high schools.

A recent study conducted by Stanford University suggests that the integration of AI chatbots has not led to a significant increase in cheating rates within schools. Surveys conducted across more than 40 US high schools this year indicate that approximately 60 to 70 percent of students admitted to engaging in cheating, a percentage consistent with previous years, as reported by Stanford’s education researchers.

Denise Pope, a senior lecturer at Stanford Graduate School of Education and co-founder of an educational nonprofit, noted, “There was a panic that these AI models will allow a whole new way of doing something that could be construed as cheating, but we’re just not seeing the change in the data.”

ChatGPT, developed by OpenAI in San Francisco, captured public attention with its remarkable ability to generate human-like essays and emails. This innovation led advocates of classroom technology to assert that AI tools like ChatGPT would revolutionize education. Conversely, critics warned that these tools, which are adept at creating content, could fuel widespread cheating and misinformation in schools.

However, both Stanford University’s research and a recent report from the Pew Research Center are casting doubt on the idea that AI chatbots are disrupting public education. Surprisingly, many teenagers are unaware of ChatGPT, as indicated by Pew’s survey of over 1,400 US teenagers aged 13 to 17. Among the respondents, roughly one-third had heard nothing about the chatbot, while 44 percent had only heard a little about it. Only 23 percent claimed to have heard a lot about ChatGPT.

Responses to the survey varied based on race and household income, with white teens and those from higher-income households being more likely to have heard about ChatGPT. Additionally, Pew’s survey inquired whether teenagers had used ChatGPT for their schoolwork, to which only 13 percent answered affirmatively.

These findings suggest that, for now, ChatGPT has not become a disruptive force in schools, with the majority of teenagers, even those aware of it, not incorporating it into their academic endeavors.

In the context of academic integrity, cheating has been a long-standing issue in educational institutions. According to surveys conducted between 2002 and 2015, 64 percent of high school students admitted to cheating on tests, and 58 percent confessed to plagiarism. Remarkably, since the introduction of ChatGPT in 2022, the overall incidence of high school students reporting recent engagement in cheating has remained relatively stable, according to Stanford’s research.

The research, however, does not provide insight into how frequently college students employ chatbots for academic dishonesty: neither the Stanford nor the Pew researchers surveyed college students about their use of AI tools.

This year, Stanford researchers added questions to their surveys that specifically addressed high school students’ use of AI chatbots. The results showed that some students in several high schools on the East and West Coasts had used an AI tool or digital device, such as ChatGPT or a smartphone, as an unauthorized aid during school tests, assignments, or homework within the last month.

Among high school students who admitted to using an AI chatbot, the majority used it to generate ideas for papers, projects, or assignments. Fewer reported using it for editing or completing portions of their work, and a smaller proportion admitted to using it to produce an entire paper or assignment.

These findings urge a shift in the discourse surrounding chatbots in education, emphasizing the importance of teaching students to critically understand and utilize AI tools rather than solely focusing on cheating concerns. As Victor R. Lee, an associate professor at Stanford Graduate School of Education, who led the research on cheating alongside Dr. Pope, remarked, “There’s so much more that could and should be talked about in schools beyond viewing AI as an uncontrollable temptation that undermines everything.”

While schools are in the process of formulating acceptable usage guidelines for AI tools, students are developing nuanced perspectives on the use of ChatGPT for academic purposes. Pew’s research revealed that only 20 percent of teenagers aged 13 to 17 deemed it acceptable for students to use ChatGPT for essay writing. In contrast, nearly 70 percent believed it was acceptable to use it to research new topics.

Although most students may not endorse using chatbot-generated content for essays, it remains crucial for educators to address the evolving role of AI in education, emphasizing responsible and ethical use while encouraging students to harness these tools for meaningful learning.

Conclusion:

The apprehension surrounding AI chatbots’ impact on academic integrity in schools appears to be exaggerated. Stanford University’s research and Pew Research Center’s findings suggest that while concerns about cheating persist, the focus should shift towards fostering responsible and critical utilization of AI tools in education. This shift presents an opportunity for the education technology market to provide guidance and support for educators and students in navigating the evolving landscape of AI in the classroom.
