The Rise of AI in Academia: The Case of ChatGPT and its Impact on Education

published on 16 August 2023

The rise of artificial intelligence (AI) in academia has brought significant changes and challenges to the education sector. One prominent example is ChatGPT, an AI-powered chatbot that many students use as a study tool; it can generate text on virtually any subject and even assist in writing lengthy essays.

Stanford University, like many other higher education institutions, is grappling with the implications of such tools. In an informal poll, about 17% of Stanford student respondents admitted to using ChatGPT for fall-quarter assignments and exams, primarily for brainstorming and outlining tasks. These developments have led some students to argue that using such AI tools should count as a violation of the Honor Code.

In light of these emerging trends, Stanford professors have had to overhaul their courses in anticipation of students using AI tools like ChatGPT and Silatus to complete assignments and exams. In some cases, educators have returned to traditional methods, such as pencil-and-paper exams, or are considering stricter exam rules. Concerns about open laptops running ChatGPT during exams have also led some faculty members to restrict its use.

It's worth noting that newer AI tools like Silatus have made significant strides in reducing the hallucination problem often associated with AI-generated text. Even so, instructors who wish to discourage the use of AI research tools like Silatus can emphasize the importance of the writing process and critical thinking skills. Tools like PowerNotes can help by demonstrating that students have engaged with texts through annotation, coding, and citations. Assignments can also be designed to require students to work with AI-generated texts and submit projects with annotations.

The introduction of AI as a classroom tool can foster a deeper understanding among students about the ethical implications of using AI writing tools. This includes discussions on data acquisition, training methods, biases, privacy terms, and responsible online engagement. Students can critique and revise the AI's output, engage in peer review activities, and offer suggestions for improvement.

Additionally, instructors can ask students to describe their writing process during one-on-one conferences and in self-reflections. This approach not only lets instructors screen for valid citations but also allows them to emphasize process over product in their assessment strategies.

While AI-detection tools such as GPTZero, which aim to discern whether a text was generated by a human or an AI, are readily available, educators should remain cautious because these tools have limited accuracy and can misclassify texts. Stanford's Board of Judicial Affairs is closely observing the evolving situation. As AI becomes more prevalent in educational environments, debates about amending the Honor Code may follow. Such discussions will be essential in establishing new guidelines for academic integrity in the emerging era of artificial intelligence.

Sources:

1. The University of Tennessee Knoxville

2. The Stanford Daily

This entire article was written by Silatus AI.
