In late 2023, faculty members at a mid-sized European university noticed something unusual: a sharp rise in essays that were perfectly fluent yet lacked original arguments, citations, and nuanced critical thinking. After informal comparisons and manual reviews, they suspected a new culprit—generative AI tools like ChatGPT.
This case study examines how Riverton University (a fictionalized yet realistic composite based on real-world practices) addressed the surge in AI-generated student work. The university’s multi-layered strategy offers a model for others navigating similar challenges.
Background: The Problem Emerges
At the start of the academic year, instructors across several departments—English, Business, and Social Sciences—reported similar concerns:
- Assignments that lacked depth but had flawless grammar
- Repeated phrasing suggestive of AI output
- Decreased student participation in discussion-based tasks
Initial plagiarism scans revealed no matches, raising suspicion that AI, rather than traditional cheating, was responsible. The university knew it had to act—but how?
Phase 1: Assessment and Stakeholder Engagement
Riverton University first launched a faculty survey and student focus groups to understand the scale of the issue.
Key Findings:
| Group | Insight Gathered |
|---|---|
| Faculty | 67% suspected AI use in written submissions since ChatGPT launched |
| Students | 42% admitted trying AI tools, mostly for idea generation or outlines |
| IT Department | Needed guidelines on acceptable AI software usage on university devices |
This data formed the foundation for a university-wide action plan, rooted in transparency and support rather than punishment alone.
Phase 2: Policy Development
One of Riverton’s first steps was to revise its academic integrity policy to explicitly address AI. The new framework categorized AI involvement based on intent and outcome, rather than mere use.
| AI Use Category | Examples | Status |
|---|---|---|
| Supportive | Grammar correction, outline generation, idea brainstorming | Permitted with disclosure |
| Substitutive | Full draft writing, auto-responses in forums, essay completion | Prohibited; considered academic misconduct |
| Ambiguous | Summarizing readings, rewriting sections with AI paraphrasers | Context-dependent; requires instructor permission |
This nuanced approach enabled flexibility while reinforcing the university’s standards of authorship and originality.
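The three-tier framework lends itself to a simple rule table. The sketch below is purely illustrative (the category names follow the table above, but the `check_submission` helper and its outcome strings are hypothetical, not Riverton's actual system):

```python
# Hypothetical encoding of Riverton's three AI-use categories as a rule check.
# Category names come from the policy table; everything else is invented.

AI_USE_POLICY = {
    "supportive": "permitted with disclosure",
    "substitutive": "prohibited",
    "ambiguous": "requires instructor permission",
}

def check_submission(category: str, disclosed: bool, instructor_ok: bool = False) -> str:
    """Return a policy outcome for a submission's declared AI-use category."""
    if category not in AI_USE_POLICY:
        raise ValueError(f"unknown category: {category}")
    if category == "substitutive":
        return "academic misconduct"        # prohibited regardless of disclosure
    if category == "ambiguous" and not instructor_ok:
        return "needs instructor permission"  # context-dependent cases
    if category == "supportive" and not disclosed:
        return "disclosure required"        # permitted, but only with disclosure
    return "compliant"
```

Encoding the policy this way makes the intent-based distinction explicit: substitutive use is misconduct no matter what, while supportive use only fails on a missing disclosure.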
Phase 3: Educating Students and Faculty
Rather than penalize students blindly, Riverton invested in digital literacy training. The university launched a campaign titled “Write Right in the Age of AI”, which included:
- Online modules about ethical AI use
- Department workshops on citation and transparency
- A chatbot to answer questions about acceptable practices
“Our goal isn’t to ban AI—it’s to teach students to use it responsibly,” said Dr. Elisa Hartmann, Academic Integrity Officer.
Instructors were also trained to spot signs of AI-generated work, such as:
- Uniform sentence length and tone
- Absence of concrete examples
- Overuse of generic transitional phrases
Phase 4: Technology Integration
To support detection and fairness, the university adopted Originality.AI and Turnitin’s AI Detection module. However, instructors were encouraged to combine these tools with manual review.
Pros and Cons of the Tools Used:
| Tool | Strength | Limitation |
|---|---|---|
| Turnitin AI Detection | Integrated with LMS, simple reports | False positives on high-level student writing |
| Originality.AI | High sensitivity for GPT models | Cost and accuracy vary by text type |
The university emphasized that detection tools are indicators, not judges. A panel including faculty, the student, and a neutral academic advisor reviewed any flagged case.
Phase 5: Assessment Redesign
Riverton also tackled the root of the issue—assessment design. Many assignments were too generic, allowing AI to generate passable work.
Reforms included:
- Replacing “write an essay on X” with localized or personal application prompts
- Adding oral components or in-class reflections
- Incorporating draft submission checkpoints and peer review stages
These measures not only reduced AI misuse but also increased student engagement and writing quality.
Results After One Semester
| Metric | Fall 2023 | Spring 2024 |
|---|---|---|
| Suspected AI misuse cases | 72 | 19 |
| Student disclosure of AI use | 6% | 34% |
| Policy violations requiring discipline | 15 | 3 |
| Faculty satisfaction with support | 53% | 81% |
The case demonstrated that open dialogue, policy clarity, and effective pedagogy can transform how AI is used in academia.
A Blueprint for Action
Riverton University’s experience offers a scalable model for institutions facing the AI revolution in student work. Instead of relying solely on detection, the university prioritized:
- Clear and flexible policies
- Ethical AI education
- Assessment redesign
- Support for both students and faculty
As the landscape of academic writing evolves, the core mission remains: to cultivate authentic learning, critical thinking, and personal authorship, with or without the aid of AI.