Investigative Feature
It began with a bibliography that looked perfect. The citations were formatted correctly, the authors were real academics, and the book titles sounded authoritative. But when the professor tried to find the books, they vanished. They didn't exist. In November 2025, a scandal erupted at HKU regarding a PhD paper riddled with "fictitious AI-generated references." This phenomenon, known as "hallucination," reveals the dangerous double-edged sword of the AI era.
The AI had "hallucinated", inventing books, authors, and page numbers that simply did not exist to satisfy the user's prompt. This happens because students fundamentally misunderstand the technology. They treat the chatbot as a search engine, which retrieves facts, rather than a Large Language Model, which predicts the most probable next word based on statistical patterns. When an AI doesn't know the answer, it often makes one up with total confidence, creating a "truth trap" for the unwary.
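The distinction between retrieving facts and predicting words can be made concrete with a toy sketch. The bigram model below is a deliberately tiny stand-in for a real LLM, built on an invented mini-corpus: it picks the statistically most probable next word, and nothing in it ever checks whether the result is true.

```python
from collections import Counter, defaultdict

# Invented mini-corpus for illustration only.
corpus = ("the study cites a book . the study cites a paper . "
          "the professor checks the book .").split()

# Count which word follows which: the "statistical patterns".
next_words = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    next_words[w1][w2] += 1

def predict(word):
    # Return the most frequent follower -- plausible, never verified.
    return next_words[word].most_common(1)[0][0]

print(predict("study"))  # prints "cites" -- a fluent continuation
print(predict("cites"))  # prints "a" -- the model has no way to check
                         # whether the cited work actually exists
```

Scaled up by billions of parameters, the same principle produces fluent, confident citations; fluency, not truth, is what the machinery optimises for.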
This incident is not merely a case of academic dishonesty; it is a symptom of a much deeper crisis. It highlights a profound digital literacy gap where students possess the technical skill to use these powerful tools but lack the critical thinking required to verify them. As reliance on these "silent partners" grows, we risk raising a generation of scholars who are efficient editors of machine output, but who have forgotten the rigorous struggle of original thought. The machine can write, but it cannot care about the truth—and that is where the human student must draw the line.
It is 11:00 PM on a weekday. For students, this hour usually marks the peak of academic stress from assignment deadlines and examinations, accompanied by the sound of keyboards typing, textbooks flipping and calculators clicking.
But tonight, for thousands of undergraduates, the room is particularly quiet. The struggle has been outsourced.
With a few keystrokes, complete solutions appear on screen, generated in seconds. The doubt surrounding this digital assistance is vanishing in the blink of an eye.
According to a report by the South China Morning Post in July 2025, the taboo has been broken at the highest level: at least 9 of the 16 top scorers in this year's DSE exams openly admitted to using AI tools like ChatGPT or DeepSeek during their preparation.
And according to a 2024 survey by Campus Technology, students' most common AI use cases are searching for information (69%), checking grammar (42%), summarizing documents (33%), paraphrasing documents (28%) and creating a first draft (24%).
This phenomenon shows that AI is no longer a secret weapon but standard equipment to help students with their studies.
Students are not only using AI to help with their studies; it is also shaping their choices of university subjects and future careers.
We collected data on local student habits, and the results sharply challenge the "lazy student" stereotype often pushed by older generations. The primary motivation isn't to avoid work; it is to survive it. In Hong Kong's hyper-competitive education system, efficiency is the most valuable currency.
A 2024 survey by The Standard revealed that AI trends now influence students' university subject choices, sometimes blindly: 84% of students consider choosing subjects and jobs that are not easily replaced by AI, while 66% are moving towards AI-related fields without being familiar with the topics involved.
Andy Chan Ying-kit, a supervisor at HKFYG, also said: "Students should not blindly follow the trend of AI. They should have a thorough understanding of the content of each subject and make decisions after careful consideration."
We also spoke to Luk Wai-man, a first-year university student who completed the DSE this year.
“I used AI to help with my studies during the DSE, but it affected my university subject choices,” he said.
He changed his original subject choices mainly because he believes AI could easily replace his intended career path in the future.
“I originally wanted to study data science, but AI is advancing so fast. That's why I chose a professional subject like nursing, which is less likely to be replaced by AI easily,” Luk added.
This reveals the challenge students are facing: balancing the benefits of using AI against the fear of being replaced by it.
Returning to the PhD student's failure, we asked academic experts to critique the mistake. Their verdict is clear: the student did not fail simply because they used AI; they failed because they lacked AI literacy.
Scholars define this as "Automation Bias"—the tendency for humans to suspend their critical thinking when presented with output from a computer. The student accepted the machine's output as truth without verification, treating the tool as an oracle rather than a generator.
The expert consensus provides a new verdict for education: "AI won't replace you; a human using AI will replace you." The competitive edge isn't the software itself, but the human capacity to audit, verify, and improve the algorithm's output. The future belongs to those who treat AI as a junior assistant that requires constant supervision, not a replacement for their own brain.
Universities are continuously updating their policies and standards to match this new reality. The shift has been dramatic. In 2023, many universities treated any student use of AI as academic dishonesty.
Today, many universities, including HKU and HKBU, have moved to provide generative AI tools directly to students, offering courses on how to use these tools in their studies while upholding academic honesty. It is a 180-degree turn in their attitude towards AI tools in education.
While AI tools are now permitted, to ensure students are still learning, universities are changing their assessment methods.
For example, Mr Wong, a fourth-year computer science student, told us that professors now conduct random one-on-one interviews after code assignments are submitted.
“When we hand in the code, the professor will randomly select some of us for an interview,” Wong said. “They ask detailed questions about the logic and decisions behind the code to make sure we actually understand it, ensuring that we didn't just rely on AI to write it for us.”
We stand at a precipice. The students currently navigating the DSE and university applications are the pioneers of a chaotic new frontier. They face a university system still writing the rulebook as it plays the game, vacillating between embracing the future and policing the past. The future of education in Hong Kong is no longer about memorizing answers, but verifying them. It has become an arms race between educational assessment and technological advancement.
This shift marks the death of rote memorisation and the birth of a new kind of competency. The diploma of the future may not prove what you know, but how well you can orchestrate synthetic intelligence to solve complex problems. However, this transition is turbulent. Students are currently caught in the crossfire, pressured to use AI to compete with their peers, yet terrified of being flagged by imperfect detection software.
As our interviewees suggest, the ultimate question is no longer whether AI will replace students, but whether students who can wield AI critically will replace those who rely on it blindly. In this digital wild west, the most important skill isn't coding—it's critical thinking. To see how real students are handling this high-stakes balancing act, we looked beyond the data and listened to their stories in the user logs below.
HKU PhD paper investigation revealed AI invented book titles that do not exist.
"Mr Luk (2025 DSE candidate): I used AI to help with my study during the DSE; it improved my efficiency."
"Mr Mak (Year 4 CS): AI helps me structure my thoughts and gives me ideas when I am stuck on assignments."
Source: Campus Technology
AI won't replace you.
A human using AI
will replace you.
> DIAGNOSIS: LACK_OF_VERIFICATION
> SOLUTION: CRITICAL_THINKING
Prof. Xiang Zhang
HKU President
Source: Exclusive Interview (June 2025)
Audio transcripts from interviews