
The Use of Artificial Intelligence (AI) in Academia: Research Ethics and Tools
Event Report Article by Fen-Ni Yu
This lecture, organized by the CJD program, sought to respond to the growing ambivalence and ethical anxiety surrounding the use of artificial intelligence (AI) in academia. As generative AI becomes increasingly common in academic writing and research, scholars and students alike often find themselves torn – worried, on the one hand, that misuse of AI may lead to ethical breaches, yet unable, on the other, to deny the productivity it affords. For this lecture, the program invited Professor Alexandre Erler to explore the topic “The Use of AI in Academia: Research Ethics & Tools,” examining the tension between technological application and ethical responsibility from a philosophical perspective.
The lecture was structured around three main axes: a mapping of current AI tools used in academic contexts; the potential and promise of these tools; and the ethical risks and gray areas they introduce. Speaking from the perspective of a philosopher, Erler wove between technological development and humanistic reflection, leading us to rethink the essence and future trajectory of “academic creation.”
Header Image: Photograph of speaker Professor Alexandre Erler, taken by CJD team member Li Qi during the event.
1. AI Tools in Academia: Functions, Types, and Potentials
Erler began with a detailed overview of currently available generative AI applications for academic users. These tools can be generally categorized as follows:
(1) Text Generation in Writing
First are the most prominent large language models (LLMs), such as ChatGPT, Google Gemini, Claude (Anthropic), Microsoft Copilot, and Perplexity AI, which can be used for drafting papers, refining language, generating outlines, and more. Chinese domestic models like Baidu ERNIE and DeepSeek are becoming increasingly advanced, though they raise concerns over political censorship and biased corpora. As an alternative, he introduced TAIDE, a Taiwanese-developed AI language model that emphasizes traditional Chinese character processing and sensitivity to local languages – an option tailored for researchers working in Chinese.
Beyond LLMs, there are also AI-powered writing aids, such as paraphrasing and summarizing tools like Quillbot, and writing assistants like Grammarly and Wordtune, which help with sentence rewriting, grammar correction, and stylistic improvements. These tools, however, are more about editing than generation.
(2) Multi-Functional Research Assistants
AI can also act as a research assistant. ChatGPT, Google Gemini, and Perplexity AI offer deep research functions, helping users synthesize online data and generate detailed reports. Elicit AI supports literature summarization and integration, while Research Rabbit helps users search for related papers and visualize academic networks. Tools like Tome AI and Gamma can assist in preparing presentations. These signify a shift into the era of AI “agents” in academic work, where humans can delegate specific tasks (like literature gathering and data integration) to AI, reshaping traditional timelines and labor structures.
AI is also capable of producing vivid and entertaining audiovisual content in the form of “deepfakes.” Tools such as Google Veo 3 and OpenAI’s premium Sora can generate lifelike videos from prompts. Unlike Veo 3, Sora does not yet generate sound effects, though this function is expected to be added at some point in the future. Although such outputs are technically “deepfakes,” Erler pointed out their educational potential – especially in immersive, scenario-based teaching for history or culture (e.g., reenacting scenes from Ancient Rome or Imperial China).
AI also integrates with existing platforms like Adobe, Zotero, and EndNote, each of which now includes its own AI assistant. There are also AI code generators, such as OpenAI’s Codex and GitHub Copilot.
(3) AI Detection Tools
Contrasting the tools used for knowledge production, a different set of AI tools has emerged to police academic integrity. Tools like Turnitin, GPTZero, and Copyleaks are designed to detect AI-generated content and estimate how much original labor a human author contributed. However, these tools remain controversial and are prone to inaccuracies that may unfairly penalize students.
(4) Looking Ahead
Erler posed the question: could we eventually see the rise of “superhuman AI tutors” – personalized, 24/7, highly adaptive instructors that might reshape the structure of education as we know it?
2. AI’s Promises in Higher Education: Efficiency, Equity, and Personalization
According to Erler, one of the main promises highlighted by proponents is that AI in the academic context will increase our productivity alongside our efficiency: people will be able to get more work done within a given amount of time, and also to get more done overall (as logically follows if they keep working for as long as they currently do). For an academic researcher, this could mean, for instance, publishing twice as many articles per year as they currently do.
Specifically, in higher education, AI tools can:
(1) Save time and resources: For instance, by quickly generating literature summaries that help researchers grasp new trends.
(2) Improve writing quality: Particularly useful for non-native English speakers, AI can correct grammar and enhance expression, contributing to more equitable academic participation.
(3) Enable personalized learning: AI tutors can adapt to different students’ learning speeds, challenging the one-size-fits-all approach.
(4) Enhance visual appeal in teaching: Integrating video and animation generation into classroom material can make learning more engaging and memorable.
Yet, while these optimistic projections are not without merit, Erler reminded us that they are accompanied by unresolved ethical tensions.
3. Ethical Risks and Challenges: Rethinking Responsibility and Creativity
This part of the lecture was perhaps the most compelling, as Erler tackled the ethical and practical dilemmas of AI use in academic practice. He began with concrete examples: some top French universities have banned ChatGPT to prevent academic dishonesty; Michigan Law School barred ChatGPT in its application process; yet some institutions have lifted bans despite these concerns. Clearly, plagiarism is a major worry – but how we understand students’ relationships to these tools reflects broader value judgments, unequal resource access, and even ecological concerns.
(1) Cheating and the Blurring of Academic Integrity
Because AI can easily generate text or paraphrase passages, the boundary of what constitutes “academic honesty” becomes increasingly vague. Erler suggested that we move beyond total prohibition or laissez-faire attitudes and instead encourage students and peers to use AI responsibly – disclosing when and how it’s used, remaining accountable for all final outputs, and preserving a reflective distance between human and tool. Such practices maintain critical integrity in scholarly production.
(2) Detection Inaccuracies and Inequities
We’ve already mentioned tools like Turnitin, but can these AI detectors really be trusted? What happens when they falsely flag an authentic writer as a “cheater” while savvy users find ways to evade detection? Erler cautioned against relying too heavily on vague metrics like “50% AI-generated,” which don’t meaningfully assess intellectual labor.
Erler also encouraged us to reflect on what these scores truly signify. Are we reproducing new inequalities? And what kind of originality are we really pursuing?
(3) De-skilling and the Alienation of Writing
AI might dull students’ abilities for critical thinking and self-expression. Erler proposed that future education should emphasize co-authorship between humans and AI (what matters is recognizing the relational dynamic between us and the tool). Teachers can also adopt small-scale strategies – such as designing assessments that require real-time interaction and in-person presentation.
(4) Homogenization of Thought and Style
Long-term reliance on AI risks flattening diversity in academic voice and thinking. But Erler avoided falling into fatalism. Instead, he advocated drawing input from diverse sources rather than practicing with one particular LLM, and making a substantial intellectual contribution to our own writing.
Other ways include experimenting with “personalized AI models” that mimic the user’s style and thus prevent the homogenizing effects of GPT-style language.
(5) Environmental Cost and the Collective Action Problem
LLMs consume significantly more energy than basic search engines, raising concerns about carbon emissions and water usage. Erler criticized the current emphasis on self-restraint, noting that such appeals often fail in collective action contexts. He proposed institutional-level policies, such as usage caps or investments in low-energy AI models and green infrastructure.
4. Discussion and Reflections
During the open-floor discussion, participants raised issues that ranged from environmental ethics to the reconstitution of subjectivity – highlighting how generative AI is reshaping both academic culture and the practice of knowledge production.
(1) Creativity and Cognitive Boundaries: Is AI truly creative, or is it just recombining what’s already known? Erler distinguished between weak innovation (surface novelty) and strong innovation (paradigm-shifting thought like Einstein’s relativity). AI currently does the former well, but not the latter. Still, it can offer enlightening prompts and classification schemes that aid philosophical reasoning.
(2) The Absence of Emotion and Embodiment: Can AI Replace Teachers? A student asked whether AI’s lack of embodiment and emotional nuance limits its educational role. Erler agreed: current AI lacks consciousness and affective processing, making it inadequate for moral education or relational pedagogy. He advocated for a hybrid educational model centered on human-machine collaboration.
(3) The Emergence of Multi-Self Subjectivity: As humans engage more with AI agents, are we producing fragmented or multiple selves? Erler noted that AI’s deep integration into language and cognition could alter how we experience subjectivity – not pathologically, but culturally, through shifts in how we interact and understand ourselves.
(4) Environmental Justice and Corporate Responsibility: Carbon Labeling for AI? One student proposed a “carbon label” for AI services. Erler endorsed the idea, citing studies where displaying energy consumption data in households significantly reduced electricity use. He argued that transparency can act as a behavioral nudge and that environmental externalities should be made visible.
(5) Privacy and Anthropological Ethics: A student expressed concern about using AI to transcribe sensitive interviews. Could these be stored or exploited? Erler warned that uploading such data to commercial AI servers raises ethical red flags. He advised data anonymization and using offline tools or encrypted solutions for handling confidential research.
(6) Institutional Standards and Global Academic Inequality: Different journals and universities have varying policies on AI usage – could this disadvantage scholars from certain regions? Erler acknowledged this issue and urged institutions to provide fair access to tools, licenses, and digital literacy training.
5. Conclusion: Toward a New Academic Culture of Collaborative Ethics
In closing, Professor Erler noted that while he does not believe AI will eliminate human scholarship in the foreseeable future, it is difficult to predict what will happen in the long run – human obsolescence in the face of superintelligent AI remains a distinct possibility.
This uncertainty, in turn, compels us to rethink what responsible academic practice truly means. Erler introduced the concept of “responsible inefficiency”: intentionally preserving space for slowness, dialogue, and diversity in the face of technological optimization. AI should serve as a partner – not a substitute – in the pursuit of knowledge. And as we embrace its potentials, we must remain committed to academic ethics, critical reflection, and the cultivation of humanistic values.
Event Information
Poster created by CJD team member Ip Po Yee for the promotion of the event
Event Title: The Use of Artificial Intelligence (AI) in Academia: Research Ethics and Tools
Speaker: Alexandre Erler, Institute of Philosophy of Mind and Cognition, National Yang Ming Chiao Tung University
Organizer:
- International Center for Cultural Studies, National Yang Ming Chiao Tung University
- International Master’s Program in Inter-Asia Cultural Studies, National Yang Ming Chiao Tung University
- Conflict, Justice, Decolonization: Asia in Transition in the 21st Century
Date: May 27, 2025
Time: 13:30 – 15:00 (Taiwan Standard Time, GMT+8)
Venue: R103 HA Building 3, NYCU, 1001 University Road, Hsinchu, Taiwan 300
Event link: https://iccs.chss.nycu.edu.tw/zh/activity.php?USN=1582
About the Speaker: Alexandre Erler is currently an Associate Professor at the Institute of Philosophy of Mind and Cognition at National Yang Ming Chiao Tung University. He holds a Ph.D. in philosophy from Oxford University and previously served as a researcher at the Uehiro Centre for Practical Ethics, Oxford. He has long focused on how technological developments impact human conditions, social systems, and ethical values. His current research pays close attention to the philosophical, ethical, and political challenges posed by emerging AI technologies, including their implications for human rights and democracy.