Cyberbullying among University Students in the Age of Algorithmic Platforms: Artificial Intelligence, Deepfakes, and Challenges for Science Communication

This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright
The authors shall retain the copyright of their work but allow the Publisher to publish, copy, distribute, and convey the work.
License
Journal of Artificial Intelligence and Science Communication (JAISC) publishes accepted manuscripts under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. Authors who submit their papers for publication in JAISC agree to have the CC BY 4.0 license applied to their work, allowing anyone to reuse the article, or any part of it, free of charge for any purpose, including commercial use. As long as the author and original source are properly cited, anyone may copy, redistribute, reuse, and transform the content.
Received: 6 February 2025; Revised: 24 July 2025; Accepted: 7 August 2025; Published: 12 September 2025
In the context of increasingly algorithmically driven digital platforms, cyberbullying has evolved into a complex communication phenomenon shaped by artificial intelligence, platform design, and automated content distribution. This study examines the prevalence and characteristics of cyberbullying among university students, focusing on awareness, reporting behaviour, and perceptions of institutional support within AI-mediated communication environments. The research was conducted on a sample of 67 university students using a structured online questionnaire. Results indicate that 30% of participants reported experiencing cyberbullying, while formal reporting to institutional authorities remained extremely low (3%). Although awareness of the term "cyberbullying" was high, only half of the respondents demonstrated a comprehensive understanding of the phenomenon. Anxiety, stress, and reduced self-confidence emerged as the most frequently reported consequences. The study further situates cyberbullying within contemporary developments in artificial intelligence, including algorithmic amplification and AI-supported content moderation, which influence the visibility of harmful content and user responses. Despite increased awareness, students rarely seek institutional support, often normalizing or ignoring abusive behaviour. The findings highlight the need for preventive strategies grounded in digital literacy, transparent AI governance, and science communication approaches that address both human and algorithmic actors, positioning cyberbullying as a critical challenge at the intersection of artificial intelligence, digital communication, and youth well-being.
Keywords:
Cyberbullying; Artificial Intelligence; Algorithmic Platforms; Deepfake Abuse; Science Communication; Digital Literacy; University Students
