Welcome to the Disinformation Space!
When? Tuesday, September 22, 2026
Where? Dresden, Informatikfest of the German Informatics Society (GI)
Who organizes this? Marc-Oliver Pahl (IMT Atlantique, German Chapter of the ACM, GI), Michael Jülich (AIHorizon)
Call for Participation
Disinformation powered by generative AI is no longer a theoretical concern — it is a rapidly evolving socio-technical challenge that affects democracies, institutions, and everyday citizens. Addressing it requires technical rigor, critical reflection, and collective responsibility.
We invite researchers, security experts, social scientists, policymakers, technologists, journalists, activists, and engaged citizens to contribute to this workshop. Whether through empirical research, technical prototypes, case studies, conceptual reflections, or critical perspectives — your insights matter.
Our goal is not only to analyze the automation of the disinformation lifecycle, but also to create a space for open, engaged, and constructive dialogue across disciplines and communities. We seek contributions that challenge assumptions, bridge technical and societal viewpoints, and help shape responsible responses to AI-driven manipulation.
Join us in building a forum that sparks meaningful exchange, fosters collaboration, and generates momentum that extends far beyond the workshop itself.
Let's move from awareness to action, together. Subscribe on the right to receive all updates!
Motivation / Introduction
Generative AI systems are transforming how information is created, distributed, and consumed. While these technologies offer powerful capabilities for communication and creativity, they also lower the barrier for producing targeted, scalable, and convincing disinformation. Understanding this shift is essential for safeguarding democratic processes, public discourse, and societal trust.
Context / Domain Description
This workshop focuses on how AI reshapes the disinformation kill chain, from reconnaissance and message design to automated content generation, distribution, and adaptation. We explore the technical mechanisms, socio-technical dynamics, and adversarial strategies that leverage AI models to produce synthetic text, audio, images, and video at unprecedented scale. The domain spans computer science, security, HCI, ethics, social computing, and policy.
Possible Contributions
We invite work that analyzes disinformation pipelines, detects or mitigates AI-generated manipulation, models adversarial capabilities, or evaluates defensive interventions. Contributions may include empirical studies, system designs, methodological approaches, case analyses, or conceptual frameworks that deepen our understanding of the automated disinformation lifecycle.
Invitation to Submit and Participate
Researchers, practitioners, policymakers, and industry experts are encouraged to submit papers, demos, or position statements. The workshop aims to foster interdisciplinary exchange and build a community capable of addressing emerging threats posed by generative AI. Join us to advance research and shape responsible, resilient information ecosystems.
