
AI Workshop for Civil Society: Understanding AI Harms and the Need for Documentation


Bali, Indonesia, 13-15 May 2025 – Hari Shankar, INITIATE.MY’s Data Scientist, participated in an intensive workshop hosted by EngageMedia. The programme was designed to equip civil society actors with the tools needed to ensure that artificial intelligence (AI) is developed and deployed in an accountable and ethical manner. Led by award-winning journalist Karen Hao, the workshop brought together experts from civil society, academia, and government to explore the multifaceted challenges of AI governance. Sessions ranged from foundational principles to hands-on methods for investigating and documenting AI-related harms.

First day

  • Sessions reinforced the idea that the challenges posed by AI are not entirely new; rather, they are extensions of long-standing human rights concerns.
  • Discussions surveyed international regulatory approaches. In particular, the European Union’s risk-based AI Act was highlighted as a pioneering framework: it categorises AI systems by their potential for harm, ranging from minimal risk to applications deemed “unacceptable,” such as social scoring systems, which are prohibited outright.
  • Speakers stressed the importance of grounding AI governance in established human rights principles. Participants examined frameworks such as the UN Guiding Principles on Business and Human Rights, which delineate the responsibilities of both governments and private companies: states are expected to protect human rights, while businesses must exercise due diligence to avoid causing harm.
  • Human Rights Impact Assessments (HRIAs) were presented as a practical, proactive tool for evaluating the effects of AI systems. These assessments help organisations critically examine their assumptions, anticipate how vulnerable communities might be affected, and identify concrete strategies to mitigate potential harms.

Second day

  • Sessions examined the real-world implications of AI, with a focus on systemic bias, the spread of misinformation, and the need for critical engagement with industry narratives. One impactful session showed that AI systems are not inherently objective: when trained on historical data that reflects societal biases, they can perpetuate and even exacerbate those biases. Concrete examples included a recruitment tool built on OpenAI’s GPT model that showed bias against job applicants with racially distinctive names, and a welfare algorithm in Rotterdam that disproportionately penalised minority communities.
  • The role of AI in spreading misinformation and disinformation was another urgent topic, particularly in the context of preventing and countering violent extremism (PCVE). Participants examined how social media algorithms can be exploited to promote hate speech and extremist content, and discussed the growing threat of generative AI, which can produce persuasive, misleading propaganda at scale.
  • A session on the “AI B.S. Detector” encouraged participants to maintain healthy scepticism toward overblown AI marketing claims and alarmist narratives about hypothetical, long-term threats. The general consensus was that civil society should focus on the tangible, immediate harms caused by AI, such as discriminatory outcomes, breaches of privacy, and social polarisation, rather than on speculative concerns about existential risks.

The workshop also introduced several important tools, resources, and regional insights to support accountability efforts:

  1. UNESCO’s Readiness Assessment Methodology (RAM), which is currently being implemented in over 70 countries, including Indonesia, to assess national preparedness for ethical AI deployment. 
  2. AI incident databases, such as the OECD’s AI Incidents Monitor (AIM) and the community-driven AI, Algorithmic and Automation Incidents and Controversies (AIAAIC) archive. These platforms serve as essential resources for documenting real-world AI failures and informing better policies and regulatory frameworks.
  3. Regional insights into the AI development landscapes in Indonesia and Malaysia. These case studies included updates on emerging national strategies and governance initiatives, such as Malaysia’s newly established National AI Office (NAIO), which reflect a growing recognition of the need for responsible innovation in the region.

The workshop strengthened INITIATE.MY’s capacity to protect marginalised communities from the risks of AI exploitation. Malicious actors exploit not only algorithms but also data voids, manipulated content, and weak platform design to amplify hate and violence. With a stronger understanding of AI governance and harm tracking, INITIATE.MY can advocate more effectively for evidence-based, rights-respecting policy improvements.

Participants at the three-day workshop, which offered hands-on training in ethical, rights-based approaches to AI accountability.
