This consultation is now closed.

Read the Summary Report: Promoting Information Integrity in Elections.

You can now read more about the UNDP Oslo Governance Centre's Information Integrity Portfolio here and about the Action Coalition on Information Integrity here.

Thank you to all participants around the globe who shared their valuable knowledge and expertise in this SparkBlue 'Promoting Information Integrity in Elections' e-discussion, hosted by UNDP Oslo Governance Centre and the Action Coalition on Information Integrity in Elections.

We had contributions from across 25 countries, sharing learning and best practice from a range of electoral contexts. These have helped sharpen our thoughts and created "a pool of wisdom" that is now guiding the programmatic guidance paper on Information Integrity in Elections. This will be presented at multiple global forums, disseminated by Action Coalition members and participating experts, and will be the first of its kind: a consensus-led guidance paper on addressing election disinformation in a technological age.

A special thank you to the fantastic discussion moderators from member organisations of the Action Coalition: Ingrid Bicu, Niamh Hanafin, Hedda Oftung, Anneliese Mcauliffe, Jiore Craig, Petra Alderman, Professor Nic Cheeseman, Vusumuzi Sifile, Mirna Ghanem, Carolyne Wilhelm, Bianca Lapuz, Clara Raven, Gilbert Sendugwa.

Member organizations of the Action Coalition:

  • UNDP  
  • Africa Centre for Freedom of Information    
  • Centre for Elections, Democracy, Accountability and Representation (CEDAR), University of Birmingham
  • Institute for Strategic Dialogue    
  • International IDEA   
  • Samir Kassir Foundation   
  • Panos Institute Southern Africa    
  • Maharat Foundation  

Following the e-discussion, here are the next steps:

  • In-Depth Consultations: We continue to consult with individual UNDP teams, other UN entities, partners, donors, and thematic experts to further sharpen the guidance paper on how the Action Coalition can best respond to enhance information integrity in elections.
  • Validate our findings: We will host a virtual event which will run through the findings of the guidance paper and ensure that the final paper is consensus-led guidance.
  • Programmatic Guidance Paper: By the end of 2022, we will have a final programmatic guidance paper on addressing information integrity in elections. We hope to share this with contributors to this consultation before promoting it in early 2023.
  • You can continue to exchange thoughts or contributions on this topic by contacting UNDP Oslo Governance Centre: Niamh Hanafin ( or Clara Raven (



In recent years, digital technology has played an increasingly important role in elections. Social media platforms have been used to spread disinformation and manipulate public opinion, and ill-intentioned actors have been accused of weaponising online dissemination methods to interfere in the electoral process. However, digital technology can also be used to combat these threats. For example, online fact-checking sites can help to debunk false information, and experts can use big data analysis to identify potential vulnerabilities. In addition, public awareness campaigns can help educate voters about the risks of disinformation and how to protect themselves from it. Can we, by harnessing the power of digital technology alongside more "traditional" approaches, better protect our elections from interference and manipulation?


  1. What kinds of digital solutions are being deployed?
  2. What are the benefits and risks of digital tools to counter disinformation in an electoral setting?
  3. What are the recommendations for effectively deploying digital tools into existing information/election landscapes?
  4. How do we understand and measure impact of digital interventions and responses?


We are committed to protecting the identities of those who require it. To comment anonymously, please select "Comment anonymously" before you submit your contribution. Alternatively, send your contribution by email to requesting that you remain anonymous.

Comments (19)

Clara Raven (Moderator)

Hello and welcome to Week 3 of Promoting Information Integrity in Elections. A big thank you to Jiore Craig for moderating this discussion room last week. There have been some excellent contributions.

My name is Clara Raven, and I work with UNDP Oslo Governance Centre (OGC) providing support to their Information Integrity portfolio.  Prior to joining OGC I was with UNDP in the Cambodia Country Office. I look forward to getting to know some of you this week.

Digital technologies, by nature, are iterative and fast-changing; the nature of digital tools and users’ interactions requires us to think differently about measuring outcomes and impact than we might have in traditional, offline interventions. One of the questions I would really love to explore more this week is ‘How do we understand and measure impact of digital interventions and responses?’

A recent project we have been working on under the Tech for Democracy initiative is looking at just this. We are developing an M&E framework to understand the impact UNDP's iVerify platform has had on information integrity in the pre-election and post-election periods in Kenya.

What are some of the ways you are exploring impact? Online surveys, comment analysis, or machine learning to analyse data and evaluate impact? How are you incorporating digital data and metrics into your monitoring and evaluation frameworks? What core metrics are you focused on?

This discussion is open to the public and all contributions are welcome.  The questions above provide some guidance but if you have thoughts beyond those, please feel free to share.  We're looking to hear from researchers, journalists, civil society actors, electoral commissions, UN agencies, tech companies and of course voters. Invite colleagues to participate and share examples of your work!

Please indicate the question(s) you are answering in your comment and feel free to introduce yourself.

Looking forward to a great discussion...

Mark Belinsky

Thanks Clara! My name is Mark Belinsky and I work with the UNDP ExO from the Digital Office. I've been leading the technical implementation of iVerify and grappling with the ways to measure impact and metrics. Currently we do this in a number of ways and are looking to further expand our methods as we continue to build new features into the product.

In terms of analytics, we've been looking at tracking website usage using tools that are GDPR-compliant. For instance we leverage to give us insights into the reach that our website is having with users. 

When reviewing hate speech, we're filtering millions upon millions of posts from Facebook, for instance, to see what instances are occurring. We leverage machine learning to detect potential cases, then filter down further with human-in-the-loop review, which ensures both that humans are making decisions on content and that they don't get burdened with the same content twice. This allows us to get to the core question of whether a lack of hate speech is due to a healthy ecosystem or due to limited tracking on our side. From there, we can adjust our system to become more exacting and thus start to measure the levels of intervention we need to pursue as well as the success of those interventions.
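The pipeline described above, machine filtering followed by deduplicated human review, can be sketched roughly as follows. This is a minimal illustration, not the actual iVerify implementation: `score_post` is a hypothetical stand-in for a trained hate-speech classifier, and deduplication here is a simple hash of normalised text.

```python
import hashlib

def score_post(text):
    # Stand-in for a trained classifier; a real pipeline would call
    # an ML model returning a probability in [0, 1].
    flagged_terms = {"hate", "attack"}
    words = set(text.lower().split())
    return len(words & flagged_terms) / max(len(words), 1)

def triage(posts, threshold=0.1):
    """Filter posts with the model, then dedupe so human reviewers
    never see the same content twice."""
    seen = set()
    review_queue = []
    for post in posts:
        if score_post(post) < threshold:
            continue  # model says unlikely to be hate speech
        # Normalise whitespace and case so near-identical reposts collapse
        digest = hashlib.sha256(" ".join(post.lower().split()).encode()).hexdigest()
        if digest in seen:
            continue  # this content has already been assigned to a human
        seen.add(digest)
        review_queue.append(post)
    return review_queue

posts = [
    "Vote on Tuesday!",
    "They will ATTACK our polling stations",
    "they will attack   our polling stations",  # repost, different spacing
]
print(triage(posts))
```

The two filters serve different purposes: the threshold controls how much the machine hands to humans (the "exactness" Mark mentions), while the hash guarantees each piece of content costs at most one human decision.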

Similarly, we are looking to model efficiencies into other social media sites. Platforms like Twitter and YouTube are often monitored erratically because location is hard to determine. Instead of using weaker methods of monitoring, we're creating methods to determine what a country's communication ecosystem looks like and then tracking it to secure more accurate results. Other, newer social media sites like TikTok have proven even more difficult due to a lack of access to their APIs as well as the cost and difficulty of tracking video and audio content.

Niamh Hanafin (Moderator)

Greetings friends and colleagues, you're very welcome to this discussion room on week 1. Thank you for taking the time to join the conversation! 

My name is Niamh Hanafin and I'm senior advisor for Information Integrity at UNDP's Oslo Governance Centre.  I'll be moderating this room all this first week.

In this room we want to explore how to best use digital technology to protect information integrity during elections.  If you have experience using or benefiting from digital tools during elections, please do share your thoughts on how it went, what impact it had or what challenges you faced. 

This discussion is open to the public and all contributions are welcome.  The questions above provide some guidance but if you have thoughts beyond those, please feel free to share.  We're looking to hear from researchers, journalists, civil society actors, electoral commissions, UN agencies, tech companies and of course voters. Let us know if and how technology has made it easier to tackle electoral disinformation and other harmful content on and offline. 

Please indicate the particular question(s) you are answering in your comment. Looking forward to a great discussion...

Bacem Chouat

Greetings friends,

My name is Bacem Chouat and I'm a product designer and marketing researcher from Tunisia.
As I'm a Tunisian citizen, I will comment taking into consideration our democratic situation here in Tunisia.

1: After the Tunisian revolution, we experienced some digital tools that helped enhance and improve democracy in Tunisia, such as mobile applications offering the possibility to register for the elections and to learn how the electoral process works via short videos, as well as urging people to avoid behaviours and election crimes that would impose penalties on people and parties.

2: In a democratic situation, I think the benefits of digital tools outweigh their risks, because most people and political parties will use them to counter disinformation and to detect fraud, for example by sharing videos or photos on social media showing a person or a party violating the ban on campaigning during the electoral silence. However, since the coup of 25 July 2021, all powers have been concentrated in the hands of one person, so I think we can no longer talk about using digital tools to counter disinformation in an electoral setting: in such a situation the electoral process is untrustworthy, and I cannot participate in any election under the governance of one person whom I do not trust, as he will manipulate the media the way he wants, and he has.

3: In a democratic situation, I think we have to intensify awareness of the benefits of digital tools and impose serious penalties on individuals who use digital tools in a risky way (fake information, trolling, manipulation of information, etc.).

Niamh Hanafin (Moderator)

Bacem Chouat thank you for this thoughtful kickoff to our discussion, you raise so many important points about the context in which digital technologies are deployed. It seems that trust is a big factor for you, and perhaps for other Tunisians, and in that situation, it's hard to trust the integrity of any election regardless. 

As you say increasing public awareness of the risks and benefits of technologies available is a great starting point and critical to building resilience to their misuse. 

I hope some of the questions you've raised will be addressed by others in the coming weeks. At what point is digital technology redundant? What role does public trust play in electoral legitimacy? Please feel free to continue the conversation with us and thank you again.

Osama Aljaber

Dear Niamh and colleagues,

Thank you very much for initiating this discussion. I’m Osama Aljaber, and I work as a Digital Democracy Specialist at UNDP Regional Hub for the Arab States. 

One tool I have been developing and deploying with the team in UNDP RBAS and Tunisia is eMonitor+. It is a platform that helps scan and monitor digital media and identify issues such as misinformation, electoral violations, hate speech, political polarization and pluralism, and online violence against women during elections. The eMonitor+ platform is already being used by media and electoral commissions in Tunisia, Lebanon and Libya, and by CSOs in Peru. It currently works in four languages: Arabic, English, French, and Spanish. It relies on machine learning to track and analyze content on digital media, including utilizing various algorithms to, for example, conduct sentiment analysis, topic classification and hate speech analysis, and to reverse-search image and video sources. The platform also allows manual analysis by trained monitors working daily on the platform to explore and analyze content. Afterwards, the results of the data analysis by both the machine and the monitors are visualized on the platform and published on external and social media platforms to inform the public.

One of the advantages of deploying such technology is that it allows social media content analysis at big-data scale: huge in volume (size), massive in velocity (real-time data), and diverse in the types and varieties of data. Big data analysis, on the one hand, can be seen as a powerful tool to offer new insights into issues such as misinformation and hate speech during elections. On the other, it could enable invasions of privacy, diminish civil freedoms, and significantly increase state and corporate control when states use such a tool for surveillance.

On social media, anyone with an account can create and share information at any time, resulting in a chaotic news environment. Understanding, analyzing, and responding to harmful content during elections needs to be timely and responsive to the emerging context, and manual analysis alone cannot handle the amount of online content. Therefore, using tools such as eMonitor+ allows different stakeholders to expand and amplify their work. For example, during the 2022 Lebanese elections, the Supervisory Elections Commission in Lebanon was able to monitor and analyze more than 350k posts in less than two months using eMonitor+ and its AI, while during the same period the monitors working manually were able to analyze 15k. However, while AI allows analyzing a large amount of data, it does not always give results as accurate as manual work, as building a high-quality AI module requires a large amount of data that is as free as possible from biased assumptions. Also, retaining context with such tools remains critical: context is hard to interpret at scale and even harder to maintain when data are reduced to fit into a specific machine-learning model.
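One way to reconcile machine-scale throughput with the accuracy of manual review, as the comparison above suggests, is to label everything automatically while routing a random sample to human monitors for auditing. A minimal sketch of that pattern, where `machine_label` is a hypothetical stand-in for eMonitor+-style classifiers, not the platform's actual API:

```python
import random

def machine_label(post):
    # Stand-in for automated classification; a real deployment would
    # run topic, sentiment and hate-speech models over each post.
    return "violation" if "bribe" in post.lower() else "ok"

def analyse(posts, human_sample_rate=0.05, seed=7):
    """Label every post automatically, but route a random sample to
    human monitors so accuracy and context can be audited at scale."""
    rng = random.Random(seed)  # fixed seed for reproducible sampling
    labels, audit_queue = [], []
    for post in posts:
        label = machine_label(post)
        labels.append(label)
        if rng.random() < human_sample_rate:
            audit_queue.append((post, label))  # human checks the context
    return labels, audit_queue

posts = ["Candidate X offered a bribe", "Polls open at 8am"] * 1000
labels, audit = analyse(posts)
print(labels.count("violation"), len(audit))
```

The audit sample is what lets a team estimate the machine's error rate and catch the context loss described above: if human reviewers frequently overturn sampled labels, the models need retraining before their output is published.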

A few recommendations I can share from the experience of deploying eMonitor+: firstly, it is essential to use these technological tools not only during election campaign periods but also before, to build the expertise and techniques needed during elections. This can be done not only by conducting research studies on different topics, including participation and hate speech against vulnerable groups, but also by improving the accuracy of the machine learning modules by feeding them with more unbiased data and building new machine learning modules. Secondly, machine learning models need to be evaluated and tested constantly for the potential harms of biased results. Finally, communication and outreach are essential steps in deploying such tools, as communicating the results to relevant stakeholders and the public is as important as analyzing social media content and the fact-checking process itself.

Looking forward to continuing to follow the discussion, and thank you for the opportunity to contribute! 


Niamh Hanafin (Moderator)

Thank you Osama Aljaber, this is a very comprehensive overview of eMonitor+ and also a thoughtful assessment of how to increase the effectiveness of this kind of tool. Given that electoral support projects tend to be of limited duration and intense implementation, do you feel that digital tools like eMonitor+ call for a more sustained approach, well embedded in advance of elections? Do you have any thoughts on how that could work?

Jonathan Tanner

These are all such important questions; it is frustrating that there don't appear to be any clear answers!

I’ve spent a good deal of time thinking about how digital information environments are evolving and have been lucky enough to pick the brains of plenty of experts on my podcast too. 

It's great to see Osama's example of using advanced social media monitoring to spot potential threats to information integrity during elections. I think this type of approach is essential and, as Osama says, it's not just something for election time. By monitoring influential social platforms for accounts and wider networks that are known to produce false information, it is possible to play an 'early warning' function that limits the impact malicious actors can have in the information space. That's something we are learning to do at my company, Rootcause.

We know that despite their frequent reluctance to take responsibility for the social harms their platforms enable, major social media companies are willing to take action to remove accounts that can be proven to be engaging in coordinated inauthentic behaviour. Platforms like eMonitor+ have the potential to help make this happen, and with so many elections looming in 2024, exploring how to adapt this to different contexts could be valuable.

I am interested in the role of public institutions responsible for delivering free, fair elections and the extent to which they have grasped the transition to a digital public sphere.

In brief, there are five trends shaping digital information environments: data, fragmentation, contested reality, disintermediation and a decline in the role of text as a medium.

Each of these has consequences for those wanting to monitor an election. The explosion in available data lends itself to the approaches Osama describes but it also offers major opportunities to political actors seeking to influence voters. One such example of this is through political advertising (Who Targets Me do a great job of monitoring this during some major elections) but there are also plenty of unofficial political actors on social media promoting party and candidate narratives. 


The fragmentation of the media ecosystem - where traditional TV and radio are supplemented by podcasts, YouTube channels, Telegram groups, etc. - makes effective monitoring hard. The rise of independently influential content creators adds to the complexity of digital information ecosystems. These challenges lead me to wonder whether the existing role of independent electoral observers has any digital dimension to it, and if it doesn't - should it?

On the question of mis- and disinformation there are certainly no silver bullets. The effectiveness of fact-checking organisations is highly contested, and the willingness of social media companies to invest in protecting citizens, whether through product design or content moderation, appears limited in the global north and practically non-existent in the global south. Furthermore, it is an essential part of democracy that people are able to selectively use information to make the strongest possible argument for their candidate - something usually referred to as 'spin'. When you add in the ways that technology looks set to make it harder, if not impossible, to spot synthetic content, it gets even more difficult to see how to prevent mis- and disinformation becoming embedded in digital information ecosystems.

We can hope that better regulation and better digital literacy will help to weaken the hand of those who seek to profit from false information but that will take a long time. 

In the meantime trust is clearly the gold standard currency of the internet. Many organisations are recognising the need to invest in building and sustaining the trust of their audiences by building communities that provide useful information and frequent engagement to their members. 

For independent institutions with an important role to play in democratic governance - such as electoral bodies - the question of how to win trust is essential. In traditional information ecosystems semiotic power was helpful, but in modern digital information ecosystems this power is much weaker. In countries like mine (the UK), the factors which create and maintain trust amongst citizens have changed: they are less about deference and expertise and more about authenticity and personality. Better strategic communications can undoubtedly play a role in helping institutions react to this shift, but it will remain challenging.

Brian Simpande

Hello everyone, it's a great pleasure to be part of this forum. 

Instant communication and social media have made it simpler than ever for people to get in touch with each other, regardless of time or place. Digital tools help shape a nation's democracy, and many people now find it easier to stay informed and to discuss or express their views on elections online. The use of digital tools in a technological environment is beneficial for several reasons; however, social media, for instance, has provided the ideal atmosphere for spreading false information on elections, where some members of the public disseminate fake and unverified information.

An online technology-based platform called the iVerify Zambia Response and Fact-Checking Mechanism is being implemented by Panos Institute Southern Africa (PSAf) in conjunction with the United Nations Development Programme (UNDP), through the Democracy Strengthening in Zambia project and the UNDP Joint Task Force on Electoral Assistance (JTF). This platform aims to identify and address misinformation, disinformation, and hate speech during elections and beyond, as well as to promote responsible speech. The application uses AI and human fact-checking to identify whether election-related accounts and narratives are true or false.

In countering disinformation, the iVerify Zambia Fact-Checking and Response Mechanism has increased the capacity of mandated national actors to identify and fact-check mis/disinformation. It has been welcomed by various stakeholders - civil society, the media, law enforcement agencies, and the electoral body - to support and respond to cases of misinformation regarding electoral processes and concerns that threaten the credibility of a free and fair election environment. This has resulted in increased awareness of election misinformation, its dangers, and why it should be avoided. Stories containing elements of misinformation have been retracted or deleted by responsible actors. Digital tools specially designed for electoral settings risk losing their credibility if they are partial, lack professionalism, or are not transparent. These principles are important if citizens are to embrace any digital tool in an electoral setting.

I think that to effectively deploy digital tools in any electoral landscape, state and non-state actors must be fully involved in the implementation, so that they have ownership and understand that the digital tools are intended for the public good. However, in some environments, despite the advantages of digital tools in advancing electoral accountability and transparency, deploying them may be challenging.





Niamh Hanafin (Moderator)

Week One Summary

Thank you to all our early bird contributors for a fantastic first week of discussion in this room. Here is the summary of week 1: 

Bacem Chouat in Tunisia explained that after the revolution there was a sense of optimism about the role of digital tools in strengthening elections, and that in a democratic setting there are meaningful ways to deploy digital tools, such as combating disinformation and monitoring electoral violations. Even then, more effort is needed to increase public awareness of digital tools and their uses, and to disincentivise those who abuse them. However, in Tunisia today, where trust in the media, the government and the legitimacy of elections is low, this no longer applies.

Osama Aljaber highlighted a UNDP-developed tool eMonitor+, which uses machine learning to monitor and make sense of vast amounts of online data in a way that would be impossible manually. eMonitor+ has been adapted to a number of countries, languages and information priorities and incorporates human analysis as well as automated monitoring.  It’s used to identify issues such as misinformation, electoral violations, hate speech, and other narratives and trends. 

One of the advantages of deploying such technology is allowing social media content analysis on a big data level. On the other hand, it also threatens online privacy and can be used by states and others as a means of control or reducing civic freedoms.

However, AI-analysed data can lead to biased results and doesn't readily adapt to different contexts. To avoid this, Osama recommends deploying such responses well in advance of elections, ensuring enough time to mitigate the risk of bias, and conducting outreach and communication to all stakeholders on the objectives and results of these kinds of platforms.

Jonathan Tanner also highlighted the need for a more sustained approach to online monitoring as a means to flag problematic behaviour online in advance in order to pressure social media platforms to address it.

Jonathan raises the question of how well prepared electoral bodies are to face this challenge, given the ease with which political actors can try to influence voters.

Responses to disinformation are particularly challenging, as social media platforms aren't investing enough in most of the world and synthetic content is increasingly hard to spot. Regulation and digital literacy are good places to start but are long-term solutions.

He also raises the issue of trust, and the need to build public trust through better engagement. Institutions such as electoral bodies need to understand better how trust is gained and maintained in this new information landscape and adapt their strategic communications accordingly.

Brian Simpande reminds us of the importance of digital tools in helping us stay informed at election time, but they bring with them the risk of exposure to disinformation. He describes a digital tool implemented by Panos Institute in Zambia in collaboration with UNDP: the iVerify platform, which aims to identify electoral disinformation and hate speech and provide fact-checking capabilities.

The deployment of the platform was generally well received and raised awareness of electoral disinformation and its risks. It also led to practical steps, such as content being retracted or deleted. Gaining the buy-in of the public, though, requires professionalism and impartiality; gaining the ownership of election stakeholders requires their involvement in implementation.

Thank you again and handing over to our Week 2 moderator, Jiore Craig 



Gabriel van Oppen Ardanaz

Dear colleagues,

In light of the interesting discussions that have been shared here, from the EC-UNDP Joint Task Force on Electoral Assistance we would like to share another initiative, named iVerify. The iVerify initiative is a digital tool being implemented globally to combat dis/misinformation and hate speech, and it has recently been recognised as the organization's first Digital Public Good (DPG) in this field.

At its core, iVerify is a comprehensive support package premised on principles of national sovereignty, multi-stakeholder engagement and sustainability, which helps enhance the institutional capacities of national stakeholders in safeguarding information integrity during electoral processes and beyond. It is intended to allow for ongoing, real-time identification of, and response to, harmful online and offline content, combining two powerful digital solutions: Facebook's CrowdTangle monitoring tool and Meedan's fact-checking environment. Technical expertise is provided to UNDP country teams and national counterparts to customize and roll out the system according to local needs and conditions.

  1. Technical support is made available to map the existing institutional infrastructure and initiatives and to build synergies between them; supporting the creation and coordination of a multi-stakeholder mechanism for the rapid and efficient identification, verification, and response to online threats to the integrity of information.
  2. iVerify is the digital component of the package, combining solutions to cover the entire spectrum from identification to verification and response. It combines both human and automated features, and an algorithm to detect hate speech to feed into coordinated response.

To date, the iVerify solution has been rolled out in Zambia (2021), Honduras (2021), Kenya (2022) and currently Liberia (2022), with several other countries in the pipeline. The objective is to expand the scope and effectiveness of this initiative, for which the sharing of knowledge, experiences and lessons learned is critical. For this reason, we appreciate the creation of this space and look forward to continuing to read and share with other members on the role of digital solutions in responding to electoral disinformation.

For more information on the iVerify initiative, don’t hesitate to learn more at: .

Jiore Craig (Moderator)

Hi everyone! My name is Jiore Craig and I am the Head of Elections and Digital Integrity at the Institute for Strategic Dialogue. I will be your moderator for Week 2! 

I very much enjoyed the interventions from the first week. My background is in public opinion research as well as digital research and interventions around global elections. I spend a lot of time thinking about 1) how people/voters use different social media platforms in different ways based on those platforms' UX but also the country and cultural context and 2) how interventions/research can better incorporate the social and human dynamics at play when it comes to online harms in elections, and 3) how elections being a petri dish for bad actors and a focal point for media, researchers, and civil society keeps us distracted from deploying the necessary interventions outside of election context, on an ongoing, long-term basis. 

  1. What kinds of digital solutions are being deployed? I would love to hear others thoughts on the following categories for grouping solutions.
    • Trust-building solutions
    • Tech solutions
    • Regulation/legal solutions
  2. What are the benefits and risks of digital tools to counter disinformation in an electoral setting? 
    • I will answer this question specifically addressing the use of "tools". As others here have mentioned, trust - either building trust or breaking it - is central to the online harms we are trying to mitigate in an election setting. In most cases, the tools I see most frequently referenced, such as AI content moderation, automated fact-checking, and others, are rather devoid of a corresponding feature that either acknowledges the breakdown in trust creating the void for bad actors to exploit or works to repair it. There seems to be a lack of emphasis on building trust in messengers/institutions etc., despite how important the strategy of tearing down trust in those entities is for bad actors.
    • That said, in some cases, tech tools like messaging apps like WhatsApp and Telegram themselves or bespoke apps meant to connect civil society groups or help voters access information about the polls can be used for relational organizing or 'warm' organizing that focuses on building relationships over pushing content and calls to action. This approach requires investment over time and people power to be successful, but promising examples are presently working well in places like Brazil. 
    • Tools for researchers are also essential, though the increasing lack of transparency on the part of the social media platforms is making it harder for researchers, even those with access to the best tools, to get a full read on what's actually at play, especially when it comes to attribution, financial tracking, and the reach and impact of different online activities, tactics, influence operations, and narratives.
    • The tech companies have policies - both public and internal - that, if enforced, would improve the information ecosystem around elections and in general. Their failure to enforce their own policies, and/or their prioritization of profit over safe spaces online, has led to several flops where interventions could have been deployed but weren't. Facebook's "break glass" plan comes to mind. 
    • Like much of what I'm sure this group will discuss, the techniques used to inoculate against disinformation, protect against harassment and threats, or promote factual information can always be co-opted by bad actors. The same goes for policies developed in democratic settings to hold tech accountable that are then repurposed as oppressive policies in other country settings. 

I'll be back mid-week to say more on measuring the impact of online harms during elections, as well as how to measure the impact of interventions attempting to solve for them! I'd love to hear thoughts on what we're getting wrong on this front to date. For example, I consistently see a disproportionate focus on Twitter data and inaccurate claims that it can be taken as representative, all because it has a more open API (for now, anyway). Similarly, people often write questions into public opinion research assuming people's recall of their social media use matches their recall of other broadcast media platforms, which is likely not the case. 

Keen to hear others' thoughts! 

Clara Raven (Moderator)

Hello everyone! I read an interesting article recently by the International Foundation for Electoral Systems (IFES) on Transparency in Online Campaigning. It offers some suggestions for available routes to bring greater accountability to political campaigning, including data visualization, independent political ad repositories, and crowdsourcing political ad data. It would be great to hear if anyone has experience exploring similar initiatives.


Gilbert Sendugwa (Moderator)

Dear colleagues,

It has been three enriching weeks of discussions on experiences and work on information integrity in electoral processes. Thank you for all the knowledge and experiences shared.

I am pleased that during this week, week 4 of the series, I will be co-moderating the conversation, this time focusing on digital solutions to promote information integrity in electoral processes. As I invite you to share, I would like to start by sharing one of our own cases. 

Electoral violence is quite common in many places, especially in Africa, and impacts electoral processes and outcomes. Disinformation about this violence is quite often used to undermine accountability and create an atmosphere of impunity for electoral violence. It is also used to discredit opposing sides, scare voters or achieve other goals in favour of the source. Beyond disinformation, sometimes people genuinely pass on information they believe to be correct about key aspects such as electoral violence, which also has significant impacts on the quality of information in electoral processes. 

To address electoral violence and promote information integrity in electoral processes, AFIC developed and deployed a tool to track electoral violence in terms of location, date of incident, perpetrators, victims, etc. The use of this tool is integrated with data collection, verification and reporting. 

Here is how it works: monitors were trained, provided with phones and deployed in districts. Upon picking up an incident, they reported it to our secretariat, where a three-person verification team would interview other sources in the area - a police officer, journalist, local leader or NGO worker. Once a case had been authenticated by at least two other people in addition to the monitor, it would get published from the back end to the front end. Alerts to monitors would start either offline, with incidents happening in the community or covered by radio or other media sources, or online, especially through social media. Please follow this link for details of the dashboard.
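The corroboration rule described above - a report goes live only once at least two independent sources, in addition to the monitor, confirm it - can be sketched roughly as follows. This is a minimal illustrative sketch, not AFIC's actual implementation; the names `IncidentReport` and `is_publishable` are hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an AFIC-style two-source verification rule:
# an incident reported by a field monitor is published from the
# back end to the front end only after at least two additional,
# independent sources confirm it.

@dataclass
class IncidentReport:
    location: str
    date: str
    reported_by: str                      # the field monitor
    confirmations: set = field(default_factory=set)

    def add_confirmation(self, source: str) -> None:
        # Sources could be a police officer, journalist, local
        # leader or NGO worker interviewed by the verification team.
        if source != self.reported_by:    # the monitor doesn't count twice
            self.confirmations.add(source)

    def is_publishable(self) -> bool:
        # Authenticated by at least two people besides the monitor.
        return len(self.confirmations) >= 2

report = IncidentReport("district_A", "2021-01-12", "monitor_07")
report.add_confirmation("police_officer")
print(report.is_publishable())   # False: only one confirmation so far
report.add_confirmation("journalist")
print(report.is_publishable())   # True: two independent confirmations
```

The key design choice here is that publication is gated on independent corroboration rather than on a single monitor's report, which limits the spread of genuinely mistaken (as opposed to malicious) incident reports.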

1. What digital tools have you used or found helpful in promoting information integrity in your contexts?

2. What are your experiences with including everyone?

3. What has worked well and why?

4. What are the lessons and what could be better?

5. What issues are not being tracked and yet are critical for information integrity?


Vusumuzi Sifile

Thank you Gilbert. Just to weigh in, at the risk of repeating what my colleagues have shared, I can mention that in Zambia we deployed the iVerify Mechanism, a technology-based platform for identifying and mitigating the spread of misinformation, disinformation and hate speech, and for facilitating relevant response actions by various actors. In our context, where access to the internet and digital tools is still low in some parts of the country, we also had to leverage offline approaches - like community media - to strengthen the flow of verified and verifiable content from offline platforms to online platforms, and vice versa. In our Mechanism, we embedded a number of social media platforms, along with algorithms that identify keywords and channel flagged content through the first part of the system. Most of the work is done by our "human in the loop" cohort of fact checkers, media monitors and coordinators, who engage different actors to address specific issues raised and to facilitate response actions. 
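The triage step described above - keyword matching that routes candidate posts into a queue for the "human in the loop" fact-checkers - can be sketched along these lines. The keyword list and function names are illustrative assumptions, not iVerify's actual code; the point is that keyword matching only queues content, while classification is left to humans:

```python
import re

# Illustrative keyword triage, loosely modelled on the pipeline
# described above: matching a watched keyword does NOT label a post
# as disinformation, it only flags it for human review.

WATCH_KEYWORDS = {"rigged", "ballot stuffing", "fake results"}  # assumed list

def flag_for_review(post: str) -> bool:
    """Return True if a post mentions any watched keyword (case-insensitive)."""
    text = post.lower()
    return any(
        re.search(r"\b" + re.escape(keyword) + r"\b", text)
        for keyword in WATCH_KEYWORDS
    )

posts = [
    "Turnout in Lusaka was high this morning.",
    "They say the election was rigged in three wards!",
]
# Only flagged posts enter the queue that fact-checkers work through.
review_queue = [p for p in posts if flag_for_review(p)]
```

Keeping the automated step this conservative is deliberate: false positives cost reviewer time, but false negatives from an over-aggressive automated classifier would be far harder to audit.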

Jasmin Gilera
  1. What kinds of digital solutions are being deployed?
  • Verification – double-checking the sources of information, seeking primary evidence from eyewitnesses or double-checking facts and figures; serves as an overall form of quality control for a news outlet's content before publication
  • Fact-checking – happens after publication. This form of “ex post” fact-checking seeks to hold public figures and the media accountable for the truthfulness of their statements
  • Debunking – publishing evidence that demonstrates falsehoods, often by explaining the process involved in reaching this conclusion
  • Electoral-related media monitoring – establishing and managing a well-organized program for surveying the news and the electoral-related content disseminated on social media
  2. What are the benefits and risks of digital tools to counter disinformation in an electoral setting?
  • Flagging – online platforms usually provide mechanisms for users and internal processes to flag unlawful, offensive or violent content
  • Labeling – allows users to identify paid advertisements, particularly important when they relate to political propaganda
  • Blacklisting – can even mean removing a particular user from a social media platform. Highly contested, as it can collide with freedom of expression and the right to information
  3. What are the recommendations for effectively deploying digital tools into existing information/election landscapes?
  • Regulation of online content during electoral periods
  • Censorship – any attempt to regulate online content should balance the rights to freedom of expression and access to information with the protection of other civil and political rights, such as the right to political participation, privacy and freedom from discrimination
  • Self-regulation – a mechanism of voluntary compliance at sector or industry level, where legislation does not necessarily play a role in setting the standards
  4. How do we understand and measure the impact of digital interventions and responses?
  • Set the expected outcomes
  • Develop a code of practice
  • Review and make public “the way industry has or has not met the standard”
  • Ensure transparency of political advertising
Gilbert Sendugwa (Moderator)

Dear colleagues and participants,

Thank you for participating and sharing experiences on digital solutions to promote integrity in electoral processes. The discussions had great depth, as summarised below: 

Clara Raven shared a very interesting article from IFES with suggestions to promote accountability in political campaigning with specific examples and tools. 

Vusumuzi Sifile shared practical experience with the use of UNDP’s iVerify platform to address misinformation, disinformation and hate speech. Of significant note is how they used other channels, like community media and social media, to complement iVerify, especially in overcoming digital-divide challenges in rural communities where access to the internet is limited. 

Jasmin Gilera summarised the nature of the digital tools being deployed, their benefits and recommendations. She noted that existing tools are used for verification, fact-checking, debunking falsehoods and electoral-related media monitoring. The main benefits, she notes, are flagging unlawful, offensive or violent content; labelling; and blacklisting. Jasmin recommends strengthening online regulation during electoral periods, preventing censorship while regulating by striking the right balance between the protection of other rights, freedom of expression and public access to information, and strengthening mechanisms for self-regulation. 

Quite often, there is a lot of misinformation and disinformation around electoral violence. This often escalates violence and affects the electoral process overall. I shared AFIC's election violence monitoring dashboard, indicating trends, victims and perpetrators of electoral violence. 

Earlier on, Osama Aljaber shared reflections on the use of eMonitor+. 

Overall, during the week there was great learning about the existence of various tools to promote information integrity on various fronts. A major takeaway is creating awareness and promoting active use of these tools for better elections. 

Thank you everyone!