This consultation is now closed

Read the Summary Report - Better Information Ecosystem.

Many thanks to all contributors from over 50 countries for sharing your valuable knowledge, experience and perspectives in UNDP and UNESCO's global online consultation on the impact of, and responses to, disinformation. The contributions from over 150 UN colleagues and other experts in this field will help inform and sharpen UNDP and UNESCO's responses to disinformation going forward. If you missed the opportunity, you can still participate by submitting your written contribution to [email protected] on or before 13 November 2020.

With much gratitude to our excellent team of moderators.

Based on the results of this e-discussion, we have continued to sharpen our thinking through focused consultations with key private sector actors, donors, UN and civil society organisations. As a result, a summary report from the e-discussion and consultations has now been compiled and is available on this page. The report summarises key points raised by the consultation participants. The views and opinions in the report are those of the contributors and do not necessarily reflect those of UNDP and UNESCO.

Thank you to all contributors for your great support. 

 
Welcome to engagement room 3!

There are many innovative responses to disinformation being rolled out across the world, from advanced artificial intelligence to community leaders working to dispel rumours. There is a need for digital solutions, policy and regulation, media and journalism development, citizen engagement and much more. Given the complexity of the issue, it seems that a holistic approach to disinformation is needed, one that involves many different groups and competencies.

This is the space to seek solutions. You are invited to explore these across three response areas:

  1. Reducing the creation and production of disinformation
  2. Stemming the dissemination of disinformation
  3. Building resilience of the target audiences or consumers of disinformation

As a reminder, disinformation is “false, manipulated or misleading content, created and spread unintentionally or intentionally, and which can cause potential harm to peace, human rights and sustainable development”.
 


Please answer any of the below questions (including the question numbers in your response).  Feel free to introduce yourself if you wish. We look forward to hearing from you.  

  1. What role should legislation and regulation play? How can regulatory responses be developed in a way that respects fundamental rights such as the right to privacy, freedom of expression and the right of access to information?
     
  2. What role should policy play at the national, regional and international level? How can these be harmonised?
     
  3. How can internet companies be effectively governed or regulated to ensure they act in the public interest?
     
  4. What are the digital and technological options for addressing this? Do you have examples of effective responses?
     
  5. What is the role of journalism? How can journalists, broadcasters and news editors be better equipped and supported to address this issue?
     
  6. Which stakeholders need to be engaged and which strategic partnerships should be considered?
     
  7. How can we build greater resilience to disinformation, especially among vulnerable or marginalised groups, such as through greater media and information literacy?

 


We commit to protect the identities of those who require it. To comment anonymously, please select "Comment anonymously" before you submit your contribution. Alternatively, send your contribution by email to [email protected] requesting that you remain anonymous.

Comments (45)

Daria Asmolova (Moderator)

Week Four Summary

This week was a great way to conclude this consultation, with more rich and thoughtful insights into the problems of, and solutions to, information pollution. Key themes that stood out to me in the final days of our discussion:

  1. There were different opinions on the regulation of mis- and disinformation online, so it’s still up for debate – how far should the regulation go in trying to preserve information integrity?
  2. Transparency and information sharing: online media should be open about their algorithms and actions to fight information pollution, and knowledge and research on the issue should be open and accessible. Larriza Thurler shared a couple of examples of that.
  3. We need to have diverse and inclusive participation in shaping policies and strategies for dealing with mis- and disinformation.
  4. We should make more effort to improve digital and media literacy, so the public is less susceptible to information pollution.

Some other notes:

Journalists are important in helping us fight misinformation. But I would raise a question: how can we grow trust in journalism, when the recent Edelman Trust Barometer shows that in many countries, developed and developing, less than 50% of people trust media?

Marine Ragnet highlighted three interconnected elements which need to be addressed to fight disinformation: “1) the medium – the platforms on which disinformation flourishes; 2) the message – what is being conveyed through disinformation; and 3) the audience – the consumers of such content.”

Orna Young suggested focusing regulation on transparency and accountability, and raised an important point that many regulatory bodies have limited competence on digital issues.

The AlSur consortium asked how to deal with official positions which may not be aligned with those of scientific communities.

Alasdair Stuart commented on how we can approach this problem proactively rather than reactively. He also reminded us that fact-checking is labour intensive and we need to cooperate and support fact-checkers more.

João Brant shared ideas on how to mitigate disinformation on messaging apps, which have been used to spread unverified information and disinformation at a scale we can't even assess.

Sarina Phu noted the difference between ‘illegal’ and ‘legal but harmful’ content/activity which would require different regulatory approaches.

But this is just a summary and I would encourage everyone to read all comments in full.
Thank you all for participating in the consultation!

-

Week Three Summary (Room 3) by Moderator Emanuele Sapienza.
Week Two Summary (Room 3) by Moderator Stijn Aelbers.
Week One Summary (Room 3) by Moderator Daria Asmolova.

Caroline Hammarberg (Moderator)

Good morning and a warm welcome to this three-week UNDP-UNESCO consultation on how we can forge a path to a better information ecosystem, spanning Effective Governance, Media, Internet and Peacebuilding Responses to Disinformation.

My name is Caroline Hammarberg, Coordinator for #CoronavirusFacts Projects at UNESCO's Section for Freedom of Expression and Safety of Journalists. Together with Daria Asmolova, Digital Transformation Analyst at UNDP, I will be your moderator this week before handing over to colleagues.

We very much look forward to hearing your thoughts and solutions to the above list of questions and hope that we can extract some valuable recommendations together.

When commenting, please let us know which particular question(s) you are responding to. The floor is yours!

Warm regards,

Caroline & Daria

Ayushma Basnyat

Question 5:

In Nepal, UNDP and UNESCO have collaborated on a range of issues; in particular, the two organizations collaborated to mark World Press Freedom Day in 2019, which was themed “Media for Democracy and Peace: Journalism and Elections in the Age of the Internet.” Nepal's Ministry of Communication and Information Technology, the Election Commission Nepal and the Federation of Nepali Journalists, together with the European Union, UNESCO Nepal and UNDP's Electoral Support Project, organized a one-day national conference to celebrate this occasion.

The programme included several deliberations at the inaugural session as well as six thematic panel discussions, where many pertinent ideas on the nexus between disinformation and governance - elections in particular - were raised.

Another event that the two organizations collaborated on was in orienting young female journalists on political reporting in order to bridge the gender gap in electoral journalism. The event emphasized the importance of having a balanced gendered perspective in journalism. It also proved to be an effective platform to discuss solutions to common problems, for women to unite and work collectively to make progress in their professions and ensure a women-friendly professional environment.

In Nepal, approximately 70% of people have access to the Internet; 95 thousand have access to Facebook and 35 thousand to Twitter.

From both events, the following recommendations emerged:

• The media should honour professionalism and be unbiased in their reporting, abiding by the Code of Conduct for the media.

• Journalists should be very careful about disinformation in social media and should not publish or broadcast news without proper verification.

• Social media is a tool that can both spread as well as fight disinformation.

• The Federation of Nepali Journalists should finalize social media guidelines for journalists and share them with other media houses as a reference.

• The Election Commission, Nepal should issue a social media code of conduct for the general public and political parties during the elections. Based on this recommendation, UNDP’s Electoral Support Project also supported the Commission to draft a social media policy to productively engage in social media platforms and dispel disinformation.

• All media laws introduced by the federal, provincial or local governments must honour the Constitution of Nepal.

• Laws and policies should be based on the international standards of press freedom in democratic countries.

• Media laws should be focused on making media self-regulatory rather than controlling them. Press freedom must be protected.

• Adapt codes of conduct and legislation accordingly, without curtailing fundamental freedoms.

• The Nepal Police should remain vigilant and act to ensure that trolling and hate speech are discouraged, especially against women, gender and sexual minorities, marginalized communities and activists.

• The state must protect and promote social media as a public accountability tool, and act upon any complaints received through social media.

• Increase ECN capacity to detect and respond to disinformation and other threats, both within the Election Commission in Nepal and through partnerships.

• Regularly review the situation in order to be prepared for emerging challenges.

• Experience-sharing platforms connecting journalists with other stakeholders can help build trust and also help disseminate factual information.

• Constitutional Commissions, such as the Election Commission in Nepal, should prepare a roster of media and journalists and update it regularly. This way, credible media houses are recognized.

• Cyber-crime should be made punishable by law.

The specific recommendations for the Election Commission, Nepal on the issues to consider before using technology in elections are as follows: attention should be paid to the impact of using new technology; transparency of the systems involved should be ensured; and adequate attention should be paid to minimizing the risks associated with new technology.

More Information:

The full report, complete with recommendations and visual assets, of the event can be accessed here: https://www.np.undp.org/content/nepal/en/home/presscenter/articles/2019/world-press-freedom-day-2019.html

Other events where UNDP and UNESCO have collaborated include:

Stefan Liller

Hi Caroline and Daria,

I would like to share a bit about our journey to address disinformation in Uruguay – firstly in connection with the parliamentary and presidential elections in 2019 and now in 2020 in the context of COVID and the regional and local elections.

Our work began in April 2019, when campaigning started in earnest for the parliamentary and presidential elections scheduled for October 2019. We noted how an increasing amount of what at the time was referred to as fake news emerged around several of the candidates and parties.

We reached out to a number of stakeholders worried about this development – and joined forces with the Association of Uruguayan Press, UNESCO, the Astur Foundation and the Friedrich Ebert Stiftung to promote the signing of an ethical pact among all the political parties to not engage in, and to discourage, the spread of disinformation in the context of the elections.

The pact was signed in Parliament on 26 April 2019, with the participation of the whole political establishment in Uruguay – including the sitting Vice President, a former President, all the political parties and the main candidates. On the morning of the same day, we organized a closed dialogue at the UNDP office in Montevideo jointly with UNESCO, where we brought together representatives of all the political parties, the electoral court, civil society, media outlets and the internet platforms (Google, Facebook and Twitter) to share information and approaches to how these different groups and organizations work to address disinformation, and to explore how to work together further.

Following the signing of the pact and the dialogue there was a lot of attention in the media around the issue and several different initiatives appeared in the following months, including a fact checking service called verificado.uy. Jointly with UNESCO and the internet platforms we then organized capacity building workshops for media and central, regional and local governments on how to address disinformation. This work culminated with the elections in October 2019.

In March 2020, in the context of the unfolding COVID pandemic, we produced spots for social media with UNESCO on how to address disinformation in times of Coronavirus. The spots were subsequently translated into several languages by different UNDP and UNESCO offices, and highlighted in the context of the celebration of World Press Freedom Day on 3 May 2020.

In April, under the leadership of UNESCO, we joined them, WHO and the University of Texas to produce a Massive Open Online Course on the topic “Journalism in a pandemic: Covering COVID-19 now and in the future”, with a special focus on how to report during a pandemic and the issue of disinformation. It was facilitated from 4 to 31 May, and during that time more than 9,000 people, many of them journalists, from 162 countries participated in the course, which is now available online.

On 29 August 2020, in an official act with the congress of governors and the participation of the Association of Uruguayan Press, UNDP and UNESCO, the political parties in Uruguay reconfirmed their commitment to the ethical pact – this time in anticipation of the regional and local elections that were held on 27 September.

We are currently developing an agenda of work and dialogues in Uruguay related to internet content moderation, freedom of expression and democratic governance, where we want to look at 1. new gatekeepers, private censorship and freedom of expression; 2. concentration, diversity and pluralism on the internet; and 3. private regulation, the role of the State and democratic governance. We hope this can be a contribution to the discussion of the roles and responsibilities of the internet platforms and others with regard to content management on the Internet.

Best,

Stefan

Daria Asmolova (Moderator)

Thank you for sharing your action plan, Stefan! The ethical pact signing is a brilliant idea which reminds us that non-tech solutions can have a big impact even in the digital space.  
And I would love to learn more about verificado.uy in terms of the usage, engagement and overall reception of the platform.
The agenda for the internet content moderation, freedom of expression and democratic governance is so on point - these are the questions asked in pretty much every part of the world, so please share your thinking as it progresses and feel free to reach out to Niamh Hanafin and other policy experts for more input.

Niamh Hanafin (Moderator)

Stefan Liller your approach contains many of the components that we "intuitively" feel should be used to address information pollution. I was curious to know if you have a sense of the impact of these efforts, either individually or collectively? Were there any measurable changes, as per your initial assumptions?

Larriza Thurler

Thanks, Stefan! How is WhatsApp used in Uruguay? Here in Brazil we have a lot of fake news spreading through WhatsApp. Media outlets fact-check it, but government supporters are very critical of the media and don't believe anything they say. It's a challenge to fight.

Ruth Stewart

Hi everyone, 

So I'm leading a project in partnership with Africa Check which is exploring the evidence for mitigating strategies for misinformation (we're focussing on COVID-19 misinformation shared on social media, but there will be lessons for the wider community). We're just in the process of analysing the research evidence as part of a rapid review, and also analysing a set of interviews conducted with fact checkers across the continent to understand the strategies they use. Happy to share findings once available at the end of the month! (Sorry, it's too early to say much about what we've learnt so far).

 

Caroline Hammarberg (Moderator)

Thank you for sharing, Ruth, this is very interesting indeed. What types of factors are you looking at as part of the review? Do you think the findings will be available by 23 October, which is the last day of these consultations, and is there otherwise a website where you plan to publish them? We look forward to hearing more, and please don't hesitate to share indicative findings to inform the discussions in the meantime if there is enough to go on.

Katie Burnham

We at Farm Radio are also very interested in this! We've been relying on your lists of common myths and misconceptions for some of our work in the past few months!

Ludwwin

1. We need to teach the methods and channels for identifying and reporting acts of disinformation in digital contexts.

Analyses of fake accounts, the origin of content and the impact generated should be easy to classify, so that civil society understands the severity of each case of disinformation.

4. Disinformation must be fought by matching the speed and scale at which it spreads on social networks. Young people and digital influencers are important allies who can help disseminate information at large scale in a short time, as can strategically placed social media advertising, to compete with decentralised channels such as WhatsApp. In March we ran a pilot with Latin American influencers on COVID-19 disinformation: https://es.unesco.org/news/jovenes-influenciadores-desafian-desinformac…

Additionally, it is important to explore social media monitoring tools to more easily identify potentially structured strategies with political ends. These tools are normally used in commercial advertising campaigns.

5. Organisations that do fact-checking are a priority in responding to disinformation.

7. Short audiovisual literacy content that can easily be introduced on social networks. Involve educational institutions and influencers, and make visible the negative effects of local cases of disinformation.

Juan Pablo Miranda

1. A first step is to settle which type of information pollution is being discussed. For example, fake news is not the same type of information pollution as biased or exaggerated information. It is also important to establish different levels of responsibility: a person who creates false information with the purpose of misinforming bears a different level of responsibility from a person who shares false information unknowingly. In addition, it is important to develop awareness policies in this regard, which, at least in Chile, have not existed in a massive and systematic way.

3. It is important to develop a regulatory framework that, on the one hand, respects the personal information of social network users and, on the other, makes companies responsible for the content created and shared on their platforms. Thus, it is important to check that platforms have mechanisms to identify information pollution. Likewise, it is important that companies take an active role in raising awareness about the possible adverse effects of social networks and think about ways to mitigate the information bubbles formed by the algorithms these platforms use.

6. It is important to involve the companies that own digital platforms, as well as public institutions, the media, civil society organizations and political parties.

Daria Asmolova (Moderator)

Great points! When we discuss information pollution, we always try to distinguish between misinformation, disinformation and malinformation, using the typology from the Council of Europe report:

Disinformation. Information that is false and deliberately created to harm a person, social group, organisation or country.

Misinformation. Information that is false, but not created with the intention of causing harm.

Mal-information. Information that is based on real facts, but manipulated to inflict harm on a person, organisation or country.

But maybe this should be reviewed as well?

 

And you raised an interesting point about personal privacy and platform accountability. We are seeing platforms self-moderating hate speech, for example, so is that the way to go, or should we still look at a regulatory framework?

Daria Asmolova (Moderator)

Week One Summary

Thank you to all contributors this week. Here is the summary:

1) Beyond tech and regulatory solutions: an ethical pact between political parties in Uruguay helps prevent the spread of political misinformation.

2) What could be the strategies for fighting misinformation on WhatsApp?

3) Personal data privacy and protections vs platforms' responsibility for the content they host: is there a role for regulation, or is self-regulation enough?

4) Uruguay is also exploring these questions: 

  • New gatekeepers, private censorship and freedom of expression.
  • Concentration, diversity and pluralism on the Internet.
  • Private regulation, the role of the State and democratic governance.

5) A study with learnings on mitigating strategies for misinformation in Africa should be available at the end of October.

6) More solutions:

  • Digital literacy for identifying misinformation;
  • Engaging influencers.

Ruth Canagarajah

"How can we build greater resilience to disinformation, especially among vulnerable or marginalised groups, such as through greater media and information literacy?"

This question invites quite a few different, interesting avenues that could be explored. For instance, there's research showing that crowdsourcing insights online about misinformation/disinformation has the potential to work, despite concerns about whether laypersons (i.e. non-experts) know how to accurately flag disinformation, misinformation, or heavily biased and politically oriented news. This study was run in 2019 in the hyperpartisan context of the States and would need to be replicated in other contexts to see if the findings hold.

Another interesting idea to test how to build greater resilience to disinformation is "inoculation". Just as the biological means of inoculation is introducing an antigen/pathogen into a system to produce immunity, perhaps this idea can be applied to misinformation. It would involve showing populations of interest the "hallmarks" of misinformation (i.e. being mindful of content source, emotive language, amongst others) to build psychological resistance to mis/disinformation. This is an approach that has already been built by Cambridge University in the form of an online game, and certainly one worth exploring given a capacity to "gamify" resilience-building in consuming misinformation rather than relying on information provision alone. 

Regardless of the approach, and especially given that the two studies mentioned derive from the US/UK, testing solutions (and our assumptions about whether they will work) on a small scale is absolutely vital before scaling them up.

Katie Burnham

With regard to questions 5-7, I wanted to share a bit of our experience at Farm Radio International. We are a Canadian NGO with a network of more than 1,000 radio stations across Africa. We provide these stations with information and training materials, and collaborate with many stations on radio campaigns, particularly to support farmers and rural audiences.

Radio broadcasters and journalists are vital for getting good information to people, but they also need access to good information. COVID-19 was new, and fake news was almost as common as good information. Farm Radio shared information in a variety of ways, from print resources, to an IVR system, to a Facebook chatbot. But perhaps one of the more effective tools was our WhatsApp discussions with expert guests. We have a dozen WhatsApp groups bringing together more than 1,000 broadcasters as a community of practice. We invited public health experts and other experts (gender, agriculture, nutrition, etc.) to join our WhatsApp groups so that broadcasters could ask questions about the pandemic. This was an opportunity for broadcasters to fact-check their information with experts. Sometimes, journalists don't have access to good sources of information or the right experts to fact-check what they are seeing shared on social media.

We also tried to provide our broadcasting partners with tools to identify fake news, and developed a Broadcaster how-to guide on this topic, which is available in English, French, Swahili, and Amharic. This speaks to Ruth Canagarajah’s point about “inoculation.” Journalists need to understand how to identify misinformation and how to fact-check it.  

To help broadcasters share good information with their audiences, we published two series of radio spots sharing key COVID-19 info and addressing common myths, such as the idea that alcohol, garlic, lemon, or antibiotics will prevent COVID-19. These are also available in English, French, Swahili, and Amharic, and as short messages, are easier to translate into local languages. This way, we are hoping to reach audiences who don’t speak English or French and can find it more difficult to access information in their language. Marginalised groups can have a challenge getting good information if it’s not available in their language.

Anonymous

Dear all - sorry for joining this discussion a bit late. It looks like some great solutions are already under consideration. I thought I'd add a little, from a UK perspective, having recently chaired the LSE's Truth, Trust and Technology Commission, with a focus on dealing with mis/disinformation. Our report is at https://www.lse.ac.uk/media-and-communications/truth-trust-and-technolo… - beyond the analysis in the report (more appropriate for the problem statement group on this forum), our recommendations were as follows:

Establish an Independent Platform Agency

The UK and devolved governments should introduce a new levy on UK online platforms' revenue, a proportion of which should be ring-fenced to fund a new Independent Platform Agency (IPA). The IPA should be structurally independent of Government but report to Parliament. Its purpose, initially, will not be direct regulation, but rather an ‘observatory and policy advice’ function that will establish a permanent institutional presence to encourage the various initiatives attempting to address problems of information reliability. The IPA should be established by legislation and have the following duties:

  • Report on trends in news and information sharing according to a methodological framework subject to public consultation. This should include real data on the most shared and read stories, broken down by demographic group.
  • Report on the effectiveness of self-regulation of the largest news-carrying social and search platforms. This should include reports on trust marks, credibility signalling, filtering and takedown.
  • Mobilise and coordinate all relevant actors to ensure an inclusive and sustained programme in media literacy for both children and adults, and conduct evaluations of initiatives. The IPA should work with Ofcom to ensure sufficient evidence on the public's critical news and information literacy.
  • Report annually to Parliament on the performance of platforms' self-regulation and the long-term needs for possible regulatory action.
  • Provide reports on request to other agencies such as the Electoral Commission, Ofcom and the Information Commissioner's Office, to support the performance of their duties, according to agreed criteria.
  • Work closely with Ofcom and the Competition and Markets Authority to monitor the level of market dominance and the impact of platforms on media plurality and quality.

In order to fulfil these duties, the IPA will need the following powers:

  • Powers to request data from all the major platforms (determined by a UK advertising revenue threshold) on the top most shared news and information stories, referrals, news-sharing trends and case studies of particular stories. The types of data should be determined on the basis of public consultation on monitoring methodologies and according to a shared template that applies across different companies above the threshold. These data will be held by the IPA within a tight confidentiality regime to protect privacy and commercial sensitivities.
  • Powers to impose fines on platforms if they fail to provide data, and to request additional data when a court order is granted.
  • The IPA's independence from government should be established in law and protected financially and through security of tenure of its governing Board. The IPA should have close links with civil society and be transparent about how it interprets and performs its remit.

In addition to this new institution, we make further recommendations:

In the short term:

  • News media should continue their important work to develop quality and innovative revenue and distribution models. They should also continue to work with civil society and the platforms on signalling the credibility of content.
  • Platforms should develop annual plans and transparent open mission statements on how they plan to tackle misinformation. They should work with civil society and news providers to develop trust marking.
  • Government should mobilise an urgent, integrated, new programme in media literacy. This could also be funded by the digital platform levy and should include digital media literacy training for politicians.
  • Parliament should bring forward legislation to introduce a statutory code on political advertising as recommended by the Information Commissioner.

In the medium term (3 years):

  • Standard setting for social media platforms. Until now, standards have been set by platforms themselves. If this fails to improve the UK information environment, the IPA should set these in collaboration with civil society, Parliament and the public.
  • The news industry should develop a News Innovation Centre to support journalism innovation and quality news, funded by the levy on digital platform revenue.

In the longer term (5 years):

  • The IPA should provide a permanent forum for monitoring and review of platform behaviours, reporting to Parliament on an annual basis.
  • The IPA should be asked to conduct annual reviews of ‘the state of disinformation’ that should include policy recommendations to a parliamentary committee. These should encompass positive interventions such as the funding of journalism.

Emanuele Sapienza (Moderator)

Thank you Sonia Livingstone for this comprehensive set of recommendations! It would be great to hear from consultation participants about the relevance and applicability of these measures to different contexts. Personally, I am quite intrigued by the institutional model envisaged for the IPA and was wondering: are you (or other consultation members) aware of similar mechanisms in other countries?

Louise Shaxson

Hello Sonia, thanks for the reference - very interesting reading indeed. I have come over from Room 2, where we're discussing the drivers of disinformation, and it's clear that one of the drivers is something around the splintering of a sense of community and a sense of identity. In a UK context, how would you see an organisation like the IPA working in tandem with (e.g.) citizens' juries or similar groups that foster debate and consultation across many different groups? Is that something your group looked at?

Claire Pershan

EU DisinfoLab's recommendations are addressed to the EU level, but can certainly be applied more widely. We hope that the EU legislative packages and roadmaps on the table now (the DSA, EDAP, DEAP, and MAAP, to put in a few Brussels acronyms!) will all work in harmony and find strong support, particularly regarding decentralized funding for the variety of actors fighting disinformation. As all here will know, there is not one shape or size to the solution -- civil society and other stakeholders are extremely diverse and approach this problem from all angles, and all of these actors need sustained, flexible support. 

In particular, the upcoming European Democracy Action Plan, or EDAP, will be critical. Here are a number of points that we think it should take into account:

  • The EU (but, again, this principle applies more broadly!) urgently needs an ambitious framework to fund a decentralised network of journalists, academics, fact-checkers and open-source investigators.
  • We also need better protection for disinformation researchers, guaranteeing their physical and psychological well-being online and offline, with funds that account for these risks. Intimidation tactics in our sector are omnipresent (hack-and-leak operations, online intimidation, etc.).
  • Multi-annual grants and specific financing on cybersecurity to cover costly triple penetration tests and resilient IT systems for NGOs working on disinformation would be one way to manage this threat.
  • We need to set standards on data access and enforce consistent definitions for platforms to respond to cases of information manipulation.
  • Last, we need to form best practices for political campaigning and a clear distinction between disinformation and strategic communications. Disinformation cannot become a regular political campaigning strategy. Political candidates should commit to respecting best practices for online campaigning and funding should be conditioned on fair and transparent online campaigning.

Larriza Thurler

Hi, exploring further the case of media synergy in Brazil: after difficulties in accessing governmental data, media outlets created a consortium to present the same information across multiple sources. The participating outlets are Estadão (newspaper), G1 (news portal), O Globo (newspaper), Extra (newspaper), Folha (newspaper) and UOL (news portal).

Jamie Hitchen

Hi all,

Some reflections below:

  1. I think, as others have said, empowering citizens to tackle misinformation and disinformation on platforms like WhatsApp is perhaps the most viable solution when it comes to thinking about how to create a better online environment. In Nigeria, one idea proposed is working with key influencers on the platforms (who also have a strong offline presence) and group admins, who can have a significant knock-on effect both on the conduct of others and in terms of stemming the flow of dis/misinformation. I think the danger of more formal regulation is that it can be used to clamp down on political freedom of speech, as interpretations of what counts as disinformation become politicised. Regulation would also exacerbate divisions if used in this way, and this has been a concern raised in Nigeria. But I do think it's important to differentiate between disinformation and hate speech; the latter is often covered by existing legislation, which should be applied appropriately if an individual is using social media to call for violence against others, or worse.
  2. I think regional or continent-wide bodies such as ECOWAS or the AU can play a role in ensuring that their protocols enshrine the right to freedom of speech online, and that should include a commitment by states not to shut down the internet (which has been a feature across a number of African countries in recent years). They can also establish and enforce a data protection act that covers online users, in line with the 2010 Supplementary Act on Personal Data Protection within ECOWAS, for example. For all the possible risks posed by disinformation, these platforms also allow the spread of useful and empowering information, and provide space for discussions to take place between groups or across borders that are hard to facilitate in other ways. For me, closing down the space isn't really the solution; in fact, keeping it open should be a key commitment.
  3. Tough question. Not sure I have a good answer, other than to say they should! In Africa, they need to be more present (physically), but also in terms of how they are listening to and engaging with user concerns and thinking about how to build platform solutions to some of those concerns. For example, the labelling of the veracity of some US election tweets: are these services going to be available in Uganda next year, for example? States need to do more in this regard, to ensure that companies like Facebook have the ability to effectively moderate content that users share on the platform (in whichever language they choose to post in). But it's very difficult given the size of these social media companies, and many are also involved in building internet cables across the continent.
  4. Nothing to note
  5. Journalists can work closely in networks with locally trained fact-checkers, so that they can respond as quickly as possible to false news stories and provide counters to them. Such a network can also help ensure a greater reach for their fact-checks through the same channels. In an election period, journalists can interview leading candidates from political parties about key issues in their manifestos on Facebook Live and Twitter. In addition to questions from the interviewers, space can be given for citizens to submit their own questions using WhatsApp and Facebook, to be asked in a segment of the interview; this would aim to focus election campaigns around issues and policy promises. In general, I think journalists globally can try to do more to set the agenda when it comes to key and pressing issues, rather than always responding to what is circulating on social media (the US being a good example). But this is often difficult to align with the need to sell newspapers or raise revenue, as many viewers want more clickbait content.
  6. I think a whole range of stakeholders need to be engaged. Around an election, this should include representatives of political parties (to discuss codes of conduct), the election commission (who can try to establish a credible voice on social media platforms), civil society, media (who can try to ensure those codes are applied in practice), and so on. Social media platforms also have a responsibility to be present, and I understand they are.
  7. I think the key is more digital and civic literacy. One idea we have from recent research done in The Gambia is "listening clubs to discuss what is debated on radio talk shows and social media about politics at the village level across The Gambia. These would aim to stimulate further debate and discussion among citizens about transparent and accountable governance, as well as misinformation, and would be overseen by local moderators, trained on these themes". This can be combined with tips on how to spot false news and with creating credible platforms that share more accurate news (recordings of radio shows as audio clips, or newspaper articles presented in a way that can be digested and shared on WhatsApp). The Continent (https://twitter.com/thecontinent_) is Africa's first WhatsApp newspaper, so I think these kinds of initiatives can help get better quality information circulating (to balance out the falsehoods). Propaganda has always been a feature of life, so for me the question is how we highlight the positive uses and reduce the negative ones; for that to happen, the key is more empowered and informed users who make those trying to sow division less relevant. I think what Finland is doing at primary and secondary school level is interesting as a long-term solution - https://edition.cnn.com/interactive/2019/05/europe/finland-fake-news-in… I think targeted digital literacy, targeting in the case of Nigeria key religious and traditional leaders for example, can have more immediate short-term impacts, but the longer-term goal has to be this kind of education-driven approach (as national as possible).

Stijn Aelbers (Moderator)

Week Two Summary

Dear all,

Week 2 has brought us some really practical examples and thought-provoking ideas, like:

Farm Radio highlights the importance of access to reliable information for local journalists, as they play a vital role in getting good information to people. One of the most successful interventions to achieve this was organising WhatsApp discussions allowing journalists to ask questions of experts. They also support them with training and tools to identify fake news, and create short messages in some key languages that can easily be translated into more languages by local journalists.

Some interesting thoughts were shared around disinformation and the need for any response to work at the same speed and scale as it spreads - young people and influencers can help with this.

Interesting ideas around crowdsourcing insights online, with “non-experts” flagging disinformation, misinformation and heavily biased news items.

There also seems to be a need for better categories and definitions around disinformation and info "pollution", because there's a difference between exaggeration, bias and fake news, and also a difference between someone who creates disinformation and someone who shares it, with different responsibilities for each. New terminology was also introduced, like the idea of "inoculation" as a way to identify the "hallmarks" or indicators of misinformation, which can also be taught through gamification.

There's also a lot of work and ideas around making companies responsible for the content on their platform:
 

In the UK, the LSE has issued a report with some recommendations, most notably to establish an Independent Platform Agency (IPA), structurally independent of government but reporting to parliament. The IPA would have an ”observatory and policy advice” role on information reliability. It could report on trends in information and on the effectiveness of self-regulation, coordinate media literacy efforts, and have the power to request information from all major platforms and to impose fines if platforms don't provide the requested data.

In the immediate term there's still an important role for the media, an urgent need for media literacy, and a need to invest in news innovation.

The EU DisinfoLab has also issued recommendations that could be applied more widely, one of them highlighting the importance of funding a variety of actors, as there will never be a “one size fits all” solution.
 

--

Week One Summary (Room 3) by Moderator Daria Asmolova.

 

Emanuele Sapienza (Moderator)

Thank you Claire Pershan, Larriza Thurler and Jamie Hitchen for very rich contributions!

You raise some very important questions, including - among others - the challenge of decentralized funding for actors fighting mis- and disinformation and the importance of journalists working in networks. It would be great to hear more about how these issues have been addressed in different countries.

Another important theme coming up in today's contributions is the role of regional bodies (with references to EU, AU and ECOWAS). Any insights participants could share about the role of these institutions in other regions would be greatly appreciated!

Louise Shaxson

Dear all, it struck me that we should be trying to invite young people into this conversation as well and I happened to come across Abbie Richards, who has gone viral with this chart about conspiracy theories.  I have heard anecdotally that humour can be effective in dealing with the proponents of disinformation: does anyone know of any other initiatives like this from around the world?  

Niamh Hanafin (Moderator)

This is fantastic and agreed that it's very important to get different youth perspectives on these issues, as they have very diverse and sometimes ingenious ways of navigating disinformation online.    

Lillian Njoro

Hello everyone,

Please see below some thoughts from UNDP Acc Lab Kenya.

----

What role should legislation and regulation play? How can regulatory responses be developed in a way that respects fundamental rights such as the right to privacy, freedom of expression and the right of access to information?

Legislation should play a role in protecting citizens. Misuse of technology to harm others physically, mentally and emotionally must have consequences. Obtaining or misusing data in criminal ways must also be addressed and criminalized. Protecting and defending individuals, devices and networks should form the basis of any cybersecurity strategy. Regulatory responses should be developed through participatory approaches, engaging with citizens and prioritising minority groups such as women, youth and People Living with Disabilities to gather insights based on their lived experience. The response should also be created in such a way as to leave little or no room for abuse, especially when it comes to things like politics and elections.

What role should policy play at the national, regional, and international level? How can these be harmonised?

Harmonisation of rules is critical. The impact of digital technology transcends borders, both positively and negatively. There are currently no data protection or universal privacy laws that apply to the web, and no body is mandated with the regulation or oversight of the web space. Some structure and precedent should exist to have online platforms take more responsibility for the security of their users.

How can internet companies be effectively governed or regulated to ensure they act in the public interest?

The existence of laws is one way to ensure that internet companies abide by order and regulation. Regulation should encourage more innovation, noting the rapid nature of change and growth within the digital technology space, and incorporate a futures-thinking mindset to try to anticipate changes down the road. Policy makers should bring diverse expertise to the table to ensure the nitty-gritty specificities and details are captured, and to advise on new frontiers of technology.

What are the digital and technological options for addressing this? Do you have examples of effective responses?

No specific examples, but I'd like to suggest two key policy and advocacy contacts from Kenya who have done a lot of research, strategy and advocacy work on the intersection of technology, public policy and governance with a focus on Africa: 1) Nanjala Nyabola, author of "Digital Democracy, Analogue Politics"; 2) Nanjira Sambuli.

Aaron Sugarman

Hello everyone, I am answering questions 1, 3, 4, and 6 on behalf of the Global Disinformation Index (GDI).

One of the primary drivers of disinformation creation is the financial incentive—as I mentioned in the other discussion room, GDI estimates that disinformation sites generate more than a quarter of a billion dollars per year in ad revenues. To remove the financial incentive and defund disinformation sites, there needs to be a coordinated regulatory response from a broad coalition of brands, ad tech companies and platforms. This regulatory response respects fundamental rights: freedom of speech does not include the right to profit from your speech or to have that speech algorithmically amplified.

To ensure that internet companies are being regulated to act in the public interests ad exchanges, e-commerce, and-payment platforms must:

  1. Be transparent about where they are placing adverts. This will give all of these companies more control over the domains they service and help ensure ethical, accountable and transparent business practices.
  2. Get regularly updated lists of disinformation domains by automatically classifying domains containing disinformation. For example, GDI provides a “worst offenders list” of sites more likely to carry disinformation.
  3. Support quality and trusted news domains. This applies especially to ad networks. In the case of ads, when brands use lists of high disinformation-risk news sites to vet their ad placements, it not only cuts funding from disinformation actors but also channels more money into supporting high-quality news sites.
  • For Google and Facebook, we have found a notable disconnect between their policies on what content they restrict and the content they will provide ad services to.
  • For key adtech players like Criteo, Revcontent and Taboola, we also see a wide gap in policies to address the risky content they provide ad services to. Many of these companies will service high-risk disinformation sites. https://disinformationindex.org/2020/05/why-is-tech-not-defunding-covid-19-disinfo-sites/

Digital and technological options for this regulation include artificial intelligence that GDI has built, which tracks disinformation narratives online and allows us to flag disinformation as it appears in the English language in real time. Our instruments can help governments, platforms, brands and health officials see which sorts of narratives are gaining the most attention, and hence what countermeasures should be launched. This approach is more nuanced than current blunt keyword-based blocklists and results in fewer false positives, allowing advertisers to return to placing ads on quality news coverage.
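
[Editor's illustration] To make the contrast concrete, here is a minimal sketch of why a trained classifier can be more nuanced than a blunt keyword blocklist. This is not GDI's system; the blocklist terms, toy training examples and model choice are invented purely for illustration.

```python
# Minimal sketch: keyword blocklist vs. trained text classifier.
# NOT GDI's system; the data, terms and model below are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

BLOCKLIST = {"5g", "hoax", "plandemic"}  # hypothetical blocklist terms

def keyword_flag(text: str) -> bool:
    # Blunt approach: any blocklisted term flags the page, even when it
    # appears in a debunk or in quality news coverage (false positive).
    return any(term in text.lower() for term in BLOCKLIST)

# Hypothetical labelled examples (1 = disinformation, 0 = quality news).
texts = [
    "5g towers secretly spread the virus, officials hide the truth",
    "miracle cure suppressed by doctors, share before it is deleted",
    "health agency debunks the 5g conspiracy theory in a new report",
    "study finds vaccines safe and effective, public health experts say",
]
labels = [1, 1, 0, 0]

# Classifier approach: learn how terms are used in context, so a
# fact-check mentioning "5g" gets a graded risk score rather than
# being blocked outright.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

page = "newspaper fact-checks the 5g hoax circulating online"
print("blocklist flags it:", keyword_flag(page))              # True: a false positive
print("classifier risk:", model.predict_proba([page])[0][1])  # graded score instead
```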

Based on our experience and engagement with ad tech companies over the last 12 months, there needs to be an industry-wide response and action on the above points. Any voluntary standards must be set, agreed and implemented in a robust manner with other stakeholders, including brands, UN regulatory authorities and civil society and academic experts. Two global examples of such an effort are the Global Alliance for Responsible Media (GARM) and GIFT-C. However, if such measures cannot be agreed, then a regulatory response on the part of the UN is needed for industry-wide uptake.


 

Emanuele Sapienza (Moderator)

Thank you Aaron Sugarman for very thoughtful points on the financial incentives fueling information pollution. I wonder if you could elaborate a little further from a political economy perspective. Based on your engagement with different industry stakeholders, what would you say is needed to generate the (political) will to enact the types of measures you outline?

Aaron Sugarman

Emanuele Sapienza 

The problem of ads funding disinformation is not well known to certain members of the public and policymakers, so the key to generating political will is further publicizing this issue. GDI raises awareness through several avenues:

  • Our partnered research; for example, we recently published a paper on the online funding of hate groups with the Institute for Strategic Dialogue
  • Weekly reading lists covering recent news on the topic of disinformation and advertising
  • Blog posts explaining relevant research and summaries of GDI findings

 

Political will is not necessarily required for many of the measures that GDI outlined. We cite the need for actions from ad exchanges, e-commerce, and payment platforms, which could be done without government support or regulation. However, explicit regulation holding internet companies responsible for funding disinformation would help ensure that the issue is solved rather than relying on individual stakeholders acting independently.

Pablo M. Fernández

4. At Chequeado we are working on automation processes that allow us to identify misleading and false content more quickly, through the use of AI and machine learning. We are also developing tools to make our workflows more efficient and to better respond to our community across the different channels we have. This is something we are already using.
We believe there is much to be done in this area, and that collaboration among fact checkers and other interested organizations could be very helpful, as we face similar challenges and much of the technology developed can be used in different countries and contexts. 
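
[Editor's illustration] One common automation pattern for fact-checkers is matching incoming posts against claims that have already been verified, using sentence embeddings. The sketch below is an assumed approach, not Chequeado's actual pipeline; the model name, example claims and similarity threshold are placeholders.

```python
# Minimal sketch: route incoming posts to existing fact-checks by
# semantic similarity. NOT Chequeado's pipeline; names are placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Hypothetical database of claims a fact-checking team has already rated.
checked_claims = [
    "Drinking hot water cures COVID-19",           # rated: false
    "The 2019 budget cut health spending by 30%",  # rated: misleading
]
claim_vecs = model.encode(checked_claims, convert_to_tensor=True)

def match_incoming(post: str, threshold: float = 0.7):
    """Return previously checked claims similar to an incoming post."""
    post_vec = model.encode(post, convert_to_tensor=True)
    scores = util.cos_sim(post_vec, claim_vecs)[0]
    return [
        (checked_claims[i], float(score))
        for i, score in enumerate(scores)
        if float(score) >= threshold
    ]

# A viral message is routed to an existing fact-check instead of
# queueing for a human checker to investigate from scratch.
print(match_incoming("They say hot water kills the coronavirus"))
```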

Milena Rosenzvit

In Chequeado we believe that fact-checking is a very useful tool, but we need to complement it with other strategies. We are now studying how we could be more effective at giving trustworthy information to specific groups that can be vulnerable to certain kinds of disinformation, and at taking them through the process of fact-checking, so they better understand the dynamics and strategies behind the construction of disinformation, which can then help them identify it more easily in the future. We believe there is much to be done in research to better identify which groups are vulnerable to which kinds of disinformation, and which trustworthy sources could reach them with verified information.

Emanuele Sapienza (Moderator)

Week Three Summary

Another week of great contributions! Here are some highlights:

 

  • Sonia Livingstone shared recommendations emerging from LSE's Truth, Trust and Technology Commission, including the establishment of an Independent Platform Agency
  • Claire Pershan from the EU DisinfoLab outlined recommendations addressed to the EU level, but applicable also more widely. These include issues such as decentralized resourcing of fact-checking networks as well as standards on data access and political communication
  • Larriza Thurler shared an example of media collaboration in access to information from Brazil
  • Jamie Hitchen provided insights based on different African experiences, including Nigeria and The Gambia, and shared ideas on getting more quality information circulating, such as Africa's first WhatsApp newspaper
  • Lillian Njoro made a contribution based on the experience of UNDP's Accelerator Lab in Kenya covering, among others, issues related to regulation
  • Aaron Sugarman posted on behalf of the Global Disinformation Index (GDI) on the financial incentives fueling information pollution and shared a series of recommendations focused on ad exchanges, e-commerce and payment platforms
  • Pablo M. Fernández and Milena Rosenzvit from Chequeado raised very important points about the need to complement fact verification efforts with partnership-building and audience-targeting strategies

--

Week Two Summary (Room 3) by Moderator Stijn Aelbers.
Week One Summary (Room 3) by Moderator Daria Asmolova.

Marine Ragnet

Hello and thank you for this great discussion! I would like to address questions 4 and 6 by providing you with an overview of our work in this area. 

Disinformation has a considerable negative impact on public trust and engagement. In a March 2019 report, Weapons of Mass Distraction: Foreign State-Sponsored Disinformation in the Digital Age, my colleagues Christina Nemr and William Gangware at Park Advisors conducted an interdisciplinary review of the human and technological vulnerabilities to foreign propaganda and disinformation. They assert that today's information ecosystem presents significant vulnerabilities that foreign states can exploit, and that these revolve around three primary, interconnected elements: 1) the medium – the platforms on which disinformation flourishes; 2) the message – what is being conveyed through disinformation; and 3) the audience – the consumers of such content. The problem of disinformation is therefore not one that can be solved through any single solution, whether psychological or technological. An effective response to this challenge requires understanding the converging factors of technology, media, and human behavior.

One way to fight back against disinformation is by leveraging innovative tools and technologies to identify and analyze disinformation trends, support counter-disinformation strategies, and strengthen psychological resilience. This is why my organization, Park Advisors, created Disinfo Cloud, an online hub connecting diverse stakeholders from government, private sector, academia, media, and civil society to help them discover the latest technologies, research, and news on countering foreign-sponsored disinformation. The platform is supported by the U.S. State Department’s Global Engagement Center and is part of its broader efforts to counter harmful foreign-sponsored disinformation and propaganda, both state and nonstate.

Disinfo Cloud features a variety of technologies across a number of themes and provides varying levels of assessments to help users understand the capabilities of each one and how it may be unique. Tools featured include the following: 

i) social listening tools to help understand the online information environment; 

ii) manipulated media assessment tools to alert users to potentially altered texts, videos, or images; 

iii) fact-checking tools to help users determine the credibility of a news source or website; 

iv) blockchain-based media authentication technologies to ensure the validity of original content;

v) internet censorship circumvention tools to facilitate the continued flow of information; and

vi) gamified education tools to increase psychological resilience against disinformation and promote critical thinking. 

These tools are useful to a variety of stakeholders, from government to academia to media to the general public, but if we’re aiming to increase psychological resilience more broadly, it’s also worth exploring how we can work with the tech sector to make certain tools, like fact-checking alerts, a default feature of browsers and apps. That way, we can reach a broader mass of users, including those who regularly consume low-quality and false information and may not realize it.

Through Disinfo Cloud, we aim to facilitate public-private partnerships to advance collaboration like that mentioned above on this complex problem.

We welcome you to join the Disinfo Cloud community and likewise welcome any feedback on the platform, such as which tools are most helpful. In addition, we would love to get your thoughts on the key disinformation issues impacting your work, as well as the technology/tool gaps that you feel exist.

Feel free to reach out to chat about potential engagement opportunities. You are also welcome to connect with us via LinkedIn and Twitter and visit our blog for some of the latest news, events, and research related to disinformation.

Daria Asmolova (Moderator)

Disinfo Cloud looks like a great initiative. I just signed up for access to explore it, as our team (the Chief Digital Office at UNDP) has been looking into different tools, ideally open-source, that could be deployed to identify mis- and disinformation and assess its scale.

And I agree that working with the tech platforms is the next logical step, to make fact-checking easily accessible to the public rather than keeping it as a research exercise or behind-closed-doors moderation.

Orna Young

Hi everyone

Just wanted to respond to three specific questions. 

 

What role should legislation and regulation play? How can regulatory responses be developed in a way that respects fundamental rights such as right to privacy, freedom of expression and right of access to information?
Regulation may only be feasible in the most egregious cases of disinformation; there is no single solution that will ‘cure’ harmful information. Rather, public policy efforts should continue to ask smart questions, do good research and evidence-gathering, and communicate public information clearly. We also believe there are positive synergies between fact-checking organisations, universities, and statutory agencies in promoting good information, which is at least as important as countering bad information.
 

What are the digital and technological options for addressing this? Do you have examples of effective responses?
Recent years have witnessed a range of efforts by social media platforms to address the sharing of disinformation, undertaken with varying degrees of consultation with fact-checkers. FactCheckNI (Northern Ireland's first and only fact-checking organisation) is a partner on the EU Horizon 2020 project Co-Inform. The objective of this project is to create tools that foster critical thinking and digital literacy for a better-informed society. These tools have been designed and tested with policymakers, journalists, and citizens in three different EU countries. “Co-creation” is the approach that underpins the project, with a view to achieving the right balance between actors and types of solutions against disinformation. The aim of this method is to ensure that governments will have the opportunity to promote interaction between researchers, journalists, the private sector, the non-profit sector and citizens with minimal intervention.

How can we build greater resilience to disinformation, especially among vulnerable or marginalised groups, such as through greater media and information literacy?
Our partnership with a networked organisation working to address health inequalities in our region evidenced the need, at the grassroots community level, for fact-checked information and clear messaging on issues profoundly affecting individuals and communities. While disciplined government and thoughtful mainstream media can do much to prevent the spread of disinformation, we believe resilience to disinformation begins with the individual. 2020 has proven that good information saves lives. Anecdotally, we also see examples of disinformation being spread informally through local networks (for example, community and family WhatsApp groups). As a result, we believe that an accessible and community-focused approach to disinformation will empower people in our particular region to make informed decisions. The stresses of COVID-19 come on top of competing historical and political narratives in a society that is still adapting to diminishing conflict and grappling with legacy issues. To this end, we deliver training on fact-checking and critical-thinking skills, allowing organisations and individuals to access our methods and apply them in their work, while also equipping them with the skills and awareness needed to tackle disinformation. We have also adapted the presentation of our fact-checking work, ensuring its accessibility in terms of visuals and the brevity and concision of the messages our fact checks contain.

Miroslava Sawiris

This feedback is submitted by 10 organizations and civil society initiatives (GLOBSEC, nelez.cz, PSSI, CSD, Res Publica, Semantic Visions, Global Focus, Political Capital, Eastern Europe Studies Centre, DISI) from 6 European countries (Bulgaria, Hungary, Czechia, Lithuania, Slovakia and Romania), joined in the Alliance for Healthy Infosphere.

  1. What role should legislation and regulation play? How can regulatory responses be developed in a way that respects fundamental rights such as right to privacy, freedom of expression and right of access to information?
  • Regulation represents the key prerequisite for tackling disinformation online, especially on social media platforms which serve as the key source of information for millions of people across the world.
  • Regulatory measures should focus on two key issues – transparency and accountability.
  • First, social media platforms should be obliged to provide 100% transparency in:
      • their usage of users’ data for commercial purposes
      • the use of algorithms to generate content for each user
      • measures taken every day to take down illegal or inappropriate content breaching their community standards (e.g. inciting violence, racism, content harmful to public health, etc.)
  • Second, social media platforms should be responsible for the content posted on them. Thus, if a platform fails to take down content breaching its own community standards or national or international law, it should be held accountable by receiving a fine amounting to a certain percentage of its revenue in the given country.
  • Third, social media platforms should employ a proportionate number of employees with language skills for each country they operate in, according to the country’s user base size. These employees should be independent of local governments and authorities, be trained for neutrality, and act as fact-checkers. Once false content is identified, it should be labelled as such and made unavailable for sharing; sharing from the original source should likewise be disabled.
  2. What role should policy play at the national, regional and international level? How can these be harmonised?
  • The policies stated above should be implemented at the national level across the world. National governments should be able to restrict and pressure the platforms to do more without compromising users’ privacy and freedom of expression.
  • Regional groups, such as the EU, should, with all necessary data in hand, jointly decide to act in a more coherent way to pressure the social media giants into more drastic changes. For example, they should be able to force the platforms to stop using micro-targeting for advertisement and to limit the use of algorithms that lead to radicalizing or polarizing content, which often falls under the category of disinformation and conspiracy theories.
  • At the international level, the issues of freedom of expression and data privacy should be overseen, particularly vis-à-vis governments with non-democratic regimes.
  • Cooperation between different platforms in exchanging data on identified disinformation should also be encouraged at the regional and international levels.
  3. How can internet companies be effectively governed or regulated to ensure they act in the public interest?
  • Platforms should:
    • be 100% transparent about their actions and data use
    • be held accountable for a lack of action
    • stop using algorithms to curate content for users according to their online behaviour, characteristics and personality
    • stop offering the option of micro-targeting for advertising purposes
    • be required to hire staff with language skills for each country they operate in; the number of people hired should correspond to a specific percentage of users from the given country (e.g. 1 employee per 2,000 users — see the sketch below)
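
To make the proposed staffing rule concrete, here is a minimal back-of-the-envelope sketch in Python. The 1-per-2,000 ratio is the contributors' example above, not an established standard, and the function is purely illustrative:

```python
import math

# Illustrative only: 1 moderator per 2,000 users is the contributors'
# example above, not an established industry or regulatory standard.
USERS_PER_MODERATOR = 2_000

def required_staff(country_users: int) -> int:
    """Minimum language-qualified staff for a given country's user base."""
    return math.ceil(country_users / USERS_PER_MODERATOR)

print(required_staff(5_000_000))  # a country with 5M users -> 2500 staff
```
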
  4. What are the digital and technological options for addressing this? Do you have examples of effective responses?
  • Transparency reports, voluntarily published by the platforms, could serve as a starting point. These, however, should be mandatory, regular and specific to each country. The rest of the measures depend on regulation and on a change in technological approach within the platforms themselves.
  • An independent oversight body
  5. What is the role of journalism? How can journalists, broadcasters and news editors be better equipped and supported to address this issue?
  • Journalists play a pivotal role in addressing disinformation, as they should serve as a credible source of information. In comparison to other content, which should be taken down or disabled for sharing, the content of journalists and recognized media sources should be promoted more on social media.
  • Journalists and credible media sources should be assessed by an independent body, and their content should be labelled as credible and promoted as such across the platforms.
  • International organisations should provide more funding to make sure good-quality journalism is nurtured.
  6. Which stakeholders need to be engaged and which strategic partnerships should be considered?
      • We need better cross-border cooperation within the judiciary and in the applicability of law
      • Cooperation between media regulators in overseeing respect for media and social media standards should be reinforced
      • So far, the bodies responsible for overseeing the implementation of online standards have had limited competences. Functional cooperation needs responsible bodies with the capacity to sanction.
      • Cooperation of media councils with social media platforms, academic institutions, journalists and non-governmental organisations is needed and should be fostered not only within developing countries but across the world.
      • A strong pro-democratic coalition of nations striving for a “free internet” and information space should be established as a counterweight to attempts by autocratic countries to restrain and control information and the right of access to information.
      • We need to strengthen multi-national and multi-regional cooperation
      • Education systems and teachers should not be omitted; building media literacy and critical thinking will lead to stronger and more resilient societies
      • We need to engage with political representatives and increase their knowledge of the problem of disinformation, because they are the ones setting policies, and only strong political will and leadership can accomplish the changes needed
      • Social media platforms need to be actively engaged in the debates and involved in the policy process.
      • Active citizenship and responsible consumer behaviour on social media platforms can lead to better information environments. People, who created social media platforms in the first place, can be the driving force of change.

 

  7. How can we build greater resilience to disinformation, especially among vulnerable or marginalised groups, such as through greater media and information literacy?
      • We need to develop life-long education programs focusing on media literacy, critical thinking and new technologies
      • We should exchange best practices and promote journalistic standards across nations and regions
      • We need to create funds for investigative journalists and support their work
      • Investigative media should not sit behind paywalls while disinformation outlets spread false information that is free and easily accessible
      • Good-quality content and information need to be produced and disseminated in countries where domestic media are being utilised for propaganda
      • The international community should promote policies enabling people’s access to information and use of modern technologies
      • Experiencing the difference first-hand can be more impactful than reading about it, so exchange programs for vulnerable groups should be supported
      • Teach-the-teachers programs and reforms of education systems in many countries are an absolute must

Larriza Thurler

7. How can we build greater resilience to disinformation, especially among vulnerable or marginalised groups, such as through greater media and information literacy?

I would like to share two experiences in Brazil that can contribute to a greater resilience to disinformation. 

1. The first one is a transparency index of Covid-19 open data, by Open Knowledge. It analyzes governmental data from each state and how open and transparent they are. https://transparenciacovid19.ok.org.br/ 

2. The second one is the Observatório de Evidências Científicas Covid-19 (Covid-19 Scientific Evidence Observatory), whose Knowledge Management team I am part of. http://evidenciascovid19.ibict.br/ 

The Observatory's mission is to assist anyone interested in quality information about Covid-19, so that they can orient themselves and make appropriate decisions on different aspects of this collective health problem, based on research conducted with methodological rigour, by making the body of scientific evidence available and communicating it on the website in a clear, objective, useful and applicable manner. A team of professionals selects reliable studies based on systematic reviews, develops article reviews, and disseminates them through social media, videos and the website.

 

AlSur

In recent years, the Al Sur consortium has taken a growing interest in, and done increasing work on, the phenomenon of disinformation on the Internet, driven first by the last presidential elections in the United States and then by a succession of legislative attempts in several Latin American countries, increasingly spurred by crises like COVID-19.

In general terms, as a consortium, we are concerned that the concept of disinformation has been used as an umbrella covering very diverse political and social problems which, if approached in an excessively simplistic and general way, may result in the undermining of freedom of expression and other fundamental rights. Within this framework, we have worked on two reports that may be of interest to UNESCO and UNDP:

- "Disinformation on the Internet in electoral contexts in Latin America and the Caribbean. Regional contribution of civil society organizations linked to Human Rights in the digital environment". 2019. This contribution was sent for the report on the subject led by the Office of the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights (IACHR).

- "Disinformation and the pandemic: A human rights perspective." 2020. This document was prepared for the Office of the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights (IACHR) as an input report for the public discussion for its next statement on the matter.

Because the heart of our work on the matter has been problematizing public policies and legislative proposals and their implications for human rights, we concentrated on answering the consultation questions related to public policies. We hope that this contribution can help unpack a very complex problem tied to different social and political realities globally.

- What role should legislation and regulation play? How can regulatory responses be developed in a way that respects fundamental rights such as right to privacy, freedom of expression and right of access to information?

For Al Sur, it seems necessary that legislatures avoid regulatory responses that impose obligations and sanctions on intermediaries for users' content. For example, the bill that has so far been approved only by the Brazilian Senate is a typical example of legislation that should be rejected.

The main reason is that these types of initiatives seek to solve one problem but create a much bigger one: they give already-powerful private companies incentives to exert excessive control over what citizens express through their services, with the likely effect of overreach, whereby more speech than necessary is censored to avoid the liabilities that such regulations impose. The creation of criminal-law categories to deal with disinformation should be rejected for similar reasons: it is a disproportionate response, difficult to apply, and would not effectively combat the phenomenon.

Alternatively, little attention has been paid to the responsibility and potential liability of other actors: political parties, public officials, journalists, or health care practitioners in the case of COVID-19 disinformation campaigns. All these actors have specific duties regarding the information and ideas they sponsor or share. Although rare, there are initiatives and regulatory proposals that may help mitigate the impact of disinformation and that are currently underrepresented in these debates: Chile, for instance, debated a bill a couple of years ago to suspend and demote public officials who engage in disinformation campaigns. In Argentina, some civil society organizations have looked to campaign financing as a means to address both corruption and disinformation.

Other regulatory areas currently underrepresented in these debates include data protection, approached not from the standpoint of companies but from that of permissible uses. All these approaches focus on public officials and political candidates rather than Internet companies, steering the focus away from the messengers and onto the authors and those who actually have public duties towards their constituencies. In fact, international human rights jurisprudence and principles have long held that public officials have a duty to provide truthful and accurate information on public affairs; the judiciary has long established a right to truth in the aftermath of dictatorship and regarding serious human rights violations (particularly in the Latin American context), as well as concrete and specific restrictions on the freedom of expression of public officials and politicians (e.g., a duty not to stigmatize journalists or media companies) because of the role they play in a democratic society.

Besides regulatory approaches that have been postponed or underrepresented in these debates, data protection laws and advertising rules have also been neglected. These are long-standing fields where regulation of speech is common and normalized across different legal cultures, as are consumer protections of various sorts. While content-oriented solutions may be perceived as quick and easy fixes, these less explored approaches place the spotlight back on those who have duties towards the democracies we are concerned about.

- What role should policy play at the national, regional, and international level? How can these be harmonised?

Inter-American human rights standards have insisted on the need for intermediaries not to be held liable for the content produced by their users, given the incentives for "private censorship" that such rules would generate. Simultaneously, a recent discussion process known as the Santa Clara Principles, launched in February 2018 with the participation of experts, NGOs, and digital rights defenders, established minimum criteria that companies should respect when moderating users' content. These include the obligation to proactively publish quantitative information on the amount of content removed, the duty to notify users of the reasons for the decision, and the need to establish internal "appeal" procedures so that these decisions can be reviewed. These general principles appear to be gradually being implemented by the platforms.

On the other hand, the preference for official speech that platforms have shown in recent months also presents problems and challenges. In the case of COVID-19, these official discourses are not always aligned with scientific discourses (as seen in many countries in the region). Moreover, official speeches do not have an epistemic status superior to other discourses and — especially when decisions are made with imperfect knowledge, as in the pandemic — they must be subjected to citizen scrutiny. 

In this sense, we need to insist that inter-American human rights standards impose special responsibilities on public officials to verify their speech and to ensure that it does not violate human rights, promote discrimination, etc. But it is equally important to emphasize that private companies that concentrate a large part of the flow of information online should not be forced to act as "enforcers" of these standards, nor should they voluntarily assume that role. In this sense, it seems to us that platforms' recent attempts to provide context for, or directly censor, content published by the highest public officials of various countries are problematic from this point of view: they imply assuming a role that does not correspond to the platforms and that is also difficult to administer. Indeed, the erratic policies of recent weeks and their case-by-case application to Presidents Trump and Bolsonaro are examples of the kinds of problems this line of action faces. It is not entirely clear, in that sense, why some content is flagged or contextualized and other content is not. In part, the answer to this challenge has to do with the clarity and transparency of content moderation policies and their enforcement mechanisms. It is not only important that internal moderation policies are clear, but also that their application is consistent across different countries.

In any case, policies that add more information to the public debate are unproblematic for freedom of expression standards. In this regard, the IACHR has said that the first response to the "abuses" that can be committed through freedom of expression must be the right to rectification or reply. Following the same logic, moderation actions by platforms that give more context to problematic information could be a proportionate response to certain types of content. But for this response to be fully consistent with that principle, we understand that platforms should adopt additional measures that limit their own moderation power and guide its exercise: moderation actions should not limit, through algorithms, the reach of certain content; automated moderation mechanisms and criteria should be made transparent and periodically reviewed, to prevent them from capturing more speech than strictly necessary; and moderation policies should be implemented with restrictive rather than broad criteria.

Alasdair Stuart

Hi - sorry to join this conversation at the very end; I'm just back at work after some time off with Covid! Thanks to UNDP and UNESCO for convening, and great to read all the contributions so far. A few thoughts on this from BBC Media Action:

5. What is the role of journalism? How can journalists, broadcasters and news editors be better equipped and supported to address this issue?

There has been a lot of focus on fact-checking (and to a lesser extent source-checking) approaches in recent years, which we think is important but inherently reactive. We need to support journalists and broadcasters more holistically so they are able to compete for attention and engagement against disinformation, e.g.:

  • Media practitioners/organisations understand the audiences they serve and are able to produce content that is relevant to their lives (driving interest and engagement in factual content)
  • Media practitioners/organisations reach audiences with accurate, trusted and engaging information on issues of importance to their lives - we need to support journalists to think creatively about how to package accurate information in a way that can compete for attention online
  • Media practitioners/organisations have the skills to hold those responsible for spreading mis/disinformation (e.g. governments, political figures or celebrities) to account where appropriate
  • Media practitioners/organisations have the knowledge and skills to use powerful narratives and emotionally engaging content to influence audiences’ attitudes, beliefs, norms and behaviour in relation to information consumption, production and sharing [this is a wider objective that fits under Q7, but we would argue we need to equip more journalists and broadcasters to think about how to do this if we are going to overcome this issue].

With regard to fact-checking, we need to recognise that this is a specialised skill and can be labour-intensive. We need to assess the size, resources, business focus and needs of different media partners when deciding what is realistic, and we should support partnerships of fact-checkers in any particular context (to avoid duplication of effort), linking local initiatives to regional, national or international expertise (e.g. expert fact-checking organisations) wherever possible.

In terms of fact checking approaches, we’d tend to encourage:

  • Consideration of the source of mis/disinformation as well as the information itself, when assessing its truthfulness and intent
  • Careful assessment of the extent to which mis/disinformation has spread and application of ‘strategic silence’ to avoid amplifying it further if it is assessed as not having gone viral or posing a significant threat
  • A focus on providing the correct information, rather than simply saying something is inaccurate/false, as well as providing some information on the fact-checking approach used
  • Use of short engaging formats, such as images, graphics and videos
  • Use of trusted communicators to communicate fact-checked information (these will vary depending on context, target audience and type of information).

Beyond specific journalism skills, a lot of this links back to the fundamental need to strengthen the media environment to increase provision of and access to trusted, accurate and engaging information at scale. For example, we need to support:

  • Independent/public interest media organisations to have better management and more viable business models that better adapt to audience demands and market conditions
  • More enabling legal and regulatory frameworks for independent media
  • Dynamic networks of media partners who can share knowledge and resources and advocate for media freedom with civil society and other allies. 

We also need to start preparing now for the impact that AI-generated ‘synthetic media’ will have (sorry, this perhaps should sit more in the problems chat).

7. How can we build greater resilience to disinformation, especially among vulnerable or marginalised groups, such as through greater media and information literacy?

We believe that media & communication can be used to build greater resilience to disinformation by:

  • Providing greater access to and encouraging consumption of accurate, trusted and engaging information on issues of importance
  • Exposing people to a diverse range of views and opinions and creating space for inclusive public debate (helping to reduce echo chambers)
  • Providing audiences with media and digital literacy skills that enable them to consume, produce and share information in a more safe and responsible way
  • Helping people to understand the harm that the sharing and spread of mis/disinformation can have
  • Educating people on how to constructively challenge friends or family who are sharing mis/disinformation, and have the confidence to do so
  • Influencing attitudes and beliefs that discourage the consumption, production and sharing of mis/disinformation
  • Influencing attitudes, beliefs and behaviours in relation to safe and responsible information consumption, production and sharing.

In relation to media and digital literacy, there are many areas that can be covered, but some key areas we have been focussing on to date include using our programmes to influence understanding, skills and behaviour, such as:

  • Making judgements on whether information is truthful/accurate (e.g. through analysis and comparison of sources)
  • Distinguishing news and high-quality content from other kinds of content
  • Developing an awareness of the agendas of different sources of information
  • Developing emotional scepticism towards information, especially when it provokes an emotional response
  • Pausing and reflecting before deciding whether to share information

While a focus on media/digital literacy holds some promise in terms of a more preventative approach, we think there should also be efforts to try to influence the culture around information consumption, production and sharing, by influencing people’s attitudes and beliefs and societal norms in relation to this. What we need to influence will vary by country, culture, and context, but could include:

  • Beliefs and norms about what safe and responsible information consumption, production and sharing looks like
  • The acceptability of sharing or acting on unconfirmed or false information (e.g. making it embarrassing and socially unacceptable)
  • Attitudes, beliefs and techniques around challenging friends and family who share unconfirmed or false information
  • Social norms about the importance of having good media literacy (in addition to improving the literacy itself) and applying these skills
  • The value placed on quality news and journalism.  

João Brant

I'd like to offer some contributions on how to mitigate disinformation in messaging systems, which is a big issue in Global South countries such as Brazil, India, and Nigeria.

The context
Messaging services are both a means of interpersonal communication and a means of viral communication. In interpersonal communication (between individuals or in groups), when there is encryption, the privacy of conversations is guaranteed – which is essential for private dialogue.

On the other hand, if we take the example of WhatsApp, each user is allowed to participate in up to 10,000 groups of up to 256 members each (joinable even through open links) and an unlimited number of broadcast lists of up to 256 members each. This arrangement allows messages to go viral, reaching millions of people in minutes, without identifying the original sender.

WhatsApp claims that messages that go viral are only 0.5% of the total. But the central issue is not the relative number but the absolute one. The company reports that it serves 80 to 100 billion messages globally; 0.5% of that is 400 to 500 million messages.
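
As a quick sanity check of these figures, here is a minimal back-of-the-envelope sketch using only the numbers quoted above:

```python
# Back-of-the-envelope check of the viral-message volume cited above.
TOTAL_MESSAGES = (80e9, 100e9)  # 80-100 billion messages served globally
VIRAL_SHARE = 0.005             # WhatsApp's reported 0.5% viral share

low, high = (int(n * VIRAL_SHARE) for n in TOTAL_MESSAGES)
print(f"{low:,} to {high:,} viral messages")
# -> 400,000,000 to 500,000,000 viral messages
```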

The absolute number is therefore high and relevant, regardless of being small relative to the total number of messages. As a rough comparison, it is as if, in the 1990s, one compared the daily number of TV news stories with the daily number of phone calls: the former was much smaller, but had far more impact on democracy because of its mass character.

Analysis
Viral messages multiply in these applications in an opaque environment. There is no public panel where messages are posted; they are visible only to recipients. You may be the victim of a smear campaign that reaches half the population and not know it. Or you may know, but have no way to defend yourself. Perspectives circulate without contradiction. False news circulates without opportunity for questioning.

It is worth saying here that encryption, sometimes pointed to as the problem, should not be blamed. Unencrypted applications are also opaque. And encryption is essential to guarantee confidentiality in the interpersonal aspect of these services.

Viral messages also take advantage of the fact that anonymity is the general rule on WhatsApp. The long history of rumours in politics has always relied on anonymity, but rumours used to circulate by word of mouth or in apocryphal pamphlets. On the internet, they have a huge impact.

Anonymity in mass communication is important as an exception, to protect vulnerable people and groups, but when it becomes the general rule it creates two problems. First, the circulation of authorless content prevents, in practice, any moral and legal accountability for it. This creates a mechanism that encourages the distribution of deceptive content or slander for political purposes.

The second problem with anonymity as a rule is that it paves the way for the pernicious exploitation of weaknesses in human psychology. Unattributed content is more easily passed on, because it does not depend on the author's credibility and does not make those who pass it on morally liable.

In addition, even though its main functionality is not driven by algorithms and artificial intelligence, the application's own architecture and features induce certain behaviours. As anthropologist Letícia Cesarino points out, the atmosphere of closed groups is marked by an intensive pace, trust based on personal relationships, a fusion of personal, social and professional contexts, and isolation from the adversary.

The isolation is reinforced by the logic of live, synchronous communication in a private environment. In such environments, it becomes emotionally costly to participate in debates with a strong adversary, and the gradual trend is for groups to become more homogeneous.

This problem affects democracy due to the place that messaging apps have gained in the opinion formation process.

There is no doubt that messaging apps make huge social contributions; that is not in question. The problem is that the opaque and mostly anonymous mass communication model that characterises them amounts to the burial of public debate: it prevents public scrutiny of ideas and hinders the visibility of contradictory perspectives.

Without transparency, the reliability of information is directly weakened. In arenas of discussion visible to the public and with moral and legal responsibility of the interlocutors, lies are less likely to prosper. Perhaps that is why the problem of fake news was not so relevant until 2014.

It is a pillar of democracy that citizens be well informed for decision-making. But being well informed depends on access to plural, diverse and reliable information. In this sense, freedom of expression and access to information are inseparable.

The problem is that these democratic values are not taken into account by applications. It is not enough to break the monopolies or try to inhibit abusive behavior by users if the service architecture itself induces these behaviors.

The trap lies in the fact that the interpersonal character of some messages attracts to the whole service, including viral messages, a data-processing model specific to private communications, based on privacy and confidentiality. But public debate in democracies needs light.

Solutions

Technical solutions could mitigate not only the problem of liability, by identifying message senders, but also the problem of opacity. In order to protect interpersonal communication and shed light on viral communication, messaging services could offer the user the possibility of separating functions.

This could be done, for example, by offering the message creator the choice of whether or not the message can be forwarded. It would be analogous to what Facebook does when it offers the user the choice between a post being restricted to friends or being public and shareable.

If applications made viralization conditional on the sender's authorization, they could fully protect interpersonal messages and at the same time shed light on viral ones. There would be two distinct paradigms within the same service, with the applicable paradigm stemming from the user's choice.
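
To illustrate the idea, here is a minimal sketch of such an opt-in viralization model; the types and function are hypothetical, not an actual messaging-service API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Message:
    body: str
    sender_id: str
    forwardable: bool  # chosen by the creator, like Facebook's friends/public toggle

def forward(msg: Message) -> Message:
    """Allow forwarding only if the creator opted in, keeping attribution."""
    if not msg.forwardable:
        raise PermissionError("Sender marked this message as interpersonal-only.")
    # Forwarded copies retain the original sender, addressing the anonymity problem.
    return Message(body=msg.body, sender_id=msg.sender_id, forwardable=True)
```

Under this model, messages the sender keeps private never enter the viral paradigm, while opted-in messages carry attribution and can be subject to public scrutiny.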

Measures like this, although simple, affect the design of the service. Thus, unless rules are defined by law, companies will not tamper with their products, for fear of losing users to competitors.

Sarina Phu

Hi,

This is Sarina Phu from the Global Network Initiative (GNI). First of all, thank you to UNESCO and UNDP for putting this discussion together and for all of the great insight that others have already shared. GNI is a unique, multistakeholder initiative that brings leading companies, civil society organizations, academics, and investors together to help protect freedom of expression and privacy in the tech sector. Our Principles and Implementation Guidelines align with the UN Guiding Principles on Business and Human Rights (UNGPs) and are informed by over a decade of information sharing among our expert members, including through our unique company assessment process. Those assessments allow independent assessors (whom we accredit) to produce reports based on access to confidential company information, including relevant systems and policies, as well as detailed case studies. Those reports are then used by our multistakeholder board to determine whether member companies are implementing our Principles and Implementation Guidelines in good faith, with improvement over time.

In addition to assessment, GNI also facilitates shared learning among our members and coordinates policy advocacy in support of freedom of expression and privacy. We address disinformation in all aspects of our work and have provided further information in response to questions 1, 3 and 6 below. We look forward to continuing to engage with UNESCO, UNDP and others on this important work.

Over ten years as an organization, GNI has found that shared respect for human rights provides a foundation upon which to build consensus among diverse and varied stakeholders. As the only multi-stakeholder organization focused on freedom of expression and privacy in the ICT sector, GNI has used this approach to build trust and cooperation among diverse stakeholders, produce important research on topics of mutual interest, and advocate collectively for freedom of expression and privacy in countries across the world.

  1. What role should legislation and regulation play? How can regulatory responses be developed in a way that respects fundamental rights such as right to privacy, freedom of expression and right of access to information?

GNI recently produced a policy brief focused on “content regulation.” We examined 21 different government efforts to address harmful content, including disinformation, in over a dozen different countries around the world. In the brief, we analyze these initiatives through the lens of human rights principles related to freedom of expression and privacy and provide recommendations for policymakers on formulating content regulation efforts that also protect these rights. 

As the brief makes clear, it is essential that any potential legislation and regulation distinguish between ‘illegal’ and ‘legal but harmful’ content/activity, and not obligate online platforms to remove legal content or otherwise violate free expression rights. This is particularly important in the context of disinformation, where it can be challenging to identify a person’s intent. 

If designed appropriately, legislation combatting disinformation should facilitate third-party notification of content believed to be illegal or in violation of platforms’ terms of service/community standards, provide clear guidance and liability protection to platforms so that they can address such content appropriately and flexibly, and facilitate meaningful transparency, oversight, and accountability to help ensure such decisions are made consistently, appropriately, and fairly.

On the topic of appropriate transparency and accountability measures, legislation should make clear what information must be made public by which types of online platforms, recognizing that different information may be more or less relevant vis-a-vis different services. In line with these guidelines, covered online platforms could be required to: (i) make clear what processes and tools they rely on to identify illegal content/activity; (ii) make their terms of service and procedures for identifying possibly infringing content/activity, as well as the reasons for changes thereto, clear and publicly available; and (iii) periodically report on the number of notices of illegal and otherwise improper content/activity received, as well as who those come from, what law or term they were alleged to violate, and what action, if any, was taken.
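
Purely as an illustration of how the reporting obligations in points (i)-(iii) might be captured as structured data, here is a hypothetical sketch; none of the field names are drawn from an actual regulation:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record for one periodic transparency-report entry,
# loosely mirroring points (i)-(iii) above. Field names are illustrative.
@dataclass
class TransparencyEntry:
    period: str                  # reporting window, e.g. "2020-Q3"
    detection_processes: str     # (i) tools/processes used to identify illegal content
    terms_url: str               # (ii) public link to terms of service and change log
    notices_received: int        # (iii) volume of notices of allegedly improper content
    notifier_type: str           # (iii) who notices came from: user, court, regulator
    rule_alleged: str            # (iii) law or term of service allegedly violated
    action_taken: Optional[str]  # (iii) e.g. "removed", "labelled", or None
```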

Transparency and clarity around the use of algorithms for detecting and assessing possibly infringing content, including algorithmic impact assessments, can help ensure that the use of algorithms is rights-respecting and non-discriminatory. As more platforms turn to automated tools for content moderation, the risk of erroneous removal increases, especially for those categories of content that are particularly context-dependent.

Finally, any sanctions for systematic non-compliance with requirements should be clear, specific, and predictable. In line with international human rights principles, any legislation regulating disinformation should avoid: requiring intermediaries to adjudicate the legality of content or behavior; imposing arbitrary time periods for responding to notifications; setting rigid compliance targets; or requiring reporting of allegedly illegal content to authorities. Where online platforms discover content that they believe may be illegal, they should nevertheless be encouraged to report it.
 

  3. How can internet companies be effectively governed or regulated to ensure they act in the public interest?

GNI has found that stakeholders should be guided by mutual respect, a drive to include diverse perspectives and voices, and an openness to creative or novel approaches to governance.

Mutual respect is generated by, and in turn engenders, a sense of trust, which enables stakeholders to engage with each other openly and constructively as they discuss challenging or controversial topics. When this trust exists, stakeholders are more willing to candidly discuss challenges and collaborate on finding solutions.

For cross-border and universal issues, such as those in the digital realm, a truly diverse group makes fewer assumptions, increases the robustness of policy positions and advocacy strategies, and anticipates and minimizes otherwise unforeseen harms or challenges. Diversity and inclusion are key ingredients in meaningful engagement of participants, a core component of effective multi-stakeholder initiatives. Taking active steps to ensure that all participants have the opportunities, resources, and tools available to help them succeed within the cooperative initiative is essential to this approach.

Last, an important principle of cooperation is openness to creative or novel approaches to governance and advocacy. GNI and its members have found value in adopting new or unconventional ways of approaching governance challenges. For example, GNI and its partners have developed frameworks and tools to quantify the impact of State-ordered network shutdowns around the world, creating data with which advocates can appeal to the economic interests of States while still aiming to achieve human rights goals. 

 

6. Which stakeholders need to be engaged and which strategic partnerships should be considered?

As GNI has learned from over a decade of work facilitating non-company collaboration with and oversight of company policies and practices, when designed appropriately, multistakeholder oversight can help avoid and alleviate potential violations of users’ rights. Such arrangements can help build trust, foster collaboration, and facilitate transparency in circumstances where legitimate data protection, privacy, competition or other concerns may limit the degree to which information can be made public. Stakeholders should include companies, civil society, investors, and academics to best engage in advocacy at international, regional, and national levels.

GNI members extend and leverage insights into the legal and policy environment to navigate existing laws and policies and to advocate for changes to them. GNI also fosters an environment of learning and information sharing through which members from different constituencies familiarize themselves with, and develop a deep understanding of, other members. Last, GNI holds companies accountable to the GNI Principles through an internal biennial assessment mechanism, the only one of its kind in the ICT field. Companies are evaluated to determine if they are making good-faith efforts to implement the GNI Principles with improvement over time. Assessments and the resulting written reports are carried out by accredited and trained third parties, and GNI Board members review the results and make recommendations accordingly. A summary of key takeaways from each assessment cycle is made public on GNI’s website. Though it is difficult to track causation, GNI member companies consistently rank as the most rights-respecting companies in the Ranking Digital Rights evaluations of companies’ adherence to human rights principles.