Welcome to engagement room 2!

While there is much focus on how to stem disinformation flows online and offline, there may also be pre-existing factors which enable disinformation to spread more easily in different contexts.  We want to identify those factors, social, political, and otherwise, that determine just what kind of foothold disinformation can get in a country, and understand how addressing those factors might build resilience and reduce the impact of disinformation. We also want to know how we can effectively monitor and anticipate waves of disinformation in order to address it preemptively.

In this room, we would like to explore the contextual enablers that we should be paying attention to, and hear about how you have effectively monitored disinformation.

As a reminder, disinformation is “false, manipulated or misleading content, created and spread unintentionally or intentionally, and which can cause potential harm to peace, human rights and sustainable development”.
 


Please answer any of the below questions (including the question numbers in your response).  Feel free to introduce yourself if you wish. We look forward to hearing from you.  

  1. In your country/community, what are the primary sources and motivations driving the creation and sharing of disinformation? Who are the “super spreaders” of disinformation, those who have sufficient influence and following to amplify on and offline?
     
  2. What are considered trusted news sources among different groups? What are the criteria for trusting these sources?
     
  3. What are the factors that make the public vulnerable to disinformation? What are the key normative, technology, governance and social cohesion enablers of disinformation?
     
  4. To what extent do Internet companies’ algorithms actively push disinformation, and what is the role of closed communications networks in amplifying the problem?
     
  5. What kind of monitoring can provide effective early warning of risks of potentially harmful disinformation?
     
  6. What examples have you seen of disinformation mapping feeding effectively into national or local policy decisions, including institutional codes of conduct, and/or programmes?
     
  7. What roles do civil society organisations, government, media and Internet companies play in monitoring drivers and enablers of disinformation as a contribution towards social preparedness?

 


We commit to protect the identities of those who require it. To comment anonymously, please select "Comment anonymously" before you submit your contribution. Alternatively, send your contribution by email to niamh.hanafin@undp.org requesting that you remain anonymous.

Comments (42)

Rachel Pollack Moderator

Week Two Summary

Dear All,

Thank you for your thought-provoking contributions to the discussion in Room 2.

For those just joining, and as the moderation of Room 2 is skillfully taken up by Ruth, here are a few highlights from last week and over the weekend.

Ruth kicked off the conversation by sharing a study by MIT that found that false political news gets shared up to 3x as quickly as factual news. She pointed out that this reveals information about both the content itself and the way it targets specific audiences.

The sources and motivations driving the creation of disinformation vary significantly depending on the country/community, topic and medium, pointed out Dina. She stated that CSOs, government, media and internet companies have an important role to play in monitoring and sharing accurate and informed knowledge, and she warned that banning content has proven ineffective.

We received an insightful perspective on the situation in Ukraine from Olena. Civil society plays a crucial role in monitoring disinformation and providing training to journalists and others, Olena pointed out. She underlined the need for greater media and information literacy, while also pointing to challenges related to wider social and political divisions.

Jim reflected on the importance of community in countering disinformation, starting with challenging the false information spread by peers. It is also at the community level that trust in sources and greater transparency in political decision-making can happen, he noted. Jim raised a series of questions, including how we can "create a culture of 'stop, think, check, before you respond'".

A common thread across these contributions was the need for sharing reliable information and for fostering greater media and information literacy.

With great thanks again to all for these insightful contributions, I wish you an engaging discussion ahead! Please do feel free to comment again and to invite your colleagues and partners to join the discussion as well.

Every contribution—no matter the length, no matter if it addresses one question or all—will be valuable for shaping our collective work on countering disinformation.

Rachel

--

Week One Summary (Room 2) by Moderator Louise Shaxson.

Louise Shaxson Moderator

Hello, and a very warm welcome to this UNDP-UNESCO consultation on the globally important topic of disinformation.  I'm Louise Shaxson, Director of the Digital Societies Programme at ODI, a London-based think tank, and I'm delighted to be moderating the consultation with Simon Finley from UNDP for this week.  Next week we'll hand over the moderating job to other colleagues, but we will still remain very engaged.  As a reminder, the consultation runs for three weeks, so there's plenty of time to get involved in some fascinating and in-depth discussions with a truly worldwide group of people!  

When you comment, please let us know which particular question or questions you are responding to; it helps other people anchor themselves in the conversation. 

And now it's over to you - how would you answer the questions set out at the top of this page?

We look forward very much to hearing from you all,

Louise & Simon

 

Yves

Hello everyone,

I am happy to be part of this very interesting group, whose initiative I applaud. I work in the eastern part of the Democratic Republic of Congo as media coordinator for the international organisation Search for Common Ground. In recent months we have faced a phenomenal and unprecedented surge of fake news and other manipulation of opinion on social media during the Ebola epidemic and the Covid-19 pandemic, all in an extremely tense security context.

At the end of 2018, while the country was in the grip of election fever, a tenth Ebola epidemic was declared in Beni, North Kivu, in the east of the DRC, an area considered an opposition stronghold. Beni and its surroundings were excluded from the electoral process to prevent the epidemic from spreading rapidly to the rest of the province, which sparked a revolt by opposition parties and pressure groups such as youth and women's movements and other civil society structures. Candidates standing in the cancelled elections were among the first to launch hate speech and to encourage the circulation of fake news on social media and, above all, through traditional media such as radio and television.

Many inflammatory messages were relayed against the Ebola response medical teams and the humanitarian agencies, who were accused of having been bought by the ruling power to inoculate inhabitants with the Ebola virus so that this part of the country, hostile to the government, could not take part in the elections. Manipulated images, edited audio and false testimonies circulated throughout the early period of the epidemic, provoking attacks on treatment centres that caused the deaths of several health workers in the territories of Beni and Lubero, where several armed rebel groups remain active.

With our teams, we worked in a rather particular context, with journalists who were themselves already convinced by the rumours and political manipulation. We managed to organise some quite important activities, including mapping the influencer groups on Facebook and WhatsApp, which led to the identification of the principal group administrators, who were then trained in the basics of fact-checking and rumour management. Radio journalists, for their part, were brought together in a coalition to form a special newsroom that launched a special bulletin broadcast simultaneously by all the radio stations.
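To give a sense of the mapping step, here is a minimal sketch of the kind of tally involved (the file name and column names are hypothetical; in practice the data came from manual observation of the Facebook and WhatsApp groups). It ranks group administrators by the audience they control, which is how candidates for the fact-checking training were prioritised.

```python
import csv
from collections import defaultdict

# Hypothetical export of the group mapping: one row per group,
# e.g. group_name,admin,member_count (illustrative sketch only).
reach = defaultdict(int)   # total members reachable per administrator
groups = defaultdict(int)  # number of groups each administrator runs

with open("groupes_reseaux_sociaux.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        admin = row["admin"].strip()
        reach[admin] += int(row["member_count"])
        groups[admin] += 1

# Administrators with the largest combined reach are the priority
# candidates for fact-checking and rumour-management training.
for admin in sorted(reach, key=reach.get, reverse=True)[:10]:
    print(f"{admin}: {groups[admin]} groups, ~{reach[admin]} members")
```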

The battle against fake news and hate speech has not been won 100%, but the fact that members of the various social media groups now debate the veracity of certain information, and ask questions about it, is already a positive sign.

Currently the great challenge remains the rumours around Covid-19 in this volatile security zone, where inhabitants believe that wearing face coverings and masks makes it easier for foreign rebel groups to enter their villages, since the disease is not supposed to exist in Africa.

Ssanyu Rebecca

Qn1: In my country, Uganda, the primary sources of disinformation are politicians and some people closely connected to the powers that be, in both political and technocratic circles. Often, the disinformation arises from rumours about issues being discussed at high levels of decision making that have not been concluded or sanctioned for dissemination. Insiders leak the half-baked information, and as it does its rounds on various social media platforms, particularly WhatsApp and Facebook, it becomes distorted and sensationalised. By implication, the motivations driving the sharing of disinformation include the apparent fun that comes from sensationalism and the desire to tarnish the names of key personalities such as opposition politicians (by those in government) or high-level technocrats (by their workplace rivals), among others. It would be difficult to pinpoint the so-called "super spreaders" of disinformation. It is important to note, however, that unemployment is a major problem: when many people, especially the youth, lack employment, they have ample time to engage in unhelpful communication, including the spread of disinformation both on- and offline. The extent to which disinformation becomes influential depends on the subject matter. Political propaganda against opposition politicians tends to negatively influence public opinion and often provokes hostile sentiments and responses.

Louise Shaxson Moderator

Hi Rebecca, thanks very much for your comment.  I'm interested to know what the role of the mainstream media is in all of this: do they provide any checks and balances on the spread of disinformation?  

Ssanyu Rebecca

Hi Louise. The country has laws and regulations for the media, including electronic media, but these only cover traditional electronic media and are not linked in any way to social media. The only instances where social media content might be regulated under the policy and legal framework are when the state perceives it as offensive, especially towards high-level political personalities.

Simon Alexis Finley Moderator

Very interesting. As lockdowns continue across the world, we see over one billion students spending more and more time online, unable to take part in the face-to-face activities that bring important benefits for mental health and well-being. It would be interesting to hear if anyone has ideas on risk mitigation for this enormous and potentially at-risk demographic.

 

Edem Agbe

I agree with your posting. Recently in Ghana, during the compilation of the new voters' register for the upcoming election in December, social media (particularly WhatsApp and Facebook) was awash with various forms of disinformation created by politicians to stir public displeasure with the government and the electoral commission. Disinformation is thus becoming a threat to our democracy and participatory governance.

Ben Schonveld

Simon, this is hugely important... prior research suggests that online activity on its own may not lead to real-world impact. But in today's very particular and unresearched external context (people stuck at home while jobs and more disappear), the outcomes are unknown. Similarly, I think it's important that we recognise that both the message and the means of disinformation are understudied. The model is not that the ideology actually has a point, but rather that the message seeks to confuse, leaving the individual baffled as to what truth looks like and hence disempowered. We have no real research about how that impacts people.

Simon Alexis Finley Moderator

Ben, good point. From the Preventing Violent Extremism work we know that the link between online and offline is often cited, but with a non-existent evidence base. Further research on how disinformation persuades, or doesn't, in the current environment would be enlightening.

 

Edem Agbe

Disinformation has become quite a big issue in Ghana. From my experience in the social policy and development space, there are three main drivers of disinformation: (i) Politics - politicians and their surrogates intentionally create fake news to discredit their opponents or government interventions; citizens then believe some of this fake news, and it affects their confidence in government. (ii) Bloggers - many bloggers depend on followers to earn money and believe fake and sensational news attracts followers, so they create fake news to gain followers and are paid based on the number of followers they have. (iii) Social media - posting and sharing information on social media is not much regulated, and while the Ministry of Communication and the Ministry of Information have warned the public about fake news, they are unable to effectively regulate the use of social media and the production of social media content.

Louise Shaxson Moderator

Thanks Edem - that's a great contribution. Both you and Rebecca have highlighted the issue of sensationalism. What role do newspapers and TV stations play in Ghana in countering some of this sensationalism? Do they ever call it out and give more evidence-based reports, or do they just report it (and by doing so, give it more airtime)?

Edem Agbe

Louise, sections of the media in Ghana are becoming critical, but the bigger challenge is that politicians own some of the media houses, which must therefore follow the dictates of their owners. Civil society organisations are making efforts to combat the spread of fake news. Recently, the Media Foundation for West Africa (MfWA) launched a project dedicated to combating fake news and disinformation in Ghana during COVID-19 and the elections. They have developed a fact-checking system that checks the accuracy of statements made by politicians on political podiums and of information on COVID-19. The organisation then disseminates the accurate information to the public in local Ghanaian languages using radio and online platforms.

The challenge is that disinformation spreads faster than accurate information.

Melody Azinim

I agree with Edem's submission, and I would add that in Ghana one of the drivers of disinformation is that some media houses want to be seen as the first to report breaking news, and because of that they do not take the time to verify information before sharing it. Over the last week, there was a media report on the shooting of a gentleman in one of the regions in the northern part of the country. I decided to follow up with one of the institutions mentioned in the report, only to realise that the report was not accurate; unfortunately the news had already spread widely. To mitigate this issue, I believe that such media houses and individuals need to be called out and made to retract such stories using the same platforms.

Rachel Pollack Moderator

Thank you, Melody Azinim. Very interesting example.

Do you think increased capacity building for journalists to enable them to follow professional standards would help address this phenomenon?

Daniel Barraez

Hello, everyone.
Disinformation and its implications are a crucial issue for all societies, and their discussion should involve as many stakeholders as possible. This consultation is a concrete way to make the debate open and broad. Congratulations to UNDP/UNESCO for this initiative!

I am the Human Development, Multidimensional Progress, and SDG Center Manager in UNDP-Venezuela. 

Regarding Qn1, the primary source of disinformation in Venezuela is political confrontation. In our case, political hyperpolarization affects almost every topic of social discussion and deepens the deterioration of social cohesion. The traditional national media are more prudent about information pollution, but social networks are the preferred field for disinformation disseminators. Hyperpartisan behavior correlates strongly with disinformation on social networks.

It is hard to imagine the volume of misinformation decreasing in this context, but I believe one way to address information pollution is to identify and support trusted sources.

Ema M Fong

Aloha Daniel,

Thank you for your post and the analysis of what is happening in Venezuela. I am a moderator from Room One (1). Could you please share more about the key stakeholders you spoke of, the trusted sources? Who are they, and have they earned the trust of the people, the government, or both? What roles do they play, and what kind of power do they have: social capital, informational power, expert power, political power, etc.? Could you also identify other key stakeholders and whether they are allies, drivers of violence, or shadows? Can you share what role civil society organizations, government, media, and internet companies play in monitoring drivers and enablers of disinformation as a contribution toward social preparedness?

Thank you and warmest aloha, Ema

Daniel Barraez

Hi Ema,

Thank you for your interest in the Venezuela case. Regarding trusted sources, there are already the fact-checking units mentioned by poynter.org (https://www.poynter.org/fact-checking/2019/against-all-odds-fact-checki…): Espaja.com, Cotejo, Efecto Cocuyo, Observatorio Venezolano de Fake News, Cazadores de Fake News, and Observatorio Venezolano de Desinformación (on Twitter). These units range from specialised fact-checking websites to news portals. It is unclear whether fact-checking has increased its credibility, but this is a significant advance over the recent past.

In hyperpolarized contexts like Venezuela, there is a tendency to confuse reliable sources of information with neutrality in the political conflict. Neutrality and trustworthiness are two different concepts that do not necessarily go together. It is essential to promote the idea that it is possible to defend a political position without resorting to disinformation. That may seem obvious, but it is easy to lose sight of these basic ideas in the middle of hyperpolarization.

 

The media, governmental and non-governmental, are very polarized and frequently contribute to polarization. Young professionals and academia are potential allies in addressing disinformation.

Simon Alexis Finley Moderator

Thanks Daniel! The political motivations and drivers for disinformation are extremely important and can sometimes get lost as people focus on how social media spreads polarizing information. If the political climate is a driver, does anyone have positive experiences in addressing this in their own context?

 

Ruth Stewart

Hi Everyone, I'm wondering if you've tapped into the fact-checking community? They have so much experience in this area... I recommend you approach them if you can.

Simon Alexis Finley Moderator

Yes! They are doing some great work once the disinformation is out there. It would be great to hear from some of them in this consultation.

 

Daniel Barraez

Thanks, Simon, for your comments.

We are also working on disinformation about gender issues in Panama during the pandemic; it is a work in progress. As Simon has pointed out, political motivations are extremely important in disinformation, and Panama is no exception. Political issues take second place as a source of gender-related information pollution on social networks in this country. The clear first place goes to the confrontation between secular and progressive values. The clash over abortion, gender diversity, and gender equality can reach a high level of aggressiveness, even resorting to misinformation.

It is worth noting that on Twitter, bots play a significant role in spreading political disinformation. But in the disinformation around these contested values, we could not detect bots; people seem to have no problem publicly assuming their true identity to disqualify their adversaries (a toy sketch of the kind of bot heuristic involved follows below). Disqualifying people, rather than their arguments, seems to be a frequent vehicle of misinformation on hot social issues.
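For what it is worth, our bot detection relied on standard behavioural heuristics. A minimal sketch of the kind of rule involved is below; the thresholds and account data are purely illustrative, and real detection combines many more signals (timing patterns, content similarity, network structure).

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    tweets_per_day: float
    account_age_days: int
    followers: int
    following: int

def looks_like_bot(a: Account) -> bool:
    """Crude heuristic: hyperactive, young accounts that follow far more
    than they are followed. Thresholds are illustrative only."""
    hyperactive = a.tweets_per_day > 100
    young = a.account_age_days < 30
    skewed = a.following > 10 * max(a.followers, 1)
    return hyperactive and (young or skewed)

# Invented examples, not real accounts.
for acct in [Account("patria_2020", 250.0, 12, 2200, 15),
             Account("maria_p", 4.5, 900, 340, 310)]:
    print(acct.handle, "-> suspicious" if looks_like_bot(acct) else "-> ok")
```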

Regarding user communities, the spread of disinformation by users outside the main communities is significant. In general, communities are careful, but there are also a few communities prone to spreading disinformation. It would be interesting to engage the influencers of these disinformation-prone communities in order to improve information quality.

Niamh Hanafin Moderator

Hi Ruth, thanks for the suggestion, we are indeed in contact with IFCN and many of its members. Baybars Orsek, we'd love to hear from you and your community for this important perspective!

Daniel Barraez

Thanks for your suggestion. We have identified the main communities, and this is very informative for disinformation purposes. We found in this context that all user communities carry some degree of information pollution, but the "super spreaders" of disinformation almost always belong to the hyperpartisan communities. Hyperpolarization drives disinformation.

 

 

Ludwwin

1. In Colombia it comes mainly from political figures, organised strategies using fake accounts that push trends on Twitter, and organised strategies in WhatsApp groups that spread content at great speed.

2. Traditional or long-established media outlets.

3. Ignorance of information-verification methods and the lack of exemplary cases showing the implications of sharing disinformation.

6. The UNESCO document on the disinfodemic gathers several local examples of cases and initiatives: https://en.unesco.org/covid19/disinfodemic

Fact-checking organisations are fundamental in giving civil society and the media tools to combat disinformation operations.

Louise Shaxson Moderator

Week One Summary

Hi all

Thanks for all your thoughtful comments - this is developing into a very rich discussion.  For my own benefit I tried to summarise what I think has been said, and thought I would share it with you - please let me know whether I'm on the right track!

Where does it come from?  Several factors give rise to it: political partisanship came through quite strongly in Rebecca, Edem and Daniel's points.  Disinformation helps this partisanship turn into hyperpolarisation, as Ludwwin pointed out, when people actively strategise to spread disinformation at high speed.  But it's not just partisanship: the desire to achieve political influence is obviously important, but there's also a desire for sensationalism and excitement, particularly from people who are simply bored and who have time to 'stoke the rumour mill'.  Edem made the point that bloggers who rely on numbers of eyeballs for their revenue are more likely to write sensationalist content - we shouldn't forget that Facebook already knows that our brains are hardwired to be attracted to divisiveness and immediate experiences INSERT LINK.  And even as far back as 2009, we knew that social media was leading to 'an inability to empathise and a shaky sense of identity' (see https://www.theguardian.com/uk/2009/feb/24/social-networking-site-chang… - who remembers Bebo??).

What is disinformation trying to achieve?   Polarisation, disempowerment, confusion and discrediting others all came through from the contributions - all of which came through in Yves' writing about the situation in DRC.  If the aim is disempowerment and confusion, then it seems to me that what might be driving it is the combination of a 'shaky sense of identity' and an inability to empathise with others.  Obviously, how strong your sense of identity is isn't just about being online - there are long histories of questioning the identity of marginalised groups in order to disempower them.  But the way social media changes our ability to empathise does seem to be a particular facet of the online world, and I wonder whether there's a sort of flow: lack of ability to empathise --> shaky sense of identity --> feeling of disempowerment.  And boredom probably doesn't help with our sense of identity either.

Then I started looking for the factors that don't help slow the spread of disinfo (or actively encourage it to spread).  Politicians owning traditional media, existing tensions & conflicts, and the perception of being overlooked in decision making processes all stood out for me, and it was interesting that they all seemed to come from the offline space.  Which took me to Ben's point, and how he highlighted that this online/offline question hasn't been very well researched.

A lot for me to think about!  Do let me know what you think about my summary - it's just what I highlighted as I was reading the above.  And please bring your friends, colleagues, Twitter followers etc into the discussions - it would be great to hear from different parts of the world.  

Louise Shaxson Moderator

I'm not sure the Guardian link came through properly: it's an article entitled "Facebook and Bebo risk 'infantilising' the human mind" from 24th Feb 2009 on guardian.com, if anyone would like to look it up.  

Daniel Barraez

Hi Louise,
Thank you very much for the Guardian reference on social networks and for your summary of the discussion.

I want to share a comment about "the factors that don't help slow the spread of disinfo (or actively encourage it to spread)" that you pointed out in your summary.

In the gender disinformation we tracked in Panamá during the pandemic, we saw in almost all misinformation messages that the intention to harm takes the form of murder accusations, no matter the subject (gender or politics).

I believe this way of harming is related to two factors:
- the anonymity of users who create and disseminate disinformation;
- the fact that a murder accusation (or another way of harming) has no consequences for the spreader. In many countries, false allegations in the media may have severe implications, like lawsuits in court, but on social networks this is not the case. This can explain why social networks spread more disinformation than traditional media.

Minimal regulation of social networks could help address both factors, as well as others like bots.

Juan Pablo Miranda

From Chile, we would like to contribute these reflections:

1.- In Chile, sharing false or biased information is associated with digital activism. One reason is that these groups tend to share more information on social networks. Another reason is the way information pollution works on social networks: when a piece of information coincides with our worldview, we tend to believe it more easily, since it confirms the visions and values we previously held. This is called confirmation bias.

3.- At least two factors can be mentioned. First, the lack of awareness of how social networks work and how the algorithms used by platforms such as Twitter or Facebook filter and bias the information to which we are exposed, based on our values, political views and interests. Second, there is no culture of checking the information to which we are exposed, and there is little awareness of the sheer volume of circulating information that could be false.

4.- One of the main problems with information pollution in social networks is the way the algorithms of platforms such as Twitter segregate users. In general, users on social networks interact with other users based on common interests and worldviews, which results in ideologically and socially closed networks. The existence of these informational "bubbles" makes it difficult to contrast information and views on different topics (a toy simulation of this dynamic follows below).

7.- Awareness and pedagogy campaigns. Building alliances that allow the information bubble to be broken.
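To illustrate the bubble dynamic mentioned under question 4, here is a toy simulation; every parameter is invented for illustration and does not describe any platform's actual algorithm. Agents repeatedly swap their most dissimilar contact for a recommended, more similar one, and the average opinion gap between connected users collapses.

```python
import random

# Toy model of homophilous "recommendation" sorting users into bubbles.
# All parameters are illustrative, not any platform's real algorithm.
N_USERS, N_STEPS = 100, 2000
random.seed(42)

opinion = [random.random() for _ in range(N_USERS)]  # stance in [0, 1]
follows = {u: set(random.sample(range(N_USERS), 5)) - {u}
           for u in range(N_USERS)}

def avg_gap():
    """Mean opinion distance between users and the accounts they follow."""
    gaps = [abs(opinion[u] - opinion[v]) for u in follows for v in follows[u]]
    return sum(gaps) / len(gaps)

print(f"initial average opinion gap: {avg_gap():.3f}")

for _ in range(N_STEPS):
    u = random.randrange(N_USERS)
    if not follows[u]:
        continue
    v = random.choice(list(follows[u]))
    # "Recommendation": the most like-minded of a few random candidates.
    candidate = min((w for w in random.sample(range(N_USERS), 6) if w != u),
                    key=lambda w: abs(opinion[u] - opinion[w]))
    if abs(opinion[u] - opinion[candidate]) < abs(opinion[u] - opinion[v]):
        follows[u].discard(v)
        follows[u].add(candidate)

print(f"final average opinion gap:   {avg_gap():.3f}")
```

The shrinking gap is the closed network described above: each user ends up exposed mostly to like-minded accounts, which is exactly what makes contrasting information hard.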

Rachel Pollack Moderator

Hi Juan Pablo, thank you for this interesting perspective from Chile.

You mention that much of the false and biased information shared in Chile is associated with digital activism. Does this false content relate to specific types of issues? What are the individuals or organizations spreading this false information trying to achieve?

Regarding Question 3, you identify a lack of awareness about how social networks work, as well as an absence of a culture of checking information, as factors that make the public vulnerable. Do you think that media and information literacy could help to counter these?

Juan Pablo Miranda

Rachel,

Hello. Many political organizations use misleading or exaggerated information to make a point. However, most people do not know when they are sharing disinformation on social platforms. The problem is that people tend to accept as true information or news that matches their beliefs or political points of view. That is why political activism is related to the spread of disinformation on social media. Currently, fact-checking organizations have detected much misleading or false information about the constitutional process that Chile is experiencing.

On the second question, I agree that media and information literacy could play an important role, especially on social platforms. For example, social media companies should actively explain how their algorithms work.

 

Rachel Pollack Moderator

Hello! Welcome to Week 2 of the UNDP-UNESCO consultation on disinformation.  I'm Rachel Pollack, and I work in UNESCO's Section for Freedom of Expression and Safety of Journalists.

I'll be serving as moderator of Room 2 this week, taking over from Louise Shaxson and Simon Finley.

This Room covers topics surrounding the drivers, enablers and mitigation mechanisms of disinformation such as: the primary sources of disinformation; the factors that make the public vulnerable to it; the extent to which internet companies' algorithms push disinformation; and the role of various stakeholders in monitoring disinformation. 

Please answer any of the questions at the top of the page, whether just one or all seven. It will help if you can please indicate the question numbers in your response. Feel free to introduce yourself if you wish. 

Your contributions will help shape UNESCO and UNDP's actions to counter disinformation around the world.

Looking forward to your input!

Rachel

 

Ruth Canagarajah Moderator

What are the factors that make the public vulnerable to disinformation? What are the key normative, technology, governance and social cohesion enablers of disinformation?
 

An interesting finding from MIT is that fake political news, aside from being consumed quickly, gets shared up to 3x as quickly as factual news. Moreover, "controlling for many factors, false news was 70% more likely to be retweeted than the truth." This suggests two things to me: 1) there's something very intrinsic to the content itself (wording/language, "novelty" in ideas, formatting and headlines) that draws people in; and/or 2) the content has been targeted well to the individual, whether through alignment with socio-political beliefs or through involvement in an active social network that drives the spread of misinformation via information cascades. Regarding Point 1, I would be super keen to see the use of Natural Language Processing via sentiment analysis to explore whether similar linguistic patterns emerge in identifying misinformation (a toy sketch of the idea follows below). Regarding Point 2, this is what Busara has largely been occupied by in recent projects.
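To make Point 1 concrete, here is a minimal sketch of the sort of analysis I have in mind. The headlines and the tiny "charged word" lexicon are invented for illustration; a real study would use labelled corpora and a proper sentiment model rather than a hand-rolled word list.

```python
# Sketch: do false items use more emotionally charged wording than true ones?
# Lexicon and headlines below are invented for illustration only.
CHARGED = {"shocking", "outrage", "secret", "destroy", "exposed",
           "banned", "miracle", "disaster", "traitor"}

def charged_rate(text: str) -> float:
    """Fraction of words drawn from the charged-word lexicon."""
    words = [w.strip(".,!?\"'").lower() for w in text.split()]
    return sum(w in CHARGED for w in words) / max(len(words), 1)

labelled = [  # (headline, is_true) - toy data, not a real corpus
    ("SHOCKING: secret plan to destroy the economy exposed", False),
    ("Central bank holds interest rate at 4.5 percent", True),
    ("Miracle cure BANNED by angry officials", False),
    ("Committee publishes annual budget report", True),
]

for is_true in (False, True):
    rates = [charged_rate(t) for t, ok in labelled if ok is is_true]
    label = "true" if is_true else "false"
    print(f"{label} items, mean charged-word rate: {sum(rates)/len(rates):.2f}")
```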

At the center is the capacity of fake news to bypass System 2 (deliberate) thinking by drawing on reflexive System 1 biases. These include limited attention, the need for group belonging/identification (which occurs on a spectrum but is quite tied to Louise's idea of a shaky sense of identity that needs to be validated), and the comparative advantage that mis/disinformation is often more evocative and/or novel than run-of-the-mill news (well, depending on what country we're talking about these days).

Dina Mansour-Ille

A very interesting follow-up to the earlier discussion. I would say that nowadays it is very difficult to pinpoint one primary source or specific motivation driving the creation and sharing of disinformation (Q1). It depends not only on the country/community, but also very much on the topic, the medium and the people most influential in relation to that particular topic. That said, I believe that social media is by most accounts a powerful 'super spreader' of disinformation, quite simply because anyone can share and sell information as fact, which then grows and mushrooms into new shapes and forms.

That leads me to Q2: nowadays, I would never trust any news or information shared on social media. Unfortunately, even the most neutral information sources today, such as the BBC, do get influenced by the super-spreading of false information. I therefore only trust the 'law of 3', i.e. if, in neutral, more traditional news mediums, I see the same piece of information being shared more than 3x, then I classify it as credible. I do not, however, think there is any one source of news or information that can be trusted nowadays, as information has become increasingly fluid.

In relation to Q3, I wouldn't say that the public is vulnerable to disinformation in general. It very much depends on who is sharing the information - i.e. if you are a supporter of a particular political figure and he/she shares a particular piece of news or information as fact, you are simply more likely to believe it. In a sense, the name and credibility of, and your interest in, certain public figures can make you vulnerable to disinformation. In addition, you are more likely to believe disinformation if it supports your position - it is simply human nature, especially when you are not sufficiently informed about the topic. We have witnessed this quite strongly with the coronavirus pandemic: ample disinformation has been shared as 'facts' supporting different positions on the virus, and these different - sometimes even contradictory - 'facts' have been hysterically super-spread. This leads me to Q4.

In relation to the virus, algorithms did indeed play a role in the spread of disinformation. I do not know, however, to what extent; I think this differs by topic, company, community and country. But it no doubt amplifies the problem. I am, however, conflicted over Q5. I am not entirely sure that monitoring works. Monitoring can be, and in many cases is, biased, or it follows algorithms that might censor information that is not necessarily inaccurate. So personally I am not sure monitoring is an effective solution - self-censorship combined with awareness-raising tools and campaigns might be more effective, in my opinion. In relation to the coronavirus, Facebook, for example, has been actively monitoring content for a while now, and its policy hasn't been effective. If anything, it has been counter-productive: the more content was deleted and banned, the more those who didn't believe in the severity of the virus became convinced that some sort of conspiracy was happening. I have witnessed this first hand within my own network - friends whose content was deleted on Facebook ended up believing in some sort of conspiracy around the coronavirus.

I am not sure about Q6, but I definitely think that CSOs, governments, media and internet companies have a serious role to play in controlling the spread of disinformation (Q7). But this shouldn't be about banning content; rather, it should be about monitoring the flow of information, identifying the drivers and enablers of disinformation, and sharing more accurate and informed knowledge to combat it. Banning content on one platform adds to the drivers of disinformation, as we have seen with the coronavirus pandemic, and banned content simply finds an audience elsewhere. I think educating people, sharing accurate and informed information and re-directing drivers and enablers would prove more effective.

    Rachel Pollack Moderator

    Dear Dina,

    Thank you for this thoughtful and comprehensive response, which underscores the nuance in understanding the drivers and spread of disinformation.

    As an academic, could you point us to some research that addresses some of the issues you raised? Are there specific examples or studies that we could consult to learn more?

    Best wishes,

    Rachel

    Olena Borodyna

    Evening, everyone!

    Just a few reflections from me on fake news in Ukraine.

    In Ukraine, the enablers of disinformation from Russia and other sources, such as domestic TV channels and newspapers with ties to oligarchs, include many factors: exposure, political attitudes and receptivity to foreign media influence. While an independent media ecosystem has developed to some extent, Ukraine and many other former Soviet countries have continued consuming (and producing) content from and for Russian (and Russian-speaking) audiences. Such was (and still is) the case in many parts of Ukraine. In many cases people continue consuming such content despite being aware that it's biased. When confronted about false or misleading facts in their news sources, some also dismiss the criticism as false allegations from the other end of the political spectrum. I would add that understanding the politics of disinformation in Ukraine requires getting acquainted with the interaction of the domestic political and media landscapes, as media companies are owned by oligarchs and are generally perceived to be biased in favour of or against whomever the owner seeks to support or undermine.

    In Ukraine, civil society plays a crucial role in monitoring disinformation and providing training to journalists and other representatives of civil society to enhance early-warning capacity. However, as in many countries, media literacy among the population is generally quite low, and fact-checking is limited to a small community of civil society organisations and engaged citizens. I should add that raising media literacy alone won't tackle disinformation. In Ukraine (as in many post-Soviet countries), trust in government and political institutions is weak, and in many cases people trust official sources no more than other media channels. In the post-Soviet era, many countries of the former Soviet bloc, including Ukraine, struggled with nation-building (precarious economic conditions, a lack of unifying political leadership and ethnic divisions are some of the reasons). The lack of national unity and the absence of impartial news sources are also driving people to engage with media channels that often spread false or misleading information, creating siloed communities where polarising opinions can flourish.

    Rachel Pollack Moderator

    Dear Olena, 

    Thank you for this interesting perspective. 

    Going to Question 4, do you see a specific impact of the internet, and specifically social media platforms, in the spread of disinformation?

    Best wishes,

    Rachel

    Jim Della-Giacoma

    A few brief reflections from an analyst who works on fragility and conflict in South and Southeast Asia, but who lives in the United States.

    • In your country/community, what are the primary sources and motivations driving the creation and sharing of disinformation? Who are the “super spreaders” of disinformation, those who have sufficient influence and following to amplify on and offline?

    For the angry and aggrieved, social media are an immediate, unfiltered, and powerful tool of expression. Their views go unchallenged by facts, laws, or historical complexity. They blend into the daily flood of information online. It can be some time before extreme and often ill-informed viewpoints are seen and assessed as a threat by those in a position to counter or check them. Being challenged by peers or members of one's own community is an important first step towards compromise, developing a common understanding, or narrowing the gap between extreme viewpoints.

    • What are considered trusted news sources among different groups? What are the criteria for trusting these sources?

    I am not sure the traditional "fact-checking by media" model is working. There will never be enough resources. It is too slow and is often deliberately discredited by those with a political interest in spreading disinformation.

    My sense is that countering deliberate or misinformed disinformation, particularly that which prompts violent conflict, needs to be done at a much more local level by activating allies who have their own sources of community legitimacy. Community by community, however we define this, we need to understand who the trusted sources of information are, which is not always the same as a "news source".

    • What are the factors that make the public vulnerable to disinformation? What are the key normative, technology, governance and social cohesion enablers of disinformation?

    In some places, I see distrust of, and dysfunction in, representative government as a distinct problem, especially when it is not representative. Weak relationships between communities and those who govern them aid disinformation. A lack of transparency in decision making does not help. Corruption undermines legitimacy. A lack of understanding of how and why decisions are made is part of this problem. The way decisions involving communities are made and communicated to them is important in countering those inclined to use disinformation as an unconscious or deliberate tool. These weak relationships undermine the ability of concerned or active citizens, who are a small group, to quickly go to the source to cross-check disinformation and potentially counter it. These active citizens are the go-to people for many others in a community when a "hot" issue appears. Making direct source information more easily available, or increasing its prominence online, including on social media, could be a way to counter disinformation. These should not be understood only as mass tools, but as tools for the key active citizens or activists to use.

    • To what extent do Internet companies’ algorithms actively push disinformation, and what is the role of closed communications networks in amplifying the problem?

    The algorithms reward anger, passion, and frenetic levels of activity. They are easily gamed. How do we add speed bumps to these social media tools to slow things down or slow reaction time? How do we create a culture of "stop, think, check, before you respond" in communities of concern? How do we create or reinforce trusted people or places to check with before responding? How do we reinforce or encourage placing more value on information from named, certified, or known sources versus the anonymous, pseudonymous, and unverified?

    Ruth Canagarajah Moderator

    Hello all, and thanks for joining us for the third and final week of the UNDP-UNESCO consultation on disinformation. I’m Ruth Canagarajah and I currently work as a Senior Associate at the Busara Center for Behavioral Economics in Nairobi. 

    I’ll be your moderator for Room 2, and given that it is our final week, I am especially keen to hear your remaining insights and experiences relating to disinformation and its drivers, enablers, and risk-mitigation approaches. Along with continuing to flesh out ideas on the 7 guiding questions found at the top of this page, I encourage you to interact with the other commenters on this forum. This is a wonderful time in our consultation process for more dynamic conversations and informal back-and-forth on this hub for idea exchange. I also encourage you to explore Rooms 1 and 3 for comments, questions, and cross-pollination.

     

    I look forward to the final round of contributions, no matter how brief or in-depth the inputs!

     

    Warm regards,

    Ruth

    Louise Shaxson Moderator

    Hello everyone, if you haven't seen this report in today's news, it is very relevant to what we are discussing here - how disinformation is combined with other methods of cyber warfare to destabilise.  It makes chilling reading. 

    Separately, I've commented in Room 3 on a report that has come out in the UK advocating for an independent agency to monitor platforms (do go over there and have a look). I wonder, though, how much that would achieve without a supporting structure that specifically looks at the issues of identity and community we are discussing here. And does it all come back to governance? If, as Jim says, the traditional fact-checking model isn't working, is part of the problem that we haven't been nearly as imaginative as we need to be about how our governance structures must change to handle our rapidly shifting sense of identity and community? Has anyone who's posted here had experience of working with citizens' juries, for example? Or with anything similar or more innovative?

    Jamie Hitchen

    I wanted to share a few thoughts based on my experience researching, primarily, the use of WhatsApp during recent elections in West Africa (Sierra Leone, Nigeria and The Gambia).

    1. Given the nature of my work, it is perhaps not surprising that I see political actors as the main sharers of falsehoods. This extends to government officials and those affiliated with political actors and parties: people who are not always formally part of the party, but who work with it to advance its electoral chances (hoping for a political appointment if the party or candidate is successful).
    2. Yes, a strong existing presence on all social media platforms is key (content is often shared on one platform, say Facebook, and then copied and pasted to Twitter and WhatsApp, for example). In northern Nigeria, several social commentators who had built up a following over several years, online and offline, were able to sell that captive audience to political bidders. I think the idea of offline credibility can shape how information you share online is received (and vice versa). Pastors and other religious/traditional leaders in Nigeria can be super-spreaders, as they are trusted arbiters of information among their congregations; if they take a piece of false information from online (deliberately or not) and share it with followers (offline), people will be much more inclined to believe it because of their standing in society.
    3. The source [the person sharing it] of news matters more, in some cases, than the content itself on platforms like WhatsApp. I think there is some good Afrobarometer data on this for multiple African countries, which puts radio as the most trusted source of mainstream media information (and the most listened to); this was backed up by some of the small-scale survey work done in Nigeria, where radio remains a source used to verify content.
    4. In Nigeria, there is a lack of trust in the government's ability to provide credible information, and that provides a space for falsehoods that align with existing biases and divisions to flourish and be more easily believed. The most effective electoral disinformation in Nigeria draws on and draws out these divisions. There is also a lack of digital/civic literacy, particularly among older users but also more generally, which means people struggle to discern what is true and what is false. This is further limited by people's access to the internet. In Sierra Leone, many young people I spoke with 'managed 25MB a day', which allowed them to use WhatsApp but not to download videos or PDFs, or to fact-check away from the application. The same applies in many more rural parts of Nigeria. This also links to Facebook Basics and its potential issues in making its platforms the only source of online 'news' for some users. Others have researched and written about this more extensively - http://democracyinafrica.org/facebook_scramble_africa/
    5. Certainly closed communication networks like WhatsApp can amplify the risks of disinformation. First, by being more private they make it easier for people to hide their identity when creating content (people in Nigeria create WhatsApp groups of two or three people and share into them first, so that content leaves the group labelled 'forwarded', making it hard to know the original source). Even if you do know the source, all you know is their phone number; names and a profile are not required as on Facebook. Instead, users often rely on who shares the information with them, as the intimacy of the platform lends itself to this. So they do not always look to fact-check or verify online (though many do), but will judge veracity based on what they know, who shares the content with them, and how many times they receive it.
    6. This is difficult. A story can go viral on social media in one to two hours, and it is not always clear which story it will be, so early warning is quite difficult. Catching up with disinformation once it starts circulating is also difficult, which limits the impact fact-checking can have (by the time you do a thorough fact-check and present the findings, the disinformation can be everywhere online and already feeding into people's biases). Flagging accounts or groups known to have previously spread disinformation is one possibility, but when it comes to politics this will always lead to accusations of taking sides; for things like health disinformation it is perhaps more plausible to identify accounts where disinformation is shared regularly and then communicate this information to citizens (a minimal sketch of this idea follows below).
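    A minimal sketch of the flagging idea in point 6 follows; the share log, item IDs and threshold are all hypothetical, and in practice this would sit on top of a fact-checkers' database of already-debunked claims.

```python
from collections import Counter

# Hypothetical log of (account, shared_item) pairs plus a set of item IDs
# that fact-checkers have already debunked. All invented for illustration.
DEBUNKED = {"item_17", "item_42", "item_88"}
shares = [
    ("acct_a", "item_17"), ("acct_a", "item_42"), ("acct_a", "item_88"),
    ("acct_b", "item_03"), ("acct_b", "item_42"),
    ("acct_c", "item_05"),
]

FLAG_THRESHOLD = 3  # repeat offenders only, to limit false positives

false_shares = Counter(acct for acct, item in shares if item in DEBUNKED)
flagged = [acct for acct, n in false_shares.items() if n >= FLAG_THRESHOLD]
print("accounts to flag for review:", flagged)  # -> ['acct_a']
```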

    Overall, I think understanding the context in which disinformation is shared is fundamental to providing responses that have shared principles but are tailored to the specific reality of a country, or even of states within a country. One thing our Nigeria research (https://www.researchgate.net/publication/334736880_WHATSAPP_AND_NIGERIA…) showed was that localised pieces of disinformation (those focused on a state-level event rather than a national happening) were more likely to have been seen and engaged with.

    Jamie Hitchen

    Just to add, on the question of how government institutions can participate in and monitor the information landscape: in politically polarised environments it is hard to establish and sustain credible independent bodies (how will they be appointed, etc.), and that is even before you discuss whether they should exist at all. You can perhaps look at the model used for journalism as one possible solution, whereby members are represented by a body that sets standards they must adhere to, but this can only apply to bloggers or online influencers, as otherwise you would have every citizen being part of the association. I think the more viable shorter-term solution is building codes of conduct for online engagement that are enforced by other users. With WhatsApp, you empower group admins to better control what is discussed and shared in a group (perhaps the platforms can also do more to support this), though this might be a lot of work for the group admins!

    Generally, I think social media platforms need to do a lot more in the African context in terms of recruiting content moderators who speak local languages. Hausa has around 50 million speakers, so even if only 2% are online, that is 1 million people who might use the language to converse (Hausa-language Facebook is very vibrant in Nigeria). But I do not know whether Facebook or Twitter have many such moderators, if any. There are also questions about whether the terms of use are available in these languages (https://www.vice.com/en/article/xg897a/hate-speech-on-facebook-is-pushi…); if they are not, users could argue (hypothetically) that they never signed up to the platform's rules of use.

     

    I think platforms, working with media and civil society, can also do more to educate users about features of their applications that may not be well known: for example, how to change your WhatsApp group settings so that you are not automatically added to groups but instead receive an invite and can then decide whether to join.

     

