Misinformation doesn’t just operate on social media; its playing field is much wider. It also seeps into television and radio broadcasts, into the print press, and into our private communications. Because these environments differ, they deserve to be studied separately: we probably do not perceive information the same way depending on whether it comes from a Facebook news feed, a rolling news channel, or a WhatsApp group. Brazilian researchers have investigated whether parameters such as political orientation or open-mindedness influence our judgment when we are exposed to information on WhatsApp.
Brazil has experienced a very high death rate during this pandemic. For most health authorities, there is no doubt that part of these deaths were caused by what the World Health Organization called an “infodemic”, that is to say, “information overload and the rapid spread of misleading or fabricated information, images and videos, which, like the virus, are highly contagious, grow exponentially (…) and complicate response efforts to the Covid-19 pandemic”. However, this supposed infodemic is very heterogeneous. If we want to understand it better, misinformation, and the way people react to it, must be studied wherever it spreads. This is what Brazilian researchers undertook by exposing participants to messages built from scratch. They have published their results.
The emergence of the Internet in Brazil
What variables influence our discernment?
The first finding the experimenters highlighted in this study is that the majority of participants did not identify with explicitly politicized WhatsApp groups (Bolsonaro’s family / Free Lula) and did not want to be part of them. On the other hand, they were inclined to identify with, and want to participate in, the conversations of more neutral groups, especially those labeled “friends”. The experiment also suggests that there is a small negative correlation between trust in private messaging apps and the ability to discern true from false, and that the time spent on the application does not influence the capacity for discernment.
In addition, even if the correlations remain very modest, we can note that being open-minded (measured by the Actively Open-minded Thinking about Evidence scale, which accounts for our propensity to change our beliefs in light of new evidence), trusting traditional media, and reading newspapers are all three correlated (correlation coefficients of 0.29, 0.29 and 0.26 respectively) with a good ability to discern “true” from “false” information.
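The coefficients reported above (0.29, 0.29, 0.26) are Pearson-style measures of linear association, where 0 means no linear relationship and 1 a perfect one. As a minimal sketch of what such a coefficient is, here is a standard-library-only Python computation; the variable names and scores are made up for illustration and are not the study’s data:

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(x), mean(y)
    # Sample covariance divided by the product of sample standard deviations.
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical scores: open-mindedness (AOT-E-style scale) vs. share of
# true/false headlines correctly classified. Illustrative numbers only.
open_mindedness = [2.1, 3.4, 2.8, 4.0, 3.1, 2.5, 3.8, 3.0]
discernment     = [0.55, 0.70, 0.60, 0.80, 0.65, 0.50, 0.75, 0.68]

r = pearson_r(open_mindedness, discernment)
# A value around 0.29, as in the study, would indicate a modest positive
# association: open-minded participants tend to discern slightly better.
```

A coefficient of 0.29 is real but weak: knowing someone’s open-mindedness score only slightly improves a prediction of their discernment.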
Infodemic = misinformation = bad decision-making?
While it is necessary to better understand the ecology of misinformation, how it spreads and how it influences the population, several questions remain unanswered: is the infodemic a major cause of misinformation, and is that misinformation itself a preponderant cause of poor decision-making by the population? According to several specialists in science communication who published a commentary on the subject, things are far from being that simple.
In their short article, they remind us first of all that we are once again fascinated by a very old problem. Indeed, misinformation is a historical by-product of liberal democracies; it is neither new nor co-emergent with the arrival of social networks. One can cite propaganda as a leading example of disinformation, among other forms of persuasive political messaging intended to deceive the population that have existed for several centuries. The authors note that, according to the work published on the subject, misinformation is particularly amplified during election periods, even though these are the very moments when individuals are generally most attentive to the information around them. For example, long before the arrival of social media, very few Americans were able, during the duel between Al Gore and George W. Bush, to recall their respective political positions on crucial points such as the compulsory registration of firearms.
As with politics, we know that people have trouble understanding scientific facts, and especially how they are produced. Most scientists unfamiliar with the social sciences and decision-making research then think that setting the record straight is enough to solve the problem. Without realizing it, they place their trust in knowledge-deficit models that are not consistent with the best available science on the subject. These individuals exert colossal efforts for often minimal results, because they consider that if people were only exposed to correct information, they would automatically make better decisions. Again, this is not what empirical research suggests. As pointed out in work by the National Academies of Sciences and Medicine in the United States, “the main reasons people don’t do the things they know they should be doing are cognitive preferences for old habits, forgetting, small inconveniences in the present moment, preferences for actions that require the least effort or confrontation, and reasoning”.
Fuzzy trace theory
There is a theory that adequately accounts for these issues: fuzzy trace theory. It postulates that we encode information into two memory categories: the gist and the verbatim. The gist corresponds to the global idea we retain about a subject we have heard about, while the verbatim corresponds to exact memories of the information and its context. According to this theory, we prefer to reason with the gist rather than the verbatim. For example, one may have heard the figures for the benefits and risks of a vaccine and accepted them, and yet, when reasoning in order to make a decision, remember only the gist in a confused way, thinking that both choices involve risks, without recalling the exact figures or the nature of those risks. Faced with this conclusion, the cognitive preferences mentioned above would then encourage us not to be vaccinated.
Why are popularization efforts ineffective against disinformation?
By putting the disinformation present in individuals’ minds on a pedestal, and by often omitting social and contextual explanations, we are undoubtedly guilty of the fundamental attribution error (a bias that consists of giving more weight to individuals’ internal characteristics than to external events when seeking an explanation). As we have just seen, the link between misinformation and behavior is not as linear as one might think. This state of affairs is therefore a first avenue for explaining the modest effectiveness of fact-checking, and even of popularization efforts that strive to reestablish the “truth”.
Moreover, these interventions, based on models that do not have the best level of evidence (knowledge-deficit models), now encounter a completely new information ecology. Even if the problem of disinformation is old, social media have helped accentuate it because of an economic model built on capturing users’ attention. Even information itself is made to serve this objective: accuracy is set aside in favor of engagement (some efforts were nevertheless made during the pandemic, heterogeneously depending on the platform).
Also, in the context in which we have lived for more than two years, the category of “scientific disinformation” is far from being set in stone: information can be categorized as misinformation and then ultimately come to be considered a plausible hypothesis. Likewise, science is a corrective enterprise, so the resulting recommendations evolve over time based on the best available evidence. It would be interesting to investigate in more detail how the general public perceives this and how it impacts their decision-making. It would also be desirable for the public authorities in charge of communication to take stock of the work carried out to date and draw inspiration from it. Finally, as a last plausible explanation, actions against disinformation often focus on very specific details of the topic in question; yet, according to fuzzy trace theory, such details have little impact on decision-making.