
FALSE INFORMATION IS KILLING PEOPLE

Facebook became a tool for the largest forced migration in recent human history.
700,000 Rohingya Muslims fled to Bangladesh to escape the brutality of the ethnic
cleansing operations the Myanmar military had been conducting against them. But how
can an event as abhorrent as this be allowed to continue in this age of information,
communication and technology (ICT)? One may be able to trace the problem to social
media, specifically Facebook. The spread and normalization of anti-Rohingya sentiment
was attributed to numerous fake accounts and pages. Even Facebook officials
themselves confirmed the presence of “clear and deliberate attempts to covertly spread
propaganda” originating from the Myanmar military, whose main goal was to "set the
country on edge… [and] to generate widespread feelings of vulnerability and fear that
could be salved only by the military’s protection”. Nevertheless, the question still
stands: how is it still possible for people to believe such content when anyone who can
access Facebook can surely also access the greater part of the internet and learn the
truth?

Facebook
Being one of the largest and most popular social media networks today, Facebook has
been continually marred by controversy, with instances in which user data was
manipulated to subtly influence elections or even used as a tool for targeted ethnic
extermination. The social medium prided itself on its ability to create connections but,
it seems, did not look far enough ahead to see how this ability could be abused. From
connecting people to propagating false information that has led to multiple serious
crises, one should ask who is to blame, or perhaps more importantly what is to blame,
for the answer seems to lie in the social medium’s Personalized Feed.
With the rise of the age of ICT, decisions previously made organically and handled in
an analog manner have increasingly been digitized. Social media, for one, have
complemented and perhaps revolutionized the way people communicate with one
another. However, this seems to come at a cost, for one’s actions in the medium might
no longer be truly one’s own: more often than not, what one gets to see has already
been filtered or personalized with the help of "algorithms" so that the value of whatever
one sees is amplified and made to seem more important, which translates to a greater
appeal or likelihood of one being interested in it. This changes how one approaches
information, for its previously probabilistic quality now comes into question: if all of
one’s interactions in the Onlife are interpreted and then meddled with by algorithms,
would that not immediately imply an inherent meaning or value within that specific
information? There are also various ethical implications attached to this practice of
personalization. One could make the case that algorithms themselves are essentially
not objective and do contain inherent biases; that the algorithmic manipulation of
information using one’s behavioral data is ethically dubious and may even violate one’s
fundamental rights; and lastly, that the societal potential and impact of personalization
is yet to be fully realized. There is thus a need to examine the ethical implications of
content personalization and whether there could be an ethical way to utilize it, or
whether it is inherently detrimental to its users and the information it manipulates.
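
To make the mechanism concrete, consider a minimal sketch of how a personalized
feed might rank items using a user's behavioral data. Everything here is an assumption
for illustration (the item fields, the affinity table and the scoring rule are invented), not
Facebook's actual system:

from dataclasses import dataclass

@dataclass
class Item:
    topic: str
    reliability: float  # 0..1; note the ranker below never reads this field

def personalized_score(item: Item, topic_affinity: dict) -> float:
    """Score an item purely by how strongly the user's recorded behavior
    matches its topic; reliability plays no part in the ranking."""
    return topic_affinity.get(item.topic, 0.0)

# Hypothetical behavioral data: this user engages heavily with one narrative.
affinity = {"rumor": 0.9, "fact-check": 0.1}

feed = [Item("fact-check", reliability=0.95), Item("rumor", reliability=0.05)]
feed.sort(key=lambda it: personalized_score(it, affinity), reverse=True)
print([it.topic for it in feed])  # ['rumor', 'fact-check']

The unreliable but behavior-matching item floats to the top, which is exactly the
amplification described above: the feed predicts interest, not truth.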

Policing Facebook
Facebook enforces its community guidelines in two ways. The first barrier, which FB
considers quite successful, is an algorithm that screens most of the content uploaded
to the medium and even extends to detecting the creation of “fake accounts”. However,
it seems that this algorithm still has difficulty identifying hate speech, and so enters
Facebook’s second barrier: its review teams, who individually look at user-made
reports and determine whether the reported content violates the standards or not. The
practicality and reasonableness of FB’s system is immediately apparent. The sheer
amount of information being uploaded to the medium is bound to overwhelm any
individual, and perhaps even groups of individuals, if they were tasked with scrutinizing
each and every item, and it does seem agreeable that normative concerns such as
hate speech remain a human responsibility. However, the issue that comes to mind,
and that Facebook also seems to recognize, is what about the violations that are never
reported and are thus allowed to propagate? One may argue that these kinds of
content will eventually be reported to the system and, although that is possible, the
probability of this happening is lowered passively, and perhaps even actively, by the
environment Facebook has fostered through the medium’s extensive use of
personalized feeds.

What does this mean for information?


Previously held principles of the accessibility of information on the internet no longer
hold, for it is not the more accessible and reliable information (that which loads faster,
is mobile friendly, is factual, etc.) that is prioritized; instead, it is the information that
suits the views and ideals of the person for whom the feed is personalized.
Additionally, it appears to me that the probabilistic quality of information, its being
without truth or meaning until it is organized as such, is no longer in contention here.
Some may argue that the information in the feed has no organic meaning, but that is
not the case for, as stated above, a personalized feed such as Facebook's has already
been filtered by algorithms in such a way that it will appeal to the person it is being
personalized for. A study's observation of this will be discussed later on, but for now,
what seems certain is that the information in FB's feed is not devoid of organic
meaning, which explains why the medium does not merely have a problem of
"information overload" but instead a problem of misinformation.
To validate my claim that the probabilistic quality has been removed, one would need
to recall the definitions of algorithms, for in the case of content personalization,
algorithms are no longer mere mathematical constructs or configurations that still lack
certain variables. "Algorithms" as used by Facebook in their content personalization
are already implementations, combining the mathematical construct with a particular
configuration to achieve a specific task, namely the aforementioned personalization.
The information is therefore no longer sitting in some vacuum waiting to be organized
so that one can make sense of it. What a user of Facebook sees is information that
has already been organized by the algorithms, through the data the user has supplied,
and that is already predicted to have value to the user.
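
The distinction can be made concrete with a toy example (purely illustrative, not any
real system): the construct is an abstract, task-agnostic procedure, while the
implementation binds it to a particular configuration for a particular task.

# Construct: an abstract procedure -- order items by some score.
def rank(items, score):
    return sorted(items, key=score, reverse=True)

# Implementation: the same construct bound to a specific configuration
# (one user's predicted-value model) to accomplish a specific task
# (personalizing that user's feed). The output arrives pre-organized.
predicted_value = {"post-a": 0.2, "post-b": 0.8}
personalized_feed = rank(["post-a", "post-b"],
                         lambda p: predicted_value.get(p, 0.0))
print(personalized_feed)  # ['post-b', 'post-a']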

How does FB create or perpetuate an environment akin to an echo chamber? Why?
- The PNAS study quantitatively analyzed the cascade dynamics of Facebook users
with regard to two kinds of content: scientific information and conspiracy theories. The
study singled out these two since, on one side, conspiracy theories tend to "simplify
causation, reduce the complexity of reality, and are formulated in a way that is able to
tolerate a certain level of uncertainty", while scientific information spreads "scientific
advances and exhibits the process of scientific thinking". Conspiracy theories are
organized in such a way that, although they may not fully explain or answer something,
they are still explanations, regardless of whether they are actually verifiable or not.
Scientific information, on the other hand, gives explanations that may indeed be
verifiable but that would need one to be at least familiar with the subject matter
already, or at least willing to exhibit critical scientific thinking to do the verification. It all
comes down to content verifiability. This prelude, I believe, is significant, for it
contextualizes one of the study's findings: the researchers noticed that scientific news
is usually assimilated, meaning that it spreads quickly yet interest in it does not linger
and usually drops quickly. Conspiracy theories, however, have a slower momentum at
first and spread more slowly but, over their lifetime, keep gaining a more consistent
following and are able to continually disseminate themselves and form discussions,
allowing them to persist for a much longer time. This begs the question, and allows me
to transition to, why this happens and how it is actually being done.
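
As a rough illustration of the two lifetime shapes the study describes (the curves and
numbers below are synthetic, chosen only to mimic the qualitative finding, not the
study's data):

import math

def science_interest(day):
    """Fast assimilation, fast decay: interest peaks early, then drops."""
    return math.exp(-day / 2)

def conspiracy_interest(day):
    """Slow momentum, persistent following: interest builds and lingers."""
    return 1 - math.exp(-day / 10)

for day in (1, 5, 15, 30):
    print(day, round(science_interest(day), 3), round(conspiracy_interest(day), 3))
# day 1:  science 0.607 vs conspiracy 0.095  (science spreads first)
# day 30: science 0.000 vs conspiracy 0.950  (the conspiracy persists)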

Relevant vs Reliable
Because FB is still essentially a commercial platform, personalized feeds exist to keep
what are essentially its customers, by maximizing both their retention on the social
medium and their use of it. This makes it clear that in this part of the infosphere,
relevance has greater weight than reliability, reliability being understood in the sense
of accuracy and factuality by present standards. Thus the information in a personalized
feed may be relevant but not necessarily reliable. One need not, of course, think the
worst of this, for there are indeed times when the truth is the thing one needs least.
Also, most of the time, it would probably not be the case that what one encounters in
one's FB feed is misinformation, especially considering the medium's efforts to ramp
up the measures by which misinformation is immediately halted in its tracks. However,
the self-fulfilling nature of a personalized feed, with its algorithms only letting through
the information they think one needs, seems inherently problematic. Additionally, one
may be even more doubtful of FB because it has put more restrictions on its API,
making it uncertain whether it is actually clearing misinformation or not; essentially, if
this misinformation really is relevant and essential to keeping its customers, would it
not run counter to its interests as a business to filter it out? How would one even notice
that what one is seeing in one's feed is misinformation, if it is personalized for oneself,
especially given the PNAS study's findings that confirmation bias is not only stronger
in a social medium but perhaps also actively encouraged? Thus, with the combination
of confirmation bias and selective exposure, it has been observed that users tend to
home in on information that reinforces their own preconceived notions, even if that
information is not factual and is, at times, even contradictory.
This is because the algorithms responsible for this personalization of information,
specifically in the social medium of Facebook, are still unreliable in their ability to
detect and accurately sort the information they come across by either its importance or
its "truthfulness". It has been surmised that 63% of Facebook's users acquire their
news from the social medium (Newman, Levy, & Nielsen, 2015). But Facebook,
although it has intensified its fight against misinformation, apparently subjects news to
the same popularity dynamics, such as clicks, shares and general reading time, as any
other content in its medium (Newman et al., 2015). The public interest is now
overridden by the interest of the public.
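
To underline what subjecting news to "the same popularity dynamics" amounts to,
here is a hypothetical scoring rule built only from the engagement signals named
above; the weights and numbers are invented for illustration:

def popularity_score(clicks, shares, read_seconds):
    """Rank news by engagement alone: there is no term for accuracy, so
    factual and false stories compete on exactly the same footing."""
    return 1.0 * clicks + 3.0 * shares + 0.01 * read_seconds

# A sensational false story can out-score a careful factual one.
false_story = popularity_score(clicks=5000, shares=900, read_seconds=20000)
factual_story = popularity_score(clicks=1200, shares=80, read_seconds=90000)
print(false_story > factual_story)  # True: the interest of the public wins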

Notes:
Facebook
– One of the few social media platforms that has endured and has consequently
reigned with few serious competitors.
– Headed by Mark Zuckerberg, the medium makes most of its revenue
from advertisements.
– Its innovation, user-friendly interface and general emphasis on social sharing and
forming connections with other people made it a much better social platform, allowing
it to successfully reach a wider audience.

Misinformation
– Misinformation is defined as "false or incorrect information that is spread intentionally
or unintentionally (i.e. without realizing it is untrue)"[2]
Echo Chamber
- An echo chamber, in the context of the infosphere, is defined by the Cambridge
dictionary as “a situation in which people only hear opinions of one type, or opinions that
are similar to their own”
- It has also been defined as clusters of homogeneity (Bessi et al., 2017).
Algorithm
Construct: Formally defined as a concept of mathematics that is “a finite, abstract,
effective, compound control structure, imperatively given, accomplishing a given
purpose under given provisions” (Hill 2015, p. 47)

Decision Process: Some also define it as “any procedure or decision process, however
ill-defined” (Hill 2015, p. 36)
Implementation: Another definition is made relevant by the popular usage of
“algorithm”, as in “‘algorithms’ that suggest potential mates for single people and
algorithms that detect trends of financial benefit to marketers” (Hill 2015, p. 36).
“Algorithm” in these cases does not refer to any specific formula but is rather used as
“the implementation and interaction of one or more algorithms in a particular program,
software or information system” (Allo et al., 2016).

References:
