Facebook became a tool for the largest forced migration in recent human history.
Some 700,000 Rohingya Muslims fled to Bangladesh to escape the brutality of the
ethnic cleansing operations the Myanmar military was conducting against them. How
could an event as abhorrent as this be allowed to continue in this age of
information, communication and technology (ICT)? One may trace the problem to
social media, specifically Facebook. Numerous fake accounts and pages contributed
to the spread and normalization of anti-Rohingya sentiment. Facebook officials
themselves confirmed the presence of “clear and deliberate attempts to covertly
spread propaganda” originating from the Myanmar military, whose main goal was to
"set the country on edge… [and] to generate widespread feelings of vulnerability
and fear that could be salved only by the military’s protection”. Nevertheless, the
question still stands: how is it possible for people to believe this propaganda
when, if they can access Facebook, they can surely also access the greater part of
the internet and find the truth?
Facebook
As one of the largest and most popular social media networks today, Facebook has
been continually marred by controversy over instances in which user data was
manipulated to subtly influence elections or even used as a tool for targeted
ethnic extermination. The social medium prided itself on its ability to create
connections, but it seems not to have looked far enough ahead to see how that
ability could be abused. From connecting people to propagating false information
that fed multiple serious crises, one should ask who, or perhaps more importantly
what, is to blame, for the answers seem to lie in the social medium’s Personalized Feed.
With the rise of the age of ICT, decisions previously made organically and in an
analog manner have increasingly been digitized. Social media, for one, have
complemented and perhaps revolutionised the way people communicate with one
another. However, this seems to come at a cost, for one’s actions in that medium
may no longer be truly one’s own: what one gets to see has already been filtered
and personalized with the help of "algorithms" so that its value is amplified and
made to seem more important, which translates to a greater appeal or likelihood of
one being interested in it. This changes how one approaches information, for its
previously probabilistic quality now comes into question: if all of one’s
interactions on the Onlife are interpreted and then meddled with by algorithms,
would that not immediately imply an inherent meaning or value within that specific
information? There are also various ethical implications attached to this practice
of personalization: one could make the case that algorithms themselves are
essentially not objective and do contain inherent biases; that the algorithmic
manipulation of information utilizing one’s behavioral data is ethically dubious
and may even violate one’s fundamental rights; and, lastly, that the societal
potential and impact of personalization is yet to be fully realized. There is thus
a need to examine the ethical implications of content personalization and whether
there could be an ethical way to utilize it, or whether it is inherently
detrimental to its users and to the information it manipulates.
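The filtering dynamic described above can be sketched in a few lines of Python. Nothing here reflects Facebook's actual systems; the scoring rule, field names, and topic weights are illustrative assumptions only.

```python
# Illustrative sketch of behavioural personalization (NOT Facebook's actual system).
# Behavioural data (clicks, dwell time) is assumed to have been reduced to per-topic
# interest weights; posts matching the user's inferred interests rise to the top,
# so the feed amplifies what the user already engages with.

def personalize(posts, interest_weights):
    """Rank posts by inferred user interest; unmatched topics sink to the bottom."""
    def score(post):
        return interest_weights.get(post["topic"], 0.0)
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "topic": "politics"},
    {"id": 2, "topic": "sports"},
    {"id": 3, "topic": "politics"},
]
# Hypothetical interest profile inferred from past behaviour.
weights = {"politics": 0.9, "sports": 0.2}

feed = personalize(posts, weights)
print([p["id"] for p in feed])  # politics posts surface first: [1, 3, 2]
```

Note that nothing in the score measures accuracy: the ranking is driven entirely by what the user has already shown interest in, which is the crux of the ethical worry raised above.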
Policing Facebook
Facebook enforces its community guidelines in two ways. The first barrier, which FB
considers quite successful, is an algorithm that oversees most of the content
uploaded to the medium and even extends to detecting the creation of “fake
accounts”. However, it still has difficulty identifying hate speech, and thus
enters Facebook’s second barrier: its review teams, who individually examine
user-made reports and determine whether the reported content violates the
platform’s standards. The practicality and reasonableness of FB’s system is
immediately apparent. The sheer amount of information uploaded to the medium is
bound to overwhelm any individual, and perhaps even groups of individuals, if they
were tasked with scrutinizing each and every item, and it does seem agreeable that
normative concerns such as hate speech remain a human responsibility. However, the
issue that comes to mind, and that Facebook also seems to recognize, is what
happens to violations that are never reported and are thus allowed to propagate.
One may argue that such content will eventually be reported to the system and,
although that is possible, the probability of this is lowered, passively and
perhaps even actively, by the environment Facebook has fostered through the
medium’s extensive use of personalized feeds.
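The two-barrier model described above can be sketched as follows. This is a toy reconstruction of the described process, not Facebook's implementation; the blocklist, function names, and report mechanism are assumptions for illustration.

```python
# Sketch of the two-barrier moderation model described in the text:
# barrier 1 is an automated filter; barrier 2 is a human review queue
# that is fed ONLY by user reports. All details are hypothetical.

def automated_filter(post):
    """First barrier: a crude classifier that catches only obvious violations."""
    blocked_terms = {"obvious_slur"}  # hypothetical blocklist
    return any(term in post["text"] for term in blocked_terms)

def moderate(posts, user_reports):
    """Second barrier: only reported posts ever reach human reviewers."""
    review_queue = []
    published = []
    for post in posts:
        if automated_filter(post):
            continue  # removed automatically, never published
        if post["id"] in user_reports:
            review_queue.append(post)  # stays up while awaiting human judgement
        published.append(post)
    return published, review_queue

posts = [
    {"id": 1, "text": "harmless update"},
    {"id": 2, "text": "contains obvious_slur"},
    {"id": 3, "text": "subtle hate speech the classifier misses"},
]
published, queue = moderate(posts, user_reports={3})
# Post 2 is caught automatically; post 3 reaches review only because it was reported.
```

The gap the text identifies is visible in the control flow: a subtle violation that no one reports (e.g. post 3 with `user_reports=set()`) passes the first barrier and never enters the review queue, so it simply propagates.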
Relevant vs Reliable
Because FB is still essentially a commercial platform, personalized feeds exist to
keep what are essentially its customers, by maximizing both their retention on the
social medium and their use of it. This makes it clear that in this part of the
infosphere relevance carries greater weight than reliability, in the sense of being
accurate and factual by present standards. Information in a personalized feed may
thus be relevant but not necessarily reliable. One need not, of course, assume the
worst, for there are indeed times when the truth is what one needs least. Most of
the time, moreover, what one encounters in one’s FB feed is probably not
misinformation, especially considering the medium’s efforts to ramp up the measures
by which misinformation is immediately halted in its tracks. However, the nature of
a personal feed is self-fulfilling, and its algorithms only letting through
information they think one needs seems inherently problematic. Additionally, one
may be even more doubtful of FB given its increasing restrictions on its API, which
make it uncertain whether it is actually clearing misinformation at all: if this
misinformation really is relevant and essential to keeping its customers, would
filtering it out not run counter to its interests as a business? How would one even
notice that what one is seeing in one’s feed is misinformation if it is
personalized, especially given the PNAS study’s findings that confirmation bias is
not only stronger on a social medium but perhaps also actively encouraged? Thus,
with the combination of confirmation bias and selective exposure, it has been
observed that users tend to hone in on information that reinforces their own
preconceived notions, even if that information is not factual and is, at times,
even contradictory.
This is because the algorithms responsible for this personalization of information,
specifically on the social medium of Facebook, are still unreliable in their
ability to detect and accurately sort the information they come across by either
its importance or its "truthfulness". It has been surmised that 63% of Facebook’s
users acquire their news from the social medium (Newman, Levy, & Nielsen, 2015).
Yet Facebook, although it has intensified its fight against misinformation,
apparently subjects news to the same popularity dynamics, such as clicks, shares
and general reading time, as any other content on its medium (Newman et al., 2015).
The public interest is now overridden by the interest of the public.
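The "popularity dynamics" just described can be made concrete with a minimal sketch: news items scored purely by engagement signals, with no term for reliability anywhere in the formula. The weights and field names are invented for illustration and do not correspond to any real ranking system.

```python
# Sketch of popularity-driven ranking: items are scored only by engagement
# signals (clicks, shares, reading time). Reliability is carried in the data
# but never consulted by the score. Weights are illustrative assumptions.

def engagement_score(item):
    return item["clicks"] + 2 * item["shares"] + 0.1 * item["read_seconds"]

def rank_news(items):
    return sorted(items, key=engagement_score, reverse=True)

news = [
    {"title": "careful fact-check", "clicks": 120, "shares": 5,
     "read_seconds": 900, "reliable": True},
    {"title": "viral rumour", "clicks": 500, "shares": 300,
     "read_seconds": 200, "reliable": False},
]
ranked = rank_news(news)
# The unreliable but engaging item wins, since reliability never enters the score.
print(ranked[0]["title"])  # -> viral rumour
```

Under such a scheme, "the interest of the public" (what gets clicked and shared) mechanically overrides "the public interest" (what is accurate), which is precisely the inversion the paragraph above describes.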
Notes:
Facebook
– One of the few social media platforms to have endured, and which consequently
reigns with few serious competitors.
– Headed by Mark Zuckerberg, the medium makes most of its revenue
from advertisements.
– Its innovation, user-friendly interface and general emphasis on social sharing
and forming connections with other people made it a much better social platform,
allowing it to successfully reach a wider audience.
Misinformation
– Misinformation is defined as "false or incorrect information that is spread intentionally
or unintentionally (i.e. without realizing it is untrue)"[2]
Echo Chamber
- An echo chamber, in the context of the infosphere, is defined by the Cambridge
Dictionary as “a situation in which people only hear opinions of one type, or
opinions that are similar to their own”.
- It has also been defined as clusters of homogeneity (Bessi et al., 2017).
Algorithm
Decision Process: Some also defined it as “any procedure or decision process, however
ill-defined” (Hill 2015, p.36)
Implement: Another definition is made relevant by the popular usage of “algorithm”,
as in “‘algorithms’ that suggest potential mates for single people and algorithms
that detect trends of financial benefit to marketers” (Hill 2015, p. 36).
“Algorithm” in these cases does not refer to any specific formula but rather to
“the implementation and interaction of one or more algorithms in a particular
program, software or information system” (Allo et al., 2016).
References: