By Dave Avran
T Nhaveen, 18, died in Penang last week after a brutal beating in which he was burned on his back, had his genitals twisted and was sodomised with a blunt object.
Even after his death, unscrupulous people spread fake pictures of a naked man with his hands tied, purportedly of Nhaveen, on social media. However, several tattoos and a skinhead haircut soon put paid to that claim.
At the same time, a request for monetary aid, allegedly from Nhaveen’s family, went viral on social media.
The WhatsApp message included a CIMB account number under the name of a “Shanmugam Mahalingam” and asked recipients to “kindly and lovingly donate some money as our token of compensation for our late dear brother and friend Navi (Nhaveen) who passed away today, Thursday 15th June, 2017.
“You can donate to the account above. Please give wholeheartedly. This collection shall be our humble offering to his family,” the message read.
Yes, social media is at the very heart of the misinformation problem, but if we’re going to address this problem, we need to look at the very specific ways that social media is driving fake news and misleading content.
Ironically, social media has ended up hurting our democracies for the very reason it was once greeted with enthusiasm: because anyone can create a blog, post a YouTube video or send out a tweet, established media outlets no longer have a lock on creating or distributing the news.
Certainly the accessibility of social media has diversified and democratised media creation. But it’s also the reason that a random group of Russian teenagers could become a fake news powerhouse. Social media has become the overwhelming distribution network for fake news and other forms of misinformation.
Social media has contributed to the fragmentation of audience attention for reasons that are closely related to the phenomenon of media democratisation.
As the explosion in online media sources gives people more and more choices of where to put their attention, readers and viewers increasingly gravitate to the specific sources and stories that appeal to their narrow interests and worldview.
This creates the opportunity for media creators to thrive by serving people the specific news and commentary they want to see, making it inevitable that some would seize the political and financial opportunities without worrying about whether the content they create is actually true.
As if audience fragmentation wasn’t a big enough problem, social media platforms have amplified the divisions among Internet users. As our particular interests and preferences manifest in what we choose to view or share online, the algorithms driving major social networks take note of what we like, and what we avoid.
To encourage us to spend more and more time on Facebook, its algorithm shows us the kind of content it knows we like; Twitter, YouTube and every other social platform do the same thing. That leads to a situation in which many of us spend our time online in a “filter bubble”: an online conversation in which we only hear from people and publications that reinforce our pre-existing worldview.
That in turn feeds the misinformation ecosystem, because we are less and less likely to get exposed to facts or perspectives that challenge us, which means that when we see stories that aren’t true, we may not see those stories get corrected.
Social media has contributed to the shortening of our attention span. There is little doubt that digital distraction has made it harder and harder for people to pay sustained attention to long-form content. As attention spans get shorter, news stories have to get simpler, even though our political, economic and social challenges are only getting more complex. In a media environment that rewards stories that can be quickly absorbed and shared, both depth and accuracy become frequent casualties.
While humour can be a very effective way of challenging dominant narratives or bringing attention to complex issues, news parodies and satire can inadvertently serve to disseminate false information. If audiences don’t realise that what they’re watching, hearing or reading is intended as a joke, they can end up repeating or sharing it as truth.
The social media economy of attention, measured in clicks, likes and shares, rewards provocative content. A balanced story is all well and good, but if you really want to explode on Facebook, write a strongly opinionated article with a polarising headline. This creates incentives to commission or create inflammatory commentary (or biased reporting) rather than accurate, balanced reporting.
Fake news doesn’t just come in the form of text. Images like those ubiquitous memes can be a powerful way of transmitting false information. Social media loves images, so once information is embedded in a shareable image, it spreads quickly and may be readily accepted as truth.
Yet social media can also be the cure for misinformation. Hearts and minds aren’t necessarily won over by an endless stream of facts; personal relationships and conversations are a crucial part of opening people’s minds to new information and ideas.
Therefore the social media users who spread misinformation can also stop it. Social verification, which involves media consumers in confirming the veracity of accurate stories or refuting false ones, is a key tactic for tackling misinformation.
In its simplest form, you see social verification all the time on social networks: whenever you see a friend backing up (or debunking) a Facebook post by adding a credible link in the comment thread, that’s social verification. And social media data can be an important resource for assessing the validity of a story or source.
If social media is the context in which many media consumers encounter fake news, then it can also be the context in which inaccuracies may be quickly corrected by fact checking.
Selamat Hari Raya and Maaf Zahir & Batin