
Fake news is difficult to identify and fix, requiring new regulations and technologies to combat it.

Fake news, also known as junk news or pseudo-news, is a form of yellow journalism or propaganda consisting of deliberate disinformation or hoaxes spread via traditional print and broadcast news media or online social media. This type of news, whether found in traditional outlets, on social media or on fake news websites, has no basis in fact but is presented as factually accurate.

 

Fake news is usually written and published with the intent to mislead in order to damage an agency, entity, or person, and/or to gain financially or politically, often using sensationalist, dishonest, or outright fabricated headlines to increase readership. Similarly, clickbait stories and headlines earn advertising revenue from this activity.

 

The false information is often caused by reporters paying sources for stories, an unethical practice called checkbook journalism. The rise of digital news has revived and amplified the use of fake news, or yellow journalism. Such stories often reverberate as misinformation across social media and occasionally find their way into mainstream media as well.

 

The relevance of fake news has increased in post-truth politics. For media outlets, the ability to attract viewers to their websites is necessary to generate online advertising revenue. Publishing a story with false content that attracts users benefits advertisers and improves ratings. Easy access to online advertisement revenue, increased political polarization, and the popularity of social media, primarily the Facebook News Feed, have all been implicated in the spread of fake news, which competes with legitimate news stories.

 

Hostile government actors have also been implicated in generating and propagating fake news, particularly during elections. Fake news punctuated some of the most important elections of recent years, including 2016’s Brexit referendum and the U.S. presidential campaign.

 

Thanks to social media, fake news can now be disseminated at breakneck pace to vast audiences that are often unable or unwilling to separate fact from fiction. Studies suggest that fake news spreads up to six times faster on social media than genuine stories, while false news stories are 70 percent more likely to be shared on Twitter. Observers call it “spam on steroids.” If one spam email is sent to only 1,000 people, it effectively dies. However, if fake news is sent to the same number of recipients, it’s more likely to be shared, become viral, and eventually reach millions.
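The contrast between one-shot spam and resharable fake news can be sketched with a simple branching-process model. The sharing probabilities and contact counts below are purely illustrative assumptions, not figures from the studies cited above:

```python
# Illustrative branching-process model of message spread.
# A post starts with `seeds` initial recipients; each recipient
# re-shares to `k` contacts with probability `p`. The expected
# audience after g generations is seeds * sum((p*k)**i, i=0..g).

def expected_reach(seeds: int, p: float, k: int, generations: int) -> float:
    """Expected total audience after a number of sharing generations."""
    r = p * k  # expected new recipients created per current recipient
    return seeds * sum(r ** g for g in range(generations + 1))

# Spam-like content: almost nobody forwards it (p = 0.0005, k = 2),
# so the audience stays near the initial 1,000 and the message dies.
spam = expected_reach(1000, 0.0005, 2, 10)

# Share-worthy fake news: each reader forwards to ~3 contacts with
# 40% probability, so r = 1.2 > 1 and the reach compounds each
# generation, eventually growing past a million.
viral = expected_reach(1000, 0.4, 3, 30)

print(f"spam reach  ~ {spam:,.0f}")
print(f"viral reach ~ {viral:,.0f}")
```

The qualitative point is the threshold at r = p*k: below 1 the chain fizzles like spam, above 1 it compounds, which is why shareability matters more than initial audience size.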

 

“Pizzagate and similar events show the unintended consequences of social media,” said NIU psychology professor Keith Millis, who studies comprehension and memory. Millis is a co-author on the analysis along with Dylan Blaum, an NIU graduate student in psychology, and Jean-François Rouet, director of research at the French National Center for Scientific Research at the University of Poitiers, France.

 

The analysis notes that the spread of false information can have dire consequences. The authors point to the example of the “pizzagate conspiracy” that cropped up online prior to the November 2016 election. It falsely claimed that Hillary Clinton was the head of a sex-trafficking ring that was using a pizza restaurant in Washington, D.C., to hold children captive. In December of that year, an assailant on a self-appointed mission to save the children entered the restaurant and fired an assault rifle before being arrested.

 

“…while the debate in the US is hyper-focused on the Facebook newsfeed, globally, the real challenge is closed messaging apps like WhatsApp, Viber, Telegram, and FB Messenger,” wrote the study’s authors, who also conducted semi-structured interviews with a mix of 30 stakeholders, including internet service providers (ISPs), policymakers, media and influencers, fact-checkers, academics, and political parties. “Those who are relatively new to technology, the internet and smartphones may be more susceptible to fake news than others.”

 

“Importantly, our analysis shows that basic research in the social and behavioral sciences can help to explain why these events occur,” Millis says. “Knowing why something occurs is a huge step in changing behavior.”

 

The Indian government has been pressuring social networks to implement steps to curb the spread of fake news, which gains even more significance ahead of the 2019 Lok Sabha elections in India. Just 2.7% of Indians believe the information they receive on social media platforms like Facebook, Twitter, and WhatsApp, according to a survey by the non-profit Internet and Mobile Association of India (IAMAI) and the data journalism portal Factly. Most of those surveyed said the real danger is misinformation disseminated by trusted news outlets or by friends and family.

Challenges of Fake News

“Fake news is a bigger problem than we thought, and we can’t rely on simple answers,” says Northern Illinois University professor Anne Britt, an expert in cognitive and instructional psychology. Britt is lead author of the analysis, published earlier this month in the journal Policy Insights from the Behavioral and Brain Sciences.

Britt says the way human memory is wired, our own biases in selecting and interpreting information, and the sheer volume of online misinformation, make us all susceptible to being duped. “Now more than ever, people need to become aware of their own cognitive biases and how to avoid being exploited because of them,” Britt says.

 

She and her co-authors cite a wide body of research literature that helps explain how and why the glut of online information, often spread rapidly via social media, poses challenges for the human thought process.

 

Past research has shown:

  • Human interpretation of information is guided, not by objectivity, but by personal goals and prior beliefs.
  • Interpreted memories are susceptible to change, depending on the situation.
  • People are more likely to search for belief-consistent information.
  • People are more likely to accept belief-consistent information without serious scrutiny.
  • Simply repeating information can increase confidence in its perceived truth.
  • Information we later learn was incorrect remains in memory and can continue to affect us.
In the pre-internet era, the public relied on gatekeepers such as highly trained librarians, publishers and journalists. Nowadays, even the source of online information can be veiled or intentionally deceptive, the authors note, adding that the problem is exacerbated by the very nature of social media: by design, it encourages users to seek out and share information with like-minded thinkers rather than exposing themselves to different perspectives.

 

The scientists say accurate evaluation requires readers to set and monitor accuracy as a goal, employ strategies to achieve it, and accept the time and effort that systematic evaluation demands.

 

“Self-monitoring for accuracy is very time consuming and requires some work,” Britt says. “Even the most skilled person isn’t going to do that all the time because we’re constantly getting tidbits of information. It just takes too much effort to track everything down.”

 

Protection from Online Falsehood and Manipulation Act (POFMA) 2019

Singapore has enacted POFMA for four main reasons. Firstly, to prevent the communication of false statements of fact and enable measures to counteract the effects of such communication. Secondly, to suppress the financing, promotion and support of online locations in Singapore that repeatedly communicate false statements of fact. Thirdly, to enable measures to be taken to detect, control and safeguard against coordinated inauthentic behaviour and other misuse of online accounts and bots. And lastly, to enable measures to be taken to enhance the disclosure of information concerning paid content directed towards a political end.

 

Article 11 of Part 3 concerns communicating a “Correction Direction”. A Correction Direction can require a person to communicate a Correction Notice (a statement nullifying the false information, with a corresponding true statement and/or a link to it placed next to the false statement) within a specified time limit to all persons who received the false information, and/or to publish the correction notice in a newspaper or other print publication in Singapore. Article 12 of Part 3 empowers the Competent Authority to issue a “Stop Communication” direction, and Article 16 of Part 3 empowers the Minister to direct the Infocomm Media Development Authority (IMDA) to order an internet access service provider to disable access to an online location for all end users by issuing an “Access Blocking Order”.

 

An internet intermediary is defined as a person providing internet intermediary services (such as social networking services, search engines, content aggregators, internet-based messaging services, video sharing services, etc.). Part 4 of POFMA deals with directions to internet intermediaries and providers of mass media services. Article 21 provides for a “Targeted Correction Direction”, wherein the internet intermediary that was used as a medium to propagate the false information is required to send a correction notice within a specified time limit to all end users in Singapore who accessed the false information. In addition, Article 22 deals with the issuance of “Disabling Directions” requiring the internet intermediary to stop end-user access in Singapore to specified false information, while Article 28 deals with the “Access Blocking Order”.

 

Article 7, Part 2 of the act states that a piece of news is deemed fake and worthy of action if and only if it meets two criteria. Firstly, it must be a false statement of fact. Secondly, communication of the false news must be likely to affect the security of Singapore; be prejudicial to public health, safety, tranquillity or finances; be prejudicial to Singapore’s relations with other countries; influence the outcome of elections; incite feelings of hatred, enmity or ill will; or diminish public confidence in the state or its institutions. Article 10 of Part 3 of the act makes any Minister in the Singapore government competent to classify news as fake and take appropriate action to deal with it.
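Purely as an illustration, the two-part test in Article 7 can be encoded as a predicate. The field names and harm labels below are my own shorthand for this sketch, not official terms from the Act:

```python
# Illustrative encoding of the two-part test in Article 7, Part 2 of
# POFMA. The labels below are invented shorthand, not statutory text.

# Harms listed by the Act that can make a false statement actionable.
POFMA_HARMS = {
    "security_of_singapore",
    "public_health_safety_tranquillity_or_finances",
    "foreign_relations",
    "electoral_outcome",
    "hatred_enmity_or_ill_will",
    "public_confidence_in_institutions",
}

def is_actionable(is_false_statement_of_fact: bool, likely_harms: set) -> bool:
    """A statement is actionable if and only if BOTH criteria hold:
    (1) it is a false statement of fact, and
    (2) its communication is likely to cause at least one listed harm."""
    return is_false_statement_of_fact and bool(likely_harms & POFMA_HARMS)

# A false statement likely to influence an election: actionable.
print(is_actionable(True, {"electoral_outcome"}))   # prints True
# A true statement (or opinion) never satisfies the first criterion.
print(is_actionable(False, {"electoral_outcome"}))  # prints False
# A false statement causing no listed harm fails the second criterion.
print(is_actionable(True, {"hurt_feelings"}))       # prints False
```

The conjunction matters: falsity alone is not enough, and neither is harm alone, which is why the Act targets false statements of fact rather than opinion or satire.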

 

Part 5 of the act deals with the Declaration of online locations. Such a Declaration happens “when an online location is responsible for propagating three or more different false statements subject to active Part 3 and/or Part 4 directions”. Once an online location has been “Declared”, its owner is required to inform all end users who access it of its declared status. Suitable directions can also be issued to IMDA to block access to the declared online location for a specified period of time.

 

Part 6 of the act deals with directions to the internet intermediary to counteract inauthentic online accounts and coordinated inauthentic behaviour. Part 7 of the act deals with Other Measures, while Part 8 specifies the appointment of alternate authority during elections and other specific periods. The final Part 9 of the act deals with miscellaneous issues.

 

Identifying and combating fake news

Fake news has become increasingly prevalent over the last few years, with over 100 incorrect articles and rumors spread incessantly around the 2016 U.S. election alone. These fake news articles tend to come from satirical news websites or from individual websites with an incentive to propagate false information, either as clickbait or to serve an agenda. Because they are deliberately crafted to promote incorrect information, such articles are quite difficult to detect.
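As a rough sketch of how automated detection is often attempted, a bag-of-words Naive Bayes baseline can score text by how "fake-like" its vocabulary is. The training headlines below are invented toy data, and a real system would need a large labelled corpus and far richer features:

```python
import math
from collections import Counter

# Minimal bag-of-words Naive Bayes classifier, a common baseline in
# fake-news detection research. The headlines are toy examples.

TRAIN = [
    ("shocking secret doctors dont want you to know", "fake"),
    ("you wont believe this miracle cure", "fake"),
    ("celebrity arrested in secret conspiracy", "fake"),
    ("parliament passes annual budget bill", "real"),
    ("central bank holds interest rates steady", "real"),
    ("city council approves new transit plan", "real"),
]

def train(examples):
    """Count words and documents per label."""
    word_counts = {"fake": Counter(), "real": Counter()}
    doc_counts = Counter()
    for text, label in examples:
        doc_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, doc_counts

def classify(text, word_counts, doc_counts):
    """Pick the label with the highest posterior log-probability."""
    vocab = set(word_counts["fake"]) | set(word_counts["real"])
    best_label, best_score = None, -math.inf
    for label in word_counts:
        # log prior + log likelihoods with add-one (Laplace) smoothing
        score = math.log(doc_counts[label] / sum(doc_counts.values()))
        total = sum(word_counts[label].values())
        for word in text.split():
            score += math.log(
                (word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

counts, docs = train(TRAIN)
print(classify("secret miracle cure they dont want you to know", counts, docs))
# prints 'fake'
print(classify("council passes transit budget plan", counts, docs))
# prints 'real'
```

This kind of surface-level classifier illustrates both the promise and the limit of automated detection: it catches sensationalist vocabulary, but a well-written fabrication that mimics sober news language would slip straight through.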

The International Federation of Library Associations and Institutions (IFLA) published a summary to assist people in recognizing fake news. Its main points are:

  • Consider the source (to understand its mission and purpose)
  • Read beyond the headline (to understand the whole story)
  • Check the authors (to see if they are real and credible)
  • Assess the supporting sources (to ensure they support the claims)
  • Check the date of publication (to see if the story is relevant and up to date)
  • Ask if it is a joke (to determine if it is meant to be satire)
  • Review your own biases (to see if they are affecting your judgement)
  • Ask experts (to get confirmation from independent people with knowledge).
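The checklist above can be sketched as a simple structured score. The field names, equal weighting and 0-to-8 scale below are my own simplification for illustration, not part of IFLA's guidance:

```python
from dataclasses import dataclass

# Illustrative sketch of the IFLA checklist as a credibility score.
# Each field mirrors one checklist item; the equal weighting is an
# assumption made for this example.

@dataclass
class ArticleCheck:
    credible_source: bool        # Consider the source
    read_full_story: bool        # Read beyond the headline
    real_named_authors: bool     # Check the authors
    sources_support_claims: bool # Assess the supporting sources
    current_and_relevant: bool   # Check the date of publication
    not_satire: bool             # Ask if it is a joke
    biases_reviewed: bool        # Review your own biases
    expert_confirmed: bool       # Ask experts

    def score(self) -> int:
        """Number of checklist items passed (0..8)."""
        return sum(vars(self).values())

check = ArticleCheck(
    credible_source=False, read_full_story=True, real_named_authors=False,
    sources_support_claims=False, current_and_relevant=True,
    not_satire=True, biases_reviewed=True, expert_confirmed=False,
)
print(check.score())  # prints 4: half the items failed, treat with caution
```

In practice the items are not equally weighted, of course: an article with no real authors and no supporting sources should raise more alarm than one that is merely a little stale.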

 

The world’s most popular instant messaging app, the Facebook-owned WhatsApp, now has new tools for Groups on the platform. Specifically, you can now control who can add you to a WhatsApp Group. This means that no one, including those not saved as a contact in your phone, can add you to a WhatsApp Group without your explicit consent. This could be just another step in curbing the spread of fake news and misinformation on WhatsApp, something the instant messaging platform has been criticized for in many countries.

 

One measure that WhatsApp has already implemented is a limit on how many times a user can forward a particular message on the platform. WhatsApp is also believed to be testing a feature that will show how many times a particular message has been forwarded, creating a sort of trail that could give a better indication of where the chain started; this could be useful in identifying the source of potentially incorrect information being spread on the platform.
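These two mechanisms can be sketched together as a message that carries a hop counter. The `Message` class, its fields, and the limit of five recipients per forward are assumptions for this illustration, not WhatsApp's actual implementation:

```python
# Sketch of forward limiting plus a forward-count trail. Hypothetical
# design, not WhatsApp's real data model or API.

class Message:
    FORWARD_LIMIT = 5  # assumed cap on chats per forwarding action

    def __init__(self, text: str):
        self.text = text
        self.times_forwarded = 0  # hops along the forwarding chain

    def forward(self, recipients: list) -> list:
        """Forward to at most FORWARD_LIMIT recipients; each copy
        carries an incremented hop count, forming a trail that hints
        at how far the message is from its original sender."""
        if len(recipients) > self.FORWARD_LIMIT:
            raise ValueError("forward limit exceeded")
        copies = []
        for _ in recipients:
            copy = Message(self.text)
            copy.times_forwarded = self.times_forwarded + 1
            copies.append(copy)
        return copies

msg = Message("breaking: totally unverified claim")
hop1 = msg.forward(["alice", "bob", "carol"])   # first hop, 3 copies
hop2 = hop1[0].forward(["dave"])                # second hop
print(hop2[0].times_forwarded)  # prints 2: two hops from the source
```

The limit caps the branching factor of the spread (each message can seed at most a handful of new chats per hop), while the hop count gives moderators a signal that a message is old, heavily recirculated content rather than something the sender wrote.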

 

The authors underscore the need for a concerted interdisciplinary effort involving educators, researchers, trained journalists, social media companies and the government. Their recommendations are:

 

Public and in-school awareness campaigns involving educators, journalists and government agencies are needed to teach consumers how memory and comprehension processes work, how personal biases can be manipulated and how to evaluate information for accuracy.

 

Political and social media business leaders need to adopt regulations and/or guidelines to both inform the public about their susceptibility to cognitive biases and, wherever possible, prohibit use of information channels for fraudulent commercial and other anti-social purposes.

 

Funding agencies need to support rigorous interdisciplinary research on argumentation and persuasion to help identify interventions for different populations of readers in K-12 and beyond.

 

“Our country guarantees free speech, but there are limits to free speech,” Britt says. “Just like the automobile created a need for traffic lights and speed limits, the internet calls for some sort of traffic signals or guidelines.

 

“I know there are people who hold to the ideology that information should be unfiltered,” she adds. “But we can’t handle volumes and volumes of unfiltered information. We can’t work that hard all the time to make sure we’re getting the truth.”

 

In the age of instant information via the internet, fake news and false information pose a dangerous threat to society that requires a concerted response involving government, educators, social media businesses, professional journalists and consumers, the researchers conclude.

 

 

References and Resources also include:

https://scienceblog.com/506962/psychologists-no-easy-fix-for-fake-news/

https://idsa.in/idsacomments/singapore-fake-news-act-170519

 

About Rajesh Uppal
