Google explains how it tries to control fake news in search, News, YouTube, and ads

“The search giant has presented a white paper at the Munich Security Conference detailing how it fights disinformation”

The growing menace of fake news is what Google, Facebook, and other internet giants have been most criticised for in the past couple of years. These platforms have been trying to control it with several measures, including awareness campaigns. Now, Google has presented a white paper at the Munich Security Conference detailing how it fights disinformation, aka ‘fake news’ or ‘post-truth’, in Search, News, YouTube, and ads.


To start with, the search giant defines ‘disinformation’ as “deliberate efforts to deceive and mislead using the speed, scale, and technologies of the open web.” Google admits it is challenging to fight misinformation on its platforms because of the near-impossibility of determining the intent behind a piece of content.

“The entities that engage in disinformation have a diverse set of goals. Some are financially motivated, engaging in disinformation activities for the purpose of turning a profit. Others are politically motivated, engaging in disinformation to foster specific viewpoints among a population, to exert influence over political processes, or for the sole purpose of polarizing and fracturing societies. Others engage in disinformation for their own entertainment, which often involves bullying, and they are commonly referred to as trolls,” the report reads.


To tackle this challenge, Google has designed a framework of three strategies, tailored to each product. The first, ‘Make Quality Count’, organises and surfaces content using ranking algorithms designed not to encourage or promote “the ideological viewpoints of the individuals that build or audit them.”

The second strategy, ‘Counteract Malicious Actors’, targets content creators who manage to deceive the ranking systems to gain more visibility. Google believes that algorithms alone cannot verify the accuracy of a piece of content, so it has invested in systems that reduce spammy behaviour. It also relies on human reviewers.

The third and final strategy is ‘Give Users More Context’. As per Google, providing a “diverse set of perspectives [is] key to providing users with the information they need to form their own views.” The mechanisms include knowledge panels, fact-check labels, ‘Full Coverage’ in Google News, the ‘Breaking News’ panel on YouTube, ‘Why this ad’ on Google Ads, and feedback buttons in Search, YouTube, and advertising products.


Beyond these strategies, Google pledges to support quality journalism through the Google News Initiative and by working with outside experts and researchers. It notes that Google Search and News share the same defences against spam, but do not employ the same ranking systems and content policies: Google Search removes content only in very limited circumstances, whereas Google News is more restrictive. Interestingly, the company claims there is very little personalisation of search results based on users’ interests or search history.

Google also notes that the field of AI-generated “synthetic media” is fast-moving, and it is hard to predict what might happen in the near future. To help prepare for this issue, “Google and YouTube are investing in research to understand how AI might help detect such synthetic content as it emerges, working with leading experts in this field from around the world.”

You can read the full report about how Google fights disinformation across its properties by downloading the PDF file.

Ashish is one of the youngest members of 91mobiles, and a recent tech geek convert. When he's not churning out articles, you’ll find him watching sports or binging TV shows. He listens to John Mayer when beating Delhi traffic.