Between 2018 and 2020, multiple red flags about Facebook’s operations in India were raised internally, ranging from a “continuous onslaught of polarising nationalistic content” to “false or inauthentic” messaging, from “misinformation” to content “denigrating” minority communities.
Despite these explicit warnings from staff mandated to perform oversight functions, an internal review meeting held in 2019 with Chris Cox, then Vice President of Facebook, found “comparatively low prevalence of problem content (hate speech, etc)” on the platform.
In the months leading up to the Lok Sabha elections, two studies highlighting hate speech and “problem content” were released.
A third report, published in August 2020, revealed that the platform’s AI (artificial intelligence) tools were unable to “identify vernacular languages” and had therefore failed to detect hate speech or harmful content.
The minutes of the discussion with Cox ended: “Survey informs us that individuals typically feel comfortable. According to experts, the country is relatively stable.”
These glaring gaps in response are disclosed in documents submitted to the US Congress in redacted form by the legal counsel of former Facebook employee and whistleblower Frances Haugen, as part of her disclosures to the US Securities and Exchange Commission (SEC).
A group of international news organisations, including a private news media agency, reviewed the redacted versions received by the US Congress.
A private news media agency contacted Facebook for comment on Cox’s meeting and the internal memos, but received no response.
The discussions with Cox took place a month before the Election Commission of India announced the seven-phase schedule for the Lok Sabha elections, which began on April 11, 2019.
The conversations with Cox, who left the company in March 2019 only to return as Chief Product Officer in June 2020, did flag that “major concerns in sub-regions may be lost at the country level.”
According to the first paper, “Adversarial Harmful Networks: India Case Study,” up to 40 per cent of the top VPV (viewport views) posts in West Bengal were fraudulent or inauthentic.
Viewport views, or VPV, is a Facebook metric that measures how often users actually view content.
The second, an internal report written by an employee in February 2019, is based on the findings of a test account: a dummy user with no friends, created by a Facebook employee to better study the impact of the platform’s various features.
The test user’s news feed had “become a near continual barrage of polarised nationalistic content, misinformation, and violence and gore” in just three weeks, according to the research.
The account, created on February 4 with no friends and a “very barren” news feed, followed only the content recommended by the platform’s algorithm.
When a user has no friends, the “Watch” and “Live” tabs are practically the only ones with content, according to the report.
The employee’s report stated that “the quality of this stuff is… not optimal,” and that the algorithm frequently recommended “a load of softcore porn” to the user.
Over the next two weeks, particularly after the terror attack in Pulwama on February 14, the algorithm began to propose groups and sites centred primarily on politics and military topics. “I’ve seen more photographs of deceased individuals in the last three weeks than I have in my whole life,” the test user claimed.
“Our teams have developed an industry-leading process of reviewing and prioritising which countries have the highest risk of offline harm and violence every six months,” a spokesperson for Meta Platforms Inc said in response to a specific query based on conclusions presented in the review meeting with Cox. “We make these decisions in accordance with the United Nations Guiding Principles on Business and Human Rights, as well as a study of societal damages, the extent to which Facebook’s products contribute to these harms, and significant developments on the ground.”
On October 28, the Facebook group was renamed Meta Platforms Inc, bringing together multiple apps and technology under a single firm name.
Asked whether the study’s findings that a lack of Hindi and Bengali classifiers was leading to violent and inciting content were taken into account before the conclusions presented in the review with Cox, the spokesperson said: “We invest in internal research to proactively identify where we can improve – which becomes an important input for defining new product features and our policies in the long-term. Since 2018, we’ve had hate speech classifiers in Hindi and Bengali. In early 2021, the first classifiers for violence and provocation in Hindi and Bengali went online.”
The company’s spokesperson specifically addressed the test user analysis, stating that the “exploratory effort of one hypothetical test account spurred deeper, more thorough research of our recommendation systems, and led to product improvements.” They added: “Following more thorough analysis, we made product improvements such as removing questionable content and civic and political groups from our recommendation systems.” Separately, they stated, “Our work to combat hate speech continues, and we have upgraded our hate classifiers to include four Indian languages.”
Facebook told a private news media agency in October that it had invested heavily in technology to detect hate speech in a variety of languages, including Hindi and Bengali.
“As a result, this year we’ve cut the amount of hate speech individuals see in half. It’s now down to 0.05 per cent. Hate speech directed at underprivileged groups, such as Muslims, is on the rise around the world. So we’re beefing up enforcement and committing to upgrading our standards as hate speech on the internet evolves,” a Facebook representative said.
However, in August 2020, employees questioned Facebook’s “investment and plans for India” to prevent hate speech content, citing the company’s algorithm and proprietary AI tools’ inability to recognise hate speech and harmful content.
“Based on the call I received earlier today, it appears that AI (artificial intelligence) is unable to recognise vernacular languages, so I’m curious as to how and when we plan to address this in our country?” one internal memo asked. Another stated, “It is very evident that what we have now is insufficient.”
The memos emerged from a meeting between Facebook staff and senior executives, in which employees questioned why Facebook did not have “even basic key word detection set up to catch” potential hate speech.
“It’s incomprehensible to me that we don’t have even basic key word detection in place to prevent this. After all, we can’t be proud of ourselves as a firm if we continue to allow such barbarism to thrive on our network,” one employee commented during the discussion.
Employees also inquired about how Facebook planned to “regain” the trust of colleagues from minority populations, according to the memos, especially after a senior Indian Facebook executive uploaded a post on her personal Facebook profile that many believed “denigrated” Muslims.
The Meta spokesperson said the claim that Facebook’s AI could not recognise vernacular languages was false. “Starting in 2018, we implemented hate speech classifiers in Hindi and Bengali. We also have hate speech classifiers in Tamil and Urdu,” they said in response to a question.