A new story from MIT Tech Review has revealed that Facebook’s Ethical AI team has been failing to properly monitor and respond to bias in its algorithms. As a result, the algorithms have amplified content that violates community standards. This means that the social media giant is not only profiting from hate and misinformation, but actively encouraging them.
MIT Tech Review story on Facebook’s Ethical AI team meltdown
Facebook’s ethical AI team has a pretty big problem. According to an MIT Tech Review story, its decision-making process is not exactly scientific. In fact, it has been questioned in a number of prominent studies of censorship. Regardless, the company has been trying to tamp down the amount of misinformation on its platform.
The company has rolled out an array of content restrictions to try to mitigate its most egregious sins. It has also pushed to encourage encrypted messaging via WhatsApp, which it owns. But it has hardly been a model of user-centricity.
Facebook’s decision-making model has a lot of flaws, but the most notable is its aversion to the old-fashioned truth. Aside from the usual suspects, the social networking giant has been criticized for its inability to do the requisite due diligence on the truth, especially in the context of politics.
Facebook’s willingness to profit from hate and misinformation
Facebook’s willingness to profit from hate and misinformation has come under scrutiny. This is due in part to an investigation into the company’s business practices, but also because of the recent death of a Black Lives Matter protestor in Kenosha, Wisconsin.
A group of nine civil rights organizations, including the National Hispanic Media Coalition and Common Sense, has joined together in a campaign to pressure the social media giant to address its problems. They have formed a coalition called Stop Hate for Profit.
According to the Wall Street Journal, a whistleblower at Facebook is preparing to testify before Congress. Frances Haugen, a former Google employee, joined Facebook in 2019 to work on addressing misinformation. She plans to go before the Senate’s Subcommittee on Government Oversight and Reform in an effort to push for regulation of Facebook.
Facebook’s algorithms amplify content that nearly-but-not-quite violates its own community standards
Facebook’s algorithms amplify content that nearly-but-not-quite violates its own community standards, according to internal documents. That’s despite the fact that the company has made significant strides in detecting hate speech and other problematic content.
Facebook’s latest public report shows that executives have taken steps to enforce the company’s policies. But they also acknowledge that the system isn’t reliable.
The company’s artificial-intelligence systems comb through billions of posts to find misinformation, fake news, and other problematic content. Some algorithms push content up the news feed, while others remove it. These techniques work well for spam and spam-like content, but they fail in sensitive areas.
Classifiers, the key components of Facebook’s content moderation system, are used to detect and flag bad content. They have been fairly successful at identifying and removing some categories of sensitive content, but they fail in areas that are polarizing or ambiguous. For example, Facebook users have reported finding videos of cockfights and car crashes in their feeds.
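To make the mechanism concrete, here is a minimal, hypothetical sketch of how a threshold-based moderation layer can leave nearly-but-not-quite violating content untouched. The thresholds, scores, and function names below are illustrative assumptions for the sake of the example, not Facebook’s actual system.

```python
# Hypothetical sketch of threshold-based moderation: a post is removed only
# when a classifier's violation score crosses a hard cutoff, so borderline
# posts (scores just under the line) stay in the feed and keep competing in
# ranking. All names and numbers here are illustrative assumptions.

from dataclasses import dataclass

REMOVE_THRESHOLD = 0.90   # assumed cutoff for taking a post down
DEMOTE_THRESHOLD = 0.70   # assumed cutoff for reducing distribution


@dataclass
class Post:
    text: str
    violation_score: float   # output of a trained classifier, 0.0 to 1.0
    engagement_score: float  # likes, comments, shares (used by ranking, not moderation)


def moderate(post: Post) -> str:
    """Return the action a simple rules layer would take on one post."""
    if post.violation_score >= REMOVE_THRESHOLD:
        return "remove"
    if post.violation_score >= DEMOTE_THRESHOLD:
        return "demote"
    # Borderline content (e.g. a score of 0.69) is neither removed nor
    # demoted, so it is treated like any other post downstream.
    return "keep"


if __name__ == "__main__":
    borderline = Post("provocative but not quite rule-breaking", 0.69, 0.95)
    print(moderate(borderline))  # -> "keep"
```

The point of the sketch is simply that a hard cutoff creates a band of content that is never acted on, which is the “nearly-but-not-quite” category the internal documents describe.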
Facebook’s unwillingness to limit user engagement
The Wall Street Journal has published a series based on internal documents leaked by a former Facebook employee. Those documents have plunged Facebook into its worst crisis since the Cambridge Analytica scandal.
Facebook has a history of ignoring its own rules. It has repeatedly allowed demagogues to post on its platform, including Tommy Robinson and Britain First, a nationalist group.
However, Facebook’s policy aims to limit the activities of “risky groups” such as militias. Moreover, Facebook has a zero-tolerance policy for hate speech.
One of the former employees who leaked the documents claims that Facebook has a clear incentive to ignore its own policies. As an example, she says the company intentionally targets kids.
This is reflected in the way Facebook incentivizes content: the underlying content-ranking algorithms actively undermine user autonomy.
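As a rough illustration of why engagement-driven ranking can work against user autonomy, here is a hypothetical scoring function in which comments and reshares dominate a post’s feed position. The weights and field names are assumptions made up for this example, not Facebook’s actual formula.

```python
# Hypothetical engagement-weighted ranking: posts that provoke the most
# comments and reshares rise to the top, regardless of whether the user
# would have chosen to see them. Weights are illustrative assumptions.

def ranking_score(likes: int, comments: int, reshares: int) -> float:
    """Toy feed-ranking score that rewards raw engagement."""
    return 1.0 * likes + 5.0 * comments + 30.0 * reshares


posts = [
    {"id": "calm_update", "likes": 120, "comments": 4, "reshares": 1},
    {"id": "outrage_bait", "likes": 80, "comments": 60, "reshares": 25},
]

# Sort the feed by engagement alone: the provocative post wins.
feed = sorted(
    posts,
    key=lambda p: ranking_score(p["likes"], p["comments"], p["reshares"]),
    reverse=True,
)
print([p["id"] for p in feed])  # -> ['outrage_bait', 'calm_update']
```

Under this kind of objective, content that provokes reactions outranks content users might actually prefer, which is the incentive problem the leaked documents describe.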
Facebook’s unwillingness to test algorithms for bias
Facebook’s refusal to tame its rabid fan base has led to a flurry of privacy lawsuits. While a plethora of privacy policies have been adopted, many have fallen short of the mark. This isn’t just a matter of lax regulation: a company as big as Facebook may simply be too big to keep a lid on. It’s no wonder the social media juggernaut has found itself in the crosshairs of a number of government officials. One such agency, the US Department of Justice, has a pending multi-billion-dollar settlement courtesy of Facebook. And even though the company’s top dogs have yet to be tamed, Facebook remains in the doghouse in the eyes of the aforementioned bureaucrats. These misdeeds are only the tip of the iceberg when it comes to Facebook’s privacy shortcomings.