Friday, November 27, 2020

Turkey Week: A Complete About Face(Book)

Why yes, Facebook has developed the tools to stop the viral spread of Trump cultist hate porn and election misinformation, but it refuses to use those tools because doing so would actually affect Trump cultists.


Several employees said they were frustrated that to tackle thorny issues like misinformation, they often had to demonstrate that their proposed solutions wouldn’t anger powerful partisans or come at the expense of Facebook’s growth.

The trade-offs came into focus this month, when Facebook engineers and data scientists posted the results of a series of experiments called “P(Bad for the World).”

The company had surveyed users about whether certain posts they had seen were “good for the world” or “bad for the world.” They found that high-reach posts — posts seen by many users — were more likely to be considered “bad for the world,” a finding that some employees said alarmed them.

So the team trained a machine-learning algorithm to predict posts that users would consider “bad for the world” and demote them in news feeds. In early tests, the new algorithm successfully reduced the visibility of objectionable content. But it also lowered the number of times users opened Facebook, an internal metric known as “sessions” that executives monitor closely.
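To make "demote them in news feeds" concrete, here is a minimal sketch in Python of score-based down-ranking. Everything in it (the Post record, the base_score and p_bad fields, the demote() helper, the strength knob) is an invented illustration under the assumption that a ranking score gets scaled down by the model's predicted probability; it is not Facebook's actual ranking code.

```python
# Hypothetical sketch of score-based demotion; all names and numbers are
# invented for illustration, not Facebook's real system.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    base_score: float   # engagement-driven ranking score
    p_bad: float        # predicted probability the post is "bad for the world"

def demote(posts, strength=1.0):
    """Down-rank posts in proportion to their predicted P(bad for the world).

    strength=1.0 wipes out the ranking score of a post the model is certain
    is "bad"; smaller values demote less aggressively.
    """
    def adjusted(post):
        return post.base_score * (1.0 - strength * post.p_bad)
    return sorted(posts, key=adjusted, reverse=True)

feed = [
    Post("a", base_score=0.90, p_bad=0.80),   # high reach, likely "bad for the world"
    Post("b", base_score=0.60, p_bad=0.05),
    Post("c", base_score=0.40, p_bad=0.10),
]
# The high-reach but "bad for the world" post "a" drops to the bottom of the feed.
for post in demote(feed, strength=1.0):
    print(post.post_id)
```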


“The results were good except that it led to a decrease in sessions, which motivated us to try a different approach,” according to a summary of the results, which was posted to Facebook’s internal network and reviewed by The Times.

The team then ran a second experiment, tweaking the algorithm so that a larger set of “bad for the world” content would be demoted less strongly. While that left more objectionable posts in users’ feeds, it did not reduce their sessions or time spent.
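A rough way to picture that second experiment, again with invented function names, thresholds, and numbers: lowering the cutoff widens the net so more posts count as "bad for the world," while a smaller strength softens the penalty each one receives.

```python
# Hypothetical illustration of the trade-off described above; the scoring
# scheme and every number here are invented, not Facebook's.
def adjusted_score(base_score, p_bad, threshold, strength):
    """Demote only posts whose predicted P(bad) crosses a threshold."""
    if p_bad >= threshold:
        return base_score * (1.0 - strength * p_bad)
    return base_score

# First experiment: a narrow set of posts, demoted strongly.
print(adjusted_score(0.90, p_bad=0.80, threshold=0.70, strength=1.0))   # 0.18

# Second experiment: a lower threshold catches a larger set of posts,
# but the weaker strength leaves their ranking mostly intact.
print(adjusted_score(0.90, p_bad=0.80, threshold=0.50, strength=0.30))  # ~0.68

# A milder post is untouched by the first configuration...
print(adjusted_score(0.70, p_bad=0.55, threshold=0.70, strength=1.0))   # 0.70 (no change)
# ...but is caught, gently, by the second.
print(adjusted_score(0.70, p_bad=0.55, threshold=0.50, strength=0.30))  # ~0.58
```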

That change was ultimately approved. But other features employees developed before the election never were.

One, called “correct the record,” would have retroactively notified users that they had shared false news and directed them to an independent fact-check. Facebook employees proposed expanding the product, which is currently used to notify people who have shared Covid-19 misinformation, to apply to other types of misinformation.

But that was vetoed by policy executives who feared it would disproportionately show notifications to people who shared false news from right-wing websites, according to two people familiar with the conversations.

Another product, an algorithm to classify and demote “hate bait” — posts that don’t strictly violate Facebook’s hate speech rules, but that provoke a flood of hateful comments — was limited to being used only on groups, rather than pages, after the policy team determined that it would primarily affect right-wing publishers if it were applied more broadly, said two people with knowledge of the conversations.
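One way to imagine a "hate bait" detector of that kind, with every name and threshold invented for illustration: score each reply with an existing comment-level hate-speech classifier and flag posts whose threads attract an unusually high share of hateful replies, even when the post text itself breaks no rule.

```python
# Hypothetical "hate bait" heuristic, purely illustrative.
from dataclasses import dataclass, field

@dataclass
class Comment:
    text: str
    toxicity: float   # score from some comment-level hate-speech classifier

@dataclass
class Post:
    post_id: str
    comments: list = field(default_factory=list)

def is_hate_bait(post, toxicity_cutoff=0.8, share_cutoff=0.5, min_comments=20):
    """Flag a post when a large share of its comments score as hateful."""
    if len(post.comments) < min_comments:
        return False  # too little signal to judge
    hateful = sum(1 for c in post.comments if c.toxicity >= toxicity_cutoff)
    return hateful / len(post.comments) >= share_cutoff

thread = [Comment("example reply", toxicity=0.9) for _ in range(15)] + \
         [Comment("example reply", toxicity=0.1) for _ in range(10)]
print(is_hate_bait(Post("p1", comments=thread)))   # True: 15 of 25 comments are hateful
```

A heuristic like this also makes the objection described below concrete: because the signal comes from commenters rather than the author, a coordinated flood of toxic comments could push a page's posts over the cutoff.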


Mr. Rosen, the Facebook integrity executive, disputed those characterizations in an interview, which was held on the condition that he not be quoted directly. He said that the “correct the record” tool wasn’t as effective as hoped, and that the company had decided to focus on other ways of curbing misinformation. He also said applying the “hate bait” detector to Facebook pages could unfairly punish publishers for hateful comments left by their followers, or make it possible for bad actors to hurt a page’s reach by spamming it with toxic comments. Neither project was shelved because of political concerns or because it reduced Facebook usage, he said.

“No News Feed product change is ever solely made because of its impact on time spent,” said Mr. Osborne, the Facebook spokesman. He added that the people talking to The Times had no decision-making authority.
 
We totally didn't do this because it would hurt our bottom line.

And Mark Zuckerberg is a right-wing sociopath, period.
