Facebook’s ‘fake news’ highlights need for social media revamp, say experts

Facebook’s struggle with ‘fake news’ underscores the urgent need for a new social media legal framework, warned experts.

“It’s an enormous problem,” Keith Altman, a lawyer at 1-800 Law Firm, told FoxNews.com. “It’s the distribution, the infrastructure of these sites that allow the misinformation to be disseminated.”

In November Facebook CEO Mark Zuckerberg defended the social media network’s news algorithm against allegations that the company allowed ‘fake news’ to tilt the election.

“Personally, I think the idea that fake news on Facebook—of which it’s a very small amount of the content—influenced the election in any way is a pretty crazy idea,” Zuckerberg said at a conference in Half Moon Bay, Calif., according to the Wall Street Journal.

Facebook’s Trending Topics fell prey to some high-profile fake stories after the social network implemented an algorithmic feed this summer. These included a false article claiming Fox News had fired anchor Megyn Kelly and a hoax article about the Sept. 11 attacks. On another occasion, a seemingly innocent hashtag that appeared in Trending Topics linked to an inappropriate video.

While these incidents were clearly embarrassing for Facebook, social media companies are protected by Section 230 of the Communications Decency Act, which says that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

This means that online “intermediaries” that host or republish speech are protected against a number of laws that might otherwise be used to hold them legally responsible for what others say and do, according to the Electronic Frontier Foundation.

Altman, however, says the “unfettered” nature of social media sites such as Facebook, where content can be shared with vast numbers of people, necessitates a new legal framework.

“I think that [Section] 230 needs to be looked at and maybe more clearly defined,” Altman told FoxNews.com. “A better framework and accountability needs to be implemented to cause these companies to act responsibly.”

Altman is also representing the family of Naomi Gonzalez, who was killed in the Paris terror attacks, in a lawsuit claiming that Google, Facebook and Twitter provided “material support” to Islamic State.

Eric Feinberg, a founding partner of deep web analysis company GIPEC, says that his firm has developed technology that can anticipate where content such as fake news stories is lurking on the Internet.

“We deep dive into the web looking for bad patterns,” Feinberg told FoxNews.com. “We can anticipate what is bad, nefarious, on the web.”

Like Altman, Feinberg also cited the shortcomings of Section 230. “All of this is a huge problem,” he said.

With more than 1.7 billion monthly active users, Facebook’s role in society continues to be closely scrutinized. In October 2016, for example, civil rights groups including the American Civil Liberties Union and Black Lives Matter signed a letter to the company’s CEO Mark Zuckerberg urging him to clarify Facebook’s policy on content removal.

In early November 2016, Facebook said that it would block a British car insurer from profiling users of the social network to decide whether they deserve a discount on their insurance.

Facebook declined to provide additional comment for this story when contacted by FoxNews.com.

Hoax news articles mistaken for factual reporting have increasingly dominated people’s Facebook feeds, spreading an alarming amount of inaccurate or completely falsified information to the public.

Pope Francis did not endorse Donald Trump during his election campaign, WikiLeaks did not confirm Secretary of State Hillary Clinton sold weapons to ISIS, and an FBI agent investigating Clinton’s use of a private-email server was not found dead.

During the months prior to the 2016 election, the top 20 fake election stories generated more than 8,711,000 Facebook reactions, shares, and comments, according to BuzzFeed News’ Craig Silverman. These engagements outpaced the top 20 election stories from 19 major news websites, which received a total of 7,367,000 shares, reactions, and comments on Facebook.

One of the most prolific fake news authors, Paul Horner, claimed he earned more than $10,000 a month writing “satire” articles on multiple websites, according to an interview with the Washington Post. His stories have ranged from completely outlandish claims, such as the infamous graffiti artist Banksy being arrested (and his true identity being revealed as Paul Horner), to Yelp suing South Park for lampooning it, to President Obama banning the Pledge of Allegiance in schools.
However, Horner’s articles have deceived millions, including top political figures. Former Trump campaign manager Corey Lewandowski and Trump’s son Eric Trump both fell victim to Horner and retweeted his fake story claiming an anti-Trump protester was paid $3,500 to protest at a Trump rally in Arizona.

Horner, who said he hates Trump, told the Washington Post that he wrote many fake articles supporting Trump’s claims to discredit those who shared the articles.

“I thought they’d fact-check it, and it’d make them look worse,” said Horner. “I mean that’s how this always works: Someone posts something I write, then they find out it’s false, then they look like idiots.”

It is becoming more and more difficult to identify the truth.

Not all fake news is 100 percent false, and blending a little truth into a story can make its claims even harder to evaluate.

Brian Stelter describes three different types of fake news headlines designed to deceive you: 1) hoax sites, filled with completely made-up news; 2) hyper-partisan sites, which don’t necessarily lie but share only good news about their party and bad news about the opposition; and 3) hybrids, which mix fact with fiction enough that you can’t tell which is which.


The false news article on the right, from Nov. 2016, had taken elements from a true 6ABC story from 2015 on the left. (6abc.com | christiantimesnewspaper.com)

One such hybrid website, which has since deleted its article, claimed that anti-Trump protestors in Philadelphia had beaten a veteran to death after he scolded them for burning the American flag following the 2016 election results. The article, which was categorically false, had taken elements from a true 2015 story about a homeless man who had died in a coma several months after being assaulted at a gas station. The fake news website even went as far as ripping video of the original news coverage from ABC station WPVI (6abc Action News), and muting the audio track that included WPVI anchor Sharrie Williams’ narration of the true events. With the sound off, those who watched the video were misled to believe it was supporting the article’s false claims.

On Nov. 14, 2016, both Facebook and Google announced that they would ban fake news sites from using their online advertising services. Facebook CEO Mark Zuckerberg wrote a public post in response to the mounting pressure to curb the spread of fake news, saying that the social media giant is working on a system to better reduce the spread of falsified stories, though it does not want to censor people’s opinions.

“We believe in giving people a voice, which means erring on the side of letting people share what they want whenever possible,” wrote Zuckerberg. “We need to be careful not to discourage sharing of opinions or to mistakenly restrict accurate content. We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties.”
How can you spot ‘fake news’?


“Watch out for websites that end in ‘.com.co,’ as they are often fake versions of real news sources,” wrote Melissa Zimdars. (abcnews.go.com | abcnews.com.co)

Melissa Zimdars, an assistant professor of communication at Merrimack College in Massachusetts, shared her criteria for identifying suspect news sources via a public Google Doc. In the document, Zimdars describes tips for identifying fake news such as the following (a short code sketch applying the domain-based checks appears after the list):

  • Avoid websites that end in “lo,” ex: Newslo. These sites take pieces of accurate information and then package that information with other false or misleading “facts.”

  • Watch out for websites that end in “.com.co” as they are often fake versions of real news sources.
  • Watch out if known/reputable news sites are not also reporting on the story. Sometimes lack of coverage is the result of corporate media bias and other factors, but there should typically be more than one source reporting on a topic or event.
  • Odd domain names generally equal odd and rarely truthful news.
  • Some news organizations are also letting bloggers post under the banner of particular news brands; however, many of these posts do not go through the same editing process (ex: BuzzFeed Community Posts, Kinja blogs, Forbes blogs).
  • Bad web design and use of ALL CAPS can also be a sign that the source you’re looking at should be verified and/or read in conjunction with other sources.
  • If the story makes you REALLY ANGRY it’s probably a good idea to keep reading about the topic via other sources to make sure the story you read wasn’t purposefully trying to make you angry (with potentially misleading or false information) in order to generate shares and ad revenue.
  • If the website you’re reading encourages you to DOX individuals, it’s unlikely to be a legitimate source of news.
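
Several of the domain-based checks above are mechanical enough to automate. The following is a minimal Python sketch of that idea, not a tool Zimdars or anyone else actually provides; the function name and example URLs are illustrative assumptions.

```python
from urllib.parse import urlparse

def domain_warnings(url: str) -> list:
    """Return any domain-based warning signs from the checklist above that this URL triggers."""
    host = urlparse(url).netloc.lower().split(":")[0]
    site_name = host.split(".")[0]
    warnings = []
    if host.endswith(".com.co"):
        warnings.append("ends in '.com.co' (often a fake version of a real news domain)")
    if site_name.endswith("lo"):  # crude check for Newslo-style hybrid sites
        warnings.append("site name ends in 'lo' (Newslo-style hybrid site)")
    return warnings

# Example using the imitation domain shown in the caption above.
print(domain_warnings("http://abcnews.com.co/some-story"))  # flags '.com.co'
print(domain_warnings("http://abcnews.go.com/some-story"))  # no warnings
```

A check like this covers only the mechanical tips; the rest of the list (missing corroboration, ALL CAPS, outrage bait) still calls for human judgment.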

How to stop fake news from spreading: fact check and share.

Both Zuckerberg and Zimdars recommend reading other sources to verify information, and some third-party websites are dedicated to debunking fake or questionable stories.

PolitiFact.com, run by the Tampa Bay Times, is a Pulitzer Prize-winning website dedicated to fact-checking political statements made by both major parties. Claims and statements are rated on a graded scale of accuracy dubbed the “Truth-O-Meter,” ranging from “True,” “Mostly True” and “Half-True” to “False” and “Pants On Fire!”

Snopes.com is a privately-run website, created by Barbara and David Mikkelson, dedicated to debunking urban legends and stories of questionable origin.

FactCheck.org is a non-profit, non-partisan consumer advocate site run by the University of Pennsylvania dedicated to reducing “deception and confusion in U.S. politics.”

If you spot a false news story on social media, we recommend finding the fact-checked link at any of the above sources and sharing it with those spreading inaccurate information.

How to stop fake news from appearing in your Facebook news feed.

You can report a news story as being false on Facebook, and prevent any story from that source appearing in your feed again.


You can report a story as being false on Facebook, and prevent any story from that source appearing in your feed again. (Facebook.com)

When you come across a fake news article on Facebook, you can click the small arrow in the top right of the post. A dropdown will appear, giving you the option to “Report Post.” From there, select “I think it shouldn’t be on Facebook” and push continue.

Next, Facebook will ask “What’s wrong with this post?” and in the list of options you can select “It’s a false news story.” Once the report is submitted, the next set of options lets you either block the website entirely or hide all future posts from the source.

However, three former colleagues of Monika Bickert, Facebook’s head of global policy management, tell a very different story of how Facebook deals with controversial content. They and others declined to be named for fear of job repercussions (at Facebook or at their current employers, also Internet companies), but their descriptions are consistent with each other.

When a user flags a post on Facebook — whether it’s a picture, video or text post — it goes to a little-known division called the “community operations team.”

In 2010, the sources say, the team had a couple hundred workers in five countries. Facebook found it needed more hands on deck. After trying crowdsourcing solutions like CrowdFlower, the company turned to the consulting firm Accenture to put together a dedicated team of subcontractors. Sources say the team is now several thousand people, with some of the largest offices in Manila, the Philippines, and Warsaw, Poland.

Current and former employees of Facebook say that they’ve observed these subcontractors in action; that they are told to go fast — very fast; that they’re evaluated on speed; and that on average, a worker decides about a piece of flagged content once every 10 seconds.

Let’s do a back-of-the-envelope calculation. Say a worker is doing an eight-hour shift, at the rate of one post per 10 seconds. That means they’re clearing 2,880 posts a day per person. When NPR ran these numbers by current and former employees, they said that sounds reasonable.
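
That arithmetic is easy to reproduce. Here is a quick sketch of the same back-of-the-envelope estimate, using the shift length and per-decision rate described by the sources above (not official Facebook figures):

```python
# Back-of-the-envelope estimate based on the figures reported above.
SHIFT_HOURS = 8
SECONDS_PER_DECISION = 10  # roughly one flagged item every 10 seconds, per the sources

posts_per_shift = SHIFT_HOURS * 60 * 60 // SECONDS_PER_DECISION
print(posts_per_shift)  # 2880 flagged items cleared per worker, per shift
```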

A Facebook spokesperson says response times vary widely, depending on what is being reported; that the clear majority of flagged content is not removed; and that the numbers are off. Facebook did not provide alternative numbers.

If the sources — who have firsthand knowledge and spoke separately with NPR — are correct, then this may be the biggest editing — aka censorship — operation in the history of media. All the while, Facebook leaders insist they’re just running a “platform,” free of human judgment.

A person who worked on this area of “content management” for Facebook (as an employee, not a subcontractor) says most of the content you see falls neatly into categories that don’t need deep reflection: “That’s an erect penis. Check.” So it’s not like the workers are analyzing every single one in detail.

The problem is, simple and complex items all go into the same big pile. So, the source says, “you go on autopilot” and don’t realize when “you must use judgment, in a system that doesn’t give you the time to make a real judgment.”

A classic case is something like “Becky looks pregnant.” It could be cyberbullying or a compliment. The subcontractor “pretty much tosses a coin,” the source says.

Here’s another huge barrier. Because of privacy laws and technical glitches (such as a post that is truncated to only show part of the conversation), the subcontractors typically don’t get to see the full context — to which Bickert referred so often.

Frequent errors

That could be the cause of frequent errors.

NPR decided to stress-test the system by flagging nearly 200 posts that could be considered hate speech — specifically, attacks against blacks and against whites in the U.S. We found that Facebook subcontractors were not consistent and made numerous mistakes, including in instances where a user calls for violence.

We say they were mistakes because the company changed its position in dozens of instances, removing some and restoring others — either when we flagged it a second time through the automated system or brought it to the attention of Facebook headquarters in Menlo Park, Calif.

Consider this post:


A comment on this post (indicated by the light gray text) was flagged by NPR. Facebook’s subcontractors did not remove the comment, but after being pressed further, a spokesperson said they made a mistake and it should have been removed.
Facebook/Screenshot by NPR

One user shares a video of a police officer kicking someone on the ground, and another says they “need to start organizing and sniping these bitches.” This is a call to shoot cops.

And it’s occurring in a very specific context: days after Philando Castile was shot and killed by an officer (the shooting’s aftermath was live-streamed on Facebook); and in a city in Minnesota that’s just a few miles away from where Castile lay bleeding.

The subcontractors did not remove the post. When NPR emailed Facebook headquarters about it, a spokesperson said they made a mistake and the post should have been removed.

One source tells NPR that the subcontractor “likely” could not see the full post if just the comment was flagged, could not see the profile of the user or even view the video to which the user was responding — again for privacy or technical reasons.

Different projects within Facebook must compete fiercely for engineering talent, and must make the case that their to-do list is worth expensive company resources. Two sources say that over the years, it’s been hard to make that case for fixing the editing system.


Of posts that NPR flagged, this one, with the hashtag #blacklivesdontmatter, was left up erroneously, according to Facebook officials.
Facebook/Screenshot by NPR

NPR shared many posts with the company to get specific feedback on why something stayed up or was taken down. This post, with the hashtag #blacklivesdontmatter, was left up. The spokesperson says it should have been removed, but a reviewer’s perspective is not the same as a regular Facebook user’s; and because the company is protecting privacy, reviewers don’t have access to everything, which can affect their judgments. Also, the spokesperson says, reviewers don’t have the time a person at Facebook headquarters or at NPR may have to decide.

Another spokesperson notes that just because the reviewer has a limited view does not mean Bickert’s description is incorrect — that the two versions of the process are not mutually exclusive.

A restrictive platform

NPR also finds, interestingly, that Facebook can be strict.

Think of it this way. Every media outlet has its own culture and voice. If The New York Times is urban liberal, Fox News is conservative, and Playboy is racy, you could say Facebook aspires to be nice — a global brand that’s as easy to swallow as Coca-Cola. (In this provocative talk at Harvard’s Shorenstein Center, law professor and author Jeffrey Rosen says Facebook favors “civility” over “liberty.”)

The bias is evident in these real-life examples:


One example of a flagged Facebook post in which racial slurs were used.
Facebook/Screenshot by NPR


A second example of a flagged Facebook post in which racial slurs were used.
Facebook/Screenshot by NPR

NPR asked two newsroom veterans to put themselves in the position of a Facebook content arbiter. We don’t have access to Facebook’s internal guidance on hate speech, so they had to rely on their own judgment and familiarity with Facebook as users.

Lynette Clemetson, formerly with NPR and now at the University of Michigan, says of Post 1, “now that’s just dumb.” Post 2, she says, sounds like a “rant.” Chip Mahaney, a former managing editor at a Fox television station, agrees. Both experts venture to guess that the posts are acceptable speech on the platform.

To their surprise, they’re not.

When NPR flagged the posts, Facebook censors decided to take them down. The spokesperson explains to NPR: It’s OK to use racial slurs when being self-referential. A black person can say things like “my niggers.” But no one can use a slur to attack an individual or group. That’s prohibited. A white person cannot use the word “nigger” to mock or attack blacks. Blacks can’t use “crakkker” (in whatever spelling) to offend whites.

Wiggle room in pictures

But there are so many caveats and exceptions — particularly when it comes to interpreting images and videos.

Consider this post:


This image showing a noose, which was flagged by NPR, was not removed. A Facebook spokesperson explains that if the image included a person, a specific victim of a hate crime, then it would be removed.
Facebook/Screenshot by NPR

This is a noose, the kind used to hang slaves. Beside it is a sign. While the letters are faded, you can make out the words: “Nigger swing set.” It was shared by a user named “White Lives Matter 2.”

The news veterans would take it down. Clemetson says it’s not quite a call to violent action, but it’s certainly a reference to past action. “It’s a reference to lynching — and making a joke of lynching.” Mahaney says this is “obviously a pretty difficult picture to look at” and his instinct is, unless there’s a deeper context, “I would say it has no place on here.”

Facebook left it up, and stands by that decision.

The spokesperson explains this historical reference doesn’t have a human victim clearly depicted. If the image included a person, a specific subject of a hate crime, then it would be removed.

The spokesperson also made a claim that has not panned out: that in some situations, like this one, Facebook requires the user who created the page to add their real name to the “about” section. By removing anonymity, the hope is, people will be more thoughtful about what they post.

The spokesperson says “White Lives Matter 2” was told to identify him or herself promptly. Yet more than a month after NPR flagged the post, it hadn’t happened — yet another way Facebook’s enforcement mechanism is broken.

With news, a double standard emerges

If the user rules weren’t nuanced enough, add to that a new plot line: CEO Zuckerberg decided to forge partnerships with news media to make his social network the most powerful distributor of news on Earth. (Facebook pays NPR and other news organizations to produce live videos for its site.)

What Facebook has found in the process is that it’s much harder to censor high-profile newsmakers than it is to censor regular users. As one source says, “Whoever screams the loudest gets our attention. We react.”

And that’s making newsworthy content a whole other category.

Consider the scandals around “Napalm Girl” and Donald Trump. In the first, Facebook was slammed for not allowing users to share a Pulitzer Prize-winning photo because it showed child nudity. In the second, Facebook came under criticism for allowing Donald Trump to call for a ban on Muslims coming to the U.S. — what is clearly hate speech under their regular rules, per two former employees and a current one. One source said, “he hadn’t even won the Republican primary yet. What we decided mattered.”

In both cases, the company caved to public pressure and decided to bend the rules. The source says both decisions were “highly controversial” among employees; and they signal that Facebook leadership is feeling pressure to move toward a free-speech standard for news distribution.

How to define news is a huge source of controversy — not just with the elections, but with every charged moment. After Trump won, some critics blamed fake news on Facebook for the election’s outcome. (Zuckerberg dismisses that idea.) After Philando Castile was shot by police, his video disappeared from Facebook. The company says it was “a glitch” and restored it. And the concern over its removal is part of an ongoing debate about which police-civilian standoffs should be live-streamed.

What now?

Some in Silicon Valley dismiss the criticisms against Facebook as schadenfreude: Just like taxi drivers don’t like Uber, legacy media envies the success of the social platform and enjoys seeing its leadership on the hot seat.

A former employee is not so dismissive and says there is a cultural problem, a stubborn blindness at Facebook and other leading Internet companies like Twitter. The source says: “The hardest problems these companies face isn’t technological. They are ethical, and there’s not as much rigor in how it’s done.”

At a values level, some experts point out, Facebook must decide if its solution is free speech (the more people post, the more the truth rises), or clear restrictions.

And technically, there’s no shortage of ideas about how to fix the process.

A former employee says speech is so complex, you can’t expect Facebook to arrive at the same decision every time; but you can expect a company that consistently ranks among the 10 most valuable on Earth, by market cap, to put more thought and resources into its censorship machine.

The source argues Facebook could afford to make content management regional — have decisions come from the same country in which a post occurs.

Speech norms are highly regional. When Facebook first opened its offices in Hyderabad, India, a former employee says, the guidance the reviewers got was to remove sexual content. In a test run, they ended up removing French kissing. Senior management was blown away. The Indian reviewers were doing something Facebook did not expect but which makes perfect sense for local norms.

Harvard business professor Ben Edelman says Facebook could invest engineering resources into categorizing the posts. “It makes no sense at all,” he says, that when a piece of content is flagged, it goes into one long line. The company could have the algorithm track what flagged content is getting the most circulation and move that up in the queue, he suggests.
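
A minimal sketch of the kind of triage Edelman suggests, assuming a circulation count (shares, reactions and comments, say) is available for each flagged post; the names and numbers here are illustrative, not Facebook’s actual queue:

```python
import heapq

# Flagged posts are pushed with negated circulation so Python's min-heap
# pops the most widely circulating item first, instead of first-come, first-served.
review_queue = []

def flag(post_id: str, circulation: int) -> None:
    heapq.heappush(review_queue, (-circulation, post_id))

def next_for_review() -> str:
    return heapq.heappop(review_queue)[1]

flag("quiet-post", 120)
flag("viral-post", 98_000)
flag("rising-post", 3_400)

print(next_for_review())  # "viral-post" is reviewed before the two smaller posts
```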

Zuckerberg finds himself at the helm of a company that started as a tech company — run by algorithms, free of human judgment, the mythology went. And now he’s just so clearly the CEO of a media company — replete with highly complex rules (What is hate speech anyway?); with double standards (If it’s “news” it stays, if it’s a rant it goes); and with an enforcement mechanism that is set up to fail.

We’ve all fallen victim to sharing FAKE NEWS on Facebook… but I sure would love to see more people take the time to read and research things a little before they decide to share a story on Facebook.

Just because you saw it on Facebook… Does NOT mean that it is TRUE!  We EACH need to be a part of the Solution NOT part of the problem.

It’s up to the users of social media (Facebook, Twitter, etc.) to keep these platforms honest and trustworthy. So, before you share your next post… read the post, research it… and always, always check other sources… if it isn’t reported on multiple news outlets… IT’S FAKE… PLEASE, PLEASE… DO NOT SHARE IT! Do your part in reporting these “FAKE NEWS” posts to Facebook… Not ALL will be removed, as discussed earlier, but persistence pays off.

It’s past time to end the division in our country and Fake News is a main reason there is so much division today.

DO YOUR PART!

How do you confirm what you share is the truth? Have you fallen victim to the “Fake News”? How did you deal with it? Share your ideas, we love hearing from you.

Share this blog post with your friends to help them determine what is Fake News and what is Real News.

This blog post is a combination of several other articles on the issue of Fake News and how we can recognize it when we see it.
