
Gaming News

The Media Tried To Game The Machines and You'll Never Guess What Happened Next (Facebook Won)

August 16, 2018 — by Kotaku.com


Illustration: Jim Cooke

Just five years ago, not only was it possible for a reputable outlet to flatly characterize Upworthy—a website that was essentially nothing but a system for testing what drew clicks on Facebook—as the “fastest-growing media company of all time,” but it happened more than once. By 2013, a year after its founding, Upworthy had favorable print profiles and $12 million in funding from people like Facebook co-founder Chris Hughes and Reddit’s Alexis Ohanian. Its founder was making the rounds advising other sites hungry for traffic to “stay away from politics” and “focus on Facebook, not Twitter.” Michelle Obama guest-edited the site.

From the beginning, Upworthy’s social-justice curation machine was based on a specific, paternalistic set of ideas. It proposed that the internet is inherently addictive; that young people are attracted to the shiny and uplifting; that aggregated videos play better online than original material; and that with a little massaging, all that terribly boring stuff about policy and food shortages in Africa could trap eyeballs just as well as heartwarming cats could.


Upworthy delivered what there is nothing else to call but content, which in the early days was just a repackaging of YouTube videos with enticing, emotional headlines people would be inclined to like and share so as to demonstrate that they agreed with the worthy messages. They did this nearly exclusively through Facebook, which in itself represented the idea most important to Upworthy: Not only could Facebook be successfully gamed, but a sustainable property could be built on doing so.

Within a year of its launch, Upworthy reported 87 million unique visits in a month, more than the New York Times. Its so-called “curiosity-gap” headlines inspired copycats and parodies and the Twitter account “Saved You A Click.” In 2013 it raised an additional $8 million, on top of the $4 million it drummed up prior to its launch. Legacy publications dissected the site’s cloying, two-sentence headlines—“This Amazing Kid Got To Enjoy 19 Awesome Years On This Planet. What He Left Behind Is Wondtacular”; “Watch The First 54 Seconds, That’s All I Ask. You’ll Be Hooked After That I Swear”—with alarm. They poked fun at its style and lack of substance, and then they invited consultants to their offices to tell them how they, too, could crack the Facebook code.


While the internet is unquestionably addictive, the rest of the ideas underlying Upworthy were less unyielding truths about the way things work than specific tics of a particular time. Young people, it turns out, are mainly attracted to what the algorithms serve them, and what that is shifts over time, largely at the whims of the machines and the people who program them. Facebook, it turned out, was happy for those algorithms to serve those young people media, until serving them media didn’t align with its own bottom line. In 2013, though, it was possible to think that Upworthy had cracked a code, or at least showed that it could be cracked. The alternative—that the distribution of news and information had largely been subsumed by an arbitrary and merciless technological regime—was simply too depressing to consider.

Last week, 31 members of Upworthy’s editorial staff were unceremoniously given a few weeks’ severance and let go, some fired through a script read over the phone by a third party. The site’s editor in chief and CEO both resigned shortly after. Eli Pariser, Upworthy’s most visible founder, a one-time clickbait apologist who has become fond of comparing Facebook to the weather (you prepare for it, but you can’t change it), left the parent company’s board. The layoffs, which followed a messy takeover by Good Media that had been framed as a merger, represent the end of an embarrassing moment in which the media considered social media algorithms a fixed and exploitable asset.

In the early 2010s, by far the most promising resource for publishers was Facebook, which referred so many people to struggling (or brand-new) websites that it was nearly impossible to argue for investment in anything else. “It just dominates,” one long-time Upworthy editor told me. “People higher up were like, why would we spend money to bring in less traffic” on other social media sites. Over the next years, as publications floundered, they followed Facebook’s algorithmic tweaks, pivoting to viral headlines or listicles or “long-form” or video, remaking entire newsrooms to generate what the platform suggested would play well.


This game continued until Facebook didn’t want any of it anymore: Earlier this year, the platform announced it would prioritize “meaningful interactions” from family and friends over the news industry, and move away from pushing video it was recommending publishers produce as recently as last year. “We are not interested in talking to you about your traffic and referrals any more,” the social media company’s head of news allegedly told a number of Australian media executives recently. “That is the old world and there is no going back.”

Hard numbers are difficult to come by, considering how few media companies are inclined to share such bad news, but by most accounts the recent tweaks to Facebook’s news feed algorithm have been disastrous. Over the last year, Slate says it’s seen traffic referrals from Facebook drop 81 percent. Traffic to Gizmodo.com from its Facebook page dropped around 75 percent over roughly the same period. The solution, for many large publishers, has mostly been to pivot away again, to game other algorithms, like the one used by Google search.


What’s left of Upworthy and Good insist the company hasn’t changed, and both websites are still up. But everyone I spoke to, from both sides of the merger, predicted the remaining brand was best poised to become a consulting agency, considering its distinct lack of editorial staff. (Good’s CEO and spokespeople did not return requests for comment.) Since 2013, the website had rebranded and changed its strategy several times, but in the end, a handful of employees speculate, Good purchased Upworthy largely for its viral audience and Facebook reach. Unfortunately, you really can’t be on one platform, as one long-time Upworthy editor told me: “They’re going to find a way to screw you,” which is as true for a feel-good factory as for anyone else.

Eli Pariser and Peter Koechley, the painfully earnest founders of Upworthy, met making viral videos for MoveOn.org during the 2008 presidential campaign. Their site, as they envisioned it, would take broadly appealing liberal positions—income equality is good, racism not so cool—and package them like memes. As Pariser told New York at Upworthy’s peak, the site’s goal was to reach as many people as possible, appealing to the emotions to find consensus rather than hinging on contentious points. “You don’t want to be that guy in your Facebook feed going, ‘These ReTHUGlicans out there,’” he said. “We see Upworthy as confirmation that the potential to have a broadly well-informed public still exists,” he told the magazine, which in today’s internet terms is just unbearably quaint.


This particular editorial strategy, piloted by people with more experience in politics and technology than journalism, was inseparable from what that “broadly well-informed public” deemed fit to like and share online. Editors insist they had “no idea what we were about to unleash, how big the algorithm would let us be.” But there was “a maniacal focus” on gaming the Facebook system to the exclusion of all else, says Michael Wertheim, a digital strategist and consultant who helped launch the site. (Wertheim would go on to consult for dozens of businesses, including Fusion, the millennial-focused site launched by Gizmodo Media’s parent company Univision.) Upworthy encouraged its reputation for generating viral hits, producing numerous slide deck tutorials instructing others “how to win the internets.”

Most famously, Upworthy popularized the “curiosity gap” headline: basically, a headline that over-promises emotionally (“These words were meant to break these women”) while revealing almost nothing about a story’s content (“But the opposite happened.”) The idea here was that a person’s knee-jerk desire would compel them to click headlines that were the right combination of tantalizing and vague, which they pretty uniformly did. Such headlines were introduced largely through Upworthy’s Facebook testing process: “Curators” wrote 25 headlines for every video they wanted to post, and then deployed combinations of images and text to a small batch of test subjects before posting whichever combination they clicked and shared the most.
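
To make the mechanics concrete, here is a rough simulation of that kind of package testing in Python. It is purely illustrative: the function names and the random click/share model are stand-ins, not Upworthy’s actual tooling, which ran against live Facebook audiences.

```python
import itertools
import random

# Illustrative only: Upworthy's real pipeline tested combinations on live
# Facebook audiences; here a random click/share model stands in for real users.
def simulated_response(headline, thumbnail):
    """Pretend engagement for a single impression: returns (clicks, shares)."""
    clicked = random.random() < 0.05
    shared = clicked and random.random() < 0.2
    return int(clicked), int(shared)

def pick_winning_package(headlines, thumbnails, impressions_per_combo=500):
    """Show every headline/thumbnail pairing to a small test batch and return
    whichever combination drew the most clicks and shares."""
    best, best_score = None, -1
    for headline, thumb in itertools.product(headlines, thumbnails):
        score = 0
        for _ in range(impressions_per_combo):
            clicks, shares = simulated_response(headline, thumb)
            score += clicks + 2 * shares  # weight shares above clicks
        if score > best_score:
            best, best_score = (headline, thumb), score
    return best

headlines = [f"Headline variant {i}" for i in range(1, 26)]  # 25 per post, per Upworthy's process
print(pick_winning_package(headlines, ["thumb_a.jpg", "thumb_b.jpg"]))
```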


Within a year of its launch, using this system, the site was getting 100 million visitors a month. An editor remembers asking in an early job interview what Upworthy would do if everyone started using that style. “Like, You Won’t Believe That Obama Just Did This? That’s 25 stories. What will we do then?”

The model, essentially a machine-generated string of phrases, did spread: Wertheim estimates that in the years after working with Upworthy, he consulted for 20 businesses about what he’d learned there, including New York Magazine and the New York Times. The Atlantic did a full-day seminar, all the way up to the execs. CNN had some bad Upworthy-style tweets. Time launched its virally focused Newsfeed, brands started using the emotional over-promise, we got Distractify and ViralNova and Elite Daily.


Then, in late 2013, shortly after Upworthy got its $8 million series A, the algorithm changed. Facebook, in an effort to cut down on clickbait and memes and go after its competitor Twitter, altered the news feed to focus more on current events. The drop in Upworthy’s readers was swift. In November 2013, the site had 90 million visitors. By January 2014, that number was 48 million.

Adam Mordecai, one of the site’s first employees and an editor until 2017, says he distinctly remembers the moment he “couldn’t get into the matrix anymore. It was like night and day.” The staff began to anxiously test tactics for Facebook again. The editor who’d voiced concerns about the spread of the Upworthy model remembers resizing thumbnails, scrawling arrows over images in her posts. “We were just spinning our wheels,” she says. Then, one of their data people realized Facebook was looking for, you know, actual sentences and words inside of a post. “Within 24 hours we had everyone on our staff doing long-form,” she says. “It’s like, we’ve got to pivot and we’ve got to do it right now, we’re just dead in the water.”

But those pageviews, which brought investors and stood in for a long-term business strategy, kept declining. (Upworthy’s original monetization scheme involved partnering with non-profits; later it would run sponsored content and paid ads.) It pivoted to journalism, hiring a copy chief to fact-check its stories and announcing a partnership with ProPublica. Still, by November of 2014, the site got 20 million views a month, one-fifth of the traffic it was bringing in at its peak. In January of 2015, Amy O’Leary left the New York Times to become Upworthy’s editorial director. A few months later the site laid off six people to focus on reporting and “original content,” an apparent concession to the idea that aggregated hyper-shareable video was no longer a viable model. Still, the site packaged its stories in extremely Upworthy ways. (“A Dad’s Letter To Himself On His Worst Day, From Himself On His Best Days.”)


Only a few months later, at the beginning of 2016, Upworthy “pivoted to video,” laying off 14 of its staff. As at other media companies that would make similar shifts later that summer, the change in strategy capitalized on Facebook’s newfound algorithmic preference for videos that could be shared in its feed. “Facebook’s telling us to do video, so we do video. They say live video especially. So you’re just kind of chasing the carrot,” says one editor. 2016 was the “moment when people had really started to build their business on this platform, and they suddenly realized it could shift and change their entire operation,” says another. Over the next year, MTV News, Mic, Vocativ, Mashable, and Vice Media would all make similar shifts, laying off employees to make room for the same kind of production, which guarantees airtime and thus ads. Sometimes that meant original documentaries. Often it meant glorified PowerPoint slides.


By the spring, under the direction of O’Leary, Upworthy had a handful of original video series, and was licensing more from outside entities, as well as creating in-house campaigns for “socially responsible” brands. They entered a deal with Facebook Live, to put videos directly on the social media site. Largely through sharing those videos to the then-friendly algorithm, Upworthy said its videos were getting 256 million views a month. Traffic to the site itself, however, sat under 10 million. And, as with other publications that took Facebook’s advice about video, the site would find the rapid change of strategy didn’t pay off: In 2016, Mark Zuckerberg was telling outlets that in five years, half of all Facebook feeds would be video. What he meant, actually, was that Facebook would soon de-prioritize publishers’ videos to make more room for its own in-house work.

And in any case those video ad deals weren’t enough to keep up with what Upworthy was spending. According to employees, through 2016 the company was pushing for growth, getting ready for a sale. The sale to Good, which was finalized in December 2017 for what one former Good manager close to the merger characterizes as a “bargain price,” came as a surprise to both staffs: “It was nobody’s first choice,” says one editor at Upworthy. “It was Eli’s way of saving everyone’s jobs,” says another.


Good’s official line is that it was profitable, though multiple people dispute this, and was interested in filtering its “elevated prestige content” through the “Upworthy distillation machine.” But more to the point, while Good appreciated Upworthy’s mission, they wanted what the former employee characterized as “our message, on their microphone … they really, truly believed they could make shit go viral.” They wanted the click-testing mechanism, the Facebook-optimized backend of the site. But with no cohesive plan for how the companies would be combined, there was a rift between the two teams. Executives at Good slashed Upworthy’s budget and asked senior employees, most of whom worked remotely, to move to Los Angeles (they refused). Upworthy found Good’s lack of scruples when it came to advertisers obscene, which Good considered insane given the company’s financial straits. Frustrated with their new overlords, a number of long-term employees fled Upworthy, including O’Leary.

Managers at Upworthy blame Good for the destruction of their operation, saying the rate at which ads were selling through its new revenue model simply hadn’t caught up to what they were spending. It wasn’t Facebook that killed them, they say: executive mishandling effectively gutted the site. The former Good employee I spoke to, who helped oversee the merger, disagrees: “Upworthy was going to be sold no matter what,” she says. “They were completely out of money, and would have been sold for parts.” And former Upworthy employees are clearly frustrated with Facebook’s hold on the site: One asked rhetorically why the platform could act quickly to ban what she referred to as “prestige clickbait” and not get rid of “fake news.”

In the few months before the merger, Good and Upworthy got into e-commerce and started selling T-shirts, of all things. Between the management exodus and the changes to Facebook’s algorithm, traffic dipped again. Until a few weeks before the staff was let go at Upworthy, the editorial process still included a lengthy two-part Facebook testing phase, where writers would test headlines and thumbnail images for their “clickiness.” The process could sometimes take more than an hour for a post, one writer estimates. Shortly before the layoffs, Upworthy stopped testing its stories on Facebook, though the reason for abandoning its signature strategy was unclear.


Recently, Facebook’s head of news reportedly told media executives Mark Zuckerberg doesn’t care about publishers. “I’ll be holding your hands with your dying business like in a hospice,” he allegedly said. As the reality of Facebook’s position on the media set in, publishers who’d found their traffic crippled shifted to rely more on SEO, which involves stuffing posts with often-Googled keywords. Around the time Upworthy laid off its staff, Google deployed a “broad core algorithm update,” which changed what web searches deliver to users across the globe, in every language. Some of the sites under Gizmodo Media Group felt an immediate hit.

Gaming News

'People You May Know:' A Controversial Facebook Feature's 10-Year History

August 8, 2018 — by Kotaku.com


In May 2008, Facebook announced what initially seemed like a fun, whimsical addition to its platform: People You May Know.

“We built this feature with the intention of helping you connect to more of your friends, especially ones you might not have known were on Facebook,” said the post.

It went on to become one of Facebook’s most important tools for building out its social network, which expanded from 100 million members then to over 2 billion today. While some people must certainly have been grateful to get help connecting with everyone they’ve ever known, other Facebook users hated the feature. They asked how to turn it off. They downloaded a “FB Purity” browser extension to hide it from their view. Some users complained about it to the U.S. federal agency tasked with protecting American consumers, saying it constantly showed them people they didn’t want to friend. Another user told the Federal Trade Commission that Facebook wouldn’t stop suggesting she friend strangers “posed in sexually explicit poses.”

In an investigation last year, we detailed the ways People You May Know, or PYMK, as it’s referred to internally, can prove detrimental to Facebook users. It mines information users don’t have control over to make connections they may not want it to make. The worst example of this we documented is when sex workers are outed to their clients.

When lawmakers recently sent Facebook over 2,000 questions about the social network’s operation, Senator Richard Blumenthal (D-Conn.) raised concerns about PYMK suggesting a psychiatrist’s patients friend one another and asked whether users can opt out of Facebook collecting or using their data for People You May Know, which is another way of asking whether users can turn it off. Facebook responded by suggesting the senator see their answer to a previous question, but the real answer is “no.”

Facebook refuses to let users opt out of PYMK, telling us last year, “An opt out is not something we think people would find useful.” Perhaps now, though, in its time of privacy reckoning, Facebook will reconsider the mandatory nature of this particular feature. It’s about time, because People You May Know has been getting on people’s nerves for over 10 years.



Facebook didn’t come up with the idea for PYMK out of thin air. LinkedIn had launched People You May Know in 2006, originally displaying its suggested connections as ads that got the highest click-through rate the professional networking site had ever seen. Facebook didn’t bother to come up with a different name for it.

“People You May Know looks at, among other things, your current friend list and their friends, your education info and your work info,” Facebook explained when it launched the feature.

That wasn’t all. Within a year, AdWeek was reporting that people were “spooked” by the appearance of “people they emailed years ago” showing up as “People They May Know.” When these users had first signed up for Facebook, they were prompted to connect with people already on the site through a “Find People You Email” function; it turned out Facebook had kept all the email addresses from their inboxes. That was disturbing because Facebook hadn’t disclosed that it would store and reuse those contacts. (According to the Canadian Privacy Commissioner, Facebook only started providing that disclosure after the Commission investigated it in 2012.)

Though Facebook is now upfront about using uploaded contacts for PYMK, its then-chief privacy officer, Chris Kelly, refused to confirm it was happening.

“We are constantly iterating on the algorithm that we use to determine the Suggestions section of the home page,” Kelly told Adweek in 2009. “We do not share details about the algorithm itself.”


Address books were so valuable to Facebook in its early days that one of the first companies it acquired, at the beginning of 2010, was Malaysia-based Octazen, a contact importing service that had been used, until its acquisition by Facebook, to tap into user contacts on the world’s biggest social and email sites.

In a TechCrunch post at the time, Michael Arrington suggested that acquiring a tiny start-up on the other side of the world only made sense if Octazen had been secretly keeping users’ contact information from all the sites it worked with to build a “shadow social network.” That would have been incredibly valuable to a then-fledgling Facebook, but Facebook dismissed the unsupported claim, saying that it just needed a couple of guys who could quickly help it build tools to suck up contacts from novel services as it expanded into new countries.

That was important because to be the best social network it could be, Facebook needed to develop a list of everyone in the world and how they were connected. Even if you don’t give Facebook access to your own contact book, it can learn a lot about you by looking through other people’s contact books. If Facebook sees an email address or a phone number for you in someone else’s address book, it will attach it to your account as “shadow” contact information that you can’t see or access.

That means Facebook knows your work email address, even if you never provided it to Facebook, and can recommend you friend people you’ve corresponded with from that address. It means when you sign up for Facebook for the very first time, it knows right away “who all your friends are.” And it means that exchanging phone numbers with someone, say at an Alcoholics Anonymous meeting, will result in your not being anonymous for long.
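
A minimal sketch of how “shadow” contact data can accumulate, assuming nothing about Facebook’s actual systems: every identifier in an uploaded address book gets attached to whoever it resolves to, whether or not that person ever shared it.

```python
from collections import defaultdict

# Hypothetical illustration of "shadow" contact data: details a person never
# gave the network, learned from other people's uploaded address books.
shadow_contacts = defaultdict(set)

def ingest_address_book(uploader, entries):
    """entries: (name, email_or_phone) pairs from one uploaded address book.
    Each identifier is attached to whichever account it resolves to, whether
    or not that account's owner ever shared it themselves."""
    for name, identifier in entries:
        shadow_contacts[identifier.lower()].add((uploader, name))

ingest_address_book("alice", [("Bob (work)", "bob@company.example")])
ingest_address_book("carol", [("Bobby", "bob@company.example")])

# Two otherwise unconnected uploaders both know whoever owns this address,
# so each may be suggested to the other -- and to Bob.
print(shadow_contacts["bob@company.example"])
```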

Smartphone behemoth Apple seems to have only recently realized how valuable address books are, and how easily they can be abused by nefarious actors. In a Bloomberg report, an iOS developer called address books “the Wild West of data.” In June, Apple changed its rules for app developers to forbid accessing iPhone contacts “to build a contact database for your own use.” Apple didn’t respond to a request for comment about whether Facebook’s collection of contact information for its People You May Know database violates that rule.


In 2010, Ellenora Fulk of Pierce County, Washington, saw a woman she didn’t recognize pop up in her People You May Know. In the accompanying profile photo, the woman was with Fulk’s estranged husband, standing next to a wedding cake and drinking champagne. After Fulk alerted the authorities, her husband, a corrections officer who had changed his last name, was charged with bigamy. He was sentenced to one year in jail, but was able to suspend the sentence by paying a $500 “victim compensation” fee, presumably to wife #1. Both marriages were ended, the first in divorce and the second in annulment. PYMK takes casualties.


Early on, Facebook realized there were some connections between people that it shouldn’t make. A person familiar with the People You May Know team’s early work said that as it was perfecting the art of linking people, there was one golden rule: “Don’t suggest the mistress to the wife.”

One of the primary ways PYMK systems figure out who knows each other is through “triangle-closing,” as LinkedIn put it in a blog post: “If Alice knows Bob and Bob knows Carol, then maybe Alice knows Carol.” But that can get awkward if you are making those connections by looking at a person’s private contact list rather than at their public friend list. Bob might have phone numbers for both Alice and Carol in his phone because Alice is his wife and Carol is his side piece. Bob doesn’t want that particular triangle to close, so Facebook’s engineers initially avoided making suggestions that relied solely on “two hops” through a contact book.
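
The public-graph half of that is easy to sketch. Below is an illustrative Python version of triangle-closing over friend lists, not Facebook’s or LinkedIn’s actual code: rank the people you are not yet friends with by how many triangles a suggestion would close.

```python
from collections import Counter

def friend_of_friend_candidates(user, friends):
    """friends: dict mapping each user to the set of people they're friends with.
    Returns non-friends of `user` ranked by how many mutual friends they share,
    i.e. how many 'triangles' a suggestion would close."""
    my_friends = friends.get(user, set())
    counts = Counter()
    for friend in my_friends:
        for candidate in friends.get(friend, set()):
            if candidate != user and candidate not in my_friends:
                counts[candidate] += 1
    return counts.most_common()

graph = {
    "alice": {"bob"},
    "bob": {"alice", "carol"},
    "carol": {"bob"},
}
print(friend_of_friend_candidates("alice", graph))  # [('carol', 1)]
```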

Despite hiccups like the Fulk incident, People You May Know was knocking it out of the park. During a presentation in July 2010, the engineer in charge of PYMK said it was responsible for “a significant chunk of all friending on Facebook.” That was important because “people with more friends use the site more,” according to the 2010 presentation by Lars Backstrom, who went on to become the head of engineering for all of Facebook.

Backstrom got his PhD from Cornell, where he studied how social networks evolve. When he joined Facebook in 2009, he got the chance to control that evolution, building “the PYMK backend infrastructure and machine learning system.” In his 2010 talk, Backstrom explained how the PYMK algorithm decided which “friends of friends” to put in your “People You May Know” box: Facebook looked at not just how many mutual friends you had, but how recently those friendships were made and how invested you were in them.

That all got converted into math. In engineering language, a person is a “node” and a friendship between people is an “edge.” If you appear to be in a clustered node with someone else—i.e., have a lot of mutual friends—and all the edges are fresh—i.e., a lot of those friendships are recent—that is like an algorithmic alarm bell going off, saying that a new clique has been formed offline and should be replicated digitally on the social network.

Illustration: Lars Backstrom (Graphanalysis.org)
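
Read literally, that description suggests something like the following sketch, which weights mutual friends by how fresh the connecting friendships are. It is illustrative only; Backstrom’s real system was a trained machine-learning model, not a hand-written formula.

```python
import time
from collections import defaultdict

WEEK = 7 * 24 * 3600

def score_candidates(user, friendships, now=None):
    """friendships: dict mapping frozenset({a, b}) -> timestamp the edge was created.
    Rank non-friends of `user` by mutual friends, weighting fresh edges more
    heavily so a newly formed offline clique surfaces quickly."""
    now = now or time.time()
    adjacency = defaultdict(set)
    for edge in friendships:
        a, b = tuple(edge)
        adjacency[a].add(b)
        adjacency[b].add(a)

    scores = defaultdict(float)
    for mutual in adjacency[user]:
        user_edge_fresh = 1.0 / (1.0 + (now - friendships[frozenset({user, mutual})]) / WEEK)
        for candidate in adjacency[mutual]:
            if candidate == user or candidate in adjacency[user]:
                continue
            cand_edge_fresh = 1.0 / (1.0 + (now - friendships[frozenset({mutual, candidate})]) / WEEK)
            scores[candidate] += user_edge_fresh * cand_edge_fresh
    return sorted(scores.items(), key=lambda kv: -kv[1])

now = time.time()
edges = {
    frozenset({"you", "ann"}): now - 2 * WEEK,       # recent friendship
    frozenset({"ann", "newcomer"}): now - 1 * WEEK,  # also recent: fresh triangle
    frozenset({"you", "old_pal"}): now - 200 * WEEK,
    frozenset({"old_pal", "stranger"}): now - 300 * WEEK,
}
print(score_candidates("you", edges))  # 'newcomer' far outranks 'stranger'
```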


But just having friends in common doesn’t mean that you necessarily want to be friends with someone. In 2015, Kevin Kantor recounted in spoken poetry how painful it was to have his rapist show up as a “person you should know.” He and his rapist had three mutual friends.

“When my rapist showed up under the People You May Know tab on Facebook it felt like the closest to a crime scene I’ve ever been.”

The same year, a woman whom I will call Flora, to protect her anonymity, went on a first date with a guy she met via a dating app. Flora doesn’t like new, strange men to know too much about her, so she only tells them her nickname. She was happy about that in this case, because things immediately turned sour with the guy, and he began to harass her via text, sending her messages repeatedly for months, which she ignored. In the fall of 2016, about a year after she first met him, he sent her a message revealing he now knew her real name because she had been suggested to him as a “person he may know” on Facebook.

When you start aggressively mining people’s social networks, it’s easy to surface people we know that we don’t want to know.


In the summer of 2015, a psychiatrist was meeting with one of her patients, a 30-something snowboarder. He told her that he’d started getting some odd People You May Know suggestions on Facebook, people who were much older than him, many of them looking sick or infirm. He held up his phone and showed her his friend recommendations which included an older man using a walker. “Are these your patients?” he asked.

The psychiatrist was aghast because she recognized some of the people. She wasn’t friends with her patients on Facebook, and in fact barely used it, but Facebook had figured out that she was a link between this group of individuals, probably because they all had her contact information; based apparently on that alone, Facebook seemed to have decided they might want to be friends.


“It’s a massive privacy fail,” the psychiatrist told me at the time.


In 2016, a man was arrested for car robbery after he was suggested to his victim as a Facebook friend. How that connection was made, if it wasn’t just a coincidence, is inexplicable.

In his 2010 presentation, Lars Backstrom said it would be near impossible for Facebook to suggest more than “Friends of Friends” as People You May Know. Yet he showed a graph that demonstrated that a good number of friendships on Facebook were between people who had no obvious tie. There was no path between them, even if you did a network analysis that allowed for 12 degrees of Kevin Bacon.

To be able to predict connections between people where the “paths” weren’t obvious, Facebook would need more data. And since then, it has developed new avenues to learn more about its users. It bought Instagram in 2012, and can now use information about whose photos you care about to recommend friends. In 2014, it bought WhatsApp, which would theoretically give it direct insight into who messages who.

Facebook says it doesn’t currently use information from WhatsApp for People You May Know, though a close read of its privacy policy shows that it’s given itself the right to do so: “Facebook … may use information from us to improve your experiences within their services such as making product suggestions (for example, of friends or connections, or of interesting content).”


Facebook continues to seek out novel ways to better get to know its users, reportedly seeking data from hospitals and from banks. And as more and more people downloaded Facebook’s apps to their smartphones, Facebook engineers realized that offered a well of valuable data for PYMK. In 2014, Facebook filed a patent application for making friend recommendations based on detecting that two smartphones were in the same place at the same time; it said you could compare the accelerometer and gyroscope readings of each phone, to tell whether the people were facing each other or walking together.
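
As a toy illustration of the patent’s idea (entirely hypothetical, and not Facebook’s code): if two phones log accelerometer magnitudes over the same stretch of time, strongly correlated traces suggest their owners were moving together.

```python
from statistics import mean

def correlation(xs, ys):
    """Pearson correlation of two equal-length accelerometer-magnitude traces."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs) ** 0.5
    var_y = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (var_x * var_y) if var_x and var_y else 0.0

def likely_moving_together(trace_a, trace_b, threshold=0.8):
    """Two phones whose motion traces rise and fall in step were probably
    carried by people walking together -- the patent's idea reduced to a
    single correlation check."""
    return correlation(trace_a, trace_b) >= threshold

walk = [1.0, 1.4, 0.9, 1.5, 1.1, 1.6]        # phone A, one sample per step
same_walk = [1.1, 1.5, 1.0, 1.4, 1.2, 1.5]   # phone B, matching rhythm
print(likely_moving_together(walk, same_walk))  # True
```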

Facebook said it hasn’t put that technique into practice and, despite persistent claims to the contrary, says that it doesn’t use location derived from people’s phones or IP addresses to make friend suggestions.

In 2015, an engineer suggested in a patent application that Facebook could look at photo metadata, such as presence of dust on the camera lens, to determine if two people had uploaded photos taken by the same camera. That anyone would ever want to be subjected to this level of scrutiny and algorithmic pseudo-science for the sake of a friend recommendation was not addressed by the engineer.


In 2016, North Carolina artist Andy Herod opened a show called Sorry I Made It Weird: Portraits of People You May Know. Herod had painted portraits of 30 strangers who Facebook had suggested he might know. He didn’t actually know any of them.

Artist Andy Herod with one of the portraits inspired by a People You May Know suggestion he received on Facebook. Photo: Andy Herod

“Facebook is such a big part of people’s lives,” said Herod by phone. “They don’t think about the fact that their photos are constantly being popped up into strangers’ homes, through PYMK.”


Herod wanted to put those photos permanently on someone’s walls. An Asheville art collector, who prefers to stay anonymous, bought the bulk of Herod’s series. As it happens, the collector is not a member of the social network; he quit Facebook in 2009 because it was “one big ad space” and a “graveyard of ex girlfriends”—which is how a lot of people might describe their People You May Know.

Quitting Facebook is the obvious answer for users disturbed by the social network’s practices. But for people dependent on Facebook for professional or personal reasons, it’s not an option, so they remain and have to accept that the social network will mine information about them that they can’t see or control to make unwelcome suggestions to them.

That mining is particularly disturbing because it seems Facebook may have abandoned its own golden rule against making friend suggestions based on “two hops” through contact books. Last year, in 2017, Facebook recommended I friend a relative I didn’t know I had. I could not figure out how Facebook had linked me to Rebecca Porter, a biological great-aunt from an estranged part of my family, because none of the people who linked us were on Facebook. Since then I’ve determined it must be because Facebook drew a long and complicated path between me and a distant relative by analyzing information in the contact books of two otherwise disconnected users: Rebecca Porter and my stepmother both had the email address and phone number for another Porter, and I am friends with my stepmother on Facebook. If that is indeed how Facebook made the link, that is some NSA-level network science.

Making connections like that is how you wind up “recommending the mistress to the wife.” An acquaintance of mine recently told me that happened to him, but the gender roles were reversed. He figured out his wife had resumed an affair she had ended years earlier when the guy suddenly started showing up in his People You May Know. Facebook was essentially telling him, “Hey, this guy is part of your network again.” He confronted his wife and she admitted to it.

“Thank you, Facebook, for being the fucking Stasi,” he texted me.


Facebook won’t make its current People You May Know team available for interviews. But in a leaked memo published by Buzzfeed in March, Facebook executive Andrew Bosworth explained the thinking that motivates tools like PYMK.

“The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good,” he wrote in 2016. “That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends.”

In other words, People You May Know is an invaluable product because it helps connect Facebook users, whether they want to be connected or not. It seems clear that for some users, People You May Know is a problem. It’s not a feature they want and not a feature they want to be part of. When the feature debuted in 2008, Facebook said that if you didn’t like it, you could “x” out the people who appeared there repeatedly and eventually it would disappear. (If you don’t see the feature on your own Facebook page, that may be the reason why.) But that wouldn’t stop you from continuing to be recommended to other users.

Facebook needs to give people a hard out for the feature, because scouring phone address books and email inboxes to connect you with other Facebook users, while welcome to some people, is offensive and harmful to others. Through its aggressive data-mining, this huge corporation is gaining unwanted insight into our medical privacy, past heartaches, family dramas, sensitive work associations, and random one-time encounters.

So Facebook, consider belatedly celebrating People You May Know’s 10th anniversary by letting users opt out of it entirely.


Contact the Special Projects Desk

This post was produced by the Special Projects Desk of Gizmodo Media. Reach our team by phone, text, Signal, or WhatsApp at (917) 999-6143, email us at tips@gizmodomedia.com, or contact us securely using SecureDrop.


Gaming News

Facebook Wanted Us to Kill This Investigative Tool

August 7, 2018 — by Kotaku.com


Illustration: Jim Cooke

Last year, we launched an investigation into how Facebook’s People You May Know tool makes its creepily accurate recommendations. By November, we had it mostly figured out: Facebook has nearly limitless access to all the phone numbers, email addresses, home addresses, and social media handles most people on Earth have ever used. That, plus its deep mining of people’s messaging behavior on Android, means it can make surprisingly insightful observations about who you know in real life—even if it’s wrong about your desire to be “friends” with them on Facebook.

In order to help conduct this investigation, we built a tool to keep track of the people Facebook thinks you know. Called the PYMK Inspector, it captures every recommendation made to a user for however long they want to run the tool. It’s how one of us discovered Facebook had linked us with an unknown relative. In January, after hiring a third party to do a security review of the tool, we released it publicly on Github for users who wanted to study their own People You May Know recommendations. Volunteers who downloaded the tool helped us explore whether you’ll show up in someone’s People You May Know after you look at their profile. (Good news for Facebook stalkers: Our experiment found you won’t be recommended as a friend just based on looking at someone’s profile.)
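
In spirit, that kind of logger is simple. The sketch below is a generic stand-in rather than the actual PYMK Inspector code, and fetch_recommendations is deliberately left as a placeholder for however the tool reads the list using the user’s own session.

```python
import json
import time
from pathlib import Path

LOG_FILE = Path("pymk_log.json")

def fetch_recommendations(session):
    """Placeholder: the real PYMK Inspector reads the logged-in user's own
    People You May Know list; how it does so is not reproduced here."""
    raise NotImplementedError

def record_daily(session):
    """Append today's recommendations to a local log, keeping first-seen dates
    and a count of how often each person reappears. Everything stays on the
    user's own machine, as with the real tool."""
    log = json.loads(LOG_FILE.read_text()) if LOG_FILE.exists() else {}
    today = time.strftime("%Y-%m-%d")
    for person in fetch_recommendations(session):
        entry = log.setdefault(person, {"first_seen": today, "times_seen": 0})
        entry["times_seen"] += 1
    LOG_FILE.write_text(json.dumps(log, indent=2))
```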


Facebook wasn’t happy about the tool.

The day after we released it, a Facebook spokesperson reached out asking to chat about it, and then told us that the tool violated Facebook’s terms of service, because it asked users to give it their username and password so that it could sign in on their behalf. Facebook’s TOS states that, “You will not solicit login information or access an account belonging to someone else.” They said we would need to shut down the tool (which was impossible because it’s an open source tool) and delete any data we collected (which was also impossible because the information was stored on individual users’ computers; we weren’t collecting it centrally).

We argued that we weren’t seeking access to users’ accounts or collecting any information from them; we had just given users a tool to log into their own accounts on their own behalf, to collect information they wanted collected, which was then stored on their own computers. Facebook disagreed and escalated the conversation to their head of policy for Facebook’s Platform, who said they didn’t want users entering their Facebook credentials anywhere that wasn’t an official Facebook site—because anything else is bad security hygiene and could open users up to phishing attacks. She said we needed to take our tool off Github within a week.


We started to worry at this point about what the ramifications might be for keeping the tool available. Would they kick us off Facebook? Would they kick Gizmodo off Facebook? Would they sue us?

We decided to change the tool slightly so that it directed users to a Facebook sign-in page to log in, and then used session cookies to keep logging in each day and checking the PYMK recommendations. Facebook, though, still disapproved, and said they had another problem with the tool.

“I discussed the general concept of the PYMK inspector with the team with respect to whether it is possible to build the inspector in a policy compliant manner and our engineers confirmed that our Platform does not support this,” wrote Allison Hendrix, the head of policy for Facebook’s Platform, by email in February. “We don’t expose this information via our API and we don’t allow accessing or collecting data from Facebook using automated means.”


In other words, Facebook doesn’t have an official way for people to keep track of their PYMK recommendations and that means users aren’t allowed to do it. Facebook is happy to have users hand over lots of data about themselves, but doesn’t like it when the data flows in the other direction.

Shortly thereafter, in March, Facebook’s world exploded, when it was revealed that Cambridge Analytica had gotten access to the profile information of millions of Facebook users, going through what was considered an “official route” in 2012. Facebook stopped bothering us about our PYMK Inspector, and the tool currently remains up.


“We often work with developers to address concerns about their apps and tools; especially if they are found to violate our terms. We contacted Gizmodo concerning their tool because it asked people to provide their Facebook login information and did so in a way that may have made them vulnerable to phishing attempts,” said Hendrix in an emailed statement when we asked for comment about the tool this week. “When people are encouraged to provide their Facebook information in a way that is different from what they’re used to, they might trust other, malicious forms and our terms attempt to prevent this. After extended conversations where we gave Gizmodo an opportunity to make updates; they decided to make the necessary changes. We understand that the tool is still live.”

The episode demonstrated a huge problem to us: Journalists need to probe technological platforms in order to understand how unseen and little understood algorithms influence the experiences of hundreds of millions of people—whether it’s to better understand creepy friend recommendations, to uncover the potential for discrimination in housing ads, to understand how the fake follower economy operates, or to see how social networks respond to imposter accounts. Yet journalistic projects that require scraping information from tech platforms or creating fictitious accounts generally violate these sites’ terms of service.

That’s why a team of lawyers at the Knight First Amendment Institute at Columbia University has sent a letter to Facebook on behalf of Kashmir Hill of Gizmodo Media Group and other journalists and academic researchers asking Facebook to amend its terms of service to create a safe harbor for journalistic and research projects. That would mean journalists and researchers using automated means or fictitious accounts to gather data about Facebook and how it works for stories that serve the public interest won’t be threatened with breach of contract or violating the Computer Fraud and Abuse Act, which has been interpreted in the past as prohibiting violations of a site’s TOS.


“Facebook shapes public discourse in ways that are not fully understood by the public or even by Facebook itself. Journalists and researchers play a crucial role in illuminating Facebook’s influence on public discourse,” wrote the Knight First Amendment Institute’s Jameel Jaffer, Alex Abdo, Ramya Krishnan, and Carrie DeCell in a letter sent to Facebook CEO Mark Zuckerberg on Monday. “Facebook’s terms of service severely limit their ability to do that work, however, by prohibiting them from using basic tools of digital investigation on Facebook’s platform.”

The lawyers at the Knight First Amendment Institute, which recently successfully sued President Donald Trump for violating people’s freedom of speech by blocking their tweets, attached a proposed amendment for Facebook to add to its terms of service. They asked for a response by the beginning of September.

Facebook seemed reluctant to offer such a safe harbor when asked for comment.

“We appreciate the Knight Institute’s recommendations. Journalists and researchers play a critical role in helping people better understand companies and their products – as well as holding us accountable when we get things wrong,” said Campbell Brown, Facebook’s head of global news partnerships, in a statement sent by email. “We do have strict limits in place on how third parties can use people’s information, and we recognize that these sometimes get in the way of this work. We offer tools for journalists that protect people’s privacy, including CrowdTangle, which helps measure the performance of content on social media, and a new API we’re launching to specifically analyze political advertising on Facebook.”


The API refers to Facebook’s plans to give journalists and researchers automated access to an archive of the political ads run on Facebook. That, of course, is a small sliver of what disturbs people about the Facebook world, leaving a lot of other information officially out of journalists’ reach.

Contact the Special Projects Desk

This post was produced by the Special Projects Desk of Gizmodo Media. Reach our team by phone, text, Signal, or WhatsApp at (917) 999-6143, email us at tips@gizmodomedia.com, or contact us securely using SecureDrop.

Tech News

John Oliver made his own version of Facebook’s ‘we're sorry’ video

July 30, 2018 — by Engadget.com

HBO

In April, after the Cambridge Analytica scandal erupted, Facebook put out an ad that was meant to reassure users about how their data would be treated going forward. Dubbed the “Here Together” ad, the video points to some of the issues that have come along with Facebook, like spam, clickbait and fake news, though the closest it gets to actually acknowledging the Cambridge Analytica debacle is noting “data misuse.” “Facebook will do more to keep you safe and protect your privacy,” said the ad. Well, last week Facebook’s stock price plummeted, knocking around $120 billion off the social media giant’s market value and inspiring John Oliver to tweak the ad just a bit.

The parody ad Oliver played on Last Week Tonight starts out similarly: a soothing voice dubbed over various Facebook posts while a piano plays in the background. It talks about the friends that brought you to Facebook in the first place. But then, rather than highlighting how it wants to get back to a focus on friends, as the original ad did, Oliver’s goes on to describe just how much data Facebook collects on its users, and how much money it makes off of that data.

[Video: John Oliver’s Last Week Tonight parody of Facebook’s apology ad]

“Your data enabled us to make a fuck-ton of money from corporations, app developers and political campaigns,” says the ad. “Seriously you guys, we were making so much money off of you, you don’t even understand. But then, you found out about it. And we had to testify and issue bullshit apology ads, all so we could lose $120 billion.” But what will Facebook do going forward? “But here’s the thing,” it says, “nothing’s really gonna change. We’ve got your data. We’ve got your friends. And, really, where are you gonna go? Friendster? Fuck you.”

The ad then says in no uncertain terms that Facebook will continue to find “subtle ways to violate your privacy.” And while users watch videos of cats eating corn, dogs riding horses and kids “beating the shit out of each other,” the company will just keep making “an ungodly amount of money.” It wraps up with a poignant, “Facebook: We own who you are.”

You can check out the segment in the video above.

Tech News

UK politicians blame Facebook for the rise of fake news

July 30, 2018 — by Engadget.com

Francois Lenoir / Reuters

After an 18-month investigation, the UK parliament has issued a scathing report on the handling of fake news and illegal election ads by tech companies, especially Facebook. The Digital, Culture, Media and Sport (DCMS) committee said Facebook “obfuscated” information and refused to probe potential Russian abuse until forced to by the US Senate. Worst of all, the social network helped spread disinformation and hatred against the Rohingya minority in Myanmar. “Facebook is releasing a product that is dangerous to consumers and deeply unethical,” the report states.

The first part of the report details Facebook’s failings and proposes measures that would result in drastic changes at the social network. For one, it recommends criminal prosecution against Facebook if it fails to act against harmful and illegal content.

It also recommends bans on micro-targeted political ads, the creation of a new designation for Facebook that’s somewhere in between platform and publisher, new powers for the UK Electoral Commission to combat fake news on social media, and a comprehensive overhaul of election advertising legislation. Finally, it issued another demand that Mark Zuckerberg “come to the committee to answer questions to which Facebook has not responded adequately to date.”

The report also recommends an industry-wide code of ethics: “A professional global Code of Ethics should be developed by tech companies, in collaboration with this and other governments, academics, and interested parties, including the World Summit on Information Society, to set down in writing what is and what is not acceptable by users on social media, with possible liabilities for companies and for individuals working for those companies, including those technical engineers involved in creating the software for the companies.”

The committee also expressed concern about Cambridge Analytica, specifically that it had worked for the UK government with a “secret clearance.” It also points out that the firm had ties with a Malta-based company that was essentially selling Maltese passports (and, by extension, access to Europe). It notes that investigative journalist Daphne Caruana Galizia, assassinated by a car bomb last year, was investigating that very passport scheme.

Finally, the DCMS urged an investigation into Russia’s ties to disinformation on social media. Committee chair Damian Collins said that early inquiries into Russian disinformation on Facebook soon led to questions about interference in Brexit and other UK elections. “And we noticed an aggressive campaign against us even asking these questions. It underlined the need to persist, which we have done,” he told the Observer.

It said that the millionaire backer of Leave.EU had ties to Russian companies and officials, and urged the National Crime Agency (NCA) to follow the money that paid for ads on Facebook and other sites. It also criticized statements by Facebook’s UK Policy Director Simon Milner that the company wasn’t aware of Russian interference in the Brexit referendum: “We deem Mr. Milner’s comments to the Committee to have been disingenuous.”

Tech News

After Math: The price of doing business

July 29, 2018 — by Engadget.com


AFP/Getty Images

Elon Musk just can’t seem to stay out of the news. After last week’s tirade against the Thai cave rescue diver, his girlfriend took to Twitter to defend his large donations to the GOP as “the price of doing business in america [sic].” But that price differs depending on who you ask. For right-wing troll Alex Jones, that price is a 30-day timeout from Facebook and YouTube, but for MoviePass that price could well be the company’s entire operation.

83 million active paying users: Family plans have long been a staple of mobile carriers as a means of locking in customers (and their families) to long-term contracts and now the practice is bleeding over into streaming services as well. It seems to be working for Spotify, which announced it’s added 8 million new paid subscribers in the last fiscal quarter.

$5 million in loans: MoviePass might not be around for much longer. The company suffered a service outage this week because it didn’t have the funds to buy a sufficient number of tickets, forcing MoviePass’ parent company to borrow $5 million. This is the second such outage this month alone.

$0: If you live in Japan and your iPad was damaged in the recent, deadly flooding, there is a silver lining. Apple announced this week that it will repair or replace any of its products damaged by the rising floodwaters free of charge.

$71.3 billion: Disney and Fox’s proposed merger took another step towards completion this week when the companies’ respective shareholders voted to approve the multi-billion dollar deal. There’s still work to be done before the merger is scheduled for completion early next year. The companies must shed 22 regional sports networks to comply with DOJ anti-trust demands, for example.

30 days: Alex Jones will have to go back to shouting his conspiracy theories on street corners for the next month after both YouTube and Facebook have issued temporary bans on his use of their platforms (as well as removed a number of community standard-violating videos). This, of course, is barely a slap on the wrist and will likely do nothing to dissuade him from arguing that fluoride in the water supply turns frogs gay or whatever he’s making up this week to sell his snake oil brain supplements.

0 loot crates: As the makers of Star Wars Battlefront II can attest, the inclusion of loot crates in modern games is quickly becoming a toxic asset for developers. That’s why Turn 10, makers of the Forza racing series, announced that they’ll be phasing out the crates in Motorsport 7 and won’t include them at all in Horizon 4.

Tech News

Facebook blocks Infowars’ Alex Jones from posting for 30 days

July 27, 2018 — by Engadget.com

Facebook

Days after YouTube took down multiple videos from conspiracy theorist Alex Jones’ video channel, Facebook has suspended his account on the platform for 30 days. The InfoWars founder had violated the social network’s Community Standards, according to Mashable. If he or his fellow admins keep breaking the rules, Jones’ personal page could be permanently banned.

“Our Community Standards make it clear that we prohibit content that encourages physical harm [bullying], or attacks someone based on their religious affiliation or gender identity [hate speech],” a Facebook spokesperson told Mashable.

The social network received reports of four videos — several of which were exactly the same as those taken down on YouTube — on pages that Infowars and Jones maintained on the platform. Facebook subsequently took that content down and blocked Jones. This functionally shuts him out from posting any content on his own personal page, or on any he’s an admin for, during the 30-day period. In other words, other InfoWars admins can keep posting on pages within “The Alex Jones Channel” unless they, too, break Facebook’s rules.

After providing conflicting info, Fb now confirms Alex Jones’ personal profile was banned, but not his Pages like “Alex Jones”, The Alex Jones Channel, or Infowars https://t.co/fLlPx6LtMJ

— Josh Constine (@JoshConstine) July 27, 2018

After some confusion earlier today, Facebook clarified that Jones had personally received a warning previously, and thus earned the punitive 30-day block for the four Community Rules-violating videos he uploaded. His channel only received a reprimand this time and is still active, but is close to reaching the level where Facebook could permanently remove the page, a spokesperson told CNET.

Whether or not this sets a precedent for content appearing on Facebook, the company’s ban methodology remains vague. One of the four banned videos had even been marked as acceptable after a moderator reviewed it last month, according to TechCrunch. Only after YouTube took down Jones’ videos two days ago did Facebook revisit this content and declare its earlier decision mistaken. While the social network may further clarify this incident, it remains unclear what does and doesn’t violate its content guidelines — especially after it decided just days ago that Jones’ rant accusing special counsel Robert Mueller of pedophilia could stay on the platform.

Tech News

White House reportedly working on federal data privacy policy

July 27, 2018 — by Engadget.com

Design Pics

The Trump administration is working on a set of data privacy protections, the Washington Post reports, and according to the National Telecommunications and Information Administration, officials have held 22 meetings with more than 80 companies and groups since last month. Companies like Facebook, Google, AT&T and Comcast have been involved, according to four Washington Post sources familiar with the matter. The short-term goal is to deliver a data privacy proposal — including how data should be collected and handled and what rights consumers have regarding that data — which could serve as a guide for lawmakers as they consider legislation.

Axios reported last month that the White House was looking into a data privacy plan, meeting with groups like the Information Technology Industry Council, a trade group representing companies such as Apple, Google and Facebook, and The Business Roundtable, a lobbying group that hosts tech CEOs like Apple’s Tim Cook, IBM’s Virginia Rometty and Verizon’s Lowell McAdam.

“Through the White House National Economic Council, the Trump Administration aims to craft a consumer privacy protection policy that is the appropriate balance between privacy and prosperity,” Lindsay Walters, the president’s deputy press secretary, told the Washington Post. “We look forward to working with Congress on a legislative solution consistent with our overarching policy.”

The move comes as major missteps, such as Facebook’s Cambridge Analytica debacle, have caused both consumers and lawmakers to call for more consumer control over digital data. The White House’s interest also follows the implementation of Europe’s GDPR regulations, a rigorous set of data privacy rules put into place in May.

But the Washington Post notes that while consumer protections may be driving some of these White House conversations, the desire for a less aggressive set of regulations at the federal level may be contributing as well. Commerce Secretary Wilbur Ross has said previously that GDPR “could significantly interrupt transatlantic cooperation and create unnecessary barriers to trade” and some businesses are reportedly pushing the Trump administration toward a plan that’s less rigorous than GDPR. A draft proposal put together by the Chamber of Commerce and obtained by the Washington Post calls for consumers to have more control over their information but appears to limit what legal options consumers would have against companies that collect their data.

The draft proposal also asks Congress to devise a law that would preempt any state laws, notable as California has just passed its own set of data privacy regulations. Vermont has taken on data privacy through legislation as well.

The White House is reportedly working to have its data privacy plan set this fall. Meanwhile, multiple lawmakers have now introduced their own data privacy bills in both the Senate and the House of Representatives.

Tech News

Facebook pulls back the curtain on its content moderators

July 26, 2018 — by Engadget.com

@ETFP (Twitter)

When someone reports an offensive post on Facebook (or asks for a review of a message caught by its automatic filters), where does it go? Part of the process is, as it always has been, powered by humans, with thousands of content reviewers around the world. Last year Facebook said it would expand the team to 7,500 people, and in an update posted today explaining more about their jobs, it appears that mark has been hit.

The team is sized so that reviewers are available in a post’s native language, although some items, like nudity, might be handled without regard to location. Of course, there’s extensive training and ongoing reviews to try to keep everyone consistent — although some would argue that the bar for consistency is misplaced.

Facebook didn’t reveal too much about the individuals behind the moderation curtain, specifically citing the shooting at YouTube’s HQ, even though it’s had firsthand experience with leaking identities to the wrong people before. It did, however, bring up how the moderators are treated, insisting they aren’t required to hit quotas while noting that they have full health benefits and access to mental health care. While it might not make understanding Facebook’s screening criteria any easier — or let us know if Michael Bivins is part of the rulemaking process — the post is a reminder that, at least for now, there is still a human side to the system.