Tag: Opinion

Facebook’s fake war on fake news

It's hard watching Facebook struggle. For the past two years it has alternated between looking like it's doing something about fake news and actually doing something about fake news.

The company's latest stab at the problem is a change to what people see in their News Feeds. The goal is to show users fewer posts from companies and brands, and more shares (or posts) from friends -- in particular, ones its algorithm thinks will get you excited.

They're not specifically saying this has anything to do with stopping the spread of fake news from virulent racists, politically active conspiracy theorists, or propaganda farms successfully goading our country into tearing itself apart.

No, because that would indicate they've identified the problem. Instead, Facebook says this notable change to the News Feed -- its cash cow fed by your attention -- is to make Facebook feel more positive for users. To bring people closer together.


At this stage, Facebook could lead a masterclass on how not to solve the fake news problem. De-prioritizing actual news organizations and instead highlighting that InfoWars story about eating Tide Pods that your racist uncle shared and commented on five times is just the latest hasty missive in what seems like Facebook's desire to amplify the issue.

While some are stroking their chins thoughtfully and musing unquestioningly that Facebook just wants its users to be happy, it's appropriate to examine the contempt for its users that got us into this situation in the first place.

Right after the November 2016 election, despite being warned by the US government and it being widely reported months in advance that fake news and propaganda were influencing American voters, Mark Zuckerberg flatly denied what everyone was telling him, his users, and the world. "Of all the content on Facebook, more than 99% of what people see is authentic," he wrote. He also cautioned that the company should not rush into fact-checking.

America began its spin into chaos. Countries around the world, including the US, were seeing racial violence in the streets that we now know was directly correlated with racist rhetoric on Facebook.

Facebook had all the data. All the answers.

Facebook treated it like a reputational crisis.

In response the company rolled out a "Disputed" flagging system announced in a December 2016 post. The weight of policing the fake news disaster was placed on users, who would hopefully flag items they believed were fake. These items were then handed to external fact-checking organizations at a glacial pace. When two of the orgs signed off on the alleged fake item, the post got a very attractive "disputed" icon and the kind of stern warning that any marketer could tell you would be great for getting people to click and share.

The "Disputed" flag system was predicted to fail from the start, but Facebook didn't seem to care.

In April 2017, Facebook characterized its efforts as effective, stating that "overall that false news has decreased on Facebook," but did not provide any proof. It said that was because "It's hard for us to measure because we can't read everything that gets posted."

In July, Oxford researchers found that "computational propaganda is now one of the most powerful tools against democracy," including that Facebook plays a critical role in it.

In August, Facebook announced it would ban pages that post hoax stories from being allowed to advertise on the social network. This preceded a bombshell. In September, everyone who'd been trying to ring the alarm about fake news, despite Facebook's denials and downplaying, found out just how right they were.

This was the month Facebook finally admitted -- under congressional questioning -- that a Russian propaganda mill used the social-media giant's ad service for political operations around the 2016 campaign. This came out when sources revealed to The Washington Post that Facebook had been grilled behind closed doors by congressional investigators looking into Russian interference in the 2016 election.

Meanwhile, Facebook's flag system shambled along like a zombie abandoned in the desert.

In September, Facebook's fact-checking organizations told the press they were ready to throw in the towel, citing Facebook's own refusal to work with them. Facebook seemed to be actively undermining the fact-checking efforts.

Politico wrote:

(...) because the company has declined to share any internal data from the project, the fact-checkers say they have no way of determining whether the "disputed" tags they're affixing to "fake news" articles slow -- or perhaps even accelerate -- the stories' spread. They also say they're lacking information that would allow them to prioritize the most important stories out of the hundreds possible to fact-check at any given moment.

By November 2017 fake news as a service was booming.

The following month, December 2017, Facebook publicly declared that the disputed flag system wasn't working after all. One full year after its launch.

Its replacement, "Related Articles," was explained in a Medium post that came across as more experimentation on users, paired with a deep aversion to talking about what's really going on behind the scenes.

There's more to this story, but you get the idea. It's a juggernaut of a rolling disaster capped off by this month's hand-on-heart pledge to connect people better and cut actual news organizations out of the picture.

"Facebook is a living, breathing crime scene for what happened in the 2016 election -- and only they have full access to what happened," Tristan Harris, a former design ethicist at Google, told NBC News this week.

Facebook's response included the following in a statement:

In the past year, we've worked to destroy the business model for false news and reduce its spread, stop bad actors from meddling in elections, and bring a new level of transparency to advertising. Last week, we started prioritizing meaningful posts from friends and family in News Feed to help bring people closer together. We have more work to do and we're heads down on getting it done.

As my colleague Swapna Krishna put it, "The company is making it harder for legitimate news organizations to share their stories (and thus counter any false narratives), and by doing so, is creating a breeding ground for the fake news it's trying to stamp out in the first place."

If only Facebook put as much effort into policing fake news as it does into stomping on free speech in the form of human sexuality, enforcing extreme, antiquated notions of puritanism with its exacting sex censorship.

Indeed, this week a former Facebook content monitor told NBC News that Facebook's team of content reviewers focused mainly on violence and pornography, making it "incredibly easy" for Russian trolls to fly under the radar with their fake news. They added, "To sum it up, what counts as spam is anything that involves full nudity."

Thank goodness. I mean, who knows how the world would spiral uncontrollably into chaos and violence if someone saw a boob.

Images: Getty Images/iStockphoto (Holding hands); Simon Potter via Getty Images (Readouts); Getty Images (Mark Zuckerberg)

After Math: CES 2018 by the numbers

After a week in the desert, CES 2018 has finally come to a close. Booths were trod, products were demoed and the conference was visited by only one of the biblical plagues. Puffco debuted one of the only cannabis gadgets seen at CES in recent memory, a gaming robot beat virtually every human who challenged it in Scrabble, and Toyota's "E-Palette" mobility concept turned all of the heads. Numbers, because how else will you tally votes for the Best of CES awards?

20 seconds: That's how long it takes for the Puffco Peak concentrate vaporizer to fully heat up -- a fraction of the time e-nails need, and a far less flammable approach than the butane torch method.

5 years: That's how long we'll have to wait for regulators to work their magic before hopping into Volocopter's 18-rotor autonomous sky taxi. Just make sure wherever you're flying to is within the 30-minute range limit.

$120: That's how much you're going to pay for the Vortx gaming accessory if you want to have air puffed into your face during your next Overwatch session.

82: That's the age of first-time CES exhibitor Carol Staninger, who's here to help save the lives of children by alerting an adult when said ankle biters are left unattended in hot cars.

112–81: That was the final score in a friendly match between Engadget managing editor Terrence O'Brien and ITRI's Scrabble-playing AI. The AI won handily, despite its hands actually being manipulators.

2 hours: That's how long the Las Vegas Convention Center was thrust into darkness on Wednesday. Oh, the sweet, sweet irony of the world's largest electronics expo losing electricity for hours on end.

3: That's how many Best of CES awards the Toyota e-Palette mobility concept took home: Best Transportation Tech, Best Innovation and the coveted Best of the Best of CES award. Fingers crossed it actually makes it out of testing and onto our roadways.

Click here to catch up on the latest news from CES 2018.

CES showed us smart displays will be the new normal

Before the start of CES 2018, the only real smart speakers with a display were the Amazon Echo Show and the Echo Spot. But now that Google has partnered with several manufacturers to make a whole line of Echo Show rivals, a bona fide new device category has been born: the smart display. And based on the devices revealed this week, I believe the smart display will slowly start to outnumber smart speakers and will likely be the norm going forward.

The simple reason for this argument is that the display makes such devices much more useful. Sure, you could have Alexa or Google Assistant tell you there's a Starbucks 1.5 miles away from you. But wouldn't it be nice to actually see where it is on a map? Or if you wanted to know the time, you could just, you know, look at the screen. Or if you wanted to know who the artist of the song is but couldn't be bothered to interrupt the track, you could do the same. That extra visual layer is really useful, especially for quick, glanceable information.

Of course, you could've made this same argument months ago when the Echo Show debuted. But these new Google Assistant displays are so much better in almost every way. For example, when you make a search query, it won't just spit out a short generic answer with the transcript showing up on-screen; it'll actually appear in a way that makes sense. So if you search for "cornbread recipe," the display will offer an array of recipes to choose from. Tap on one and you'll be presented with a lovely step-by-step recipe guide, all without having to install any additional skill or action.

Or if you ask a Google Assistant smart display to play relaxing music, it won't pick out a random playlist and start playing a song you don't want (something that happens quite frequently with the Echo). Instead, it'll offer a visual selection of playlists, which you can then scroll through and pick the one you want. Perhaps my favorite feature is when you ask for directions. It will not only show you the map on the screen but also send those same directions straight to your phone without you having to ask.

Plus, Google has now opened the door for so many more companies to start making smart displays. At CES, we saw Lenovo, JBL and LG show off their versions, each with very different designs. Eventually, even more companies will join the fray, adding their own spin on what a smart display looks like. With so many options on the market, there'll soon be a smart display for every kind of home. Amazon might've introduced the smart-display concept, but Google will be the one to democratize it.

And this is just the beginning. Smart displays can be incorporated in more than just a little 10-inch prop on the table. Personal assistants are already in smart fridges from LG and Samsung, so it doesn't take much imagination to think that Alexa and Google Assistant displays could take over the rest of your home. Imagine a smart display not only on the front of your fridge but also in the kitchen TV or maybe the bathroom mirror. Soon smart displays will be everywhere. CES 2018 was just the beginning.

Click here to catch up on the latest news from CES 2018.

After Math: CESpocalypse Now

Get hyped everybody, it's CES week! This is the high holy holiday of tech geekdom, a pilgrimage through the hallowed halls of the Las Vegas Convention Center. Everybody's going to be there. LG will be showing off an 88-inch 8K TV, Neutrogena is debuting its skin-grading iPhone accessory, and Honda has all of the adorable mobility bots. Numbers, because how else will we count down to the show's opening?

4: That's how many of its concept mobility robots Honda is showing off at CES this year. They've got a companion-bot, a wheelchair-bot, a wheeled pack-bot and an autonomous ATV.

$1,000: That's how much Vuzix's Alexa-enabled AR glasses will set you back, assuming you're the sort of person who needs a pair of Alexa-enabled AR glasses right friggin' now. Or you could wait until 2019, when the company figures the price will drop by half.

3: That's how many 3D printers XYZPro will be displaying at CES. There will be the $45 da Vinci 3D Pen Cool, which promises not to burn the heck out of your fingers with molten plastic; the $230 tablet-controlled da Vinci Nano printer; and the burlier $4,000 da Vinci color AiO for small businesses, which can both scan and print items in full color.

Also 3: That's the number of new service robots that LG plans to unveil at CES this year. You've got the Serving Robot, Porter Robot and Shopping Cart Robot -- each doing exactly what its name implies.

88: That's how many inches diagonal LG's ludicrous 8K OLED display is. That's 11 inches and an extra 4 K's bigger than the ginormous monitor LG showed off at last CES. Oh the difference a year makes.

0-100: That's the scale by which Neutrogena's SkinScanner concept iPhone accessory will grade the quality of your skin. Should your epidermis be found lacking, the scanner's app will direct you to Neutrogena's website where you can buy various tinctures and topicals to "fix" the "problems" with your skin. Or you can just love yourself for who you are on the inside and not worry about meeting some unobtainable societal standard of beauty.

Click here to catch up on the latest news from CES 2018.

2017’s biggest cybersecurity facepalms

2017 was a year like no other for cybersecurity. It was the year we found out the horrid truths at Uber and Equifax, and border security took our passwords. A year of WannaCry and Kaspersky, VPNs and blockchains going mainstream, healthcare hacking, Russian hackers, WikiLeaks playing for Putin's team, and hacking back.

In 2017 we learned that cybersecurity is a Lovecraftian game in which you trade sanity for information.

Let's review the year that was (and hopefully will never be again).

Moscow mules

This was the year Kaspersky finally got all the big press it had been angling for. Unfortunately, it wasn't for its research. The antivirus company spent an uncomfortable year in the headlines, accused of working with Russia's FSB (the former KGB). Eventually those suspicions got it banned from use by US government agencies.

Kaspersky's alleged coziness with Putin's inner circle has made the rounds in the press and infosec gossip for years. But it came to a head when an NSA probe surfaced, the Senate pushed for a ban, and -- oddly -- the Trump administration came with the executioner's axe.

Obviously, Kaspersky -- the company, and its CEO of the same name -- denied the accusations, and offered to work with the US government. They offered up their code for review and filed suit when the ban passed.

At this point, the only thing that might save Kaspersky's reputation in the US is finding us that pee tape. Fingers crossed.

Be still my backdoored heart

A ransomware attack on Hollywood Presbyterian Hospital in 2016 put health care hacking center stage, but in 2017 it turned into a true nightmare.

The WannaCry ransomware attack spread like wildfire, locking up a third of the National Health Service (NHS) in England. That was followed by other worms, like Petya/NotPetya, which hit US hospitals in June.

The security of pacemakers was exposed as being awful, specifically in the case of medical device manufacturer St. Jude Medical (now rebranded as Abbott). A lot of people hated on researcher Justine Bone and MedSec for the way they went about exposing pacemaker flaws, but they were right. The FDA put a painful pin in it when it notified the public of a voluntary recall (as a firmware update) of 465,000 pacemakers made by St. Jude Medical.

Meanwhile, white hat hackers put together the first Cyber Med Summit -- a doctor-run, hacker boot camp for medical professionals. That the Summit exists is a tiny bit of good news in our medical mess, but it also proved that you should probably make sure your doctor keeps a hacker on staff.

Medical staff at the Summit got a wake-up call about medical devices exploits, and concluded they need to add "hacking" to their list of possible problems to assess and diagnose.

I'm not crying, you're crying

On May 12, over 150 countries were hit in one weekend by a huge ransomware crimewave named WannaCry. The attack exploited a remote code execution vulnerability in Windows (XP up through Windows Server 2012) using "EternalBlue," an exploit found in the April Shadow Brokers/NSA dump. Those who did their Windows updates were not affected.

WannaCry demanded $300 in Bitcoin from each victim; the UK's National Health Service (NHS) was among those hit. The ransomworm was stopped in its tracks by the registration of a single domain that behaved like a killswitch. The creators apparently neglected to secure their own self-destruct button.

Researcher MalwareTech was the hero of the day with his quick thinking, but was sadly repaid by having his identity outed by British tabloids. Adding injury to insult, he was later arrested on unrelated charges as he attempted to fly home after the DEF CON hacking conference in August.

Two weeks after the attack, Symantec published a report saying the ransomware showed strong links to the Lazarus group (North Korea).

Others independently came to the same conclusion. Eight months later, and just in time for his boss' warmongering on North Korea, Trump team member Thomas P. Bossert wrote in the Wall Street Journal that "the U.S. today publicly attributes the massive 'WannaCry' cyberattack to North Korea."

Maybe he's just a backdoor man

US Deputy Attorney General Rod Rosenstein in October introduced the world to the new and totally made-up concept of "responsible encryption" -- and was promptly laughed out of the collective infosec room.

"Responsible encryption is effective secure encryption, coupled with access capabilities," he said.

He suggested that the feds won't mandate encryption backdoors "so long as companies can cough up an unencrypted copy of every message, call, photo or other form of communications they handle."

Even non-infosec people thought his new PR buzzwords were suspect. "Look, it's real simple. Encryption is good for our national security; it's good for our economy. We should be strengthening encryption, not weakening it. And it's technically impossible to have strong encryption with any kind of backdoor," said Rep. Will Hurd (R-Texas) at The Atlantic's Cyber Frontier event in Washington, D.C.

Politico wrote:

It's a cause Rosenstein has quietly pursued for years, including two cases in 2014 and 2015 when, as the US attorney in Maryland, he sought to take companies to court to make them unscramble their data, a DOJ official told POLITICO. But higher-ups in President Barack Obama's Justice Department decided against it, said the official, who isn't authorized to speak to the news media about the cases.

To everyone's dismay, Rosenstein doubled down on his "responsible encryption" campaign when he capitalized on a mass shooting (using as his example the phone of Devin Patrick Kelley who opened fire on a congregation in Texas, killing 26 people).

He said, "Nobody has a legitimate privacy interest in that phone ... But the company that built it claims that it purposely designed the operating system so that the company cannot open the phone even with an order from a federal judge."

Like Uber, but for Equifax

If there was some kind of reverse beauty pageant for worst look, worst behavior, and best example of what not to do with security, we'd need a tiebreaker for 2017. Equifax and Uber dominated the year with their awfulness.

Equifax was forced to admit it was hacked badly in both March and July, with the latter affecting around 200 million people (plus 400,000 in the UK). Motherboard reported that "six months after the researcher first notified the company about the vulnerability, Equifax patched it -- but only after the massive breach that made headlines had already taken place... This revelation opens the possibility that more than one group of hackers broke into the company."

Shares of Equifax plummeted 35% after the July disclosure. And news that some of its execs sold off stock before the breach was made public triggered a criminal probe.

Which brings us to the "unicorn" that fell from grace.

In late November Uber admitted it was hacked in October 2016, putting 57 million users and over half a million drivers at risk. Uber didn't report the breach to anyone -- victims or regulators -- then paid $100K to the hackers to keep it quiet, and hid the payment as a bug bounty. All of which led to the high-profile firing and departures of key security team members.

Just a couple weeks later, in mid-December, the now-notorious 'Jacobs letter' was unsealed, accusing Uber of spying and hacking. "It was written by the attorney of a former employee, Richard Jacobs, and it contains claims that the company routinely tried to hack its competitors to gain an edge," Engadget wrote, and "used a team of spies to steal secrets or surveil political figures and even bugged meetings between transport regulators -- with some of this information delivered directly to former CEO Travis Kalanick."

The letter was so explosive it delayed the trial between Uber and Waymo -- so we can be sure we haven't seen the last of Uber's security disasters in the news.

Images: Getty Images/iStockphoto (Wannacry); D. Thomas Magee (All illustrations)

Xbox’s lack of compelling games won’t be fixed next year

Microsoft's 2017 started six months early. At E3 2016, Xbox chief Phil Spencer closed out the company's keynote by teasing the "most powerful console ever." At this year's show, he finally revealed the Xbox One X, and in November, the hardware was at retail. In the time it takes to earn a bachelor's degree, Microsoft addressed one of the internet's loudest complaints about the Xbox One: that it wasn't powerful enough compared to the PlayStation 4.

A 6-teraflop GPU and 12GB of RAM won't help Microsoft clear its other hurdle, though, at least not in the short term. Since last year, the company has shuttered a pair of its internal development studios (Fable house Lionhead and Max: The Curse of Brotherhood developer Press Play) and killed off at least two other games: the incredibly promising dragon-owning-simulator Scalebound from PlatinumGames and the internally developed game-creation suite Project Spark.

The number of internal studios and software projects is so low that the company had to announce it would be going on a shopping spree for new studios and games next year. The problem is, on average, games take between two and three years to make, and big AAA tentpoles can easily spend double that time in development. Xbox's dearth of fresh games you can't play anywhere else isn't going to be fixed in 2018.

This fall, the Xbox's big exclusives were racing sim Forza Motorsport 7, which, while extremely pretty, was more of the same, and PlayerUnknown's Battlegrounds. The latter has 25 million players on PC, and Microsoft is bragging it racked up over 1 million players in its first 48 hours on Xbox One. The lovably clunky work in progress isn't the type of thing that's going to hit beyond Microsoft's shooter-centric base, however. At the moment it also isn't the type of game you boot up to show off your fancy new console and TV. The same goes for backwards compatibility with 15-year-old games from the original Xbox.

So how does Microsoft fill the gap between now and whenever its first new purchase comes out? Early 2018 has the cartoony shared-world pirate simulator Sea of Thieves and Crackdown 3, both of which are big maybes. Let me explain. Thieves is developer Rare's first stab at making a Destiny-like persistent online world. It'd be a feat for any team to make, but aside from the nostalgic Rare Replay collection from 2015, Rare hasn't had a critical hit in almost a decade.

Crackdown 3 has been delayed multiple times, and based on what I played of it at E3 this year, its being pushed into 2018 wasn't surprising at all. Will it actually be good when the ambiguous "spring" launch window rolls around? I wish I could say yes with any degree of certainty, but that isn't the case. State of Decay 2 is supposedly getting a big push from Microsoft, but zombie-survival games aren't the type of thing that cross over to mainstream success. That brings us back to Microsoft's well-trod path of racing games and first-person shooters.

You can all but guarantee we'll have Forza Horizon 4 next fall. And the team at 343 Industries has been quiet for a while, so you can probably expect news of Halo 6 to arrive next year as well. But considering how the last game turned out, you might want to hedge your bets as to how it'll play and what to expect out of it.

But what about all the exclusives Microsoft debuted at E3 in 2017? Many of them were "console launch exclusives," meaning that they'll show up other places. For example, Metro: Exodus will also be available on PC and will almost assuredly make its way to PS4. As for indies, The Artful Escape won't be out until "it's damn ready." Ashen looks promising, but two years on it still doesn't have a release date. Same goes for Ori and the Will of the Wisps.
There haven't been any sequels announced for any of Microsoft's AAA exclusives from the past four years that aren't named Forza, Gears of War or Halo either. Remedy is working on multiplatform games now, not a sequel to Quantum Break or Alan Wake. The team at Insomniac didn't get an order for Sunset Overdrive 2 unless you count its PS4 exclusive Spider-Man game out next spring. The perpetually beleaguered Crytek has been focusing on virtual reality and free-to-play games, and its relationship with Microsoft reportedly soured over a sequel to Xbox One launch title Ryse: Son of Rome.

Spencer has gone on the record saying that third-party exclusives like 2015's Rise of the Tomb Raider aren't viable for any games platform in the long run. But two years later, that's basically what Microsoft has. The company spent a year and a half trying to convince people they needed an Xbox One X with system specs alone. In the face of Nintendo's runaway success with the Switch and Sony's burgeoning lineup of diverse games you can't play anywhere else, 2018 has to be the year Microsoft starts convincing people there's a reason to buy any Xbox -- not just the most powerful one ever.

After Math: Merry Christmas, you filthy animals

It's been a wondrous week working up to Christmas Eve, and not just for the guys with the Tommy Guns. Alamo Drafthouse announced it is starting a rental store and loaning out rare VHS tapes, Proterra is going to wake up tomorrow with an order for 25 of its electric buses under the tree, and Google is practically giving away its digital movie rentals. Numbers, because how else will you know how many gold rings you've got coming?
1951: That was the year that the Ferranti Mark 1, the first commercially available general-purpose electronic computer ever, was invited on to the BBC for a special holiday performance wherein it R2-D2'd its way through a number of Christmas standards. This year, Turing archive director Jack Copeland and composer Jason Long have managed to recreate the renditions for all to hear.

40: That's the number of beers that will be available in the lounges of Alamo Drafthouse's new series of rental stores. Even better, they'll offer a wide selection of rare and obscure VHS tapes (plus the VCRs to watch them on). These shops will also stock Blu-ray titles and memorabilia. You might even find a copy of Angels with Filthy Souls if you're lucky.

1: That's how many dollars you'll need in order to rent a movie from Google Play during its annual holiday sale. You can also rent 3 TV episodes for the same amount or get 50 percent off of an HBO Now subscription for the first three months (obvs, only for new subscribers or it wouldn't be "first three months").

25: That's how many electric buses the city of Los Angeles has ordered from Proterra, all of which should arrive by 2019. It's all part of LA's plan to replace the entirety of its gas-powered bus lines with electric alternatives by 2030, and partly why California is hitting its self-imposed green energy goals a decade early.

2018: That's the year the Magic Leap Augmented Reality glasses are supposed to ship. But given how little we still know about how they work -- or even if they work -- these AR goggles are still only about as real as the magical, gay black man who delivers them.

$250: That's how much it now costs to get into the wide world of drone racing sports thanks to the Fat Shark 101 starter kit. The setup includes the drone itself, a controller, FPV goggles and the rest of the miscellaneous hardware you'll need. Just don't go racing it through the house before your parents have had their first cup of coffee and donned protective headwear.

In 2017, society started taking AI bias seriously

A crime-predicting algorithm in Florida falsely labeled black people re-offenders at nearly twice the rate of white people. Google Translate converted the gender-neutral Turkish terms for certain professions into "he is a doctor" and "she is a nurse" in English. A Nikon camera asked its Asian user if someone blinked in the photo -- no one did.

From the ridiculous to the chilling, algorithmic bias -- social prejudices embedded in the AIs that play an increasingly large role in society -- has been exposed for years. But it seems in 2017 we reached a tipping point in public awareness.

Perhaps it was the way machine learning now decides everything from our playlists to our commutes, culminating in the flawed social media algorithms that influenced the presidential election through fake news. Meanwhile, increasing attention from the media and even art worlds both confirms and recirculates awareness of AI bias outside the realms of technology and academia.

Now, we're seeing concrete pushback. The New York City Council recently passed what may be the US' first AI transparency bill, requiring government bodies to make public the algorithms behind their decision making. Researchers, along with the ACLU, have launched new institutes to study AI prejudice, while Cathy O'Neil, author of Weapons of Math Destruction, launched an algorithmic auditing consultancy called ORCAA. Courts in Wisconsin and Texas have started to limit algorithms, mandating a "warning label" about an algorithm's accuracy in crime prediction in the former case, and allowing teachers to challenge their calculated performance rankings in the latter.

"2017, perhaps, was a watershed year, and I predict that in the next year or two the issue is only going to continue to increase in importance," said Arvind Narayanan, an assistant professor of computer science at Princeton and data privacy expert. "What has changed is the realization that these aren't specific exceptions of racial and gender bias. It's almost definitional that machine learning is going to pick up and perhaps amplify existing human biases. The issues are inescapable."

Narayanan co-authored a paper published in April analyzing the meaning of words according to an AI. Beyond their dictionary definitions, words have a host of socially constructed connotations. Studies on humans have shown that people more quickly associate male names with words like "executive" and female names with words like "marriage," and the study's AI did the same. The software also perceived European American names (Paul, Ellen) as more pleasant than African American ones (Malik, Shereen).

The AI learned this from studying human texts -- the Common Crawl corpus of online writing -- as well as Google News. This is the basic problem with AI: Its algorithms are not neutral, and the reason they're biased is that society is biased. "Bias" is simply cultural meaning, and a machine cannot divorce unacceptable social meanings (men with science; women with arts) from acceptable ones (flowers are pleasant; weapons are unpleasant). A prejudiced AI is an AI replicating the world accurately.
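
The mechanism is simple enough to demonstrate. The sketch below uses made-up toy vectors standing in for real pretrained embeddings (studies like Narayanan's use vectors trained on corpora such as Common Crawl); the names, numbers and function names here are purely illustrative, not taken from the paper. "Bias" falls out of nothing more than distances in vector space:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: how close two word vectors point in the same direction
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy 3-dimensional "embeddings"; real ones have hundreds of dimensions
# and are learned automatically from billions of words of human text.
vectors = {
    "executive": np.array([0.9, 0.1, 0.0]),
    "marriage":  np.array([0.1, 0.9, 0.0]),
    "john":      np.array([0.8, 0.2, 0.1]),
    "amy":       np.array([0.2, 0.8, 0.1]),
}

def association(word, attr_a, attr_b):
    # Positive score: the word sits closer to attr_a than to attr_b
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

print(association("john", "executive", "marriage"))  # positive: "john" leans "executive"
print(association("amy", "executive", "marriage"))   # negative: "amy" leans "marriage"
```

No one programmed the gender association; in a real embedding it emerges because the training text itself pairs those words more often.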

"Algorithms force us to look into a mirror on society as it is," said Sandra Wachter, a lawyer and researcher in data ethics at London's Alan Turing Institute and the University of Oxford.

For an AI to be fair, then, it needs not to reflect the world but to create a utopia, a perfect model of fairness. This requires the kind of value judgments that philosophers and lawmakers have debated for centuries, and it rejects the common but flawed Silicon Valley rhetoric that AI is "objective." Narayanan calls this an "accuracy fetish" -- the way big data has allowed everything to be broken down into numbers that seem trustworthy but conceal discrimination.

The datafication of society and the Moore's Law-driven explosion of AI have essentially lowered the bar for testing any kind of correlation, no matter how spurious. For example, recent AIs have tried to determine, from a headshot alone, whether a face is gay in one case, or criminal in another.

Then there was AI that sought to measure beauty. Last year, the company Beauty.AI held an online pageant judged by algorithms. Out of about 6,000 entrants, the AI chose 44 winners, the majority of whom were white, with only one having apparently dark skin. Human beauty is a concept debated since the days of the ancient Greeks. The idea that it could be number-crunched in six algorithms measuring factors like pimples and wrinkles as well as comparing contestants to models and actors is naïve at best. Deeply human questions were at play -- what is beauty? Is every race beautiful in the same way? -- which the scientists alone were ill-equipped to wrestle with. So instead, perhaps unwittingly, they replicated the Western-centric standards of beauty and colorism that already exist.

The major question for the coming year is how to remove these biases.

First, an AI is only as good as the training data fed into it. Data that is already riddled with bias -- like texts that associate women with nurses and men with doctors -- will create a bias in the software. Availability often dictates what data gets used: the 200,000 Enron emails made public by authorities during the company's fraud prosecution, for example, have reportedly since been used in fraud-detection software and studies of workplace behavior.

Second, programmers must be more conscious of biases while composing algorithms. Like lawyers and doctors, coders are increasingly taking on ethical responsibilities, but with little oversight. "They're diagnosing people, they're preparing treatment plans, they're deciding if somebody should go to prison," said Wachter. "So the people developing those systems should be guided by the same ethical standards that their human counterparts have to be."

This guidance involves dialogue between technologists and ethicists, says Wachter. For instance, the question of what degree of accuracy is required for a judge to rely on crime prediction is a moral question, not a technological one.

"All algorithms work on correlations -- they find patterns and calculate the probability of something happening," said Wachter. "If the system tells me this person is likely to re-offend with a competence rate of 60 percent, is that enough to keep them in prison, or is 70 percent or 80 percent enough?"

"You should find the social scientists, you should find the humanities people who have been dealing with these complicated questions for centuries."
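Wachter's point can be reduced to a tiny sketch: the model supplies a probability, but the cutoff that turns that number into a detention decision is a human value judgment the math cannot supply. (The numbers and function names below are illustrative, not from any real risk-assessment system.)

```python
def detain(risk_score, threshold):
    """The model supplies risk_score; choosing threshold is an ethical call, not a technical one."""
    return risk_score >= threshold

score = 0.6  # "60 percent likely to re-offend," per the hypothetical model

print(detain(score, threshold=0.5))  # True: same score, decision is detain
print(detain(score, threshold=0.7))  # False: same score, decision is release
```

The same 60 percent score produces opposite outcomes depending on a threshold nobody inside the algorithm chose.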

A crucial issue is that many algorithms are a "black box" and the public doesn't know how they make decisions. Tech companies have pushed back against greater transparency, saying it would reveal trade secrets and leave them susceptible to hacking. When it's Netflix deciding what you should watch next, the inner workings are not a matter of immense public importance. But in public agencies dealing with criminal justice, healthcare or education, nonprofit AI Now argues that if a body can't explain its algorithm, it shouldn't use it -- the stakes are too high.

In May 2018, the General Data Protection Regulation will come into effect in the European Union, aiming to give citizens a "right to explanation" for any automated decision and the right to contest those decisions. The fines for noncompliance can reach 4 percent of annual revenue, meaning billions of dollars for behemoths like Google. Critics including Wachter say the law is vague in places -- it's unclear how much of the algorithm must be explained -- and implementation may be defined by local courts, yet it still creates a significant precedent.

Transparency without better algorithmic processes is also insufficient. For one, explanations may be unintelligible to regular consumers. "I'm not a great believer in looking into the code, because it's very complex, and most people can't do anything with it," said Matthias Spielkamp, founder of Berlin-based nonprofit AlgorithmWatch. "Look at terms and services -- there's a lot of transparency in that. They'll tell you what they do on 100 pages, and then what's the alternative?" Transparency may not solve the deep prejudices in AI, but in the short run it creates accountability, and allows citizens to know when they're being discriminated against.

The near future will also provide fresh challenges for any kind of regulation. Simple AI is basically a mathematical formula full of "if this then that" decision trees. Humans set the criteria for what a software "knows." Increasingly, AI relies on deep neural networks where the software is fed reams of data and creates its own correlations. In these cases, the AI is teaching itself. The hope is that it can transcend human understanding, spotting patterns we can't see; the fear is that we have no idea how it reaches decisions.

"Right now, in machine learning, you take a lot of data, you see if it works, if it doesn't work you tweak some parameters, you try again, and eventually, the network works great," said Loris D'Antoni, an assistant professor at the University of Wisconsin, Madison, who is co-developing a tool for measuring and fixing bias called FairSquare. "Now even if there was a magic way to find that these programs were biased, how do you even fix it?"

An area of research called "explainable AI" aims to teach machines how to articulate what they know. An open question is whether AI's inscrutability will outpace our ability to keep up with it and hold it accountable.

"It is simply a matter of our research priorities," said Narayanan. "Are people spending more time advancing the state of the art AI models or are they also spending a significant fraction of that time building technologies to make AI more interpretable?"

Which is why it matters that, in 2017, society at large increasingly grappled with the flaws in machine learning. The more that prejudiced AI is in the public discourse, the more of a priority it becomes. When an institution like the EU adopts uniform laws on algorithmic transparency, the conversation reverberates through all 28 member states and around the world: to universities, nonprofits, artists, journalists, lawmakers and citizens. These are the people -- alongside technologists -- who are going to teach AI how to be ethical.

Magic Leap One: All the things we still don’t know

It's that time of year again: the special season where everybody's favorite mythical creature makes its annual appearance. That's right, it's Magic Leap hardware teaser season! Seemingly once a year, the secretive startup reveals what it's been up to, and on Wednesday it unveiled renderings of its latest AR headset prototype. The company even deigned to allow a Rolling Stone reporter to take the system for a spin. But for everything that Magic Leap showed off, the demonstrations and teaser materials still raise as many questions as they answer. There's a whole lot about the Magic Leap system that we don't know, so maybe let's hold off on losing our minds about the perceived imminent AR revolution until we do.

But before we get into all the things we don't know, let's take a quick look at the things we do. Magic Leap the company was founded in 2011 by Rony Abovitz, the bioengineer who created the Mako surgical assistance robot. He sold the Mako company for $1.65 billion and used that cash to start Magic Leap and fund it through its first four years. Today, the company is valued at almost $6 billion and has raised $1.9 billion in funding to date, despite having shown little more than high-level animations and a few hardware renderings.

The company has spent the last seven years developing the Magic Leap Augmented Reality system. Currently in its ninth iteration, the setup has three components. The "Lightpack" is a pocket computer Abovitz claims is "something close to like a MacBook Pro or an Alienware PC," which would be incredible given its relative size in the renderings. Users can reportedly input commands through either hand gestures or the "Control" module. The Lightpack is wired up to the third component, the goggles themselves.

The "Lightwear" goggles reportedly utilize translucent cells the company calls "Photonic wafers" which, according to Abovitz, shift photons around a 3D nanostructure to generate a specific digital light field signal. Basically, a light field is all the light that is bouncing off the objects around us, "like this gigantic ocean; it's everywhere. It's an infinite signal and it contains a massive amount of information," Abovitz told Rolling Stone.

Abovitz theorizes that the brain's visual cortex doesn't need all that much information in order to actually generate our perception of the world. Therefore instead of trying to recreate the entirety of the light field, "it just needed to grab the right bits of that light field and feed it to the visual cortex through the eye... We could make a small wafer that could emit the digital light field signal back through the front again," he said.

These are some pretty amazing claims, to be sure. The theories Abovitz is basing the device on are ones that he and a Caltech professor came up with. Theories so radical that, as he told Rolling Stone, "we were way off the grid." That's not to say that his theories are unsound, or that the system doesn't work the way he says it does. It's just that there isn't yet any way to independently verify any of these claims.

And some of the claims beg to be investigated -- like the one that there's a powerful secondary computer integrated into the Lightwear, "which is a real-time computer that's sensing the world and does computer vision processing and has machine learning capability so it can constantly be aware of the world outside of you." That's a whole lot of buzzwords and big promises to pack into a single pair of goggles.

And beyond those supposed capabilities, we have practically zero information on how the system actually works. What are the hardware specs, CPU/GPU speeds, or operating system? Will the internal components be upgradable or, like the MacBook Pro, be sealed, requiring more costly upgrades? What's more, how is the unit powered? What are its energy requirements? Is it fully mobile? What's the battery life? We need more than Abovitz's explanation that, "It's got a drive, WiFi, all kinds of electronics, so it's like a computer folded up onto itself."

Information on the Magic Leap's availability is just as nebulous. The Magic Leap website states that the SDK will be available in "early 2018" but there has been no word on even an estimated hardware release date. And don't even bother asking about the price. The company was silent on an MSRP during Wednesday's announcement, though Business Insider spoke with sources close to the company in August who claimed the system would retail in the $1,000-to-$1,500 range. But again, those are guesstimates at best.

There are also questions surrounding Magic Leap's demo choices. Of all the major players in technology news, why present this huge piece to Rolling Stone and Pitchfork rather than, say, Wired or Cnet? It may well be because the former command an older, more affluent readership -- the people most likely to buy these things first, given their price. Or perhaps, more worrisome, the company hopes to avoid the harsh scrutiny of the entire tech press corps until the product is practically in the hands of consumers. Apple pulled similar shenanigans earlier this year when it provided a single-day review embargo for the experimental iPhone X.

What's more, we haven't so much as scratched the surface of the societal implications should this technology take hold. Lord, can you imagine somebody driving with these things on? So, again, there's no reason to think that Magic Leap isn't on the up and up regarding the capabilities of its headset or the proprietary technology it's built upon. But the company is making some pretty extreme claims and, if it expects the rest of us to pony up $1,500 for a pair of the Snapchat Spectacles' dorkier cousins, it's going to need to provide a more transparent answer than "trust us, it totally works."

The best Engadget stories of 2017

This year gave us an innovative new console from Nintendo, an iPhone without a home button, EVs and self-driving cars from almost all the major automakers, and fresh headaches for Twitter and Facebook alike. As busy as we were reviewing a new flagship phone seemingly every other week, Engadget's writers and editors looked beyond that never-ending gadget cycle to deliver impactful, thoughtful features. In fact, some of our favorite stories from this year were weeks, sometimes months, in the making. Here's a selection of our best pieces, chosen by the team. Enjoy, and here's to even more long-form in 2018.

Aaron Souppouris
Features Editor

Inside LeEco's spectacular fall from grace

You can usually look at an article and make an educated guess at how long it took to come together. A simple news post? Maybe a couple of hours. A review of a new phone? Perhaps a week. But Cherlynn Low's investigation took months of planning and digging.

This one started life, as so many stories do, as a vague thought and a few hand-scrawled notes. But before long, Cherlynn had mapped out a five-year timeline, trawled through court documents and talked to multiple sources. With support from the Engadget features team and Engadget Chinese editor-in-chief Richard Lai, she managed to piece together a complete story about how things went so wrong at LeEco. Seeing it mature from idea to finished article was a privilege.

How the internet embraced a 'Simpsons'-'Akira' mashup

Bartkira was one of those things that I was aware of but completely uneducated about. Through his feature, Nick Summers traced its origins so neatly, exposing the tension between the creator and the gatekeeper of the project (who, it turns out, aren't the same person), and also highlighted stories from individual artists. To be fair, you could probably just give me a pageful of Bartkira imagery and I'd be happy, but this was so much more than that.

Dana Wollman
Executive Editor

How an AI took down four world-class poker pros

I'll be honest: I was surprised when senior mobile editor Chris Velazco volunteered to cover a poker competition in Pittsburgh. I wasn't aware that our resident phone reviewer enjoyed or even understood the nuances of the game. (No offense, Chris.) As it turned out, his trip to Carnegie Mellon University to watch an AI player trounce four world champions resulted in a compelling profile of both machinery and humanity. (Be sure to set aside time for the video too.) Just as important, Chris's narrative doesn't merely end with the AI Libratus' nearly $1.8 million victory. At the heart of this story is a more far-reaching question: If artificial intelligence can be used to defeat human poker experts, how else might we harness its power?

Nick Summers
Associate Editor, Engadget UK

Reprogramming the piano

My musical knowledge is limited. I spent a few years plucking away at a bass guitar once, but my technique was dreadful and I needed half an hour to read a piece of sheet music. So when I watch a musician onstage, flicking switches and tapping guitar pedals, I'm in awe. Playing is hard enough; the technology part takes it to another level. What do all those buttons and dials do? For me, it might as well be witchcraft.

I've always wanted to learn more, which is why Chris Ip's piece on Dan Tepfer astonished me. The jazz pianist has developed an algorithm that "listens" to the notes he plays and creates a musical response. So when Tepfer sits down to play a song, it's as if a ghost partner is there with him, pressing different keys to expand and evolve the song. I found the concept fascinating -- a beautiful balance of human expression and digital creativity.

The hidden depth of mobile puzzle game 'Where Cards Fall'

Jessica Conditt's video game coverage is phenomenal. I could easily pick 10 pieces that should feature on this list -- how ESRB rules are killing boxed indie games, how Deck Nine picked up the Life Is Strange franchise, or Sony's worrying disinterest in indie games.


Since I had to pick just one for this round-up, I went with Where Cards Fall, an upcoming mobile game by Snowman and the Game Band. The former is known for Alto's Adventure, a simple but addictive snowboarding title, while the latter is a young studio from Los Angeles. Together they're building a game about adolescence and the hurdles associated with college and adulthood. You help the characters from a lofty position, building card-based houses to open up new paths. It's a gorgeous, whimsical project, and Jess' piece perfectly encapsulates it all.

Cherlynn Low
Reviews Editor

How to get fired in the tech industry

There are so, so many pieces that I've read this year by my amazing colleagues, but we rarely do straight-up satire. In this piece, Jess Conditt took a controversial topic (that controversial memo from a Google employee about women in the workplace) and gave it a biting, instruction-manual treatment that made it stand out from more cookie-cutter hot-take reaction pieces. This story explained why the memo was terrible in such a way that it convinced even a self-proclaimed contrarian like me, who initially thought the Googler had a point.


Olivia Kristiansen
Director of Video Production

RealDoll's first sex robot took me to the uncanny valley

The Engadget original series Computer Love is a cinema verité take on editor-in-chief Christopher Trout's experiences with the technology and people who are changing the way we do it. Your curiosity about artificial intelligence, especially as it becomes more ubiquitous and eventually makes its way into our bedrooms, is borne out by the numbers: Trout's coverage of RealDoll's sex robot was Engadget's second-most-watched video, and one of our most-read stories of 2017. Don't miss Computer Love's second season in 2018.

Jessica Conditt
Senior Reporter

Michigan's manufacturing past is fueling its tech future

Engadget has reporters scattered across the globe, but much of our coverage is constrained to a few major, tech-centric cities: San Francisco, New York, Hong Kong, Tokyo. However, there are fascinating stories unfolding in small towns and metropolises far beyond the Bay Area. Timothy Seppala is a Michigan native who saw a tech-centric movement unfolding in his hometown of Grand Rapids as well as throughout the larger Detroit area, and he dove in.

Over the course of several months, Timothy pieced together the tapestry of Michigan's new manufacturing future, emphasizing the humans at the center of this evolving industry. It's a brilliant piece of journalism and a story most technology sites might have skipped, or failed to notice. This kind of deep dive requires someone with intimate knowledge of the region and the instinct to spot the people at the heart of it all. He spoke with influential politicians and business leaders, game developers and entrepreneurs, to cover the breadth of Michigan's attempt at recovery after years of economic despair.

This story isn't just about Michigan -- it mirrors efforts taking place around the country to reinvigorate or repurpose languishing industrial plants. The people of Detroit and Grand Rapids are reshaping their cities, and Timothy gives us a glance at the soul of a state fighting past the turmoil.

Nathan Ingraham
Deputy Managing Editor

GameChanger brings virtual worlds to the kids who need it most

There are a number of charities that use video games to lift the spirits of sick children. One, called GameChanger, came to a New York City hospital earlier this summer. Mallory Locklear got a behind-the-scenes look at how much the group's efforts can affect the kids and families it works with. For many of those children, having a day purely devoted to games and fun offered some relief from constant thoughts about their disease or recovery.

For a lucky few, GameChanger also provides financial support in the form of a scholarship; the hospital staff are tasked with picking someone they feel is deserving. At the New York event, the scholarship recipient told Jim Carol (who participates in the charity and is the father of GameChanger founder Taylor Carol) that all the money would help her mom pay the bills. Jim found out how much the family needed to get out of their financial hole and cut them a check by the end of the day. The Carols and GameChanger might not be able to do that for everyone, but Mallory's story showed me that getting a break by spending a few hours playing games, like normal kids, is just as valuable. For a little while, one social worker told Mallory, "it's kind of like being home."


Timothy J. Seppala
Associate Editor

Nuclear warfare and the technology of peace

It takes just 15 minutes to launch a nuclear warhead from a submarine and trigger mutually assured destruction. Jess Conditt's story on the past, present and future of peace in the nuclear age is full of arresting facts like that. It's a comprehensive, sobering look at what's keeping the world from nuclear annihilation. But it isn't preachy, nor is it political, although politicians are definitely part of the equation. Instead, what I came away with was a sense of cautious hope. That, regardless of who sits in the White House, "little wars" will precede and, hopefully, prevent big ones. In 2017, that's a tough pill to swallow, but at least it's coated in optimism.

Inside Grado Labs: A legacy of hand-built headphones

Growing up working at a small, family-owned body shop with my dad, I'm an easy mark for stuff like Billy Steele's piece on Grado Labs. It has pretty much everything you could ask for: a David and Goliath story, the return of a prodigal son, and gorgeous photography throughout. What's most inspiring, though, is the company's devotion to quality, both in terms of sound and building materials. A majority of Grado's headphones, headphone amps and turntable cartridges are built and assembled in the Brooklyn building the family has owned since 1950. Perhaps more impressive is that Grado hasn't had to advertise since 1964. Rather than pump out new products every year like some of its bigger rivals, Grado keeps its lineup small, releasing a new model only when it needs to. Sometimes that takes 10 years. In our disposable society, it's nice to know that, like music, what you listen to it on can last forever too.

Chris Ip

Chris Ip
Associate Features Editor

The law isn't ready for the internet of sexual assault

Daniel Cooper's deep dive into sexual assault in the internet of things was not just comprehensive, but prescient. The story points out how, in the tech world's rush to connect even the most pedestrian of items to the web, hacking smart sex toys could lead to remote sexual assault or stealing data about people's sex habits. But Dan also looks toward the future with a detailed analysis of the laws that will or will not protect humans from their compromised teledildonics. It's a fundamental question in the tech world: Can the legal system -- deliberative and thus slow to change, by nature as well as necessity -- keep pace with the breakneck pace of technological change? This story brings that abstract question all the way to the most intimate and troubling places.

In the months after Daniel's story, Pen Test Partners, the security consultancy that hacked a connected vibrator, was also able to hijack Bluetooth butt plugs from a moving vehicle (they termed it "screwdriving"). Worryingly, the issues in this story may be relevant for a while yet.

Terrence O'Brien

Terrence O'Brien
Managing Editor

No, Kellyanne, microwaves cannot turn into cameras

This one is an obvious contender for the best headline ever to grace Engadget. (The subhead throws some delicious shade as well.)

But the excellence doesn't end there. Cherlynn Low carefully and clearly explains how a camera and a microwave work. She makes it painfully obvious to anyone with even a basic grasp of the English language (or an ounce of logic in their head) that, indeed, microwaves cannot turn into cameras. This story works both as an explainer for two common pieces of technology and as a merciless takedown of one of the more dubious public figures in 2017.

Christopher Trout
Editor in Chief

In the world of online media, it's rare that we get the chance to step AFK and connect with you IRL. It's rarer still that we get a massive chunk of cash to make it happen. That's why my favorite story of 2017 isn't a story at all -- it's an experience. Two years ago, Michael Gorman (our previous editor in chief) and I gave birth to a wildly unprecedented brain baby called the Engadget Experience, and this November, with help and a sweet pot of cash from our parent company, we got to share it with the world. Through the Alternate Realities grant program, we funded five truly out-there art projects that embraced new media like AR, VR and AI and introduced them to the world at a one-day event in downtown LA. For those of you who couldn't experience it IRL, we produced profiles of each of the five projects that will hopefully make you look at the world just a little bit differently.

Check out all of Engadget's year-in-review coverage right here.