Tech News

Iris scanner AI can tell the difference between the living and the dead

July 24, 2018 — by Engadget.com

A. Czajka, P. Maciejewicz and M. Trokielewicz

It’s possible to use a dead person’s fingerprints to unlock a device, but could you get away with exploiting the dead using an iris scanner? Not if a team of Polish researchers have their way. They’ve developed a machine learning algorithm that can distinguish between the irises of dead and living people with 99 percent accuracy. The scientists trained their AI on a database of iris scans from various times after death (yes, that data exists) as well as samples of hundreds of living irises, and then pitted the system against eyes that hadn’t been included in the training process.

It’s a trickier process than you might think. Dead people’s eyes usually have to be held open by retractors, so the researchers had to crop out everything but the iris to avoid obvious cues. The algorithm makers also took care to snap live photos using the same camera that was used for the cadaver database, reducing the chances of technical differences spoiling the results.

There’s a catch to the current approach: it can only spot dead eyes when the person has been deceased for 16 hours or more, as the differences in irises aren’t pronounced enough in the first few hours. A crook could theoretically kill someone, prop their eyes open soon afterward and unlock their phone. However, this would limit the amount of time they could use this method. You may not want to make too many enemies, in other words — just take comfort in knowing that your data could one day be secure against grave robbers.

Tech News

DARPA pushes for AI that can explain its decisions

July 23, 2018 — by Engadget.com

ValeryBrozhinsky via Getty Images

Companies like to flaunt their use of artificial intelligence to the point where it’s virtually meaningless, but the truth is that AI as we know it is still quite dumb. While it can generate useful results, it can’t explain why it produced those results in meaningful terms, or adapt to ever-evolving situations. DARPA thinks it can move AI forward, though. It’s launching an Artificial Intelligence Exploration program that will invest in new AI concepts, including “third wave” AI with contextual adaptation and an ability to explain its decisions in ways that make sense. If it identified a cat, for instance, it could explain that it detected fur, paws and whiskers in a familiar cat shape.

Importantly, DARPA also hopes to step up the pace. It’s promising “streamlined” processes that will lead to projects starting three months after a funding opportunity shows up, with feasibility becoming clear about 18 months after a team wins its contract. You might not have to wait several years or more just to witness an AI breakthrough.

The industry isn’t beholden to DARPA’s schedule, of course. It’s entirely possible that companies will develop third wave AI just as quickly on their own. This program could light a fire under those companies, mind you. And if nothing else, it suggests that AI pioneers are ready to move beyond today’s ‘basic’ machine learning and closer to AI that actually thinks instead of merely churning out data.

Tech News

IBM extends deal using Watson to support veterans with cancer

July 19, 2018 — by Engadget.com

Andrew Spear for The Washington Post via Getty Images

IBM is making further use of Watson in the fight against cancer. The tech giant has extended a team-up with the US Department of Veterans Affairs that taps Watson for help treating soldiers with cancer, particularly stage 4 patients who have few other options. The new alliance runs through “at least” June 2019 and will continue the partnership’s existing strategy. Oncologists and pathologists first sequence tumor DNA, and then use Watson’s AI to interpret the data and spot mutations that might open up therapeutic choices.

The pact could do more to help health care in the US than you might think. IBM noted that Veterans Affairs treats about 3.5 percent of all American cancer patients, the largest group of cancer patients in any single health system. If even a fraction of them can find viable cancer treatments through Watson, that could help a significant portion of the population.

The company also points out that “more than one-third” of VA patients in this oncology program (about 2,700 have received support so far) are rural residents who have a harder time getting access to cutting-edge treatments. To some extent, this could make specialized cancer therapy more accessible, not just more commonplace.

Tech News

'Robot chemist' could use AI to speed up medical breakthroughs

July 18, 2018 — by Engadget.com

Getty Images/iStockphoto

Scientists can only do so much to discover new chemical reactions on their own. Short of happy accidents, it can take years to find new drugs that might save lives. They might have a better way at the University of Glasgow, though: let robots do the hard work. A research team at the school has developed a “robot chemist” (below) that uses machine learning to accelerate discoveries of chemical reactions and molecules. The bot uses machine learning to predict the outcomes of chemical reactions based on what it gleans from direct experience with just a fraction of those interactions. In a test with 1,000 possible reactions from 18 chemicals, the machine only needed to explore 100 of them to predict study-worthy reactions in the entire lot with about 80 percent accuracy.
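The explore-a-fraction, predict-the-rest idea can be sketched in a few lines. This is only an illustration, with invented “fingerprint” features, an invented reactivity rule, and a 1-nearest-neighbour model standing in for whatever the Glasgow system actually learns from its measurements:

```python
import random

# Toy version of the robot chemist's workflow: physically run only 100 of
# 1,000 candidate reactions, then predict the remaining 900 by similarity.
# The features and the "reactive" rule below are invented for illustration.

random.seed(1)

def fingerprint():
    # Hypothetical 4-number descriptor of a reaction mixture
    return [random.random() for _ in range(4)]

# Pretend ground truth: a reaction is "reactive" when its features sum high.
space = [fingerprint() for _ in range(1000)]
truth = [sum(f) > 2.0 for f in space]

explored = list(range(100))  # the robot only runs these 100 reactions

def predict(f):
    # 1-nearest-neighbour vote among the explored reactions
    nearest = min(explored,
                  key=lambda i: sum((a - b) ** 2 for a, b in zip(space[i], f)))
    return truth[nearest]

hits = sum(predict(space[i]) == truth[i] for i in range(100, 1000))
print(round(hits / 900, 2))  # accuracy on the 90% it never ran
```

Even this crude stand-in predicts most of the unexplored reactions correctly, which is the basic economy the robot chemist exploits at much larger scale.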

The University said it found four reactions just through this test, and one of them ranked in the top one percent of unique reactions.

That may not sound like a great success rate, and it will ideally get better. However, it’s easy to see the robot dramatically speeding up the discovery process by letting scientists focus on the handful of reactions that are most likely to pan out. That could accelerate the development of new treatments, new battery formulas and extra-strong materials. And it wouldn’t necessarily cost jobs — rather, it could help chemists focus on the trickier aspects of research instead of plowing through mundane tests.

Tech News

Samsung's new DRAM chip will make phones run faster and longer

July 17, 2018 — by Engadget.com

Samsung

Samsung has been busy improving its microSD range, introducing SSDs with faster write speeds and opening the world’s biggest mobile factory, but the electronics maker doesn’t appear to be slowing down any time soon. It has just completed tests on an 8GB LPDDR5 DRAM prototype: faster, lower-power RAM that will power machine learning and AI applications in 5G phones.

Compared to devices that use LPDDR4X chips, the 8GB LPDDR5 DRAM module offers a data rate up to 1.5 times faster. At 6,400 Mbps, LPDDR5 can transfer around 51 GB in one second, which Samsung says is the equivalent of roughly 14 full HD video files. It also comes in two bandwidth flavors: 6,400 Mbps at an operating voltage of 1.1V, or 5,500 Mbps at 1.05V. LPDDR5 has been specifically engineered to reduce voltage while in active mode, but Samsung is emphasizing the ‘deep sleep mode’, a feature that slashes power usage to half of the ‘idle mode’ offered by LPDDR4X chips.
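The arithmetic behind the 51GB-per-second figure checks out, assuming the 64-bit interface typical of mobile DRAM and Samsung’s roughly 3.7GB per HD video file (both assumptions, since neither is stated in the article):

```python
# Back-of-the-envelope check of Samsung's LPDDR5 bandwidth claim.
# Assumes a 64-bit-wide bus, typical for mobile DRAM packages.
pin_rate_mbps = 6_400                       # per-pin data rate
bus_width_bits = 64
bytes_per_second = pin_rate_mbps * 1_000_000 * bus_width_bits / 8
gb_per_second = bytes_per_second / 1e9
print(round(gb_per_second, 1))              # 51.2, i.e. "around 51 GB"

hd_file_gb = 3.7                            # assumed size of one HD video
print(round(gb_per_second / hd_file_gb))    # 14 files per second
```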

These power-saving measures will supposedly decrease power consumption by up to 30 percent and, in the long run, help increase the battery life of future smartphones. While Samsung didn’t spell out when LPDDR5 chips would hit the market, it says production will coincide with demand from global customers.

Tech News

Hinge uses AI to suggest a 'most compatible' date every day

July 12, 2018 — by Engadget.com

Hinge

Now that dating giant Match owns Hinge, what’s its first move? It’s using a dash of AI to help you find a partner sooner. Hinge is trotting out a Most Compatible feature that uses machine learning and the Gale-Shapley algorithm (aka the “stable marriage” algorithm) to send daily recommendations for people who it thinks would be just as interested in you as you are in them. It’s effectively a virtual matchmaker — you might not have to spend ages swiping right on people who never swipe back, or participating in conversations that go nowhere.

The system acts primarily based on your behavior. The more you like parts of others’ profiles, the better the recommendations. You won’t meet your special someone right from the outset, but you might not have to wait weeks or months to find someone that really clicks.

Most Compatible should be available today for both Android and iOS users. It’s too soon to know how well it works, but it stands in stark contrast to Match’s approach in services like Tinder, where you’re practically encouraged to attempt as many matches as possible regardless of the quality. Hinge wants you to “get off the app” as soon as you can — even if those first dates don’t lead to follow-up encounters, you may be more likely to use the app again.

Tech News

Facebook improves AI by sending 'tourist bots' to a virtual NYC

July 11, 2018 — by Engadget.com

Reuters/Brendan McDermid

As a general rule, AI isn’t great at using new info to make better sense of existing info. Facebook thinks it has a clever (if unusual) way to explore solutions to this problem: send AI on a virtual vacation. It recently conducted an experiment that had a “tourist” bot with 360-degree photos try to find its way around New York City’s Hell’s Kitchen area with the help of a “guide” bot using 2D maps. The digital tourist had to describe where it was based on what it could see, giving the guide a point of reference it could use to offer directions.

The project focused on collecting info through regular language (“in front of me there’s a Brooks Brothers”), but it produced an interesting side discovery: the team learned that the bots were more effective when they used a “synthetic” chat made of symbols to communicate data. In other words, the conversations they’d use to help you find your hotel might need to be different than those used to help, say, a self-driving car.

The research also helped Facebook’s AI make sense of visually complex urban environments. A Masked Attention for Spatial Convolution system let the bots quickly parse the most relevant keywords in their messages, so they could more accurately convey where they were or needed to go.

As our TechCrunch colleagues observed, this is a research project that could improve AI as a whole rather than the immediate precursor to a navigation product. With that said, it’s easy to see practical implications. Self-driving cars could use this to find their way when they can’t rely on GPS, or offer directions to wayward humans using only vague descriptions.

Tech News

Former Google AI chief will lead Apple’s new machine learning team

July 10, 2018 — by Engadget.com

Bloomberg via Getty Images

Back in April, Google’s former AI and search chief John Giannandrea left the company to join Apple for an undisclosed role. Today, the latter company announced he will head a new team combining the Core ML and Siri groups.

It’s not a leap for Giannandrea, who led Google’s Machine Intelligence, Research and Search teams over an eight-year tenure at the company. The new Artificial Intelligence and Machine Learning team he will supervise won’t change the structure of the Siri and Core ML teams, per TechCrunch. But having Giannandrea at the helm of both will unify the direction of the company’s machine learning and AI endeavors, especially after the company continued its hiring frenzy this spring to expand the Siri team.

Tech News

MIT researchers automate drug design with machine learning

July 6, 2018 — by Engadget.com

Jin et al.

Developing and improving medications is typically a long and very involved process. Chemists build and tweak molecules, sometimes aiming to create a new treatment for a specific disease or symptom, other times working to improve a drug that already exists. It takes a lot of time and a lot of expert knowledge, though, and attempts often end with a drug that doesn’t work as hoped. Researchers at MIT are now using machine learning to automate this process. “The motivation behind this was to replace the inefficient human modification process of designing molecules with automated iteration and assure the validity of the molecules we generate,” Wengong Jin, a PhD student in MIT’s Computer Science and Artificial Intelligence Laboratory, said in a statement.

The research team trained their machine learning model on 250,000 molecular graphs, which are basically detailed images of a molecule’s structure. The researchers then had the model generate molecules, find the best base molecules to build off of and design new molecules with improved properties. The researchers found that their model was able to complete these tasks more effectively than other systems designed to automate the drug design process.

When tasked with generating new, valid molecules, each one the model created turned out to be valid. And that’s particularly important since producing invalid molecules is a major shortcoming of other automation systems — of the others the researchers compared their model to, the best only had a 43.5 percent validity rate. Secondly, when the model was told to find the best base molecule — known as a lead molecule — that is both highly soluble and easily synthesized, it again outperformed other systems. The best candidate molecule generated by their model scored 30 percent higher on those two desired properties than the best option produced by more traditional systems. Lastly, when the model was told to modify 800 molecules to improve them for those properties but keep them similar in structure to the lead molecule, around 80 percent of the time, it created new, similarly structured molecules that scored higher for those two properties than did the original molecules.

Going forward, the research team will test the model on other pharmaceutical properties and work to make a model that can function with limited amounts of training data. The research will be presented next week at the International Conference on Machine Learning.

Tech News

Elon Musk's 'Dota 2' AI bots are taking on pro teams

June 25, 2018 — by Engadget.com

OpenAI

The Dota 2 world championship, The International, is fast approaching, and a top team will have a different-looking squad to contend with: a group of artificial intelligence bots. OpenAI, which Elon Musk co-founded, has been taking on top Dota 2 players with the bots since last year, and now it’s gunning for a team of top professionals in an exhibition match at one of the biggest events in eSports.

OpenAI took on individual players at last year’s The International in a one-on-one minigame, and pros said that by watching the matches back, they were able to learn from the bots. But playing as a team introduces different types of intricacies, and OpenAI had to teach the AI how to coordinate the five bots.

At any time, a hero (or character) can make one of around 1,000 actions; the bots have to make effective decisions while processing around 20,000 values representing what’s going on in the game at a given time. The average number of possible actions in chess is 35, so this is a little more complex than the Deep Blue supercomputer that beat chess grandmaster Garry Kasparov in the ’90s.

To teach its bots what to do, OpenAI uses reinforcement learning. That’s essentially a trial-and-error method, where, over time, the AI evolves from completely random behavior, to a more focused style of play. OpenAI runs Dota 2 on more than 100,000 CPUs, and the AI plays itself to the tune of 180 years’ worth of games every day. In just a few hours, the bots can play more games than a human can in a lifetime, giving the AI ample opportunity to learn. But machines learn in different ways from humans, so it’s not an apples-to-apples comparison. Otherwise, the AI team would have been the best in the world in a snap.
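That trial-and-error loop can be shown in miniature. The sketch below is a generic epsilon-greedy reinforcement learner on a three-action toy problem, not OpenAI’s actual system (which uses large-scale self-play with policy-gradient methods); the action names and payoffs are invented:

```python
import random

# Toy reinforcement learning: start with random behavior, converge on the
# highest-payoff action purely through trial and error.

random.seed(0)
true_reward = {"attack": 0.8, "retreat": 0.3, "farm": 0.5}  # hidden payoffs
q = {a: 0.0 for a in true_reward}    # the agent's value estimates
counts = {a: 0 for a in true_reward}
epsilon = 0.1                        # fraction of steps spent exploring

for step in range(5000):
    if random.random() < epsilon:
        action = random.choice(list(q))          # explore: random action
    else:
        action = max(q, key=q.get)               # exploit: current best
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    counts[action] += 1
    q[action] += (reward - q[action]) / counts[action]  # running average

print(max(q, key=q.get))  # the agent settles on the best action
```

Dota 2’s action and state spaces are astronomically larger, which is why OpenAI needs 100,000-plus CPUs instead of a 25-line loop, but the underlying learn-from-outcomes principle is the same.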

Dealing with the game’s bi-weekly updates is a challenge, as those can shift gameplay mechanics. Since the field of view is limited to what’s on screen, the AI needs to make inferences about what the other team is doing and make decisions based on what it thinks its opponents are up to. The bots have some advantages, such as an 80ms reaction time, which is faster than humans are capable of. They also perform around 150-170 actions per minute, which is comparable to top human players.

There are some limits on the AI, though. The bots only use five of the game’s 115 heroes and play against a team made up of the same characters. Some decisions are made for them by humans, like which skills to level up. OpenAI developers also restricted some items and cut out some of the game’s more intricate aspects, like invisibility and warding, which lets players snoop on other parts of the map.

OpenAI started playing against amateur teams recently, and so far the