Tech News

Russia may send a robot 'crew' to space in 2019

July 23, 2018 — by Engadget.com

Vyacheslav Prokofyev via Getty Images

Leveraging robotics to undertake dangerous missions has obvious benefits for mankind, and space travel is no exception. In 2011, NASA sent its dexterous assistant ‘Robonaut 2’ on a trip to the International Space Station (ISS) with the objective of working alongside the astronauts on board. Now a “source in the rocket and space industry” tells RIA Novosti that a Russian android duo could be following suit as early as next year.

According to Defense One, the FEDOR androids will, in an unprecedented move, fly on the unmanned Soyuz spacecraft not as cargo, but as crew members. The Roscosmos space agency has reportedly given the flight its preliminary approval. We’ve reached out for further confirmation on this front.

FEDOR, an abbreviation of Final Experimental Demonstration Object Research, refers to a 2014 program that aimed to create a robot capable of replacing humans during high-risk scenarios such as rescue missions. The androids have been endowed with a number of abilities, including driving, push-ups, lifting weights, and, you guessed it, shooting. Former Deputy PM Dmitry Rogozin then had to deny Russia was “creating a terminator”.

With rapid developments in the AI arena, the question of whether it will be used for destructive or benevolent purposes is always on the table, but Rogozin insisted that FEDOR would have “great practical significance in various fields.” Backing up those comments is CNA associate research analyst Samuel Bendett, who points out that despite its military-ready build, FEDOR was designed to function in space from the beginning.

Tech News

DARPA pushes for AI that can explain its decisions

July 23, 2018 — by Engadget.com

ValeryBrozhinsky via Getty Images

Companies like to flaunt their use of artificial intelligence to the point where it’s virtually meaningless, but the truth is that AI as we know it is still quite dumb. While it can generate useful results, it can’t explain why it produced those results in meaningful terms, or adapt to ever-evolving situations. DARPA thinks it can move AI forward, though. It’s launching an Artificial Intelligence Exploration program that will invest in new AI concepts, including “third wave” AI with contextual adaptation and an ability to explain its decisions in ways that make sense. If it identified a cat, for instance, it could explain that it detected fur, paws and whiskers in a familiar cat shape.
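
To make that cat example concrete, here’s a minimal sketch of a classifier that reports the evidence behind its answer. The feature names and weights below are invented for illustration; they aren’t anything DARPA has specified, and a real “third wave” system would learn them rather than hard-code them.

```python
# Toy "explainable" classifier: alongside a label, it reports the
# evidence that drove the decision. Features and weights are invented
# placeholders for illustration only.

FEATURE_WEIGHTS = {
    "cat": {"fur": 0.3, "paws": 0.2, "whiskers": 0.3, "cat_shape": 0.2},
    "dog": {"fur": 0.3, "paws": 0.2, "snout": 0.3, "dog_shape": 0.2},
}

def classify_with_explanation(detected_features):
    """Score each label by the weight of its detected features and
    return the winning label plus the features that supported it."""
    best_label, best_score, best_evidence = None, 0.0, []
    for label, weights in FEATURE_WEIGHTS.items():
        evidence = [f for f in detected_features if f in weights]
        score = sum(weights[f] for f in evidence)
        if score > best_score:
            best_label, best_score, best_evidence = label, score, evidence
    return best_label, best_evidence

label, evidence = classify_with_explanation({"fur", "paws", "whiskers", "cat_shape"})
print(f"{label}: because I detected {', '.join(sorted(evidence))}")
# -> cat: because I detected cat_shape, fur, paws, whiskers
```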

Importantly, DARPA also hopes to step up the pace. It’s promising “streamlined” processes that will lead to projects starting three months after a funding opportunity shows up, with feasibility becoming clear about 18 months after a team wins its contract. You might not have to wait several years or more just to witness an AI breakthrough.

The industry isn’t beholden to DARPA’s schedule, of course. It’s entirely possible that companies will develop third wave AI just as quickly on their own. This program could light a fire under those companies, mind you. And if nothing else, it suggests that AI pioneers are ready to move beyond today’s ‘basic’ machine learning and closer to AI that actually thinks instead of merely churning out data.

Tech News

IBM extends deal using Watson to support veterans with cancer

July 19, 2018 — by Engadget.com

Andrew Spear for The Washington Post via Getty Images

IBM is making further use of Watson in the fight against cancer. The tech giant has extended a team-up with the US Department of Veterans Affairs that taps Watson for help treating soldiers with cancer, particularly stage 4 patients who have few other options. The new alliance runs through “at least” June 2019 and will continue the partnership’s existing strategy. Oncologists and pathologists first sequence tumor DNA, and then use Watson’s AI to interpret the data and spot mutations that might open up therapeutic choices.
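
As a rough illustration of that interpretation step, here’s a minimal sketch of a mutation-to-therapy lookup. The mutation and drug pairings below are simplified examples, not clinical guidance, and the matching logic is far cruder than whatever Watson actually does:

```python
# Sketch of the interpretation step: given mutations found in a tumor's
# sequenced DNA, look up therapies known to target them. The table
# entries are illustrative placeholders, not medical advice.

ACTIONABLE_MUTATIONS = {
    "EGFR_L858R": ["erlotinib", "gefitinib"],
    "BRAF_V600E": ["vemurafenib"],
    "ALK_fusion": ["crizotinib"],
}

def suggest_therapies(tumor_mutations):
    """Return {mutation: candidate therapies} for every actionable
    mutation detected in the sequencing results."""
    return {
        m: ACTIONABLE_MUTATIONS[m]
        for m in tumor_mutations
        if m in ACTIONABLE_MUTATIONS
    }

print(suggest_therapies(["EGFR_L858R", "TP53_R175H"]))
# -> {'EGFR_L858R': ['erlotinib', 'gefitinib']}
```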

The pact could do more to help health care in the US than you might think. IBM noted that Veterans Affairs treats about 3.5 percent of all American cancer patients, the largest group of cancer patients under any single health system. If even a fraction of them can find viable cancer treatments through Watson, that could help a significant portion of the population.

The company also points out that “more than one-third” of VA patients in this oncology program (about 2,700 have received support so far) are rural residents who have a harder time getting access to cutting-edge treatments. To some extent, this could make specialized cancer therapy more accessible, not just more commonplace.

Tech News

Facebook improves AI by sending 'tourist bots' to a virtual NYC

July 11, 2018 — by Engadget.com

Reuters/Brendan McDermid

As a general rule, AI isn’t great at using new info to make better sense of existing info. Facebook thinks it has a clever (if unusual) way to explore solutions to this problem: send AI on a virtual vacation. It recently conducted an experiment that had a “tourist” bot with 360-degree photos try to find its way around New York City’s Hell’s Kitchen area with the help of a “guide” bot using 2D maps. The digital tourist had to describe where it was based on what it could see, giving the guide a point of reference it could use to offer directions.

The project focused on collecting info through regular language (“in front of me there’s a Brooks Brothers”), but it produced an interesting side discovery: the team learned that the bots were more effective when they used a “synthetic” chat made of symbols to communicate data. In other words, the conversations they’d use to help you find your hotel might need to be different than those used to help, say, a self-driving car.
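
Here’s a minimal sketch of that tourist/guide loop, with a tiny invented grid standing in for Hell’s Kitchen; the landmarks, message format and navigation logic are all illustrative assumptions, not Facebook’s actual setup:

```python
# Sketch of the tourist/guide exchange: the tourist describes a visible
# landmark in plain language, the guide localizes it on a 2D map and
# replies with a step toward the target. Map and phrasing are invented.

GUIDE_MAP = {
    (0, 0): "Brooks Brothers",
    (1, 0): "coffee shop",
    (1, 1): "hotel",       # the target
    (0, 1): "pharmacy",
}
TARGET = (1, 1)

def tourist_observe(position):
    """The tourist reports what it sees in natural language."""
    return f"in front of me there's a {GUIDE_MAP[position]}"

def guide_direct(message):
    """The guide matches the described landmark against its map,
    then issues one step toward the target."""
    position = next(p for p, lm in GUIDE_MAP.items() if lm in message)
    if position == TARGET:
        return position, "you have arrived"
    dx, dy = TARGET[0] - position[0], TARGET[1] - position[1]
    step = (position[0] + (dx > 0) - (dx < 0), position[1]) if dx else \
           (position[0], position[1] + (dy > 0) - (dy < 0))
    return step, f"head toward the {GUIDE_MAP[step]}"

pos = (0, 0)
while pos != TARGET:
    msg = tourist_observe(pos)
    pos, reply = guide_direct(msg)
    print(msg, "->", reply)
```

Swapping those plain-language messages for compact symbols is essentially the “synthetic” channel the researchers found more effective for machine-to-machine use.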

The research also helped Facebook’s AI make sense of visually complex urban environments. A Masked Attention for Spatial Convolution system could quickly parse the most relevant keywords in the bots’ responses, so they could more accurately convey where they were or needed to go.

As our TechCrunch colleagues observed, this is a research project that could improve AI as a whole rather than the immediate precursor to a navigation product. With that said, it’s easy to see practical implications. Self-driving cars could use this to find their way when they can’t rely on GPS, or offer directions to wayward humans using only vague descriptions.

Tech News

Former Google AI chief will lead Apple’s new machine learning team

July 10, 2018 — by Engadget.com

Bloomberg via Getty Images

Back in April, Google’s former AI and search chief John Giannandrea left the company to join Apple for an undisclosed role. Today, the latter company announced he will head a new team combining the Core ML and Siri groups.

It’s not a leap for Giannandrea, who led Google’s Machine Intelligence, Research and Search teams over an eight-year tenure at the company. The new Artificial Intelligence and Machine Learning team he will supervise won’t change the structure of the Siri and Core ML teams, per TechCrunch. But having Giannandrea at the helm of both will unify the direction of the company’s machine learning and AI endeavors, especially after the company continued its hiring frenzy this spring to expand the Siri team.

Tech News

NVIDIA's AI can fix bad photos by looking at other bad photos

July 10, 2018 — by Engadget.com

NVIDIA, MIT, Aalto University

A team of researchers from NVIDIA, MIT and Aalto University has found a way to fix pixelated photographs using AI — even if the AI has never seen a clean example of the target photo.

The group used deep learning — a type of machine learning that can teach AI to piece together images, text or video — to restore images with noisy input. While previous work trained AI to reconstruct photos with missing facial features by showing it complete photos, the current method means AI can rebuild a clean photo using only “corrupted data”, or two tarnished versions of the image. And surprisingly, its ability to clean up artifacts, remove text and beautify photos occasionally produced a better outcome than methods requiring cleaner reference material.

The AI does this by utilizing a neural network that’s been trained using corrupt photos. It doesn’t need a clean image, but it does need to observe the source image twice. Experiments showed that target material affected by different kinds of synthetic noise (additive Gaussian, Poisson and binomial noise) could still yield results that were “virtually identical” in quality to photos restored using clean targets. One of the most exciting things about the system is that it can significantly reduce the amount of time required for image rendering — we’re talking milliseconds.
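
For the curious, here’s a minimal sketch of the training idea, assuming PyTorch; the tiny two-layer network and the Gaussian corruption are stand-ins for the paper’s actual architecture and noise models:

```python
# Noise-to-noise training sketch: the denoiser is fit by mapping one
# corrupted copy of an image to another corrupted copy of the same
# image -- no clean target is ever shown to the network.

import torch
import torch.nn as nn

# Placeholder denoiser, far smaller than the paper's network.
denoiser = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

def corrupt(images, sigma=0.1):
    """Additive Gaussian noise, one of the corruption types tested."""
    return images + sigma * torch.randn_like(images)

# Stand-in batch; during training the clean image is only ever seen
# through two independently corrupted observations of it.
clean = torch.rand(8, 3, 64, 64)

for step in range(100):
    noisy_input, noisy_target = corrupt(clean), corrupt(clean)
    loss = nn.functional.mse_loss(denoiser(noisy_input), noisy_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The trick is that the noise in the input and the target is independent and zero-mean, so minimizing the average error against noisy targets steers the network toward the same answer a clean target would.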

This deep learning-based approach shows particular promise for the medical field, where the quality of MRI scans and other types of imaging could be further enhanced.

Tech News

I took a phone call from the Google Assistant

June 27, 2018 — by Engadget.com

When Google unveiled the Duplex phone-calling reservation AI at I/O last month, the world was shook. Despite the potential convenience it presented, the system’s ability to mimic human inflections in conversation was uncanny and borderline-creepy. Back then, we only heard recordings of what Assistant could do with Duplex technology. At a recent demo in New York, though, I got a chance to chat with the real thing, playing the role of a restaurant staffer on the call.

More people will be taking calls made by Google Assistant soon, as the company begins public tests of the Duplex technology. Only a select group of users and restaurants across the US will be involved in the initial wave, though. It might be a while before you can punch in a few parameters (number of people, time range) and have Assistant book reservations on the phone for you.

If you’re one of the lucky few, you might find the entire experience a little surreal, as I did. To be fair, most testers won’t be in the same situation I was — in a relatively quiet restaurant on the Upper East Side of Manhattan, carefully watched by an audience of about five people (including fellow tech reporters and Google’s product lead and engineers). I felt a little self-conscious as I picked up the phone and spoke with Assistant, but I think the AI should have been more worried.

“Hello, this is Thep Thai restaurant,” I yelled down the phone. I knew in advance it’d be Assistant on the line, thanks to the caller ID and the fact that this was a demo setup. To be clear, this was the real Duplex system we were invited to test, and we weren’t talking with prerecorded clips. It was altogether possible to trip up the system by being unintelligible or even unreasonable, and we were given free rein to confuse Assistant. So I tried to come up with common scenarios I could see being problematic.

After it identified itself and informed me the call would be recorded, Assistant asked, in a friendly male voice, to make a reservation for Friday, June 30th. “Sure, for how many people?” I asked.

“For four?” The Assistant replied with an upward inflection.

“I’m sorry, we don’t take reservations for parties smaller than five,” I immediately shut Assistant down. After all, if I did indeed run a restaurant in New York, it would certainly be an exclusive establishment. Plus, I’d been on the receiving end of such requirements many times.

“Oh, OK. What’s the wait time like then?”

I was surprised by the follow-up. I was expecting Assistant to simply give up, but like a meticulous helper, it knew to ask for more information.

“It’s about an hour’s wait, we’re really busy on Fridays,” I riffed.

“OK, thank you,” the Assistant said. I had one last chance to throw Duplex off, so I went for it.

“Just so you know,

Tech News

Google's reservation-making AI will be making calls soon

June 27, 2018 — by Engadget.com

Google talked about a lot at this year’s I/O developer conference, but one demo quickly stole the show: a male voice on a phone, making a restaurant reservation. The restaurant was real, but the person making the call wasn’t — it was Google Assistant, powered by an AI system called Duplex that’s meant to complete tasks by interacting with humans on the phone. It was, uh, pretty eerie, and it won’t be long before Google Assistant is calling a business near you. The company confirmed today that it will start testing its Duplex-powered calls with “trusted testers and select businesses” in New York and San Francisco within weeks.

That said, there are very strict limits on what the Duplex-powered Assistant can actually do. The system was designed to handle a small handful of interactions: inquiring about business hours, handling restaurant reservations and making hair appointments. When Google kicks off its testing in earnest this summer, it plans to call businesses and ask for their holiday hours. That’s it. There’s no way for the human talking to the Assistant to engage in idle chit-chat or tap into any of the AI’s more common functionality either — in other words, forget about asking it what movies are playing around you. Meanwhile, business owners wary of receiving robo-calls (albeit highly advanced ones) will be able to opt out of receiving Assistant calls entirely, though for now, Google isn’t sure how it’s going to make that option available.
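
In practice, that amounts to a hard whitelist of tasks. Here’s a hypothetical sketch of the idea; the intent names and phrasing are invented, not Google’s API:

```python
# Hypothetical sketch of Duplex's narrow scope: a fixed whitelist of
# supported tasks, with everything else refused. All names and
# phrasing below are invented for illustration.

SUPPORTED_TASKS = {
    "business_hours": "Hi, I'm calling to ask about your holiday hours.",
    "restaurant_reservation": "Hi, I'd like to make a dinner reservation.",
    "hair_appointment": "Hi, I'd like to book a haircut.",
}

def open_call(task):
    if task not in SUPPORTED_TASKS:
        raise ValueError(f"unsupported task: {task!r}")  # no idle chit-chat
    # Per the article, every call starts with identification and a
    # recording disclosure before the task itself.
    return ("Hi, this is the Google Assistant calling; "
            "this call is being recorded. " + SUPPORTED_TASKS[task])

print(open_call("business_hours"))
```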

And to be clear: if you’re, say, a maître d’ fielding a call from Google Assistant, you’ll definitely know you’re not dealing with another human. At the beginning of each Duplex call, the Assistant identifies itself and tells the person on the other end that it’s recording the conversation — you know, for data to help train the AI model further. That upfront identification is absolutely crucial, too. From pleasantly awkward “ums” and “ahhs” to the prominent vocal fry evident in one of the system’s female voices, it would be easy to forget that you were actually talking to a virtual assistant. (In demo calls Google arranged for reporters, I quickly found myself forgetting just that.)

When it comes to these very specific tasks, Scott Huffman, VP of engineering for Google Assistant, said the system could handle four out of five phone interactions without any help from a human. In situations where it can’t quite get through a conversation, though, the Assistant tells the person on the other end it’s contacting its “supervisor,” then switches the line over to a live, human operator to complete the task. The one unsettling thing about these transitions: because you know the voice you were just speaking to wasn’t human, there’s a moment, when the Google operator takes over the line, where you wonder if they too are artificial.
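
One plausible way to structure that handoff is a confidence threshold, sketched below; the threshold, scoring and canned lines are invented for illustration, not Google’s implementation:

```python
# Hypothetical escalation flow: the agent answers turns itself while
# confident, otherwise it announces a "supervisor" and hands the line
# to a human. Threshold and replies are invented, not Google's.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff, not a published figure

KNOWN_TURNS = {  # utterances this toy agent can answer confidently
    "for how many people?": ("For four, please.", 0.95),
    "what time would you like?": ("7 PM on Friday, if possible.", 0.9),
}

def respond(utterance):
    """Return (reply, speaker): the agent answers if confident,
    otherwise it escalates to a human operator."""
    reply, confidence = KNOWN_TURNS.get(utterance.lower(), ("", 0.0))
    if confidence < CONFIDENCE_THRESHOLD:
        return "Let me get my supervisor to help with that.", "HANDOFF"
    return reply, "AGENT"

for line in ["For how many people?", "Do you validate parking?"]:
    reply, speaker = respond(line)
    print(f"[{speaker}] {reply}")
# -> [AGENT] For four, please.
#    [HANDOFF] Let me get my supervisor to help with that.
```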

While the kinds of interactions Google trained the Assistant to handle

Tech News

US Army tests AI that predicts vehicle repairs

June 26, 2018 — by Engadget.com

Getty Images

For the US Army, keeping vehicles in good working order is about more than just getting to work on time. A breakdown in the middle of a combat zone could prove deadly. So, to help keep on top of repairs, the army is testing artificial intelligence to predict when a vehicle might need a new part.

The army is monitoring several dozen Bradley M2A3 vehicles using a machine learning algorithm from Uptake Technologies. It hopes the Asset Performance Management application will reduce unscheduled maintenance and make repairs more efficient and productive, in part by predicting when components will fail. That will help mechanics be more proactive in fixing issues before they become serious problems. As with many other effective uses of AI, Uptake’s software is designed to augment, rather than replace, humans in making decisions.

Uptake says the Bradley has a similar engine and parts to a string of industrial machines and vehicles. The software can monitor Bradley engine data points such as temperature, coolant and RPM, and the AI compares patterns with those from similar engines that have failed. The company has more than 1.2 billion hours of operating data the AI can draw from to make predictions.
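
As a rough sketch of that pattern matching, imagine comparing an engine’s live readings against an averaged “pre-failure” signature built from historical data. The features, numbers and alert threshold below are invented for illustration, not Uptake’s model:

```python
# Toy failure predictor: score how closely a live engine's telemetry
# resembles the averaged readings recorded from similar engines
# shortly before they failed. All values are invented placeholders.

from math import sqrt

FEATURES = ("temperature_c", "coolant_pct", "rpm")

# Averaged pre-failure signature learned from historical operating data
FAILURE_SIGNATURE = {"temperature_c": 115.0, "coolant_pct": 40.0, "rpm": 2900.0}
SCALE = {"temperature_c": 20.0, "coolant_pct": 25.0, "rpm": 500.0}
ALERT_DISTANCE = 1.0  # assumed cutoff

def failure_distance(reading):
    """Scaled Euclidean distance to the pre-failure signature;
    smaller means the engine looks more like one about to fail."""
    return sqrt(sum(
        ((reading[f] - FAILURE_SIGNATURE[f]) / SCALE[f]) ** 2
        for f in FEATURES
    ))

reading = {"temperature_c": 112.0, "coolant_pct": 45.0, "rpm": 2850.0}
if failure_distance(reading) < ALERT_DISTANCE:
    print("Flag vehicle for proactive maintenance")
```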

The trial is worth just $1 million to Uptake, The Washington Post reported, but if it’s successful, the military could expand the AI to more of the Bradley fleet, along with other vehicles. Several military leaders have been pushing for the army to embrace the capabilities of AI, but there has been strong resistance from tech company workers. Google recently pulled out of Project Maven, which harnesses AI to analyze drone footage, after a backlash from thousands of employees.

Tech News

Elon Musk's 'Dota 2' AI bots are taking on pro teams

June 25, 2018 — by Engadget.com

OpenAI

The Dota 2 world championship, The International, is fast approaching, and a top team will have a different-looking squad to contend with: a group of artificial intelligence bots. OpenAI, which Elon Musk co-founded, has been taking on top Dota 2 players with the bots since last year, and now it’s gunning for a team of top professionals in an exhibition match at one of the biggest events in eSports.

OpenAI took on individual players at last year’s The International in a one-on-one minigame, and pros said that by watching the matches back, they were able to learn from the bots. But playing as a team introduces different types of intricacies, and OpenAI had to teach the AI how to coordinate the five bots.

At any time, a hero (or character) can make one of around 1,000 actions; the bots have to make effective decisions while processing around 20,000 values representing what’s going on in the game at a given time. The average number of possible actions in chess is 35, so this is a little more complex than the game the Deep Blue supercomputer was playing when it beat chess grandmaster Garry Kasparov in the ’90s.

To teach its bots what to do, OpenAI uses reinforcement learning. That’s essentially a trial-and-error method where, over time, the AI evolves from completely random behavior to a more focused style of play. OpenAI runs Dota 2 on more than 100,000 CPUs, and the AI plays itself to the tune of 180 years’ worth of games every day. In just a few hours, the bots can play more games than a human can in a lifetime, giving the AI ample opportunity to learn. But machines learn in different ways from humans, so it’s not an apples-to-apples comparison. Otherwise, the AI team would have been the best in the world in a snap.
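
Here’s a toy sketch of that self-play loop; the two-action “game” and the update rule are drastic simplifications of OpenAI’s actual reinforcement-learning setup:

```python
# Toy self-play reinforcement learning: two copies of the same policy
# play each other, and whatever the winner did gets reinforced. The
# "game" here is a trivial stand-in for Dota 2.

import random

policy = {"aggressive": 1.0, "defensive": 1.0}  # action preferences

def choose_action(pol):
    actions, weights = zip(*pol.items())
    return random.choices(actions, weights=weights)[0]

def play_game():
    """Both sides sample from the current policy; in this toy game,
    aggressive beats defensive and mirror matches are coin flips."""
    a, b = choose_action(policy), choose_action(policy)
    if a == b:
        return a
    return a if a == "aggressive" else b

for episode in range(10_000):       # self-play: the policy vs. itself
    winner_action = play_game()
    policy[winner_action] += 0.1    # reinforce what won

print(policy)  # the "aggressive" preference grows over many games
```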

Dealing with the game’s bi-weekly updates is a challenge, as those can shift gameplay mechanics. Since the field of view is limited to what’s on screen, the AI needs to make inferences about what the other team is doing and make decisions based on what it thinks its opponents are up to. The bots have some advantages, such as an 80ms reaction time, which is faster than humans are capable of. They also perform around 150-170 actions per minute, which is comparable to top human players.

There are some limits on the AI, though. The bots only use five of the game’s 115 heroes and play against a team made up of the same characters. Some decisions are made for them by humans, like which skills to level up. OpenAI developers also restricted some items and cut off some of the game’s more intricate aspects, like invisibility and warding, which lets players snoop on other parts of the map.

OpenAI started playing against amateur teams recently, and so far the