Tag: ai

Amazon’s Alexa can now wake you up with music instead of alarms

One of the greatest perks of connected speakers is waking up to whatever music you like, not just a buzzer or the radio. However, that hasn't been an option for Alexa-equipped devices like the Echo -- until today, that is. Amazon has added a feature to Alexa that lets you wake up to the music of your choice from one of several streaming services, including its own options and Spotify.

To begin with, your criteria can be as broad or narrow as you like. You can name a song, playlist or genre, or ask to play any kind of music if you're not picky. Alexa can stream radio channels from the likes of TuneIn and iHeartRadio. Naturally, there are a few perks if you use one of Amazon's music services. You can ask Alexa to wake you based on a mood (like "relaxing"), or find a wake-up song by reciting the lyrics.

This sounds like a minor feature, but it's potentially very important. If Amazon is going to make the Echo Spot a viable alarm clock, it needs to give the device better functionality than that 20-year-old clock radio sitting on your nightstand. This also makes all Echo models more directly competitive with rivals that have had music wake features for years, such as Sonos. And let's face it: even if you're just using Alexa on your phone, Amazon would rather be the one to start your day.


Apple AI chief reveals more progress on self-driving car tech

After remaining tight-lipped for years, Apple is now more than eager to share how much progress it's making on self-driving car technology. AI research director Ruslan Salakhutdinov made a presentation this week that revealed more of what the company's autonomous driving team has been up to. Some of the talk was familiar, but there were a few new examples of how far the fledgling project had come.

To start, Apple has crafted a system that uses onboard cameras to identify objects even in tricky situations, such as when raindrops cover the lens. It can estimate the position of a pedestrian even if they're hidden by a parked car. Other highlights included giving cars a sense of direction through simultaneous localization and mapping, building detailed 3D maps from car sensor data and making decisions in urgent situations (say, a wayward pedestrian).

It's still not certain if or how Apple will commercialize its self-driving know-how. At the moment, its next goal is to produce driverless employee shuttles. The company isn't currently expected to sell its own cars, but licensing its work to others would be unusual for a company well-known for preferring to develop everything in-house.

The talk in itself is notable. Apple has been slowly opening up about its AI research, but it hasn't been clear on just how much it was willing to discuss. Salakhutdinov's chat shows that it's willing to offer at least some kind of consistent openness rather than maintaining its legendary secrecy. Not that it has much of a choice. Apple has struggled to attract AI talent in part because its secretive approach has been unappealing for researchers used to receiving academic and industry recognition. Presentations like this could keep Apple's AI team in the spotlight and reel in scientists who'd otherwise go to Facebook, Google or other tech giants.

Source: Wired


Why Qualcomm’s Tech Summit this week mattered

Qualcomm had so much news to share this year that it decided to throw a three-day "Tech Summit" in Hawaii for hundreds of press and analysts. In addition to unveiling the latest generation of its high-end mobile processor, Qualcomm also announced new Snapdragon-powered laptops from HP and ASUS, a new dedicated Hi-Fi audio DAC and a partnership with AMD. Speaking of partnerships, many of the companies that work with Qualcomm also attended the event to discuss the future of technologies like AI, 5G, AR and VR.

Given the battle Qualcomm is waging against Apple as it fends off a potential takeover from rival Broadcom, the Tech Summit was as much a news announcement event as it was a show of force. Qualcomm isn't just a mobile chip maker, and it sure as hell wants you to know. Catch up on all you may have missed from the company's big event this week in under four minutes with this short video!


Google’s AlphaGo AI can teach itself to master games like chess

Google's DeepMind team has already advanced its AlphaGo AI to dominate Go without human input, but now the system is clever enough to master other board games without intervention. Researchers have developed AlphaZero, a more generalized version of AlphaGo Zero that can train itself to achieve "superhuman" skill in chess, Shogi (a Japanese classic) and other game types knowing only the rules, all within less than a day. It doesn't need example games or other references.
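
The underlying idea — learning strong play from self-play alone, given only the rules — can be illustrated with a toy far simpler than DeepMind's actual method (which pairs a neural network with tree search). Below is a hedged sketch of our own: tabular Q-learning teaching itself the solved game of Nim (take 1-3 sticks, last stick wins). The game, names and parameters are illustrative choices, not anything from the paper.

```python
import random

TAKE = (1, 2, 3)          # legal moves: remove 1-3 sticks
START = 21                # sticks at the start of the game

# Q[sticks][move] ~ value of making `move` with `sticks` left,
# from the perspective of the player about to move.
Q = {s: {a: 0.0 for a in TAKE if a <= s} for s in range(1, START + 1)}

def best(s):
    """Greedy move from state s under the current table."""
    return max(Q[s], key=Q[s].get)

random.seed(0)
for episode in range(50_000):
    s = START
    while s > 0:
        # epsilon-greedy self-play: both "players" share one table
        a = random.choice(list(Q[s])) if random.random() < 0.2 else best(s)
        s2 = s - a
        if s2 == 0:
            target = 1.0                      # taking the last stick wins
        else:
            target = -max(Q[s2].values())     # opponent moves next (negamax)
        Q[s][a] += 0.1 * (target - Q[s][a])   # nudge toward the target
        s = s2
```

After enough episodes the table should recover Nim's known optimal strategy, which is to always leave the opponent a multiple of four sticks (e.g. from five sticks, take one).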

This doesn't mean that DeepMind has developed a truly general purpose, independent AI... yet. Chess and Shogi were relatively easy tests, as they're simpler than Go. It'll be another thing entirely to tackle complex video games like StarCraft II, let alone fuzzier concepts like walking or abstract thought. There's also the question of speed: less than 24 hours works for board games, but that's too slow for situations where AI needs to adapt on the spot.

Even so, this is a major step toward AI that can accomplish any task with only minimal instructions. Robots and self-driving cars in particular may need to learn how to navigate unfamiliar environments without the luxury of pre-supplied training material. If nothing else, chess champions have one more reason to be nervous.

Via: MIT Technology Review

Source: ArXiv.org


Tinder is using AI to figure out who you’ll really like

In 2015, Tinder introduced a new feature called the "Super Like." We all know you can swipe right to let a user know you're interested. But if you're really interested, that's where the Super Like comes in: swiping up means you've Super Liked a person. Now, Tinder is launching a new feature called "Super Likeable," and it uses AI to figure out who you're likely to Super Like.

Users are limited to using Super Like once a day. But Tinder's AI will present you with four different people it thinks are worthy of your Super Likes. You'll get one free Super Like to use on one of these Super Likeable people. Users have no say over when Super Likeable people appear to them, and you can't go out and find them; it's just a feature you'll occasionally find while using Tinder. It's worth noting that this feature is similar to the way one of Tinder's rivals works: Coffee Meets Bagel delivers matches it thinks you'll like every day.

The feature is currently limited to users in New York and Los Angeles, but it will likely roll out to the wider Tinder audience soon. It will be interesting to see how spot-on the AI actually is in regard to the people it thinks users will Super Like.

Via: TechCrunch

Source: Tinder


Google caters to the DIY crowd with an AI camera kit for Raspberry Pi

Google created its AIY Projects initiative -- "artificial intelligence yourself" -- to encourage developers and DIY enthusiasts to learn about artificial intelligence. The first project in the series, the ready-to-assemble Raspberry Pi-based AIY Voice Kit, was based on a project from MagPi magazine. Now Google has a second project ready for release this year: the AIY Vision Kit.

The camera kit comes with a cardboard shell, an AI-capable circuit board, a light-up arcade button, a tiny speaker, a lens kit with both macro and wide settings and various connection components, including a tripod mounting nut. You'll need to supply your own Raspberry Pi Zero W, Raspberry Pi Camera, an SD card and a power supply. The VisionBonnet circuit board has an Intel Movidius MA2450 low-power vision processing unit, which can run neural network models right on the device. You'll get software, too, which has three TensorFlow-based neural network models: one to recognize a thousand common objects, another that can recognize faces and expressions and a third that can detect people, cats and dogs. There's also a Python API that can adjust the arcade button colors and speaker sounds.
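
The description above — on-device models driving a colored arcade button via a Python API — suggests the kind of glue code makers will write. Here's a minimal, hypothetical sketch: the label-to-color logic is our own invention, and the `detect`/`set_button_color` calls in the closing comment are placeholders, not Google's actual API names.

```python
# Map what the on-device model sees to an arcade-button LED color.
CLASS_COLORS = {
    "person": (255, 0, 0),   # red
    "dog":    (0, 255, 0),   # green
    "cat":    (0, 0, 255),   # blue
}

def button_color(detections, threshold=0.6):
    """Pick the LED color for the highest-confidence detection.

    `detections` is a list of (label, score) pairs; anything below
    `threshold` is ignored, and unknown labels fall back to white.
    Returns None when nothing confident was seen.
    """
    top = None
    for label, score in detections:
        if score >= threshold and (top is None or score > top[1]):
            top = (label, score)
    return CLASS_COLORS.get(top[0], (255, 255, 255)) if top else None

# On the device, you'd feed this from the VisionBonnet's model, e.g.:
#   set_button_color(button_color(detect(frame)))   # hypothetical API
```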

With this Raspberry Pi-based camera, Google says you can create a device that can identify different plant and animal species, be notified when your dog shows up at the back door, see if you left your car in the driveway, watch your holiday guests react to your decorations or even trip an alarm when your little brother enters your room. Of course, these are just examples. Developers and hackers will surely find even more exciting things to do with this device.

Google isn't the only company offering AI tools for developers to build with: Amazon just announced its own image recognition camera as well. Google's more DIY-centric AIY Vision Kit is available for pre-order now via Micro Center for $45, and will be available for delivery and store pickup December 31st.

Source: Google


Best Buy claims Google Home Max will be on sale December 11th

The Google Home Mini might be 40 percent off right now, but if you'd rather have a Google smart speaker with a little more oomph you might not have to wait much longer. Mountain View's self-calibrating Home Max will be released on December 11th according to a Best Buy listing spotted by 9to5Google. This could be a gaffe, but Google did say the $400 device would be out before year's end. And, well, today being November 30th means the company doesn't have much time left to fulfill that promise.

The 11th feels like an odd placeholder date, though. Usually retailers will list December 31st for items that have an ambiguous release window, so maybe there's something to this. Speaking of ambiguity, Apple delayed its own smart speaker, the HomePod, until sometime early next year. If the Best Buy listing is accurate, that'd give Google a pretty big leg up this holiday season. We've reached out to Google for more information and will update this post should it arrive.

Source: Best Buy


Google Home can now do two things at the same time

Google Assistant on your Google Home is going to get a lot more useful this week. The AI butler has recently been updated to support commands that have up to two conditions. That means you can now tell your smart speaker to do things like bump the temperature in your kids' room and start playing Slayer's "South of Heaven" in there as a lullaby. Or, if you'd rather set the mood in your living room than give your offspring nightmares, you could ask Assistant to dim the smart lights and start streaming something from Google Play on your TV. CNET notes that making a query with more than a pair of requests doesn't work.

It's wholly separate from the Routines Google promised back at the Pixel 2 event earlier this fall, too. So you can't say "Okay Google, I'm home; start the Roomba" and expect the cluster of commands tied to "I'm home" to run while the living room gets vacuumed. Still, this is something that really isn't possible on other digital assistants unless you issue multiple, separate commands.

Source: CNET


Amazon’s AI camera helps developers harness image recognition

Far from the stuff of science fiction, artificial intelligence is becoming just another tool for developers to build the next big thing. It's built into Photoshop to help you knock out backgrounds, Google is using AI to figure out if someone is peeping at your phone and Microsoft uses the technology to teach you Chinese. As Amazon's Jeff Barr says, "I think it is safe to say, with the number of practical applications for machine learning, including computer vision and deep learning, that we've turned the corner" towards practical applications for AI. To that end, Amazon has announced AWS DeepLens, a new video camera that runs deep learning models right on the device.

The DeepLens has a 4-megapixel camera that can capture 1080p video, along with a 2D microphone array. It's powered by an Intel Atom processor with more than 100 gigaflops of power, which means it can process tens of frames of video per second through its deep-learning AI systems. The DeepLens camera has WiFi, USB and micro HDMI ports, plus 8 gigabytes of memory to run all that code. It runs Ubuntu 16.04 and can connect to Amazon Web Services.

While primarily for developers right now, it's not hard to see possible cool consumer applications down the line. Amazon has already put together some templates for devs to practice with, letting them use the DeepLens camera to detect things like faces, dogs and cats, hot dogs (or not) and a variety of household items, along with various motions and actions. Imagine showing DeepLens a bottle of shampoo so it can recognize the product and tell Amazon to order you another bottle, or picture an attached device that can recognize your pets and feed them appropriately.

Barr notes that many future projects will likely run both onboard the device and in the cloud. "With eyes, ears, and a fairly powerful brain that are all located out in the field and close to the action, it can run incoming video and audio through on-board deep learning models quickly and with low latency, making use of the cloud for more compute-intensive higher-level processing. For example, you can do face detection on the DeepLens and then let Amazon Rekognition take care of the face recognition."
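
Barr's split — cheap face detection on the camera, heavier recognition in the cloud — amounts to a filtering step: only faces the on-device model is confident about get shipped off for full recognition. The helper below is an illustrative sketch of our own, operating on a response shaped like Rekognition's DetectFaces output (FaceDetails entries with Confidence and BoundingBox fields); it doesn't call AWS itself, and the 90-percent cutoff is an arbitrary assumption.

```python
def faces_to_upload(detect_response, min_confidence=90.0):
    """Return bounding boxes of detected faces confident enough to be
    worth sending to the cloud for full face recognition."""
    return [
        face["BoundingBox"]
        for face in detect_response.get("FaceDetails", [])
        if face.get("Confidence", 0.0) >= min_confidence
    ]

# Sample shaped like a Rekognition DetectFaces response: one confident
# face worth uploading, one low-confidence detection to discard.
sample = {"FaceDetails": [
    {"Confidence": 99.2, "BoundingBox": {"Left": 0.1, "Top": 0.2,
                                         "Width": 0.3, "Height": 0.4}},
    {"Confidence": 41.0, "BoundingBox": {"Left": 0.6, "Top": 0.1,
                                         "Width": 0.2, "Height": 0.3}},
]}
print(len(faces_to_upload(sample)))  # 1
```

In a real pipeline, the surviving crops would then go to a cloud API such as Amazon Rekognition for the identification step Barr describes.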

Source: Amazon


Pixelmator’s AI-driven Photoshop rival is ready for your Mac

Adobe isn't the only one rolling out an AI-savvy pro image editor -- right on cue, Pixelmator has released its previously teased Pixelmator Pro on the Mac App Store. The $60 software promises many of the tools you'd hope for in a higher-end creative package, such as RAW processing, smart layout tools, non-destructive changes and advanced effects editing, but its centerpiece is its use of machine learning. It can remove objects, snap to items, level the horizon and identify layers without the painstaking manual effort you've typically needed in the past.

Pixelmator Pro also switches to a single-pane interface that aims to reduce clutter, and its heavy reliance on Mac-specific frameworks (Swift for code, Core Image and Metal for graphics) should give it a level of optimization you might not get from a cross-platform app.

Much like Apple's own creative apps, the overhaul has left some features by the wayside, at least for a while. The 9to5Mac crew has noticed that cropping is currently limited to fixed ratios (that will be addressed in a later update), and your MacBook Pro's Touch Bar won't get a workout in version 1.0. However, what's here could be enough to justify a purchase if you want sophisticated editing but can't stomach the idea of paying a monthly fee.

Via: 9to5Mac

Source: Mac App Store, Pixelmator