Tag: ai

Adobe’s Scribbler AI automatically colorizes any portrait

Finally! Adobe has devised a method of adding a touch of color to black and white images without all the dimension-jumping time travel (looking at you, Pleasantville). At the company's Adobe MAX 2017 event on Thursday, research scientist Jingwan Lu demonstrated Project Scribbler, an AI-driven program that can not only add color but also shading and image texture to grayscale pictures in just seconds.

Scribbler leverages Adobe's Sensei deep learning platform to automatically touch up images. Researchers trained the program on the various bits and pieces of the human face using tens of thousands of images, some monochromatic, others accurately colored. By comparing the two sets of images, the program was able to work out the appropriate areas to color in (i.e., not the teeth).
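Adobe hasn't published Scribbler's training pipeline, but the paired-data idea behind it is easy to sketch: take accurately colored photos, derive the monochrome version yourself, and you have (input, target) pairs for a colorization model. A minimal illustration -- the luma weights are the standard ITU-R BT.601 values, everything else here is hypothetical:

```python
import numpy as np

def to_grayscale(rgb):
    """Collapse an RGB image (H, W, 3) to single-channel luma using the
    ITU-R BT.601 weights -- one common way to synthesize the monochrome
    half of a (grayscale, color) training pair."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

def make_training_pair(rgb):
    """Return (input, target): the model sees luma, learns to predict color."""
    return to_grayscale(rgb), rgb

# A tiny random 4x4 "photo" standing in for a real training image.
rng = np.random.default_rng(0)
photo = rng.random((4, 4, 3))
gray, color = make_training_pair(photo)
```

A real colorization network would then be trained to map `gray` back to `color` over tens of thousands of such pairs.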

There are still limits to what Scribbler can do. For example, it currently handles only faces, not entire bodies or scenes. Still, this technology should prove a boon to illustrators and editors who would otherwise spend hours accurately tinting these images. Like the Adobe VoCo tool, Scribbler is still in development as a standalone program and has yet to be integrated into any of the company's Creative Cloud apps.

Via: 9to5Mac

Source: Adobe


Google’s AlphaGo AI no longer requires human input to master Go

Google's AlphaGo already beat us puny humans to become the best at the Chinese board game of Go. Now, it's done with humans altogether. DeepMind, the Alphabet subsidiary behind the artificial intelligence, just announced AlphaGo Zero. The latest iteration of the computer program is the most advanced yet, outperforming all previous versions. It's also different from its predecessors in one significant way: Whereas the older AlphaGos trained on thousands of human amateur and professional games, Zero forgoes human insight altogether. Like the unpopular kid in class, it learns simply by playing alone, against itself.

What sounds like a sad, lonesome existence is already paying dividends. Zero whitewashed the previous (champion-beating) version of AlphaGo by 100 games to nil. That victory came after just three days of training. After 40 days of internal Go playing, it beat the Master version (the same program that triumphed over world number one Ke Jie in May) 89-11 -- making it "arguably the strongest Go player in history."
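DeepMind's actual pipeline pairs a deep network with Monte Carlo tree search, but the self-play idea itself can be shown at toy scale. The sketch below (a hypothetical example, not DeepMind's code) uses tabular Q-learning on one-pile Nim: a single agent plays both sides of the board and improves with no human games at all.

```python
import random
from collections import defaultdict

# One-pile Nim: players alternate removing 1-3 stones; taking the last stone
# wins. The same Q-table plays both sides, and the negamax-style target
# (-max Q of the successor state) stands in for the opponent's best reply.

N_STONES, ACTIONS, ALPHA = 10, (1, 2, 3), 0.5
Q = defaultdict(float)

def legal(s):
    return [a for a in ACTIONS if a <= s]

random.seed(0)
for _ in range(30000):            # self-play episodes: no human input anywhere
    s = N_STONES
    while s > 0:
        a = random.choice(legal(s))              # explore uniformly
        s_next = s - a
        if s_next == 0:
            target = 1.0                         # mover took the last stone
        else:
            target = -max(Q[(s_next, b)] for b in legal(s_next))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s_next

def best_move(s):
    return max(legal(s), key=lambda a: Q[(s, a)])
```

After training, the agent rediscovers the known optimal strategy (always leave a multiple of four stones) without ever seeing a human game.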

There are other technical elements that define the new AI, which you can dig into courtesy of DeepMind's paper, published in the scientific journal Nature. But removing the "constraints of human knowledge" has been the most liberating factor, according to the company's CEO Demis Hassabis.

In doing so, DeepMind is even closer to decoding one of the biggest hurdles facing AI: The reliance on vast amounts of training data. Whether this approach will work outside the confines of a strategic board game, however, remains to be seen. DeepMind, at least, believes it could have far-reaching implications. "If similar techniques can be applied to other structured problems, such as protein folding, reducing energy consumption or searching for revolutionary new materials, the resulting breakthroughs have the potential to positively impact society," writes the company in its blog post.

Source: DeepMind, Nature


The BBC is turning to AI to improve its programming

The BBC wants to leverage machine learning to improve its online services and the programmes it commissions every year. Today, the broadcaster announced a five-year research partnership with eight universities from across the UK. Data scientists from those institutions will work with the best and brightest at the BBC under the "Data Science Research Partnership," tasked with being "at the forefront of machine learning in the media industry." It will tackle a range of projects not just with the BBC, but with media and technology organisations from across Europe. The larger aim is to take the results and apply them directly to the BBC's operations in Britain.

The broadcaster, for instance, wants to use data to "better understand what audiences want from the BBC." The organisation could, of course, simply poll licence fee payers, but the idea presumably is to burrow down into TV and iPlayer viewing habits. With a wealth of hard data, it's possible that an algorithm could pick out larger trends and deduce whether the BBC is using its resources most effectively. To that end, the BBC hopes machine learning can help it build "a more personal BBC" with tools that could allow employees to make informed editorial and commissioning decisions.

The broadcaster is also interested in a concept called object-based broadcasting. At the moment, TV shows and news bulletins are broadcast as single pieces of linear media. But for years now, the BBC has envisioned media "objects," or blocks, that could be assembled in different ways depending on the user or end-hardware. Take the news, for instance: If every story or segment was cut-up, it could be personalised based on your tastes. Maybe you want the sport first, with more time dedicated to women's football. Or a shorter, snappier version of the local news. Machine learning could, in theory, help the BBC realise this abstract dream.
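As a toy illustration of the object-based idea (the segment data and scoring rules below are entirely hypothetical, not a BBC system), a bulletin could be assembled per viewer from tagged media "objects" and a time budget:

```python
# Hypothetical object-based bulletin assembly: each story is a standalone
# media object with tags and a duration, and the programme is stitched
# together per viewer instead of broadcast as one fixed linear piece.

SEGMENTS = [
    {"id": "headlines", "tags": ["news"], "secs": 60},
    {"id": "womens-football", "tags": ["sport", "football"], "secs": 180},
    {"id": "local-news", "tags": ["local"], "secs": 120},
    {"id": "weather", "tags": ["weather"], "secs": 45},
    {"id": "cricket", "tags": ["sport", "cricket"], "secs": 150},
]

def assemble(preferences, budget_secs):
    """Order segments by how highly the viewer rates their tags, then pack
    them greedily into the time budget."""
    def score(seg):
        return max((preferences.get(t, 0) for t in seg["tags"]), default=0)
    playlist, used = [], 0
    for seg in sorted(SEGMENTS, key=score, reverse=True):
        if used + seg["secs"] <= budget_secs:
            playlist.append(seg["id"])
            used += seg["secs"]
    return playlist

# A viewer who wants sport first, women's football especially, in a
# five-minute slot.
bulletin = assemble({"football": 3, "sport": 2, "news": 1}, budget_secs=300)
```

The machine learning part would come in learning the preference weights from viewing habits rather than asking for them.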

Machine learning is a buzzword at the moment, but with good reason. Engineers are teaching computers to learn, adapt and analyse based on relevant examples. It's led to improvements in voice recognition and translation and, in the case of Google's DeepMind division, to beating world-champion Go players. It's no surprise that the BBC wants to leverage this area of AI development in its own business. Media companies like Netflix have embraced user data to shape every part of their services, from commissioning to thumbnail designs. The BBC needs to do the same, especially as it pivots to a model increasingly dependent on original, British programming.

Source: BBC (Press Release)


Intel aims to conquer AI with the Nervana processor

Intel makes some pretty fast chips, but none are very efficient at the hottest thing in computing right now: Artificial intelligence (AI). Deep-learning apps that do computer vision, voice recognition and other tasks mostly just need to run matrix calculations on gigantic arrays -- something that doesn't suit general-purpose Core or Xeon chips. Now, thanks to its purchase of deep learning chipmaker Nervana, Intel will ship its first purpose-built AI chips, the Nervana Neural Processor family (NNP), by the end of 2017.

Intel enlisted one of the most enthusiastic users of deep learning and artificial intelligence to help out with the chip design. "We are thrilled to have Facebook in close collaboration sharing their technical insights as we bring this new generation of AI hardware to market," said Intel CEO Brian Krzanich in a statement. On top of social media, Intel is targeting healthcare, automotive and weather, among other applications.

Unlike its PC chips, the Nervana NNP is an application-specific integrated circuit (ASIC) that's specially made for both training and executing deep learning algorithms. "The speed and computational efficiency of deep learning can be greatly advanced by ASICs that are customized for ... this workload," writes Intel's VP of AI, Naveen Rao.

The chips are designed to do matrix multiplication and convolutions, among the most common calculations done by deep learning programs. Intel has eliminated the generalized cache normally seen on CPUs, instead using special software to manage on-chip memory for a given algorithm. "This enables the chip to achieve new levels of compute density and performance for deep learning," says Rao.
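To see why that's the operation worth specializing for, here is the core arithmetic in miniature: a naive "valid" 2D convolution (implemented, as deep learning frameworks do, as cross-correlation) is just a sliding window of multiply-accumulates -- exactly the dense math an AI ASIC accelerates:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2D convolution: slide the kernel over the image and
    take dot products at every position."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

edge = np.array([[1.0, -1.0]])           # simple horizontal edge detector
img = np.array([[0.0, 0.0, 1.0, 1.0]])   # a one-row image with a step edge
response = conv2d_valid(img, edge)       # fires only where the step is
```

A production chip does the same thing across millions of windows in parallel, which is why dedicating silicon to it pays off.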

The goal of the new architecture is a processor flexible enough to handle deep learning workloads and scalable enough to meet high-intensity computation requirements, achieved by making the core hardware components as efficient as possible.

The chip is also designed with high-speed interconnects both on and off the chip, allowing for "massive bi-directional data transfer." That means if you link a bunch of the chips together, they can act as a huge virtual chip, allowing for increasingly larger deep-learning models.

Oddly, the Nervana NNP uses a lower-precision numeric format called Flexpoint. "Neural networks are very tolerant to data 'noise' and this noise can even help them converge on solutions," Rao adds. At the same time, using lower-precision numbers allowed designers to increase parallelism, reducing latency and increasing bandwidth.
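Intel hasn't published the full Flexpoint format, but the shared-exponent idea behind low-precision schemes like it can be sketched: store a tensor as small integers plus one common scale factor, trading a little accuracy for a lot of compute density. The following is a simplified illustration, not the actual Flexpoint spec:

```python
import numpy as np

def quantize_shared_scale(x, bits=8):
    """Illustrative shared-scale quantization (NOT the real Flexpoint spec):
    every element shares one scale factor, chosen so the largest magnitude
    fits in a signed `bits`-bit integer."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    ints = np.round(x / scale).astype(np.int32)
    return ints, scale

def dequantize(ints, scale):
    return ints * scale

weights = np.array([0.5, -1.0, 0.25, 0.7])
ints, scale = quantize_shared_scale(weights)
restored = dequantize(ints, scale)       # close to, not identical to, weights
```

The rounding error is bounded by half the shared scale -- the "noise" Rao notes neural networks tolerate -- while the hardware only ever multiplies small integers.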

NVIDIA has famously pushed Intel to the side of the road in AI thanks to a sort of lucky accident. As it happens, the GPUs it uses in graphics cards and supercomputers are the best option for training AI algorithms -- though not executing them -- so companies like Google and Facebook have been using them that way. Meanwhile, Intel's arch-rival Qualcomm has been working on chips that are exceptionally good at executing AI programs.

Intel is no doubt hoping to change that formula with the Nervana NNP chips, which are efficient at both AI training and execution. The company says it has "multiple generations" of the chips in the pipeline, and obviously has the manufacturing and sales infrastructure needed to pump them out in volume and get them into clients' hands. Intel is also working on a so-called neuromorphic chip called Loihi that mimics the human brain, and of course has the Myriad X chip designed specifically for machine vision.

While Intel is hoping to at least catch up to NVIDIA, the latter isn't exactly standing still. It recently released the V100 chip specifically for AI apps, and hired Clément Farabet as VP of AI infrastructure, likely with the aim of making chips that are just as good at running deep learning programs as they are at training them. At the same time, Google has built its own "Tensor Processing Unit" (TPU) that it strictly uses in its own data centers, and IBM has a neuromorphic chip dubbed "True North." In other words, if you think we've reached peak AI, you haven't seen anything yet.


Researchers use AI to banish choppy streaming videos

Nobody likes it when their binge watching is disrupted by a buffering video. While streaming sites like Netflix have offered workarounds for connectivity problems (including offline viewing and quality controls), researchers are tackling the issue head-on. In August, a team from MIT CSAIL unveiled its solution: A neural network that can pick the ideal algorithms to ensure a smooth stream at the best possible quality. But, they're not alone in their quest to banish video stutters. The folks at Switzerland's EPFL are also tapping into machine learning as part of their own method. The researchers claim their program can boost the user experience by 37 percent, while also reducing power loads by almost 20 percent.

The likes of YouTube and Netflix rely on systems that are "inefficient," claims post-doctoral researcher Marina Zapater Sanch. "They store either one copy of a video in the highest-quality format possible, or dozens of copies in different formats." This can result in slow and choppy streaming, or a crippling server storage load, according to Sanch.

Like CSAIL before them, her team taught their program to learn from experience. Specifically, the AI monitored 1,000 people playing a video across an exhaustive range of devices. The system then memorized the series of actions that led to better-quality streams. The project is still in its infancy, which may explain why the researchers aren't elaborating on its details. Still, it could have real-world applications for video platforms in the future. But first, the team wants to modify it for real-time streaming: A system where just one copy of a video can be optimized for each particular user.
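Neither team has released its code, but the class of decision such controllers learn to make is the classic adaptive-bitrate problem. A hand-written baseline looks something like this (the thresholds and renditions are hypothetical); an ML approach effectively learns rules of this shape from data instead:

```python
# Sketch of a throughput-and-buffer bitrate picker -- the kind of decision an
# ML-driven streaming controller learns to make. These rules are illustrative,
# not the EPFL or CSAIL system.

LADDER_KBPS = [400, 1000, 2500, 5000]    # available renditions, low to high

def pick_bitrate(throughput_kbps, buffer_secs, min_buffer_secs=5.0):
    """Choose the highest rendition the network can sustain, but drop to the
    lowest one when the buffer is nearly drained, to avoid a stall."""
    if buffer_secs < min_buffer_secs:
        return LADDER_KBPS[0]
    safe = throughput_kbps * 0.8          # headroom against estimate error
    candidates = [r for r in LADDER_KBPS if r <= safe]
    return candidates[-1] if candidates else LADDER_KBPS[0]
```

The advantage of a learned controller is that the headroom factor, buffer threshold and switching behavior are tuned from real viewing sessions rather than fixed by hand.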

Source: EPFL


The Pixel 2 has a surprise: Google’s first custom imaging chip

Google didn't spill all the details about the Pixel 2 and Pixel 2 XL at its October 4th event. As it turns out, these phones have a secret weapon: Google's first custom imaging chip (and indeed first system-on-chip of any kind), the Pixel Visual Core. The eight-core processor works closely with software to handle Google's machine learning-assisted HDR+ photography up to five times faster than the Pixel 2's main CPU, all the while using a tenth of the energy.
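Google hasn't detailed the chip's internals beyond those headline numbers, but the payoff of HDR+-style burst photography is easy to demonstrate: merging N aligned frames cuts noise by roughly the square root of N. Here is a toy merge on synthetic frames (real HDR+ also aligns tiles and tone-maps; this just averages):

```python
import numpy as np

# Simulate a burst: the same scene captured 8 times with sensor noise.
rng = np.random.default_rng(42)
scene = np.full((64, 64), 0.4)                        # true radiance
burst = [scene + rng.normal(0, 0.1, scene.shape) for _ in range(8)]

# The core of burst merging: averaging aligned frames suppresses noise.
merged = np.mean(burst, axis=0)

single_noise = np.std(burst[0] - scene)   # noise in one frame
merged_noise = np.std(merged - scene)     # noise after merging ~ 1/sqrt(8)
```

Doing this (plus alignment and tone-mapping) for every shot is exactly the kind of repetitive per-pixel work that justifies a dedicated eight-core imaging processor.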

More importantly, the chip makes HDR+ shooting accessible to any third-party camera app. You don't have to use Google's software to capture more detailed highlights and shadows. The tech giant is also promising new uses for Pixel Visual Core over time (it's programmable), so you could expect to see more photographic abilities as time goes on.

There's only one main catch with the Core: It's not actually enabled yet. It won't be available as an option until the Android 8.1 Oreo developer preview arrives in the "coming weeks," and it won't be ready for all third-party apps until sometime after that. If you bought a Pixel 2 as soon as you could, you'll have to rely on the stock camera app for a while. To be fair, Google never said to expect this feature at launch, but the phone's full potential won't be realized until considerably later.

Source: Google


Google Assistant can finally control Chromecast from your phone

Google's Assistant app is capable of lots of things, but before today, controlling a cast session by voice wasn't really possible. Android Police reports that now the mobile app can do so, and you can even specify which Chromecast in your house is the target. Adjusting the volume, skipping or repeating tracks and tasking Assistant to play Urfaust's latest on your Chromecast Audio while you beam a Minecraft video to the kids' room can all be done with a simple voice command now -- and all without a Google Home. On our iPhone with the Assistant app it worked as you'd expect, but Android Police says its devices weren't working just yet; the publication had received tips from readers about the functionality beforehand. Are you having any luck? Let us know in the comments.

Source: Android Police


Shutterstock’s composition photo search is powered by AI

Fresh off its AI-powered tool for countering watermark removal from photos, Shutterstock is using machine learning for something else. In this case, it's launching a composition-aware search tool.

"This tool allows users to specify one or more keywords, or to search for copy space, and arrange them spatially on a canvas to reflect the specific layout of the image they are seeking," a press release reads. "The patent pending tool uses a combination of machine vision, natural language processing and state of the art information retrieval techniques to find strong matches against complex spatially aware search criteria."

So, dragging "pen" to the lower left corner of the search box and "desk" to the upper right corner should return photos where the pen is in the lower left of the frame and a desk is in the upper right. At least that's how it's supposed to work in theory. Plenty of the results had the pen all over the photo, and a desk was always in the background. Adding "mug" to the search and moving it around the space performed as it should have, though.
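Shutterstock hasn't published its algorithm, but a toy version of composition-aware matching conveys the idea: index each image's tagged regions, then score images by how close each tag sits to where the user dragged the keyword on the canvas. Everything below (the image data, the scoring function) is hypothetical:

```python
# Each indexed image has tagged bounding boxes in normalized [0, 1]
# coordinates (x0, y0, x1, y1), with y growing downward as on screen.
IMAGES = {
    "img_a": {"pen": (0.1, 0.8, 0.2, 0.95), "desk": (0.7, 0.05, 0.95, 0.3)},
    "img_b": {"pen": (0.6, 0.1, 0.8, 0.3), "desk": (0.1, 0.6, 0.4, 0.9)},
}

def center(box):
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2, (y0 + y1) / 2)

def score(image_tags, query):
    """Higher when every queried keyword's box center is near where the
    user dragged it; zero if the image lacks any queried keyword."""
    total = 0.0
    for keyword, (qx, qy) in query.items():
        if keyword not in image_tags:
            return 0.0
        cx, cy = center(image_tags[keyword])
        total += 1.0 - ((cx - qx) ** 2 + (cy - qy) ** 2) ** 0.5
    return total

# "pen" dragged to the lower left, "desk" to the upper right:
query = {"pen": (0.15, 0.85), "desk": (0.85, 0.15)}
ranked = sorted(IMAGES, key=lambda name: score(IMAGES[name], query),
                reverse=True)
```

The machine-vision half of the real system is producing those tagged boxes automatically; the spatial scoring is the comparatively easy part.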

Proper nouns don't work so hot. Searching for "Beyonce" resulted in pictures of (mostly) Caucasian women, and even one of a lady wearing a whipped cream bikini. So, yeah, it still has a ways to go. But, with normal stuff it works pretty well. Google Photos, which also uses AI and computer vision to sort and search photos, runs into similar hiccups, so this isn't unheard of.

As is the case with any type of machine learning, Shutterstock's tool will only get better with time and use. The company can't do anything about the former, but since the feature is free for anyone to mess around with, the latter shouldn't be an obstacle.

Via: The Verge

Source: PR Newswire, Shutterstock Labs


‘Blade Runner 2049’ dives deeper on AI to transcend the original

Blade Runner 2049 is a miracle. It's a sequel that nobody really wanted -- certainly not fans of the seminal 1982 original by Ridley Scott. And ponderous explorations of artificial intelligence aren't something that typically clicks with mainstream audiences. (The film's disappointing box office results seem to make that clear.) But it turns out that Blade Runner 2049 -- directed by Denis Villeneuve -- is actually an ideal sequel. It builds on its incredibly influential predecessor by asking deeper questions about AI. As the lines between humans and replicants blur, the idea of being "more human than human" seems truer than ever.

Spoilers ahead for Blade Runner 2049.

The new models

Within the first few minutes, we learn that Ryan Gosling's "K," our new cyborg-hunting detective, is actually a replicant. There's no ambiguity, like there is with Harrison Ford's Rick Deckard in the first film. That immediately gives his job an added weight: He's hunting his own kind, and he's well aware of the inherent moral conflict.

We learn through the opening text that a lot has changed since 2019. The Tyrell Corporation unveiled its Nexus 8 replicants, which had a longer, human-like lifespan. That's exactly what Roy Batty and crew were fighting for in the first film, as they were older Nexus 6 models who could live for only a short four years. Rebellious replicants engineered a global blackout in 2022, in hopes of erasing identification records that were being used to hunt them down. That led to a ban on replicants altogether, which was lifted only when the Wallace Corporation, successor to the original replicant maker Tyrell, proved it could make models that were more obedient than the Nexus 8.

K is one of these newer replicants, which still have longer lifespans but differ from older models by their increased reliance on embedded memories. That's something we saw with Rachel (and potentially Deckard) in the first film, but in Blade Runner 2049 it's used as even more of a psychic cushion. Replicants are still aware that they're not "real," but the memories give them the illusion of human experience -- a childhood birthday party, perhaps, or playing with other kids. While you could view the memories as a "kindness," as one of their creators describes them, they're clearly a type of invisible shackle meant to keep replicants content with their subservient role in society.

Throughout the film, K is on the verge of an existential crisis. In the opening scene, he reluctantly subdues and kills a rogue Nexus 8 who's trying to live out his years as a protein farmer. He's shaken afterwards but takes the encounter in stride, since that's what he's programmed to do. During a mandatory synchronization test -- which appears to be an evolved form of the Voight-Kampff exam for finding replicants in the first film -- K proves that he's performing at "baseline." The movie doesn't explain what that means, but we can assume that it refers to being within the limits of his programming. Throughout the movie, though, he also strives to push against those boundaries to become a "real boy."

A replicant savior

Warner Bros.

The main mystery behind Blade Runner 2049 is an explosive one: A replicant gave birth to a child naturally, just like a human. Specifically, Rachel and Rick Deckard had a child. K's boss, Lieutenant Joshi (Robin Wright), immediately understands the implication of that as something that "breaks the world." People rioted when replicants were able to live a bit longer -- how would they react to their being able to reproduce on their own? She tasks K with erasing all of the evidence of the discovery, a mission that sets him down the path to reject his programming.

The idea of artificially intelligent, human-like robots getting pregnant has profound implications. The original Blade Runner made the villainous replicants surprisingly sympathetic. They just wanted more life, as Roy Batty explained to his creator, Tyrell (before gouging his eyes out). Sure, they used violent methods to achieve their goal, but the desire is an understandable one for any conscious being. Giving replicants, which were stronger and smarter than humans, a short four-year lifespan seemed like an act of cruelty.

Blade Runner 2049 takes that existential question a step further. Now that replicants can live longer and have realistic emotional responses, what really separates them from humans? Especially if they can reproduce on their own? When they're merely manufactured, it's easy for us to convince ourselves that they're just soulless robots. But if a replicant can be born and age naturally, without any direct help from humans, we need to think harder about the nature of life.

Enter Niander Wallace (Jared Leto), the genius scientist behind the most recent batch of replicants. He's desperate to figure out the secret behind replicant pregnancy, which was originally developed by Tyrell. For him, it's more about the corporate power of owning that technology. He can't build enough replicants, so he's looking for new ways to increase production. Niander isn't concerned with the moral implications -- he just wants to become an even bigger industry titan.

AI love

Blade Runner 2049

While K is tackling these bigger questions, he's also dealing with a domestic relationship. He's in "love" with Joi (Ana de Armas), an AI program who appears to love him back. We have to qualify that idea of love, though. Joi is marketed as the ideal companion, one who tells you what you want to hear and shows you what you want to see. She doesn't have the free will to do otherwise, and she's certainly not self-aware. So even if she produces a love-like emotional response in K, is that the same as the emotional responses between two conscious beings?

Once again, this relationship takes a simple idea from the first film -- can robots love? -- and evolves it in fascinating ways. While K is more conscious than Joi, he's still fundamentally an AI program as well. The big difference is that he's aware of himself, and he spends most of the film pushing against the limits of what he's built to do. It's ambiguous whether Joi ever does that in the film.

This is where things get interesting. Even if Joi is just a wish-fulfilling program, she still evokes a love-like response from K. And that's enough to make her important to him. So when we see Joi get "killed" later in the film -- the hardware she's stored in gets smashed -- we also feel genuine loss as an audience. Eventually, K encounters a giant Joi ad that repeats some of the same lines his Joi whispered in his ear. And he's reminded that as much as he loved her, it's not the same as a "real" relationship.

Warner Bros.

Replicant rights

You could view the original Blade Runner as the story of a cop hunting down and killing lower-class beings, who aren't seen as people, in cold blood. Don't forget that at one point he ends up shooting an unarmed Nexus 6 multiple times in the back -- in public. That's a perspective laid out by Sarah Gailey at Tor, and it's an important one to consider as we move to Blade Runner 2049.

Just like before, replicants want more life. But it's not just about living longer -- there's an entire resistance movement that demands the same rights as humans. It's easy to see the parallels with the civil rights movement in America. Replicants have always been viewed as disposable slave labor. But as their consciousness and capabilities have improved, they've also become a threat to what makes humans special. And now that there's a replicant who was born naturally, they have a savior who could arguably have a "soul."

Unfortunately, Blade Runner 2049 doesn't dive too much into the replicant resistance. But it sets the stage for future films to explore that concept even further. That's not something I would've wanted before seeing this film -- especially given the way Prometheus and Alien: Covenant went down. But now my mind is swimming with where the Blade Runner world can go. And if that's not a successful sequel, I don't know what is.


Microsoft and Facebook’s open AI ecosystem gains more support

Artificial intelligence has helped jump-start everything from self-driving cars to soft robotics. As if that wasn't enough, it's also tearing down language barriers to bring the world closer together. But, at the same time, machine learning is dealing with its own, self-constructed walls. Last month, Facebook and Microsoft came together to target a major roadblock -- specifically the process of switching between machine learning frameworks, such as PyTorch and Caffe2. Their solution: An open-source AI ecosystem dubbed ONNX (or Open Neural Network Exchange), which allows developers to jump between AI engines at various stages of development. The tech titans claimed the Exchange would make machine learning "more accessible and valuable for everyone." And they've apparently had no qualms in recruiting other big-name firms to help out. The latest additions to ONNX include IBM, Huawei, Intel, AMD, Arm, and Qualcomm -- companies that (to varying degrees) are also working within the field of AI.

Here's how Facebook described the all-star collab last month: "In Facebook's AI teams (FAIR and AML), we are continuously trying to push the frontier of AI and develop better algorithms for learning. When we have a breakthrough, we'd like to make the better technologies available to people as soon as possible in our applications. With ONNX, we are focused on bringing the worlds of AI research and products closer together so that we can innovate and deploy faster."
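The real ONNX spec defines a rich operator set and protobuf serialization; the gist, though, is a framework-neutral model description that any runtime can execute. This toy exchange format (hypothetical, and far simpler than ONNX itself) makes the idea concrete:

```python
import json

# A "model" serialized as plain data: inputs, a list of ops, and outputs.
# Any framework that can read this format and implement the ops can run it.
model = {
    "inputs": ["x"],
    "nodes": [
        {"op": "mul", "args": ["x", 3.0], "out": "h"},
        {"op": "add", "args": ["h", 1.0], "out": "y"},
    ],
    "outputs": ["y"],
}

OPS = {"mul": lambda a, b: a * b, "add": lambda a, b: a + b}

def run(serialized, **feeds):
    """A minimal 'runtime': replay the serialized graph on the fed inputs."""
    graph = json.loads(serialized)
    env = dict(feeds)
    for node in graph["nodes"]:
        args = [env[a] if isinstance(a, str) else a for a in node["args"]]
        env[node["out"]] = OPS[node["op"]](*args)
    return [env[name] for name in graph["outputs"]]

wire_format = json.dumps(model)   # what would travel between frameworks
```

Swap the toy ops for tensor operators and the JSON for protobuf, and you have the shape of the interchange problem ONNX solves: train in one framework, deploy in another.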

If we take the company's statement at face value, that means ONNX could accelerate the development process for AI tech, delivering things like connected cars even faster. On Tuesday, Microsoft also announced that devs would soon have more tools to play around with on the repository, including its Cognitive Toolkit and Project Brainwave platforms.

But, this is far from the first machine learning initiative to bring industry heavyweights together. Microsoft and Facebook are already part of the 'Partnership on AI,' along with Apple, Amazon, Google, and IBM. That team-up is all about increasing public awareness and boosting research. Funny how machine learning has a knack for turning rivals into pals. Maybe it won't end up destroying the human race after all.

Source: Microsoft