
Gaming News

Artificial Intelligence Gets Good At Creating Anime Girls

October 3, 2018 — by Kotaku.com

The types of anime-style characters artificial intelligence was making in 2015 weren’t good. But now in 2018, the machines have improved. A lot.

As Twitter user Yoshiwo Konogi points out, it’s amazing how good AI has gotten at this type of art.

Earlier this year, the deep learning AI character creation system Crypko went into beta.

Because Japanese law allows copyrighted images to be used in machine learning, I’d assume AI’s ability to make anime girls has improved at a faster pace there.
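For a rough sense of how a system like Crypko works under the hood: it is built on generative adversarial networks (GANs), where a generator network learns to turn random noise into images. Here is a minimal, untrained PyTorch sketch of that generator side; the layer sizes and the AnimeFaceGenerator name are illustrative assumptions, not Crypko’s actual architecture.

```python
import torch
import torch.nn as nn

# Minimal DCGAN-style generator sketch: maps a random latent vector to a
# 64x64 RGB image. Layer sizes are illustrative, not Crypko's real model.
class AnimeFaceGenerator(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        # z: (batch, latent_dim) -> reshape to (batch, latent_dim, 1, 1)
        return self.net(z.view(z.size(0), -1, 1, 1))

# Sampling a new character is just a forward pass on random noise.
generator = AnimeFaceGenerator()
images = generator(torch.randn(4, 128))  # 4 random 64x64 faces (untrained here)
print(images.shape)  # torch.Size([4, 3, 64, 64])
```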

For more, check out Crypko’s official site.

PC News and Reviews

NVIDIA GeForce RTX 2080 Ti GPU Is 8x Faster In Ray Tracing Performance and 10x Faster In AI – RTX 2080 And RTX 2070 Also Equipped With RT and AI Engines

August 20, 2018 — by Wccftech.com


NVIDIA just unveiled its brand new lineup of RTX graphics cards, offering a one-of-a-kind revolutionary leap that’s bigger than even the Kepler-to-Maxwell leap of old. The company has introduced two completely new engines into its GPUs that offer a performance boost of up to 800% in some visual effects! That is an absolutely insane leap, and one that is sure to propel the graphics industry forward a few years.

The holy grail of graphics – ray-traced gaming – is here, thanks to the NVIDIA GeForce RTX 2080 Ti, RTX 2080 and RTX 2070

So what’s the big deal? Well, all previous generations of NVIDIA GeForce graphics have contained only one type of engine: the shader engine. The brand new RTX series contains three: the standard shader engine, an RT engine, and tensor cores. NVIDIA has been able to harness the power of AI and a dedicated RT processor to offer performance leaps that are absolutely unheard of in the industry. A single RTX 2080 Ti is 10x faster than a 1080 Ti in AI and 8x faster in ray tracing – which is absolutely bonkers!

NVIDIA introduced a whole plethora of features at the event and has leveraged the increased AI performance for innovative techniques such as predicting pixels to dynamically lower the graphics load at high resolutions – you can expect twice the number of fps in 4K thanks to deep learning models predicting how pixels will play out. This means that even though the RTX 2080 Ti’s CUDA core count isn’t that much higher than its predecessor’s, the AI adds a multiplier effect – giving you much higher performance than the raw shader hardware alone could deliver.
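NVIDIA brands this pixel-prediction technique as DLSS (Deep Learning Super Sampling), announced at the same event. As a hedged illustration of the core idea (render fewer pixels, let a trained network fill in the rest), here is a toy PyTorch upscaler; the real DLSS model is proprietary and far more sophisticated than this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy take on the "predict pixels" idea: render at a lower resolution,
# then let a (here untrained) network restore the missing detail.
class ToyUpscaler(nn.Module):
    def __init__(self):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, low_res):
        # Cheap bilinear upscale, then a learned residual adds back detail.
        upscaled = F.interpolate(low_res, scale_factor=2,
                                 mode="bilinear", align_corners=False)
        return upscaled + self.refine(upscaled)

frame_540p = torch.rand(1, 3, 540, 960)    # rendered at quarter resolution...
frame_1080p = ToyUpscaler()(frame_540p)    # ...displayed at full resolution
print(frame_1080p.shape)  # torch.Size([1, 3, 1080, 1920])
```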

The ray tracing feature is, of course, the most revolutionary one. Ever since the dawn of gaming, pixels have been rendered using rasterization, which meant that developers had to add effects and all the eye candy manually. With real-time ray tracing, however, all a developer needs to do is set the material properties and add a couple of lights to get drop-dead gorgeous, utterly realistic graphics. This isn’t a revolution waiting to happen – it’s already started.
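To make the contrast concrete, here is a minimal sketch of what a ray tracer computes for one ray: intersect it with scene geometry, then shade the hit point from the material and the lights. The scene values below are made up; the new RT Cores accelerate exactly this kind of intersection math in hardware.

```python
import math

# One ray, one sphere, one light: find what the ray hits and shade it.
def trace(origin, direction, sphere_center, sphere_radius, light_pos):
    # Ray/sphere intersection: solve |o + t*d - c|^2 = r^2 for t
    # (assumes a normalized direction, so the quadratic's 'a' term is 1).
    oc = [o - c for o, c in zip(origin, sphere_center)]
    b = 2 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - sphere_radius ** 2
    disc = b * b - 4 * c
    if disc < 0:
        return 0.0  # ray misses: background
    t = (-b - math.sqrt(disc)) / 2
    hit = [o + t * d for o, d in zip(origin, direction)]
    normal = [(h - c) / sphere_radius for h, c in zip(hit, sphere_center)]
    to_light = [l - h for l, h in zip(light_pos, hit)]
    norm = math.sqrt(sum(v * v for v in to_light))
    to_light = [v / norm for v in to_light]
    # Lambertian (diffuse) shading: brightness follows the light angle.
    return max(0.0, sum(n * l for n, l in zip(normal, to_light)))

# One ray through a made-up scene; GPUs cast millions of these per frame.
print(trace((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0, (5, 5, 0)))
```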

The GeForce RTX 2080 Ti is powered by the TU102 GPU, while the RTX 2080 and the RTX 2070 are powered by the Turing TU104 GPU. The TU104 is the successor to NVIDIA’s GP104 and sticks to the same principles that made the GTX 1080 and GTX 1070 great: offering gamers the best performance at the highest efficiency (perf/watt), and delivering products that are highly competitive in pricing and performance.

GeForce RTX — New Family of Gaming GPUs
The new GeForce RTX 2080 Ti, 2080 and 2070 GPUs are packed with features never before seen in a gaming GPU, including:

  • New RT Cores to enable real-time ray tracing of objects and environments with physically accurate shadows, reflections, refractions and global illumination.
  • Turing Tensor Cores to perform lightning-fast deep neural network processing.
  • New NGX neural graphics framework integrates AI into the overall graphics pipeline, enabling AI algorithms to perform amazing image enhancement and generation.
  • New Turing shader architecture with Variable Rate Shading allows shaders to focus processing power on areas of rich detail, boosting overall performance.
  • New memory system featuring ultra-fast GDDR6 with over 600GB/s of memory bandwidth for high-speed, high-resolution gaming.
  • NVIDIA NVLink®, a high-speed interconnect that provides higher bandwidth (up to 100 GB/s) and improved scalability for multi-GPU configurations (SLI).
  • Hardware support for USB Type-C™ and VirtualLink, a new open industry standard being developed to meet the power, display and bandwidth demands of next-generation VR headsets through a single USB-C™ connector.
  • New and enhanced technologies to improve the performance of VR applications, including Variable Rate Shading, Multi-View Rendering, and VRWorks Audio.

NVIDIA GeForce RTX 20 Series Graphics Cards Official Specifications:

| Graphics Card Name | NVIDIA GeForce RTX 2070 | NVIDIA GeForce RTX 2080 | NVIDIA GeForce RTX 2080 Ti |
|---|---|---|---|
| GPU Architecture | Turing GPU (TU104) | Turing GPU (TU104) | Turing GPU (TU102) |
| Process | 12nm NFF | 12nm NFF | 12nm NFF |
| Die Size | TBD | TBD | 754mm² |
| Transistors | TBD | TBD | 18.4 Billion |
| CUDA Cores | 2304 | 2944 | 4352 |
| TMUs/ROPs | 144/64 | 184/64 | 272/88 |
| GigaRays | 6 Giga Rays/s | 8 Giga Rays/s | 10 Giga Rays/s |
| Cache | 4 MB L2 Cache? | 4 MB L2 Cache? | 6 MB L2 Cache |
| Base Clock | 1410 MHz | 1515 MHz | 1350 MHz |
| Boost Clock | 1620 MHz (1710 MHz OC) | 1710 MHz (1800 MHz OC) | 1545 MHz (1635 MHz OC) |
| Compute | TBD | TBD | TBD |
| Memory | 8 GB GDDR6 | 8 GB GDDR6 | 11 GB GDDR6 |
| Memory Speed | 14.00 Gbps | 14.00 Gbps | 14.00 Gbps |
| Memory Interface | 256-bit | 256-bit | 352-bit |
| Memory Bandwidth | 448 GB/s | 448 GB/s | 616 GB/s |
| Power Connectors | 8 Pin | 8+8 Pin | 8+8 Pin |
| TDP | 180W | 215W | 250W |
| Price | $499 US | $699 US | $999 US |
| Price (Founders Edition) | $599 US | $799 US | $1,199 US |
| Launch | September 2018 | August 2018 | August 2018 |
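As a sanity check, the memory bandwidth rows in the table follow directly from the memory speed and the interface width:

```python
# Bandwidth (GB/s) = per-pin speed (Gbps) * bus width (bits) / 8 bits per byte.
def memory_bandwidth_gbs(speed_gbps, bus_width_bits):
    return speed_gbps * bus_width_bits / 8

print(memory_bandwidth_gbs(14.0, 256))  # RTX 2070/2080: 448.0 GB/s
print(memory_bandwidth_gbs(14.0, 352))  # RTX 2080 Ti:   616.0 GB/s
```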


Gaming News

Dota 2 Pros Play Bots, Get Wrecked

August 6, 2018 — by Kotaku.com


Dota 2 pro players took on a team of bots last night. They didn’t fare well. (Screengrab via Twitch)

A team of three former Dota 2 pros, one current pro, and a commentator played against five artificial intelligence bots last night at an event in San Francisco. The bots defeated the humans, 2-1, talking plenty of smack along the way.

The bots came from OpenAI, a non-profit AI research company co-founded by Elon Musk. OpenAI’s five-on-five bot team has been in development all year, and last month it was beating amateur teams, according to OpenAI’s developers, but this was the first bout against real pros: former player Ben “Merlini” Wu, former player Ioannis “Fogged” Loucas, caster Austin “Capitalist” Walsh, former player William “Blitz” Lee, and current pro David “MoonMeander” Tan.

You can watch it all here.

This was a natural follow-up to last year’s bout between humans and bots, when one of OpenAI’s machines defeated a top player in a one-on-one match. There were some restrictions on this five-on-five match—namely, the teams could only use 18 of Dota 2’s 115 heroes—but the bots were still good enough to prove that humanity is doomed.

Next, OpenAI’s developers plan to take the bots to Valve’s official International tournament, where they’ll compete against the best Dota 2 players in the world.

Tech News

Fox AI predicts a movie's audience based on its trailer

July 29, 2018 — by Engadget.com


Modern movie trailers are already cynical exercises in attention grabbing (such as the social media-friendly burst of imagery at the start of many clips), but they might be even more calculated in the future. Researchers at 20th Century Fox have produced a deep learning system that can predict who will be most likely to watch a movie based on its trailer. Thanks to training that linked hundreds of trailers to movie attendance records, the AI can draw a connection between visual elements in trailers (such as colors, faces, landscapes and lighting) and the performance of a film for certain demographics. A trailer with plenty of talking heads and warm colors may appeal to a different group than one with lots of bold colors and sweeping vistas.
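As a hedged sketch of the pipeline described here (encode trailer frames into visual features, pool them, predict attendance per demographic), consider the toy PyTorch model below. The shapes, the stand-in frame encoder, and the demographic count are all illustrative assumptions, not Fox’s actual system.

```python
import torch
import torch.nn as nn

N_DEMOGRAPHICS = 6  # arbitrary; Fox's real demographic breakdown is unknown here

class TrailerAudienceModel(nn.Module):
    def __init__(self, feature_dim=512):
        super().__init__()
        # Stand-in for a pretrained CNN that encodes each frame (colors,
        # faces, landscapes and lighting end up embedded in these features).
        self.frame_encoder = nn.Linear(3 * 64 * 64, feature_dim)
        self.head = nn.Linear(feature_dim, N_DEMOGRAPHICS)

    def forward(self, frames):
        # frames: (n_frames, 3, 64, 64) sampled from one trailer
        feats = self.frame_encoder(frames.flatten(1))
        trailer_embedding = feats.mean(dim=0)  # pool over the whole trailer
        return torch.sigmoid(self.head(trailer_embedding))  # attendance odds

trailer = torch.rand(240, 3, 64, 64)     # e.g. one frame every half-second
print(TrailerAudienceModel()(trailer))   # one score per demographic group
```

Training such a model against historical attendance records, as the article describes, is what lets the pooled visual features become predictive.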

Notably, the deep learning approach already appears to work in real world conditions. While Fox did use existing movies as a benchmark, it also had success anticipating the performance of future movies. Sure enough, the visual cues in a brand new movie trailer gave an idea as to what attendance would be like several months later.

There are flaws in this method. It doesn’t capture temporal info (Fox uses an explosion after a car chase as an example), and it would ideally combine both the video and text descriptions to get a fuller sense of the story.

However, Fox isn’t shy about the practical applications. The AI could help studios craft trailers they know will appeal to a movie’s intended audience, whether they’re casual moviegoers who stick to the blockbusters or aficionados who want something off the beaten path. You might well see trailers that play up specific imagery to increase the chances that you’ll buy tickets. And that’s important in the streaming era, where movie theaters have to compete for viewers who could easily stay home and watch something on Amazon or Netflix.

Tech News

Adorable home robot Kuri is being discontinued

July 25, 2018 — by Engadget.com


Cute mechanical companion Kuri is no more. In a blog post published today, manufacturer Mayfield Robotics said that operations have been paused while it evaluates the company’s future, and that pre-orders of the adorable home robot will not be filled (all pre-order deposits will be refunded).

Mayfield Robotics, part of the Bosch Startup Platform, was established in 2015 with a bold vision to domesticate robots. Kuri was designed to be neither traditionally functional (like a vacuum cleaner), nor educational, but was intended to enter the home as a family member, reading to kids, playing with pets and taking photos of precious family moments.

The first Kuri units were priced at $700 apiece — relatively affordable for the tech involved but nonetheless expensive for a robot that didn’t really do much. Interest in Kuri was high, but pre-orders were low. As Mayfield’s blog post notes, “there was not a business fit within Bosch to support and scale [the] business.” Crowdfunded “social robot” Jibo faced a similar issue, failing to scale as backers hoped it would.

The decision arguably reflects the wider robotics industry. Droids are improving, but putting a cute face on them doesn’t make them useful, and usefulness is what’s going to sell products. Look at virtual assistants on smart speakers, such as Amazon’s Alexa. These have become commonplace in modern homes because they have tangible purpose — and are, of course, considerably more affordable. At this stage in robotics R&D, poor adorable Kuri could never compete.

Tech News

Watch the Google Cloud Next keynote in under 13 minutes

July 24, 2018 — by Engadget.com

With the Google Next 2018 conference — the I/O for cloud computing — now underway in San Francisco, the company spent some time Tuesday morning crowing over its most recent cloud-based accomplishments and explaining where the platform will be expanding in the future. Diane Greene, CEO of Google Cloud, took the stage to announce that this year’s conference is the “biggest Google event ever” with more than 20,000 registered attendees.

AI and security were the two main points of focus for Google Cloud moving forward, or as Greene put it, “Security is the number one worry, and AI is the number one opportunity.” She hinted that nearly a dozen new security features and tools will debut at the conference this week, in addition to the twenty announced back in March. AI will play an important role in that space. It’s already being used to check your grammar in Google Docs and sniff out spam in Gmail.

Google hopes to expand its cloud services into new fields in the coming months and years, especially healthcare. The company announced during the keynote that it is partnering with the Broad Institute for genome processing tools as well as the National Institutes of Health to help crunch massive biomedical data sets.

Tech News

Google Docs uses AI to catch your grammar mistakes

July 24, 2018 — by Engadget.com


You no longer have to turn to tools like Grammarly if your Google Docs output lacks polish. As part of a sweeping set of updates aimed mostly at G Suite users, Google has introduced grammar suggestions to Docs users involved in its Early Adopter Program. The addition uses machine translation techniques to spot everything from basic grammatical goofs (such as “a” instead of “an”) to larger issues with sentence structure, including subordinate clauses. The AI nature of the checker should help it adapt over time and catch “trickier” issues.
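Framing grammar checking as machine translation means treating flawed English as the “source language” and corrected English as the “target.” Google’s production system is a trained neural sequence-to-sequence model; the toy stand-in below only handles the “a”/“an” case the article mentions, purely to illustrate the input-to-output framing.

```python
# Toy "translator" from flawed English to corrected English. A real system
# is a trained seq2seq model; this only fixes the article error mentioned
# in the story, and only by a crude first-letter heuristic.
VOWEL_SOUNDS = ("a", "e", "i", "o", "u")

def toy_grammar_translate(sentence):
    words = sentence.split()
    fixed = []
    for i, word in enumerate(words):
        nxt = words[i + 1] if i + 1 < len(words) else ""
        if word == "a" and nxt.lower().startswith(VOWEL_SOUNDS):
            word = "an"
        elif word == "an" and nxt and not nxt.lower().startswith(VOWEL_SOUNDS):
            word = "a"
        fixed.append(word)
    return " ".join(fixed)

print(toy_grammar_translate("He sent a email with an draft attached"))
# -> "He sent an email with a draft attached"
```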

There’s more. True to its promises, Google is making Smart Reply available to Hangouts chats in G Suite over the next few weeks. You no longer have to dutifully type out an “I don’t think so” when someone asks if the quarterly report is ready. Also, Gmail’s Smart Compose is no longer confined to home users. The G Suite crowd can use autocomplete to zip past the formalities and focus on the email content that really matters. All told, Google is bent on eliminating as much of the drudgery of writing as possible — even if the results can occasionally feel a bit impersonal.


Tech News

Iris scanner AI can tell the difference between the living and the dead

July 24, 2018 — by Engadget.com


It’s possible to use a dead person’s fingerprints to unlock a device, but could you get away with exploiting the dead using an iris scanner? Not if a team of Polish researchers have their way. They’ve developed a machine learning algorithm that can distinguish between the irises of dead and living people with 99 percent accuracy. The scientists trained their AI on a database of iris scans from various times after death (yes, that data exists) as well as samples of hundreds of living irises, and then pitted the system against eyes that hadn’t been included in the training process.

It’s a trickier process than you might think. Dead people’s eyes usually have to be held open by retractors, so the researchers had to crop out everything but the iris to avoid obvious cues. The algorithm makers also took care to snap live photos using the same camera that was used for the cadaver database, reducing the chances of technical differences spoiling the results.
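As a hedged sketch of the setup described above (a binary classifier over tightly cropped iris images), here is a toy PyTorch model. The researchers’ actual architecture, preprocessing and training regime differ in detail; this only illustrates the shape of the problem.

```python
import torch
import torch.nn as nn

# Toy liveness classifier over cropped grayscale iris images. Cropping to
# the iris removes giveaways like the retractors mentioned in the article.
class IrisLivenessNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classify = nn.Linear(32, 1)  # one logit: living vs. cadaver

    def forward(self, iris_crop):
        x = self.features(iris_crop).flatten(1)
        return torch.sigmoid(self.classify(x))  # P(iris is from a cadaver)

# Training would use binary cross-entropy on labeled scans from both groups.
batch = torch.rand(8, 1, 64, 64)  # batch of grayscale iris crops
print(IrisLivenessNet()(batch).shape)  # torch.Size([8, 1])
```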

There’s a catch to the current approach: it can only spot dead eyes when the person has been deceased for 16 hours or more, as the differences in irises aren’t pronounced enough in the first few hours. A crook could theoretically kill someone, prop their eyes open soon afterward and unlock their phone. However, this would limit the amount of time they could use this method. You may not want to make too many enemies, in other words — just take comfort in knowing that your data could one day be secure against grave robbers.

Tech News

Russia may send a robot 'crew' to space in 2019

July 23, 2018 — by Engadget.com


Leveraging robotics to undertake dangerous missions has obvious benefits for mankind, and space travel is no exception. In 2011, NASA sent its dexterous assistant ‘Robonaut 2’ on a trip to the International Space Station (ISS) with the objective of working alongside presiding astronauts. Now a “source in the rocket and space industry” tells RIA Novosti that a Russian android duo could be following suit as early as next year.

According to Defense One, the FEDOR androids will, in an unprecedented move, fly on the unmanned Soyuz spacecraft not as cargo, but as crew members. The Roscosmos space agency has reportedly given the flight its preliminary approval. We’ve reached out for further confirmation on this front.

FEDOR, an abbreviation of Final Experimental Demonstration Object Research, refers to a 2014 program that aimed to create a robot capable of replacing humans during high-risk scenarios such as rescue missions. The androids have been endowed with a number of abilities, including driving, push-ups, lifting weights, and, you guessed it, shooting. Former Deputy PM Dmitry Rogozin then had to deny Russia was “creating a terminator”.

With rapid developments in the AI arena, the question of whether it will be used for destructive or benevolent purposes is always on the table, but Rogozin assured that FEDOR would have “great practical significance in various fields.” Backing up those comments is CNA associate research analyst Samuel Bendett, who points out that despite its military-ready build, FEDOR was designed to function in space from the beginning.

Tech News

DARPA pushes for AI that can explain its decisions

July 23, 2018 — by Engadget.com


Companies like to flaunt their use of artificial intelligence to the point where it’s virtually meaningless, but the truth is that AI as we know it is still quite dumb. While it can generate useful results, it can’t explain why it produced those results in meaningful terms, or adapt to ever-evolving situations. DARPA thinks it can move AI forward, though. It’s launching an Artificial Intelligence Exploration program that will invest in new AI concepts, including “third wave” AI with contextual adaptation and an ability to explain its decisions in ways that make sense. If it identified a cat, for instance, it could explain that it detected fur, paws and whiskers in a familiar cat shape.
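Here is a toy sketch of what that explanation style might look like in code. The hard-coded “detected parts” stand in for real perception modules, and the evidence threshold is an arbitrary assumption:

```python
# Toy "third wave" explanation interface: the classifier reports which
# evidence drove its decision, not just the label. Inputs stand in for
# the outputs of real sub-detectors (fur, paws, whiskers, ...).
def classify_with_explanation(detected_parts):
    CAT_EVIDENCE = {"fur", "paws", "whiskers", "cat-shaped outline"}
    support = sorted(CAT_EVIDENCE & detected_parts)
    if len(support) >= 3:  # arbitrary evidence threshold for this sketch
        return "cat", f"detected {', '.join(support)} in a familiar cat shape"
    seen = ", ".join(sorted(detected_parts)) or "nothing"
    return "unknown", f"insufficient evidence (only saw: {seen})"

label, reason = classify_with_explanation({"fur", "paws", "whiskers", "collar"})
print(label, "--", reason)
# cat -- detected fur, paws, whiskers in a familiar cat shape
```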

Importantly, DARPA also hopes to step up the pace. It’s promising “streamlined” processes that will lead to projects starting three months after a funding opportunity shows up, with feasibility becoming clear about 18 months after a team wins its contract. You might not have to wait several years or more just to witness an AI breakthrough.

The industry isn’t beholden to DARPA’s schedule, of course. It’s entirely possible that companies will develop third wave AI just as quickly on their own. This program could light a fire under those companies, mind you. And if nothing else, it suggests that AI pioneers are ready to move beyond today’s ‘basic’ machine learning and closer to AI that actually thinks instead of merely churning out data.
