AI-Powered PicsArt Magic Effects Coming to a Smartphone Near You | NVIDIA Blog

blogs.nvidia.com · by Tony Kontzer

House or horse, bird or barn? Deep learning and GPU computing have quickly advanced the abilities of image recognition technology to superhuman levels.

Now, PicsArt, maker of the social photo editor by the same name, is applying this breakthrough in artificial intelligence to the creation of images.

Hitting the market today, “Magic Effects” is a new feature in the latest version of the PicsArt app, which has been downloaded more than 300 million times and boasts 80 million active monthly users. Magic Effects uses GPU-powered AI to analyze the quality and context of photos, and enables users to transform their pics in seconds with an array of filtering effects that are customized based on the AI analysis.

If, for example, a user applies the “Neo Pop” effect to a photo, the result won’t be standardized. Instead, it will be customized based on the qualities of the photo in question. Users can further customize the filters using a variety of touch-interface tools.
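
PicsArt hasn't published how its AI analysis drives each effect, but the idea of a filter whose parameters adapt to the photo itself can be illustrated with a very rough sketch. The brightness-based heuristic and function names below are assumptions for illustration only, not PicsArt's method:

```python
# Toy illustration of an "adaptive filter": the effect's strength is derived
# from the photo's own statistics rather than being fixed. The brightness
# heuristic here is an illustrative assumption, not PicsArt's method.
from PIL import Image, ImageEnhance
import numpy as np

def adaptive_pop_effect(path: str, out_path: str) -> None:
    img = Image.open(path).convert("RGB")
    pixels = np.asarray(img, dtype=np.float32) / 255.0
    mean_brightness = pixels.mean()            # 0.0 (dark) .. 1.0 (bright)
    # Darker photos get a stronger color/contrast boost than bright ones.
    color_boost = 1.2 + 0.8 * (1.0 - mean_brightness)
    contrast_boost = 1.1 + 0.4 * (1.0 - mean_brightness)
    img = ImageEnhance.Color(img).enhance(color_boost)
    img = ImageEnhance.Contrast(img).enhance(contrast_boost)
    img.save(out_path)

# adaptive_pop_effect("photo.jpg", "photo_pop.jpg")
```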

Here’s an example of a Magic Effects filter in action, turning a photo into a colorful painting.

Baidu Releases AI Benchmark

eetimes.com

SAN JOSE, Calif. – Calling for 100x faster processors, China Web giant Baidu released DeepBench, an open-source benchmark that measures how fast processors train neural networks for machine learning.

DeepBench is available online, along with initial results from Intel and Nvidia processors running it. The benchmark tests low-level operations such as matrix multiplication, convolutions and the handling of recurrent layers, as well as the time it takes for data to be shared with all processors in a cluster.
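
To give a flavor of the kind of low-level measurement involved, here is a minimal sketch of a DeepBench-style microbenchmark that times dense matrix multiplication (GEMM) at a few deep-learning-like sizes. The sizes and the NumPy backend are illustrative assumptions; DeepBench itself benchmarks vendor libraries (cuBLAS, MKL and the like) from C++:

```python
# Illustrative microbenchmark in the spirit of DeepBench: time dense
# matrix multiplications (GEMM) and report throughput in GFLOP/s.
import time
import numpy as np

def time_gemm(m, n, k, repeats=10):
    a = np.random.rand(m, k).astype(np.float32)
    b = np.random.rand(k, n).astype(np.float32)
    np.dot(a, b)                                 # warm-up run
    start = time.perf_counter()
    for _ in range(repeats):
        np.dot(a, b)
    elapsed = (time.perf_counter() - start) / repeats
    gflops = 2.0 * m * n * k / elapsed / 1e9     # 2*m*n*k FLOPs per GEMM
    return elapsed, gflops

# Matrix sizes below are illustrative, not DeepBench's official kernel list.
for m, n, k in [(1760, 16, 1760), (2560, 64, 2560), (4096, 128, 4096)]:
    secs, gflops = time_gemm(m, n, k)
    print(f"GEMM {m}x{k} * {k}x{n}: {secs * 1e3:.2f} ms, {gflops:.1f} GFLOP/s")
```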

Machine learning has emerged as a critical workload for Web giants such as Baidu, Google, Facebook and others. The workloads come in many flavors serving applications such as speech, object and video recognition and automatic language translation.

Today the job of training machine learning models “is limited by compute, if we had faster processors we’d run bigger models…in practice we train on a reasonable subset of data that can finish in a matter of months,” said Greg Diamos, a senior researcher at Baidu’s Silicon Valley AI Lab.

The lab has found, for example, that it can reduce errors in automatic language translation by 40 percent for every order-of-magnitude improvement in computing performance. “We could use improvements of several orders of magnitude–100x or greater,” said Diamos.
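
A quick back-of-the-envelope calculation (the compounding assumption is ours, not Baidu's) shows why 100x matters: if each 10x in compute cuts errors by 40 percent, two orders of magnitude would leave roughly 0.6 × 0.6 = 0.36 of the original error, about a 64 percent total reduction.

```python
# Back-of-the-envelope illustration of "40% fewer errors per 10x compute".
# The normalized starting error and compounding assumption are illustrative.
error = 1.0                                  # normalized starting error rate
for order_of_magnitude in range(1, 3):       # 10x, then 100x
    error *= (1 - 0.40)                      # each 10x cuts errors by 40%
    print(f"{10 ** order_of_magnitude}x compute -> "
          f"error at {error:.2f} of original")
# 10x  -> 0.60 of original
# 100x -> 0.36 of original (~64% total reduction)
```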

Smartphone Speech Recognition Is 3X Faster Than Texting

Speech-recognition software is not only three times faster than human typists at entering text, it’s also more accurate.

Want to save some time? New research suggests you should be using your smartphone’s speech-recognition software to text, instead of your thumbs.

Researchers at Stanford University recently devised an experiment pitting Chinese tech giant Baidu’s speech recognition software against 32 texters, ages 19 to 32, working with the built-in keyboard on an Apple iPhone. Baidu’s Deep Speech 2 software was not only three times faster than the human typists, it was also more accurate.
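
The comparison comes down to two simple measurements: entry speed and error rate. As a hedged sketch of how such metrics are typically computed (the function names and the word-level edit-distance definition are assumptions, not the Stanford study's exact protocol):

```python
# Sketch of the two metrics behind a speed/accuracy comparison: entry rate
# (words per minute) and word error rate via edit distance. These are
# common definitions, not necessarily the study's exact protocol.

def words_per_minute(text: str, seconds: float) -> float:
    return len(text.split()) / (seconds / 60.0)

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words (substitutions, insertions, deletions).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(words_per_minute("the quick brown fox jumps over the lazy dog", 4.0))
print(word_error_rate("hello world", "hello word"))
```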

The researchers hope this revelation “spurs the development of innovative applications of speech recognition technology,” which has historically gotten a pretty bad rap, often billed as slow and inaccurate.

Prof. Geoffrey Hinton Awarded IEEE Medal For His Work In Artificial Intelligence

Stanford team creates computer vision algorithm that can describe photos

Computers only recently began to get the software needed to discern unknown objects; now machine learning takes computer vision to the next level with a system that can describe objects and put them into context. Coming soon: better visual search?

Stanford Professor Fei-Fei Li, director of the Stanford Artificial Intelligence Lab, leads work on a computer vision system.

Computer software only recently became smart enough to recognize objects in photographs. Now, Stanford researchers using machine learning have created a system that takes the next step, writing a simple story of what’s happening in any digital image.

“The system can analyze an unknown image and explain it in words and phrases that make sense,” said Fei-Fei Li, a professor of computer science and director of the Stanford Artificial Intelligence Lab.

“This is an important milestone,” Li said. “It’s the first time we’ve had a computer vision system that could tell a basic story about an unknown image by identifying discrete objects and also putting them into some context.”

Humans, Li said, create mental stories that put what we see into context. “Telling a story about a picture turns out to be a core element of human visual intelligence but so far it has proven very difficult to do this with computer algorithms,” she said.

At the heart of the Stanford system are algorithms that improve its accuracy by scanning scene after scene, looking for patterns, and then using the accumulation of previously described scenes to extrapolate what is being depicted in the next unknown image.
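
The Stanford model's details aren't reproduced here, but the general pattern behind systems that caption images is a convolutional encoder that summarizes the picture feeding a recurrent decoder that generates the sentence. A minimal sketch of that pattern follows; all layer sizes, vocabulary size and module names are illustrative assumptions, not the Stanford system itself:

```python
# Minimal sketch of the encoder-decoder pattern behind image captioning:
# a CNN summarizes the image, an RNN generates the caption word by word.
import torch
import torch.nn as nn

class CaptionModel(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Image encoder: a small CNN mapping an RGB image to a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Language decoder: an LSTM that predicts the next caption word.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, caption_tokens):
        # Use the image feature as the first "word" of the sequence.
        img_feat = self.encoder(images).unsqueeze(1)          # (B, 1, E)
        word_embs = self.embed(caption_tokens)                # (B, T, E)
        inputs = torch.cat([img_feat, word_embs], dim=1)      # (B, T+1, E)
        hidden, _ = self.lstm(inputs)
        return self.to_vocab(hidden)                          # next-word scores

model = CaptionModel()
scores = model(torch.randn(2, 3, 64, 64), torch.randint(0, 10000, (2, 12)))
print(scores.shape)  # torch.Size([2, 13, 10000])
```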
