
How gaming graphics changed the world: Nvidia AI Conference Sydney

14 Sep 18

The Singularity, the hypothetical moment in time when artificial intelligence becomes capable of recursive self-improvement, could radically change human civilisation almost overnight. 

According to the boffins at Nvidia, we may owe the next big leap in human technological advancement to gamers’ desire for better PC graphics.

Last week, graphics card powerhouse Nvidia hosted their AI event at Sydney’s International Convention Centre. There they invited developers to check out their advances in deep learning and their new Turing-class GPUs.

In a relatively short amount of time, machine learning has gone from theory to reality. Nvidia’s DGX systems are giving scientists the tools they need to train the artificial intelligence that is going to revolutionise the world.

And it’s all thanks to a couple of GeForce GTX580 graphics cards. 

In his keynote speech, Marc Hamilton, Nvidia’s Vice President of Solutions Architecture and Engineering, looked back at how the power of GPU CUDA cores was first harnessed for AI.

In 2012, University of Toronto students Alex Krizhevsky and Ilya Sutskever, along with their professor, Geoffrey Hinton, entered the ImageNet Large Scale Visual Recognition Challenge with a convolutional neural network programmed in CUDA, running on two off-the-shelf Nvidia graphics cards. The trio won the competition and set up a company that was promptly purchased by Google.

Professor Hinton is also the great-great-grandson of George Boole (the English mathematician who came up with Boolean algebra), proving that the apple doesn’t fall far from the tree.

But why GPUs? Nvidia’s Marc Hamilton explained that while CPUs are good at taking a packet of information and processing it, they do so sequentially, one operation at a time. GPU technology, on the other hand, was developed with the need to render thirty images a second and get them up on the screen, so it performs the same operation on thousands of pixels in parallel. Very basically speaking, it’s this parallelism that makes GPUs so efficient for training convolutional neural networks.
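Hamilton’s sequential-versus-parallel point can be sketched in a few lines of Python. This is a loose analogy, not Nvidia’s code: the vectorised NumPy version stands in for the GPU applying one operation to every pixel at once.

```python
import numpy as np

# CPU-style sequential pass: visit one pixel at a time.
def brighten_sequential(image, amount):
    out = image.copy()
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = min(out[i, j] + amount, 255)
    return out

# GPU-style data-parallel pass: the same operation applied to
# every pixel at once (vectorised here with NumPy as a stand-in).
def brighten_parallel(image, amount):
    return np.minimum(image + amount, 255)

image = np.full((4, 4), 200, dtype=np.int64)
assert np.array_equal(brighten_sequential(image, 100),
                      brighten_parallel(image, 100))
```

Both functions produce identical results; the difference is that the second form maps naturally onto thousands of GPU cores working simultaneously, which is exactly what CUDA exposes.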

And it is the application of this cutting-edge technology that Nvidia was showing off to the media at the conference. 

The most visually impressive application has to be real-time ray tracing. Nvidia’s showcase demo, running on RTX-infused Quadro GPUs, had a photoreal model of a Porsche with ray-traced lighting that could be adjusted on the fly. The detail was amazing, right down to the refraction in the headlight lenses.
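At its heart, ray tracing fires simulated light rays into a scene and tests what they hit. A toy sketch of that central calculation (nothing like Nvidia’s RTX pipeline, which does billions of these tests per second) is the ray–sphere intersection test:

```python
import math

# Does a ray (origin o, unit direction d) hit a sphere (centre c,
# radius r)? Solve the quadratic |o + t*d - c|^2 = r^2 for t.
def ray_hits_sphere(o, d, c, r):
    oc = tuple(oi - ci for oi, ci in zip(o, c))
    b = 2 * sum(di * oci for di, oci in zip(d, oc))
    cc = sum(x * x for x in oc) - r * r
    disc = b * b - 4 * cc  # a = 1 because d is a unit vector
    # Hit only if the quadratic has a real, non-negative root
    # (the intersection must lie in front of the ray origin).
    return disc >= 0 and (-b - math.sqrt(disc)) / 2 >= 0

# A ray fired along +z from the origin hits a sphere centred at z=5...
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1))  # True
# ...but misses one offset well to the side.
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (4, 0, 5), 1))  # False
```

A full ray tracer repeats this kind of test for every pixel, bouncing rays off surfaces to compute reflections and refractions, which is why doing it in real time has been so hard.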

Apart from looking very cool, real-time ray tracing allows designers to examine their products in real-world lighting situations rather than rely on software tricks. Amazingly, the upcoming Shadow of the Tomb Raider and Battlefield V games will both feature this technology courtesy of GeForce RTX GPUs.

Another mind-blowing visual application Nvidia was showing off came in the form of a couple of Adobe Photoshop plug-ins using the machine learning of Turing-based RTX and Quadro GPUs. Super Rez makes the formerly impossible/ridiculous CSI-style image enhancement almost a reality. A low-resolution image can be cleaned up, with the plug-in actually adding in information. The result is an image with better clarity and more detail than the original photograph. The other plug-in, InPaint, allows users to literally paint out objects and people from photographs, which is both amazing and slightly unnerving.
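To see why “adding in information” is remarkable: classical upscaling can only repeat or blend the pixels it already has. A minimal nearest-neighbour sketch (not Nvidia’s plug-in) makes the limitation obvious:

```python
import numpy as np

# Classical nearest-neighbour upscaling: each pixel is simply
# duplicated `factor` times in each direction.
def upscale_nearest(image, factor):
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

small = np.array([[10, 20],
                  [30, 40]])
big = upscale_nearest(small, 2)
# Every value in `big` already existed in `small` — no new
# information. A trained model like the Super Rez demo instead
# synthesises plausible new detail from patterns it has learned.
print(big)
```

That is the step change a learned super-resolution model provides: it does not interpolate, it predicts detail consistent with the millions of images it was trained on.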

Nvidia’s AI technology is also revolutionising medicine by enabling what would ordinarily be a series of 2D scans to be realised as 3D models. Medical professionals will be able to use this technology to better diagnose ailments and more effectively treat patients.

Another demonstration, less glamorous but still impressive, showed how Nvidia’s AI technology monitors their office parking garage. Using hundreds of cameras, the GPUs analyse the data from the camera images, monitoring users’ parking habits and watching out for suspicious or erratic driver behaviour.

Poetically, the technology has come full circle in that the upcoming GeForce RTX 2080 gaming GPUs are built upon the hardware advances that have come out of Nvidia’s machine-learning R&D. Very soon gamers will be able to experience the holy grail of 3D graphics, real-time ray tracing, in their games courtesy of these Turing-based GPUs.

With applications ranging from entertainment to autonomous vehicles, machine learning is already having a huge impact on our day-to-day lives. It is said that we will not be aware that the Singularity is occurring; we will only know once it has happened. That day may be sooner than we think.
 
