Call it my years of technical marketing in the 2D and 3D graphics world, but I’ve always been mesmerized by the invention and rise of the Graphics Processing Unit (GPU) within the walls of Nvidia. I remember carting Windows NT towers with dual Pentium processors around SIGGRAPH to demo the latest iteration of our illustration software. And it was an amazing demo when we could show a customer an unsharp mask being processed in under two minutes in our photo editing software (even if those feats were technically due to multithreading, not GPU power). Even more of my love for the GPU comes from gaming, and years of Frankenstein-ing PCs before I was drawn to the walled garden of Apple. It’s funny to think about now that I’ve seen Epic Citadel on my iPad, but I recall being blown away by the graphical UI of the first Leisure Suit Larry in 1987 as if it were yesterday.

That’s how one of my favorite interviews of all time came back into my focus this week while perusing cool GTRI projects. About every six months, I go back and watch Charlie Rose’s interview with Nvidia CEO Jen-Hsun Huang. It’s 38 minutes of complete geek glory as Huang talks through the history of the GPU and its applications for the future. It’s also just a great lesson in entrepreneurial vision and how to approach new markets. We’re all familiar with the high-end video cards created by Nvidia, but these GPUs are turning up in the coolest of places. Huang talks of GPUs helping geologists interpret seismic data for oil exploration, and of the 2010 Audis that have GPUs powering their navigation and instrumentation electronics.

By releasing a software development kit in 2007 to make the GPU programmable, Nvidia pushed the capabilities of this magical little device to new heights. And since they’re designed as parallel computers, these $200 chips can be strung together to create massive amounts of computing power for relatively little cash. (The GTRI guys note that a modern GPU processes close to two teraflops, while the state-of-the-art supercomputer of 10 years ago managed just over seven teraflops for a price tag of about $110M – less than what four of today’s GPUs, strung together for under $1,000, can deliver.) And therein lies the problem when a hacker wraps their brain around this reality.
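Just to put those figures side by side, here’s a quick back-of-the-envelope sketch in Python; the numbers are the rough estimates quoted above, not measured benchmarks.

```python
# Back-of-the-envelope math on the figures cited above; these are the
# article's rough estimates, not measured benchmarks.

gpu_tflops = 2.0                   # ~2 teraflops per modern GPU
gpu_cost = 200                     # ~$200 per GPU
supercomputer_tflops = 7.0         # state of the art ~10 years ago
supercomputer_cost = 110_000_000   # ~$110M price tag

# How many of today's GPUs would it take to match that old supercomputer?
gpus_needed = int(-(-supercomputer_tflops // gpu_tflops))  # ceiling division
print(f"{gpus_needed} GPUs (~${gpus_needed * gpu_cost}) deliver "
      f"{gpus_needed * gpu_tflops:.0f} TFLOPS vs. {supercomputer_tflops:.0f} TFLOPS "
      f"that once cost ${supercomputer_cost:,}")
```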

Most readers will be familiar with the hacking method known as a “brute force attack,” which relies on sequentially guessing every possible combination of a character set in order to crack a password. That password might be protecting something as innocuous as someone’s email or as important as database-level access to credit card data. The advent of the programmable GPU has been a boon for this kind of attack.
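To make the mechanics concrete, here’s a minimal Python sketch of a brute-force search. The character set, hash function, and target value are purely illustrative; a real attacker would run billions of guesses per second in parallel on GPU hardware rather than looping one guess at a time.

```python
import hashlib
import itertools
import string

# Toy target: the MD5 hash of a short lowercase password.
# In practice attackers work from leaked hash databases, not a hard-coded value.
target_hash = hashlib.md5(b"cab").hexdigest()

charset = string.ascii_lowercase  # only 26 characters in this toy example


def brute_force(target, max_length=4):
    """Guess every combination of charset up to max_length, shortest first."""
    for length in range(1, max_length + 1):
        for combo in itertools.product(charset, repeat=length):
            guess = "".join(combo)
            if hashlib.md5(guess.encode()).hexdigest() == target:
                return guess
    return None


print(brute_force(target_hash))  # finds "cab" after a couple thousand guesses
```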

And here’s where two research scientists at GTRI come into play: Richard Boyd and Joshua Davis. Their team is looking into how this freely available processing power threatens the modern implementation of passwords. Their conclusion is that any password should be a minimum of 12 characters, mixing numbers, symbols, and uppercase letters. But even that offers no lasting protection given ever-increasing computing power.
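A rough keyspace calculation shows why those extra characters matter. The guess rate below is a hypothetical round number for illustration, not a figure from the GTRI study.

```python
import string

# A full keyboard character pool: letters, digits, and symbols (94 characters).
pool = len(string.ascii_letters) + len(string.digits) + len(string.punctuation)

guesses_per_second = 1e9  # hypothetical GPU-assisted rate, for illustration only

for length in (8, 10, 12):
    keyspace = pool ** length
    years = keyspace / guesses_per_second / (60 * 60 * 24 * 365)
    print(f"{length} characters: {keyspace:.2e} combinations, "
          f"~{years:,.1f} years to exhaust")
```

Every two characters added to a full-keyboard password multiplies the search space by nearly 9,000, which is the basic argument behind the 12-character floor.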

According to Davis, the safest ‘password of the future’ may well be a full sentence, since it’s long but still easy to remember. The entire case study is a really interesting read – especially for GPU geeks like me…
