How are GPUs used in brute force attacks?

I'm choosing to assume you're asking why it's a risk rather than how to hack.

GPUs are very good at parallelising mathematical operations, which is the basis of both computer graphics and cryptography. Typically, the GPU is programmed using either CUDA or OpenCL. The reason they're good for brute-force attacks is that they're orders of magnitude faster than a CPU for certain operations - they aren't intrinsically smarter.

The same operations can be done on a CPU, they just take longer.


People have given great answers here that directly answer your question, but I'd like to give a complementary answer to explain more in depth why GPUs are so powerful for this, and other applications.

As some have pointed out, GPUs are specially designed to be fast with mathematical operations, since drawing things onto your screen is all math (plotting vertex positions, matrix manipulations, mixing RGB values, sampling texture space, etc). However, this isn't really the main driving force behind the performance gain. The main driving force is the parallelism. A high end CPU might have 12 logical cores, whereas a high end GPU would be packing something like 3072.

To keep it simple, the number of logical cores equals the total number of concurrent operations that can take place against a given dataset. Say, for example, I want to compare or sum the values of two arrays. Let's say the length of the arrays is 3072. On the CPU, I could create a new empty array with the same length, then spawn 12 threads that iterate across the two input arrays with a step equal to the number of threads (12), concurrently writing the sums into the output array. This would take 256 iterations per thread.
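As a rough sketch of that CPU-side approach (the array contents and names here are purely illustrative), each of the 12 threads starts at its own offset and strides by the thread count:

```cpp
#include <thread>
#include <vector>

// CPU version: 12 threads stride across 3072 elements,
// so each thread performs 3072 / 12 = 256 additions.
int main() {
    const int N = 3072, NUM_THREADS = 12;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), out(N);

    std::vector<std::thread> threads;
    for (int t = 0; t < NUM_THREADS; ++t) {
        threads.emplace_back([&, t] {
            // Each thread starts at its own index and steps by the thread count.
            for (int i = t; i < N; i += NUM_THREADS)
                out[i] = a[i] + b[i];
        });
    }
    for (auto& th : threads) th.join();
    return 0;
}
```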

With the GPU, however, I could upload those same values from the CPU to the GPU, then write a kernel against which 3072 threads are spawned at the same time, completing the entire operation in a single iteration.
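A minimal CUDA sketch of that idea, assuming a simple element-wise add kernel (the names and launch configuration are illustrative, not taken from any particular tool):

```cpp
#include <cuda_runtime.h>

// GPU version: one thread per element, so the whole sum is
// effectively a single parallel step instead of 256 iterations.
__global__ void addArrays(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = a[i] + b[i];
}

int main() {
    const int N = 3072;
    size_t bytes = N * sizeof(float);

    float *a, *b, *out;
    // Unified memory keeps the host-side bookkeeping short for this sketch.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&out, bytes);
    for (int i = 0; i < N; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // 3072 threads total: 12 blocks of 256 threads each.
    addArrays<<<N / 256, 256>>>(a, b, out, N);
    cudaDeviceSynchronize();

    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```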

This is handy for working against any data that can, by its nature, be processed in a parallelizable fashion. What I'm trying to say is that this isn't limited to hacking/evil tools. This is why GPGPU is becoming more and more popular: things like OpenCL, OpenMP and so on have come about because people have realized that we programmers are bogging down our poor little CPUs with work while there is a massive power plant sitting in the PC barely being used by comparison. It's not just for cracking software. For example, I once wrote an elaborate CUDA program that took the lotto history for the last 30 years and calculated prize/win probabilities for tickets of various combinations of all possible numbers with varying numbers of plays per ticket, because I thought that was a better idea than using these great skills to just get a job (this is for laughs, but sadly is also true).

Although I don't necessarily endorse the people giving it, this presentation gives a very simple but rather accurate illustration of why the GPU is so great for anything that can be parallelized, especially when no locking is involved (locking holds up other threads and greatly diminishes the benefits of parallelism).


You don't need any other device, just a suitable GPU and suitable software. For example, cRARk can use your GPU to brute-force rar passwords. And oclhashcat can use your GPU to brute-force lots of things.

Why are GPUs so much faster than CPUs at cracking? Because cracking is something you can run in parallel (you can use every single core to try a different password at the same time), and GPUs have lots of cores that can be used in parallel.
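To make that concrete, here is a heavily simplified sketch of the pattern such tools use. The toy_hash function and the candidate-from-thread-index scheme below are made up purely for illustration; real crackers implement actual hash algorithms and much smarter candidate generation:

```cpp
#include <cuda_runtime.h>
#include <cstdio>

// Toy stand-in for a real hash function, purely for illustration.
__host__ __device__ unsigned int toy_hash(unsigned int candidate) {
    return candidate * 2654435761u ^ (candidate >> 13);
}

// Each thread derives one candidate from its global index and tests it.
// No thread ever waits on another, which is why this scales so well.
__global__ void crack(unsigned int target, unsigned int* found) {
    unsigned int candidate = blockIdx.x * blockDim.x + threadIdx.x;
    if (toy_hash(candidate) == target)
        *found = candidate;   // report a match back to the host
}

int main() {
    unsigned int target = toy_hash(123456);  // pretend this is the leaked hash
    unsigned int* found;
    cudaMallocManaged(&found, sizeof(unsigned int));
    *found = 0xFFFFFFFF;

    // Over a million candidates tested per launch, thousands in flight at once.
    crack<<<4096, 256>>>(target, found);
    cudaDeviceSynchronize();

    printf("candidate found: %u\n", *found);
    cudaFree(found);
    return 0;
}
```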

For example: the GeForce GTX 980 Ti, a high end GPU, has 2816 cores, while no PC CPU has more than 16 cores (the highest I know of is 72 cores, but that is for supercomputing and server purposes).

But why do CPUs have so few cores compared to GPUs? Can't they make CPUs with lots of cores? Of course they can, but it is not as beneficial, because most workloads cannot be parallelized the way graphics can. Much software has to process sequentially, and even when it could run in parallel, writing parallel software is not common because it is harder for developers.

See the graph below:

[Graph: speedup vs. number of cores for different parallel portions of the workload]

Assuming that, on average, 50% of the processing can be parallelized, the speedup is only about 2x even with 16 cores. So increasing the core count has sharply diminishing returns for CPUs.
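That curve is just Amdahl's law: with a parallelizable fraction p of the work and n cores, the speedup works out as

```latex
% Amdahl's law: speedup with n cores when a fraction p of the work is parallel
S(n) = \frac{1}{(1 - p) + \frac{p}{n}}

% With p = 0.5 and n = 16:
S(16) = \frac{1}{0.5 + \frac{0.5}{16}} = \frac{1}{0.53125} \approx 1.88

% And even with infinitely many cores the speedup is capped:
\lim_{n \to \infty} S(n) = \frac{1}{1 - p} = 2
```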