Large savings
Accelerating server based computations using FPGAs and GPUs often results in a 10x or greater speed-up, opening up interesting possibilities. It means that a much smaller server park can be used to solve a specific problem, or, for a same sized server park, that productivity increases by the same factor. This results not only in large power savings, but also in savings on equipment and, of course, on the server cluster footprint.
Quick deployment using high level languages
The tools have improved significantly over the last few years, and today there are many high level language alternatives for CPU offloading, enabling faster development cycles and making the technology accessible to a larger audience:
- OpenCL for FPGAs and GPGPUs
- C/C++ using High Level Synthesis for FPGAs
- Manufacturer specific offerings: CUDA, SDSoC, etc.
FPGA based network cards for ultra low latency and streaming applications have become mainstream for trading platforms and other financial applications. They are also well suited for on-the-fly encryption/decryption, network filtering, real time monitoring, and similar solutions, providing a high degree of CPU offloading.
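As a software analogue of the kind of filtering such cards perform in hardware, a sketch of a frame-level predicate that keeps only TCP-over-IPv4 traffic might look like the following. The field offsets follow the standard Ethernet II and IPv4 layouts; the function name is hypothetical, and a real FPGA NIC would make this decision in the datapath without involving the CPU at all.

```c
#include <stdint.h>
#include <stddef.h>

/* Minimal sketch of a line-rate filter decision: inspect the EtherType
 * and the IPv4 protocol field, and keep only TCP traffic. */
static int keep_frame(const uint8_t *frame, size_t len)
{
    if (len < 34)                     /* Ethernet (14) + minimal IPv4 (20) */
        return 0;
    uint16_t ethertype = (uint16_t)(frame[12] << 8 | frame[13]);
    if (ethertype != 0x0800)          /* 0x0800 = IPv4 */
        return 0;
    return frame[14 + 9] == 6;        /* IPv4 protocol field: 6 = TCP */
}
```

In hardware, the same match-and-drop logic runs at wire speed on every incoming frame, which is what makes such cards attractive for filtering and monitoring workloads.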
Server Based Acceleration
Servers equipped with FPGAs for acceleration are gaining more and more traction. They offer tremendous speed-ups for specific problems and, even more interestingly, do so at low power. Hard floating point cores are now also finding their way into FPGAs, making them an interesting alternative in territory previously dominated by GPGPUs.
Servers accelerated with GPGPUs are today used to tackle many scientific problems, with deep learning lately having become one of the most popular. GPGPUs, which offer a large array of floating point compute elements, shine when applied to highly parallel algorithms with heavy computations.