Graphics Processing Units (GPUs) offer more processing power and a more energy-efficient solution than traditional CPUs for intensive workloads. These factors have contributed to a boom in GPU use in supercomputers and data centers across the world. Getting the best performance from GPUs often requires rewriting applications to take advantage of the hundreds of cores available on the GPU and to address its limited memory. The cost of porting one's code to utilize GPUs is often worth the investment, with many real-world programs seeing over a 5x speedup in computation time. The technology presented here makes porting software to GPUs easier by removing the need to worry about the GPU's memory limit. This allows current GPU programs to be scaled to larger datasets and speeds the development of new GPU applications.
Automated GPU Memory Management
This technology automates the handling of GPU memory by serving as a transparent layer between the application and the OpenCL compiler. Workloads too large to fit within the GPU's memory are automatically split into smaller sub-processes, which are managed intelligently, along with their memory buffers, to limit recurring transfers between the CPU and the GPU. After the sub-processes complete, the data is automatically reassembled, giving the user the illusion of a larger memory space on the GPU.
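The split-process-reassemble pattern described above can be illustrated with a minimal sketch. This is not the actual implementation (the real layer operates transparently beneath OpenCL); it is a hypothetical Python illustration in which `kernel` stands in for an OpenCL kernel launch and `GPU_MEMORY_LIMIT` stands in for the device's memory capacity.

```python
# Hypothetical sketch of the chunking strategy the layer automates:
# a workload larger than GPU memory is split into sub-buffers that fit,
# each sub-buffer is processed in turn, and the partial results are
# reassembled in order, transparently to the caller.

GPU_MEMORY_LIMIT = 4  # pretend the device holds only 4 elements at once


def kernel(chunk):
    # Stand-in for an OpenCL kernel launch; here it simply doubles
    # each element of the sub-buffer.
    return [x * 2 for x in chunk]


def run_with_auto_chunking(data, limit=GPU_MEMORY_LIMIT):
    # Split the input into sub-buffers no larger than the device limit.
    chunks = [data[i:i + limit] for i in range(0, len(data), limit)]
    # Process each sub-buffer, then reassemble the results in order,
    # so the caller sees one contiguous output.
    result = []
    for chunk in chunks:
        result.extend(kernel(chunk))
    return result


print(run_with_auto_chunking(list(range(10))))
# [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

In the real system, the benefit comes from scheduling these sub-processes and their buffers so that data already resident on the GPU is reused rather than re-transferred, which this sequential sketch does not attempt to model.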
- GPGPU SDK
- Scalable GPU Programming
- High Performance Computing
- Increased accessibility of GPU programming
- Shorter development cycles
- Backwards compatible with current OpenCL applications, allowing easy scaling to larger datasets
- Faster code execution through utilization of GPUs
- OpenCL code works on NVIDIA, AMD, and Intel GPUs