CUDA vs OpenCL: Which to Use for GPU Programming


Joseph Sibony

Reading time: 9 minutes

Graphics Processing Units (GPUs) have become an essential source of processing power for high-performance computing applications in recent years. General-purpose GPU (GPGPU) programming means using a GPU together with a Central Processing Unit (CPU) to accelerate computations in applications that were traditionally handled by the CPU alone. GPU programming is now used in virtually every industry, from accelerating video, digital image, and audio signal processing and gaming to manufacturing, neural networks, and deep learning.

GPGPU programming essentially entails dividing the work of one or more processes among different processors to shorten the time to completion. GPGPU applications take advantage of software frameworks such as OpenCL and CUDA to accelerate certain functions, with the end goal of making your work quicker and easier. GPUs make parallel computing possible through hundreds of on-chip processor cores that simultaneously communicate and cooperate to solve complex computing problems.

CUDA and OpenCL are two interfaces used in GPU computing. While both offer similar features, they expose them through different programming interfaces.

Why CUDA?

CUDA, which stands for Compute Unified Device Architecture, is a parallel computing platform and programming model released by NVIDIA in 2007. Using a language similar to C, CUDA is used to develop software for graphics processors and a vast array of general-purpose GPU applications that are highly parallel in nature.

CUDA is a proprietary API and as such is supported only on NVIDIA GPUs based on the Tesla architecture; the graphics cards that support CUDA are the GeForce 8 series, Tesla, and Quadro. The CUDA programming model combines serial and parallel execution and contains a special C function called a kernel, which is, in simple terms, C code that is executed on the graphics card by a fixed number of threads concurrently (learn more about what is CUDA).
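For illustration, here is a minimal CUDA C++ sketch (not from the original article) showing a __global__ kernel and the <<<blocks, threads>>> launch syntax; the kernel and variable names are chosen only for this example:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Managed (unified) memory keeps the sketch short; explicit
    // cudaMalloc/cudaMemcpy is the more traditional pattern.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);   // CUDA kernel launch syntax
    cudaDeviceSynchronize();

    std::printf("c[0] = %f\n", c[0]);          // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Compiled with nvcc, the kernel runs across many threads concurrently, each handling a single element.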

Why OpenCL?

OpenCL, short for Open Computing Language, was launched by Apple and the Khronos Group as a standard for heterogeneous computing that is not restricted to NVIDIA GPUs. OpenCL offers a portable language for GPU programming that targets CPUs, GPUs, digital signal processors, and other types of processors. This portable language is used to design programs or applications that are general enough to run on considerably different architectures while still being adaptable enough to let each hardware platform achieve high performance.

OpenCL provides portable, device- and vendor-independent programs that can be accelerated on many different hardware platforms. The OpenCL C language is a restricted version of C99 with extensions suited to executing data-parallel code on a variety of devices.
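As a concrete illustration (again, not from the original article), a minimal OpenCL C kernel performing a vector addition could look like the following; the kernel name is chosen only for this sketch:

```c
/* OpenCL C: a restricted C99 dialect with extensions such as the
 * __kernel and __global qualifiers and work-item built-ins. */
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *c,
                      const unsigned int n)
{
    size_t i = get_global_id(0);   /* one work-item per element */
    if (i < n)
        c[i] = a[i] + b[i];
}
```

Each work-item processes a single element, mirroring the thread-per-element pattern of a CUDA kernel.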

CUDA vs OpenCL Comparison

Performance

OpenCL provides a portable language for GPU programming that can target very dissimilar parallel processing devices. That does not mean code is guaranteed to run on all of them, if at all, because most have very different feature sets; some extra effort is needed to make code run on multiple devices while avoiding vendor-specific extensions. Unlike a CUDA kernel, an OpenCL kernel can be compiled at runtime, which adds to an OpenCL application's running time. On the other hand, this just-in-time compilation allows the compiler to generate code that makes better use of the target GPU.
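To make the runtime-compilation point concrete, here is a hedged, minimal host-side sketch (assuming an installed OpenCL runtime and the standard CL/cl.h header; error handling is abbreviated) in which clBuildProgram compiles the kernel source just in time for the selected device:

```c
#include <stdio.h>
#include <CL/cl.h>   /* <OpenCL/opencl.h> on macOS */

/* Kernel source kept as a string so it can be compiled at run time. */
static const char *src =
    "__kernel void vec_add(__global const float *a, __global const float *b,"
    "                      __global float *c, const unsigned int n) {"
    "    size_t i = get_global_id(0);"
    "    if (i < n) c[i] = a[i] + b[i];"
    "}";

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);

    /* Just-in-time compilation for this specific device happens here. */
    err = clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    if (err != CL_SUCCESS) {
        char log[4096];
        clGetProgramBuildInfo(prog, device, CL_PROGRAM_BUILD_LOG,
                              sizeof(log), log, NULL);
        fprintf(stderr, "build failed:\n%s\n", log);
        return 1;
    }

    cl_kernel k = clCreateKernel(prog, "vec_add", &err);
    puts("kernel compiled at run time");
    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseContext(ctx);
    return 0;
}
```

The build cost is paid at startup, but the compiler sees the exact target device, which is the potential upside mentioned above.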

CUDA is developed by the same company that develops the hardware on which it runs, so one may expect it to match the computing characteristics of the GPU more closely, offering greater access to features and better performance.

Performance-wise, however, the compiler (and ultimately the programmer) is what makes each interface fast, as both can fully utilize the hardware. Performance depends on several variables, including code quality, algorithm type, and hardware type.

Implementation by Vendors

As of this writing, there is only one vendor implementing CUDA: its proprietor, NVIDIA.

OpenCL, however, has been implemented by a vast array of vendors including but not limited to:

  • AMD: Intel and AMD chips and GPUs are supported
    – Radeon 5xxx, 6xxx, and 7xxx series and the R9xxx series are supported
    – All CPUs support OpenCL 1.2 only
  • NVIDIA: the GeForce 8600M GT, GeForce 8800 GT, GeForce 8800 GTS, GeForce 9400M, GeForce 9600M GT, GeForce GT 120, GeForce GT 130, ATI Radeon 4850, Radeon 4870, and likely more are supported
  • Apple (Mac OS X only is supported)
    – Host CPUs as compute devices are supported
  • Intel: CPU, GPU, and “MIC” (Xeon Phi) are supported

Portability

This is likely the most recognized difference between the two: CUDA runs only on NVIDIA GPUs, while OpenCL is an open industry standard that runs on hardware from NVIDIA, AMD, Intel, and others. OpenCL also provides a CPU fallback, which makes code maintenance easier. CUDA, on the other hand, offers no CPU fallback, so developers add if-statements to their code to distinguish between the presence and absence of a GPU device at runtime.
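A hedged sketch of that if-statement pattern might look like the following; runOnGpu and runOnCpu are hypothetical placeholders, not CUDA APIs:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Returns true only if the CUDA runtime reports at least one usable GPU.
static bool cudaDeviceAvailable() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    return err == cudaSuccess && count > 0;
}

static void runOnGpu() { std::puts("running CUDA path"); }     // would launch kernels
static void runOnCpu() { std::puts("running CPU fallback"); }  // hand-written fallback

int main() {
    if (cudaDeviceAvailable()) {
        runOnGpu();
    } else {
        // CUDA offers no automatic CPU fallback, so this path is the developer's job.
        runOnCpu();
    }
    return 0;
}
```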

Open-source vs commercial

Another highly recognized difference between CUDA and OpenCL is that OpenCL is an open standard while CUDA is a proprietary framework of NVIDIA. This difference brings its own pros and cons, and the decision generally comes down to your app of choice.

Generally, if your app of choice supports both CUDA and OpenCL, going with CUDA is the best option, as it tends to generate better performance results in this scenario thanks to NVIDIA's top-quality support. If some apps are CUDA-based and others support OpenCL, a recent NVIDIA card will help you get the most out of CUDA-enabled apps while maintaining good compatibility with non-CUDA apps.

However, if all your apps of choice support OpenCL, then the decision is already made for you.

Multiple OS Support

CUDA can run on Windows, Linux, and macOS, but only on NVIDIA hardware. OpenCL, by contrast, can run on almost any operating system and on most hardware. In the OS support comparison, the chief deciding factor remains the hardware: CUDA runs on the leading operating systems, while OpenCL runs on almost all of them.

The hardware distinction is what really shapes the comparison: CUDA requires NVIDIA hardware, while OpenCL does not mandate any particular hardware. This distinction has its own pros and cons.

Libraries

Libraries are key to GPU computing because they provide access to functions that have already been fine-tuned to take advantage of data parallelism. CUDA comes in very strong in this category, with support for templates and free raw math libraries that embody high-performance math routines (a hedged cuBLAS sketch follows the list below):

  • cuBLAS – Complete BLAS Library
  • cuRAND – Random Number Generation (RNG) Library
  • cuSPARSE – Sparse Matrix Library
  • NPP – Performance Primitives for Image & Video Processing
  • cuFFT – Fast Fourier Transforms Library
  • Thrust – Templated Parallel Algorithms & Data Structures
  • math.h – C99 floating-point Library
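As an illustration of how little host code such a library call needs, here is a hedged cuBLAS sketch (assuming the CUDA toolkit is installed and the program is linked with -lcublas) that computes a single-precision dot product:

```cpp
#include <cstdio>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
    const int n = 1024;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    cublasHandle_t handle;
    cublasCreate(&handle);

    float result = 0.0f;
    cublasSdot(handle, n, x, 1, y, 1, &result);   // dot product runs on the GPU
    cudaDeviceSynchronize();

    std::printf("dot = %f\n", result);            // expect 2048.0
    cublasDestroy(handle);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```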

OpenCL has alternatives, such as ViennaCL, that can be built easily and have matured in recent years, but nothing on the scale of the CUDA libraries. AMD's OpenCL libraries have the added bonus of running not only on AMD devices but on all OpenCL-compliant devices.

Community


This part of the comparison covers the support, longevity, and commitment behind each framework. While these things can be hard to measure, a look at the forums gives a sense of how large each community is. The number of topics on NVIDIA's CUDA forums is staggeringly larger than on AMD's OpenCL forums. However, the OpenCL forums have been gaining topics in recent years, and one should also note that CUDA has been around longer.

Technicalities

CUDA allows developers to write their software in C or C++ because it is a platform and programming model rather than a language or API in its own right; parallelization is achieved through CUDA keywords.

OpenCL, on the other hand, does not allow kernel code to be written in C++; instead, it provides an environment resembling the C programming language and allows working with GPU resources directly.

 Comparison Table

| Comparison | CUDA | OpenCL |
| --- | --- | --- |
| Performance | No clear advantage; depends on code quality, hardware type, and other variables | No clear advantage; depends on code quality, hardware type, and other variables |
| Vendor implementation | Implemented only by NVIDIA | Implemented by many vendors, including AMD, NVIDIA, Intel, and Apple |
| Portability | Works only on NVIDIA hardware | Can be ported to various other hardware as long as vendor-specific extensions are avoided |
| Open source vs. commercial | Proprietary framework of NVIDIA | Open standard |
| OS support | Supported on the leading operating systems, with the restriction that NVIDIA hardware must be used | Supported on various operating systems |
| Libraries | Extensive high-performance libraries | A good number of libraries usable on all OpenCL-compliant hardware, but not as extensive as CUDA's |
| Community | Larger community | Growing community, not as large as CUDA's |
| Technicalities | Not a language but a platform and programming model that achieves parallelization using CUDA keywords | Kernel code cannot be written in C++; works in an environment resembling the C programming language |

How To Choose

Where GPU acceleration is supported, it brings great benefits to computing power and applications, and CUDA and OpenCL are the leading frameworks at the time of writing. CUDA, being a proprietary NVIDIA framework, is not supported in as many applications as OpenCL, but where it is supported, that support makes for unparalleled performance. OpenCL, which is supported in more applications, does not generally give the same performance boost as CUDA where both are available.

Newer NVIDIA GPUs are CUDA-capable and also deliver strong OpenCL performance for the cases where CUDA is not supported. The general rule of thumb: if the great majority of your apps and hardware of choice support OpenCL, then OpenCL should be the choice for you.

No matter what you decide on, Incredibuild can help you turbocharge your compilations and tests, leading to better computing, be it in content creation, machine learning, signal processing, or tons of other compute-intensive workloads. Our case study with MediaPro is an example of how we can accelerate your compilations and tests to a fraction of the time (in this case, more than 6 times faster).

 
