Most CUDA developers are familiar with the standard ways of passing constant arguments to GPU kernels: the simplest is directly via kernel parameters, and the other option is copying to constant memory. Under certain circumstances, though, there is a third, lesser-known way to get constants into a GPU kernel that may even improve kernel performance.

The following code takes an array of N floats and the constants N and M, and computes each output element with a for loop of M iterations. Each element is handled in parallel by a single CUDA thread. The loop iteration count M is passed in directly as a kernel parameter.
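The kernel itself is not reproduced here; a minimal sketch of what such a kernel might look like follows (the kernel name, the per-iteration work, and the pointer names are illustrative assumptions, not the original code):

```cuda
// Kernel 1: loop count M passed directly as a kernel parameter.
// Each of the N elements is handled by one CUDA thread.
__global__ void kernel_param(float *out, const float *in, int N, int M)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N) {
        float v = in[i];
        // M is a runtime value here, so the compiler cannot
        // fully unroll this loop at compile time.
        for (int j = 0; j < M; ++j)
            v += v;  // illustrative per-iteration work
        out[i] = v;
    }
}
```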

This second kernel does the same thing as the first, but it uses cudaMemcpyToSymbol to store M in constant GPU memory:
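A sketch of this variant, under the same illustrative assumptions as above: M is declared in `__constant__` memory and set from the host before launch.

```cuda
// Kernel 2: M lives in constant memory, written by the host
// with cudaMemcpyToSymbol before the kernel is launched.
__constant__ int d_M;

__global__ void kernel_constant(float *out, const float *in, int N)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N) {
        float v = in[i];
        for (int j = 0; j < d_M; ++j)
            v += v;  // illustrative per-iteration work
        out[i] = v;
    }
}

// Host side, before launching the kernel:
//   int M = 4;
//   cudaMemcpyToSymbol(d_M, &M, sizeof(int));
```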

In both examples, the compiler cannot determine the value of M at compile time, so it can only guess at the number of times to unroll the loop and must emit code to handle other values of M. However, if M were passed in as a template argument, it would be known at compile time, which would allow optimal loop unrolling:
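The templated variant might be sketched as follows (again with illustrative names and per-iteration work): because M is a compile-time constant, the compiler can fully unroll the loop.

```cuda
// Kernel 3: M supplied as a template argument, so it is a
// compile-time constant and the loop can be fully unrolled.
template <int M>
__global__ void kernel_template(float *out, const float *in, int N)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N) {
        float v = in[i];
        #pragma unroll
        for (int j = 0; j < M; ++j)
            v += v;  // illustrative per-iteration work
        out[i] = v;
    }
}
```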

This, however, adds the restriction that the constant M be limited to a known range of values, so that the appropriate instantiation of the kernel can be selected at run time.

In the example code, the dispatch looks something like this:
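Since the original listing is not shown, here is a plausible sketch of such host-side dispatch: a switch over the supported values of M selects the matching template instantiation (the value set and kernel names are assumptions, with a runtime-parameter kernel as a hypothetical fallback).

```cuda
// Host-side dispatch: each supported value of M selects its own
// instantiation of the templated kernel.
switch (M) {
    case 1:  kernel_template<1><<<blocks, threads>>>(d_out, d_in, N); break;
    case 2:  kernel_template<2><<<blocks, threads>>>(d_out, d_in, N); break;
    case 4:  kernel_template<4><<<blocks, threads>>>(d_out, d_in, N); break;
    case 8:  kernel_template<8><<<blocks, threads>>>(d_out, d_in, N); break;
    // Fall back to the runtime-parameter kernel for unsupported values.
    default: kernel_param<<<blocks, threads>>>(d_out, d_in, N, M); break;
}
```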

Here is a more detailed timing breakdown for various values of M, collected on an NVIDIA P100, all for one billion elements.

Kernel 1 and Kernel 2 times are nearly identical, which is as expected, because kernel parameters are themselves passed to the device via constant memory. The templated argument allows the compiler to unroll the loop properly, and that kernel outperforms the other two.

Of course, it’s not always possible to use template arguments, and there are increased compilation costs because a separate kernel is generated for each value. The number of instantiations can grow dramatically when multiple constants are passed in as template parameters. However, when performance is key, the payoff may be well worth it.