Setting Up the Prerequisite Products

To use GPU Coder™ for CUDA® code generation, install the products specified in Installing Prerequisite Products.

MEX Setup

When generating CUDA MEX with GPU Coder, the code generator uses the NVIDIA® compiler and libraries included with MATLAB®. The only setup required is for the MEX code generator, and the steps depend on the operating system of your development computer.

Note

GPU Coder does not support standalone deployment of the generated CUDA MEX-file using MATLAB Runtime.

Windows Systems

If you have multiple versions of Microsoft® Visual Studio® compilers for the C/C++ language installed on your Windows® system, MATLAB selects one as the default compiler. If the selected compiler is not compatible with the version supported by GPU Coder, change the selection. For supported Microsoft Visual Studio versions, see Installing Prerequisite Products.

To change the default compiler, use the mex -setup C++ command. When you call mex -setup C++, MATLAB displays a message with links to set up a different compiler. Select a link and change the default compiler for building MEX files. The compiler that you choose remains the default until you call mex -setup C++ to select a different default. For more information, see Change Default Compiler. The mex -setup C++ command changes only the C++ language compiler. You must also change the default compiler for C by using mex -setup C.
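For example, you might run these commands from the MATLAB Command Window; both come directly from the steps above:

mex -setup C++   % list the installed C++ compilers and choose the default for MEX builds
mex -setup C     % also change the default compiler for the C language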

Linux Platform

MATLAB and the CUDA Toolkit support only the GCC/G++ compiler for the C/C++ language on Linux® platforms. For supported GCC/G++ versions, see Installing Prerequisite Products.
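If you want to confirm which G++ version is installed before generating code, one option is to query it from MATLAB with a shell escape; this assumes g++ is on your system path:

% Print the installed G++ version and compare it against the supported versions
!g++ --version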

Environment Variables

Standalone code generation (static library, dynamically linked library, or executable program) has additional setup requirements. GPU Coder uses environment variables to locate the tools, compilers, and libraries required for code generation. For an example of setting these variables for a MATLAB session, see the sketch after the table.

Note

On Windows, a space or special character in the path to the tools, compilers, and libraries can create issues during the build process. Install third-party software in locations that do not contain spaces, or change Windows settings to enable creation of short names for files, folders, and paths. For more information, see Using Windows short names solution in MATLAB Answers.

Windows

CUDA_PATH

Path to the CUDA Toolkit installation.

For example:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\

NVIDIA_CUDNN

Path to the root folder of cuDNN installation. The root folder contains the bin, include, and lib subfolders.

For example:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\

NVIDIA_TENSORRT

Path to the root folder of TensorRT installation. The root folder contains the bin, data, include, and lib subfolders.

For example:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\TensorRT\

OPENCV_DIR

Path to the build folder of OpenCV on the host. This variable is required for building and running deep learning examples.

For example:

C:\Program Files\opencv\build

PATH

Path to the CUDA executables. Generally, the CUDA Toolkit installer sets this value automatically.

For example:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin

Path to the cudnn.dll dynamic library. The name of this library may be different on your installation.

For example:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin

Path to the nvinfer* dynamic libraries of TensorRT. The names of these libraries may be different on your installation.

For example:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\TensorRT\lib

Path to the Dynamic-link libraries (DLL) of OpenCV. This variable is required for running deep learning examples.

For example:

C:\Program Files\opencv\build\x64\vc15\bin

Linux

PATH

Path to the CUDA Toolkit executable.

For example:

/usr/local/cuda-11.8/bin

Path to the OpenCV libraries. This variable is required for building and running deep learning examples.

For example:

/usr/local/lib/

Path to the OpenCV header files. This variable is required for building deep learning examples.

For example:

/usr/local/include/opencv

LD_LIBRARY_PATH

Path to the CUDA library folder.

For example:

/usr/local/cuda-11.8/lib64

Path to the cuDNN library folder.

For example:

/usr/local/cuda-11.8/lib64/

Path to the TensorRT™ library folder.

For example:

/usr/local/cuda-11.8/TensorRT/lib/

Path to the ARM® Compute Library folder on the target hardware.

For example:

/usr/local/arm_compute/lib/

Set LD_LIBRARY_PATH on the ARM target hardware.

NVIDIA_CUDNN

Path to the root folder of cuDNN library installation.

For example:

/usr/local/cuda-11.8/

NVIDIA_TENSORRT

Path to the root folder of TensorRT library installation.

For example:

/usr/local/cuda-11.8/TensorRT/

ARM_COMPUTELIB

Path to the root folder of the ARM Compute Library installation on the ARM target hardware. Set this value on the ARM target hardware.

For example:

/usr/local/arm_compute
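
One way to set these variables for the current MATLAB session is the setenv function; values set this way are not persistent, so for a permanent configuration set the variables in the operating system instead. The following is a minimal sketch that reuses the example install locations from the table; adjust the paths to match your system.

% Windows example paths (adjust to your installation)
setenv('CUDA_PATH','C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8');
setenv('NVIDIA_CUDNN','C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8');
setenv('NVIDIA_TENSORRT','C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\TensorRT');
setenv('OPENCV_DIR','C:\Program Files\opencv\build');
setenv('PATH',[getenv('PATH') ';C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin']);

% Linux example paths (adjust to your installation)
setenv('NVIDIA_CUDNN','/usr/local/cuda-11.8');
setenv('NVIDIA_TENSORRT','/usr/local/cuda-11.8/TensorRT');
setenv('PATH',[getenv('PATH') ':/usr/local/cuda-11.8/bin']);
setenv('LD_LIBRARY_PATH',[getenv('LD_LIBRARY_PATH') ':/usr/local/cuda-11.8/lib64']);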

Verify Setup

To verify that your development computer has all the tools and configuration required for GPU code generation, use the coder.checkGpuInstall function. This function checks whether your environment has all the third-party tools and libraries required for GPU code generation. Pass a coder.gpuEnvConfig object to the function; the function verifies the GPU code generation environment based on the properties specified in that configuration object.

You can also use the equivalent GUI-based application, GPU Environment Check, which performs the same checks. To open this application, use the MATLAB command gpucoderSetup.

In the MATLAB Command Window, enter:

gpuEnvObj = coder.gpuEnvConfig;        % default GPU environment configuration object
gpuEnvObj.BasicCodegen = 1;            % check basic CUDA code generation
gpuEnvObj.BasicCodeexec = 1;           % check execution of the generated code
gpuEnvObj.DeepLibTarget = 'tensorrt';  % target the TensorRT deep learning library
gpuEnvObj.DeepCodeexec = 1;            % check execution of deep learning code
gpuEnvObj.DeepCodegen = 1;             % check deep learning code generation
results = coder.checkGpuInstall(gpuEnvObj)

The output shown here is representative. Your results might differ.

Compatible GPU           : PASSED 
CUDA Environment         : PASSED 
	Runtime   : PASSED 
	cuFFT     : PASSED 
	cuSOLVER  : PASSED 
	cuBLAS    : PASSED 
cuDNN Environment        : PASSED 
TensorRT Environment     : PASSED 
Basic Code Generation    : PASSED 
Basic Code Execution     : PASSED 
Deep Learning (TensorRT) Code Generation: PASSED 
Deep Learning (TensorRT) Code Execution: PASSED 

results = 

  struct with fields:

                 gpu: 1
                cuda: 1
               cudnn: 1
            tensorrt: 1
        basiccodegen: 1
       basiccodeexec: 1
         deepcodegen: 1
        deepcodeexec: 1
    tensorrtdatatype: 1
           profiling: 0
