Vectorization and multi-threading are techniques that can improve the performance of embedded applications. Both allow processors to make more efficient use of available resources and complete tasks faster, either by executing the same instruction on multiple data elements simultaneously (vectorization), or by dividing a workload into threads for concurrent execution across several cores (multi-threading).
With MATLAB Coder, you can take advantage of vectorization through SIMD (Single Instruction, Multiple Data) intrinsics available in code replacement libraries for ARM Cortex-A and Cortex-M targets. On Intel and AMD CPUs, enable SIMD through the AVX2 or AVX512F instruction set extensions. For processors that support multi-threading, enable OpenMP.
Additionally, as of R2023a, you can enable bfloat16 compression of network learnables. For deep learning networks that are resilient to precision loss, compressing learnables from single precision to the bfloat16 data type greatly reduces memory usage with little change in inference accuracy. This process requires no calibration data and can also increase inference speed. Any hardware that supports single-precision floating-point data types can benefit from bfloat16. For more information, see the MATLAB Coder documentation on learnables compression.
Note: these settings are general guidelines. Depending on your specific application and hardware target, changing additional configuration settings may yield further performance gains.
Using MATLAB Coder
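The following is a minimal sketch of a MATLAB Coder configuration for an Intel or AMD host that combines the techniques above: SIMD through instruction set extensions, multi-threading through OpenMP, and bfloat16 learnables compression. The entry-point function name myPredict and its input size are placeholder assumptions; substitute your own.

>> cfg = coder.config('lib');
>> cfg.InstructionSetExtensions = 'AVX512F';   % or 'AVX2' if AVX-512 is unavailable
>> cfg.EnableOpenMP = true;
>> cfg.DeepLearningConfig = coder.DeepLearningConfig(TargetLibrary = 'none');
>> cfg.DeepLearningConfig.LearnablesCompression = 'bfloat16';
>> codegen -config cfg myPredict -args {ones(224,224,3,'single')}

For ARM Cortex-A targets, set cfg.CodeReplacementLibrary = 'GCC ARM Cortex-A' instead of the instruction set extension.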
Using GPU Coder
NVIDIA Jetson Board
>> cfg = coder.gpuConfig('dll');
>> cfg.DeepLearningConfig = coder.DeepLearningConfig(TargetLibrary = 'tensorrt');
>> cfg.DeepLearningConfig.DataType = 'FP16';
>> cfg.Hardware = coder.Hardware('NVIDIA Jetson');
>> cfg.GpuConfig.EnableMemoryManager = true;
>> cfg.GpuConfig.ComputeCapability = '7.0';  % set to match your board's GPU
NVIDIA Desktop GPU
>> cfg = coder.gpuConfig('dll');
>> cfg.DeepLearningConfig = coder.DeepLearningConfig(TargetLibrary = 'tensorrt');
>> cfg.DeepLearningConfig.DataType = 'FP16';
>> cfg.GpuConfig.EnableMemoryManager = true;
>> cfg.GpuConfig.ComputeCapability = '7.0';
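With either GPU configuration in place, generate code by passing it to codegen. The entry-point function name myPredict and its input size are placeholder assumptions:

>> codegen -config cfg myPredict -args {ones(224,224,3,'single')}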
Using Embedded Coder with Simulink
Open and select the Simulink model (so that gcs resolves to it), then run the following set_param commands:
ARM Cortex-A Targets
>> set_param(gcs, 'ProdHWDeviceType', 'ARM Compatible->ARM Cortex-A');
>> set_param(gcs, 'CodeReplacementLibrary', 'GCC ARM Cortex-A');
>> set_param(gcs, 'MultiThreadedLoops', 'on');
>> set_param(gcs, 'MaxStackSize', '20000');
>> set_param(gcs, 'DLTargetLibrary', 'none');
>> set_param(gcs, 'DLLearnablesCompression', 'bfloat16');
Intel Targets
Set 'ProdHWDeviceType' to the value that matches your target operating system (choose one):
>> set_param(gcs, 'ProdHWDeviceType', 'Intel->x86-64 (Windows64)');
>> set_param(gcs, 'ProdHWDeviceType', 'Intel->x86-64 (Linux 64)');
>> set_param(gcs, 'ProdHWDeviceType', 'Intel->x86-64 (Mac OS X)');
>> set_param(gcs, 'InstructionSetExtensions', 'AVX512F');
>> set_param(gcs, 'MultiThreadedLoops', 'on');
>> set_param(gcs, 'MaxStackSize', '20000');
>> set_param(gcs, 'DLTargetLibrary', 'none');
>> set_param(gcs, 'DLLearnablesCompression', 'bfloat16');
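After setting the parameters for either target, build the selected model to generate code:

>> slbuild(gcs);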