## Speed Up Response Optimization Using Parallel Computing

### When to Use Parallel Computing for Response Optimization

You can use Simulink® Design Optimization™ software with Parallel Computing Toolbox™ software to speed up the response optimization of a Simulink model. Using parallel computing may reduce model optimization time in the following cases:

• The model contains a large number of tuned parameters, and the `Gradient descent` method is selected for optimization.

• The `Pattern search` method is selected for optimization.

• The model contains a large number of uncertain parameters and uncertain parameter values.

• The model is complex and takes a long time to simulate.

When you use parallel computing, the software distributes independent simulations to run them in parallel on multiple MATLAB® sessions, also known as workers. Distributing the simulations can significantly reduce the optimization time when the time required to simulate the model dominates the total optimization time.

For information on how the software distributes the simulations and the expected speedup, see How Parallel Computing Speeds Up Optimization.

For information on configuring your system and using parallel computing, see Use Parallel Computing for Response Optimization.

### How Parallel Computing Speeds Up Optimization

You can enable parallel computing with the `Gradient descent` and `Pattern search` optimization methods. When you enable parallel computing, the software distributes the independent simulations performed during optimization across multiple MATLAB sessions. The following sections describe which simulations are distributed and the potential speedup from using parallel computing.

#### Parallel Computing with the Gradient Descent Method

When you select `Gradient descent` as the optimization method, the model is simulated during the following computations:

• Constraint and objective value computation — One simulation per iteration

• Constraint and objective gradient computations — Two simulations for every tuned parameter per iteration

• Line search computations — Multiple simulations per iteration

The total time, $T_{total}$, taken per iteration to perform these simulations is given by the following equation:

`$T_{total} = T + \left(N_p \times 2 \times T\right) + \left(N_{ls} \times T\right) = T \times \left(1 + 2 N_p + N_{ls}\right)$`

where $T$ is the time taken to simulate the model, assumed equal for all simulations, $N_p$ is the number of tuned parameters, and $N_{ls}$ is the number of line searches. $N_{ls}$ is difficult to estimate in advance and is generally assumed to be between one and three.

When you use parallel computing, the software distributes the simulations required for the constraint and objective gradient computations. The simulation time taken per iteration when the gradient computations are performed in parallel, $T_{totalP}$, is approximately given by the following equation:

`$T_{totalP} = T + \left(\left\lceil \frac{N_p}{N_w} \right\rceil \times 2 \times T\right) + \left(N_{ls} \times T\right) = T \times \left(1 + 2 \left\lceil \frac{N_p}{N_w} \right\rceil + N_{ls}\right)$`

where $N_w$ is the number of MATLAB workers and $\lceil \cdot \rceil$ denotes rounding up to the nearest integer.

Note

The equation does not include the time overheads associated with configuring the system for parallel computing and loading Simulink software on the remote MATLAB workers.

The ratio of parallel to serial optimization time, which indicates the expected speedup, is given by the following equation:

`$\frac{T_{totalP}}{T_{total}} = \frac{1 + 2 \left\lceil \frac{N_p}{N_w} \right\rceil + N_{ls}}{1 + 2 N_p + N_{ls}}$`

For example, for a model with $N_p = 3$, $N_w = 4$, and $N_{ls} = 3$, the ratio equals $\frac{1 + 2 \left\lceil \frac{3}{4} \right\rceil + 3}{1 + \left(2 \times 3\right) + 3} = 0.6$. That is, the parallel optimization is expected to take roughly 60% of the serial optimization time per iteration.
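The per-iteration timing formula for the gradient descent case reduces to simple arithmetic. The following minimal Python sketch reproduces the calculation; the function name and signature are illustrative only, not part of any MathWorks API:

```python
from math import ceil

def gd_iteration_time(T, Np, Nls, Nw=None):
    """Per-iteration simulation time for the Gradient descent method.

    T   -- time for one model simulation (assumed equal for all runs)
    Np  -- number of tuned parameters
    Nls -- number of line searches
    Nw  -- number of MATLAB workers; None means serial execution
    """
    # Gradient computations: 2*Np serial simulations, or 2*ceil(Np/Nw)
    # rounds of simulations when distributed across Nw workers.
    gradient_sims = 2 * Np if Nw is None else 2 * ceil(Np / Nw)
    # One objective/constraint simulation + gradient sims + line searches.
    return T * (1 + gradient_sims + Nls)

# Worked example from the text: Np = 3, Nw = 4, Nls = 3.
serial = gd_iteration_time(T=1.0, Np=3, Nls=3)          # 10.0
parallel = gd_iteration_time(T=1.0, Np=3, Nls=3, Nw=4)  # 6.0
print(parallel / serial)  # 0.6
```

Note that adding workers beyond $N_p$ yields no further benefit here, because only the gradient simulations are distributed.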

For an example of the performance improvement achieved with the `Gradient descent` method, see Improving Optimization Performance Using Parallel Computing.

#### Parallel Computing with the Pattern Search Method

The `Pattern search` optimization method uses search and poll sets to create and evaluate a set of candidate solutions at each optimization iteration. Each candidate solution requires one model simulation.

The total time, $T_{total}$, taken per iteration to perform these simulations is given by the following equation:

`$T_{total} = \left(T \times N_p \times N_{ss}\right) + \left(T \times N_p \times N_{ps}\right) = T \times N_p \times \left(N_{ss} + N_{ps}\right)$`

where $T$ is the time taken to simulate the model, assumed equal for all simulations, $N_p$ is the number of tuned parameters, $N_{ss}$ is a factor for the search set size, and $N_{ps}$ is a factor for the poll set size. $N_{ss}$ and $N_{ps}$ are typically proportional to $N_p$.

When you use parallel computing, Simulink Design Optimization software distributes the simulations required for the search and poll set computations, which are evaluated in separate `parfor` (Parallel Computing Toolbox) loops. The simulation time taken per iteration when the search and poll sets are computed in parallel, $T_{totalP}$, is given by the following equation:

`$T_{totalP} = \left(T \times \left\lceil \frac{N_p \times N_{ss}}{N_w} \right\rceil\right) + \left(T \times \left\lceil \frac{N_p \times N_{ps}}{N_w} \right\rceil\right) = T \times \left(\left\lceil \frac{N_p \times N_{ss}}{N_w} \right\rceil + \left\lceil \frac{N_p \times N_{ps}}{N_w} \right\rceil\right)$`

where $N_w$ is the number of MATLAB workers.

Note

The equation does not include the time overheads associated with configuring the system for parallel computing and loading Simulink software on the remote MATLAB workers.

The ratio of parallel to serial optimization time, which indicates the expected speedup, is given by the following equation:

`$\frac{T_{totalP}}{T_{total}} = \frac{\left\lceil \frac{N_p \times N_{ss}}{N_w} \right\rceil + \left\lceil \frac{N_p \times N_{ps}}{N_w} \right\rceil}{N_p \times \left(N_{ss} + N_{ps}\right)}$`

For example, for a model with $N_p = 3$, $N_w = 4$, $N_{ss} = 15$, and $N_{ps} = 2$, the ratio equals $\frac{\left\lceil 3 \times \frac{15}{4} \right\rceil + \left\lceil 3 \times \frac{2}{4} \right\rceil}{3 \times \left(15 + 2\right)} \approx 0.27$. That is, the parallel optimization is expected to take roughly 27% of the serial optimization time per iteration.
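As with the gradient descent case, the pattern search timing formulas reduce to simple arithmetic. The following minimal Python sketch reproduces the worked example; the function name and signature are illustrative only, not part of any MathWorks API:

```python
from math import ceil

def ps_iteration_time(T, Np, Nss, Nps, Nw=None):
    """Per-iteration simulation time for the Pattern search method.

    T   -- time for one model simulation (assumed equal for all runs)
    Np  -- number of tuned parameters
    Nss -- search set size factor
    Nps -- poll set size factor
    Nw  -- number of MATLAB workers; None means serial execution
    """
    if Nw is None:
        # Serial: one simulation per candidate in the search and poll sets.
        return T * Np * (Nss + Nps)
    # Parallel: the search and poll sets are evaluated in separate loops,
    # so each contributes ceil(set size / Nw) rounds of simulations.
    return T * (ceil(Np * Nss / Nw) + ceil(Np * Nps / Nw))

# Worked example from the text: Np = 3, Nw = 4, Nss = 15, Nps = 2.
serial = ps_iteration_time(T=1.0, Np=3, Nss=15, Nps=2)          # 51.0
parallel = ps_iteration_time(T=1.0, Np=3, Nss=15, Nps=2, Nw=4)  # 14.0
print(round(parallel / serial, 2))  # 0.27
```

Because the two sets run in separate loops, the ceiling rounding applies twice per iteration, which is one reason the measured speedup can fall short of $N_w$.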

Note

Using the `Pattern search` method with parallel computing may not speed up the optimization time. To learn more, see Why do I not see the optimization speedup I expected using parallel computing?

For an example of the performance improvement achieved with the `Pattern search` method, see Improving Optimization Performance Using Parallel Computing.

## Related Topics 