rlContinuousGaussianRewardFunction

Stochastic Gaussian reward function approximator object for neural network-based environment

    Description

    When creating a neural network-based environment using rlNeuralNetworkEnvironment, you can specify the reward function approximator using an rlContinuousGaussianRewardFunction object. Do so when you do not know a ground-truth reward signal for your environment and you expect the reward signal to be stochastic.

    The reward function object uses a deep neural network as its internal approximation model to predict the reward signal for the environment given one of the following input combinations.

    • Observations, actions, and next observations

    • Observations and actions

    • Actions and next observations

    • Next observations

    To specify a deterministic reward function approximator, use an rlContinuousDeterministicRewardFunction object.

    Creation

    Description

    rwdFcnAppx = rlContinuousGaussianRewardFunction(net,observationInfo,actionInfo,Name=Value) creates a stochastic reward function using the deep neural network net and sets the ObservationInfo and ActionInfo properties.

    When creating a reward function approximator, you must specify the names of the deep neural network inputs using the ObservationInputNames, ActionInputNames, and NextObservationInputNames name-value arguments, in a combination that matches one of the supported input combinations listed above.

    You must also specify the names of the deep neural network outputs using the RewardMeanOutputNames and RewardStandardDeviationOutputNames name-value arguments.

    You can also specify the UseDevice property using an optional name-value pair argument. For example, to use a GPU for prediction, specify UseDevice="gpu".
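    For example, for a network whose reward prediction depends on the current observation and action, a call might look like the following sketch. The layer names ("obs", "act", "rewardMean", "rewardStd") and the variables net, obsInfo, and actInfo are illustrative assumptions; substitute the names defined in your own network.

    % Sketch: construct a Gaussian reward approximator for a network whose
    % inputs are the current observation and action, running prediction on
    % a GPU. Layer names below are illustrative, not fixed requirements.
    rwdFcnAppx = rlContinuousGaussianRewardFunction(net,obsInfo,actInfo, ...
        ObservationInputNames="obs", ...
        ActionInputNames="act", ...
        RewardMeanOutputNames="rewardMean", ...
        RewardStandardDeviationOutputNames="rewardStd", ...
        UseDevice="gpu");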

    Input Arguments


    Deep neural network with a scalar output value, specified as a dlnetwork object.

    The input layer names for this network must match the input names specified using the ObservationInputNames, ActionInputNames, and NextObservationInputNames name-value arguments. The dimensions of the input layers must match the dimensions of the corresponding observation and action specifications in ObservationInfo and ActionInfo, respectively.

    Name-Value Arguments

    Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

    Example: ObservationInputNames="velocity"

    Observation input layer names, specified as a string or string array. Specify ObservationInputNames when you expect the reward signal to depend on the current environment observation.

    The number of observation input names must match the length of ObservationInfo and the order of the names must match the order of the specifications in ObservationInfo.

    Action input layer names, specified as a string or string array. Specify ActionInputNames when you expect the reward signal to depend on the current action value.

    The number of action input names must match the length of ActionInfo and the order of the names must match the order of the specifications in ActionInfo.

    Next observation input layer names, specified as a string or string array. Specify NextObservationInputNames when you expect the reward signal to depend on the next environment observation.

    The number of next observation input names must match the length of ObservationInfo and the order of the names must match the order of the specifications in ObservationInfo.

    Reward mean output layer name, specified as a string.

    Reward standard deviation output layer name, specified as a string.

    Properties


    This property is read-only.

    Observation specifications, specified as a reinforcement learning specification object or an array of specification objects defining properties such as dimensions, data types, and names of the observation signals.

    You can extract the observation specifications from an existing environment or agent using getObservationInfo. You can also construct the specifications manually using rlFiniteSetSpec or rlNumericSpec.

    This property is read-only.

    Action specifications, specified as a reinforcement learning specification object or an array of specification objects defining properties such as dimensions, data types, and names of the action signals.

    You can extract the action specifications from an existing environment or agent using getActionInfo. You can also construct the specifications manually using rlFiniteSetSpec or rlNumericSpec.

    Computation device used to perform operations such as gradient computation, parameter updates, and prediction during training and simulation, specified as either "cpu" or "gpu".

    The "gpu" option requires both Parallel Computing Toolbox™ software and a CUDA®-enabled NVIDIA® GPU. For more information on supported GPUs see GPU Computing Requirements (Parallel Computing Toolbox).

    You can use gpuDevice (Parallel Computing Toolbox) to query or select a local GPU device to be used with MATLAB®.

    Note

    Training or simulating a network on a GPU involves device-specific numerical round-off errors. These errors can produce different results compared to performing the same operations using a CPU.

    Object Functions

    rlNeuralNetworkEnvironment — Environment model with deep neural network transition models
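    A typical workflow passes the reward function approximator to rlNeuralNetworkEnvironment together with transition and is-done function approximators. In the sketch below, tsnFcnAppx and isdFcnAppx are assumed to be previously created transition and is-done approximators; obsInfo, actInfo, and rwdFcnAppx follow the usage elsewhere on this page.

    % Sketch: assemble a neural network-based environment that uses the
    % Gaussian reward approximator. tsnFcnAppx and isdFcnAppx are assumed
    % to have been created beforehand (for example, with
    % rlContinuousDeterministicTransitionFunction and rlIsDoneFunction).
    env = rlNeuralNetworkEnvironment(obsInfo,actInfo, ...
        tsnFcnAppx,rwdFcnAppx,isdFcnAppx);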

    Examples


    Create an environment interface and extract observation and action specifications. Alternatively, you can create specifications using rlNumericSpec and rlFiniteSetSpec.

    env = rlPredefinedEnv("CartPole-Continuous");
    obsInfo = getObservationInfo(env);
    actInfo = getActionInfo(env);

    Create a deep neural network. The network has three input channels: one for the current observations, one for the current action, and one for the next observations. The two output channels are for the mean and the standard deviation of the predicted reward.

    statePath = featureInputLayer(obsInfo.Dimension(1),Name="obs");
    actionPath = featureInputLayer(actInfo.Dimension(1),Name="action");
    nextStatePath = featureInputLayer(obsInfo.Dimension(1),Name="nextObs");
    commonPath = [concatenationLayer(1,3,Name="concat")
        fullyConnectedLayer(32,Name="fc")
        reluLayer(Name="relu1")
        fullyConnectedLayer(32,Name="fc2")];
    
    meanPath = [reluLayer(Name="rewardMeanRelu")
        fullyConnectedLayer(1,Name="rewardMean")];
    stdPath = [reluLayer(Name="rewardStdRelu")
        fullyConnectedLayer(1,Name="rewardStdFc")
        softplusLayer(Name="rewardStd")];
    
    rwdNet = layerGraph(statePath);
    rwdNet = addLayers(rwdNet,actionPath);
    rwdNet = addLayers(rwdNet,nextStatePath);
    rwdNet = addLayers(rwdNet,commonPath);
    rwdNet = addLayers(rwdNet,meanPath);
    rwdNet = addLayers(rwdNet,stdPath);
    
    rwdNet = connectLayers(rwdNet,"nextObs","concat/in1");
    rwdNet = connectLayers(rwdNet,"action","concat/in2");
    rwdNet = connectLayers(rwdNet,"obs","concat/in3");
    rwdNet = connectLayers(rwdNet,"fc2","rewardMeanRelu");
    rwdNet = connectLayers(rwdNet,"fc2","rewardStdRelu");
    
    plot(rwdNet)

    The plot shows the layer graph of the reward network, with the three input paths merging into the common path and branching into the mean and standard-deviation outputs.

    Create a dlnetwork object.

    rwdNet = dlnetwork(rwdNet);

    Create a stochastic reward function object.

    rwdFncAppx = rlContinuousGaussianRewardFunction(...
        rwdNet,obsInfo,actInfo,...
        ObservationInputNames="obs",...
        ActionInputNames="action", ...
        NextObservationInputNames="nextObs", ...
        RewardMeanOutputNames="rewardMean", ...
        RewardStandardDeviationOutputNames="rewardStd");

    Using this reward function object, you can predict the reward value based on the current observation, the current action, and the next observation. For example, predict the reward for random observation, action, and next observation values. The reward value is sampled from a Gaussian distribution with the mean and standard deviation output by the reward network.

    obs = rand(obsInfo.Dimension);
    act = rand(actInfo.Dimension);
    nextObs = rand(obsInfo.Dimension);
    predRwd = predict(rwdFncAppx,{obs},{act},{nextObs})
    predRwd = single
        -0.1308
    

    You can obtain the mean value and standard deviation of the Gaussian distribution for the predicted reward using evaluate.

    predRwdDist = evaluate(rwdFncAppx,{obs,act,nextObs})
    predRwdDist=1×2 cell array
        {[-0.0995]}    {[0.6195]}
    
    

    Version History

    Introduced in R2022a