getNormalizer
Syntax

normalizers = getNormalizer(fcnAppx)

Description

normalizers = getNormalizer(fcnAppx) returns the normalizer objects used by the function approximator object fcnAppx (typically an actor or critic).
Examples
Assign Normalizers to Actor and Critic Inputs
This example shows how to assign normalizer objects to the actor and critic of a DDPG agent.
Create DDPG Agent and Extract Actor and Critic
Create specification objects to define observation and action channels. For this example, the agent has two observation channels. The first channel has a vector with three elements and the second channel has a vector with four elements.
The action channel carries a two-dimensional vector.
obsInfo = [rlNumericSpec([3,1]) rlNumericSpec([4,1])];
actInfo = rlNumericSpec([2,1]);
Create a default DDPG agent.
agent = rlDDPGAgent(obsInfo, actInfo);
Extract the approximator objects.
actor = getActor(agent);
critic = getCritic(agent);
Create Normalizer Objects
Create one normalizer object for each input channel. DDPG agents use a Q-value function critic, which requires both the observation and the action as inputs. The Mean, StandardDeviation, Min, and Max properties apply to each element of the channel.
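A scalar value for these properties applies to every element of the channel. As a sketch, assuming array-valued Mean and StandardDeviation matching the channel dimension are also accepted (the values here are illustrative):

% Per-element statistics, assuming array-valued properties
% compatible with the channel dimension are accepted.
perElemNrz = rlNormalizer([3 1], Normalization="zscore", ...
    Mean=[1;2;3], StandardDeviation=[1;2;4]);
normalize(perElemNrz, [2;4;11])   % expected: (x - Mean)./StandardDeviation = [1;1;2]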
Create the normalizer for the first observation channel.
obs1Nrz = rlNormalizer(obsInfo(1).Dimension, ...
    Normalization="zscore", Mean=2, StandardDeviation=3)
obs1Nrz = 
  rlNormalizer with properties:

            Dimension: [3 1]
        Normalization: "zscore"
                 Mean: 2
    StandardDeviation: 3
Create the normalizer for the second observation channel.
obs2Nrz = rlNormalizer(obsInfo(2).Dimension, ...
    Normalization="zerocenter", Mean=4)
obs2Nrz = 
  rlNormalizer with properties:

        Dimension: [4 1]
    Normalization: "zerocenter"
             Mean: 4
Create the normalizer for the action input channel of the Q-value function critic.
actInNrz = rlNormalizer(actInfo.Dimension, ...
    Normalization="rescale-symmetric", Min=-2, Max=2)
actInNrz = 
  rlNormalizer with properties:

        Dimension: [2 1]
    Normalization: "rescale-symmetric"
              Min: -2
              Max: 2
To check how the normalizer works on an input, use normalize.
normalize(obs2Nrz,6)
ans = 2
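The zerocenter result follows from x − Mean, that is, 6 − 4 = 2. As a similar sketch for the other two normalizers created above (expected values follow from the zscore formula (x − Mean)/StandardDeviation and from the rescale-symmetric mapping of [Min, Max] onto [−1, 1]):

normalize(obs1Nrz, [5;5;5])   % zscore: (5-2)/3 = 1 for each element
normalize(actInNrz, [0;2])    % rescale-symmetric: 0 -> 0, 2 -> 1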
Assign Normalizer Objects to Actor and Critic
To assign new normalizers to the actor and critic, use setNormalizer.
actor = setNormalizer(actor, [obs1Nrz, obs2Nrz]);
critic = setNormalizer(critic, [obs1Nrz, obs2Nrz, actInNrz]);
You can also assign normalizers to selected channels only. For example, to assign normalizers only to the first observation channel (in the order specified by obsInfo) and the action channel, use an indices vector.
critic = setNormalizer(critic, [obs1Nrz actInNrz], [1 3]);
Display the normalization properties of the actor and critic.
actor.Normalization
ans = 1x2 string
"zscore" "zerocenter"
critic.Normalization
ans = 1x3 string
"zscore" "zerocenter" "rescale-symmetric"
Assign Actor and Critic to Agent
To assign the new actor and critic to the agent, use setActor and setCritic.
setCritic(agent, critic);
setActor(agent, actor);
To check that the agent works, use getAction.
a = getAction(agent, { ...
    rand(obsInfo(1).Dimension) ...
    rand(obsInfo(2).Dimension)});
a{1}
ans = 2×1
-0.1039
0.5166
Create DQN Agent and Extract Critic
Define a mixed observation space and a discrete action channel.
obsInfo = [rlNumericSpec([3,1]), rlFiniteSetSpec([5,3,4,2])];
actInfo = rlFiniteSetSpec([-1,0,1]);
Create a default DQN agent.
agent = rlDQNAgent(obsInfo, actInfo);
Extract the agent critic.
critic = getCritic(agent);
Extract Normalizer Objects from Actor or Critic
Use getNormalizer to extract an array of normalizer objects from the agent critic.
crtNrz = getNormalizer(critic)
crtNrz = 
  1x2 rlNormalizer array with properties:

    Dimension
    Normalization
    Mean
    StandardDeviation
    Min
    Max
Modify Normalizer Objects Using Dot Notation
Use dot notation to access and change the properties of the second normalizer object (associated with the second observation channel).
crtNrz(2).Normalization="rescale-zero-one";
crtNrz(2).Min=-5;
crtNrz(2).Max=15;
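As a quick sketch of the effect, 5 lies halfway between Min and Max, so rescale-zero-one maps it to 0.5:

normalize(crtNrz(2), 5)   % rescale-zero-one: (5-(-5))/(15-(-5)) = 0.5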
Assign the modified normalizer to the critic and assign the critic to the agent.
critic = setNormalizer(critic, crtNrz);
setCritic(agent, critic);
To check that the agent works, use getAction.
a = getAction(agent, { ...
    rand(obsInfo(1).Dimension) ...
    rand(obsInfo(2).Dimension)});
a{1}
ans = 0
Input Arguments
fcnAppx — Function approximator
function approximator object

Function approximator, specified as one of the following:

- rlValueFunction object — Value function critic
- rlQValueFunction object — Q-value function critic
- rlVectorQValueFunction object — Multi-output Q-value function critic with a discrete action space
- rlContinuousDeterministicActor object — Deterministic policy actor with a continuous action space
- rlDiscreteCategoricalActor object — Stochastic policy actor with a discrete action space
- rlContinuousGaussianActor object — Stochastic policy actor with a continuous action space
- rlContinuousDeterministicTransitionFunction object — Continuous deterministic transition function for a model-based agent
- rlContinuousGaussianTransitionFunction object — Continuous Gaussian transition function for a model-based agent
- rlContinuousDeterministicRewardFunction object — Continuous deterministic reward function for a model-based agent
- rlContinuousGaussianRewardFunction object — Continuous Gaussian reward function for a model-based agent
- rlIsDoneFunction object — Is-done function for a model-based agent
To create an actor or critic function object, use the constructor of the corresponding object listed above, or extract the object from an agent using getActor or getCritic, as in the examples above.
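For example, here is a minimal sketch of calling getNormalizer on a directly created value function critic (the network architecture is an illustrative assumption):

% Value function critic over a single 3-element observation channel.
obsInfo = rlNumericSpec([3 1]);
net = dlnetwork([featureInputLayer(3) fullyConnectedLayer(1)]);
vf = rlValueFunction(net, obsInfo);
nrz = getNormalizer(vf);   % one rlNormalizer per input channel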
Note

For agents with more than one critic, such as TD3 and SAC agents, you must call getNormalizer for each critic individually. You cannot call getNormalizer on the array returned by getCritic.

critics = getCritic(myTD3Agent);
criticNrz1 = getNormalizer(critics(1));
criticNrz2 = getNormalizer(critics(2));
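A runnable version of this pattern, as a sketch (the default TD3 agent and channel dimensions are illustrative assumptions):

% Create a default TD3 agent, which carries two critics.
obsInfo = rlNumericSpec([3 1]);
actInfo = rlNumericSpec([2 1]);
myTD3Agent = rlTD3Agent(obsInfo, actInfo);

% Query each critic's normalizers individually.
critics = getCritic(myTD3Agent);
nrz1 = getNormalizer(critics(1));
nrz2 = getNormalizer(critics(2));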
Output Arguments
normalizers — Normalizer objects
rlNormalizer object | array of rlNormalizer objects

Normalizer objects, returned as an rlNormalizer object or an array of rlNormalizer objects.
Version History
Introduced in R2024a
See Also
Functions
setNormalizer | normalize | getActor | setActor | getCritic | setCritic
Objects
rlNormalizer | rlNumericSpec | rlAgentInitializationOptions | rlFiniteSetSpec | rlValueFunction | rlQValueFunction | rlVectorQValueFunction | rlContinuousDeterministicActor | rlDiscreteCategoricalActor | rlContinuousGaussianActor | rlContinuousDeterministicTransitionFunction | rlContinuousGaussianTransitionFunction | rlContinuousDeterministicRewardFunction | rlContinuousGaussianRewardFunction | rlIsDoneFunction