Markov Decision Processes (MDP) Toolbox

Functions for solving discrete-time Markov Decision Processes.
15.2K downloads
Updated 20 Jan 2015


The MDP Toolbox provides functions for solving discrete-time Markov Decision Processes: backwards induction, value iteration, policy iteration, and linear programming algorithms, with some variants.
The functions were developed in MATLAB (note that one of the functions requires the MathWorks Optimization Toolbox) by Iadine Chadès, Marie-Josée Cros, Frédérick Garcia, and Régis Sabbadin of the Biometry and Artificial Intelligence Unit of INRA Toulouse (France).
Toolbox page: http://www.inra.fr/mia/T/MDPtoolbox
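The toolbox itself is MATLAB code, but the core algorithm it implements is easy to illustrate in any language. Below is a minimal value-iteration sketch in Python/NumPy (an assumption for illustration only, not the toolbox's own code or API): it iterates the Bellman optimality update until the value function stops changing, then reads off a greedy policy.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, eps=1e-8):
    """Minimal value iteration for a finite MDP (illustrative sketch).

    P: (A, S, S) array, P[a, s, t] = Pr(next state t | state s, action a)
    R: (S, A) array of expected immediate rewards
    Returns (V, policy): optimal values and a greedy policy.
    """
    S, A = R.shape
    V = np.zeros(S)
    while True:
        # Bellman update: Q[s, a] = R[s, a] + gamma * sum_t P[a, s, t] * V[t]
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < eps:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Tiny 2-state, 2-action example: action 0 stays put, action 1 switches
# state; only state 1 yields reward, so the greedy policy moves to state 1
# and stays there.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],   # action 0: stay
              [[0.0, 1.0], [1.0, 0.0]]])  # action 1: switch
R = np.array([[0.0, 0.0], [1.0, 1.0]])
V, policy = value_iteration(P, R, gamma=0.9)
```

The toolbox's mdp_value_iteration plays the same role in MATLAB, taking transition and reward arrays and returning a value function and policy.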

Cite As

Marie-Josee Cros (2024). Markov Decision Processes (MDP) Toolbox (https://www.mathworks.com/matlabcentral/fileexchange/25786-markov-decision-processes-mdp-toolbox), MATLAB Central File Exchange. Retrieved .

MATLAB Release Compatibility
Created with R2014b
Compatible with any release
Platform Compatibility
Windows macOS Linux
Categories
Learn more about Markov Chain Models in Help Center and MATLAB Answers.
Acknowledgements

Inspiration for: Betavol(x,R,fig)

Version    Published    Release Notes
1.6

Added the possibility to download as a toolbox (.mltbx file).

1.5.0.0

Complete Other Requirements.

1.4.0.0

Mainly improve documentation (Jan. 2014)

1.3.0.0

Update the zip file!

1.2.0.0

Version 4.0 (October 2012) is entirely compatible with GNU Octave (version 3.6); the output of several functions (mdp_relative_value_iteration, mdp_value_iteration, and mdp_eval_policy_iterative) was modified.

1.1.0.0

Add all authors' names

1.0.0.0