This book examines the use of Gaussian processes (GPs) in both model-based reinforcement learning (RL) and inference in nonlinear dynamic systems.
First, we introduce PILCO, a fully Bayesian approach for efficient RL in continuous-valued state and action spaces when no expert knowledge is available. PILCO takes model uncertainties consistently into account during long-term planning to reduce model bias.
Second, we propose principled algorithms for robust filtering and smoothing in GP dynamic systems.
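To make the model-based RL setting concrete, the following minimal sketch shows the core ingredient: learning a GP dynamics model from observed transitions and inspecting its predictive uncertainty. It is not the book's PILCO implementation (PILCO propagates model uncertainty analytically through multi-step predictions); it only illustrates one-step prediction with a generic GP library, and the toy system, variable names, and kernel choice are assumptions made for illustration.

```python
# Minimal sketch (not the book's implementation): learn a GP dynamics model
# from observed transitions and inspect its predictive uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical 1-D system: next state depends nonlinearly on state and action.
def true_dynamics(x, u):
    return x + 0.1 * np.sin(x) + 0.05 * u

# Collect a small batch of random transitions (state, action) -> next state.
X = rng.uniform(-3.0, 3.0, size=(50, 1))                       # states
U = rng.uniform(-1.0, 1.0, size=(50, 1))                       # actions
Y = true_dynamics(X, U) + 0.01 * rng.standard_normal((50, 1))  # noisy next states

# GP regression on inputs (x, u); an RBF kernel plus noise term is a common default.
inputs = np.hstack([X, U])
targets = (Y - X).ravel()            # model the state difference, not the raw state
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4),
    normalize_y=True,
)
gp.fit(inputs, targets)

# Predictive mean and standard deviation at a test state-action pair:
# the standard deviation quantifies model uncertainty, which the book's
# approach takes into account during long-term planning rather than ignoring.
x_test = np.array([[0.5, 0.2]])
mean, std = gp.predict(x_test, return_std=True)
print(f"predicted state change: {mean[0]:.4f} +/- {std[0]:.4f}")
```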
Extent: IX, 205 pages
Price: €36.00 | £33.00 | $63.00
Deisenroth, M. 2010. Efficient Reinforcement Learning using Gaussian Processes. Karlsruhe: KIT Scientific Publishing. DOI: https://doi.org/10.5445/KSP/1000019799
This book is licensed under Creative Commons Attribution + Noncommercial + NoDerivatives 3.0 DE.
This book is peer reviewed.
Published on 22 November 2010
Language: English
Pages: 223
Paperback | ISBN 978-3-86644-569-7