Publication Date:
2022
abstract:
Two data-driven strategies for value iteration in linear quadratic optimal control problems over an infinite horizon are proposed. The two architectures share common features, since they both consist of a purely continuous-time control architecture and are based on the forward integration of the Differential Riccati Equation (DRE). They differ profoundly, however, in the mechanism by which the vector field of the underlying DRE is estimated from collected data: the first relies on a characterization of properties of the advantage function associated with the problem, whereas the second is inspired by tools from adaptive control theory and ensures semi-global exponential convergence to the optimal solution. Advantages and drawbacks of the architectures are discussed, and the performance is validated via a benchmark numerical example.
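As context for the abstract, the following is a minimal model-based sketch (not the paper's data-driven method) of value iteration via forward integration of the DRE, Ṗ = AᵀP + PA + Q − PBR⁻¹BᵀP, using a hypothetical double-integrator system chosen only for illustration:

```python
import numpy as np

# Hypothetical example system (double integrator), not taken from the paper
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)   # state cost
R = np.eye(1)   # input cost

# Value iteration: integrate the DRE forward in time from P(0) = 0.
# Under stabilizability/detectability, P(t) converges to the stabilizing
# solution of the algebraic Riccati equation.
P = np.zeros((2, 2))
dt = 1e-3
for _ in range(50_000):  # integrate up to t = 50
    # DRE vector field: Pdot = A'P + PA + Q - P B R^{-1} B' P
    Pdot = A.T @ P + P @ A + Q - P @ B @ np.linalg.solve(R, B.T) @ P
    P = P + dt * Pdot  # forward Euler step

K = np.linalg.solve(R, B.T @ P)  # optimal feedback gain u = -Kx
```

The paper's contribution lies in estimating the DRE vector field directly from trajectory data (via the advantage function or adaptive-control tools) rather than from known matrices A and B as above.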
Iris type:
01.01 Journal article
Keywords:
Adaptive control; Convergence; Costs; Linear systems; Optimal control; Reinforcement learning; Riccati equations; Trajectory; Value iteration; Learning
List of contributors:
Possieri, Corrado
Published in: