UNI-FIND
cnr.it
Fast Stochastic MPC Implementation via Policy Learning

Academic Article
Publication Date:
2022
Abstract:
Stochastic Model Predictive Control (MPC) has gained popularity thanks to its ability to overcome the conservativeness of robust approaches, at the expense of a higher computational demand. This is a critical issue especially for sampling-based methods. In this letter we propose a policy-learning MPC approach that aims to reduce the cost of solving stochastic optimization problems. The presented scheme relies on neural networks to identify a mapping between the current state of the system and the probabilistic constraints. This makes it possible to reduce the sample complexity to at most the dimension of the decision variable, significantly scaling down the computational burden of stochastic MPC approaches while preserving the same probabilistic guarantees. The efficacy of the proposed policy-learning MPC is demonstrated by means of a numerical example.
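The core idea sketched in the abstract, replacing an online sampling step with an offline-learned mapping from the current state to the probabilistic constraint, can be illustrated with a minimal toy example. Everything below (the scalar system, the Gaussian disturbance, the polynomial regressor standing in for the paper's neural network, and all numbers) is an illustrative assumption, not the authors' actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar system x+ = a*x + u + w with chance constraint
# P(x+ <= x_max) >= 1 - eps, w ~ N(0, 0.1^2).
a, x_max, eps, n_samples = 0.9, 1.0, 0.1, 1000

def sampled_bound(x):
    """Expensive, sampling-based step: empirical (1-eps)-quantile of the
    uncertain part a*x + w, i.e. the tightened left-hand side of the
    constraint a*x + u + w <= x_max for a given state x."""
    w = rng.normal(0.0, 0.1, n_samples)
    return np.quantile(a * x + w, 1 - eps)

# Offline phase: build a training set of (state, tightened bound) pairs.
xs = np.linspace(-2.0, 2.0, 50)
ys = np.array([sampled_bound(x) for x in xs])

# Fit a cheap polynomial "policy" mapping state -> constraint bound
# (a stand-in for the neural network used in the letter).
coeffs = np.polyfit(xs, ys, deg=2)

def learned_bound(x):
    """Online phase: one cheap evaluation replaces re-sampling."""
    return np.polyval(coeffs, x)

# Online MPC-style step (sketch): largest admissible input at state x0
# satisfying learned_bound(x0) + u <= x_max.
x0 = 0.5
u_max = x_max - learned_bound(x0)
```

In this toy case the true quantile is linear in the state, so a low-order fit recovers it well; the point is only that the learned map shifts the sampling cost offline, which is the mechanism the abstract describes.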
Iris type:
01.01 Journal article (Articolo in rivista)
Keywords:
Constrained control; neural networks; predictive control; randomized algorithms; stochastic optimal control
List of contributors:
Mammarella, Martina; Dabbene, Fabrizio
Authors of the University:
DABBENE FABRIZIO
Handle:
https://iris.cnr.it/handle/20.500.14243/413522
Published in:
IEEE CONTROL SYSTEMS LETTERS
URL:
http://www.scopus.com/record/display.url?eid=2-s2.0-85132760998&origin=inward
Powered by VIVO | Designed by Cineca | 26.5.0.0 | Data source: PREPROD (data transfer disabled)