Gradient methods for iterative distributed control synthesis
Authors
Summary, in English
In this paper we present a gradient method to iteratively update local controllers of a distributed linear system driven by stochastic disturbances. The control objective is to minimize the sum of the variances of states and inputs in all nodes. We show that the gradients of this objective can be estimated in a distributed fashion using data from a forward simulation of the system model and a backward simulation of the adjoint equations. Iterative updates of local controllers using the gradient estimates give convergence towards a locally optimal distributed controller.
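The forward/backward gradient estimation described in the summary can be illustrated with a minimal centralized sketch. The LQ setup below (state feedback u = -Kx, quadratic cost, matrices A, B, Q, R, and the function name) is an illustrative assumption, not the paper's exact distributed formulation: the system is simulated forward under a fixed disturbance realization, the adjoint (costate) equations are then simulated backward, and the two trajectories combine into a pathwise gradient of the cost with respect to the feedback gain.

```python
import numpy as np

def simulate_and_gradient(A, B, K, Q, R, w, x0):
    """Estimate the gradient of J = sum_t (x_t'Q x_t + u_t'R u_t),
    u_t = -K x_t, for one disturbance realization w.

    Forward pass:  x_{t+1} = A x_t + B u_t + w_t
    Backward pass: lam_t = 2(Q + K'RK) x_t + (A - BK)' lam_{t+1},
                   with lam_T = 0 (no terminal cost assumed).
    Gradient:      dJ/dK = sum_t (2 R K x_t - B' lam_{t+1}) x_t'
    """
    T = w.shape[0]
    n = x0.size
    X = np.zeros((T + 1, n))
    X[0] = x0
    J = 0.0
    # Forward simulation of the closed-loop system model.
    for t in range(T):
        u = -K @ X[t]
        J += X[t] @ Q @ X[t] + u @ R @ u
        X[t + 1] = A @ X[t] + B @ u + w[t]
    # Backward simulation of the adjoint equations, accumulating
    # the gradient contribution at each step.
    lam = np.zeros(n)                      # costate lam_T = 0
    grad = np.zeros_like(K)
    for t in reversed(range(T)):
        u = -K @ X[t]
        # 2 R K x_t = -2 R u_t, so the direct-cost term plus the
        # dynamics sensitivity term combine as below.
        grad += np.outer(-2 * (R @ u) - B.T @ lam, X[t])
        lam = 2 * (Q @ X[t]) - 2 * (K.T @ (R @ u)) + (A - B @ K).T @ lam
    return J, grad
```

For a fixed noise sequence this pathwise gradient matches a finite-difference derivative of the same simulated cost, which is the natural sanity check; averaging it over disturbance realizations gives the stochastic gradient estimate used for the iterative controller updates.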
Department(s)
Publication year
2009
Language
English
Full text
- Available as PDF - 173 kB
Document type
Conference paper
Subject
- Control Engineering
Conference name
48th IEEE Conference on Decision and Control
Conference date
2009-12-16
Conference place
Shanghai, China
Status
Published
Research group
- LCCC