Journal of Applied Mathematics and Stochastic Analysis
Volume 2005 (2005), Issue 1, Pages 77-88
doi:10.1155/JAMSA.2005.77
Maximum process problems in optimal control theory
Goran Peskir
Department of Mathematical Sciences, University of Aarhus, Ny Munkegade, Aarhus 8000, Denmark
Received 16 January 2004; Revised 23 June 2004
Copyright © 2005 Goran Peskir. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Given a standard Brownian motion $(B_t)_{t\ge 0}$ and the equation of motion $dX_t = v_t\,dt + 2\,dB_t$, we set $S_t = \max_{0\le s\le t} X_s$ and consider the optimal control problem $\sup_v \mathsf{E}(S_\tau - c\tau)$, where $c>0$ and the supremum is taken over all admissible controls $v$ satisfying $v_t \in [\mu_0, \mu_1]$ for all $t$ up to $\tau = \inf\{t>0 \mid X_t \notin (\ell_0, \ell_1)\}$, with $\mu_0 < 0 < \mu_1$ and $\ell_0 < 0 < \ell_1$ given and fixed. The following control $v^*$ is proved to be optimal: "pull as hard as possible," that is, $v^*_t = \mu_0$ if $X_t < g_*(S_t)$, and "push as hard as possible," that is, $v^*_t = \mu_1$ if $X_t > g_*(S_t)$, where $s \mapsto g_*(s)$ is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classical problem formulations due to Lagrange, Mayer, and Bolza.
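To illustrate the setup, here is a minimal Monte Carlo sketch (not taken from the paper) that simulates the controlled diffusion under a bang-bang control with a given switching curve and estimates the expected payoff $\mathsf{E}(S_\tau - c\tau)$. The function name estimate_value, the placeholder curve g, and all numerical parameters are assumptions made for illustration only; the optimal curve $g_*$ would have to be computed from the nonlinear differential equation derived in the paper.

```python
import numpy as np

# Minimal Monte Carlo sketch (illustrative, not from the paper): simulate the
# controlled diffusion dX_t = v_t dt + 2 dB_t on (l0, l1) under a bang-bang
# control that switches at a curve g, and estimate E(S_tau - c*tau).
# The curve g below is a hypothetical placeholder; the optimal curve g_*
# is characterized in the paper as the solution of a nonlinear ODE.

def estimate_value(mu0=-1.0, mu1=1.0, l0=-1.0, l1=1.0, c=0.5,
                   g=lambda s: s - 0.5,      # hypothetical switching curve
                   n_paths=2000, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    payoffs = np.empty(n_paths)
    for i in range(n_paths):
        x = s = t = 0.0
        while l0 < x < l1:                    # tau = first exit time from (l0, l1)
            v = mu0 if x < g(s) else mu1      # "pull" below the curve, "push" above
            x += v * dt + 2.0 * np.sqrt(dt) * rng.standard_normal()
            s = max(s, x)                     # running maximum S_t
            t += dt
        payoffs[i] = s - c * t                # payoff S_tau - c*tau
    return payoffs.mean()

print(estimate_value())
```

Replacing the placeholder curve with a numerical approximation of $g_*$ would turn this sketch into a rough check of the optimality claim, by comparing the estimated payoff against other admissible controls.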