PLENARY: Risk and Reliability in Optimization Under Uncertainty
Problems of optimization are concerned with making decisions “optimally,” but in many situations in management, finance, and engineering, decisions must be made without knowing fully how they will play out in the future. When the future is modeled probabilistically, this leads to stochastic optimization, yet the formulation of objectives and constraints can be far from obvious. A future cost or hazard may be a random variable that a present decision can influence to some extent, but perhaps only by shaping its distribution in a limited way. For instance, it may be desirable to keep a hazard below a particular threshold, as in building a bridge to resist earthquakes and floods, and yet it may be impossible or too expensive to guarantee that the threshold will never be breached.
One needs a standard for judging whether a cost or hazard is “adequately” below the desired threshold, given its probability distribution. That is the role of so-called “measures of risk,” which were first developed for purposes like assessing the solvency of banks but are now utilized much more widely. Measures of risk also offer fresh ways of dealing with reliability constraints, such as those traditionally imposed in engineering as bounds on the probability of failure of various manufactured components. Probability of failure has troublesome mathematical behavior in an optimization environment. Now, though, there is a substitute, called buffered probability of failure, which makes better sense and is much easier to work with computationally.
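To make the contrast concrete, here is a small sample-based sketch (not from the talk itself) comparing ordinary probability of exceedance with its buffered counterpart, using the convex minimization formula bPOE_x(X) = min over a ≥ 0 of E[max(0, a(X − x) + 1)] from the bPOE literature; the grid search over the scalar a is an illustrative shortcut, not a production solver.

```python
import numpy as np

def poe(samples, threshold):
    """Ordinary probability of exceedance: P(X > x), estimated from samples."""
    return float(np.mean(np.asarray(samples, float) > threshold))

def bpoe(samples, threshold, a_grid=None):
    """Buffered probability of exceedance, via the convex formula
    bPOE_x(X) = min_{a >= 0} E[max(0, a*(X - x) + 1)],
    approximated by a grid search over the scalar a (a sketch only).
    """
    X = np.asarray(samples, float)
    if a_grid is None:
        a_grid = np.linspace(0.0, 100.0, 10001)  # includes a = 0, so bpoe <= 1
    vals = [np.mean(np.maximum(0.0, a * (X - threshold) + 1.0)) for a in a_grid]
    return float(min(vals))

# Toy data: four equally likely outcomes, threshold 5.
samples = [0.0, 0.0, 0.0, 10.0]
print(poe(samples, 5.0))   # 0.25: one sample in four exceeds the threshold
print(bpoe(samples, 5.0))  # 0.5: the worst half of outcomes AVERAGES to 5
```

The buffered value is always at least as large as the ordinary probability, and unlike the raw failure probability it varies smoothly and convexly with the decision, which is what makes it tractable in optimization.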
PLENARY: Progressive Hedging in Nonconvex Stochastic Optimization
The progressive hedging algorithm minimizes an expected “cost” by iteratively decomposing the problem into separate subproblems for each scenario. Up to now it has depended on convexity of the underlying “cost” function with respect to the decision variables and the constraints on them. However, a new advance makes it possible to obtain convergence to a locally optimal solution when the procedure is started close enough to it and a kind of second-order local sufficiency condition is satisfied. Moreover, this works not just for an expectation but also for minimizing a risk objective or a buffered probability of exceedance.
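The scenario decomposition can be sketched on a toy convex problem: minimize E[(x − ξ_s)²] over a single first-stage decision x, with scenario data ξ_s. The problem, penalty parameter r, starting point, and iteration count below are all illustrative choices; each augmented scenario subproblem here is quadratic, so it is solved in closed form rather than by a general solver.

```python
import numpy as np

def progressive_hedging(xi, probs, r=1.0, iters=200):
    """Progressive hedging sketch for min_x E[(x - xi_s)^2], scalar x.

    Each iteration solves, per scenario s, the augmented subproblem
        min_x (x - xi_s)^2 + w_s * x + (r/2) * (x - xbar)^2
    (closed form, since it is quadratic), then averages the scenario
    solutions to restore nonanticipativity and updates the multipliers.
    """
    xi = np.asarray(xi, float)
    probs = np.asarray(probs, float)
    w = np.zeros_like(xi)   # multipliers on the nonanticipativity constraint
    xbar = 0.0              # arbitrary starting guess
    for _ in range(iters):
        # closed-form minimizer of each augmented scenario subproblem
        x = (2.0 * xi - w + r * xbar) / (2.0 + r)
        # project onto nonanticipativity: average the scenario decisions
        xbar = float(probs @ x)
        # dual update pushing the scenario decisions toward consensus
        w = w + r * (x - xbar)
    return xbar

# Two equally likely scenarios; the true optimum is the mean of xi, here 2.
print(progressive_hedging([0.0, 4.0], [0.5, 0.5]))
```

In the convex case shown, the iterates converge to the overall minimizer; the advance described in the abstract concerns extending such convergence, locally, beyond convexity.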
Professor Emeritus R. Tyrrell Rockafellar
Professor Emeritus, Departments of Mathematics and Applied Mathematics, University of Washington, Seattle
R. Tyrrell (“Terry”) Rockafellar is an American mathematician and one of the leading scholars in optimization theory and related fields of analysis and combinatorics. He is professor emeritus at the departments of mathematics and applied mathematics at the University of Washington, Seattle.
His research interests span convex and variational analysis, with emphasis on applications to stochastic programming, optimal control, economics, finance, and engineering.
For his contributions to convex optimization, nonsmooth analysis, and stochastic programming, Rockafellar was awarded the John von Neumann Theory Prize by the Institute for Operations Research and the Management Sciences (INFORMS) in 1999. His decades-long career in the field has been celebrated, ranging from his 1963 PhD dissertation to his more recent work on scenario analysis and epi-convergence.