WORKSHOP ABSTRACTS


Mesias Alfeus - On Numerical Methods for Spread Options

On Numerical Methods for Spread Options

Co-author: Professor Erik Schlögl

Spread options are multi-asset options whose payoffs depend on the difference of two underlying financial variables. In most cases, closed-form analytical solutions for pricing such payoffs are not available, and the application of numerical pricing methods turns out to be non-trivial. We consider several such non-trivial cases and explore the performance of the highly efficient numerical technique of Hurd and Zhou (2010), comparing this with Monte Carlo simulation and the lower-bound approximation formula of Caldana and Fusai (2013). We show that the former is in essence an application of the two-dimensional Parseval identity.
As application examples, we price spread options in a model where asset prices are driven by a multivariate normal inverse Gaussian (NIG) process, in a three-factor stochastic volatility model, as well as in examples of models driven by other popular multivariate Lévy processes such as the variance gamma process, and discuss the price sensitivity with respect to volatility. We also consider examples in the fixed-income market, specifically cross-currency interest rate spreads and LIBOR/OIS spreads. For the FFT computations we used the FFTW library, and we document its appropriate usage so that its output reconciles with MATLAB's ifft2.
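
As a minimal illustration of the reconciliation issue (a sketch with a hypothetical grid size, emulating FFTW's convention in NumPy rather than calling FFTW itself): FFTW's backward transform is unnormalised, whereas MATLAB's ifft2 includes a 1/(M*N) factor, so the two outputs differ by exactly that factor.

    import numpy as np

    # A minimal sketch (hypothetical grid size) of the normalisation issue:
    # FFTW's backward transform is unnormalised, while MATLAB's ifft2 (like
    # NumPy's) divides by M*N, so an FFTW backward transform of fft2(x)
    # returns M*N*x rather than x.
    M, N = 64, 64
    rng = np.random.default_rng(0)
    x = rng.standard_normal((M, N))        # stand-in for the pricing integrand

    spectrum = np.fft.fft2(x)
    matlab_ifft2 = np.fft.ifft2(spectrum)            # normalised: recovers x
    fftw_backward = np.fft.ifft2(spectrum) * M * N   # emulates FFTW's raw output

    assert np.allclose(matlab_ifft2, x)
    assert np.allclose(fftw_backward / (M * N), x)   # rescale to reconcile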

View presentation slides (PDF)

Dr Gleb Belov - Process Plant Layout Optimization: Equipment Allocation

Process Plant Layout Optimization: Equipment Allocation

Co-authors: Tobias Czauderna, Maria Garcia De La Banda, Matthias Klapperstueck, Ilankaikone Senthooran, Mark Wallace and Michael Wybrow

Designing the layout of a chemical plant is a complex and important task. Its main objective is to find the most economical spatial arrangement of the equipment and associated pipes that satisfies construction, operation, maintenance and safety constraints. The problem is so complex that it is still solved manually, taking multiple engineers several years to complete. This paper provides two main contributions. First, the most comprehensive model yet reported in the literature for spatially arranging the equipment. Second, a Large Neighborhood Search framework that enables Constraint Programming and Mixed-Integer Programming solvers to explore much larger neighborhoods than in previous approaches, thus increasing the search's effectiveness. The model is part of a system being developed in collaboration with Woodside Energy Ltd. for their Liquefied Natural Gas plants. The results are so promising that Woodside is actively exploring its commercialisation.
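
The following is a schematic sketch of the generic Large Neighborhood Search loop (all names hypothetical, with a toy objective standing in for the layout problem; this is the standard template, not the system described above): destroy frees part of the incumbent solution and repair re-optimises the freed part, which in the setting above would be a CP or MIP solve over the freed equipment.

    import random

    # Generic Large Neighborhood Search template (schematic, names hypothetical).
    def lns(initial, destroy, repair, cost, iterations=200):
        best = incumbent = initial
        for _ in range(iterations):
            candidate = repair(destroy(incumbent))
            if cost(candidate) <= cost(incumbent):
                incumbent = candidate
                if cost(incumbent) < cost(best):
                    best = incumbent
        return best

    # Toy demonstration: minimise a sum of squares by repeatedly freeing two
    # coordinates (destroy) and re-optimising them, here trivially to zero
    # (repair); in the talk's setting, repair would be a CP/MIP solve.
    def destroy(x):
        y = list(x)
        for i in random.sample(range(len(y)), k=2):
            y[i] = None                    # mark the freed positions
        return y

    def repair(y):
        return [0.0 if v is None else v for v in y]

    def cost(x):
        return sum(v * v for v in x)

    random.seed(0)
    print(lns([random.uniform(-1, 1) for _ in range(8)], destroy, repair, cost))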

Dr Geoffrey Brent - Quantifying expert judgement in an objective function for table-balancing

Quantifying expert judgement in an objective function for table-balancing

The Australian Bureau of Statistics (ABS) has recently adopted quadratic optimisation methods for reconciling economic data in cases where different sources give inconsistent estimates. These methods must produce accurate estimates both at the aggregate level (e.g. GDP) and for fine divisions (e.g. individual industries and products). Consistency requirements include quadratic as well as linear constraints.
Designing the objective function for this application is challenging. Ideally, weighting would be based on probabilistic measures of source accuracy, but this information is often unavailable. Instead, we have used Lagrange-multiplier methods to allow subject-matter experts to specify the balancing objective in terms of plausible adjustments for each cell. The objective function then reconciles these expectations to find a good compromise solution.
This approach allows subject-matter expertise to be codified into balancing rules. By providing an intuitive link between balancing specifications and results, it makes specification and debugging easier. Graphical methods can be used to identify and interpret issues with the inputs, making them easier to correct.
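
To make the mechanics concrete, here is a minimal sketch (illustrative numbers, linear constraints only; the quadratic constraints mentioned above are omitted) of weighted quadratic reconciliation solved through its Lagrange (KKT) system, where the weights stand in for the experts' plausible-adjustment specifications.

    import numpy as np

    # Reconcile cell estimates x0 to satisfy a linear additivity constraint
    # A x = b, minimising a weighted sum of squared adjustments. A larger
    # plausible adjustment for a cell corresponds to a smaller weight.
    x0 = np.array([40.0, 35.0, 30.0])   # initial, inconsistent cell estimates
    A = np.array([[1.0, 1.0, 1.0]])     # the cells must add to the total
    b = np.array([100.0])               # published aggregate
    w = np.array([1.0, 4.0, 2.0])       # penalty weights (inverse plausibility)

    # KKT system for: min 0.5*(x-x0)' W (x-x0)  s.t.  A x = b
    #   [W  A'] [x     ]   [W x0]
    #   [A  0 ] [lambda] = [b   ]
    W = np.diag(w)
    K = np.block([[W, A.T], [A, np.zeros((1, 1))]])
    rhs = np.concatenate([W @ x0, b])
    x = np.linalg.solve(K, rhs)[:len(x0)]
    print(x, A @ x)                     # balanced cells; constraint holds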

View presentation slides (PDF)

Associate Professor Regina Burachik - Sparsity optimization for a network of coupled oscillators

Sparsity optimization for a network of coupled oscillators

Co-authors: Dr C. Yalçın Kaya, Dr Alex Kalloniatis, Professor Subhrakanti Dey

We consider a graph with $N$ nodes and adjacency matrix $A$. The dynamics, i.e., the phases, of the nodes are coupled via a network version of the Kuramoto model, defined in terms of $A$ and the coupling strength $\sigma>0$ of the network. Our aim is to find the sparsest adjacency matrix $A$ for which the phases of the nodes are synchronized. We formulate optimization problems that promote sparsity of $A$, using the $\ell_p$-norm of the matrix $A$ with $p\in (0,1]$. Our first model is nondifferentiable and nonconvex for $p<1$. Our second model is an $\ell_1$-relaxation of the first one. We solve the $\ell_1$-relaxation for three cases, namely $20$-, $30$- and $50$-node networks, with the nodes' intrinsic natural frequencies and initial phases generated randomly. Our formulation of the problem and the numerical approach we devise achieve highly sparse solutions for the given networks. Our models are developed for finding (i) directed sparse networks, (ii) undirected sparse networks, and (iii) undirected sparse networks in which the overall difference between the degrees is relatively small.
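
For context, a commonly used network form of the Kuramoto model (the normalisation of the coupling term varies across the literature) and the sparsity objective are, schematically,

$$\dot{\theta}_i = \omega_i + \frac{\sigma}{N}\sum_{j=1}^{N} a_{ij}\,\sin(\theta_j - \theta_i), \qquad i = 1,\dots,N,$$

$$\min_{A}\ \|A\|_p^p = \sum_{i,j}|a_{ij}|^p, \qquad p \in (0,1],$$

subject to the phases $\theta_i$ synchronising, with the $\ell_1$-relaxation corresponding to $p = 1$.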

View presentation slides (PDF)

Pamela Cortez - Post-disaster Humanitarian Logistics

Post-disaster Humanitarian Logistics

Co-author: Dr Alysson Costa

This research is concerned with the delivery of relief items in the aftermath of sudden-onset natural disasters, such as fires and floods. These catastrophes present hard challenges for the humanitarian sector, with conditions very different from those found in commercial logistics.
We propose a multi-commodity, multi-period, multi-trip facility location and routing problem that takes into account important objectives and constraints such as fairness and acceptable delay in meeting a request.
We decompose our model into two simpler problems: the first deals with facility location and commodity pre-positioning, while the second deals with routing decisions. The first model locates temporary warehouses, defines the amount of supply to be stored in each facility and assigns relief items to meet the demands of each point of distribution, with the objective of minimising unsatisfied demand. A further level of decomposition is used to solve the routing problem efficiently; we study two strategies: decomposing by depot or by period. Results for the time-decomposition model show the importance of coordination in reducing logistics costs.

Dr Michelle Dunbar - Mathematics in medicine: optimising image acquisition and cancer treatment in radiotherapy

Mathematics in medicine: optimising image acquisition and cancer treatment in radiotherapy

Effective radiotherapy depends on being able to (i) visualise the tumour clearly, and (ii) deliver the correct dose to the cancerous tissue whilst sparing the healthy tissue as much as possible. In the presence of motion uncertainty, both tasks become increasingly difficult to perform accurately, increasing the likelihood of an incorrect dose being delivered to the cancerous tissue and of healthy tissue being exposed to unnecessary radiation, causing adverse effects.

In this presentation we outline our novel mathematical models for optimising pre-treatment imaging in radiation therapy. We focus on tumours located in the lung and thorax subject to respiratory motion, and discuss the performance of our models for real-world patient breathing data. In addition, we briefly outline our current research involving the development of mathematical models for joint respiratory and cardiac motion to enable imaging and treatment for tumours located in parts of the anatomy that are currently considered a “no-fly zone”. We present current challenges and ideas for future research.

Professor Andrew Eberhard - A fixed point operator in discrete optimisation duality

A fixed point operator in discrete optimisation duality

I will discuss some duality structures that have appeared in discrete optimisation in conjunction with studies of the discrete proximal point algorithm, augmented Lagrangian duality, supporting theory for the “Feasibility Pump” and, more recently, stochastic integer programming. A common theme appears, involving a fixed point operator associated with the local minima of a regularised dual function. This enables one to describe some MIP heuristics in terms of continuous optimisation ideas.

Chaojie (Jasmine) Guo - An algorithm for the network flow problem with multi-transport modes and time window constraints

An algorithm for the network flow problem with multi-transport modes and time window constraints

Co-author: Associate Professor Russell Thompson

The spreading concept of hyper-connected City Logistics highlights the importance of consolidating transport facilities in urban areas to improve delivery efficiency and reduce environmental costs. Transportation networks are becoming more flexible and comprehensive, with many transit hubs and multiple transport modes (for example, public facilities, private cars and traditional freight vehicles) connecting nodes. To date, solution approaches for this network flow problem have largely been based on Tabu search. However, in such a complex network with multiple transport modes and time window constraints, Tabu search is not particularly efficient. Building on the work of Solomon (1987), who proposed an insertion heuristic for the VRSPTW, we develop a hub-insertion procedure, sketched below, to solve this network flow problem.
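
The core of a Solomon-style insertion step is the extra distance incurred by inserting a node between two consecutive route stops; a hub-insertion procedure applies the same test to candidate transit hubs. A minimal sketch (illustrative coordinates, time-window feasibility checks omitted):

    import math

    # Find the best position to insert node u into a route, by added distance.
    # A full VRPTW insertion would also weigh time-window delay at each
    # position, as in Solomon's criteria; that check is omitted here.
    def best_insertion(points, route, u):
        d = lambda a, b: math.dist(points[a], points[b])
        best = None
        for pos in range(len(route) - 1):
            i, j = route[pos], route[pos + 1]
            cost = d(i, u) + d(u, j) - d(i, j)   # classic detour criterion
            if best is None or cost < best[0]:
                best = (cost, pos + 1)
        return best

    points = {0: (0, 0), 1: (4, 0), 2: (2, 3), 3: (2, 1)}
    route = [0, 1, 2, 0]                    # depot 0 -> 1 -> 2 -> depot
    print(best_insertion(points, route, 3)) # (added distance, insertion index)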

View presentation slides (PDF)

David Hemmi - Recursive Evaluate and Cut

Recursive Evaluate and Cut

Stochastic programming is concerned with decision making under uncertainty, seeking an optimal policy with respect to a set of possible future scenarios. We look at multistage decision problems where the uncertainty is revealed over time. First, decisions are made with respect to all possible future scenarios. Second, after observing the random variables, a set of scenario-specific decisions is taken. Our goal is to develop algorithms that can be used as a back-end solver for high-level modelling languages. We propose a scenario decomposition method to solve multistage stochastic combinatorial decision problems recursively. Our approach is applicable to general problem structures, utilizes standard solving technology and is highly parallelizable. We provide experimental results showing that it efficiently solves benchmarks with hundreds of scenarios.

Arash Kaviani - Optimising the resilience of road networks under uncertainty

Optimising the resilience of road networks under uncertainty

The increase in the frequency and severity of natural disasters has motivated researchers and practitioners to propose various optimisation models for the resilience of transportation networks such as roadway systems. Many of these models aim to optimise pre-disaster investment decisions to achieve road network resilience and are formulated as Network Design Problems (NDPs). To incorporate the various uncertainties surrounding human-made or natural disasters, Stochastic Programming (SP) has been widely applied in the formulation of NDPs, with uncertain quantities treated as random variables with a known probability distribution. More recently, the alternative approach of Robust Optimisation (RO) has been utilised to deal with uncertainty in an optimisation problem. This talk first surveys the latest applications of SP and RO in the context of NDPs, and then presents our recent work, in which we propose a hybrid stochastic-robust optimisation approach for the resilience of road networks.

View presentation slides (PDF)

Dr Yalçın Kaya - Optimal Control of a UAV in Search and Rescue Operations

Optimal Control of a UAV in Search and Rescue Operations

We consider the problem of designing optimal trajectories of an uninhabited aerial vehicle (UAV) so that the connectivity with a set of moving ground nodes is never (or at least not for too long) lost, in search and rescue operations. Our scenario consists of two ground nodes and one UAV. We present three mathematical models to ensure effective communication amongst these dynamic nodes of the network. Our first model introduces a new objective function which is used to maximize the network connection level. The second model considers, in the presence of an obstacle between the ground nodes, a new objective representing the maximization of a so-called confidence level under uncertainty about the positions of the ground nodes. Our third model is a multi-objective formulation combining the two previous objectives. We illustrate these three models with various scenarios and numerical/graphical examples. Due to the nonsmoothness of our optimal control models, the problems are computationally challenging even with this modest number of nodes. This is joint work with Regina Burachik from UniSA, and Kin-Ping Hui and Damien Phillips from DSTG.

View presentation slides (PDF)

Professor Christopher M Kellett - Optimal Control for Computing Carbon Prices

Optimal Control for Computing Carbon Prices

Many governments and international finance organisations use a carbon price in cost-benefit analyses, emissions trading schemes, quantification of energy subsidies, and modelling the impact of climate change on financial assets.  The most commonly used value in this context is the social cost of carbon (SCC).  Users of the social cost of carbon include the US, UK, German, and other governments, as well as organizations such as the World Bank, the International Monetary Fund, and Citigroup.  Consequently, the social cost of carbon is a key factor driving worldwide investment decisions worth many trillions of dollars.

The social cost of carbon is derived using integrated assessment models that combine simplified models of the climate and the economy.  One of the three dominant models used in the calculation of the social cost of carbon is the Dynamic Integrated model of Climate and the Economy, or DICE.  DICE is a relatively low-order discrete-time dynamical system which, when coupled with an optimal control problem, can be used to compute estimates of the social cost of carbon.  In this talk, I will briefly describe the DICE model and its associated optimal control problem.  Subsequently, I will describe some of the interesting numerical techniques and questions that arise in computing estimates of the social cost of carbon.
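
Schematically (suppressing the climate-economy dynamics, and noting that parameter conventions differ across DICE vintages), the control problem maximises discounted, population-weighted utility of per-capita consumption:

$$\max_{c_t,\,\mu_t}\ \sum_{t} \frac{L_t\, U(c_t)}{(1+\rho)^{t}}, \qquad U(c) = \frac{c^{1-\alpha}}{1-\alpha},$$

where $c_t$ is per-capita consumption, $\mu_t$ the emissions-abatement rate, $L_t$ the population, $\rho$ the discount rate and $\alpha$ the elasticity of marginal utility; the social cost of carbon is then obtained from the ratio of the shadow prices on emissions and on consumption.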

View presentation slides (PDF)

Dr Philip Kilby - Efficient computation of cost allocations for the Vehicle Routing Problem

Efficient computation of cost allocations for the Vehicle Routing Problem

Co-author: Dr Dan Popescu

In the Vehicle Routing Problem we aim to find efficient routes for a fleet of vehicles serving customers situated at various locations. Given a solution, we may then wish to divide the overall service cost in a way that is fair to all customers. This cost allocation problem is the focus of our paper.

This fair division problem turns out to be very challenging. An ideal solution is well defined, based on the Shapley Value of the Travelling Salesman Game. Unfortunately, this method has exponential computational complexity, which makes it practical only for small-scale scenarios involving no more than 30 customers. In real-life applications, scenarios involving a few hundred or even a few thousand customers are typical.
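
For reference, with customer set $N$ ($n = |N|$) and $c(S)$ the cost of optimally serving the subset $S$, the Shapley value allocates to customer $i$

$$\phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n-|S|-1)!}{n!}\,\bigl(c(S \cup \{i\}) - c(S)\bigr),$$

a sum over exponentially many subsets, each term requiring the solution of a routing problem, hence the practical limit of roughly 30 customers.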

We present a novel methodology to generate high-quality approximations to the Shapley Value, giving a fair division of transportation cost. It is based on a deeper geometric insight into distance sharing in a Euclidean context. We give explicit formulas for two such approximations, which have polynomial computational complexity. We present experimental results which demonstrate that our proposed methods outperform other state-of-the-art approximations found in the literature.

View presentation slides (PDF)

David Kirszenblat - Using column generation to solve an aircrew training timetabling problem

Using column generation to solve an aircrew training timetabling problem

The Training Authority Aviation (TA-Avn) is an organisation within the Royal Australian Navy (RAN) responsible for managing aviation-specific training for all RAN personnel who are to be employed in an aviation-related job category. In a temporal sense, the bulk of aircrew training consists of a sequence of major structured courses and a number of mandatory short courses for which the prerequisite requirements are less strict. Both short and long courses are run repeatedly throughout the year with a fixed number of repetitions, and are subject to high and extremely variable course pass rates.

In this talk, we adopt a conventional integer linear programming technique, specifically column generation. The problem of designing feasible schedules is formulated as a network flow problem that encompasses covering and prerequisite constraints. The problem is decomposed into a master problem and a subproblem. The master problem is initialised with a set of dummy schedules to which we allocate the aircrew student population, whilst respecting class capacity limitations. The master problem then requests solutions from the subproblem that offer some promise of minimising the overall time spent in training. Experimental results are compared with those of an ILP approach that assigns feasible schedules to labelled students.
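
Schematically (notation ours, not the authors'): if the covering constraints of the master LP have dual prices $\pi_i$, the subproblem searches for a feasible schedule $s$, with total training duration $c_s$ and course-incidence coefficients $a_{is}$, whose reduced cost is negative,

$$\bar{c}_s = c_s - \sum_i \pi_i\, a_{is} < 0,$$

and the procedure terminates when no such column exists.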

Associate Professor Alexander Kruger - About Stability of Error Bounds

About Stability of Error Bounds

Stability of local and global error bounds, including new concepts of (arbitrary, convex and linear) perturbations of the given function, will be discussed. The characterisations of error bounds for families of perturbations can be interpreted as estimates of the ‘radius of error bounds’. The definitions and characterisations are illustrated by examples.
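
For orientation (standard notation, not specific to this talk): a function $f$ with nonempty level set $S = \{x : f(x) \le 0\}$ has a local error bound at $\bar{x} \in S$ if there exist $c > 0$ and a neighbourhood $U$ of $\bar{x}$ such that

$$d(x, S) \le c\,[f(x)]_+ \qquad \text{for all } x \in U,$$

where $[t]_+ = \max(t, 0)$; stability asks whether such an estimate, and the associated constant (the ‘radius’), survive perturbations of $f$.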

View presentation slides (PDF)

Scott Lindstrom - Douglas-Rachford Method: a View from Strongly Quasi-Nonexpansive Operators

Douglas-Rachford Method: a View from Strongly Quasi-Nonexpansive Operators

A frequent problem in optimization is that of finding a feasible point in the zero level sets of a finite number of functions. We discuss a method based on iterating over- and under-relaxed cutter projection methods, of which the method of alternating projections and the Douglas-Rachford method are special cases. Notwithstanding the loss of firm nonexpansivity, useful convergence results may still be obtained within the framework of strongly quasi-nonexpansive operators. We provide examples and share new insights.
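
For two sets $A, B$ with projectors $P_A, P_B$ and reflectors $R_A = 2P_A - I$, $R_B = 2P_B - I$, the Douglas-Rachford operator is

$$T_{A,B} = \tfrac{1}{2}\left(I + R_B R_A\right),$$

and the relaxed-cutter viewpoint replaces an operator $T$ by $T_\lambda = (1 - \lambda)I + \lambda T$, over-relaxing for $\lambda > 1$ and under-relaxing for $\lambda < 1$.
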
Jose Nicolas Melchor Gutierrez - A comparison of nonlinear, second order cone and linear programming formulations for the optimal power flow problem

A comparison of nonlinear, second order cone and linear programming formulations for the optimal power flow problem

Co-authors: Professor Pierluigi Mancarella, Professor Rubén Augusto Romero Lazaro

The provision of electrical energy is a vital service in our society, so both the continuity and the quality of this service are compulsory. To guarantee these, the operational state of the system must be determined at any time, and various hypothetical situations in the power system must be considered to guarantee reliability under unexpected changes. The optimal power flow (OPF) problem addresses both requirements. The linear direct current (DC) formulation has been used for more than 50 years to represent the OPF problem. However, the DC OPF is a mere approximation of the complete alternating current (AC) formulation, and in some cases its solution can lead to unrealistic results. In this work, a comparison between the DC and AC OPF in steady-state power systems is made. A second order cone (SOC) approximation and a nonlinear formulation are studied for the AC OPF problem. The formulations are implemented in the mathematical programming language AMPL and solved with the solvers Knitro and CPLEX. The power flow simulations for the AC and DC formulations are run on the IEEE 300-bus system.
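
For reference, the full AC power-flow equations underlying the nonlinear formulation are

$$P_i = \sum_j |V_i||V_j|\left(G_{ij}\cos\theta_{ij} + B_{ij}\sin\theta_{ij}\right), \qquad Q_i = \sum_j |V_i||V_j|\left(G_{ij}\sin\theta_{ij} - B_{ij}\cos\theta_{ij}\right),$$

with $\theta_{ij} = \theta_i - \theta_j$; the DC approximation assumes flat voltage magnitudes ($|V_i| \approx 1$), small angle differences and negligible resistance, leaving the linear branch-flow relation $P_{ij} = (\theta_i - \theta_j)/x_{ij}$.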

View presentation slides (PDF)

Andrew Perryman - Augmented Benders' Decomposition for Synchromodal Logistics

Augmented Benders’ Decomposition for Synchromodal Logistics

Co-authors: Professor Andreas Ernst, Professor Mohan Krishnamoorthy

In Australian cities, deep-sea container ports are typically located close to the city centre, with transportation to and from the port carried out by trucks. With the aim of reducing the congestion associated with road transportation, state and federal governments have recently begun championing a modal switch to short-haul rail for these transportation tasks. In this talk we describe a metropolitan freight transportation problem arising from this domain that seeks to leverage both modes of transport from a least-cost perspective. We propose a mathematical programming formulation and develop a modified Benders’ decomposition method for the problem. We demonstrate mathematically that this method finds Pareto-optimal cuts by solving an augmented version of the subproblem that exploits subproblem dual degeneracy without destroying its underlying structure. Our modified Benders’ routine is further augmented by a number of logic-based Benders’ cuts that remain valid for arbitrarily complex road-transportation requirements. Computational results demonstrate the effectiveness of this routine over commercial solver implementations of the mathematical programming formulation.
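
Schematically (notation ours): fixing the master problem's decisions $y$ yields a linear subproblem whose optimal dual solution $\pi$ generates the classical Benders optimality cut

$$\eta \ge \pi^{\top}\left(h - Ty\right);$$

because the subproblem is dual-degenerate, many optimal $\pi$ exist, and the augmented subproblem selects among them a cut that is Pareto-optimal in the sense of Magnanti and Wong (1981).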

Dr Moshe Sniedovich - Decision-making under severe uncertainty: from worst-case analysis to robust optimization

Decision-making under severe uncertainty: from worst-case analysis to robust optimization

Worst-case analysis is an old, controversial tool for the treatment of uncertainty. In contrast, (modern) robust optimization is a relatively new, extremely popular paradigm for decision-making under uncertainty in the framework of optimization problems. In this presentation we examine the relationship between these two approaches from the viewpoint of Wald’s famous maximin principle (circa 1940), recalling that this principle argues that decisions should be ranked according to their worst-case performance, so that an optimal decision is one whose worst-case performance is at least as good as that of any other decision. We draw a distinction between local and global worst-case analysis (hence local and global robustness), clarify the role of constraints in worst-case analysis and robust optimization, and discuss in detail the “conservatism” issue associated with models based on a worst-case approach to uncertainty.
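
In symbols, Wald’s maximin rule ranks a decision $d$ by its worst-case performance and selects

$$d^{*} \in \arg\max_{d \in D}\ \min_{s \in S(d)} f(d, s),$$

where $D$ is the decision space, $S(d)$ the set of states of nature associated with $d$, and $f$ the performance measure; the local/global distinction concerns whether $S(d)$ is the full uncertainty set or only a neighbourhood within it.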

View presentation slides (PDF)

Dr Belinda Spratt - Reducing post-surgery recovery occupancy under uncertainty

Reducing post-surgery recovery occupancy under uncertainty

Co-author: Adjunct Professor Erhan Kozan

Operations Research approaches to surgical scheduling are becoming increasingly popular in both theory and practice. Often, these models neglect stochasticity in order to reduce the computational complexity of the problem. In this talk, two years of historical data are utilised to examine the occupancy of post-surgery recovery spaces as a function of the initial surgical case sequence. We show that the number of patients in the recovery space is well modelled by a Poisson binomial random variable. A mixed integer nonlinear programming model for the surgical case sequencing problem is presented that reduces the probability of exceeding the capacity in post-surgery recovery units. Given the complexity of the model, metaheuristic approaches are implemented to solve the surgical case sequencing problem. Preliminary results indicate that this approach can be used to level the demand for post-surgery recovery resources and improve the predictability of demand in downstream wards.
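
A minimal sketch of the distributional claim (hypothetical probabilities): if patient $i$ is still in recovery at a given time with probability $p_i$, independently across patients, the number of occupied recovery spaces is Poisson binomial, and the exact probability of exceeding capacity follows by convolving the individual Bernoulli distributions.

    import numpy as np

    # Illustrative per-patient probabilities of still occupying a recovery
    # space at some fixed time (these numbers are made up for the sketch).
    p = np.array([0.9, 0.7, 0.6, 0.4, 0.3, 0.2])
    capacity = 4

    pmf = np.array([1.0])                     # pmf of a sum of zero Bernoullis
    for pi in p:
        pmf = np.convolve(pmf, [1 - pi, pi])  # add one Bernoulli(p_i) patient

    prob_exceed = pmf[capacity + 1:].sum()    # P(occupancy > capacity)
    print(f"P(occupancy > {capacity}) = {prob_exceed:.4f}")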

Dr Nadia Sukhorukova - Curve clustering using Chebyshev and Least Squares approximation

Curve clustering using Chebyshev and Least Squares approximation

There have been a number of attempts to extend the well-known K-means method to curve clustering. The method is based on repeated recalculation of cluster prototypes (that is, approximations) as one or more curves move from one cluster to another. In this presentation I will talk about possible ways to accelerate the procedure. I will consider two types of problems: Least Squares and Chebyshev approximation. I will also mention other applications of approximation to curve clustering and signal processing.
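
A minimal sketch of the underlying loop (synthetic data; an illustration, not the speaker's method): a K-means-style iteration for curves sampled on a common grid, where the least-squares prototype of a cluster is its pointwise mean curve. A Chebyshev (minimax) criterion would replace the mean with a pointwise minimax approximation such as the midrange.

    import numpy as np

    # Two groups of noisy curves sampled on a common grid of 50 points.
    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 50)
    curves = np.vstack(
        [np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(50) for _ in range(20)] +
        [np.cos(2 * np.pi * t) + 0.1 * rng.standard_normal(50) for _ in range(20)])

    K = 2
    protos = curves[rng.choice(len(curves), K, replace=False)]  # initial prototypes
    for _ in range(20):
        # assign each curve to its nearest prototype in the L2 sense
        dists = ((curves[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # least-squares prototype update: pointwise mean of each cluster
        new_protos = np.vstack([
            curves[labels == k].mean(axis=0) if np.any(labels == k) else protos[k]
            for k in range(K)])
        if np.allclose(new_protos, protos):
            break
        protos = new_protos
    print(np.bincount(labels, minlength=K))   # cluster sizes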

View presentation slides (PDF)
