Selective discrete symbiotic organism search for capacitated vehicle-routing problem
Co-author: Vincent F. Yu
This paper proposes a selective discrete symbiotic organism search (S-DSOS) heuristic for solving the capacitated vehicle-routing problem (CVRP), a well-known discrete optimisation problem. Symbiotic organism search (SOS) is a simple and powerful metaheuristic that simulates the symbiotic interaction strategies organisms adopt to survive in an ecosystem. Since SOS was originally developed for continuous optimisation problems, in this study we propose a discrete version of SOS that operates directly on a discrete solution representation rather than converting from a continuous one. The interaction strategies of SOS, namely mutualism, commensalism and parasitism, are modified to be compatible with this representation. The performance of S-DSOS was evaluated on a set of benchmark instances and compared with the best known solutions. The computational results of this preliminary testing show that the proposed algorithm produces good solutions, indicating that S-DSOS is competitive for solving the CVRP.
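As a rough illustration of the mechanics (not the authors' S-DSOS), the mutualism phase of the original continuous SOS can be sketched as follows; the sphere objective, bounds, population size and iteration count are illustrative choices.

```python
import random

def sphere(x):
    """Illustrative objective: sum of squares, minimised at the origin."""
    return sum(v * v for v in x)

def mutualism(eco, fitness, best, f, lo=-5.0, hi=5.0):
    """One mutualism pass of continuous SOS (Cheng & Prayogo, 2014):
    paired organisms move towards the current best, guided by their
    'mutual vector' and random benefit factors in {1, 2}; a move is
    kept only if it improves fitness (greedy acceptance)."""
    n = len(eco)
    for i in range(n):
        j = random.choice([k for k in range(n) if k != i])
        mutual = [(a + b) / 2 for a, b in zip(eco[i], eco[j])]
        for org in (i, j):
            bf = random.choice((1, 2))
            cand = [min(hi, max(lo, x + random.random() * (xb - m * bf)))
                    for x, xb, m in zip(eco[org], best, mutual)]
            if f(cand) < fitness[org]:
                eco[org], fitness[org] = cand, f(cand)
    return eco, fitness

random.seed(0)
eco = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
fit = [sphere(x) for x in eco]
init_best = min(fit)
for _ in range(50):
    best = eco[min(range(len(eco)), key=lambda k: fit[k])]
    eco, fit = mutualism(eco, fit, best, sphere)
print(init_best, min(fit))
```

Because acceptance is greedy, the best fitness can never deteriorate; the discrete variant discussed in the talk replaces these arithmetic updates with route-based operators.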
Regularity of mappings vs. transversality of sets
Transversality properties of finite collections of sets and their connections with the corresponding regularity properties of set-valued mappings will be discussed. Dual characterisations of the properties in terms of Fréchet, limiting and proximal normals will be provided.
Computing high-quality Lagrangian bounds of the stochastic mixed-integer programming problem
Co-authors: N. Boland, J. Christiansen, B. Dandurand, J. Linderoth, J. Luedtke and F. Oliveira
A new primal-dual decomposition method is presented, based on an integration of the Progressive Hedging (PH) and the Frank–Wolfe (FW) methods, referred to as FW–PH, for computing high-quality Lagrangian bounds of the stochastic mixed-integer programming problem (SMIP). The deterministic equivalent (DE) form of the SMIP typically lacks convexity due to the assumed integrality restrictions, and since PH is a specialisation of the alternating direction method of multipliers (ADMM), the application of PH to the SMIP is not theoretically supported due to this lack of convexity. Thus, PH applied to the SMIP is understood as a heuristic approach without convergence guarantees, where either cycling or suboptimal convergence is possible. Although Lagrangian bounds may be computed after each PH iteration, these bounds often show limited improvement with increasing number of iterations, and the amount of improvement is highly sensitive to the value of the PH penalty parameter. Motivated by these observations, we modify the PH method so that the generated sequence of Lagrangian bounds is guaranteed to converge to the optimal Lagrangian bound for any positive-valued PH penalty parameter. The new integrated method is shown to be both theoretically supported due to an integration of established theory for ADMM and the FW method, and practically implementable. Numerical experiments demonstrate the improvement of FW–PH over the PH method for computing high-quality Lagrangian bounds.
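For context, the Lagrangian bound in question arises from scenario decomposition of the DE form; the following is a standard formulation (our notation, not necessarily that of the talk):

```latex
\[
z^{*} \;=\; \min_{x,\,z}\ \sum_{s\in S} p_s f_s(x_s)
\quad\text{s.t.}\quad x_s \in X_s,\ \ x_s = z \ \ \forall s \in S,
\]
where the constraints $x_s = z$ enforce nonanticipativity. Relaxing them with
multipliers $\lambda_s$ satisfying $\sum_{s\in S} p_s \lambda_s = 0$ yields the
scenario-separable Lagrangian bound
\[
L(\lambda) \;=\; \sum_{s\in S}\, \min_{x_s \in X_s}\, p_s\bigl( f_s(x_s) + \lambda_s^{\top} x_s \bigr)
\;\le\; z^{*}.
\]
```

FW–PH generates a sequence of such bounds $L(\lambda)$ that provably converges to the best attainable bound for any positive penalty parameter.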
Simulation-guided optimisation algorithms for real-time train scheduling
Optym is a world leader in developing optimisation- and simulation-based solutions for transportation and logistics companies worldwide. In this talk, we will share our algorithmic approach for real-time movement planning and scheduling of trains over large rail networks. Train movement-planning problems are highly combinatorial in nature, and also quite complex due to several considerations such as network topology, train acceleration/deceleration, crossings and priorities. In this presentation, we will share our highly scalable algorithms for solving this problem, and the results of successful applications of our tools on several large railroads worldwide.
Optimising patient flow and throughput in a surgical suite
To keep up with the rising demand for elective surgery, we need to be more efficient in delivering surgical services. While it is appropriate to model emergency patient flow as a queuing network, elective patient flow in a surgical suite is quite different. A simulation model can be used as an alternative to a queuing model, but it is difficult to make optimal decisions that way.
In this talk, we will discuss a mixed integer programming (MIP) model to analyse and optimise elective-surgery patient flow in a surgical suite. We will use the model to develop stochastically optimal surgery schedules for various length-of-stay scenarios. We will also analyse the effect of varying the penalty factor, an input parameter that controls cancellations, on the occupancy level and the cancellation rate. Finally, we will develop an optimal master surgery schedule that maximises the utilisation level while keeping the cancellation rate within limits.
Reactive multi-operating room surgical case-sequencing problem
Co-author: Erhan Kozan
The surgical department is a highly uncertain environment. Although robust surgical schedules can be produced to reduce disruption, strategies must be developed to reschedule in the case that schedule deviations occur. These include patient cancellations or late arrivals, non-elective arrivals and changes to staff availability. On top of this, surgical durations are, in some cases, highly uncertain. In this talk, we present a mixed-integer non-linear programming model for the reactive rescheduling of the surgical department on a daily basis. This online model can be used to reschedule in the case of disruptions. Here, we consider non-identical operating rooms that are suited to certain specialties, but not others. We also consider the allocation of surgeons to each surgery under surgeon availability and suitability constraints. Patients are given priorities such that they must be scheduled before less urgent surgeries. Throughout the literature, no consideration is given to scheduling additional elective patients where surgical capacity exceeds pre-scheduled demand. We allow additional elective patients to be scheduled where appropriate surgeons are available and time-to-surgery is sufficient to allow for transport to the hospital and for surgical preparation.
Optimal path planning
The Markov-Dubins path is the shortest planar curve joining two points with prescribed tangents, subject to a prescribed bound on its curvature. Although the problem was first posed by Andrei Andreyevich Markov in 1889, it was Lester Eli Dubins who reported the first solution, in 1957. The structure of Dubins’ solution is elegantly simple: at most three pieces, each either a circular arc at maximum curvature (the prescribed bound) or a straight-line segment, are concatenated. The Markov-Dubins path has since been used extensively for path planning of uninhabited aerial vehicles (or drones) and robots. It has also been used for tunnelling in underground mines. A realistic generalisation of the Markov-Dubins problem requires the path to pass through a number of prescribed intermediate points, giving rise to an interpolation problem. I will present a solution to this interpolation problem. Furthermore, I will report results on interpolation problems in which the pointwise maximum acceleration of the interpolating path is minimised, and present practical solution techniques for finding such paths. The latter are relevant in situations where small acceleration is desired, which is not guaranteed by, for example, the commonplace cubic splines.
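For reference, Dubins' 1957 classification can be stated concisely: writing $L$ and $R$ for arcs at maximum curvature (left and right turns) and $S$ for a straight segment, every length-minimising path is a sub-path of one of the six words

```latex
\[
\{\, LSL,\ RSR,\ LSR,\ RSL,\ RLR,\ LRL \,\}.
\]
```

The interpolation problems discussed in the talk generalise this setting, so this simple word structure no longer applies directly.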
Minimising the number of apertures in multileaf collimator sequencing with field-splitting
Co-authors: Matthias Ehrgott, Horst W. Hamacher and Ines M. Raschendorfer
We consider the problem of decomposing a given integer matrix into a positive integer linear combination of consecutive-ones matrices with a bound on the number of columns per matrix. This problem is of relevance in the realisation stage of intensity-modulated radiation therapy (IMRT) using linear accelerators and multileaf collimators with limited width. Constrained and unconstrained versions of the problem with the objectives of minimising beam-on time and decomposition cardinality are considered. We introduce a new approach which can be used to find the minimum beam-on time for both constrained and unconstrained versions of the problem. The decomposition cardinality problem is shown to be NP-hard and an approach is proposed to solve the lexicographic decomposition problem of minimising the decomposition cardinality subject to optimal beam-on time.
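For the single-row, unconstrained case, the minimum beam-on time has a well-known closed form (the sum of positive increments along the row), which a unit-layer sweep attains; the following sketch, in our own notation, illustrates it. The width-constrained and minimum-cardinality versions treated in the talk are substantially harder.

```python
def beam_on_time(row):
    """Minimum beam-on time to write one intensity row as a positive
    integer combination of consecutive-ones (interval) vectors:
    the sum of the positive increments along the row."""
    prev, total = 0, 0
    for a in row:
        total += max(0, a - prev)
        prev = a
    return total

def greedy_decompose(row):
    """Peel off unit-weight maximal positive intervals until the row
    is zero; returns (left, right) inclusive index pairs. The number
    of peels equals the beam-on time of the row."""
    row = list(row)
    segs = []
    while any(row):
        i = next(k for k, a in enumerate(row) if a > 0)
        j = i
        while j + 1 < len(row) and row[j + 1] > 0:
            j += 1
        for k in range(i, j + 1):
            row[k] -= 1
        segs.append((i, j))
    return segs

row = [2, 3, 1, 0, 4]
print(beam_on_time(row))            # 2 + 1 + 0 + 0 + 4 = 7
print(greedy_decompose(row))
```

Note that this sweep minimises beam-on time but not the number of apertures; minimising decomposition cardinality is the NP-hard objective addressed in the talk.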
A two-stage stochastic programming model for inventory management in the blood-supply chain
Managing inventories in the blood supply chain is a challenging task, mainly due to the uncertain nature of the demand for blood units, the perishable nature of the blood, and a strong subjective bias towards criteria other than cost-minimisation. We propose a two-stage stochastic programming model for defining optimal periodic review policies for the inventory management of red blood cells focusing on minimising operational costs, as well as on blood shortage and wastage due to outdating, taking into account perishability and demand uncertainty. The adoption of this framework allows the consideration of more general stochastic processes to model the demand uncertainty than approaches currently available in literature. To illustrate the potential benefits of the proposed model to support the definition of optimal blood inventory control policies, a case study is presented considering realistic data representing the daily demand for eight types of blood.
Efficiently solving stochastic mixed-integer problems combining Gauss-Seidel and penalty-based methods
Co-authors: J. Christiansen, B. Dandurand and A. Eberhard
In this talk, we will present the development of a novel decomposition approach for stochastic mixed-integer programming (SMIP) problems, inspired by the combination of penalty-based Lagrangian and block Gauss-Seidel methods (named the PBGS method). We will present the technical aspects of the development of PBGS, which focus on exploiting, in a computationally efficient manner, the inherent decomposable structure that SMIPs present. The performance of the proposed method is compared with that of the classical Progressive Hedging (PH) method, which can also be viewed as an augmented Lagrangian relaxation-based method for obtaining solutions of SMIPs. We will also present extensive numerical results, using several instances from the literature, that illustrate the efficiency of the proposed method in terms of computational performance and solution quality.
Some recent advances in polynomial optimisation
Optimisation problems involving polynomial functions are of great importance in applied mathematics and engineering, and they are intrinsically hard problems. They arise in important engineering applications such as the sensor network localisation problem, and provide a rich and fruitful interaction between algebraic-geometric concepts and modern convex programming.
The talk will be divided into two parts. In the first part, I will describe the key results in this exciting area, highlighting the geometric and conceptual aspects as well as recent work on exact semi-definite program relaxations for polynomial optimisation problems. In the second part, I will explain how semi-algebraic structure helps us to analyse the explicit convergence rate of some important and powerful algorithms, such as the alternating projection, proximal point and Douglas-Rachford algorithms. Applications to tensor computations and sparse optimisation problems will be discussed, if time permits.
Modelling a water-supply system to generate long-term operating plans
Melbourne Water supplies drinking and recycled water and manages Melbourne’s water supply catchments, sewage treatment, rivers, creeks and major drainage systems. In its role as the supplier of drinking water to retail water companies, Melbourne Water manages a complex interconnected network of storage reservoirs, tunnels, aqueducts and bulk water-transfer pipelines. Distributing water through such a system requires careful planning to ensure optimal strategies to supply water. This requires consideration of many criteria including minimising operating cost and maximising water storage. This talk will present our work on a new, scenario-based interactive optimisation tool, based on the MiniZinc constraint modelling system, to replace the previously-used optimisation tool that assisted in producing Melbourne Water’s annual operating plan. The new tool can take demand forecasts for multiple years into account and incorporates potential drought-response plan actions based on modelled storage levels. The tool provides the flexibility to modify the underlying network configuration for the purposes of representing asset management or system improvement actions. This will enable the rapid and interactive evaluation of different plans for different operating scenarios, thereby making the decision-making quicker, easier and more robust.
A large neighbourhood-search approach for the nurse-rostering problem
Co-authors: Andreas Ernst and Mohan Krishnamoorthy
Nurse-rostering problems involve assigning nurses to shifts, subject to various hard and soft constraints arising from work/enterprise regulations and the personal work preferences of the nurses. The objective is to satisfy as many of the workforce/cover requirements and soft constraints as possible, subject to satisfying the hard constraints. Due to the difficulty of many nurse-rostering problems, finding good-quality solutions often requires sophisticated heuristic algorithms. Large-neighbourhood search heuristics have been shown to be effective for difficult problems in areas such as transportation and scheduling. In this paper, we present a large-neighbourhood search heuristic for the nurse-rostering problem. By formulating the problem using a work-stretch approach, we are able to solve reasonably large problems quickly, as long as they involve only a small number of shift types. The large-neighbourhood search method exploits our ability to solve certain subsets of the larger problem quickly using mixed integer programming, allowing local improvements over a large neighbourhood of the current solution while maintaining feasibility. We investigate several improvements to this approach, and test it on standard datasets from the literature to demonstrate its effectiveness.
A hybrid Benders decomposition of the assembly line
Since the 1910s when Henry Ford revolutionised the large-scale production of automobiles, assembly lines have become an integral part of the manufacturing industry. Your pen, book and phone have all been made much cheaper due to some mathematician optimising the production rate of an assembly line. However, there is still a large gap between an abstract assembly line in the academic literature and one in real life. We reduce this gap with a hybrid solution methodology combining state-of-the-art solving technology from both mathematical and constraint programming. Logic-based Benders decomposition allows us to break the problem down into its basic combinatorial components and tackle each one individually. When compared to the best existing approaches from the literature, our method is able to prove optimality in a fraction of the time. The general solution framework we propose can be applied to a range of other assembly-line balancing problems which contain a scheduling component.
Better support for combinatorial optimisation problem modellers
Developing models of combinatorial optimisation problems that can be run efficiently is a very challenging process. It often involves teams of experts iteratively modifying an initial model until the result can be used to find quality solutions with reasonable speed. This iterative process is often guided by the past experiences of the team (rather than by any automatic data on the model and its execution) and takes huge amounts of time, making it a kind of expensive “dark art”. I will talk about several Constraint Programming methods we are developing that aim to make this process shorter and its result better.
Mathematics in medicine: optimising image acquisition and cancer treatment in radiotherapy
Effective radiotherapy is dependent on being able to (i) visualise the tumour clearly, and (ii) deliver the correct dose to the cancerous tissue, whilst sparing the healthy tissue as much as possible. In the presence of motion, both of these tasks become increasingly difficult to perform accurately – increasing the likelihood of incorrect dose delivered to cancerous tissue and exposure of healthy tissue to unnecessary radiation, causing adverse effects. In this presentation I will outline a few preliminary mathematical models for optimising radiation treatment (imaging and delivery) and outcomes for the patient. I’ll also discuss some other further areas for research and interdisciplinary collaboration.
Linear convergence of projection algorithms
Projection algorithms are well known for their simplicity and flexibility in solving feasibility problems. They are particularly important in practice since software involving projection algorithms requires minimal implementation and maintenance. In this work, we study the linear convergence of several projection algorithms for systems of finitely-many closed sets. The results complement contemporary research on the same topic.
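A minimal illustration of linear convergence (not taken from the talk): alternating projections between two lines through the origin converge to their intersection linearly, with rate cos²θ for angle θ between the lines.

```python
import math

def project_line(x, d):
    """Euclidean projection of the point x onto the line span{d},
    where d is a unit direction vector."""
    t = x[0] * d[0] + x[1] * d[1]
    return (t * d[0], t * d[1])

# Two lines through the origin meeting at angle theta; the iterates of
# alternating projections shrink towards the intersection {0} by a
# factor of cos^2(theta) per full cycle.
theta = math.pi / 6
d1 = (1.0, 0.0)
d2 = (math.cos(theta), math.sin(theta))

x = (1.0, 2.0)
norms = []
for _ in range(20):
    x = project_line(project_line(x, d1), d2)
    norms.append(math.hypot(*x))

print(norms[-1] / norms[-2])   # the observed linear rate, cos^2(theta)
```

As the angle shrinks the rate approaches 1, which is exactly the regime where the transversality/regularity conditions discussed in the talk fail.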
An approach for the convex feasibility problem via monotropic programming
Co-author: Victoria Martín-Márquez
Convex feasibility problems are mathematical models with a wide range of concrete applications, including medical imaging and sparse signal recovery. In this talk, we apply recent duality results for analysing the consistency (i.e., existence of solutions) of the convex feasibility problem. We characterise consistency in terms of a continuity property of certain support functions associated with the sets.
Optimising a vendor-managed inventory problem in the fuel industry
Co-author: Eva Johanna Degel
In this talk, we describe our experience of solving an optimisation problem for a large-scale fuel distributor in charge of managing the inventories of its customers. Given a fleet of vehicles, recent tank levels and predictions of future consumption, the goal is to run the business as cost-effectively as possible without compromising safety, causing tank stock-outs or violating driver-fatigue management rules. After a brief introduction to the problem and the application that we built, we discuss the relevant Key Performance Indicators (KPIs) for the problem. We then go over some of the constraints present in the problem description, followed by a high-level description of our solution. We describe the business processes that enable us to create benchmarks, in collaboration with the client, that drive our optimisation development process efficiently. We close the talk with some rough calculations showing how much similar businesses with a sufficiently large customer base can save from even small percentage improvements in the quality of their plans.
Optimal convergence rates for Krasnoselskii-Mann fixed-point iterations
A classical method to approximate a fixed point of a non-expansive map T : C → C is the Krasnoselskii-Mann iteration (KM) x_{n+1} = (1 − α_{n+1}) x_n + α_{n+1} T(x_n).
We use optimal transport to establish a recursive formula estimating the distance between iterates, ‖x_m − x_n‖ ≤ d_{mn}. These bounds d_{mn} induce a metric on the integers that allows us to study the rate of convergence of (KM). As a result, we settle the Baillon-Bruck conjecture: for every non-expansive map in any normed space, the following estimate holds with κ = 1/√π:

‖x_n − T(x_n)‖ ≤ κ diam(C) / √( Σ_{k=1}^{n} α_k(1 − α_k) ).

The analysis exploits an unexpected connection with discrete probability and combinatorics, related to the Gambler's Ruin for sums of non-homogeneous Bernoulli trials. We also show that the constant κ = 1/√π is sharp for nonlinear maps, and we provide an improved sharp bound for linear affine maps.
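As a numerical sanity check of a residual bound of this form (the map, set and step sizes below are our illustrative choices, not from the talk): for a rotation of the unit disc, a non-expansive map with fixed point 0, the KM residual after n steps with constant α_k = 1/2 sits comfortably below κ diam(C)/√(n/4).

```python
import math

def T(p, phi=math.pi / 3):
    """Rotation about the origin: a non-expansive (in fact isometric)
    self-map of the closed unit disc, with unique fixed point 0."""
    c, s = math.cos(phi), math.sin(phi)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def km(x0, n, alpha=0.5):
    """Krasnoselskii-Mann iteration x_{k+1} = (1-a) x_k + a T(x_k)."""
    x = x0
    for _ in range(n):
        tx = T(x)
        x = ((1 - alpha) * x[0] + alpha * tx[0],
             (1 - alpha) * x[1] + alpha * tx[1])
    return x

n = 200
x = km((1.0, 0.0), n)
tx = T(x)
residual = math.hypot(x[0] - tx[0], x[1] - tx[1])

# Bound with kappa = 1/sqrt(pi), diam(C) = 2 and alpha_k = 1/2,
# so that sum_k alpha_k (1 - alpha_k) = n / 4.
bound = (1 / math.sqrt(math.pi)) * 2.0 / math.sqrt(n / 4)
print(residual, bound)
```

For this averaged rotation the residual actually decays geometrically, far faster than the worst-case O(1/√n) rate; the bound is tight only for carefully constructed maps.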
Optimising courier routes in central business districts
Rapid development of major cities, and the consequent fast-growing traffic congestion, raise significant challenges for determining efficient distribution routes in central city areas. This talk will describe the development of a model, formulated as a bi-level optimisation, for determining the optimal on-street loading zones for courier vehicles to use so as to minimise distribution costs. The traffic paths used to access the loading zones and the paths used to cart the goods to establishments are determined. Nearest-neighbour algorithms are used to search for optimal solutions. The model is tested on a grid network as well as in Sydney’s CBD. The results produced by the model for the CBD area are compared to tracked delivery routes and costs, and show significant savings.
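The nearest-neighbour construction used to seed such route searches can be sketched in a few lines (the stop coordinates are illustrative):

```python
import math

def nearest_neighbour_tour(points, start=0):
    """Build a tour greedily: from the current stop, always visit the
    nearest unvisited stop next. A simple construction heuristic of
    the kind used to seed route optimisation."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        cur = points[tour[-1]]
        nxt = min(unvisited, key=lambda k: math.dist(cur, points[k]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Four stops; the greedy tour happens to visit them in index order.
stops = [(0, 0), (0, 1), (1, 1), (2, 0)]
print(nearest_neighbour_tour(stops))  # -> [0, 1, 2, 3]
```

In the bi-level setting of the talk the picture is richer, since the loading-zone choice in the upper level changes the distances seen by the routing level.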
Regularising with Bregman-Moreau envelopes
Co-authors: Heinz H. Bauschke and Minh N. Dao
The Moreau envelope of a given convex function provides a regularised version with additional desirable properties, such as differentiability and full domain. The Bregman distance, as a measure of discrepancy between two points, generalises the square of the Euclidean distance. Proximity operators based on the Bregman distance have become a topic of significant research, as they are useful in the algorithmic solution of optimisation problems. More recently, Kan and Song studied regularisation aspects of the left Bregman-Moreau envelope, even for nonconvex functions. We complement previous work by analysing both the left and right Bregman-Moreau envelopes and by providing additional asymptotic results and examples.
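For concreteness, the objects under discussion can be written down as follows (standard definitions; the left/right naming convention varies slightly between authors). For a function $f$, parameter $\lambda > 0$, and a suitable convex function $\phi$:

```latex
\[
\operatorname{env}_{\lambda f}(x) \;=\; \inf_{y}\Bigl\{ f(y) + \tfrac{1}{2\lambda}\,\|x-y\|^{2} \Bigr\},
\qquad
D_{\phi}(x,y) \;=\; \phi(x) - \phi(y) - \langle \nabla\phi(y),\, x-y \rangle .
\]
The left and right Bregman--Moreau envelopes replace the quadratic term by
$D_{\phi}$ taken in its two argument orders,
\[
\inf_{y}\Bigl\{ f(y) + \tfrac{1}{\lambda}\, D_{\phi}(y,x) \Bigr\}
\qquad\text{and}\qquad
\inf_{y}\Bigl\{ f(y) + \tfrac{1}{\lambda}\, D_{\phi}(x,y) \Bigr\},
\]
and both reduce to the Moreau envelope when $\phi = \tfrac{1}{2}\|\cdot\|^{2}$.
```

Since $D_{\phi}$ is generally asymmetric, the two envelopes behave differently, which is why the talk treats them separately.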
Optimisation in data analysis: survey and recent developments
Optimisation methodology has proved to be essential to formulating and solving problems in data analysis, machine learning, and computational statistics. Such problems are characterised by fairly elementary objective functions but a very large amount of data. Algorithms need to take account of the statistical/learning context, the expense of computing function and derivative information, non-smoothness, and (increasingly) non-convexity. The talk will sketch canonical problem formulations, fundamental algorithmic techniques, and issues of current research focus.
Scheduling automated cell staining: an iterative approach
This talk will introduce a real-time industrial scheduling problem that arises in fully-automating advanced staining machines. Advanced staining is the process of applying reagents to tissue samples in order to reveal the presence or prevalence of particular cellular components. This method is commonly used to assist pathologists in diagnosing cancers and infectious diseases.
To process the samples, the automated system must use its finite resources to complete a set of activities subject to a number of challenging constraints such as minimum and maximum time-lags. An interesting property of the problem is that the number of activities can be significantly reduced depending on the order in which they are completed.
The current solution strategy involves first obtaining and then improving an initial feasible solution. The number of tasks is iteratively reduced, while always ensuring solution feasibility. Computational results which highlight the benefits of this approach will be discussed.
Open problems in convex optimisation
Solving an optimisation problem usually involves choosing an appropriate algorithm, or designing a new one if none exists. This may be a difficult task, as some problems are notoriously hard; for instance, NP-hard problems such as the famous travelling salesman problem. However, “P versus NP” is not the only challenge faced by researchers in optimisation theory. There are numerous open questions whose resolution would lead to breakthroughs in our understanding of optimisation.
In this talk I will focus on two challenging geometric problems that originate from computational complexity and representational challenges in convex optimisation: the polynomial Hirsch conjecture and the generalised Lax conjecture. The former asks for a polynomial bound for the combinatorial diameter of a polytope, and the latter is about representing a class of algebraically-defined constraints as slices of the cone of positive semidefinite matrices.