Solving Large-Scale Robust Optimization Models
John M. Mulvey (Princeton University)
Robust optimization provides an ideal framework for
addressing significant planning problems in engineering and
management. As an example, the approach applies in a natural
manner to integrating assets and liabilities for large financial organizations and wealthy individuals. The goal is
to discover a proper combination of
investment/borrowing/saving strategies that will perform well
over a finite set of scenarios. The robustness of the
recommendations is evaluated through out-of-sample precision
analysis. Solutions deemed inadequate under particular
scenarios are reviewed during the next stage of the
procedure. We demonstrate the methodology using the Home Account financial system, in which thousands of robust optimization problems must be solved each day.
These large nonlinear multi-stage stochastic programs -- up to
several million decision variables and constraints -- require high-performance computers. This talk outlines the key
concepts for solving the resulting stochastic optimization
problem, including nonlinear interior point methods,
decomposition algorithms, and strategies for parallelization.
We discuss the pros and cons of alternative optimization algorithms. Possible avenues for future research are explored.
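For readers unfamiliar with scenario-based robust optimization, a representative formulation in the spirit of the robust optimization framework of Mulvey, Vanderbei, and Zenios (the notation is illustrative, not necessarily the exact model used in the talk) is

    \min_{x,\,y_s,\,z_s}\; \sigma(x, y_1, \dots, y_S) + \omega\,\rho(z_1, \dots, z_S)
    \text{subject to}\quad A x = b, \qquad B_s x + C_s y_s + z_s = e_s, \quad s = 1, \dots, S,

where x holds the scenario-independent (design) decisions, y_s the recourse decisions under scenario s, and z_s the constraint violations under scenario s; \sigma trades off expected performance against its variability across the scenarios, and \omega weights the infeasibility penalty \rho.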
Chair: Michael Florian (Universite de Montreal)
1. Computer Representation of Networks for ITS Applications
Eric Le Saux (Universite de Montreal), Ismail Chabini
(Massachusetts Institute of Technology), Michael Florian,
Bernard Gendron (Universite de Montreal)
We first present a new format, called the Network File Format
(NFF), for the external representation of networks and their
associated data, such as origin-destination matrices and
functions. We give an overview of its syntax and demonstrate its flexibility in representing data from the various network models and methods required by ITS applications. Then, we present an Application Programming Interface (API) library that builds an internal representation of an NFF file. The API consists of a set of C++ libraries that perform various tasks, such as parsing NFF files and user-supplied functions, cross-referencing attributes, and building alternative network representations, such as the forward star structure. We show examples that
illustrate the effectiveness and ease of use of the above
tools and also outline future developments.
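For readers unfamiliar with the forward star representation mentioned above, the sketch below (hypothetical code, not the actual API) shows the basic idea: arcs are stored sorted by tail node, and a pointer array gives each node's first outgoing arc, so the outgoing arcs of any node occupy a contiguous block.

    #include <cstdio>
    #include <vector>

    // Minimal forward star network: arcs are sorted by tail node, and
    // first[i] is the index of node i's first outgoing arc.
    struct ForwardStarNet {
        int num_nodes;
        std::vector<int> first;   // size num_nodes + 1; first[num_nodes] == number of arcs
        std::vector<int> head;    // head node of each arc
        std::vector<double> cost; // cost of each arc

        // Scan the outgoing arcs of node i.
        void print_out_arcs(int i) const {
            for (int a = first[i]; a < first[i + 1]; ++a)
                std::printf("arc %d: %d -> %d, cost %g\n", a, i, head[a], cost[a]);
        }
    };

    int main() {
        // Tiny three-node example with arcs 0->1, 0->2, and 1->2.
        ForwardStarNet net{3, {0, 2, 3, 3}, {1, 2, 2}, {1.0, 4.0, 2.0}};
        net.print_out_arcs(0);
        return 0;
    }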
2. Parallel Solution Methods for Network Equilibrium Models
Ismail Chabini (Massachusetts Institute of Technology) and
Michael Florian (Universite de Montreal)
We present a library of C++ parallel tools for solving network equilibrium models. The library is based on PVM (Parallel Virtual Machine). We introduce a novel algorithm that computes shortest paths for networks with turn penalties. We describe various parallel implementations of different solution methods and demonstrate their efficiency in solving large-scale urban transportation networks. The performance of the parallel implementations is analyzed using the "speedup" and the "relative burden" measures. Various computing platforms are investigated.
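Two remarks may help the reader. The speedup measure is the usual ratio T_1 / T_p of the serial running time to the running time on p processors (the relative burden measure is defined in the authors' work). For turn penalties, one standard construction, sketched below in illustrative C++ (not necessarily the authors' novel algorithm), is to label arcs rather than nodes so that the penalty of turning from one arc onto the next can be charged on the transition.

    #include <functional>
    #include <limits>
    #include <queue>
    #include <utility>
    #include <vector>

    struct Arc { int tail, head; double cost; };

    // Dijkstra-style search with labels on arcs: dist[a] is the cheapest cost
    // of reaching the head of arc a having arrived via arc a.  Assumes
    // nonnegative arc costs and turn penalties.
    std::vector<double> shortest_to_arcs(const std::vector<Arc>& arcs, int origin,
                                         const std::vector<std::vector<double>>& turn_pen) {
        const double INF = std::numeric_limits<double>::infinity();
        const int m = static_cast<int>(arcs.size());
        std::vector<double> dist(m, INF);
        using Item = std::pair<double, int>;  // (cost, arc index)
        std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;

        for (int a = 0; a < m; ++a)           // seed with the arcs leaving the origin
            if (arcs[a].tail == origin) { dist[a] = arcs[a].cost; pq.push({dist[a], a}); }

        while (!pq.empty()) {
            auto [d, a] = pq.top(); pq.pop();
            if (d > dist[a]) continue;        // stale queue entry
            for (int b = 0; b < m; ++b) {     // a forward star index would avoid this O(m) scan
                if (arcs[b].tail != arcs[a].head) continue;
                double nd = d + turn_pen[a][b] + arcs[b].cost;
                if (nd < dist[b]) { dist[b] = nd; pq.push({nd, b}); }
            }
        }
        return dist;  // node distances follow by minimizing over the arcs entering each node
    }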
3. Parallel Solution Methods for Some Transportation Planning
Problems Formulated as Integer and Mixed-Integer Programs
Bernard Gendron, Teodor Gabriel Crainic, Michel Gendreau,
Jean-Yves Potvin, and Gilbert Laporte (Universite de Montreal)
We present an overview of our current research on parallel
algorithms for solving transportation planning problems
formulated as large-scale integer and mixed-integer programs.
We focus on strategic/tactical (network design) and operational (routing-scheduling) models, and show the impact of parallel computing on solution methodologies based on mathematical programming, enumerative approaches, and metaheuristics. We
present numerical results of implementations of several
parallel algorithms on various computing platforms.
Co-Chairs: William T. Scherer and Donald E. Brown (University of
Virginia)
1. Visualizing Airborne Navigation in an Electromagnetic Environment
T. Cronin (CECOM RDEC Intelligence and Electronic Warfare
Directorate)
Consider the problem of airborne navigation through a system
of hemispheres dispersed over the surface of the earth. The
hemispheres are isotropic models of electromagnetic energy propagated outward from ground-based locations. Some of the hemispheres represent desirable places to fly, whereas others must be avoided due to electronic interference or physical threat to the aircraft. Because of modeling error and power fluctuation, it is prudent to fly well within the confines of desirable hemispheres and, conversely, to keep a safe distance from hemispheres to be avoided.
When planning a flight path as a local function of the spatial
relationship between two hemispheres, there are three
interesting cases: avoiding two isolated hemispheres, flying
within two overlapping hemispheres, and flying within a
hemisphere overlapping or containing one to be avoided.
Development of a global flight path is a nontrivial operation
that requires mission-specific user interaction, since the
electromagnetic environment may consist of an unwieldy number
of hemispheres. A user may specify which hemispheres are
important by inspecting a translucent model of the environment
encompassing terrain, air ceiling, and other constraints.
"Nested" translucence permits viewing of electromagnetic
energy buried several levels below that represented by other
hemispheres. Concepts supporting the flight path planning
process are being developed on a Silicon Graphics
visualization workstation.
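To make the margin requirement concrete, a minimal sketch (hypothetical code, not the actual planning system) of testing whether a waypoint lies safely inside, or safely clear of, an isotropic hemisphere:

    #include <cmath>

    // Isotropic emitter modeled as a hemisphere of radius r centered at the
    // ground point (cx, cy, 0); units and margins are illustrative.
    struct Hemisphere { double cx, cy, r; };

    // Safely inside: at least `margin` below the dome surface.
    bool safely_inside(const Hemisphere& h, double x, double y, double z, double margin) {
        double d = std::sqrt((x - h.cx) * (x - h.cx) + (y - h.cy) * (y - h.cy) + z * z);
        return z >= 0.0 && d <= h.r - margin;
    }

    // Safely clear: at least `margin` outside the dome surface.
    bool safely_clear(const Hemisphere& h, double x, double y, double z, double margin) {
        double d = std::sqrt((x - h.cx) * (x - h.cx) + (y - h.cy) * (y - h.cy) + z * z);
        return d >= h.r + margin;
    }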
2. Simple Efficient Heuristics for Designing Large Scale Survivable
Telecommunications Networks
G. Anandalingam, Frances Lee (University of Pennsylvania), and
Lloyd W. Clarke (Georgia Institute of Technology)
We present the design of survivable telecommunications networks and new approaches for obtaining minimum-cost solutions. The large-scale problem is decomposed into access network and backbone network design problems. We show
that a simple linear programming approach will find optimal
solutions for most practical problems in telecommunications.
We also present heuristics based on statistical analysis for
the access network design problem. Using this method, we can obtain near-optimal solutions for 500-node problems in a matter of seconds on a SUN workstation. The method outperforms
well-known techniques for designing access networks in the
telecommunications literature.
We then present a heuristic based on bootstrapping feasible
(integer) solutions to linear programming relaxations of the
backbone network design problem. This method also obtains fast
solutions to large-scale problems. We then integrate the
decomposed network to make it survivable under node and/or
link failure. The Design System provides extremely fast
solutions to fairly large telecommunications network design
problems.
3. Simulated Annealing and Response Surface Methods for Surveillance
Route Planning and Site Selection
D. E. Brown (University of Virginia) and J.B. Schamburg (U.S.
Military Academy)
Many design and planning problems are characterized by a large
number of design variables and multiple response variables.
This is certainly the case with route planning and site
selection for military surveillance systems. The accuracy of
these systems depends on the geometry, velocities, and
directions of movement among all targets and
sensors. In this paper we describe a methodology to manage
the complexities of this problem. The purpose of our
methodology is twofold: to understand the relationships
between accuracy and employment factors, and to find optimal or near-optimal settings of the controllable employment factors.
In our methodology we use simulated annealing to find regions
of the design space over which to conduct more detailed
experiments. We then employ response surface methods with
graduating functions to analyze and model accuracy within
these selected regions. We test for the validity and
robustness of our models and then use them to make important
route planning and site selection decisions. We show that
this methodology produces significant results on actual
surveillance planning problems.
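For readers unfamiliar with response surface methods, the graduating functions are typically low-order polynomial approximations of the response fitted by least squares over the selected region; a common second-order form (illustrative notation, not necessarily the authors' exact model) is

    \hat{y} = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i=1}^{k} \beta_{ii} x_i^2 + \sum_{i<j} \beta_{ij} x_i x_j,

with the x_i denoting the controllable employment factors and \hat{y} the predicted accuracy.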
4. Simulated Annealing for Contiguity-Constrained Hierarchical
Clustering Problems
Christopher L. Huntley (University of Virginia)
A clustering problem is generally defined as the division of a
set of objects into a given number of mutually exclusive
subsets according to a specified clustering criterion.
Hierarchical clustering, then, extends this basic definition
to levels of partitions, where each cluster is a subset of a
cluster in the level above it and a superset of the clusters
below it.
We will explore the use of simulated annealing for a very
general class of hierarchical clustering problems with
applications to school districting, sales territory alignment,
and similar regional planning problems. In this formulation
each object represents a location on a map and is defined to
be adjacent to one or more neighboring locations. The
objective is then to divide the map into connected clusters of
locations (i.e., districts) according to the given criterion.
After a brief review of the relevant complexity results, two
versions of a simulated annealing algorithm will be proposed
for these types of problems. The first uses a penalty function
to drive the search away from infeasible solutions. The
second uses a projection function to map each infeasible solution to a nearby feasible solution.
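In generic simulated annealing terms (an illustrative sketch, not the exact formulation of the talk), the penalty-function variant minimizes an augmented cost

    E(P) = C(P) + \lambda\, V(P),

where C(P) is the clustering criterion for partition P, V(P) counts contiguity violations (for example, the number of disconnected clusters), and \lambda > 0 is the penalty weight; a candidate move with cost change \Delta E is accepted with probability \min\{1, e^{-\Delta E / T}\} at temperature T.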
Empirical results will be presented from a real-world example
drawn from the reorganization of the United States Postal
Service in 1992.
Chair: U. Narayan Bhat (Southern Methodist University)
1. The Backlog Process in Queues with Random Service Speed
Martin A. Wortman and Georgia-Ann Klutke (Texas A&M University)
This paper examines the backlog (total amount of work in the
system) for a single server queueing system that works in a
"random environment". Specifically, the service speed is
described by an exogenous (non-negative) "environment"
process. We characterize the time-dependent backlog via a
stochastic integral equation and use this equation to compute
stationary performance measures. It is shown that, in
general, the backlog in a queue with a random service speed is
greater than that in an equivalent queue with a constant
service speed. Some empirical results are presented that help
elucidate the principal result.
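As a rough illustration of the kind of equation involved (the paper's exact characterization may differ), if V(t) denotes the work arriving in [0, t] and R(s) the instantaneous service speed given by the environment process, a work-conserving backlog B(t) satisfies

    B(t) = B(0) + V(t) - \int_0^t R(s)\, \mathbf{1}\{B(s) > 0\}\, ds,

so that the server drains work at the random rate R(s) whenever the backlog is positive.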
2. Stability and Queuing-Time Analysis of a Reader-Writer Queue with
Writer Preference
L. C. Puryear (IBM Corporation) and V. G. Kulkarni (University of
North Carolina)
A reader-writer queue manages two classes of customers:
readers and writers. An unlimited number of readers can be processed in parallel, whereas writers are processed serially in First-In, First-Out (FIFO) order. Both classes arrive
according to a Poisson process. Reader and writer service
times are general iid random variables. There is infinite room
in the queue for waiting customers. The writer-preference
scheduling discipline gives writers non-preemptive priority
over readers. A customer arriving to an empty system begins
service immediately. Once writers begin service, they are
served exhaustively, i.e., until none remain in the system. The
first writer to arrive to a system serving readers must wait
until those readers complete service. Additional reader
arrivals queue up. This system is analyzed to produce a
stability condition and Laplace-Stieltjes Transforms (LSTs)
for the steady state queueing times of readers and writers. An
example and simulation results are also given.
3. Computational Analysis of a G/G/1 Queue with Vacations and
Exhaustive Service
Huan Li (The University of Miami) and Yixin Zhu (State University
of New York at Buffalo)
A class of G/G/1 queuing systems with exhaustive service and
Erlang vacations is studied. We develop a computationally
tractable algorithm to calculate the moments of the stationary
waiting time and sojourn time of the system. Numerical
examples show that the algorithm is easy to implement and very
powerful.
Chair: Betty L. Hickman (University of Nebraska at Omaha)
1. Flexible Form Estimation: A Comparison of Polynomial Regression
with Artificial Neural Networks
Robert Dorsey and Srimayi Sen (University of Mississippi)
The Artificial Neural Network (ANN) has recently seen increasing use in the business literature. It is a completely flexible-form mapping technique that enables researchers to
approximate unknown functional relationships to any degree of
accuracy. The ANN requires a large number of parameters to
achieve the approximation but to date has not been compared to
similarly parameterized approximation techniques. This paper
examines the ANN and compares it to polynomial regression
using the same number of parameters. A Monte Carlo comparison
is conducted on a number of test problems. In-sample, interpolation, and extrapolation forecasts are compared.
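To make the equal-parameter comparison concrete (the counts here are illustrative, not necessarily those used in the paper): a single-hidden-layer network with k inputs, h hidden units, and one output has

    h(k + 1) + (h + 1)

free parameters (weights plus biases), while a complete polynomial of degree d in k regressors has \binom{k + d}{d} coefficients; matching these counts puts the two approximators on an equal footing.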
2. An Artificial Neural Network Model for Freeway Entrance Ramp
Metering Control
Chien Hung Wei and Kun Yu Wu (National Cheng Kung University)
Freeway traffic has grown significantly over the years, particularly in urban areas, and efficient freeway traffic control strategies have been studied to relieve congestion on freeways.
Entrance ramp metering control is one of the most frequently suggested methods. The basic idea is to keep roadway density within a critical range so as to maximize traffic throughput. Most existing ramp metering control systems operate on predetermined schedules.
With such systems it is hard to maintain stable traffic conditions, since traffic changes stochastically over time and space. An automated control system that can detect the current traffic situation and respond with the most suitable control strategy is therefore highly desirable, and it must operate in real time. The core of such a system is the computation of appropriate metering rates, which must account for several variables and their interactions in both time and space. The artificial neural network is a strong candidate for this task: its fast computation, fault tolerance, and memory capacity make such an application feasible. In particular, the learning process distinguishes artificial neural networks from conventional approaches such as mathematical programming or expert systems. Our neural network model is equipped with a powerful self-retraining capability: computed metering rates are evaluated against actual traffic conditions and adjusted on-line to enhance performance. The neural network model is compared with the popular FREQ10PC model on several aspects using a macroscopic simulation model.
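The density-control principle can be stated compactly in standard traffic-flow notation (not specific to this paper): with flow q, density k, and space-mean speed v related by

    q = k\, v,

throughput q(k) increases with density up to a critical density k_c and falls beyond it, so the metering rate is chosen to keep the downstream mainline density near, but below, k_c.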
3. Survey of Applications of Hopfield Neural Network to Scheduling
Problem
Yumin He and Milton L. Smith (Texas Tech University)
Successful applications of Hopfield-type neural networks to scheduling problems, one kind of optimization problem, are found in the literature. A survey of current research and future trends has been conducted. The paper presents the concept of the Hopfield neural network and two basic models. It summarizes the applications of the Hopfield neural network to NP-complete and NP-hard scheduling problems. The paper concludes with practical application suggestions and perspectives for future research.
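For reference, a discrete Hopfield network with states s_i \in \{-1, +1\} and symmetric weights w_{ij} = w_{ji} minimizes the standard energy function

    E = -\tfrac{1}{2} \sum_i \sum_{j \ne i} w_{ij} s_i s_j - \sum_i \theta_i s_i;

scheduling applications encode the objective and constraints of the scheduling problem in the weights and biases so that low-energy states correspond to good schedules.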
Chair: Bjarni Kristjansson (Maximal Software)
1. New Modeling and Optimization Tools for Windows.
Sanjay Saigel (Compass Modeling Solutions)
With the release of AMPL Plus, the power of AMPL is now available to users of graphical desktop environments such as Microsoft Windows. We will demonstrate the latest
enhancements to AMPL Plus and, time permitting, other new
desktop optimization software available from Compass.
2. How the New Graphical User Interface of MPL for Windows 95 Can Be
Used To Increase Productivity in Modeling.
Bjarni Kristjansson (Maximal Software)
In this demonstration, we will introduce a new version of the
MPL Modeling System that takes advantage of open systems and
multitasking features of Windows to create a complete model
development environment within the Graphical User Interface.
The tight integration of MPL and supported Windows solvers,
such as CPLEX, gives the user seamless access to the solver
directly from the MPL environment. We will also discuss the
current and future impact of state-of-the-art graphical user
interfaces upon formulating optimization models.