MD: Monday, January 8, 3:30-5:00

MD1 Tutorial

Classical Queueing Theory

Robert B. Cooper (Florida Atlantic University)

This tutorial gives a survey of classical queueing theory, covering material developed from the time of Erlang (early 1900s) up to the present. We begin with the now-standard birth-and-death arguments of Erlang, and quickly derive the Erlang formulas. We discuss the properties of the two major Erlang models (the Erlang loss model and delay model) and their applications in teletraffic engineering, and the finite-source model and its applications in computer science. This is followed by multidimensional birth-and-death models, and applications to circuit-switched (voice) networks and message-switched (data) networks. We then introduce the concept of the imbedded Markov chain, and we focus more narrowly on the M/G/1 queue and its variants, including the finite-buffer model, vacation models, and polling models.
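
For orientation, the two Erlang formulas referred to above can be stated compactly. With c servers offered Poisson traffic of a erlangs, the Erlang loss (B) formula gives the blocking probability, and the delay (C) formula, the probability that an arriving customer must wait, follows from it (the latter requires a < c):

$$
B(c,a) \;=\; \frac{a^{c}/c!}{\sum_{k=0}^{c} a^{k}/k!},
\qquad
C(c,a) \;=\; \frac{c\,B(c,a)}{c - a\,\bigl(1 - B(c,a)\bigr)} .
$$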

Along the way we discuss fundamental concepts and results such as Little's theorem, insensitivity, PASTA, the arrival theorem, the output theorem (Burke's theorem), the Pollaczek-Khintchine formula, and the differing statistical viewpoints of the arriving customer, the departing customer, and the outside observer.
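
Two of these results are easy to state for reference: Little's theorem relates the time-average number in system L, the arrival rate lambda, and the mean time in system W, while the Pollaczek-Khintchine formula gives the mean wait in queue for the M/G/1 queue with service time S and utilization rho = lambda E[S] < 1:

$$
L = \lambda W,
\qquad
W_q = \frac{\lambda\,E[S^{2}]}{2\,(1-\rho)} .
$$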

This tutorial is intended to provide an overview of the whole field, as opposed to an emphasis on current research. The presentation will be informal and, hopefully, responsive to questions from the audience. It is aimed at an audience that is somewhat knowledgeable but not necessarily expert, and that would benefit from a unifying, big-picture overview.

Return to Program Overview


MD2 Parallel Computing for Large-Scale Optimization

Chair: Stavros Zenios (University of Cyprus)

1. Splitting Methods and Affine Variational Inequalities, with a Parallel Application to Optimal Control.

Jonathan Eckstein (Rutgers University) and Michael C. Ferris (University of Wisconsin)

We start with a simple exposition of splitting methods for finding roots of set-valued maximal monotone operators. We then apply these methods to monotone affine variational inequality problems, yielding not only the familiar gradient projection and matrix splitting algorithms, but also some lesser-known methods that have much more robust convergence properties. Next, we specialize one of these methods to discretized optimal control problems formulated as extended linear-quadratic programs in the manner advocated by Rockafellar and Wets. The result is a highly parallel algorithm, which we implement and test on the Connection Machine CM-5 computer family. On a set of test problems derived from the MCPLIB, the parallel splitting implementation far outperforms standard serial solvers. We also discuss heuristics that improve the "tail convergence" properties of the splitting algorithm.
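
As a point of reference, the simplest member of the family mentioned above is the basic projection (gradient projection) method for a monotone affine variational inequality: find x in C with (Mx+q)^T (y-x) >= 0 for all y in C. The following minimal sketch, over a box-shaped C and with hypothetical names, is an illustration only and not the authors' algorithm.

    import numpy as np

    def project_box(x, lo, hi):
        """Euclidean projection onto the box [lo, hi]."""
        return np.clip(x, lo, hi)

    def affine_vi_projection(M, q, lo, hi, step=0.1, iters=1000, tol=1e-8):
        """Fixed-step projection iteration for the affine variational
        inequality over a box: x_{k+1} = P_C(x_k - step * (M x_k + q)).
        Illustration only; convergence generally needs strong monotonicity
        of M and a sufficiently small step."""
        x = project_box(np.zeros_like(q, dtype=float), lo, hi)
        for _ in range(iters):
            x_new = project_box(x - step * (M @ x + q), lo, hi)
            if np.linalg.norm(x_new - x) < tol:
                return x_new
            x = x_new
        return x

This fixed-step iteration converges only under restrictive conditions on M and the step size; the more robustly convergent splitting methods mentioned in the abstract are aimed at exactly that limitation.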

2. A Parallel Lagrangean System for Large Scale MIPs

Shunping Zhu and Kurt Spielberg (IBM Corporation)

We present results of an NSF project: a Parallel Lagrangean System (PARLAS) for large scale mixed integer programming problems (MIP). The system is designed to provide users with capabilities for exploring problem structures intuitively and solving problems interactively. The users decide upon the manner of decomposing a complicated problem into subproblems. With information provided by the users, the system automatically generates subproblems that can be solved relatively easily by existing solvers.

In the system, the solution algorithm is based on a "2-phase hybrid method" which can solve Lagrangean duals efficiently. The method combines subgradient optimization (SO) and constraint generation (CG). SO may quickly yield a good approximation to the dual optimum, but it may fail to converge to it and lacks a convergence criterion. It does, however, provide valuable information for a CG method, since a number of Lagrangean feasible solutions are generated from the Lagrangean subproblems and/or heuristics; this alleviates the slow start often exhibited by CG methods. The master problem provides a value that converges to the Lagrangean dual value, and we use it as the target value in the initial (subgradient) phase of our 2-phase hybrid method; a more precise target value yields faster convergence. If this phase fails to generate new cuts, we switch to a pure CG phase.
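
As a rough illustration of the SO phase, a generic Polyak-style subgradient update driven by a target value might look as follows; solve_subproblem and the step rule are hypothetical stand-ins, not PARLAS internals.

    import numpy as np

    def subgradient_phase(solve_subproblem, target, lam0, iters=200):
        """Generic Polyak-style subgradient ascent on a Lagrangean dual, in
        the spirit of the SO phase described above.  solve_subproblem(lam)
        is a hypothetical callback returning (dual_value, subgradient) at
        the multipliers lam, and target plays the role of the master-problem
        value used to set the step size."""
        lam = np.asarray(lam0, dtype=float)
        best = -np.inf
        for _ in range(iters):
            value, g = solve_subproblem(lam)
            g = np.asarray(g, dtype=float)
            best = max(best, value)
            norm2 = float(g @ g)
            if norm2 == 0.0:                       # zero subgradient: dual optimum reached
                break
            step = (target - value) / norm2        # Polyak step toward the target value
            lam = np.maximum(0.0, lam + step * g)  # multipliers for <= constraints stay nonnegative
        return lam, best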

We also present computational results based on both single-processor and multi-processor machines (SP-1) for selected applications.

3. A Scalable Parallel Interior Point Algorithm for Stochastic Linear Programming and Robust Optimization

Stavros A. Zenios (University of Cyprus) and Dafeng Yang (University of Pennsylvania)

We present a computationally efficient implementation of an interior point algorithm for solving large-scale problems arising in stochastic linear programming and robust optimization. A matrix factorization procedure is employed that exploits the structure of the constraint matrix, and it is implemented on parallel computers. The implementation is perfectly scalable. Extensive computational results are reported for a library of standard test problems from stochastic linear programming, and also for robust optimization formulations. The results show that the codes are efficient and stable for problems with thousands of scenarios. Test problems with 130 thousand scenarios, and a deterministic equivalent linear programming formulation with 2.6 million constraints and 18.2 million variables, are solved successfully.
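
Although the abstract does not spell out the factorization, the structure being exploited is presumably the standard dual block-angular form of the two-stage deterministic equivalent, in which the scenario blocks W_i are coupled only through the first-stage columns:

$$
A \;=\;
\begin{pmatrix}
A_0 &     &        &     \\
T_1 & W_1 &        &     \\
\vdots &  & \ddots &     \\
T_N &     &        & W_N
\end{pmatrix}.
$$

With this structure, the matrix $A \Theta A^{\top}$ formed at each interior point iteration is a block-diagonal matrix plus a correction of first-stage dimension; the scenario blocks can be factored independently and in parallel, with only a comparatively small dense system solved for the coupling, which is what makes a scalable implementation plausible.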

Return to Program Overview


MD3 Tutorial

2LP: Linear Programming and Logic Programming

Carol Tretkoff and Ken McAloon (Brooklyn College, CUNY)

The 2LP programming language is dual to modeling languages: it starts with programming rather than modeling. This approach makes software engineering a part of model management, allows the user to express disjunctive constraints without resorting to 0-1 variables, makes interfacing with C seamless, and facilitates the integration of AI and OR techniques.

Return to Program Overview


MD4 Heuristics

Chair: Rema Padman (Carnegie Mellon University)

1. A New Tabu Search Heuristic for Set Covering Problems.

Erik Rolland (University of California)

In this paper we investigate the use of tabu search for solving set covering problems, examine how the technique performs on such problems, and present a comparison with a well-known exact solution method.
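
For readers unfamiliar with the technique, a toy tabu search for set covering might look like the following sketch; it is an illustration of the general method only, not the heuristic of the talk.

    def tabu_search_scp(costs, covers, iters=500, tenure=7):
        """Toy tabu search for set covering.  costs[j] is the cost of
        column j; covers[j] is the set of rows column j covers.  A move
        flips one column in or out of the solution; uncovered rows carry a
        large penalty, and a flipped column stays tabu for `tenure`
        iterations unless it improves on the best solution (aspiration)."""
        n = len(costs)
        rows = set().union(*covers)
        penalty = sum(costs) + 1.0

        def score(sol):
            covered = set().union(*(covers[j] for j in sol)) if sol else set()
            return sum(costs[j] for j in sol) + penalty * len(rows - covered)

        sol = set(range(n))                      # start from the all-columns cover
        best, best_score = set(sol), score(sol)
        tabu = {}                                # column -> first iteration it may move again
        for it in range(iters):
            candidates = [j for j in range(n)
                          if tabu.get(j, 0) <= it or score(sol ^ {j}) < best_score]
            if not candidates:                   # everything tabu, nothing aspirates
                candidates = list(range(n))
            j = min(candidates, key=lambda j: score(sol ^ {j}))
            sol = sol ^ {j}                      # flip column j in or out
            tabu[j] = it + tenure
            if score(sol) < best_score:
                best, best_score = set(sol), score(sol)
        return best, best_score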

2. Principles of Cybernetic Optimization: Accelerating Convergence of Simulated Annealing Through Parallel Processing and Probabilistic Feedback Control.

Mark Fleischer (Old Dominion University)

The convergence of the simulated annealing (SA) algorithm is accelerated by a probabilistic feedback control (PFC) scheme. This scheme uses two or more parallel processors that solve the same or related combinatorial optimization problems and are coupled by a Probabilistic Measure of Quality (PMQ). The PMQ is used to generate an error signal for use in feedback control. Control over the search process is achieved by using the error signal to modulate the temperature parameter. Other aspects of control theory, such as the system gain and its effects on system performance, are described. Theoretical and experimental results show that such a scheme increases the steady-state probability of the globally optimal solutions.
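
A minimal sketch of the idea follows, with two sequential chains standing in for parallel processors and the gap between their objective values used as a crude, hypothetical stand-in for the PMQ (the actual measure is the authors').

    import math, random

    def feedback_controlled_sa(neighbor, objective, x1, x2, temp=10.0,
                               gain=0.05, iters=5000):
        """Simulated annealing with a feedback-modulated temperature: two
        chains minimize the same objective; the gap between their current
        values serves as an error signal, and the cooling rate is scaled
        by `gain` so the temperature drops faster when the chains agree."""
        def step(x, t):
            y = neighbor(x)
            dx = objective(y) - objective(x)
            return y if dx <= 0 or random.random() < math.exp(-dx / t) else x

        best = min(x1, x2, key=objective)
        for _ in range(iters):
            x1, x2 = step(x1, temp), step(x2, temp)
            error = abs(objective(x1) - objective(x2))             # error signal from the two chains
            temp = max(1e-9, temp * (1.0 - gain / (1.0 + error)))  # feedback-controlled cooling
            best = min(best, x1, x2, key=objective)
        return best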

3. A Genetic Programming Approach for Heuristic Selection in Constrained Project Scheduling

Rema Padman and Stephen Roehrig (Carnegie Mellon University)

Return to Program Overview


MD5 Panel Discussion: Measuring and Reporting Parallel Performance

Chair: Ismail Chabini (Massachusetts Institute of Technology)

Panelists: John Gustafson (Ames Research Center), Michael Florian, Ismail Chabini (Massachusetts Institute of Technology), Richard Barr (Southern Methodist University)

Accompanying the increasing use of parallel computing technology is a corresponding growth of research into the development and testing of parallel algorithms and applications. This panel will address the inherent difficulties in measuring and reporting on parallel performance and explore various approaches to their resolution.
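
Some of the quantities at issue are standard. For a problem solved in time $T_1$ on one processor and $T_p$ on $p$ processors, speedup and efficiency are

$$
S_p = \frac{T_1}{T_p}, \qquad E_p = \frac{S_p}{p},
$$

while Amdahl's law bounds $S_p$ by $1/\bigl(f + (1-f)/p\bigr)$ for a serial fraction $f$ of the work, and Gustafson's scaled-speedup argument gives $S_p = p - f'(p-1)$ when the problem grows with $p$ (with $f'$ the serial fraction of the parallel run). A recurring reporting question, for example, is whether $T_1$ should be the time of the best available serial code or of the parallel code run on a single processor.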

Return to Program Overview


MD6 Software Demonstrations II

Developing a GUI Math Programming Application for Non-OR Users.

Rick Douthat (Resource Optimization, Inc.)

ROI XPRESS is a tool for rapid development and deployment of PC-based, networkable, optimization applications. The FoxPro Graphical User Interface integrated with the XPRESS-MP modeling and optimization system makes ROI XPRESS easy to learn and use while maintaining the power and speed necessary to handle large and complex optimization problems.

Windows MPSIII

Joe Creegan (Ketron Management Science)

The various interfaces into this suite of model management and optimization systems are discussed and the interactive Window control environment is demonstrated.

Return to Program Overview


Membership Reception, 7:00 pm


Return to Program Overview