TD: Tuesday, January 9, 3:30-5:00

TD1 Survey

A Polynomial Time Network Simplex Algorithm

James Orlin (Massachusetts Institute of Technology)

Developing a polynomial time primal network simplex algorithm for the minimum cost flow problem has been a long-standing open problem. To date, the best known running time for a primal network simplex algorithm was subexponential, but not polynomial. Here we present the first polynomial time network simplex algorithm.

Previously, there had been polynomial time primal simplex algorithms for the shortest path problem, the assignment problem, and the maximum flow problem. Moreover, there had been polynomial time dual simplex algorithms for the minimum cost flow problem. There are also methods that solve the minimum cost flow problem in a polynomial number of pivots if one permits the objective function value to worsen at some of the iterations.

In this talk, we review some of the history of this problem and present a new algorithm referred to as the premultiplier method. The premultiplier method is itself a pseudopolynomial primal network simplex algorithm, but it can be transformed into a polynomial time algorithm using well-known scaling techniques. When the scaling approach is applied appropriately, the algorithm remains a primal network simplex algorithm. The number of pivots is O(nm log nC) on a network with n nodes, m arcs, and integral cost coefficients whose absolute values are bounded above by C. Moreover, the algorithm can be made strongly polynomial using an approach developed by Tardos more than a decade ago.

The premultiplier method is unusual in that it does not maintain a vector of simplex multipliers. Rather, the reduced costs of tree arcs are permitted to be non-zero: when the arcs of the tree are oriented towards the root node, the reduced costs of tree arcs are non-positive.
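
This invariant is easy to state in code. Below is a minimal Python sketch (our own illustration, not code from the talk) that checks the premultiplier property on a toy tree; the arc encoding and the reduced-cost convention c(i,j) - pi(i) + pi(j) are assumptions:

    def is_premultiplier(tree_arcs_toward_root, cost, pi):
        """Check the defining premultiplier property: every tree arc,
        oriented toward the root, has non-positive reduced cost."""
        return all(cost[(i, j)] - pi[i] + pi[j] <= 0
                   for (i, j) in tree_arcs_toward_root)

    # Toy example: root = 0; tree arcs (1, 0) and (2, 1) point toward the root.
    cost = {(1, 0): 4, (2, 1): 3}
    pi = {0: 0, 1: 5, 2: 9}   # reduced costs: 4 - 5 + 0 = -1, 3 - 9 + 5 = -1
    print(is_premultiplier([(1, 0), (2, 1)], cost, pi))   # True

Ordinary simplex multipliers would force both reduced costs to zero; premultipliers maintain only the sign condition.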

Some preliminary testing of the polynomial time primal simplex method suggests that it performs well in practice.



TD2 Modeling and Computing Issues in Intelligent Transportation Systems

Chair: Kendall E. Nygard (North Dakota State University)

1. Mathematical Modeling and Computing Issues in Advanced Traveler Information Systems

Kendall E. Nygard (North Dakota State University)

The Advanced Traveler Information Systems (ATIS) area within Intelligent Transportation Systems (ITS) involves widely dispersed travelers, many with portable computing devices, networked with wireless communication technologies. Although there are a number of enabling technologies, they must be integrated to produce large-scale, end-to-end solutions with acceptable real-time performance. Successful deployment of an ATIS will require solutions to many problems that fall at the interface between operations research and computer science, including problems in distributed queueing, data management, telecommunications, visualization, routing, and scheduling. In this presentation we discuss these issues and offer solution approaches and procedures.

2. Genesis and Advanced Traveler Information Systems (ATIS): Killer Applications for Mobile Computing?

Shashi Shekhar and Duen-Ren Liu (University of Minnesota)

Genesis and ATIS are being developed under the umbrella of Intelligent Vehicle Highway Systems (also known as Intelligent Transportation Systems) to facilitate various kinds of travel, including daily commuting to work via private/public transportation and visiting new places and points of attraction.

Travelers are naturally mobile, and the most effective way to aid them is via mobile computing, which is being explored in the Genesis project at Minnesota. Travelers can use personal communication devices, including pagers and portable computers (e.g., the Apple Newton), to avail themselves of the ATIS services provided by Genesis. We present an overview of the goals and preliminary design of Genesis.

We believe that ATIS provides a very important commercial application of mobile computing and can bring mobile computing to mass markets. This paper focuses on describing the application domain rather than evaluating candidate solutions.

3. Traffic Prediction and Control Via Semantic Control

S. Massoud Amin, Ervin Y. Rodin, Ann-Pheng Liu, Katherine Rink, Travis Cusick (Washington University), Asdrubal Garcia-Ortiz and John R. Wootton (Electronics and Space Corp.)

In this report we consider the part of our work that concerns urban traffic management via a hybrid prediction/routing/control system. In this approach, we utilize the Semantic Control paradigm, which consists of the following modules: an Identifier, a Goal Selector, and an Adapter. The Identifier performs "system identification", i.e., it determines road network congestion levels and performs incident detection and localization. The Goal Selector handles traffic routing for maximum use of available road network capacity; it consists of two sub-modules, one for traffic management on signaled streets and another for (advisory) highway traffic control. The Adapter copes with changes and provides micro-level driver support for shortest-path routing and steering control (in-vehicle driver assistance). The Identifier comprises a stochastic prediction module, rules for incident detection, and a radial basis function neural network-based estimator. The Goal Selector consists of shortest-path algorithms and a discrete-event dynamical-system simulation of signaled intersections. The control laws in the Adapter are developed for two situations: 1) vehicle control within a platoon, and 2) neurocontrol of a single vehicle. In this report we focus on the first two modules, the Identifier and the Goal Selector.

A model of the highway system, based on historical traffic data provided by the Missouri Highway and Transportation Department (MoHTD), is currently being developed. The algorithms are being evaluated using data from a portion of the highway system in the St. Louis metropolitan area.
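
The abstract does not specify the shortest-path algorithms inside the Goal Selector; as a generic stand-in, a textbook Dijkstra routine over a toy road network (the dict-of-dicts encoding and travel times are our assumptions) might look like this:

    import heapq

    def shortest_path_costs(graph, source):
        """Dijkstra's algorithm on {node: {neighbor: travel_time}};
        returns the least travel time from source to every reachable node."""
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                      # stale queue entry
            for v, w in graph[u].items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    # Toy road network with travel times in minutes.
    road_net = {"A": {"B": 4, "C": 2}, "B": {"D": 5},
                "C": {"B": 1, "D": 8}, "D": {}}
    print(shortest_path_costs(road_net, "A"))
    # {'A': 0.0, 'B': 3.0, 'C': 2.0, 'D': 8.0}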

4. Neural Network Data Fusion Models for Arterial Travel Time Estimation

Peter C. Nelson, Elliott A. Torres and Joseph Raj (University of Illinois at Chicago)

The ability of Advanced Traveler Information Systems (ATIS) to predict traffic flow patterns, estimate link travel times, and detect incident occurrences depends on how the traffic information is collected and "fused." These systems must be able to accurately interpret all the available data and provide useful information for the ATIS user. Artificial neural networks appear to be the most suitable innovative technology for solving the data fusion problem. In addition, while this problem is encountered primarily in ATIS applications, it is also fundamental to operations in Advanced Traffic Management Systems (ATMS).

Data fusion is the process of collecting, organizing, and integrating raw and processed data from multiple sources and producing some meaningful inference from these data. Data collected from the various sources are screened for reasonableness and then fused. The network-wide traffic information gathered from several sources may have a variety of syntax and semantics. Moreover, the data collected from various sources are often incomplete and inconsistent. Traditional probabilistic or regression models cannot handle such diverse raw data.

This paper will present results obtained from the development of a neural network system that processes available traffic data to produce synthesized traffic information that can be integrated into ATIS. The sub-network selected for training is a major arterial street that includes signalized intersections operating under semi-actuated signal control and intersections instrumented with loop detectors. The neural network models under consideration include backpropagation, counterpropagation, and recurrent backpropagation. The training data consist of flow and occupancy values from loop detectors as inputs and actual travel times as the target outputs. Training and evaluation data are filtered with fuzzy logic to improve the generalization of the network.
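
The abstract does not give the network architectures in detail; purely as an illustration of the backpropagation variant, the following NumPy sketch fits a one-hidden-layer network mapping normalized (flow, occupancy) pairs to a travel-time estimate on synthetic data (all sizes, rates, and the data itself are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(200, 2))         # normalized flow, occupancy
    y = 0.5 + 2.0 * X[:, :1] + 3.0 * X[:, 1:] ** 2   # synthetic travel times

    W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
    lr = 0.1
    for epoch in range(2000):
        H = np.tanh(X @ W1 + b1)     # hidden layer
        pred = H @ W2 + b2           # linear output unit (travel time)
        err = pred - y
        # Backpropagate mean-squared-error gradients.
        gW2 = H.T @ err / len(X); gb2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H ** 2)
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

    print("final MSE:", float((err ** 2).mean()))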



TD3 Discrete Optimization and Heuristic Search: Applications and Developments

Chair: Rex K. Kincaid (The College of William and Mary)

1. Progressive Hedging for Mixed Integer Programming

David L. Woodruff (University of California at Davis) and Arne Lokketangen (Molde College, Molde, Norway)

We begin by describing an application to production planning to motivate stochastic multi-stage optimization. We then briefly review the progressive hedging (PH) algorithm proposed by Rockafellar and Wets for stochastic programming. When the algorithm is applied to integer problems, a number of issues arise. Some of these issues are discussed and computational results to date are presented.
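
For readers unfamiliar with PH, the basic iteration is easy to sketch. The toy below (our own sketch, and continuous rather than integer, which is precisely where the issues the authors discuss arise) hedges two demand scenarios for the one-dimensional problem min E[(x - d)^2]; the penalty parameter rho and the closed-form subproblem are assumptions of the example:

    import numpy as np

    scenarios = [(0.5, 3.0), (0.5, 7.0)]   # (probability p_s, demand d_s)
    rho = 1.0
    w = np.zeros(len(scenarios))           # scenario multipliers
    xbar = 0.0                             # consensus (nonanticipative) value
    for _ in range(50):
        xs = []
        for s, (p, d) in enumerate(scenarios):
            # Subproblem: min (x - d)^2 + w_s*x + (rho/2)(x - xbar)^2,
            # solved in closed form since it is a 1-D quadratic.
            xs.append((2 * d - w[s] + rho * xbar) / (2 + rho))
        xbar = sum(p * x for (p, _), x in zip(scenarios, xs))
        for s in range(len(scenarios)):
            w[s] += rho * (xs[s] - xbar)   # price out disagreement
    print("hedged solution:", round(xbar, 4))   # tends to E[d] = 5.0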

2. A Guideline on Preparing, Performing and Reporting Numerical Experiments --- Where We Are and Where We Go

Stefan Voss and Lutz Sondergeld (Technische Univ. Braunschweig)

Scientific research is often verified by numerical experiments, and such experimental testing serves several main goals. However, the largely missing generalized standard for dealing with numerical experiments often entails a loss of usability and exploitability of scientific work.

Thus there is an urgent need for a critical analysis of the 'state of the art' of experimental testing today and for drawing the necessary conclusions for future work. More specifically, there is a need for a scientific foundation for empirical research, including the provision of numerical experiments: a guideline on how to prepare, perform, and report on numerical experiments.

The focus of this paper is to present our current status in developing such a guideline. As a testbed for verification we provide an application within the field of combinatorial optimization, where the specific problem under consideration is the well-known p-median problem. The methods used, for reasons of comparability, belong to the class of recent metastrategies, involving various tabu search implementations as well as the search paradigm known as the ant system.
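
As a point of reference for the testbed (a generic sketch, not the authors' code), the p-median objective simply charges each client the distance to its nearest open median:

    # dist[i][j]: distance from client i to candidate site j;
    # medians: the p open sites. Lower cost is better.
    def p_median_cost(dist, medians):
        return sum(min(row[j] for j in medians) for row in dist)

    dist = [[0, 2, 9],
            [2, 0, 6],
            [9, 6, 0]]
    print(p_median_cost(dist, medians={0, 2}))   # 0 + 2 + 0 = 2

A tabu search or ant system implementation then explores which p of the candidate sites to open against exactly this objective, which makes the problem a convenient, well-understood vehicle for comparing experimental methodologies.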

3. A Branch-and-Bound Algorithm for Partitioning k-ary n-cubes

Weizhen Mao and David M. Nicol (The College of William and Mary)

Many parallel processing networks can be viewed as graphs called k-ary n-cubes, whose special cases include rings, hypercubes, and tori. The graphs are usually weighted, with node weights representing computation volumes and edge weights representing communication volumes. Define the load of a subgraph to be the sum of the weights of its nodes and its external edges (those with only one endpoint in the subgraph). If a graph is partitioned into P subgraphs, then the bottleneck cost of the partition is the maximum load among the P subgraphs. The partitioning problem is to find a partition that minimizes the bottleneck cost. Further, a rectilinear partition of a k-ary n-cube is one in which the separating cuts are all hyperplanes (vertical and horizontal lines for a mesh with wrap-around edges).
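
In code, the bottleneck cost of a given partition is straightforward to evaluate (a sketch under our own encoding assumptions):

    # load(block) = sum of its node weights + weights of its external edges.
    def bottleneck_cost(node_w, edge_w, part):
        """node_w: {node: weight}; edge_w: {(u, v): weight}, undirected;
        part: {node: block id}. Returns the maximum block load."""
        load = {}
        for v, w in node_w.items():
            load[part[v]] = load.get(part[v], 0) + w
        for (u, v), w in edge_w.items():
            if part[u] != part[v]:      # external edge: charged to both blocks
                load[part[u]] += w
                load[part[v]] += w
        return max(load.values())

    # 4-node ring (a 4-ary 1-cube) split into two blocks of two nodes each.
    node_w = {0: 3, 1: 1, 2: 2, 3: 2}
    edge_w = {(0, 1): 1, (1, 2): 2, (2, 3): 1, (3, 0): 2}
    print(bottleneck_cost(node_w, edge_w, {0: 0, 1: 0, 2: 1, 3: 1}))   # 8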

In this presentation, we will first give a characterization of the subgraph (in a k-ary n-cube) with a given number of nodes and the maximum edge count, and a method of computing that maximum edge count. Then we will apply these results within a branch-and-bound algorithm to determine near-optimal rectilinear partitions of a k-ary n-cube. Specifically, the theoretical results help define a lower bounding function, which lies at the heart of any branch-and-bound algorithm.

4. Actuator Placement for Active Sound and Vibration Control of Cylinders

Rex K. Kincaid (The College of William and Mary)

Active structural acoustic control is a method in which the control inputs (used to reduce interior noise) are applied directly to a vibrating structural acoustic system. The control concept modelled in this work is the application of in-plane force inputs to piezoceramic patches bonded to the wall of a vibrating cylinder. The cylinder is excited by an exterior noise source (an acoustic monopole) located near the outside of the cylinder wall. The goal is to determine the locations and force inputs for the piezoelectric actuators so that (1) the interior noise is effectively damped; (2) the level of vibration of the cylinder shell is not increased; and (3) the power requirements needed to drive the actuators are not excessive.

To model objective (1), a p by n (complex) transfer matrix H is calculated, in which each entry gives the contribution of one of the n potential actuator sites to noise reduction at one of the p error microphone locations when a unit force input is applied at that site. One optimization problem, then, is to determine the best set of k out of n actuator sites. The selection of a good set of k actuator sites can be done with a tabu search heuristic. Our preliminary experiments confirm that the tabu search heuristic is able to uncover better solutions than those selected on engineering judgement alone. In addition, the high quality solutions generated by tabu search, when minimizing interior noise, do not further excite the cylinder shell. Thus, we are able to meet objective (2) without imposing an additional constraint or forming a multi-objective performance measure.
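
A minimal rendition of such a tabu search over k-subsets follows (our own sketch; the toy additive objective stands in for the actual noise-reduction measure computed from H):

    import random

    def tabu_select(n, k, objective, iters=200, tenure=7, seed=0):
        """Swap-based tabu search for choosing k of n sites (minimization)."""
        rng = random.Random(seed)
        current = set(rng.sample(range(n), k))
        best, best_val = set(current), objective(current)
        tabu = {}                    # move -> last iteration it stays tabu
        for it in range(iters):
            candidates = []
            for out_site in current:
                for in_site in set(range(n)) - current:
                    trial = (current - {out_site}) | {in_site}
                    val = objective(trial)
                    # Aspiration: a tabu move is allowed if it beats the best.
                    if tabu.get((out_site, in_site), -1) < it or val < best_val:
                        candidates.append((val, (out_site, in_site), trial))
            if not candidates:
                break
            val, (out_site, in_site), trial = min(candidates)
            current = trial
            tabu[(in_site, out_site)] = it + tenure   # forbid the reverse swap
            if val < best_val:
                best, best_val = set(trial), val
        return best, best_val

    # Toy stand-in objective: random per-site scores; lower total is better.
    rng = random.Random(1)
    score = [rng.random() for _ in range(12)]
    sites, val = tabu_select(12, 3, lambda s: sum(score[i] for i in s))
    print(sorted(sites), round(val, 3))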



TD4 Computational Experience with Interior Point Methods for LP and Linear Complementarity

Chair: David Shanno (Rutgers University)

1. Industrial-Strength Parallel Linear Programming

Irvin Lustig (CPLEX Optimization, Inc.)

This paper describes the parallelization of an "industrial strength" linear programming package. Our parallel version of the CPLEX barrier code, running on a Silicon Graphics Power Challenge shared-memory multiprocessor, provides dramatic performance improvements over sequential methods on a wide range of practical, realistic linear programming problems. The resulting software/hardware combination can provide sustained performance of as much as 2 Gflops.

2. LPSOL: a MATLAB-Based Interior-Point Code

Yin Zhang (University of Maryland, Baltimore County)

The talk documents algorithmic development and computational experience with LPSOL.

3. An Infeasible-Interior-Point Method for Linear Complementarity

David Shanno and Evangelia Simantiracki (Rutgers University)

The paper discusses a globally convergent infeasible interior-point method for linear complementarity problems. Convergence is shown under very loose conditions, which turn out to be necessary in practice. The method has proven very efficient, and extensive numerical results will be given.
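
The abstract does not detail the algorithm, but a generic infeasible interior-point iteration for the monotone LCP (find z, w >= 0 with w = Mz + q and z'w = 0) can be sketched as follows; the centering parameter, step rule, and tolerances below are textbook-style assumptions, not necessarily the authors' choices:

    import numpy as np

    def lcp_ipm(M, q, tol=1e-8, sigma=0.2, max_iter=100):
        n = len(q)
        z = np.ones(n); w = np.ones(n)      # strictly positive start; the pair
        for _ in range(max_iter):           # need not satisfy w = Mz + q yet
            r = w - M @ z - q               # infeasibility residual
            mu = z @ w / n                  # average complementarity
            if mu < tol and np.linalg.norm(r) < tol:
                break
            # Newton step on  w - Mz - q = 0  and  z_i w_i = sigma * mu.
            rhs = sigma * mu - z * w + z * r
            dz = np.linalg.solve(np.diag(w) + np.diag(z) @ M, rhs)
            dw = M @ dz - r
            # Fraction-to-boundary damping keeps (z, w) strictly positive.
            alpha = 1.0
            for v, dv in ((z, dz), (w, dw)):
                neg = dv < 0
                if neg.any():
                    alpha = min(alpha, 0.9995 * np.min(-v[neg] / dv[neg]))
            z = z + alpha * dz; w = w + alpha * dw
        return z, w

    M = np.array([[2.0, 1.0], [1.0, 2.0]])  # positive definite, so monotone
    q = np.array([-5.0, -6.0])
    z, w = lcp_ipm(M, q)
    print(np.round(z, 4), np.round(w, 4))   # z ~ (1.3333, 2.3333), w ~ 0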



TD5 Intelligent Manufacturing

1. Modeling and Simulation of an Asynchronous, Distributed, Dynamic Inventory Management (ADAMS) Scheme

Lingrong Chen (BGS Systems) and Sumit Ghosh (Arizona State)

The most fundamental of the traditional inventory decision models is the classic economic-order-quantity (EOQ) model and its variations. The classic EOQ model assumes a constant demand rate that is known a priori, that inventory is replenished when it drops to exactly zero, that lead time is a non-zero constant, and that ordering cost, unit price, and holding cost are constant. This lends itself to the analytical determination of the reorder point and the quantity and frequency of reorders. The computation is performed by a centralized decision-making entity, either yearly or at a fixed interval of time, utilizing historical data, projected demand, and other known parameters. During operation, a reorder is triggered as soon as the reorder point is reached, i.e., the inventory level falls below the computed threshold. Thereafter, following a constant lead time, the inventory is replenished with a fixed quantity of items. Thus, the computations of the reorder point and the reorder quantity are static, i.e., they do not take into consideration today's dynamic consumer demand, fast-paced price changes resulting from intense competition, and the significant increase in computational overhead resulting from large numbers of product types and other factors. Although variations of the classic EOQ model allow probabilistic demands, other distributions, and other constraints, many retail chains such as KMart and Macys resort to expensive computers to execute the inventory program quickly and react to competition in a timely manner, while others, including Walmart, require their suppliers to track daily inventory levels and adjust production and supply to the Walmart stores.
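
For reference, the static quantities the classic model prescribes follow directly from the standard EOQ formula Q* = sqrt(2DK/h); the numbers below are illustrative:

    import math

    D = 12000     # annual demand (units/year), assumed known and constant
    K = 50.0      # fixed ordering cost per order
    h = 2.4       # holding cost per unit per year
    L = 0.01      # constant lead time in years (about 3.65 days)

    Q_star = math.sqrt(2 * D * K / h)   # economic order quantity
    reorder_point = D * L               # demand consumed during the lead time
    orders_per_year = D / Q_star
    print(round(Q_star), round(reorder_point), round(orders_per_year, 2))
    # -> order 707 units, reorder at 120 units, about 16.97 orders per year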

This paper proposes a new approach, ADAMS, wherein the computation of the cost function is triggered as soon as the inventory level falls below the threshold, and the reorder point is dynamically determined based on the current system parameters. Thus, ADAMS is expected to exhibit a lower cost function and to track the dynamics of the inventory system better than the traditional schemes. In addition, ADAMS introduces the concept of emergency replenishment, in which a retail unit may request the transfer of items from another retail unit. In general, the time to transfer from a retail unit is shorter than that from the warehouse; therefore, emergency replenishment is likely to lower the cost while improving item availability. In ADAMS, the overall inventory management computation is distributed among all of the retail units. While the computations are performed quickly, the evolution from a centralized computing entity to geographically distributed computing entities will also exhibit greater robustness and less vulnerability to natural and artificial catastrophes. In ADAMS, the inventory management system is organized into a multi-level hierarchy, with one or more retail units and one or more warehouses grouped into a region based on state boundaries and geo-political and other considerations. A retail unit may either request the transfer of items from a neighboring retail unit within its region or reorder from that region's warehouse. In turn, a warehouse may either request the transfer of items from a neighboring region's warehouse or order directly from the manufacturer.

ADAMS is modeled for representative inventory management networks and simulated on more than 25 concurrent SUN workstations configured as a loosely-coupled parallel processor. Extensive simulation experiments are performed with stochastically generated demand functions and for different sets of values of the key parameters. Performance measures indicate that (i) while ADAMS exhibits the same general saw-tooth behavior as the classic EOQ model, the reorder point is determined dynamically, (ii) under representative parameters, emergency replenishment realizes a lower cost function, and (iii) ADAMS is scalable, i.e., the CPU times required by the retail units are uniform and only marginally affected by the size of the inventory management network.

2. Scheduling a Flow Shop to Minimize the Maximal Lateness Under Arbitrary Precedence Constraints

Joanna Józefowska and Arkadiusz Zimniak (Poznań University of Technology, Poland)

A flow shop consisting of m stages, with arbitrary precedence constraints between tasks, is considered. The optimization criterion is the maximum lateness. Heuristic algorithms have been developed for this NP-hard problem, and the solutions they generate have been compared with optimal ones for reasonably sized problems. A polynomially solvable special case is presented, along with conclusions concerning further use and development of the heuristics.
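
For concreteness, the criterion itself is cheap to evaluate for a given permutation schedule (a sketch that ignores the precedence constraints for brevity; the data layout is our assumption):

    # p[j][s]: processing time of job j at stage s; due[j]: its due date.
    def max_lateness(order, p, due):
        m = len(p[0])
        finish = [0.0] * m      # completion time of the latest job per stage
        lmax = float("-inf")
        for j in order:
            for s in range(m):
                start = max(finish[s], finish[s - 1] if s else 0.0)
                finish[s] = start + p[j][s]
            lmax = max(lmax, finish[m - 1] - due[j])
        return lmax

    p = [[3, 2], [1, 4], [2, 2]]              # 3 jobs, 2 stages
    due = [6, 7, 10]
    print(max_lateness([0, 1, 2], p, due))    # completions 5, 9, 11 -> Lmax = 2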

3. A Cooperative Multi-Agent Approach to Constrained Project Scheduling

Dan Zhu (University of Iowa) and Rema Padman (Carnegie Mellon University)

Different approaches to a problem generally produce distinct algorithms that have significantly different performance characteristics in varying problem environments. In most instances, a single algorithm is chosen and applied throughout the solution process of a given problem, thus losing the opportunity to exploit the desirable features of other methods. An Asynchronous Team (A-Team) is a software organization that facilitates cooperation among multiple algorithms so that together they produce better solutions than if they were acting alone. This paper explores the feasibility of applying the A-Team approach to the practical problem of resource-constrained project scheduling with cash flows (RCPSPCF): scheduling precedence-related and resource-constrained activities to maximize the net present value of the expenses incurred and payments made over the duration of the project. This is a complex combinatorial optimization problem that precludes the development of optimal schedules for projects of practical size. We embed several simple heuristics for solving the RCPSPCF within the iterative, parallel structure of an A-Team, which provides a natural framework for distributed problem solving. Preliminary results on small randomly generated project networks show that the combination of multiple simple heuristics performs very well when compared with many single-pass, complex optimization-based heuristics proposed in the literature.
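
The flavor of an A-Team is easy to convey with a toy loop (our own sketch, not the authors' system): agents embodying different simple heuristics pick solutions from a shared pool, modify them, and write them back, while a destroyer discards the worst so good building blocks accumulate. The permutation "schedule" and the stand-in NPV objective below are assumptions of the example:

    import random

    rng = random.Random(0)
    n = 8
    value = [rng.uniform(1, 10) for _ in range(n)]   # payment per activity

    def npv(perm):   # stand-in objective: discounted value of the sequence
        return sum(v / 1.06 ** t
                   for t, v in enumerate(value[j] for j in perm))

    def swap_agent(perm):        # exchange two random activities
        i, j = rng.sample(range(n), 2)
        perm = list(perm); perm[i], perm[j] = perm[j], perm[i]
        return perm

    def greedy_agent(perm):      # nudge the most valuable activity earlier
        j = max(range(n), key=lambda i: value[perm[i]])
        perm = list(perm); perm.insert(max(0, j - 1), perm.pop(j))
        return perm

    pool = [rng.sample(range(n), n) for _ in range(4)]   # shared memory
    for _ in range(500):
        agent = rng.choice([swap_agent, greedy_agent])
        pool.append(agent(rng.choice(pool)))             # creator writes back
        pool.remove(min(pool, key=npv))                  # destroyer culls worst
    print(round(max(map(npv, pool)), 3))                 # best pooled schedule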
