TC: Tuesday, January 9, 1:30-3:00

TC1 Survey

Visualization and Optimization

Christopher Jones (University of Washington)

The field of visualization uses interactive computer graphics to help people understand complex systems better. The field of OR/MS uses mathematical modeling, such as optimization, to help people understand complex systems better. This interactive, multimedia tutorial shows how visualization can contribute to optimization, as well as how optimization has contributed to visualization. Although the tutorial focuses on optimization, many of the ideas presented are applicable to other OR/MS modeling traditions.

Return to Program Overview


TC2 Parallel Software for Discrete Optimization Problems

Chair: Catherine Roucairol (Universite de Versailles)

1. Experiments with Parallel Tabu Search

Celso Ribeiro (Catholic University of Rio de Janeiro)

2. Using a Parallel Branch and Bound Library for Solving Difficult Combinatorial Optimization Problems

Bertrand Le Cun and Catherine Roucairol (PRiSM and INRIA, Domaine de Rocquencourt)

We describe a library, called BOB, whose principal aim is to ease the development of Branch-and-Bound applications. The library has a double goal. On the one hand, it allows the Combinatorial Optimization community to implement applications without worrying about the architecture of the target machines, while still benefiting from the advantages provided by parallelism. On the other hand, BOB offers the Parallelism community a benchmark set composed of efficient Combinatorial Optimization algorithms on which to evaluate its parallelization methods and/or tools.

To achieve this double goal, the BOB library is founded on the notion of a global priority queue, which makes the parallelization methods independent of the applications, and vice versa. We describe different implementation models of this global priority queue (asynchronous, synchronous, client/server, ...) according to the type of machine used (serial, or parallel with shared or distributed memory).

A set of serial and concurrent data structures (D-Heap, Skew-Heap, Implicit-Heap, Funnel-Tree, Splay-Trees, ...) is provided to implement the global priority queue.

We also discuss the design of BOB and its main components (user functions, kernel functions, and monitor), in particular the management of the global upper/lower bound in a parallel environment. Finally, we show with an example how easy it is to develop Branch-and-Bound applications with this library.
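
As an illustration of the idea only (this is not the BOB API, and all names below are hypothetical), a best-first Branch-and-Bound driver built around a priority queue of subproblems can be sketched in a few lines of Python; the driver touches only the queue and the global upper bound, while the application supplies the bounding, branching, and evaluation callbacks.

```python
import heapq

def branch_and_bound(root, lower_bound, branch, is_leaf, value):
    """Minimal best-first Branch-and-Bound driver (minimization).

    The driver manipulates only a priority queue of subproblems and the
    global upper bound; the application supplies the callbacks, mirroring
    the separation between kernel and user functions described above.
    """
    best_value, best_node = float("inf"), None
    queue = [(lower_bound(root), 0, root)]      # (bound, tie-breaker, node)
    counter = 1
    while queue:
        bound, _, node = heapq.heappop(queue)
        if bound >= best_value:                 # pruned by the global upper bound
            continue
        if is_leaf(node):
            if value(node) < best_value:
                best_value, best_node = value(node), node
            continue
        for child in branch(node):
            child_bound = lower_bound(child)
            if child_bound < best_value:
                heapq.heappush(queue, (child_bound, counter, child))
                counter += 1
    return best_value, best_node

# Toy usage: pick one number from each row so that the total is minimal.
rows = [[4, 2, 7], [3, 8, 1], [6, 5, 9]]
root = ()                                                      # node = partial selection
lb = lambda node: sum(node) + sum(min(r) for r in rows[len(node):])
branch_fn = lambda node: [node + (x,) for x in rows[len(node)]]
print(branch_and_bound(root, lb, branch_fn, lambda n: len(n) == 3, sum))
```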

3. Distributed Branch and Bound Algorithms for Large Quadratic Assignment Problems

Yves Denneulin, Jean-Francois Mehaut (University of Lille), Bertrand Le Cun and Thierry Mautor (PRiSM, INRIA, Domaine de Rocquencourt)

One of the main features of the Quadratic Assignment Problem (QAP) is the extreme difficulty of solving it exactly. Over thirty years of development of exact algorithms, progress has been very slow and the current size limits remain very low.

To improve the computational performance of exact algorithms and to push their limits forward, we increase the computational power of the system by using a large set of processors.

Several different QAP implementations on distributed architectures are described, with a focus on dynamic load balancing. A first implementation is based on the Master-Slave model. Its performance is good, but the size of the problems that can be addressed is limited by memory. To overcome this limitation, we designed a fully distributed implementation that allows larger problems to be solved in reasonable time.

The results and execution times for different problems from the QAP Library are provided for implementations on a 32-processor IBM SP2 and on a cluster of 16 DEC Alpha processors.
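
To make the Master-Slave division of work concrete (a toy illustration only, not the authors' code: it enumerates complete assignments in parallel rather than bounding subtrees), a master process can farm out evaluations of a small QAP instance to worker processes as follows.

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import permutations
import numpy as np

# Toy QAP data: flow matrix F and distance matrix D (illustrative values).
F = np.array([[0, 3, 1], [3, 0, 2], [1, 2, 0]])
D = np.array([[0, 5, 4], [5, 0, 6], [4, 6, 0]])

def qap_cost(perm):
    """QAP objective for a complete assignment: facility i placed on site perm[i]."""
    n = len(perm)
    return sum(F[i, j] * D[perm[i], perm[j]] for i in range(n) for j in range(n))

def main():
    # The master enumerates candidate assignments and distributes the
    # evaluations to slave processes; it keeps only the best result.
    candidates = list(permutations(range(3)))
    with ProcessPoolExecutor(max_workers=2) as pool:
        costs = list(pool.map(qap_cost, candidates))
    best_cost, best_perm = min(zip(costs, candidates))
    print("best cost:", best_cost, "assignment:", best_perm)

if __name__ == "__main__":
    main()
```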

Return to Program Overview


TC3 Geographic Information Systems

Chair: Betty L. Hickman (University of Nebraska at Omaha)

1. GIS in Property Assessment

Michael Racer and Mohammad Amini (University of Memphis)

Shelby County, Tennessee, recently investigated the value of implementing a comprehensive GIS for numerous county activities. As a pilot study in the application of GIS to municipal issues, the Assessor's Office is developing a system that will utilize geographical data in creating better assessments of residential property values.

Shelby County consists of approximately 300,000 residential parcels. The bulk of this two-year effort focuses on the assimilation of these data into the ARC/INFO environment. Of primary interest has been the attempt to use existing data sources. While the existence of the data in other forms has proved useful in some instances, it has also necessitated a fairly extensive quality-control effort.

In this session, we will present some results of this effort to introduce GIS into the municipal activities of Shelby County, including a discussion of future directions for the project team. Anticipated efforts include a stronger focus on using the GIS for analyses helpful to other municipal agencies (e.g., Planning and Development, Health, Housing and Community Development, and the school districts).

We will conclude with a discussion of the first analysis project underway, the development of analysis software that will define the functional relationships between property characteristics and value.
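
The abstract leaves the form of that analysis software open; purely as an illustrative sketch (the attribute names and figures below are hypothetical), a first pass at relating parcel characteristics to assessed value could be an ordinary least-squares fit.

```python
import numpy as np

# Hypothetical parcel attributes: square footage, lot size (acres), age (years).
X = np.array([[1850, 0.25, 32],
              [2400, 0.40, 12],
              [1200, 0.18, 55],
              [3100, 0.60,  4]], dtype=float)
y = np.array([142_000, 215_000, 88_000, 298_000], dtype=float)  # assessed values

# Add an intercept column and solve the least-squares problem.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and coefficients:", coef)
print("fitted values:", A @ coef)
```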

2. GIS, Combinatorial Optimization and Operations Research--Their Combinations and Applications in the Telecommunications Industry

Buyang Cao and Charles Macleod (Environmental Systems Research Institute, Inc.)

Over the last decades, Geographic Information Systems (GIS) have been widely used in areas such as agriculture and land-use planning, municipal management, facilities management, transportation, and telecommunications. GIS provides a very powerful tool to retrieve, process, analyze, and store large amounts of geographic (spatial) data, which are critical and useful for operational, tactical, and strategic decisions. Interestingly enough, many new theories and algorithms have been developed in the fields of combinatorial optimization and operations research that can solve classical problems arising in the application areas mentioned above. Examples include the shortest path, minimum-cost tree, location-allocation, traveling salesman, and vehicle routing (with/without time windows) problems.

In this paper, we describe a system incorporating techniques from GIS, combinatorial optimization, and operations research for the problem of designing an optimal layout of a telephone cable network subject to constraints imposed by the business logic. An algorithm for finding a reasonable layout of the telephone cable network is presented. An implementation of this system within an ArcView/ArcInfo client-server environment is briefly discussed. We also demonstrate how to use GIS to build the corresponding optimization model and to verify the results obtained by the algorithm. The applications of this system present a promising link between GIS, combinatorial optimization, and OR methods.
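
The paper's own layout algorithm is not reproduced here; the sketch below, with a hypothetical street network and the networkx package, only illustrates the kind of graph computation involved, routing cable from a central office to demand points along shortest paths and taking the union of the routes as the layout.

```python
import networkx as nx

# Hypothetical street network: nodes are junctions, edge weights are cable lengths (m).
G = nx.Graph()
G.add_weighted_edges_from([
    ("CO", "A", 120), ("A", "B", 80), ("B", "C", 60),
    ("A", "D", 150), ("D", "C", 90), ("CO", "D", 200),
])

# A simple layout heuristic: connect every demand point to the central office "CO"
# along a shortest path, then keep the union of those paths as the cable routes.
demand_points = ["B", "C", "D"]
layout = nx.Graph()
for t in demand_points:
    path = nx.shortest_path(G, "CO", t, weight="weight")
    layout.add_edges_from(zip(path, path[1:]))

total_length = sum(G[u][v]["weight"] for u, v in layout.edges())
print("cable segments:", list(layout.edges()), "total length:", total_length)
```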

3. A Linear Programming Approach to Terrestrial Imaging Spectroscopy Data Analysis

Richard S. Barr (Southern Methodist University), Michael P. Bishop, Cynthia L. Corritore (University of Nebraska at Omaha), Fred Glover (University of Colorado at Boulder), Betty L. Hickman (University of Nebraska at Omaha)

Reflected solar electromagnetic energy from the earth's surface can be recorded in as many as 200 contiguous regions of the spectrum, generating registered spectral images. These digital images contain information about the environment and can be used for resource mapping and exploration, assessing environmental degradation, monitoring environmental change, and improving our knowledge and understanding of earth systems. Furthermore, spectral data can be used to map minerals and identify biochemicals in forest canopies. Consequently, research is required to investigate the relationships between multispectral data and biogeochemical concentrations from the surface. However, the volume of data and lack of knowledge regarding functional forms needed for empirical modeling pose unique problems for geoscientists. Therefore, we have developed an automated linear programming approach to modeling linear and nonlinear relationships.

The linear programming approach is based on a new method for automatically generating polynomial L1-regression models that implicitly examines a combinatorially explosive number of higher-order terms. While applicable to virtually any real-number mapping function of the original variables, we explore powers and combinations of transcendental and logarithmic transforms. The efficiency of this technique is critical for the timely solution of the large number of problem instances created by the observed spectra.
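
The automated term-generation procedure itself is not shown in the abstract; the sketch below illustrates only the underlying linear-programming formulation of L1 (least-absolute-deviation) regression on a small synthetic polynomial basis, using scipy (the data and basis are illustrative, not the spectral data described above).

```python
import numpy as np
from scipy.optimize import linprog

def l1_regression(X, y):
    """Fit y ~ X b by minimizing sum |y_i - x_i.b| as a linear program.

    Variables are (b, e); the constraints  X b - e <= y  and  -X b - e <= -y
    together enforce e_i >= |y_i - x_i.b|.
    """
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(n)])           # minimize sum of e
    A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * p + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 50)
X = np.column_stack([np.ones(50), x, x**2])                  # a low-order polynomial basis
y = 2 + 3 * x - 1.5 * x**2 + rng.laplace(scale=0.1, size=50)
print("L1 coefficients:", l1_regression(X, y))
```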

Return to Program Overview


TC4 Neural Networks III

Chair: Camille C. Price (Stephen F. Austin State University)

1. Neural Network Training Via Quadratic Programming

Theodore B. Trafalis and Nicolas P. Couellan (University of Oklahoma)

We develop two new training algorithms for feedforward supervised neural networks based on quadratic optimization methods. Specifically, we approximate the error function by a quadratic convex function. In the first algorithm, the new error function is optimized by an affine scaling method, which replaces the steepest descent method in the backpropagation algorithm. In the second algorithm, the steepest descent method is replaced by a trust region technique. Comparative numerical simulations for medical diagnosis problems show significant reductions in learning time with respect to a Quasi-Newton algorithm.
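
The affine scaling and trust region algorithms of the talk are not reproduced here; as a rough point of reference only, the sketch below trains a tiny one-hidden-layer network on synthetic data by handing the residual function to a generic trust-region least-squares solver (scipy's 'trf' method) rather than to plain gradient descent.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (40, 2))                 # toy inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]             # toy targets

n_hidden = 5

def unpack(w):
    """Split the flat parameter vector into layer weights."""
    W1 = w[:2 * n_hidden].reshape(2, n_hidden)
    b1 = w[2 * n_hidden:3 * n_hidden]
    W2 = w[3 * n_hidden:4 * n_hidden]
    b2 = w[-1]
    return W1, b1, W2, b2

def residuals(w):
    """Network outputs minus targets; the solver minimizes the sum of squares."""
    W1, b1, W2, b2 = unpack(w)
    hidden = np.tanh(X @ W1 + b1)
    return hidden @ W2 + b2 - y

w0 = rng.normal(scale=0.1, size=4 * n_hidden + 1)
fit = least_squares(residuals, w0, method="trf")   # trust-region reflective solver
print("final sum-of-squares error:", 2 * fit.cost)
```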

2. Adaptive Learning Rates and Fuzzy Targets for Backpropagation

S. Monica Lam (California State University, Sacramento)

This paper discusses two applications of fuzzy set techniques to improve the performance of backpropagation. The first application replaces the fixed learning rate of backpropagation with adaptive learning rates based on the fuzziness of the training cases. The second application replaces the hard targets of training cases with fuzzy targets. In conventional backpropagation training, cases that lie on the separating surfaces of several classes tend to confuse the learning process, since they are coded as a single class when in fact they belong to several classes to a similar degree. Adaptive learning rates direct the network to learn more from typical cases (hard cases) than from atypical cases (fuzzy cases). Fuzzy targets prevent the network from oscillating during the optimization process by backpropagating errors in proportion to the cases' fuzziness. Three different data sets were used to test backpropagation with adaptive learning rates and fuzzy targets. Preliminary experimental results confirm the positive effect of fuzzy set techniques on backpropagation's performance.
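
The abstract does not give the exact fuzziness measure or update rules; the sketch below only makes the two ideas concrete under simple assumptions of our own: fuzziness taken as closeness of the class membership degrees, the learning rate scaled down for fuzzy cases, and soft membership values used as targets in place of 0/1 labels.

```python
import numpy as np

# Hypothetical membership degrees of four training cases in two classes.
memberships = np.array([[0.95, 0.05],    # typical class-0 case
                        [0.55, 0.45],    # borderline case
                        [0.10, 0.90],
                        [0.50, 0.50]])

# A simple fuzziness measure: 1 at equal membership, 0 for a crisp case.
fuzziness = 1.0 - np.abs(memberships[:, 0] - memberships[:, 1])

base_rate = 0.5
adaptive_rates = base_rate * (1.0 - fuzziness)        # learn less from fuzzy cases
hard_targets = (memberships[:, 1] > 0.5).astype(float)
fuzzy_targets = memberships[:, 1]                      # soft targets instead of 0/1
print("hard targets:", hard_targets, "fuzzy targets:", fuzzy_targets)

# Per-case update for a single logistic output unit (toy stand-in for a network).
w, b = np.zeros(1), 0.0
x = np.array([[0.2], [1.1], [2.3], [1.5]])
for xi, ti, eta in zip(x, fuzzy_targets, adaptive_rates):
    out = 1.0 / (1.0 + np.exp(-(w @ xi + b)))
    grad = (out - ti) * xi                             # backpropagated error for this case
    w -= eta * grad
    b -= eta * (out - ti)
print("weights after one epoch:", w, b)
```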

3. Fault-Tolerance in a Boltzmann Machine for Quadratic Assignment Problems

Camille C. Price, John B. Hanks, and Jeffery N. Stephens (Stephen F. Austin State University)

Connectionist computing models represent a promising advance in the field of neural networks. The Boltzmann machine is a model for connectionist computing that has been applied to the solution of combinatorial optimization problems and which can be viewed as a massively parallel simulated annealing procedure. We have designed a Boltzmann machine to solve quadratic assignment problems, and have demonstrated its effectiveness. In anticipation of hardware implementation of the Boltzmann machine, it is desirable to have an understanding of the fault-tolerant properties of this computational model under inevitable conditions of component failure. We have investigated the fault-tolerance of this connectionist model by injecting a variety of patterns of component failures, including single node failures, column node failures, and random multiple node failures. Our purpose was to observe and measure the deterioration in the quality of the objective function results that are produced. In our experiments, we observed an average degradation in performance of approximately 1% from that of a fully operational Boltzmann machine.
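
The Boltzmann machine itself is not reproduced here; since it can be viewed as a massively parallel simulated annealing procedure, the sketch below runs an ordinary serial simulated-annealing search on a small random QAP with a crude stand-in for component failure, freezing randomly chosen facilities in place (an assumption for illustration, not the authors' experimental protocol).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
F = rng.integers(0, 10, (n, n)); F = (F + F.T) // 2; np.fill_diagonal(F, 0)
D = rng.integers(0, 10, (n, n)); D = (D + D.T) // 2; np.fill_diagonal(D, 0)

def qap_cost(perm):
    """QAP objective: sum of F[i, j] * D[perm[i], perm[j]]."""
    return int((F * D[np.ix_(perm, perm)]).sum())

# Crude failure injection: these facilities may never be moved.
failed = set(rng.choice(n, size=2, replace=False).tolist())
movable = [i for i in range(n) if i not in failed]

perm = np.arange(n)
cost = qap_cost(perm)
T = 10.0
for step in range(5000):
    i, j = rng.choice(movable, size=2, replace=False)
    perm[i], perm[j] = perm[j], perm[i]
    new_cost = qap_cost(perm)
    if new_cost <= cost or rng.random() < np.exp(-(new_cost - cost) / T):
        cost = new_cost                      # accept the swap
    else:
        perm[i], perm[j] = perm[j], perm[i]  # undo it
    T *= 0.999                               # cooling schedule
print("final assignment cost with injected failures:", cost)
```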

Return to Program Overview


TC5 Statistical Methods and Computation

Chair: Soren S. Nielsen (University of Texas at Austin)

1. Stochastic Analysis of Affinity Scheduling and Load Balancing in Parallel Processing Systems

Randolph D. Nelson and Mark S. Squillante (OTA Limited Partnership; IBM Research Division)

Across a wide range of parallel processing systems it is often more efficient to schedule a task on one server than on another. Due to the inevitability of imbalances in the amount of work assigned to each server in these environments, there exists a fundamental tradeoff between keeping the workload balanced and scheduling tasks where they run most efficiently. The purpose of an adaptive migratory scheduling policy is to determine the appropriate balance between the extremes of this affinity-scheduling/load-balancing tradeoff, which has important practical and theoretical implications on the design and development of high-performance computer and telecommunication systems. We formulate a general queueing theoretic model of adaptive load balancing to study these issues for an important class of parallel processing environments. We consider a general adaptive threshold policy that allows overloaded servers to transfer some of their work to servers with less load (sender initiated), while simultaneously allowing underloaded servers to transfer work from other servers with more load (receiver initiated). By varying four policy control parameters we induce a rich family of scheduling policies. A mathematical analysis of the system is derived based on decomposition and matrix-geometric techniques, which yields closed-form solutions for many instances of the model. Our objective is to provide a fundamental understanding of affinity scheduling and load balancing in shared-memory and certain distributed-memory parallel processing environments. We illustrate the potential for significant improvements in system performance and we show that, even when migration costs are large, it may still be beneficial to migrate waiting tasks. We also experimentally determine optimal threshold values from our analysis and demonstrate the potential for unstable behavior under migratory scheduling policies if thresholds are selected inappropriately.
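
The queueing-theoretic analysis is not reproduced here; the small discrete-time simulation below is only meant to make the control parameters of such a policy concrete, with a single sender-initiated transfer threshold and an affinity penalty on migrated work (all numbers illustrative).

```python
import random
from collections import deque

random.seed(3)
STEPS, ARRIVAL_P = 50_000, 0.45             # per-server arrival probability per slot
SERVICE = {"local": 0.55, "remote": 0.40}   # migrated tasks run less efficiently
SEND_THRESHOLD = 4                          # transfer when this much longer than the other queue

queues = [deque(), deque()]
migrations = completed = 0
for _ in range(STEPS):
    # Arrivals: each server receives work that runs best locally (affinity).
    for s in (0, 1):
        if random.random() < ARRIVAL_P:
            queues[s].append("local")
    # Sender-initiated migration: an overloaded server pushes one task across.
    for s in (0, 1):
        if len(queues[s]) - len(queues[1 - s]) >= SEND_THRESHOLD:
            queues[s].pop()
            queues[1 - s].append("remote")
            migrations += 1
    # Service: the head task of each queue completes with an affinity-dependent probability.
    for s in (0, 1):
        if queues[s] and random.random() < SERVICE[queues[s][0]]:
            queues[s].popleft()
            completed += 1

print("completed:", completed, "migrated:", migrations,
      "final queue lengths:", [len(q) for q in queues])
```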

2. Robust Multiple Linear Regression with Observations per Estimated Parameter Severely Constrained

Ahmed M. M. Sultan (Egyptian Air Force), Albert H. Moore, Joseph P. Cain, and Dharam C. Khatri (Air Force Institute of Technology)

An extensive Monte Carlo analysis is conducted to determine the performance of robust linear regression techniques with and without outliers. Various regression methods are compared, including least squares and minimum absolute deviation. The classical robust techniques of Huber and Hampel were used, and robust techniques using the Q-statistic as a discriminant were introduced.
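
The Monte Carlo study and the Q-statistic procedure are not reproduced here; the sketch below shows one of the classical estimators being compared, Huber's M-estimator fitted by iteratively reweighted least squares, next to ordinary least squares on synthetic data containing a single gross outlier.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 20)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=20)
y[0] += 40                                              # a single gross outlier
X = np.column_stack([np.ones_like(x), x])

def huber_irls(X, y, k=1.345, iters=50):
    """Huber M-estimate via iteratively reweighted least squares."""
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)   # start from OLS
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745               # robust scale estimate
        s = s if s > 0 else 1.0
        w = np.clip(k * s / np.maximum(np.abs(r), 1e-12), None, 1.0)
        W = np.sqrt(w)
        beta, _, _, _ = np.linalg.lstsq(X * W[:, None], y * W, rcond=None)
    return beta

ols, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
print("OLS:  ", ols)
print("Huber:", huber_irls(X, y))
```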

3. A General Approach for the Computerized Construction of Efficient Choice Experiments

Klaus B. Zwerina and Joel Huber (Duke University)

Discrete choice experiments are increasingly applied to marketing, retailing, and transportation problems to model and predict consumer choice behavior, because of their ability to realistically mimic what happens in the marketplace. Since choices are less informative than, e.g., ratings obtained from the same set of hypothetical objects, many respondents and/or choices are normally necessary to achieve reliable estimates. To reduce the number of respondents and/or choices used in an experiment, statistically efficient choice designs are required. The design of such experiments is very complex and normally requires special skills, and ready-made efficient choice designs are available only for a few special cases. The authors provide a general approach for the computerized construction of efficient choice designs, which offers a flexible way to construct efficient choice designs in many contexts. The approach uses exchange algorithms to maximize the determinant of the Fisher information matrix of the multinomial logit model. They demonstrate its usefulness on different types of choice designs; examples include choice designs with alternative-specific attributes, attribute and availability cross effects, and the incorporation of prior information on the parameter estimates.
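
The authors' exchange algorithm is not reproduced here; the sketch below only makes the objective concrete, computing the Fisher information matrix of the multinomial logit model for a small choice design under assumed prior parameters and improving its determinant by crude random level swaps (design dimensions and priors are illustrative).

```python
import numpy as np

rng = np.random.default_rng(5)
n_sets, n_alts, n_attrs = 8, 3, 3
beta_prior = np.array([0.5, -0.5, 0.2])       # assumed prior parameter values

def information(design, beta):
    """Fisher information of the MNL model for a choice design.

    design has shape (sets, alternatives, attributes); each choice set
    contributes X' (diag(p) - p p') X, with p the MNL choice probabilities.
    """
    info = np.zeros((n_attrs, n_attrs))
    for X in design:
        u = np.exp(X @ beta)
        p = u / u.sum()
        info += X.T @ (np.diag(p) - np.outer(p, p)) @ X
    return info

design = rng.integers(0, 2, (n_sets, n_alts, n_attrs)).astype(float)
best_det = np.linalg.det(information(design, beta_prior))

# Crude exchange-style improvement: flip one attribute level at a time,
# keeping the flip only if the determinant of the information matrix grows.
for _ in range(2000):
    s, a, k = rng.integers(n_sets), rng.integers(n_alts), rng.integers(n_attrs)
    design[s, a, k] = 1.0 - design[s, a, k]
    det = np.linalg.det(information(design, beta_prior))
    if det > best_det:
        best_det = det
    else:
        design[s, a, k] = 1.0 - design[s, a, k]   # undo the flip

print("determinant of the information matrix:", best_det)
```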

4. Importance Sampling in Lattice Pricing Models

Soren S. Nielsen (University of Texas at Austin)

Binomial lattice models find widespread use in the valuation of derivative financial instruments. When these instruments are path-dependent (non-Markovian), it is usually necessary to resort to Monte Carlo simulation. However, a crude Monte Carlo sampling of the lattice can be very inefficient, since a large number of sample paths may need to be calculated. We develop procedures for sampling the paths of a binomial lattice based on importance sampling and related methods. Such procedures can lead to a significant reduction in sample variance, but due to the very large number of sample paths, it is problematic to specify the "importance" of each path. A two-stage sampling procedure for doing this is presented, along with a method based on varying the lattice branching probabilities. The methods can be fitted to instrument characteristics, either based on prior knowledge of the instruments' payoff functions, or adaptively.
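
The two-stage procedure of the talk is not reproduced here; the sketch below illustrates the simpler device it mentions, varying the lattice branching probabilities: paths are sampled with a tilted up-probability and each payoff is reweighted by its likelihood ratio, here for a toy average-price option with discounting omitted.

```python
import numpy as np

rng = np.random.default_rng(6)
S0, K, u, d, p, N = 100.0, 115.0, 1.05, 1 / 1.05, 0.52, 30   # illustrative lattice parameters
n_paths = 20_000

def simulate(prob):
    """Sample lattice paths with up-probability `prob`.

    Returns the payoff of an average-price (Asian) call on each path and
    the number of up-moves, from which the likelihood ratio is computed.
    """
    ups = rng.random((n_paths, N)) < prob
    moves = np.where(ups, u, d)
    prices = S0 * np.cumprod(moves, axis=1)
    payoff = np.maximum(prices.mean(axis=1) - K, 0.0)
    return payoff, ups.sum(axis=1)

# Crude Monte Carlo under the lattice probability p.
payoff, _ = simulate(p)
print("crude MC:      mean %.4f  std err %.4f"
      % (payoff.mean(), payoff.std() / np.sqrt(n_paths)))

# Importance sampling: tilt toward up-moves (q > p) so more paths finish in the money,
# then reweight each path by the likelihood ratio (p/q)^#up * ((1-p)/(1-q))^#down.
q = 0.70
payoff_q, n_up = simulate(q)
weights = (p / q) ** n_up * ((1 - p) / (1 - q)) ** (N - n_up)
estimate = payoff_q * weights
print("importance MC: mean %.4f  std err %.4f"
      % (estimate.mean(), estimate.std() / np.sqrt(n_paths)))
```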

Return to Program Overview