Recent Advances in Integer Programming
Karla L. Hoffman (George Mason University)
There have been many advances in the methodologies for
solving large integer programming problems. This talk will
provide an overview of some of the strategies for modeling
and solving such problems. Model formulation, preprocessing,
heuristics, cutting-plane techniques, and column-generation
approaches will be presented, along with a general framework
that allows all of these strategies to be combined into an
overall optimization software package.
The talk will begin with a discussion of the importance of the
model formulation and its implications for modern modeling
languages. We will illustrate how reformulating an otherwise
unsolvable problem can make it solvable.
Often these reformulations require the addition of a very
large number of both rows and columns. This section of the
talk will discuss the importance of special structures in the
solution of these problems and how modeling languages can
assist in identifying structure.
Once we have a formulation, most computationally effective
methodologies for solving integer programming problems work to
continually tighten both the upper and lower bounds on the
optimal solution value. If we assume a maximization problem,
any feasible solution provides a lower bound on that value. A short
survey of useful heuristics for finding good integer solutions
for various problem structures will be presented.
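As a minimal illustration of this bounding role (the instance data
below are hypothetical and are not taken from the talk), a greedy
heuristic for a small 0-1 knapsack yields a feasible solution whose
objective value is a valid lower bound:

    # Greedy heuristic for a 0-1 knapsack (maximization).  Any feasible
    # solution it returns gives a lower bound on the optimal value.
    # The instance data are illustrative only.
    values = [10, 13, 7, 8]       # objective coefficients
    weights = [4, 6, 3, 5]        # constraint coefficients
    capacity = 10                 # right-hand side

    # Add items in order of value density while they still fit.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    x, used = [0] * len(values), 0
    for i in order:
        if used + weights[i] <= capacity:
            x[i] = 1
            used += weights[i]

    lower_bound = sum(values[i] * x[i] for i in range(len(values)))
    print("feasible solution:", x, "lower bound:", lower_bound)   # 17 here
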
The upper bound on the integer program is obtained by some
relaxation of the constraints of the problem. Most commercial
codes relax the integrality requirements (and thus use LP to
solve the subproblems). Lagrangian relaxation and
decomposition are alternative strategies for obtaining tight
upper bounds. Although this talk will concentrate on the
tightening of the LP relaxation, we will identify where other
relaxations might be usefully employed.
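Continuing the same hypothetical instance, the LP relaxation of a
single knapsack row can be solved by the fractional-greedy rule, and
its value is an upper bound; this sketch only illustrates the bounding
relationship, not the relaxations surveyed in the talk.

    # LP relaxation of the illustrative 0-1 knapsack: allow 0 <= x_i <= 1.
    # Filling by value density with at most one fractional item is optimal
    # for the relaxation, so its value is a valid upper bound.
    values, weights, capacity = [10, 13, 7, 8], [4, 6, 3, 5], 10
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    remaining, upper_bound = capacity, 0.0
    for i in order:
        take = min(1.0, remaining / weights[i])
        upper_bound += take * values[i]
        remaining -= take * weights[i]
        if remaining <= 0:
            break
    print("LP relaxation bound:", upper_bound)   # 23.5 here

For these illustrative data the integer optimum is 23, bracketed by the
heuristic's lower bound of 17 and the relaxation's upper bound of 23.5.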
The process of tightening the upper bound is done by (a) the
automatic reformulation of the problem by techniques usually
referred to as "preprocessing routines", (b) the addition of
valid linear inequalities that cut off a current fractional
vertex but none of the feasible integer solutions and (c) the
addition of new columns when the relaxation has ignored many
of the columns in the original formulation.
We briefly outline general preprocessing procedures and then
describe cutting plane procedures that approximate facets of
the convex hull of integer points. These facet-generation
procedures require an understanding of the underlying
structure of the problem. Other cutting-plane procedures such
as "lift-and-project" and procedures where cuts are obtained
by expanding the column space will be outlined.
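To make the cut-generation idea concrete, the sketch below separates a
simple (non-lifted) cover inequality for a knapsack row: if the items
in a set C cannot all fit, then the sum of x_i over C is at most
|C| - 1 for every integer solution, and such an inequality may cut off
a fractional point. The data and fractional point are hypothetical;
the facet-generating routines described in the talk are considerably
more sophisticated (lifting, exploitation of general structures).

    # Separating a violated cover inequality for the knapsack row
    #   sum_i weights[i] * x_i <= capacity,  x_i in {0, 1}.
    # A cover C has sum_{i in C} weights[i] > capacity, so
    #   sum_{i in C} x_i <= len(C) - 1  is valid for all integer solutions.
    weights, capacity = [4, 6, 3, 5], 10
    x_frac = [1.0, 0.9, 0.2, 0.0]    # hypothetical fractional vertex; it
                                     # satisfies the row (4 + 5.4 + 0.6 = 10)

    # Greedy separation: add the items with the largest fractional values
    # until the chosen weights exceed the capacity.
    order = sorted(range(len(weights)), key=lambda i: x_frac[i], reverse=True)
    cover, total = [], 0
    for i in order:
        cover.append(i)
        total += weights[i]
        if total > capacity:
            break

    if total > capacity and sum(x_frac[i] for i in cover) > len(cover) - 1:
        lhs = " + ".join("x_%d" % i for i in cover)
        print("violated cover cut:", lhs, "<=", len(cover) - 1)
    else:
        print("no violated cover found by this heuristic")
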
We also present column-generation procedures for some
difficult IP problem structures and present recent
computational results that demonstrate the value of a
reformulation that significantly expands the dimensionality of
the problem. Examples include crew-scheduling, job-shop
scheduling, and the generalized assignment problem.
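The mechanics behind such column-expanding reformulations can be hinted
at with the pricing step of column generation, sketched here for the
classic cutting-stock setting rather than the crew-scheduling and
assignment examples of the talk; the dual prices below are hypothetical
values of the kind a restricted master LP would supply.

    # Pricing subproblem of column generation for cutting stock (a sketch).
    # A new column (cutting pattern) is worth adding only if its reduced cost
    #   1 - sum_i duals[i] * pattern[i]
    # is negative; the best pattern is found by an integer knapsack DP.
    piece_sizes = [3, 5, 7]           # piece lengths (illustrative data)
    duals = [0.34, 0.52, 0.71]        # hypothetical duals from the master LP
    roll_length = 16

    # dp[l] = best dual value obtainable from a roll of length l;
    # choice[l] remembers one piece achieving it.
    dp = [0.0] * (roll_length + 1)
    choice = [None] * (roll_length + 1)
    for length in range(1, roll_length + 1):
        for i, size in enumerate(piece_sizes):
            if size <= length and dp[length - size] + duals[i] > dp[length]:
                dp[length] = dp[length - size] + duals[i]
                choice[length] = i

    reduced_cost = 1.0 - dp[roll_length]
    if reduced_cost < -1e-9:
        pattern, length = [0] * len(piece_sizes), roll_length
        while choice[length] is not None:      # recover the pattern
            pattern[choice[length]] += 1
            length -= piece_sizes[choice[length]]
        print("add column", pattern, "with reduced cost %.3f" % reduced_cost)
    else:
        print("no improving column; the master LP is optimal")

In a full implementation this pricing step alternates with re-solving
the restricted master LP until no column prices out negatively.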
Finally, we discuss how all of the techniques presented can be
embedded into a general tree-search procedure that is capable
of proving optimality for very difficult problems or providing
a good feasible solution with a measure of how far from
optimality that solution could be.
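A toy version of such a tree search, again on the hypothetical knapsack
instance used above, shows how a heuristic incumbent and a relaxation
bound combine into an optimality gap; commercial branch-and-cut systems
are of course far more elaborate than this sketch.

    # Toy branch-and-bound for the illustrative 0-1 knapsack instance.
    # The relaxation bound prunes nodes; (bound - incumbent) measures how
    # far from optimal the current best solution could be.
    values, weights, capacity = [10, 13, 7, 8], [4, 6, 3, 5], 10
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def lp_bound(k, cap, val):
        # Fractional greedy over items not yet fixed: a valid upper bound.
        for i in order[k:]:
            if cap <= 0:
                break
            take = min(1.0, cap / weights[i])
            val += take * values[i]
            cap -= take * weights[i]
        return val

    best_value = 17      # incumbent from the greedy heuristic sketched earlier
    print("initial gap:", lp_bound(0, capacity, 0) - best_value)   # 23.5 - 17

    def branch(k, cap, val):
        global best_value
        if lp_bound(k, cap, val) <= best_value:
            return                        # prune: cannot beat the incumbent
        if k == n:
            best_value = val              # new incumbent; the gap shrinks
            return
        i = order[k]
        if weights[i] <= cap:             # branch on x_i = 1, then x_i = 0
            branch(k + 1, cap - weights[i], val + values[i])
        branch(k + 1, cap, val)

    branch(0, capacity, 0)
    print("proved optimal value:", best_value)                     # 23
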
Chairs: Bruce Golden (University of Maryland) and Edward Wasil
(American University)
1. Pattern Recognition and Neural Networks in Practice
Tom Downey (PARTEK Inc.)
When using neural networks to model a phenomenon, it is in our
best interest to carefully study the measured data before
"throwing" it at a neural network. This is important
for at least two reasons. First, in order to choose the best
type of neural network, we must have some understanding of the
data the network is intended to model. Second, we are
interested in selecting the independent variables from the
measured data that will be most effective in allowing us to
model the phenomenon. We may additionally rescale or otherwise
transform the selected variables before proceeding. I'll give
examples of a few real-world pattern recognition applications
which highlight the need for this preliminary analysis and
processing of the data. In these examples, we'll see how
visual and quantitative analysis of the data can enable us to
make good decisions about feature selection, feature
transformation, and choosing the architecture of the neural
network. Finally, we'll take a look at what visualization
offers us to help assess the quality of our trained neural
network.
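One hedged illustration of this kind of preliminary processing (the
data below are synthetic; the applications discussed in the talk are
not reproduced here) is to standardize each candidate input and screen
for constant or strongly correlated variables before training:

    import numpy as np

    # Synthetic (samples x variables) data standing in for measured inputs.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    X[:, 3] = 2.0 * X[:, 0] + 0.01 * rng.normal(size=200)  # near-duplicate of column 0
    X[:, 4] = 0.0                                          # carries no information

    std = X.std(axis=0)
    keep = std > 1e-8                                       # drop constant variables
    Xs = (X[:, keep] - X[:, keep].mean(axis=0)) / std[keep] # z-score scaling

    corr = np.corrcoef(Xs, rowvar=False)
    redundant = [(i, j) for i in range(corr.shape[0])
                 for j in range(i + 1, corr.shape[0]) if abs(corr[i, j]) > 0.95]
    print("kept variables:", np.flatnonzero(keep).tolist(),
          "highly correlated pairs:", redundant)
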
2. Neural Tree Networks via Tabu Search
K. P. Bennett and J. A. Blue (Rensselaer Polytechnic Institute)
Neural tree networks combine the representational capabilities
of neural networks with the logical structure of decision
trees. Determining whether two or more disjoint sets can be
correctly classified by a neural tree network with fixed
structure is an NP-complete problem. The problem is cast as a
multilinear program that minimizes the sum of the products of
linear functions over a polyhedral region. The resulting
solution is a vertex of the polyhedral region. A tabu search
mechanism is used to traverse the vertices. In computational
testing, tabu search was found to be more robust than
classical optimization approaches. Promising results are
reported for both generated and real-world classification
problems.
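The abstract does not spell out the authors' vertex-traversal moves, so
the following is only a generic tabu search skeleton of the kind being
adapted; the neighborhood, objective, and tenure are placeholders.

    def tabu_search(start, neighbors, objective, tenure=7, max_iters=200):
        # Generic tabu search skeleton (minimization).  `neighbors(sol)`
        # yields (move_key, new_solution) pairs; `objective(sol)` returns the
        # value to minimize.  Recently used move keys are tabu unless the move
        # improves on the best value found so far (a standard aspiration rule).
        current, best = start, start
        best_value = objective(best)
        tabu = {}                                 # move_key -> iteration it expires
        for it in range(max_iters):
            candidates = []
            for key, sol in neighbors(current):
                val = objective(sol)
                if tabu.get(key, -1) > it and val >= best_value:
                    continue                      # tabu and no aspiration
                candidates.append((val, key, sol))
            if not candidates:
                break
            val, key, current = min(candidates, key=lambda c: c[0])
            tabu[key] = it + tenure               # forbid reversing the move for a while
            if val < best_value:
                best, best_value = current, val
        return best, best_value

In the authors' setting the solutions would be vertices of the
polyhedral region and the moves steps between adjacent vertices, but
that mapping is not given in the abstract.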
3. A Neural Network Model for Predicting Atlantic Hurricane Activity
Ohseok Kwon, Bruce Golden (University of Maryland), and Edward
Wasil (American University)
Modeling techniques such as linear regression have been used
to predict hurricane activity many months in advance of the
start of the hurricane season with some success. In this
paper, we construct feed-forward neural networks to model
Atlantic basin hurricane activity and compare the predictions
of our neural network models to the predictions produced by
statistical models found in the weather forecasting
literature. We find that our neural network models produce
reasonably accurate predictions that, for the most part,
compare favorably to the predictions of statistical models.
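A minimal sketch of the kind of model involved (one hidden layer,
trained by gradient descent) is given below; the actual predictors,
data, and architectures used in the paper are not described in the
abstract, so the data here are synthetic.

    import numpy as np

    # One-hidden-layer feed-forward network for regression, trained by batch
    # gradient descent on synthetic data (illustrative only).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(120, 4))                 # hypothetical predictors
    y = X @ np.array([0.8, -0.5, 0.3, 0.0]) + 0.1 * rng.normal(size=120)

    hidden, lr = 6, 0.05
    W1, b1 = rng.normal(scale=0.3, size=(4, hidden)), np.zeros(hidden)
    W2, b2 = rng.normal(scale=0.3, size=(hidden, 1)), np.zeros(1)

    for epoch in range(2000):
        H = np.tanh(X @ W1 + b1)                  # forward pass
        err = (H @ W2 + b2).ravel() - y
        # Backpropagate the mean-squared-error gradient.
        g_out = (2.0 / len(y)) * err[:, None]
        g_W2, g_b2 = H.T @ g_out, g_out.sum(axis=0)
        g_H = g_out @ W2.T * (1.0 - H ** 2)       # tanh derivative
        g_W1, g_b1 = X.T @ g_H, g_H.sum(axis=0)
        W1 -= lr * g_W1; b1 -= lr * g_b1
        W2 -= lr * g_W2; b2 -= lr * g_b2

    print("training MSE:", float(np.mean(err ** 2)))
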
Chair: John Hooker (Carnegie Mellon University)
1. Tutorial in First Order Logic
John Hooker (Carnegie Mellon University)
While operations research has been solving problems with
mathematical programming and related methods, computer science
and artificial intelligence have developed methods in which
first order logic plays a fundamental role. These include
logic programming (e.g., PROLOG), constraint logic programming
(CLP) and to some extent constraint satisfaction methods
(CHIP, ILOG Solver). First order logic is also the natural
framework for logic-based modeling languages. There is no
reason, other than historical accident, that logic-based
methods should not be standard tools of OR. In fact the
recent successes of constraint-based algorithms and the
advantages of logic-based modeling are beginning to attract
the attention of the OR community. It is not hard to foresee
an integration of mathematics- and logic-based methods that is
more powerful than either in isolation. If this prognosis is
correct, OR researchers would do well to become familiar with
logic, particularly first-order logic. Actually the basic
ideas are fairly straightforward, and this tutorial will
introduce them. It will cover the ideas of instantiation,
Skolemization, prenex form, completeness, Herbrand's theorem
(including a compactness result), resolution-based theorem
proving, and a brief application to logic programming. It
will provide the logical background necessary to follow the
remaining two papers in this session.
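As a small companion to the tutorial's treatment of resolution-based
theorem proving, the sketch below applies resolution at the
propositional (ground) level, the form clauses take after Skolemization
and instantiation; it illustrates refutation by deriving the empty
clause and is not the full first-order procedure.

    # Ground (propositional) resolution: saturate a clause set with resolvents;
    # deriving the empty clause proves unsatisfiability.  Literals are strings,
    # with a leading "-" marking negation (an illustrative encoding).
    def negate(lit):
        return lit[1:] if lit.startswith("-") else "-" + lit

    def resolvents(c1, c2):
        for lit in c1:                      # resolve on each complementary pair
            if negate(lit) in c2:
                yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

    def unsatisfiable(clauses):
        clauses = {frozenset(c) for c in clauses}
        while True:
            new = set()
            for c1 in clauses:
                for c2 in clauses:
                    for r in resolvents(c1, c2):
                        if not r:
                            return True     # empty clause derived
                        new.add(r)
            if new <= clauses:
                return False                # saturated without a contradiction
            clauses |= new

    # {p or q, -p or q, p or -q, -p or -q} has no model, so this prints True.
    print(unsatisfiable([{"p", "q"}, {"-p", "q"}, {"p", "-q"}, {"-p", "-q"}]))
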
2. A Linear Programming Model of First Order Logic
Vijay Chandru, Vivek S. Borkar (Indian Institute of Science), and
Sanjoy K. Mitter (Massachusetts Institute of Technology)
The embedding of first order predicate logic in a mathematical
programming context was initiated by R. Jeroslow. His
approach was to treat partially instantiated predicate
formulas as formulas of propositional logic, which can then be
represented as integer programming constraints. The emphasis
of this and subsequent work has been on using the embedding
purely for theorem proving and not for structural insights.
We introduce a new framework for embedding predicate logic
theorem proving that provides for structural analysis based on
topology. The framework is that of infinite-dimensional
mathematical programming. Using standard techniques of
topology we show that Herbrand's theorem is a simple
consequence of compactness in product topologies. Since
Herbrand's theorem is the cornerstone of first order theorem
proving, we believe that our framework therefore provides a
handle on the structural aspects of theorem proving. We are
also able to prove the unique "minimal" model property of
first order Horn logic, and thereby provide a foundation for
analyzing the model theory of logic programming.
3. Partial Instantiation Methods for Logic Programming
Gabriella Rago (University of Pisa) and John Hooker (Carnegie
Mellon University)
R. Jeroslow introduced a partial instantiation method for
checking satisfiability of formulas in first order logic that
differs radically from the standard resolution-based methods.
Our goal in this paper is to lay the theoretical groundwork
for an extension of the method that is general enough and
efficient enough for general logic programming with functions
and indefinite clauses.
Jeroslow's approach is essentially a row-generation method
that is roughly analogous to Benders decomposition in
mathematical programming. We improve this approach in two
ways: (a) we put formulas in prenex normal form and use the
concept of satisfiers introduced by Gallo and Rago to make
constraint generation much more efficient; (b) we eliminate the
partial covering mechanism. We also extend the approach to
full first-order logic with functions and prove correctness.
We prove completeness in the decidable Schönfinkel-Bernays
fragment of first order logic and prove termination on
unsatisfiable formulas in full first-order logic.
Chair: Gautam Mitra (Brunel University, UK)
1. A Unified Graphic Interface for Mathematical Programming and
Simulation Models
M. A. Pollatschek (Technion, Israel)
Algebraic languages for mathematical programming models are
declarative while simulation languages are procedural. It is
argued that this difference between the two does not stem from
any fundamental reason, and it is demonstrated that simulation
can be described just as well in a declarative way. Bringing
the two kinds of models to a common basis opens the
possibility of describing both within the same modeling
language framework, with all the benefits that such a unified
approach can offer. Geoffrion's 'Structured Modeling Language'
(SML) can be used to express simulation as associations of
entities for some time durations which follow each other
according to some rules. Associations and rules are clearly
declarative. An alternative formulation embedded in SML may be
based on 'event calculus' and sample paths whose ranges are
ordered sets. These two novel mathematical formalisms make it
possible to express simulation by equations, which are again
declarative. Examples will demonstrate these approaches. The
possibility of expressing SML via 'genus graphs' (GG) leads to a
graphic interface. It will be demonstrated how, starting from a
problem statement corresponding to an initial GG, the modeler
can be guided to the complete description of the model (the
final GG) by gradually modifying the initial GG.
2. Enhancing User Understanding Via Model Analysis in a Decision
Support System
David M. Steiger (University of North Carolina at Greensboro)
This paper focuses on the analysis of solved model instances
in a decision support system (DSS). Specifically, we provide a
unique taxonomy for the existing model analysis systems and
approaches based on the type of logic employed, deductive or
inductive. Then, we expand the taxonomy based on the premise
that the primary purpose of a DSS is to enhance the decision
maker's understanding of the modeled business environment. We
then propose a framework for the next evolution of analysis
systems based on the theory of understanding, its basic
premise (knowledge as design) and components (models, purpose
and arguments). We also discuss the potential application of
knowledge bases and artificial intelligence to enhance a
user's understanding of mathematical models in a DSS
environment.
3. Data and Optimisation Modeling: A Tool For Elicitation And
Browsing (DOME)
Hossein Mousavi, Gautam Mitra, Cormac Lucas (Brunel University,
UK)
The prototype design of a data and optimisation modelling and
elicitation tool (DOME) is presented in this report. The
design objectives of DOME are to support structuring of the
data and structuring of the model. A suitable graphical user
interface (GUI) has been designed to elicit the problem
owner's LP application. In order to aid the model development
and analysis process, a mechanism to browse through the data
model, the algebraic model, the constraint matrix and the solution
representation has been introduced. Multi-dimensional tables
and the "slice and dice" facilities to present different views
of data are also incorporated within this tool. An example of
developing and investigating prototype applications using this
tool is presented.
Co-chairs: Phillip J. Brown (Loral Vought Systems) and Allen Zumbach
(U.S. Army Missile Command)
1: Rapid Force Projection Initiative (RFPI) Live/Virtual Seamless
Simulation
Greg Tackett (U.S. Army Missile Command)
RFPI is an Army technology program and an OSD Advanced Concept
Technology Demonstration (ACTD), whose mission is to increase
lethality, survivability, and tempo for early entry forces
through the use of a Hunter Standoff-Killer concept. Virtual
simulation utilizing Distributed Interactive Simulation (DIS)
is a key element of the RFPI simulation effort. Virtual
prototypes of RFPI systems such as the Hunter Sensor Suite,
Remote Sentry, Intelligent Mine Field, Enhanced Fiber-Optic
Guided Missile (E-FOGM), Line of Sight Anti-Tank (LOSAT) and
Unmanned Aerial Vehicles (UAV) with Advanced Sensor Suite
Integration provide man-in-the-loop performance in the virtual
environment. Semi-automated forces representations of RFPI
systems round out the system types and quantities to play the
entire RFPI Brigade battle including field artillery and
advanced submunitions. The RFPI Command and Control Testbed,
or RC2T, provides digital command and control for realistic
weapon-target pairing. Virtual terrain developments allow
correlation to both HIRES scenarios in constructive models and
test terrains used for live exercises.
Live/virtual links are accomplished through instrumentation of
live vehicles to project them into the virtual environment, as
well as shared situational awareness between virtual and live
entities using the RC2T, which communicates with both SINCGARS
radio networks and DIS. Live/virtual integration was
demonstrated on a small scale during the RFPI Early Version
Demonstration (EVD) review in October of 1994. A series of
progressively larger experiments is planned, culminating in a
seamless RFPI Brigade battle; weapon/target pairing across
combinations of virtual and live elements will be integral
to the execution of the brigade battles.
2: DIS: Doorway to the Virtual Battlefield
Jack Lavender (Loral Vought Systems)
Loral Vought Systems is using the Distributed Interactive
Simulation (DIS) communication protocol to project independent
simulations into a common environment. This environment is
characterized as a synthetic battlefield. Development of the DIS
protocol began in 1989 to support Department of Defense Advanced
Distributed Simulation (ADS) initiatives. ADS applications
include training and readiness exercises--both large and small
scale; experiments designed to identify new capabilities from
existing systems; evaluation of new concepts and technology;
and development of system requirements to optimize battlefield
utility.
This briefing will key on the author's experience in using
the DIS interface protocol concept to connect virtual
prototypes with semi-automated forces to create engineering
testbeds for weapon system concepts. By using DIS,
implementers of participating simulations are able to
concentrate on their system/subsystems fidelity, yet allow it
to interact with other players to test operational
performance. Lessons learned from implementing the DIS
interface will be presented along with thoughts on future
research and development.
3: Lessons Learned in Implementing Virtual Prototypes
R. M. (Bob) Pippin (Loral Vought Systems)
Loral Vought Systems is using prototypes to support several
product development lines. Problems experienced in developing
these simulation models have produced a set of guidelines for
either avoiding or mitigating the risk inherent to working
with a diverse group of stakeholders.
This briefing will relate some significant problems
encountered, describe how each problem was resolved, and
identify actions that would have reduced problem impact.
Examples of problems encountered are: lack of a clear,
up-front consensus among customers, users and implementers on
simulation objectives; not acknowledging the value of failures
in accelerating the prototype development process; cobbling
together system architecture using previously developed
models; and ignoring real world design constraints in
implementing the prototype simulation. Containing cost and
reducing schedule risk in this rapidly developing technology
mandates that developers recognize and avoid repeated
victimization by these all-too-common pitfalls.
4: Verification and Validation of a Distributed Interactive
Simulation Prototype for Weapon System Effectiveness Evaluations
A. Zumbach (U.S. Army Missile Command)
Distributed Interactive Simulation (DIS) will be increasingly
used for weapon system development and evaluation. To obtain
meaningful results, verification and validation (V&V) of DIS
simulators is essential.
As defined in Army Regulation 5-11, verification is the
process of determining that the Model and Simulation (M&S)
accurately represents the developer's conceptual
description and specifications. Verification evaluates the
extent to which the M&S has been developed using sound and
established software engineering techniques. The verification
process involves identifying and examining interfaces with
input databases; ensuring that source code accurately
performs all intended and required calculations; reviewing
output reports; and performing structured walkthroughs
to determine whether sensitivity (or lack of sensitivity) to key
inputs may highlight a need to review the M&S algorithm for
omissions or errors. Verification also includes appropriate
data certification and M&S documentation. Verification should
normally be performed by an independent V&V agent, but it remains
the responsibility of the M&S proponent to ensure
accomplishment.
Likewise, AR 5-11 defines validation as the process of
determining the extent to which the M&S accurately represents the
real world from the perspective of its intended use. The
validation process ranges from single modules to the entire
system. Ultimately, the purpose is to validate the entire
system of M&S, data, and the operator-analysts who will execute
the M&S. Validation methods will incorporate documentation of
the procedures and results of any validation effort. Selected
examples of validation include: face validation, comparison
with historical data or other M&S results, comparison with
developmental test data or other engineering tests, comparison
with operational test data, other field tests, or operational
data, peer review, and independent (Red Team) review.
For a line-of-sight system, V&V of the human interaction with
the virtual world is one of the toughest tasks for the
simulator developer. Results from V&V testing of the LOSAT
Distributed Interactive Crew Station Simulator (DISCSS)
virtual prototype will be discussed as well as lessons learned
and outstanding issues.