Assumptions and Implications of the Linear Programming Model
In a linear program (LP), we want to maximize or minimize a linear objective function of a set of continuous, real-valued variables, subject to a set of linear equalities and inequalities.
Decision Variables: x1, x2, x3, ... , xn
Objective Function: Maximize (Minimize)
z(x1, x2, x3, ..., xn) = c1 x1 + c2 x2 + c3 x3 + ... + cn xn
where c1, c2 , c3 ,..., cn are real-valued constants.
subject to
f1(x1, x2, x3, ..., xn) (<=, =, or >=) b1
f2(x1, x2, x3, ..., xn) (<=, =, or >=) b2
f3(x1, x2, x3, ..., xn) (<=, =, or >=) b3
...
fm(x1, x2, x3, ..., xn) (<=, =, or >=) bm
where b1, b2, b3, ..., bm are real-valued constants and each fi is a linear function of the decision variables.
Feasible Region: the set of all points satisfying all the LP's constraints.
Optimal Solution for a Maximization Problem: a point in the feasible region with the largest objective function value.
Optimal Solution for a Minimization Problem: a point in the feasible region with the smallest objective function value.
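As a concrete illustration, the following sketch solves a small two-variable LP of this form with SciPy's linprog routine. The objective coefficients, constraint matrix, and right-hand sides are hypothetical numbers invented for the example, and since linprog minimizes, the maximization objective is negated.

# A minimal sketch of the general form above, with invented data.
# linprog minimizes, so to maximize z = 3 x1 + 5 x2 we minimize -z.
from scipy.optimize import linprog

c = [-3.0, -5.0]              # objective coefficients c1, c2 (negated)
A_ub = [[1.0, 0.0],           #   x1          <= 4
        [0.0, 2.0],           #        2 x2   <= 12
        [3.0, 2.0]]           # 3 x1 + 2 x2   <= 18
b_ub = [4.0, 12.0, 18.0]      # right-hand sides b1, b2, b3
bounds = [(0, None), (0, None)]   # x1, x2 >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x)     # optimal decision variables: [2. 6.]
print(-res.fun)  # optimal objective value: 36.0

Every point satisfying the three inequalities and the nonnegativity bounds lies in the feasible region; the solver returns the corner of that region with the largest objective value.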
The use of linear functions implies the following assumptions about the LP model:
1) Proportionality
The contribution of any decision variable to the objective function is proportional to its value.
For example, in the diet problem, the contribution to the cost of the diet from one pound of apples is $0.75, from two pounds of apples it is $1.50, and from four pounds it is $3.00. For four hundred pounds, the contribution would be $300.00.
In many situations, however, you might get a volume discount, so that the price per pound goes down as you purchase more apples. Such discounts are nonlinear, which means that a linear programming model is either inappropriate or is really an approximation of the real-world problem.
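To see the difference numerically, the sketch below compares the proportional cost of apples from the text with a hypothetical volume-discount schedule; the discount breakpoint and rate are invented for illustration.

# Proportional (linear) cost vs. a hypothetical volume-discount cost.
def linear_cost(pounds):
    return 0.75 * pounds                  # always $0.75 per pound

def discounted_cost(pounds):
    # Invented schedule: $0.75/lb for the first 100 lb, $0.60/lb beyond.
    if pounds <= 100:
        return 0.75 * pounds
    return 0.75 * 100 + 0.60 * (pounds - 100)

for lb in (1, 2, 4, 400):
    print(lb, linear_cost(lb), discounted_cost(lb))
# At 400 pounds the linear model charges $300.00, but the discount
# schedule charges only $255.00, so proportionality no longer holds.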
2) Additivity
The contribution to the objective function from any variable is independent of the values of the other decision variables. For example, in the NSC production problem, the production of P2 tons of steel in Month 2 will always contribute $4000 P2, regardless of how much steel is produced in Month 1.
Proportionality and additivity are also implied by the linear constraints. In the diet problem, you obtain 40 milligrams of protein for each gallon of milk you drink. It is unlikely, however, that you would actually obtain 4,000 milligrams of protein by drinking 100 gallons of milk. Also, it may be the case that, due to a chemical reaction, you would obtain less than 70 milligrams of Vitamin A by combining a pound of cheese with a pound of apples. Thus, the LP model is really just an approximation of what actually happens.
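A toy calculation makes the point. In the sketch below, the 70-milligram total for a pound of cheese plus a pound of apples comes from the text, but the split between the two foods and the size of the interaction loss are invented for illustration.

# Additive model: nutrient contributions simply sum across foods.
def vitamin_a_additive(cheese_lb, apples_lb):
    # Invented split: 50 mg/lb from cheese, 20 mg/lb from apples.
    return 50.0 * cheese_lb + 20.0 * apples_lb

# Hypothetical non-additive model: a chemical reaction destroys some
# Vitamin A when the two foods are combined.
def vitamin_a_interacting(cheese_lb, apples_lb):
    loss = 10.0 * min(cheese_lb, apples_lb)   # invented interaction term
    return vitamin_a_additive(cheese_lb, apples_lb) - loss

print(vitamin_a_additive(1, 1))      # 70.0 mg, as the LP assumes
print(vitamin_a_interacting(1, 1))   # 60.0 mg, less than the LP predicts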
3) Divisibility
Since we are using continuous variables, the LP model assumes that the decision variables can take on fractional values. Thus, we could have a solution to the GT Railroad problem that sends 0.7 locomotives from Centerville to Fine Place. In many situations, the LP is being used on a large enough scale that one can round the optimal decision variables up or down to the nearest integer and get an answer that is reasonably close to the optimal integer solution. For example, if an LP for a production plan said to produce 12,208.4 widgets, we could probably produce 12,209 and be close to an optimal solution. As we will discuss later in the semester, problems in which some or all of the variables must be integers are, generally speaking, much harder to solve than LPs.
Divisibility also implies that the decision variables can take on the full range of real values. For example, in the tennis problem, the LP may tell you to bet $19.123567 on player A to win the match. Again, most of the problems we will encounter in this course are on a large enough scale that some rounding or truncating of the optimal LP decision variables will not greatly affect the solution.
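The sketch below makes the comparison explicit on a tiny invented LP: it solves the continuous relaxation, then re-solves with both variables forced to be integers. It assumes SciPy 1.9 or later, where linprog accepts an integrality argument.

# Continuous vs. integer solutions of a tiny invented LP
# (requires SciPy >= 1.9 for the integrality argument).
from scipy.optimize import linprog

c = [-1.0, -1.0]                    # maximize x1 + x2 (negated)
A_ub = [[2.0, 1.0],                 # 2 x1 +   x2 <= 5
        [1.0, 3.0]]                 #   x1 + 3 x2 <= 6
b_ub = [5.0, 6.0]
bounds = [(0, None), (0, None)]

lp = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
ip = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs",
             integrality=[1, 1])    # force both variables to be integers

print(lp.x, -lp.fun)   # fractional optimum (1.8, 1.4), z = 3.2
print(ip.x, -ip.fun)   # integer optimum (2.0, 1.0), z = 3.0

Here rounding the fractional solution up and down happens to land on the integer optimum, but on larger or tighter problems rounding can be far from optimal, or even infeasible.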
4) Certainty
The LP model assumes that all the constant terms, the objective function and constraint coefficients as well as the right-hand sides, are known with absolute certainty and will not change. If the values of these quantities are not known with certainty, for example if the demand data given in the NSC problem are forecasts that might not be 100% accurate, then this assumption is violated.
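One simple way to probe the certainty assumption is to perturb the data and re-solve. The sketch below does this for the small invented LP used earlier, treating one right-hand side as an uncertain forecast; the perturbation values are made up for illustration.

# Probing the certainty assumption: vary one right-hand side (a stand-in
# for an uncertain demand forecast) and watch the optimum move.
from scipy.optimize import linprog

c = [-3.0, -5.0]
A_ub = [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]
bounds = [(0, None), (0, None)]

for b3 in (17.0, 18.0, 19.0):       # pretend the third RHS is a forecast
    res = linprog(c, A_ub=A_ub, b_ub=[4.0, 12.0, b3],
                  bounds=bounds, method="highs")
    print(b3, res.x, -res.fun)
# Both the optimal plan and the objective value shift with the forecast,
# so the "optimal" solution is only as reliable as the data behind it.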