I have found an interesting characteristic of GAMS: it solves multiple simple LPs much more slowly than it solves the integrated version of the same problems. Here is an illustrative example.

I defined 27 simple LP problems. First I solved them individually, looping over the indices. Then I solved the integrated version that combines all of them.


```
Set
x /x1*x3/
y /y1*y3/
z /z1*z3/;
Parameter d1(x,y,z) /
x1.(y1*y3).(z1*z3) = 2.9
x2.(y1*y3).(z1*z3) = 3
x3.(y1*y3).(z1*z3) = 3.1
/;
Parameter d2(x,y,z) /
(x1*x3).y1.(z1*z3) = -4.8
(x1*x3).y2.(z1*z3) = -5
(x1*x3).y3.(z1*z3) = -5.2
/;
Parameter d3(x,y,z) /
(x1*x3).(y1*y3).z1 = 0.8
(x1*x3).(y1*y3).z2 = 1.0
(x1*x3).(y1*y3).z3 = 1.2
/;
********* Individual Problems *************
Positive Variable
x1;
Negative Variable
x2;
Free Variable
obj1
x3;
Scalars
dd1
dd2
dd3;
Equations
objj1
eq11
eq12
eq13
eq14;
objj1.. obj1 =e= 5 * x1 + 4 * x2 + 6 * x3;
eq11.. x1 + 2 * x2 =g= 2;
eq12.. x1 + x3 =l= dd1;
eq13.. -3 * x1 + 2 * x2 + x3 =l= dd2;
eq14.. x1 - x2 + x3 =e= dd3;
Model sample1 / objj1, eq11, eq12, eq13, eq14 /;
Parameter
store(x,y,z);
Scalar
storesum;
Loop(x,
Loop(y,
Loop(z,
dd1 = d1(x,y,z);
dd2 = d2(x,y,z);
dd3 = d3(x,y,z);
Solve sample1 maximizing obj1 using lp;
store(x,y,z) = obj1.l;
);
);
);
storesum = sum(x, sum(y, sum(z, store(x,y,z))));
display storesum;
********* Integrated Problem *************
Positive Variable
x12(x,y,z);
Negative Variable
x22(x,y,z);
Free Variable
obj2
x32(x,y,z);
Equations
objj2
eq21
eq22
eq23
eq24;
objj2.. obj2 =e= sum(x, sum(y, sum(z, 5 * x12(x,y,z) + 4 * x22(x,y,z) + 6 * x32(x,y,z))));
eq21(x,y,z).. x12(x,y,z) + 2 * x22(x,y,z) =g= 2;
eq22(x,y,z).. x12(x,y,z) + x32(x,y,z) =l= d1(x,y,z);
eq23(x,y,z).. -3 * x12(x,y,z) + 2 * x22(x,y,z) + x32(x,y,z) =l= d2(x,y,z);
eq24(x,y,z).. x12(x,y,z) - x22(x,y,z) + x32(x,y,z) =e= d3(x,y,z);
Model sample2 / objj2, eq21, eq22, eq23, eq24 /;
Solve sample2 maximizing obj2 using lp;
display obj2.l;
```
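For what it's worth, the same experiment can be reproduced outside GAMS. The sketch below is my own illustration in Python with SciPy (assuming SciPy is installed; `linprog` and `block_diag` are SciPy functions, nothing from the GAMS model): it solves the 27 small LPs one at a time, then solves them as a single block-diagonal LP, and both totals agree.

```python
# Solve the 27 small LPs individually and as one block-diagonal LP,
# and check that both formulations give the same total objective.
import numpy as np
from scipy.linalg import block_diag
from scipy.optimize import linprog

d1 = [2.9, 3.0, 3.1]      # varies with index x
d2 = [-4.8, -5.0, -5.2]   # varies with index y
d3 = [0.8, 1.0, 1.2]      # varies with index z

c = [-5.0, -4.0, -6.0]    # linprog minimizes, so negate to maximize
A_ub = [[-1.0, -2.0, 0.0],  # -(x1 + 2 x2) <= -2  (i.e. x1 + 2 x2 >= 2)
        [1.0, 0.0, 1.0],    # x1 + x3 <= d1
        [-3.0, 2.0, 1.0]]   # -3 x1 + 2 x2 + x3 <= d2
A_eq = [[1.0, -1.0, 1.0]]   # x1 - x2 + x3 = d3
bounds = [(0, None), (None, 0), (None, None)]  # x1 >= 0, x2 <= 0, x3 free

# Individual solves, one per (d1, d2, d3) combination
total = 0.0
for a in d1:
    for b in d2:
        for g in d3:
            res = linprog(c, A_ub=A_ub, b_ub=[-2.0, a, b],
                          A_eq=A_eq, b_eq=[g], bounds=bounds)
            total += -res.fun

# Integrated solve: stack all 27 scenarios block-diagonally
combos = [(a, b, g) for a in d1 for b in d2 for g in d3]
n = len(combos)
C = np.tile(c, n)
Aub = block_diag(*[np.array(A_ub)] * n)
bub = np.concatenate([[-2.0, a, b] for a, b, g in combos])
Aeq = block_diag(*[np.array(A_eq)] * n)
beq = np.array([g for a, b, g in combos])
res2 = linprog(C, A_ub=Aub, b_ub=bub, A_eq=Aeq, b_eq=beq,
               bounds=bounds * n)

print(total, -res2.fun)  # the two totals agree
```

(This is just an outside-GAMS sanity check that the two formulations are mathematically equivalent; it says nothing about where GAMS spends its time.)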

If the situation scales up to millions of LPs, solving a single large-scale LP with millions of variables and millions of constraints becomes exceedingly time consuming, which is what motivated decomposition techniques such as Benders and Dantzig-Wolfe. Those techniques essentially decompose the large-scale LP into many individual LPs (called subproblems) in order to reduce computational time. But the GAMS elapsed-time result here seems to point the other way.

Does anyone have any idea what is going on here? I guess there should be a way to do something like parallel computing, but I am a newbie to GAMS. Thanks in advance!

Gabriel