Smooth approximations for MAX(X,0) and MIN(X,0)


Smooth approximations for MAX(X,0) and MIN(X,0)

Post by aileen »

Using min and max in a model makes some derivatives discontinuous, so the model type DNLP has to be used, and solvers tend to get stuck at the points where the derivatives are discontinuous. How can I find smooth approximations for max(x,0) and min(x,0)?

Re: Smooth approximations for MAX(X,0) and MIN(X,0)

Post by aileen »

Here is the answer from Prof. Ignacio Grossmann (Carnegie Mellon University):

Use the approximation:

Code:

f(x) := ( sqrt( sqr(x) + sqr(epsilon) )  + x ) / 2
for max(x,0), where sqrt is the square root and sqr is the square.
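
For illustration, here is a minimal GAMS sketch of how this formula can replace max(x,0) inside a model so that it solves as an NLP instead of a DNLP. The names (eps, smoothmax, defobj), the objective, and the value of epsilon are my own choices for the example, not part of the original answer:

Code:

Scalar eps 'smoothing parameter' / 1e-4 /;

Variables x, y, z;

Equations
   smoothmax 'smooth stand-in for y = max(x,0)'
   defobj    'an arbitrary objective that uses y';

* smooth replacement for y =e= max(x,0)
smoothmax .. y =e= ( sqrt( sqr(x) + sqr(eps) ) + x ) / 2;

defobj .. z =e= sqr(x - 1) + y;

Model m / all /;
Solve m using nlp minimizing z;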

The error err(x) = abs(f(x) - max(x,0)) of the above approximation is maximized at x = 0 (the point of non-differentiability), where err(0) = epsilon/2. As x goes to +/- infinity, err(x) goes to 0. One can shift the function so that the error at 0 becomes 0, at the price that the error approaches epsilon/2 as x goes to +/- infinity:

Code:

g(x) := ( sqrt( sqr(x) + sqr(epsilon) )  + x - epsilon ) / 2
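
As a quick sanity check of these error properties, one can evaluate both formulas at a few points (a throwaway script with my own names and an arbitrary epsilon, not part of the original answer):

Code:

Scalar eps / 1e-2 /;
Set i / i1*i5 /;
Parameter xval(i) / i1 -100, i2 -1, i3 0, i4 1, i5 100 /;
Parameter f(i), g(i), errf(i), errg(i);

f(i) = ( sqrt( sqr(xval(i)) + sqr(eps) ) + xval(i) ) / 2;
g(i) = ( sqrt( sqr(xval(i)) + sqr(eps) ) + xval(i) - eps ) / 2;

* errf peaks at x = 0 with value eps/2; errg is 0 at x = 0 and tends to eps/2 for large abs(x)
errf(i) = abs( f(i) - max(xval(i), 0) );
errg(i) = abs( g(i) - max(xval(i), 0) );

display errf, errg;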
Because min(x,0) = -max(-x,0), the same approximations can be used for min(x,0) as well. In both formulas, epsilon is a small positive constant.
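
Following that identity, the min counterpart can be written out directly (again a sketch with my own names and an arbitrary epsilon):

Code:

Scalar eps / 1e-4 /;
Variables x, w;
Equation smoothmin 'smooth stand-in for w = min(x,0)';

* min(x,0) = -max(-x,0), so the smooth form is ( x - sqrt(x^2 + eps^2) ) / 2
smoothmin .. w =e= ( x - sqrt( sqr(x) + sqr(eps) ) ) / 2;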