Calculating eigenvalues

Post by aileen »

How do I calculate all eigenvalues and eigenvectors of a symmetric matrix?

Re: Calculating eigenvalues

Post by aileen »

Calculating all eigenvalues and eigenvectors of a symmetric matrix at once results in a highly nonlinear, non-convex model, and NLP solvers will most likely have trouble with it. However, the eigenvalues can be computed one by one: maximize the quadratic form e'Ce over the unit sphere and, after each solve, add a constraint that forces the next eigenvector to be orthogonal to the ones already found. A small trick in the generation of the initial values removes the components parallel to the already found eigenvectors and ensures that the initial point of each solve is feasible. All constraints except one are linear and the objective is quadratic, so each subproblem is easy to solve.
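
Written out, the k-th solve in the loop below is the subproblem

\[
\max_e \; e^\top C\, e
\quad \text{s.t.} \quad \sum_j e_j^2 = 1,
\qquad v_i^\top e = 0 \quad (i = 1,\dots,k-1),
\]

where C is the covariance matrix and v_1, ..., v_{k-1} are the eigenvectors already found. At a KKT point e is an eigenvector of C and the objective value is the corresponding eigenvalue; the global maximum is the largest eigenvalue on the remaining orthogonal complement.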

Alternatively, $libInclude linalg can be used to calculate eigenvalues and eigenvectors (see the header of the file <sysdir>\inclib\linalg.gms for details). The model below (contributed by Arne Drud) demonstrates both approaches on a randomly generated covariance matrix and checks the results against the original matrix.

Code:

$Title Covariance Matrix and its Eigenvalues and -vectors.
Sets
    j    'variables'         /j1*j25/
    n    'observations'      /n1*n40/;

Alias (j,jj,jjj);

Parameters
    jn1(j,n)    'observations of the variables'
    meanj(j)    'mean value of the observations'
    covj(j,jj)  'First covariance matrix';

* forming a covariance matrix
jn1(j,n)   = normal(0,1);
meanj(j)   = sum(n, jn1(j,n))/card(n);
covj(j,jj) = sum(n, (jn1(j,n) - meanj(j))*(jn1(jj,n) - meanj(jj)))/card(n);

Parameter
    evecj(j,j)  'Eigenvectors found so far'
    evalj(j)    'Eigenvalues found so far';

*
*  Define a model that finds the largest eigenvalue and eigenvector:
*
Set       jev(j)   'Subset already having an associated eigenvector-eigenvalue pair';
Scalar    normrv   'Norm of initial random vector';

jev(j) = no;
evecj(j,j) = 0;

Variable
    evec(j)    'Eigenvector'
    eval       'Eigenvalue';

Equation
    eq_obj     'Definition of Objective'
    eq_orth(j) 'The new Eigenvector must be orthogonal to earlier Eigenvectors'
    eq_norm    'The norm of the eigenvector must be 1';

eq_obj        .. eval =e= sum((j,jj), evec(j)*covj(j,jj)*evec(jj) );

eq_orth(jev)  .. sum( j, evecj(j,jev) * evec(j) ) =e= 0;

eq_norm       .. sum(j, sqr(evec(j)) ) =e= 1;

Model mod_ev / all /;

Parameter helper(j) 'projections of the random start vector on the found eigenvectors';

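* Suppress listing output and keep the solver loaded in memory (solveLink)
* so that the repeated small solves in the loop run quickly.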
option limRow=0, limCol=0, solPrint=silent, solveLink=%solveLink.loadLibrary%;
loop(jjj,
*
*  We initialize the eigenvector with random numbers. To prevent using
*  a lot of time in phase 1 we make sure the orthogonality constraints
*  are satisfied by subtracting a multiple of the already found
*  eigenvectors. We do not check that the remaining vector is still
*  nonzero.
*
   evec.l(j)     = uniform(-1,1);
   helper(jev)   = sum(j, evec.l(j)*evecj(j,jev));
   evec.l(j)     = evec.l(j) - sum(jev,helper(jev)*evecj(j,jev));
   normrv        = sqrt( sum(j, sqr(evec.l(j)) ) );
   evec.l(j)     = evec.l(j) / normrv;
   solve mod_ev using nlp maximizing eval;
   abort$(mod_ev.solvestat ne 1) 'solvestat not 1', mod_ev.solvestat;
   abort$(mod_ev.modelstat ne 1 and mod_ev.modelstat ne 2) 'modelstat not 1 or 2';
   jev(jjj)      = Yes;
   evecj(j,jjj)  = evec.l(j);
   evalj(jjj)    = eval.l;
);

Parameter
    testj(j,j) 'New covariance matrix'
    difj(j,j)  'Difference between original and new covariance';

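* Check: the spectral decomposition sum_k evalj(k)*evecj(.,k)*evecj(.,k)' must
* reproduce the original covariance matrix within a tolerance of 1e-6.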
testj(j,jj) = sum( jjj, evecj(j,jjj) * evalj(jjj) * evecj(jj,jjj) );
difj(j,jj)  = covj(j,jj) - testj(j,jj);
abort$(smax((j,jj), abs(difj(j,jj)))>1e-6) difj;

* Use $libInclude linalg to calculate Eigenvectors and Eigenvalues
$libInclude linalg eigenvector j covj evalj evecj
testj(j,jj) = sum( jjj, evecj(j,jjj) * evalj(jjj) * evecj(jj,jjj) );
difj(j,jj)  = covj(j,jj) - testj(j,jj);
abort$(smax((j,jj), abs(difj(j,jj)))>1e-6) difj;
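
If only the eigenvalues are needed, linalg.gms also provides an eigenvalue utility. The call below is a sketch that assumes the same argument pattern as the eigenvector call above; the header of <sysdir>\inclib\linalg.gms documents the exact syntax.

Code:

* Sketch (assumed syntax): eigenvalues of covj only, written into evalj
$libInclude linalg eigenvalue j covj evalj
display evalj;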