I need to run a small model in Python successively, hundreds of thousands of times, each time with different dimensions and values for the sets and parameters.
Before parallelizing, I'd like to know if there are a few tricks to reduce the computation time.
Which is faster?
- loading the model from a string or from a file?
- creating a database once and updating it with a sync each time, or creating the database from scratch each time?
- what about dynamically writing the dataset into the model stream from Python?
- any other tricks?
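To illustrate the third idea, here is a minimal sketch of what I mean by writing the dataset directly into the model stream: the data is rendered as GAMS declarations in Python and spliced into the model source before each run. All names here (`MODEL_TEMPLATE`, `build_model_source`, the set `i` and parameter `d`) are illustrative, not from my actual model.

```python
# Sketch: render a Python dict as GAMS set/parameter declarations and
# splice them into the model source, so each run gets its data at
# compile time instead of going through a database.

MODEL_TEMPLATE = """
Set i /{set_elems}/;
Parameter d(i) /{param_recs}/;
* ... rest of the model ...
"""

def build_model_source(values: dict) -> str:
    """Inline the set elements and parameter records into the model string."""
    set_elems = ", ".join(values)
    param_recs = ", ".join(f"{k} {v}" for k, v in values.items())
    return MODEL_TEMPLATE.format(set_elems=set_elems, param_recs=param_recs)

src = build_model_source({"i1": 3.5, "i2": 4.0})
# src now contains "Set i /i1, i2/;" and "Parameter d(i) /i1 3.5, i2 4.0/;"
# and could be handed to something like add_job_from_string(src).
```

The open question is whether this kind of string splicing beats updating a database once per run.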
I have found the Suppress option in the documentation, but I don't know how to use it:
Code:
opt.defines['suppress'] = "SuppressCompilerListing" ?
Thanks a lot in advance
Best regards
Serge