Does Google or-tools support incremental SAT solving? - or-tools

I wonder whether Google or-tools supports incremental SAT solving.
Many applications of SAT are based on solving a sequence of incrementally generated SAT instances. In such scenarios the SAT solver is invoked multiple times on the same formula under different assumptions, and the formula is extended by adding new clauses and variables between the invocations. It is possible to solve such problems by solving the incrementally generated instances independently, but this can be very inefficient compared to an incremental SAT solver, which can reuse knowledge acquired while solving the previous instances (for example the learned clauses).
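To make that workflow concrete, here is a minimal sketch of incremental solving under assumptions using the PySAT package (used purely for illustration, not or-tools); the clauses and assumptions are made up for the example:

```python
from pysat.solvers import Glucose3  # pip install python-sat

with Glucose3() as s:
    # initial formula: (x1 or x2) and (not x1 or x3)
    s.add_clause([1, 2])
    s.add_clause([-1, 3])

    # first invocation: solve under the assumption that x3 is false
    print(s.solve(assumptions=[-3]), s.get_model())

    # extend the formula between invocations with a new clause
    s.add_clause([-2, -3])

    # second invocation: different assumptions, same solver state,
    # so learned clauses from the first call can be reused
    print(s.solve(assumptions=[1]), s.get_model())
```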

Related

Estimating Noise Budget without Secret Key

In the latest version of SEAL, the Simulation class has been removed.
Has it been shifted to some other file?
If it has been completely removed from the library, then how can I estimate noise growth, and choose appropriate parameters for my application?
The Simulation class and related tools are indeed removed from the most recent SEAL. There were several reasons for this:
updating them was simply too much work with all the recent API changes;
the parameter selection and noise growth estimation tools worked well only in extremely restricted settings, e.g. very shallow computations;
the design of these tools didn't necessarily correspond very well to how we envision parameter selection happening in the future.
Obviously it's really important to have developer tools in SEAL (e.g. parameter selection, circuit analysis/optimization, etc.) so these will surely be coming back in some form.
Even without explicit tools for parameter selection you can still test different parameters and see how much noise budget you have left after the computation. Repeat this experiment multiple times to account for probabilistic effects and convince yourself that the parameters indeed work.
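A minimal sketch of that experiment loop, where measure_remaining_budget is a hypothetical placeholder for your own encrypt/evaluate/decrypt pipeline (in the BFV scheme the remaining budget of a ciphertext can be read with SEAL's Decryptor invariant_noise_budget method):

```python
def measure_remaining_budget(params):
    # Hypothetical helper: generate keys for `params`, encrypt fresh inputs,
    # run your computation, and return the remaining noise budget in bits
    # (e.g. via Decryptor's invariant_noise_budget for the BFV scheme).
    raise NotImplementedError

def parameters_look_safe(params, trials=30, margin_bits=10):
    # Repeat the experiment to account for probabilistic noise growth,
    # and require a safety margin above zero remaining budget.
    worst = min(measure_remaining_budget(params) for _ in range(trials))
    return worst >= margin_bits
```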

Time driven simulation backwards steps

I am looking for an alternative to Simulink that would allow controlling the stepping mechanism. The same concern was raised in this question: Simulink does not allow backward stepping, so complex systems x(t,u), for which the change of variables (t <- -t) would require weeks of work, cannot be controlled (u) correctly because of the interdependence between past states and present decisions (u = f(x(s)), s <= t).
Conceptually, the trick simply consists of manually storing previous states in memory. For a Simulink user this is difficult and error-prone, whereas stepping backwards in time directly would be simple and would fit nicely into Simulink from the developer's point of view.
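As an illustration of that trick (outside Simulink), here is a minimal fixed-step sketch in which past states are stored manually so the control decision can depend on x(s), s <= t; the dynamics, controller, and delay are made up for the example:

```python
# Toy fixed-step simulation that stores past states so the control law
# can look back at x(s), s <= t (the "manual memory" trick).
history = []                      # list of (t, x) pairs

def dynamics(x, u):
    return -0.5 * x + u           # illustrative dynamics only

def controller(t, history):
    # decision based on the state roughly one time unit in the past
    past = next((x for (s, x) in reversed(history) if s <= t - 1.0), 0.0)
    return -0.1 * past

x, t, dt = 1.0, 0.0, 0.01
for _ in range(1000):
    history.append((t, x))
    u = controller(t, history)
    x += dt * dynamics(x, u)      # explicit Euler step
    t += dt

print(f"final state at t={t:.2f}: {x:.4f}")
```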

Factors that Impact Translation Time

I have run across issues when developing models where the translation time (the model simulates quickly but takes far too long to translate) has become a serious problem, and I could use some insight so I can look into resolving it.
So the question is:
What are some of the primary factors that impact the translation time of a model and ideas to address the issue?
For example, things that may have an impact:
for loops vs a vectorized method - a basic model testing this didn't seem to impact anything
using input variables vs parameters
impact of annotations (e.g., Evaluate=true)
or tough luck, this is tool dependent (Dymola, OMEdit, etc.) :(
use of many connect() - this seems to be a factor (perhaps the primary one) as it forces the translator to do all the heavy lifting
Any insight is greatly appreciated.
Clearly the answer to this question is naturally open-ended. There are many things to consider when computation times may be a factor.
For distributed models (e.g., finite difference), building simple submodels and then using connect equations to link them in the appropriate order is not the best way to produce the model. Experience has shown that this method increases the translation time to unbearable lengths. It is better to create distributed models with the same approach that is used in the MSL DynamicPipe (not exactly like it, but similar).
Changing the approach as described is significantly faster in translation time (orders of magnitude for larger models, >~100,000 equations) than using connect statements as the number of distributed elements grows. This was tested using Dymola 2017 and 2017 FD01.
Some related materials pointed out by others that may be useful for more information have been included below:
https://modelica.org/events/modelica2011/Proceedings/pages/papers/07_1_ID_183_a_fv.pdf
Scalable Test Suite: https://dx.doi.org/10.3384/ecp15118459

Barriers to translation stage in Modelica?

Some general Modelica advice?
We've built a model with ~2000 equations and three vectors of input from measured data. Using OpenModelica, attempts at simulation have begun to hang in the translation stage (which runs for hours where it used to take less than a minute) and now I regularly "lose connection to omc.exe." Is there perhaps something cumulative occurring that's degrading translation/compilation performance?
In general, are there any good rules of thumb for keeping simulations lighter and faster? I realize that, depending on the couplings, additional equations could be exponentially increasing the size of the resulting system of equations - could this be a problem?
Thanks for your thoughts!
It shouldn't take that long. Seems like a bug.
You can report this bug here:
https://trac.openmodelica.org/OpenModelica (New Ticket).
If your model is public you can post it there, if not you can contact the OpenModelica team privately.
I did some cleaning in the code and got the part that repeats 12x (the module) down to ~180 equations. In the process I reduced the size of my input vectors (and also a 2D look-up table the module refers to) by quite a bit; they're both down to a few hundred values. It's working now: simulations run in reasonable time, a few minutes each.
Since all these tables were defined within Modelica functions (as you pointed out, Mr. Tiller), perhaps shrinking them helped to improve the performance. I had assumed that all that data just got spread out in a memory array without going through any real processing, but maybe that's not the case. Time to learn more about what's going on under the hood in this environment (as always).
Thanks for the help!

ILP solvers with small memory footprint

I'm trying to solve a sequence-labelling problem by formulating it as an integer linear program (as an experiment to see how well doing it in that way works). I've already found some suggestions for solvers on SO but I would like to get some more fine-grained advice due to some constraints I'm under (yes, that pun was actually intended).
I'm running out of memory on more than half of my sequences due to their length when using COIN-OR, although I see no reason why my problem should need so much memory: this is a Boolean linear program, so I would theoretically need only one bit per feature. However, the COIN Open Solver Interface, for example, seems to be able to use only double values, e.g. for defining constraints.
Are there any (free) ILP packages which are well-suited for either Boolean problems or at least for problems with a very small range of potential values?
CPLEX seems to be considered approximately the state of the art, and in my experience with hard ILPs it is often better than any free solver I have found. Unfortunately, CPLEX is not free except for academic users: IBM offers free access to CPLEX for students and researchers at educational institutions, if you fit that description.
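Not part of the answer above, but as one hedged illustration of a free option for purely Boolean problems: Google's OR-Tools CP-SAT solver works directly on Boolean/integer variables rather than a double-typed LP matrix, which may help with the memory concern. A minimal sketch with a made-up toy constraint set:

```python
from ortools.sat.python import cp_model

model = cp_model.CpModel()

# Toy 0-1 problem: five Boolean "label" variables with a couple of constraints.
x = [model.NewBoolVar(f"x{i}") for i in range(5)]
model.Add(sum(x) >= 2)                 # at least two labels active
model.AddBoolOr([x[0].Not(), x[1]])    # x0 implies x1
model.Minimize(sum(x))

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([solver.Value(v) for v in x])
```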