I'm converting my problem to SMT, and I've noticed that the SMT solvers (MathSat5 and CVC4) are slow when solving sat instances. My suspicion is that something in my translation is making them slow.
I'm attaching a sample cnf instance and its smt2 translation for reference, and below I give solver times (excluding translation time) for a larger instance, comparing MathSat5, CVC4 and MiniSat.
Solver          Solver Time (s)
-------------------------------------
MiniSat         0.028062
MathSat5        2.629702
CVC4            7.488870
CVC4 (QF_SAT)   1.253978
So does anyone have an idea why these times are drastically different?
PS. cvc4 says it spent 5.862 seconds in: theory uf symmetry_breaker
Sample cnf:
-------------------------------------
p cnf 20 91
4 -18 19 0
...
4 -16 -5 0
Sample smt2:
-------------------------------------
(set-logic QF_UF)
(set-info :smt-lib-version 2.0)
(set-option :produce-models true)
(declare-fun v1 () Bool)
...
(declare-fun x20 () Bool)
(assert (or v4 (not x18) x19))
...
(assert (or v4 (not v16) (not v5)))
(check-sat)
(get-value ( v1 ... x20))
(exit)
Thanks
SMT solvers have extra overhead because of theory solvers. In CVC4, you can avoid this by using the following commands:
(set-logic QF_UF)
(set-info :cvc4-logic QF_SAT)
instead of
(set-logic QF_UF)
Note that this is a CVC4 extension, not part of the SMT-LIB standard. But if you are truly using only Boolean reasoning, this should give you competitive performance.
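For example, a minimal self-contained sketch with the extra header (the variables a and b here are just toy placeholders, not from your instance):
(set-logic QF_UF)
(set-info :smt-lib-version 2.0)
(set-info :cvc4-logic QF_SAT)
(set-option :produce-models true)
(declare-fun a () Bool)
(declare-fun b () Bool)
(assert (or a (not b)))
(assert (or (not a) b))
(check-sat)
(get-value (a b))
(exit)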
I am trying to place N rectangular blocks of different sizes into a grid by formulating it as a CSP problem.
The blocks must not overlap with each other, they may touch at the edges, and there can be empty places.
For example, place 4 rectangular blocks of size 2x2 into an 8x8 grid. (Vary the number of blocks, the sizes of the blocks, and the size of the grid.) I know the formula.
I am trying to write a program or script that generates the formula, but I am quite confused and cannot write it in SMT syntax. Any help would be greatly appreciated. Thank you.
You should be specific about what you tried and what didn't work. If your problem is with syntax, then here's something to get you started:
(set-option :produce-models true)
(declare-fun xi () Int)
(declare-fun wi () Int)
(declare-fun xj () Int)
(declare-fun wj () Int)
(assert (or (<= (+ xi wi) xj)
(<= (+ xj wj) xi)))
The above encodes the first two disjuncts in your formula. You can add the other variables and assert all the other conditions as required.
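For instance, here is a rough sketch of the full non-overlap constraint for two blocks i and j, assuming each block is described by its lower-left corner (xi, yi) and its width and height (wi, hi); the yi/hi/yj/hj declarations and the grid bound of 8 are just illustrative additions mirroring the names above:
(declare-fun yi () Int)
(declare-fun hi () Int)
(declare-fun yj () Int)
(declare-fun hj () Int)
; blocks i and j may touch but must not overlap
(assert (or (<= (+ xi wi) xj)   ; i is entirely to the left of j
            (<= (+ xj wj) xi)   ; i is entirely to the right of j
            (<= (+ yi hi) yj)   ; i is entirely below j
            (<= (+ yj hj) yi))) ; i is entirely above j
; keep block i inside an 8x8 grid (adjust 8 to your grid size)
(assert (and (>= xi 0) (>= yi 0) (<= (+ xi wi) 8) (<= (+ yi hi) 8)))
(check-sat)
(get-value (xi yi xj yj))
You would repeat the non-overlap disjunction for every pair of blocks and the bounds for every block.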
I am working on a Scheme program where, at one point, I need a pair consisting of a floating-point counter and the same counter formatted as a string. I am having issues with the number-to-string conversion.
Can someone explain the inaccuracies in this code?
(letrec ((ground-loop
          (lambda (times count step)
            (if (= times 250)
                (begin
                  (display "exit")
                  (newline))
                (begin
                  (display (* times step)) (newline)
                  (display (number->string (* times step))) (newline)
                  (newline)
                  (newline)
                  (ground-loop (+ times 1) (* times step) step))))))
  (ground-loop 0 0 0.05))
Part of the output looks like this:
7.25
7.25
7.3
7.300000000000001
7.35
7.350000000000001
7.4
7.4
7.45
7.45
7.5
7.5
7.55
7.550000000000001
7.6
7.600000000000001
7.65
7.65
I am aware of floating-point inaccuracies and have tried several ways of increasing the counter, but the issue is in the conversion itself.
Any ideas for an easy fix? I tried a bit with explicitly rounded numbers, but that did not do the job. The results even vary from IDE to IDE and from environment to environment. Do I really have to do string manipulation after the conversion?
The really odd thing in my case is that the numeric result looks exact, but the string is off.
Thank you
It looks to me as if:
the native float type (the type you get by reading 1.0) of your implementation is IEEE double float;
the display of your Scheme is not printing such floats 'correctly' (see below; I'm not sure this means it's buggy);
your number->string is doing the right thing.
By 'correctly' above I mean 'in a way so that reading what display printed returns an equivalent number'. I am not at all sure that display is required to be correct in this restrictive sense however, so I am not sure whether it's a bug. Someone who understands the Scheme standards better than I do might be able to comment on that.
In particular, if the native float type of the language is an IEEE double float, then, for instance:
(= (* 0.05 3) 0.15)
is false, as is
(= (* 0.05 146) 7.3)
which is the 7.3 example from your output.
So you certainly should not assume that your program will ever produce a number equal to the number you get by reading 7.3 for instance, because it won't.
In the above I have carefully avoided printing the numbers out, and that's because I'm not sure display is reliable on this, and in particular I'm not sure your display is reliable or that it is required to be.
Well, I have a Lisp implementation to hand which is reliable about this. In this system the default float format is a single-precision IEEE float, and I can get the reader to read double floats with, for instance 1.0d0. So, in this implementation you can see the results:
> (* 0.05d0 3)
0.15000000000000002D0
> (* 0.05d0 146)
7.300000000000001D0
And you'll see that these are exactly (up to the double-precision indicator) what number->string is giving you and not what display is giving you.
If what you want to do is to get a representation of the number in such a way that reading it will return an equivalent number, then number->string is what you should trust. In particular R5RS says in section 6.2.6 that:
(let ((number number)
      (radix radix))
  (eqv? number
        (string->number (number->string number radix)
                        radix)))
is true, and 'it is an error if no possible result makes this expression true'.
You can check the behaviour of number->string & string->number over a range of numbers by, for instance, the following (this may assume a more recent or more featureful Scheme than you have):
(define (verify-float-conversion base times)
  (define (good? f)
    (eqv? (string->number (number->string f)) f))
  (let loop ([i 0]
             [bads '()])
    (let ([c (* base i)])
      (if (>= i times)
          (values (null? bads) (reverse bads))
          (loop (+ i 1) (if (good? c) bads (cons c bads)))))))
Then you should get
> (verify-float-conversion 0.05 10000)
#t
()
More generally, using floats (still more, floats that are the result of some computation more complicated than reading them from some input source) as unique indices into any kind of tabular structure is fraught with danger, to put it rather mildly: floating-point errors mean that it is just really dangerous to assume that (= a b) is true for floats even when it mathematically should be.
If you want such indices, do exact arithmetic instead, and convert the results of that arithmetic to floats at the point where you need to do computations. I believe (but am not sure) that Scheme implementations are nowadays required to support exact rational arithmetic (certainly this seems to be true for R6RS), so if you want to count 20ths (say) you can do so by counting in units of 1/20, which is exact, and then constructing floats when you need them.
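For instance, a minimal sketch of counting in exact twentieths and constructing a float only at the point of output (this assumes an implementation with exact rationals; show-twentieths is just an invented name):
(define (show-twentieths times)
  ;; i * 1/20 stays an exact rational, so no error accumulates;
  ;; a float is constructed only when something has to be printed
  (let loop ([i 0])
    (if (< i times)
        (let ([f (exact->inexact (* i 1/20))])
          (display f) (newline)
          (display (number->string f)) (newline)
          (loop (+ i 1))))))

(show-twentieths 250)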
It's probably safe to compare floats if, for instance, you are comparing a float you got by taking some initial float value and multiplying it by a machine integer against some earlier version of itself which you have read back with string->number. But if the calculation you're doing is more complicated than that, you need to be quite careful.
I am trying to implement RSA prime generation for P and Q based on the FIPS 186-4 specification. The specification describes two different implementations: Section 3.2, Provable Prime Construction, versus Section 3.3, Probable Prime Construction. Initially, I tried implementing the probable prime approach because it is easier to understand and implement, but I discovered that it is very slow because of the number of iterations needed to find the primes P and Q (in the worst case it takes 15 minutes). Next, I decided to try the provable prime approach, but I found that the algorithm is much more complex and might be slow as well. Below are my two issues:
In Section C.10, Step 12, how can I eliminate sqrt(2) from the expression x = floor(sqrt(2) * 2^(L−1)) + (x mod (2^L − floor(sqrt(2) * 2^(L−1)))) so that I can represent it using whole numbers in a BigNum representation?
In Section C.10, Step 14, is there a fast way to compute y in the interval [1, p2] such that (y * p0 * p1 − 1) mod p2 = 0? The specification doesn't specify a method to implement this. My initial thought was to perform a linear search starting from the integer 1 and going up, but that can be very slow because p2 can be a very large number.
I tried searching online for help on this issue, but I discovered that a lot of examples don't even comply with FIPS 186-4. I assume it is because these two methods are too slow.
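For what it's worth, both quantities can be computed with integer-only arithmetic: floor(sqrt(2) * 2^(L−1)) is exactly the integer square root of 2^(2L−1), and the y in Step 14 is the modular inverse of p0 * p1 modulo p2, which the extended Euclidean algorithm finds in O(log p2) steps rather than by linear search. A rough sketch (written in Scheme only because it has native bignums; isqrt, egcd and mod-inverse are invented helper names):
;; Step 12: floor(sqrt(2) * 2^(L-1)) = isqrt(2^(2L-1)), so no real-valued sqrt(2) is needed
(define (isqrt n)                      ; integer square root by Newton's method
  (let loop ((x n) (y (quotient (+ n 1) 2)))
    (if (< y x)
        (loop y (quotient (+ y (quotient n y)) 2))
        x)))

(define (sqrt2-scaled L)               ; floor(sqrt(2) * 2^(L-1))
  (isqrt (expt 2 (- (* 2 L) 1))))

;; Step 14: y * p0 * p1 = 1 (mod p2), i.e. y is the modular inverse of p0*p1 mod p2
(define (egcd a b)                     ; returns (g x y) with a*x + b*y = g
  (if (zero? b)
      (list a 1 0)
      (let* ((r (egcd b (modulo a b)))
             (g (car r)) (x (cadr r)) (y (caddr r)))
        (list g y (- x (* (quotient a b) y))))))

(define (mod-inverse a m)              ; assumes gcd(a, m) = 1
  (modulo (cadr (egcd a m)) m))

;; y in Step 14 is then (mod-inverse (* p0 p1) p2)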
Section 4.7.2 of the MIT/GNU Scheme Reference Manual states that
The IEEE floating-point number specification supports three special ‘numbers’: positive infinity (+inf), negative infinity (-inf), and not-a-number (NaN).
These constants, in addition to being well-defined IEEE floating-point values, are also useful for range arithmetic. However, I’m unable to use them in my programs:
1 ]=> +inf
;Unbound variable: +inf
Generating these values isn’t easy, either: expressions which seem like they ought to evaluate to floating-point infinities simply don’t:
1 ]=> (flo:/ 1. 0.)
;Floating-point division by zero
How can I input or generate infinite floating-point constants in MIT Scheme?
tests/runtime/test-arith.scm suggests using flo:with-exceptions-untrapped:
;;; XXX The nonsense about IDENTITY-PROCEDURE here serves to fake
;;; out bogus constant-folding which needs to be fixed in SF (and
;;; probably LIAR too).
(define (zero)
  (identity-procedure 0.))

(define (nan)
  (flo:with-exceptions-untrapped (flo:exception:invalid-operation)
    (lambda ()
      (flo:/ (zero) (zero)))))

(define (inf+)
  (flo:with-exceptions-untrapped (flo:exception:divide-by-zero)
    (lambda ()
      (flo:/ +1. (zero)))))

(define (inf-)
  (flo:with-exceptions-untrapped (flo:exception:divide-by-zero)
    (lambda ()
      (flo:/ -1. (zero)))))
The results display as #[NaN], #[+inf], #[-inf] but cannot be input that way.
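A rough usage sketch (the exact ;Value annotations vary between MIT Scheme versions):
1 ]=> (define infinity (inf+))
;Value: infinity

1 ]=> (< 1e308 infinity)
;Value: #t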
Given an n-digit integer and an m-digit integer, how can I multiply them in Lisp using lists, arrays, or any other Lisp-specific data types?
For instance:
a(1)a(2)...a(n)
b(1)b(2)...b(m)
with the result:
r(1)r(2)...r(m+n)
Common Lisp already has bignums natively. Why don't you use them?
You basically don't have to declare anything specially, they "magically" happen:
% sbcl
This is SBCL 1.0.56.0.debian, an implementation of ANSI Common Lisp.
* (defun fact (n) (if (< n 1) 1 (* n (fact (- n 1)))))
FACT
* (fact 50)
30414093201713378043612608166064768844377641568960512000000000000
So with a Common Lisp, you basically don't have to bother...
addenda
Efficient bignum algorithms are a very difficult subject: the efficient algorithms have better complexity than the naive ones, and you can find difficult books explaining them (the underlying math is pretty hard). See also this answer.
If you want to make a competitive bignum implementation, be prepared to work hard for several years, and to make it a PhD thesis.
A simple algorithm to use is just mimicking what you do when computing multiplications by hand:
  123 x
  456 =
  ---
  738
 615
492
-----
56088
The first step is implementing multiplication by a single "digit" (e.g. 123 x 6 = 738).
After you have that, shifting is of course trivial (just slide elements in your list), and the multiplication can then be completed using your addition function.
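If you do want to roll it yourself, here is a rough sketch of that schoolbook method in Common Lisp, assuming base-10 digits stored least-significant-first in lists (add-digits, mul-digit and mul-digits are invented names, not a standard API):
;; digits are base 10, least significant first: 123 is represented as (3 2 1)

(defun add-digits (a b &optional (carry 0))
  "Add two digit lists."
  (if (and (null a) (null b))
      (if (zerop carry) '() (list carry))
      (let ((s (+ (if a (car a) 0) (if b (car b) 0) carry)))
        (cons (mod s 10)
              (add-digits (cdr a) (cdr b) (floor s 10))))))

(defun mul-digit (a d &optional (carry 0))
  "Multiply digit list A by a single digit D."
  (if (null a)
      (if (zerop carry) '() (list carry))
      (let ((p (+ (* (car a) d) carry)))
        (cons (mod p 10)
              (mul-digit (cdr a) d (floor p 10))))))

(defun mul-digits (a b)
  "Schoolbook multiplication: multiply A by each digit of B, shift, and add."
  (if (null b)
      '(0)
      (add-digits (mul-digit a (car b))
                  (cons 0 (mul-digits a (cdr b)))))) ; (cons 0 ...) is the shift

;; (mul-digits '(3 2 1) '(6 5 4)) => (8 8 0 6 5), i.e. 123 * 456 = 56088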
Note that this is not the fastest way to compute the product of two big numbers (see Karatsuba algorithm for example).
PS: thinking about how you compute the product of two large numbers by hand also explains some "amazing" results like 111111111 * 111111111 = 12345678987654321:
        111111111 x
        111111111 =
        ---------
        111111111
       111111111
      111111111
     111111111
    111111111
   111111111
  111111111
 111111111
111111111
-----------------
12345678987654321
Big ints are native to Common Lisp:
(* 1234567890123456789123456789 1234567890123456789123456789)