Solving for all valid inputs with an ATP - smt

Let's say you have a system of pure expressions, like,
(bi0, bi1, bi2, ai0, ai1, ai2) := inputs
b0 := bi0 && bi1
a1 := b0 ? ai0 : cbrt(ai0)
a2 := bi2 ? a1 : ai1
output := a2 > ai2
# prove:
output == True
Can an automated theorem prover be programmed, not just to find some inputs for which output is true, but to find all possible inputs for which it is true?

SMT solvers are quite good at solving these sorts of constraints and there are many to choose from. See https://smtlib.cs.uiowa.edu for an overview.
One implementation is Microsoft's z3, which can easily handle queries like the one you posed: https://github.com/Z3Prover/z3
SMT solvers can be driven either via the so-called SMTLib syntax (see the first link above) or through APIs in many languages, including C/C++/Java/Python/Haskell. Furthermore, most theorem provers these days (including Isabelle/HOL) come with tactics that use these solvers behind the scenes, so many integrations are possible.
As I mentioned, z3 can be programmed in various languages; a good paper to review is https://theory.stanford.edu/~nikolaj/programmingz3.html, which uses Python to do so.
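To get a feel for the Python API before the full all-solutions programs below, a single satisfying assignment can be obtained with one check/model call. Here is a minimal sketch; cbrt isn't built in over the reals, so it is modelled with a fresh variable constrained to cube back to ai0, the same trick used in the full solutions:
from z3 import *

bi0, bi1, bi2 = Bools('bi0 bi1 bi2')
ai0, ai1, ai2 = Reals('ai0 ai1 ai2')
cbrt_ai0 = Real('cbrt_ai0')                    # stands in for cbrt(ai0)

s = Solver()
s.add(cbrt_ai0 * cbrt_ai0 * cbrt_ai0 == ai0)   # defining equation for the cube root
b0 = And(bi0, bi1)
a1 = If(b0, ai0, cbrt_ai0)
a2 = If(bi2, a1, ai1)
s.add(a2 > ai2)

print(s.check())   # sat
print(s.model())   # one assignment making the output true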
A Solution using Z3 in Haskell
For your particular problem, here's how it'd be solved using z3 in Haskell, using the SBV layer (http://leventerkok.github.io/sbv/):
import Data.SBV

problem :: IO AllSatResult
problem = allSatWith z3{allSatPrintAlong = True} $ do
        [bi0, bi1, bi2] <- sBools ["bi0", "bi1", "bi2"]
        [ai0, ai1, ai2] <- sReals ["ai0", "ai1", "ai2"]

        -- cbrt isn't a built-in over the reals; introduce a fresh variable
        -- and constrain it to be the cube root of ai0.
        cbrt_ai0 <- sReal "cbrt_ai0"
        let cube x = x * x * x
        constrain $ cube cbrt_ai0 .== ai0

        let b0 = bi0 .&& bi1
            a1 = ite b0  ai0 cbrt_ai0
            a2 = ite bi2 a1  ai1

        pure $ a2 .> ai2
When I run this, I get:
*Main> problem
Solution #1:
bi0 = False :: Bool
bi1 = False :: Bool
bi2 = False :: Bool
ai0 = 0.125 :: Real
ai1 = -0.5 :: Real
ai2 = -2.0 :: Real
cbrt_ai0 = 0.5 :: Real
Solution #2:
bi0 = True :: Bool
bi1 = True :: Bool
bi2 = True :: Bool
ai0 = 0.0 :: Real
ai1 = 0.0 :: Real
ai2 = -1.0 :: Real
cbrt_ai0 = 0.0 :: Real
Solution #3:
bi0 = True :: Bool
bi1 = False :: Bool
bi2 = True :: Bool
ai0 = 0.0 :: Real
ai1 = 0.0 :: Real
ai2 = -1.0 :: Real
cbrt_ai0 = 0.0 :: Real
...[many more lines deleted...]
I interrupted the output since there seem to be a great many satisfying assignments (possibly infinitely many, judging by the structure of your problem).
A solution using Z3 in Python
Another popular choice is to use Python to program the same. The code is a bit more verbose than the Haskell version: there's some boilerplate to enumerate all solutions and a few other Python gotchas, but otherwise it follows a similar path:
from z3 import *

s = Solver()

bi0, bi1, bi2 = Bools('bi0 bi1 bi2')
ai0, ai1, ai2 = Reals('ai0 ai1 ai2')

# cbrt isn't built in over the reals, so introduce a variable whose cube is ai0.
cbrt_ai0 = Real('cbrt_ai0')

def cube(x):
    return x*x*x

s.add(cube(cbrt_ai0) == ai0)

b0 = And(bi0, bi1)
a1 = If(b0, ai0, cbrt_ai0)
a2 = If(bi2, a1, ai1)

s.add(a2 > ai2)

# Enumerate all satisfying assignments, distinguished by the given terms, by
# recursively blocking the current value of one term and fixing the earlier ones.
def all_smt(s, initial_terms):
    def block_term(s, m, t):
        s.add(t != m.eval(t, model_completion=True))
    def fix_term(s, m, t):
        s.add(t == m.eval(t, model_completion=True))
    def all_smt_rec(terms):
        if sat == s.check():
            m = s.model()
            yield m
            for i in range(len(terms)):
                s.push()
                block_term(s, m, terms[i])
                for j in range(i):
                    fix_term(s, m, terms[j])
                yield from all_smt_rec(terms[i:])
                s.pop()
    yield from all_smt_rec(list(initial_terms))

for model in all_smt(s, [ai0, ai1, ai2, bi0, bi1, bi2]):
    print(model)
Again, when run, this continuously spits out solutions.
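If you only care about which Boolean inputs can make the output true (the real-valued inputs typically admit infinitely many witnesses, which is why the enumeration above never stops), one option is to restrict the blocking to the Boolean terms. A minimal sketch, reusing the Solver s and the all_smt generator defined above:
# Block only on the Booleans: the enumeration then stops after at most
# 2^3 = 8 models, one per feasible Boolean assignment.
for model in all_smt(s, [bi0, bi1, bi2]):
    print([(b, model.eval(b, model_completion=True)) for b in (bi0, bi1, bi2)])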

Related

Explanation of class instance declarations

I am following a tutorial and found this code:
data A = B | C deriving (Eq)

class K a where
  f :: a -> Bool

instance K A where
  f x = x == C
  f _ = False

call = f B
Why do I need f _ = False? I get the same result without it.
The answer is simply: you don't need f _ = False here. In fact, if you compile with -Wall then the compiler will warn you that this clause is redundant, because the f x = ... clause already catches everything.
If the tutorial told you to have that extra clause, well, it's wrong.
As pointed out, it's not necessary.
You might need (or want) that line, though, if you had a slightly different definition, one that does not require an Eq instance:
data A = B | C

class K a where
  f :: a -> Bool

instance K A where
  f C = True
  f _ = False
Instead of comparing x to C, you can match the argument directly against C and define f to return False for all other values. This reads even better when there are more constructors that should map to False:
data A' = B | C | D

instance K A' where
  f C = True
  f _ = False   -- in place of f B = False and f D = False

Min-cost flow modification for arcs with fixed cost?

I have a min-cost flow network in which some arcs have a fixed charge, that is, if arc k has non-zero flow x_k, then the cost is c_k, independent of the amount of flow. A flow of 0 incurs 0 cost. These arcs do not have capacity constraints.
I know how to model this as a mixed integer program (MIP): Add a 0/1 variable y_k with cost c_k. Set the capacity on arc k to M * y_k, where M is larger than the sum of all supplies. So the fixed cost is incurred if and only if the arc has flow.
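Here is a sketch of what I mean, using OR-Tools' pywraplp MIP wrapper (the arc data and M below are toy values, and the flow-conservation constraints are omitted); it is only meant to make the big-M formulation concrete:
from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver('SCIP')
M = 100                                            # toy value; must exceed total supply

c_k = 7                                            # toy fixed charge on arc k
x_k = solver.NumVar(0, solver.infinity(), 'x_k')   # flow on arc k
y_k = solver.BoolVar('y_k')                        # 1 iff arc k carries flow

solver.Add(x_k <= M * y_k)                         # flow allowed only if the fixed charge is paid

# The fixed charge enters the objective via y_k; per-unit costs on ordinary
# arcs and the flow-conservation constraints would be added as usual.
solver.Minimize(c_k * y_k)

if solver.Solve() == pywraplp.Solver.OPTIMAL:
    print(x_k.solution_value(), y_k.solution_value())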
Can this be solved using a min-cost flow formulation, which would be more efficient than a general MIP implementation? Does OR-Tools (or any other package) have an extension to min-cost flow that accommodates this?
Cross-posted to the Google OR-Tools list.
I'm not sure that I fully understand you (most likely due to my ignorance), so you may well get a better response from the OR-Tools forum than here.
However, I think there may be a way of doing what you ask as a circuit, via AddCircuit().
Essentially, I believe one can minimise (or maximise) those arcs which are marked as having a cost.
Here is an example using the AddCircuit constraint, where one outgoing arc from each node has a fixed cost.
from ortools.sat.python import cp_model


class DiGraphSolver:

    def __init__(self, desc):
        self.model = cp_model.CpModel()
        self.status = cp_model.UNKNOWN
        self.timing = None

        # AddCircuit needs a numeric index for each node.
        # Here's two lazy key->index / index->key lookups.
        self.keys = {k: i for i, k in enumerate(desc.nodes.keys())}
        self.revs = {i: k for k, i in self.keys.items()}

        # Determine the start and stop nodes.
        self.start = self.keys[desc.start]
        self.stop = self.keys[desc.stop]

        # Store the nodes dict in its indexed form.
        self.nodes = {self.keys[head]: [self.keys[t] for t in tails] for head, tails in desc.nodes.items()}
        self.heavies = [(self.keys[head], self.keys[tail]) for head, tail in desc.heavies.items()]

        self.arcs = []
        self.vars = []
        self.result = []
        self.heavy_arcs = []
        self.weight = 0

    def setup(self):
        self.arcs = [
            (head, tail, self.model.NewBoolVar(f'{head}:{tail}')) for head, tails in self.nodes.items() for tail in tails
        ]

        self.heavy_arcs = [arc[2] for arc in self.arcs if arc[:-1] in self.heavies]

        # vars is a list of all the arcs defined in the problem.
        self.vars = [arc[2] for arc in self.arcs]

        # Add self-loops for all *optional* nodes (because AddCircuit requires a Hamiltonian circuit).
        # For this example, that's everywhere except for 'start' and 'stop'.
        # We just use the keys of self.revs (the index values).
        loops = [(n, n, self.model.NewBoolVar(f'{n}:{n}')) for n in self.revs if n not in [self.start, self.stop]]
        self.arcs += loops

        # Connect the stop node to the start node with a dummy arc to complete the Hamiltonian circuit.
        # Because start and stop are not self-closing (non-optional), we don't need to set truth values.
        loop = (self.stop, self.start, self.model.NewBoolVar('loop'))
        self.arcs.append(loop)

        # Now add the circuit as a constraint.
        self.model.AddCircuit(self.arcs)

        # Now minimise the use of weighted arcs.
        self.model.Minimize(sum(self.heavy_arcs))  # look for the shortest network with the lightest weight.

    def solve(self) -> bool:
        cp_solver = cp_model.CpSolver()
        cp_solver.parameters.max_time_in_seconds = 1
        cp_solver.parameters.num_search_workers = 12
        self.status = cp_solver.Solve(self.model)
        return self.summarise(cp_solver)

    def summarise(self, cp_solver) -> bool:
        if self.status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
            self.store(cp_solver)
            return True
        else:
            if self.status == cp_model.INFEASIBLE:
                print(f"Challenge for {self.step_count} arc{'s ' if self.step_count > 1 else ' '}is infeasible after {cp_solver.WallTime()}s.")
            else:
                print("Solver ran out of time.")
            return False

    def store(self, cp_solver):
        self.timing = cp_solver.WallTime()
        used = [arc for arc in self.arcs if cp_solver.Value(arc[2])]
        arc = None, self.start
        while True:
            # Walk the used arcs from the start node until the dummy stop->start arc closes the loop.
            arc = next((link for link in used if link[0] == arc[1]), None)
            self.result.append(self.revs[arc[0]])
            if arc[1] == self.start:
                break
        self.weight = cp_solver.ObjectiveValue()
        self.step_count = len(self.result) - 1

    def show(self):
        print(f"{'-'.join(self.result)}")
        print(f'Cost: {self.weight}')


class RandomDigraph:
    """
    Define a problem.
    26 nodes, labelled 'a' ... 'z'.
    Start at 'a', stop at 'z'.
    Each node other than 'z' has 4 outgoing arcs (random, but never going back to 'a').
    """

    def __init__(self):
        from random import sample, randint
        names = 'abcdefghijklmnopqrstuvwxyz'
        arcs = 4
        self.steps = 1
        self.start = 'a'
        self.stop = 'z'
        but_first = set(names) ^ set(self.start)
        self.nodes = {v: sample(list(but_first - set(v)), arcs) for v in names}  # list() so sample works on a set in Python 3.11+
        self.heavies = {v: self.nodes[v][randint(0, arcs - 1)] for v in names if v != self.stop}
        self.nodes[self.stop] = []

    def print_nodes(self):
        for key, value in self.nodes.items():
            vs = [f" {v} " if v != self.heavies[key] else f"*{v}*" for v in value]
            print(f'{key}: {"".join(vs)}')


def solve_with_steps(problem) -> int:
    solver = DiGraphSolver(problem)
    solver.setup()
    if solver.solve():
        solver.show()
        return solver.step_count


def solve_az_paths_of_a_random_digraph():
    problem = RandomDigraph()
    problem.print_nodes()
    print()
    solve_with_steps(problem)


if __name__ == '__main__':
    solve_az_paths_of_a_random_digraph()
Example run (solving a..z) gives
# network: (heavy arcs are marked by the tail in **.)
# eg. a->p is a heavy arc.
a: *p* d i l
b: *t* u e y
c: r v *m* q
d: q t *f* l
e: k *o* y i
f: i p z *u*
g: s h i *x*
h: *g* l j d
i: x f e *k*
j: *g* r e p
k: d *c* g q
l: r f j *h*
m: *i* b d r
n: t v y *b*
o: s x q *w*
p: w g *h* n
q: o r *f* p
r: f *c* i m
s: y c w *p*
t: *y* d v i
u: *h* z w n
v: *d* x f t
w: l c *s* r
x: *j* r g m
y: b j *u* c
z:
Solution:
a-i-e-k-g-h-j-p-w-c-q-o-s-y-b-u-n-t-v-x-r-m-d-l-f-z
Cost: 0.0

API for handling polymorphic records

This is a somewhat specific issue; it is not contrived, just simplified as much as possible.
-- a record whose fn handles both x and y;
-- x and y are supposed to be Functors, a is an arbitrary parameter for x/y, r is an arbitrary result parameter
type R0 a x y r =
  { fn :: x a -> y a -> r
  }

-- a record whose fn handles only x
type R1 a x r =
  { fn :: x a -> r
  }
What I want is a common API (function) that could handle values of R0 and R1 types.
So I do a sum type
data T a x y r
  = T0 (R0 a x y r)
  | T1 (R1 a x r)
And I declare this function; there is a constraint that x and y have to be Functors.
some :: ∀ a x y r.
  Functor x =>
  Functor y =>
  T a x y r -> a
some = unsafeCoerce -- just stub
Then try to use it.
data X a = X { x :: a }
data Y a = Y { y :: a }

-- make the X type a Functor
instance functorX :: Functor X where
  map fn (X val) = X { x: fn val.x }

-- make the Y type a Functor
instance functorY :: Functor Y where
  map fn (Y val) = Y { y: fn val.y }

-- declare functions
fn0 :: ∀ a. X a -> Y a -> Unit
fn0 = unsafeCoerce

fn1 :: ∀ a. X a -> Unit
fn1 = unsafeCoerce
Trying to apply some:
someRes0 = some $ T0 { fn: fn0 } -- works
someRes1 = some $ T1 { fn: fn1 } -- error, because Y cannot be inferred: it must be a Functor but does not appear in fn1's type
So the question is: is it possible to make such an API work in a sensible/ergonomic way (one that would not require additional type annotations from a user of this API)?
I could apparently implement separate functions some0 and some1 for handling the two cases, but I wonder whether the approach with a single function (which keeps the API surface simpler) is possible.
And what other suggestions would there be for implementing such requirements (a good API for handling polymorphic record types that differ in the way described above, where one of the records has extra type parameters)?
You should make T0 and T1 separate types and then overload the function some itself to work with both of them:
data T0 x y r a = T0 (R0 a x y r)
data T1 x r a = T1 (R1 a x r)

class Some t where
  some :: forall a. t a -> a

instance someT0 :: (Functor x, Functor y) => Some (T0 x y r) where
  some = unsafeCoerce

instance someT1 :: Functor x => Some (T1 x r) where
  some = unsafeCoerce
An alternative, though much less elegant, solution would be to have the caller of some explicitly specify the y type with a type signature. This is the default approach in situations when a type can't be inferred by the compiler:
someRes1 :: forall a. a
someRes1 = some (T1 { fn: fn1 } :: T a X Y Unit)
Note that I had to add a type signature for someRes1 in order to have the type variable a in scope. Otherwise I couldn't use it in the type signature T a X Y Unit.
Yet another way to specify y would be to introduce a dummy parameter of type FProxy:
some :: ∀ a x y r.
  Functor x =>
  Functor y =>
  FProxy y -> T a x y r -> a
some _ = unsafeCoerce

someRes0 = some FProxy $ T0 { fn: fn0 }
someRes1 = some (FProxy :: FProxy Maybe) $ T1 { fn: fn1 }
This way you don't have to spell out all parameters of T.
I provided the latter two solutions just for context, but I believe the first one is what you're looking for, based on your description of the problem mentioning "polymorphic methods". This is what type classes are for: they introduce ad-hoc polymorphism.
And speaking of "methods": based on this word, I'm guessing those fn functions are coming from some JavaScript library, right? If that's the case, I believe you're doing it wrong. It's bad practice to leak PureScript-land types into JS code. First of all, the JS code might accidentally corrupt them (e.g. by mutating them), and second, the PureScript compiler might change the internal representation of those types from version to version, which will break your bindings.
A better way is to always specify FFI bindings in terms of primitives (or in terms of types specifically intended for FFI interactions, such as the FnX family), and then have a layer of PureScript functions that transform PureScript-typed parameters to those primitives and pass them to the FFI functions.

Using 'convoy pattern' to obtain proof inside code of equality of pattern match

It is a standard example in beginners' textbooks on category theory to argue that a preorder gives rise to a category (where the hom-set hom(x,y) is a singleton or empty, depending on whether x <= y). When attempting to formalize this idea in Coq, it is natural to view an arrow as a triple (x,y,pxy) where x y:A (A being a type on which we have a preorder) and pxy is a proof that x <= y. So naturally, when attempting to define the composition of two arrows (x,y,pxy) and (y',z,pyz), we need to return Some arrow whenever y = y' (or None otherwise). This implies that we are able to test for equality within the function, and to compute a proof (the last field of our triple, which may rely on the fact that things are equal).
For the sake of this question, suppose I have:
Parameter eq_dec : forall {A:Type}, A -> A -> bool.
and:
Axiom eq_dec_correct : forall (A:Type) (x y:A),
  eq_dec x y = true -> x = y. (* don't care about equivalence here *)
and let us assume I am attempting something simpler than defining composition between arrows: writing a function which returns a proof that x = y whenever eq_dec x y is true.
Definition test {A:Type} (x y : A) : option (x = y) :=
  match eq_dec x y with
  | true  => Some (eq_dec_correct A x y _)
  | false => None
  end.
This doesn't work of course, but probably gives you the idea of what I am trying to achieve. Any suggestion is greatly appreciated.
EDIT: OK, it seems this is a case of the 'convoy pattern'. I have found this link, which suggested the following to me:
Definition test (A:Type) (x y:A) : option (x = y) :=
  match eq_dec x y as b return eq_dec x y = b -> option (x = y) with
  | true  => fun p => Some (eq_dec_correct A x y p)
  | false => fun _ => None
  end (eq_refl (eq_dec x y)).
This seems to be working. It is a bit magical and confusing, but I'll get my head round it. (The trick is that the return annotation makes each branch a function expecting a proof that eq_dec x y equals the matched constructor; that proof is then supplied by applying the whole match to eq_refl.)

Evaluation of effectual functions giving unexpected results

The title is fairly self-explanatory. Using this paper as a reference, I have been working out how to use effects properly in Idris. Creating effectual functions seems straightforward, but the actual evaluation of the effectual functions is giving me results I do not expect. For context, if I have the following program:
h1 : Eff Nat [SYSTEM, RND]
h1 = do srand !time
        pure (fromInteger !(rndInt 0 6))

h2 : Eff Nat [SYSTEM]
h2 = pure (fromInteger !time)

h3 : Eff Nat [RND]
h3 = do srand 123456789
        pure (fromInteger !(rndInt 0 6))

runH : IO ()
runH = do
  x <- run h1            -- <---- This one works fine
  --let x = runPure h1   -- <---- This one does not work << !!!
  y <- run h2            -- <---- This one works fine
  --let y = runPure h2   -- <---- This one does not work << !!!
  --z <- run h3          -- <---- This one works fine
  let z = runPure h3     -- <---- This one works fine << ???
  putStrLn $
    "test 1 : " ++ show x ++ "\n" ++
    "test 2 : " ++ show y ++ "\n" ++
    "test 3 : " ++ show z
I would expect to be able to get actual values out of h1, h2, and h3 with runPure, but the compiler will only let me do this for h3. Both h1 and h2 will only compile if they are evaluated with run. The only real differences I can see are that either a) functions with multiple effects cannot use runPure, or b) the SYSTEM effect cannot use runPure. Could anyone explain why the code above behaves the way it does?
The full error if I choose the line let y = runPure h2 over y <- run h2:
When checking right hand side of runH with expected type
        IO ()

When checking argument env to function Effects.runPure:
        Type mismatch between
                Env m [] (Type of [])
        and
                Env id [SYSTEM] (Expected type)

        Specifically:
                Type mismatch between
                        []
                and
                        [SYSTEM]