Bizarre Contract Violation in Judgment - racket

I have a judgment form with the following contract:
(define-judgment-form DynamicLam
  #:mode (down I I O O)
  #:contract (down Γ e Γ e)

  [----------------"Lambda"
   (down Γ_0 z_0 Γ_0 z_0)]

  ;; rest of the code ...
  )
When I run this:
(define empty (term ()))
(redex-match? DynamicLam Γ empty)
(redex-match? DynamicLam e lam1^*)
(redex-match? DynamicLam z lam1^*)
(judgment-holds (down empty lam1^* empty lam1^*))
I get back:
#t
#t
#t
. . down: judgment input values do not match its contract;
(unknown output values indicated by _)
contract: (down Γ e Γ e)
values: (down empty lam1^* _ _)
But this does not make sense, because I clearly used redex-match? above to test:
That empty matches Γ
That lam1^* matches e
And furthermore that lam1^* matches z.
What am I missing? Is there more to the meaning of #:contract than just matching Γ e Γ e?

I solved the problem by changing the #:mode to (down I I I I) instead of (down I I O O), and changing
(judgment-holds (down empty lam1^* empty lam1^*))
to
(judgment-holds (down ,empty ,lam1^* ,empty ,lam1^*))
The , change makes a lot of sense to me, but I still do not understand why the two outputs needed to be inputs. If someone can either edit this answer to explain that, or provide a comment or another answer explaining that subtlety, that would be fantastic.

How do I code algebraic cpos as an Isabelle locale

I am trying to prove the known fact that there is a standard way to build an algebraic_cpo from a partial_order. My problem is that I keep getting the error
Type unification failed: No type arity set :: partial_order
and I cannot understand what this means.
I think I have tracked my problem down to the definition of cpo. The definition works and I have proven various results for it, but the working interpretation of a partial_order fails with cpo.
theory LocaleProblem imports "HOL-Lattice.Bounds"
begin
definition directed:: "'a::partial_order set ⇒ bool" where
" directed a ≡
¬a={} ∧ ( ∀ a1 a2. a1∈a∧ a2∈a ⟶ (∃ ub . (is_Sup {a1,a2} ub))) "
class cpo = partial_order +
fixes bot :: "'a::partial_order" ("⊥")
assumes bottom: " ∀ x::'a. ⊥ ⊑ x "
assumes dlub: "directed (d::'a::partial_order set) ⟶ (∃ lub . is_Inf d lub) "
print_locale cpo
interpretation "idl" : partial_order
"(⊆)::( ('b set) ⇒ ('b set) ⇒ bool) "
by (unfold_locales , auto) (* works *)
interpretation "idl" : cpo
"(⊆)::( ('b set) ⇒ ('b set) ⇒ bool) "
"{}" (* gives error
Type unification failed: No type arity set :: partial_order
Failed to meet type constraint:
Term: (⊆) :: 'b set ⇒ 'b set ⇒ bool
Type: ??'a ⇒ ??'a ⇒ bool *)
Any help much appreciated. david
You offered two solutions.
Following the work of Hennessy in "Algebraic Theory of Processes", I am trying to prove (where I(A) are the ideals, which are sets) "if A is a partial_order then I(A) is an algebraic_cpo". I will then want to apply the result to a number of semantics, given as sets. Does your comment mean that the second solution is not a good idea?
Initially I had a proven lemma that started
lemma directed_ran: "directed (d::('a::partial_order×'b::partial_order) set) ⟹ directed (Range d)"
proof (unfold directed_def)
With the first solution, things started well:
context partial_order begin (* type 'a is now a partial_order *)
definition
is_Sup :: "'a set ⇒ 'a ⇒ bool" where
"is_Sup A sup = ((∀x ∈ A. x ⊑ sup) ∧ (∀z. (∀x ∈ A. x ⊑ z) ⟶ sup ⊑ z))"
definition directed:: "'a set ⇒ bool" where
" directed a ≡
¬a={} ∧ ( ∀ a1 a2. a1∈a∧ a2∈a ⟶ (∃ ub . (is_Sup {a1,a2} ub))) "
lemma directed_ran: "directed (d::('c::partial_order×'b::partial_order) set) ⟹ directed (Range d)"
proof - assume a:"directed d"
from a local.directed_def[of "d"] (* Fail with message below *)
show "directed (Range d)"
Alas, the previously working lemma now fails. I rewrote
proof (unfold local.directed_def)
so I explored and found that although the fact local.directed_def exists, it cannot be unified:
Failed to meet type constraint:
Term: d :: ('c × 'b) set
Type: 'a set
I changed the type successfully in the lemma statement but now can find no way to unfold the definition in the proof. Is there some way to do this?
The second solution again starts well:
instantiation set:: (type) partial_order
begin
definition setpoDef: "a⊑(b:: 'a set) = subset_eq a b"
instance proof
fix x::"'a set" show " x ⊑ x" by (auto simp: setpoDef)
fix x y z::"'a set" show "x ⊑ y ⟹ y ⊑ z ⟹ x ⊑ z" by(auto simp: setpoDef)
fix x y::"'a set" show "x ⊑ y ⟹ y ⊑ x ⟹ x = y" by (auto simp: setpoDef)
qed
end
but the next step fails:
instance proof
fix d show "directed d ⟶ (∃lub. is_Sup d (lub::'a set))"
proof assume "directed d " with directed_def[of "d"] show "∃lub. is_Sup d lub"
by (metis Sup_least Sup_upper is_SupI setpoDef)
qed
next
from class.cpo_axioms_def[of "bot"] show "∀x . ⊥ ⊑ x " (* Fails *)
qed
end
The first subgoal is proven, but show "∀x . ⊥ ⊑ x ", although cut and pasted from the subgoal in the output, does not match the subgoal. Normally at this point I need to add type constraints, but I cannot find any that work.
Do you know what is going wrong?
Do you know if I can force the output to reveal the type information in the subgoal?
The interpretation command acts on the locale that a type class definition implicitly declares. In particular, it does not register the type constructor as an instance of the type class. That's what the instantiation command is good for. Therefore, in the second interpretation, Isabelle complains about set not having been registered as an instance of partial_order.
Since directed only needs the ordering for one type instance (namely 'a), I recommend moving the definition of directed into the locale context of the partial_order type class and removing the sort constraint on 'a:
context partial_order begin
definition directed:: "'a set ⇒ bool" where
"directed a ≡ ¬a={} ∧ ( ∀ a1 a2. a1∈a∧ a2∈a ⟶ (∃ ub . (is_Sup {a1,a2} ub))) "
end
(This only works if is_Sup is also defined in the locale context. If not, I recommend replacing the is_Sup condition with a1 <= ub and a2 <= ub.)
Then, you don't need to constrain 'a in the cpo type class definition, either:
class cpo = partial_order +
fixes bot :: "'a" ("⊥")
assumes bottom: " ∀ x::'a. ⊥ ⊑ x "
assumes dlub: "directed (d::'a set) ⟶ (∃ lub . is_Inf d lub)"
And consequently, your interpretation should not fail due to sort constraints.
Alternatively, you could declare set as an instance of partial_order instead of interpreting the type class. The advantage is that you can then also use constants and theorems that need partial_order as a sort constraint, i.e., that have not been defined or proven inside the locale context of partial_order. The disadvantage is that you'd have to define the type class operation inside the instantiation block. So you can't just say that the subset relation should be used; this has to be a new constant. Depending on what you intend to do with the instantiation, this might not matter, or it might be very annoying.

Surprising implicit assumptions in intuitionistic definitions

I'm trying to make sense of something that surprised me. Consider the following two definitions.
Require Import List.
Variable A:Type.
Inductive NoDup : list A -> Prop :=
NoDup_nil : NoDup nil
| NoDup_cons : forall x l, ~ In x l -> NoDup l -> NoDup (x :: l).
Inductive Dup : list A -> Prop :=
Dup_hd : forall x l, In x l -> Dup (x :: l)
| Dup_tl : forall x l, Dup l -> Dup (x :: l).
My first intuition was that they say the same thing (but negated). However, @Arthur Azevedo De Amorim showed that they are not exactly equivalent (or see here). If ~ NoDup l -> Dup l, then it must be the case that forall (a b:A), ~ a <> b -> a = b. Thus, an extra assumption on the type A sneaks in if one uses ~ NoDup rather than Dup when stating one's proof goal.
I tried to spot where this extra assumption is introduced, to get a mental model of what happened, so I will see it myself next time. My current explanation is that
it is the ~ In x l argument to NoDup_cons that is responsible, because
~ In x l terms can only be created if one can prove that a certain x is different from the first element in the list, the second element in the list, etc.
So when I destruct a term of type NoDup (_::_) I get a term ~ In _ _ that can only have been created for a type A for which ~ a <> b -> a = b must hold.
Q: is that an ok 'informal' way to think about it, or is there a better way to understand it, so I don't fall into that trap again?
Also, I found that the Coq library contains NoDup and not Dup, so perhaps some lemmas are weaker than they need to be, because they were formulated using NoDup instead of Dup. However, they could be formulated with Dup because ~Dup l -> NoDup l.
I think the lesson to take out of this example is that you need to be more careful when thinking about negations in intuitionistic logic. In particular, your statement "they say the same thing (but negated)" makes sense in classical logic: it means either of the equivalent statements P <-> ~Q or ~P <-> Q. However, in intuitionistic logic these two statements are not equivalent, so you would have to be more specific about which of these two (if either) is actually true.
In this case, it is true that NoDup l is equivalent to ~ Dup l. What is not true in general is that Dup l is a normal proposition (recall that a proposition P is called normal if ~~P -> P, in which case it's easy to conclude that P <-> ~~P). Therefore, ~ NoDup l is equivalent to ~~ Dup l, which in general is a strictly weaker statement than Dup l.
One possible way to think about the difference between the two is: from a concrete proof of Dup l, it would be possible to extract a pair of indices such that the corresponding entries of l are equal (not literally as a function in Coq due to the restrictions on eliminating from Prop to Type, but you could definitely prove a lemma that there exists such a pair of indices). On the other hand, a concrete proof of ~ NoDup l simply gives a way to take a purported proof of NoDup l and derive a contradiction from it - from which you can't necessarily extract any particular pair of indices.
(I do agree it's somewhat odd that the standard library has only NoDup and not Dup.)
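To make the asymmetry concrete, here is the direction that does hold, written in Lean (a sketch for illustration; the thread's code is Coq, but the logic is the same):
theorem to_double_neg (P : Prop) : P → ¬¬P :=
  fun h hn => hn h  -- hn : ¬P, i.e. P → False, so hn h : False

-- The converse ¬¬P → P has no such term in general: constructing it
-- requires a classical axiom (in Lean, Classical.byContradiction).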

Why does Haskell's foldr NOT stackoverflow while the same Scala implementation does?

I am reading FP in Scala.
Exercise 3.10 says that foldRight overflows the stack.
As far as I know, however, foldr in Haskell does not.
http://www.haskell.org/haskellwiki/
-- if the list is empty, the result is the initial value z; else
-- apply f to the first element and the result of folding the rest
foldr f z [] = z
foldr f z (x:xs) = f x (foldr f z xs)
-- if the list is empty, the result is the initial value; else
-- we recurse immediately, making the new initial value the result
-- of combining the old initial value with the first element.
foldl f z [] = z
foldl f z (x:xs) = foldl f (f z x) xs
How is this different behaviour possible?
What is the difference between the two languages/compilers that causes this different behaviour?
Where does this difference come from? The platform? The language? The compiler?
Is it possible to write a stack-safe foldRight in Scala? If yes, how?
Haskell is lazy. The definition
foldr f z (x:xs) = f x (foldr f z xs)
tells us that the behaviour of foldr f z xs with a non-empty list xs is determined by the laziness of the combining function f.
In particular the call foldr f z (x:xs) allocates just one thunk on the heap, {foldr f z xs} (writing {...} for a thunk holding an expression ...), and calls f with two arguments - x and the thunk. What happens next, is f's responsibility.
In particular, if it's a lazy data constructor (like e.g. (:)), it will immediately be returned to the caller of the foldr call (with the constructor's two slots filled by (references to) the two values).
And if f does demand its value on the right, with minimal compiler optimizations no thunks should be created at all (or at most one - the current one), as the value of foldr f z xs is immediately needed and the usual stack-based evaluation can be used:
foldr f z [a,b,c,...,n] ==
a `f` (b `f` (c `f` (... (n `f` z) ...)))
So foldr can indeed cause a stack overflow, when used with a strict combining function on extremely long input lists. But if the combining function doesn't demand its value on the right right away, or only demands a part of it, the evaluation is suspended in a thunk, and the partial result created by f is returned immediately. The same goes for the arguments on the left, but they potentially already arrive as thunks in the input list.
Haskell is lazy. So foldr allocates on the heap, not the stack. Depending on the strictness of the argument function, it may allocate a single (small) result, or a large structure.
You're still losing space, compared to a strict, tail-recursive implementation, but it doesn't look as obvious, since you've traded stack for heap.
Note that the authors here are not referring to any foldRight definition in the Scala standard library, such as the one defined on List. They are referring to the definition of foldRight they gave above in section 3.4, sketched below.
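A rough sketch of that definition (an approximation; the book builds its own List type, here transliterated to the standard one):
def foldRight[A, B](as: List[A], z: B)(f: (A, B) => B): B =
  as match {
    case Nil => z
    // The recursive call sits inside f's argument, so every element
    // costs a stack frame before a single f is applied:
    case head :: tail => f(head, foldRight(tail, z)(f))
  }

// foldRight((1 to 1000000).toList, 0)(_ + _)  // StackOverflowError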
The Scala standard library defines foldRight in terms of foldLeft by reversing the list (which can be done in constant stack space) and then calling foldLeft with the arguments of the passed function reversed. This works for lists, but won't work for a structure which cannot be safely reversed, for example:
scala> Stream.continually(false)
res0: scala.collection.immutable.Stream[Boolean] = Stream(false, ?)
scala> res0.reverse
java.lang.OutOfMemoryError: GC overhead limit exceeded
Now let's think about what the result of this operation should be:
Stream.continually(false).foldRight(true)(_ && _)
The answer should be false, it doesn't matter how many false values are in the stream or if it is infinite, if we are going to combine them with a conjunction, the result will be false.
Haskell, of course, gets this with no problem:
Prelude> foldr (&&) True (repeat False)
False
And that is because of two important things: Haskell's foldr will traverse the stream from left to right, not right to left, and Haskell is lazy by default. The first item here, that foldr actually traverses the list from left to right, might surprise or confuse some people who think of a right fold as starting from the right, but the important feature of a right fold is not which end of the structure it starts on, but in which direction the associativity goes. So given a list [1,2,3,4] and an operator named op, a left fold is
(((1 op 2) op 3) op 4)
and a right fold is
(1 op (2 op (3 op 4)))
But the order of evaluation shouldn't matter. So what the authors have done here in chapter 3 is give you a fold which traverses the list from left to right. Because Scala is strict by default, we still will not be able to traverse our stream of infinite falses, but have some patience, they will get to that in chapter 5 :) I'll give you a sneak peek: let's look at the difference between foldRight as it is defined in the standard library and as it is defined in the Foldable typeclass in scalaz.
Here's the signature from the Scala standard library:
def foldRight[B](z: B)(op: (A, B) => B): B
Here's the definition from scalaz's Foldable:
def foldRight[B](z: => B)(f: (A, => B) => B): B
The difference is that the Bs are all lazy, and now we get to fold our infinite stream again, as long as we give a function which is sufficiently lazy in its second parameter:
scala> Foldable[Stream].foldRight(Stream.continually(false),true)(_ && _)
res0: Boolean = false
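If you don't want to reach for scalaz, the same idea can be sketched directly with a by-name parameter (lazyFoldRight is a hypothetical helper, not from the book or any library):
def lazyFoldRight[A, B](xs: Stream[A])(z: => B)(f: (A, => B) => B): B =
  if (xs.isEmpty) z
  // The recursive call is deferred into f's by-name argument,
  // so f decides whether the rest of the fold ever runs:
  else f(xs.head, lazyFoldRight(xs.tail)(z)(f))

// lazyFoldRight(Stream.continually(false))(true)(_ && _)  // => false, immediately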
One easy way to see this in Haskell is to use equational reasoning to trace lazy evaluation. Let's write the find function in terms of foldr:
-- Return the first element of the list that satisfies the predicate, or `Nothing`.
find :: (a -> Bool) -> [a] -> Maybe a
find p = foldr (step p) Nothing
  where step pred x next = if pred x then Just x else next
foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f z [] = z
foldr f z (x:xs) = f x (foldr f z xs)
In an eager language, if you wrote find with foldr it would traverse the whole list and use O(n) space. With lazy evaluation, it stops at the first element that satisfies the predicate, and uses only O(1) space (modulo garbage collection):
find odd [0..]
== foldr (step odd) Nothing [0..]
== step odd 0 (foldr (step odd) Nothing [1..])
== if odd 0 then Just 0 else (foldr (step odd) Nothing [1..])
== if False then Just 0 else (foldr (step odd) Nothing [1..])
== foldr (step odd) Nothing [1..]
== step odd 1 (foldr (step odd) Nothing [2..])
== if odd 1 then Just 1 else (foldr (step odd) Nothing [2..])
== if True then Just 1 else (foldr (step odd) Nothing [2..])
== Just 1
This evaluation stops after a finite number of steps, even though the list [0..] is infinite, so we know that we're not traversing the whole list. In addition, there is an upper bound on the size of the expression at each step, which translates into a constant upper bound on the memory required to evaluate it.
The key here is that the step function that we're folding with has this property: no matter what the values of x and next are, it will either:
Evaluate to Just x, without invoking the next thunk, or
Tail-call the next thunk (in effect, if not literally).
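The same short-circuiting behaviour can be recovered in Scala with the lazyFoldRight sketch from earlier (again a hypothetical helper, for illustration only):
def find[A](p: A => Boolean)(xs: Stream[A]): Option[A] =
  lazyFoldRight(xs)(Option.empty[A]) { (x, next) =>
    // `next` is by-name: the rest of the fold runs only if p(x) fails.
    if (p(x)) Some(x) else next
  }

// find((n: Int) => n % 2 == 1)(Stream.from(0))  // => Some(1), despite the infinite stream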

Coq: defining (A -> B -> C) -> (B -> A -> C) via Curry-Howard, using Sets

I've been staring this in the face for hours not understanding :(
I need to solve some definitions using Coq, and I am supposed to do it via the Curry-Howard isomorphism. I have read up and still have no clue what I am doing. I have looked at other examples and tried doing it those ways, and I always get errors.
For example, here I need to define this:
Variables A B C : Set.
Definition c01 : (A -> B -> C) -> (B -> A -> C) :=
this was my attempt:
fun g => fun p => g (snd p) (fst p).
end.
I also tried
fun f => fun b => fun a => f (b , a)
end.
Eventually it just says it was expecting a different type than what I gave, and sometimes it says things like: "expected to have type "?9 * ?10"."
Really struggling to grasp this after reading everything I could find.
Please could somebody explain :(
Well I guess you do not know how to read the types correctly.
c01's type is (A -> B -> C) -> (B -> A -> C). That means it is a function, which takes a function as argument and returns a function.
It takes "a function with two arguments" (I mean in the Haskell sense of "a function with two arguments", not in the Scala or Java sense), of type A and B, which returns a value of type C.
It must return a function with two arguments, of type B and A (the same types, but in the other order), which returns a value of type C.
So what must this function, c01, do?
It must take a function, and turn it into the same function with the order of its arguments reversed.
So:
fun f => fun b => fun a => f a b
Or equivalently (just adding some parentheses to make it clearer):
fun f => (fun b => fun a => f a b)
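For comparison, the same term transliterated into Scala (a sketch; the shape is language-independent, and c01 is just the usual flip combinator in curried form):
def c01[A, B, C](f: A => B => C): B => A => C =
  b => a => f(a)(b)  // take the arguments in the flipped order, apply f in the original order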

Difference in asymptotic time of two variants of flatten

I am going through the Scala by Example document and I am having trouble with exercise 9.4.2. Here is the text:
Exercise 9.4.2 Consider the problem of writing a function flatten, which takes a list of element lists as arguments. The result of flatten should be the concatenation of all element lists into a single list. Here is an implementation of this method in terms of :\.
def flatten[A](xs: List[List[A]]): List[A] =
(xs :\ (Nil: List[A])) {(x, xs) => x ::: xs}
Consider replacing the body of flatten by
((Nil: List[A]) /: xs) ((xs, x) => xs ::: x)
What would be the difference in asymptotic complexity between the two versions of flatten?
In fact flatten is predefined, together with a set of other useful functions, in an object called List in the standard Scala library. It can be accessed from a user program by calling List.flatten. Note that flatten is not a method of class List – it would not make sense there, since it applies only to lists of lists, not to all lists in general.
I do not see how the asymptotic times of these two function variants differ. I'm sure it's because I am missing something fundamental about the meaning of fold left and fold right.
Here is a pdf of the document I am describing:
http://www.scala-lang.org/docu/files/ScalaByExample.pdf
I am generally finding this an excellent introduction into Scala.
Look at the implementation of concatenation ::: (p.68).
Witness that it is linear (in the number of :: operations) in the size of its left argument (the list that ends up being the prefix of the result).
Assume (for the sake of the complexity analysis) that your list of lists contains n small lists, each of a fixed constant size k, with k < n. If you use foldLeft, you compute:
f (... (f (f a b1) b2) ...) bn
Where f is the concatenation. If you use foldRight:
f a1 (f a2 (... (f an b) ...))
With f again standing for concatenation in prefix notation. The second case is easy: you add k elements at the head each time, so you perform k*n cons operations.
For the first case (foldLeft), in the first concatenation the list (f a b1) is of size k. On the second round you add it to b2 to form (f (f a b1) b2), of size 2k, and so on. You perform k + 2k + 3k + ... + nk = k * sum_{i=1}^n i = k * n(n+1)/2 cons operations, which is quadratic in n.
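In modern method syntax (a sketch; :\ and /: are the old operator spellings of foldRight and foldLeft), the two variants and their costs look like this:
def flattenRight[A](xs: List[List[A]]): List[A] =
  // each ::: copies only x (the next small list): O(k) per step, O(k*n) overall
  xs.foldRight(Nil: List[A])((x, acc) => x ::: acc)

def flattenLeft[A](xs: List[List[A]]): List[A] =
  // each ::: copies the ever-growing accumulator: O(k*n^2) overall
  xs.foldLeft(Nil: List[A])((acc, x) => acc ::: x)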
(Follow-up question: is this the only parameter that should be taken into account when thinking about the efficiency of this function? Doesn't foldLeft have an advantage (not in asymptotic complexity) that foldRight doesn't?)