I am working through the examples in Software Foundations. In the induction section, one of the exercises involves creating a normalizing function and proving certain things about it.
I was able to figure out most of the parts, although there was one place where I got stuck on the proof.
Prove a match statement
presents the exercise I am referring to, and through that post I was able to figure out how to solve it.
The question now is: why do I get stuck in my proof if I don't factor out the nested match in the normalize function declaration? The part I get stuck on is proving the idempotence of normalize, i.e.
normalize (normalize n) = normalize n
Why does taking out the nested match and redeclaring it as a new function make things work?
(After proving an additional statement relating this new function and normalize, that is.)
Am I missing something fundamental about how proofs work in Coq? Or is there a way to complete this exercise with this declaration of normalize:
Fixpoint normalize (n : bin) : bin :=
  match n with
  | Z => Z
  | B1b n' => B1b (normalize n')
  | B0b n' => match normalize n' with
              | Z => Z
              | n'' => B0b n''
              end
  end.
as opposed to
Definition helper_norm (n : bin) : bin :=
  match n with
  | Z => Z
  | n'' => B0b n''
  end.
Fixpoint normalize (n : bin) : bin :=
  match n with
  | Z => Z
  | B1b n' => B1b (normalize n')
  | B0b n' => helper_norm (normalize n')
  end.
EDIT
For those going through the tutorials from Software Foundations: it may be useful to know how to simplify a hypothesis; as far as I know, the chapters prior to this exercise do not mention how to do so. Otherwise, to solve the exercise you will have to do the factoring out.
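For example, to simplify a hypothesis rather than the goal you can use simpl in (a tiny stand-alone demo, my own, not from the book; simpl in * simplifies everything at once):
Lemma simpl_in_demo : forall n : nat, 0 + n = 1 -> n = 1.
Proof.
  intros n H.
  simpl in H. (* H : 0 + n = 1 becomes H : n = 1 *)
  exact H.
Qed.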
There doesn't seem to be a need to do this here. You just need to destruct the right thing in the complicated case, and then everything falls out of there.
Inductive bin : Type :=
| Zero : bin
| B0b : bin -> bin
| B1b : bin -> bin.
Fixpoint normalize (n : bin) : bin :=
match n with
| Zero => Zero
| B0b n =>
match normalize n with
| Zero => Zero
| n => B0b n
end
| B1b n => B1b (normalize n)
end.
Theorem normalize_idem (n : bin) : normalize (normalize n) = normalize n.
Proof.
  induction n as [ | n rec | n rec]; simpl.
  - admit.
  - destruct (normalize n) as [ | n' | n']; simpl in *; admit.
  - admit.
Admitted.
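In case it helps, here is one way the admitted goals can be discharged (my own sketch, not necessarily the intended solution): it adds eqn:E to the destruct so that the value of normalize n is recorded, which lets us rewrite the induction hypothesis and simplify it with simpl in.
Theorem normalize_idem' (n : bin) : normalize (normalize n) = normalize n.
Proof.
  induction n as [ | n rec | n rec]; simpl.
  - (* Zero *) reflexivity.
  - (* B0b n: case on what normalize n evaluated to *)
    destruct (normalize n) as [ | n' | n'] eqn:E; simpl.
    + reflexivity.
    + rewrite E in rec. simpl in rec. rewrite rec. reflexivity.
    + rewrite E in rec. simpl in rec. rewrite rec. reflexivity.
  - (* B1b n *) rewrite rec. reflexivity.
Qed.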
I don't think it's possible for a definition to make anything possible that wasn't already possible. Any lemma whatsoever that you prove about helper_norm x is also true and just as provable for match x with | Z => Z | n => B0b n end, since these expressions are convertible. It might be cleaner or more understandable to separate the proof into parts (in this case, we'd pull some part of the destruct (normalize n) ... case out to a new lemma, where we can probably generalize normalize n to any n : bin), but I don't think you gain any power. Also, though the underlying proof language does not really distinguish definitions from their unfoldings, tactics directly interact with the syntax of expressions, so cleaner expressions mean that tactics are less likely to get confused (e.g. try rewrite ... at ...ing a dozen occurrences of a variable vs maybe just one when the offending term is stowed into a function). Maybe there's an example that I'm missing, but splitting definitions into smaller ones is not to make more proofs possible, but just to make it easier on our minds to understand what's going on.
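A quick way to check the convertibility claim (my own snippet, assuming the asker's bin with constructors Z, B0b, B1b and the helper_norm from the question are in scope):
Goal forall x : bin, helper_norm x = match x with Z => Z | n => B0b n end.
Proof. intros x. reflexivity. Qed. (* reflexivity succeeds because the two sides are convertible *)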
Related
There is this one exercise in "Software Foundations" that I've been trying to solve correctly for some time now, but I've hit a wall trying to write down the function being asked for. Here is the relevant part of the exercise:
Consider a different, more efficient representation of natural numbers
using a binary rather than unary system. That is, instead of saying
that each natural number is either zero or the successor of a natural
number, we can say that each binary number is either
zero,
twice a binary number, or
one more than twice a binary number.
(a) First, write an inductive definition of the type bin corresponding
to this description of binary numbers.
The naive definition doesn't quite work, because you end up being able to construct terms that add 1 to a number that already had 1 added to it, or that multiply zero by 2 for no good reason. In order to avoid those, I figured I would encode some kind of state transition into the constructors, but this is kind of tricky, so this was the best I could come up with:
Inductive tag : Type := z | nz | m. (* zero | nonzero | multiply by 2 *)
Inductive bin_nat : tag -> Type :=
(* zero *)
| Zero : bin_nat z
(* nonzero *)
| One : bin_nat nz
(* multiply by 2 -> nonzero *)
| PlusOne : bin_nat m -> bin_nat nz
(* nonzero | multiply by 2 -> multiply by 2 *)
| Multiply : forall {t : tag}, (t = m \/ t = nz) -> bin_nat t -> bin_nat m.
With the above representation I avoid the issues of terms that don't make sense but now I have to carry around proofs whenever I multiply by 2. I actually have no idea how to use these things in recursive functions though.
I know how to construct the proofs and the terms and they look like this
(* nonzero *)
Definition binr (t : tag) := or_intror (t = m) (eq_refl nz).
(* multiply by 2 *)
Definition binl (t : tag) := or_introl (t = nz) (eq_refl tag m).
(* some terms *)
Check Zero.
Check One.
Check (Multiply (binr _) One).
Check (Multiply (binl _) (Multiply (binr _) One)).
Check PlusOne (Multiply (binl _) (Multiply (binr _) One)).
I can also write down a "proof" of the theorem that I want to correspond to a function but I don't know how to actually convert it to a function. Here's the proof for the conversion function
Definition binary_to_nat : forall t : tag, bin_nat t -> nat.
Proof.
  intros.
  einduction H as [ | | b | t proof b ].
  { exact 0. } (* Zero *)
  { exact 1. } (* One *)
  { exact (S (IHb b)). } (* PlusOne *)
  { (* Multiply *)
    edestruct t.
    cut False.
    intros F.
    case F.
    case proof.
    intros F.
    inversion F.
    intros F.
    inversion F.
    exact (2 * (IHb b)).
    exact (2 * (IHb b)).
  }
Defined.
I know this term is the function I want because I can verify that I get the right answers when I compute with it
Section Examples.
Example a : binary_to_nat z Zero = 0.
Proof.
lazy.
trivial.
Qed.
Example b : binary_to_nat nz One = 1.
Proof.
lazy.
trivial.
Qed.
Example c : binary_to_nat m ((Multiply (binl _) (Multiply (binr _) One))) = 4.
Proof.
lazy.
trivial.
Qed.
End Examples.
So, finally, the question: is there an easy way to convert the above proof term to an actual function, or do I have to try to reverse engineer the proof term?
I like your idea of representing a valid binary number using states and an indexed inductive type. However, as the question indicates, it's possible to get away with just three constructors on the inductive type: zero, multiply by 2, and multiply by 2 and add one. That means that the only transition we need to avoid is multiplying zero by 2.
Inductive tag : Type := z | nz. (* zero | nonzero *)
Inductive bin_nat : tag -> Type :=
(* zero *)
| Zero : bin_nat z
(* multiply by 2 *)
| TimesTwo : bin_nat nz -> bin_nat nz
(* multiply by 2 and add one *)
| TimesTwoPlusOne : forall {t : tag}, bin_nat t -> bin_nat nz.
Then, for example,
Let One := TimesTwoPlusOne Zero. (* 1 *)
Let Two := TimesTwo One. (* 10 *)
Let Three := TimesTwoPlusOne One. (* 11 *)
Let Four := TimesTwo Two. (* 100 *)
So TimesTwo adds a zero to the end of the binary representation and TimesTwoPlusOne adds a one.
If you want to try this yourself, don't look any further.
(I would have put this in spoiler tags, but apparently code blocks in spoiler tags are glitchy)
Incrementing a binary number.
Fixpoint bin_incr {t: tag} (n: bin_nat t): bin_nat nz :=
match n with
| Zero => One
| TimesTwo n' => TimesTwoPlusOne n'
| @TimesTwoPlusOne _ n' => TimesTwo (bin_incr n')
end.
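A quick sanity check of bin_incr (my own example, using the Let definitions above): incrementing Three should give Four.
Compute bin_incr Three.
(* = TimesTwo (TimesTwo (TimesTwoPlusOne Zero)) : bin_nat nz, i.e. Four (binary 100) *)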
A helper definition for converting nat to binary.
Definition nat_tag (n: nat): tag :=
match n with
| 0 => z
| S _ => nz
end.
Converting nat to binary.
Fixpoint nat_to_bin (n: nat): bin_nat (nat_tag n) :=
match n with
| 0 => Zero
| S n' => bin_incr (nat_to_bin n')
end.
Converting binary to nat. Note that this uses the notations for multiplication and addition of natural numbers. If this doesn't work, you may not have the right scopes open.
Fixpoint bin_to_nat {t: tag} (n: bin_nat t): nat :=
match n with
| Zero => 0
| TimesTwo n' => 2 * (bin_to_nat n')
| @TimesTwoPlusOne _ n' => 1 + 2 * (bin_to_nat n')
end.
We get actual functions from these definitions (note that 20 is 10100 in binary).
Compute nat_to_bin 20.
= TimesTwo
(TimesTwo (TimesTwoPlusOne (TimesTwo (TimesTwoPlusOne Zero))))
: bin_nat (nat_tag 20)
Compute bin_to_nat (nat_to_bin 20).
= 20
: nat
One further technical note. I used this code on two versions of Coq (8.6 and 8.9+alpha) and one version demanded that I put the tag in explicitly when matching on TimesTwoPlusOne, while the other allowed it to stay implicit. The above code should work in either case.
In mathematics, we often proceed as follows: "Now let us consider two cases, the number k can be even or odd. For the even case, we can say exists k', 2k' = k..."
Which expands to the general idea of reasoning about an entire set of objects by splitting it into several disjoint subsets that can be used to reconstruct the original set.
How is this reasoning principle captured in Coq, considering we do not always have an assumption corresponding to one of the subsets we want to decompose into?
Consider the following example for demonstration:
forall n, Nat.Even n -> P n.
Here we can naturally do inversion on Nat.Even n to get n = 2*x (and an automatically-false eliminated assumption that n = 2*x + 1). However, suppose we have the following:
forall n, P n
How can I state: "let us consider even ns and odd ns". Do I need to first show that we have decidable forall n : nat, even n \/ odd n? That is, introduce a new (local or global) lemma listing all the required subsets? What are the best practices?
Indeed, to reason about a splitting of a class of objects in Coq you need to show an algorithm splitting them, unless you want to reason classically (there is nothing wrong with that).
IMO, a key point is getting such decidability hypotheses "for free". For instance, you could implement odd : nat -> bool as a boolean function, as is done in some libraries; then you get the splitting for free.
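For instance (my own sketch, using the standard library's boolean Nat.even), destructing the boolean immediately gives the disjunction the question asks about, with the defining equation in context in each branch:
Lemma even_or_not n : Nat.even n = true \/ Nat.even n = false.
Proof.
  destruct (Nat.even n) eqn:E; [now left | now right].
Qed.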
[edit]
You can use some slightly more convenient techniques for pattern matching, by encoding the pertinent cases as inductives:
Require Import PeanoNat Nat Bool.
CoInductive parity_spec (n : nat) : Type :=
| parity_spec_odd : odd n = true -> parity_spec n
| parity_spec_even: even n = true -> parity_spec n
.
Lemma parityP n : parity_spec n.
Proof.
case (even n) eqn:H; [now right|left].
now rewrite <- Nat.negb_even, H.
Qed.
Lemma test n : even n = true \/ odd n = true.
Proof. now case (parityP n); auto. Qed.
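In a proof of a goal of the shape forall n, P n (the situation from the question), the view is used like this (my own usage sketch, assuming the definitions above):
Lemma even_odd_cases (P : nat -> Prop)
  (Hodd : forall n, odd n = true -> P n)
  (Heven : forall n, even n = true -> P n) :
  forall n, P n.
Proof.
  intros n; case (parityP n); intros H.
  - now apply Hodd.  (* H : odd n = true *)
  - now apply Heven. (* H : even n = true *)
Qed.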
First of all, I'm new to proof theory and Coq, so I'd appreciate answers that are easy to understand.
I'm trying to build up definitions to eventually define prime numbers; I'm currently trying to define divisibility, and in my definition I've written the true cases.
Every nat is divisible by 1.
Every nat is divisible by itself.
And my inductive case (applicable when '(i > j)'):
Every nat 'i' is divisible by 'j' if '(i - j)' is divisible by 'j'.
Now, in some of my subsequent lemmas, I need the fact that everything not fulfilling these cases is false.
How would I go about encoding this in my definition?
I'm thinking of something like: when none of the above is applicable --> false.
In a sense, an else statement for definitions.
In constructive logic, which Coq is built upon, a proposition is only considered "true" when we have direct evidence, i.e. a proof. So one doesn't need such an "else" part, because anything that cannot be constructed is, in a sense, false. If none of the cases for your "is divisible by" relation are applicable, you'll be able to prove your statement by contradiction, i.e. derive False.
For example, if we have this definition of divisibility:
(* we assume 0 divides 0 *)
Inductive divides (m : nat) : nat -> Prop :=
| div_zero: divides m 0
| div_add: forall n, divides m n -> divides m (m + n).
Notation "( x | y )" := (divides x y) (at level 0).
Then we can prove the fact that 3 does not divide 5, using inversion, which handles the impossible cases:
Fact three_does_not_divide_five:
~(3 | 5).
Proof.
intro H. inversion H. inversion H2.
Qed.
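For contrast, a positive fact can be proved by just applying the constructors, after massaging 6 into the m + n shape that div_add expects (my own example, not from the answer):
Fact three_divides_six:
  (3 | 6).
Proof.
  change 6 with (3 + (3 + 0)). (* 6 = 3 + (3 + 0), convertible forms *)
  apply div_add. apply div_add. apply div_zero.
Qed.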
Note: we can check that our divides relation captures the notion of divisibility by introducing an alternative ("obvious") definition:
Definition divides' x y := exists z, y = z*x.
Notation "( x |' y )" := (divides' x y) (at level 0).
and proving their equivalence:
Theorem divides_iff_divides' (m n : nat) :
(m | n) <-> (m |' n).
Admitted. (* it's not hard *)
A different approach is to define divisibility in terms of division and remainder:
Define a divn : nat -> nat -> nat * nat operation that divides two numbers and returns the quotient and the remainder.
Then, divisibility is expressed as "the remainder is equal to 0". You'll need to work out some details, such as what happens with 0 (a small sketch follows below).
Then, a falsified divisibility hypothesis amounts to a false equality, which can usually be solved by congruence. You can manipulate the equality with the standard theory for the remainder.
This is the approach used in the math-comp library, see http://math-comp.github.io/math-comp/htmldoc/mathcomp.ssreflect.div.html
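A minimal sketch of that idea using the standard library's Nat.modulo instead of a hand-rolled divn (the name dividesb is mine, not from the answer; the 0 divisor case is handled explicitly, as the answer suggests you will need to):
Definition dividesb (m n : nat) : bool :=
  match m with
  | 0 => Nat.eqb n 0                     (* convention: 0 divides only 0 *)
  | S _ => Nat.eqb (Nat.modulo n m) 0    (* m <> 0: check the remainder *)
  end.

Fact three_divides_six_b : dividesb 3 6 = true.
Proof. reflexivity. Qed.

Fact three_does_not_divide_five_b : dividesb 3 5 = false.
Proof. reflexivity. Qed.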
I have come across this problem many times: I have a proof state in Coq that includes identical matches on both sides of an equality.
Is there a standard way to rewrite multiple matches into one?
E.g.
match expression_evaling_to_Z with
| Zarith.Z0 => something
| Zarith.Pos _ => something_else
| Zarith.Neg _ => something_else
end = yet_another_thing.
And if I destruct on expression_evaling_to_Z I get two identical goals. I would like to find a way to get only one of the goals.
A standard solution is to define "a view" of your datatype using a type family that will introduce the proper conditions and cases when destructed. For your particular case, you could do:
Require Import Coq.ZArith.ZArith.
Inductive zero_view_spec : Z -> Type :=
| Z_zero : zero_view_spec Z0
| Z_zeroN : forall z, z <> Z0 -> zero_view_spec z.
Lemma zero_viewP z : zero_view_spec z.
Proof. now destruct z; [constructor|constructor 2|constructor 2]. Qed.
Lemma U z : match z with
Z0 => 0
| Zpos _ | Zneg _ => 1
end = 0.
Proof.
destruct (zero_viewP z).
Abort.
This is a common idiom in some libraries like math-comp, which provides special support for instantiating the z argument of the type family.
You can write the match expression more succinctly:
match expression_evaling_to_Z with
| Z0 => something
| Zpos _ | Zneg _ => something_else
end = yet_another_thing.
But that will give you 3 subgoals when using destruct.
In this particular case we may use the fact that you actually need to distinguish the zero and non-zero cases, and it looks like a job for the Z.abs_nat : Z -> nat function.
Require Import Coq.ZArith.BinIntDef.
match Z.abs_nat (expression_evaling_to_Z) with
| O => something
| S _ => something_else
end = yet_another_thing.
This will get you only two subcases, but you need to destruct on Z.abs_nat (expression_evaling_to_Z) or introduce a new variable. If you choose the 1st variant, then you'll probably need destruct (...) eqn:Heq. to put the equation into context.
Basically this approach is about finding a new datatype (or defining one) and a suitable function to map from the old type to the new one.
If you don't mind typing you can use replace to replace the RHS with the LHS of your goal, which makes it trivial to solve, and then you just have to prove once that the rewrite is indeed ok.
Open Scope Z.
Lemma L a b :
match a + b with
Z0 => a + b
| Zpos _ => b + a
| Zneg _ => b + a
end = a + b.
replace (b+a) with (a+b). (* 1. replace the RHS with something trivially true *)
destruct (a+b); auto. (* 2. solve the branches in one fell swoop *)
apply Z.add_comm. (* 3. solve only once what is required for the two branches *)
Qed.
Perhaps you can use some Ltac-fu or other lemma to not have to type in the RHS manually too.
Let us assume that we want to prove the following (totally contrived) lemma.
Lemma lem : (forall n0 : nat, 0 <= n0 -> 0 <= S n0) -> forall n, le 0 n.
We want to apply nat_ind to prove it. Here is a possible proof:
Proof.
intros H n. apply nat_ind. constructor. exact H.
Qed.
But why not use H directly in the apply tactic, with something like apply (nat_ind _ _ H) or eapply (nat_ind _ _ H)? The first one fails, and the second one hides the remaining goal in an existential variable.
Is it possible, with apply or its derivatives, to skip hypotheses in order to specify the other arguments, while keeping the skipped ones as regular goals in the remainder of the proof?
If you do
intros. refine (nat_ind _ _ H _).
then you only have
0 <= 0
left. Is that useful in your case?
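Spelled out in full (my own completion of the suggestion above):
Lemma lem' : (forall n0 : nat, 0 <= n0 -> 0 <= S n0) -> forall n, le 0 n.
Proof.
  intros H n.
  refine (nat_ind _ _ H _).
  constructor. (* the only remaining goal: 0 <= 0 *)
Qed.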
Another approach (more universal than in my other answer) would be using the apply ... with ... construct, like this:
intros H n.
apply nat_ind with (2 := H).
Here, 2 is referring to the inductive step parameter of nat_ind (see the Coq v8.5 reference manual, 8.1.3):
In a bindings list of the form (ref_1 := term_1) ... (ref_n := term_n), ref is either an ident or a num. ... If ref_i is some number n, this number denotes the n-th non dependent premise of the term, as determined by the type of term.
This partial proof
intros H n.
apply nat_ind, H.
will give you 0 <= 0 as the only subgoal left.
This approach uses the apply tactic, but does not answer the question in its generality, since it will work only if you want to instantiate the last parameter (which is the case for the example in the question).
Here is quote from the Coq reference manual:
apply term_1 , ... , term_n
This is a shortcut for apply term_1 ; [ .. | ... ; [ .. | apply term_n ]... ], i.e. for the successive applications of term_(i+1) on the last subgoal generated by apply term_i, starting from the application of term_1.
Also, since it's just syntactic sugar, the solution may be considered cheating (and, I guess, abuse of the original intent of the Coq tactics developers) in the context of the question.