Proof with false hypothesis in Isabelle/HOL Isar

I am trying to prove a lemma which in a certain part has a false hypothesis. In Coq I used to write "congruence" and it would get rid of the goal. However, I am not sure how to proceed in Isabelle Isar. I am trying to prove a lemma about my le function:
primrec le :: "nat ⇒ nat ⇒ bool" where
  "le 0 n = True" |
  "le (Suc k) n = (case n of 0 ⇒ False | Suc j ⇒ le k j)"

lemma def_le: "le a b = True ⟷ (∃k. a + k = b)"
proof
  assume H: "le a b = True"
  show "∃k. a + k = b"
  proof (induct a)
    case 0
    show "∃k. 0 + k = b"
    proof -
      have "0 + b = b" by simp
      thus ?thesis by (rule exI)
    qed
    case Suc
    fix n :: nat
    assume HI: "∃k. n + k = b"
    show "∃k. (Suc n) + k = b"
    proof (induct b)
      case 0
      show "∃k. (Suc n) + k = 0"
      proof -
        have "le (Suc n) 0 = False" by simp
oops
Note that my le function is "less or equal". At this point of the proof I find I have the hypothesis H which states that le a b = True, or in this case that le (Suc n) 0 = True which is false. How can I solve this lemma?
Another little question: I would like to write have "le (Suc n) 0 = False" by (simp only: le.simps), but this does not work. It seems I need to add some rule for reducing case expressions. What am I missing?
Thank you very much for your help.

The problem is not that it is hard to get rid of a False hypothesis in Isabelle. In fact, pretty much all of Isabelle's proof methods will instantly prove anything if there is False in the assumptions. No, the problem here is that at that point of the proof, you don't have the assumptions you need anymore, because you did not chain them into the induction. But first, allow me to make a few small remarks, and then give concrete suggestions to fix your proof.
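To illustrate that first point with your le function (a minimal sketch, not part of the original answer):
lemma "le (Suc n) 0 ⟹ (∃k. Suc n + k = 0)"
  by simp (* the assumption simplifies to False, and False proves any goal *)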
A Few Remarks
It is somewhat unidiomatic to write le a b = True or le a b = False in Isabelle. Just write le a b or ¬le a b.
Writing the definition in a convenient form is very important to get good automation. Your definition works, of course, but I suggest the following one, which may be more natural and will give you a convenient induction rule for free:
Using the function package:
fun le :: "nat ⇒ nat ⇒ bool" where
  "le 0 n = True"
| "le (Suc k) 0 = False"
| "le (Suc k) (Suc n) = le k n"
Existentials can sometimes hide important information, and they tend to mess with automation, since the automation never quite knows how to instantiate them.
If you prove the following lemma, the proof is fully automatic:
lemma def_le': "le a b ⟷ a + (b - a) = b"
by (induction a arbitrary: b) (simp_all split: nat.split)
Using my function definition, it is:
lemma def_le': "le a b ⟷ (a + (b - a) = b)"
by (induction a b rule: le.induct) simp_all
Your lemma then follows from that trivially:
lemma def_le: "le a b ⟷ (∃k. a + k = b)"
using def_le' by auto
This works because the existential is what makes the search space explode; giving the automation something concrete to follow helps a lot.
The actual answer
There are a number of problems. First of all, you will probably need to do induct a arbitrary: b, since the b will change during your induction (for le (Suc a) b, you will have to do a case analysis on b, and then in the case b = Suc b' you will go from le (Suc a) (Suc b') to le a b').
Second, at the very top, you have assume "le a b = True", but you do not chain this fact into the induction. If you do induction in Isabelle, you have to chain all required assumptions containing the induction variables into the induction command, or they will not be available in the induction proof. The assumption in question talks about a and b, but if you do induction over a, you will have to reason about some arbitrary variable a' that has nothing to do with a. So write, e.g.:
assume H:"le a b = True"
thus "∃k. a + k = b"
(and the same for the second induction over b)
Third, when you have several cases in Isar (e.g. during an induction or case analysis), you have to separate them with next if they have different assumptions. The next essentially throws away all the fixed variables and local assumptions. With the changes I mentioned before, you will need a next before the case Suc, or Isabelle will complain.
Fourth, the case command in Isar can fix variables. In your Suc case, the induction variable a is fixed; with the change to arbitrary: b, an a and a b are fixed. You should give explicit names to these variables; otherwise, Isabelle will invent them and you will have to hope that the ones it comes up with are the same as those that you use. That is not good style. So write e.g. case (Suc a b). Note that you do not have to fix variables or assume things when using case. The case command takes care of that for you and stores the local assumptions in a theorem collection with the same name as the case, e.g. Suc here. They are categorised as Suc.prems, Suc.IH, Suc.hyps. Also, the proof obligation for the current case is stored in ?case (not ?thesis!).
Conclusion
With that (and a little bit of cleanup), your proof looks like this:
lemma def_le: "le a b ⟷ (∃k. a + k = b)"
proof
  assume "le a b"
  thus "∃k. a + k = b"
  proof (induct a arbitrary: b)
    case 0
    show "∃k. 0 + k = b" by simp
  next
    case (Suc a b)
    thus ?case
    proof (induct b)
      case 0
      thus ?case by simp
    next
      case (Suc b)
      thus ?case by simp
    qed
  qed
next
It can be condensed to
lemma def_le: "le a b ⟷ (∃k. a + k = b)"
proof
  assume "le a b"
  thus "∃k. a + k = b"
  proof (induct a arbitrary: b)
    case (Suc a b)
    thus ?case by (induct b) simp_all
  qed simp
next
But really, I would suggest that you simply prove a concrete result like le a b ⟷ a + (b - a) = b first and then prove the existential statement using that.

Manuel Eberl did the hard part, and I just respond to your question on how to try and control simp, etc.
Before continuing, I go off topic and clarify something said on another site. The word "a tip" was used to give credit to M.E., but it should have been "3 explanations provided over 2 answers". Emails on mailing lists can't be corrected without spamming the list.
Some short answers are these:
There is no guarantee of completely controlling simp, but the attributes del and only, shown below, will often control it to the extent that you desire. To see that it's not doing more than you want, you need to use traces; an example of traces is given below.
To get complete control of proof steps, you would use "controlled" simp, along with rule, drule, and erule, and other methods. Someone else would need to give an exhaustive list.
Almost anyone with the expertise to answer "what, in detail, are simp, auto, blast, etc. doing" will very rarely be willing to put in the work to answer the question. It can be plain, tedious work to investigate what simp is doing.
"Black box proofs" are always optional, as far as I can tell, if we want them to be and have the expertise to make them optional. Expertise to make them optional is generally a major limiting factor. With expertise, motivation becomes the limiting factor.
What's simp up to? It can't please everyone
If you watch, you'll see: people complain that there's too much automation in Isabelle, or they complain that there's too little.
There can never be too much automation, because with Isabelle/HOL, automation is mostly optional. The possibility of doing without automation is what makes proving potentially interesting; with no automation at all, proving is, in the grand scheme, nothing but pure tedium.
There are attributes only and del, which can be used to mostly control simp. Speaking only from experimenting with traces, even simp will call other proof methods, similar to how auto calls simp, blast, and others.
I think you cannot prevent simp from calling linear arithmetic methods. But linear arithmetic doesn't apply much of the time.
Get set up for traces, and even the blast trace
My answer here is generalized to also cover determining what auto is up to. One of the biggest methods that auto resorts to is blast.
You don't need the attribute_setups if you don't care about seeing when blast is used by auto, or called directly. Makarius Wenzel took the blast trace out, but then was nice enough to show the code on how to implement it.
Without the blast part, there is just the use of declare. In a proof, you can use using instead of declare. Take out what you don't want. Make sure you look at the new simp_trace_new info in the PIDE Simplifier Trace panel.
attribute_setup blast_trace = {*
Scan.lift
(Parse.$$$ "=" -- Args.$$$ "true" >> K true ||
Parse.$$$ "=" -- Args.$$$ "false" >> K false ||
Scan.succeed true) >>
(fn b => Thm.declaration_attribute (K (Config.put_generic Blast.trace b)))
*}
attribute_setup blast_stats = {*
Scan.lift
(Parse.$$$ "=" -- Args.$$$ "true" >> K true ||
Parse.$$$ "=" -- Args.$$$ "false" >> K false ||
Scan.succeed true) >>
(fn b => Thm.declaration_attribute (K (Config.put_generic Blast.stats b)))
*}
declare[[simp_trace_new mode=full]]
declare[[linarith_trace,rule_trace,blast_trace,blast_stats]]
Try and control simp, to your heart's content with only & del
I don't want to work hard, so I don't use the formula from your question. With simp, what you're looking for with only and the traces is that no rule was used that you weren't expecting.
Look at the simp trace to see what basic rewrites are done that will always be done, like basic rewrites for True and False. If you don't even want that, then you have to resort to methods like rule.
A starting point to see if you can completely shut down simp is apply(simp only:).
Here are a few examples. I would have to work harder to find an example to show when linear arithmetic is automatically being used:
lemma
"a = 0 --> a + b = (b::'a::comm_monoid_add)"
apply(simp only:) (*
ERROR: simp can't apply any magic whatsoever.
*)
oops
lemma
"a = 0 --> a + b = (b::'a::comm_monoid_add)"
apply(simp only: add_0) (*
ERROR: Still can't. Rule 'add_0' is used, but it can't be used first.
*)
oops
lemma
"a = 0 --> a + b = (b::'a::comm_monoid_add)"
apply(simp del: add_0) (*
A LITTLE MAGIC:
It applied at least one rule. See the simp trace. It tried to finish
the job automatically, but couldn't. It says "Trying to refute subgoal 1,
etc.".
Don't trust me about this, but it looks typical of blast. I was under
the impression that simp doesn't call blast.*)
oops
lemma
"a = 0 --> a + b = (b::'a::comm_monoid_add)"
by(simp) (*
This is your question. I don't want to step through the rules that simp
uses to prove it all.
*)

Related

What is difference between `destruct` and `case_eq` tactics in Coq?

I understand destruct: it breaks an inductive term into its constructors. I recently saw case_eq, and I couldn't understand what it does differently.
1 subgoals
n : nat
k : nat
m : M.t nat
H : match M.find (elt:=nat) n m with
| Some _ => true
| None => false
end = true
______________________________________(1/1)
cc n (M.add k k m) = true
In the above context, if I do destruct (M.find n m), it breaks H into the true and false cases, whereas case_eq (M.find n m) leaves H intact and adds a separate proposition M.find (elt:=nat) n m = Some v, which I can rewrite to get the same effect as destruct.
Can someone please explain the difference between the two tactics, and when each should be used?
The first basic tactic in the family of destruct and case_eq is called case. This tactic modifies only the conclusion. When you type case A and A has an inductive type T, the system replaces A in the goal's conclusion by instances of all the constructors of type T, adding universal quantifications for the arguments of these constructors, if needed. This creates as many goals as there are constructors in type T. The formula A disappears from the goal, and if there is any information about A in a hypothesis, the link between this information and all the new constructors that replace it in the conclusion gets lost. In spite of this, case is an important primitive tactic.
Losing the link between information in the hypotheses and instances of A in the conclusion is a big problem in practice, so developers came up with two solutions: case_eq and destruct.
Personally, when writing the Coq'Art book, I proposed that we write a simple tactic on top of case that keeps a link between A and the various constructor instances in the form of an equality. This is the tactic now called case_eq. It does the same thing as case but adds an extra implication in the goal, where the premise of the implication is an equality of the form A = ... and where ... is an instance of each constructor.
At about the same time, the tactic destruct was proposed. Instead of limiting the effect of replacement in the goal's conclusion, destruct replaces all instances of A appearing in the hypotheses with instances of constructors of type T. In a sense, this is cleaner because it avoids relying on the extra concept of equality, but it is still incomplete because the expression A may be a compound expression f B, and if B appears in the hypothesis but not f B the link between A and B will still be lost.
Illustration
Definition my_pred (n : nat) := match n with 0 => 0 | S p => p end.
Lemma example n : n <= 1 -> my_pred n <= 0.
Proof.
case_eq (my_pred n).
Gives the two goals
------------------
n <= 1 -> my_pred n = 0 -> 0 <= 0
and
------------------
forall p, my_pred n = S p -> n <= 1 -> S p <= 0
The extra equality is very useful here.
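For contrast, here is a small sketch (not from the original answer) of how plain case loses that link on the same lemma:
Lemma example' n : n <= 1 -> my_pred n <= 0.
Proof.
case (my_pred n).
(* First goal: n <= 1 -> 0 <= 0, which is provable. *)
(* Second goal: forall p, n <= 1 -> S p <= 0; the connection between p and
   my_pred n is lost, and this goal is unprovable. *)
Abort.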
In this question I suggested that the developer use case_eq (a == b) when (a == b) has type bool, because this type is inductive and not very informative (the constructors have no arguments). But when (a == b) has type {a = b}+{a <> b} (which is the case for the string_dec function), the constructors have arguments that are proofs of interesting properties, and the extra universal quantifications for the arguments of the constructors are enough to give the relevant information: in this case a = b in the first goal and a <> b in the second goal.
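For instance, a small sketch (assuming the standard String library): after case, the hypotheses a = b and a <> b come directly from the constructor arguments, so no extra equality is needed.
Require Import String.
Lemma dec_example (a b : string) : a = b \/ a <> b.
Proof.
case (string_dec a b); intro H; [now left | now right].
Qed.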

Adding complete disjunctive assumption in Coq

In mathematics, we often proceed as follows: "Now let us consider two cases: the number k can be even or odd. For the even case, we can say exists k', 2k' = k..."
This expands to the general idea of reasoning about an entire set of objects by disassembling it into several disjoint subsets that can be used to reconstruct the original set.
How is this reasoning principle captured in Coq, considering that we do not always have an assumption that places us in one of the subsets we want to decompose into?
Consider the follow example for demonstration:
forall n, Nat.Even n -> P n.
Here we can naturally do inversion on Nat.Even n to get n = 2*x (and an automatically-false eliminated assumption that n = 2*x + 1). However, suppose we have the following:
forall n, P n
How can I state "let us consider even ns and odd ns"? Do I need to first prove the decidability fact forall n : nat, even n \/ odd n? That is, introduce a new (local or global) lemma listing all the required subsets? What are the best practices?
Indeed, to reason about a splitting of a class of objects in Coq, you need to provide an algorithm splitting them, unless you want to reason classically (there is nothing wrong with that).
IMO, a key point is getting such decidability hypotheses "for free". For instance, you could implement odd : nat -> bool as a boolean function, as is done in some libraries; then you get the splitting for free.
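For example, here is a direct sketch of such a free split using the standard library's boolean even (essentially what the parityP lemma below packages up):
Require Import PeanoNat.
Goal forall n : nat, Nat.even n = true \/ Nat.odd n = true.
Proof.
intro n. destruct (Nat.even n) eqn:E.
- now left.
- right. now rewrite <- Nat.negb_even, E.
Qed.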
[edit]
You can use some slightly more convenient techniques for pattern matching, by encoding the pertinent cases as inductives:
Require Import PeanoNat Nat Bool.
CoInductive parity_spec (n : nat) : Type :=
| parity_spec_odd : odd n = true -> parity_spec n
| parity_spec_even: even n = true -> parity_spec n
.
Lemma parityP n : parity_spec n.
Proof.
case (even n) eqn:H; [now right|left].
now rewrite <- Nat.negb_even, H.
Qed.
Lemma test n : even n = true \/ odd n = true.
Proof. now case (parityP n); auto. Qed.

What's the difference between logical (Leibniz) equality and local definition in Coq?

I am having trouble understanding the difference between an equality and a local definition. For example, when reading the documentation about the set tactic:
remember term as ident
This behaves as set ( ident := term ) in * and
using a logical (Leibniz’s) equality instead of a local definition
Indeed,
1. set (ca := c + a) in *. generates e.g. ca := c + a : Z in the context, while
2. remember (c + a) as ca. generates Heqca : ca = c + a in the context.
In case 2. I can make use of the generated hypothesis like rewrite Heqca., while in case 1., I cannot use rewrite ca.
What's the purpose of case 1. and how is it different from case 2. in terms of practical usage?
Also, if the difference between the two is fundamental, why is remember described as a variant of set in the documentation (8.5pl1)?
You could think of set a := b + b in H as rewriting H to be:
(fun a => H[b+b/a]) (b+b)
or
let a := b + b in
H[b+b/a]
That is, it replaces all matched patterns b+b by a fresh variable a, which is then instantiated to the value of the pattern. In this regard, both H and the rewritten hypothesis remain equal by "conversion".
Indeed, remember is in a sense a variant of set; however, its implications are very different. In this case, remember will introduce a new proof of equality eq_refl : b + b = b + b, then it will abstract away the left part. This is convenient for having enough freedom in pattern matching, etc. This is remember in terms of more atomic tactics:
Lemma U b c : b + b = c + c.
Proof.
assert (b + b = b + b). reflexivity.
revert H.
generalize (b+b) at 1 3.
intros n H.
In addition to @ejgallego's answer.
Yes, you cannot rewrite a (local) definition, but you can unfold it:
set (ca := c + a) in *.
unfold ca.
As for the differences in their practical use -- they are quite different. For example, see this answer by @eponier. It relies on the remember tactic so that induction works as we'd like. But if we replace remember with set, it fails:
Inductive good : nat -> Prop :=
| g1 : good 1
| g3 : forall n, good n -> good (n * 3)
| g5 : forall n, good n -> good (n + 5).
Require Import Omega.
The variant with remember works:
Goal ~ good 0.
remember 0 as n.
intro contra. induction contra; try omega.
apply IHcontra; omega.
Qed.
and the variant with set doesn't (because we didn't introduce any free variables to work with):
Goal ~ good 0.
set (n := 0). intro contra.
induction contra; try omega.
Fail apply IHcontra; omega.
Abort.

`No more subgoals, but there are non-instantiated existential variables` in Coq proof language?

I was following the (incomplete) examples in chapter 11 of the Coq 8.5pl1 reference manual, about the mathematical/declarative proof language. In the example below for iterated equalities (~= and =~), I got a warning Insufficient Justification for rewriting 4 into 2+2, and eventually got an error saying:
No more subgoals, but there are non-instantiated existential
variables:
?Goal : [x : R H : x = 2 _eq0 : 4 = x * x
|- 2 + 2 = 4]
You can use Grab Existential Variables.
Example:
Goal forall x, x = 2 -> x + x = x * x.
Proof.
proof. Show.
let x:R.
assume H: (x = 2). Show.
have ( 4 = 4). Show.
~= (2*2). Show.
~= (x*x) by H. Show.
=~ (2+2). Show. (*Problem Here: Insufficient Justification*)
=~ H':(x + x) by H.
thus thesis by H'.
end proof.
Fail Qed.
I'm not familiar with the mathematical proof language in Coq and couldn't understand why this happens. Can someone help explain how to fix the error?
--EDIT--
@Vinz
I had these random imports before the example:
Require Import Reals.
Require Import Fourier.
Your proof would work for nat or Z, but it fails in case of R.
From the Coq Reference Manual (v8.5):
The purpose of a declarative proof language is to take the opposite approach where intermediate states are always given by the user, but the transitions of the system are automated as much as possible.
It looks like the automation fails for 4 = 2 + 2. I don't know what kind of automation the declarative proof engine uses, but, for instance, the auto tactic is unable to prove most simple equalities, like this one:
Open Scope R_scope.
Goal 2 + 2 = 4. auto. Fail Qed.
And as @ejgallego points out, we can prove 2 * 2 = 4 using auto only by chance:
Open Scope R_scope.
Goal 2 * 2 = 4. auto. Qed.
(* `reflexivity.` would do here *)
However, the field tactic works like a charm. So one approach would be to suggest that the declarative proof engine use the field tactic:
Require Import Coq.Reals.Reals.
Open Scope R_scope.
Unset Printing Notations. (* to better understand what we prove *)
Goal forall x, x = 2 -> x + x = x * x.
Proof.
proof.
let x : R.
assume H: (x = 2).
have (4 = 4).
~= (x*x) by H.
=~ (2+2) using field. (* we're using the `field` tactic here *)
=~ H':(x + x) by H.
thus thesis by H'.
end proof.
Qed.
The problem here is that Coq's standard reals are defined in an axiomatic way.
Thus, + : R -> R -> R and *, etc. are abstract operations, and will never compute. What does this mean? It means that Coq doesn't have a rule for what to do with +, contrary, for instance, to the nat case, where Coq knows that:
0 + n ~> n
S n + m ~> S (n + m)
Thus, the only way to manipulate + for the real numbers is to manually apply the corresponding axioms that characterize the operator; see:
https://coq.inria.fr/library/Coq.Reals.Rdefinitions.html
https://coq.inria.fr/library/Coq.Reals.Raxioms.html
This is what field, omega, etc. do. Even 0 + 1 = 1 is not provable by computation.
Anton's example 2 * 2 = 4 works by chance. Actually, Coq has to parse the numeral 4 to a suitable representation using the real axioms, and it turns out that 4 is parsed as Rmult (Rplus R1 R1) (Rplus R1 R1) (to be more efficient), which is the same as the left side of the previous equality.
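To see the lack of computation concretely, here is a small sketch using the standard Reals library:
Require Import Reals.
Open Scope R_scope.
Goal (0 + 1 = 1)%nat. (* nat addition computes, so reflexivity works *)
Proof. reflexivity. Qed.
Goal 0 + 1 = 1. (* Rplus never reduces, so reflexivity fails here... *)
Proof. Fail reflexivity. field. Qed. (* ...but field applies the axioms *)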

Working with Isabelle's code generator: Data refinement and higher order functions

This is a follow-up on Isabelle's Code generation: Abstraction lemmas for containers?:
I want to generate code for the_question in the following theory:
theory Scratch imports Main begin

typedef small = "{x::nat. x < 10}" morphisms to_nat small
  by (rule exI[where x = 0], simp)

code_datatype small

lemma [code abstype]: "small (to_nat x) = x" by (rule to_nat_inverse)

definition a_pred :: "small ⇒ bool"
  where "a_pred = undefined"

definition "smaller j = [small i . i <- [0 ..< to_nat j]]"

definition "the_question j = (∀i ∈ set (smaller j). a_pred i)"
The problem is that the equation for smaller is not suitable for code generation, as it mentions the abstraction function small.
Now, according to Andreas’ answer to my last question and the paper on data refinement, the next step is to introduce a type for lists of small numbers, and create a definition for smaller in that type:
typedef small_list = "{l. ∀x∈ set l. (x::nat) < 10}" by (rule exI[where x = "[]"], auto)
code_datatype Abs_small_list
lemma [code abstype]: "Abs_small_list (Rep_small_list x) = x" by (rule Rep_small_list_inverse)
definition "smaller' j = Abs_small_list [ i . i <- [0 ..< to_nat j]]"
lemma smaller'_code[code abstract]: "Rep_small_list (smaller' j) = [ i . i <- [0 ..< to_nat j]]"
unfolding smaller'_def
by (rule Abs_small_list_inverse, cases j, auto elim: less_trans simp add: small_inverse)
Now smaller' is executable. From what I understand I need to redefine operations on small list as operations on small_list:
definition "small_list_all P l = list_all P (map small (Rep_small_list l))"
lemma[code]: "the_question j = small_list_all a_pred (smaller' j)"
unfolding small_list_all_def the_question_def smaller'_code smaller_def Ball_set by simp
I can define a good looking code equation for the_question. But the definition of small_list_all is not suitable for code generation, as it mentions the abstraction morphism small. How do I make small_list_all executable?
(Note that I cannot unfold the code equation of a_pred, as the problem actually occurs in the code equation of the actually recursive a_pred. Also, I’d like to avoid hacks that involve re-checking the invariant at runtime.)
I don't have a good solution to the general problem, but here's an idea that will let you generate code for the_question in this particular case.
First, define a function predecessor :: "small ⇒ small" with an abstract code equation (possibly using lift_definition from λn::nat. n - 1).
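For instance, a sketch of how this could look (not from the original answer; it assumes the Lifting setup for small, as also used in the next answer):
setup_lifting type_definition_small
lift_definition predecessor :: "small ⇒ small" is "λn::nat. n - 1"
  by (auto intro: less_imp_diff_less) (* invariant: n < 10 ⟹ n - 1 < 10 *)
lemma [code abstract]: "to_nat (predecessor j) = to_nat j - 1"
  by (simp add: predecessor.rep_eq)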
Now you can prove a new code equation for smaller whose rhs uses if-then-else, predecessor and normal list operations:
lemma smaller_code [code]:
  "smaller j = (if to_nat j = 0 then []
     else let k = predecessor j in smaller k @ [k])"
(More efficient implementations are of course possible if you're willing to define an auxiliary function.)
Code generation should now work for smaller, since this code equation doesn't use function small.
The short answer is no, it does not work.
The long answer is that workarounds are often possible. One is shown by Brian in his answer. The general idea seems to be:
Separate any function that has the abstract type in a covariant position other than the final return value (i.e. higher-order functions, or functions returning containers of abstract values) into multiple helper functions, so that abstract values are only ever constructed as the single return value of one of the helper functions.
In Brian’s example, this function is predecessor. Or, as another simple example, assume a function
definition smallPrime :: "nat ⇒ small option"
where "smallPrime n = (if n ∈ {2,3,5,7} then Some (small n) else None)"
This definition is not a valid code equation, due to the occurrence of small. But the following derives one:
definition smallPrimeHelper :: "nat ⇒ small"
where "smallPrimeHelper n = (if n ∈ {2,3,5,7} then small n else small 0)"
lemma [code abstract]: "to_nat (smallPrimeHelper n) = (if n ∈ {2,3,5,7} then n else 0)"
by (auto simp add: smallPrimeHelper_def intro: small_inverse)
lemma [code_unfold]: "smallPrime n = (if n ∈ {2,3,5,7} then Some (smallPrimeHelper n) else None)"
unfolding smallPrime_def smallPrimeHelper_def by simp
If one wants to avoid the redundant calculation of the predicate (which might be more complex than just ∈ {2,3,5,7}), one can make the return type of the helper smarter by introducing an abstract view, i.e. a type that contains both the result of the computation and the information needed to construct the abstract type from it:
typedef smallPrime_view = "{(x::nat, b::bool). x < 10 ∧ b = (x ∈ {2,3,5,7})}"
by (rule exI[where x = "(2, True)"], auto)
setup_lifting type_definition_small
setup_lifting type_definition_smallPrime_view
For the view we have a function building it and accessors that take the result apart, with some lemmas about them:
lift_definition smallPrimeHelper' :: "nat ⇒ smallPrime_view"
is "λ n. if n ∈ {2,3,5,7} then (n, True) else (0, False)" by simp
lift_definition smallPrimeView_pred :: "smallPrime_view ⇒ bool"
is "λ spv :: (nat × bool) . snd spv" by auto
lift_definition smallPrimeView_small :: "smallPrime_view ⇒ small"
is "λ spv :: (nat × bool) . fst spv" by auto
lemma [simp]: "smallPrimeView_pred (smallPrimeHelper' n) ⟷ (n ∈ {2,3,5,7})"
by transfer simp
lemma [simp]: "n ∈ {2,3,5,7} ⟹ to_nat (smallPrimeView_small (smallPrimeHelper' n)) = n"
by transfer auto
lemma [simp]: "n ∈ {2,3,5,7} ⟹ smallPrimeView_small (smallPrimeHelper' n) = small n"
by (auto intro: iffD1[OF to_nat_inject] simp add: small_inverse)
With that we can derive a code equation that does the check only once:
lemma [code]: "smallPrime n =
(let spv = smallPrimeHelper' n in
(if smallPrimeView_pred spv
then Some (smallPrimeView_small spv)
else None))"
by (auto simp add: smallPrime_def Let_def)