How would de Morgan's law apply to ((A.B)`.B)` - boolean

I am at the start of my A-level computing course, and I have gotten stuck on De Morgan's law.
If we have the expression ((A.B)`.B)` (pronounced: Not(A And B), And B, all Not), how would De Morgan's law apply to it?
And can anyone explain to me how to handle the Not(A And B) part when the entire thing is negated?
Thanks in advance

Let's say ~, . and v represent the NOT, AND and OR operators respectively. The whole expression is ~(~(A.B).B). Applying De Morgan's law to the outermost NOT gives:
~(~(A.B).B) = ~~(A.B) v ~B
As for handling the ~(A.B) part: once De Morgan's law splits the outer NOT, it lands directly on the inner NOT, and the two cancel by double negation:
= (A.B) v ~B
If you want, you can simplify further by distributing the OR over the AND: (A.B) v ~B = (A v ~B).(B v ~B) = A v ~B.
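A quick brute-force check is a good habit with these simplifications. Here is a small Python sketch (my own addition, not part of the original answer) that compares the original expression with the simplified form for all four combinations of A and B:

from itertools import product

# ((A.B)`.B)`  is  not(not(A and B) and B)
# the simplified form is  A or not B
for A, B in product([False, True], repeat=2):
    original = not (not (A and B) and B)
    simplified = A or not B
    assert original == simplified
    print(A, B, original)

All four rows agree, confirming that the expression reduces to A v ~B.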

Related

Coq ssreflect sum of sums

I have been searching for a lemma in ssreflect that represents linearity of sums, so that I could transform
sum(a) + sum(b) = sum(c)
into
sum(a + b) = sum(c)
and then derive
a + b = c.
Which lemma would be suitable in this case?
The goal:
\big[Rplus/0]_(i <- fin_img (A:=U) (B:=R_eqType) X) (. . .) +
\big[Rplus/0]_(i <- fin_img (A:=U) (B:=R_eqType) X) (. . .) =
\sum_(u in U) X u * `p_ X u
I think you are looking for the big_split lemma. But it is hard to know without knowing what goal you're trying to prove in more detail...

A, B and C are three boolean values. Write an expression in terms of only !(not) and &&(and) which always gives the same value as A||B||C

I chanced upon this beautiful problem. Since I am new to Boolean expressions, it looks quite difficult.
I guess parentheses can be used.
If one of A, B, C is true, A||B||C must be true. It can presumably be done using AND and NOT, but how do we know which variable has which value?
I tried using truth tables, but three variables were too much.
Any ideas on how to solve it, or at least how to approach it faster?
Learn De Morgan's laws. They are a little piece of basic knowledge for a programmer.
They state, among other things, that not(X or Y) = (not X) and (not Y).
If you negate both sides and then apply the formula twice—first to ((A or B) or C), treating the (A or B) subexpression as X, and then to (A or B) itself—you'll get the desired result:
A || B || C =
(A || B) || C =
!(!(A || B) && !C) =
!((!A && !B) && !C) =
!(!A && !B && !C)
DeMorgan's Law (one of them, anyway), normally applied to two variables, states that:
A or B == not (not A and not B)
But this works equally well for three (or more) variables:
A or B or C == not (not A and not B and not C)
This becomes obvious when you realise that A or B or C is true if any of them is true; the only way to get false is if all of them are false.
And, only if they're all false will not A and not B and not C give true (hence not(that) will give false). For confirmation, here's the table, where you'll see that the A or B or C and not(notA and notB and notC) columns give the same values:
A B C | A or B or C | notA notB notC | not(notA and notB and notC)
------+-------------+----------------+----------------------------
f f f |      f      |  t    t    t   |             f
f f t |      t      |  t    t    f   |             t
f t f |      t      |  t    f    t   |             t
f t t |      t      |  t    f    f   |             t
t f f |      t      |  f    t    t   |             t
t f t |      t      |  f    t    f   |             t
t t f |      t      |  f    f    t   |             t
t t t |      t      |  f    f    f   |             t
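To confirm the table by machine as well, here is a tiny Python check (my own sketch, not part of the answer) that enumerates all eight rows and asserts that the two expressions agree:

from itertools import product

for A, B, C in product([False, True], repeat=3):
    assert (A or B or C) == (not (not A and not B and not C))
print("equal on all 8 rows")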

Idiomatic ways of selecting subterm to rewrite

Suppose we have a conclusion of the form a + b + c + d + e.
We also have a lemma: plus_assoc : forall n m p : nat, n + (m + p) = n + m + p.
What are idiomatic ways to arbitrarily "insert a pair of parentheses" into the term? That is, how can we easily choose where to rewrite if there's more than one available place?
What I generally end up doing is the following:
replace (a + b + c + d + e)
  with (a + b + c + (d + e))
  by now rewrite <- ?plus_assoc
And while this formulation does state exactly what I want to do,
it gets extremely long-winded for formulations more complicated than "a b c...".
rewrite <- lemma expects lemma to be an equality, that is, a term whose type is of the form something1 = something2. Like with most other tactics, you can also pass it a function that returns an equality, that is, a term whose type is of the form forall param1 … paramN, something1 = something2, in which case Coq will look for a place where it can apply the lemma to parameters to form a subterm of the goal. Coq's algorithm is deterministic, but letting it choose is not particularly useful except when performing repeated rewrites that eventually exhaust all possibilities. Here Coq happens to choose your desired goal with rewrite <- plus_assoc, but I assume that this was just an example and you're after a general technique.
You can get more control over where to perform the rewrite by supplying more parameters to the lemma, to get a more specific equality. For example, if you want to specify that (((a + b) + c) + d) + e should be turned into ((a + b) + c) + (d + e), i.e. that the associativity lemma should be applied to the parameters (a + b) + c, d and e, you can write
rewrite <- (plus_assoc ((a + b) + c) d e).
You don't need to supply all the parameters, just enough to pinpoint the place where you want to apply the lemma. For example, here, it's enough to specify d as the second argument. You can do this by leaving the third parameter out altogether and specifying the wildcard _ as the first parameter.
rewrite <- (plus_assoc _ d).
Occasionally there are identical subterms and you only want to rewrite one of them. In this case you can't use the rewrite family of tactics alone. One approach is to use replace with a bigger term in which you pick out what you want to change, or even assert to replace the whole goal. Another approach is to use the set tactic, which lets you give a name to a specific occurrence of a subterm; you can then rely on that name to identify specific subterms, and finally call subst to get rid of the name when you're done.
An alternative approach is to forget about which lemmas to apply, and just specify how you want to change the goal with something like assert or a plain replace … with …. Then let automated tactics such as congruence, omega, solve [firstorder], etc. find parameters that make the proof work. With this approach, you do have to write down big parts of the goal, but you save on specifying lemmas. Which approach works best depends on where you are in a big proof, and on what tends to be stable during development and what isn't.
IMO your best option is to use the ssreflect pattern selection language, available in Coq 8.7 or by installing math-comp in earlier versions. This language is documented in the manual: https://hal.inria.fr/inria-00258384
Example (for Coq 8.7):
(* Replace with From mathcomp Require ... in Coq < 8.7 *)
From Coq Require Import ssreflect ssrfun ssrbool.
Lemma addnC n m : m + n = n + m. Admitted.
Lemma addnA m n o : m + (n + o) = m + n + o. Admitted.
Lemma example m n o p : n + o + p + m = m + n + o + p.
Proof. by rewrite -[_ + _ + o]addnA -[m + _ + p]addnA [m + _]addnC.
Qed.
If you don't want to prove a helper lemma, then one of your choices is using Ltac to pattern match on the structure of the equality on your hands. This way you can bind arbitrary complex subexpressions to pattern variables:
Require Import Coq.Arith.Arith.
Goal forall a b c d e,
(a + 1 + 2) + b + c + d + e = (a + 1 + 2) + (b + c + d) + e -> True.
intros a b c d e H.
match type of H with ?a + ?b + ?c + ?d + ?e = _ =>
  replace (a + b + c + d + e)
    with (a + (b + c + d) + e)
    in H
    by now rewrite <- ?plus_assoc
end.
Abort.
In the above piece of code ?a stands for a + 1 + 2. This, of course, doesn't improve anything if you are dealing with simple variables, it helps only when you are dealing with complex nested expressions.
Also, if you need to rewrite things in the goal, then you can use something like this:
match goal with
| |- ?a + ?b + ?c + ?d + ?e = _ => <call your tactics here>
end.

Proof with false hypothesis in Isabelle/HOL Isar

I am trying to prove a lemma which in a certain part has a false hypothesis. In Coq I used to write "congruence" and it would get rid of the goal. However, I am not sure how to proceed in Isabelle Isar. I am trying to prove a lemma about my le function:
primrec le :: "nat ⇒ nat ⇒ bool" where
  "le 0 n = True" |
  "le (Suc k) n = (case n of 0 ⇒ False | Suc j ⇒ le k j)"

lemma def_le: "le a b = True ⟷ (∃k. a + k = b)"
proof
  assume H: "le a b = True"
  show "∃k. a + k = b"
  proof (induct a)
    case 0
    show "∃k. 0 + k = b"
    proof -
      have "0 + b = b" by simp
      thus ?thesis by (rule exI)
    qed
    case Suc
    fix n :: nat
    assume HI: "∃k. n + k = b"
    show "∃k. (Suc n) + k = b"
    proof (induct b)
      case 0
      show "∃k. (Suc n) + k = 0"
      proof -
        have "le (Suc n) 0 = False" by simp
oops
Note that my le function is "less or equal". At this point of the proof I find I have the hypothesis H, which states that le a b = True, or in this case that le (Suc n) 0 = True, which is false. How can I solve this lemma?
Another little question: I would like to write have "le (Suc n) 0 = False" by (simp only:le.simps) but this does not work. It seems I need to add some rule for reducing case expressions. What am I missing?
Thank you very much for your help.
The problem is not that it is hard to get rid of a False hypothesis in Isabelle. In fact, pretty much all of Isabelle's proof methods will instantly prove anything if there is False in the assumptions. No, the problem here is that at that point of the proof, you don't have the assumptions you need anymore, because you did not chain them into the induction. But first, allow me to make a few small remarks, and then give concrete suggestions to fix your proof.
A Few Remarks
It is somewhat unidiomatic to write le a b = True or le a b = False in Isabelle. Just write le a b or ¬le a b.
Writing the definition in a convenient form is very important to get good automation. Your definition works, of course, but I suggest the following one, which may be more natural and will give you a convenient induction rule for free:
Using the function package:
fun le :: "nat ⇒ nat ⇒ bool" where
"le 0 n = True"
| "le (Suc k) 0 = False"
| "le (Suc k) (Suc n) = le k n"
Existentials can sometimes hide important information, and they tend to mess with automation, since the automation never quite knows how to instantiate them.
If you prove the following lemma, the proof is fully automatic:
lemma def_le': "le a b ⟷ a + (b - a) = b"
by (induction a arbitrary: b) (simp_all split: nat.split)
Using my function definition, it is:
lemma def_le': "le a b ⟷ (a + (b - a) = b)"
by (induction a b rule: le.induct) simp_all
Your lemma then follows from that trivially:
lemma def_le: "le a b ⟷ (∃k. a + k = b)"
using def_le' by auto
This is because the existential makes the search space explode. Giving the automation something concrete to follow helps a lot.
The actual answer
There are a number of problems. First of all, you will probably need to do induct a arbitrary: b, since the b will change during your induction (for le (Suc a) b, you will have to do a case analysis on b, and then in the case b = Suc b' you will go from le (Suc a) (Suc b') to le a b').
Second, at the very top, you have assume "le a b = True", but you do not chain this fact into the induction. If you do induction in Isabelle, you have to chain all required assumptions containing the induction variables into the induction command, or they will not be available in the induction proof. The assumption in question talks about a and b, but if you do induction over a, you will have to reason about some arbitrary variable a' that has nothing to do with a. So do e.g:
assume H:"le a b = True"
thus "∃k. a + k = b"
(and the same for the second induction over b)
Third, when you have several cases in Isar (e.g. during an induction or case analysis), you have to separate them with next if they have different assumptions. The next essentially throws away all the fixed variables and local assumptions. With the changes I mentioned before, you will need a next before the case Suc, or Isabelle will complain.
Fourth, the case command in Isar can fix variables. In your Suc case, the induction variable a is fixed; with the change to arbitrary: b, an a and a b are fixed. You should give explicit names to these variables; otherwise, Isabelle will invent them and you will have to hope that the ones it comes up with are the same as those that you use. That is not good style. So write e.g. case (Suc a b). Note that you do not have to fix variables or assume things when using case. The case command takes care of that for you and stores the local assumptions in a theorem collection with the same name as the case, e.g. Suc here. They are categorised as Suc.prems, Suc.IH, Suc.hyps. Also, the proof obligation for the current case is stored in ?case (not ?thesis!).
Conclusion
With that (and a little bit of cleanup), your proof looks like this:
lemma def_le: "le a b ⟷ (∃k. a + k = b)"
proof
  assume "le a b"
  thus "∃k. a + k = b"
  proof (induct a arbitrary: b)
    case 0
    show "∃k. 0 + k = b" by simp
  next
    case (Suc a b)
    thus ?case
    proof (induct b)
      case 0
      thus ?case by simp
    next
      case (Suc b)
      thus ?case by simp
    qed
  qed
next
It can be condensed to
lemma def_le: "le a b ⟷ (∃k. a + k = b)"
proof
  assume "le a b"
  thus "∃k. a + k = b"
  proof (induct a arbitrary: b)
    case (Suc a b)
    thus ?case by (induct b) simp_all
  qed simp
next
But really, I would suggest that you simply prove a concrete result like le a b ⟷ a + (b - a) = b first and then prove the existential statement using that.
Manuel Eberl did the hard part, and I just respond to your question on how to try and control simp, etc.
Before continuing, I go off topic and clarify something said on another site. The word "a tip" was used to give credit to M.E., but it should have been "3 explanations provided over 2 answers". Emails on mailing lists can't be corrected without spamming the list.
Some short answers are these:
There is no guarantee of completely controlling simp, but the attributes del and only, shown below, will often control it to the extent that you desire. To see that it's not doing more than you want, you need to use traces; an example of traces is given below.
To get complete control of proof steps, you would use "controlled" simp along with rule, drule, erule, and other methods. Someone else would need to give an exhaustive list.
Almost anyone with the expertise to answer "what's the detailed proof of what simp, auto, blast, etc. do" will very rarely be willing to put in the work to answer the question. Investigating what simp is doing can be plain, tedious work.
"Black box proofs" are always optional, as far as I can tell, if we want them to be and have the expertise to make them optional. Expertise to make them optional is generally a major limiting factor. With expertise, motivation becomes the limiting factor.
What's simp up to? It can't please everyone
If you watch, you'll see. People complain there's too much automation, or they complain there's too little automation with Isabelle.
There can never be too much automation, but that's because with Isabelle/HOL, automation is mostly optional. The possibility of no automation is what makes proving potentially interesting, but with no automation at all, proving is nothing but pure tedium, in the grand scheme of things.
There are attributes only and del, which can be used to mostly control simp. Speaking only from experimenting with traces, even simp will call other proof methods, similar to how auto calls simp, blast, and others.
I think you cannot prevent simp from calling linear arithmetic methods. But linear arithmetic doesn't apply much of the time.
Get set up for traces, and even the blast trace
My answer here is generalized for also trying to determine what auto is up to. One of the biggest methods that auto resorts to is blast.
You don't need the attribute_setups if you don't care about seeing when blast is used by auto, or called directly. Makarius Wenzel took the blast trace out, but then was nice enough to show the code on how to implement it.
Without the blast part, there is just the use of declare. In a proof, you can use using instead of declare. Take out what you don't want. Make sure you look at the new simp_trace_new info in the PIDE Simplifier Trace panel.
attribute_setup blast_trace = {*
  Scan.lift
    (Parse.$$$ "=" -- Args.$$$ "true" >> K true ||
     Parse.$$$ "=" -- Args.$$$ "false" >> K false ||
     Scan.succeed true) >>
  (fn b => Thm.declaration_attribute (K (Config.put_generic Blast.trace b)))
*}

attribute_setup blast_stats = {*
  Scan.lift
    (Parse.$$$ "=" -- Args.$$$ "true" >> K true ||
     Parse.$$$ "=" -- Args.$$$ "false" >> K false ||
     Scan.succeed true) >>
  (fn b => Thm.declaration_attribute (K (Config.put_generic Blast.stats b)))
*}
declare[[simp_trace_new mode=full]]
declare[[linarith_trace,rule_trace,blast_trace,blast_stats]]
Try and control simp, to your heart's content with only & del
I don't want to work hard by using the formula in your question. With simp, what you're looking for with only and the traces is that no rule was used that you weren't expecting.
Look at the simp trace to see what basic rewrites are done that will always be done, like basic rewrites for True and False. If you don't even want that, then you have to resort to methods like rule.
A starting point to see if you can completely shut down simp is apply(simp only:).
Here are a few examples. I would have to work harder to find an example to show when linear arithmetic is automatically being used:
lemma "a = 0 --> a + b = (b::'a::comm_monoid_add)"
  apply(simp only:) (*
    ERROR: simp can't apply any magic whatsoever. *)
  oops

lemma "a = 0 --> a + b = (b::'a::comm_monoid_add)"
  apply(simp only: add_0) (*
    ERROR: Still can't. Rule 'add_0' is used, but it can't be used first. *)
  oops

lemma "a = 0 --> a + b = (b::'a::comm_monoid_add)"
  apply(simp del: add_0) (*
    A LITTLE MAGIC: It applied at least one rule. See the simp trace. It tried
    to finish the job automatically, but couldn't. It says "Trying to refute
    subgoal 1, etc.". Don't trust me about this, but it looks typical of blast.
    I was under the impression that simp doesn't call blast. *)
  oops

lemma "a = 0 --> a + b = (b::'a::comm_monoid_add)"
  by(simp) (*
    This is your question. I don't want to step through the rules that simp
    uses to prove it all. *)

Propositional formula for A, B, AC and BC

I am struggling to find a propositional formula for the following truth table: the formula should be true exactly in the rows A, B, AC and BC (that is, when A alone is true, B alone, A with C, or B with C).
For A and B it's easy: A xor B. However, when you insert a new literal C...
I tried using Wolfram by inputting the truth table as (A & ~B & ~C) || (~A & B & ~C) || (A & ~B & C) || (~A & B & C). However, the suggested minimal forms look wrong, since they do not consider C.
Can someone help me express this in propositional logic using logical connectives, like (A xor B) => C? Thanks!
You can perform minimization by means of Karnaugh maps (amongst other methods - this one is the simplest; you'll have to introduce a dummy variable D and just ignore it in the results).
The solutions are right about not considering C, though - it doesn't matter what C evaluates to, as long as A xor B evaluates to true. I just checked that, to remind myself how Karnaugh maps are constructed. Try drawing a full truth table yourself to see it.
Take a look at the expression:
(A & ~B & ~C) || (~A & B & ~C) ||
(A & ~B & C) || (~A & B & C)
The two lines are identical except for the negation of C; this means that C is irrelevant: the value of C doesn't change the output of the function.
This is also the conclusion one draws from a truth table:
|A|B|C||F|
+-+-+-++-+
|F|F|F||F|
|F|F|T||F|
|F|T|F||T|
|F|T|T||T|
|T|F|F||T|
|T|F|T||T|
|T|T|F||F|
|T|T|T||F|
Here F is the outcome of the expression. If you, for instance, take the first line, |F|F|F||F|, the result is false, the same as for |F|F|T||F| (with C flipped). By doing this for every (A,B) configuration, one sees that the value of C doesn't matter.
Therefore you can simply exclude C from the formula, resulting in:
(A & ~B) || (~A & B)
Which means A xor B.
Wolfram Alpha comes to the same conclusion (see ANF expression).
I have the answer
(A xor B) and (C => (A or B))
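A quick Python check (my own sketch, not part of the thread) confirms that this formula and the plain A xor B from the answer above both match the required truth table:

from itertools import product

# Rows that must be true: A alone, B alone, A with C, B with C.
target_rows = {(True, False, False), (False, True, False),
               (True, False, True), (False, True, True)}

for A, B, C in product([False, True], repeat=3):
    target = (A, B, C) in target_rows
    xor_only = A != B                              # A xor B
    with_c = (A != B) and (not C or A or B)        # (A xor B) and (C => (A or B))
    assert xor_only == target and with_c == target
print("both formulas match the required truth table")

The extra conjunct C => (A or B) is always true whenever A xor B holds (since A or B is then true), which is why the two formulas agree.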