Why can't brackets in a simplified Boolean expression be separated by an AND?

Consider this map of (A ∧ D) ∨ (B ∧ D) ∨ (A ∧ ¬B ∧ C ∧ D):
The map is grouped into two sections, both of four squares, producing the simplified expression (B ∧ D) ∨ (A ∧ D).
This follows the rule:
"Groups must contain 1, 2, 4, 8, or in general 2^n cells"
However, if I were to group in such a way that groups contain six cells (not following the 2^n rule), this would produce the simplified expression:
(A ∨ B) ∧ D
I have run this trial a few more times, including splitting Karnaugh maps where possible groups of eight are split into six and four. I have come to the conclusion that when grouping by six, or by any size not of the form 2^n, the Boolean operator between brackets in the expression is ∧ (AND), whereas with groups of 2^n the dividing operator is ∨ (OR).
Thus, as groups whose sizes are not 2^n produce AND divisions (between brackets), does this mean brackets in Boolean expressions cannot be separated by an AND?
And, by proxy, is this why Karnaugh maps must be grouped into groups of 2^n squares?
Note
Online tools also simplify exclusively with OR dividers.

(B ∧ D) ∨ (A ∧ D) is the correct "sum of products" expression for this Karnaugh map, and (A ∨ B) ∧ D is an equivalent expression (per the "OR distributive law"), but it is no longer in a "sum of products" form.
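As a quick sanity check (a sketch in Common Lisp; T/NIL stand in for 1/0), the two forms agree on every assignment, which is exactly the distributive step (B ∧ D) ∨ (A ∧ D) = (B ∨ A) ∧ D:

(loop for a in '(nil t) always
      (loop for b in '(nil t) always
            (loop for d in '(nil t) always
                  (eq (or (and b d) (and a d))   ; sum of products
                      (and (or a b) d)))))       ; factored form
;; => T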
What you did, instead of using the "OR distributive law", was this: rather than noting (from the top of the Karnaugh map) that the B value does not change over the first 2x2 block and that the A value does not change over the second 2x2 block, you noted that these two two-column blocks overlap, forming three columns defined by (A ∨ B).
That is fine, but it does not give the "sum of products" (groups of AND'd variables OR'd together), which is what the 2^n rule relates to. Instead, you happened to end up with the actual "product of sums" (groups of OR'd variables AND'd together).
The "formal", "graphical", "traditional", "easy", etc. way (which also has a 2^n rule, but for 0s instead of 1s) of getting to your final result is to instead of 1s, circle the 0s, noting again what variable values on the top and/or on the right side do not change, but this time not these values. In your example, the four 0s at the top and the four 0s at the bottom result in the D "sum" (note this "circle" spans from the top to the bottom forming a "logical" circle so-to-speak). The remaining two 0s are combined with the 0 above them and the 0 below them, resulting in the (A ∨ B) "sum" (the idea is to cover all 0s while also selecting the biggest 2^n blocks even if they overlap). (A ∨ B) ∧ D is the "product" of these two "sums". Check out:
Minterm vs Maxterm Solution.
The method is "perfect" (as long as the "circles" are as big as possible and nothing is missed). If the "circles" are not as big as possible (but nothing is missed), the result will still be logically correct, but it will use more gates than the minimum.
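That "cover the 0s" reading can be sanity-checked the same way (a Common Lisp sketch; by De Morgan, the 0s of (A ∨ B) ∧ D are exactly the cells lying in one of the two 0-"circles", for every value of C):

(loop for a in '(nil t) always
      (loop for b in '(nil t) always
            (loop for c in '(nil t) always
                  (loop for d in '(nil t) always
                        (eq (not (and (or a b) d))          ; the function is 0 here...
                            (or (not d)                     ; ...iff the D circle covers it
                                (and (not a) (not b)))))))) ; ...or the (A ∨ B) circle does
;; => T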

Related

Why is sum-type a disjunction in the Curry-Howard correspondence?

According to the Curry-Howard correspondence, the sum type (a.k.a. tagged union) is the equivalent of disjunction, logical OR.
Why is this the case? Is it not closer to XOR? (a or b) implies that it could be a or b or both, whereas Either a b must be a or b but never both.
The Curry-Howard correspondence says that an element of Either a b represents a proof of a or b. To prove a disjunction it is enough to prove (provide an element of) a or of b, which you do by constructing Either a b using either Left a or Right b. Note that nothing breaks when both a and b happen to be provable: a value of Either a b still witnesses the disjunction; it merely commits to exhibiting one side, so no "exclusive" reading is implied.
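The same shape can be sketched in Lisp (hypothetical LEFT/RIGHT/EITHER helpers; being untyped, this mirrors only the structure, not the typing discipline):

(defun left (a) (list :left a))     ; inject a proof of A
(defun right (b) (list :right b))   ; inject a proof of B
;; to consume a proof of "A or B" you must handle both cases,
;; even though any particular value commits to exactly one side:
(defun either (on-left on-right v)
  (ecase (first v)
    (:left (funcall on-left (second v)))
    (:right (funcall on-right (second v)))))
(either #'1+ #'string-upcase (left 41))   ;=> 42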

"NAND only" vs "AND and NOT only" difference?

When you simplify something down into these two, is there really any difference between the two?
For example:
( (B'C)' * (B'D')' )'
Is this NAND only? If so, can it be converted to AND and NOT only? Or vice versa? I'm confused about the difference between the two.
Your formula:
( (B'C)' * (B'D')' )'
Assume ' is negation and * and juxtaposition are used for conjunction; disjunction is not shown but would be denoted +. Let us rewrite the formula using not and and instead:
not (not (not B and C) and not (not B and not D))
Let us also indicate which nots go with which ands:
not (not (not B and C) and not (not B and not D))
^1   ^2         ^2     ^1  ^3         ^3
We can therefore eliminate three nots and replace the corresponding ands with three nands:
(not B nand C) nand (not B nand not D)
We see right away that the original formula was not expressed using only nands, since eliminating pairs of nots and ands via the definition of nand still left non-nand operators behind (the nots applied directly to B and D).
However, the original formula does consist only of and and not. Because any formula can be written using nands only, this one can. The lazy way is to just use not x = x nand x three times to remove each of the remaining nots.
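A truth-table check of that rewriting (a sketch in Common Lisp, defining NAND directly from its definition) confirms the original and/not formula and the nand-only version agree for all B, C, D:

(defun nand (x y) (not (and x y)))
(loop for b in '(nil t) always
      (loop for c in '(nil t) always
            (loop for d in '(nil t) always
                  (eq ;; original: ( (B'C)' * (B'D')' )'
                      (not (and (not (and (not b) c))
                                (not (and (not b) (not d)))))
                      ;; nand-only, using not x = x nand x for the leftover nots:
                      (nand (nand (nand b b) c)
                            (nand (nand b b) (nand d d)))))))
;; => T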

And operator Lisp

Why does the and operator return a value? What does the returned value depend on?
When I try the following example -
(write (and a b c d)) ; prints the greatest integer among a b c d
where a, b, c and d are positive integers, then and returns the greatest of them.
However when one of a, b, c and d is 0 or negative, then the smallest integer is returned. Why is this the case?
As stated in the documentation:
The macro and evaluates each form one at a time from left to right. As soon as any form evaluates to nil, and returns nil without evaluating the remaining forms. If all forms but the last evaluate to true values, and returns the results produced by evaluating the last form. If no forms are supplied, (and) returns t.
So the returned value doesn't depend on which value is the "greatest" or "smallest" of the arguments.
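For example (what looked like "the greatest" was simply the last argument; only nil is false in Lisp, so 0 and negative numbers count as true):

(and 1 2 3 4)   ;=> 4, the last form's value, not the greatest
(and 4 3 2 1)   ;=> 1
(and 1 nil 3)   ;=> NIL, stops at the first nil
(and 1 0 -5)    ;=> -5, since 0 is true in Lisp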
and can be regarded as a generalization of the two-place if. That is to say, (and a b) works exactly like (if a b). With and in the place of if we can add more conditions: (and a1 a2 a3 a4 ... aN b). If they all yield true, then b is returned. If we use if to express this, it is more verbose, because we still have to use and: (if (and a1 a2 a3 ... aN) b). and also generalizes in that it can be used with only one argument (if cannot), and even with no arguments at all (yields t in that case).
Mathematically, and forms a monoid: it is associative and has an identity element. That identity element is t, which is why (and) yields t. To describe the behavior of the N-argument and, we just need these rules:
(and)     -> yield t
(and x y) -> if x is true, evaluate and yield y;
             otherwise, yield nil
Now it turns out that this rule for a two-place and obeys the associative law: namely (and (and x y) z) behaves the same way as (and x (and y z)). The effect of the above rules is that no matter in which of these two ways we group the terms in this compound expression, x, y, and z are evaluated from left to right, and evaluation either stops at the first nil which it encounters and yields nil, or else it evaluates through to the end and yields the value of z.
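For instance, any way of nesting agrees with the flat form:

(and (and 1 2) 3)    ;=> 3
(and 1 (and 2 3))    ;=> 3
(and 1 2 3)          ;=> 3
(and 1 (and nil 3))  ;=> NIL, same as (and 1 nil 3)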
Thus because we have this nice associative property, the rational thing to do in a nice language like Lisp which isn't tied to infix operators is to recognize that since the associative grouping doesn't matter, let's just have the flat syntax (and x1 x2 x3 ... xN), with any number of arguments including zero. This syntax denotes any one of the possible associations of N-1 binary ands which all yield the same behavior.
In other words, let's not make the poor programmer write a nested (and (and (and ...) ...) ...) to express a four-term and, and just let them write (and ...) with four arguments.
Summary:
the zero-place and yields t, which has to do with t being the identity element for the and operation.
the two-place and yields the second value if the first one is true. This is a useful equivalence to the two-place if. Binary and could be defined as yielding t when both arguments are true, but that would be less useful. In Lisp, any value that is not nil is a Boolean true. If we replace a non-nil value with t, it remains Boolean true, but we have lost potentially useful information.
the behavior of the n-place and is a consequence of the associative property; or rather preserving the equivalence between the flat N-argument form and all the possible binary groupings which are already equivalent to each other thanks to the associative property.
One consequence of all this is that we can have an extended if, like (if cond1 cond2 cond3 cond4 ... condN then-form), where then-form is evaluated and yielded if all the conditions are true. We just have to spell if using the and symbol.
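For instance, a hypothetical guard chain: the then-form is only evaluated, and its value only yielded, when every condition before it is true:

(let ((x 4))
  (and (integerp x) (evenp x) (< x 10)
       (* x x)))   ;=> 16
(let ((x 5))
  (and (integerp x) (evenp x) (< x 10)
       (* x x)))   ;=> NIL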

Basic Pumping Lemma proof doesn't make sense

Proving that a^n b^n, n >= 0, is non-regular.
Using the string a^p b^p.
Every example I've seen claims that y can either contain a's, b's, or both. But I don't see how y can contain anything other than a's, because if y contains any b's, then the length of xy must be greater than p, which makes it invalid.
Conversely, for examples such as the language { www : w ∈ {a, b}* }, the string used is a^p b a^p b a^p b. In the proofs I've seen, it is claimed that y cannot contain anything other than a's, for the reason I stated above. Why is this different?
Also throwing in another question:
Describe the error in the following "proof" that 0*1* is not a regular language. (An error must exist because 0*1* is regular.) The proof is by contradiction. Assume that 0*1* is regular. Let p be the pumping length for 0*1* given by the pumping lemma. Choose s to be the string 0^p 1^p. You know that s is a member of 0*1*, but 0^p 1^p cannot be pumped. Thus you have a contradiction. So 0*1* is not regular.
I can't find any problem with this proof. I only know that 0*1* is a regular language because I can construct a DFA.
The pumping lemma states that for a regular language L there is a pumping length p such that every string s in L of length at least p has a subdivision s = xyz where:
For all i >= 0, x y^i z is in L;
|y| > 0; and
|xy| <= p.
Now the claim that y can only contain a's or only b's (not a mix of both) originates from the first item: if y contained both a's and b's, then pumping with i = 2 would produce a string of the form aa...ab...ba...ab...b, with a's appearing after b's, which is not in L. That's what the statement wants to say.
The third item indeed makes it obvious that y can only contain a's: since |xy| <= p and the first p characters of s are all a's, y falls entirely within the a's. In other words, what the textbooks say is a conclusion derived from the first item, which the third item then sharpens.
Finally, if you combine 1., 2., and 3., one reaches a contradiction: by 2. we know y must contain at least one character, and by 3. that it can only contain a's. Say y contains k a's (k >= 1). If we "pump" this with i = 2, the result is that we generate the string:
s' = x y^2 z = a^(p+k) b^p
We know, however, that s' is not part of L, while by 1. it should be, so we reach an inconsistency.
You can thus only make the proof work by combining the three items. It is not enough to know that y consists only of a's: on its own, that does not produce the contradiction. The contradiction arises because no subdivision is available that satisfies all three constraints simultaneously.
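To see the contradiction concretely, here is a small sketch (Common Lisp; ANBN-P and PUMP are hypothetical helper names) that pumps a split of a^p b^p satisfying all three constraints, with p = 4:

(defun anbn-p (s)   ; is s of the form a^n b^n?
  (let ((half (floor (length s) 2)))
    (and (evenp (length s))
         (every (lambda (c) (char= c #\a)) (subseq s 0 half))
         (every (lambda (c) (char= c #\b)) (subseq s half)))))

(defun pump (x y z i)   ; build the string x y^i z
  (with-output-to-string (out)
    (write-string x out)
    (loop repeat i do (write-string y out))
    (write-string z out)))

;; split a^4 b^4 as x = "aa", y = "aa", z = "bbbb", so |xy| <= p and |y| > 0:
(anbn-p (pump "aa" "aa" "bbbb" 1))   ;=> T, the original string a^4 b^4
(anbn-p (pump "aa" "aa" "bbbb" 2))   ;=> NIL, a^6 b^4 is not in L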
About your second question: in that case, L looks different. You can't reuse the proof for a^n b^n, because 0*1* is perfectly happy if the string contains more 0's. In other words, you can't find a contradiction: the step claiming that 0^p 1^p "cannot be pumped" is exactly where the "proof" fails. As long as y contains only one type of character - regardless of its length - pumping keeps the string inside the language, so all three constraints can be satisfied.
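By contrast, reusing PUMP from the sketch above on 0^p 1^p: a string over {0, 1} is in 0*1* exactly when no '1' is ever followed by a '0', and pumping a block of 0's can never create that pattern:

(defun zeros-then-ones-p (s)   ; in 0*1* iff "10" never occurs
  (not (search "10" s)))

;; split 0^4 1^4 as x = "00", y = "00", z = "1111":
(zeros-then-ones-p (pump "00" "00" "1111" 2))   ;=> T, 0^6 1^4 is still in 0*1*
(zeros-then-ones-p (pump "00" "00" "1111" 0))   ;=> T, and so is 0^2 1^4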

Formalizing time and space complexity requirements

∀ a b ∈ ℕ, b ≠ 0 → ∃ ! q r ∈ ℕ, a = q × b + r ∧ r < b is a standard example of the use of dependent types. How do I extend this type so that it also expresses time and space complexity requirements?
Nils Anders Danielsson uses a monad in Agda to track time complexity: sub-computations that are "relevant" to the complexity being studied are explicitly marked as such by making each of them take "one tick of time". These sub-computations are then combined monadically, with the total number of ticks tracked in the index of the monad type.
The details are described in his paper Lightweight Semiformal Time Complexity Analysis for Purely Functional Data Structures.
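The type-level part genuinely needs dependent types, but the underlying bookkeeping can be sketched at runtime (a rough Common Lisp analogue with hypothetical UNIT/TICK/BIND helpers; unlike Danielsson's monad, the tick count here is an ordinary value, not a type index):

(defun unit (x) (cons x 0))                    ; a result costing 0 ticks
(defun tick (m) (cons (car m) (1+ (cdr m))))   ; charge one tick to a computation
(defun bind (m f)                              ; sequence two computations, summing ticks
  (let ((m2 (funcall f (car m))))
    (cons (car m2) (+ (cdr m) (cdr m2)))))

;; e.g. a length function that charges one tick per cons cell visited,
;; so the tick count witnesses its linear running time:
(defun len (lst)
  (if (null lst)
      (unit 0)
      (tick (bind (len (cdr lst))
                  (lambda (n) (unit (1+ n)))))))
(len '(a b c))   ;=> (3 . 3), the value 3 computed in 3 ticks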