"NAND only" vs "AND and NOT only" difference? - boolean

When you simplify something down into these two forms, is there really any difference between them?
For example:
( (B'C)' * (B'D')' )'
Is this in NAND only? If so, can it be converted to AND and NOT only? Or vice versa? I'm confused about the difference between the two.

Your formula:
( (B'C)' * (B'D')' )'
Assume ' is negation and * and juxtaposition are used for conjunction; disjunction is not shown but would be denoted +. Let us rewrite the formula using not and and instead:
not (not (not B and C) and not (not B and not D))
Let us also indicate which nots go with which ands:
not₁ (not₂ (not B and₂ C) and₁ not₃ (not B and₃ not D))
Each notₖ pairs with the andₖ that carries the same index.
We can therefore eliminate three nots and replace the corresponding ands with three nands:
(not B nand C) nand (not B nand not D)
We see right away that the original formula was not expressed using only nands, because eliminating not/and pairs via the definition of nand still left non-nand operators behind.
However, the original formula does consist only of and and not. Because any formula can be written using nands only, this one can too. The lazy way is to use not x = x nand x three times to remove each of the remaining nots.
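As a sanity check, here is a quick Common Lisp sketch (the helper nand is defined by hand, since Lisp has no built-in nand) comparing the original and/not formula against the nand-only rewrite on all eight inputs:

(defun nand (x y) (not (and x y)))

;; original: not (not (not B and C) and not (not B and not D))
(defun original (b c d)
  (not (and (not (and (not b) c))
            (not (and (not b) (not d))))))

;; rewritten: (not B nand C) nand (not B nand not D)
(defun rewritten (b c d)
  (nand (nand (not b) c)
        (nand (not b) (not d))))

;; compare on all eight assignments
(dolist (b '(t nil))
  (dolist (c '(t nil))
    (dolist (d '(t nil))
      (assert (eq (original b c d) (rewritten b c d))))))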

Related

Why can't brackets in a simplified Boolean expression be separated by an AND?

Consider this map of (A ∧ D) ∨ (B ∧ D) ∨ (A ∧ ¬B ∧ C ∧ D):
The map is grouped into two sections, both of four squares.
This produces the simplified expression (B ∧ D) ∨ (A ∧ D).
This is in following with the rule:
"Groups must contain 1, 2, 4, 8, or in general 2^n cells"
However, if I were to group in such a way that groups contain six cells (not following the 2^n rule):
This would produce the simplified expression of:
(A ∨ B) ∧ D
I have run this trial a few more times, even splitting Karnaugh maps where possible groups of eight could be split into six and four. I have come to the conclusion that when splitting by six, or any size not of the form 2^n, the Boolean operator between brackets in the expression is ∧ (AND), whereas with groups of 2^n the separating operator is ∨ (OR).
Thus, as groups not sized 2^n produce AND divisions (between brackets), does this mean brackets in Boolean expressions cannot be separated by an AND?
And by proxy, is this why Karnaugh maps must be grouped into groups of 2^n squares?
Note
Online tools also simplify exclusively with OR dividers.
(B ∧ D) ∨ (A ∧ D) is the correct "sum of products" expression for this Karnaugh map, and (A ∨ B) ∧ D is an equivalent expression (per the "OR distributive law"), but it is no longer in a "sum of products" form.
What you did instead (rather than using the "OR distributive law"): instead of noting (from the top of the Karnaugh map) that the B value does not change for the first 2x2 block and that the A value does not change for the second 2x2 block, you noted further that these two columns of size 2 overlap, forming three columns defined by (A ∨ B).
That is fine, but it does not give the "sum of products" (groups of AND'd variables OR'd together), which is what the 2^n rule relates to. Instead, by happenstance, you ended up with the actual "product of sums" (groups of OR'd variables AND'd together).
The "formal", "graphical", "traditional", "easy", etc. way of getting to your final result (it also has a 2^n rule, but for 0s instead of 1s) is to circle the 0s instead of the 1s, again noting which variable values along the top and/or the right side do not change, but this time negating those values. In your example, the four 0s at the top and the four 0s at the bottom result in the D "sum" (note this "circle" spans from the top to the bottom, forming a "logical" circle so to speak). The remaining two 0s are combined with the 0 above them and the 0 below them, resulting in the (A ∨ B) "sum" (the idea is to cover all 0s while selecting the biggest 2^n blocks, even if they overlap). (A ∨ B) ∧ D is the "product" of these two "sums". Check out:
Minterm vs Maxterm Solution.
The method is "perfect" (as long as the "circles" are as big as possible and nothing is missed). If the "circles" are not as big as possible (but nothing is missed), the result will still be logically correct, but it will use more gates than the minimum.
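If in doubt, the equivalence is easy to confirm mechanically. A small Common Lisp sketch that brute-forces every assignment and checks that the original map function, the sum-of-products grouping, and the product-of-sums form all agree:

;; brute-force all assignments: the three forms must agree everywhere
(dolist (a '(t nil))
  (dolist (b '(t nil))
    (dolist (c '(t nil))
      (dolist (d '(t nil))
        (let ((original (or (and a d) (and b d) (and a (not b) c d)))
              (sop      (or (and b d) (and a d)))  ; (B ∧ D) ∨ (A ∧ D)
              (pos      (and (or a b) d)))         ; (A ∨ B) ∧ D
          (assert (eq original sop))
          (assert (eq original pos)))))))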

How can I find a logically equivalent formula without "implication" or "or"s?

So I am trying to finish up my discrete math homework, but I am completely stumped as to how I am supposed to solve this problem. My teacher wants me to find a logically equivalent formula for p v q that does not include exclusive or, implication, or inclusive or (i.e. she wants me to use only negation and ands). I don't want any answers, for I need to do my homework myself. But please, any examples or help would be GREATLY appreciated. I feel as though there is a simple way to do this going right over my head.
Using just NOTs and ANDs, you can create the NAND gate. Think of how complements relate ANDs and ORs and try and work it out for yourself before you see a big hint below.
Use De Morgan's laws to relate NANDs with ORs.
Furthermore, the NAND gate is a universal gate, which means that in principle any logic function can be realised with just the NAND gate. Try and see for yourself if you can simulate every other gate with just the NAND one.
If you really want the answer, it's below.
p OR q is equivalent to (NOT p) NAND (NOT q)
Here's a truth table for p V q:
p q p V q
T T T
T F T
F T T
F F F
We need to find an equivalent expression that gives the same final column (T, T, T, F) but using only not and and.
You can begin enumerating all possible formulas until you find one that works. The formula may use only p, q, not, and and.
p q p and q
T T T
T F F
F T F
F F F
The first thing we note is that the truth table for and gives three Fs, whereas ours needs three Ts. We can turn Ts into Fs and vice versa using negation, so maybe we try that:
p q not(p and q)
T T F
T F T
F T T
F F T
This seems close, except we need T, T, T, F and we have F, T, T, T. We might notice that this pattern is exactly reversed, and since the rows are ordered by the variables' truth values, we might guess that swapping the truth values of the inputs would work. To swap truth values, we use negation again:
p q not(not p and not q)
T T T
T F T
F T T
F F F
We found what we wanted. Now, I knew what the answer would be, but even if I hadn't, we would have eventually reached here by just listing out reasonable logical formulas in order. We knew:
Both symbols had to appear
Only "and" could allow both symbols to appear
The only other symbol allowed was not
not not x = x, so we never need to stack two negations in a row
The formulas we could have blindly started writing down truth tables for are:
p and q
(not p) and q
p and (not q)
not(p and q)
not(p) and not(q)
not(not(p) and q)
not(p and (not q))
not((not p) and (not q))
At which point we could have found the answer with no insights other than the four points above.
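That blind enumeration is small enough to hand to a machine. A Common Lisp sketch (the name *candidates* is just for illustration) that lists the eight formulas and prints the ones whose truth table matches p v q:

;; the eight candidates from the list above, as name/function pairs
(defparameter *candidates*
  (list (cons "p and q"                   (lambda (p q) (and p q)))
        (cons "(not p) and q"             (lambda (p q) (and (not p) q)))
        (cons "p and (not q)"             (lambda (p q) (and p (not q))))
        (cons "not (p and q)"             (lambda (p q) (not (and p q))))
        (cons "(not p) and (not q)"       (lambda (p q) (and (not p) (not q))))
        (cons "not ((not p) and q)"       (lambda (p q) (not (and (not p) q))))
        (cons "not (p and (not q))"       (lambda (p q) (not (and p (not q)))))
        (cons "not ((not p) and (not q))" (lambda (p q) (not (and (not p) (not q)))))))

;; print every candidate whose truth table matches p OR q
(dolist (c *candidates*)
  (when (loop for (p q) in '((t t) (t nil) (nil t) (nil nil))
              always (eq (funcall (cdr c) p q) (or p q)))
    (format t "~a~%" (car c))))
;; prints exactly one line: not ((not p) and (not q))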
Let's think about what the sentence p v q means.
p and q are, of course, two propositions - simple sentences that are true or false.
The sentence p v q is just saying 'p is true and/or q is true'. Basically, p v q is true when at least one of the two propositions is true.
So ask yourself the opposite: when would that sentence be false? Why, when neither of them are true! How would we express this? not p and not q
That amounts to saying that not (p or q) and (not p and not q) are equivalent sentences.
Which implies that not not (p or q) and not(not p and not q) are equivalent.
Now, by the double negation law, we know that two negations cancel out.
So we have p or q and not(not p and not q) are equivalent sentences.
And that is the answer you were looking for.

Are side effects everything that cannot be found in a pure function?

Is it safe to say that the following dichotomy holds:
Each given function is
either pure
or has side effects
If so, side effects (of a function) are anything that can't be found in a pure function.
This very much depends on the definitions that you choose. It is definitely fair to say that a function is pure or impure. A pure function always returns the same result and does not modify the environment. An impure function can return different results when it is executed repeatedly (which can be caused by doing something to the environment).
Are all impurities side-effects? I would not say so: a function can depend on something in the environment in which it executes. This could be reading some configuration, the GPS location, or data from the internet. These are not really "side-effects", because the function does not do anything to the world.
I think there are two different kinds of impurities:
Output impurity is when a function does something to the world. In Haskell, this is modelled using monads - an impure function a -> b is actually a function a -> M b where M captures the other things that it does to the world.
Input impurity is when a function requires something from the environment. An impure function a -> b can be modelled as a function C a -> b where the type C captures other things from the environment that the function may access.
Monads and output impurities are certainly better known, but I think input impurities are equally important. I wrote my PhD thesis about input impurities, which I call coeffects, so this might be a biased answer though.
For a function to be pure it must:
not be affected by side-effects (i.e. always return same value for same arguments)
not cause side-effects
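For illustration, a small Lisp sketch of those two bullets (the names square, scaled and *scale* are made up here):

(defvar *scale* 10)

;; pure: same arguments, same result, and nothing outside is touched
(defun square (x)
  (* x x))

;; impure on both counts: PRINT causes a side effect, and reading
;; *scale* (which other code may change) makes the result vary
(defun scaled (x)
  (print x)
  (* x *scale*))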
But, you see, this defines functional purity in terms of the absence of side-effects. You are trying to run the logic backwards and deduce the definition of side-effects from pure functions; that should work logically, but in practice the definition of a side-effect stands on its own and has nothing to do with functional purity.
I don't see problems with the definition of a pure function: a pure function is a function. I.e. it has a domain, a codomain and maps the elements of the former to the elements of the latter. It's defined on all inputs. It doesn't do anything to the environment, because "the environment" at this point doesn't exist: there are no machines that can execute (for some definition of "execute") a given function. There is just a total mapping from something to something.
Then some capitalist decides to invade the world of well-defined functions and enslave such pure creatures, but their fair essence can't survive in our cruel reality, functions become dirty and start to make the CPU warmer.
So it is the environment that is responsible for making the CPU warmer, and it makes perfect sense to talk about purity before the function's owner abuses and executes it. In the same way, referential transparency is a property of a language; it doesn't hold in the environment in general, because there can be a bug in the compiler, or a meteorite can fall upon your head and your program will stop producing the same result.
But there are other creatures: the dark inhabitants of the underworld. They look like functions, but they are aware of the environment and can interact with it: read variables, send messages and launch missiles. We call these fallen relatives of functions "impure" or "effectful" and avoid them as much as possible, because their nature is so dark that it's impossible to reason about them.
So there is clearly a big difference between those functions which can interact with the outside and those which don't. However, the definition of "outside" can vary too. The State monad is modelled using only pure tools, but we think about f : Int -> State Int Int as an effectful computation. Moreover, non-termination and exceptions (error "...") are effects, but Haskellers usually don't consider them so.
Summarizing, a pure function is a well-defined mathematical concept, but we usually consider functions in programming languages and what is pure there depends on your point of view, so it doesn't make much sense to talk about dichotomies when involved concepts are not well-defined.
A way to define purity of a function f is ∀x∀y x = y ⇒ f x = f y, i.e. given the same argument the function returns the same result, or it preserves equality.
This isn't what people usually mean when they talk about "pure functions"; they usually mean "pure" as "does not have side effects". I haven't figured out how to qualify a "side effect" (comments welcome!) so I don't have anything to say about it.
Nonetheless, I'll explore this concept of purity because it might offer some related insight. I'm no expert here; this is mostly me just rambling. I do however hope it sparks some insightful (and corrective!) comments.
To understand purity we have to know what equality we are talking about. What does x = y mean, and what does f x = f y mean?
One choice is the Haskell semantic equality. That is, equality of the semantics Haskell assigns to its terms. As far as I know there are no official denotational semantics for Haskell, but Wikibooks Haskell Denotational Semantics offers a reasonable standard that I think the community more or less agrees to ad-hoc. When Haskell says its functions are pure this is the equality it refers to.
Another choice is a user-defined equality (i.e. (==)) by deriving the Eq class. This is relevant when using denotational design — that is, we are assigning our own semantics to terms. With this choice we can accidentally write functions which are impure; Haskell is not concerned with our semantics.
I will refer to the Haskell semantic equality as = and the user-defined equality as ==. Also I assume that == is an equality relation — this does not hold for some instances of == such as for Float.
When I use x == y as a proposition what I really mean is x == y = True ∨ x == y = ⊥, because x == y :: Bool and ⊥ :: Bool. In other words, when I say x == y is true, I mean that if it computes to something other than ⊥ then it computes to True.
If x and y are equal according to Haskell's semantics then they are equal according to any other semantic we may choose.
Proof: if x = y then x == y ≡ x == x and x == x is true because == is pure (according to =) and reflexive.
Similarly we can prove ∀f∀x∀y x = y ⇒ f x == f y. If x = y then f x = f y (because f is pure), therefore f x == f y ≡ f x == f x and f x == f x is true because == is pure and reflexive.
Here is a silly example of how we can break purity for a user-defined equality.
data Pair a = Pair a a

instance (Eq a) => Eq (Pair a) where
  Pair x _ == Pair y _ = x == y

swap :: Pair a -> Pair a
swap (Pair x y) = Pair y x
Now we have:
Pair 0 1 == Pair 0 2
But:
swap (Pair 0 1) /= swap (Pair 0 2)
Note: ¬(Pair 0 1 = Pair 0 2) so we were not guaranteed that our definition of (==) would be okay.
A more compelling example is to consider Data.Set. If x, y, z :: Set A then you would hope this holds, for example:
x == y ⇒ (Set.union z) x == (Set.union z) y
Especially when Set.fromList [1,2,3] and Set.fromList [3,2,1] denote the same set but probably have different (hidden) representations (not equivalent by Haskell's semantics). That is to say we want to be sure that ∀z Set.union z is pure according to (==) for Set.
Here is a type I have played with:
import Data.List (group)

newtype Spry a = Spry [a]

instance (Eq a) => Eq (Spry a) where
  Spry xs == Spry ys = fmap head (group xs) == fmap head (group ys)
A Spry is a list which has non-equal adjacent elements. Examples:
Spry [] == Spry []
Spry [1,1] == Spry [1]
Spry [1,2,2,2,1,1,2] == Spry [1,2,1,2]
Given this, what is a pure implementation (according to == for Spry) for flatten :: Spry (Spry a) -> Spry a such that if x is an element of a sub-spry it is also an element of the flattened spry (i.e. something like ∀x∀xs∀i x ∈ xs[i] ⇒ x ∈ flatten xs)? Exercise for the reader.
It is also worth noting that the functions we've been talking about are across the same domain, so they have type A → A. That is except for when we proved ∀f∀x∀y x = y ⇒ f x == f y which crosses from Haskell's semantic domain to our own. This might be a homomorphism of some sorts… maybe a category theorist could weigh in here (and please do!).
Side effects are part of the definition of the language. In the expression
f e
the side effects of e are all the parts of e's behavior that are 'moved out' and become part of the behavior of the application expression, rather than being passed into f as part of the value of e.
For a concrete example, consider this program:
f x = x; x
f (print 3)
where conceptually the syntax x; x means 'run x, then run it again and return the result'.
In a language where print writes to stdout as a side effect, this writes
3
because the output is part of the semantics of the application expression.
In a language where the output of print is not a side effect, this writes
3
3
because the output is part of the semantics of the x variable inside the definition of f.
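Common Lisp, for what it's worth, is a language of the first kind: the argument is evaluated once, at the call site, so the second use of the parameter does not replay the output. A minimal sketch:

;; the body uses x twice, but (print 3) runs once, during argument
;; evaluation: the effect belongs to the application, not to x
(defun f (x)
  x   ; first use of x: no output
  x)  ; second use of x: no output; returns 3

(f (print 3)) ; prints 3 exactly once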

And operator Lisp

Why does the and operator return a value? What does the returned value depend on?
When I try the following example -
(write (and a b c d)) ; prints the greatest integer among a b c d
where a, b, c and d are positive integers, then and returns the greatest of them.
However when one of a, b, c and d is 0 or negative, then the smallest integer is returned. Why is this the case?
As stated in the documentation:
The macro and evaluates each form one at a time from left to right. As soon as any form evaluates to nil, and returns nil without evaluating the remaining forms. If all forms but the last evaluate to true values, and returns the results produced by evaluating the last form. If no forms are supplied, (and) returns t.
So the returned value doesn't depend on which value is the "greatest" or "smallest" of the arguments.
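With integer arguments, (and a b c d) simply returns d, the value of the last form; any apparent "greatest" or "smallest" pattern is a coincidence of the test data. A few REPL examples:

(and 3 1 2)   ; => 2   (the last form's value, not the greatest)
(and 3 0 2)   ; => 2   (0 is a true value in Lisp, not false)
(and 3 nil 2) ; => NIL (stops at the first nil)
(and)         ; => T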
and can be regarded as a generalization of the two-place if. That is to say, (and a b) works exactly like (if a b). With and in the place of if we can add more conditions: (and a1 a2 a3 a4 ... aN b). If they all yield true, then b is returned. If we use if to express this, it is more verbose, because we still have to use and: (if (and a1 a2 a3 ... aN) b). and also generalizes in that it can be used with only one argument (if cannot), and even with no arguments at all (yields t in that case).
Mathematically, and forms a monoid (not a group: nil has no inverse). The identity element of that monoid is t, which is why (and) yields t. To describe the behavior of the N-argument and, we just need these rules:
(and)     -> yield t
(and x y) -> { if x is true, evaluate and yield y
             { otherwise, yield nil
Now it turns out that this rule for a two-place and obeys the associative law: namely (and (and x y) z) behaves the same way as (and x (and y z)). The effect of the above rules is that no matter in which of these two ways we group the terms in this compound expression, x, y, and z are evaluated from left to right, and evaluation either stops at the first nil which it encounters and yields nil, or else it evaluates through to the end and yields the value of z.
Thus because we have this nice associative property, the rational thing to do in a nice language like Lisp which isn't tied to infix operators is to recognize that since the associative grouping doesn't matter, let's just have the flat syntax (and x1 x2 x3 ... xN), with any number of arguments including zero. This syntax denotes any one of the possible associations of N-1 binary ands which all yield the same behavior.
In other words, let's not make the poor programmer write a nested (and (and (and ...) ...) ...) to express a four-term and, and just let them write (and ...) with four arguments.
Summary:
the zero-place and yields t, which has to do with t being the identity element for the and operation.
the two-place and yields the second value if the first one is true. This is a useful equivalence to the two-place if. Binary and could be defined as yielding t when both arguments are true, but that would be less useful. In Lisp, any value that is not nil is a Boolean true. If we replace a non-nil value with t, it remains Boolean true, but we have lost potentially useful information.
the behavior of the n-place and is a consequence of the associative property; or rather preserving the equivalence between the flat N-argument form and all the possible binary groupings which are already equivalent to each other thanks to the associative property.
One consequence of all this is that we can have an extended if, like (if cond1 cond2 cond3 cond4 ... condN then-form), where then-form is evaluated and yielded if all the conditions are true. We just have to spell if using the and symbol.
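Some REPL-style illustrations of these points:

;; the two-place and behaves like the two-place if
(and (> 3 2) 'yes) ; => YES
(and (> 2 3) 'yes) ; => NIL

;; the "extended if": the last form is evaluated and returned
;; only if every preceding condition is true
(and (integerp 4) (evenp 4) (> 4 0) 'then-result) ; => THEN-RESULT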

How to define a function that returns half of input, in two different ways?

I am reading A Gentle Introduction to Symbolic Computation and it asks this question. Basically, the preceding content deals with building bigger functions out of small ones. (Like how 2- can be built from two applications of 1-, Lisp's decrement operator.)
So one of the questions is: what are the two different ways to define a function HALF which returns one half of its input? I have been able to come up with the obvious one (dividing the number by 2) but then got stuck. I was thinking of subtracting HALF of the number from itself to get half, but then the first half also has to be calculated... (I don't think the author intended to introduce recursion so soon in the book, so I am most probably wrong.)
So my question is what is the other way? And are there only two ways?
EDIT : Example HALF(5) gives 2.5
P.S - the book deals with teaching LISP, which I know nothing about, but it apparently has a specific bent towards using smaller blocks to build bigger ones, so please try to answer using that approach.
P.P.S - I found this so far, but it is on a completely different topic - How to define that float is half of the number?
Pdf of book available here - http://www.cs.cmu.edu/~dst/LispBook/book.pdf (ctrl+f "two different ways")
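For reference, a hedged guess at the two building-block definitions the exercise is most likely after (the names half-div and half-mul are made up, and this has not been checked against the book's answer key; floating-point constants are used so (half 5) prints as 2.5 rather than the rational 5/2):

(defun half-div (n)
  (/ n 2.0))   ; divide by two

(defun half-mul (n)
  (* n 0.5))   ; multiply by one half

(half-div 5) ; => 2.5
(half-mul 5) ; => 2.5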
It seems to me you are describing Peano arithmetic. In practice it works the same way as doing computation with fluids using cups and buckets.
You add by carrying cups from the source(s) to a target bucket until the source(s) is empty. Multiplication and division are just advanced addition and subtraction. To halve, you pour from the source into two buckets in alternation until the source is empty. Of course this will do either ceiling or floor, depending on which bucket you choose as the answer.
(defun halve (x)
  ;; an auxiliary procedure does the job (named REC rather than LOOP,
  ;; since locally rebinding the COMMON-LISP symbol LOOP is undefined
  ;; behavior and rejected by implementations with package locks)
  (labels ((rec (x even acc)
             (if (zerop x)
                 (if even (+ acc 0.5) acc)
                 (rec (- x 1) (not even) (if even (+ acc 1) acc)))))
    ;; use the auxiliary procedure
    (rec x nil 0)))
Originally I provided a Scheme version (since the question was just tagged lisp):
(define (halve x)
  (let loop ((x x) (even #f) (acc 0))
    (if (zero? x)
        (if even (+ acc 0.5) acc)
        (loop (- x 1) (not even) (if even (+ acc 1) acc)))))
Edit: Okay, let's see if I can describe this step by step. I'll also break the function into multiple lines.
(defun half (n)
  ;; takes integer n, returns half of n
  (+ (ash n -1)                   ; Line A
     (if (= (mod n 2) 1) .5 0)))  ; Line B
So this whole function is an addition problem. It is simply adding two numbers, but to calculate the values of those two numbers requires additional function calls within the "+" function.
Line A: This performs a bit-shift on n. The -1 tells the function to shift n to the right one bit. To explain this we'll have to look at bit strings.
Suppose we have the number 8, represented in binary. Then we shift it one to the right.
1000| --> 100|0
The vertical bar is the end of the number. When we shift one to the right, the rightmost bit pops off and is not part of the number, leaving us with 100. This is the binary for 4.
We get the same value, however, if we perform the shift on nine:
1001| --> 100|1
Once again we get the value 4. We can see from this example that bit-shifting truncates the value, and we need some way to account for the lost .5 on odd numbers, which is where Line B comes in.
Line B: First this line tests to see if n is even or odd. It does this by using the modulus operation, which returns the remainder of a division problem. In our case, the function call is (mod n 2), which returns the remainder of n divided by 2. If n is even, this will return 0, if it is odd, it will return 1.
Something that might be tripping you up is the Lisp if form. It takes a condition as its first argument. The next argument is the value the if form returns if the condition is true, and the final argument is what it returns if the condition is false.
So, in this case, we test to see if (mod n 2) is equal to one, which means we are testing to see if n is odd. If it is odd, we add .5 to our value from Line A, if it is not odd, we add nothing (0) to our value from Line A.
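Putting Lines A and B together:

(half 8) ; => 4    Line A gives 4; n is even, so Line B adds 0
(half 9) ; => 4.5  Line A truncates to 4; n is odd, so Line B adds .5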