Rearrange Boolean expression by replacing brackets

I was trying to check whether there is any expression equivalent to 1. (A AND B) OR C that can be obtained by removing the brackets and rearranging the operands and boolean operators.
Like 2. (A OR B) AND C = A AND C OR B AND C.
If I try to rewrite the first expression with the same logic as above, the result doesn't seem logically equivalent, i.e. A OR C AND B OR C. I want to remove the brackets from the expression; that's my main aim.

What do you mean by "remove the brackets"?
When you write
(A OR B) AND C = A AND C OR B AND C
what does "A AND C OR B AND C" mean?
Does it mean
A AND (C OR B) AND C
or
(A AND C) OR (B AND C)
Probably the latter, but only because you see implicit brackets around the AND expressions.
One generally considers that "AND" has a higher priority (precedence, in programming languages) than "OR", just as "×" has a higher priority than "+",
and
a×b+c
is generally not ambiguous in terms of interpretation.
And it is true that "×" can be distributed over a sum, while the opposite is not true.
But there is no such asymmetry in boolean algebra: "AND" and "OR" have similar properties,
and each distributes over the other.
So if your question is
"can we distribute an "OR" over an "AND" expression?",
the answer is yes.
(A AND B) OR C = (A OR C) AND (B OR C)
(just consider what happens when C=0 and when C=1 to verify that it is true)
If your question is
"does such an expression can be expressed without parenthesis with the rule that AND expressions are implicitely surrounded by parenthesis?",
the answer is also yes. Just write
A AND B OR C
with the precedence rule, it is interpreted as
(A AND B) OR C
You can also write
A.B+C
to make it clearer that "AND" is generally considered equivalent to "×" and "OR" equivalent to "+" in terms of precedence.
And in programming languages like C, when one writes
A & B | C
it is clearly interpreted as
(A & B) | C
and the same is true for
A && B || C
So with the "implicit parentheses around AND expressions" rule, you can write
A AND B OR C = (A OR C) AND (B OR C)
(A OR B) AND C = A AND C OR B AND C
Note that in either case, you need parentheses on one side of the equality.
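If you want to check this mechanically rather than case by case, an exhaustive test over the eight possible assignments confirms both equalities; here is a small Haskell sketch (checkDistribution is just an illustrative name):
-- Verify both distribution laws for every assignment of A, B and C.
checkDistribution :: Bool
checkDistribution = and
  [ (((a && b) || c) == ((a || c) && (b || c)))    -- OR distributes over AND
    && (((a || b) && c) == ((a && c) || (b && c))) -- AND distributes over OR
  | a <- bools, b <- bools, c <- bools ]
  where bools = [False, True]
-- checkDistribution evaluates to True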

Related

Understand Either as a Functor

Looking into how Either is defined as a functor, I can see that
derive instance functorEither :: Functor (Either a)
which reads to me as "You can map an Either so long as you can map its element."
But Either doesn't have just one element. How would this be implemented without derive? Here's what I've tried:
data Either a b = Left a | Right b
instance functorEither :: Functor (Either a) where
  map f (Right b) = Right $ f b
  map _ a = a
Of course, the types don't work here:
The Right case has this signature: map :: forall a b. (a -> b) -> f a -> f b
The Left case, however, isn't okay: map :: forall a b. (a -> b) -> f a -> f a
Part of my intuition is saying that Either a b isn't a functor, only Either a is a functor, which is why map works over Right and ignores Left.
That doesn't really give me any intuition for how this is implemented. I still need a way of matching both constructors, don't I?
On the other hand, I think an implementation of map that replaces the inner function with identity is technically law-abiding for functor? The law of composition is met if you just ignore it?
While your proposed definition of the Functor instance indeed fails to compile, it isn't for the reason you say. And it's also "essentially" correct, just not written in a way that will satisfy the compiler.
For convenience, here's your definition again:
data Either a b = Left a | Right b
instance functorEither :: Functor (Either a) where
  map f (Right b) = Right $ f b
  map _ a = a
and here's the actual error that you get when trying to compile it:
Could not match type
a02
with type
b1
while trying to match type Either a0 a02
with type Either a0 b1
while checking that expression a
has type Either a0 b1
in value declaration functorEither
where a0 is a rigid type variable
bound at (line 0, column 0 - line 0, column 0)
b1 is a rigid type variable
bound at (line 0, column 0 - line 0, column 0)
a02 is a rigid type variable
bound at (line 0, column 0 - line 0, column 0)
I admit that's a little hard to interpret, if you're not expecting it. But it has to do with the fact that map for Either a needs to have type forall b c. (b -> c) -> Either a b -> Either a c. So the a on the left of map _ a = a has type Either a b, while the one on the right has type Either a c - these are different types (in general), since b and c can be anything, so you can't use the same variable, a, to denote a value of each type.
(This question, although about Haskell rather than Purescript, goes deeper into explanation of exactly this error.)
To fix it, as implied in the question above, you have to explicitly mention that the value you're mapping over is a Left value:
data Either a b = Left a | Right b
instance functorEither :: Functor (Either a) where
  map f (Right b) = Right $ f b
  map _ (Left a) = Left a
which is fine, because Left a can be interpreted on the left-hand side as having type Either a b and on the right-hand side as an Either a c.
As for what the instance "does": you are correct that "Either a b isn't a functor, only Either a is a functor" - because a functor must take one type variable, which Either a does but Either a b doesn't. And yes, because the type variable that actually "varies" between Either a b and Either a c is the one that is used in Right, map must only map over the Right values, and leave the Left ones alone - that's the only thing that will satisfy the types needed.
Either a b is often interpreted as representing the result of a computation, where Left values represent failure while Right ones represent success. In this sense it's a slightly "expanded" version of Maybe - the difference is that rather than failure being represented by a single value (Nothing), you get a piece of data (the a type in Either a b) which can tell you information about the error. But the Functor instance works identically to that for Maybe: it maps over any success, and leaves failures alone.
(But there's no logical reason why you can't "map over" the Left values as well. The Bifunctor class is an extension of Functor which can do exactly that.)
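For a concrete picture of what that looks like, here is a rough Haskell sketch of such an instance, written against a renamed Either' so it doesn't clash with the instance base already ships (PureScript's Data.Bifunctor class is analogous):
import Data.Bifunctor (Bifunctor (..))

-- A stand-in for Either, so we can write the instance ourselves.
data Either' a b = Left' a | Right' b

-- bimap maps over both sides: the first function handles Left',
-- the second handles Right'.
instance Bifunctor Either' where
  bimap f _ (Left' a)  = Left'  (f a)
  bimap _ g (Right' b) = Right' (g b)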

What are isomorphisms and homomorphisms

I am trying to understand isomorphisms and homomorphisms in the context of programming and need some help.
The book FPiS explains:
Let's start with a homomorphism:
"foo".length + "bar".length == ("foo" + "bar").length
Here, length is a function from String to Int that preserves the monoid structure.
Why is that a homomorphism?
Why does it preserve the monoid structure?
Is, for example, the map function on lists a homomorphism?
About isomorphism, I have the following explanation that I took from the book:
A monoid isomorphism between M and N has two homomorphisms
f and g, where both f andThen g and g andThen f are an identity function.
For example, the String and List[Char] monoids with concatenation are isomorphic.
The two Boolean monoids (false, ||) and (true, &&) are also isomorphic,
via the ! (negation) function.
Why are the (false, ||) and (true, &&) monoids, and the String and List[Char] monoids with concatenation, isomorphic?
Why is that a homomorphism?
By definition.
Why does it preserve the monoid structure?
Because of the == in the expression above.
Is, for example, the map function on lists a homomorphism?
Yes. Replace "foo" and "bar" by two lists, and .length by .map(f). It's then easy to see (and prove) that the equation holds.
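In Haskell notation (the Scala version is analogous), the claim is that map f sends the empty list to the empty list and turns concatenation into concatenation; a small property sketch, with an illustrative name:
-- map f is a monoid homomorphism from (lists, ++, []) to (lists, ++, []).
mapIsHomomorphism :: Eq b => (a -> b) -> [a] -> [a] -> Bool
mapIsHomomorphism f xs ys =
  null (map f []) && (map f (xs ++ ys) == map f xs ++ map f ys)
-- e.g. mapIsHomomorphism (+ 1) [1, 2] [3] == True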
Why are the (false, ||) and (true, &&) monoids, and the String and List[Char] monoids with concatenation, isomorphic?
By definition. The proof is trivial, left as an exercise. (Hint: take the definition of an isomorphism, replace all abstract objects with concrete objects, prove that the resulting mathematical expression is correct)
Edit: Here are the few definitions you asked in comments:
Homomorphism: a transformation of one set into another that preserves in the second set the relations between elements of the first. Formally, f: A → B where both A and B have a * operation such that f(x * y) = f(x) * f(y).
Monoid: algebraic structure with a single associative binary operation and an identity element. Formally (M, *, id) is a Monoid iff (a * b) * c == a * (b * c) && a * id == a && id * a == a for all a, b, c in M.
"foo".length + "bar".length == ("foo" + "bar").length
To be precise, this is saying that length is a monoid homomorphism between the monoid of strings with concatenation and the monoid of natural numbers with addition. That these two structures are monoids is easy to see if you put in the effort.
The reason length is a monoid homomorphism is that it has the properties "".length = 0 and x.length ⊕ y.length = (x ⊗ y).length. Here, I've deliberately used two different symbols for the two monoid operations, to stress that on one side we apply the addition operation to the results of length, while on the other we apply the string concatenation operation to the two arguments before applying length. It is just unfortunate notation that the example you're looking at uses the same symbol + for both operations.
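As a quick sanity check, the two properties can be written out in Haskell, using ++ for concatenation and + for addition so the two operations stay visibly distinct (the helper names below are purely illustrative):
-- length sends the neutral element of (String, ++, "") to that of (Int, +, 0) ...
lengthPreservesUnit :: Bool
lengthPreservesUnit = length "" == 0

-- ... and sends concatenation to addition.
lengthPreservesOp :: String -> String -> Bool
lengthPreservesOp x y = length (x ++ y) == length x + length y
-- e.g. lengthPreservesOp "foo" "bar" == True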
Edited to add: the question poster asked for some further details on what exactly a monoid homomorphism is.
OK, so suppose we have two monoids (A, ⊕, a) and (B, ⊗, b), meaning A and B are our two carriers, ⊕ : A × A → A and ⊗ : B × B → B are our two binary operators, and a ∈ A and b ∈ B are our two neutral elements. A monoid homomorphism between these two monoids is a function f : A → B with the following properties:
f(a) = b, i.e. if you apply f on the neutral element of A, you get the neutral element of B
f(x ⊕ y) = f(x) ⊗ f(y), i.e. if you apply f on the result of the operator of A on two elements, it is the same as applying it twice, on the two A elements, and then combining the results using the operator of B.
The point is that the monoid homomorphism is a structure-preserving mapping (which is what a homomorphism is; break down the word to its ancient Greek roots and you will see that it means 'same-shaped-ness').
OK, you asked for examples, here are some examples!
The one from the above example: length is a monoid homomorphism from the free monoid (A*, ·, ε) to (ℕ, +, 0)
Negation is a monoid homomorphism from (Bool, ∨, false) to (Bool, ∧, true) and vice versa (see the sketch after this list).
exp is a monoid homomorphism from (ℝ, +, 0) to (ℝ\{0}, *, 1)
In fact, every group homomorphism is also, of course, a monoid homomorphism.
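The Bool example from this list is small enough to verify exhaustively; the following Haskell sketch spells out the two homomorphism conditions for negation (it boils down to De Morgan's law plus not False == True; the helper names are illustrative):
-- not is a monoid homomorphism from (Bool, ||, False) to (Bool, &&, True):
-- it maps the neutral element to the neutral element ...
negPreservesUnit :: Bool
negPreservesUnit = not False == True

-- ... and maps (||) to (&&).
negPreservesOp :: Bool -> Bool -> Bool
negPreservesOp x y = not (x || y) == (not x && not y)
-- True for all four combinations of x and y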

Propositional formula for A, B, AC and BC

I am struggling to get a propositional formula for the following truth table: A, B, AC, BC.
For A and B it's easy: A xor B. However, when you insert a new literal C...
I tried using Wolfram by inputting the truth table (A & ~B & ~C) || (~A & B & ~C) || (A & ~B & C) || (~A & B & C). However, the suggested minimal forms are wrong since they do not consider C.
Can someone help expressing this in propositional logic using logical connectives like (A xor B) => C? Thanks!
You can perform minimization by means of Karnaugh maps (amongst other methods - this one is the simplest; you'll have to introduce a dummy variable D and just ignore it in the results).
The solutions are right about not considering C, though - it doesn't matter what C evaluates to, as long as A xor B evaluates to true. I just checked that, to remind myself how Karnaugh maps are constructed. Try drawing a full truth table yourself to see that.
Take a look at the expression:
(A & ~B & ~C) || (~A & B & ~C) ||
(A & ~B & C) || (~A & B & C)
Both lines are identical except for the negation of C; this means that C is irrelevant: the value of C doesn't change the output of the function.
This is also the conclusion one draws from a truth table:
|A|B|C||F|
+-+-+-++-+
|F|F|F||F|
|F|F|T||F|
|F|T|F||T|
|F|T|T||T|
|T|F|F||T|
|T|F|T||T|
|T|T|F||F|
|T|T|T||F|
Here F is the outcome of the expression. If you for instance take the first line, |F|F|F||F|, the result is false, which is the same for |F|F|T||F| (with C flipped). By doing this for every (A,B) configuration one sees that the value of C doesn't matter.
Therefore you can simply exclude C from the formula resulting in:
(A & ~B) || (~A & B)
Which means A xor B.
Wolfram Alpha comes to the same conclusion (see ANF expression).
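If you want to double-check the simplification yourself, a brute-force comparison of the original four-term expression against A xor B over all eight assignments is easy to write; here is one way in Haskell (just a sketch, with made-up names):
-- The original four-term expression ...
original :: Bool -> Bool -> Bool -> Bool
original a b c = (a && not b && not c) || (not a && b && not c)
              || (a && not b && c) || (not a && b && c)

-- ... and the minimized form, A xor B (written as /=).
minimized :: Bool -> Bool -> Bool
minimized a b = a /= b

-- True: the two agree everywhere, so C never influences the result.
equivalent :: Bool
equivalent = and [ original a b c == minimized a b
                 | a <- [False, True], b <- [False, True], c <- [False, True] ]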
I have the answer
(A xor B) and (C => (A or B))

Type hierarchy definition in Coq or Agda

I would like to build a kind of type hierarchy:
B is of type A (B :: A)
C and D are of type B (C, D :: B)
E and F are of type C (E, F :: C)
I asked here if this is possible to be directly implemented in Isabelle, but the answer as you see was No. Is it possible to encode this directly in Agda or Coq?
PS: Suppose A..F are all abstract and some functions are defined over each type.
Thanks
If I understood your question correctly, you want something that looks like the identity type. When we declare the type constructor _isOfType_, we mention two Sets (the parameter A and the index B) but the constructor indeed makes sure that the only way to construct an element of such a type is to enforce that they are indeed equal (and that a is of this type):
data _isOfType_ {ℓ} {A : Set ℓ} (a : A) : (B : Set ℓ) → Set where
  indeed : a isOfType A
We can now have functions taking as arguments proofs that things are of the right type. Here I translated your requirements & assumed that I had a function f able to combine two Cs into one. Pattern-matching on the appropriate assumptions reveals that E and F are indeed of type C and can therefore be fed to f to discharge the goal:
example : ∀ (A : Set₃) (B : Set₂) (C D : Set₁) (E F : Set) →
            B isOfType A
          → C isOfType B → D isOfType B
          → E isOfType C → F isOfType C
          → (f : C → C → C) → C
example A B .Set D E F _ _ _ indeed indeed f = f E F
Do you have a particular use case in mind for this sort of pattern, or are you coming to Agda with ideas you have encountered in other programming languages? There may be a more idiomatic way to formulate your problem.

Why does Haskell's foldr NOT stack overflow while the same Scala implementation does?

I am reading FP in Scala.
Exercise 3.10 says that foldRight overflows the stack.
As far as I know, however, foldr in Haskell does not.
http://www.haskell.org/haskellwiki/
-- if the list is empty, the result is the initial value z; else
-- apply f to the first element and the result of folding the rest
foldr f z [] = z
foldr f z (x:xs) = f x (foldr f z xs)
-- if the list is empty, the result is the initial value; else
-- we recurse immediately, making the new initial value the result
-- of combining the old initial value with the first element.
foldl f z [] = z
foldl f z (x:xs) = foldl f (f z x) xs
How is this different behaviour possible?
What is the difference between the two languages/compilers that causes this different behaviour?
Where does this difference come from? The platform? The language? The compiler?
Is it possible to write a stack-safe foldRight in Scala? If yes, how?
Haskell is lazy. The definition
foldr f z (x:xs) = f x (foldr f z xs)
tells us that the behaviour of foldr f z xs with a non-empty list xs is determined by the laziness of the combining function f.
In particular the call foldr f z (x:xs) allocates just one thunk on the heap, {foldr f z xs} (writing {...} for a thunk holding an expression ...), and calls f with two arguments - x and the thunk. What happens next, is f's responsibility.
In particular, if f is a lazy data constructor (like e.g. (:)), the constructed value will immediately be returned to the caller of the foldr call (with the constructor's two slots filled by (references to) the two values).
And if f does demand its value on the right, with minimal compiler optimizations no thunks should be created at all (or one at the most - the current one), as the value of foldr f z xs is immediately needed and the usual stack-based evaluation can be used:
foldr f z [a,b,c,....,n] ==
a `f` (b `f` (c `f` (... (n `f` z)...)))
So foldr can indeed cause a stack overflow, when used with a strict combining function on extremely long input lists. But if the combining function doesn't immediately demand its value on the right, or only demands a part of it, the evaluation will be suspended in a thunk, and the partial result created by f will be returned immediately. The same goes for the argument on the left, except that it potentially already arrives as a thunk in the input list.
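A concrete way to see this is to fold with a lazy constructor such as (:), or with a combining function like (&&) that may never demand its right argument; in GHCi (a quick illustration, not from the book):
-- Only as much of the fold as the consumer demands is ever forced:
Prelude> take 5 (foldr (:) [] [1..])
[1,2,3,4,5]
-- (&&) returns as soon as it sees False, so the rest of the infinite fold
-- is never evaluated:
Prelude> foldr (&&) True (False : repeat True)
False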
Haskell is lazy. So foldr allocates on the heap, not the stack. Depending on the strictness of the argument function, it may allocate a single (small) result, or a large structure.
You're still losing space, compared to a strict, tail-recursive implementation, but it doesn't look as obvious, since you've traded stack for heap.
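For a strict reduction such as summing, the Haskell counterpart of that strict, tail-recursive implementation is Data.List.foldl', which forces the accumulator at every step; a rough comparison sketch (the function names are just illustrative):
import Data.List (foldl')

-- Builds a nested chain of (+) thunks proportional to the list length
-- before anything is evaluated, so it needs space linear in n:
sumViaFoldr :: Int -> Int
sumViaFoldr n = foldr (+) 0 [1 .. n]

-- Tail-recursive and strict in the accumulator, so it runs in constant space:
sumViaFoldl' :: Int -> Int
sumViaFoldl' n = foldl' (+) 0 [1 .. n]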
Note that the authors here are not referring to any foldRight definition in the Scala standard library, such as the one defined on List. They are referring to the definition of foldRight they gave above in section 3.4.
The Scala standard library defines foldRight in terms of foldLeft by reversing the list (which can be done in constant stack space) and then calling foldLeft with the arguments of the passed function reversed. This works for lists, but won't work for a structure which cannot be safely reversed, for example:
scala> Stream.continually(false)
res0: scala.collection.immutable.Stream[Boolean] = Stream(false, ?)
scala> res0.reverse
java.lang.OutOfMemoryError: GC overhead limit exceeded
Now let's think about what the result of this operation should be:
Stream.continually(false).foldRight(true)(_ && _)
The answer should be false: it doesn't matter how many false values are in the stream, or whether it is infinite; if we are going to combine them with a conjunction, the result will be false.
Haskell of course gets this with no problem:
Prelude> foldr (&&) True (repeat False)
False
And that is because of two important things: Haskell's foldr will traverse the stream from left to right, not right to left, and Haskell is lazy by default. The first item here, that foldr actually traverses the list from left to right, might surprise or confuse some people who think of a right fold as starting from the right, but the important feature of a right fold is not which end of the structure it starts on, but in which direction it associates. So given a list [1,2,3,4] and an op named op, a left fold is
(((1 op 2) op 3) op 4)
and a right fold is
(1 op (2 op (3 op 4)))
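A quick way to make the difference in grouping tangible is to fold with a non-associative operator such as subtraction; for instance, in GHCi:
-- left fold: (((0 - 1) - 2) - 3) - 4
Prelude> foldl (-) 0 [1,2,3,4]
-10
-- right fold: 1 - (2 - (3 - (4 - 0)))
Prelude> foldr (-) 0 [1,2,3,4]
-2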
But the order of evaluation shouldn't matter. So what the authors have done here in chapter 3 is give you a fold which traverses the list from left to right, but because Scala is strict by default, we still will not be able to traverse our stream of infinite falses. Have some patience, they will get to that in chapter 5 :) I'll give you a sneak peek: let's look at the difference between foldRight as it is defined in the standard library and as it is defined in the Foldable typeclass in scalaz:
Here's the implementation from the scala standard library:
def foldRight[B](z: B)(op: (A, B) => B): B
Here's the definition from scalaz's Foldable:
def foldRight[B](z: => B)(f: (A, => B) => B): B
The difference is that the Bs are all lazy, and now we get to fold our infinite stream again, as long as we give a function which is sufficiently lazy in its second parameter:
scala> Foldable[Stream].foldRight(Stream.continually(false),true)(_ && _)
res0: Boolean = false
One easy way to demonstrate this in Haskell is to use equational reasoning to trace the lazy evaluation. Let's write the find function in terms of foldr:
-- Return the first element of the list that satisfies the predicate, or `Nothing`.
find :: (a -> Bool) -> [a] -> Maybe a
find p = foldr (step p) Nothing
  where step pred x next = if pred x then Just x else next
foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f z [] = z
foldr f z (x:xs) = f x (foldr f z xs)
In an eager language, if you wrote find with foldr it would traverse the whole list and use O(n) space. With lazy evaluation, it stops at the first element that satisfies the predicate, and uses only O(1) space (modulo garbage collection):
find odd [0..]
== foldr (step odd) Nothing [0..]
== step odd 0 (foldr (step odd) Nothing [1..])
== if odd 0 then Just 0 else (foldr (step odd) Nothing [1..])
== if False then Just 0 else (foldr (step odd) Nothing [1..])
== foldr (step odd) Nothing [1..]
== step odd 1 (foldr (step odd) Nothing [2..])
== if odd 1 then Just 1 else (foldr (step odd) Nothing [2..])
== if True then Just 1 else (foldr (step odd) Nothing [2..])
== Just 1
This evaluation stops in a finite number of steps, in spite of the fact that the list [0..] is infinite, so we know that we're not traversing the whole list. In addition, there is an upper bound on the complexity of the expressions at each step, which translates into a constant upper bound on the memory required to evaluate this.
The key here is that the step function that we're folding with has this property: no matter what the values of x and next are, it will either:
Evaluate to Just x, without invoking the next thunk, or
Tail-call the next thunk (in effect, if not literally).
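To see this behave as described, the definition can be run directly: even over the infinite list [0..] it returns as soon as the predicate succeeds. Here is a self-contained version, renamed findR so it doesn't clash with Data.List.find:
-- foldr-based find, as above, packaged as a runnable program.
findR :: (a -> Bool) -> [a] -> Maybe a
findR p = foldr step Nothing
  where step x next = if p x then Just x else next

main :: IO ()
main = do
  print (findR odd [0..])        -- Just 1, despite the infinite list
  print (findR (> 10) [1 .. 5])  -- Nothing: no element matches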