I am trying to understand the Product and Coproduct corresponding to the following picture:
Product: (diagram)
Coproduct: (diagram)
As I understand it, a product type in Haskell is, for example:
data Pair = P Int Double
and a sum type is:
data Sum = I Int | D Double
How should I understand the images in relation to sum and product types?
The images are from http://blog.higher-order.com/blog/2014/03/19/monoid-morphisms-products-coproducts/.
So as far as I can tell, the idea behind these diagrams is that you are given:
types A, B and Z
functions f and g of the indicated types (in the first diagram, f :: Z -> A and g :: Z -> B; in the second, the arrows go "the other way", so f :: A -> Z and g :: B -> Z).
I'll concentrate on the first diagram for now, so that I don't have to say everything twice with slight variations.
Anyway, given the above, the idea is that there is a type M together with functions fst :: M -> A, snd :: M -> B, and h :: Z -> M such that, as the mathematicians say, the diagram "commutes". That simply means that, given any two points in the diagram, if you follow the arrows in any way from one to the other, the resulting functions are the same. That is, f is the same as fst . h, and g is the same as snd . h.
It is easy to see that, no matter what Z is, the pair type (A, B), together with the usual Haskell functions fst and snd, satisfies this - together with an appropriate choice of h, which is:
h z = (f z, g z)
which trivially satisfies the two required identities for the diagram to commute.
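To make this concrete, here is a minimal sketch with hypothetical choices Z = Int, A = String, B = Bool (the names f, g, and h are just the ones from the diagram):

```haskell
-- Hypothetical arrows out of Z = Int.
f :: Int -> String
f z = show z

g :: Int -> Bool
g z = z > 0

-- The mediating arrow h :: Z -> (A, B), built exactly as above.
h :: Int -> (String, Bool)
h z = (f z, g z)

-- The diagram commutes: fst . h agrees with f, and snd . h agrees with g.
```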
That's a basic explanation of the diagram. But you may be slightly confused about the role of Z in all this. That arises because what's actually being stated is rather stronger: given A, B, f and g, there is an M, together with functions fst and snd, such that you can construct such a diagram for any type Z (which means supplying a function h :: Z -> M as well). And further, that there is only one function h which satisfies the required properties.
It's pretty clear, once you play with it and understand the various requirements, that the pair (A, B), and the various other types isomorphic to it (which basically means MyPair A B where you've defined data MyPair a b = MyPair a b), are the only things which satisfy this. There are other types M for which the diagram still commutes, but they fail the uniqueness requirement on h - e.g. take M to be a triple (A, B, Int), with fst and snd extracting ("projecting to", in mathematical terminology) the first and second components; then h z = (f z, g z, x) makes the diagram commute for any x :: Int that you care to name, so h is no longer unique.
It's been too long since I studied mathematics, and category theory in particular, to be able to prove that the pair (A, B) is the only type that satisfies the "universal property" we're talking about - but rest assured that it is, and you really don't need to understand that (or really any of this) in order to be able to program with product and sum types in Haskell.
The second diagram is more or less the same, but with all the arrows reversed. In this case the "coproduct" or "sum" M of A and B turns out to be Either a b (or something isomorphic to it), and h :: M -> Z will be defined as:
h (Left a) = f a
h (Right b) = g b
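This h is exactly the standard Prelude function either f g. A minimal sketch, with hypothetical f and g into a common target Z = String:

```haskell
-- Hypothetical arrows into Z = String.
f :: Int -> String
f a = "int: " ++ show a

g :: Bool -> String
g b = "bool: " ++ show b

-- The mediating arrow h :: Either Int Bool -> String.
h :: Either Int Bool -> String
h (Left a)  = f a
h (Right b) = g b

-- Commutativity here means: h . Left agrees with f, and h . Right with g.
```

Equivalently, h = either f g.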
A product (a tuple in Haskell) is an object with two projections. Those are the functions fst and snd, projecting the product onto its individual factors.
Conversely, a coproduct (Either in Haskell) is an object that has two injections. Those are the functions Left and Right, injecting the individual summands into the sum.
Note that both product and coproduct need to satisfy a universal property. I recommend Bartosz Milewski's introduction on the topic along with his lecture.
One thing not communicated by these diagrams is which pieces are inputs and which are outputs. I'm going to start with products and be extra careful about which things are handed to you, and which you must cook up.
So a product says:
You give me two objects, A and B.
I give you a new object M, and two arrows fst : M -> A and snd : M -> B.
You give me an object Z and two arrows f : Z -> A and g : Z -> B.
I give you an arrow h : Z -> M that makes the diagram commute (...and this arrow is uniquely determined by the choices made so far).
We often pretend that there is a category Hask in which the objects are concrete (monomorphic) types, and the arrows are Haskell functions of the appropriate type. Let's see how the protocol above plays out, and demonstrate that Haskell's data Pair a b = P a b is a product in Hask.
You give me two objects (types), A=a and B=b.
I must produce an object (type) and two arrows (functions). I pick M=Pair a b. Then I must write functions of type Pair a b -> a (for the arrow fst : M -> A) and Pair a b -> b (for the arrow snd : M -> B). I choose:
fst :: Pair a b -> a
fst (P a b) = a
snd :: Pair a b -> b
snd (P a b) = b
You give me an object (type) Z=z and two arrows (functions); f will have type z -> a and g will have type z -> b.
I must produce a function h of type z -> Pair a b. I choose:
h = \z -> P (f z) (g z)
This h is required to make the diagram commute. This means that any two paths through the diagram that begin and end at the same object should be equal. For the diagrams given, that means we must show that it satisfies two equations:
f = fst . h
g = snd . h
I'll prove the first; the second is similar.
fst . h
= { definition of h }
fst . (\z -> P (f z) (g z))
= { definition of (.) }
\v -> fst ((\z -> P (f z) (g z)) v)
= { beta reduction }
\v -> fst (P (f v) (g v))
= { definition of fst }
\v -> f v
= { eta reduction }
f
As required.
The story for coproducts is similar, with the slight tweaks to the protocol described below:
You give me two objects, A and B.
I give you a new object W, and two arrows left : A -> W and right : B -> W.
You give me an object Z and two arrows f : A -> Z and g : B -> Z.
I give you an arrow h : W -> Z that makes the diagram commute (...and this arrow is uniquely determined by the choices made so far).
It should be straightforward to adapt the discussion above about products and Pair to see how this would apply to coproducts and data Copair a b = L a | R b.
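As a sketch of that adaptation (using the names left, right, and the mediating arrow from the protocol; Copair is the type defined above):

```haskell
data Copair a b = L a | R b

-- The two injections of the coproduct protocol.
left :: a -> Copair a b
left = L

right :: b -> Copair a b
right = R

-- Given f :: a -> z and g :: b -> z, the unique mediating arrow.
copair :: (a -> z) -> (b -> z) -> Copair a b -> z
copair f g (L a) = f a
copair f g (R b) = g b

-- Commutativity: copair f g . left agrees with f,
-- and copair f g . right agrees with g.
```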
Related
I am learning DBMS and normalization and I have come across the following exercise. For the following problem:
Consider the relation R(b,e,s,t,r,o,n,g) with functional dependencies
b,s -> e,r,o,n
b -> t
b -> g
n -> b
o -> r
(a) identify candidate keys
(b) identify prime attributes
(c) state the highest normal form of this table
I think that (a) would be {b, s} since they identify all attributes without redundancy.
(b) would also be {b, s} since they compose the candidate keys of (a).
(c) would be 1NF for several reasons. It does not satisfy 2NF since there are partial dependencies such as n -> b: that functional dependency depends only on b and not on s, hence it is a partial dependency. It does not satisfy 3NF since o -> r indicates that a non-prime attribute depends on another non-prime attribute. BCNF is not satisfied since 3NF is not satisfied.
Lastly, if I were to modify the table until it is in BCNF, would splitting the relation R into:
R1(b, e, s, r, o, n) with b, s -> e, r, o, n
and
R2(b, t, g) with b -> t and b -> g
while eliminating the n -> b and o -> r satisfy BCNF?
I am most confused on the last part regarding satisfying BCNF. I would greatly appreciate any help/thoughts on all steps!
The schema has two candidate keys: {b, s} and {n, s}. You can verify that both are keys by computing the closures of the two sets of attributes.
So the prime attributes are b, s, and n.
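That closure computation can be sketched in a few lines of Haskell (attributes as single characters; this is an illustration, not part of the original exercise):

```haskell
import Data.List (nub, sort, (\\))

-- The functional dependencies of R(b,e,s,t,r,o,n,g).
fds :: [(String, String)]
fds = [("bs", "eron"), ("b", "t"), ("b", "g"), ("n", "b"), ("o", "r")]

-- Attribute closure: repeatedly add the right-hand side of every FD
-- whose left-hand side is already contained in the current set.
closure :: [(String, String)] -> String -> String
closure deps attrs
  | attrs' == attrs = attrs
  | otherwise       = closure deps attrs'
  where
    attrs' = sort . nub $
      attrs ++ concat [rhs | (lhs, rhs) <- deps, null (lhs \\ attrs)]

-- closure fds "bs" and closure fds "ns" both give "begnorst" (all
-- eight attributes), confirming that {b,s} and {n,s} are keys.
```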
You are correct in saying that the relation is not in 2NF, neither in 3NF.
Your proposed decomposition does not produce subschemas in BCNF, since in R1 the dependency o → r still holds, and o is not a superkey of R1.
The “classical” decomposition algorithm for BCNF produces the following normalized schema:
R1(b g t)
R2(o r)
R3(b n)
R4(e n o s)
but the dependencies
b s → e
b s → n
b s → o
b s → r
are not preserved in the decomposition.
A decomposition in 3NF that preserves data and dependencies is the following:
R1(b e n o s)
R2(b g t)
R3(o r)
In this decomposition, R2 and R3 are also in BCNF, while the dependency n → b in R1 violates the BCNF.
Coq defines the multiplicative inverse function 1/x as a total function R -> R, both in Rdefinitions.v and in Field_theory.v. The value 1/0 is left undefined; all calculation axioms simply ignore it.
However, this is a problem in constructive mathematics, because all total functions R -> R must be continuous, and we cannot connect the positive and negative infinities at zero. Therefore the constructive inverse is instead a partial function:
Finv : forall x : R, (0 < x \/ x < 0) -> R
This is for example how it is defined in the C-CoRN library.
Now, is there a way to use the field tactic with those partial inverse functions? A direct Add Field does not work.
The answer is no. The Add Field command relies on a function of type R -> R that represents the inverse and such a function cannot be defined constructively.
The binary relation R is given. How can I construct the transitive and reflexive closure R* in LISP?
You didn't give much information, but typically, given a relation R, the transitive and reflexive closure of R is a relation R* defined as follows:
for all X, the relation R*(X,X) holds (this is the reflexive part);
for all X,Y such that R(X,Y), the relation R*(X,Y) also holds;
for all X,Y,Z such that R(X,Y) and R*(Y,Z), the relation R*(X,Z) holds.
Let's say that you are given a function R that behaves as follows:
(funcall R :x X :y Y) returns a non-nil value iff R(X,Y)
(funcall R :x X) returns all Y such that R(X,Y)
(funcall R :y Y) returns all X such that R(X,Y)
(funcall R) returns a list of (X . Y) pairs for which R(X,Y).
Then you can build a function that computes the transitive and reflexive closure; if you want to know if R*(X,Z) holds, you start from X and try all possible Y that satisfy R(X,Y) until either Y is equal to Z, or you can recursively determine that R*(Y,Z).
After you implement and test that, try to detect cycles too.
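The search just described can be sketched as follows (in Haskell rather than Lisp, purely to illustrate the algorithm; the relation is given as a list of pairs, and the seen list is the cycle detection just mentioned):

```haskell
-- Does R*(x, z) hold?  True if x == z (reflexivity), or if some y
-- with R(x, y) lets us continue recursively; the `seen` list stops
-- us from looping around cycles forever.
closureHolds :: Eq a => [(a, a)] -> a -> a -> Bool
closureHolds r x z = go [] x
  where
    go seen y
      | y == z        = True
      | y `elem` seen = False
      | otherwise     = any (go (y : seen)) [b | (a, b) <- r, a == y]
```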
Let A = {a, b, c, d, e, f, g, h, i} and R be a relation on A as follows:
R={(a,a), (f,c), (b,b), (c,f), (a,d), (c,c), (c,i), (d,a), (b,e), (i,c), (e,b), (d,d), (e,e), (f,f), (g,g), (h,h), (i,i),
(h,e), (a,g), (g,a), (d,g), (g,d), (b,h), (h,b), (e,h), (f,i), (i,f)}
I know it is the equivalence relation which is symmetric, transitive and reflexive but I am confused about equivalence classes? What are the equivalence classes?
How can I find the equivalence classes of the relation?
As you stated, an equivalence relation is a relation which is symmetric, reflexive, and transitive. The definition for those terms is as follows:
Symmetric:
Given a,b in A, if a = b then b = a.
Reflexive:
Given a in A, a = a.
Transitive:
Given a,b,c in A, if a = b and b = c, then a = c.
Using these definitions, we can see that the relation R in your question is indeed an equivalence relation on A. This is because for every a,b,c in A:
a = a, which is represented by (a,a) in R
if a = b, then b = a, represented by (b,a) and (a,b) both being in R
if a = b and b = c, then a = c, represented by (a,b), (b,c), and (a,c) in R.
You can check to make sure this is true, but I'm pretty sure it is. This is what makes R an equivalence relation. Once we have a definition for an equivalence relation, we can define an equivalence class as follows:
The set of all elements of a set which are equal under a given equivalence relation. In formal notation, {x in S | x ~ a}, where ~ is the equivalence relation.
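A small sketch of how the classes can be computed mechanically from R (an illustration only; the elements are plain Chars):

```haskell
import Data.List (nub, sort)

-- The relation R from the question, as a list of pairs.
r :: [(Char, Char)]
r = [ ('a','a'), ('f','c'), ('b','b'), ('c','f'), ('a','d'), ('c','c')
    , ('c','i'), ('d','a'), ('b','e'), ('i','c'), ('e','b'), ('d','d')
    , ('e','e'), ('f','f'), ('g','g'), ('h','h'), ('i','i'), ('h','e')
    , ('a','g'), ('g','a'), ('d','g'), ('g','d'), ('b','h'), ('h','b')
    , ('e','h'), ('f','i'), ('i','f') ]

-- The equivalence class of x: every y related to x (sorted, so that
-- equal classes compare equal as lists).
classOf :: Ord a => [(a, a)] -> a -> [a]
classOf rel x = sort (nub [y | (a, y) <- rel, a == x])

-- All distinct equivalence classes of the relation.
classes :: Ord a => [(a, a)] -> [[a]]
classes rel = nub [classOf rel x | (x, _) <- rel]

-- classes r yields the three classes {a,d,g}, {c,f,i}, {b,e,h}.
```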
How do I ensure that R(A,B,C,D,E) is in 3NF and BCNF with the following functional dependencies:
(A -> B, AB -> D, AC -> E)
If they are not in 3NF and BCNF, then how should I split?
I think that AC and ACB are keys. ABC are therefore prime attributes and DE are non-prime attributes.
Any help is appreciated!
A relation is in BCNF if one of the following conditions hold:
X → Y is a trivial functional dependency (Y ⊆ X)
X is a superkey for schema R
Wikipedia
In other words, if for every functional dependency X → Y the determinant X is a superkey, or Y is a subset of X, then the relation is in BCNF. Furthermore, if a relation is in BCNF it is also in 3NF.
To determine whether a determinant X is a superkey, compute the attribute closure of X and check that it contains every attribute of R. I'd recommend this Youtube video if you're not familiar with the technique.
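As an illustration of that test on your relation (single-letter attributes; a sketch only, not part of the original question):

```haskell
import Data.List (nub, sort, (\\))

-- The functional dependencies of R(A,B,C,D,E).
fds :: [(String, String)]
fds = [("A", "B"), ("AB", "D"), ("AC", "E")]

-- Attribute closure of a set of attributes under fds: repeatedly add
-- the right-hand side of every FD whose left-hand side is contained
-- in the current set.
closure :: String -> String
closure attrs
  | attrs' == attrs = attrs
  | otherwise       = closure attrs'
  where
    attrs' = sort . nub $
      attrs ++ concat [rhs | (lhs, rhs) <- fds, null (lhs \\ attrs)]

-- A determinant X is a superkey iff its closure covers every attribute.
isSuperkey :: String -> Bool
isSuperkey x = closure x == "ABCDE"

-- isSuperkey "AC" is True, so AC is a key; isSuperkey "A" and
-- isSuperkey "AB" are False, so A -> B and AB -> D violate BCNF.
```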