Is this set in third normal form (3NF)? - database-normalization

The functional dependencies are:
BCD -> A (I stated this is a superkey since BCD+ = ABCDEFG)
BC -> E (also a superkey since BC+ = ABCDEFG)
A -> F (A is neither a superkey nor prime)
F -> G
C -> D (C is neither a superkey nor prime)
A -> G (A is neither a superkey nor prime)
My steps:
1: (A,F), (ABCDEG)
2: (A,F), (C,D), (ABCEG)
3: (A,F), (C,D), (A,G), (ABCE) (so is this one in 3NF?)
I am only trying to set it to 3nf and not go any further.

When you decompose, you should first convert your dependencies into a minimal set of functional dependencies (a canonical cover).
As you have stated, BCD and BC are both superkeys, so the D in BCD -> A is not really needed. Further, among A -> F, F -> G, and A -> G, you can remove A -> G, as it is implied by the other two (you cannot remove F -> G, or you would lose that dependency).
So the minimal set becomes:
(BC->AE), (A->F), (F->G), (C->D)
Now you can decompose into four relations: (ABCE), (AF), (FG), (CD).
This decomposition is in 3NF.
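To verify superkey claims such as BC+ = ABCDEFG mechanically, you can compute attribute closures. Here is a minimal Haskell sketch (the type and function names are my own, not from any library):
import Data.List (nub, sort)
-- A functional dependency: determinant -> dependent attributes.
type FD = (String, String)
fds :: [FD]
fds = [("BCD","A"), ("BC","E"), ("A","F"), ("F","G"), ("C","D"), ("A","G")]
-- Closure X+ of an attribute set X: repeatedly add the right-hand sides
-- of all FDs whose left-hand sides are already contained in the set.
closure :: [FD] -> String -> String
closure deps x = go (sort (nub x))
  where
    go acc
      | acc' == acc = acc
      | otherwise   = go acc'
      where
        acc' = sort . nub $ acc ++ concat [r | (l, r) <- deps, all (`elem` acc) l]
-- closure fds "BC" evaluates to "ABCDEFG", so BC (and a fortiori BCD) is a superkey.
Since BC+ already contains every attribute, D is extraneous in BCD -> A, which is exactly why the minimal cover above drops it.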


What is the advantage of Option/Maybe Monad over Functor?

I understand the advantage of the IO Monad and List Monad over Functor; however, I don't understand the advantage of the Option/Maybe Monad over Functor.
Is that simply about integration with the language's types?
or
What is the advantage of the Option/Maybe Monad over Functor in specific use?
PS. Asking about the advantage in a specific use is not opinion-based, because if there is one, it can be pointed out without any subjective aspect.
PS.PS. Some members here are eager to push repeatedly that
Option is both a functor and a monad?
should be the answer, or that this Q&A is a duplicate, but actually it is not.
I already know the basics, such as
Every monad is an applicative functor and every applicative functor is a functor
as the accepted answer there says, and that is not what I'm asking here.
There is an excellent answer here that is not included in the previous Q&A.
The aspect, detail, and resolution of each question are quite different, so please avoid "bundling" different things in a rough manner here.
Let's look at the types.
fmap :: Functor f => (a -> b) -> (f a -> f b)
(<*>) :: Applicative f => f (a -> b) -> (f a -> f b)
flip (>>=) :: Monad f => (a -> f b) -> (f a -> f b)
For a functor, we can apply an ordinary function a -> b to a value of type f a to get a value of type f b. The function never gets a say in what happens to the f part, only the inside. Thinking of functors as sort of box-like things, the function in fmap never sees the box itself, just the inside, and the value gets taken out and put back into the exact same box it started in.
An applicative functor is slightly more powerful. Now, we have a function which is also in a box. So the function gets a say in the f part, in a limited sense. The function and the input each have f parts (which are independent of each other) which get combined into the result.
A monad is even more powerful. Now, the function does not, a priori, have an f part. The function takes a value and produces a new box. Whereas in the Applicative case, our function's box and our value's box were independent, in the Monad case, the function's box can depend on the input value.
Now, what's all this mean? You've asked me to focus on Maybe, so let's talk about Maybe in the concrete.
fmap :: (a -> b) -> (Maybe a -> Maybe b)
(<*>) :: Maybe (a -> b) -> (Maybe a -> Maybe b)
flip (>>=) :: (a -> Maybe b) -> (Maybe a -> Maybe b)
As a reminder, Maybe looks like this.
data Maybe a = Nothing | Just a
A Maybe a is a value which may or may not exist. From a functor perspective, we'll generally think of Nothing as some form of failure and Just a as a successful result of type a.
Starting with fmap, the Functor instance for Maybe allows us to apply a function to the inside of the Maybe, if one exists. The function gets no say over the success or failure of the operation: A failed Maybe (i.e. a Nothing) must remain failed, and a successful one must remain successful (obviously, we're glossing over undefined and other denotational semantic issues here; I'm assuming that the only way a function can fail is with Nothing).
Now (<*>), the applicative operator, takes a Maybe (a -> b) and a Maybe a. Either of those two might have failed. If either of them did, then the result is Nothing, and only if the two both succeeded do we get a Just as our result. This allows us to curry operations. Concretely, if I have a function of the form g :: a -> b -> c and I have values ma :: Maybe a and mb :: Maybe b, then we might want to apply g to ma and mb. But when we start to do that, we have a problem.
fmap g ma :: Maybe (b -> c)
Now we've got a function that may or may not exist. We can't fmap that over mb, because a nonexistent function (a Nothing) can't be an argument to fmap. The problem is that we have two independent Maybe values (ma and mb in our example) which are fighting, in some sense, for control. The result should only exist if both are Just. Otherwise, the result should be Nothing. It's sort of a Boolean "and" operation, in that if any of the intermediates fail, then the whole calculation fails. (Note: If you're looking for a Boolean "or", where any individual success can recover from prior failure, then you're looking for Alternative)
So we write
(fmap g ma) <*> mb :: Maybe c
or, using the more convenient synonym Haskell provides for this purpose,
g <$> ma <*> mb :: Maybe c
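Concretely, taking g = (+), these evaluate as follows:
(+) <$> Just 1  <*> Just 2    -- Just 3
(+) <$> Nothing <*> Just 2    -- Nothing: one failed operand fails the whole thing
(+) <$> Just 1  <*> Nothing   -- Nothing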
Now, the key word in the above situation is independent. ma and mb have no say over the other's success or failure. This is good in many cases, because code like this can often be parallelized (there are very efficient command line argument parsing libraries that exploit just this property of Applicative). But, obviously, it's not always what we want.
Enter Monad. In the Maybe monad, the provided function produces a value of type Maybe b based on the input a. The Maybe part of the a and of the b are no longer independent: the latter can depend directly on the former.
For example, take the classic example of Maybe: a square root function. We can't take a square root of a negative number (let's assume we're not working with complex numbers here), so our hypothetical square root looks like
sqrt :: Double -> Maybe Double
sqrt x | x < 0     = Nothing
       | otherwise = Just (Prelude.sqrt x)
Now, suppose we've got some number r. But r isn't just a number. It came from earlier in our computation, and our computation might have failed. Maybe it did a square root earlier, or tried to divide by zero, or something else entirely, but it did something that has some chance of producing a Nothing. So r is Maybe Double, and we want to take its square root.
Obviously, if r is already Nothing, then its square root is Nothing; we can't possibly take a square root if we've already failed to compute everything else. On the other hand, if r is a negative number, then sqrt is going to fail and produce Nothing despite the fact that r is itself Just. So what we really want is
case r of
  Nothing -> Nothing
  Just r' -> sqrt r'
And this is exactly what the Monad instance for Maybe does. That code is equivalent to
r >>= sqrt
The result of this entire computation (namely, whether it is Nothing or Just) depends not just on whether r is Nothing but also on r's actual value. Two different Just values of r can produce success or failure depending on what sqrt does. We can't do that with just a Functor; we can't even do that with Applicative. It takes a Monad.
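To see that value-dependence concretely, using the sqrt defined above:
Just 4.0    >>= sqrt   -- Just 2.0: success, the value was acceptable
Just (-4.0) >>= sqrt   -- Nothing: a Just can still fail, based on its value
Nothing     >>= sqrt   -- Nothing: earlier failure simply propagates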

How is modus ponens used in this proof sequence?

Here is my problem:
Why does this work specifically for step 7, from p and p implies r? I can understand how to show r implies not q. Can someone also tell me what modus ponens means and how it's used in this context?
Here are the relevant claims from the image:
1. p (given)
2. p -> r (given)
(steps 3-6 snipped)
7. r (modus ponens, from 1 and 2)
The Wikipedia article on Modus Ponens explains this pretty well. Quoting with some parts removed/changed:
The argument form has two premises (hypotheses). The first premise is that P, the antecedent of the conditional claim, is true. The second premise is the "if–then" or conditional claim, namely that P implies R. From these two premises it can be logically concluded that R, the consequent of the conditional claim, must be true as well.
An example of an argument that fits the form modus ponens:
Today is Tuesday. (P)
If today is Tuesday, then John will go to work. (P -> R)
Therefore, John will go to work. (R)
Modus ponens is a rule of inference used to generate new logical statements from valid premises.
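As a side note, under the Curry–Howard correspondence modus ponens is literally function application. A minimal Haskell sketch (the name modusPonens is my own):
-- From a proof of p and a proof of "p implies r", we obtain a proof of r.
modusPonens :: p -> (p -> r) -> r
modusPonens premise implication = implication premise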

If an attribute is dependent on a composite of two attributes, then is this a functional dependency?

Transitive functional dependency is defined as:
If A → B and B → C then A → C (Reference: This Tutorial!)
If an attribute is dependent on a composite of two attributes ( i.e. A,B -> C), then is this a functional dependency?
Can we consider this type of dependence to be (or not) a transitive dependency?
If an attribute is dependent on a composite of two attributes (A,B->C) then this is a "functional dependency".
A transitive dependency occurs when you have a non-key attribute that is placed in a "child" relation when it properly belongs in the "parent" relation. In your case, A -> C is a transitive dependency.
There is a pretty clear practical example of a transitive dependency on Wikipedia.
It should be noted that there is a difference between A->B, B->C and A,B->C. These are not equivalent dependencies.
TL;DR A "dependency" written with "→" is an FD (functional dependency). A "transitive dependency" written with "→" is a transitive FD. That is what "dependency" is short for when written with "→".
A FD has one set determining another. Either set can have any number of attributes. (If a set has just one attribute we also say that that attribute determines or is determined.)
Transitive functional dependency is defined as:
If A → B and B → C then A → C (Reference: This Tutorial!)
That is unclear & wrong. So is your reference. A → C transitively when there exists S where A → S & S → C & not (S → A). So if A → B & B → C & not (B → A) then A → C transitively, but if A → B & B → C but B → A then A → C is not transitive via B, although it might be transitive via some other attribute set.
If an attribute is dependent on a composite of two attributes ( i.e. A,B -> C), then is this a functional dependency?
That means "If an attribute is the dependent attribute of a FD whose determinant is a composite of two attributes ( i.e. {A,B} -> C), then is this a FD?" You assumed there was a FD, so there's a FD. The answer is (trivially) "yes".
Maybe you mean something else that you have not written clearly?
Maybe you mean, can a FD have a set of attributes as determinant? Yes.
Can we consider this type of dependence to be (or not) a transitive dependency?
Look at the definition. {A,B} → C transitively when there exists S where {A,B} → S & S → C & not (S → {A,B}). But no such S involving A, B and/or C exists. So just knowing you have a "composite" determinant tells you nothing about whether a FD is transitive. So no we cannot "consider it to be" transitive. It might or might not be transitive in a particular relation.
Maybe you mean, if A → C & B → C then does {A,B} → C? Yes. Adding a determinant attribute to a FD that holds gives another FD that holds. Since A and B each determine C, any set containing A or B determines C. Does {A,B} → C transitively? Again no such S involving A, B and/or C exists. So this has nothing to do with transitivity.
Maybe you mean, if A → B & B → C then does {A,B} → C? Yes. As above, adding a determinant attribute to a FD that holds gives another FD that holds. Since B determines C, any set containing B determines C. Does {A,B} → C hold transitively? The only candidate S involving A, B and/or C is {B}, and it witnesses transitivity only if not (B → A). So we can't "consider" {A,B} → C to be transitive here either; it depends on which other FDs hold.
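These case distinctions can be checked mechanically. Below is a small Haskell sketch (helper names are my own; it skips trivial witnesses that already contain the dependent attributes, an assumption matching the intent of the definition above):
import Data.List (nub, sort, subsequences)
-- A functional dependency: determinant -> dependent attributes.
type FD = (String, String)
-- Attribute closure: the usual fixpoint computation.
closure :: [FD] -> String -> String
closure deps x = go (sort (nub x))
  where
    go acc
      | acc' == acc = acc
      | otherwise   = go acc'
      where
        acc' = sort . nub $ acc ++ concat [r | (l, r) <- deps, all (`elem` acc) l]
-- x -> y holds iff every attribute of y is in the closure of x.
holds :: [FD] -> String -> String -> Bool
holds deps x y = all (`elem` closure deps x) y
-- x -> y holds transitively iff some s satisfies x -> s, s -> y and
-- not (s -> x), skipping trivial witnesses that already contain all of y.
transitive :: String -> [FD] -> String -> String -> Bool
transitive attrs deps x y = any witness (filter (not . null) (subsequences attrs))
  where
    witness s = not (all (`elem` s) y)
             && holds deps x s && holds deps s y && not (holds deps s x)
-- transitive "ABC" [("A","B"), ("B","C")]            "AB" "C"  ==> True  (S = {B})
-- transitive "ABC" [("A","B"), ("B","C"), ("B","A")] "AB" "C"  ==> False (here B -> A)
The two evaluations mirror the case analysis above: with only A → B and B → C, the witness S = {B} makes {A,B} → C transitive; once B → A also holds, no witness survives.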

Multiple flatMap methods for a single monad?

Does it make sense to define multiple flatMap (or >>= / bind in Haskell) methods in a Monad?
The very few monads I actually use (Option, Try, Either projections) only define one flatMap method.
For example, could it make sense to define a flatMap method on Option which would take a function producing a Try? So that Option[Try[User]] would be flattened to Option[User], for example? (Considering losing the exception is not a problem...)
Or should a monad just define one flatMap method, taking a function which produces the same kind of monad? I guess in that case the Either projections wouldn't be monads? Are they?
I once thought seriously about this. As it turns out, such a construct (aside from losing all monadic capabilities) is not really interesting, since it is sufficient to provide a conversion from the inner to the outer container:
import Control.Monad (join)
-- Flatten m (n a) to m a, given a conversion from the inner container n
-- to the outer monad m.
joinWith :: (Functor m, Monad m) => (n a -> m a) -> m (n a) -> m a
joinWith i = join . fmap i
-- A bind-like operator whose function produces the foreign container n.
bindWith :: (Functor m, Monad m) => (n a -> m a) -> m a -> (a -> n a) -> m a
bindWith i x f = joinWith i $ fmap f x
*Main> let maybeToList = (\x -> case x of Nothing -> []; (Just y) -> [y])
*Main> bindWith maybeToList [1..9] (\x -> if even x then Just x else Nothing)
[2,4,6,8]
It depends what "make sense" means.
If you mean is it consistent with the monad laws, then it's not exactly clear to me the question entirely makes sense. I'd have to see a concrete proposal to tell. If you do it the way I think you suggest, you'll probably end up violating composition at least in some corner cases.
If you mean is it useful, sure, you can always find cases where such things are useful. The problem is that if you start violating monad laws, you have left traps in your code for the unwary functional (category theory) reasoner. Better to make things that look kind of like monads actually be monads (and just one at a time, though you can provide an explicit way to switch a la Either--but you're right that as written LeftProjection and RightProjection are not, strictly speaking, monads). Or write really clear docs explaining that it isn't what it looks like. Otherwise someone will merrily go along assuming the laws hold, and *splat*.
flatMap or (>>=) doesn't typecheck for your Option[Try[_]] example. In pseudo-Haskell notation:
type OptionTry x = Option (Try x)
instance Monad OptionTry where
  (>>=) :: OptionTry a -> (a -> OptionTry b) -> OptionTry b
  ...
We need bind/flatMap to return a value wrapped in the same context as the input value.
We can also see this by looking at the equivalent return/join formulation of a Monad. For OptionTry, join would have the specialized type:
instance Monad OptionTry where
  join :: OptionTry (OptionTry a) -> OptionTry a
  ...
It should be clear with a bit of squinting that the "flat" part of flatMap is join (or concat for lists, which is where the name comes from).
Now, it's possible for a single datatype to have multiple different binds. Mathematically, a Monad is actually the data type (or, really, the set of values that the monad consists of) along with the particular bind and return operations. Different operations lead to different (mathematical) Monads.
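To make that concrete, here is a sketch (with hypothetical names) of two distinct, lawful monads over the same underlying data (Int, a): each monoid structure on Int yields its own writer-style bind and return, so the operations are genuinely part of what "the monad" is:
newtype SumWriter a = SumWriter (Int, a) deriving Show
instance Functor SumWriter where
  fmap f (SumWriter (w, a)) = SumWriter (w, f a)
instance Applicative SumWriter where
  pure a = SumWriter (0, a)                            -- unit of (+)
  SumWriter (w1, f) <*> SumWriter (w2, a) = SumWriter (w1 + w2, f a)
instance Monad SumWriter where
  SumWriter (w, a) >>= f = let SumWriter (w', b) = f a
                           in SumWriter (w + w', b)    -- accumulate by (+)
newtype ProdWriter a = ProdWriter (Int, a) deriving Show
instance Functor ProdWriter where
  fmap f (ProdWriter (w, a)) = ProdWriter (w, f a)
instance Applicative ProdWriter where
  pure a = ProdWriter (1, a)                           -- unit of (*)
  ProdWriter (w1, f) <*> ProdWriter (w2, a) = ProdWriter (w1 * w2, f a)
instance Monad ProdWriter where
  ProdWriter (w, a) >>= f = let ProdWriter (w', b) = f a
                            in ProdWriter (w * w', b)  -- accumulate by (*)
Both satisfy the monad laws, yet they are different monads over the same pairs, which is only expressible in Haskell because the two carriers are distinct newtypes.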
It doesn't make sense: for a specific data type, as far as I know, you can only have one definition of bind.
In Haskell, a monad is the following type class:
class Monad m where
  return :: a -> m a
  (>>=)  :: m a -> (a -> m b) -> m b
Concretely, for the list monad we have:
instance Monad [] where
  return :: a -> [] a
  (>>=)  :: [] a -> (a -> [] b) -> [] b
Now let's consider a monadic function such as:
actOnList :: a -> [] b
....
A use case to illustrate:
[1,2,3] >>= actOnList
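For instance, with a hypothetical instantiation:
actOnList :: Int -> [Int]
actOnList x = [x, x * 10]    -- illustrative definition, not from the question
-- [1,2,3] >>= actOnList  ==> [1,10,2,20,3,30]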
From the type of actOnList we see that a list is a polymorphic type, built by applying the type constructor (here []) to another type. So when we speak about the bind operator for the list monad, we speak about the bind operator defined by [] a -> (a -> [] b) -> [] b.
What you want to achieve is a bind operator typed like [] a -> (a -> Maybe b) -> [] b. This is not a specialized version of the first one but another function, and regarding its type signature I really doubt that it can be the bind operator of any kind of monad, as you do not return what you have consumed. You can surely go from one monad to another using such a function, but that function is definitely not another version of list's bind operator.
Which is why I've said: it doesn't make sense; for a specific data type, as far as I know, you can only have one definition of bind.

Functional dependencies and Normalization

Consider a relation R = {P, Q, R, S, T} and the set of functional dependencies F = {P -> Q, {Q, R} -> S, S -> {Q, R}, {S, T} -> phi}. Are there any redundant functional dependencies in F? If so, remove them and decompose the relation R into 3NF relations.
Please answer this.
{S,T} -> phi is trivial (phi denotes the empty set of attributes), and hence redundant. Moreover, there are no extraneous attributes, so you have your canonical cover here.
To decompose to 3NF you should:
1) create a table for each dependency in the canonical cover
2) identify a candidate key
3) if no candidate key is included in any of the tables so far, add one as an additional table
4) remove tables whose attributes are all included in another table
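Applying these steps here (a sketch worth verifying yourself): step 1 on the canonical cover {P -> Q, {Q,R} -> S, S -> {Q,R}} gives the tables (P,Q), (Q,R,S) and (S,Q,R); the last two contain the same attributes, so one is dropped by step 4. P and T appear on no right-hand side, so they belong to every key, and since {P,R,T}+ = {P,Q,R,S,T}, {P,R,T} is a candidate key ({P,S,T} is the other). No table so far contains a candidate key, so step 3 adds (P,R,T), giving the 3NF decomposition (P,Q), (Q,R,S), (P,R,T).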