Given the two properties P1 = (R1 or R2) |-> P and P2 = (R1 |-> P) or (R2 |-> P), where R1 and R2 are sequences and P is a property, is it correct to say that P1 is equivalent to P2?
I did the calculations based on the definitions of tight and neutral satisfiability in Annex F of the LRM and they came up as being equivalent. (I don't want to exclude the possibility of me making a mistake somewhere.)
I ask, because I've seen the two being handled differently by simulation tools.
I did the math again today and the two are not equivalent. There are cases where the property-or form passes, but where the sequence-or form would fail.
A simple example of this would be the properties:
P1 = (1 or (1 ##1 1)) |-> 1
P2 = (1 |-> 1) or (1 ##1 1 |-> 1)
P2 is strongly satisfied by any trace one clock cycle long, aside from ⊥. P1 can never be satisfied by traces shorter than two clock cycles. (This comes out when plugging the satisfaction conditions for both forms into the definition of strong satisfaction.)
What this means in plain English is that both threads started in P1 (the one for the R1 part and the one for the R2 part) need to complete before an assertion of this property is deemed successful. For P2, though, only one of the properties is required to "mature", and at that point the other property's attempt will be discarded.
This seems a bit strange and unintuitive at first glance, but it stems from the formal semantics of SVAs. I guess, but I'm not sure, that P3 = first_match(R1 or R2) |-> P is equivalent to P2. One would need to do the math.
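As a quick sanity check on the one-cycle example above: first_match(1 or (1 ##1 1)) already matches at the first cycle and cancels the longer alternative, so first_match(1 or (1 ##1 1)) |-> 1 leaves no pending thread on a one-cycle trace and behaves like P2 there. Whether the equivalence holds on all traces would still need the Annex F calculation.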
Let's say I have two relations R1 and R2. If I need to solve a problem by induction over the term R1 A (R2 B C), I need to first do remember (R2 B C), otherwise I lose the information that the second argument to R1 was equal to R2 B C. Is there a variant of the induction tactic that doesn't require me to deal with this?
The answer is no.
The way induction works, it replaces every instance of each of the arguments with the values that appear in the same position in the inductive predicate's constructors. To keep that information from being lost, the remember tactic replaces the compound expression (R2 B C) by a variable, together with an equation recording what the variable stands for. You can sometimes avoid this if (R2 B C) appears in your goal, but this is rare.
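For concreteness, here is a minimal sketch of the pattern with a hypothetical two-constructor predicate ev (all the names here are illustrative, not taken from your development):

(* Hypothetical predicate, for illustration only. *)
Inductive ev : nat -> Prop :=
| ev_0 : ev 0
| ev_SS : forall n, ev n -> ev (S (S n)).

Goal forall n, ev (S (S n)) -> ev n.
Proof.
intros n E.
(* Inducting on E directly would replace the compound index (S (S n))
   by each constructor's index pattern and lose its link to n;
   remember turns it into a variable plus an explicit equation. *)
remember (S (S n)) as k eqn:Hk.
revert Hk. (* carry the equation through the induction *)
induction E as [| n' E' IH]; intros Hk.
- discriminate Hk. (* 0 = S (S n) is impossible *)
- injection Hk as Heq. subst n'. exact E'.
Qed.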
It seems to me that Bernstein's synthesis / 3NF synthesis always yields BCNF subrelations, but that's apparently not true.
When one uses 3NF synthesis, one will have subrelations as a result, and each of them will consist of one of the following:
just one functional dependency, whose attributes make up the whole schema of the subrelation, so the left side of that lone functional dependency will be a superkey, and the subrelation will therefore be in BCNF;
multiple functional dependencies that all share the same left side, so that common left side will be a superkey, and the subrelation will therefore be in BCNF;
no functional dependencies at all, with a schema made up of the attributes of the primary key of the original, non-decomposed relation, which would satisfy BCNF vacuously because there are no functional dependencies.
What is an example of the 3NF synthesis algorithm yielding a non-BCNF decomposition, and why is it so?
Bernstein's algorithm returns (one or more) components in EKNF, which lies between 3NF & BCNF.
Your claims of "that subrelation will therefore be in BCNF" are wrong. The FDs that hold in a component are all the ones in the closure of the original relation whose attributes are all in the component. So FDs whose determinants are not superkeys of the component can hold in it. (Which, by the definition of BCNF, is just another way of saying a component can fail to be in BCNF. Obviously--since we are told that the algorithm doesn't always give BCNF.)
Since your reasoning is unsound, finding a counterexample seems moot. But just about any presentation of BCNF gives an example non-BCNF 3NF relation, which it then decomposes to BCNF. You can join the non-BCNF 3NF relation with a projection on attributes of one of its CKs extended by a fresh non-prime attribute, and Bernstein's algorithm can decompose back to the 2 tables.
Chris Date's classic An Introduction to Database Systems has a non-BCNF 3NF schema R(S, J, T) with minimal/irreducible cover
{S, J} -> T
{T} -> J
CKs are {S, J} & {S, T}. Bernstein gives component (S, J, T)--non-BCNF 3NF input R--in which both given FDs hold--plus redundant component (T, J).
For an example with an additional non-redundant component, extend the cover by {T} -> X. CKs are the same. {S, J} -> T again gives (S, J, T)--non-BCNF--plus component (T, J, X).
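As a quick check with attribute closures: in (S, J, T), T+ = {T, J}, which omits S, so {T} is not a superkey, yet {T} -> J holds with J outside {T}; that is exactly a BCNF violation. 3NF still holds because J is prime (it belongs to the candidate key {S, J}).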
So, could someone please give me an example of the 3NF synthesis algorithm yielding a non-BCNF decomposition and tell why it is so?
A better "So, [...]" would be: So, what is wrong with my reasoning? You would do well to examine the assumptions you made about what FDs could hold in a component. (The Wikipedia article on BCNF happens to point out (with reference) that "A 3NF table that does not have multiple overlapping candidate keys is guaranteed to be in BCNF.")
There is no "why" in mathematics. We assume things ("assumptions", "axioms", "premises") & other things follow. We can ask for a proof of something, but the proof does not say "why" the something is so, it's a demonstration that it is so. "Why" might be used trying to ask for a proof or for steps that you got wrong in or are missing from whatever almost-proof you have in mind.
PS Such a ubiquitous non-BCNF 3NF relation is Today's Court Bookings in the Wikipedia article on BCNF as I write. But beware that that particular example has perhaps unintuitive FDs. Indeed beware that almost every relational model Wikipedia page--including that one--has errors & misconceptions. So do many, many textbooks, especially re normalization.
philipxy's answer is correct. Since you are asking for an example, here are a couple of them.
The relation (with a cover of the functional dependencies):
R (A B C D)
A B → C
C → D
D → B
is decomposed by the synthesis algorithm into:
R1 (A B C)
R2 (C D)
R3 (B D)
and R1 is not in BCNF because of the dependency C → B (the candidate keys of R1 are AB and AC, so C is not a superkey). Note that C → B is not present in the original cover, but is a dependency implied by it.
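To see that it is implied, compute the attribute closure of C: starting from {C}, C → D adds D, and D → B then adds B, so C+ = {C, D, B} and C → B holds. Projected onto the attributes of R1 it still holds, while C is not a superkey there.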
Here is another (classical) example:
Phones (AreaCode, PhoneNumber, Subscriber, Town, Street)
AreaCode, PhoneNumber → Town
AreaCode, PhoneNumber → Subscriber
AreaCode, PhoneNumber → Street
Town → AreaCode
Bernstein's synthesis algorithm produces two subschemas:
R1 (AreaCode, PhoneNumber, Subscriber, Town, Street)
AreaCode, PhoneNumber → Town
AreaCode, PhoneNumber → Subscriber
AreaCode, PhoneNumber → Street
and:
R2 (Town, AreaCode)
Town → AreaCode
Since R2 is included in R1, the algorithm eliminates the second relation. The resulting relation is in 3NF but not in BCNF: it has two candidate keys, (AreaCode, PhoneNumber) and (PhoneNumber, Town), and the functional dependency Town → AreaCode violates BCNF because Town is not a superkey.
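As a quick closure check: {PhoneNumber, Town}+ starts at {PhoneNumber, Town}, Town → AreaCode adds AreaCode, and the three AreaCode, PhoneNumber → … dependencies then add Subscriber and Street, so (PhoneNumber, Town) is indeed a candidate key; {Town}+ = {Town, AreaCode} falls short of the full schema, which is why Town → AreaCode is a BCNF violation.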
Consider a relation R with five attributes ABCDE. Now
assume that R is decomposed into two smaller relations ABC and CDE.
Define S to be the relation (ABC NaturalJoin CDE).
a) Assume that the above decomposition is lossless-join. What is the dependency that guarantees the lossless-join property?
b) Give an additional FD such that the “dependency preserving” property is violated by this decomposition.
c) Give two additional FD's that would be preserved by this
decomposition.
This question seems different to me because there are no FDs given, and it's asking:
a)
R1 = (A,B,C), R2 = (C,D,E), R1 ∩ R2 = C (how can I check the dependency now?)
F1' = {A->B,A->C,B->C,B->A,C->A,C->B,AB->C,AC->B,BC->A...}
F2' = {C->D,C->E,D->E....}
then I will find F'?
b, c) How do I check? Do I need to look at all possible FDs for R1 and R2?
The question is definitely assuming things it hasn't said clearly. ABCDE could be subject to the JD *{ABC,CDE} while not being subject to any nontrivial FDs at all.
But suppose that the relation is subject to some FDs and isn't subject to any JDs other than the ones they imply. If C is a CK then the join is lossless. But then C -> ABCDE holds, because a CK determines all attributes, and C -> ABDE holds, because a CK determines all other attributes. No other FD holding would imply that the join is lossless, although showing that requires tedium (checking every possible case of CK) or inspiration.
Both of these FDs guarantee losslessness, although if one of them holds then the other holds, so they express the same condition. So the question is sloppy. Or the question might consider that the two expressions express the same FD in the sense of a condition, but an FD is an expression and not a condition, so that would also be sloppy.
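To spell out that equivalence: C -> ABDE together with the trivial C -> C gives C -> ABCDE by the union rule, and C -> ABCDE gives C -> ABDE by decomposition, so by Armstrong's axioms each implies the other.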
I suspect that the questioner really just wanted you to give some FD whose holding would guarantee losslessness. That would get rid of the complications.
I am just wondering how the "less than" relation is defined for real numbers.
I understand that for natural numbers (nat), < can be defined recursively in terms of one number being the successor (S, i.e. 1+) of another number. I heard that many things about real numbers are axiomatic in Coq and do not compute.
But I am wondering whether there is a minimum set of axioms for real numbers in Coq based upon which other properties/relations can be derived. (e.g. Coq.Reals.RIneq has it that Rplus_0_r : forall r, r + 0 = r. is an axiom, among others)
In particular, I am interested in whether the relationships such as < or <= can be defined on top of the equality relationship. For example, I can imagine that in conventional math, given two numbers r1 r2:
r1 < r2 <=> exists s, s > 0 /\ r1 + s = r2.
But does this hold in the constructive logic of Coq? And can I use this to at least do some reasoning about inequalities (instead of rewriting axioms all the time)?
Coq.Reals.RIneq has it that Rplus_0_r : forall r, r + 0 = r. is an axiom, among others
Nitpick: Rplus_0_r is not an axiom but Rplus_0_l is. You can get a list of them in the module Coq.Reals.Raxioms and a list of the parameters used in Coq.Reals.Rdefinitions.
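For reference, the relevant declarations look roughly like this (a self-contained paraphrase of the standard library source, with the comparison notations unfolded):

(* Paraphrase of Coq.Reals.Rdefinitions: the carrier R and Rlt are
   postulated; the other comparisons are defined in terms of Rlt. *)
Parameter R : Set.
Parameter Rlt : R -> R -> Prop.
Definition Rgt (r1 r2 : R) : Prop := Rlt r2 r1.
Definition Rle (r1 r2 : R) : Prop := Rlt r1 r2 \/ r1 = r2.
Definition Rge (r1 r2 : R) : Prop := Rgt r1 r2 \/ r1 = r2.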
As you can see, "greater than (or equal)" and "less than or equal" are all defined in terms of "less than", which is postulated rather than introduced using the proposition you suggest.
It looks like Rlt could indeed be defined in the fashion you suggest: the two propositions are provably equivalent as shown below.
Require Import Reals.
Require Import Psatz. (* provides the lra linear-arithmetic tactic *)
Open Scope R_scope.
Goal forall (r1 r2 : R), r1 < r2 <-> exists s, s > 0 /\ r1 + s = r2.
Proof.
intros r1 r2; split.
(* ->: the difference r2 - r1 is a positive witness *)
- intros H; exists (r2 - r1); split; [lra | ring].
(* <-: lra derives r1 < r2 from s > 0 and r1 + s = r2 *)
- intros [s [s_pos eq]]; lra.
Qed.
However, you would still need to define what it means to be "strictly positive" for the s > 0 part to make sense, and it's not at all clear that you'd have fewer axioms in the end (e.g. the notion of being strictly positive should be closed under addition, multiplication, etc.).
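To make that concrete, here is a hypothetical sketch of such an alternative axiomatization (Rpos and Rlt' are made-up names, not part of the standard library):

Require Import Reals.
Open Scope R_scope.
(* Hypothetical "strictly positive" predicate, postulated here. *)
Parameter Rpos : R -> Prop.
Axiom Rpos_plus : forall x y, Rpos x -> Rpos y -> Rpos (x + y).
Axiom Rpos_mult : forall x y, Rpos x -> Rpos y -> Rpos (x * y).
Definition Rlt' (r1 r2 : R) : Prop := exists s, Rpos s /\ r1 + s = r2.

The closure axioms Rpos_plus and Rpos_mult illustrate the point: the axioms saved on < tend to reappear as axioms about positivity.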
Indeed, the Coq.Reals library is a bit weak in the sense that it is totally specified as axioms, and at some (brief) points in the past it was even inconsistent.
So the definition of lt is a bit "ad hoc" in the sense that, from the point of view of the system, it carries zero computational meaning, being just a constant and a few axioms. You could well add the axiom "x < x" and Coq could do nothing to detect it.
It is worth pointing to some alternative constructions of the Reals for Coq:
My favourite classical construction is the one done in the Four Color Theorem development by Georges Gonthier and B. Werner: http://research.microsoft.com/en-us/downloads/5464e7b1-bd58-4f7c-bfe1-5d3b32d42e6d/
It only uses the excluded middle axiom (mainly to compare real numbers) so the confidence in its consistency is very high.
The best known axiom-free characterization of the reals is the C-CoRN project, http://corn.cs.ru.nl/, but be aware that constructive analysis significantly differs from the usual one.
∀ a b ∈ ℕ, b ≠ 0 → ∃ ! q r ∈ ℕ, a = q × b + r ∧ r < b is a standard example of the use of dependent types. How do I extend this type so that it also expresses time and space complexity requirements?
Nils Anders Danielsson uses a monad in Agda to track time complexity: sub-computations which are "relevant" to the complexity being studied are explicitly marked as such by making each of them take "one tick of time". These sub-computations are then combined monadically, with the total number of ticks tracked in the index of the monad type.
The details are described in his paper Lightweight Semiformal Time Complexity Analysis for Purely Functional Data Structures.
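Danielsson's development is in Agda; as a rough transcription of the idea into Coq (all names here are illustrative, not his), one can index the monad's type by the number of ticks and let bind add the indices:

(* Illustrative sketch; the index counts the ticks a computation takes. *)
Inductive Thunk (A : Type) : nat -> Type :=
| ret : A -> Thunk A 0
| tick : forall n, Thunk A n -> Thunk A (S n).
Arguments ret {A} _.
Arguments tick {A n} _.

(* bind sums the ticks of the two sub-computations in the index. *)
Fixpoint bind {A B : Type} {m n : nat}
  (x : Thunk A m) (f : A -> Thunk B n) : Thunk B (m + n) :=
  match x in Thunk _ m' return Thunk B (m' + n) with
  | @ret _ a => f a
  | @tick _ n' x' => tick (bind x' f)
  end.

(* The type records that this computation takes exactly two ticks. *)
Check ((bind (tick (ret 1)) (fun x => tick (ret (x + 1)))) : Thunk nat 2).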