FLP Impossibility Result assumption of C_1 = e'(C_0)

FLP85's famous proof of the impossibility of solving consensus in an asynchronous distributed system (even with only a single failure) assumes, in the proof of the third lemma, the existence of an event e' such that the neighbor configurations C0 and C1 can be related as C1 = e'(C0).
I don't get how this is possible, as this seems to me like e' carries out a state transition from a 0-valent configuration to a 1-valent configuration. Further, the proof of case 1 of lemma 3 clearly states that any successor of any 0-valent configuration has to be a 0-valent configuration. What am I missing here?
The answers to this question do not answer the above question. That other question relates to the proof of the existence of C0 and C1, not to that of e'.

Actually, C0 is not 0-valent, and C1 need not be 1-valent either; the subscript i names the valency of the successor D_i = e(C_i), not the valency of C_i itself. The choice of the two symbols is quite confusing.
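For reference, here is my paraphrase of the setup of Lemma 3 (from memory of the paper, so treat it as a sketch rather than a quote). C is bivalent, e is an event applicable to C, and among the configurations reachable from C without applying e there exist neighboring configurations C0 and C1 such that

C1 = e'(C0),  D0 = e(C0) is 0-valent,  D1 = e(C1) is 1-valent.

The index i on C_i records the valency of D_i = e(C_i); it says nothing about the valency of C_i itself, so e' is not a step from a 0-valent configuration to a 1-valent one.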

Is it possible to prove 'implies_to_or -> de_morgan_not_and_not' without resorting to other classical laws?

Theorem implies_to_or_to_de_morgan_not_and_not :
implies_to_or -> de_morgan_not_and_not.
Proof.
unfold implies_to_or, de_morgan_not_and_not, not.
intros.
Admitted.
1 subgoal
H : forall P Q : Prop, (P -> Q) -> (P -> False) \/ Q
P, Q : Prop
H0 : (P -> False) /\ (Q -> False) -> False
______________________________________(1/1)
P \/ Q
This is from the 5-star exercise near the end of the SF Logic chapter.
I've bashed my head against this particular problem for too many hours now, so I really have to ask at this point. I've already proven excluded_middle <-> peirce, peirce <-> double_negation_elimination, double_negation_elimination <-> de_morgan_not_and_not, implies_to_or <-> excluded_middle, de_morgan_not_and_not -> implies_to_or so I already have more than all the paths covered. To me that only makes this problem that much more confusing and I do not understand why I can't even get this proof off the ground.
Somehow there just is not that much to work with here.
One option would be to do exfalso and try to do something from there, but that would throw away the P \/ Q goal and I suspect that would be too much of a loss of information even if I could make some kind of headway.
Another option would be to try and destruct H, but in that case there is a problem of trying to prove P -> Q without anything usable being in the premise.
I've had trouble with exercises in the past week and managed to surmount them with effort, but I am not experienced enough to just let this thing lie without asking for advice. What exactly am I not seeing here?
Obviously, I do not want to convert de_morgan_not_and_not to some other, easier-to-solve classical law (like the excluded middle), as that would be beside the point.
Since the Software Foundations book explicitly asks not to publish solutions, let me give a hint.
Notice that the hypothesis H is universally quantified wrt both propositions it talks about.
This means you can supply any propositions for P and Q, even same ones. Basically, this observation lets you reason classically, which is enough to solve this problem.
Is there a particular reason you don't want to use your other proofs to prove this? It's fairly artificial to avoid using a result that you know is unconditionally true.
You can avoid manipulating de_morgan_not_and_not by using implies_to_or to perform case analysis on P and Q (refer to your proof of implies_to_or -> excluded_middle). Then you have four cases to look at, and all four resulting goals are simple 1-3 line proofs.
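To make the shape of that case analysis concrete without spoiling the exercise (the book asks not to publish solutions), here is a sketch of just the opening moves; since the identity function inhabits P -> P, the universally quantified H can be specialized to the same proposition twice:

destruct (H P P (fun p => p)) as [np | p];
destruct (H Q Q (fun q => q)) as [nq | q].
(* Four goals remain, one per combination of np/p with nq/q;
   each follows in 1-3 lines from H0, left, or right. *)

The semicolon applies the second destruct to both goals produced by the first, which yields exactly the four cases mentioned above.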

Decomposition into ABC & CDE and preserving functional dependencies

Consider a relation R with five attributes ABCDE. Now
assume that R is decomposed into two smaller relations ABC and CDE.
Define S to be the relation (ABC NaturalJoin CDE).
a) Assume that the above decomposition is lossless-join. What is the dependency that guarantees the lossless-join property?
b) Give an additional FD such that the “dependency preserving” property is violated by this decomposition.
c) Give two additional FDs that would be preserved by this decomposition.
This question seems different to me because no FDs are given, and it is asking:
a)
R1 = (A,B,C), R2 = (C,D,E), R1∩R2 = C (how can I check the dependency now?)
F1' = {A->B, A->C, B->C, B->A, C->A, C->B, AB->C, AC->B, BC->A, ...}
F2' = {C->D, C->E, D->E, ...}
Then I will find F'?
b, c) How do I check? Do I need to look at all possible FDs for R1 and R2?
The question is definitely assuming things it hasn't said clearly. ABCDE could be subject to the JD *{ABC,CDE} while not being subject to any nontrivial FDs at all.
But suppose that the relation is subject to some FDs and isn't subject to any JDs other than the ones they imply. If C is a CK then the join is lossless. But then C -> ABCDE holds, because a CK determines all attributes, and C -> ABDE holds, because a CK determines all other attributes. No other FD's holding would imply that the join is lossless, although showing that requires tedium (checking every possible case of CK) or inspiration.
Both of these FDs guarantee losslessness, although if one of them holds then the other holds, so they express the same condition. So the question is sloppy. Or the question might consider the two expressions to express the same FD in the sense of a condition; but an FD is an expression, not a condition, so that would also be sloppy.
I suspect that the questioner really just wanted you to give some FD whose holding would guarantee losslessness. That would get rid of the complications.
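For reference, the standard textbook test for a binary decomposition (a sketch, assuming the usual FD-based criterion): a decomposition of R into R1 and R2 is lossless with respect to a set of FDs F, in the sense that the FDs alone guarantee it, if and only if

(R1 ∩ R2) -> (R1 - R2) or (R1 ∩ R2) -> (R2 - R1) is in F+.

Here R1 ∩ R2 = C, so C -> AB or C -> DE guarantees losslessness, which matches the "C determines a whole component" condition discussed above. For parts (b) and (c), illustrative examples (my own, not from the question): an FD such as A -> D straddles both schemas, so it cannot be checked within ABC or CDE alone, and adding it by itself would violate dependency preservation; FDs such as A -> B and C -> D each lie entirely within one component and are therefore preserved.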

Boolean Algebra Proof

I am teaching myself Boolean Algebra.
I was hoping someone could correct the following if I'm wrong.
Question:
Using Boolean Algebra prove that A(A+B)=A.
A(A+B) would mean A and (A or B).
My Answer:
A(A+B) = A(A(1+B)) = A(A1) = AA = A.
Distribute A first, as such:
A(A+B)=A
AA+AB=A
A+AB=A
A(1+B)=A
A(1)=A
A=A
You seemed to skip a couple of steps within your first step: you essentially stated A+B = A(1+B), which is not always correct.
Let me introduce you to propositional logic. We use the symbols ∧, ∨, and ≡ to denote and, or, and logical equivalence respectively.
Your identity, rewritten in this notation, is A ∧ (A ∨ B) ≡ A.
To complete the proof, 3 laws are applied. At line 2 the distributive law is applied for reduction, the idempotent law is applied at line 3, and the absorption law is applied at line 4:
1. A ∧ (A ∨ B)
2. ≡ (A ∧ A) ∨ (A ∧ B)
3. ≡ A ∨ (A ∧ B)
4. ≡ A
And this completes the proof.
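As a quick sanity check on the identity itself (separate from the algebraic proof), a truth table confirms A(A+B) = A:

A B | A+B | A(A+B)
0 0 |  0  |   0
0 1 |  1  |   0
1 0 |  1  |   1
1 1 |  1  |   1

The A(A+B) column is identical to the A column in every row.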

In Answer Set Programming, what is the difference between a model and a least model?

I'm taking an artificial intelligence class and we are working with Answer Set Programming (Clingo specifically). We're talking mostly theory at the moment and I am having some trouble differentiating between models and least models. I have the following definitions:
Satisfying rules, models, least models and answer sets of definite
program
A program is called definite if it does not have “not” in the body of its rules.
A set S is said to satisfy a rule of the form a :- b1, ..., bm, not c1, ..., not cn. if the fact that its body is satisfied by S (i.e., b1, ..., bm are in S and none of c1, ..., cn are in S) implies that its head is satisfied by S (i.e., a is in S).
A set S is said to satisfy a program if it satisfies all rules of that program.
A set S is said to be an answer set of a definite program P if (a) S satisfies P (also referred to as: S is a model of P), and (b) no strict subset of S satisfies P (i.e., S is the least model of P).
With the question (pulled from the lecture slides, not homework):
P is defined as:
a :- b,e.
b.
c :- d,b.
d.
Which of the following are models and least models?
{}, {b}, {b,d}, {b,d,c}, {b,d,c,e}, {b,d,c,e,a}
Can anyone let me know what the answer to the above question is? I can probably figure out the difference from there, although if someone could explain the difference in common speak (rather than text-book definition), that would be wonderful. I'm not sure which forum to post this question under - please let me know if it should be posted somewhere else.
Thanks
First of all, note that this section of your slides is talking about the answer sets of a positive program P (also called a definite program), even though it also mentions not. The positive program is the simple case, as for a positive program P there always exists a unique least model LM(P), which is the intersection of all its models.
Allowing not in the rule bodies makes things more complex. The body of a rule is the right-hand side of :-.
The answer to the question would be, set by set:
S={} is not a model, since the facts b. and d. require b and d to be in S
S={b} is not a model, since the fact d. requires d to be in S
S={b,d} is not a model, since c is implied by c :- d,b. and c is not in S
S={b,d,c} is a model
S={b,d,c,e} is not a model, since a is implied by a :- b,e. and a is not in S
S={b,d,c,e,a} is a model
So what is the least model? It's S={b,c,d} since no strict subset of S satisfies P.
We can arrive at the least model of a positive program P in two ways (the second is illustrated step by step below):
Enumerate all models and take their intersection (here {b,c,d} ∩ {a,b,c,d,e} = {b,c,d}).
Start with the facts (here b. and d.) and iteratively add implied atoms (here c via c :- d,b.) to S, repeating until S is a model and stopping at that point.
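For this program the iteration looks like this (a step-by-step sketch of the usual immediate-consequence construction):

S0 = {}
S1 = {b, d}      (the facts b. and d. are added)
S2 = {b, c, d}   (c :- d,b. fires, since b and d are in S1)
S3 = {b, c, d}   (a :- b,e. never fires, since e is never derived)

S3 = S2, so the iteration has reached a fixpoint, and {b,c,d} is the least model.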
Like your slides say, the definition of an answer set for a positive program P is: S is an answer set of P if S is the least model of P. To be stricter, this is actually if and only if, since the least model LM(P) is unique.
As a last note, so you are not confused by them later: a constraint :- a, b. is actually just shorthand for x :- not x, a, b. where x is a fresh atom. Thus programs containing constraints are not positive programs, though they might look like they are at first, since the body of a constraint seemingly doesn't contain a not.

Is this language regular? {a^i b^j | i = j mod 19}

I know that {a^i b^j | i = j} is not regular and I can prove it with the pumping lemma; similarly, I thought I could use the pumping lemma to prove this one is not regular too. But I think I have seen a similar problem where such a language is actually regular, and because I'm not confident about my knowledge of the pumping lemma, I'm asking this bad question. Sorry.
This is how I would prove it: let w be a^p b^(19k+p); clearly this is in the language. Then I pump a, making it a^(p+1) b^(19k+p), which is not in the language. Therefore, it's not regular.
Is my proof right?
Take a look at this answer. In short, you're not pumping the string correctly. The pumping lemma states that your string w can be divided as w = xyz where |xy| ≤ p and y is not empty. You can then pump the string as x y^i z for all i ≥ 0.
The key here is that the pumping lemma states that there exists a division of the string w satisfying these properties: you do not get to choose the division, and you can only pump the string as x y^i z.
However, this language is regular, so the pumping lemma can't help here: it can only show that a language is not regular, never that it is regular (its condition is necessary but not sufficient). To show that a language is regular you can construct a DFA, NFA or regular expression that describes your language exactly. One such regular expression is:
(a^19)*(e|ab|aabb|aaabbb|...|a^18b^18)(b^19)*
where e is the empty string.
I suspect that your language is an example from an introductory course in automata or computation. If you're interested: the Myhill-Nerode theorem is not often covered in introductory material, but in this case it offers a very easy proof of regularity. Consider the distinguishing extensions b, bb, bbb, ..., b^19, and the proof follows relatively easily from that.
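If a DFA is more convincing than the regular expression, here is a sketch of one (my own construction, assuming the alphabet {a,b}): track the difference of the two counts mod 19 together with a phase flag.

States: pairs (d, s) with d in {0, ..., 18} and s in {A, B}, plus a dead state.
Start state: (0, A). Accepting states: (0, A) and (0, B).
On a: (d, A) -> (d+1 mod 19, A), and (d, B) -> dead, since an a after a b can never be accepted.
On b: (d, s) -> (d-1 mod 19, B).

A string of the form a^i b^j ends in the state with d = (i - j) mod 19, which is accepting exactly when i = j mod 19; every string not of that form ends in the dead state.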