I think this is basic, but how do you write a disjunction of literals in the body of a rule in clingo? I tried
p3 :- p1 ; p2.
but it does not work: it produces the same answers as
p3 :- p1 , p2.
Thanks.
Disjunctions in bodies are only "implicit".
This means you can achieve this by using two rules:
p3 :- p1.
p3 :- p2.
Or by using first-order variables:
p(3) :- p(1..2).
or
dom(1..2).
p(3) :- p(X), dom(X).
All three versions are grounded into the same set of rules
p(3):-p(2).
p(3):-p(1).
(If p(1) and p(2) are actually derivable.)
You can check this by adding the line
{p(X) : dom(X)}.
and using clingo --text.
So I am trying to finish up my discrete math homework, but I am completely stumped as to how I am supposed to solve this problem. My teacher wants me to find a logically equivalent expression for p v q that does not include exclusive or, implication, or inclusive or (i.e., she wants me to use only negation and ands). I don't want the answer outright, for I need to do my homework myself, but any examples or hints would be GREATLY appreciated. I feel as though there is a simple way to do this going right over my head.
Using just NOTs and ANDs, you can create the NAND gate. Think of how complements relate ANDs and ORs and try and work it out for yourself before you see a big hint below.
Use De Morgan's laws to relate NANDs with ORs.
Furthermore, the NAND gate is a universal gate, which means that in principle any logic function can be realised with just the NAND gate. Try and see for yourself if you can simulate every other gate with just the NAND one.
If you really want the answer, it's below.
p OR q is equivalent to (NOT p) NAND (NOT q)
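Since both sides range over just two Boolean variables, the equivalence can be checked exhaustively. Here is a minimal Python sketch (the function name is my own) that compares the NAND construction against the built-in or on all four truth assignments:

```python
from itertools import product

def or_via_not_and(p: bool, q: bool) -> bool:
    # (NOT p) NAND (NOT q) is NOT((NOT p) AND (NOT q))
    return not ((not p) and (not q))

# Exhaustively compare against the built-in "or" on all four rows.
for p, q in product([True, False], repeat=2):
    assert (p or q) == or_via_not_and(p, q)
print("p or q == not(not p and not q) for all truth values")
```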
Here's a truth table for p V q:
p q p V q
T T T
T F T
F T T
F F F
We need to find an equivalent expression that gives the same final column (T, T, T, F) but using only not and and.
You can begin enumerating all possible formulas until you find one. The formula should use only p and q and not and and.
p q p and q
T T T
T F F
F T F
F F F
First thing we note is that the truth table for and gives three Fs, whereas ours needs three Ts. We can turn Ts to Fs and vice versa using negation, so maybe we guess that.
p q not(p and q)
T T F
T F T
F T T
F F T
This seems close, except we need T, T, T, F and we have F, T, T, T. We might notice that this pattern is totally backwards, and since the variables are ordered by truth value, we might guess that swapping truth values would work. To swap truth values, we use negation again:
p q not(not p and not q)
T T T
T F T
F T T
F F F
We found what we wanted. Now, I knew what the answer would be, but even if I hadn't, we would have eventually reached here by just listing out reasonable logical formulas in order. We knew:
Both symbols had to appear
Only "and" could allow both symbols to appear
The only other symbol allowed was not
not not x = x, so we never need two negations in a row
The formulas we could have blindly started writing down truth tables for are:
p and q
(not p) and q
p and (not q)
not(p and q)
not(p) and not(q)
not(not(p) and q)
not(p and (not q))
not((not p) and (not q))
At which point we could have found the answer with no insights other than the four points above.
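The blind enumeration described above is easy to mechanize. A small Python sketch (the lambdas simply mirror the eight candidate formulas listed) computes each truth table and reports which one matches T, T, T, F:

```python
from itertools import product

# The eight candidate formulas, in the order listed above.
candidates = [
    ("p and q",                  lambda p, q: p and q),
    ("(not p) and q",            lambda p, q: (not p) and q),
    ("p and (not q)",            lambda p, q: p and (not q)),
    ("not(p and q)",             lambda p, q: not (p and q)),
    ("not(p) and not(q)",        lambda p, q: (not p) and (not q)),
    ("not(not(p) and q)",        lambda p, q: not ((not p) and q)),
    ("not(p and (not q))",       lambda p, q: not (p and (not q))),
    ("not((not p) and (not q))", lambda p, q: not ((not p) and (not q))),
]

rows = list(product([True, False], repeat=2))
target = [p or q for p, q in rows]  # truth table of p v q: T, T, T, F

for name, f in candidates:
    if [f(p, q) for p, q in rows] == target:
        print("match:", name)  # only the last candidate matches
```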
Let's think about what the sentence p v q means.
p and q are, of course, two propositions - simple sentences that are true or false.
The sentence p v q is just saying 'p is true and/or q is true'. Basically, p v q is true when at least one of the two propositions is true.
So ask yourself the opposite: when would that sentence be false? Why, when neither of them is true! How would we express this? not p and not q
That amounts to saying that not(p or q) and ((not p) and (not q)) are equivalent sentences.
Which implies that not(not(p or q)) and not((not p) and (not q)) are equivalent.
Now, by the double negation law, we know that two negations cancel out.
So we have that p or q and not((not p) and (not q)) are equivalent sentences.
And that is the answer you were looking for.
Given the two properties P1 = (R1 or R2) |-> P and P2 = (R1 |-> P) or (R2 |-> P), where R1 and R2 are sequences and P is a property, is it correct to say that P1 is equivalent to P2?
I did the calculations based on the definitions of tight and neutral satisfiability in Annex F of the LRM and they came up as being equivalent. (I don't want to exclude the possibility of me making a mistake somewhere.)
I ask, because I've seen the two being handled differently by simulation tools.
I did the math again today and the two are not equivalent. There are cases where the property-or form passes, but where the sequence-or form would fail.
A simple example of this would be the properties:
P1 = (1 or (1 ##1 1)) |-> 1
P2 = (1 |-> 1) or (1 ##1 1 |-> 1)
P2 is strongly satisfied by any one clock cycle long trace, aside from ⊥. P1 can never be satisfied by traces shorter than two clock cycles. (This comes out when plugging the conditions of property satisfaction for both forms into the definition of strong satisfaction.)
What this means in plain English is that both threads started in P1 (one for the R1 part and one for the R2 part) need to complete before an assertion of this property is deemed successful. For P2, though, only one of the two properties is required to "mature", at which point the other property's attempt is discarded.
This seems a bit strange at first glance and not very intuitive, but it stems from the formal semantics of SVAs. I guess, though I'm not sure, that P3 = first_match(R1 or R2) |-> P is equivalent to P2. One would need to do the math.
I am teaching myself Boolean Algebra.
I was hoping someone could correct the following if I'm wrong.
Question:
Using Boolean Algebra prove that A(A+B)=A.
A(A+B) would mean A and ( A or B).
My Answer:
A(A+B) = A(A(1+B)) = A(A1) = AA = A.
Distribute A first, as such:
A(A+B)=A
AA+AB=A
A+AB=A
A(1+B)=A
A(1)=A
A=A
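As a sanity check alongside the algebra, the identity can also be verified exhaustively over {0, 1}. A minimal Python sketch, using & for AND and | for OR to match the question's notation:

```python
# Exhaustive check of the absorption law A(A+B) = A over Boolean values,
# where "+" is OR and juxtaposition is AND, as in the question.
for A in (0, 1):
    for B in (0, 1):
        lhs = A & (A | B)   # A(A+B)
        assert lhs == A, (A, B)
print("A(A+B) = A holds for all Boolean A, B")
```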
You seem to have skipped a couple of steps within your first step: you essentially stated A+B = A(1+B), which is not always correct.
Let me introduce you to propositional logic. We use the symbols ∧, ∨, and ≡ to denote and, or, and logically equivalent, respectively.
Here is your equation rewritten in this notation:
A ∧ (A ∨ B) ≡ A
To complete the proof, three laws are applied: the distributive law at line 2, the idempotent law at line 3, and the absorption law at line 4:
1. A ∧ (A ∨ B)
2. ≡ (A ∧ A) ∨ (A ∧ B)    (distributive law)
3. ≡ A ∨ (A ∧ B)          (idempotent law)
4. ≡ A                    (absorption law)
And this completes the proof.
Proving that a^n b^n, n >= 0, is non-regular.
Using the string a^p b^p.
Every example I've seen claims that y can either contain a's, b's, or both. But I don't see how y can contain anything other than a's, because if y contains any b's, then the length of xy must be greater than p, which makes it invalid.
Conversely, for examples such as the language {www : w in {a, b}*}, the string used is a^p b a^p b a^p b. In the proofs I've seen, it is claimed that y cannot contain anything other than a's, for the reason I stated above. Why is this different?
Also throwing in another question:
Describe the error in the following "proof" that 0* 1* is not a regular language. (An
error must exist because 0* 1* is regular.) The proof is by contradiction. Assume
that 0* 1* is regular. Let p be the pumping length for 0* 1* given by the pumping
lemma. Choose s to be the string 0^p 1^p. You know that s is a member of 0* 1*, but
0^p 1^p cannot be pumped. Thus you have a contradiction. So 0* 1* is not regular.
I can't find any problem with this proof. I only know that 0*1* is a regular language because I can construct a DFA.
The pumping lemma states that for a regular language L there exists a pumping length p such that:
every string s in L with |s| >= p has a subdivision s = xyz such that:
1. for all i >= 0, x y^i z is in L;
2. |y| > 0; and
3. |xy| <= p.
Now the claim that y can contain only a's, only b's, or both originates from the first item: if y contained both a's and b's, then pumping with i = 2 would result in a string of the form aa...ab...ba...ab...b, which is not in the language. That is what the statement wants to say.
The third item indeed makes it obvious that y can contain only a's. In other words, what the textbooks say is a conclusion derived from the conditions of the lemma.
Finally, if you combine 1., 2., and 3., you reach a contradiction: by (2.) y must contain at least one character, and by (3.) it can contain only a's. Say y contains k a's, with k >= 1. If we "pump" with i = 2, we generate the string
s' = x y^2 z = a^(p+k) b^p
We know, however, that s' is not part of L, which by (1.) it should be, so we reach an inconsistency.
You can thus only make the proof work by combining the three items. It is not enough to know that y consists only of a's: that alone does not produce a contradiction. The point is that no subdivision exists that satisfies all three constraints simultaneously.
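The argument can be checked concretely for a small value of p. A hedged Python sketch (p = 5 is an arbitrary illustrative choice, not a real pumping length) enumerates every subdivision satisfying (2.) and (3.) and confirms that pumping with i = 2 always leaves the language:

```python
def in_anbn(s: str) -> bool:
    # Membership test for {a^n b^n : n >= 0}.
    n = len(s) // 2
    return s == "a" * n + "b" * n

p = 5                       # stand-in pumping length for illustration
s = "a" * p + "b" * p       # the chosen string a^p b^p

# Enumerate every subdivision s = xyz with |xy| <= p and |y| > 0.
for xy_end in range(1, p + 1):
    for y_start in range(xy_end):
        x, y, z = s[:y_start], s[y_start:xy_end], s[xy_end:]
        pumped = x + y * 2 + z          # i = 2
        assert not in_anbn(pumped)      # pumping always leaves the language
print("no valid split of a^p b^p survives pumping with i = 2")
```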
About your second question: in that case L looks different. You can't reuse the a^n b^n proof because 0* 1* is perfectly happy if the string contains more 0's. In other words, you can't find a contradiction: pumping no longer takes the string out of the language, so the first item of the lemma still holds. As long as y contains only one type of character, regardless of its length, the subdivision satisfies all three constraints.
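By contrast, running the same enumeration for 0* 1* finds subdivisions that do survive pumping, which is exactly the error in the quoted "proof". A sketch under the same illustrative assumption p = 5:

```python
import re

def in_0star1star(s: str) -> bool:
    # Membership test for the regular language 0*1*.
    return re.fullmatch("0*1*", s) is not None

p = 5                       # illustrative value, as before
s = "0" * p + "1" * p

# Look for subdivisions s = xyz (|xy| <= p, |y| > 0) that pump safely.
surviving = []
for xy_end in range(1, p + 1):
    for y_start in range(xy_end):
        x, y, z = s[:y_start], s[y_start:xy_end], s[xy_end:]
        if all(in_0star1star(x + y * i + z) for i in range(4)):
            surviving.append((x, y, z))

# Any y made of 0's alone pumps to 0^(p+k(i-1)) 1^p, still in 0*1*.
print(len(surviving), "splits survive pumping")
```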
Say I give something like AB + AB + BA to MATLAB (or MuPAD) and ask it to simplify it. The answer should be 2AB + BA. Can this be done in MATLAB or MuPAD?
Edit:
OK, this is feeling ridiculous. I'm trying to do this in either MATLAB or MuPAD, and it's frustrating not knowing how to do what should be the simplest things, and not being able to find the answers right away via Google.
I want to expand the following, multiplied together, as a taylor series:
eq1 := exp(g*l*B):
eq2 := exp(l*A):
eq3 := exp((1-g)*l*B):
g is gamma, l is lambda (I don't know how to represent either of these in MATLAB or MuPAD). A and B don't commute. I want to multiply the three exponentials together, expand, select all terms of a given power in lambda, and simplify the result. Is there a simple way to do this, or should I give up and go to another system, like Maple?
This is mupad, not matlab:
operator("x", _vector_product, Binary, 1999):
A x B + A x B + B x A
returns
2 A x B + B x A
The vector product is used simply because it matches the described requirements: it is binary and does not commute.
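If neither MATLAB nor MuPAD is at hand, the same collection of like terms can be sketched in plain Python by treating each noncommutative product as an ordered tuple of factor names. This is a toy model, not a CAS, and the function name is my own:

```python
from collections import Counter

def collect(terms):
    # Each term is a tuple of factors; order matters because A and B
    # do not commute, so ("A", "B") and ("B", "A") stay distinct.
    counts = Counter(terms)
    return {prod: n for prod, n in counts.items()}

# AB + AB + BA
result = collect([("A", "B"), ("A", "B"), ("B", "A")])
print(result)  # {('A', 'B'): 2, ('B', 'A'): 1}  i.e. 2AB + BA
```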