I would like to be sure that my answer is correct.
The question is:
Let I (x) be the statement “x has an Internet connection”
and C(x, y) be the statement “x and y have chatted over
the Internet,” where the domain for the variables x and y
consists of all students in your class. Use quantifiers to
express each of these statements:
Exactly one student in your class has an Internet connection.
My answer is: ∃x∀y(x=y ↔ I(y)).
Yes, it works.
An alternative approach would be to try to do it in two steps, and take a conjunction.
First would be "someone has internet": exists x. I(x), and second would be "if two people have internet then they are the same person": forall x,y. (I(x) and I(y)) -> x = y.
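Spelled out with the same symbols as your formula, that conjunction is:
∃x I(x) ∧ ∀x∀y((I(x) ∧ I(y)) → x=y)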
This way is 'simpler' in that there is less quantifier nesting: yours needs a ∀ inside an ∃ (an alternation), while each of my conjuncts uses only one kind of quantifier.
But yours is more elegant, so YMMV.
I'm trying to teach basic set theory to Prover9. The following definition of membership seems to work very well (the second axiom is just to make lists unordered):
member(x,[x:y]).
[x,y]=[y,x].
With this, I can have Prover9 prove 'complicated' things like member([A,B],[C,[A,B]]) and others.
However, I must be doing something wrong when I use it to define subsets:
subset(x,y) <-> (member(z,x) -> member(z,y)).
Prover9 clausifies this as subset(x,y) | -member(z,y) and uses it to prove false clauses, like subset([A],[B,C]).
What am I missing?
Your "second axiom ... just to make lists unordered" looks suspicious.
Note [x,y] is a two-element list, so your axiom says nothing about lists in general. Your 'complicated' examples are still 2-element lists, so not very complicated. I think you'll be unable to prove member(A, [C, B, A]).
Contrast that with [x:y] in your first axiom, which is a list of one or more elements: the y might be nil, or might be any number of elements. IOW y is a list, whereas in [x,y], y is an element of a list.
See http://www.cs.unm.edu/~mccune/prover9/manual/2009-11A/, 'Clauses & Formulas' at 'list notation'.
I'd go:
member(x, [x:y]).
member(x, z) -> member(x, [y:z]).
(But that defines a bag, not a set.)
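With those two axioms, member(A, [C, B, A]) does follow; sketching the chain of instances by hand (using Prover9's list sugar, [C,B,A] is [C : [B : [A]]]):
member(A, [A])          (first axiom, taking y to be the empty list)
member(A, [B, A])       (second axiom, with z = [A] and y = B)
member(A, [C, B, A])    (second axiom again, with z = [B, A] and y = C)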
I think the quantification over variables is a red herring.
Edit: Errk. I'm wrong. So I don't know why I got this result:
The example that @Doug points to doesn't need to quantify z 'inside'
the rhs of the equivalence. You can just remove all the explicit
quantification, and the proof still works.
OK. Let's apply the rewrite rules per the Manual 'Clauses & Formulas' at "If non-clausal formulas are entered ...".
That definition of subset is an equivalence (aka bi-implication); rewrite it as two separate axioms, the 'forwards' and 'backwards' implications, then rewrite each of those using p -> q ==> -p | q.
In the 'forwards' direction we get:
-subset(x, y) | (member(z, x) -> member(z, y)).
Doesn't matter whether the z is quantified narrow or wide.
In the 'backwards' direction:
-(member(z, x) -> member(z, y)) | subset(x, y).
Here, if we quantify z narrowly around the implication, that's inside the scope of the negation, and so different from quantifying across the whole formula.
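To see the difference, write the implicit quantifiers out by hand (plain first-order notation, not Prover9 input):
all x all y all z ((member(z,x) -> member(z,y)) -> subset(x,y)).    % wide scope: what Prover9 does with a free z
all x all y ((all z (member(z,x) -> member(z,y))) -> subset(x,y)).  % narrow scope: what you presumably intended
Under the wide reading, one single z for which member(z,x) -> member(z,y) happens to hold (for instance any z that is a member of y, such as B in [B,C]) already forces subset(x,y) for an arbitrary x; that is exactly why the clause subset(x,y) | -member(z,y) lets Prover9 'prove' subset([A],[B,C]). If I read the manual correctly, you get the narrow reading by writing the quantifier explicitly:
subset(x,y) <-> (all z (member(z,x) -> member(z,y))).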
Conclusion: both your definition of member( ) and of subset( ) are wrong.
BTW, are you sure you need to define member? The proof @Doug points to just takes member(x,y) as primitive.
Proving that a^n b^n, n >= 0, is non-regular.
Using the string a^p b^p.
Every example I've seen claims that y can either contain a's, b's, or both. But I don't see how y can contain anything other than a's, because if y contains any b's, then the length of xy must be greater than p, which makes it invalid.
Conversely, for examples such as the language { www : w in {a, b}* }, the string used is a^p b a^p b a^p b. In the proofs I've seen, it is claimed that y cannot contain anything other than a's, for the reason I stated above. Why is this different?
Also throwing in another question:
Describe the error in the following "proof" that 0* 1* is not a regular language. (An
error must exist because 0* 1* is regular.) The proof is by contradiction. Assume
that 0* 1* is regular. Let p be the pumping length for 0* 1* given by the pumping
lemma. Choose s to be the string 0^p 1^p. You know that s is a member of 0* 1*, but
0^p 1^p cannot be pumped. Thus you have a contradiction. So 0* 1* is not regular.
I can't find any problem with this proof. I only know that 0*1* is a regular language because I can construct a DFA.
The pumping lemma states that for a regular language L there is a pumping length p such that:
for every string s in L with |s| >= p there exists a subdivision s=xyz such that:
For all i >= 0, xy^i z is in L;
|y| > 0; and
|xy| <= p.
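To prove non-regularity you use the contrapositive: if for some string s in L with |s| >= p every split s = xyz with |y| > 0 and |xy| <= p has some i for which xy^i z is not in L, then L cannot be regular. That is the shape of the argument below.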
Now the claim that y can only contain a's or only b's (not both) originates from the first item: if y contained both a's and b's, then pumping with i=2 would result in a string of the form aa...abb...baa...b, which is no longer of the form a^n b^n. That's what the statement wants to say.
The third part then indeed makes it obvious that y can only contain a's. In other words, what the textbooks say is a conclusion derived from these items.
Finally, if you combine 1., 2. and 3., one reaches a contradiction: by 2. we know y must contain at least one character, and by 3. it can only contain a's. Say y contains k a's (k >= 1). If we "pump" this with i=2, the result is that we generate the string:
s' = xy^2 z = a^(p+k) b^p
We know, however, that s' is not part of L, although by 1. it should be, so we reach an inconsistency.
You can thus only make the proof work by combining the three items. It's not enough to know that y consists only of a's: that alone doesn't result in a contradiction. It's because there is no subdivision available that satisfies all three constraints simultaneously.
About your second question: in that case, L looks different. You can't reuse the proof for a^n b^n, because L is perfectly happy if the string contains more a's. In other words, you can't find a contradiction; the step claiming that s cannot be pumped fails. As long as y contains only one type of character - regardless of its length - it can satisfy all three constraints.
I'm taking an artificial intelligence class and we are working with Answer Set Programming (Clingo specifically). We're talking mostly theory at the moment and I am having some trouble differentiating between models and least models. I have the following definitions:
Satisfying rules, models, least models and answer sets of definite
program
A program is called definite if it does not have “not” in the body of its rules.
A set S is said to satisfy a rule of the form a :- b1, …, bm, not c1, …, not cn. if, whenever its body is satisfied by S (i.e., b1 … bm are in S
and none of c1 ... cn are in S), its head is also
satisfied by S (i.e., a is in S).
A set S is said to satisfy a program if it satisfies all rules of that program.
A set S is said to be an answer set of a definite program P if (a) S satisfies P (also referred to as S is a model of P) and (b) No
strict subset of S satisfies P (i.e., S is the least model of P).
With the question (pulled from the lecture slides, not homework):
P is defined as:
a :- b,e.
b.
c :- d,b.
d.
Which of the following are models and least models?
{}, {b}, {b,d}, {b,d,c}, {b,d,c,e}, {b,d,c,e,a}
Can anyone let me know what the answer to the above question is? I can probably figure out the difference from there, although if someone could explain the difference in common speak (rather than text-book definition), that would be wonderful. I'm not sure which forum to post this question under - please let me know if it should be posted somewhere else.
Thanks
First of all, note that this section of your slides is talking about the answer sets of a positive program P (also called a definite program), even though it also mentions not. The positive program is the simple case: for a positive program P there always exists a unique least model LM(P), which is the intersection of all its models.
Allowing not in the rule bodies makes things more complex. The body of a rule is the right-hand side of :-.
The answer to the question would be, set by set:
S={} is not a model, since b and d are facts b. d.
S={b} is not a model, since d is a fact d.
S={b,d} is not a model, since c is implied by c :- d,b. and c is not in S
S={b,d,c} is a model
S={b,d,c,e} is not a model, since a is implied by a :- b,e. and a is not in S
S={b,d,c,e,a} is a model
So what is the least model? It's S={b,c,d} since no strict subset of S satisfies P.
We can arrive at the least model of a positive program P in two ways:
Enumerate all models and take their intersection (here {b,c,d}∩{a,b,c,d,e}={b,c,d}).
Starting with the facts (here b. and d.) and iteratively adding implied atoms (here c, via c :- b,d.) to S, repeating until S is a model and stopping at that point (a rough sketch of this iteration is given below).
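Here is a minimal Python sketch of that second, iterative computation; the encoding of the rules as (head, body) pairs is my own, not Clingo syntax:

# Positive program: each rule is (head, body); facts have an empty body.
rules = [("a", ["b", "e"]),
         ("b", []),
         ("c", ["d", "b"]),
         ("d", [])]

S = set()
changed = True
while changed:
    changed = False
    for head, body in rules:
        # If the whole body is already in S, the head has to be in S as well.
        if all(atom in S for atom in body) and head not in S:
            S.add(head)
            changed = True

print(sorted(S))   # ['b', 'c', 'd'] -- the least model

This is essentially the immediate-consequence (T_P) fixpoint computation.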
Like your slides say, the definition of an answer set for a positive program P is: S is an answer set of P if S is the least model of P. To be stricter, this is actually if and only if, since the least model LM(P) is unique.
As a last note, so you are not later confused by them, a constraint :- a, b is actually just shorthand for x :- not x, a, b. Thus programs containing constraints are not positive programs; though they might look like they are at first, since the body of a constraint seemingly doesn't contain a not.
When I prove some theorem, my goal evolves as I apply more and more tactics. Generally speaking the goal tends to split into subgoals, where the subgoals are simpler. At some final point Coq decides that the goal is proven. What can this "proven" goal look like? These goals seem to be fine:
a = a. (* Any object is identical to itself (?) *)
myFunc x y = myFunc x y. (* Result of the same function with the same params
is always the same (?) *)
What else can appear here, or are these examples fundamentally wrong?
In other words, when I finally apply reflexivity, Coq just says "Got it" without any explanation. Is there any way to get more details on what it actually did, or why it decided that the goal is proven?
You're actually facing a very general notion that seems not so general because Coq has some user-friendly facility for reasoning with equality in particular.
In general, Coq accepts a goal as solved as soon as it receives a term whose type is the type of the goal: it has been convinced the proposition is true because it has been convinced the type that this proposition describes is inhabited, and what convinced it is the actual witness you helped build along your proof.
For the particular case of inductive datatypes, the two ways you are going to be able to prove the proposition P a b c are:
by constructing a term of type P a b c, using the constructors of the inductive type P, and providing all the necessary arguments.
or by reusing an existing proof or an axiom in the environment whose type you can get to match P a b c.
For the even more particular case of equality proofs (equality is just an inductive datatype in Coq), the same two ways I list above degenerate to this:
the only constructor of equality is eq_refl, and to apply it you need to show that the two sides are judgementally equal. For most purposes, this corresponds to goals that look like T a b c = T a b c, but it is actually a slightly broader notion of equality (see below). For these, all you have to do is apply the eq_refl constructor. In a nutshell, that is what reflexivity does!
the second case consists in proving that the equality holds because you have other equalities in your context, nothing special here.
Now one part of your question was: when does Coq accept that two sides of an equality are equal by reflexivity?
If I am not mistaken, the answer is when the two sides of the equality are αβδιζ-convertible.
What this roughly means is that there is a way to make them syntactically equal by repeated applications of:
α : sane renaming of non-free variables
β : computing reducible expressions
δ : unfolding definitions
ι : simplifying matches
ζ : expanding let-bound expressions
[someone please correct me if more rules apply or if I got one wrong]
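As a toy illustration (my own example, not from the question): a goal like (fun x => 0 + x) 5 = 5 can be closed by reflexivity, because the left-hand side converts to the right-hand side using just those rules:
(fun x => 0 + x) 5
    --> 0 + 5        (β: apply the function to 5)
    --> 5            (δ: unfold the definition of +, then ι: reduce the match on 0)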
For instance some of the things that are not captured by these rules are:
equality of functions that do more or less the same thing in different ways:
(fun x => 0 + x) = (fun x => x + 0)
quicksort = mergesort
equality of terms that are stuck reducing but would be equal:
forall n, 0 + n = n + 0
I wrote an answer to what I thought was a quite interesting question, but unfortunately the question was deleted by its author before I could post. I'm reposting a summary of the question and my answer here in case it might be of use to anyone else.
Suppose I have a SAT solver that, given a Boolean formula in conjunctive normal form, returns either a solution (a variable assignment that satisfies the formula) or the information that the problem is unsatisfiable.
Can I use this solver to find all the solutions?
There is definitely a way to use the SAT solver you described to find all the solutions of a SAT problem, although it may not be the most efficient way.
Just use the solver to find a solution to your original problem, add a clause that does nothing except rule out the solution you just found, use the solver to find a solution to the new problem, and so forth. Keep going until you get a problem that's unsatisfiable.
For example, suppose you want to satisfy (X or Y) and (X or Z). There are five solutions:
Four with X true, Y and Z arbitrary.
One with X false, Y and Z true.
So you run your solver, and let's say it gives you the solution (X, Y, Z) = (T, F, F). You can rule out this solution---and only this solution---with the constraint
not (X and (not Y) and (not Z))
This constraint can be rewritten as the clause
(not X) or Y or Z
So now you can run your solver on the new problem
(X or Y) and (X or Z) and ((not X) or Y or Z)
and so forth.
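To make the loop concrete, here is a small sketch using the python-sat (PySAT) package on this same example, with X, Y, Z encoded as DIMACS-style variables 1, 2, 3; the encoding and the choice of package are mine:

from pysat.solvers import Minisat22

# (X or Y) and (X or Z), encoded with X=1, Y=2, Z=3
with Minisat22(bootstrap_with=[[1, 2], [1, 3]]) as solver:
    while solver.solve():
        model = solver.get_model()   # e.g. [1, -2, -3] means (X, Y, Z) = (T, F, F)
        print(model)
        # Block exactly this assignment: at least one literal must differ next time.
        solver.add_clause([-lit for lit in model])

This prints the five solutions and then stops once the accumulated blocking clauses make the problem unsatisfiable.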
Like I said, this is a way to do what you want, but it probably isn't the most efficient way. When your SAT solver is looking for a solution, it learns a lot about the problem, but it doesn't return all that information to you---it just gives you the solution it found. When you run the solver again, it has to re-learn all the information that was thrown away.
Sure it can. When MiniSat[1] finds a solution
s SATISFIABLE
v 1 2 -3 0
(solution 1=True, 2=True, 3=False) then you have to put into the original CNF[2] a clause that bans this solution:
-1 -2 3 0
(which means either 1 or 2 must be False, or 3 must be True). Then you solve again. You do this until the solver returns UNSAT, i.e., until there are no more solutions to the problem. You will insert one clause for each iteration, and each clause will have the same format as the solution except that it is all inverted and has a 0 at the end.
It's much faster to do this using the C++ interface of MiniSat, as the solver can then keep its intermediate data between runs, and the iterations will be faster.
[1] http://minisat.se/
[2] http://fairmut3x.wordpress.com/2011/07/29/cnf-conjunctive-normal-form-dimacs-format-explained/