Reduce the expression (w+x'+z)(w'+y+z')(x+y+z) to a minimum SOP
I’ve tried
(The original expression)
= (z + wx + wy + x'y)(w' + y + z')
= w'z + yz + wxy + wxz' + wy + wyz' + w'x'y + x'y + x'yz'
= w'z + yz + wxy + wxz' + wy + x'y + x'yz'
= w'z + yz + wxy + wxz' + wy + x'y
= w'z + yz + wy + wxz' + x'y
I'm not sure whether the result I found is the "minimum" SOP. Is there any generalized way to get a minimum SOP from a POS, or a minimum POS from an SOP?
The step you are missing here is the consensus theorem. It's easy to overlook when simplifying algebraically, since it's a less commonly used three-term rule; generally you would solve these with a K-map. Your result can be reduced by one more step using the consensus theorem:
w'z + wy + wxz' + x'y
I am not totally sure whether there is another step, because further reductions are not easy to spot by hand.
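For reference, here is the consensus step written out (a restatement added for clarity, with X, Y, Z as placeholder variables); plotting w, x, y, z on a K-map also suggests that no three-term cover exists, so the four-term result above appears to be the minimum SOP:

XY + X'Z + YZ = XY + X'Z                               (consensus theorem: YZ is the redundant consensus term)
wy + w'z + yz = wy + w'z                               (with X = w, Y = y, Z = z)
w'z + yz + wy + wxz' + x'y = w'z + wy + wxz' + x'y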
My question is whether I can gather all answer sets into one answer set. I attach the code for my program below, together with the results it returns and a description of what I would like to have.
% Main domain predicates definitions
argument(1..3).
element(1).
#show scope/2.
{scope(A, U) : element(U)} :- argument(A).
What I get is shown in the picture below.
But what I would like to get is some predicate that has a unique id for each answer set, for example:
newScope(1,empty)-newScope(2,2,1)-newScope(3,3,1)-....-newScope(8,1,1)|newScope(8,2,1)|newScope(8,3,1)
Thanks in advance to whoever has the patience to answer me.
You can't do this, except if you accept an exponential blowup in atoms, and even then it isn't easy. You can't enumerate answer sets inside a single answer set: the complexity of enumeration is higher, so you can't solve n NP problems (enumeration) inside one NP problem, unless you describe n NP problems in your encoding, which is not practical.
Maybe you can describe what you are trying to achieve; there may be other ways to do this.
Your problem could be solvable with disjunctive rules, as then the complexity rises to Σ2P (that is, NP^NP).
Edit: I found that https://github.com/potassco/guess_and_check could do exactly what you are looking for: it lets you describe such a problem (one level up in the polynomial hierarchy) without manually writing the disjunctive logic program.
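If the underlying goal is simply to collect all answer sets in one place, one practical route (an illustration added here, not part of the original answer; the newScope output below only loosely mimics the format from the question) is to enumerate the models from outside the solver, for example with clingo's Python API, and tag each model with its own index:

import clingo

PROGRAM = """
% Main domain predicates definitions
argument(1..3).
element(1).
{ scope(A, U) : element(U) } :- argument(A).
#show scope/2.
"""

ctl = clingo.Control(["0"])            # "0" asks clingo to enumerate all answer sets
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])

collected = []                         # one entry per answer set
ctl.solve(on_model=lambda m: collected.append([str(s) for s in m.symbols(shown=True)]))

# Build a newScope-style view: (answer set id, atom) pairs, assembled
# outside ASP rather than inside a single answer set.
for i, atoms in enumerate(collected, start=1):
    if not atoms:
        print("newScope({},empty)".format(i))
    for atom in atoms:
        print("newScope({},{})".format(i, atom))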
You can also have a look at section 3.3.2 of the following paper:
https://arxiv.org/abs/2008.06692
The method presented there allows you to compute m different stable models of a logic program by finding m 1-diverse stable models. But note that m has to be given as input, hence the method is not directly applicable to the problem of computing one stable model that contains all stable models of a logic program.
In my Coq development I am learning how to create new tactics tailored to my problem domain, à la Prof. Adam Chlipala.
On that page he describes how to create powerful tactics by wrapping repeat around a match that responds to various interesting conditions. The repeat then iterates, allowing for far-reaching inference.
The use of repeat has a caveat (emphasis mine):
The repeat that we use here is called a tactical, or tactic combinator. The behavior of repeat t is to loop through running t, running t on all generated subgoals, running t on their generated subgoals, and so on. When t fails at any point in this search tree, that particular subgoal is left to be handled by later tactics. Thus, it is important never to use repeat with a tactic that always succeeds.
Now, I already have a powerful tactic in use, auto. It similarly strings together chains of steps, this time found from hint databases. From auto's page:
auto either solves completely the goal or else leaves it intact. auto and trivial never fail.
Boo! I have already invested some effort in curating auto's hint databases, but it seems I am forbidden from employing them in tactics using repeat (that is, in interesting tactics).
Is there some variation of auto that can fail, or otherwise be used correctly in loops?
For example, perhaps this variant fails when it "leaves [the goal] intact".
EDIT: Incorporating auto into loops isn't the "right" way to do it anyway (see this), but the actual question of a failing version of auto is still perhaps interesting.
As mentioned by @AntonTrunov, you can always use the progress tactical to make a tactic fail if it did not change the goal. In the case of auto, since it is supposed to either solve the goal or leave it intact, you can also wrap it in solve [ auto ], which will have the same effect, because it fails if auto does not solve the goal completely (here is the doc for solve).
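A minimal sketch of both variants (the goal below is illustrative and does not depend on any custom hint database):

Goal forall P Q : Prop, P -> (P -> Q) -> Q.
Proof.
  intros.
  (* "progress auto" fails if auto leaves the goal unchanged, and
     "solve [ auto ]" fails unless auto closes the goal completely,
     so either one is safe to put under repeat. *)
  repeat (progress auto).
Qed.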
I have seen many theories about Theory, but the truth is that none of them is good enough to explain why you would not always use Theory. I do not see a reason (apart from... well... the fact that the words Theory and Test are not the same) to not always use a Theory.
The funny thing is that in all the examples out there you could easily set up your test as either a Test or a Theory. I am sure I am missing something here, but I do not see the need to put myself in philosophical uncertainty about which attribute I should use.
Let's suppose the difference is only for the sake of self-documenting code. Then why do you use Datapoint and DatapointSource for Theory and something else for Test?
The truth is that I feel nobody has come up with a simple and clean answer that shows where the difference really lies; at the very least, one example where Theory makes absolutely no sense and Test fits nicely, or the other way around...
As the programmer I am... if the answer is not simple, there is something wrong, so help me see what I am missing.
Since you are referring to NUnit, I'll answer WRT what NUnit means by a theory. Note that this is entirely different from how xUnit.Net uses the term.
I got the idea of theories from JUnit and particularly from the work of David Saff. Here's one paper on the subject: http://groups.csail.mit.edu/pag/pubs/test-theory-demo-oopsla2007.pdf Google "saff theories" and you'll find more.
Basically, when creating a test you sometimes have a theory about how the tested code should work. (In this answer, lower case theory is the English word while Theory is the NUnit thing) This is especially common in mathematical reasoning. For example, consider a program that calculates square roots. I could theorize that the square root of any non-negative number is a value such that it gives the original number when multiplied by itself. That's a completely self-contained statement about the computation.
Using NUnit, I could write a test like this...
[Theory]
public void SqrtTest(double value)
{
    Assume.That(value >= 0);

    double answer = SquareRootOf(value);

    Assert.That(answer * answer, Is.EqualTo(value));
}
This test works no matter what number we give it. If the number is negative, the result is inconclusive and doesn't affect the overall outcome. If it is non-negative, an actual assertion is performed and the test either succeeds or fails.
In the ideal world, the provided values can come from anywhere. In JUnit, they come from Datapoints and I copied that. I also permitted them to be programmer-specified mainly as an interim solution. Ultimately, the idea was that the test framework would support a range of ways to generate data for a Theory, without the intervention of the programmer or tester.
Unfortunately, we are still waiting for that last bit. :-)
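To make that concrete, here is one way the values could be supplied in NUnit (a sketch: the fixture name, method name and sample values are made up, and Math.Sqrt stands in for the hypothetical SquareRootOf):

using System;
using NUnit.Framework;

[TestFixture]
public class SquareRootTheoryTests
{
    // Candidate inputs; Assume.That filters out the negative ones at run time.
    [DatapointSource]
    public double[] Values = { -4.0, 0.0, 1.0, 2.25, 144.0 };

    [Theory]
    public void RootTimesRootGivesOriginal(double value)
    {
        Assume.That(value >= 0);

        double answer = Math.Sqrt(value);   // stands in for SquareRootOf(value)

        Assert.That(answer * answer, Is.EqualTo(value).Within(1e-9));
    }
}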
Bottom line, I think you should use Theory when you have a theory. Use a test when you just have examples without any logic binding them together. IME that's what happens in most business applications.
One of these days I hope to write a pretty long chapter about Theories.
After implementing some of the rules for my project, I did a "ScoreConsistencyCheck" to assure myself that the rules were implemented correctly.
By "ScoreConsistencyCheck" I mean my own Java method, called after I either terminate the solving early or it terminates via configuration, that outputs the expected score. The parameter of this method is a Solution instance; the expected score is calculated from the state of that solution and then compared to the score held in the "score" variable of the Solution instance.
When I use FULL_ASSERT it doesn't throw a score corruption exception, but when I try it this way I sometimes get a score difference at a particular step, either in the construction heuristic or in local search. My guess is that this is because OptaPlanner does not know what the expected score based on the solution is; all FULL_ASSERT cares about is that the step score equals the score recalculated after the undo move is done.
So, because the "ScoreConsistencyCheck" is only called after the solving has ended, I can't really deduce which case is causing a problem (if any), because the move and step in which it occurred are unknown.
Because of this, I'm looking for a way to see my expected score (from the "ScoreConsistencyCheck") after each move, so I can compare it with the OptaPlanner one and find the cases that are missed in the calculations. To do this, I need a way to get the working solution after each move.
After some searching I wasn't able to find much. I did, however, find that OptaPlanner 7.0.0 Beta has a ScoreVerifier (I'm using OptaPlanner 6.4.0), but the problem with that is:
I don't know if it will accomplish what I'm looking for, as there is little documentation about it.
I'm having trouble implementing it.
My questions here are:
How to get the workingSolution after each move, and use it for the check?
Is there a feature in OptaPlanner 6.4.0 that will allow me to do this?
If there isn't a feature, is there a possible workaround?
Is there a better way to check the score consistency of the Rules?
Yes: <assertionScoreDirectorFactory> (see docs). Use that in combination with FULL_ASSERT and you'll get a more isolated view of where the score corruption first started.
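As a rough sketch of how the two could be combined in the solver config (the element names follow the OptaPlanner docs, but the file path and class name here are placeholders):

<solver>
  <!-- FULL_ASSERT recalculates and checks scores intrusively while solving -->
  <environmentMode>FULL_ASSERT</environmentMode>
  <scoreDirectorFactory>
    <scoreDrl>org/example/scoreRules.drl</scoreDrl>
    <!-- A second, independently written score calculator used only for assertions,
         so a mismatch is reported at the move/step where it first appears -->
    <assertionScoreDirectorFactory>
      <easyScoreCalculatorClass>org.example.ScoreConsistencyCalculator</easyScoreCalculatorClass>
    </assertionScoreDirectorFactory>
  </scoreDirectorFactory>
</solver>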
I would like to encode "implies" logic in LogicBlox.
I have a predicate:
Number(n), hasNumberName(n:i) -> int(i).
isTrue[n] = i -> Number(n), boolean(i).
And I add some data to that predicate:
+Number(1).
Now I want to create number 2 and number 3, with the truth values for these two numbers following this logic rule:
If isTrue[1] is true, then isTrue[2] is true or isTrue[3] is true. (isTrue[1] implies (isTrue[2] or isTrue[3]))
So I create a predicate:
implies[n1, n2, n3] = e -> Number(n1), Number(n2), Number(n3), boolean(e).
Then I try to create a rule like this:
isTrue[n2] = true ; isTrue[n3] = true <- isTrue[n1] = true, implies[n1, n2, n3] = true.
But LogicBlox reports: "error: disjunction is not supported in the head of a rule".
So how can I encode this "implies" logic in LogicBlox?
From your question it looks like you're asking this question with a Prolog background. If so, then it might be helpful to read a Datalog introduction, for example "What you always wanted to know about Datalog (and never dared to ask)".
The logic you want to express is deliberately not allowed in Datalog, because it requires a solving or search strategy. As opposed to Prolog, Datalog is deliberately restricted in the computational complexity of the programs you can express. As a result of these restrictions, it meets important requirements for use in a database management system, most importantly supporting very large data sets. The computational complexity restrictions will be clearer after reading a good introduction to Datalog.
People have studied extensions of Datalog that allow more programs to be expressed (without going to full Prolog, which would result in more procedural semantics). This particular extension is called "Disjunctive Datalog". The hits on Google look good if you want to read more. LogicBlox does not (at least currently) implement Disjunctive Datalog, because our primary objective is to be a scalable database management system.
LogicBlox does support using a solver for specific programs. A typical example is the knapsack problem. If your problem is expressible as an optimization problem (it almost certainly is, but the formulation usually requires some creativity for things that are not conventional optimization problems), then you could use this feature. The solver functionality is not very well documented in publicly available material yet. Please reach out to us directly if you would like to give this a try.
I assume you are trying to enforce a constraint that 1 -> 2 or 3? If so, trying to derive a value using <- is not going to work: if neither 2 nor 3 is present, which one(s) are you telling the system to create? Instead, just write the constraint using -> syntax. Constraints are implications, after all (the right-arrow syntax is no accident!), and that puts the disjunction on the right-hand side, where the language allows it. Then, if you ever try to create 1 while neither 2 nor 3 exists, the system will report a constraint failure because the implication was not found to hold.
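Concretely, keeping the question's own predicates, the rule could be restated as a constraint along these lines (a sketch only; the exact syntax may need adjusting):

isTrue[n1] = true, implies[n1, n2, n3] = true ->
  isTrue[n2] = true ; isTrue[n3] = true.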
Also, you don't usually need boolean-valued functions in logic languages; isTrue(x) can just be the set of x which you consider to be "true" (and any not present are "false").