Gather all answer sets into one answer set (ASP)

My question is whether I can gather all answer sets into one answer set. Below I attach the code for my program, the results it returns, and a description of what I would like to get.
% Main domain predicate definitions
argument(1..3).
element(1).

{ scope(A,U) : element(U) } :- argument(A).
#show scope/2.
Running this, I get eight answer sets: every combination of the atoms scope(1,1), scope(2,1) and scope(3,1), from the empty set up to all three.
But what I would like to get is a single answer set containing some predicate that carries a unique id for each of those answer sets. For example:
newScope(1,empty)-newScope(2,2,1)-newScope(3,3,1)-....-newScope(8,1,1)|newScope(8,2,1)|newScope(8,3,1)
Thanks in advance to whoever has the patience to answer me.

You can't do this, except if you accept an exponential blowup in atoms, and even then it isn't easy. You can't enumerate answer sets inside a single answer set: the complexity of enumeration is higher, so you can't solve n NP problems (enumeration) inside one NP problem (unless you describe n NP problems in your encoding, which is not practical).
Maybe you can describe what you are trying to achieve; there may be other ways to do it.
Your problem could be solvable with disjunctive rules, as they raise the complexity to NP^NP (Σ₂ᵖ).
Edit: I found that https://github.com/potassco/guess_and_check could do exactly what you are looking for: it lets you describe an NP^NP problem without manually writing the disjunctive logic program.
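If the ids are mainly needed for post-processing, a practical alternative is to enumerate the answer sets outside ASP and number them there. Below is a minimal sketch using the clingo Python API (assuming clingo 5.x is installed; newScope is just printed text here, not an ASP atom):

from clingo import Control

PROGRAM = """
argument(1..3).
element(1).
{ scope(A,U) : element(U) } :- argument(A).
#show scope/2.
"""

ctl = Control(["0"])  # "0" = enumerate all models
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])

with ctl.solve(yield_=True) as handle:
    for model_id, model in enumerate(handle, start=1):
        atoms = model.symbols(shown=True)
        if not atoms:
            print("newScope({},empty)".format(model_id))
        for atom in atoms:  # each atom is a scope(A,U) fact
            args = ",".join(str(a) for a in atom.arguments)
            print("newScope({},{})".format(model_id, args))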

You can also have a look at Section 3.3.2 of the following paper:
https://arxiv.org/abs/2008.06692
The method presented there allows you to compute m different stable models of a logic program by finding m 1-diverse stable models. But note that m has to be given as input, hence the method is not directly applicable to the problem of computing one stable model that contains all stable models of a logic program.

Multi-State dependent Variable Encoding

I am a novice in this and am trying to gain some knowledge of the pre-processing part. I have the following query:
I am aware that if your dependent variable (y) has two states, you might not need feature scaling on the dependent variable. But what about a multi-state dependent variable like Customer_index (probable customer) with the states no, yes, maybe, NA?
If I use OneHotEncoder or LabelEncoder, I might get 0, 1, 2, 3.
But I believe that when I fit a model to this, the algorithm will treat these values as ordinal/weighted.
How can I handle this?
A useful article I found:
https://towardsdatascience.com/all-about-categorical-variable-encoding-305f3361fd02
Thanks for help in advance.
I have done some research on my question above and found that once we apply encoding, we must still scale our data so that the encoded features do not bias the weights, and that is true for the multi-state variable as well.
Let me know your thoughts.
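For example, a minimal one-hot encoding sketch with pandas (column and state names taken from the question above):

import pandas as pd

# A multi-state variable whose states have no natural order
df = pd.DataFrame({"Customer_index": ["no", "yes", "maybe", None]})

# One-hot (dummy) encoding: each state gets its own 0/1 column, so a model
# cannot misread the categories as ordered values 0 < 1 < 2 < 3
encoded = pd.get_dummies(df["Customer_index"], prefix="cust", dummy_na=True)
print(encoded)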

How to pass multiple variables from one model to another model (inner/outer)

Let's say we have the following model:
Collector:
model Collector
  Real collect_here;
  annotation(defaultComponentPrefixes="inner");
end Collector;
and the following model potentially multiple times:
model Calculator
  outer Collector collector;
  Real calculatedVariable = 2*time;
equation
  calculatedVariable = collector.collect_here;
end Calculator;
The code above works if Calculator is present only once in the system to be simulated. If the model exists more than once, I get a singular system. This is demonstrated by the example below: switching the parameter works gives either a working or a failing system.
model Example
  parameter Boolean works = true;
  inner Collector collector;
  Calculator calculator1;
  Calculator calculator2 if not works;
end Example;
Using an array inside the collector to pass multiple variables doesn't solve it either.
Another possible way is to use connectors, but I only made that work with a single Calculator.
Using multiple instances of Calculator does break the model, as the single variable calculatedVariable will have multiple equations trying to compute its value. Therefore Dymola complains that the system is structurally singular, in this case meaning that there are more equations than variables in the resulting system of equations.
To give a bit more of an insight: actually checking Collector will fail, since from Modelica 3.0 on every component has to be balanced (meaning it has to have as many unknowns as equations), which is not the case for Collector, as it has one unknown but no equation. This strongly limits the possible applications of the inner/outer construct, as basically every variable has to be computed where it is defined.
In the given example this is compensated in the overall system if exactly one Calculator is used, so this single combination will work. Although it works, it is something that should not be done, for the obvious reason of being very error-prone (and all sub-models should pass the check).
Your question on how to solve this issue is actually missing a description of what the issue is. There are some cases in my mind that your approach could be useful for:
You want to plot multiple variables from a single point, which would be collector. For this purpose "variable selections" should be the most straightforward way to go: see Dymola Manual Vol. 1, Section "4.3.11 Matching and variable selections" on how to apply them.
You want to carry out some mathematical operation on those variables. Then it could be useful to have a vectorized input of variable size, which enables an arbitrary number of connections to this input. For an example of this, take a look at Modelica.Blocks.Math.MultiSum.
You want to route multiple signals between different models (which is unlikely judging from your description, but still): then expandable connectors would be a good possibility (see the sketch below). To get an impression of what they do, take a look at Modelica.Blocks.Examples.BusUsage.
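For illustration, a minimal sketch of that last option (all names here are made up; note that each Calculator has to publish its signal under a unique name on the bus):

expandable connector CollectBus
  // intentionally empty; variables appear on the bus as they are connected
end CollectBus;

model Calculator
  Modelica.Blocks.Interfaces.RealOutput calculatedVariable = 2*time;
end Calculator;

model Example
  CollectBus bus;
  Calculator calculator1;
  Calculator calculator2;
equation
  connect(calculator1.calculatedVariable, bus.value1);
  connect(calculator2.calculatedVariable, bus.value2);
end Example;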
Hope this helps, otherwise please specify more clearly what you actually want to achieve with your code.
I prepared a demonstrative library for such a scenario some days ago. You can access it at https://gist.github.com/beutlich/e630b2bf6cdf3efe96e5e9a637124fe1. If you read the documentation on Example2 you can see the link to an article by H. Elmqvist et al., which is the clue to your problem: you need a connector, and inherited connects from every Calculator to the one Collector.

What alternatives are there to dynamic patching (to deal with variables passed at creation time)?

I have heard people describe dynamic patching as a bit of a hack or at risk of breaking in future releases of Pd. This is reasonable enough, but it seems to imply that there are alternatives when building abstractions.
Dynamic patching seems to be useful for both instantiating a variable number of objects and connecting up to a variable number (a number defined at creation time - I personally don't need it to change after the fact, at this stage) of inlets and outlets within an abstraction.
Now I understand that the [clone] object can solve the problem of creating objects. I can see, too, that looping through send and receive objects would solve much of the connection issues with careful planning, but what I do not understand is how objects like [trigger], [route] and [select] can be adjusted or replaced. For example, I fail to see how you would avoid using dynamic patching to create a [trigger f f] when the creation arg to your abstraction is 2, and a [trigger f f f] when the creation arg is 3. The same goes for [route], [select] and similar objects.
EDIT: The original question was perceived as too vague. I later posed a follow-up question in the comments which should really be here instead. As it happens, the answer to the follow-up provided a good answer to the original question, in my opinion. So to summarise and hopefully clarify, I was after a few "tools" to use when building abstractions so that I could limit my use of dynamic patching, if possible. These tools turned out to be:
using send and receive instead of inlets and outlets (although [initbang] can be used for creating inlets and outlets at instantiation).
using [clone]
chaining trigger, route and select objects using send and receive - for example, using [t b b] - [t b b] instead of [t b b b]. This means that the number of arguments in these objects can be defined at creation time with the help of [clone] for example. This is discussed in the Pd mailing list.
using [initbang] as indicated in the answer below.
After having attempted to build a drum machine with presets and an arbitrary number of tracks with my limited knowledge of dynamic patching techniques, I realised that there must be many ways of avoiding the problems I had when doing this, which were several! Of course, some things have to be done with dynamic patching and that's fine. It's just about creating manageable code.
This is really an answer to the "follow-up question" in the comment¹, rather than to the original question (which I consider too broad to be answered):
Is there a way to define an abstraction that has an argument that defines how many outlets the abstraction exposes?
Sure, just use $1 for that.
E.g. [gates 10] could create 10 outlets...
Presumably it could dynamically patch itself, but that doesn't seem like a good idea.
well, if you want an abstraction to have a dynamic API (that is: a variable number of inlets/outlets), then there is no way around dynamic patching.
Is this a good case for building your own external?
depends on what you actually want the external to do.
the iemguts library (disclaimer: of which I am the author) has everything in place to allow you to dynamically patch what you need.
Most important, there is [initbang], to create iolets before Pd tries to connect them (if you use [loadbang], the iolets will be created only after Pd has already failed to connect them).
It also includes a [canvasargs] object, which allows you to get all the arguments of the abstraction (this simplifies, e.g., the task of having the number of outlets equal the number of arguments, as [trigger] or [pack] do).
if instead you want to wrap the entire functionality of your abstraction into an external, that's of course also possible (and pretty simple in the realm of C).
Also keep in mind that others might have already coded what you need.
¹ Please don't abuse the comment field for follow-up questions. Either update your original question (if the follow-up is a mere clarification of the original question) or post a new one.

NUnit Theory replaces Test?

I have seen many theories about Theory, but the truth is that none of them is good enough to explain why one should not always use Theory. I do not see a reason (apart from... well... the fact that the words Theory and Test are not the same words) not to always use a Theory.
The funny thing is that in all the examples out there you could easily set up your test to be either a Test or a Theory. I am sure I am missing something here, but I do not see the need to put myself in philosophical uncertainty about which type of attribute I should use.
Let's suppose that the difference is only for the sake of self-documented code. Then why do you use Datapoint and DatapointSource for a Theory and something else for a Test?
The truth is that I feel nobody has come up with a simple and clean answer that shows where the difference really lies. At least one example where Theory makes absolutely no sense and Test fits nicely, or the other way around...
As a programmer I think: if the answer is not simple, there is something wrong there, so help me see what I am missing.
Since you are referring to NUnit, I'll answer WRT what NUnit means by a theory. Note that this is entirely different from how xUnit.Net uses the term.
I got the idea of theories from JUnit and particularly from the work of David Saff. Here's one paper on the subject: http://groups.csail.mit.edu/pag/pubs/test-theory-demo-oopsla2007.pdf Google "saff theories" and you'll find more.
Basically, when creating a test you sometimes have a theory about how the tested code should work. (In this answer, lower case theory is the English word while Theory is the NUnit thing) This is especially common in mathematical reasoning. For example, consider a program that calculates square roots. I could theorize that the square root of any non-negative number is a value such that it gives the original number when multiplied by itself. That's a completely self-contained statement about the computation.
Using NUnit, I could write a test like this...
[Theory]
public void SqrtTest(double value)
{
    Assume.That(value >= 0);  // negative inputs make this case inconclusive
    double answer = SquareRootOf(value);
    Assert.That(answer * answer, Is.EqualTo(value).Within(1e-10));  // tolerance for floating point
}
This test works no matter what number we give it. If the number is negative, the result is inconclusive and doesn't affect the overall outcome. If it is non-negative, an actual assertion is performed and the test either succeeds or fails.
In an ideal world, the provided values can come from anywhere. In JUnit, they come from Datapoints, and I copied that. I also permitted them to be programmer-specified, mainly as an interim solution. Ultimately, the idea was that the test framework would support a range of ways to generate data for a Theory, without the intervention of the programmer or tester.
Unfortunately, we are still waiting for that last bit. :-)
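In the meantime, values are typically supplied by the programmer. A minimal sketch of that variant, to go with the test above (the field name and values here are made up):

// Any compatible public field or property marked as a datapoint source feeds
// the Theory: NUnit runs SqrtTest once for every double in this array.
[DatapointSource]
public double[] Values = new double[] { -1.0, 0.0, 2.0, 9.0 };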
Bottom line, I think you should use a Theory when you have a theory. Use a Test when you just have examples without any logic binding them together. IME that's what happens in most business applications.
One of these days I hope to write a pretty long chapter about Theories.

Where can I find good and simple test functions for evolutionary algorithms?

I've started learning evolutionary algorithms (GA, PSO, ...) and I want to implement them in Matlab and play with different parameters to get a hold of the algorithms' structures and how they work.
My problem is that I don't have any simple test functions to use. For example, functions with multiple peaks/valleys, one global minimum and multiple local ones, ... Nothing complicated, just some simple mathematical functions with their formulas.
I can try to make some up with putting some sin/cos/exp together, but it'll take time and is really frustrating!
Does anybody know of a resource (site, book, ...) that has these listed?
Here is a set from our very own Rody Oldenhuis:
Test functions
You might want to try those in the BBOB benchmark set. There is also some nice accompanying literature to this set, in the form of the corresponding GECCO workshop.
Some of the classic functions were already mentioned by AGS and include Rastrigin, Rosenbrock and Generalized Rosenbrock, Schwefel, Sphere, Griewank, etc. We have also implemented these and more in HeuristicLab, so if you want to experiment you can also try that (PSO and GA are included as well).
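To get started right away, here are a few of those classics as one-line Matlab definitions (a sketch; x is a vector, and the stated minima are the conventional ones):

% Sphere: unimodal; global minimum 0 at x = 0
f_sphere     = @(x) sum(x.^2);

% Rastrigin: many regularly spaced local minima; global minimum 0 at x = 0,
% usually evaluated on [-5.12, 5.12]^n
f_rastrigin  = @(x) 10*numel(x) + sum(x.^2 - 10*cos(2*pi*x));

% Rosenbrock: narrow curved valley; global minimum 0 at x = (1, ..., 1)
f_rosenbrock = @(x) sum(100*(x(2:end) - x(1:end-1).^2).^2 + (1 - x(1:end-1)).^2);

% Griewank: many widespread local minima; global minimum 0 at x = 0
f_griewank   = @(x) 1 + sum(x.^2)/4000 - prod(cos(x(:)'./sqrt(1:numel(x))));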