If requirements are used instead of constraints, what is the effect on the optimization performance? - AnyLogic

In the Optimization Experiment, AnyLogic allows the use of the top-level agent root in the requirements expression but does not allow it in the constraints.
Although the AnyLogic help says that root can be used in the constraints expression, the help is wrong: root cannot be used. Please check the answer to this question:
Error - can not use root. in the constraints expression - AnyLogic
So, in this case, to avoid the root error: use requirements, or change your constraints so they do not need access to root.
If I change my constraints so they do not need access to root, I will need to reduce the number of parameters (decision variables), which I'm trying to avoid as much as possible until I find there is no other way.
However, I'm afraid that if I use requirements instead of constraints, this will reduce the optimization performance. As you know, constraints reduce the search space (this is mentioned in the help, too), but the help does not say whether a requirement does the same (although it does say that requirements help guide the search to a solution):
"A requirement can also be a restriction on a response that requires its value to fall within a specified range."
Does this mean that a requirement is exactly the same as a constraint (in terms of reducing the search space)?
Given the above, if I use requirements instead of constraints (because root is not allowed in the constraints expression), what is the effect on performance?

So, let's review the concepts here...
Constraints
First, constraints are evaluated before the simulation run, and they are used only to check whether your parameters fulfill certain conditions:
param1 and param2 can each take values between 0 and 10,
but the constraint can be that the sum of both has to be below 10.
This effectively reduces the search space, since there would be no point in running the model for param1=8 and param2=8 if that combination is not within the feasible region.
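As a concrete sketch of the kind of condition a constraint encodes (param1 and param2 are the hypothetical parameters from the example above; the exact form depends on how you fill in the experiment's constraints table):

    // Evaluated BEFORE each run, on the candidate parameter values only:
    param1 + param2 < 10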
Requirements
Requirements are evaluated AT THE END of the simulation; that's why you can use root, so you can evaluate not only the parameters but any variable in the simulation...
For instance, if a variable ends up above 10 when the system only allows it a maximum value of 8, then the solution is not feasible.
This means that requirements and constraints are very different, but they both flag infeasible solutions...
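A requirement, by contrast, is a boolean expression checked after the run, and it may reach into the model through root (myVariable is a hypothetical model variable):

    // Evaluated AFTER each run; root is available here:
    root.myVariable <= 8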
Other options
So, from the optimization experiment side, you have only these two options: evaluate the parameters before the simulation runs, or evaluate anything in your model after the simulation has run.
Of course, there's another option you can use, which is to define the restrictions inside the simulation itself. If your model is supposed to run for 1 day, but after 1 hour the variable in question ends up over 10 (which is not allowed), you can just call finishSimulation() to end the run early. Your requirements will then evaluate this variable after only 1 hour, reducing the time it took to run that simulation while still marking the results as infeasible.
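A minimal sketch of that early exit, assuming a hypothetical model variable myVariable with an allowed maximum of 10 (in AnyLogic this Java snippet could sit in the action of a recurring event in the top-level agent):

    // Checked periodically while the model runs:
    if (myVariable > 10) {
        // End this run early; the requirement expression will still see
        // the violating value and mark the solution as infeasible.
        finishSimulation();
    }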
Conclusion
Obviously, if you use requirements instead of constraints, you will have to run more simulations than you want, so the optimization will find a solution much more slowly; you shouldn't do that, and there's no reason to do that.
Of course, I have no idea what you are trying to optimize, but this is how all of this works, and even though the help documentation may contain an error, it wouldn't make sense to use root in the constraints.


Query on various addressing modes?

Can an array be implemented using only indirect addressing mode? I think we can only access the first element, but what about the other elements? For those, I think, we'll have to use immediate addressing mode.
An add instruction can generate an address in a register.
A CPU with only [register] addressing modes would work, but it would need more instructions than one with an immediate displacement as part of its load/store instructions.
Instruction set design isn't about what's necessary for computation to be possible, but rather about how to make it efficient.
related:
Why does the lw instruction's second argument take in both an offset and regSource?
What is the minimum instruction set required for any Assembly language to be considered useful? (note the difference between useful and Turing-complete.)

Optaplanner: Generating a partial solution to VRP where trucks and/or stops may remain unassigned based on Time windows

I am solving a variation on the vehicle routing problem. The model worked until I implemented a change whereby certain vehicles and/or stops may remain unassigned, because the construction filter does not allow the move due to time window considerations (late arrival is not allowed).
The problem size is 2 trucks / 3 stops. truck_1 has 2 stops (Stop_1 and Stop_2) assigned to it, and consequently 1 truck and 1 stop remain unassigned, since truck_2 would arrive late at Stop_3.
I have the following error:
INFO o.o.c.i.c.DefaultConstructionHeuristicPhase - Construction Heuristic phase (0) ended: step total (2), time spent (141), best score (-164hard/19387soft).
java.lang.IllegalStateException: Local Search phase started with an uninitialized Solution. First initialize the Solution. For example, run a Construction Heuristic phase first.
at org.optaplanner.core.impl.localsearch.DefaultLocalSearchPhase.phaseStarted(DefaultLocalSearchPhase.java:119)
at org.optaplanner.core.impl.localsearch.DefaultLocalSearchPhase.solve(DefaultLocalSearchPhase.java:60)
at org.optaplanner.core.impl.solver.DefaultSolver.runPhases(DefaultSolver.java:213)
at org.optaplanner.core.impl.solver.DefaultSolver.solve(DefaultSolver.java:176)
I tried to set the planning variable to null (nullable = true), but it seems this is not allowed for chained variables.
I am using Optaplanner 6.2.
Please help,
Thank you,
Piyush
Your construction filter may be too restrictive: it can prevent the construction heuristic from ever creating an initialized solution. Remove the time window constraint from the construction filter and add it as a hard score constraint in your score calculator instead.
From the OptaPlanner docs:
Instead of implementing a hard constraint, it can sometimes be built in. For example: Lecture A should never be assigned to Room X, but it uses a ValueRangeProvider on the Solution, so the Solver will often try to assign it to Room X anyway (only to find out that this breaks a hard constraint). Use a ValueRangeProvider on the planning entity, or filtered selection, to define that Course A should only be assigned a Room different from X.
This can give a good performance gain in some use cases, not just because the score calculation is faster, but mainly because most optimization algorithms will spend less time evaluating infeasible solutions. However, usually this is not a good idea, because there is a real risk of trading short-term benefits for long-term harm:
Many optimization algorithms rely on the freedom to break hard constraints when changing planning entities, to get out of local optima.
Both implementation approaches have limitations (feature compatibility, disabling automatic performance optimizations, ...), as explained in their documentation.
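As a rough sketch of that change (VehicleRoutingSolution, Stop, getStopList(), getArrivalTime(), and getDueTime() are hypothetical domain names, not OptaPlanner API), the late-arrival rule could move into an EasyScoreCalculator, so the construction heuristic can always finish with a fully initialized, even if infeasible, solution:

    import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;
    import org.optaplanner.core.impl.score.director.easy.EasyScoreCalculator;

    public class VrpScoreCalculator implements EasyScoreCalculator<VehicleRoutingSolution> {

        @Override
        public HardSoftScore calculateScore(VehicleRoutingSolution solution) {
            int hard = 0;
            int soft = 0;
            for (Stop stop : solution.getStopList()) {
                Long arrival = stop.getArrivalTime();
                if (arrival != null && arrival > stop.getDueTime()) {
                    // Late arrival now breaks a hard constraint instead of
                    // being a forbidden move.
                    hard -= (int) (arrival - stop.getDueTime());
                }
                // ... accumulate the soft score (e.g. travel time) here ...
            }
            return HardSoftScore.valueOf(hard, soft);
        }
    }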

How to implement deterministic single threaded network simulation

I read about how FoundationDB does its network testing/simulation here: http://www.slideshare.net/FoundationDB/deterministic-simulation-testing
I would like to implement something very similar, but cannot figure out how they actually implemented it. How would one go about writing, for example, a C++ class that does what they do? Is it possible to do the kind of simulation they do without any code generation (as they presumably use)?
Also: how can a simulation be repeated if it contains random events? Each time, the simulation would have to choose a new random value and would thus not be the same run as the one before. Maybe I am missing something here... I hope somebody can shed a bit of light on the matter.
You can find a little bit more detail in the talk that went along with those slides here: https://www.youtube.com/watch?v=4fFDFbi3toc
As for the determinism question, you're right that a simulation cannot be repeated exactly unless all possible sources of randomness and other non-determinism are carefully controlled. To that end:
(1) Generate all random numbers from a PRNG that you seed with a known value.
(2) Avoid any sort of branching or conditionals based on facts about the world which you don't control (e.g. the time of day, the load on the machine, etc.), or if you can't help that, then pseudo-randomly simulate those things too.
(3) Ensure that whatever mechanism you pick for concurrency has a mode in which it can guarantee a deterministic execution order.
Since it's easy to mess all those things up, you'll also want to have a way of checking whether determinism has been violated.
All of this is covered in greater detail in the talk that I linked above.
In the sims I've built, the biggest issue with repeatability ends up being proper seed management (as per the previous answer). You want your simulations to give different results only when you supply a different seed to your random number generators than before.
After that, the biggest issue I've seen tends to be making sure you don't iterate over collections with nondeterministic ordering. For instance, in Java, you'd use a LinkedHashMap instead of a HashMap.
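A minimal Java sketch of both points, assuming a single-threaded simulation loop (the seed value and the nodeLoad map are illustrative):

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Random;

    public class DeterministicSim {
        // One PRNG seeded with a known value: re-running with the same
        // seed replays exactly the same sequence of "random" events.
        private final Random rng = new Random(42L);

        // LinkedHashMap iterates in insertion order, so every run visits
        // the entries identically; HashMap makes no such guarantee.
        private final Map<String, Integer> nodeLoad = new LinkedHashMap<>();

        void step() {
            for (Map.Entry<String, Integer> node : nodeLoad.entrySet()) {
                // Draw all randomness from the seeded rng, never from
                // Math.random() or the wall clock.
                if (rng.nextDouble() < 0.01) {
                    System.out.println("dropping packet at " + node.getKey());
                }
            }
        }
    }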

resolving same generic-name functions via size of argument

I'd like to have a function interface that resolves which specific procedure is to be used depending on the size of an array argument. I could then, for example, have a program that handles vectors with at most N elements with one procedure and longer vectors with another. As far as I know, Fortran resolves generic names using only the types, ranks, and keywords of the arguments. Why is that? Aren't compilers intelligent enough to differentiate between arrays of different sizes, or is it intrinsically impossible to write a compiler that does that?
Is there a workaround to achieve the desired functionality? I know, of course, that I could write a subroutine with an IF clause to sort out which procedure to use for which array size. Wouldn't that cost more CPU time, though?
Resolution of specific procedures has been designed to be something that can be done at compile time. In the general case, the size of an array is a run-time concept.
If you know at compile time that a certain specific procedure is more suitable for certain input, then you can call that specific procedure directly.
Otherwise, use IF to test and branch on the size (if the language had this sort of magic, that's all it would be doing behind the scenes anyway). That test and branch is likely to be substantially faster than calling reshape at run time anyway.
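The question is about Fortran, but the wrapper described above is just an ordinary run-time branch. An illustrative sketch (written in Java to keep the examples in this collection in one language; SMALL_N, processSmall, and processLarge are hypothetical):

    class SizeDispatch {
        static final int SMALL_N = 8; // hypothetical size threshold

        static double process(double[] v) {
            // v.length is only known at run time, which is exactly why no
            // compiler could have resolved this dispatch statically.
            return (v.length <= SMALL_N) ? processSmall(v) : processLarge(v);
        }

        // Stand-ins for the two specific procedures:
        static double processSmall(double[] v) { return v[0]; }
        static double processLarge(double[] v) { return v[v.length - 1]; }
    }

The branch itself costs a single comparison per call, which is negligible next to the work the procedures themselves do.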

How to start working with a large decision table

Today I've been presented with a fun challenge and I want your input on how you would deal with this situation.
So, the problem is the following (I've converted it to demo data, as the real problem wouldn't make much sense without knowing the company dictionary by heart).
We have a decision table with a minimum of 16 conditions. Because it is impossible to manage all of them (2^16 possibilities), we've decided to list only the exceptions. Like this:
As an example I've added only 10 conditions, but in reality there are (for now) 16. The basic idea is that we have one baseline (the default), which is valid for everyone, plus all the exceptions to this default.
Example:
You have a foreigner who is also a pirate.
If you go through all the exceptions one by one, condition by condition, you remove the exceptions that have at least one failing condition. In the end you'll be left with the following two exceptions that are valid for our case. The match is on the IsPirate and IsForeigner conditions. But as you can see, there are 2 results here, well, 3 actually if you count the default.
Our solution
Now, what we came up with to solve this is that in the GUI where you add these exceptions, an algorithm should run that checks for such cases and forces you to define the exception more specifically. This is still only a theory and hasn't been tested, but we think it could work this way.
My Question
I'm looking for alternative solutions that make the rules manageable and prevent the problem I've shown in the example.
Your problem seems to be the resolution of conflicting rules. When multiple rules match your input (your foreigner and pirate) and they end up recommending different things (your CanGetJob and CanBeEvicted), you need a strategy for resolving the conflict.
What you mentioned is one way of resolution, which is to remove the conflict in the first place. However, this may not always be possible, and it is not always desirable, because when a user adds a new rule that conflicts with a set of old rules (which he/she did not write), the user may not know how to revise it to remove the conflict.
Another possible resolution method is prioritization. Mark a priority on each rule (based on things like the user's own authority), sort the matching rules according to priority, and apply them in ascending order of priority. This usually works and is much simpler to manage (e.g., everybody knows that the top boss's rules are final!).
Prioritization may also be used to mark a certain rule as a "global override". In your example, you may want to make "IsPirate" an override rule, which means that it overrides the settings for normal people. In other words, once you're a pirate, you're treated differently. This makes it very easy to design a system in which a bunch of normal business rules govern 90% of the cases, and a set of "exceptions" is treated differently, automatically overriding certain things. In this case, you should also consider making "?" available in the output columns as well.
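A compact sketch of that resolution in Java (Person, Rule, and the single CanGetJob output are hypothetical simplifications of the table):

    import java.util.Comparator;
    import java.util.List;
    import java.util.function.Predicate;

    record Person(boolean isPirate, boolean isForeigner) {}
    record Rule(int priority, Predicate<Person> matches, boolean canGetJob) {}

    class RuleResolver {
        // The highest-priority matching rule wins; the default applies
        // when no exception matches at all.
        static boolean canGetJob(List<Rule> rules, Person p, boolean defaultValue) {
            return rules.stream()
                    .filter(r -> r.matches().test(p))
                    .max(Comparator.comparingInt(Rule::priority))
                    .map(Rule::canGetJob)
                    .orElse(defaultValue);
        }
    }

The conflicting foreigner-plus-pirate matches from the question are then resolved at evaluation time instead of being forbidden at edit time.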
One other possible resolution method is to attach attributes to each of your conditions. For example, certain conditions must have no "zeros" in order to pass ("?" doesn't matter); some conditions must have at least one "one" in order to pass. In other words, mark each condition as either "AND", "OR", or "XOR". Some popular file-system security uses this model. For example, CanGetJob may be AND (you want to be stringent on the right to work), while CanBeEvicted may be OR: you may want to evict even a foreigner if he is also a pirate.
An enhancement of the AND/OR method is to provide a threshold that the total result must reach before that condition passes. For example, if you put CanGetJob at a threshold of 2, then it must get at least two 1's in order to return 1. This is sometimes useful for conditions that are not clearly black-and-white.
You can mix resolution methods: e.g. first prioritize, then use AND/OR to resolve rules with similar priorities.
The possibilities are limitless and really depend on what your actual needs are.
To me, this problem calls to mind a business rules engine, where there is no known algorithm to derive the outputs from the inputs (e.g., using boolean logic), and the user (typically some sort of administrator) has to define all or some of the logic himself.
This might sound like overkill, but OTOH it provides virtually limitless extension capabilities: you don't have to code any new business logic, you just define a new rule set.
As I understand your problem, you are looking for a nice way to visualise the editing of these rules. But this all depends on your programming language and the tool you select. Java, for example, has JBoss Drools. Quoting their page:
Drools Guvnor provides a (logically centralized) repository to store your business knowledge, and a web-based environment that allows business users to view and (within certain constraints) possibly update the business logic directly.
You could possibly use this generic tool or write your own.
Everything depends on what your actual rules will look like. Rules like 'IF has an even number of these properties THEN' would be painful to represent in this format, whereas rules like 'IF pirate and not geek THEN' are easy.
You can "avoid the ambiguity" by stating that you'll always take the first actual match; in other words, your rules have a priority. You'd then want to flag rules that have no effect because they are "shadowed" by rules higher up. They're not hard to find, so it's something your program should do.
Your interface could also indicate groups of rules where rules within the group can be in any order without changing the outcomes. This will add clarity to what the rules are really saying.
If some of your outputs are relatively independent of the others, you will also get a more compact and much clearer table by allowing question marks in the output columns. In that design, the scan for the first matching rule is done once per output. Consider, for example, the case where "HasChildren" is the only factor relevant to "CanBeEvicted": with question marks in the outputs (= no effect), you could halve the number of exception rules.
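A hypothetical two-row fragment of what that looks like, where "?" in an output column means "no effect on this output":

    Exception   IsPirate  HasChildren  |  CanGetJob  CanBeEvicted
    Pirate      1         ?            |  0          ?
    Parent      ?         1            |  ?          0

The eviction scan then only ever needs the "Parent" row, no matter how many pirate-related rows govern CanGetJob.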
My background for this is circuit logic design, not business logic. What you're designing is similar to, but not the same as, a PLA (programmable logic array). As long as your actual rules are close to sum-of-products form, it can work well. If your rules aren't, for example the "even number of these properties" rule, then the grid-like presentation will break down in a combinatorial explosion of cases. If your rules are arbitrary, your best hope is a clearer, more compact presentation, with either equations or with diagrams like a circuit diagram. To be avoided, if you can.
If you are looking for a decision engine with a GUI, then you can try this one: http://gandalf.nebo15.com/
We just released it; it's open source and production-ready.
You probably need some kind of inference engine. Think about doing it in Prolog.