MiniZinc: Relax constraints when inconsistency found

When getting the following message in MiniZinc:
WARNING: model inconsistency detected
it means that the model is UNSATISFIABLE due to a specific constraint on some line of the model. Is there a way, apart from commenting out the constraint that leads to the inconsistency, to "relax" this constraint so that MiniZinc recalculates a solution?

(I copied my comment to a "formal" answer, so now you can accept the answer.)
In the newest MiniZinc version (v2.2.0) there is a solver, "findMUS", that is meant for this kind of problem: it tries to identify the culprit constraints.
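If the goal is not only to locate the culprit but to actually relax it, one common pattern (a different technique than findMUS, sketched here with made-up variables and a made-up constraint) is to turn the hard constraint into a soft one: reify it into a Boolean and penalize its violation in the objective.

    % Hard version that may render the model UNSATISFIABLE:
    %   constraint x + y <= 10;
    var 0..100: x;
    var 0..100: y;
    var bool: c_ok = (x + y <= 10);          % reified: true iff the original constraint holds
    var 0..1: penalty = bool2int(not c_ok);  % 1 when the constraint is violated
    solve minimize penalty;                  % prefer solutions that satisfy the constraint
    output ["x=\(x), y=\(y), violated=\(not c_ok)\n"];

With several soft constraints one would typically minimize a (possibly weighted) sum of the individual penalties, so the solver drops the cheapest constraints first.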

Related

How to fix the 'Error in the model during iteration' in AnyLogic

I've built a model in which a fleet of trucks delivers multiple orders to different customers. This model works fine when I perform one simulation experiment. However, when I try to run a parameter variation, the following error occurs: 'Error in the model during iteration x'.
A question about this topic was asked earlier here:
NullPointerException during Parameter Variation Experiment with agent statistics
I have tried the tips given in that post but none of them seems to solve the problem.
I have replaced all the conditional transitions with messages in my state chart (see figure).
My data sets are stored in the database, so that cannot be the problem.
I can't get my head around why the model works with some seed values and not with others. I understand that finding the modelling flaw from just the snapshot is hard, but any tips on how I could find the mistake would be helpful.
PS: I have the Learning Edition, so there is no debugger.
Edit:
The error happens at a specific line of code in the transitions leading out of the state "movingToClient1". The line that seems to cause the error is:
Order order = orderStore.myOrdercollection.get(0);
The iterations seem to work. However, I need it to be equal to one (to specifically measure certain KPIs of the last route). Hopefully, this helps in finding a solution.
The most likely cause of the problem is that your ArrayList called collectionOfOrders is misused.
At some point, in the "on enter" action of one of your states, you do:
collectionOfOrders.get(something)
when collectionOfOrders is actually empty.
Sometimes multiple things happen at the same time in your model: when you ask whether collectionOfOrders has exactly one element, another of your truck agents does the same check, both return true, and one of them will then run into the issue.
This happens only with certain seeds, because it occurs with a very low probability.
This is my guess, given the information currently provided.
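A defensive check along these lines, written as a hedged sketch in plain Java for the state's "on enter" action (the names collectionOfOrders and Order are taken from the question; traceln is AnyLogic's built-in logging call):

    // Guard the access instead of assuming the list still holds an order
    // by the time this state is entered.
    if (!collectionOfOrders.isEmpty()) {
        Order order = collectionOfOrders.get(0);
        // ... process the order ...
    } else {
        // Another truck agent may have taken the order at the same model time.
        traceln("collectionOfOrders was empty on entering the state");
    }

This does not remove the race between the truck agents, but it turns the hard crash into something you can trace and reason about.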
Due to the insight given by Felipe and Benjamin, I found the problem in my model. My model starts with orders being generated in a Source block with a specified arrival rate of one. A rate of 1 is equivalent to exponentially distributed interarrival times with mean = 1/rate (https://anylogic.help/library-reference-guides/process-modeling-library/source.html). This means that for some seed values it is possible that orders are generated at the same time. Changing the Source setting from 'rate' to 'interarrival time' therefore solved the problem.

What does "Canceled on identification as a pivot" mean?

I use a postgres database and I see the following error message:
could not serialize access due to read/write dependencies among transactions
DETAIL: Reason code: Canceled on identification as a pivot, during conflict out checking.
HINT: The transaction might succeed if retried.
The issue is that I have two transactions which are executed concurrently and they can influence each other.
This question is about understanding the error message itself, not about fixing the issue. I would love it if somebody could walk me through it. Some questions I have:
What is the mentioned "identification"?
What is a pivot in this case?
What is "conflict out checking"?

More information on the 'Not Enough Spatial Data' problem

After many attempts to make a version of SpectatorView work using UNet, I was able to successfully communicate between client and server, using Azure Spatial Anchors to share anchor information. Many errors have occurred along the way, including an unknown error that appeared to be resolved by deleting the MRTK 2.2.0 library. That has been resolved, but now a problem occurs at random in the function CloudManager.CreateAnchorAsync, which fails with exactly this message: Not enough Neighborhood Spatial Data was available to complete the desired Create operation.
The problem does not happen consistently but at random, which makes it impossible to find a pattern and think of a solution.
Note that the CloudManager.SessionStatus.RecommendedForCreateProgress variable is above 1 before the creation function is called.
Could you give me more information about this problem, and what I could do in my project to solve it?

Reversing a legacy DB in MDriven, undocumented error

What is the correct action to take for solving the error ""? This occurred after "pressing play" with persistency towards a legacy DB (read-only access).
Tough one - very little to go on. Maybe your association end requires 2 attributes but the primary key of the target only has 1 attribute marked as key.
Try to downscale your model until it works - then slowly add things back. See if that process helps you to understand the error message better.

How to avoid arithmetic errors in PostgreSQL?

I have a PostgreSQL-powered web app that does some non-essential, simple calculations for reporting purposes, involving values fetched from outside sources, multiplication and division. Today a multiplication whose result exceeded the value domain of a numeric(10,4) field led to an application crash. It would have been much better if the relevant field had just been set to null and a notice been generated. The way the bug worked was that a wrong value in one field caused several views to become unavailable, and while a missing value in that place would have been sad but no big problem, the blocked views are still essential for the app to work.
Now I'm aware that in this particular case, setting that field to numeric(11,4) would have prevented the bailout, but that is, of course, only postponing the issue at hand. Since the error happened in a function call, I could also have written an exception handler; lastly, one could check either the multiplicands or the result for sane values (but that is in itself a little strange, as I would either have to make a guess based on magnitudes or else do the multiplication in another numeric type that can probably handle a value whose magnitude is, because of the external sources, in principle not known to me with certainty).
Exception handling is probably what this will boil down to, which, however, entails that all numeric calculations will have to be done via PL/pgSQL function calls, and will have to be implemented in many different places. None of the options seems particularly maintainable or elegant. So the question is: Can I somehow configure PostgreSQL to ignore some or all arithmetic errors and use default values in such cases? If so, can that be done per database or will I have to configure the server? If this is impossible or a Bad Idea, what are the best practices to avoid arithmetic errors?
Clarification: This is not a question about how to rewrite numeric(10,4) so that the field can hold values of 1e6 and above, and also not so much about error handling in the application that uses the DB. It's more about whether there is an operator, a function call, a general configuration or a general pattern that is most commonly recommended to deal with situations where a (non-essential) computation normally results in a number (or in fact another value type), except with some inputs that cause exceptions, in which case the result could perfectly well and safely be discarded. Think of Excel printing #### when a cell is too narrow for the digits to be displayed, or JavaScript giving you NaN in place of arithmetic errors. Returning null instead of raising an exception may be a bad idea in general programming, but it is legitimate in this specific case.
Observe that PostgreSQL's error codes do include e.g. invalid_argument_for_logarithm, invalid_argument_for_ntile_function and division_by_zero, all grouped together under Class 22 — Data Exception, and that exception handling is allowed in function bodies, so I can also ask more specifically: how do I catch all class 22 exceptions short of listing all the error codes? But I still hope for a more principled approach.
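On that narrow sub-question: PL/pgSQL's EXCEPTION clause accepts the condition name of an error class as a category, so WHEN data_exception should match any SQLSTATE of class 22 without listing the individual codes. A sketch of such a wrapper (the function name safe_mul is made up; adapt the types to your fields):

    CREATE OR REPLACE FUNCTION safe_mul(a numeric, b numeric)
    RETURNS numeric
    LANGUAGE plpgsql
    AS $$
    BEGIN
        -- the cast to the constrained type is what can raise the class 22 error
        RETURN (a * b)::numeric(10,4);
    EXCEPTION
        WHEN data_exception THEN   -- class 22: numeric_value_out_of_range, division_by_zero, ...
            RAISE NOTICE 'calculation discarded: %', SQLERRM;
            RETURN NULL;
    END;
    $$;

    SELECT safe_mul(1234.5678, 1000);  -- raises a NOTICE and returns NULL instead of failing

Note that a block with an EXCEPTION clause opens a subtransaction, so there is some overhead if this is used for every calculation.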
Arguably the type numeric (without type modifiers) would be the right thing for you if you want to avoid overflows (which is what you seem to mean by "arithmetic error") as much as possible.
However, there will still be the possibility of values that overflow the numeric format.
There is no way to configure PostgreSQL so that it ignores a numeric overflow.
If the result of an operation cannot be represented in a data type, there should be an error. If the data supplied by the application can lead to an error, the application should be ready to handle such an error rather than “crash”. Failure to do so is an application bug.
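To make the first point concrete: the overflow comes from forcing the result back into the constrained numeric(10,4), not from the multiplication itself, so unconstrained numeric intermediates are safe to compute with (values are illustrative):

    SELECT (1234.5678 * 1000)::numeric;        -- 1234567.8000, plain numeric, no error
    SELECT (1234.5678 * 1000)::numeric(10,4);  -- ERROR: numeric field overflow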