I have reached a point in a structural induction proof where I have two equivalent algebraic expressions on different sides of the equation. One of them is just the expanded form of the other. I hoped reflexivity would take care of that, but apparently I still need some simplification. I'm not sure, however, which command can help me here.
It sounds like a job for the ring tactic (which requires Require Import Ring). If this is about nat (which does not form a ring), you might be able to convert your goal to Z using the zify tactic (which should come with Require Import Lia). If your term is linear (does not multiply variables), you can also try lia instead of ring.
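For example, a minimal sketch of both situations (the goals below are made up for illustration; they assume your equation really is a ring identity over Z, respectively a linear identity over nat):

Require Import ZArith Ring Lia.

(* a linear identity over nat: lia handles it directly *)
Goal forall n m : nat, n + (m + n) = 2 * n + m.
Proof. intros. lia. Qed.

Open Scope Z_scope.

(* a ring identity over Z: ring normalizes both sides *)
Goal forall a b : Z, (a + b) * (a + b) = a * a + 2 * a * b + b * b.
Proof. intros. ring. Qed.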
It is hard to tell why reflexivity does not work without looking at your code. You might need a few rewriting steps before reflexivity can do the job. Note that sometimes two expressions might look definitionally equal when they are printed even though they actually aren't. For instance, there could be invisible implicit arguments that are not definitionally equal and that are preventing unification. It might help to use Set Printing All to double check if you are missing such issues.
One finds that Setoids are widely used in languages such as Agda, Coq, ... Indeed, languages such as Lean advertise that their approach helps avoid "Setoid Hell". What is the reason for using Setoids in the first place?
Does the move to extensional type theories based on HoTT (such as cubical Agda) reduce the need for Setoids?
As Li-yao Xia's answer describes, setoids are used when we don't have or don't want to use quotients.
In the HoTT book and in Lean, quotients are (basically) axiomatized. One difference between Lean and the HoTT book is that the latter has many more higher inductive types, while Lean only has quotients and (regular) inductive types. If we restrict our attention to quotients (set quotients in the HoTT book), they work the same in Lean and in Book HoTT.

In this case you just postulate that, given a type A and an equivalence relation R on A, you have a quotient Q and a function [-] : A → Q with the property ∀ x y : A, R x y → [x] = [y]. It comes with the following elimination principle: to construct a function g : Q → X for some type X (or hSet X in HoTT) we need a function f : A → X for which we can prove ∀ x y : A, R x y → f x = f y. This comes with a computation rule stating that ∀ x : A, g [x] ≡ f x (a definitional equality in both Lean and Book HoTT).
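For concreteness, here is a minimal Lean 4 sketch of that interface using Lean's built-in Quot; the relation parityRel and the names Parity, mk and isEven are invented for illustration:

-- A = Nat, R = "same parity", Q = Parity, [-] = mk
def parityRel (m n : Nat) : Prop := m % 2 = n % 2

def Parity : Type := Quot parityRel

def mk (n : Nat) : Parity := Quot.mk parityRel n

-- R x y → [x] = [y]
example (x y : Nat) (h : parityRel x y) : mk x = mk y :=
  Quot.sound h

-- elimination: a function f : Nat → Bool that respects R descends to g : Parity → Bool
def isEven : Parity → Bool :=
  Quot.lift (fun n : Nat => n % 2 == 0) (fun a b h => by
    show (a % 2 == 0) = (b % 2 == 0)
    rw [show a % 2 = b % 2 from h])

-- the computation rule g [x] ≡ f x holds definitionally, so rfl closes it
example (n : Nat) : isEven (mk n) = (n % 2 == 0) := rfl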
The main disadvantage of this quotient is that it breaks canonicity. Canonicity states that every closed term (that is, a term without free variables) in (say) the natural numbers normalizes to either zero or the successor of some natural number. The reason that this quotient breaks canonicity is that we can apply the elimination principle for = to the new equalities in a quotient, and a term like that will not reduce. In Lean the opinion is that this doesn't matter, since in all cases we care about we can still prove an equality, even though it might not be a definitional equality.
Cubical type theory has a fancy way to work with quotients while retaining canonicity. In CTT equality works differently, and this means that higher inductive types can be introduced while keeping canonicity. Potential disadvantages of CTT are that the type theory is a lot more complicated, and that you have to embrace HoTT (and in particular give up on proof irrelevance).
[The answers by Li-yao Xia and Floris van Doorn are both excellent, so I will try to augment them with additional information.]
Another view is that quotients, while used a lot in classical mathematics, are perhaps not so great after all. Not taking quotients but sticking with groupoids is exactly where non-commutative geometry starts! It teaches us that some quotients are incredibly badly behaved, and the last thing we want to do (in those cases!) is to actually quotient. But the thing itself is not so bad, even quite good, if you treat it using the 'right' tools.
It is arguably also deeply embedded in all of category theory, where one doesn't quotient out equivalent objects. Taking 'skeletons' in category theory is regarded as being in bad taste. The same is true of strictness, and a host of other things, all of which boil down to trying to squish down things that are better left as they are, as they do no harm at all. We're just used to wanting 'uniqueness' to be reflected in our representations - something we should just get over.
Setoid hell arises not because some coherences must be proven (you need to prove them to show you have a proper equivalence, and again whenever you define functions on raw representations instead of on the quotiented version). It arises when you're forced to prove these coherences again and again when defining functions that can't possibly "go wrong". So Setoid hell is actually caused by a failure to provide proper abstraction mechanisms.
In other words, if you're doing sufficiently simple mathematics, where quotients are well-behaved, then there should be some automation that lets you work with them smoothly. Working out exactly what that could look like is currently ongoing research in type theory. Floris' answer outlines one pitfall well: at some point you give up on computations being well-behaved, and from then on you are forced to do everything via proofs.
Ideally one would certainly like to be able to treat arbitrary equivalence relations as Leibniz equality (eq), enabling rewriting in arbitrary contexts. That amounts to defining the quotient of a type by an equivalence relation.
I'm not an expert on the topic, but I've been wondering the same for a while, and I think the reliance on setoids stems from the fact that quotients are still a poorly understood concept in type theory.
Setoid Hell is where we're stuck when we don't have/want quotient types.
We can axiomatize quotient types; I believe (though I could be mistaken) that's what Lean does.
We can develop a type theory which can naturally express quotients, that's what HoTT/Cubical TT do with higher inductive types.
Furthermore, quotient types (or my naive imagination of them) force us to package programs and proofs together in a perhaps less-than-ideal way: a function between two quotient types is a plain function together with a proof that it respects the underlying equivalence relation. While one can technically do that, this interleaving of programming and proving is arguably undesirable because it makes programs unreadable. One often seeks either to keep programs and proofs in two completely separate worlds (which mandates setoids, keeping types separate from their equivalence relations), or to change some representations so that the program and the proof are one and the same entity (so we might not even need to explicitly reason about equivalences in the first place).
I built a simple model to understand the concept of "discrete expressions"; here is the code:
model Trywhen
  parameter Real B[:] = {1.0, 2.0, 3.0};
algorithm
  when time==0.5 then
    Modelica.Utilities.Streams.print("message");
  end when;
  annotation (uses(Modelica(version="3.2.3")));
end Trywhen;
But when checking the model, I got an error showing that "time==0.5" isn't a discrete expression.
If I change time==0.5 to time>=0.5, the model passes the check.
And if I use an if-clause instead of a when-clause, the model works fine, but with a warning: "Variables of type Real cannot be compared for equality."
My questions are:
Why is time==0.5 NOT a discrete expression?
Why can't variables of type Real be compared for equality? Comparing two variables of type Real seems like a common thing to do.
The first question is not important, since time==0.5 is not allowed.
The second question is the important one:
Comparing reals for equality is common in other languages, and also a common source of errors - unless special care is taken.
Merely using the processor's floating point comparison is a really bad idea on some processors (like Intel) that mix 80-bit and 64-bit floating point numbers (or it comes with a performance penalty), and in other cases it may not work as intended either. In this case 0.5 can be represented exactly as a floating point number, but 0.1 and 0.2 cannot.
Often abs(x-y)<eps is a good alternative, but it depends on the intended use, and the right eps depends on additional factors: not only machine precision but also which algorithm is used to compute x and y and their error propagation.
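As a quick illustration (in Python rather than Modelica, purely to show the floating point behaviour; the tolerance 1e-12 is just a placeholder):

x = 0.1 + 0.2
print(x == 0.3)            # False: 0.1 and 0.2 have no exact binary representation
print(x)                   # 0.30000000000000004

eps = 1e-12                # a suitable eps depends on how x was computed
print(abs(x - 0.3) < eps)  # True: tolerance-based comparison succeeds
print(0.25 + 0.25 == 0.5)  # True: 0.25 and 0.5 are exactly representable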
In Modelica the problems are worse than in many other languages, since tools are allowed to optimize expressions a lot more (including symbolic manipulations) - which makes it even harder to figure out a good value for eps.
All those problems mean that it was decided to not allow comparison for equality - and require something more appropriate.
In particular if you know that you will only approach equality from one direction you can avoid many of the problems. In this case time is increasing, so if it has been >0.5 at an event it will not be <=0.5 at a later event, and when will only trigger the first time the expression becomes true.
Therefore when time>=0.5 will only trigger once, and will trigger about when time==0.5, so it is a good alternative. However, there might be some numerical inaccuracies and thus it might trigger at 0.500000000000001.
According to Wikipedia, the maximum satisfiability problem (MAX-SAT) is the problem of determining the maximum number of clauses, of a given Boolean formula in conjunctive normal form, that can be made true by an assignment of truth values to the variables of the formula. It is a generalisation of the Boolean satisfiability problem, which asks whether there exists a truth assignment that makes all clauses true.
I do not understand the 2nd sentence on how MAX-SAT is a generalisation of SAT. According to Wikipedia, SAT asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE.
The reason why I am asking this is because of the paper 'Semidefinite Optimization Approaches for Satisfiability and Maximum-Satisfiability Problems', where I would like to try Semidefinite optimisation techniques to solve some SAT problems I have at hand.
Imagine turning each of your clauses into an implication, by replacing each clause q of your original problem with p -> q, where p is a fresh variable per clause. Then a satisfying assignment of this modified problem gives you a solution to the MAXSAT problem of the original when you pick those clauses where the solver assigned true to the corresponding p. This gives you a maxsat solver, albeit a crappy one.
Now imagine you have a system that makes sure as many of those p's are true as possible. That combination gives you a maxsat solver, i.e., one that can optimize the number of p's that are true. This way you get a nice maxsat solver for your original problem; that is, you can reduce the maxsat problem to sat, provided you have something that maximizes the number of those introduced p's that get assigned true.
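A rough Python sketch of that reduction idea (the tiny example formula is made up, and the brute-force enumeration just stands in for a real solver that maximizes the true p's):

from itertools import product

# CNF clauses over variables 1..n, as lists of signed literals:
# (x1 or x2) and (not x1 or x2) and (not x2) and (x1)
clauses = [[1, 2], [-1, 2], [-2], [1]]
n = 2

def clause_true(clause, assignment):
    # a positive literal v holds iff assignment[v] is True, a negative literal -v iff it is False
    return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

best = 0
for values in product([False, True], repeat=n):
    assignment = dict(zip(range(1, n + 1), values))
    # Each clause q_i becomes (p_i -> q_i), i.e. (not p_i or q_i), which is always
    # satisfiable; p_i can be set true exactly when q_i already holds, so the maximum
    # number of true p_i under this assignment equals the number of satisfied clauses.
    best = max(best, sum(clause_true(c, assignment) for c in clauses))

print(best)  # 3: at most 3 of the 4 clauses can be satisfied simultaneously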
@PatrickTrentin can probably explain much better! The paper on vZ (the maxsat engine associated with z3) is also a very nice and simple read on this topic: https://backend.orbit.dtu.dk/ws/portalfiles/portal/110977246/Bj_rner_Phan_Fleckenstein_Unknown_Z_An_Optimizing_SMT_Solver_1.pdf
I'm doing a map-coloring problem in Scheme, and I used the minimum remaining values heuristic (select the vertex with the fewest legal colors) and the degree heuristic (select the vertex that has the largest number of neighbors). If there exists a solution for a certain configuration, will these heuristics ensure that the search won't need to backtrack?
Let's do a simple theoretical analysis.
1. Graph coloring is NP-complete for general graphs (when asking whether the graph can be colored with k colors, for any fixed k >= 3). This means that no polynomial-time algorithm is known.
2. Your heuristics are computable in polynomial time.
3. Assume you need no backtracking. Then you need only n steps (n being the number of vertices), each of which requires polynomial time, so you can solve the problem in polynomial time.
4. Either you have proven P=NP or your assumption is wrong.
I leave it up to you to decide which option in point (4) is more plausible.
In general: no, MRV and your other heuristic will not guarantee a straight walk to the goal. (I imagine they might if your problem has some very specific structure, but don't count on it until you've seen the theorem.)
Heuristics prune the search space, or change the order of the search to make an early termination more likely. This is not the same thing as backtracking.
But it's a related concept.
We prune some spaces because we are confident that the solution does not lie in those branches of the search tree, or change the order because we have some reason to believe that it will be quicker if we look in some subtrees before others.
We also cut ourselves off from backtracking because we are confident that the solution is in the branch of the space we are in now (so that if we don't find it in this subtree, we can declare failure and don't bother).
Both kinds of strategies are ultimately about searching less of the space somehow and getting to the answer (positive or negative) without searching everything.
MRV and the degree heuristic are about reordering the sub-searches, not about avoiding backtracking. Heuristics can be right and make for a short search, but that's not the same thing as eliminating backtracking (e.g. the "cut" operator in Prolog). When you find what you're looking for, you can declare success, and of course that eliminates further backtracking. But real backtracking elimination means making a decision not to backtrack no matter what, before the search completes.
E.g. if you're doing a depth-first search, and you find what you're looking for by dumb luck without backtracking, we cannot say that dumb luck is a fence operation that eliminates backtracking. :)
If I'm getting an anonymous operator from a user, I would like to test (extremely quickly) whether the operator is linear. Is there a standard way to do this? I have no way of handling symbolic operators or parsing the function. Is the only way to try some random functions (and which random functions do I choose?) and see if they satisfy linearity?
Thanks in advance.
Context:
The user supplies a black-box operator, that is, a function which takes functions to functions.
I can give the operator a function and I get a function back. I want to determine whether the operator is linear. Is there a standard fast method which gives me high confidence?
No, not without sweeping the entire parameter space. Imagine the following:
@(f) @(x) f(x) + (x == 1e6)
This operator is non-linear, but there's no way of knowing that unless you happen to test at x == 1e6.
UPDATE
As others have suggested, it may be "good enough" to determine a domain of interest, and check for linearity at periodic intervals across the domain. This may give you false positives (i.e. suggest an operator is linear when in fact it's non-linear), but never false negatives.
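A rough Python sketch of that sampling idea (the function names, test functions and tolerance are all illustrative, not a standard recipe):

import random

def probably_linear(T, domain, trials=100, tol=1e-8):
    # Returns False only on a definite counterexample to linearity;
    # a True result may still be a false positive.
    for _ in range(trials):
        a, b = random.uniform(-10, 10), random.uniform(-10, 10)
        f = lambda x, c=random.uniform(-5, 5): c * x        # random test function
        g = lambda x, c=random.uniform(-5, 5): c * x * x    # another random test function
        combo = lambda x: a * f(x) + b * g(x)
        x = random.choice(domain)
        if abs(T(combo)(x) - (a * T(f)(x) + b * T(g)(x))) > tol:
            return False
    return True

# The operator from the answer above: linear except at the single point x == 1e6.
T = lambda func: lambda x: func(x) + (x == 1e6)
samples = [random.uniform(0.0, 1.0) for _ in range(1000)]
print(probably_linear(T, samples))   # almost certainly True, i.e. a false positive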
This is information the user should supply. Add a parameter linear true/false, default to false (I'm assuming that the code for non-linear will work for linear too, just taking more time).
The problem with random testing is that sooner or later you will classify a non-linear operator as linear, and then the user has a problem, because your function unpredictably produces wrong results (depending on which points you picked randomly) that may be reasonably close to the correct results, i.e. people may not notice for a loooong time. This is a recipe for disaster.
Really, the user should know this in the first place. It's very important to avoid false positives, and as said before there is no completely reliable way to test this. Save yourself the trouble and add the additional parameter.