Why do Calculus of Constructions based languages use Setoids so much? - coq

One finds that Setoids are widely used in languages such as Agda, Coq, ... Indeed, the designers of languages such as Lean have argued that their approach helps avoid "Setoid Hell". What is the reason for using Setoids in the first place?
Does the move to extensional type theories based on HoTT (such as cubical Agda) reduce the need for Setoids?

As Li-yao Xia's answer describes, setoids are used when we don't have or don't want to use quotients.
In the HoTT book and in Lean, quotients are (basically) axiomatized. One difference between Lean and the HoTT book is that the latter has many more higher inductive types, while Lean only has quotients and (regular) inductive types. If we restrict our attention to quotients (set quotients in the HoTT book), they work the same in Lean and in Book HoTT. In this case you just postulate that, given a type A and an equivalence relation R on A, you have a quotient Q and a function [-] : A → Q with the property ∀ x y : A, R x y → [x] = [y]. The quotient comes with the following elimination principle: to construct a function g : Q → X for some type X (or hSet X in HoTT) we need a function f : A → X such that we can prove ∀ x y : A, R x y → f x = f y. It also comes with the computation rule ∀ x : A, g [x] ≡ f x (a definitional equality in both Lean and Book HoTT).
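For concreteness, here is a minimal Coq sketch of the interface just described, stated as bare axioms (Coq has no primitive quotients; the names Q, proj, lift are mine and purely illustrative):
Axiom Q : forall (A : Type), (A -> A -> Prop) -> Type.
(* proj is the map written [-] above. *)
Axiom proj : forall (A : Type) (R : A -> A -> Prop), A -> Q A R.
Axiom proj_eq : forall (A : Type) (R : A -> A -> Prop) (x y : A),
  R x y -> proj A R x = proj A R y.
(* Elimination: a function out of Q A R is a function out of A that respects R. *)
Axiom lift : forall (A : Type) (R : A -> A -> Prop) (X : Type) (f : A -> X),
  (forall x y : A, R x y -> f x = f y) -> Q A R -> X.
(* Computation rule; here it is only propositional, whereas in Lean and
   Book HoTT it holds definitionally. *)
Axiom lift_proj : forall (A : Type) (R : A -> A -> Prop) (X : Type) (f : A -> X)
  (H : forall x y : A, R x y -> f x = f y) (x : A),
  lift A R X f H (proj A R x) = f x.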
The main disadvantage of this quotient is that it breaks canonicity. Canonicity states that every closed term (that is, a term without free variables) in (say) the natural numbers normalizes to either zero or the successor of some natural number. The reason that this quotient breaks canonicity is that we can apply the elimination principle for = to the new equalities in a quotient, and a term like that will not reduce. In Lean the opinion is that this doesn't matter, since in all cases we care about we can still prove an equality, even though it might not be a definitional equality.
Cubical type theory has a fancy way to work with quotients while retaining canonicity. In CTT equality works differently, and this means that higher inductive types can be introduced while keeping canonicity. Potential disadvantages of CTT are that the type theory is a lot more complicated, and that you have to embrace HoTT (and in particular give up on proof irrelevance).

[The answers by Li-yao Xia and Floris van Doorn are both excellent, so I will try to augment them with additional information.]
Another view is that quotients, while used a lot in classical mathematics, are perhaps not so great after all. Not taking quotients but sticking to Groupoids is exactly where non-commutative geometry starts from! It teaches us that some quotients are incredibly badly behaved, and the last thing we want to do (in those cases!) is to actually take the quotient. The thing itself is not so bad, even quite good, if you treat it using the 'right' tools.
It is arguably also deeply embedded in all of category theory, where one doesn't quotient out equivalent objects. Taking 'skeletons' in category theory is regarded as being in bad taste. The same is true of strictness, and a host of other things, all of which boil down to trying to squish down things that are better left as they are, since they do no harm at all. We're just used to wanting 'uniqueness' to be reflected in our representations - something we should just get over.
Setoid hell arises not because some coherences must be proven (you need to prove them to show you have a proper equivalence, and again whenever you define functions on raw representations instead of on the quotiented version). It arises when you're forced to prove these coherences again and again when defining functions that can't possibly "go wrong". So Setoid hell is actually caused by a failure to provide proper abstraction mechanisms.
In other words, if you're doing sufficiently simple mathematics, where quotients are well-behaved, then there should be some automation that lets you work with them smoothly. Working out exactly what that could look like in type theory is ongoing research. Floris' answer outlines one pitfall well: at some point you give up on computations being well-behaved, and from then on you are forced to do everything via proofs.

Ideally one would certainly like to be able to treat arbitrary equivalence relations as Leibniz equality (eq), enabling rewriting in arbitrary contexts. That amounts to defining the quotient of a type by an equivalence relation.
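For comparison, Coq's generalized (setoid) rewriting already gets part of the way there: you can rewrite with relations other than eq, but only in contexts that have been proven Proper, which is exactly the bookkeeping a genuine quotient would make unnecessary. A small self-contained example (my own illustration) using iff:
Require Import Setoid.

Lemma rewrite_with_iff : forall P Q R : Prop,
  (P <-> Q) -> P /\ R -> Q /\ R.
Proof.
  intros P Q R HPQ HPR.
  (* Rewrite with an equivalence rather than eq; this works because
     /\ is registered as Proper with respect to iff. *)
  rewrite <- HPQ.
  exact HPR.
Qed.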
I'm not an expert on the topic, but I've been wondering the same for a while, and I think the reliance on setoids stems from the fact that quotients are still a poorly understood concept in type theory.
Setoid Hell is where we're stuck when we don't have/want quotient types.
We can axiomatize quotient types; I believe (I could be mistaken) that's what Lean does.
We can develop a type theory which can naturally express quotients, that's what HoTT/Cubical TT do with higher inductive types.
Furthermore, quotient types (or my naive imagination of them) force us to package programs and proofs together in a perhaps less-than-ideal way: a function between two quotient types is a plain function together with a proof that it respects the underlying equivalence relation. While one can technically do that, this interleaving of programming and proving is arguably undesirable because it makes programs unreadable: one often seeks either to keep programs and proofs in two completely separate worlds (which mandates setoids, keeping types separate from their equivalence relations), or to change representations so that the program and the proof are one and the same entity (in which case we might not even need to reason explicitly about equivalences in the first place).
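For concreteness, here is a minimal Coq sketch (my own illustration) of the 'program plus proof' packaging described above: a setoid bundles a carrier with a relation, and a morphism bundles a raw function with a proof that it respects the relations.
(* A setoid: a carrier type together with an equivalence-like relation. *)
Record setoid := {
  carrier : Type;
  eqv : carrier -> carrier -> Prop
}.

(* A setoid morphism: a plain function plus the proof obligation
   that it maps related inputs to related outputs. *)
Record morphism (S T : setoid) := {
  map : carrier S -> carrier T;
  map_respects : forall x y : carrier S,
      eqv S x y -> eqv T (map x) (map y)
}.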

Related

How to evaluate square of a binom in coq?

I have reached a point in a structural induction proof where I have 2 equivalent algebraic expressions on different sides of the equation. One of them is just the expanded form of the other. I hoped reflexivity. would take care of that, but apparently I still need some simplification. I'm not sure, however, what command can help me here.
It sounds like a job for the ring tactic (which requires Require Import Ring). In case this is about nat (which does not form a ring) you might be able to convert your goal to Z using the zify tactic (which should be included in Require Import Lia). In case your term is linear (does not multiply variables) you can also try lia instead of ring.
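For illustration (with hypothetical variable names, since the asker's code isn't shown), this is how ring discharges an expanded binomial square over Z:
Require Import ZArith Ring.
Open Scope Z_scope.

Goal forall a b : Z, (a + b) * (a + b) = a * a + 2 * a * b + b * b.
Proof.
  intros a b.
  (* ring normalizes both sides as polynomial expressions and compares them *)
  ring.
Qed.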
It is hard to tell why reflexivity does not work without looking at your code. You might need a few rewriting steps before reflexivity can do the job. Note that sometimes two expressions might look definitionally equal when they are printed even though they actually aren't. For instance, there could be invisible implicit arguments that are not definitionally equal and that are preventing unification. It might help to use Set Printing All to double check if you are missing such issues.

Why "time==0.5" isn't a discrete expression in Modelica language?

I built a simple model to understand the concept of "Discrete expressions". Here is the code:
model Trywhen
  parameter Real B[:] = {1.0, 2.0, 3.0};
algorithm
  when time==0.5 then
    Modelica.Utilities.Streams.print("message");
  end when;
  annotation (uses(Modelica(version="3.2.3")));
end Trywhen;
But when checking the model, I got an error showing that "time==0.5" isn't a discrete expression.
If I change time==0.5 to time>=0.5, the model would pass the check.
And if I change the when-clause to an if-clause, the model works fine, but with a warning showing that "Variables of type Real cannot be compared for equality."
My questions are:
Why is time==0.5 NOT a discrete expression?
Why can't variables of type Real be compared for equality? Comparing two variables of type Real seems like a common thing to do.
The first question is not important, since time==0.5 is not allowed.
The second question is the important one:
Comparing reals for equality is common in other languages, and also a common source of errors - unless special care is taken.
Merely using the processor's floating point compare is a really bad idea on some processors (like Intel) that mix 80-bit and 64-bit floating point numbers (or it comes with a performance penalty), and in other cases it may not work as intended either. In this case 0.5 can be represented exactly as a floating point number, but 0.1 and 0.2 cannot.
Often abs(x-y)<eps is a good alternative, but it depends on the intended use and the eps depends on additional factors; not only machine precision but also which algorithm is used to compute x and y and its error propagation.
In Modelica the problems are worse than in many other languages, since tools are allowed to optimize expressions a lot more (including symbolic manipulations) - which makes it even harder to figure out a good value for eps.
All those problems mean that it was decided to not allow comparison for equality - and require something more appropriate.
In particular if you know that you will only approach equality from one direction you can avoid many of the problems. In this case time is increasing, so if it has been >0.5 at an event it will not be <=0.5 at a later event, and when will only trigger the first time the expression becomes true.
Therefore when time>=0.5 will only trigger once, and will trigger about when time==0.5, so it is a good alternative. However, there might be some numerical inaccuracies and thus it might trigger at 0.500000000000001.

Are there algebraic data types outside of sum and product?

By most definitions the common or basic algebraic data types in Haskell or Scala are sum and product. Examples: 1, 2.
Sometimes a definition just says algebraic data types are sum and product, perhaps for simplicity.
However, the definitions leave the impression that other algebraic data types are possible, and that sum and product are just the most useful for describing selection or combination of elements.
Given that basic algebra also has subtraction, division, and raising to an integer power - is it correct that implementations of other algebraic types are possible in programming, but that they are just not useful?
Do any programming languages have algebraic data types implemented that are not sum and product types?
"Algebraic" comes from category theory. Every algebraic data type is an initial algebra of a functor. So you could in principle call anything that comes from a functor in this way algebraic, and I think it's quite a large class.
Interpreting "algebraic" to mean "high-school algebra" (I don't mean to be condescending, that's just how we refer to it) as you have, there are some nice analogies.
Arbitrary powers, not just integer powers, are closely analogous to function types; that is, A -> B is analogous to B^A. In category theory, when you consider a function ("morphism") as an object of a category, it's called an exponential object, and the latter notation is used. For fun, see if you can prove the law C^(A+B) = C^A × C^B by writing a bijection between the corresponding types.
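A hedged sketch of that bijection, written in Coq for concreteness (the question mentions Haskell and Scala, but the construction is the same in any of them): a function out of a sum type is exactly a pair of functions.
(* C^(A+B) = C^A * C^B, as two mutually inverse conversions. *)
Definition split_sum {A B C : Type} (f : A + B -> C) : (A -> C) * (B -> C) :=
  (fun a => f (inl a), fun b => f (inr b)).

Definition join_sum {A B C : Type} (fg : (A -> C) * (B -> C)) : A + B -> C :=
  fun x =>
    match x with
    | inl a => fst fg a
    | inr b => snd fg b
    end.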
Division is analogous to quotient types, which is a fascinating area of research that reaches into things as hott and trendy as homotopy type theory. The analogy of quotients to division is not as strong as product types with multiplication, as you have to divide by an equivalence relation.
At this rate, you would expect subtraction to have some beautiful analogy to go with it, but alas I know of none. Dan Piponi has explored it a little through the antidiagonal, but it is far from a general analogy.
The conventional answer is as given by @luqui, and I have nothing to add to that. The downside is that whilst you distinguish the alternatives of the sum by tag ('constructor' in Haskell), you must access the components of the product by position. For homogeneous components, as in a vector/array, that's fine; but for typical data structures (record types) you want to access by 'field label' and abstract away the position. Haskell's record/label system a) does a very bad job of that; and b) is so deeply ingrained into the language that it's proving almost impossible to improve -- see the endless proposals and endless discussions that have so far resulted in no change.
Then an unconventional answer is indexed families, aka 'indexed sets'. The idea has been developed mostly around the 'Set-Theoretic Data Structures' of D.L. Childs, for example this one. Childs' approach got a mention in Codd 1970, the seminal paper on the Relational (database) Model.
The critical feature is that you can use any type to index the collection of components; the components are heterogeneous; and the compiler supports type-safe access (read and update) both by-component and whole-structure. The components might well be organised positionally within the structure, but that's an implementation detail hidden from the programmer. (Haskell's record system fails on this point.)
Do any programming languages have algebraic data types implemented that are not sum and product types?
You might or might not accept that SQL is a programming language. I might but mostly don't accept that SQL 'column names' are an implementation of 'Indexed families'. SQL's columns and rows are far too much oriented to physical layout (and indeed most vendors' SQL still allows positional notation for columns, even though it's been deprecated by the standard). That said, SQL is the nearest you'll find.
There have been a few extensible/anonymous record systems proposed/developed in Haskell (especially HList) or Haskell-like languages (like Ur/Web), and even dear old Hugs' TRex. (See the Gaster & Jones paper for links to other attempts in FP languages.) All of them are limited, because they're trying to put lipstick on Haskell's sum-of-product types.
I know Sum, Product, Exponential and Recursive

Significance of using the term "over" when talking about operations

Here is an example sentence which uses the term "over" when talking about a type and one of its operations:
"A monoid is a type along with an associative binary operation over it..."
My question is not about monoids - what I don't quite understand is the use of the term "over".
Is this from mathematics?
Does it mean that the operation works on the data defined by the type itself - as opposed to any data we might pass to it from outside the instance of that type?
Yes, it comes from Math.
In Math and FP, unlike OOP, data structures are defined independently of the functions that operate on them.
As a consequence, it's easy to have many different operations over the same data type - for example, the Sum monoid over the Integers and the Multiplication monoid over the Integers.
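A small Coq sketch (my own illustration; the point is language-agnostic) of two different monoids over the same carrier type Z:
Require Import ZArith.
Open Scope Z_scope.

(* A monoid over a carrier A: a binary operation together with its identity
   element (the associativity and identity laws are omitted to keep it short). *)
Record monoid (A : Type) := {
  op : A -> A -> A;
  e  : A
}.

Definition sum_monoid  : monoid Z := {| op := Z.add; e := 0 |}.
Definition prod_monoid : monoid Z := {| op := Z.mul; e := 1 |}.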

Existence of a 0- and 1-valent configurations in the proof of FLP impossibility result

In the known paper Impossibility of Distributed Consensus with one Faulty Process (JACM85), FLP (Fisher, Lynch and Paterson) proved the surprising result that no completely asynchronous consensus protocol can tolerate even a single unannounced process death.
In Lemma 3, after showing that D contains both 0-valent and 1-valent configurations, it says:
Call two configurations neighbors if one results from the other in a single step. By an easy induction, there exist neighbors C₀, C₁ ∈ C such that Dᵢ = e(Cᵢ) is i-valent, i = 0, 1.
I can follow the whole proof except when they claim the existence of such C₀ and C₁. Could you please give me some hints?
D (the set of possible configurations after applying e to elements of C) contains both 0-valent and 1-valent configurations (and is assumed to contain no bivalent configurations).
That is — e maps every element in C to either a 0-valent or a 1-valent configuration. By definition of C, there must be a root element that is connected to all other elements by a series of "neighbour" relationships, so there must be a boundary point where an element in C that leads to a 0-valent configuration after e is neighbours with an element in C that leads to a 1-valent configuration after e.
I once went down the path of reading all these papers, only to discover it's a complete waste of time.
The result is not surprising at all.
The paper you mention, "Impossibility of Distributed Consensus with One Faulty Process", is a long list of complex mathematical proofs that simply equate to:
1) Consensus is a deterministic state
2) One (or more) faulty systems within an environment make that environment non-deterministic
3) No deterministic state, action or outcome can ever be reached within a non-deterministic environment.
The end. No further thought is required.
This is how it works in the real world outside of academia.
If you wish for agents to reach consensus, then Synchronous (timing model) approximation constructs have to be added to make the environment deterministic within a given set of constraints - for example, simple constructs like Timeouts, Ack/Nack, Handshake, Witness, or far more complex constructs.
The closer you wish to get to a Synchronous deterministic model, the more complex the constructs become. A hypothetical Synchronous model would have infinitely complex constructs. Also bear in mind that a fully deterministic Synchronous model can never be achieved in a non-trivial distributed system, because in any non-trivial dynamic multivariate system with a variable initial state there exists an infinite number of possible states, actions and outcomes at any point in time (see Chaos Theory).
Consider the complexity of a construct for detecting dropped TCP packets caused by buffer overflow errors in a router at hop number 21 - and the complexity of detecting the same buffer overflow error dropping the detection signal from the construct itself.
Define a mapping f such that f(C) = 0 if e(C) is 0-valent, and f(C) = 1 if e(C) is 1-valent.
Because e(C) cannot be bivalent (we assume that D contains no bivalent configurations), f(C) can only be 0 or 1.
Arrange the configurations accessible from the initial bivalent configuration in a tree; there must be two neighbors C0, C1 in the tree such that f(C0) != f(C1). If there were not, all f(C) would be the same, which would mean that D contains only 0-valent configurations or only 1-valent configurations.
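If it helps, here is a hedged Coq formalization of that "easy induction" (my own sketch, not from the paper): along any finite chain of configurations whose first element is labeled 0 and whose last element is labeled 1, some adjacent pair must change label. The chain (a path in the tree above) is modeled as a function from indices to booleans, false standing for 0-valent and true for 1-valent.
Require Import Lia.

Lemma boundary : forall (f : nat -> bool) (k : nat),
  f 0 = false -> f k = true ->
  exists i, i < k /\ f i = false /\ f (S i) = true.
Proof.
  intros f k.
  induction k as [|k IH]; intros H0 Hk.
  - (* An empty chain cannot change label: f 0 = false and f 0 = true. *)
    congruence.
  - destruct (f k) eqn:Hfk.
    + (* The label already flipped somewhere before index k. *)
      destruct (IH H0 Hfk) as (i & Hi & Hi0 & Hi1).
      exists i. split; [lia | split; assumption].
    + (* The flip happens exactly between k and S k. *)
      exists k. split; [lia | split; assumption].
Qed.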