What does `Control.refine` do in Ltac2?

I'm learning Ltac2 and reading the official documentation of Coq 8.13.2.
I don't understand what role Control.refine plays in evaluation inside quotations, as there doesn't seem to be much explanation of this function.
For example, using variables in tactic expressions inside a constr quotation should be done with constr:(... $x ...), where $x is syntactic sugar for ltac2:(Control.refine (fun () => x)).
Why does, say, simply ltac2:(x) not work? (And indeed it doesn't: Coq reports Error: Cannot infer an existential variable of type ..., at the position where the constr represented by x should be inserted.)
So my questions are:
What does Control.refine do in general?
It seems to be an idiom to sometimes write constr:( ... ltac2:(Control.refine (fun () => ...)) ...). In what situations should (or shouldn't) such an anti-quotation idiom be used?
Thanks.

The ltac2:(...) in-term quotation expects an Ltac2 term of type unit, not of type constr. Actually, your example should trigger a warning because of this.
Since the content of the quotation has type unit, it means that it can only act through side-effects. The warning tells you that the value returned by the quotation will be discarded. In particular, it cannot be used to fill the hole.
Rather, the semantics of the quotation is to evaluate its content inside a goal Γ ⊢ A where Γ is the current list of hypotheses, and A the inferred type of the hole. It is expected that doing this will solve the goal, and the resulting partial proof will be used as the Coq term filler for that hole. This is precisely the rôle of Control.refine : (unit -> constr) -> unit. It takes a (for technical reasons, thunked) term as an argument and uses it to solve the goal(s) under focus.
What happens in your example is that you provide a term that is just ignored, the goal is left untouched and the quotation rightfully complains that it was not solved.
Regarding your second point, I cannot really tell. It depends on what you want to do. But in general, if you can stick to antiquotations $x, I would consider that more readable. It is not always possible to do so, though, e.g. if the term being built depends on a context introduced inside the constr:(...) quotation.
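As an illustration (a sketch, assuming a Coq version with Ltac2 available; the function names other than Control.refine are made up), both of these build the same term, and the antiquotation form is just sugar for the explicit one:

```coq
From Ltac2 Require Import Ltac2.

(* 'double_of' is a hypothetical name; $x is sugar for the explicit
   Control.refine form used in 'double_of_refine' below. *)
Ltac2 double_of (x : constr) :=
  constr:($x + $x).

Ltac2 double_of_refine (x : constr) :=
  constr:(ltac2:(Control.refine (fun () => x)) + $x).

(* In contrast, writing constr:(ltac2:(x) + $x) would fail: the
   quotation's result is discarded, so the hole is never filled. *)
```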

Related

Under what conditions does "Eval cbv delta in" expand a definition in Coq?

In Proof General (with Coq 8.5), I executed the following:
Require Import Arith.
Eval cbv delta in Nat.add_comm 3 7.
The output is
Nat.add_comm 3 7 : 3 + 7 = 7 + 3
However, Print Nat.add_comm. gives a long and complicated function taking two nats as input. I expected my code to expand the definition of Nat.add_comm, which is what Eval cbv delta in _. does in similar situations. As a beginner, I know there is a naive misconception lurking. What am I missing?
Expanding a bit on Daniel's comment, cbv delta will unfold an identifier if
1. it has a body (i.e., it is not a context variable, axiom, field in a module argument to the current module functor, bound variable, etc.), and
2. its body was given transparently, i.e., either the proof script was closed with Defined rather than Qed, or the body was given via := or the automatic obligation tactic (for Program Definition) or typeclass resolution (for Instances without a body), and
3. it has not been marked opaque via Opaque id or Strategy opaque [id], and
4. it was not created by "module-locking" another constant, i.e., defined within a module (or module functor) with a module-type ascription via : rather than <: (exception: if the module type itself provides the body for the constant).
Note that, at present, vm_compute can get around (3) (and the unification engine can too, but compute, lazy, cbv, hnf, cbn (usually), simpl (usually), and red cannot), but nothing will get around the others.
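As a small illustration of the Defined/Qed and Opaque conditions (a sketch; the constant names are made up):

```coq
(* 'three' has a transparent body, so delta can unfold it. *)
Definition three := 3.
Eval cbv delta [three] in three.   (* unfolds to 3 *)

(* 'four' is closed with Qed, so its body is opaque and
   delta leaves it alone. *)
Lemma four : nat. Proof. exact 4. Qed.
Eval cbv delta in four.            (* stays 'four' *)

(* Marking a constant Opaque blocks unfolding as well. *)
Opaque three.
Eval cbv delta in three.           (* no longer unfolds *)
```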

Is Racket (and Typed Racket) strongly or softly typed?

I realize the definitions of "strongly typed" and "softly typed" are loose and open to interpretation, but I have yet to find a clear definition in relation to untyped Racket (which, from my understanding, means dynamically typed) and Typed Racket.
Again, I'm sure it's not so cut and dried, but at least I'd like to learn more about which direction each leans in. The more research I've done on this, the more confused I've gotten, so thank you in advance for the help!
One problem in answering questions like this is that people disagree about the meanings of nearly all of these terms. So... what follows is my opinion (though it is a fairly well-informed one, if I do say so myself).
All languages operate on some set of values, and have some runtime behavior. Trying to add a number to a function fails in nearly all languages. You can call this a "type system," but it's probably not the right term.
So what is a type system? These days, I claim that the term generally refers to a system that examines a program and statically[*] deduces properties of the program. Typically, if it's called a type system, this means attaching a "type" to each expression that constrains the set/class of values that the expression can evaluate to. Note that this definition basically makes the term "dynamically typed" meaningless.
Note the giant loophole: there's a "trivial type system", which simply assigns the "type" containing all program values to every expression. So, if you want to, you can consider literally any language to be statically typed. Or, if you prefer,
"unityped" (note the "i" in there).
Okay, down to brass tacks.
Racket is not typed. Or, if you prefer, "dynamically typed," or "unityped," or even "untyped".
Typed Racket is typed. It has a static type system that assigns to every expression a single type. Its type system is "sound", which means that evaluation of the program will conform to the claims made by the type system: if Typed Racket (henceforth TR) type-checks your program and assigns the type 'Natural' to an expression, then it will definitely evaluate to a natural number (assuming no bugs in the TR type checker or the Racket runtime system).
Typed Racket has many unusual characteristics that allow code written in TR to interoperate with code written in Racket. The most well-known of these is "occurrence typing", which allows a TR program to deal with types like (U Number String) (that is, a value that's either a number or a string) without exploding, as earlier similar type systems did.
That's kind of beside the point, though: your question was about Racket and TR, and the simple answer is that the basic Racket language does not have a static type system, and TR does.
[*] defining the term 'static' is outside the scope of this post :).
Strong typing and weak typing have nothing to do with static or dynamic typing. You can combine them, so there are four variations: strong/static, weak/static, strong/dynamic, and weak/dynamic.
Scheme (and thus #lang racket) is dynamically and strongly typed.
> (string-append "test" 5)
string-append: contract violation
expected: string?
given: 5
argument position: 2nd
other arguments...:
All its values have a type, and functions can demand arguments of a given type. If you try to append a number to a string, you get a type error: you need to explicitly cast the number to a string using number->string to satisfy the contract that all arguments are strings. A weakly typed language, like JavaScript, would just cast the number to a string to satisfy the function. Less code, but possibly more runtime bugs.
Since Scheme is strongly typed, #lang typed/racket is definitely too.
While Scheme/#lang racket is dynamically typed, I'm not entirely sure whether #lang typed/racket is completely static. The Guide calls it a gradually typed language.
One of the definitions of "weakly typed" is that when there is a type mismatch between operands instead of giving an error the language will try its best to continue, by coercing the operands from one type to the other or giving a default result.
For example, in Perl a string containing digits will be coerced into a number if you use it in an arithmetic operation:
# This Perl program prints 6.
print 3 * "2a"
Under this definition, Racket would be categorized as dynamically typed (type errors occur at runtime) and strongly typed (it doesn't automatically convert values from one type to another).
Since Typed Racket doesn't change the runtime semantics of Racket (except by introducing some extra contract checking) it would be just as strongly typed as regular Racket.
By the way, the usual words people use are weak and strong typing. Soft typing might refer to one specific kind of type system that was created during the 90s. It didn't turn out all that well, which is one of the reasons people came up with the gradual typing systems used in languages such as Typed Racket and TypeScript.
A weakly typed language allows a legal implementation to set the computer "on fire"; in contrast, a strongly typed language rules out more buggy programs.
Although Racket is dynamically typed, it is strongly typed.

Definition vs Notation for constants

I'm extending an existing project (Featherweight Java formalization),
and there are a number of constants, such as:
Notation env := (list (var * typ)).
What would change if I used Definition instead:
Definition env := (list (var * typ)).
Why did the author use Notation here?
Whenever you try to apply or rewrite with a lemma, there's a component in Coq called the unifier that tries to find out how to instantiate your lemma so that it can work with the situation at hand (and checking that it indeed applies there). The behavior of this unifier is a bit different depending on whether you use notations or definitions.
Notations are invisible to Coq's theory: they only affect the parsing and printing behavior of the system. In particular, the unifier doesn't need to explicitly unfold a notation when analyzing a term. Definitions, on the other hand, must be explicitly unfolded by the unifier. The problem is that the unifier works heuristically, and cannot tell with 100% certainty when some definition must be unfolded or not. As a consequence, we often find ourselves with a goal that mentions a definition that the unifier doesn't unfold by itself, preventing us from applying a lemma or rewriting, and having to manually add a call to unfold ourselves to get it to work.
Thus, notations can be used as a hack to help the unifier understand what an abbreviation means without manual unfolding steps.
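For example (a sketch; whether the explicit unfold is truly needed can vary with the Coq version and the unifier's heuristics, and the names env_d/env_n are made up):

```coq
From Coq Require Import List.

Definition env_d := list (nat * nat).
Notation env_n := (list (nat * nat)).

(* With the Notation, the goal literally is a statement about list _,
   so a list lemma applies directly: *)
Goal forall (e : env_n), e ++ nil = e.
Proof. intros e. apply app_nil_r. Qed.

(* With the Definition, the unifier may have to unfold env_d first;
   when it does not do so by itself, we unfold manually: *)
Goal forall (e : env_d), e ++ nil = e.
Proof. intros e. unfold env_d in e. apply app_nil_r. Qed.
```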

When a keyword means different things in different contexts, is that an example of context sensitivity?

According to this answer, => in Scala is a keyword which has two different meanings: (1) to denote a function type: Double => Double, and (2) to create a lambda expression: (x: Double): Double => 2*x.
How does this relate to formal grammars, i.e. does this make Scala context sensitive?
I know that most languages are not context free, but I'm not sure whether the situation I'm describing has anything to do with that.
Edit:
Seems like I don't understand context sensitive grammars well enough. I know how the production rules are supposed to look, and what they mean ("this production applies only if A is surrounded by these symbols"), but I'm just not sure how they relate to actual (programming) languages.
I think my confusion stems from reading something like "Chomsky introduced this term because a word's meaning can depend on its context", and I connected => with the term "word" in the quote, and those two uses of it being two separate contexts.
It would be great if an answer addressed my confusion.
It's been a while since I've handled formal language theory, but I'll bite.
"Context-free" means that the production rules required in the corresponding grammar do not have a "context". It does not mean that a specific symbol cannot appear in different rules.
Addressing the edit: in other words (and more informally), deciding whether a language is context-free or context-sensitive boils down not to looking at the "meaning" of a specific "word" or "words". Instead, it amounts to looking at the set of all legal expressions in that language, and seeing whether you can "encode" them only by taking into account the positional relationships the component "words" have with one another. This is essentially what the Pumping Lemma checks.
For example:
S → Type"="Body
Type → "Double"
Type → "Double""=>""Double"
Body → Lambda
Body → NormalBody
NormalBody → "x"
Lambda → "x""=>"NormalBody
Where S is of course the start symbol, uppercased IDs are nonterminals, and quoted strings are terminals. Obviously, this can generate a string like:
Double=>Double=x=>x
but the grammar is still context-free.
So just this, as in the observation that the terminal "=>" can appear in two "places" of the program, does not make Scala context-sensitive.
However, it does not mean that:
the entire Scala language is context-free,
it is context-sensitive - it can be even more complex,
if you would like to encode the semantics of Scala into a grammar, you would end up with either a context-free or a context-sensitive one.
The last thing is especially relevant since you've mentioned "meaning" in the (nomen omen) context of formal languages.

Free variables in Coq

Is there any function/command in Coq to check whether a free variable, let's say n : U, occurs in a term/expression e? Please share.
For example, I want to state "n does not occur in the free names of e" in Coq.
Thanks,
Wilayat
Let's assume you're talking about free variables in Coq terms:
Dealing with an elementary Coq proof (using nothing exterior), what you manipulate is, outside of the proof context, a closed term, i.e. a term with only bound variables. If in proof mode a term e in your goal appears to have a free variable n (meaning the variable n is somewhere in your proof context), you can simply make the binding explicit (and close the goal term) using the generalize tactic.
In more advanced cases, your less-vanilla proof may involve free variables in the form of assumptions or parameters, in which case you can list them using Print Assumptions.
If on the other hand you are talking about using Coq terms to represent the notion of term in a particular language (e.g. you are formalizing such a language), you simply have to give a special treatment to the free variables of your language.
Provided you give them a specific constructor in the inductive definition of your language's terms, it should be easy to state whether some term has free variables or not. If you're unfamiliar with the (not so trivial) notion of representing substitution and the free variables of a language, you'll find pointers at increasing levels of precision from TAPL, to B. C. Pierce's Software Foundations course, to the results of the POPLMark Challenge.
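For the second reading, a minimal sketch (all names here are made up for illustration): represent your language's terms as an inductive type with an explicit variable constructor, and define occurrence of a free name by recursion on terms:

```coq
Require Import String.

(* A toy lambda-calculus with named variables. *)
Inductive term : Type :=
  | Var : string -> term
  | App : term -> term -> term
  | Lam : string -> term -> term.

(* 'free_in n e' holds when the name n occurs free in e:
   a Lam binder for n shadows occurrences in its body. *)
Fixpoint free_in (n : string) (e : term) : Prop :=
  match e with
  | Var m     => m = n
  | App e1 e2 => free_in n e1 \/ free_in n e2
  | Lam m b   => m <> n /\ free_in n b
  end.

(* "n does not occur in the free names of e": *)
Definition not_free (n : string) (e : term) : Prop := ~ free_in n e.
```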