Definition in Coq using keyword `exists`

I'm trying to define an entity named isVector using the following syntax:
Require Export Setoid.
Require Export Coq.Reals.Reals.
Require Export ArithRing.
Definition Point := Type.
Record MassPoint:Type:= cons{number : R ; point: Point}.
Variable add_MP : MassPoint -> MassPoint -> MassPoint .
Variable mult_MP : R -> MassPoint -> MassPoint .
Variable orthogonalProjection : Point -> Point -> Point -> Point.
Definition isVector (v:MassPoint):= exists A, B :Point , v= add_MP((−1)A)(1B).
And CoqIDE keeps complaining that there's a syntax error in the definition; so far, I haven't been able to figure out why.

There are many problems here.
First, you'd write:
exists A B : Point, …
with no comma between the different variables.
But then, you also have syntax errors on the right-hand side. First, I'm not sure those 1 and -1 operations exist. Second, function calls would be written this way:
add_MP A B
You can write them the way you do:
add_MP(A)(B)
But in the long run you should get used to function application being written with just whitespace. You might need to axiomatize this -1 operation the way you axiomatized the other operations, unless it is a notation that you defined somewhere but did not post here.
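Putting both fixes together, a definition along these lines should parse (just a sketch: the 1 and -1 "operations" from the question are not defined, so I am guessing the intent is to attach the weights with the record constructor cons):
Definition isVector (v : MassPoint) : Prop :=
  exists A B : Point, v = add_MP (cons (-1) A) (cons 1 B).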

Thanks for the help.
After experimenting a little bit, below is the solution that works.
Require Export Coq.Reals.Reals.
Definition Point := Type.
Record massPoint : Type := cons { number : R; point : Point }.
Variable add_MP : massPoint -> massPoint -> massPoint.
Variable mult_MP : R -> massPoint -> massPoint.
Definition tp (p : Point) := cons (-1) p.
Definition isVector (v : massPoint) := exists A B : Point, v = add_MP (cons (-1) A) (cons 1 B).

Related

Function generation with arbitrary signature - revisited

I am resubmitting a question asked almost a decade ago on this site (link), but which was not answered as generically as I would like.
What I am hoping for is a way to construct a function from a list of types, where the final output type can have an arbitrary/default value (such as 0.0 for a float, or "" for a string). So, from
[float; int; float;]
I would get something that amounts to
fun(f: float) ->
fun(i: int) ->
0.0
I am hopeful of achieving this, but so far have been unable to. It would help me out a lot if I could see a sample that does the above.
The answer in the above link goes some of the way, but the example seems to know its function signature at compile time, which I won't, and also generates a compiler warning.
The scenario I have, for those who find context helpful, is that I want to be able to open a dll and, one way or another, identify a method with a given signature whose argument types are limited to a known set (e.g. float, int). For each input parameter in this function signature I will run code to generate a 'buffer' object, which will have:
a buffer of data items of the given type, i.e. [1.2; 3.2; 4.5]
a supplier of that data type (supplies may be intermittent so the receiving buffer may be empty at any one time)
a generator function that transforms data items before being dispatched. This function can be updated at any time.
a dispatch function. The dispatch target of bufferA will be bufferB, and for bufferB it will be a pub-sub thing where subscribers can subscribe to the end result of the calculation, in this case a stream of floats. Data accumulates in applicative style down the chain of buffers, until the final result is published as a new stream.
a regulator that turns the stream of data heading out to the consumer on or off. This ensures orderly function application.
The function from the dll will eventually be given to bufferA to apply to a float and pass the result on to bufferB (to pick up an int). However, while setting up the buffer infrastructure I only need a function with the correct signature, so a dummy value, such as 0.0, is fine.
For a function of a known signature I can handcraft the code that creates the necessary infrastructure, but I would like to be able to automate this, and ideally register dlls and have new calculated streams available plugin-style without rebuilding the application.
If you're willing to throw type safety out the window, you could do this:
let rec makeFunction = function
    | ["int"] -> box 0
    | ["float"] -> box 0.0
    | ["string"] -> box ""
    | "int" :: types ->
        box (fun (_ : int) -> makeFunction types)
    | "float" :: types ->
        box (fun (_ : float) -> makeFunction types)
    | "string" :: types ->
        box (fun (_ : string) -> makeFunction types)
    | _ -> failwith "Unexpected"
Here's a helper function for invoking one of these monstrosities:
let rec invokeFunction types (values : List<obj>) (f : obj) =
    match types, values with
    | [_], [] -> f
    | ("int" :: types'), (value :: values') ->
        let f' = f :?> (int -> obj)
        let value' = value :?> int
        invokeFunction types' values' (f' value')
    | ("float" :: types'), (value :: values') ->
        let f' = f :?> (float -> obj)
        let value' = value :?> float
        invokeFunction types' values' (f' value')
    | ("string" :: types'), (value :: values') ->
        let f' = f :?> (string -> obj)
        let value' = value :?> string
        invokeFunction types' values' (f' value')
    | _ -> failwith "Unexpected"
And here it is in action:
let types = ["int"; "float"; "string"] // int -> float -> string
let f = makeFunction types
let values = [box 1; box 2.0]
let result = invokeFunction types values f
printfn "%A" result // output: ""
Caveat: This is not something I would ever recommend in a million years, but it works.
I got 90% of what I needed from this blog by James Randall, entitled "Compiling and executing F# dynamically at runtime". I was unable to avoid concretely specifying the top-level function signature, but a workaround was to generate an .fsx script file containing that signature (determined from the relevant MethodInfo in the inspected dll), then load and run that script. James's blog and GitHub repository also describe loading and running functions contained in script files.
Having obtained the curried function from the dll, I then apply it to default arguments to get representative functions of arity n-1, using
let p1: 'p1 = Activator.CreateInstance(typeof<'p1>) :?> 'p1
let fArity2 = fArity3 p1
Creating and running a script file is slow, of course, but I only need to perform this once, when setting up the calculation stream.
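As an illustration of that arity-reduction step, here is a small self-contained sketch (fArity3 is a hypothetical stand-in for a function loaded from a dll; Activator.CreateInstance yields the default value of the type, e.g. 0.0 for float):
open System

// Hypothetical arity-3 function, standing in for one obtained from a dll.
let fArity3 : float -> int -> float -> float =
    fun f i g -> f + float i + g

// Default-valued first argument, as in the snippet above.
let p1 : float = Activator.CreateInstance(typeof<float>) :?> float

// Representative function of arity n-1 = 2.
let fArity2 : int -> float -> float = fArity3 p1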

Coercion within data structures

The following code gives me an error:
Require Import Reals.
Require Import List.
Import ListNotations.
Open Scope R_scope.
Definition C := (R * R)%type.
Definition RtoC (r : R) : C := (r,0).
Coercion RtoC : R >-> C.
Definition lC : list C := [0;0;0;1].
Error: The term "[0; 0; 0; 1]" has type "list R" while it is expected to have type "list C".
But I've defined RtoC as a coercion and I don't see any problems when I use
Definition myC : C := 4.
How do I get Coq to apply the coercion within the list?
Related question: If I enter Check [0;0;0;1] it returns list R, inserting an implicit IZR before every number. Why does Coq think I want Rs rather than Zs?
I'm not sure there is a fully satisfying solution to your question.
Indeed, as recalled in the Coq refman:
Given a term, possibly not typable, we are interested in the problem of determining if it can be well typed modulo insertion of appropriate coercions.
and it turns out that in your example, the term [0;0;0;1] itself is typable as a list R, and it is type-checked "in one go"; so when the [0;0;0;1] : list C type mismatch occurs, there is no backtracking, and a coercion can't be inserted within the list elements.
So maybe you could adapt your formalization in a different way, or just use one of these workarounds:
Rewriting your term into a β-redex:
Definition lC := (fun z o => [z;z;z;o] : list C) 0 1.
Or inserting a few more typecasts around each element:
Definition lC := [0:C; 0:C; 0:C; 1:C].
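Or (a third option, my own addition) keeping the list in R and applying the coercion function explicitly with map:
Definition lC := map RtoC [0; 0; 0; 1].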
Regarding your last question
Why does Coq think I want Rs rather than Zs?
this comes from your line Open Scope R_scope., which implies that number literals are recognized by default as belonging to R (the classical axiomatization of the real numbers formalized in the standard library Reals). More specifically, the implementation changed in Coq 8.7, as of coq/coq#a4a76c2 (discussed in PR coq/coq#415). To sum up, a literal such as 5%R is now parsed as IZR 5, that is, IZR (Zpos (xI (xO xH))), while it used to be parsed to a much less concise term in Coq 8.6:
Rplus R1 (Rmult (Rplus R1 R1) (Rplus R1 R1)).
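In the setting of your question (Reals imported, R_scope open), you can see the new parsing directly (a quick check in a Coq ≥ 8.7 session):
Set Printing All.
Check 5.   (* IZR (Zpos (xI (xO xH))) : R *)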

How to create a state machine from inductive types in Coq?

How can a correct state machine (one that cannot be constructed in an invalid way) be defined in Coq entirely from inductive types?
Starting from something like:
Inductive Cmd :=
| Open
| Send
| Close.
Inductive SocketState :=
| Ready
| Opened
| Closed.
For example transition from Ready to Opened should happen only after Open command.
From the formal definition of a deterministic finite automaton:
A deterministic finite automaton M is a 5-tuple (Q, Σ, δ, q₀, F), consisting of
a finite set of states Q
a finite set of input symbols called the alphabet Σ
a transition function δ : Q × Σ → Q
an initial or start state q₀ ∈ Q
a set of accept states F ⊆ Q
You gave two out of the five, namely Q = SocketState and Σ = Cmd. Assuming that your application has an implicit initial state (probably Ready) and no specific accept states, the only thing needed for your state machine is the transition function.
From the definition, the transition function has the type (SocketState * Cmd) -> SocketState, but the curried version SocketState -> Cmd -> SocketState is equally powerful.
If your state machine has external inputs, add them as parameters to the function. If you want side effects, or some kind of output associated with the transition itself, you can use SocketState -> Cmd -> (SomeOutput * SocketState).
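For the states and commands above, such a function could look like this (a sketch; I am assuming, as one possible design choice, that invalid commands leave the state unchanged):
Definition transition (s : SocketState) (c : Cmd) : SocketState :=
  match s, c with
  | Ready, Open => Opened
  | Opened, Send => Opened
  | Opened, Close => Closed
  | _, _ => s (* invalid commands are ignored *)
  end.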
Edit: Transition as a relation (Extension to keep_learning's answer)
If you want to reason about a series of valid commands and transitions, you might want to encode it into a ternary relation instead.
First, let's define what constitutes a valid transition.
Previous state -> (Command) -> Next state
-----------------------------------------
Ready -> (Open) -> Opened
Opened -> (Send) -> Opened
Opened -> (Close) -> Closed
Then, encode it as a ternary relation. The following is just an example, similar to Hoare triples from programming language models.
Inductive Transition : SocketState -> Cmd -> SocketState -> Prop :=
| t_open : Transition Ready Open Opened
| t_send : Transition Opened Send Opened
| t_close : Transition Opened Close Closed.
The above talks about a single transition. What about a series of transitions? We can define a reflexive-transitive closure, taking a list of commands (this is very similar to Hoare triples, in the sense that both define a precondition, a sequence of steps, and a postcondition):
From Coq Require Import List.
Import ListNotations.
Inductive TransitionRTC : SocketState -> list Cmd -> SocketState -> Prop :=
  | t_rtc_refl : forall s : SocketState, TransitionRTC s [] s
  | t_rtc_trans1 : forall s1 c s2 clist s3,
      Transition s1 c s2 ->
      TransitionRTC s2 clist s3 ->
      TransitionRTC s1 (c :: clist) s3.
The function analogue of the RTC relation would be (note that fold_left in Coq takes its last two arguments in the opposite order compared to Haskell's foldl or OCaml's fold_left):
Axiom transition : SocketState -> Cmd -> SocketState.

Definition multistep_transition (state0 : SocketState) (clist : list Cmd) :=
  fold_left transition clist state0.
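For instance, here is a proof that the command sequence Open, Send, Close is a valid run from Ready to Closed (a small sketch using the relations above):
Example run_ok : TransitionRTC Ready [Open; Send; Close] Closed.
Proof.
  eapply t_rtc_trans1. apply t_open.
  eapply t_rtc_trans1. apply t_send.
  eapply t_rtc_trans1. apply t_close.
  apply t_rtc_refl.
Qed.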
You can encode your rules (the transition function) into an inductive data type.
Inductive Valid_transition : SocketState -> SocketState -> Type :=
  | copen x : x = Open ->
      Valid_transition Ready Opened   (* the Open command opens the port *)
  | cready x y : x = Send -> Valid_transition y Opened ->
      Valid_transition Opened Opened  (* a Send command does not change the status of the port *)
  | cclose x y : x = Close -> Valid_transition y Opened ->
      Valid_transition Opened Closed. (* a Close command closes it *)
Check (cready Send _ eq_refl (copen Open eq_refl)).
The only way to move from Ready to Opened is the first constructor, with command Open. The second constructor states that if your command is Send and you are in the Opened state, then you remain in that state. Finally, the third constructor closes the open port after receiving the Close command. I have encoded a transition function similar to yours (vote counting as a state machine), so feel free to have a look at it [1].
[1] https://github.com/mukeshtiwari/EncryptionSchulze/blob/master/code/Workingcode/EncryptionSchulze.v#L718-L740

New Scope in Coq

I'd like my own scope, to play around with long distfixes.
Declare Scope my_scope.
Delimit Scope my_scope with my.
Open Scope my_scope.
Definition f (x y a b : nat) : nat := x+y+a+b.
Notation "x < y * a = b" := (f x y a b)
(at level 100, no associativity) : my_scope.
Check (1 < 2 * 3 = 4)%my.
How do you make a new scope?
EDIT: I chose "x < y * a = b" to override Coq's operators (each with a different precedence).
The command Declare Scope does not exist. The various commands about scopes are described in section 12.2 of the Coq manual.
Your choice of an example notation has inherent problems, because it clashes with pre-defined notations, which seem to be used before your notation.
When looking at the first components, the parser sees _ < _ and thinks that you are actually talking about comparison of integers; then it sees the second part as an instance of the notation _ * _; then it sees that all of this is the left-hand side of an equality. The parser is happy all along, and it constructs an expression of the form:
(1 < (2 * 3)) = 4
This is constructed by the parser, before the type system has been solicited at all. The type checker sees a natural number as the first child of (_ < _) and is happy. It sees (_ * _) as the second child and is happy; it now knows that the first child of that product should be a nat and it is still happy. In the end it has an equality whose first component has type Prop but whose second component has type nat.
If you type Locate "_ < _ * _ = _". the answer tells you that you did define a new notation. The problem is that this notation never gets used, because the parser always finds another notation it can use first. Understanding why one notation is preferred over another requires more knowledge of parsing technology, as alluded to in chapter 12 of Coq's manual, in this sentence (obscure to me):
Coq extensible parsing is performed by Camlp5 which is essentially a LL1 parser.
You have to choose the levels of the various variables x, y, a, and b so that none of these variables can match too much of the text. For instance, I tried defining a notation close to yours, but with a starting and an ending bracket (which, I guess, simplifies the task greatly).
Notation "<< x < y * a = b >>" := (f x y a b)
(x at level 59, y at level 39, a at level 59) : my_scope.
The level of x is chosen to be lower than the level of =, the level of y lower than the level of *, and the level of a lower than the level of =. The levels were obtained by reading the output of the command Print Grammar constr. It seems to work, as the following command is accepted.
Check << 1 < 2 * 3 = 4 >>.
But you may need to include a little more engineering to have a really good notation.
To answer the actual question in your title:
The new scope gets created when you declare a notation that uses it. That is, you don’t declare a new scope my_scope separately. You just write
Notation "x <<< y" := (f x y) (at level 100, no associativity) : my_scope.
and that declares a new scope my_scope.
The answers above only apply to older versions of Coq. I'm not sure when the change happened, but at least as of Coq 8.13.2, Coq prefers the user to first use Declare Scope to create a new scope; what the OP has in their code is now Coq's preferred way to declare scopes.
See the current manual.

Performance difference between functions and pattern matching in Mathematica

Mathematica is different from other dialects of Lisp because it blurs the line between functions and macros. In Mathematica, if users want to write a mathematical function, they will likely use pattern matching, like f[x_] := x*x, instead of f = Function[{x}, x*x], though both return the same result when called as f[x]. My understanding is that the first approach is roughly equivalent to a Lisp macro, and in my experience it is favored because of its more concise syntax.
So I have two questions: is there a performance difference between executing functions and the pattern-matching/macro approach? And are functions themselves actually transformed into some version of macros under the hood, for example to allow features like Listable to be implemented?
The reason I care about this question is because of the recent set of questions (1) (2) about trying to catch Mathematica errors in large programs. If most of the computations were defined in terms of Functions, it seems to me that keeping track of the order of evaluation and where the error originated would be easier than trying to catch the error after the input has been rewritten by the successive application of macros/patterns.
The way I understand Mathematica is that it is one giant search-and-replace engine: all functions, variables, and other assignments are essentially stored as rules, and during evaluation Mathematica goes through this global rule base, applying rules until the resulting expression stops changing.
It follows that the fewer times you have to go through the list of rules, the faster the evaluation. Looking at what happens with Trace (using gdelfino's functions g and h):
In[1]:= Trace@(#*#)&@x
Out[1]= {x x, x^2}
In[2]:= Trace@g@x
Out[2]= {g[x], x x, x^2}
In[3]:= Trace@h@x
Out[3]= {{h, Function[{x}, x x]}, Function[{x}, x x][x], x x, x^2}
it becomes clear why anonymous functions are fastest and why using Function introduces additional overhead over a simple SetDelayed. I recommend looking at the introduction of Leonid Shifrin's excellent book, where these concepts are explained in some detail.
I have on occasion constructed a Dispatch table of all the functions I need and manually applied it to my starting expression. This provides a significant speed increase over normal evaluation as none of Mathematica's inbuilt functions need to be matched against my expression.
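A minimal sketch of that Dispatch idea (my own toy rules, not the ones from my actual code):
rules = Dispatch[{p[x_] :> x^2, q[x_] :> x + 1}];
p[3] + q[4] //. rules
(* 9 + 5 = 14 *)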
My understanding is that the first approach is something equivalent to a lisp macro and in my experience is favored because of the more concise syntax.
Not really. Mathematica is a term rewriter, as are Lisp macros.
So I have two questions, is there a performance difference between executing functions versus the pattern matching/macro approach?
Yes. Note that you are never really "executing functions" in Mathematica. You are just applying rewrite rules to change one expression into another.
Consider mapping the Sqrt function over a packed array of floating point numbers. The fastest solution in Mathematica is to apply the Sqrt function directly to the packed array because it happens to implement exactly what we want and is optimized for this special case:
In[1] := xs = N@Range[100000];
In[2] := Sqrt[xs]; // AbsoluteTiming
Out[2] = {0.0060000, Null}
We might define a global rewrite rule that has terms of the form sqrt[x] rewritten to Sqrt[x] such that the square root will be calculated:
In[3] := Clear[sqrt];
sqrt[x_] := Sqrt[x];
Map[sqrt, xs]; // AbsoluteTiming
Out[3] = {0.4800007, Null}
Note that this is ~100× slower than the previous solution.
Alternatively, we might define a global rewrite rule that replaces the symbol sqrt with a lambda function that invokes Sqrt:
In[4] := Clear[sqrt];
sqrt = Function[{x}, Sqrt[x]];
Map[sqrt, xs]; // AbsoluteTiming
Out[4] = {0.0500000, Null}
Note that this is ~10× faster than the previous solution.
Why? Because the slow second solution is looking up the rewrite rule sqrt[x_] :> Sqrt[x] in the inner loop (for each element of the array) whereas the fast third solution looks up the value Function[...] of the symbol sqrt once and then applies that lambda function repeatedly. In contrast, the fastest first solution is a loop calling sqrt written in C. So searching the global rewrite rules is extremely expensive and term rewriting is expensive.
Why, then, is Sqrt ever fast? You might expect a 2× slowdown rather than 10×, because we've replaced one lookup for Sqrt with two lookups for sqrt and Sqrt in the inner loop, but this is not so, because Sqrt has the special status of being a built-in function that is matched in the core of the Mathematica term rewriter itself rather than via the general-purpose global rewrite table.
Other people have described much smaller performance differences between similar functions. I believe the performance differences in those cases are just minor differences in the exact implementation of Mathematica's internals. The biggest issue with Mathematica is the global rewrite table. In particular, this is where Mathematica diverges from traditional term-level interpreters.
You can learn a lot about Mathematica's performance by writing mini Mathematica implementations. In this case, the above solutions might be compiled to (for example) F#. The array may be created like this:
> let xs = [|1.0..100000.0|];;
...
The built-in sqrt function can be converted into a closure and given to the map function like this:
> Array.map sqrt xs;;
Real: 00:00:00.006, CPU: 00:00:00.015, GC gen0: 0, gen1: 0, gen2: 0
...
This takes 6ms just like Sqrt[xs] in Mathematica. But that is to be expected because this code has been JIT compiled down to machine code by .NET for fast evaluation.
Looking up rewrite rules in Mathematica's global rewrite table is similar to looking up the closure in a dictionary keyed on its function name. Such a dictionary can be constructed like this in F#:
> open System.Collections.Generic;;
> let fns = Dictionary<string, (obj -> obj)>(dict["sqrt", unbox >> sqrt >> box]);;
This is similar to the DownValues data structure in Mathematica, except that we aren't searching multiple resulting rules for the first to match on the function arguments.
The program then becomes:
> Array.map (fun x -> fns.["sqrt"] (box x)) xs;;
Real: 00:00:00.044, CPU: 00:00:00.031, GC gen0: 0, gen1: 0, gen2: 0
...
Note that we get a similar 10× performance degradation due to the hash table lookup in the inner loop.
An alternative would be to store the DownValues associated with a symbol in the symbol itself in order to avoid the hash table lookup.
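A hedged sketch of that alternative, in the same mini-implementation style (the record type and names are mine): each symbol carries its own rule, so applying it dereferences a field instead of hashing a string key:
> type Symbol = { Name : string; mutable Rule : (obj -> obj) option };;
> let sqrtSym = { Name = "sqrt"; Rule = Some (unbox >> sqrt >> box) };;
> // Application follows the symbol's own rule; no dictionary lookup.
  let apply (s : Symbol) (x : obj) =
    match s.Rule with
    | Some f -> f x
    | None -> failwithf "no rule for %s" s.Name;;
> Array.map (fun x -> apply sqrtSym (box x)) xs;;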
We can even write a complete term rewriter in just a few lines of code. Terms may be expressed as values of the following type:
> type expr =
    | Float of float
    | Symbol of string
    | Packed of float []
    | Apply of expr * expr [];;
Note that Packed implements Mathematica's packed lists, i.e. unboxed arrays.
The following init function constructs a List with n elements using the function f, returning a Packed if every return value was a Float or a more general Apply(Symbol "List", ...) otherwise:
> let init n f =
    let rec packed ys i =
      if i=n then Packed ys else
        match f i with
        | Float y ->
            ys.[i] <- y
            packed ys (i+1)
        | y ->
            Apply(Symbol "List", Array.init n (fun j ->
              if j<i then Float ys.[j]
              elif j=i then y
              else f j))
    packed (Array.zeroCreate n) 0;;
val init : int -> (int -> expr) -> expr
The following rule function uses pattern matching to identify expressions that it can understand and replaces them with other expressions:
> let rec rule = function
    | Apply(Symbol "Sqrt", [|Float x|]) ->
        Float(sqrt x)
    | Apply(Symbol "Map", [|f; Packed xs|]) ->
        init xs.Length (fun i -> rule(Apply(f, [|Float xs.[i]|])))
    | f -> f;;
val rule : expr -> expr
Note that the type of this function expr -> expr is characteristic of term rewriting: rewriting replaces expressions with other expressions rather than reducing them to values.
Our program can now be defined and executed by our custom term rewriter:
> rule (Apply(Symbol "Map", [|Symbol "Sqrt"; Packed xs|]));;
Real: 00:00:00.049, CPU: 00:00:00.046, GC gen0: 24, gen1: 0, gen2: 0
We've recovered the performance of Map[Sqrt, xs] in Mathematica!
We can even recover the performance of Sqrt[xs] by adding an appropriate rule:
| Apply(Symbol "Sqrt", [|Packed xs|]) ->
    Packed(Array.map sqrt xs)
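With that extra rule in place, the packed-array case no longer rewrites element by element (a quick check in the same session):
> rule (Apply(Symbol "Sqrt", [|Packed xs|]));;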
I wrote an article on term rewriting in F#.
Some measurements
Based on @gdelfino's answer and comments by @rcollyer I made this small program:
j = # # + # # &;
g[x_] := x x + x x ;
h = Function[{x}, x x + x x ];
anon = Table[Timing[Do[ # # + # # &[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
jj = Table[Timing[Do[ j[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
gg = Table[Timing[Do[ g[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
hh = Table[Timing[Do[ h[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
ListLinePlot[ {anon, jj, gg, hh},
PlotStyle -> {Black, Red, Green, Blue},
PlotRange -> All]
The results are, at least for me, very surprising (timing plot not reproduced here):
Any explanations? Please feel free to edit this answer (comments are a mess for long text)
Edit
Tested with the identity function f[x] = x to isolate the parsing from the actual evaluation. Results (same colors; plot not reproduced here):
Note: the results are very similar to the corresponding plot for constant functions (f[x_] := 1).
Pattern matching seems faster:
In[1]:= g[x_] := x*x
In[2]:= h = Function[{x}, x*x];
In[3]:= Do[h[RandomInteger[100]], {1000000}] // Timing
Out[3]= {1.53927, Null}
In[4]:= Do[g[RandomInteger[100]], {1000000}] // Timing
Out[4]= {1.15919, Null}
Pattern matching is also more flexible as it allows you to overload a definition:
In[5]:= g[x_] := x * x
In[6]:= g[x_,y_] := x * y
For simple functions you can compile to get the best performance:
In[7]:= k = Compile[{x}, x*x];
In[8]:= Do[k[RandomInteger[100]], {100000}] // Timing
Out[8]= {0.083517, Null}
You can use the function recordSteps from a previous answer to see what Mathematica actually does with Function. It treats it just like any other head. I.e., suppose you have the following:
f = Function[{x}, x + 2];
f[2]
It first transforms f[2] into
Function[{x}, x + 2][2]
At the next step, x+2 is transformed into 2+2. Essentially, Function evaluation behaves like an application of pattern-matching rules, so it shouldn't be surprising that it's not faster.
You can think of everything in Mathematica as an expression, where evaluation is the process of rewriting parts of the expression in a predefined sequence; this applies to Function just as to any other head.