Import operators from a TLA+ module into another file

Suppose that we have these operators in a file tools.tla:
---- MODULE PT ----
Max(x, y) == IF x > y THEN x ELSE y
Min(x, y) == IF x < y THEN x ELSE y
=====
and we want to pass values to them (like arguments in programming languages such as Python) and use them in another file, say use.tla, without reimplementing Max and Min. How is this possible?

I met a similar problem, and this is my solution:
File -> Open module -> Add TLA+ module -> tools.tla
An empty tools.tla will be created; write your operators there.
Then go back to the other module and add:
EXTENDS tools
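For example, a minimal sketch of the second module (this assumes the header inside tools.tla reads ---- MODULE tools ----, since a TLA+ module's name must match its file name):
---- MODULE use ----
EXTENDS tools

\* Max and Min are now available as if they were defined here.
Biggest  == Max(2, 3)  \* 3
Smallest == Min(2, 3)  \* 2
====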

Can multiple modules have the same module type? How do I organize them and their interface files?

Currently, I have the following within the same OCaml file, blah.ml:
module type blah =
sig
  val a : some-type
end

module type X =
sig
  val x : some-type
end

module Y : X =
struct
  let x = some-def
end

module Z : X =
struct
  let x = some-other-def
end
blah.mli looks like this:
module type blah =
sig
  val a : some-type
end

module type X =
sig
  val x : some-type
end

module Y : X
module Z : X
I want X, Y, and Z to be in separate files with separate interfaces. How do I state in Y.mli and Z.mli that Y and Z have type X?
Any readings on this would also be appreciated. There are a lot of resources talking about modules, interfaces, and functors, but they don't mention interface files for modules whose types are themselves named module types.
You can create x.ml containing the sig, y.ml containing one module, and z.ml containing the other. You don't need to do anything special to tell the compiler that Y : X and Z : X. The compiler infers the module type automatically from the fact that the module conforms to the type, i.e. it implements every binding that the module type needs. If Y or Z doesn't conform, the type error will be shown at the point of use.
If you want to restrict the module type at the point of definition, that is also doable, by giving each module an interface file and using include to pull in the required signature there. For example:
(* x.ml *)
module type S = sig
  val x : some-type
end

(* y.mli *)
include X.S

(* y.ml *)
let x = some-def
...and so on.
However, this often becomes too restrictive, as the signature hides too much detail about the types. So in practice you may need to add type equality sharing constraints where you want to expose more type information, to avoid compile errors. E.g.:
(* y.mli *)
include X.S with type t = int
Often it is easier to not have the explicit interface file at all unless you really need to make some parts of the module private.
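For concreteness, a minimal sketch of the sharing-constraint variant (the type t and the value 42 are made up for illustration):
(* x.ml *)
module type S = sig
  type t
  val x : t
end

(* y.mli *)
include X.S with type t = int

(* y.ml *)
type t = int
let x = 42

Clients of Y then know both that Y matches X.S and that Y.t is int.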

Definition in Coq using the keyword `exists`

I'm trying to define an entity named isVector using the following syntax
Require Export Setoid.
Require Export Coq.Reals.Reals.
Require Export ArithRing.
Definition Point := Type.
Record MassPoint:Type:= cons{number : R ; point: Point}.
Variable add_MP : MassPoint -> MassPoint -> MassPoint .
Variable mult_MP : R -> MassPoint -> MassPoint .
Variable orthogonalProjection : Point -> Point -> Point -> Point.
Definition isVector (v:MassPoint):= exists A, B :Point , v= add_MP((−1)A)(1B).
And the Coq IDE keeps complaining that there is a syntax error in the definition; so far I haven't figured out why.
There are many problems here.
First, you'd write:
exists A B : Point, …
with no comma between the different variables.
But then, you also have syntax errors on the right-hand side. First, I'm not sure those 1 and -1 operations exist. Second, function calls would be written this way:
add_MP A B
You can write them the way you do:
add_MP(A)(B)
But in the long run you should probably get used to the syntax of function calls being just whitespace. You might need to axiomatize this -1 operation the way you axiomatized the other operations, unless it is a notation that you defined somewhere but did not post here.
Thanks for the help. After experimenting a little, below is the solution that works:
Require Export Coq.Reals.Reals. (* for the type R *)

Definition Point := Type.
Record massPoint : Type := cons { number : R; point : Point }.
Variable add_MP : massPoint -> massPoint -> massPoint.
Variable mult_MP : R -> massPoint -> massPoint.
Definition tp (p : Point) := cons (-1) p.
Definition isVector (v : massPoint) := exists A B : Point, v = add_MP (cons (-1) A) (cons 1 B).

Matlab from Fortran - problems transferring big matrix

I have to call Matlab from Fortran and execute a program there. I have a large 3xN (N is around 2500) matrix of data which needs to be transferred to Matlab. I noticed some discrepancies in the data: the last row of the Fortran matrix becomes the first row in Matlab (the other rows keep their order, shifted down by one), and this row also loses its first value.
Like this - In Fortran
1.1 1.2 1.3
2.1 2.2 2.3
.....
1999.1 1999.2 1999.3
2000.1 2000.2 2000.3
becomes in Matlab
0.0 2000.2 2000.3
1.1 1.2 1.3
2.1 2.2 2.3
.....
1999.1 1999.2 1999.3
I can't understand what is going wrong; I have already spent several hours on this. The relevant code is:
node_xyz_ini = mxCreateDoubleMatrix(M, N, 0) ! M, N - dimensions
call mxCopyReal8ToPtr(CoordSet, mxGetPr(node_xyz_ini), M*N)
I use Octave rather than Matlab. With that as a caveat, here is an example of what I use, in this case for double-precision two-dimensional arrays:
MODULE IO
  use, intrinsic :: iso_c_binding
  !! use c_float, c_double, c_double_complex, c_int, c_ptr
  implicit none
  real (c_double), allocatable :: x(:,:), h(:), f(:)
  integer (c_int), allocatable :: t(:,:)
  integer (c_int) :: nx, ne
contains
  Subroutine Write_Array_RDP(varname, variable)
    implicit none
    integer (c_int) :: kx, ky, sh(2), ncol, nrow
    character(len=7), intent(in) :: varname
    character(:), allocatable :: wrtfmt
    character(range(ncol)) :: res
    real(c_double), intent(in) :: variable(:,:)
    open(unit=10, file=varname, form="formatted", status="replace", action="write")
    write(10, fmt="(A)") "# created by ?? "
    sh = shape(variable)
    ncol = sh(2); nrow = sh(1)
    write(10, fmt="(A,A)") "# name: ", varname
    write(10, fmt="(A)") "# type: matrix"
    write(10, fmt="(A,i0)") "# rows: ", nrow
    write(10, fmt="(A,i0)") "# columns: ", ncol
    write(res, '(i0)') ncol
    wrtfmt = "(" // trim(res) // "(e20.12))"
    do ky = 1, nrow
      write(10, fmt=wrtfmt) (variable(ky,kx), kx=1,ncol)
    end do
    write(10,*) " "
    write(10,*) " "
    close(10)
  End Subroutine Write_Array_RDP
END MODULE IO

Program Main
  use IO
  implicit none
  real (c_double), allocatable :: DPArray(:,:)
  allocate(DPArray(3,3))
  DPArray = reshape((/1.0d0,2.0d0,3.0d0,1.0d0,2.0d0,3.0d0,1.0d0,2.0d0,3.0d0/), (/3,3/))
  Call Write_Array_RDP('DPArray', DPArray)
End Program Main
I compile and link with 'gfortran name.f90', then run with ./a.out. The file DPArray is created. Then, in Octave:
load DPArray
DPArray
produces the output:
1 1 1
2 2 2
3 3 3
I have found it necessary to recode the write subroutine for each different variable type (Write_Array_CMPLX, Write_Array_INT, etc.).

How can I know the line equivalence of two similar files?

When I add a line to the middle of a file, all following lines have their number incremented.
Is there a utility that generates the list of equivalent line numbers between two files?
The output would be something like:
1 1
2 2
3 4 (line added)
4 5
One could probably create such a utility using dynamic programming, in a way similar to the diff algorithm. It seems useful; hasn't it already been done?
I found out it is pretty easy to do with Python's difflib:
import difflib

def seq_equivs(s1, s2):
    equiv = []
    s = difflib.SequenceMatcher(a=s1, b=s2)
    for m in s.get_matching_blocks():
        if m[2] == 0:
            break
        for n in range(1, 1 + m[2]):
            equiv.append((m[0] + n, m[1] + n))
    return equiv
Example usage:
f1 = open('file1.txt').read().split('\n')
f2 = open('file2.txt').read().split('\n')
for equivs in seq_equivs(f1, f2):
    print('%d %d' % equivs)
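For instance, with list contents mirroring the example in the question (one line inserted in the middle; the contents are hypothetical), the output matches the expected mapping:
old = ['first line', 'second line', 'third line', 'fourth line']
new = ['first line', 'second line', 'added line', 'third line', 'fourth line']
for pair in seq_equivs(old, new):
    print('%d %d' % pair)
# prints:
# 1 1
# 2 2
# 3 4
# 4 5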

Performance difference between functions and pattern matching in Mathematica

So Mathematica is different from other dialects of Lisp because it blurs the lines between functions and macros. In Mathematica, if a user wanted to write a mathematical function, they would likely use pattern matching, like f[x_] := x*x, instead of f = Function[{x}, x*x], though both would return the same result when called as f[x]. My understanding is that the first approach is something equivalent to a Lisp macro, and in my experience it is favored because of the more concise syntax.
So I have two questions: is there a performance difference between executing functions versus the pattern-matching/macro approach? Though part of me wouldn't be surprised if functions were actually transformed into some version of macros to allow features like Listable to be implemented.
The reason I care about this question is because of the recent set of questions (1) (2) about trying to catch Mathematica errors in large programs. If most of the computations were defined in terms of Functions, it seems to me that keeping track of the order of evaluation and where the error originated would be easier than trying to catch the error after the input has been rewritten by the successive application of macros/patterns.
The way I understand Mathematica is that it is one giant search-and-replace engine. All functions, variables, and other assignments are essentially stored as rules, and during evaluation Mathematica goes through this global rule base and applies rules until the resulting expression stops changing.
It follows that the fewer times you have to go through the list of rules, the faster the evaluation. Looking at what happens using Trace (with gdelfino's functions g and h)
In[1]:= Trace@(#*#)&@x
Out[1]= {x x, x^2}
In[2]:= Trace@g@x
Out[2]= {g[x], x x, x^2}
In[3]:= Trace@h@x
Out[3]= {{h, Function[{x}, x x]}, Function[{x}, x x][x], x x, x^2}
it becomes clear why anonymous functions are fastest and why using Function introduces additional overhead over a simple SetDelayed. I recommend looking at the introduction of Leonid Shifrin's excellent book, where these concepts are explained in some detail.
I have on occasion constructed a Dispatch table of all the functions I need and manually applied it to my starting expression. This provides a significant speed increase over normal evaluation as none of Mathematica's inbuilt functions need to be matched against my expression.
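As a rough sketch of that idea (the rules below are invented for illustration, not taken from my actual code), you can bundle your definitions into a Dispatch table and drive the rewriting yourself with ReplaceRepeated:
rules = Dispatch[{square[x_] :> x^2, inc[x_] :> x + 1}];
square[inc[3]] //. rules  (* evaluates to 16 *)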
My understanding is that the first approach is something equivalent to a lisp macro and in my experience is favored because of the more concise syntax.
Not really. Mathematica is a term rewriter, as are Lisp macros.
So I have two questions, is there a performance difference between executing functions versus the pattern matching/macro approach?
Yes. Note that you are never really "executing functions" in Mathematica. You are just applying rewrite rules to change one expression into another.
Consider mapping the Sqrt function over a packed array of floating point numbers. The fastest solution in Mathematica is to apply the Sqrt function directly to the packed array because it happens to implement exactly what we want and is optimized for this special case:
In[1] := xs = N@Range[100000];
In[2] := Sqrt[xs]; // AbsoluteTiming
Out[2] = {0.0060000, Null}
We might define a global rewrite rule that has terms of the form sqrt[x] rewritten to Sqrt[x] such that the square root will be calculated:
In[3] := Clear[sqrt];
sqrt[x_] := Sqrt[x];
Map[sqrt, xs]; // AbsoluteTiming
Out[3] = {0.4800007, Null}
Note that this is ~100× slower than the previous solution.
Alternatively, we might define a global rewrite rule that replaces the symbol sqrt with a lambda function that invokes Sqrt:
In[4] := Clear[sqrt];
sqrt = Function[{x}, Sqrt[x]];
Map[sqrt, xs]; // AbsoluteTiming
Out[4] = {0.0500000, Null}
Note that this is ~10× faster than the previous solution.
Why? Because the slow second solution is looking up the rewrite rule sqrt[x_] :> Sqrt[x] in the inner loop (for each element of the array) whereas the fast third solution looks up the value Function[...] of the symbol sqrt once and then applies that lambda function repeatedly. In contrast, the fastest first solution is a loop calling sqrt written in C. So searching the global rewrite rules is extremely expensive and term rewriting is expensive.
So why is Sqrt ever fast? You might expect a 2× slowdown instead of 10× because we've replaced one lookup for Sqrt with two lookups, for sqrt and for Sqrt, in the inner loop, but this is not so, because Sqrt has the special status of being a built-in function that is matched in the core of the Mathematica term rewriter itself rather than via the general-purpose global rewrite table.
Other people have described much smaller performance differences between similar functions. I believe the performance differences in those cases are just minor differences in the exact implementation of Mathematica's internals. The biggest issue with Mathematica is the global rewrite table. In particular, this is where Mathematica diverges from traditional term-level interpreters.
You can learn a lot about Mathematica's performance by writing mini Mathematica implementations. In this case, the above solutions might be compiled to (for example) F#. The array may be created like this:
> let xs = [|1.0..100000.0|];;
...
The built-in sqrt function can be converted into a closure and given to the map function like this:
> Array.map sqrt xs;;
Real: 00:00:00.006, CPU: 00:00:00.015, GC gen0: 0, gen1: 0, gen2: 0
...
This takes 6ms just like Sqrt[xs] in Mathematica. But that is to be expected because this code has been JIT compiled down to machine code by .NET for fast evaluation.
Looking up rewrite rules in Mathematica's global rewrite table is similar to looking up the closure in a dictionary keyed on its function name. Such a dictionary can be constructed like this in F#:
> open System.Collections.Generic;;
> let fns = Dictionary<string, (obj -> obj)>(dict["sqrt", unbox >> sqrt >> box]);;
This is similar to the DownValues data structure in Mathematica, except that we aren't searching multiple resulting rules for the first to match on the function arguments.
The program then becomes:
> Array.map (fun x -> fns.["sqrt"] (box x)) xs;;
Real: 00:00:00.044, CPU: 00:00:00.031, GC gen0: 0, gen1: 0, gen2: 0
...
Note that we get a similar 10× performance degradation due to the hash table lookup in the inner loop.
An alternative would be to store the DownValues associated with a symbol in the symbol itself in order to avoid the hash table lookup.
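A sketch of that alternative in the same F# setting (the record type and field names are invented for illustration): each symbol value carries its own rewrite function, so applying it is a field access rather than a dictionary lookup:
> type symbol = { name: string; fn: obj -> obj };;
> let sqrtSym = { name = "sqrt"; fn = unbox >> sqrt >> box };;
> Array.map (fun x -> sqrtSym.fn (box x)) xs;;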
We can even write a complete term rewriter in just a few lines of code. Terms may be expressed as values of the following type:
> type expr =
    | Float of float
    | Symbol of string
    | Packed of float []
    | Apply of expr * expr [];;
Note that Packed implements Mathematica's packed lists, i.e. unboxed arrays.
The following init function constructs a List with n elements using the function f, returning a Packed if every return value was a Float or a more general Apply(Symbol "List", ...) otherwise:
> let init n f =
    let rec packed ys i =
      if i=n then Packed ys else
        match f i with
        | Float y ->
            ys.[i] <- y
            packed ys (i+1)
        | y ->
            Apply(Symbol "List", Array.init n (fun j ->
              if j<i then Float ys.[j]
              elif j=i then y
              else f j))
    packed (Array.zeroCreate n) 0;;
val init : int -> (int -> expr) -> expr
The following rule function uses pattern matching to identify expressions that it can understand and replaces them with other expressions:
> let rec rule = function
    | Apply(Symbol "Sqrt", [|Float x|]) ->
        Float(sqrt x)
    | Apply(Symbol "Map", [|f; Packed xs|]) ->
        init xs.Length (fun i -> rule(Apply(f, [|Float xs.[i]|])))
    | f -> f;;
val rule : expr -> expr
Note that the type of this function expr -> expr is characteristic of term rewriting: rewriting replaces expressions with other expressions rather than reducing them to values.
Our program can now be defined and executed by our custom term rewriter:
> rule (Apply(Symbol "Map", [|Symbol "Sqrt"; Packed xs|]));;
Real: 00:00:00.049, CPU: 00:00:00.046, GC gen0: 24, gen1: 0, gen2: 0
We've recovered the performance of Map[Sqrt, xs] in Mathematica!
We can even recover the performance of Sqrt[xs] by adding an appropriate rule:
| Apply(Symbol "Sqrt", [|Packed xs|]) ->
Packed(Array.map sqrt xs)
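With that extra case in place, the packed-array path can be exercised directly, mirroring Sqrt[xs] in Mathematica (reusing the xs array from above):
> rule (Apply(Symbol "Sqrt", [|Packed xs|]));;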
I wrote an article on term rewriting in F#.
Some measurements
Based on @gdelfino's answer and comments by @rcollyer, I made this small program:
j = # # + # # &;
g[x_] := x x + x x ;
h = Function[{x}, x x + x x ];
anon = Table[Timing[Do[ # # + # # &[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
jj = Table[Timing[Do[ j[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
gg = Table[Timing[Do[ g[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
hh = Table[Timing[Do[ h[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
ListLinePlot[ {anon, jj, gg, hh},
PlotStyle -> {Black, Red, Green, Blue},
PlotRange -> All]
The results are, at least for me, very surprising:
Any explanations? Please feel free to edit this answer (comments are a mess for long text)
Edit
Tested with the identity function f[x_] = x to isolate the parsing from the actual evaluation. Results (same colors):
Note: the results are very similar to this plot for constant functions (f[x_] := 1).
Pattern matching seems faster:
In[1]:= g[x_] := x*x
In[2]:= h = Function[{x}, x*x];
In[3]:= Do[h[RandomInteger[100]], {1000000}] // Timing
Out[3]= {1.53927, Null}
In[4]:= Do[g[RandomInteger[100]], {1000000}] // Timing
Out[4]= {1.15919, Null}
Pattern matching is also more flexible as it allows you to overload a definition:
In[5]:= g[x_] := x * x
In[6]:= g[x_,y_] := x * y
For simple functions you can compile to get the best performance:
In[7]:= k = Compile[{x}, x*x];
In[8]:= Do[k[RandomInteger[100]], {100000}] // Timing
Out[8]= {0.083517, Null}
You can use the function recordSteps from a previous answer to see what Mathematica actually does with Function. It treats it just like any other head. I.e., suppose you have the following:
f = Function[{x}, x + 2];
f[2]
It first transforms f[2] into
Function[{x}, x + 2][2]
At the next step, x+2 is transformed into 2+2. Essentially, "Function" evaluation behaves like an application of pattern matching rules, so it shouldn't be surprising that it's not faster.
You can think of everything in Mathematica as an expression, where evaluation is the process of rewriting parts of the expression in a predefined sequence; this applies to Function just as it does to any other head.
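A quick way to watch those steps is Trace (a sketch; the shape of the output follows the same pattern as the Trace@h@x example earlier in this thread):
In[1]:= f = Function[{x}, x + 2];
In[2]:= Trace[f[2]]
Out[2]= {{f, Function[{x}, x + 2]}, Function[{x}, x + 2][2], 2 + 2, 4}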