How to prevent this Cyclic polynomial hash function from using a type constraint?

I am trying to implement the Cyclic polynomial hash function in F#. It uses the bit-wise operators ^^^ and <<<. Here is an example of a function that hashes an array:
let createBuzhash (pattern : array<'a>) =
    let n = pattern.Length
    let rec loop index pow acc =
        if index < n then
            loop (index+1) (pow-1) (acc ^^^ ((int pattern.[index]) <<< pow))
        else
            acc
    loop 0 (n-1) 0
My problem is that the type of 'a will be constrained to int, while I want this function to work with any of the types that support the bit-wise operators, for example char. I tried using inline, but that creates some problems further down in my library. Is there a way to fix this without using inline?
Edit for clarity: The function will be part of a library, and another hash function is provided for types that don't support the bit-wise operators. I want this function to work with arrays of numeric types and/or chars.
Edit 2 (problem solved): The problem with inline was the way I loaded the function from my library. Instead of
let hashedPattern = library.createBuzhash targetPattern
I used this binding:
let myFunction = library.createBuzhash
let hashedPattern = myFunction targetPattern
that constrains the input type for myFunction to int, even though createBuzhash is an inline function in the library. Changing the way I call the function fixed the type constraint problem, and inline works perfectly fine, as the answer below suggests.
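To make the pitfall from Edit 2 concrete, here is a minimal sketch (myGenericHash is just an illustrative name): a point-free binding of an inline function is compiled as an ordinary value, so its statically resolved type parameter gets fixed, while wrapping the call in another inline function with an explicit argument keeps it generic.
// The point-free binding is a plain value, so the statically resolved
// type parameter of the inline createBuzhash gets fixed (here to int):
let myFunction = library.createBuzhash
// Wrapping the call in another inline function with an explicit argument
// keeps the binding generic, so char arrays (and other numeric arrays) work:
let inline myGenericHash pattern = library.createBuzhash pattern
let hashedPattern = myGenericHash targetPattern
let hashedChars = myGenericHash [| 'a'; 'b' |]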

In the implementation, you are converting the value in the array to an Integer using the int function as follows: int pattern.[index]
This creates a constraint on the type of array elements requiring them to be "something that can be converted to int". If you mark the function as inline, it will actually work for types like char and you'll be able to write:
createBuzhash [|'a'; 'b'|]
But there are still many other types that cannot be converted to integer using the int function.
To make this work for any type, you have to decide how you want to handle types that are not numeric. Do you want to:
Provide your own hashing function for all values?
Use the built-in .NET GetHashCode operation?
Only make your function work on numeric types and arrays of numeric types?
One option would be to add a parameter that specifies how to do the conversion:
let inline createBuzhash conv (pattern : array<'a>) =
    let n = pattern.Length
    let rec loop index pow acc =
        if index < pattern.Length then
            loop (index+1) (pow-1) (acc ^^^ ((conv pattern.[index]) <<< pow))
        else
            acc
    loop 0 (n-1) 0
When calling createBuzhash, you now need to give it a function for hashing the elements. This works on primitive types using the int function:
createBuzhash int [| 0 .. 10 |]
createBuzhash int [|'a'; 'b'|]
But you can also use built-in F# hashing mechanism:
createBuzhash hash [| (1,"foo"); (2,"bar") |]
And you can even handle nested arrays by passing the function to itself:
createBuzhash (createBuzhash int) [| [| 1 |]; [| 2 |] |]
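As a small, hypothetical extension of the same idea, the conversion parameter also lets you derive specialised helpers by fixing conv up front (hashString and hashBytes are made-up names, not part of the answer):
// Hypothetical helpers built on the conv-based createBuzhash above
let hashString (s : string) = createBuzhash int (s.ToCharArray())
let hashBytes (bs : byte[]) = createBuzhash int bs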

Related

Verifying programs with heterogeneous arrays in VST

I'm verifying a C program that uses arrays to store heterogeneous data - in particular, the program uses arrays to implement cons cells, where the first element of the array is an integer value, and the second element is a pointer to the next cons cell.
For example, the free operation for this list would be:
void listfree(void * x) {
    if((x == 0)) {
        return;
    } else {
        void * n = *((void **)x + 1);
        listfree(n);
        free(x);
        return;
    }
}
Note: Not shown here, but other code sections will read the values of the array and treat it as an integer.
While I understand that the natural way to express this would be as some kind of struct, the program itself is written using an array, and I can't change this.
How should I specify the structure of the memory in VST?
I've defined an lseg predicate as follows:
Fixpoint lseg (x: val) (s: (list val)) (self_card: lseg_card) : mpred :=
  match self_card with
  | lseg_card_0 => !!(x = nullval) && !!(s = []) && emp
  | lseg_card_1 _alpha_513 =>
    EX v : Z,
    EX s1 : (list val),
    EX nxt : val,
    !!(~ (x = nullval)) &&
    !!(s = ([(Vint (Int.repr v))] ++ s1)) &&
    (data_at Tsh (tarray tint 2) [(Vint (Int.repr v)); nxt] x) *
    (lseg nxt s1 _alpha_513)
  end.
However, I run into trouble when trying to evaluate void *n = *(void **)x; presumably because the specification states that the memory contains an array of ints, not pointers.
Here is what the issue probably is, and how it can (almost) be solved.
The C semantics permit casting an integer (of the right size) to a pointer, and vice versa, as long as you don't actually do any pointer operations to an integer value, or vice versa. Very likely your C program obeys those rules. But the type system of Verifiable C tries to enforce that local variables (and array elements, etc.) of integer type will never contain pointer values, and vice versa (except the special integer value 0, which is NULL).
However, Verifiable C does support a (proved-foundationally-sound) workaround to this stricter enforcement:
typedef void * int_or_ptr
#ifdef COMPCERT
__attribute((aligned(_Alignof(void*))))
#endif
;
That is: the int_or_ptr type is void*, but with the attribute "align this as void*". So it's semantically identical to void*, but the redundant attribute is a hint to the VST type system to be less restrictive about C type enforcement.
So, when I say "can almost be solved", I'm asking: Can you modify the C program to use an array of "void* aligned as void*" ?
If so, then you can proceed. Your VST verification should use int_or_ptr_type, which is a definition of type Ctypes.type provided by VST-Floyd, when referring to the C-language type of these array elements, or of local variables that these elements are loaded into.
Unfortunately, int_or_ptr_type is not documented in the reference manual (VC.pdf), which is an omission that should be corrected. You can look at progs/int_or_ptr.c and progs/verif_int_or_ptr.v, but these do much more than you want or need: they axiomatize operators that distinguish odd integers from aligned pointers, which is undefined in C11 (but consistent with C11, otherwise the OCaml garbage collector could never work). That is, those axiomatized external functions are consistent with CompCert, gcc, and clang; but you won't need any of them, because the only operations you're doing on int_or_ptr are the perfectly legal "comparison with NULL", "cast to integer", and "cast to struct foo *".

Can Julia macros be used to generate code based on specific function implementation?

I am fairly new to Julia and I am learning about metaprogramming.
I would like to write a macro that receives a function as input and returns another function based on the implementation details of its input.
For example given:
function f(x)
    x + 100
end
function g(x)
    f(x)*x
end
function h(x)
    g(x)-0.5*f(x)
end
I would like to write a macro that returns something like this:
function h_traced(x)
    f = x + 100
    println("loc 1 x: ", x)
    g = f * x
    println("loc 2 x: ", x)
    res = g - 0.5 * f
    println("loc 3 x: ", x)
    res
end
Now both code_lowered and code_typed seem to give me back the AST in the form of CodeInfo; however, when I try to use it programmatically in my macro I get an empty object.
macro myExpand(f)
    body = code_lowered(f)
    println("myExpand Body length: ", length(body))
end
called like this
@myExpand :(h)
however the same call outside the macro works ok.
code_lowered(h)
Lastly, even the following returns an empty CodeInfo.
macro myExpand(f)
    body = code_lowered(Symbol("h"))
    println("myExpand Body length: ", length(body))
end
This might be incredibly trivial, but I could not work out myself why the h symbol does not resolve to the function defined. Am I missing something about the scope of symbols?
I find it useful to think about macros as a way to transform an input syntax into an output syntax.
So you could very well define a macro @my_macro such that
@my_macro function h(x)
    g(x)-0.5*f(x)
end
would expand to something like
function h_traced(x)
    println("entering function: x=", x)
    g(x)-0.5*f(x)
end
But to such a macro, h is merely a name, an identifier (technically, a Symbol) that can be transformed into h_traced. h is not the function that is bound to this name (in the same way as x = 2 involves binding a name x, to an integer value 2, but x is not 2; x is merely a name that can be used to refer to 2). In contrast to this, when you call code_lowered(h), h gets evaluated first, and code_lowered is passed its value (which is a function) as argument.
Back to our macro: expanding to an expression that involves the definition of g and f goes way further than mere syntax transformations: we're leaving the purely syntactic domain, since such a transformation would need to "understand" that these are functions, look up their definitions and so on.
You are right to think about code_lowered and friends: this is IMO the adequate level of abstraction for what you're trying to achieve. You should probably look into tools like Cassette.jl or IRTools.jl. That being said, if you're still relatively new to Julia, you might want to get a bit more used to the language before delving too deeply into such topics.
You don't need a macro, you need a generated function. They can not only return code (Expr), but also IR (lowered code). Usually, for this kind of thing, people use Base.uncompressed_ast, not code_lowered. Both Cassette and IRTools simplify the implementation for you, in different ways.
The basic idea is:
Have a generated function that takes a function and its arguments
In that function, get the IR of that function, and modify it to your purposes
Return the new IR from the generated function. This will then be compiled and called on the original arguments.
A short demonstration with IRTools:
julia> IRTools.@dynamo function traced(args...)
           ir = IRTools.IR(args...)
           p = IRTools.Pipe(ir)
           for (v, stmt) in p
               IRTools.insertafter!(p, v, IRTools.xcall(println, "loc $v"))
           end
           return IRTools.finish(p)
       end
julia> function h(x)
           sin(x)-0.5*cos(x)
       end
h (generic function with 1 method)
julia> @code_ir traced(h, 1)
1: (%1, %2)
%3 = Base.getfield(%2, 1)
%4 = Base.getfield(%2, 2)
%5 = Main.sin(%4)
%6 = (println)("loc %3")
%7 = Main.cos(%4)
%8 = (println)("loc %4")
%9 = 0.5 * %7
%10 = (println)("loc %5")
%11 = %5 - %9
%12 = (println)("loc %6")
return %11
julia> traced(h, 1)
loc %3
loc %4
loc %5
loc %6
0.5713198318738266
The rest is left as an exercise. The numbers of the variables are off, because they are, of course, shifted during the transformation. You'd have to add some bookkeeping for that, or use the substitute function on Pipe in some way (but I never quite understood it). If you need the name of the variables, you can get the IR with slots preserved by using a different method of the IR constructor.
(And now the advertisement: I have written something like this. It's currently quite inefficient, but you might get some ideas from it.)

F#: No abstract property was found that corresponds to this override

Hello fellow Overflowers. I am working on a group project to create a ray tracer that draws a 2D rendering of a 3D scene. The task I am currently on involves matrix transformation of objects (shapes), that need to be moved around, mirrored, sheared etc.
In working with shapes we have chosen to implement an interface that defines the type for a hit function. This hit function is defined in each shape, such as sphere, box, plane etc. When transforming a shape I need to transform the rays that hit the shape and the way to do that seems to be with a higher order function that alters the original hit function.
In order to do this I have implemented the function transformHitFunction, which seems to work, but the new type transformedShape, which implements the Shape interface, is giving me the error
No abstract property was found that corresponds to this override
which doesn't make any sense to me, as it works with other hit functions of the same type. Can anyone spot what's wrong?
I have tried to strip away all modules, namespaces and code that is not relevant to this issue.
type Transformation = Matrix of float [,]

type Vector =
    | V of float * float * float

let mkVector x y z = V(x, y, z)
let vgetX (V(x,_,_)) = x
let vgetY (V(_,y,_)) = y
let vgetZ (V(_,_,z)) = z

type Point =
    | P of float * float * float

let mkPoint x y z = P(x, y, z)
let pgetX (P(x,_,_)) = x
let pgetY (P(_,y,_)) = y
let pgetZ (P(_,_,z)) = z

type Material = Material

type Texture =
    | T of (float -> float -> Material)

type Shape =
    abstract member hit: Point * Vector -> (Texture*float*Vector) option
let transformPoint (p:Point) t =
    match t with
    | Matrix m -> mkPoint ((pgetX(p))*m.[0,0] + (pgetY(p))*m.[0,1] + (pgetZ(p))*m.[0,2] + m.[0,3])
                          ((pgetX(p))*m.[1,0] + (pgetY(p))*m.[1,1] + (pgetZ(p))*m.[1,2] + m.[1,3])
                          ((pgetX(p))*m.[2,0] + (pgetY(p))*m.[2,1] + (pgetZ(p))*m.[2,2] + m.[2,3])

let transformVector (v:Vector) t =
    match t with
    | Matrix m -> mkVector ((vgetX(v))*m.[0,0] + (vgetY(v))*m.[0,1] + (vgetZ(v))*m.[0,2] + m.[0,3])
                           ((vgetX(v))*m.[1,0] + (vgetY(v))*m.[1,1] + (vgetZ(v))*m.[1,2] + m.[1,3])
                           ((vgetX(v))*m.[2,0] + (vgetY(v))*m.[2,1] + (vgetZ(v))*m.[2,2] + m.[2,3])
let transformHitFunction fn (t:Transformation) =
    fun (p:Point, v:Vector) ->
        let tp = transformPoint p t
        let tv = transformVector v t
        match fn(tp,tv) with
        | None -> None
        | Some (tex:Texture, d:float, n) ->
            let tn = transformVector n t
            Some (tex, d, tn)
type transformedShape (sh:Shape, t:Transformation) =
    interface Shape with
        member this.hit = transformHitFunction sh.hit t
Short answer
When having problems with implementing or overriding members, provide the argument list exactly as in the abstract or virtual member's definition. (Also, mind your parentheses, because additional parentheses can change the type of a member in subtle ways.)
E.g. in this case: member this.hit (arg1, arg2) = ...
Slightly longer answer
You're encountering a situation in which the difference between F#'s first-class functions and its support of object-oriented style methods is relevant.
For compatibility with the Common Language Infrastructure's (CLI's) object-oriented languages (and the object-oriented programming style in F# programs), F# sometimes distinguishes not only between functions and values, but also between functions written in the object-oriented style and functions written in the functional style.
F# uses very similar syntax for two things: the "classical" CLI methods that take an argument list (and also support overloading and optional parameters) versus F#'s own favorite function type FSharpFunc, which always takes one parameter but supports currying and may take multiple parameters via tuples. But the semantics of these two can be different.
The last line of the question tries to pass a function with tupled input to implement a method that takes two arguments the way a method in C# or VB.NET takes them: as a CLI method's argument list. Directly assigning an F#-style first-class function won't work here, and neither would a single tuple argument; the compiler insists on getting every argument explicitly. If you write the implementation with its complete method argument list, it will work. For example:
member this.hit (arg1, arg2) = transformHitFunction sh.hit t (arg1, arg2)
Another solution would be to declare hit as:
abstract member hit: (Point * Vector -> (Texture*float*Vector) option)
(Note the parentheses!) Now it's a property that contains a first-class function; you can implement it by returning such a function, but the type of the member subtly changed.
The latter is why even implementing the original interface as a single-argument function, e.g. like this:
member this.hit a = transformHitFunction sh.hit t a // error
will not work. More precisely, the compiler will refuse to see a as a tuple. The same issue applies to
member this.hit ((arg1, arg2)) = transformHitFunction sh.hit t (arg1, arg2) // error
What's wrong now? The outer parentheses define the argument list, but the inner parentheses use a tuple pattern to decompose a single argument! So the argument list still has only one argument, and compilation fails. The outermost parentheses and commas when writing methods are a different feature than the tuples used elsewhere, even though the compiler translates between the two in some cases.
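To see the argument-list-versus-tuple distinction in isolation, here is a minimal sketch using a hypothetical interface IExample (not part of the question's code); the rejected variant is left commented out:
type IExample =
    // CLI-style method whose signature is a two-argument parameter list
    abstract member M : int * int -> int

type GoodImpl() =
    interface IExample with
        // an argument list with two arguments matches the abstract member
        member this.M (a, b) = a + b

// The variant below uses a single argument decomposed by a tuple pattern, so its
// argument list has only one entry and the compiler rejects the override:
// type BadImpl() =
//     interface IExample with
//         member this.M ((a, b)) = a + b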
At the moment, your transformedShape.hit is a non-indexed property. When invoked, it returns a function that you need to provide with a Point*Vector tuple, and then you'll get the result you want. You'll be able to see this better if you add a helper binding and hover over f here:
type transformedShape (sh:Shape, t:Transformation) =
    interface Shape with
        member this.hit =
            let f = transformHitFunction sh.hit t
            f
As others have remarked already, all you need to do is spell out the arguments explicitly, and you're good:
type transformedShape2 (sh:Shape, t:Transformation) =
    interface Shape with
        member this.hit(p, v) = transformHitFunction sh.hit t (p, v)
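A minimal usage sketch, assuming the definitions above are in scope (the toy shape and the 4x4 identity matrix below are made up purely for illustration):
// Identity transformation: leaves points and vectors unchanged
let identity = Matrix (Array2D.init 4 4 (fun i j -> if i = j then 1.0 else 0.0))

// A toy shape whose hit function only "hits" rays starting at the origin
let toyShape =
    { new Shape with
        member this.hit (p, v) =
            if pgetX p = 0.0 && pgetY p = 0.0 && pgetZ p = 0.0
            then Some (T (fun _ _ -> Material), 1.0, v)
            else None }

let wrapped = transformedShape2 (toyShape, identity) :> Shape
let result = wrapped.hit (mkPoint 0.0 0.0 0.0, mkVector 0.0 0.0 1.0)   // Some (...)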

How to pass multiple data types in parameters to a function?

For example, I want to implement an inc function:
FUNCTION inc RETURNS INT (INPUT-OUTPUT i AS INT, AddExpression AS INT):
    i = i + AddExpression.
END FUNCTION.
to use it like this:
inc(tt-data.qty,1).
I didn't find how to overload my function for the DEC data type or how to combine both in one. If possible, I also want my function to deal with CHAR - kind of like ADD-ENTRY. Maybe these basic functions are already implemented by someone? Something like the STLib on OEHive.
Plain old user-defined functions can only have a single signature. Your function definition is a bit "off". You are using an input-output parameter (which isn't "wrong" but it is odd) and you aren't returning a value -- which is wrong. It should look like this:
function inc returns integer ( input-output i as integer, addExpression as integer ):
    i = i + addExpression.
    return i.
end.
Procedures have somewhat more relaxed data-type rules and will do some type conversions automatically (such as an implied decimal to integer conversion). This would, for example, support passing a decimal that gets automatically rounded:
procedure z:
    define input-output parameter i as integer no-undo.
    define input parameter x as integer.
    i = i + x.
    return.
end.
You can overload method signatures if you create your function as a method of a class.
Something along these lines (untested):
class X:
    method public integer inc( input-output i as integer, input addExpression as integer ):
        i = i + addExpression.
        return i.
    end.
    method public integer inc( input-output i as integer, input addExpression as character ):
        i = i + integer( addExpression ).
        return i.
    end.
end.

Performance difference between functions and pattern matching in Mathematica

So Mathematica is different from other dialects of lisp because it blurs the lines between functions and macros. In Mathematica if a user wanted to write a mathematical function they would likely use pattern matching like f[x_]:= x*x instead of f=Function[{x},x*x] though both would return the same result when called with f[x]. My understanding is that the first approach is something equivalent to a lisp macro and in my experience is favored because of the more concise syntax.
So I have two questions: is there a performance difference between executing functions versus the pattern matching/macro approach? Though part of me wouldn't be surprised if functions were actually transformed into some version of macros to allow features like Listable to be implemented.
The reason I care about this question is because of the recent set of questions (1) (2) about trying to catch Mathematica errors in large programs. If most of the computations were defined in terms of Functions, it seems to me that keeping track of the order of evaluation and where the error originated would be easier than trying to catch the error after the input has been rewritten by the successive application of macros/patterns.
The way I understand Mathematica is that it is one giant search replace engine. All functions, variables, and other assignments are essentially stored as rules and during evaluation Mathematica goes through this global rule base and applies them until the resulting expression stops changing.
It follows that the fewer times you have to go through the list of rules, the faster the evaluation. Looking at what happens using Trace (using gdelfino's functions g and h):
In[1]:= Trace@(#*#)&@x
Out[1]= {x x,x^2}
In[2]:= Trace@g@x
Out[2]= {g[x],x x,x^2}
In[3]:= Trace@h@x
Out[3]= {{h,Function[{x},x x]},Function[{x},x x][x],x x,x^2}
it becomes clear why anonymous functions are fastest and why using Function introduces additional overhead over a simple SetDelayed. I recommend looking at the introduction of Leonid Shifrin's excellent book, where these concepts are explained in some detail.
I have on occasion constructed a Dispatch table of all the functions I need and manually applied it to my starting expression. This provides a significant speed increase over normal evaluation as none of Mathematica's inbuilt functions need to be matched against my expression.
My understanding is that the first approach is something equivalent to a lisp macro and in my experience is favored because of the more concise syntax.
Not really. Mathematica is a term rewriter, as are Lisp macros.
So I have two questions, is there a performance difference between executing functions versus the pattern matching/macro approach?
Yes. Note that you are never really "executing functions" in Mathematica. You are just applying rewrite rules to change one expression into another.
Consider mapping the Sqrt function over a packed array of floating point numbers. The fastest solution in Mathematica is to apply the Sqrt function directly to the packed array because it happens to implement exactly what we want and is optimized for this special case:
In[1] := xs = N@Range[100000];
In[2] := Sqrt[xs]; // AbsoluteTiming
Out[2] = {0.0060000, Null}
We might define a global rewrite rule that has terms of the form sqrt[x] rewritten to Sqrt[x] such that the square root will be calculated:
In[3] := Clear[sqrt];
sqrt[x_] := Sqrt[x];
Map[sqrt, xs]; // AbsoluteTiming
Out[3] = {0.4800007, Null}
Note that this is ~100× slower than the previous solution.
Alternatively, we might define a global rewrite rule that replaces the symbol sqrt with a lambda function that invokes Sqrt:
In[4] := Clear[sqrt];
sqrt = Function[{x}, Sqrt[x]];
Map[sqrt, xs]; // AbsoluteTiming
Out[4] = {0.0500000, Null}
Note that this is ~10× faster than the previous solution.
Why? Because the slow second solution is looking up the rewrite rule sqrt[x_] :> Sqrt[x] in the inner loop (for each element of the array) whereas the fast third solution looks up the value Function[...] of the symbol sqrt once and then applies that lambda function repeatedly. In contrast, the fastest first solution is a loop calling sqrt written in C. So searching the global rewrite rules is extremely expensive and term rewriting is expensive.
If so, why is Sqrt ever fast? You might expect a 2× slowdown instead of 10× because we've replaced one lookup for Sqrt with two lookups for sqrt and Sqrt in the inner loop but this is not so because Sqrt has the special status of being a built-in function that will be matched in the core of the Mathematica term rewriter itself rather than via the general-purpose global rewrite table.
Other people have described much smaller performance differences between similar functions. I believe the performance differences in those cases are just minor differences in the exact implementation of Mathematica's internals. The biggest issue with Mathematica is the global rewrite table. In particular, this is where Mathematica diverges from traditional term-level interpreters.
You can learn a lot about Mathematica's performance by writing mini Mathematica implementations. In this case, the above solutions might be compiled to (for example) F#. The array may be created like this:
> let xs = [|1.0..100000.0|];;
...
The built-in sqrt function can be converted into a closure and given to the map function like this:
> Array.map sqrt xs;;
Real: 00:00:00.006, CPU: 00:00:00.015, GC gen0: 0, gen1: 0, gen2: 0
...
This takes 6ms just like Sqrt[xs] in Mathematica. But that is to be expected because this code has been JIT compiled down to machine code by .NET for fast evaluation.
Looking up rewrite rules in Mathematica's global rewrite table is similar to looking up the closure in a dictionary keyed on its function name. Such a dictionary can be constructed like this in F#:
> open System.Collections.Generic;;
> let fns = Dictionary<string, (obj -> obj)>(dict["sqrt", unbox >> sqrt >> box]);;
This is similar to the DownValues data structure in Mathematica, except that we aren't searching multiple resulting rules for the first to match on the function arguments.
The program then becomes:
> Array.map (fun x -> fns.["sqrt"] (box x)) xs;;
Real: 00:00:00.044, CPU: 00:00:00.031, GC gen0: 0, gen1: 0, gen2: 0
...
Note that we get a similar 10× performance degradation due to the hash table lookup in the inner loop.
An alternative would be to store the DownValues associated with a symbol in the symbol itself in order to avoid the hash table lookup.
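As a rough sketch of that alternative (hypothetical types, not taken from the original answer), each symbol can carry its own definition, so dispatch becomes a field read rather than a dictionary lookup:
// Hypothetical representation: the symbol owns its (single) rewrite rule
type Sym = { name : string; mutable downValue : (obj -> obj) option }

let sqrtSym = { name = "sqrt"; downValue = Some (unbox<float> >> sqrt >> box) }

let apply (s : Sym) (arg : obj) =
    match s.downValue with
    | Some f -> f arg                                   // one field read, no hash-table lookup
    | None -> failwithf "no definition for %s" s.name

let ys = Array.map (fun x -> apply sqrtSym (box x)) [| 1.0 .. 100000.0 |]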
We can even write a complete term rewriter in just a few lines of code. Terms may be expressed as values of the following type:
> type expr =
    | Float of float
    | Symbol of string
    | Packed of float []
    | Apply of expr * expr [];;
Note that Packed implements Mathematica's packed lists, i.e. unboxed arrays.
The following init function constructs a List with n elements using the function f, returning a Packed if every return value was a Float or a more general Apply(Symbol "List", ...) otherwise:
> let init n f =
    let rec packed ys i =
        if i=n then Packed ys else
            match f i with
            | Float y ->
                ys.[i] <- y
                packed ys (i+1)
            | y ->
                Apply(Symbol "List", Array.init n (fun j ->
                    if j<i then Float ys.[j]
                    elif j=i then y
                    else f j))
    packed (Array.zeroCreate n) 0;;
val init : int -> (int -> expr) -> expr
The following rule function uses pattern matching to identify expressions that it can understand and replaces them with other expressions:
> let rec rule = function
    | Apply(Symbol "Sqrt", [|Float x|]) ->
        Float(sqrt x)
    | Apply(Symbol "Map", [|f; Packed xs|]) ->
        init xs.Length (fun i -> rule(Apply(f, [|Float xs.[i]|])))
    | f -> f;;
val rule : expr -> expr
Note that the type of this function expr -> expr is characteristic of term rewriting: rewriting replaces expressions with other expressions rather than reducing them to values.
Our program can now be defined and executed by our custom term rewriter:
> rule (Apply(Symbol "Map", [|Symbol "Sqrt"; Packed xs|]));;
Real: 00:00:00.049, CPU: 00:00:00.046, GC gen0: 24, gen1: 0, gen2: 0
We've recovered the performance of Map[Sqrt, xs] in Mathematica!
We can even recover the performance of Sqrt[xs] by adding an appropriate rule:
| Apply(Symbol "Sqrt", [|Packed xs|]) ->
    Packed(Array.map sqrt xs)
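For completeness, a sketch of exercising the new case directly (timing output omitted, since it will vary by machine):
> rule (Apply(Symbol "Sqrt", [|Packed xs|]));;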
I wrote an article on term rewriting in F#.
Some measurements
Based on @gdelfino's answer and comments by @rcollyer I made this small program:
j = # # + # # &;
g[x_] := x x + x x ;
h = Function[{x}, x x + x x ];
anon = Table[Timing[Do[ # # + # # &[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
jj = Table[Timing[Do[ j[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
gg = Table[Timing[Do[ g[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
hh = Table[Timing[Do[ h[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
ListLinePlot[ {anon, jj, gg, hh},
PlotStyle -> {Black, Red, Green, Blue},
PlotRange -> All]
The results are, at least for me, very surprising:
Any explanations? Please feel free to edit this answer (comments are a mess for long text)
Edit
Tested with the identity function f[x] = x to isolate the parsing from the actual evaluation. Results (same colors):
Note: results are very similar to this Plot for constant functions (f[x]:=1);
Pattern matching seems faster:
In[1]:= g[x_] := x*x
In[2]:= h = Function[{x}, x*x];
In[3]:= Do[h[RandomInteger[100]], {1000000}] // Timing
Out[3]= {1.53927, Null}
In[4]:= Do[g[RandomInteger[100]], {1000000}] // Timing
Out[4]= {1.15919, Null}
Pattern matching is also more flexible as it allows you to overload a definition:
In[5]:= g[x_] := x * x
In[6]:= g[x_,y_] := x * y
For simple functions you can compile to get the best performance:
In[7]:= k[x_] = Compile[{x}, x*x]
In[8]:= Do[k[RandomInteger[100]], {100000}] // Timing
Out[8]= {0.083517, Null}
You can use the function recordSteps from a previous answer to see what Mathematica actually does with Function. It treats it just like any other head. For example, suppose you have the following:
f = Function[{x}, x + 2];
f[2]
It first transforms f[2] into
Function[{x}, x + 2][2]
At the next step, x+2 is transformed into 2+2. Essentially, "Function" evaluation behaves like an application of pattern matching rules, so it shouldn't be surprising that it's not faster.
You can think of everything in Mathematica as an expression, where evaluation is the process of rewriting parts of the expression in a predefined sequence. This applies to Function just as to any other head.