How can I compare natural numbers in Prolog?

I have a predicate that takes two arguments and checks whether they are natural numbers in unary (successor) form and whether the first argument is bigger than the second.
Here is the code I've written, but every time it answers "no".
nat(0).
nat(s(X)) :- nat(X).
sum(X,0,X) :- nat(X).
sum(X,s(Y),s(Z)) :- sum(X,Y,Z).
gr(X,Y) :- nat(s(X)), nat(s(Y)), X>Y.
What goes wrong? Everything is in Prolog; the predicate in question is gr/2.

First, for sum you probably want this instead:
sum(0, Y, Y) :-
    nat(Y).
sum(s(X), Y, s(Z)) :-
    sum(X, Y, Z).
This is so that Prolog can recognize that the two clauses are exclusive by only looking at the first argument.
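As a quick check (a hedged sketch, assuming the corrected clause order above), a query such as the following now succeeds, with s(0) standing for 1 and s(s(0)) for 2:
?- sum(s(0), s(s(0)), Z).
Z = s(s(s(0))).
That is, Z is the unary representation of 3.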
Now to your greater than:
% gr(X, Y) is true if X is greater than Y
gr(X, Y) :- sm(Y, X).
% sm(X, Y) is true if X is smaller than Y
sm(0, s(Y)) :-
    nat(Y).
sm(s(X), s(Y)) :-
    sm(X, Y).
To answer your actual question: what goes wrong is that the operator > works on integers (like 1 or 0 or -19), not on compound terms. The operator #> will work (see the documentation of the implementation you are using), but I have the feeling you might actually want to be explicit about it.
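As a hedged illustration of the explicit approach (output shown as SWI-Prolog prints it; older systems answer yes/no):
?- gr(s(s(0)), s(0)).
true.
?- gr(s(0), s(s(0))).
false.
The comparison is done purely by structural recursion on the s/1 terms, so no arithmetic evaluation of compound terms is needed.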

Related

Creating 2 occurrence predicates out of another predicate based on min/max number of other index in the original predicate

I have the following input in Clingo:
val(a,2,3).
val(a,4,5).
val(b,0,6).
val(b,1,2).
The requirement is to have two predicates representing the first and second occurrence of each letter, where the first occurrence is the one with the minimum value in the third argument of val (case 1: based only on the minimum of the 3rd argument); if those values are the same, the tie is broken by the minimum value in the second argument (case 2: both conditions).
Note that the maximum number of occurrences for each letter is 2.
The expected result is two predicates with these values (only for this case, as the input can sometimes differ):
first(a,2,3), first(b,1,2), second(a,4,5), second(b,0,6)
I tried the following code for case 1:
val(X,Y,W) :- val(X,Y,W).
first(X,Y,Z) :- val(X,Y,Z), not val(X,Y',Z'), Z'>Z,Y'!=Y.
second(X,Y,Z) :- val(X,Y,Z), not first(X,_,_).
but an error message appeared saying that Z' and Y' are unsafe.
Yes, Z' and Y' are unsafe because they do not appear in any positive atom in the rule body (val appears only under negation).
Moreover:
val(X,Y,W) :- val(X,Y,W). is a tautology and, therefore, it is useless;
it is not clear to me why you are imposing that Y'!=Y;
not first(X,_,_) is wrong, since it checks that there does not exist ANY first atom with X as its first argument, but such an atom will always exist (so the negation always fails).
If I've correctly understood your requirements, this should be the correct implementation:
first(X,Y,Z) :- val(X,Y,Z), val(X,Y',Z'), Z'>Z . % case 1
first(X,Y,Z) :- val(X,Y,Z), val(X,Y',Z'), Z'=Z, Y'>Y . % case 2
second(X,Y,Z) :- val(X,Y,Z), not first(X,Y,Z) .
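If it helps, here is a complete, self-contained sketch of that program (the primed variables are renamed to Y2/Z2, which is purely cosmetic, and the #show directives are my addition):
val(a,2,3). val(a,4,5). val(b,0,6). val(b,1,2).
first(X,Y,Z) :- val(X,Y,Z), val(X,Y2,Z2), Z2>Z. % case 1
first(X,Y,Z) :- val(X,Y,Z), val(X,Y2,Z2), Z2=Z, Y2>Y. % case 2
second(X,Y,Z) :- val(X,Y,Z), not first(X,Y,Z).
#show first/3.
#show second/3.
Running it with clingo should print first(a,2,3) first(b,1,2) second(a,4,5) second(b,0,6), matching the expected output above.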

Unable to pass multiple arguments to each function in kdb

How do I pass 2 variables to a lambda function, where x is a number and y is a symbol?
I have written this, but it won't run:
{[x;y]
// some calculation with x and y
}
each ((til 5) ,\:/: `a`b`c`d`f)
It seems to be complaining that I am missing another argument.
Here's an example that I think does what you're looking for:
q){string[x],string y}./: raze (til 5) ,\:/: `a`b`c`d`f
The issue with your example is that you need to raze the output of ((til 5) ,\:/: `a`b`c`d`f) to get a flat list of 2-element argument lists.
Passing a list of variables into a function is accomplished using "." (dot apply) http://code.kx.com/q/ref/unclassified/#apply
e.g.
q){x+y} . 10 2
12
In my example, I've then used an "each right" to then apply to each pair. http://code.kx.com/q/ref/adverbs/#each-right
Alternatively, you could use each instead if you wrapped the function in another lambda:
q){{string[x],string y} . x} each raze (til 5) ,\:/: `a`b`c`d`f
Instead of generating a list of arguments using cross or ",/:\:" and passing each of these into your function, modify your function with each-left each-right ("/:\:") to give you all combinations. This should take the format:
x f/:\: y
Where x and y are both lists. Reusing the example {string[x],string y};
til[5] {string[x], string y}/:\:`a`b`c`d
This will give you a matrix of all combinations of x and y. If you want to flatten that list, add a raze.
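For completeness, here is a small sketch of that last step (the intermediate name m is mine):
q)m:til[5] {string[x],string y}/:\:`a`b`c`d
q)raze m
raze turns the 5x4 matrix of strings into a single flat list of 20 strings.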

How does rowfun know to reference variables inside a table

From the documentation, we see the following example:
g = gallery('integerdata',3,[15,1],1);
x = gallery('uniformdata',[15,1],9);
y = gallery('uniformdata',[15,1],2);
A = table(g,x,y)
func = @(x, y) (x - y);
B = rowfun(func,A,...
'GroupingVariable','g',...
'OutputVariableName','MeanDiff')
When the function func is applied to A in rowfun, how does it know that there are variables in A called x and y?
EDIT: I feel that my last statement must not be true, as you do not get the same result if you did A = table(g, y, x).
I am still very confused by how rowfun can use a function that does not actually use any variables defined within the calling environment.
Unless you specify the input variables (and their order) with the name/value argument InputVariables, MATLAB will simply take column 1 as the first input, column 2 as the second input, etc., ignoring any grouping columns.
Consequently, for better readability and maintainability of your code, I consider it good practice to always specify InputVariables explicitly.
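A minimal sketch of what that looks like for the example above, reusing the parameter names from the quoted documentation call and adding only InputVariables:
func = @(x, y) (x - y);
B = rowfun(func, A, ...
    'InputVariables', {'x','y'}, ...
    'GroupingVariable', 'g', ...
    'OutputVariableName', 'MeanDiff')
With InputVariables given explicitly, reordering the table columns (e.g. A = table(g, y, x)) no longer changes which columns are fed to func.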

Quicksort in Q/KDB+

I found this quicksort implementation on a website:
q:{$[2>distinct x;x;raze q each x where each not scan x < rand x]};
I don't understand this part:
raze q each x where each not scan x < rand x
Can someone explain it to me step by step?
Let's do it step by step. I assume you have a basic understanding of the quicksort algorithm. Also, there is one correction to the code you mentioned, which I point out in the note at the end.
Example list:
q)x: 1 0 5 4 3
Take a random element from the list; it will act as the pivot.
q) rand x
Suppose it gives us 4 from the list.
Split list x into 2 lists: one contains the elements less than 4, the other the elements greater than (or equal to) 4.
2.a) First compare all elements with pivot (4 in our case)
q) (x<rand x) / 11001b : output is boolean list
2.b) Using the above boolean list we can get all elements of x that are less than 4. Here is the way:
q) x where 11001b / ( 1 0 3) : output
So we require another expression to get all elements greater than (or equal to) the pivot 4. There are many ways to do it,
but let's look at the one used in the code:
q)not scan (x<rand x) / (11001b;00110b) : output
So it gives a list containing 2 lists. The first is the result of (x < rand x), which is used to get the elements less than the pivot 4, and the other is its negation (done by not), which is used to get all elements greater than (or equal to) the pivot 4.
2.c) So now we can generate the 2 lists using the sample code from (2.b):
q) x where each (not scan (x<rand x)) / ((1 0 3);(5 4)): output list which has 2 lists
Now apply the same function to each list to sort each of them,
i.e. a recursive call on each list of ((1 0 3);(5 4)):
q) q each x where each (not scan (x<rand x))
After all calculations, apply raze to flatten the lists returned from the recursive calls into one single list.
The end condition for the recursion is: when the input list has only 1 distinct element, just return it.
q) 2>count distinct x
Note: there is one correction; count was missing in the original code. The corrected definition is shown below.
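Putting that correction in place, the full corrected definition is:
q:{$[2>count distinct x;x;raze q each x where each not scan x<rand x]};
A quick check on the example list:
q)q[1 0 5 4 3]
0 1 3 4 5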

Performance difference between functions and pattern matching in Mathematica

So Mathematica is different from other dialects of lisp because it blurs the lines between functions and macros. In Mathematica if a user wanted to write a mathematical function they would likely use pattern matching like f[x_]:= x*x instead of f=Function[{x},x*x] though both would return the same result when called with f[x]. My understanding is that the first approach is something equivalent to a lisp macro and in my experience is favored because of the more concise syntax.
So I have two questions: is there a performance difference between executing functions versus the pattern-matching/macro approach? Though part of me wouldn't be surprised if functions were actually transformed into some version of macros to allow features like Listable to be implemented.
The reason I care about this question is because of the recent set of questions (1) (2) about trying to catch Mathematica errors in large programs. If most of the computations were defined in terms of Functions, it seems to me that keeping track of the order of evaluation and where the error originated would be easier than trying to catch the error after the input has been rewritten by the successive application of macros/patterns.
The way I understand Mathematica is that it is one giant search replace engine. All functions, variables, and other assignments are essentially stored as rules and during evaluation Mathematica goes through this global rule base and applies them until the resulting expression stops changing.
It follows that the fewer times you have to go through the list of rules, the faster the evaluation. Looking at what happens using Trace (using gdelfino's functions g and h):
In[1]:= Trace@(#*#)&@x
Out[1]= {x x,x^2}
In[2]:= Trace@g@x
Out[2]= {g[x],x x,x^2}
In[3]:= Trace@h@x
Out[3]= {{h,Function[{x},x x]},Function[{x},x x][x],x x,x^2}
it becomes clear why anonymous functions are fastest and why using Function introduces additional overhead over a simple SetDelayed. I recommend looking at the introduction of Leonid Shifrin's excellent book, where these concepts are explained in some detail.
I have on occasion constructed a Dispatch table of all the functions I need and manually applied it to my starting expression. This provides a significant speed increase over normal evaluation as none of Mathematica's inbuilt functions need to be matched against my expression.
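A minimal sketch of that idea (the symbols sq and inc and the rules are made up for illustration):
In[1]:= rules = Dispatch[{sq[x_] :> x^2, inc[x_] :> x + 1}];
In[2]:= sq[inc[3]] //. rules
Out[2]= 16
Because the rules are applied explicitly with //. rather than installed as global definitions, the evaluator never has to match sq or inc against the global rule base.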
My understanding is that the first approach is something equivalent to a lisp macro and in my experience is favored because of the more concise syntax.
Not really. Mathematica is a term rewriter, as are Lisp macros.
So I have two questions, is there a performance difference between executing functions versus the pattern matching/macro approach?
Yes. Note that you are never really "executing functions" in Mathematica. You are just applying rewrite rules to change one expression into another.
Consider mapping the Sqrt function over a packed array of floating point numbers. The fastest solution in Mathematica is to apply the Sqrt function directly to the packed array because it happens to implement exactly what we want and is optimized for this special case:
In[1] := xs = N@Range[100000];
In[2] := Sqrt[xs]; // AbsoluteTiming
Out[2] = {0.0060000, Null}
We might define a global rewrite rule that has terms of the form sqrt[x] rewritten to Sqrt[x] such that the square root will be calculated:
In[3] := Clear[sqrt];
sqrt[x_] := Sqrt[x];
Map[sqrt, xs]; // AbsoluteTiming
Out[3] = {0.4800007, Null}
Note that this is ~100× slower than the previous solution.
Alternatively, we might define a global rewrite rule that replaces the symbol sqrt with a lambda function that invokes Sqrt:
In[4] := Clear[sqrt];
sqrt = Function[{x}, Sqrt[x]];
Map[sqrt, xs]; // AbsoluteTiming
Out[4] = {0.0500000, Null}
Note that this is ~10× faster than the previous solution.
Why? Because the slow second solution is looking up the rewrite rule sqrt[x_] :> Sqrt[x] in the inner loop (for each element of the array) whereas the fast third solution looks up the value Function[...] of the symbol sqrt once and then applies that lambda function repeatedly. In contrast, the fastest first solution is a loop calling sqrt written in C. So searching the global rewrite rules is extremely expensive and term rewriting is expensive.
So why is Sqrt ever fast? You might expect a 2× slowdown instead of 10×, because we've replaced one lookup for Sqrt with two lookups (for sqrt and Sqrt) in the inner loop, but this is not so, because Sqrt has the special status of being a built-in function that is matched in the core of the Mathematica term rewriter itself rather than via the general-purpose global rewrite table.
Other people have described much smaller performance differences between similar functions. I believe the performance differences in those cases are just minor differences in the exact implementation of Mathematica's internals. The biggest issue with Mathematica is the global rewrite table. In particular, this is where Mathematica diverges from traditional term-level interpreters.
You can learn a lot about Mathematica's performance by writing mini Mathematica implementations. In this case, the above solutions might be compiled to (for example) F#. The array may be created like this:
> let xs = [|1.0..100000.0|];;
...
The built-in sqrt function can be converted into a closure and given to the map function like this:
> Array.map sqrt xs;;
Real: 00:00:00.006, CPU: 00:00:00.015, GC gen0: 0, gen1: 0, gen2: 0
...
This takes 6ms just like Sqrt[xs] in Mathematica. But that is to be expected because this code has been JIT compiled down to machine code by .NET for fast evaluation.
Looking up rewrite rules in Mathematica's global rewrite table is similar to looking up the closure in a dictionary keyed on its function name. Such a dictionary can be constructed like this in F#:
> open System.Collections.Generic;;
> let fns = Dictionary<string, (obj -> obj)>(dict["sqrt", unbox >> sqrt >> box]);;
This is similar to the DownValues data structure in Mathematica, except that we aren't searching multiple resulting rules for the first to match on the function arguments.
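The Mathematica side of this can be inspected directly; with the rule-based definition of sqrt from above in effect, DownValues returns the stored rewrite rule itself (the input numbering here is mine):
In[5] := Clear[sqrt]; sqrt[x_] := Sqrt[x];
In[6] := DownValues[sqrt]
Out[6] = {HoldPattern[sqrt[x_]] :> Sqrt[x]}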
The program then becomes:
> Array.map (fun x -> fns.["sqrt"] (box x)) xs;;
Real: 00:00:00.044, CPU: 00:00:00.031, GC gen0: 0, gen1: 0, gen2: 0
...
Note that we get a similar 10× performance degradation due to the hash table lookup in the inner loop.
An alternative would be to store the DownValues associated with a symbol in the symbol itself in order to avoid the hash table lookup.
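A hedged F# sketch of that alternative (the symbol record and apply function below are illustrative and not part of the earlier code):
> type symbol = { name: string; mutable rule: (obj -> obj) option };;
> let sqrtSym = { name = "sqrt"; rule = Some(unbox >> sqrt >> box) };;
> let apply (s: symbol) (x: obj) =
    match s.rule with
    | Some f -> f x
    | None -> x;;
> Array.map (fun x -> apply sqrtSym (box x)) xs;;
Because the rule now travels with the symbol, the inner loop no longer pays for a dictionary lookup keyed on the symbol's name.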
We can even write a complete term rewriter in just a few lines of code. Terms may be expressed as values of the following type:
> type expr =
    | Float of float
    | Symbol of string
    | Packed of float []
    | Apply of expr * expr [];;
Note that Packed implements Mathematica's packed lists, i.e. unboxed arrays.
The following init function constructs a List with n elements using the function f, returning a Packed if every return value was a Float or a more general Apply(Symbol "List", ...) otherwise:
> let init n f =
    let rec packed ys i =
      if i=n then Packed ys else
        match f i with
        | Float y ->
          ys.[i] <- y
          packed ys (i+1)
        | y ->
          Apply(Symbol "List", Array.init n (fun j ->
            if j<i then Float ys.[j]
            elif j=i then y
            else f j))
    packed (Array.zeroCreate n) 0;;
val init : int -> (int -> expr) -> expr
The following rule function uses pattern matching to identify expressions that it can understand and replaces them with other expressions:
> let rec rule = function
    | Apply(Symbol "Sqrt", [|Float x|]) ->
      Float(sqrt x)
    | Apply(Symbol "Map", [|f; Packed xs|]) ->
      init xs.Length (fun i -> rule(Apply(f, [|Float xs.[i]|])))
    | f -> f;;
val rule : expr -> expr
Note that the type of this function expr -> expr is characteristic of term rewriting: rewriting replaces expressions with other expressions rather than reducing them to values.
Our program can now be defined and executed by our custom term rewriter:
> rule (Apply(Symbol "Map", [|Symbol "Sqrt"; Packed xs|]));;
Real: 00:00:00.049, CPU: 00:00:00.046, GC gen0: 24, gen1: 0, gen2: 0
We've recovered the performance of Map[Sqrt, xs] in Mathematica!
We can even recover the performance of Sqrt[xs] by adding an appropriate rule:
| Apply(Symbol "Sqrt", [|Packed xs|]) ->
Packed(Array.map sqrt xs)
I wrote an article on term rewriting in F#.
Some measurements
Based on @gdelfino's answer and comments by @rcollyer, I made this small program:
j = # # + # # &;
g[x_] := x x + x x ;
h = Function[{x}, x x + x x ];
anon = Table[Timing[Do[ # # + # # &[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
jj = Table[Timing[Do[ j[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
gg = Table[Timing[Do[ g[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
hh = Table[Timing[Do[ h[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
ListLinePlot[ {anon, jj, gg, hh},
PlotStyle -> {Black, Red, Green, Blue},
PlotRange -> All]
The results are, at least for me, very surprising:
Any explanations? Please feel free to edit this answer (comments are a mess for long text)
Edit
Tested with the identity function f[x] = x to isolate the parsing from the actual evaluation. Results (same colors):
Note: results are very similar to this Plot for constant functions (f[x]:=1);
Pattern matching seems faster:
In[1]:= g[x_] := x*x
In[2]:= h = Function[{x}, x*x];
In[3]:= Do[h[RandomInteger[100]], {1000000}] // Timing
Out[3]= {1.53927, Null}
In[4]:= Do[g[RandomInteger[100]], {1000000}] // Timing
Out[4]= {1.15919, Null}
Pattern matching is also more flexible as it allows you to overload a definition:
In[5]:= g[x_] := x * x
In[6]:= g[x_,y_] := x * y
For simple functions you can compile to get the best performance:
In[7]:= k[x_] = Compile[{x}, x*x]
In[8]:= Do[k[RandomInteger[100]], {100000}] // Timing
Out[8]= {0.083517, Null}
You can use the function recordSteps from a previous answer to see what Mathematica actually does with Function. It treats it just like any other head. I.e., suppose you have the following:
f = Function[{x}, x + 2];
f[2]
It first transforms f[2] into
Function[{x}, x + 2][2]
At the next step, x+2 is transformed into 2+2. Essentially, "Function" evaluation behaves like an application of pattern matching rules, so it shouldn't be surprising that it's not faster.
You can think of everything in Mathematica as an expression, where evaluation is the process of rewriting parts of the expression in a predefined sequence; this applies to Function just like to any other head.
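A quick way to see this (a small sketch; the exact form of the Trace output can vary between versions):
In[1]:= f = Function[{x}, x + 2];
In[2]:= Trace[f[2]]
Out[2]= {{f, Function[{x}, x + 2]}, Function[{x}, x + 2][2], 2 + 2, 4}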