Problem when trying to define an operator in Prolog

I have written a Prolog file with the following code:
divisible(X, Y) :-
X mod Y =:= 0.
divisibleBy(X, Y) :-
divisible(X, Y).
op(35,xfx,divisibleBy).
Prolog is complaining that
'$record_clause'/2: No permission to modify static_procedure `op/3'
What am I doing wrong? I want to define a divisibleBy operator that will allow me to write code like the following:
4 divisibleBy 2
Thanks.

Use
:- op(35,xfx,divisibleBy).
:- tells the Prolog interpreter to evaluate the next term while loading the file, i.e. make a predicate call, instead of treating it as a definition (in this case a redefinition of op/3).
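For instance, a minimal sketch of the corrected file (the directive goes before any code or query that uses the operator syntax) could look like this:
:- op(35, xfx, divisibleBy).

divisible(X, Y) :-
    X mod Y =:= 0.

divisibleBy(X, Y) :-
    divisible(X, Y).
After consulting the file, a query such as ?- 4 divisibleBy 2. is read as divisibleBy(4, 2) and succeeds.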

The answer given by @larsmans is spot-on regarding your original problem.
However, you should reconsider if you should define a new operator.
In general, I would strongly advise against defining new operators for the following reasons:
The gain in readability is often overrated.
It may easily introduce new problems in places you wouldn't normally expect to be buggy.
It doesn't "scale" well: a small number of operators can make code on presentation slides super-concise, but what if you add more discriminated union cases over time? More operators?

Related

Julia: Macros for vector aliasing

I want to be able to allow users of my package to define functions in a more mathematical manner, and I think a macro is the right direction. The problem is as follows. The code allows the users to define functions which are then used in specialized solvers to solve PDEs. However, to make things easier for the solver, some of the inputs are "matrices" in ways that you wouldn't normally think they would be. For example, the solvers can take in functions f(x,t), but x[:,1] is what you'd think of as x and x[:,2] is what you'd think of as y (and sometimes it is 3D).
The bigger issue is that when the PDE is nonlinear, I place everything in a u vector, while in many cases (like reaction-diffusion equations) these things are named. So in this general case, I'd like to be able to write
@mathdefine f(RA,RABP,RAR,x,y,t) = RA*RABP + RA*x + RAR*t
and have it translate to
f(u,x,t) = u[:,1].*u[:,2] + u[:,1].*x[:,1] + u[:,3]*t
I am not up to snuff on my macro-foo, so I was hoping someone could get me started (or if macros are not the right way to approach this, explain why).
It's not too hard if the user has to spell out what is being translated to what, but I'd like it to be as clean to use as possible: the macro would somehow have to know which arguments are the spatial variables, so that everything before them becomes part of u and everything after becomes part of x.
The trick to macro "find/replace" is just to pass off processing to a recursive function that updates the expression args. Your signature will come in as a bunch of symbols, so you can loop through the call signature and add to two dicts, mapping variable name to column index. Then recursively replace the arg tree when you see any of the variables. This is untested:
function replace_vars!(expr::Expr, xd::Dict{Symbol,Int}, ud::Dict{Symbol,Int})
    for (i, arg) in enumerate(expr.args)
        if haskey(xd, arg)
            expr.args[i] = :(x[:, $(xd[arg])])   # spatial variable -> column of x
        elseif haskey(ud, arg)
            expr.args[i] = :(u[:, $(ud[arg])])   # named state -> column of u
        elseif isa(arg, Expr)
            replace_vars!(arg, xd, ud)           # recurse into sub-expressions
        end
    end
end
macro mathdefine(expr)
    # todo: loop through the function signature (expr.args[1]?) to build xd and ud
    replace_vars!(expr, xd, ud)
    expr
end
I left a little homework for you, but this should get you started.
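For completeness, here is one untested way that homework could be filled in. It assumes the convention from your question: the signature lists the u-variables first, then the spatial variables (x, y, and optionally z), then t. The helper name build_dicts is just illustrative, and this only substitutes variables; turning * into the broadcasting .* would need a similar pass over the operator symbols.
# Split the declared parameter names into the two lookup tables.
# Anything that is not a spatial variable or `t` becomes a column of `u`.
function build_dicts(params)
    spatial = (:x, :y, :z)
    xd = Dict{Symbol,Int}()
    ud = Dict{Symbol,Int}()
    for p in params
        p === :t && continue
        if p in spatial
            xd[p] = length(xd) + 1      # x -> x[:,1], y -> x[:,2], ...
        else
            ud[p] = length(ud) + 1      # RA -> u[:,1], RABP -> u[:,2], ...
        end
    end
    xd, ud
end

macro mathdefine(expr)
    sig   = expr.args[1]                 # e.g. f(RA, RABP, RAR, x, y, t)
    body  = expr.args[2]
    fname = sig.args[1]
    xd, ud = build_dicts(sig.args[2:end])
    replace_vars!(body, xd, ud)          # rewrite the right-hand side in place
    esc(:($fname(u, x, t) = $body))      # expose it with the (u, x, t) signature
end
With that in place, @mathdefine f(RA,RABP,RAR,x,y,t) = RA*RABP + RA*x + RAR*t should expand to a definition equivalent to f(u,x,t) = u[:,1]*u[:,2] + u[:,1]*x[:,1] + u[:,3]*t.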

What is the correct way to select real solutions?

Suppose one needs to select the real solutions after solving some equation.
Is this the correct and optimal way to do it, or is there a better one?
restart;
mu := 3.986*10^5; T:= 8*60*60:
eq := T = 2*Pi*sqrt(a^3/mu):
sol := solve(eq,a);
select(x->type(x,'realcons'),[sol]);
I could not find real as a type, so I used realcons. At first I did this:
select(x->not(type(x,'complex')),[sol]);
which did not work, since in Maple 5 is considered complex! So I ended up with no solutions.
type(5,'complex');
(* true *)
Also I could not find an isreal() type of function. (unless I missed one)
Is there a better way to do this that one should use?
update:
To answer the comment below about 5 supposedly not being complex in Maple:
restart;
type(5,complex);
true
type(5,'complex');
true
interface(version);
Standard Worksheet Interface, Maple 18.00, Windows 7, February
From the help:
The type(x, complex) function returns true if x is an expression of the form
a + I b, where a (if present) and b (if present) are finite and of type realcons.
Your solutions sol are all of type complex(numeric). You can select only the real ones with type,numeric, i.e.
restart;
mu := 3.986*10^5: T:= 8*60*60:
eq := T = 2*Pi*sqrt(a^3/mu):
sol := solve(eq,a);
20307.39319, -10153.69659 + 17586.71839 I, -10153.69659 - 17586.71839 I
select( type, [sol], numeric );
[20307.39319]
By using the multiple-argument calling form of the select command we can avoid using a custom operator as the first argument. You won't notice it for your small example, but it should be more efficient to do so. Other commands such as map perform similarly, to avoid having to make an additional function call for each individual test.
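For comparison, the same selection written with a custom operator as the first argument (the form the three-argument call avoids) would be:
select( x -> type(x, numeric), [sol] );
[20307.39319]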
The types numeric and complex(numeric) cover real and complex integers, rationals, and floats.
The types realcons and complex(realcons) include the previous, but also allow for an application of evalf during the test. So Int(sin(x),x=1..3) and Pi and sqrt(2) are all of type realcons since following an application of evalf they become floats of type numeric.
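A small illustration of that distinction (not from the original post):
type(Pi, numeric);
false
type(Pi, realcons);
true
type(evalf(Pi), numeric);
true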
The above is about types. There are also properties to consider. Types are properties, but not necessarily vice versa. There is a real property, but no real type. The is command can test for a property, and while it is often used for mixed numeric-symbolic tests under assumptions (on the symbols) it can also be used in tests like yours.
select( is, [sol], real );
[20307.39319]
It is less efficient to use is for your example. If you know that you have a collection of (possibly non-real) floats then type,numeric should be an efficient test.
And, just to muddy the waters... there is a type nonreal.
remove( type, [sol], nonreal );
[20307.39319]
One possibility is to restrict the domain before the calculation takes place.
Here is an explanation on the Maplesoft website regarding restricting the domain:
4 Basic Computation
UPD: Basically, according to this and that, 5 is NOT considered complex in Maple, so there might be some bug/error/mistake (try checking what may be wrong there).
For instance, try putting complex without quotes.
Your way seems very logical according to this.
UPD2: According to the Maplesoft website, all type checks are done with the type() function, so there is no separate isreal() function.

PLT Redex: parameterizing a language definition

This is a problem that's been nagging at me for some time, and I wonder if anyone here can help.
I have a PLT Redex model of a language called lambdaLVar that is more or less a garden-variety untyped lambda calculus, but extended with a store containing "lattice variables", or LVars. An LVar is a variable whose value can only increase over time, where the meaning of "increase" is given by a partially ordered set (aka a lattice) that the user of the language specifies. Therefore lambdaLVar is really a family of languages -- instantiate it with one lattice and you get one language; with a different lattice, and you get another. You can take a look at the code here; the important stuff is in lambdaLVar.rkt.
In the on-paper definition of lambdaLVar, the language definition is parameterized by that user-specified lattice. For a long time, I've wanted to do the same kind of parameterization in the Redex model, but so far, I haven't been able to figure out how. Part of the trouble is that the grammar of the language depends on how the user instantiates the lattice: elements of the lattice become terminals in the grammar. I don't know how to express a grammar in Redex that is abstract over the lattice.
In the meantime, I tried to make lambdaLVar.rkt as modular as I could. The language defined in that file is specialized to a particular lattice: natural numbers with max as the least-upper-bound (lub) operation. (Or, equivalently, natural numbers ordered by <=. It's a very boring lattice.) The only parts of the code that are specific to that lattice are the line (define lub-op max) near the top, and natural appearing in the grammar. (There's a lub metafunction that is defined in terms of the user-specified lub-op function. The latter is just a Racket function, so lub has to escape out to Racket to call lub-op.)
Barring the ability to actually specify lambdaLVar in a way that is abstract over the choice of lattice, it seems like I ought to be able to write a version of lambdaLVar with the most bare-bones of lattices -- just Bot and Top elements, where Bot <= Top -- and then use define-extended-language to add more stuff. For instance, I could define a language called lambdaLVar-nats that is specialized to the naturals lattice I described:
;; Grammar for elements of a lattice of natural numbers.
(define-extended-language lambdaLVar-nats
lambdaLVar
(StoreVal .... ;; Extend the original language
natural))
;; All we have to specify is the lub operation; leq is implicitly <=
(define-metafunction/extension lub lambdaLVar-nats
lub-nats : d d -> d
[(lub-nats d_1 d_2) ,(max (term d_1) (term d_2))])
Then, to replace the two reduction relations slow-rr and fast-rr that I had for lambdaLVar, I could define a couple of wrappers:
(define nats-slow-rr
(extend-reduction-relation slow-rr
lambdaLVar-nats))
(define nats-fast-rr
(extend-reduction-relation fast-rr
lambdaLVar-nats))
My understanding from the documentation on extend-reduction-relation is that it should reinterpret the rules in slow-rr and fast-rr, but using lambdaLVar-nats. Putting all this together, I tried running the test suite that I had with one of the new, extended reduction relations:
> (program-test-suite nats-slow-rr)
The first thing I get is a contract violation complaint: small-step-base: input (((l 3)) new) at position 1 does not match its contract. The contract line of small-step-base is just #:contract (small-step-base Config Config), where Config is a grammar nonterminal that has a different meaning when reinterpreted under lambdaLVar-nats than it did under lambdaLVar, because of the specific lattice stuff. As an experiment, I got rid of the contracts on small-step-base and small-step-slow.
I was then able to actually run my 19 test programs, but 10 of them fail. Perhaps unsurprisingly, all the ones that fail are programs that use natural-number-valued LVars in some way. (The rest are "pure" programs that don't interact with the store of LVars at all.) So, the tests that fail are exactly the ones that use the extended grammar.
So I kept following the rabbit hole, and it seems like Redex wants me to extend all of the existing judgment forms and metafunctions to be associated with lambdaLVar-nats rather than lambdaLVar. That makes sense, and it seems to work OK for judgment forms as far as I can tell, but with metafunctions I get into trouble: I want the new metafunction to overload the old one of the same name (because existing judgment forms are using it) and there doesn't seem to be a way to do that. If I have to rename the metafunctions, it defeats the purpose, because I'll have to write whole new judgment forms anyway. I suppose that what I want is a sort of late binding of metafunction calls!
My question in a nutshell: Is there any way in Redex to parameterize the definition of a language in the way I want, or to extend the definition of a language in a way that will do what I want? Will I end up just having to write Redex-generating macros?
Thanks for reading!
I asked the Racket users mailing list; the thread begins here. To summarize the resulting discussion: In Redex as it stands today, the answer is no, there is no way to parameterize a language definition in the way I want. However, it should be possible in a future version of Redex with a module system, which is in the works right now.
It also doesn't work to try to use Redex's existing extension forms (define-extended-language, extend-reduction-relation, and so on) in the way I tried to do here, because -- as I discovered -- the original metafunctions do not get transitively reinterpreted to use the extended languages. But a module system would apparently help with this, too, because it would allow you to package up metafunctions, judgment-forms, and reduction relations together and simultaneously extend them (see the discussion here).
So, for now, the answer is, indeed, to write a Redex-generating macro. Something like this works:
(define-syntax-rule (define-lambdaLVar-language name lub-op lattice-values ...)
(begin
;; Entire original Redex model goes here, with `natural` replaced with
;; `lattice-values ...`, and instances of `...` replaced with `(... ...)`
))
And then you can instantiate particular lattices with, e.g.,:
(define-lambdaLVar-language lambdaLVar-nat max natural)
I hope Redex does get modules soon, but in the meantime, this seems to work well.

How to check if the value is a number in Prolog manually?

How to check if the given value is a number in Prolog without using built-in predicates like number?
Let's say I have a list [a, 1, 2, 3]. I need a way to check if every element within this list is a number. The only part of the problem that bothers me is how to do the check itself without using the number predicate.
The reason why I'm trying to figure this out is that I've got a college assignment where it's specifically said not to use any of the built-in predicates.
You need some built-in predicate to solve this problem - unless you enumerate all numbers explicitly (which is not practical since there are infinitely many of them).
1. The most straightforward would be:
maplist(number, L).
Or, recursively
allnumbers([]).
allnumbers([N|Ns]) :-
number(N),
allnumbers(Ns).
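For example, either version behaves like this on the list from the question:
?- allnumbers([a, 1, 2, 3]).
false.

?- allnumbers([1, 2, 3]).
true.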
2. In a comment you say that "the value is given as an atom". That could mean that you get either [a, '1', '2'] or '[a, 1, 2]'. I assume the first. Here again, you need a built-in predicate to analyze the name. Relying on ISO-Prolog's errors we write:
numberatom(Atom) :-
atom_chars(Atom, Chs),
catch(number_chars(_, Chs), error(syntax_error(_),_), false).
Use numberatom/1 in place of number/1, so write a recursive rule or use maplist/2, as sketched below.
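A minimal sketch of the recursive variant (untested):
allnumberatoms([]).
allnumberatoms([A|As]) :-
    numberatom(A),
    allnumberatoms(As).
Or, with maplist/2: maplist(numberatom, List).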
3. You might want to write a grammar instead of the catch... goal. There have been many such definitions recently; you may look at this question.
4. If the entire "value" is given as an atom, you will again need atom_chars/2, or you might want some implementation-specific solution like atom_to_term/3, and then apply one of the solutions above.
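For instance, with SWI-Prolog's atom_to_term/3 (illustrative only; any of the checks above can follow the parse):
?- atom_to_term('[a, 1, 2]', Term, _), maplist(number, Term).
false.

?- atom_to_term('[3, 1, 2]', Term, _), maplist(number, Term).
Term = [3, 1, 2].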

What is the difference between = and := in Scala?

What is the difference between = and := in Scala?
I have googled extensively for "scala colon-equals", but was unable to find anything definitive.
= in Scala is the actual assignment operator -- it does a handful of specific things that for the most part you don't have control over, such as:
Giving a val or var a value when it's created
Changing the value of a var
Changing the value of a field on a class
Making a type alias
Probably others
:= is not a built-in operator -- anyone can overload it and define it to mean whatever they like. The reason people like to use := is because it looks very assignmenty and is used as an assignment operator in other languages.
So, if you're trying to find out what := means in the particular library you're using... my advice is to look through the Scaladocs (if they exist) for a method named :=.
from Martin Odersky:
Initially we had colon-equals for assignment -- just as in Pascal, Modula, and Ada -- and a single equals sign for equality. A lot of programming theorists would argue that that's the right way to do it. Assignment is not equality, and you should therefore use a different symbol for assignment. But then I tried it out with some people coming from Java. The reaction I got was, "Well, this looks like an interesting language. But why do you write colon-equals? What is it?" And I explained that it's like that in Pascal. They said, "Now I understand, but I don't understand why you insist on doing that." Then I realized this is not something we wanted to insist on. We didn't want to say, "We have a better language because we write colon-equals instead of equals for assignment." It's a totally minor point, and people can get used to either approach. So we decided to not fight convention in these minor things, when there were other places where we did want to make a difference.
from The Goals of Scala's Design
= performs assignment. := is not defined in the standard library or the language specification. It's a name that is free for other libraries or your code to use, if you wish.
Scala allows for operator overloading, where you can define the behaviour of an operator just as you would write any other method.
As in other languages, = is an assignment operator.
There is no standard operator I'm aware of called :=, but you could define one with this name. If you see an operator like this, you should check the documentation of whatever you're looking at, or search for where that operator is defined.
There is a lot you can do with Scala operators. You can essentially make an operator out of virtually any characters you like.
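As an illustration (this is not from the standard library; Cell is a made-up class), := can be defined as an ordinary method and then used infix:
// A hypothetical mutable cell that uses := as its assignment-like operator.
class Cell[A](private var value: A) {
  def :=(newValue: A): Unit = { value = newValue } // just a method named :=
  def get: A = value
}

val c = new Cell(1)
c := 42        // infix syntax for c.:=(42)
println(c.get) // prints 42
Libraries that provide a := typically do exactly this: it is a plain method whose symbolic name happens to look like assignment.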