How to expand the `Classname/staticField` macro syntax - macros

From the Clojure docs, this is how to access a static field of a Java class:
Classname/staticField
Math/PI
-> 3.141592653589793
And this is the expansion given in the docs:
Classname/staticField ==> (. Classname staticField)
I can't get this to expand using macroexpand*:
> (macroexpand 'Math/E)
Math/E
What do I use to expand Classname/staticField?
This is Clojure v1.6.0.
*Although this does work:
> (macroexpand '(Math/E))
(. Math E)

The documentation is a little bit inaccurate in that respect. Macro expansion only applies to list forms, not to bare symbols, so only the first three special forms listed (instance methods on objects and classes, static methods on classes) are handled at macro expansion time. The Classname/staticField syntax is resolved to a static field access after macro expansion, when symbols are resolved to vars, classes, or let-bound names as described in http://clojure.org/evaluation.
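For instance, a quick sketch at a Clojure 1.6 REPL (the printed forms may vary slightly between versions):
> (macroexpand '(Math/sqrt 2))
(. Math sqrt 2)
> (macroexpand '(.toUpperCase "x"))
(. "x" toUpperCase)
> (macroexpand 'Math/PI)   ;; a bare symbol is not a list form, so nothing expands
Math/PI
> Math/PI                  ;; resolved to the static field only at evaluation time
3.141592653589793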


Parameterizable return-from in Common Lisp

I'm learning blocks in Common lisp and did this example to see how blocks and the return-from command work:
(block b1
  (print 1)
  (print 2)
  (print 3)
  (block b2
    (print 4)
    (print 5)
    (return-from b1)
    (print 6))
  (print 7))
It will print 1, 2, 3, 4, and 5, as expected. Changing the return-from to (return-from b2), it prints 1, 2, 3, 4, 5, and 7, again as one would expect.
Then I tried to turn this into a function and parameterize the label in the return-from:
(defun test-block (arg)
  (block b1
    (print 1)
    (print 2)
    (print 3)
    (block b2
      (print 4)
      (print 5)
      (return-from (eval arg))
      (print 6))
    (print 7)))
and called (test-block 'b1) to see if it works, but it doesn't. Is there a way to do this without conditionals?
Using a conditional like CASE to select a block to return from
The recommended way to do it is using case or similar. Common Lisp does not support computed returns from blocks. It also does not support computed gos.
Using a case conditional expression:
(defun test-block (arg)
  (block b1
    (print 1)
    (print 2)
    (print 3)
    (block b2
      (print 4)
      (print 5)
      (case arg
        (b1 (return-from b1))
        (b2 (return-from b2)))
      (print 6))
    (print 7)))
One can't compute lexical go tags, return blocks or local functions from names
CLTL2 says about the restriction for the go construct:
Compatibility note: The ``computed go'' feature of MacLisp is not supported. The syntax of a computed go is idiosyncratic, and the feature is not supported by Lisp Machine Lisp, NIL (New Implementation of Lisp), or Interlisp. The computed go has been infrequently used in MacLisp anyway and is easily simulated with no loss of efficiency by using a case statement each of whose clauses performs a (non-computed) go.
Since features like go and return-from are lexically scoped constructs, computing the targets is not supported. Common Lisp has no way to access lexical environments at runtime and query those. This is for example also not supported for local functions. One can't take a name and ask for a function object with that name in some lexical environment.
Dynamic alternative: CATCH and THROW
The typically less efficient and dynamically scoped alternative is catch and throw. There the tags are computed.
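A minimal sketch of that alternative, reusing the shape of the function from the question (here b1 and b2 are ordinary symbols passed as data, so the target really is computed at run time):
(defun test-catch (arg)
  (catch 'b1
    (print 1)
    (print 2)
    (print 3)
    (catch 'b2
      (print 4)
      (print 5)
      (throw arg nil)   ; the tag is the value of ARG, computed at run time
      (print 6))
    (print 7)))
(test-catch 'b1) prints 1 2 3 4 5, while (test-catch 'b2) prints 1 2 3 4 5 7, mirroring the block-based version.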
I think these sorts of things boil down to the different types of namespaces, bindings and environments in Common Lisp.
One first point is that a slightly more experienced novice learning Lisp might try to modify your attempted function to say (eval (list 'return-from arg)) instead. This seems to make more sense but still does not work.
Namespaces
A common beginner mistake in a language like Scheme is having a variable called list, as this shadows the top level definition of list as a function and stops the programmer from being able to make lists inside the scope of this binding. The corresponding mistake in Common Lisp is trying to use a symbol as a function when it is only bound as a variable.
In Common Lisp there are namespaces which are mappings from names to things. Some namespaces are:
The functions. To get the corresponding thing either call it: (foo a b c ...), or get the function for a static symbol with (function foo) (aka #'foo) or for a dynamic symbol with (fdefinition 'foo). Function names are either symbols or lists of setf and one symbol (e.g. (setf bar)). Symbols may alternatively be bound to macros in this namespace, in which case function and fdefinition signal errors.
The variables. This maps symbols to the values in the corresponding variable. This also maps symbols to constants. Get the value of a variable statically by writing it down, foo, or dynamically with (symbol-value 'foo). A symbol may also be bound as a symbol-macro, in which case special macro expansion rules apply.
Go tags. This maps symbols to labels to which one can go (like goto in other languages).
Blocks. This maps symbols to places you can return from.
Catch tags. This maps objects to the places which catch them. When you throw to an object, the implementation effectively looks up the corresponding catch in this namespace and unwinds the stack to it.
classes (and structs, conditions). Every class has a name which is a symbol (so different packages may have a point class)
packages. Each package is named by a string and possibly some nicknames. This string is normally the name of a symbol and therefore usually in uppercase
types. Every type has a name which is a symbol. Naturally a class definition also defines a type.
declarations. Introduced with declare, declaim, proclaim
there might be more. These are all the ones I can think of.
The catch-tag and declarations namespaces aren’t like the others as they don’t really map symbols to things but they do have bindings and environments in the ways described below (note that I have used declarations to refer to the things that have been declared, like the optimisation policy or which variables are special, rather than the namespace in which e.g. optimize, special, and indeed declaration live which seems too small to include).
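As a tiny illustration that one name can have independent bindings in several of these namespaces at once (the function is made up):
(defun foo (foo)        ; FOO as a function name and as a variable name
  (block foo            ; FOO as a block name
    (when (> foo 10)
      (return-from foo :big))
    (* foo 2)))
(foo 3)  => 6
(foo 20) => :BIG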
Now let’s talk about the different ways that this mapping may happen.
The binding of a name to a thing in a namespace is the way in which they are associated, in particular, how it may come to be and how it may be inspected.
The environment of a binding is the place where the binding lives. It says how long the binding lives for and where it may be accessed from. Environments are searched for to find the thing associated with some name in some namespace.
static and dynamic bindings
We say a binding is static if the name that is bound is fixed in the source code and a binding is dynamic if the name can be determined at run time. For example let, block and tags in a tagbody all introduce static bindings whereas catch and progv introduce dynamic bindings.
Note that my definition for dynamic binding is different from the one in the spec. The spec definition corresponds to my dynamic environment below.
Top level environment
This is the environment where toplevel definitions go: defvar, defun and defclass, for example, operate at this level. It is searched last, after all other applicable environments, e.g. if a function or variable binding cannot be found at a closer level then this level is consulted. References can sometimes be made to bindings at this level before they are defined, although they may signal warnings; that is, you may define a function bar which calls foo before you have defined foo. In other cases such references are not allowed, for example you can’t intern or read a symbol foo::bar before the package FOO has been defined. Many namespaces only allow bindings in the top level environment. These are:
constants (within the variables namespace)
classes
packages
types
Although (excepting proclaim) all bindings are static, they can effectively be made dynamic by calling eval which evaluates forms at the top level.
Functions (and [compiler] macros) and special variables (and symbol macros) may also be defined top level. Declarations can be defined toplevel either statically with the macro declaim or dynamically with the function proclaim.
Dynamic environment
A dynamic environment exists for a region of time during the programs execution. In particular, a dynamic environment begins when control flow enters some (specific type of) form and ends when control flow leaves it, either by returning normally or by some nonlocal transfer of control like a return-from or go. To look up a dynamically bound name in a namespace, the currently active dynamic environments are searched (effectively, ie a real system wouldn’t be implemented this way) from most recent to oldest for that name and the first binding wins.
Special variables and catch tags are bound in dynamic environments. Catch tags are bound dynamically using catch while special variables are bound statically using let and dynamically using progv. As we shall discuss later, let can make two different kinds of binding and it knows to treat a symbol as special if it has been defined with defvar or defparameter, or if it has been declared as special.
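As a small illustration of the two ways of binding a special variable (*volume* is a made-up example):
(defvar *volume* 5)
(defun show-volume () (print *volume*))

(let ((*volume* 11))      ; LET binds the statically named special variable
  (show-volume))          ; prints 11

(let ((name '*volume*))   ; PROGV binds whatever symbols are computed at run time
  (progv (list name) (list 42)
    (show-volume)))       ; prints 42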
Lexical environment
A lexical environment corresponds to a region of source code as it is written and a specific runtime instantiation of it. It (slightly loosely) begins at an opening parenthesis and ends at the corresponding closing parenthesis, and is instantiated when control flow hits the opening parenthesis. This description is a little complicated so let’s have an example with variables, which are bound in a lexical environment unless they are special (by convention the names of special variables are wrapped in * characters):
(defun foo ()
  (let ((x 10))
    (bar (lambda () x))))

(defun bar (f)
  (let ((x 20))
    (funcall f)))
Now what happens when we call (foo)? Well if x were bound in a dynamic environment (in foo and bar) then the anonymous function would be called in bar and the first dynamic environment with a binding for x would have it bound to 20.
But this call returns 10 because x is bound in a lexical environment so even though the anonymous function gets passed to bar, it remembers the lexical environment corresponding to the application of foo which created it and in that lexical environment, x is bound to 10. Let’s now have another example to show what I mean by ‘specific runtime instantiation’ above.
(defun baz (islast)
  (let ((x (if islast 10 20)))
    (let ((lx (lambda () x)))
      (if islast
          lx
          (frob lx (baz t))))))

(defun frob (a b)
  (list (funcall a) (funcall b)))
Now running (baz nil) will give us (20 10) because the first function passed to frob remembers the lexical environment for the outer call to baz (where islast is nil) whilst the second remembers the environment for the inner call.
For variables which are not special, let creates static lexical bindings. Block names (introduced statically by block), go tags (scoped inside a tagbody), functions (bound by flet or labels), macros (macrolet), and symbol macros (symbol-macrolet) are all bound statically in lexical environments. Bindings from a lambda list are also lexical. Declarations can be created lexically using (declare ...) in one of the allowed places or by using (locally (declare ...) ...) anywhere.
We note that all lexical bindings are static. The eval trick described above does not work because eval happens in the toplevel environment while references to lexical names are resolved in the lexical environment. This allows the compiler to compile references to lexical names down to exact locations, without the running code having to carry around a list of bindings or access global state (e.g. lexical variables can live in registers and on the stack). It also allows the compiler to work out which bindings can escape or be captured in closures and optimise accordingly. The one exception is that (symbol-)macro bindings can be dynamically inspected in a sense: any macro may take an &environment parameter, which should be passed to macroexpand (and other expansion-related functions) to let the macroexpander search the compile-time lexical environment for macro definitions.
Another thing to note is that without lambda-expressions, lexical and dynamic environments would behave the same way. But note that if there were only a top level environment then recursion would not work as bindings would not be restored as control flow leaves their scope.
Closure
What happens to a lexical binding captured by an anonymous function when that function escapes the scope it was created in? Well, there are two things that can happen:
Trying to access the binding results in an error
The anonymous function keeps the lexical environment alive for as long as the functions referencing it are alive and they can read and write it as they please.
The second case is called a closure and happens for functions and variables. The first case happens for control flow related bindings because you can’t return from a form that has already returned. Neither happens for macro bindings as they cannot be accessed at run time.
Nonlocal control flow
In a language like Java, control (that is, program execution) flows from one statement to the next, branching for if and switch statements, looping for others with special statements like break and return for certain kinds of jumping. For functions control flow goes into the function until it eventually comes out again when the function returns. The one nonlocal way to transfer control is by using throw and try/catch where if you execute a throw then the stack is unwound piece by piece until a suitable catch is found.
In C there is no throw or try/catch but there is goto. The structure of C programs is secretly flat, with the nesting just specifying that “blocks” end in the opposite order to the order they start. What I mean by this is that it is perfectly legal to have a while loop in the middle of a switch with cases inside the loop, and it is legal to goto the middle of a loop from outside that loop. There is a way to do nonlocal control transfer in C: you use setjmp to save the current control state somewhere (with the return value indicating whether you have just saved the state or have returned there nonlocally) and longjmp to return control flow to a previously saved state. No real cleanup or freeing of memory happens as the stack unwinds, and there needn’t be checks that the function which called setjmp is still on the call stack, so the whole thing can be quite dangerous.
In Common Lisp there’s a range of ways to do nonlocal control transfer but the rules are more strict. Lisp doesn’t really have statements but rather everything is built out of a tree of expressions and so the first rule is that you can’t nonlocally transfer control into a deeper expression, you may only transfer out. Let’s look at how these different methods of control transfer work.
block and return-from
You’ve already seen how these work inside a single function but recall that I said block names are lexically scoped. So how does this interact with anonymous functions?
Well suppose you want to search some big nested data structure for something. If you were writing this function in Java or C then you might implement a special search function to recurse through your data structure until it finds the right thing and then return it all the way up. If you were implementing it in Haskell then you would probably want to do it as some kind of fold and rely on lazy evaluation to not do too much work. In Common Lisp you might have a function which applies some other function passed as a parameter to each item in the data structure. And now you can call that with a searching function. How might you get the result out? Well just return-from to the outer block.
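A rough sketch of that pattern (walk-tree and the sample data are made up for illustration):
(defun walk-tree (fn tree)
  "Call FN on every leaf of a nested list."
  (if (consp tree)
      (progn (walk-tree fn (car tree))
             (when (cdr tree)
               (walk-tree fn (cdr tree))))
      (funcall fn tree)))

(defun find-first-even (tree)
  (block found
    (walk-tree (lambda (leaf)
                 (when (and (numberp leaf) (evenp leaf))
                   ;; returns through WALK-TREE straight to our caller
                   (return-from found leaf)))
               tree)
    nil))

(find-first-even '(1 (3 (5 8) 9) 2)) => 8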
tagbody and go
A tagbody is like a progn, except that bare symbols in the body are not evaluated: they are called tags, and any expression within the tagbody can go to them to transfer control there. This is partly like goto if you’re still in the same function, but if your go expression happens inside some anonymous function then it’s like a safe, lexically scoped longjmp.
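For example, a crude counting loop written directly with tagbody and go (just a sketch):
(defun count-to (n)
  (let ((i 0))
    (tagbody
     again                  ; a bare symbol in the body is a tag, not evaluated
       (when (< i n)
         (print (incf i))
         (go again)))))
(count-to 3) prints 1 2 3 and returns NIL.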
catch and throw
These are most similar to the Java model. The key difference between block and catch is that block uses lexical scoping and catch uses dynamic scoping. Therefore their relationship is like that between special and regular variables.
Finally
In Java one can execute code to tidy things up if the stack has to unwind through it as an exception is thrown. This is done with try/finally. The Common Lisp equivalent is called unwind-protect which ensures a form is executed however control flow may leave it.
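A minimal sketch: the cleanup form runs even when control leaves the protected form via a nonlocal exit.
(defun demo ()
  (block done
    (unwind-protect
         (progn (print "working")
                (return-from done :early))
      (print "cleaning up"))))
(demo) prints "working", then "cleaning up", and returns :EARLY.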
Errors
It’s perhaps worth looking a little at how errors work in Common Lisp. Which of these methods do they use?
Well, it turns out that signalling an error does not, in general, immediately unwind the stack; it starts by calling functions. First the implementation looks up all the possible restarts (ways to deal with an error) and saves them somewhere. Next it looks up all applicable handlers (a list of handlers could, for example, be stored in a special variable, as handlers have dynamic scope) and tries each one at a time. A handler is just a function, so it might return (i.e. decline to handle the error) or it might not return. A handler might not return if it invokes a restart. But restarts are just normal functions, so why might these not return? Well, restarts are created in a dynamic environment below the one where the error was raised, and so they can transfer control straight out of the handler and the code that signalled the error to some code that tries to do something and then carries on. Restarts can transfer control using go or return-from. It is worth noting that lexical scope is important here: a recursive function could define a restart on each successive call, and so it is necessary to have lexical scope for variables and tags/block names so that we can make sure we transfer control to the right level on the call stack with the right state.

When is macro expansion performed?

I'm learning about macros in Racket (a language descended from Scheme). There is no mention of when macro expansion is performed. On page 17 of this document I found a paragraph that says it happens before type-checking, evaluation and compiling.
So if I understand correctly, macro expansion occurs right after building the abstract syntax tree (AST)?
Although a Racket expert might correct me, my understanding is that the main stages are:
A read pass that processes the input characters into a syntax object.
An expansion pass that recursively expands the syntax object, including using user-defined macros.
Evaluation. (JIT compilation happens during evaluation, whenever a not-yet-compiled function is called.)
In other words the REPL (read eval print loop) is really more like a REEPL (read expand eval print loop).
For an extreme level of detail see Language Model including e.g. the Syntax Model section.
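You can also invoke the expansion pass yourself, as a rough illustration (the exact output depends on the Racket version):
> (syntax->datum (expand '(when #t 'hi)))
Here expand runs the macro expander on the quoted form and returns a syntax object, and syntax->datum strips the syntax wrappers so the fully expanded program prints in terms of core forms such as if.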
You mentioned "type-checking".
Plain Racket (e.g. #lang racket) is dynamically typed and checking is done at runtime.
Typed Racket (e.g. #lang typed/racket) does static type-checking during expansion: the Typed Racket system is implemented via macros. See Section 10, "Implementation", of Sam Tobin-Hochstadt's dissertation.
(Edited to note the JIT is actually part of evaluation, not a separate stage.)

How does symbol-macrolet handle let shadowing?

From the CLHS
symbol-macrolet lexically establishes expansion functions for each of the symbol macros named by symbols.
...
The use of symbol-macrolet can be shadowed by let.
This allows the following code to work (in *b* the slot x is bound to 1):
CT> (with-slots (x y z) *b*
      (let ((x 10))
        (format nil "~a ~a ~a" x y z)))
"10 2 3"
My question is: how does symbol-macrolet know which forms are allowed to shadow it? I ask because macros cannot guarantee that let has not been redefined, or that the user has not created another form that does the same job as let. Is this a special 'dumb' case that just looks for the cl:let symbol? Or is there some more advanced technique going on?
I am happy to edit this question if it is too vague, I am having difficulty articulating the issue.
See 3.1.1.4 and the surrounding materials.
Where is that quote from? I don't think it's entirely correct, since let is not the only thing that can shadow the name established by symbol-macrolet in the lexical environment.
I don't think it does much harm to reveal that the lexical environment isn't just an abstract concept, there is an actual data structure that manifests it. The lexical environment is available to macros at compile time via the &environment binding mechanism. Macros can use that to get a window into the environment and there is a protocol for using the environment. So, to give a simple example, macros can be authored that are sensitive to declarations in the lexical environment, for example expanding one way if a variable is declared fixnum.
The implementation of the environment is left up to the implementers, but in essence it is just a stack of names with information about the names. So lambda bindings, macrolet, labels, let*, etc. are merely pushing new names onto that stack and thus shadowing the old names. And lexical declarations add to the information about the names.
The compiler or evaluator then uses the environment (stack) to guide how the resulting code or execution behaves. It's worth noting that this data structure need not survive into runtime, though often some descendent of it does to help the debugger.
So to answer your question: symbol-macrolet doesn't know anything about what the forms in its &body might be doing.
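A minimal sketch of that &environment mechanism (describe-binding and *buffer* are made-up names):
(defmacro describe-binding (sym &environment env)
  (multiple-value-bind (expansion expanded-p)
      (macroexpand-1 sym env)
    (if expanded-p
        `(format t "~S is a symbol macro expanding to ~S~%" ',sym ',expansion)
        `(format t "~S is not a symbol macro here~%" ',sym))))

(symbol-macrolet ((x (aref *buffer* 0)))
  (describe-binding x)        ; reports the symbol-macro expansion
  (let ((x 10))
    (declare (ignorable x))
    (describe-binding x)))    ; the LET binding shadows the symbol macro
The macro never looks for let specifically; it just asks the environment object it was handed whether x currently names a symbol macro, and the answer changes because let pushed a new variable binding for x.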
As you can see, SYMBOL-MACROLET is a built-in feature of Common Lisp, just like LET. These special operators can't be redefined; it is not allowed to do so.
Common Lisp has only a fixed set of special operators and no way for the user to define new ones. There are only these ones defined: Common Lisp special operators.
Since macros will be expanded, they expand to the basic primitives: function calls and special forms. Thus the compiler/interpreter implements symbol-macrolet, and this task is limited by the fixed number of primitive forms. If the user implements his/her own LET, that implementation eventually also boils down to function calls and special forms, since all uses of macros will be expanded to those. But those are known, and there is nothing new for symbol-macrolet to handle.

How do keywords work in Common Lisp?

The question is not about using keywords, but actually about keyword implementation. For example, when I create some function with keyword parameters and make a call:
(defun fun (&key key-param) (print key-param)) => FUN
(find-symbol "KEY-PARAM" 'keyword) => NIL, NIL ;;keyword is not still registered
(fun :key-param 1) => 1
(find-symbol "KEY-PARAM" 'keyword) => :KEY-PARAM, :EXTERNAL
How is a keyword used to pass an argument? Keywords are symbols whose values are themselves, so how can a corresponding parameter be bound using a keyword?
Another question about keywords — keywords are used to define packages. We can define a package named with already existing keyword:
(defpackage :KEY-PARAM) => #<The KEY-PARAM package, 0/16 ...
(in-package :KEY-PARAM) => #<The KEY-PARAM package, 0/16 ...
(defun fun (&key key-param) (print key-param)) => FUN
(fun :KEY-PARAM 1) => 1
How does the system distinguish the usage of :KEY-PARAM between a package name and a function parameter name?
Also we can make something more complicated if we define a function KEY-PARAM and export it (actually not the function, but the name):
(in-package :KEY-PARAM)
(defun KEY-PARAM (&key KEY-PARAM) KEY-PARAM) => KEY-PARAM
(defpackage :KEY-PARAM (:export :KEY-PARAM))
;;exporting function KEY-PARAM, :KEY-PARAM keyword is used for it
(in-package :CL-USER) => #<The COMMON-LISP-USER package, ...
(KEY-PARAM:KEY-PARAM :KEY-PARAM 1) => 1
;;calling a function KEY-PARAM from :KEY-PARAM package with :KEY-PARAM parameter...
The question is the same: how does Common Lisp distinguish the usage of the keyword :KEY-PARAM here?
If there is some manual about keywords in Common Lisp with an explanation of their mechanics, I would be grateful if you posted a link here, because I could only find short articles about the usage of keywords.
See Common Lisp Hyperspec for full details of keyword parameters. Notice that
(defun fun (&key key-param) ...)
is actually short for:
(defun fun (&key ((:key-param key-param)) ) ...)
The full syntax of a keyword parameter is:
((keyword-name var) default-value supplied-p-var)
default-value and supplied-p-var are optional. While it's conventional to use a keyword symbol as the keyword-name, it's not required; if you just specify a var instead of (keyword-name var), the keyword-name defaults to a symbol in the keyword package with the same name as var.
So, for example, you could do:
(defun fun2 (&key ((myoption var))) (print var))
and then call it as:
(fun2 'myoption 3)
The way it works internally is when the function is being called, it steps through the argument list, collecting pairs of arguments <label, value>. For each label, it looks in the parameter list for a parameter with that keyword-name, and binds the corresponding var to value.
The reason we normally use keywords is that the : prefix stands out. These symbols have also been made self-evaluating so that we don't have to quote them, i.e. you can write :key-param instead of ':key-param (FYI, this latter notation was necessary in earlier Lisp systems, but the CL designers decided it was ugly and redundant). And we don't normally use the ability to specify a keyword with a different name from the variable because it would be confusing; it was done this way for full generality. Also, allowing regular symbols in place of keywords is useful for facilities like CLOS, where argument lists get merged and you want to avoid conflicts: if you're extending a generic function you can add parameters whose keywords are in your own package and there won't be collisions.
The use of keyword arguments when defining packages and exporting symbols is again just a convention. DEFPACKAGE, IN-PACKAGE, EXPORT, etc. only care about the names they're given, not which package the symbols are in. You could write
(defpackage key-param)
and it would usually work just as well. The reason many programmers don't do this is because it interns a symbol in their own package, and this can sometimes cause package conflicts if this happens to have the same name as a symbol they're trying to import from another package. Using keywords divorces these parameters from the application's package, avoiding potential problems like this.
The bottom line is: when you're using a symbol and you only care about its name, not its identity, it's often safest to use a keyword.
Finally, about distinguishing keywords when they're used in different ways. A keyword is just a symbol. If it's used in a place where the function or macro just expects an ordinary parameter, the value of that parameter will be the symbol. If you're calling a function that has &key arguments, that's the only time they're used as labels to associate arguments with parameters.
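Pulling those pieces together, a small sketch (make-rect is a made-up function):
(defun make-rect (&key ((:width w) 10) ((:height h) 20 h-supplied-p))
  (list :width w :height h :height-supplied h-supplied-p))

(make-rect :height 5) => (:WIDTH 10 :HEIGHT 5 :HEIGHT-SUPPLIED T)
(make-rect)           => (:WIDTH 10 :HEIGHT 20 :HEIGHT-SUPPLIED NIL)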
A good manual is Chapter 21 of Practical Common Lisp (PCL).
Answering your questions briefly:
keywords are exported symbols in keyword package, so you can refer to them not only as :a, but also as keyword:a
keywords in function argument lists (called lambda lists) are probably implemented in the following way: in the presence of the &key modifier, the lambda form is expanded into something similar to this:
(let ((key-param (getf args :key-param)))
  body)
when you use a keyword to name a package it is actually used as a string designator. This is a Lisp concept that allows functions dealing with strings that will later be used as names (of packages, classes, functions, etc.) to accept not only strings, but also keywords and symbols. So, the basic way to define/use a package is actually this:
(defpackage "KEY-PARAM" ...)
But you can as well use:
(defpackage :key-param ...)
and
(defpackage #:key-param ...)
(here #: is a reader macro to create uninterned symbols; and this way is the preferred one, because you don't create unneeded keywords in the process).
The latter two forms will be converted to upper-case strings. So a keyword stays a keyword, and a package gets its name as a string, converted from that keyword.
To sum up, keywords evaluate to themselves, and otherwise behave like any other symbols. The difference is that keywords don't require explicit qualification with the keyword package. And like other symbols they can serve as names for objects: for example, you can name a function with a keyword and it will be "magically" accessible in every package :) See Xach's blog post for details.
There is no need for "the system" to distinguish different uses of keywords. They are just used as names. For example, imagine two plists:
(defparameter *language-scores* '(:basic 0 :common-lisp 5 :python 3))
(defparameter *prices* '(:basic 100 :fancy 500))
A function yielding the score of a language:
(defun language-score (language &optional (language-scores *language-scores*))
  (getf language-scores language))
The keywords, when used with language-score, designate different programming languages:
CL-USER> (language-score :common-lisp)
5
Now, what does the system do to distinguish the keywords in *language-scores* from those in *price*? Absolutely nothing. The keywords are just names, designating different things in different data structures. They are no more distinguished than homophones are in natural language – their use determines what they mean in a given context.
In the above example, nothing prevents us from using the function in the wrong context:
(language-score :basic *prices*)
100
The language did nothing to prevent us from doing this, and the keywords for the not-so-fancy programming language and the not-so-fancy product are just the same.
There are many possibilities to prevent this: not allowing the optional argument for language-score in the first place, putting *prices* in another package without exporting it, or closing over a lexical binding instead of using the globally special *language-scores* while only exposing means to add and retrieve entries. Maybe just our understanding of the code base, or convention, is enough to prevent us from doing that. The point is: the system's distinguishing the keywords themselves is not necessary to achieve what we want to implement.
The specific keyword uses you ask about are not different: The implementation might store the bindings of keyword-arguments in an alist, a plist, a hash-table or whatever. In the case of package names, the keywords are just used as package designators and the package name as a string (in uppercase) might just be used instead. Whether the implementation converts strings to keywords, keywords to strings, or something entirely different internally doesn't really matter. What matters is just the name, and in which context it is used.

Why does Clojure have "keywords" in addition to "symbols"?

I have a passing knowledge of other Lisps (particularly Scheme) from way back. Recently I've been reading about Clojure. I see that it has both "symbols" and "keywords". Symbols I'm familiar with, but not with keywords.
Do other Lisps have keywords? How are keywords different from symbols other than having different notation (ie: colons)?
Here's the Clojure documentation for Keywords and Symbols.
Keywords are symbolic identifiers that evaluate to themselves. They provide very fast equality tests...
Symbols are identifiers that are normally used to refer to something else. They can be used in program forms to refer to function parameters, let bindings, class names and global vars...
Keywords are generally used as lightweight "constant strings", e.g. for the keys of a hash-map or the dispatch values of a multimethod. Symbols are generally used to name variable and functions and it's less common to manipulate them as objects directly except in macros and such. But there's nothing stopping you from using a symbol everywhere you use a keyword (if you don't mind quoting them all the time).
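A quick sketch of that difference in practice (the map and names here are made up):
user> (def person {:name "Ada" :age 36})
#'user/person
user> (:name person)        ; keywords evaluate to themselves and act as lookup functions
"Ada"
user> (def person2 {'name "Ada"})
#'user/person2
user> (get person2 'name)   ; a symbol key has to be quoted, or it would be resolved as a var
"Ada"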
The easiest way to see the difference is to read Keyword.java and Symbol.java in the Clojure source. There are a few obvious implementation differences. For example a Symbol in Clojure can have metadata and a Keyword can't.
In addition to single-colon syntax, you can use a double-colon to make a namespace-qualified keyword.
user> :foo
:foo
user> ::foo
:user/foo
Common Lisp has keywords, as do Ruby and other languages. They are slightly different in those languages of course. Some differences between Common Lisp keywords and Clojure keywords:
Keywords in Clojure are not Symbols.
user> (symbol? :foo)
false
Keywords don't belong to any namespace unless you specifically qualify them:
user> (namespace :foo)
nil
user> (namespace ::foo)
"user"
(Thanks Rainer Joswig for giving me ideas of things to look at.)
Common Lisp has keyword symbols.
Keywords are symbols, too.
(symbolp ':foo) -> T
What makes keywords special:
:foo is parsed by the Common Lisp reader as the symbol keyword::foo
keywords evaluate to themselves: :foo -> :foo
the home package of keyword symbols is the KEYWORD package: keyword:foo -> :foo
keywords are exported from the package KEYWORD
keywords are constants, it is not allowed to assign a different value
Otherwise keywords are ordinary symbols. So keywords can name functions or have property lists.
Remember: in Common Lisp symbols belong to packages. This can be written as:
foo, when the symbol is accessible in the current package
foo:bar, when the symbol BAR is exported from the package FOO
foo::bar, when the symbol BAR is in the package FOO
For keyword symbols that means that :foo, keyword:foo and keyword::foo are all the same symbol. Thus the latter two notations are usually not used.
So :foo is just parsed to be in the package KEYWORD, assuming that giving no package name before the symbol name means by default the KEYWORD package.
Keywords are symbols that evaluate to themselves, so you don't have to remember to quote them.
:keywords are also treated specially by many of the collections, allowing for some really convenient syntax.
(:user-id (get-users-map))
is the same as
((get-users-map) :user-id)
this makes things just a little more flexible
For keywords, hash values are calculated and cached when the keyword is
first constructed. When looking up a keyword as a hash key, it simply
returns the precomputed hashed value. For strings and symbols, the hash is
recalculated on every lookup.
That's why same-named keywords are always identical: they carry their own hash values.
Since lookups in maps and sets are based on hash keys, this improves efficiency when many lookups are performed; the saving is in the repeated lookups, not in any single search.
Keywords are global, symbols are not.
This example is written in JavaScript, but I hope it helps to carry the point across.
const foo = Symbol.for(":foo") // this will create a keyword
const foo2 = Symbol.for(":foo") // this will return the same keyword
const foo3 = Symbol(":foo") // this will create a new symbol
foo === foo2 // true
foo2 === foo3 // false
When you construct a symbol using the Symbol function you get a distinct/private symbol every time. When you ask for a symbol via the Symbol.for function, you will get back the same symbol every time.
(println :foo) ; Clojure
System.out.println(RT.keyword(null, "foo")) // Java
console.log(Symbol.for(":foo")) // JavaScript
These are all the same.
Function argument names are local, i.e. not keywords.
(def foo (fn [x] (println x))) ; x is a symbol
(def bar (fn [x] (println x))) ; not the same x (different symbol)