So, having watched three hours of YouTube videos and spent equally long reading about Lisp, I've yet to see these "magic macros" that allow one to write DSLs, or even do simple things like 4 + 5 without nesting them inside parentheses.
There is some discussion here: Common lisp: is there a less painful way to input math expressions?, but the syntax doesn't look any nicer, and it still requires some enclosing fluff to define where the macro starts and ends.
So here is the challenge: define an infix macro, then use it without having to enclose it in some kind of additional syntax. I.e.
Some macro definition here
1 + 2
NOT
Some macro definition here
(my-macro 1 + 2)
NOT
Some macro definition here
ugly-syntax 1 + 2 end-ugly-syntax
If this is not possible in Lisp, then what is all the fuss about? It's kind of like saying "you have awesome, powerful macros that allow for cool DSLs, provided those DSLs are wrapped in Lisp-like syntax".
Of course you can implement what you're talking about in Lisp.
One big difference between Lisp and other languages is that nothing is fixed and you have control over what the reader does with every single character of the input source code. Literally.
Consider for example the project CL-Python and its mixed-syntax Lisp/Python mode.
Basically, in that mode, if what you type starts with an open parenthesis it's considered Lisp; otherwise it's considered Python (including multi-line constructs).
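To see how much control that is, here is a minimal sketch of the same reader mechanism (my own toy example, not CL-Python itself): one character is handed over to a custom reading function, which then consumes as much of the input as it likes.

(defun read-raw-line (stream char)
  "Read the rest of the current line as a plain string."
  (declare (ignore char))
  (read-line stream))

;; From now on, $ hands the rest of the line to our function:
(set-macro-character #\$ #'read-raw-line)

;; $hello world   now reads as the string "hello world".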
...HOWEVER...
this kind of macro/reader-macro library is rarely implemented, because one of the main advantages of the s-expression approach is that it composes well: you can build other macros on top of existing macros, raising the level of the language.
Once you abandon the regularity of the s-expression approach, writing macros becomes annoying, because to write code that manipulates code you have to consider the language's many different constructs, precedence rules, special rules and special syntax forms.
Languages that are not based on s-expressions sometimes provide real macro-processing capability, but working at the AST level, i.e. after some parsing has already been done in a fixed way. Moreover, macro code in these languages looks really weird, because the code the macro is manipulating, inspecting or building doesn't look like real code.
In other languages you sometimes instead find only text-based macros that are basically search-and-replace. To see an example of how ugly things can get, consider the Python standard library implementation of collections.namedtuple (around line 284) and its set of absurd limitations induced just by that implementation.
Another example of how things can get horribly complex, once you force yourself into a template-only approach to avoid the complexity of manipulating an irregular and special-cased language, is C++ template metaprogramming.
A simple s-expression-based language plus quasi-quotation instead makes macro code much easier to write, read and understand, and that's why Lisp code doesn't move away from it. Not because it cannot, but because it makes no sense to move to a worse syntax for no reason.
In Lisp you "bend" the language a little by adding the abstractions that are really needed without however breaking everything else and, most important, without dropping the ability to do more bending in the future if needed. Writing a macro/reader-macro that makes the expression parsing the nightmare of say C++ and at the same time removes the ability to write further macros and add more constructs (or makes it impossibly hard) would be a nonsensical suicide.
Even a macro like (infix x + y * z) is just an exercise... I doubt any lisper would use it to write real code... why on earth would someone reintroduce the absurd function/operator duality and the nightmare of precedence/associativity rules? If you don't like Lisp, then just don't use Lisp.
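For what it's worth, such an exercise macro is short. Here is a minimal sketch (my own, not from any library) that folds strictly left to right and ignores precedence entirely:

(defmacro infix (&rest tokens)
  "Rewrite X OP Y OP2 Z ... into nested prefix calls, left to right."
  (reduce (lambda (acc pair) (list (first pair) acc (second pair)))
          (loop for (op arg) on (rest tokens) by #'cddr
                collect (list op arg))
          :initial-value (first tokens)))

;; (infix 1 + 2 * 3) macroexpands to (* (+ 1 2) 3) and yields 9.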
For a lisper it's not the (infix and ) part that is ugly... it's what is in the middle.
Also why do you think that 2+3*6 is "naturally" 20? Because the teacher hit you on the palms of your hands with a stick when you were a kid until you got it right?
http://julia.readthedocs.org/en/latest/manual/metaprogramming/ discusses macros in Julia, which usually start with @, but it also lists two special macros, @text_str and @cmd, which handle text"string" and `shell command`, respectively. Is there a comprehensive list of these special macros supported by Julia? Is it possible to define your own?
So all the macros, including string literal macros, are in exports.jl.
If you are asking about these special syntax transformations in general, like string-literal macros, I don't think that's a question that's easily answerable: there are multiple arbitrary syntax translations like that which you can't do in user code (without using an @ to denote that you are transforming syntax with a macro). Most Julia macro-or-function-looking things aren't magic, but string literals, ccall, and maybe even things like A'c and the like would qualify.
The most sure-to-be-up-to-date way to find out is to enter the folder base and say grep @ exports.jl. If you're not on a Unix-like platform, opening that file and looking at the # Macros section will also work.
It is indeed possible to make your own; in fact every macro of the form
macro x_str(s)
    # s receives the literal contents of x"..." as a string
end
is a String macro. Since 0.6, command macros are also supported by
macro x_cmd(s)
    # s receives the literal contents of x`...`
end
As described here, emacs-lisp-mode provides special handling for s-expressions in docstrings that start in the first column. These have to be escaped with a backslash to avoid mucking up font-lock later in the file.
This may be a feature for elisp, but it is unfortunate for other Lisp modes that reuse emacs-lisp-mode for convenience and have no special handling of expressions in docstrings, as described/shown here.
My question is, is there any way for such "descendant" modes to configure emacs-lisp-mode to disregard "calling convention expressions" in docstrings?
The short answer is no.
The longer answer is that those other modes are simply broken. They should adapt to Emacs Lisp in this regard. There is no reason not to, is there? It is simply a bad idea to use workarounds (e.g. indenting all doc-string lines) such as are suggested in the link you provided (and its linked duplicate post).
Emacs doc strings are not trivial strings. They have several special properties, including the handling of \\[...], \\{...}, and \\<...>, as well as the property you mention here.
If some mode cannot adjust to Emacs doc strings, then it should use macros that define the things it needs without creating Emacs doc strings for them, instead handling a different string argument in the special way desired. IOW, create pseudo doc strings that correspond to what the mode wants instead of what Emacs wants.
Of course, that means you cannot directly take advantage of the Emacs documentation features. You would also need to define mode-specific doc commands that would, for example, wrap the existing doc functions such as describe-function with code that picks up the mode's pseudo doc string and DTRT, following the mode's conventions instead of the Emacs doc-string conventions.
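A minimal sketch of that idea (all names here are hypothetical): keep the mode's doc text out of the Emacs doc string entirely, for example on the symbol's property list.

(defmacro my-defun (name args mode-doc &rest body)
  "Define NAME, storing MODE-DOC as a mode-specific property
instead of as an Emacs doc string."
  `(progn
     (put ',name 'my-mode-doc ,mode-doc)
     (defun ,name ,args ,@body)))

;; A mode-specific help command would then read the property back:
;; (get 'some-function 'my-mode-doc)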
But I would think that the easiest approach would be to just adapt the mode to the existing Emacs behavior, so that it DTRT.
Many Emacs programming modes, and the various Lisp modes are no exception, have been implemented on top of parsers based on regular expressions. This, unfortunately, gives the editor little idea of the document being edited. Eclipse, for example, has a very different idea of how to edit code, which is more structured, and JetBrains MPS editors are even more rigid and structured in this sense (almost like spreadsheets).
This makes Emacs modes faster and easier to implement, but it also means the code that supports proper indentation, syntactic validation and highlighting has to re-parse more text every time it is edited. CEDET, afaik, is trying to address this issue.
Thus, historically, there have been conventions designed to reduce the amount of code to parse on each edit. A parenthesis in the first column is one such convention. However, it has also been known to be an annoyance at times; that's why there's an open-paren-in-column-0-is-defun-start variable one can set to nil to inhibit this behaviour.
But it's hard to say exactly what performance issues you may face when changing this setting. Lisp grammar is very regular, unless you are using many reader macros, so perhaps that won't be a problem.
If beginning-of-defun-function is set accordingly, i.e. to a function that checks whether point is inside a comment or string, there should be no need for such escaping.
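A sketch of how a derived mode might wire that up (the mode name my-lisp-mode is hypothetical, and a real implementation needs more care):

(defun my-lisp-beginning-of-defun (&optional _arg)
  "Move to the start of the enclosing top-level form.
A paren in column 0 inside a string or comment is not
treated as a defun start."
  (let ((state (syntax-ppss)))
    ;; (nth 8 state): start of enclosing string/comment, if any.
    (when (nth 8 state)
      (goto-char (nth 8 state)))
    ;; (nth 9 state): open parens around point, outermost first.
    (let ((top (car (nth 9 (syntax-ppss)))))
      (when top (goto-char top))
      (and top t))))

(add-hook 'my-lisp-mode-hook
          (lambda ()
            (setq-local open-paren-in-column-0-is-defun-start nil)
            (setq-local beginning-of-defun-function
                        #'my-lisp-beginning-of-defun)))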
I'm trying to improve Emacs highlighting of Common Lisp, and I'm stuck on the regexp approach to highlighting used by font-lock. Regexps aren't enough, as I want to recognize the structure of forms such as defun - the highlighting of a function's argument list should differ from the highlighting of its body, not just be a global search-and-highlight.
So, are there any alternatives to font-lock in Emacs itself or somewhere on the Internet? And if so, do they operate on symbolic expressions?
Emacs' font-lock matching is not restricted to regular expressions; you can use any function as a matcher, provided it satisfies a certain protocol. Take a look at the variable font-lock-keywords for more details:
C-h v font-lock-keywords
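For instance, here is a sketch of a function-based matcher (my own illustration, not from any package) that finds defun argument lists by moving over real sexp structure rather than by regexp alone:

(defun my-match-defun-arglist (limit)
  "Find the next defun argument list before LIMIT.
Set the match data to cover the whole argument list."
  (when (re-search-forward "(defun\\s-+\\(?:\\sw\\|\\s_\\)+\\s-*" limit t)
    (let ((start (point)))
      (ignore-errors
        (forward-sexp)                ; skip the entire arglist sexp
        (set-match-data (list start (point)))
        t))))

(font-lock-add-keywords
 'lisp-mode
 '((my-match-defun-arglist . font-lock-variable-name-face)))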
I think that something like this could be done on the basis of Semantic (part of the CEDET package) - you can get syntactic information from the parsed buffer and apply a different color to each type of object. Although I don't know of any existing implementation right now.
I've heard that Lisp lets you redefine the language itself, and I have tried to research it, but there is no clear explanation anywhere. Does anyone have a simple example?
Lisp users refer to Lisp as the programmable programming language. It is used for symbolic computing - computing with symbols.
Macros are only one way to exploit the symbolic computing paradigm. The broader vision is that Lisp provides easy ways to describe symbolic expressions: mathematical terms, logic expressions, iteration statements, rules, constraint descriptions and more. Macros (transformations of Lisp source forms) are just one application of symbolic computing.
There are several aspects to this: if you ask about 'redefining' the language, then strictly speaking redefining would mean changing some existing language mechanism (syntax, semantics, pragmatics). But there is also extending, embedding and removing language features.
In the Lisp tradition there have been many attempts to provide these features. A Lisp dialect and a certain implementation may offer only a subset of them.
A few ways to redefine/change/extend functionality as provided by major Common Lisp implementations:
s-expression syntax. The syntax of s-expressions is not fixed. The reader (the function READ) uses so-called read tables to specify functions that will be executed when a character is read. One can modify and create read tables. This allows you for example to change the syntax of lists, symbols or other data objects. One can also introduce new syntax for new or existing data types (like hash-tables). It is also possible to replace the s-expression syntax completely and use a different parsing mechanism. If the new parser returns Lisp forms, there is no change needed for the Interpreter or Compiler. A typical example is a read macro that can read infix expressions. Within such a read macro, infix expressions and precedence rules for operators are being used. Read macros are different from ordinary macros: read macros work on the character level of the Lisp data syntax.
replacing functions. Top-level functions are bound to symbols. The user can change this binding. Most implementations have a mechanism to allow this even for many built-in functions. If you want to provide an alternative to the built-in function ROOM, you could replace its definition. Some implementations will raise an error and then offer the option to continue with the change. Sometimes it is necessary to unlock a package first. This means that functions can in general be replaced with new definitions. There are limitations to that. One is that the compiler may inline functions in code; to see an effect one then needs to recompile the code that uses the changed function.
advising functions. Often one wants to add some behavior to functions. This is called 'advising' in the Lisp world. Many Common Lisp implementations will provide such a facility.
custom packages. Packages group symbols into name spaces. The COMMON-LISP package is the home of all symbols that are part of the ANSI Common Lisp standard. The programmer can create new packages and import existing symbols. So you could use in your programs an EXTENDED-COMMON-LISP package that provides more or different facilities. Just by adding (IN-PACKAGE "EXTENDED-COMMON-LISP") you can start to develop using your own extended version of Common Lisp. Depending on the namespace used, the Lisp dialect you use may look slightly or even radically different. In Genera on the Lisp Machine there are several Lisp dialects side by side this way: ZetaLisp, CLtL1, ANSI Common Lisp and Symbolics Common Lisp.
CLOS and dynamic objects. The Common Lisp Object System comes with change built in. The Meta-Object Protocol extends these capabilities. CLOS itself can be extended/redefined in CLOS. You want different inheritance? Write a method. You want different ways to store instances? Write a method. Slots should carry more information? Provide a class for that. CLOS itself is designed such that it is able to implement a whole 'region' of different object-oriented programming languages. Typical examples are adding things like prototypes, integration with foreign object systems (such as Objective-C), adding persistence, ...
Lisp forms. The interpretation of Lisp forms can be redefined with macros. A macro can parse the source code it encloses and change it. There are various ways to control the transformation process. Complex macros use a code walker, which understands the syntax of Lisp forms and can apply transformations. Macros can be trivial, but they can also get very complex, like the LOOP or ITERATE macros. Other typical examples are macros for embedded SQL and embedded HTML generation. Macros can also be used to move computation to compile time. Since the compiler is itself a Lisp program, arbitrary computation can be done during compilation. For example, a Lisp macro could compute an optimized version of a formula if certain parameters are known during compilation.
Symbols. Common Lisp provides symbol macros. Symbol macros make it possible to change the meaning of symbols in source code. A typical example: (with-slots (foo) bar (+ foo 17)) Here the symbol FOO in the source enclosed by WITH-SLOTS will be replaced with the call (slot-value bar 'foo).
optimizations. With so-called compiler macros one can provide more efficient versions of some functionality. The compiler will use those compiler macros. This is an effective way for the user to program optimizations.
Condition Handling - handle conditions that result from using the programming language in a certain way. Common Lisp provides an advanced way to handle errors. The condition system can also be used to redefine language features. For example, one could handle undefined-function errors with a self-written autoload mechanism: instead of landing in the debugger when Lisp sees an undefined function, the error handler could try to autoload the function and retry the operation after loading the necessary code.
Special variables - inject variable bindings into existing code. Many Lisp dialects, like Common Lisp, provide special (dynamic) variables. Their value is looked up at runtime on the stack. This allows enclosing code to add variable bindings that influence existing code without changing it. A typical example is a variable like *standard-output*. One can rebind the variable, and all output using this variable during the dynamic scope of the new binding will go in a new direction. Richard Stallman considered this feature so important that he made dynamic binding the default in Emacs Lisp (even though he knew about lexical binding in Scheme and Common Lisp).
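As a tiny sketch of that last item, here is how the output of an existing, unmodified function can be captured just by rebinding *standard-output*:

(defun noisy ()
  (format t "hello~%"))        ; T means: write to *standard-output*

(with-output-to-string (s)
  (let ((*standard-output* s))
    (noisy)))                  ; => "hello" followed by a newline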
Lisp has these and more facilities because it has been used to implement many different languages and programming paradigms. A typical example is an embedded implementation of a logic language, say Prolog. Lisp makes it possible to describe Prolog terms with s-expressions, and with a special compiler the Prolog terms can be compiled to Lisp code. Sometimes the usual Prolog syntax is needed; then a parser will parse typical Prolog terms into Lisp forms, which will then be compiled. Other examples of embedded languages are rule-based languages, mathematical expressions, SQL terms, inline Lisp assembler, HTML, XML and many more.
I'm going to pipe in that Scheme is different from Common Lisp when it comes to defining new syntax. It allows you to define templates using define-syntax which get applied to your source code wherever they are used. They look just like functions, only they run at compile time and transform the AST.
Here's an example of how let can be defined in terms of lambda. The line with let is the pattern to be matched, and the line with lambda is the resulting code template.
(define-syntax let
  (syntax-rules ()
    [(let ([var expr] ...) body1 body2 ...)
     ((lambda (var ...) body1 body2 ...) expr ...)]))
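;; Usage: (let ([x 1] [y 2]) (+ x y))
;; expands to ((lambda (x y) (+ x y)) 1 2) and evaluates to 3.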
Note that this is NOTHING like textual substitution. You can actually redefine lambda and the above definition for let will still work, because it is using the definition of lambda in the environment where let was defined. Basically, it's powerful like macros but clean like functions.
Macros are the usual reason for saying this. The idea is that because code is just a data structure (a tree, more or less), you can write programs to generate this data structure. Everything you know about writing programs that generate and manipulate data structures, therefore, adds to your ability to code expressively.
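As a minimal Common Lisp illustration of that point (a classic exercise, not anyone's library code): the macro below receives its arguments as unevaluated list structure and returns a freshly built list, which then becomes the code that is compiled.

(defmacro swap! (a b)
  "Exchange the values of the two places A and B."
  (let ((tmp (gensym)))        ; fresh symbol to avoid variable capture
    `(let ((,tmp ,a))
       (setf ,a ,b)
       (setf ,b ,tmp))))

;; (swap! x y) expands to something like
;; (let ((#:G42 x)) (setf x y) (setf y #:G42))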
Macros aren't quite a complete redefinition of the language, at least as far as I know (I'm actually a Schemer; I could be wrong), because there is a restriction. A macro can only take a single subtree of your code, and generate a single subtree to replace it. Therefore you can't write whole-program-transforming macros, as cool as that would be.
However, macros as they stand can still do a whole lot of stuff - definitely more than any other language will let you do. And if you're using static compilation, it wouldn't be hard at all to do a whole-program transformation, so the restriction is less of a big deal then.
A reference to Structure and Interpretation of Computer Programs, chapters 4-5, is what I was missing from the answers (link).
These chapters guide you through building a Lisp evaluator in Lisp. I like this read because not only does it show how to redefine Lisp in a new evaluator, it also lets you learn about the specification of the Lisp programming language.
This answer is specifically concerning Common Lisp (CL hereafter), although parts of the answer may be applicable to other languages in the lisp family.
Since CL uses S-expressions and (mostly) looks like a sequence of function applications, there's no obvious difference between built-ins and user code. The main difference is that the "things the language provides" are available in a specific package within the coding environment.
With a bit of care, it is not hard to code replacements and use those instead.
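A small sketch of what such a replacement can look like (the package name and the tracing behaviour are only illustrations):

(defpackage :extended-cl
  (:use :cl)
  (:shadow #:length))          ; our own LENGTH instead of CL:LENGTH

(in-package :extended-cl)

(defun length (sequence)
  "A drop-in replacement that traces calls, then defers to CL:LENGTH."
  (format *trace-output* "~&length called on ~S~%" sequence)
  (cl:length sequence))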
Now, the "normal" reader (the part that reads source code and turns it into internal notation) expects the source code to be in a rather specific format (parenthesised S-expressions) but as the reader is driven by something called "read-tables" and these can be created and modified by the developer, it is also possible to change how the source code is supposed to look.
These two things should at least provide some rationale as to why Common Lisp can be considered a re-programmable programming language. I don't have a simple example at hand, but I do have a partial implementation of a translation of Common Lisp to Swedish (created for April 1st, a few years back).
From the outside, looking in...
I always thought it was because Lisp provided, at its core, such basic, atomic logical operators that any logical process can be built (and has been built and provided as toolsets and add-ins) from the basic components.
It is not so much that it can redefine itself as that its basic definition is so malleable that it can take any form and that no form is assumed/presumed into the structure.
As a metaphor: if you only have organic compounds, you do organic chemistry; if you only have metal oxides, you do metallurgy; but if you have only the elements, you can do everything, though with extra initial steps to complete... most of which others have already done for you...
I think.....
Cool example at http://www.cs.colorado.edu/~ralex/papers/PDF/X-expressions.pdf
reader macros define X-expressions to coexist with S-expressions, e.g.,
? (cx <circle cx="62" cy="135" r="20"/>)
62
plain vanilla Common Lisp at http://www.AgentSheets.com/lisp/XMLisp/XMLisp.lisp
...
(eval-when (:compile-toplevel :load-toplevel :execute)
  (when (and (not (boundp '*Non-XMLISP-Readtable*)) (get-macro-character #\<))
    (warn "~%XMLisp: The current *readtable* already contains a #\\< reader function: ~A"
          (get-macro-character #\<))))
... of course the XML parser is not so simple, but hooking it into the Lisp reader is.