I'm learning Haxe now, and I'm wondering if it's possible for any programming language to be compiled to Haxe (instead of from Haxe). If there isn't any programming language that can be compiled to Haxe in its entirety, then could at least a small subset of a programming language (such as CoffeeScript) be compiled to Haxe?
At this time, there is no way to compile CoffeeScript or anything similar into Haxe.
CoffeeScript is a source-to-source compiler, so you would need to change it from going CoffeeScript->JS to CoffeeScript->Haxe.
I'm not sure how difficult it would be, and you have to remember that Haxe has a bunch of features that JavaScript doesn't, all of which would need to be represented in the "new" CoffeeScript: type information, enums, typedefs, iterators, macros, conditional compilation, untyped blocks, metadata, property access, etc. You would need to figure out how to represent each of these in CoffeeScript in a way that doesn't conflict with itself or with existing syntax.
I too have thought it might be nice, as CoffeeScript has such a clean syntax, but then looking at the complexity of getting it to work I decided that curly brackets and semicolons aren't so bad :)
While reading Peter Seibel's "Practical Common Lisp", I learned that aside from the core parts of the language, like list processing and evaluating, there are macros like loop, do, etc. that were written using those core constructs.
My question is two-fold. First is what exactly is the "core" of Lisp? What is the bare minimum from which other things can be re-created, if needed? The second part is where can one look at the code of the macros which come as part of Common Lisp, but were actually written in Lisp? As a side question, when one writes a Lisp implementation, in what language does he do it?
what exactly is the "core" of Lisp? What is the bare minimum from which other things can be re-created, if needed?
The minimum set of syntactic operators was called "special forms" in CLtL; the term was renamed to "special operators" in ANSI CL. CLtL lists 24 of them, well explained in its section "Special Forms"; in ANSI CL there are 25.
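You can probe this distinction from a running Common Lisp with two standard predicates:

(special-operator-p 'if)    ; => T   -- IF is a special operator
(special-operator-p 'when)  ; => NIL
(macro-function 'when)      ; => a function: WHEN is an ordinary macro, typically expanding into IF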
where can one look at the code of the macros which come as part of Common Lisp, but were actually written in Lisp?
Many Common Lisp implementations are free software, so you can look at their source code; SBCL and GNU CLISP are two examples.
when one writes a Lisp implementation, in what language does he do it?
Usually a Lisp implementation consists of
a lower-level part, written in a system programming language. This part includes the implementation of the function call mechanism and of the runtime part of the 24 special forms. And
a higher-level part, for which Lisp itself is used because it would be too tedious to write everything in the system programming language. This usually includes the macros and the compiler.
The choice of the system programming language depends. For implementations that are built on top of the Java VM, for example, the natural choice is Java. For implementations that include their own memory management, it is often C, or some Lisp extension with semantics similar to C's (i.e. where you have fixed-width integer types, explicit pointers, etc.).
First is what exactly is the "core" of Lisp? What is the bare minimum from which other things can be re-created, if needed?
Most Lisps have a core of primitive constructs, which is usually written in C (or maybe assembly). The usual reason for choosing those languages is performance. How bare the bare minimum is depends on how far you want to go: you don't need much to be Turing-complete. Strictly speaking, lambda alone gives you a bare minimum from which other things can be created (see the sketch below). Typically, though, people also include defmacro, cond, defun, etc. Those things aren't strictly necessary, but they are probably what you mean by "bare minimum" and what implementations usually include as primitive language constructs.
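To make "lambda is enough" concrete, here is a small sketch in Common Lisp syntax showing how one familiar derived construct desugars into bare lambda:

(let ((x 1) (y 2))        ; the familiar binding form...
  (+ x y))

((lambda (x y) (+ x y))   ; ...is just sugar for applying an anonymous function
 1 2)

Both forms evaluate to 3; a minimal core needs only the second.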
The second part is where can one look at the code of the macros which come as part of Common Lisp, but were actually written in Lisp?
Typically, you look in the Lisp sources of the language. Sometimes, though, your macro is not a genuine macro but a primitive language construct. For such things, you may also need to look in the C sources to see how these really primitive things are implemented.
Of course, if your Lisp implementation is not open-source, you need to disassemble its binary files and look at them piece-by-piece in order to understand how primitives are implemented.
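Short of reading sources (or disassembling), you can also ask the running image what a macro turns into. The exact expansion is implementation-specific; this is just the standard way to peek:

(macroexpand-1 '(dolist (x '(1 2 3)) (print x)))
;; returns whatever lower-level looping construct your implementation builds DOLIST from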
As a side question, when one writes a Lisp implementation, in what language does he do it?
As I said above, C is a common choice, and assembly used to be more common. Though, there are Lisps written in high-level languages like Ruby, Python, Haskell, and even Lisp itself. The trade-off here is performance vs. readability and comprehensibility.
If you want a more-or-less canonical example of a Lisp to look at which is totally open-source, check out Emacs' source code. Of course, this isn't Common Lisp, although there is a cl package in the Emacs core which implements a fairly large subset of Common Lisp.
In Lisp, any program's code is actually a valid data structure. For example, this adds one and two together, but it's also a list of three items.
(+ 1 2)
What benefit does that provide? What does that enable you to do that's impossible and/or less elegant in other languages?
To make things a little clearer with respect to code representation, consider that in every language code is data: all you need is strings. (And perhaps a few file operations.) Contemplating how that helps you mimic the Lisp benefits of having a macro system is a good path to enlightenment. Even better if you try to implement such a macro system: you'll run into the advantages of having a structured representation vs the flatness of strings, the need to run the transformations first and define "syntactic hooks" to tell you where to apply them, etc.
But the main thing that you'll see in all of this is that macros are essentially a convenient facility for compiler hooks -- ones that are hooked on newly created keywords. As such, the only thing that is really needed is some way to have user code interact with compiler code. Flat strings are one way to do it, but they provide so little information that the macro writer is left with the task of implementing a parser from scratch. On the other hand, you could expose some internal compiler structure like the pre-parsed AST trees, but those tend to expose too much information to be convenient and it means that the compiler needs to somehow be able to parse the new syntactic extension that you intend to implement. S-expressions are a nice solution to the latter: the compiler can parse anything since the syntax is uniform. They're also a solution to the former, since they're simple structures with rich support by the language for taking them apart and re-combining them in new ways.
But of course that's not the end of the story. For example, it's interesting to compare plain symbolic macros as in CL and hygienic macros as in Scheme implementations: the latter are usually implemented by adding more information to the represented data, which means that you can do more with those macro systems. (You can also do more with CL macros, since the extra information is available there too; but instead of making it part of the syntax representation, it's passed as an extra environment argument to macros.)
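To ground this, here is a minimal sketch of a plain symbolic macro in Common Lisp that takes its argument forms apart and recombines them into new code. SWAP! is a made-up name for illustration (standard CL already provides ROTATEF for this job):

(defmacro swap! (a b)
  ;; A and B arrive as unevaluated s-expressions; we splice them
  ;; into a freshly built LET form. GENSYM avoids capturing user variables.
  (let ((tmp (gensym)))
    `(let ((,tmp ,a))
       (setf ,a ,b)
       (setf ,b ,tmp))))

;; (let ((x 1) (y 2)) (swap! x y) (list x y)) => (2 1)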
My favorite example... In college some friends of mine were writing a compiler in Lisp. So all their data structures, including the parse tree, were Lisp s-expressions. When it was time to implement their code generation phase, they simply executed their parse tree.
Lisp has been developed for the manipulation of symbolic data of all kinds. It turns out that one can also see any Lisp program as symbolic data. So you can apply the manipulation capabilities of Lisp to itself.
If you want to compute with programs (to create new programs, to compile programs to machine code, to translate programs from Lisp to other languages), you have basically three choices:
use strings. This gets tedious, parsing and unparsing strings all the time.
use parse trees. Useful but gets complex.
use symbolic expressions like in Lisp. The programs are pre-parsed into familiar data structures (lists, symbols, strings, numbers, ...) and the manipulation routines are written using the language's ordinary functionality.
So what do you get with Lisp? A relatively simple way to write programs that manipulate other programs. From macros to compilers, there are many examples of this. You also get the ability to embed languages into Lisp easily.
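A tiny REPL sketch of what "pre-parsed into familiar data structures" buys you, using nothing but ordinary list operations:

(defvar *form* '(+ 1 2))          ; a program, stored as a plain list
(first *form*)                    ; => +   (take the program apart)
(eval (cons '* (rest *form*)))    ; => 2   (rebuild a variant and run it)
(eval *form*)                     ; => 3   (run the original)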
It allows you to write macros that simply transform one tree of lists to another. Simple (Scheme) example:
(define-syntax and
  (syntax-rules ()
    ((and) #t)
    ((and thing) thing)
    ((and thing rest ...) (if thing (and rest ...) #f))))
Notice how the macro invocations are simply matched to the (and), (and thing), and (and thing rest ...) clauses (depending on the arity of the invocation), and handled appropriately.
With other languages, macros would have to deal with some kind of internal AST of the code being transformed---there wouldn't otherwise be an easy way to "see" your code in a programmatic format---and that would increase the friction of writing macros.
In Lisp programs, macros are generally used pretty frequently, precisely because of the low friction of writing them.
Paul Graham writes:
For example, types seem to be an inexhaustible source of research papers, despite the fact that static typing seems to preclude true macros -- without which, in my opinion, no language is worth using.
What's the big deal with macros? I haven't spent a whole lot of time with them, but from the legacy C/C++ I've worked with they appear to be mostly used as a hack before templates/generics existed.
It's hard to imagine that
DECLARELIST(StrList, string);
StrList slist;
is somehow preferable to
List<String> slist;
Am I missing something?
Then there's the usage as a pseudo-function, like MAKEPOINTS:
POINTS MAKEPOINTS(
  DWORD dwValue
);
Why not define it as a function instead? Is this some optimization, where you avoid code duplication without having the added overhead of another stack frame?
Then there's also tricky control flow things involving GOTO, which seem to be of dubious value.
What's so great about macros? They're less type safe (in C and C++) (right?). Why won't Paul Graham program without them?
LISP macros are an entirely different beast. C/C++ macros can merely replace a piece of text with another piece of text using an extremely basic language. A LISP program, on the other hand, is (after "reading") a LISP data structure and can therefore be manipulated using the whole language.
With such macros, you could (given you're a really clever hacker) vastly extend the language, and everybody could use it relatively easily, since you did it with macros. Take for example the Common Lisp Object System. At its core, the language has nothing even remotely like objects; CLOS is implemented entirely in the language itself, including a relatively simple syntax for its use, by means of macros.
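You can see this for yourself at a REPL: DEFCLASS is an ordinary macro, and expanding it exposes the plain code underneath. The exact expansion varies by implementation (in MOP terms it eventually bottoms out in calls like ENSURE-CLASS):

(macroexpand-1 '(defclass point ()
                  ((x :initarg :x :accessor point-x))))
;; => implementation-specific code that registers the class at load time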
Of course macros are less necessary when the language has most things you'd ever want built in. OTOH, the LISP fans are of the opinion that a sufficiently simple language (LISP) with sufficiently powerful metaprogramming capabilities (macros) is better, since new concepts can be incorporated into the language without changing the spec or working implementations. But the most compelling example for macro usage is the DSL area. Ruby on Rails and others show every day how useful DSLs can be. Yes, Ruby doesn't have macros; it just exploits how far Ruby syntax can be bent. In other languages, or when even Ruby's syntax isn't flexible enough, you need macros or a full-blown parser/interpreter to implement a complex DSL.
Macros are really only good for two things in C/C++, and should generally be the tool of last resort (if you can accomplish something without using macros, do so).
Creating new syntactic structures or abstractions that do not exist in the language.
Eliminating duplication, especially between things that must be in sync with each other.
It's almost never right to use a macro in place of a function.
You also have to realize that LISP macros are not C/C++ macros.
I've heard that Lisp lets you redefine the language itself, and I have tried to research it, but there is no clear explanation anywhere. Does anyone have a simple example?
Lisp users refer to Lisp as the programmable programming language. It is used for symbolic computing - computing with symbols.
Macros are only one way to exploit the symbolic computing paradigm. The broader vision is that Lisp provides easy ways to describe symbolic expressions: mathematical terms, logic expressions, iteration statements, rules, constraint descriptions and more. Macros (transformations of Lisp source forms) are just one application of symbolic computing.
There are certain aspects to that: if you ask about 'redefining' the language, then strictly speaking 'redefine' would mean changing some existing language mechanism (syntax, semantics, pragmatics). But there is also extension, embedding, and removal of language features.
In the Lisp tradition there have been many attempts to provide these features. A Lisp dialect and a certain implementation may offer only a subset of them.
A few ways to redefine/change/extend functionality as provided by major Common Lisp implementations:
s-expression syntax. The syntax of s-expressions is not fixed. The reader (the function READ) uses so-called read tables to specify functions that will be executed when a character is read. One can modify and create read tables. This allows you, for example, to change the syntax of lists, symbols or other data objects. One can also introduce new syntax for new or existing data types (like hash tables). It is also possible to replace the s-expression syntax completely and use a different parsing mechanism. If the new parser returns Lisp forms, no change is needed in the interpreter or compiler. A typical example is a read macro that can read infix expressions; within such a read macro, infix expressions and precedence rules for operators are used. Read macros are different from ordinary macros: read macros work on the character level of the Lisp data syntax. (A small read-macro sketch follows this list.)
replacing functions. The top-level functions are bound to symbols. The user can change this binding. Most implementations have a mechanism to allow this even for many built-in functions. If you want to provide an alternative to the built-in function ROOM, you could replace its definition. Some implementations will raise an error and then offer the option to continue with the change. Sometimes a package needs to be unlocked first. This means that functions can in general be replaced with new definitions. There are limitations: one is that the compiler may inline functions in code, in which case one needs to recompile the code that uses the changed function to see an effect.
advising functions. Often one wants to add some behavior to functions. This is called 'advising' in the Lisp world. Many Common Lisp implementations will provide such a facility.
custom packages. Packages group the symbols in name spaces. The COMMON-LISP package is the home of all symbols that are part of the ANSI Common Lisp standard. The programmer can create new packages and import existing symbols. So you could use in your programs an EXTENDED-COMMON-LISP package that provides more or different facilities. Just by adding (IN-PACKAGE "EXTENDED-COMMON-LISP") you can start to develop using your own extended version of Common Lisp. Depending on the used namespace, the Lisp dialect you use may look slightly or even radically different. In Genera on the Lisp Machine there are several Lisp dialects side by side this way: ZetaLisp, CLtL1, ANSI Common Lisp and Symbolics Common Lisp.
CLOS and dynamic objects. The Common Lisp Object System comes with change built in. The Meta-Object Protocol extends these capabilities. CLOS itself can be extended/redefined in CLOS. You want different inheritance? Write a method. You want different ways to store instances? Write a method. Slots should carry more information? Provide a class for that. CLOS itself is designed such that it can implement a whole 'region' of different object-oriented programming languages. Typical examples are adding things like prototypes, integration with foreign object systems (like Objective-C), adding persistence, ...
Lisp forms. The interpretation of Lisp forms can be redefined with macros. A macro can parse the source code it encloses and change it. There are various ways to control the transformation process. Complex macros use a code walker, which understands the syntax of Lisp forms and can apply transformations. Macros can be trivial, but can also get very complex, like the LOOP or ITERATE macros. Other typical examples are macros for embedded SQL and embedded HTML generation. Macros can also be used to move computation to compile time. Since the compiler is itself a Lisp program, arbitrary computation can be done during compilation. For example, a Lisp macro could compute an optimized version of a formula if certain parameters are known during compilation.
Symbols. Common Lisp provides symbol macros. Symbol macros allow changing the meaning of symbols in source code. A typical example is this: (with-slots (foo) bar (+ foo 17)). Here the symbol FOO in the source enclosed by WITH-SLOTS will be replaced with a call (slot-value bar 'foo).
optimizations. With so-called compiler macros one can provide more efficient versions of some functionality. The compiler will use those compiler macros. This is an effective way for the user to program optimizations.
condition handling. Handle conditions that result from using the programming language in a certain way. Common Lisp provides an advanced way to handle errors, and the condition system can also be used to redefine language features. For example, one could handle undefined-function errors with a self-written autoload mechanism: instead of landing in the debugger when Lisp sees an undefined function, the error handler could try to autoload the function and retry the operation after loading the necessary code.
special variables. Inject variable bindings into existing code. Many Lisp dialects, like Common Lisp, provide special/dynamic variables. Their value is looked up at runtime on the stack. This allows enclosing code to add variable bindings that influence existing code without changing it. A typical example is a variable like *standard-output*: one can rebind the variable, and all output using this variable during the dynamic extent of the new binding goes to a new destination. Richard Stallman considered this so important that he made dynamic binding the default in Emacs Lisp (even though he knew about lexical binding in Scheme and Common Lisp).
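Here is the read-macro sketch promised above: teaching READ a new bracket syntax. This mutates the global *readtable*, so serious code would install it in a copy (via COPY-READTABLE); the bracket syntax itself is just an example:

;; Make [a b c] read as (list a b c).
(set-macro-character #\[
  (lambda (stream char)
    (declare (ignore char))
    (cons 'list (read-delimited-list #\] stream t))))

;; Make ] a terminating delimiter, like ).
(set-macro-character #\] (get-macro-character #\)))

;; Now [1 2 (+ 1 2)] reads as (list 1 2 (+ 1 2)) and evaluates to (1 2 3).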
Lisp has these and more facilities because it has been used to implement a lot of different languages and programming paradigms. A typical example is an embedded implementation of a logic language, say, Prolog. Lisp allows one to describe Prolog terms with s-expressions, and with a special compiler the Prolog terms can be compiled to Lisp code. Sometimes the usual Prolog syntax is needed; then a parser parses the typical Prolog terms into Lisp forms, which are then compiled. Other examples of embedded languages are rule-based languages, mathematical expressions, SQL terms, inline Lisp assembler, HTML, XML and many more.
I'm going to pipe in that Scheme is different from Common Lisp when it comes to defining new syntax. It allows you to define templates using define-syntax which get applied to your source code wherever they are used. They look just like functions, only they run at compile time and transform the AST.
Here's an example of how let can be defined in terms of lambda. The line with let is the pattern to be matched, and the line with lambda is the resulting code template.
(define-syntax let
  (syntax-rules ()
    [(let ([var expr] ...) body1 body2 ...)
     ((lambda (var ...) body1 body2 ...) expr ...)]))
Note that this is NOTHING like textual substitution. You can actually redefine lambda and the above definition for let will still work, because it is using the definition of lambda in the environment where let was defined. Basically, it's powerful like macros but clean like functions.
Macros are the usual reason for saying this. The idea is that because code is just a data structure (a tree, more or less), you can write programs to generate this data structure. Everything you know about writing programs that generate and manipulate data structures, therefore, adds to your ability to code expressively.
Macros aren't quite a complete redefinition of the language, at least as far as I know (I'm actually a Schemer; I could be wrong), because there is a restriction. A macro can only take a single subtree of your code, and generate a single subtree to replace it. Therefore you can't write whole-program-transforming macros, as cool as that would be.
However, macros as they stand can still do a whole lot of stuff - definitely more than any other language will let you do. And if you're using static compilation, it wouldn't be hard at all to do a whole-program transformation, so the restriction matters less in that case.
A reference to Structure and Interpretation of Computer Programs, chapters 4-5, is what I was missing from the answers.
These chapters guide you through building a Lisp evaluator in Lisp. I like the read because not only does it show how to redefine Lisp in a new evaluator, it also teaches you about the specification of the Lisp language.
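For a taste of what those chapters build up to, here is a heavily stripped-down sketch of a metacircular evaluator, written in Common Lisp rather than SICP's Scheme. The names TINY-EVAL and TINY-APPLY are made up, and only a tiny subset of forms is handled:

(defun tiny-eval (form env)
  (cond ((or (numberp form) (stringp form)) form)       ; self-evaluating data
        ((symbolp form) (cdr (assoc form env)))         ; variable lookup in an alist
        ((eq (first form) 'quote) (second form))
        ((eq (first form) 'if)
         (if (tiny-eval (second form) env)
             (tiny-eval (third form) env)
             (tiny-eval (fourth form) env)))
        ((eq (first form) 'lambda)                      ; closures capture ENV
         (list :closure (second form) (third form) env))
        (t (tiny-apply (tiny-eval (first form) env)     ; function application
                       (mapcar (lambda (a) (tiny-eval a env))
                               (rest form))))))

(defun tiny-apply (closure args)
  (destructuring-bind (tag params body env) closure
    (declare (ignore tag))
    (tiny-eval body (append (mapcar #'cons params args) env))))

;; (tiny-eval '((lambda (x) (if x 'yes 'no)) 1) '()) => YES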
This answer is specifically concerning Common Lisp (CL hereafter), although parts of the answer may be applicable to other languages in the lisp family.
Since CL uses S-expressions and (mostly) looks like a sequence of function applications, there's no obvious difference between built-ins and user code. The main difference is that the "things the language provides" are available in a specific package within the coding environment.
With a bit of care, it is not hard to code replacements and use those instead.
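A small sketch of what "coding a replacement" can look like, using the package system. The package name and the replacement semantics here are made up for illustration:

(defpackage :extended-cl
  (:use :common-lisp)
  (:shadow #:length))          ; our LENGTH will shadow CL:LENGTH

(in-package :extended-cl)

(defun length (x)
  "Like CL:LENGTH, but also counts hash-table entries."
  (if (hash-table-p x)
      (hash-table-count x)
      (cl:length x)))          ; delegate everything else to the original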
Now, the "normal" reader (the part that reads source code and turns it into internal notation) expects the source code to be in a rather specific format (parenthesised S-expressions) but as the reader is driven by something called "read-tables" and these can be created and modified by the developer, it is also possible to change how the source code is supposed to look.
These two things should at least provide some rationale as to why Common Lisp can be considered a re-programmable programming language. I don't have a simple example at hand, but I do have a partial implementation of a translation of Common Lisp to Swedish (created for April 1st, a few years back).
From the outside, looking in...
I always thought it was because Lisp provided, at its core, such basic, atomic logical operators that any logical process can be built (and has been built and provided as toolsets and add-ins) from the basic components.
It is not so much that it can redefine itself as that its basic definition is so malleable that it can take any form and that no form is assumed/presumed into the structure.
As a metaphor, if you only have organic compounds you do organic chemistry, if you only have metal oxides you do metallurgy but if you have only elements you can do everything but you have extra initial steps to complete....most of which others have already done for you....
I think.....
Cool example at http://www.cs.colorado.edu/~ralex/papers/PDF/X-expressions.pdf
reader macros define X-expressions to coexist with S-expressions, e.g.,
? (cx <circle cx="62" cy="135" r="20"/>)
62
plain vanilla Common Lisp at http://www.AgentSheets.com/lisp/XMLisp/XMLisp.lisp
...
(eval-when (:compile-toplevel :load-toplevel :execute)
  (when (and (not (boundp '*Non-XMLISP-Readtable*)) (get-macro-character #\<))
    (warn "~%XMLisp: The current *readtable* already contains a #/< reader function: ~A" (get-macro-character #\<))))
... of course the XML parser is not so simple, but hooking it into the Lisp reader is.
I was working with a Lisp dialect while also learning some Haskell. They share some similarities, but the main difference seems to be that in Common Lisp you don't have to define a type for each function, argument, etc., whereas in Haskell you do. Also, Haskell is mostly a compiled language: run the compiler to generate the executable.
My question is this: are there different applications or uses where a language like Haskell may make more sense than a more dynamic language like Common Lisp? For example, it seems that Lisp could be used for more bottom-up programming, like building websites or GUIs, whereas Haskell could be used where compile-time checks are more needed, like in building TCP/IP servers or code parsers.
Popular Lisp applications:
Emacs
Popular Haskell applications:
PUGS
Darcs
Do you agree, and are there any studies on this?
Programming languages are tools for thinking with. You can express any program in any language, if you're willing to work hard enough. The chief value provided by one programming language over another is how much support it gives you for thinking about problems in different ways.
For example, Haskell is a language that emphasizes thinking about your problem in terms of types. If there's a convenient way to express your problem in terms of Haskell's data types, you'll probably find that it's a convenient language to write your program in.
Common Lisp's strengths (which are numerous) lie in its dynamic nature and its homoiconicity (that is, Lisp programs are very easy to represent and manipulate as Lisp data) -- Lisp is a "programmable programming language". If your program is most easily expressed in a new domain-specific language, for example, Lisp makes it very easy to do that. Lisp (and other dynamic languages) are a good fit if your problem description deals with data whose type is poorly specified or might change as development progresses.
Language choice is often as much an aesthetic decision as anything. If your project requirements don't limit you to specific languages for compatibility, dependency, or performance reasons, you might as well pick the one you feel the best about.
You're opening multiple cans of very wriggly worms. First off, the whole strongly vs weakly typed languages can. Second, the functional vs imperative language can.
(Actually, I'm curious: by "lisp dialect" do you mean Clojure by any chance? Because it's largely functional and closer in some ways to Haskell.)
Okay, so. First off, you can write pretty much any program in pretty much any normal language, with more or less effort. The purported advantage to strong typing is that a large class of errors can be detected at compile time. On the other hand, less typeful languages can be easier to code in. Common Lisp is interesting because it's a dynamic language with the option of declaring and using stronger types, which gives the CL compiler hints on how to optimize. (Oh, and real Common Lisp is usually implemented with a compiler, giving you the option of compiling or sticking with interpreted code.)
There are a number of studies about comparing untyped, weakly typed, and strongly typed languages. These studies invariably either say one of them is better, or say there's no observable difference. There is, however, little agreement among the studies.
The biggest area in which there may be some clear advantage is in dealing with complicated specifications for mathematical problems. In those cases (cryptographic algorithms are one example) a functional language like Haskell has advantages because it is easier to verify the correspondence between the Haskell code and the underlying algorithm.
I come mostly from a Common Lisp perspective, and as far as I can see, Common Lisp is suited for any application.
Yes, the default is dynamic typing (i.e. type detection at runtime), but you can declare types anyway for optimization (as a side note for other readers: CL is strongly typed; don't confuse weak/strong with static/dynamic!).
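For readers who haven't seen them, a sketch of what such optional declarations look like. DOT-PRODUCT is a made-up example; the declarations are standard CL, and how much speed they buy depends on the compiler:

(defun dot-product (a b)
  (declare (type (simple-array double-float (*)) a b)
           (optimize (speed 3) (safety 1)))
  (let ((sum 0d0))
    (declare (type double-float sum))
    (dotimes (i (length a) sum)            ; returns SUM when done
      (incf sum (* (aref a i) (aref b i))))))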
I could imagine that Haskell could be a bit better suited as a replacement for Ada in the avionics sector, since it forces at least all type checks at compile time.
I do not see why CL should be any less useful than Haskell for TCP/IP servers or code parsers -- rather the opposite; but my contacts with Haskell have been brief so far.
Haskell is a pure functional language. While it does allow imperative constructs (using monads), it generally forces the programmer to think about the problem in a rather different way, using a more mathematically oriented approach. You can't reassign another value to a variable, for example.
It is claimed that this reduces the probability of making some types of mistakes. Moreover, programs written in Haskell tend to be shorter and more concise than those written in typical programming languages. Haskell also makes heavy use of non-strict, lazy evaluation, which could theoretically allow the compiler to make optimizations not otherwise possible (along with the no-side-effects paradigm).
Since you asked about it, I believe Haskell's typing system is quite nice and useful. Not only does it catch common errors, it can also make code more concise (!) and can effectively replace object-oriented constructs from common OO languages.
Some Haskell development kits, like GHC, also feature interactive environments.
The best use for dynamic typing that I've found is when you depend on things you have no control over, so they might as well be accessed dynamically. For example, when getting information from an XML document, we could do something like this:
var volume = parseXML("mydoc.xml").speaker.volume()
Not using duck typing would lead to something like this:
var volume = parseXML("mydoc.xml").getAttrib["speaker"].getAttrib["volume"].ToString()
The benefit of Haskell, on the other hand, is safety. You can, for example, use types to make sure that degrees Fahrenheit and Celsius are never mixed unintentionally. Besides that, I find that statically typed languages have better IDEs.