I'm a Lisp newbie, and I need a hash table class.

I'm kind of new to lisp, and after coming from languages like C, Java, and Python, where there is a well defined set of standard libraries, I'm a little lost in the sea of implementations and libraries out there.
I'm looking for a few nice data structures to use as primitives, such as RB trees and dictionaries.

Common Lisp has a specification: CL HyperSpec.
Hash tables are part of that.

Common Lisp has some built-in data structures, like singly-linked lists (also used for the language itself), arrays, and hash tables. There are lots of data structure libraries available from Quicklisp, e.g. trees, spatial-trees, bk-tree. Look at CLiki's data structure directory for some directions.
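For the hash table case in particular, no library is needed. A minimal sketch of the built-in API (all names are standard Common Lisp; *ages* is just an illustrative variable):

(defvar *ages* (make-hash-table :test #'equal)) ; EQUAL so string keys compare by contents

(setf (gethash "alice" *ages*) 30)   ; insert or update
(gethash "alice" *ages*)             ; => 30, T (value and found-p)
(gethash "bob" *ages* :not-found)    ; third argument is the default => :NOT-FOUND
(remhash "alice" *ages*)             ; delete an entry
(hash-table-count *ages*)            ; => 0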

Common Lisp the Language, 2nd ed., specifically chapter 16: Hash Tables

Related

Discovering the "Core" Entities and Macros of Common Lisp

While reading Peter Seibel's "Practical Common Lisp", I learned that aside from the core parts of the language like list processing and evaluating, there are macros like loop, do, etc that were written using those core constructs.
My question is two-fold. First is what exactly is the "core" of Lisp? What is the bare minimum from which other things can be re-created, if needed? The second part is where can one look at the code of the macros which come as part of Common Lisp, but were actually written in Lisp? As a side question, when one writes a Lisp implementation, in what language does he do it?
what exactly is the "core" of Lisp? What is the bare minimum from which other things can be re-created, if needed?
The minimum set of syntactic operators was called "special forms" in CLtL; the term was renamed to "special operators" in ANSI CL. CLtL defines 24 of them, well explained in its "Special Forms" section. In ANSI CL there are 25.
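For reference, the 25 special operators of ANSI CL are: block, catch, eval-when, flet, function, go, if, labels, let, let*, load-time-value, locally, macrolet, multiple-value-call, multiple-value-prog1, progn, progv, quote, return-from, setq, symbol-macrolet, tagbody, the, throw, and unwind-protect.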
where can one look at the code of the macros which come as part of Common Lisp, but were actually written in Lisp?
Many Common Lisp implementations are free software (list); you can look at their source code -- for example, SBCL's or GNU CLISP's.
when one writes a Lisp implementation, in what language does he do it?
Usually a Lisp implementation consists of
a lower-level part, written in a system programming language. This part includes the implementation of the function call mechanism and of the runtime part of the 24 special forms. And
a higher-level part, for which Lisp itself is used because it would be too tedious to write everything in the system programming language. This usually includes the macros and the compiler.
The choice of the system programming language depends. For implementations that are built on top of the Java VM, for example, the natural choice is Java. For implementations that include their own memory management, it is often C, or some Lisp extension with semantics similar to C's (i.e. where you have fixed-width integer types, explicit pointers, etc.).
First is what exactly is the "core" of Lisp? What is the bare minimum from which other things can be re-created, if needed?
Most Lisps have a core of primitive constructs, which is usually written in C (or maybe assembly). The usual reason for choosing those languages is performance. The bare minimum from which other things can be re-created depends on how bare-minimum you want to go. That is to say, you don't need much to be Turing-complete. You really only need lambdas for your language to have a bare minimum, from which other things can be created. Though, typically, people also include defmacro, cond, defun, etc. Those things aren't strictly necessary, but are probably what you mean by "bare minimum" and what people usually include as primitive language constructs.
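To make "re-created from the core" concrete, here is a toy sketch in Common Lisp. MY-WHEN is a made-up name mimicking the standard WHEN; it is a derived construct rebuilt from the more primitive IF and PROGN:

(defmacro my-when (test &body body)
  `(if ,test (progn ,@body) nil))

(my-when (> 3 2)
  (print "yes"))
;; macroexpands to (IF (> 3 2) (PROGN (PRINT "yes")) NIL)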
The second part is where can one look at the code of the macros which come as part of Common Lisp, but were actually written in Lisp?
Typically, you look in the Lisp sources of the language. Sometimes, though, your macro is not a genuine macro but a primitive language construct. For such things, you may also need to look in the C sources to see how these really primitive things are implemented.
Of course, if your Lisp implementation is not open-source, you need to disassemble its binary files and look at them piece-by-piece in order to understand how primitives are implemented.
As a side question, when one writes a Lisp implementation, in what language does he do it?
As I said above, C is a common choice, and assembly used to be more common. Though, there are Lisps written in high-level languages like Ruby, Python, Haskell, and even Lisp itself. The trade-off here is performance vs. readability and comprehensibility.
If you want a more-or-less canonical example of a Lisp to look at which is totally open-source, check out Emacs' source code. Of course, this isn't Common Lisp, although there is a cl package in the Emacs core which implements a fairly large subset of Common Lisp.

In Lisp, code is data. What benefit does that provide?

In Lisp, any program's code is actually a valid data structure. For example, this adds one and two together, but it's also a list of three items.
(+ 1 2)
What benefit does that provide? What does that enable you to do that's impossible and/or less elegant in other languages?
To make things a little clearer with respect to code representation, consider that in every language code is data: all you need is strings. (And perhaps a few file operations.) Contemplating how that helps you mimic the Lisp benefits of having a macro system is a good path to enlightenment -- even better if you try to implement such a macro system. You'll run into the advantages of having a structured representation vs the flatness of strings, the need to run the transformations first and define "syntactic hooks" to tell you where to apply them, and so on.
But the main thing that you'll see in all of this is that macros are essentially a convenient facility for compiler hooks -- ones that are hooked on newly created keywords. As such, the only thing that is really needed is some way to have user code interact with compiler code. Flat strings are one way to do it, but they provide so little information that the macro writer is left with the task of implementing a parser from scratch. On the other hand, you could expose some internal compiler structure like the pre-parsed AST trees, but those tend to expose too much information to be convenient and it means that the compiler needs to somehow be able to parse the new syntactic extension that you intend to implement. S-expressions are a nice solution to the latter: the compiler can parse anything since the syntax is uniform. They're also a solution to the former, since they're simple structures with rich support by the language for taking them apart and re-combining them in new ways.
But of course that's not the end of the story. For example, it's interesting to compare plain symbolic macros as in CL and hygienic macros as in Scheme implementations: the latter are usually implemented by adding more information to the represented data, which means that you can do more with those macro systems. (You can also do more with CL macros, since the extra information is available there too, but instead of being part of the syntax representation, it's passed as an extra environment argument to macros.)
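As a concrete taste of "compiler hooks on newly created keywords", here is a minimal Common Lisp sketch. SWAP! is a hypothetical name (the standard ROTATEF already does this); the point is that a user-defined operator rewrites source structure before compilation:

(defmacro swap! (a b)
  ;; Expand (swap! x y) into code that exchanges the values of the two places.
  (let ((tmp (gensym)))
    `(let ((,tmp ,a))
       (setf ,a ,b)
       (setf ,b ,tmp))))

;; (let ((x 1) (y 2)) (swap! x y) (list x y)) => (2 1)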
My favorite example... In college, some friends of mine were writing a compiler in Lisp, so all their data structures, including the parse tree, were Lisp s-expressions. When it was time to implement their code generation phase, they simply executed their parse tree.
Lisp has been developed for the manipulation of symbolic data of all kinds. It turns out that one can also see any Lisp program as symbolic data. So you can apply the manipulation capabilities of Lisp to itself.
If you want to compute with programs (to create new programs, to compile programs to machine code, to translate programs from Lisp to other languages), you have basically three choices:
use strings. This gets tedious, parsing and unparsing strings all the time.
use parse trees. Useful but gets complex.
use symbolic expressions like in Lisp. The programs are pre-parsed into familiar data structures (lists, symbols, strings, numbers, ...) and the manipulation routines are written using the language's usual functionality.
So what do you get with Lisp? A relatively simple way to write programs that manipulate other programs. From macros to compilers, there are many examples of this. You also get the ability to embed languages into Lisp easily.
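A tiny illustration of the third choice, using nothing but standard Common Lisp (the program is an ordinary list you can inspect, rewrite, and run):

(defvar *form* '(+ 1 2))        ; quoted, so this is data, not a call
(first *form*)                  ; => + (just a symbol in a list)
(eval *form*)                   ; => 3 (and now it's a program again)
(eval (cons '* (rest *form*)))  ; rewrite + into * and run the result: => 2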
It allows you to write macros that simply transform one tree of lists to another. Simple (Scheme) example:
(define-syntax and
  (syntax-rules ()
    ((and) #t)
    ((and thing) thing)
    ((and thing rest ...) (if thing (and rest ...) #f))))
Notice how the macro invocations are simply matched to the (and), (and thing), and (and thing rest ...) clauses (depending on the arity of the invocation), and handled appropriately.
With other languages, macros would have to deal with some kind of internal AST of the code being transformed -- there wouldn't otherwise be an easy way to "see" your code in a programmatic format -- and that would increase the friction of writing macros.
In Lisp programs, macros are generally used pretty frequently, precisely because of the low friction of writing them.

What are the actual differences between Scheme and Common Lisp? (Or any other two dialects of Lisp)

Note: I am not asking which to learn, which is better, or anything like that.
I picked up the free version of SICP because I felt it would be nice to read (I've heard good stuff about it, and I'm interested in that sort of side of programming).
I know Scheme is a dialect of Lisp and I wondered: what is the actual difference is between Scheme and, say, Common Lisp?
There seems to be a lot of "CL has a larger stdlib... Scheme is not good for real-world programming..." but nothing actually saying "this is because CL is this/has this".
This is a bit of a tricky question, since the differences are both technical and (more importantly, in my opinion) cultural. An answer can only ever provide an imprecise, subjective view. This is what I'm going to provide here. For some raw technical details, see the Scheme Wiki.
Scheme is a language built on the principle of providing an elegant, consistent, well thought-through base language substrate which both practical and academic application languages can be built upon.
Rarely will you find someone writing an application in pure R5RS (or R6RS) Scheme, and because of the minimalistic standard, most code is not portable across Scheme implementations. This means that you will have to choose your Scheme implementation carefully, should you want to write some kind of end-user application, because the choice will largely determine what libraries are available to you. On the other hand, the relative freedom in designing the actual application language means that Scheme implementations often provide features unheard of elsewhere; PLT Racket, for example, enables you to make use of static typing and provides a very language-aware IDE.
Interoperability beyond the base language is provided through the community-driven SRFI process, but availability of any given SRFI varies by implementation.
Most Scheme dialects and libraries focus on functional programming idioms like recursion instead of iteration. There are various object systems you can load as libraries when you want to do OOP, but integration with existing code heavily depends on the Scheme dialect and its surrounding culture (Chicken Scheme seems to be more object-oriented than Racket, for instance).
Interactive programming is another point on which Scheme subcommunities differ. MIT Scheme is known for strong interactivity support, while PLT Racket feels much more static. In any case, interactive programming does not seem to be a central concern to most Scheme subcommunities, and I have yet to see a programming environment as interactive as most Common Lisps'.
Common Lisp is a battle-worn language designed for practical programming. It is full of ugly warts and compatibility hacks -- quite the opposite of Scheme's elegant minimalism. But it is also much more featureful when taken for itself.
Common Lisp has bred a relatively large ecosystem of portable libraries. You can usually switch implementations at any time, even after application deployment, without too much trouble. Overall, Common Lisp is much more uniform than Scheme, and more radical language experiments, if done at all, are usually embedded as a portable library rather than defining a whole new language dialect. Because of this, language extensions tend to be more conservative, but also more combinable (and often optional).
Universally useful language extensions like foreign-function interfaces are not developed through formal means but rely on quasi-standard libraries available on all major Common Lisp implementations.
The language idioms are a wild mixture of functional, imperative, and object-oriented approaches, and in general, Common Lisp feels more like an imperative language than a functional one. It is also extremely dynamic, arguably more so than any of the popular dynamic scripting languages (class redefinition applies to existing instances, for example, and the condition handling system has interactivity built right in), and interactive, exploratory programming is an important part of "the Common Lisp way." This is also reflected in the programming environments available for Common Lisp, practically all of which offer some sort of direct interaction with the running Lisp compiler.
Common Lisp features a built-in object system (CLOS), a condition handling system significantly more powerful than mere exception handling, run-time patchability, and various kinds of built-in data structures and utilities (including the notorious LOOP macro, an iteration sublanguage much too ugly for Scheme but much too useful not to mention, as well as a printf-like formatting mechanism with GOTO support in format strings).
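Two small tastes of the built-ins just mentioned (both standard Common Lisp):

(loop for i from 1 to 5
      when (evenp i) collect i)       ; => (2 4)

(format nil "~{~A~^, ~}" '(a b c))    ; => "A, B, C"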
Both because of the image-based, interactive development, and because of the larger language, Lisp implementations are usually less portable across operating systems than Scheme implementations are. Getting a Common Lisp to run on an embedded device is not for the faint of heart, for example. Similarly to the Java Virtual Machine, you also tend to encounter problems on machines where virtual memory is restricted (like OpenVZ-based virtual servers). Scheme implementations, on the other hand, tend to be more compact and portable. The increasing quality of the ECL implementation has mitigated this point somewhat, though its essence is still true.
If you care for commercial support, there are a couple of companies that provide their own Common Lisp implementations including graphical GUI builders, specialized database systems, et cetera.
Summing up, Scheme is a more elegantly designed language. It is primarily a functional language with some dynamic features. Its implementations represent various incompatible dialects with distinctive features. Common Lisp is a fully-fledged, highly dynamic, multi-paradigm language with various ugly but pragmatic features, whose implementations are largely compatible with one another. Scheme dialects tend to be more static and less interactive than Common Lisp; Common Lisp implementations tend to be heavier and trickier to install.
Whichever language you choose, I wish you a lot of fun! :)
Some basic practical differences:
Common Lisp has separate namespaces for variables and functions, whereas in Scheme there is just one: functions are values, and defining a function with a certain name just defines a variable bound to the lambda. As a result, in Scheme you can use a function name as a variable, store it or pass it to other functions, and then call through that variable as if it were a function. In Common Lisp, by contrast, you need to explicitly convert a function into a value using (function ...), and explicitly call a function stored in a value using (funcall ...)
In Common Lisp, nil (the empty list) is considered false (e.g. in if) and is the only false value. In Scheme, the empty list is considered true, and the distinct value #f is the only false value. Both points are sketched in code below.
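A minimal sketch of both points in Common Lisp:

(defun double (x) (* 2 x))   ; DOUBLE lives in the function namespace

(let ((f #'double))          ; #'double is shorthand for (function double)
  (funcall f 21))            ; => 42; calling through a variable needs FUNCALL

(if '() :then :else)         ; => :ELSE, because NIL (the empty list) is false
;; In Scheme, (if '() 'then 'else) evaluates to then, since only #f is false.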
That's a hard question to answer impartially, especially because many of the LISP folks would classify Scheme as a LISP.
Josh Bloch (and this analogy may not be his invention) describes choosing a language as being akin to choosing a local pub. In that light, then:
The "Scheme" pub has a lot of programming-languages researchers in it. These people spend a lot of attention on the meaning of the language, on keeping it well-defined and simple, and on discussing innovative new features. Everyone's got their own version of the language, designed to allow them to explore their own particular corner of programming languages. The Scheme people really like the parenthesized syntax that they took from LISP; it's flexible and lightweight and uniform and removes many barriers to language extension.
The "LISP" pub? Well... I shouldn't comment; I haven't spent enough time there :).
Scheme:
originally very few specifications (the newer R7RS seems to be heavier)
due to the easy syntax, Scheme can be learned quickly
implementations provide additional functions, but names can differ between implementations
Common Lisp:
many functions are defined by the bigger specification
different namespaces for functions and variables (Lisp-2)
Those are some points; surely there are many more that I don't remember right now.

How does Lisp let you redefine the language itself?

I've heard that Lisp lets you redefine the language itself, and I have tried to research it, but there is no clear explanation anywhere. Does anyone have a simple example?
Lisp users refer to Lisp as the programmable programming language. It is used for symbolic computing - computing with symbols.
Macros are only one way to exploit the symbolic computing paradigm. The broader vision is that Lisp provides easy ways to describe symbolic expressions: mathematical terms, logic expressions, iteration statements, rules, constraint descriptions and more. Macros (transformations of Lisp source forms) are just one application of symbolic computing.
There are certain aspects to that: if you ask about 'redefining' the language, then strictly speaking 'redefine' would mean redefining some existing language mechanism (syntax, semantics, pragmatics). But there is also extending, embedding, and removing language features.
In the Lisp tradition there have been many attempts to provide these features. A Lisp dialect and a certain implementation may offer only a subset of them.
A few ways to redefine/change/extend functionality as provided by major Common Lisp implementations:
s-expression syntax. The syntax of s-expressions is not fixed. The reader (the function READ) uses so-called read tables to specify functions that will be executed when a character is read. One can modify and create read tables. This allows you, for example, to change the syntax of lists, symbols, or other data objects. One can also introduce new syntax for new or existing data types (like hash tables). It is also possible to replace the s-expression syntax completely and use a different parsing mechanism. If the new parser returns Lisp forms, no change is needed in the interpreter or compiler. A typical example is a read macro that reads infix expressions, applying precedence rules for the operators. Read macros are different from ordinary macros: read macros work on the character level of the Lisp data syntax (see the sketch after this list).
replacing functions. Top-level functions are bound to symbols, and the user can change this binding. Most implementations have a mechanism to allow this even for many built-in functions. If you want to provide an alternative to the built-in function ROOM, you could replace its definition. Some implementations will raise an error and then offer the option to continue with the change; sometimes a package needs to be unlocked first. This means that functions in general can be replaced with new definitions. There are limitations: one is that the compiler may inline functions in code, in which case one needs to recompile the code that uses the changed function in order to see an effect.
advising functions. Often one wants to add some behavior to functions. This is called 'advising' in the Lisp world. Many Common Lisp implementations will provide such a facility.
custom packages. Packages group symbols into namespaces. The COMMON-LISP package is the home of all symbols that are part of the ANSI Common Lisp standard. The programmer can create new packages and import existing symbols. So you could use in your programs an EXTENDED-COMMON-LISP package that provides more or different facilities. Just by adding (IN-PACKAGE "EXTENDED-COMMON-LISP") you can start to develop using your own extended version of Common Lisp. Depending on the package used, the Lisp dialect you use may look slightly or even radically different. In Genera on the Lisp Machine there are several Lisp dialects side by side this way: ZetaLisp, CLtL1, ANSI Common Lisp and Symbolics Common Lisp.
CLOS and dynamic objects. The Common Lisp Object System comes with change built-in. The Meta-Object Protocol extends these capabilities. CLOS itself can be extended/redefined in CLOS. You want different inheritance? Write a method. You want different ways to store instances? Write a method. Slots should have more information? Provide a class for that. CLOS itself is designed such that it is able to implement a whole 'region' of different object-oriented programming languages. Typical examples are adding things like prototypes, integration with foreign object systems (like Objective-C), adding persistence, ...
Lisp forms. The interpretation of Lisp forms can be redefined with macros. A macro can parse the source code it encloses and change it. There are various ways to control the transformation process. Complex macros use a code walker, which understands the syntax of Lisp forms and can apply transformations. Macros can be trivial, but can also get very complex like the LOOP or ITERATE macros. Other typical examples are macros for embedded SQL and embedded HTML generation. Macros can also be used to move computation to compile time. Since the compiler is itself a Lisp program, arbitrary computation can be done during compilation. For example a Lisp macro could compute an optimized version of a formula if certain parameters are known during compilation.
Symbols. Common Lisp provides symbol macros. Symbol macros allow you to change the meaning of symbols in source code. A typical example is this: (with-slots (foo) bar (+ foo 17)) Here the symbol FOO in the source enclosed by WITH-SLOTS will be replaced with a call (slot-value bar 'foo).
optimizations. With so-called compiler macros, one can provide more efficient versions of some functionality. The compiler will use those compiler macros. This is an effective way for the user to program optimizations.
Condition Handling - handle conditions that result from using the programming language in a certain way. Common Lisp provides an advanced way to handle errors. The condition system can also be used to redefine language features. For example one could handle undefined function errors with a self-written autoload mechanism. Instead of landing in the debugger when an undefined function is seen by Lisp, the error handler could try to autoload the function and retry the operation after loading the necessary code.
Special variables - inject variable bindings into existing code. Many Lisp dialects, like Common Lisp, provide special/dynamic variables. Their value is looked up at runtime on the stack. This allows enclosing code to add variable bindings that influence existing code without changing it. A typical example is a variable like *standard-output*. One can rebind the variable, and all output using this variable during the dynamic scope of the new binding will go in a new direction. Richard Stallman considered this so important that he made it the default in Emacs Lisp (even though he knew about lexical binding in Scheme and Common Lisp).
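A minimal sketch of two of these facilities. The bracket syntax is an arbitrary choice for illustration, and real code should install it in a copy of the readtable rather than mutating the current one:

;; Reader macro: make [a b c] read as (LIST a b c).
(defun read-bracket-list (stream char)
  (declare (ignore char))
  (cons 'list (read-delimited-list #\] stream t)))

(set-macro-character #\[ #'read-bracket-list)
(set-macro-character #\] (get-macro-character #\))) ; "]" now terminates like ")"

;; [1 2 (+ 1 2)] now reads as (LIST 1 2 (+ 1 2)) and evaluates to (1 2 3).

;; Symbol macro: make the symbol TODAY stand for a function call wherever it appears.
(define-symbol-macro today (get-universal-time))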
Lisp has these and more facilities because it has been used to implement a lot of different languages and programming paradigms. A typical example is an embedded implementation of a logic language, say, Prolog. Lisp makes it possible to describe Prolog terms with s-expressions, and with a special compiler the Prolog terms can be compiled to Lisp code. Sometimes the usual Prolog syntax is needed; then a parser will parse the typical Prolog terms into Lisp forms, which will then be compiled. Other examples of embedded languages are rule-based languages, mathematical expressions, SQL terms, inline Lisp assembler, HTML, XML and many more.
I'm going to pipe in that Scheme is different from Common Lisp when it comes to defining new syntax. It allows you to define templates using define-syntax which get applied to your source code wherever they are used. They look just like functions, only they run at compile time and transform the AST.
Here's an example of how let can be defined in terms of lambda. The line with let is the pattern to be matched, and the line with lambda is the resulting code template.
(define-syntax let
  (syntax-rules ()
    [(let ([var expr] ...) body1 body2 ...)
     ((lambda (var ...) body1 body2 ...) expr ...)]))
Note that this is NOTHING like textual substitution. You can actually redefine lambda and the above definition for let will still work, because it is using the definition of lambda in the environment where let was defined. Basically, it's powerful like macros but clean like functions.
Macros are the usual reason for saying this. The idea is that because code is just a data structure (a tree, more or less), you can write programs to generate this data structure. Everything you know about writing programs that generate and manipulate data structures, therefore, adds to your ability to code expressively.
Macros aren't quite a complete redefinition of the language, at least as far as I know (I'm actually a Schemer; I could be wrong), because there is a restriction. A macro can only take a single subtree of your code, and generate a single subtree to replace it. Therefore you can't write whole-program-transforming macros, as cool as that would be.
However, macros as they stand can still do a whole lot of stuff - definitely more than any other language will let you do. And if you're using static compilation, it wouldn't be hard at all to do a whole-program transformation, so the restriction is less of a big deal then.
A reference to Structure and Interpretation of Computer Programs, chapters 4-5, is what I was missing from the answers (link).
These chapters guide you in building a Lisp evaluator in Lisp. I like the read because not only does it show how to redefine Lisp in a new evaluator, it also lets you learn about the specification of the Lisp programming language.
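In that spirit, an intentionally tiny (and very incomplete) evaluator sketch in Common Lisp, just to show the shape of the exercise; TOY-EVAL handles only numbers, symbols, QUOTE, IF, and function calls, with an alist as the environment:

(defun toy-eval (form env)
  (cond ((numberp form) form)                      ; numbers are self-evaluating
        ((symbolp form) (cdr (assoc form env)))    ; variables are looked up in ENV
        ((eq (first form) 'quote) (second form))
        ((eq (first form) 'if)
         (if (toy-eval (second form) env)
             (toy-eval (third form) env)
             (toy-eval (fourth form) env)))
        (t (apply (toy-eval (first form) env)      ; evaluate the operator...
                  (mapcar (lambda (x) (toy-eval x env))
                          (rest form))))))         ; ...and the arguments

;; (toy-eval '(if (< 1 2) (+ 1 2) 0) (list (cons '< #'<) (cons '+ #'+))) => 3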
This answer is specifically concerning Common Lisp (CL hereafter), although parts of the answer may be applicable to other languages in the lisp family.
Since CL uses S-expressions and (mostly) looks like a sequence of function applications, there's no obvious difference between built-ins and user code. The main difference is that the "things the language provides" are available in a specific package within the coding environment.
With a bit of care, it is not hard to code replacements and use those instead.
Now, the "normal" reader (the part that reads source code and turns it into internal notation) expects the source code to be in a rather specific format (parenthesised S-expressions) but as the reader is driven by something called "read-tables" and these can be created and modified by the developer, it is also possible to change how the source code is supposed to look.
These two things should at least provide some rationale as to why Common Lisp can be considered a re-programmable programming language. I don't have a simple example at hand, but I do have a partial implementation of a translation of Common Lisp to Swedish (created for April 1st, a few years back).
From the outside, looking in...
I always thought it was because Lisp provided, at its core, such basic, atomic logical operators that any logical process can be built (and has been built and provided as toolsets and add-ins) from the basic components.
It is not so much that it can redefine itself as that its basic definition is so malleable that it can take any form and that no form is assumed/presumed into the structure.
As a metaphor, if you only have organic compounds you do organic chemistry, if you only have metal oxides you do metallurgy but if you have only elements you can do everything but you have extra initial steps to complete....most of which others have already done for you....
I think.....
Cool example at http://www.cs.colorado.edu/~ralex/papers/PDF/X-expressions.pdf
reader macros define X-expressions to coexist with S-expressions, e.g.,
? (cx <circle cx="62" cy="135" r="20"/>)
62
plain vanilla Common Lisp at http://www.AgentSheets.com/lisp/XMLisp/XMLisp.lisp
...
(eval-when (:compile-toplevel :load-toplevel :execute)
  (when (and (not (boundp '*Non-XMLISP-Readtable*)) (get-macro-character #\<))
    (warn "~%XMLisp: The current *readtable* already contains a #/< reader function: ~A" (get-macro-character #\<))))
... of course the XML parser is not so simple but hooking it into the lisp reader is.

Uses for both static, strongly typed languages like Haskell and dynamic (strong) languages like Common Lisp

I was working with a Lisp dialect but also learning some Haskell as well. They share some similarities but the main difference in Common Lisp seems to be that you don't have to define a type for each function, argument, etc. whereas in Haskell you do. Also, Haskell is mostly a compiled language. Run the compiler to generate the executable.
My question is this: are there different applications or uses where a language like Haskell may make more sense than a more dynamic language like Common Lisp? For example, it seems that Lisp could be used for more bottom-up programming, like in building websites or GUIs, whereas Haskell could be used where compile-time checks are more needed, like in building TCP/IP servers or code parsers.
Popular Lisp applications:
Emacs
Popular Haskell applications:
PUGS
Darcs
Do you agree, and are there any studies on this?
Programming languages are tools for thinking with. You can express any program in any language, if you're willing to work hard enough. The chief value provided by one programming language over another is how much support it gives you for thinking about problems in different ways.
For example, Haskell is a language that emphasizes thinking about your problem in terms of types. If there's a convenient way to express your problem in terms of Haskell's data types, you'll probably find that it's a convenient language to write your program in.
Common Lisp's strengths (which are numerous) lie in its dynamic nature and its homoiconicity (that is, Lisp programs are very easy to represent and manipulate as Lisp data) -- Lisp is a "programmable programming language". If your program is most easily expressed in a new domain-specific language, for example, Lisp makes it very easy to do that. Lisp (and other dynamic languages) are a good fit if your problem description deals with data whose type is poorly specified or might change as development progresses.
Language choice is often as much an aesthetic decision as anything. If your project requirements don't limit you to specific languages for compatibility, dependency, or performance reasons, you might as well pick the one you feel the best about.
You're opening multiple cans of very wriggly worms. First off, the whole strongly vs weakly typed languages can. Second, the functional vs imperative language can.
(Actually, I'm curious: by "lisp dialect" do you mean Clojure by any chance? Because it's largely functional and closer in some ways to Haskell.)
Okay, so. First off, you can write pretty much any program in pretty much any normal language, with more or less effort. The purported advantage to strong typing is that a large class of errors can be detected at compile time. On the other hand, less typeful languages can be easier to code in. Common Lisp is interesting because it's a dynamic language with the option of declaring and using stronger types, which gives the CL compiler hints on how to optimize. (Oh, and real Common Lisp is usually implemented with a compiler, giving you the option of compiling or sticking with interpreted code.)
There are a number of studies about comparing untyped, weakly typed, and strongly typed languages. These studies invariably either say one of them is better, or say there's no observable difference. There is, however, little agreement among the studies.
The biggest area in which there may be some clear advantage is in dealing with complicated specifications for mathematical problems. In those cases (cryptographic algorithms are one example) a functional language like Haskell has advantages because it is easier to verify the correspondence between the Haskell code and the underlying algorithm.
I come mostly from a Common Lisp perspective, and as far as I can see, Common Lisp is suited for any application.
Yes, the default is dynamic typing (i.e. type detection at runtime), but you can declare types anyway for optimization (as a side note for other readers: CL is strongly typed; don't confuse weak/strong with static/dynamic!).
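A minimal sketch of such optional declarations (standard Common Lisp; how much the compiler exploits them is implementation-dependent, and DOT3 is just an illustrative name):

(defun dot3 (a b)
  (declare (optimize (speed 3) (safety 1))
           (type (simple-array double-float (3)) a b))
  ;; With these hints a good compiler can emit unboxed float arithmetic.
  (+ (* (aref a 0) (aref b 0))
     (* (aref a 1) (aref b 1))
     (* (aref a 2) (aref b 2))))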
I could imagine that Haskell could be a bit better suited as a replacement for Ada in the avionics sector, since it at least forces all type checks to happen at compile time.
I do not see how CL should not be as useful as Haskell for TCP/IP servers or code parsers -- rather the opposite, but my contacts with Haskell have been brief so far.
Haskell is a pure functional language. While it does allow imperative constructs (using monads), it generally forces the programmer to think about the problem in a rather different way, using a more mathematically oriented approach. You can't reassign another value to a variable, for example.
It is claimed that this reduces the probability of making some types of mistakes. Moreover, programs written in Haskell tend to be shorter and more concise than those written in typical programming languages. Haskell also makes heavy use of non-strict, lazy evaluation, which could theoretically allow the compiler to make optimizations not otherwise possible (along with the no-side-effects paradigm).
Since you asked about it, I believe Haskell's typing system is quite nice and useful. Not only does it catch common errors, it can also make code more concise (!) and can effectively replace object-oriented constructs from common OO languages.
Some Haskell development kits, like GHC, also feature interactive environments.
The best use for dynamic typing that I've found is when you depend on things you have no control over, so they might as well be used dynamically. For example, getting information from an XML document, we could do something like this:
var volume = parseXML("mydoc.xml").speaker.volume()
Not using duck typing would lead to something like this:
var volume = parseXML("mydoc.xml").getAttrib["speaker"].getAttrib["volume"].ToString()
The benefit of Haskell on the other hand is in safety. You can for example make sure, using types, that degrees in Fahrenheit and Celsius are never mixed unintentionally. Besides that I find that statically typed languages have better IDEs.