Generic Bit Macros

I'm looking for generic bit macros (e.g. extracting or setting multiple bits), so that I don't have to reinvent them. On NetBSD I found at least __BIT and __BITS in <sys/cdefs.h>, but glibc doesn't seem to have such macros (though GCC provides some more complex built-in bit functions). I haven't looked into other platforms yet. Does anyone know other predefined bit macros or functions?
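For reference, here is a minimal sketch of what __BITS-style macros boil down to — paraphrased from the idea rather than copied from NetBSD's <sys/cdefs.h>; the macro names are mine, and it assumes 32-bit masks:

#include <stdint.h>
#include <stdio.h>

/* BIT(n): mask with only bit n set; BITS(hi, lo): mask covering the
   inclusive bit range hi..lo (assumes 32-bit unsigned masks). */
#define BIT(n)       (1u << (n))
#define BITS(hi, lo) ((~0u >> (31 - (hi))) & ~(BIT(lo) - 1u))

/* The lowest set bit of a mask doubles as the field's shift amount. */
#define LOWEST_SET(mask)      ((mask) & -(mask))
#define FIELD_GET(x, mask)    (((x) & (mask)) / LOWEST_SET(mask))
#define FIELD_SET(x, mask, v) (((x) & ~(mask)) | ((v) * LOWEST_SET(mask)))

int main(void)
{
    uint32_t reg = 0;
    reg = FIELD_SET(reg, BITS(7, 4), 0xAu);   /* write 0xA into bits 7..4 */
    printf("reg = 0x%08x, field = 0x%x\n",
           (unsigned)reg, (unsigned)FIELD_GET(reg, BITS(7, 4)));
    return 0;
}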

I think there are none on Windows (except the trivial HIWORD/LOWORD, etc.), but on the other hand, why not use bit fields instead? That is, if you have to deal with bits that have a predefined layout.
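To make the bit-field suggestion concrete, here is a small sketch; the register layout is invented for illustration. Note that bit-field ordering and padding are implementation-defined, which is exactly why portable code often sticks to explicit masks instead:

#include <stdio.h>

/* A hypothetical control register declared once as a bit field;
   the compiler does all the masking and shifting. */
struct control_reg {
    unsigned enable   : 1;   /* bit 0 */
    unsigned mode     : 3;   /* bits 1..3 (on typical ABIs) */
    unsigned volume   : 4;   /* bits 4..7 */
    unsigned reserved : 24;
};

int main(void)
{
    struct control_reg r = {0};
    r.mode = 5;
    r.volume = 0xA;
    printf("mode=%u volume=%u\n", (unsigned)r.mode, (unsigned)r.volume);
    return 0;
}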


Perl does not have a simple #include, OK, but WHY?

Perl does not have a C-style, preprocessor-level "include" function. That is how it is, and there are numerous sites that explain how to more or less emulate the same sort of behavior.
The one thing I couldn't find on any of these sites is any explanation for WHY perl does not have this functionality. Given that Perl often provides many different ways to accomplish the same thing, it is a curious omission.
Can somebody please explain why the decision was made to exclude this sort of functionality?
Perl already has require, do, eval, and here documents, among other things. It doesn't need a built-in preprocessor; if you need one that badly, there are source filters: http://perldoc.perl.org/perlfilter.html
In general, nobody wants #include, even C and C++ programmers would mostly be happy to give it up in exchange for:
Faster compiles
Clean module system
#include is legacy, period. If a mainstream language designer announced tomorrow that they were adding #include to (your favorite language here) you'd probably see mass hysteria, laughing, and loss of confidence in that designer.
Language designers don't implement #include in any new language; there are simply better ways to do it. In general the trend is toward single-pass lexing. Preprocessing requires you to incrementally expand #includes and potentially revisit the same characters repeatedly. It has been fraught with problems, and is one of the reasons that C++ is such a dog to compile. It was OK in the 60s and 70s, when memory and CPU were tiny and languages and problems were simpler, as were codebases. Nowadays, you want to be able to compile a "library" once and store its type metadata with it, so the compiler can access it efficiently without rescanning it. That is what Microsoft does anyway with precompiled headers.
So what would #include be good for?
Modules? No. See above. Modules are compiled once, export their metadata efficiently, they don't pollute the namespace of the clients, they don't recursively inject other includes, and they can be distributed in binary form, among umpteen other advantages that I'm not even smart enough to think of.
Including macros? No. Replace them with constants, inlining, and generic programming, all of which can be precompiled and exported from a module.
Splicing in generated code? There are better ways to do it anyway. See modules.
The only useful functionality for the preprocessor, IMO, is conditional compilation.
#ifdef _WIN32
// do windowsy stuff
#else
// do something else (or nothing)
#endif
Again, Perl can do this with do, eval or require as well.
Perl doesn't have it, or lack it, any more than C does.
The C preprocessor was designed such that it and C need to know as little as possible about each other. There is no reason why you can't use it with Perl.
So why don't Perl programmers do it?
As codenhein explains, it's generally a bad idea to combine an include mechanism and a compiler that don't know anything about each other, as it leaves you open to crazy errors that neither can diagnose; the fact that C programmers are used to it doesn't change that.

Macros: What's the benefit?

Paul Graham writes:
For example, types seem to be an inexhaustible source of research papers, despite the fact that static typing seems to preclude true macros -- without which, in my opinion, no language is worth using.
What's the big deal with macros? I haven't spent a whole lot of time with them, but from the legacy C/C++ I've worked with they appear to be mostly used as a hack before templates/generics existed.
It's hard to imagine that
DECLARELIST(StrList, string);
StrList slist;
is somehow preferable to
List<String> slist;
Am I missing something?
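(For what it's worth, I assume DECLARELIST expands to something along these lines — the definition below is my guess, since the original isn't shown:)

/* A hypothetical DECLARELIST: one macro stamps out a type-specific
   linked-list struct, which is how pre-template C code faked generics. */
#define DECLARELIST(Name, Type)       \
    typedef struct Name##Node {       \
        Type value;                   \
        struct Name##Node *next;      \
    } Name##Node;                     \
    typedef struct {                  \
        Name##Node *head;             \
    } Name;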
Then there's the usage as a pseudo-function, like MAKEPOINTS:
POINTS MAKEPOINTS(
DWORD dwValue
);
Why not define it as a function instead? Is this some optimization, where you avoid code duplication without having the added overhead of another stack frame?
Then there's also tricky control flow things involving GOTO, which seem to be of dubious value.
What's so great about macros? They're less type safe (in C and C++) (right?). Why won't Paul Graham program without them?
LISP macros are an entirely different beast. C/C++ macros can merely replace a piece of text with another piece of text using an extremely basic language. A LISP program, on the other hand, is (after "reading") a LISP data structure, and can therefore be manipulated using the whole language.
With such macros, you could (given you're a really clever hacker) vastly extend the language, and everybody could use your extension relatively easily, since you did it with macros. Take for example the Common Lisp Object System. At its core, the language has nothing even remotely like objects. It is entirely implemented in the language itself, including a relatively simple syntax for using it — all with macros.
Of course macros are less necessary when the language has most things you'd ever want built in. OTOH, the LISP fans are of the opinion that a sufficiently simple language (LISP) with sufficiently powerful metaprogramming capabilities (macros) is better, since new concepts can be incorporated into the language without changing the spec or working implementations. But the most compelling example of macro usage is the DSL area. Ruby on Rails and others show every day how useful DSLs can be. Yes, Ruby doesn't have macros; it just exploits how far Ruby syntax can be bent. In other languages, or when even Ruby's syntax isn't flexible enough, you need macros or a full-blown parser/interpreter to implement a complex DSL.
Macros are really only good for two things in C/C++, and should generally be the tool of last resort (if you can accomplish something without using macros, do so).
Creating new syntactic structures or abstractions that do not exist in the language.
Eliminating duplication, especially between things that must be kept in sync with each other (see the X-macro sketch below).
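A minimal X-macro sketch (the color list here is hypothetical): the list is written once, and both the enum and the name table are generated from it, so the two can never drift out of sync.

#include <stdio.h>

#define COLOR_LIST \
    X(RED)         \
    X(GREEN)       \
    X(BLUE)

/* First expansion: enum constants. */
#define X(name) COLOR_##name,
enum color { COLOR_LIST COLOR_COUNT };
#undef X

/* Second expansion: matching name strings. */
#define X(name) #name,
static const char *color_names[] = { COLOR_LIST };
#undef X

int main(void)
{
    for (int i = 0; i < COLOR_COUNT; i++)
        printf("%d = %s\n", i, color_names[i]);
    return 0;
}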
It's almost never right to use a macro as a function (the sketch below contrasts the questioner's MAKEPOINTS with an inline function).
You also have to realize that LISP macros are not C/C++ macros.
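To make that concrete with the MAKEPOINTS example from the question: a hedged sketch of both styles, with the Windows type definitions paraphrased rather than taken from <windows.h>. Any modern compiler inlines the function version, so there is no extra stack frame in practice.

#include <stdint.h>
#include <stdio.h>

/* Paraphrased stand-ins for the Windows types. */
typedef struct { int16_t x; int16_t y; } POINTS;
typedef uint32_t DWORD;

/* Macro style: reinterpret the 32 bits in place (the windef.h version
   is essentially this cast; it relies on struct layout and aliasing). */
#define MAKEPOINTS(l) (*((POINTS *)&(l)))

/* Function style: same result, with type checking and no aliasing tricks. */
static inline POINTS make_points(DWORD dw)
{
    POINTS p;
    p.x = (int16_t)(dw & 0xFFFFu);
    p.y = (int16_t)(dw >> 16);
    return p;
}

int main(void)
{
    DWORD packed = (200u << 16) | 100u;   /* y = 200, x = 100 */
    POINTS a = MAKEPOINTS(packed);        /* layout-dependent (little-endian) */
    POINTS b = make_points(packed);
    printf("a=(%d,%d) b=(%d,%d)\n", a.x, a.y, b.x, b.y);
    return 0;
}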

What features is lisp lacking? [closed]

I have read that most languages are becoming more and more like lisp, adopting features that lisp has had for a long time. I was wondering, what are the features, old or new, that lisp does not have? By lisp I mean the most common dialects like Common Lisp and Scheme.
This question has been asked a million times, but here goes. Common Lisp was created at a time when humans were considered cheap and machines were considered expensive. Common Lisp made things easier for humans at the expense of making it harder for computers. Lisp machines were expensive; PCs with DOS were cheap. This was not good for its popularity; it was better to have a few more humans making mistakes with less expressive languages than to buy a better computer.
Fast forward 30 years, and it turns out that this isn't true. Humans are very, very expensive (and in very short supply; try hiring a programmer), and computers are very, very cheap. Cheaper than dirt, even. What today's world needs is exactly what Common Lisp offered; if Lisp were invented now, it would become very popular. Since it's 30-year-old (plus!) technology, though, nobody thought to look at it, and instead created their own languages with similar concepts. Those are the ones you're using today. (Java + garbage collection is one of the big innovations. For years, GC was looked down upon for being "too slow", but of course, a little research and now it's faster than managing your own memory. And easier for humans, too. How times change...)
Pass-by-reference (C++/C#)
String interpolation (Perl/Ruby) (though a feature of CL21)
Nice infix syntax (though it's not clear that it's worth it) (Python)
Monadic 'iteration' construct which can be overloaded for other uses (Haskell/C#/F#/Scala)
Static typing (though it's not clear that it's worth it) (many languages)
Type inference (not in the standard at least) (Caml and many others) (though CL does some type inference, unlike Python)
Abstract Data Types (Haskell/F#/Caml)
Pattern matching (Haskell/F#/Caml/Scala/others) (in CL, there are libs like optima)
Backtracking (though it's not clear that it's worth it) (Prolog)
ad-hoc polymorphism (see Andrew Myers' answer)
immutable data structures (many languages) (available through libraries, like FSet)
lazy evaluation (Haskell) (available through libraries, like clazy or a cl21 module)
(Please add to this list, I have marked it community wiki.)
This just refers to the Common Lisp and Scheme standards, because particular implementations have added a lot of these features independently. In fact, the question is kind of mistaken. It's so easy to add features to Lisp that it's better to have a core language without many features. That way, people can customize their language to perfectly fit their needs.
Of course, some implementations package the core Lisp with a bunch of these features as libraries. At least for Scheme, PLT Scheme provides all of the above features*, mostly as libraries. I don't know of an equivalent for Common Lisp, but there may be one.
*Maybe not infix syntax? I'm not sure, I never looked for it.
For Common Lisp, I think the following features would be worth adding to a future standard, in the ridiculously unlikely hypothetical situation that another standard is produced. All of these are things that are provided by pretty much every actively maintained CL implementation in subtly incompatible ways, or exist in widely used and portable libraries, so having a standard would provide significant benefits to users while not making life unduly difficult for implementors.
Some features for working with an underlying OS, like invoking other programs or handling command line arguments. Every implementation of CL I've used has something like this, and all of them are pretty similar.
Underlying macros or special forms for BACKQUOTE, UNQUOTE and UNQUOTE-SPLICING.
The meta-object protocol for CLOS.
A protocol for user-defined LOOP clauses. There are some other ways LOOP could be enhanced that probably wouldn't be too painful, either, like clauses to bind multiple values, or iterate over a generic sequence (instead of requiring different clauses for LISTs and VECTORs).
A system-definition facility that integrates with PROVIDE and REQUIRE, while undeprecating PROVIDE and REQUIRE.
Better and more extensible stream facilities, allowing users to define their own stream classes. This might be a bit more painful because there are two competing proposals out there, Gray streams and "simple streams", both of which are implemented by some CL implementations.
Better support for "environments", as described in CLTL2.
A declaration for merging tail calls and a description of the situations where calls that look like tail calls aren't (because of UNWIND-PROTECT forms, DYNAMIC-EXTENT declarations, special variable bindings, etc.).
Undeprecate REMOVE-IF-NOT and friends. Eliminate the :TEST-NOT keyword argument and SET.
Weak references and weak hash tables.
User-provided hash-table tests.
PARSE-FLOAT. Currently if you want to turn a string into a floating point number, you either have to use READ (which may do all sorts of things you don't want) or roll your own parsing function. This is silly.
Here are some more ambitious features that I still think would be worthwhile.
A protocol for defining sequence classes that will work with the standard generic sequence functions (like MAP, REMOVE and friends). Adding immutable strings and conses alongside their mutable kin might be nice, too.
Provide a richer set of associative array/"map" data types. Right now we have ad-hoc stuff built out of conses (alists and plists) and hash-tables, but no balanced binary trees. Provide generic sequence functions to work with these.
Fix DEFCONSTANT so it does something less useless.
Better control of the reader. It's a very powerful tool, but it has to be used very carefully to avoid doing things like interning new symbols. Also, it would be nice if there were better ways to manage readtables and custom reader syntaxes.
A read syntax for "raw strings", similar to what Python offers.
Some more options for CLOS classes and slots, allowing for more optimizations and better performance. Some examples are "primary" classes (where you can only have one "primary class" in a class's list of superclasses), "sealed" generic functions (so you can't add more methods to them, allowing the compiler to make a lot more assumptions about them) and slots that are guaranteed to be bound.
Thread support. Most implementations either support SMP now or will support it in the near future.
Nail down more of the pathname behavior. There are a lot of gratuitously annoying incompatibilities between implementations, like CLISP's insistence on signaling an error when you use PROBE-FILE on a directory, or indeed the fact that there's no standard function that tells you whether a pathname is the name of a directory or not.
Support for network sockets.
A common foreign function interface. It would be unavoidably lowest-common-denominator, but I think having something you could portably rely upon would be a real advantage even if using some of the cooler things some implementations provide would still be relegated to the realm of extensions.
This is in response to the discussion in comments under Nathan Sanders' reply. This is a bit much for a comment, so I'm adding it here. I hope this isn't violating Stack Overflow etiquette.
Ad-hoc polymorphism is defined as different implementations based on specified types. In Common Lisp, using generic functions, you can define something like the following, which gives you exactly that.
;This is unnecessary and created implicitly if not defined.
;It can be explicitly provided to define an interface.
(defgeneric what-am-i? (thing))

;Provide implementation that works for any type.
(defmethod what-am-i? (thing)
  (format t "My value is ~a~%" thing))

;Specialize on thing being an integer.
(defmethod what-am-i? ((thing integer))
  (format t "I am an integer!~%")
  (call-next-method))

;Specialize on thing being a string.
(defmethod what-am-i? ((thing string))
  (format t "I am a string!~%")
  (call-next-method))
CL-USER> (what-am-i? 25)
I am an integer!
My value is 25
NIL
CL-USER> (what-am-i? "Andrew")
I am a string!
My value is Andrew
NIL
It can be harder than in more popular languages to find good libraries.
It is not purely functional, the way Haskell is.
Whole-program transformations. (It would be just like macros, but for everything. You could use it to implement declarative language features.) Equivalently, the ability to write add-ons to the compiler. (At least, Scheme is missing this. CL may not be.)
Built-in theorem assistant / proof checker for proving assertions about your program.
Of course, I don't know of any other language that has these, so I don't think there's much competition in terms of features.
You are asking the wrong question. The language with the most features isn't the best. A language needs a goal.
We could add all of this and more
* Pass-by-reference (C++/C#)
* String interpolation (Perl/Ruby)
* Nice infix syntax (though it's not clear that it's worth it) (Python)
* Monadic 'iteration' construct which can be overloaded for other uses (Haskell/C#/F#/Scala)
* Static typing (though it's not clear that it's worth it) (many languages)
* Type inference (not in the standard at least) (Caml and many others)
* Abstract Data Types (Haskell/F#/Caml)
* Pattern matching (Haskell/F#/Caml/Scala/others)
* Backtracking (though it's not clear that it's worth it) (Prolog)
* ad-hoc polymorphism (see Andrew Myers' answer)
* immutable data structures (many languages)
* lazy evaluation (Haskell)
but that wouldn't make it a good language. A language is not functional if you use call-by-reference.
Look at the new Lisp, Clojure: some of these features are implemented, while others that CL has are not — and that makes for a good language.
Clojure for example added some:
ad-hoc polymorphism
lazy evaluation
immutable data structures
Type inference (most dynamic languages have compilers that do that)
My answer is:
Scheme should stay as it is.
CL could add some ideas to the standard, if they ever make a new one.
It's Lisp; most of this can be added with libraries.
Decent syntax. (Someone had to say it.) It may be simple/uniform/homoiconic/macro-able/etc, but as a human, I just loathe looking at it :)
It's missing a great IDE

Why is the Lisp community so fragmented? [closed]

To begin, not only are there two main dialects of the language (Common Lisp and Scheme), but each of the dialects has many individual implementations. For example, Chicken Scheme, Bigloo, etc... each with slight differences.
From a modern point of view this is strange, as languages these days tend to have definitive implementations/specs. Think Java, C#, Python, Ruby, etc, where each has a single definitive site you can go to for API docs, downloads, and such. Of course Lisp predates all of these languages. But then again, even C/C++ are standardized (more or less).
Is the fragmentation of this community due to the age of Lisp? Or perhaps different implementations/dialects are intended to solve different problems? I understand there are good reasons why Lisp will never be as united as languages that have grown up around a single definitive implementation, but at this point is there any good reason why the Lisp community should not move in this direction?
The Lisp community is fragmented, but everything else is too.
Why are there so many Linux distributions?
Why are there so many BSD variants? OpenBSD, NetBSD, FreeBSD, ... even Mac OS X.
Why are there so many scripting languages? Ruby, Python, Rebol, TCL, PHP, and countless others.
Why are there so many Unix shells? sh, csh, bash, ksh, ...?
Why are there so many implementations of Logo (>100), Basic (>100), C (countless), ...
Why are there so many variants of Ruby? Ruby MRI, JRuby, YARV, MacRuby, HotRuby?
Python may have a main site, but there are several slightly different implementations: CPython, IronPython, Jython, Python for S60, PyPy, Unladen Swallow, CL-Python, ...
Why is there C (Clang, GCC, MSVC, Turbo C, Watcom C, ...), C++, C#, Cilk, Objective-C, D, BCPL, ... ?
Just let some of them turn fifty and see how many dialects and implementations they have then.
I guess Lisp is diverse because every language is diverse or gets diverse. Some start with a single implementation (McCarthy's Lisp) and after fifty years you get a zoo. Common Lisp even started with multiple implementations (for different machine types, operating systems, with different compiler technology, ...).
Nowadays Lisp is a family of languages, not a single language. There is not even consensus what languages belong to that family or not. There might be some criteria to check (s-expressions, functions, lists, ...), but not every Lisp dialect supports all these criteria. The language designers have experimented with different features and we got many, more or less, Lisp-like languages.
If you look at Common Lisp, there are about three or four different active commercial vendors. Try to get them behind one offering! Won't work. Then you have a bunch of active open-source implementations with different goals: one compiles to C, another one is written in C, one tries to have a fast optimizing compiler, one tries for some middle ground with native compilation, one targets the JVM ... and so on. Try telling the maintainers to drop their implementations!
Scheme has around 100 implementations. Many are dead, or mostly dead. At least ten to twenty are active. Some are hobby projects. Some are university projects, some are projects by companies. The users have diverse needs: one needs a real-time GC for a game, another needs embedding in C, one needs only barebones constructs for educational purposes, and so on. How would you tell the developers to stop hacking on their implementations?
Then there are some who don't like Common Lisp (too big, too old, not functional enough, not object-oriented enough, too fast, not fast enough, ...). Some don't like Scheme (too academic, too small, does not scale, too functional, not functional enough, no modules, the wrong modules, not the right macros, ...).
Then somebody needs a Lisp combined with Objective-C, then you get Nu. Somebody hacks some Lisp for .net. Then you get some Lisp with concurrency and fresh ideas, then you have Clojure.
It's language evolution at work. It is like the Cambrian explosion (when lots of new animals appeared). Some will die, others will live on, some new ones will appear. At some point in time some dialects appeared that picked up the state of the art (Scheme for everything with functional programming in Lisp in the 70s/80s, and Common Lisp for everything MacLisp-like in the 80s) - that caused some dialects to mostly disappear (namely Standard Lisp, InterLisp, and others).
Common Lisp is the alligator of Lisp dialects. It is a very old design (hundred million years) with little changes, looks a little bit frightening, and from time to time it eats some young...
If you want to know more, The Evolution of Lisp (and the corresponding slides) is a very good start!
I think it is because "Lisp" is such a broad description of a language. The only thing common to all the Lisps that I know is that most things are in brackets and prefix function notation is used. E.g.
(fun (+ 3 4))
However nearly everything else can vary between implementations. Scheme and CL are completely different languages, and should be considered like that.
I think calling the Lisp community fragmented is like calling the "C-like" community fragmented. It has C, C++, D, Java, C#, Go, JavaScript, Python, and many other languages which I can't think of.
In summary: Lisp is more of a language property (like garbage collection, static typing) than an actual language implementation, so it is completely normal that there are many languages that have the Lisp like property, just like many languages have garbage collection.
I think it's because Lisp was born out of, and maintains, the spirit of the hacker culture. The hacker culture is to take something and make it "better" according to your belief in "better".
So when you have a bunch of opinionated hackers and a culture of modification, fragmentation happens. You get Scheme, Common Lisp, ELISP, Arc. These are all pretty different languages, but they're all "Lisp" at the same time.
Now why is the community fragmented? Well, I'll blame time and maturity on that. The language is 50 years old! :-)
Scheme and Common Lisp are standardized. SBCL seems like the de facto open-source Lisp, and there are plenty of examples out there on how to use it. It's fast and free. ClozureCL also looks pretty darn good.
PLT Scheme seems like the de facto open-source Scheme, and there are plenty of examples out there on how to use it. It's fast and free.
The CL HyperSpec seems as good as the JavaDoc to me.
As for community fragmentation, I think it has little to do with standards or resources. I think it has far more to do with what has been a relatively small community until recently.
Clojure I think has a good chance to become The Lisp for the new generation of coders.
Perhaps my point is that a very popular implementation is all that is required to give the illusion of a cohesive community.
LISP is not nearly as fragmented as BASIC.
There are so many dialects and versions of BASIC out there I have lost count.
Even the most commonly used implementation (MS VB) is different between versions.
The fact that there are many implementations of Common LISP should be considered a good thing. In fact, that there are roughly the same number of free implementations of Common LISP as there are free implementations of C++ is remarkable, considering the relative popularity of the languages.
Free Common LISP implementations include CMU CL, SBCL, OpenMCL / Clozure CL, CLISP, GCL and ECL.
Free C++ implementations include G++ (with Cygwin and MinGW32 variants), Digital Mars, Open Watcom, Borland C++ (legacy?) and CINT (interpreter). There are also various STL implementations for C++.
With regards to Scheme and Common LISP, though admittedly an inexact analogy, there are times when I would say that Scheme is to Common LISP what C is to C++, i.e. while Scheme and C are small and elegant, Common LISP and C++ are large and (arguably) more suited for larger applications.
Having many implementations is beneficial, because each implementation is optimal in unique places. And modern mainstream languages don't have one implementation anyway. Think about Python: its main implementation is CPython, but thanks to Jython you can run Python on the JVM too; thanks to Stackless Python you can have massive concurrency via microthreads; etc. Such implementations will be incompatible in some ways: Jython integrates seamlessly with Java, whilst CPython doesn't. Same for Ruby.
What you don't want is to have many implementations which are incompatible to the bone. That's the case with Scheme, where you can't share libraries among implementations without rewriting a lot of code, because Schemers can't agree on how to import/export libraries. Common Lisp libraries, OTOH, because of standardization in core areas, are more likely to be portable, and facilities exist to conditionally write code handling each implementation's peculiarities. Actually, nowadays you may say that Common Lisp is defined by its implementations (think about the ASDF system-definition facility), just like mainstream languages.
Two possible contributing factors:
Lisp languages aren't hugely popular in comparison to other languages like C/C++/Ruby and so on - that alone may give the illusion of a fragmented community. There may be equal fragmentation in the other language communities, but a larger community will have larger fragments.
Lisp languages are easier than most to implement. I've seen many, many "toy" Lisp implementations people have made for fun, and many "proper" Lisp implementations made to solve specific tasks. There are far more Lisp implementations than there are, say, Python interpreters (I'm aware of about 5, most of which are generally interchangeable).
There are promising projects like Clojure, which is a new language, with a clear goal (concurrency), without much "historical baggage", easy to install/setup, can piggyback on Java's library "ecosystem", has a good site with documentation/libraries, and has an official mailing list. This pretty much checks off every issue I encountered while trying to learn Common Lisp a while ago, and encourages a more centralised community.
My point of view is that Lisp is a small language, so it is easy to implement (compared to Java, C#, C, ...).
Note: Many comment that it is indeed not that small, but that misses my point. Let me try to be more precise: it boils down to the entry price. Building a VM that compiles some well-known mainstream language is quite hard compared to building a VM that deals with Lisp. That makes it easy to build a small community around a small project. The library and the spec may or may not be fully implemented, but the value proposition is still there. Clojure is a typical example, where implementing an existing standard like R5RS was certainly not in scope.

Uses for both statically typed languages like Haskell and dynamically (strongly) typed languages like Common Lisp

I was working with a Lisp dialect but also learning some Haskell. They share some similarities, but the main difference seems to be that in Common Lisp you don't have to define a type for each function, argument, etc., whereas in Haskell you do. Also, Haskell is mostly a compiled language: you run the compiler to generate an executable.
My question is this: are there different applications or uses where a language like Haskell makes more sense than a more dynamic language like Common Lisp? For example, it seems that Lisp could be used for more bottom-up programming, like in building websites or GUIs, whereas Haskell could be used where compile-time checks are more needed, like in building TCP/IP servers or code parsers.
Popular Lisp applications:
Emacs
Popular Haskell applications:
PUGS
Darcs
Do you agree, and are there any studies on this?
Programming languages are tools for thinking with. You can express any program in any language, if you're willing to work hard enough. The chief value provided by one programming language over another is how much support it gives you for thinking about problems in different ways.
For example, Haskell is a language that emphasizes thinking about your problem in terms of types. If there's a convenient way to express your problem in terms of Haskell's data types, you'll probably find that it's a convenient language to write your program in.
Common Lisp's strengths (which are numerous) lie in its dynamic nature and its homoiconicity (that is, Lisp programs are very easy to represent and manipulate as Lisp data) -- Lisp is a "programmable programming language". If your program is most easily expressed in a new domain-specific language, for example, Lisp makes it very easy to do that. Lisp (and other dynamic languages) are a good fit if your problem description deals with data whose type is poorly specified or might change as development progresses.
Language choice is often as much an aesthetic decision as anything. If your project requirements don't limit you to specific languages for compatibility, dependency, or performance reasons, you might as well pick the one you feel the best about.
You're opening multiple cans of very wriggly worms. First off, the whole strongly vs weakly typed languages can. Second, the functional vs imperative language can.
(Actually, I'm curious: by "lisp dialect" do you mean Clojure by any chance? Because it's largely functional and closer in some ways to Haskell.)
Okay, so. First off, you can write pretty much any program in pretty much any normal language, with more or less effort. The purported advantage to strong typing is that a large class of errors can be detected at compile time. On the other hand, less typeful languages can be easier to code in. Common Lisp is interesting because it's a dynamic language with the option of declaring and using stronger types, which gives the CL compiler hints on how to optimize. (Oh, and real Common Lisp is usually implemented with a compiler, giving you the option of compiling or sticking with interpreted code.)
There are a number of studies about comparing untyped, weakly typed, and strongly typed languages. These studies invariably either say one of them is better, or say there's no observable difference. There is, however, little agreement among the studies.
The biggest area in which there may be some clear advantage is in dealing with complicated specifications for mathematical problems. In those cases (cryptographic algorithms are one example) a functional language like Haskell has advantages because it is easier to verify the correspondence between the Haskell code and the underlying algorithm.
I come mostly from a Common Lisp perspective, and as far as I can see, Common Lisp is suited for any application.
Yes, the default is dynamic typing (i.e. type detection at runtime), but you can declare types anyway for optimization (as a side note for other readers: CL is strongly typed; don't confuse weak/strong with static/dynamic!).
I could imagine that Haskell could be a bit better suited as a replacement for Ada in the avionics sector, since it forces at least all type checks at compile time.
I do not see how CL should not be as useful as Haskell for TCP/IP servers or code parsers -- rather the opposite, but my contacts with Haskell have been brief so far.
Haskell is a pure functional language. While it does allow imperative constructs (using monads), it generally forces the programmer to think about the problem in a rather different way, using a more mathematically oriented approach. You can't reassign another value to a variable, for example.
It is claimed that this reduces the probability of making some types of mistakes. Moreover, programs written in Haskell tend to be shorter and more concise than those written in typical programming languages. Haskell also makes heavy use of non-strict, lazy evaluation, which could theoretically allow the compiler to make optimizations not otherwise possible (along with the no-side-effects paradigm).
Since you asked about it, I believe Haskell's typing system is quite nice and useful. Not only does it catch common errors, but it can also make code more concise (!) and can effectively replace object-oriented constructs from common OO languages.
Some Haskell development kits, like GHC, also feature interactive environments.
The best use for dynamic typing that I've found is when you depend on things that you have no control over, so they might as well be used dynamically. For example, when getting information from an XML document, we could do something like this:
var volume = parseXML("mydoc.xml").speaker.volume()
Not using duck typing would lead to something like this:
var volume = parseXML("mydoc.xml").getAttrib("speaker").getAttrib("volume").toString()
The benefit of Haskell, on the other hand, is in safety. You can, for example, make sure, using types, that degrees in Fahrenheit and Celsius are never mixed unintentionally (see the sketch below). Besides that, I find that statically typed languages have better IDEs.
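As a sketch of that Fahrenheit/Celsius point (the type and function names here are invented for illustration), wrapping each unit in its own type makes accidental mixing a compile-time error — the same trick works even in C:

#include <stdio.h>

/* Distinct wrapper types: structurally identical, but incompatible
   as far as the type checker is concerned. */
typedef struct { double deg; } celsius;
typedef struct { double deg; } fahrenheit;

static fahrenheit to_fahrenheit(celsius c)
{
    return (fahrenheit){ c.deg * 9.0 / 5.0 + 32.0 };
}

int main(void)
{
    celsius body = { 37.0 };
    fahrenheit f = to_fahrenheit(body);
    printf("%.1f C = %.1f F\n", body.deg, f.deg);
    /* to_fahrenheit(f); -- rejected by the compiler: wrong struct type,
       so the two scales cannot be mixed by accident. */
    return 0;
}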