What do you think is the best language for Bioinformatics? [closed]

I have done a couple of research jobs in bioinformatics, and I used Matlab for them. Matlab had a lot of powerful tools and was easy to use. I did things with genome sequencing and predicting metabolic pathways. I am wondering what other people think is best, or perhaps there is not one specific language but a few that lend themselves best to bioinformatics work that is math-heavy and deals with large amounts of data.

You'll likely be interested in this thread over at BioStar:
Which are the best programming languages to study for a bioinformatician?
For most of us bioinformaticians, this includes Python, R, Perl, and bash command line utilities (like sed, awk, cut, sort, etc). There are also people who code in Java, Ruby, C++, and Matlab.
So the bottom line? Whichever language lets you get the work done most easily is the right one for you. Answering this question should include a careful survey of the libraries and other code that you can pull from, as well as information on your own preferences and experience. If you're doing microarray analysis, it's hard to beat the R/bioconductor libraries, but that's absolutely the wrong language for someone wrangling most types of large sequencing data sets.

There's no one right language for bioinformatics.
The important BLAST sequence-search tool is written in C++.
The MATT tool for aligning protein structures is written in C.
Some of my colleagues in computational biology use Ruby.
In general, I see a lot of C and C++ for performance-critical code and a lot of scripting languages otherwise.

Python + scipy are decent (and FREE).
http://www.vetta.org/2008/05/scipy-the-embarrassing-way-to-code/
http://www.google.com/search?q=python+bioinformatics
You hardly even need to learn new syntax when dropping Matlab for SciPy.
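For instance, a minimal NumPy/SciPy sketch (toy numbers, nothing bioinformatics-specific) showing how Matlab-like the syntax feels:

# A taste of how Matlab-like NumPy/SciPy code reads (toy data).
import numpy as np
from scipy import linalg

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # like A = [2 1; 1 3]
b = np.array([1.0, 2.0])
x = linalg.solve(A, b)                    # like x = A \ b
print(x)                                  # [0.2 0.6]
print(linalg.eigvals(A))                  # like eig(A)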

Best or not, SAS is the de facto programming environment in biopharmas. If you were to work for the Pfizers, Mercks and Bayers of the world in bioinformatics, you had better have SAS skills. SAS programmers are in great demand.

What's the "best" language is both subjective and potentially different from task to task, but for bioinformatic work, I personally use R, Perl, Delphi and C (quite frequently a combination of several of these).

I work mainly with HMMs and protein sequences. I started out writing in C, but have since switched to Python, which I'm happy with. I find it makes quick prototyping easier and results in code that is easier to maintain.
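As an illustration of how quickly this sort of thing prototypes in Python, here is a toy two-state HMM with a Viterbi decoder; every probability is invented for the example, and a real protein model would of course be larger:

import math

# Toy two-state HMM and Viterbi decoder (log-space to avoid underflow
# on long sequences; all numbers are made up for illustration).
states = ("match", "background")
log_start = {s: math.log(0.5) for s in states}
log_trans = {"match":      {"match": math.log(0.9), "background": math.log(0.1)},
             "background": {"match": math.log(0.2), "background": math.log(0.8)}}
log_emit  = {"match":      {"H": math.log(0.8), "L": math.log(0.2)},
             "background": {"H": math.log(0.3), "L": math.log(0.7)}}

def viterbi(observations):
    # v[s] = best log-probability of any path ending in state s
    v = {s: log_start[s] + log_emit[s][observations[0]] for s in states}
    backpointers = []
    for obs in observations[1:]:
        prev, v, pointer = v, {}, {}
        for s in states:
            best = max(states, key=lambda p: prev[p] + log_trans[p][s])
            pointer[s] = best
            v[s] = prev[best] + log_trans[best][s] + log_emit[s][obs]
        backpointers.append(pointer)
    path = [max(states, key=v.get)]          # trace the best path backwards
    for pointer in reversed(backpointers):
        path.append(pointer[path[-1]])
    return list(reversed(path))

print(viterbi(list("HHLLLH")))   # prints the most likely state path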

Here's a freely available academic paper written on the subject that evaluates the different languages, and in different situations: http://www.biomedcentral.com/1471-2105/9/82
They grouped 6 commonly used languages into 3 different levels.
2 compiled languages: C, C++
2 semi-compiled languages: C#, Java
2 interpreted languages: Perl, Python
Some general conclusions:
Compiled languages outperformed interpreted languages in global alignments and Neighbour-Joining programs
Interpreted languages generally used more memory
All languages performed roughly the same for BLAST computations, except for Python
Compiled languages require more lines of code to perform the same tasks
Compiled languages tend to be better for algorithm implementation
Interpreted languages tend to be better for file parsing/manipulation
Here's another good free academic article discussing ways to build bioinformatics skills: http://dx.plos.org/10.1371/journal.pcbi.1000589

Related

Why people say "emacs is good for writing lisp program because it's written in emacs lisp"? [closed]

I can imagine that IDEs like Eclipse, written in Java, are good for writing Java programs because the tool and the language are tightly integrated. So if someone said "Emacs is good for writing Emacs Lisp programs", that would make sense to me in the same way.
But people say Emacs is good for all Lisp dialects, as if all Lisp dialects and Emacs were naturally tightly integrated. Not just Emacs Lisp, not just any language, not just any language with a REPL, but all Lisp dialects. Why? Is there anything in Emacs Lisp that is shared by all Lisp dialects but not really by other languages, including those with a REPL, that is so unique that only Lisp dialects can benefit from it? Could you give some examples?
You're right that there's little reason Emacs would have any technical benefit as a generic Lisp environment. It's not completely zero, though: Emacs has support that makes it easy to parse most Lisps, since their surface syntax (S-expressions) is usually very similar. But this is still just a technicality.
The real reason is that Emacs comes from a time when there was much less variance among Lisps (in the syntax sense; semantics is a different issue), so it was basically the only choice for a sophisticated Lisp editing environment. As such, it enjoys the benefit of literally decades of Lispers using it for Lisp editing, so that has always been one of its stronger points; on the other hand, very few Lispers chose other editors, so those editors suffered from the natural lack of demand. (IIRC, up until not too long ago even vi clones had little more than basic paren matching.)
More than that, even these days there's a strong relationship since reading elisp is much easier for your J Random Lisper, so they are still the crowd that finds it easy to extend Emacs. You could make the exact same point for Java & Eclipse, of course -- both are examples of tools that are very extensible, and therefore are very entrenched in their respective communities.
When it gets to executing code, with most lisps (= all except elisp) the work is done in a subprocess so it's no different than using any other editor -- technically speaking.

Can Marpa be used to speed up Perl interpreter's parsing?

Can the existing Marpa parser be used to improve parsing of Perl 5 (e.g., replace all or chunks of existing Perl interpreter's parser)?
I am asking on a theoretical level, e.g. ignoring practical considerations such as "if it can, it would cost 10,000 man hours of work".
If not, what are the specific issues preventing the use of Marpa? (again, preferably theoretical ones).
For background on why this is interesting: Jeffrey Kegler (Marpa's author) posted a somewhat famous article, "Perl Cannot Be Parsed: A Formal Proof", on PerlMonks in 2008, which was influenced by his then-current work on Marpa.
Thanks for asking. The perlmonks post and my current parsing work address two different if related questions. Question 1: Is Perl parsing, in its full generality, decidable by a Turing machine? Question 2: Can Marpa, as a practical matter, parse Perl 5?
You might compare the two questions: "Is the behavior of every C program decidable?" and "Can machine X run programs compiled in C?" The answers are, respectively, "no" and "yes for all practical purposes and reasonable choices of X". So my perlmonks post (updated here) is about the theoretical question of whether the syntax of Perl programs is, in its full generality, decidable. Note that the decidability of Perl parsing in that context has nothing to do with Marpa, recursive descent, bison, etc. -- it's about Turing machines.
Question 2 is "Can Marpa drive a practical Perl 5 parser?" The current Perl 5 parser is LALR, with a separate lexer and lots of procedural assistance. Marpa is more powerful than LALR, allows a separate lexer, and offers much more help to procedural code than LALR does. I addressed the speed question in a recent blog post: "Is Earley parsing fast enough?" What I've just said is very telegraphic -- but I hope it will do to outline how I'd justify my "yes" answer to Question 2.
No deep architectural problem stands in the way of a Marpa-driven Perl 5 parser. At this point, it's really a question of comfort level.
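To make "more powerful than LALR" concrete, here is a toy Earley recognizer in Python (purely illustrative; it shares nothing with Marpa's actual code and skips the well-known empty-rule subtlety). It happily accepts an ambiguous expression grammar that an LALR generator would reject with shift/reduce conflicts:

def earley_recognize(grammar, start, tokens):
    # grammar maps a nonterminal to a list of alternative bodies (tuples).
    # A chart state is (head, body, dot, origin); no empty rules allowed.
    chart = [set() for _ in range(len(tokens) + 1)]
    for body in grammar[start]:
        chart[0].add((start, body, 0, 0))
    for i in range(len(tokens) + 1):
        changed = True
        while changed:                                   # fixed point at position i
            changed = False
            for head, body, dot, origin in list(chart[i]):
                if dot < len(body) and body[dot] in grammar:       # predict
                    for alt in grammar[body[dot]]:
                        if (body[dot], alt, 0, i) not in chart[i]:
                            chart[i].add((body[dot], alt, 0, i)); changed = True
                elif dot < len(body):                              # scan a terminal
                    if i < len(tokens) and tokens[i] == body[dot]:
                        chart[i + 1].add((head, body, dot + 1, origin))
                else:                                              # complete
                    for h2, b2, d2, o2 in list(chart[origin]):
                        if d2 < len(b2) and b2[d2] == head:
                            if (h2, b2, d2 + 1, o2) not in chart[i]:
                                chart[i].add((h2, b2, d2 + 1, o2)); changed = True
    return any(h == start and d == len(b) and o == 0
               for h, b, d, o in chart[-1])

# E -> E '+' E | 'n' is ambiguous: LALR reports conflicts, Earley just works.
grammar = {"E": [("E", "+", "E"), ("n",)]}
print(earley_recognize(grammar, "E", ["n", "+", "n", "+", "n"]))   # True
print(earley_recognize(grammar, "E", ["n", "+"]))                  # False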

Why is Perl used so extensively in biology research? [closed]

I work as support staff in a biology research institute as a student, and Perl seems to be used everywhere. Not for every single project, but it seems that more than half the people here have a few Perl books in/on their office/desk.
Why is Perl used so much in biology?
Lincoln Stein highlighted some of the saving graces of Perl for bioinformatics in his article:
How Perl Saved the Human Genome Project.
From his analysis:
I think several factors are responsible:
Perl is remarkably good for slicing, dicing, twisting, wringing, smoothing, summarizing and otherwise mangling text. Although the biological sciences do involve a good deal of numeric analysis now, most of the primary data is still text: clone names, annotations, comments, bibliographic references. Even DNA sequences are textlike. Interconverting incompatible data formats is a matter of text mangling combined with some creative guesswork. Perl's powerful regular expression matching and string manipulation operators simplify this job in a way that isn't equalled by any other modern language.
Perl is forgiving. Biological data is often incomplete, fields can be missing, or a field that is expected to be present once occurs several times (because, for example, an experiment was run in duplicate), or the data was entered by hand and doesn't quite fit the expected format. Perl doesn't particularly mind if a value is empty or contains odd characters. Regular expressions can be written to pick up and correct a variety of common errors in data entry. Of course this flexibility can also be a curse. I talk more about the problems with Perl below.
Perl is component-oriented. Perl encourages people to write their software in small modules, either using Perl library modules or with the classic Unix tool-oriented approach. External programs can easily be incorporated into a Perl script using a pipe, system call or socket. The dynamic loader introduced with Perl5 allows people to extend the Perl language with C routines or to make entire compiled libraries available for the Perl interpreter. An effort is currently under way to gather all the world's collected wisdom about biological data into a set of modules called "bioPerl" (discussed at length in an article to be published later in the Perl Journal).
Perl is easy to write and fast to develop in. The interpreter doesn't require you to declare all your function prototypes and data types in advance, new variables spring into existence as needed, calls to undefined functions only cause an error when the function is needed. The debugger works well with Emacs and allows a comfortable interactive style of development.
Perl is a good prototyping language. Because Perl is quick and dirty, it often makes sense to prototype new algorithms in Perl before moving them to a fast compiled language.
Sometimes it turns out that Perl is fast enough so that the algorithm doesn't have to be ported; more frequently one can write a small core of the algorithm in C, compile it as a dynamically loaded module or external executable, and leave the rest of the application in Perl (for an example of a complex genome mapping application implemented in this way, see http://waldo.wi.mit.edu/ftp/distribution/software/rhmapper/).
Perl is a good language for Web CGI scripting, and is growing in importance as more labs turn to the Web for publishing their data.
The real answer probably has less to do with Perl than you think. Many of the things that happen are accidents of history. At the time, way back when, Perl was pretty popular, Java was getting more popular, not too many people were paying attention to Python, and Ruby was just getting started.
The people who needed to get work done used Perl and made some libraries in Perl, and other people started using those libraries. Once people start using something that is moderately useful to them, they tend not to switch (economists call those "switching costs"). From there, even more people start using it because a lot of other people are using it.
The same evolution might not happen today. I'd say that Perl, Python, and Ruby are all completely adequate and up to the task. All the things that mobrule quotes from Lincoln Stein could apply to any of the three today. If everyone had to start from scratch today, any one of those languages could be the one that everyone uses.
I've noticed, from my own client base though (a very small and unrepresentative sample of biotech), that the people pushing the programming for a lot of the biological stuff seemed to be at least part-time sysadmins who were supporting scientists. The scientists worried about the science and did some light programming, but the IT support people were doing a lot of the heavy lifting for the non-science parts. Perl is very well positioned as a sysadmin tool since it's the duct-tape of the internet.
Probably because Perl is good at manipulating strings, and much research in genetics involves the manipulation of veeery long "ACTGCATG..." strings. Just guessing...
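To make that guess concrete, the bread-and-butter string chores look like this (sketched here in Python, though the Perl is just as short; the sequence is a toy example):

# Everyday sequence-string chores on a toy DNA string.
complement = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    return seq.translate(complement)[::-1]

seq = "ACTGCATG"
print(reverse_complement(seq))   # CATGCAGT
print(seq.count("CA"))           # 1  (motif counting)
print(seq.find("GCA"))           # 3  (motif location)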
I use lots of Perl for dealing with qualitative and quantitative data in social science research. In terms of getting things done (largely with text) quickly, finding libraries on CPAN (nice central location), and generally just getting things done quickly, it can't be surpassed.
Perl is also excellent glue, so if you have some instrumental records, and you need to glue them to data analysis routines, then Perl is your language.
Perl seems to be the language of choice for bioinformatics - there's even an O'Reilly title on just this subject: Beginning Perl for Bioinformatics.
Perl is very powerful when it comes to dealing with text, and it's present in almost every Linux/Unix distribution. In bioinformatics, not only are sequence data very easy to manipulate with Perl, but most bioinformatics algorithms will also output some kind of text results.
Then, the biggest bioinformatics centers like the EBI had that great guy, Ewan Birney, who was leading the BioPerl project. That library has lots of parsers for every kind of popular bioinformatics algorithms' results, and for manipulating the different sequence formats used in major sequence databases.
Nowadays, however, Perl is not the only language used by bioinformaticians: along with sequence data, labs produce more and more different kinds of data types and other languages are more often used in those areas.
The R statistics programming language for example, is widely used for statistical analysis of microarray and qPCR data (among others). Again, why are we using it so much? Because it has great libraries for that kind of data (see bioconductor project).
Now when it comes to web development, CGI is not really state of the art today, but people who know Perl may stick to it. In my company though it is no longer used...
I hope this helps.
Perl basically forces very short development cycles. That's the kind of development that gets stuff done.
It's enough to outweigh Perl's disadvantages.
Bioinformatics deals primarily in text parsing, and Perl is the best programming language for the job, as it is made for string parsing. As the O'Reilly book (Beginning Perl for Bioinformatics) says, "With [Perl]'s highly developed capacity to detect patterns in data, Perl has become one of the most popular languages for biological data analysis."
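The kind of parsing meant here, sketched in Python for concreteness (a toy inline FASTA record; a real script would read from a file):

import re

# Minimal FASTA parser: header lines start with '>', sequences may wrap.
fasta = """>seq1 example record
ACTGCATG
GATTACA
>seq2 another record
TTTTAAAA
"""

records = {}
for block in fasta.split(">")[1:]:
    header, _, body = block.partition("\n")
    records[header.split()[0]] = re.sub(r"\s+", "", body)   # join wrapped lines

for name, seq in records.items():
    print(name, len(seq), seq)   # seq1 15 ACTGCATGGATTACA / seq2 8 TTTTAAAA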
This seems to be a pretty comprehensive response. Perhaps one thing missing, however, is that most biologists (until recently, perhaps) don't have much programming experience at all. The learning curve for Perl is much lower than for compiled languages (like C or Java), and yet Perl still provides a ton of features when it comes to text processing. So what if it takes longer to run? Biologists can definitely handle that. Lab experiments routinely take one hour or more to finish, so waiting a few extra minutes for that data processing to finish isn't going to kill them!
Just note that I am talking here about biologists that program out of necessity. I understand that there are some very skilled programmers and computer scientists out there that use Perl as well, and these comments may not apply to them.
People missed out DBI, the Perl abstract database interface that makes it really easy to work with bioinformatic databases.
There is also the one-liner angle. You can write something to reformat data in a single line in Perl and just use the -pe flag to embed that at the command line. Many people using AWK and sed moved to Perl. Even in full programs, file I/O is incredibly easy and quick to write, and text transformation is expressive at a high level compared to any engineering language around. People who use Java or even Python for one-off text transformation are just too lazy to learn another language. Java especially has a high dependence on the JVM implementation and its I/O performance.
At least you know how fast or slow Perl will be everywhere, slightly slower than C I/O. Don't learn grep, cut, sed, or AWK; just learn Perl as your command line tool, even if you don't produce large programs with it. Regarding CGI, Perl has plenty of better web frameworks such as Catalyst and Mojolicious, but the mindshare definitely came from CGI and bioinformatics being one of the earliest heavy users of the Internet.
Perl is very easy to learn compared to other languages. It can fully exploit biological data, which is becoming big data. It can manipulate big data and performs well for data curation and all kinds of DNA programming; automation in biology has become easy thanks to languages like Perl, Python and Ruby. It is very approachable for those who know biology but don't know how to program in other programming languages.
Personally, and I know this will date me, but it's because I learned Perl first. I was being asked to take FASTA files and mix with other FASTA files. Perl was the recommended tool when I asked around.
At the time I'd been through a few computer science classes, but I didn't really know programming all that well.
Perl proved fairly easy to learn. Once I'd gotten regular expressions into my head I was parsing and making new FASTA files within a day.
As has been suggested, I was not a programmer. I was a biochemistry graduate working in a lab, and I'd made the mistake of setting up a Linux server where everyone could see me. This was back in the day when that was an all-day project.
Anyway, Perl became my go-to for anything I needed to do around the lab. It was awesome, easy to use, super flexible, and other Perl guys in other labs were a lot like me.
So, to cut it short, Perl is easy to learn, flexible and forgiving, and it did what I needed.
Once I really got into bioinformatics I picked up R, Python, and even Java. Perl is not that great at helping to create maintainable code, mostly because it is so flexible. Now I just use the language for the job, but Perl is still one of my favorite languages, like a first kiss or something.
To reiterate, most bioinformatics folks learned coding by just kluging stuff together, and most of the time you're just trying to get an answer for the principal investigator (PI), so you can't spend days on code design. Perl is superb at just getting an answer; it probably won't work a second time, and you will not understand anything in your own code if you see it six months later; BUT if you need something now, then it is a good choice, even though I mostly use Python now.
I hope that gives you an answer from someone who lived it.

Why is the Lisp community so fragmented? [closed]

To begin, not only are there two main dialects of the language (Common Lisp and Scheme), but each of the dialects has many individual implementations. For example, Chicken Scheme, Bigloo, etc... each with slight differences.
From a modern point of view this is strange, as languages these days tend to have definitive implementations/specs. Think Java, C#, Python, Ruby, etc, where each has a single definitive site you can go to for API docs, downloads, and such. Of course Lisp predates all of these languages. But then again, even C/C++ are standardized (more or less).
Is the fragmentation of this community due to the age of Lisp? Or perhaps different implementations/dialects are intended to solve different problems? I understand there are good reasons why Lisp will never be as united as languages that have grown up around a single definitive implementation, but at this point is there any good reason why the Lisp community should not move in this direction?
The Lisp community is fragmented, but everything else is too.
Why are there so many Linux distributions?
Why are there so many BSD variants? OpenBSD, NetBSD, FreeBSD, ... even Mac OS X.
Why are there so many scripting languages? Ruby, Python, Rebol, TCL, PHP, and countless others.
Why are there so many Unix shells? sh, csh, bash, ksh, ...?
Why are there so many implementations of Logo (>100), Basic (>100), C (countless), ...
Why are there so many variants of Ruby? Ruby MRI, JRuby, YARV, MacRuby, HotRuby?
Python may have a main site, but there are several slightly different implementations: CPython, IronPython, Jython, Python for S60, PyPy, Unladen Swallow, CL-Python, ...
Why is there C (Clang, GCC, MSVC, Turbo C, Watcom C, ...), C++, C#, Cilk, Objective-C, D, BCPL, ... ?
Just let some of them reach fifty and see how many dialects and implementations they have then.
I guess Lisp is diverse, because every language is diverse or gets diverse. Some start with a single implementation (McCarthy's Lisp) and after fifty years you got a zoo. Common Lisp even started with multiple implementations (for different machine types, operating systems, with different compiler technology, ...).
Nowadays Lisp is a family of languages, not a single language. There is not even consensus what languages belong to that family or not. There might be some criteria to check (s-expressions, functions, lists, ...), but not every Lisp dialect supports all these criteria. The language designers have experimented with different features and we got many, more or less, Lisp-like languages.
If you look at Common Lisp, there are about three or four different active commercial vendors. Try to get them behind one offering! Won't work. Then you have a bunch of active open source implementations with different goals: one compiles to C, another one is written in C, one tries to have a fast optimizing compiler, one tries to have some middle ground with native compilation, one is targeting the JVM ... and so on. Try to tell the maintainers to drop their implementations!
Scheme has around 100 implementations. Many are dead, or mostly dead. At least ten to twenty are active. Some are hobby projects. Some are university projects, some are projects by companies. The users have diverse needs. One needs a real-time GC for a game, another one needs embedding in C, one needs only barebones constructs for educational purposes, and so on. How do you tell the developers to stop hacking on their implementations?
Then there are some who don't like Common Lisp (too big, too old, not functional enough, not object-oriented enough, too fast, not fast enough, ...). Some don't like Scheme (too academic, too small, does not scale, too functional, not functional enough, no modules, the wrong modules, not the right macros, ...).
Then somebody needs a Lisp combined with Objective-C, then you get Nu. Somebody hacks some Lisp for .net. Then you get some Lisp with concurrency and fresh ideas, then you have Clojure.
It's language evolution at work. It is like the Cambrian explosion (when lots of new animals appeared). Some will die, others will live on, and some new ones will appear. At some point some dialects appear that pick up the state of the art (Scheme for everything with functional programming in Lisp in the 70s/80s, and Common Lisp for everything MacLisp-like in the 80s) - which caused some dialects (namely Standard Lisp, InterLisp, and others) to mostly disappear.
Common Lisp is the alligator of Lisp dialects. It is a very old design (hundred million years) with little changes, looks a little bit frightening, and from time to time it eats some young...
If you want to know more, The Evolution of Lisp (and the corresponding slides) is a very good start!
I think it is because "Lisp" is such a broad description of a language. The only things common to all the Lisps I know are that most things are in brackets and that they use prefix function notation. E.g.
(fun (+ 3 4))
However, nearly everything else can vary between implementations. Scheme and CL are completely different languages and should be considered as such.
I think calling the Lisp community fragmented is like calling the "C-like" community fragmented. It has C, C++, D, Java, C#, Go, JavaScript, Python and many other languages I can't think of right now.
In summary: Lisp is more of a language property (like garbage collection or static typing) than an actual language implementation, so it is completely normal that there are many languages with the Lisp-like property, just as many languages have garbage collection.
I think it's because Lisp was born out of, and maintains the spirit of, the hacker culture. The hacker culture is to take something and make it "better" according to your belief in "better".
So when you have a bunch of opinionated hackers and a culture of modification, fragmentation happens. You get Scheme, Common Lisp, ELISP, Arc. These are all pretty different languages, but they're all "Lisp" at the same time.
Now why is the community fragmented? Well, I'll blame time and maturity on that. The language is 50 years old! :-)
Scheme and Common Lisp are standardized. SBCL seems like the de facto open source Common Lisp, and there are plenty of examples out there on how to use it. It's fast and free. ClozureCL also looks pretty darn good.
PLT Scheme seems like the de facto open source Scheme, and there are plenty of examples out there on how to use it. It's fast and free.
The CL HyperSpec seems as good as the JavaDoc to me.
As far as community fragmentation goes, I think it has little to do with standards or resources. I think it has far more to do with what was, until recently, a relatively small community.
Clojure I think has a good chance to become The Lisp for the new generation of coders.
Perhaps my point is a very popular implementation is all that is required to give the illusion of a cohesive community.
LISP is not nearly as fragmented as BASIC.
There are so many dialects and versions of BASIC out there I have lost count.
Even the most commonly used implementation (MS VB) is different between versions.
The fact that there are many implementations of Common LISP should be considered a good thing. In fact, it is remarkable that there are roughly the same number of free implementations of Common LISP as there are of C++, considering the relative popularity of the two languages.
Free Common LISP implementations include CMU CL, SBCL, OpenMCL / Clozure CL, CLISP, GCL and ECL.
Free C++ implementations include G++ (with Cygwin and MinGW32 variants), Digital Mars, Open Watcom, Borland C++ (legacy?) and CINT (interpreter). There are also various STL implementations for C++.
With regard to Scheme and Common LISP, although it is admittedly an inaccurate analogy, there are times when I would say that Scheme is to Common LISP what C is to C++, i.e. while Scheme and C are small and elegant, Common LISP and C++ are large and (arguably) better suited to larger applications.
Having many implementations is beneficial, because each implementation is optimal in unique places. And modern mainstream languages don't have one implementation anyway. Think about Python: its main implementation is CPython, but thanks to Jython you can run Python on the JVM too; thanks to Stackless Python you can have massive concurrency through microthreads; etc. Such implementations will be incompatible in some ways: Jython integrates seamlessly with Java, whilst CPython doesn't. Same for Ruby.
What you don't want is having many implementations which are incompatible to the bone. That's the case with Scheme, where you can't share libraries among implementations without rewriting a lot of code, because Schemers can't agree on how to import/export libraries. Common Lisp libraries, OTOH, because of standardization in core areas, are more likely to be portable, and facilities exist to conditionally write code handling each implementation's peculiarities. Actually, nowadays you may say that Common Lisp is defined by its implementations (think about the ASDF package installation library), just like mainstream languages.
Two possible contributing factors:
Lisp languages aren't hugely popular in comparison to other languages like C/C++/Ruby and so on - that alone may give the illusion of a fragmented community. There may be equal fragmentation in the other language communities, but a larger community will have larger fragments.
Lisp languages are easier than most to implement. I've seen many, many "toy" Lisp implementations people have made for fun, and many "proper" Lisp implementations made to solve specific tasks. There are far more Lisp implementations than there are, say, Python interpreters (I'm aware of about five, most of which are generally interchangeable).
There are promising projects like Clojure, which is a new language, with a clear goal (concurrency), without much "historical baggage", easy to install/setup, can piggyback on Java's library "ecosystem", has a good site with documentation/libraries, and has an official mailing list. This pretty much checks off every issue I encountered while trying to learn Common Lisp a while ago, and encourages a more centralised community.
My point of view is that Lisp is a small language, so it is easy to implement (compared to Java, C#, C, ...).
Note: many comments point out that it is indeed not that small, but that misses my point. Let me try to be more precise: it boils down to the price of entry. Building a VM that compiles some well-known mainstream language is quite hard compared to building a VM that deals with Lisp. That makes it easy to build a small community around a small project. The libraries and the spec may or may not be fully implemented, but the value proposition is still there. Clojure is a typical example, where R5RS is certainly not in scope.

Is learning LISP useful at all these days? [closed]

I picked up a LISP book at a garage sale the other day and was just wondering if it was worth spending some time on.
Yes. I'll stick to Common Lisp, here, though Scheme is also a superb language that has a lot to recommend it.
In Common Lisp, you have a largish multi-paradigm language that provides some things that either don't exist widely outside the Lisp family of languages, or are limited to CL and even more obscure/niche languages.
The first feature, which you can get in one way or another from CL, Scheme and quite a few other dialects, is a real macro system.
I say "real" because the system is much more complete, flexible and reliable than, say, C preprocessor macros. It's extremely difficult to get CPP macros to do even simple things (like swapping the values of two variables, or making a foreach construct) in a reliable fashion, but these are trivial with Lisp macros. This turns out to be a very powerful tool for introducing new abstractions and dispensing with "boilerplate" code.
The second feature, which is effectively limited to Common Lisp, is CLOS, the Common Lisp Object System. Despite the name, it's not a conventional OO system like that of Java with methods being part of a class's definition. Instead, it provides polymorphism through "generic functions" which are what methods are attached to, and by default allow you to do multiple dispatch.
I vastly prefer CLOS to the more usual approach to object orientation, as it makes a number of "patterns" (like the Visitor pattern) completely unnecessary and because extension of existing generic functions is so easy; others loathe it because it takes an extremely cavalier approach to encapsulation and because extension of generic functions becomes arguably too easy. Either way, CLOS is different enough that I think it's worth learning just for the different perspective it provides.
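For readers without a Lisp to hand, Python's functools.singledispatch gives a pale but runnable shadow of the idea: methods attach to the generic function rather than to a class definition (though it dispatches on one argument only, where CLOS dispatches on all of them). A hypothetical example:

from functools import singledispatch

# A generic function in the CLOS spirit: the function owns its methods.
@singledispatch
def describe(obj):
    return f"some object: {obj!r}"          # default method

@describe.register
def _(obj: int):
    return f"an integer, value {obj}"

@describe.register
def _(obj: list):
    return f"a list of {len(obj)} elements"

print(describe(42))        # an integer, value 42
print(describe([1, 2]))    # a list of 2 elements
print(describe("hi"))      # some object: 'hi'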
The third feature, which is available outside of Lisp but still fantastic if you've never experienced it before is dynamic, interactive programming. CL debuggers tend to be extremely powerful tools, and CL provides for dynamic definition and redefinition of functions, classes and methods, all of which dramatically improves one's ability to explore a problem, test solutions of that problem and its subproblems, and finally put together a program that works correctly and efficiently.
Lastly, for a lot of classes of problems, Lisp is a great practical language. It provides good performance (usually not as fast as C, but dramatically faster than most "scripting languages"), safety, automatic memory management, a decent "standard library" of functions and tremendous opportunities for easy extension.
It is worth learning for "mind-expansion" purposes but not so popular for building apps these days.
However, it is powerful, and mature, and there are fast and free compilers out there. So there is no reason not to choose it for a program if you like.
The way in which Lisp treats data structures and program structures the same offers amazing power which is worth understanding.
Its history is fascinating and it has shaped the world of computer science.
Be sure to check out Abelson and Sussman's Structure and Interpretation of Computer Programs at MIT OpenCourseWare
Clojure: a Lisp on .NET and Java VMs
GNU CLISP.
CMU Common Lisp
Depends on the book. Which book?
Common Lisp is worth learning today because it's one of the few languages that pretty much "does everything". If there's some mainstream or obscure programming idiom or technique, odds are Common Lisp has it already in some form. About the only thing CL lacks is continuations (many argue it doesn't need them, but that's not helpful if you want to explore them).
Anyone spending any serious time writing in Common Lisp will come out Changed in some way, typically for the better, IMHO.
Even if you can't carry all of the Lispy concepts you learn and use in to other environments, knowing about them and how they work is still useful.
Good programmers expose themselves to as many different programming paradigms as they can - not as many programming languages as they can.
The LISP family (there are several variants) is very much worth getting to know. Your objective should be getting your head wrapped around functional programming and the lambda calculus - the paradigm that LISP is based on. Focus less on becoming an "ace" LISP programmer (that could take years).
If you find functional programming "flips your switch", try having a look at PROLOG too - here the paradigm is based on evaluation of Horn Clauses (predicate logic).
I may have spent the last 20 years earning a living as a COBOL programmer (OMG - they still have those!), but I think I am a better programmer because of the time spent learning what LISP, and a number of other programming languages were really all about.
Have a blast...
Here's a Google Tech talk worth watching about a company currently producing large, complex software for the airline industry in Common Lisp:
Lisp for High-Performance Transaction Processing (The video is now unavailable. Here's some note by Zach Beane.)
Other topics are mentioned in that video, including Clojure, a new variant of Lisp for the JVM (some work is now being done to develop Clojure for the CLR too, but that is not as far along), which is worth checking out for the way it addresses concurrency issues. See the Clojure site at:
http://clojure.org/
and, in particular at first, check out the link in the upper right on that page to some excellent Screencasts with overviews of the concurrency issues and Clojure features.
If you get interested in Lisp, and the book you found is not that great (I have an old one myself that didn't do much for me), Paul Graham's book On Lisp is available free at http://www.paulgraham.com/onlisp.html and is very good. The general Lisp idea is the same for Common Lisp or Scheme or Emacs Lisp or Clojure, but the specifics will be different - so keep that in mind if reading Graham's book, which focuses mostly on Common Lisp (with some mentions of Scheme specifics.) On Lisp is probably not the best beginner book, but it's worth going through it and just skimming over specifics you're not ready to follow in detail yet to see what is there, particularly with regard to macros, which On Lisp really explores.
One benefit of Lisp is that you develop an appreciation for prefix notation and wonder why everyone else in the world doesn't use it too (as with LaTeX or vim)
+ 1 2 3 4 5 6 7 8 9
is much easier to code/edit/paste than
1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9
See this question, and especially the third answer, the one that explains that Lisp is good for solving complicated problems, those problems that are hard to decompose into more manageable modules if you were to try to solve them in another language. In other words, if you are keen on exploring ways to solve convoluted problems, that involve, let's say Natural Language Processing, or Knowledge Aggregation... then, yes, Lisp might just be useful to you.
Learn lisp to learn about macros, and that code is data, and to learn that you can reach enlightenment w/o the self-flagellation of C++.
Learn Common Lisp to learn about reader macros and compiler macros. I don't know any other language that has them.
Learn scheme for continuations.
Learn Clojure because it's going to make Java obsolete :-)
It (Common Lisp) is still heavily used by academics working in Artificial Intelligence. Scheme is a Lisp-like language used by many (most?) CS departments as well. Personally I think learning Lisp is worthwhile whether or not you end up using it. It's a classic language that we've learned a great deal from over time.
Learning LISP is a good way to learn functional programming effectively, and it is often used as an introductory language for undergraduate students. Many people feel that Structure and Interpretation of Computer Programs, which uses the Scheme dialect of Lisp, is a book that should be on every programmer's shelf.
Paul Graham has been a big proponent of Lisp, and in his book, Hackers and Painters, he describes how he used the power of Lisp to dominate the competition in creating ViaWeb for Yahoo Stores.
Elsewhere, I've seen Lisp dialects used prominently in the aerospace industry, as scripting tools for integration frameworks like Comet, and AML. Lisp will always be tied to the early AI experiments in the 1950's.
As others have alluded to, Lisp isn't that popular anymore for general programming, and it definitely (IMO) has some major problems for writing real systems in, but of course others disagree (for instance much of ITA's software is written in Lisp, and they make crazy bank).
Even if you never write a 'real' program in Lisp, it is absolutely worth learning. There are many programming techniques originally pioneered in Lisp that, knowing them, will help you write better code in Python, Perl, Ruby, ML, Haskell, and even C++. For instance, check out Higher Order Perl, which shows how to do all kinds of amazing tricks in Perl; to quote the introduction "Instead of telling you how wonderful Lisp is, I will tell you how wonderful Perl is, and in the end you will not have to know any Lisp, but you will know a lot more about Perl. [...] Then you can stop writing C programs in Perl. I think you will find it to be a nice change. Perl is much better at being Perl than it is at being a slow version of C."
And there are some great books out there that use Lisp, and it will be easier to understand them if you know the language - SICP, Norvig's Paradigms of Artificial Intelligence Programming, and The Reasoned Schemer all come to mind as must-reads.
Some people here are recommending Clojure. While I would say that Lisp in general is good, I would caution against Clojure, at least for beginners. This is not out of any difficulty using Clojure, just in the fact that Clojure is gratuitously inconsistent with Common Lisp and Scheme. If you want JVM integration, use Armed Bear Common Lisp: https://common-lisp.net/project/armedbear/ , though for general use, I would recommend Steel Bank Common Lisp: http://www.sbcl.org/
This is like asking, "Is it worth learning C in these days of web programming?" Only you can decide if it's worth it. What you have to ask yourself is: what am I trying to achieve by reading the book?
Learning new languages sometimes isn't useful in the practical sense of things (maybe you're never going to use LISP in your life), but in the long term it will be useful because of the knowledge acquired through paradigms you aren't too familiar with - and you can apply some of what you learn to the languages you already use today.
It's been a while since university, but after I took a third-year CS course that required learning Lisp and writing Lisp programs, writing and thinking recursively was a snap. Not that I had problems with recursion before the course, but afterwards it was second nature. I also used CLOS in the course (University of Toronto), but it was so long ago that I barely remember what I did.