Closed. This question is opinion-based. It is not currently accepting answers.
Lisp is known as the first high-level interpreted programming language, appearing in 1958.
Quoting from Interpreter (computing):
The first interpreted high-level language was Lisp. Lisp was first
implemented in 1958 by Steve Russell on an IBM 704 computer.
But there are older high-level programming languages that were interpreted as well, such as:
Short Code: appeared in 1950. Quoting from Wikipedia:
The language was interpreted and ran about 50 times slower than
machine code.
Speedcoding: developed by John W. Backus (the creator of Fortran) in 1953 for the IBM 701. Quoting from Wikipedia:
The speedcoding system was an interpreter and focused on ease of use
at the expense of system resources.
Is it rather the case that Lisp is the first successful high-level interpreted programming language? And why in particular do people consider it the first interpreted language?
I suppose the answer depends on the definition of a "high-level programming language". Even though Wikipedia does call Speedcoding such, the description gives no indication that it provided
strong abstraction from the details of the computer.
All it apparently offered were
pseudo-instructions for common mathematical functions: logarithms, exponentiation, and trigonometric operations
which is a far cry from the sophisticated language constructs (functions, loops, conditionals) commonly associated with "high-level".
That said, this question probably belongs on Retrocomputing.
Closed. This question needs to be more focused. It is not currently accepting answers.
At my work I wrote a little parser for C-like expressions in one of our in-house Perl tools. I initially chose Parse::RecDescent because of its extreme ease of use and straightforward grammar syntax, but I'm finding that it's excessively slow (which is corroborated by general opinion found on the web). It's safe to assume that the grammar of the expressions is no more complicated than that of C.
What are the fastest (but still with a straightforward and uncumbersome grammar format) lexer/parser modules for the use case of thousands of simple expressions (I'd guesstimate the median length is 1 token, the mean is 2 or so, and the max is 30)? Additionally, thanks to unsavory IT choices, it must work in Perl 5.8.8, and it and any non-core dependencies must be pure Perl.
Parse::Eyapp looks like it satisfies the 5.8.8, pure-Perl, and dependency requirements. As for speed, it provides LALR parsers, which should be faster than recursive descent. A grammar for expressions is given in the docs. Hope it helps.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
I can imagine that IDEs like Eclipse, written in Java, are good for writing Java programs because the tool and the language are tightly integrated. So if someone said "Emacs is good for writing Emacs Lisp programs", it would make sense to me in the same way.
But people say Emacs is good for all Lisp dialects, as if all Lisp dialects and Emacs were naturally tightly integrated. Not just Emacs Lisp, not just any language, not just any language that uses a REPL, but all Lisp dialects. Why? Is there anything in Emacs Lisp that is shared between all Lisp dialects, but not really present in other languages -- including all those using a REPL -- that is so unique that only Lisp dialects could benefit from it? Could you come up with some examples?
You're right that there's little reason Emacs would have any technical benefit as a generic Lisp environment. It's not completely zero, though: Emacs has support that makes it easy to parse most Lisps, since their surface syntax (S-expressions) is usually very similar. But this is still just a technicality.
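To make the S-expression point a bit more concrete, here is a small illustrative sketch of my own (not from the original answer): Emacs's built-in sexp motion commands handle the text the same way whether it happens to be Scheme, Common Lisp, or Elisp.

    ;; The buffer text below happens to be Scheme, but `forward-sexp' and
    ;; `backward-sexp' treat it exactly as they would Elisp, because the
    ;; surface syntax is the same balanced S-expressions.
    (with-temp-buffer
      (insert "(define (square x) (* x x))")
      (goto-char (point-min))
      (forward-sexp)   ; moves over the whole top-level form
      (backward-sexp)  ; and back to its opening paren
      (point))         ; => 1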
The real reason is that Emacs comes from times when there was much less variance among Lisps (in the syntactic sense; semantics is a different issue), so it was basically the only choice of a sophisticated Lisp editing environment. As such, it enjoys the benefit of literally decades of Lispers using it for Lisp editing, so that was always one of its stronger points -- and on the other hand, very few Lispers chose other editors, so those editors suffered from the natural lack of demand. (IIRC, up until not too long ago even vi clones had little more than basic paren matching.)
More than that, even these days there's a strong relationship since reading elisp is much easier for your J Random Lisper, so they are still the crowd that finds it easy to extend Emacs. You could make the exact same point for Java & Eclipse, of course -- both are examples of tools that are very extensible, and therefore are very entrenched in their respective communities.
When it gets to executing code, with most lisps (= all except elisp) the work is done in a subprocess so it's no different than using any other editor -- technically speaking.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
I want to learn the Lisp language; since my editor is Emacs, I prefer Emacs Lisp.
Can anyone give me some suggestions for learning Lisp -- Emacs Lisp or Common Lisp?
What are the major differences between the two?
There's quite a bit of crossover, especially at the beginner level, so whichever you start with will mostly transfer to the other.
Some of the major differences:
Elisp traditionally used dynamic scoping rules; Common Lisp uses lexical scoping. With dynamic scoping, a function can access local variables declared in the functions that call it; this approach has generally fallen out of favor. Starting with Emacs 24, Emacs allows optional lexical scoping on a file-by-file basis (and the files in the core distribution are progressively being converted). See the sketch after this list.
Dynamically scoped Elisp doesn't have closures, which makes composing functions and currying difficult. There is an apply-partially function that works similarly to currying. Note that the lexical-let form (from the cl package) and true lexical binding since Emacs 24 also make it possible to produce closures.
Much of the Common Lisp library that has been built up over time isn't available in Elisp; a subset is provided by the Elisp cl package.
Elisp doesn't do tail-call optimization.
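To make the scoping and closure points concrete, here is a minimal sketch of my own (illustrative only, assuming Emacs 24 or later for the lexical-binding part); the comments note what each form evaluates to:

    ;; -*- lexical-binding: t; -*-

    ;; A `defvar'-declared variable is special, i.e. dynamically bound:
    ;; a function sees whatever binding is in effect in its caller.
    (defvar threshold 10)

    (defun over-threshold-p (n)
      (> n threshold))                     ; `threshold' looked up at call time

    (let ((threshold 100))
      (over-threshold-p 50))               ; => nil, it sees the caller's 100

    ;; With lexical binding (per-file since Emacs 24), lambdas capture their
    ;; environment, so closures work much as they do in Common Lisp.
    (defun make-adder (n)
      (lambda (x) (+ x n)))                ; closes over N

    (funcall (make-adder 3) 4)             ; => 7

    ;; `apply-partially' gives a similar partial-application effect:
    (funcall (apply-partially #'+ 3) 4)    ; => 7

In a file without the lexical-binding cookie, the make-adder example would fail: the returned lambda would look up n dynamically at call time and find it unbound.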
These Emacs-Wiki pages offer some info about the relation between the two Lisps and their differences:
http://www.emacswiki.org/emacs/CommonLisp
http://www.emacswiki.org/emacs/EmacsLisp
http://www.emacswiki.org/emacs/EmacsLispLimitations
Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 8 years ago.
Improve this question
What's the fastest Perl template library that allows me to do the following:
variable substitution,
loops (Hashes & Arrays),
layout (wrapper templates)
and at least some conditional logic (< > != == %).
Also, has anybody used plTenjin? The benchmarks suggest it is pretty rapid.
I recommend the Xslate template engine (http://xslate.org/); it's about 50-100 times faster than the others. Please see these comparative benchmarks: http://xslate.org/benchmark.html
The engine supports template tokens ('[%', '%]') compatible with Template Toolkit (another template engine), so you can use directives like INCLUDE, FOREACH, WHILE, ...
No, I didn't use plTenjin. From my experience, it looks almost like HTML::Mason minus Mason's nice block syntax.
What site do you manage that is able to saturate a modern CPU during template processing? I don't think this happens easily. In most cases, the bottlenecks to site performance lie elsewhere than CPU-bound template processing.
(BTW, from what I read in the plTenjin docs, you should give HTML::Mason a try.)
Regards,
rbo
Closed. This question is opinion-based. It is not currently accepting answers.
I have done a couple of research jobs in bioinformatics, and I used Matlab for them. Matlab had a lot of powerful tools and was easy to use. I did things with genome sequencing and predicting metabolic pathways. I am wondering what other people think is best? Or perhaps there isn't one specific language, but a few that lend themselves best to bioinformatics work that is math-heavy and deals with a large amount of data.
You'll likely be interested in this thread over at BioStar:
Which are the best programming languages to study for a bioinformatician?
For most of us bioinformaticians, this includes Python, R, Perl, and bash command line utilities (like sed, awk, cut, sort, etc). There are also people who code in Java, Ruby, C++, and Matlab.
So the bottom line? Whichever language lets you get the work done most easily is the right one for you. Answering this question should include a careful survey of the libraries and other code that you can pull from, as well as information on your own preferences and experience. If you're doing microarray analysis, it's hard to beat the R/bioconductor libraries, but that's absolutely the wrong language for someone wrangling most types of large sequencing data sets.
There's no one right language for bioinformatics.
The important BLAST sequence-search tool is written in C++
The MATT tool for aligning protein structures is written in C
Some of my colleagues in computational biology use Ruby.
In general, I see a lot of C and C++ for performance-critical code and a lot of scripting languages otherwise.
Python + scipy are decent (and FREE).
http://www.vetta.org/2008/05/scipy-the-embarrassing-way-to-code/
http://www.google.com/search?hl=en&source=hp&q=python+bioinformatics&aq=0&aqi=g9g-m1&aql=&oq=python+bio&gs_rfai=CeE1nPpMNTN2IJZ-yMZX6pcIKAAAAqgQFT9DLSgo
You do not really even need to learn new syntax when dropping Matlab for SciPy.
Best or not, SAS is the de facto programming environment in biopharmas. If you were to work for the Pfizers, Mercks, and Bayers of the world in bioinformatics, you had better have SAS skills. SAS programmers are in great demand.
What's the "best" language is both subjective and potentially different from task to task, but for bioinformatic work, I personally use R, Perl, Delphi and C (quite frequently a combination of several of these).
I work mainly with HMMs and protein sequences. I started out writing in C, but have since switched to Python, which I'm happy with. I find it easier to prototype something quickly, and it results in easier-to-maintain code.
Here's a freely available academic paper written on the subject that evaluates the different languages, and in different situations: http://www.biomedcentral.com/1471-2105/9/82
They grouped 6 commonly used languages into 3 different levels.
2 compiled languages: C, C++
2 semi-compiled languages: C#, Java
2 interpreted languages: Perl, Python
Some general conclusions:
Compiled languages outperformed interpreted languages in global alignments and Neighbour-Joining programs
Interpreted languages generally used more memory
All languages performed roughly the same for BLAST computations, except for Python
Compiled languages require more written lines of code to perform the same tasks
Compiled languages tend to be better for algorithm implementation
Interpreted languages tend to be better for file parsing/manipulation
Here's another good free academic article discussing ways to build bioinformatics skills: http://dx.plos.org/10.1371/journal.pcbi.1000589