At my work I wrote a little parser for C-like expressions in one of our in-house Perl tools. I initially chose Parse::RecDescent because of its extreme ease of use and straightforward grammar syntax, but I'm finding that it's excessively slow (which is corroborated by general opinion found on the web). It's safe to assume that the grammar of the expressions is no more complicated than that of C.
What are the fastest (but still with a straightforward and uncumbersome grammar format) lexer/parser modules for the use case of thousands of simple expressions (I'd guesstimate the median length is 1 token, the mean is 2 or so, and the max is 30)? Additionally, thanks to unsavory IT choices, it must work in Perl 5.8.8, and it and any non-core dependencies must be pure Perl.
Parse::Eyapp looks like it satisfies the 5.8.8, pure-Perl, and dependency requirements. As for speed, it generates LALR parsers, which should be faster than recursive descent. A grammar for expressions is given in the documentation; hope it helps.
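For reference, here is a rough sketch of what that might look like, adapted from memory from the Parse::Eyapp synopsis; the class name ExprParser, the tiny regex lexer, and the driver calls (new_grammar, YYParse, YYData) should be double-checked against the module's documentation:

    use strict;
    use warnings;
    use Parse::Eyapp;

    # Yacc-like grammar for simple arithmetic expressions (sketch only).
    my $grammar = q{
      %left '+' '-'
      %left '*' '/'

      %%
      exp:
          NUM           { $_[1] }
        | exp '+' exp   { $_[1] + $_[3] }
        | exp '-' exp   { $_[1] - $_[3] }
        | exp '*' exp   { $_[1] * $_[3] }
        | exp '/' exp   { $_[1] / $_[3] }
        | '(' exp ')'   { $_[2] }
      ;
      %%

      # Hand-written lexer: returns one (TOKEN, value) pair per call.
      sub _Lexer {
        my ($parser) = @_;
        for ($parser->YYData->{INPUT}) {
          m{\G\s+}gc;
          return ('', undef) if (pos($_) || 0) >= length($_);
          return ('NUM', $1) if m{\G(\d+(?:\.\d+)?)}gc;
          return ($1, $1)    if m{\G(.)}gcs;
        }
      }

      sub _Error { die "Syntax error\n" }
    };

    # Compile the grammar into a parser package at run time.
    Parse::Eyapp->new_grammar(
      input     => $grammar,
      classname => 'ExprParser',
    );

    my $parser = ExprParser->new();
    $parser->YYData->{INPUT} = '1 + 2 * (3 - 4)';
    print $parser->YYParse(
      yylex   => \&ExprParser::_Lexer,
      yyerror => \&ExprParser::_Error,
    ), "\n";    # prints -1

Eyapp can also pre-compile the grammar to a .pm file with the eyapp command, which avoids the run-time grammar compilation cost when you are parsing thousands of expressions.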
Lisp is known as the first high-level interpreted programming language, appearing in 1958.
Quoting from Interpreter (computing):
The first interpreted high-level language was Lisp. Lisp was first
implemented in 1958 by Steve Russell on an IBM 704 computer.
But there are older high-level programming languages that were also interpreted, such as:
Short Code: appeared in 1950. Quoting from Wikipedia:
The language was interpreted and ran about 50 times slower than
machine code.
Speedcoding: developed by John W. Backus (the creator of Fortran) in 1953 for the IBM 701. Quoting from Wikipedia:
The speedcoding system was an interpreter and focused on ease of use
at the expense of system resources.
Is it rather the case that Lisp is the first successful high-level interpreted programming language? Why in particular do people consider it the first interpreted language?
I suppose the answer depends on the definition of a "high-level programming language". Even though Wikipedia does call Speedcoding such, the description gives no indication that it provided
strong abstraction from the details of the computer.
All it apparently offered were
pseudo-instructions for common mathematical functions: logarithms, exponentiation, and trigonometric operations
which is a far cry from sophisticated language constructs (functions, loops, conditionals) that are commonly associated with "high-level".
That said, this question probably belongs on Retrocomputing.
I have a doubt: does the Scala interpreter (REPL) compile a command and have the JVM run it, the same way scalac compiles a program?
Simply put, does it work like any other normal interpreter?
The standard Scala interpreter, as experienced in the REPL for example, is a variant of the compiler: it takes input, wraps it in an invisible object, compiles it on the fly (like any other regular Scala program), and then runs the body of that object.
The Scala Meta project might offer a different approach: more direct interpretation without going through full compilation to bytecode.
Could you please recommend Java libraries for text preprocessing and clean-up? The library should perform tasks such as:
convert all verbs to infinitive
convert all nouns to singular form
remove useless (for the sense of a text) words
Converting words to canonical forms (verbs to infinitives and nouns to singular, for example) is called lemmatization. One Java-based lemmatizer is Stanford CoreNLP.
For "useless words" you probably want "stop words" - there's no standard list, but there's a lot floating around the Internet which function in more or less the same way with the only difference being how many words they include (typically between 100 and 1000). I've known people to use this list before. When removing stop words, remember to ignore case when looking for matches.
Not sure if this does everything you need, but check out mrsqg.
http://code.google.com/p/mrsqg/
What's the fastest Perl template library that allows me to do the following:
variable substitution,
loops (Hashes & Arrays),
layout (wrapper templates)
and at least some conditional logic (< > != == %).
Also, has anybody used plTenjin? The benchmarks suggest it is pretty rapid.
I recommend the Xslate template engine (http://xslate.org/); it is claimed to be about 50-100 times faster than the others. Please see these comparative benchmarks: http://xslate.org/benchmark.html
The engine supports Template Toolkit-compatible template tokens ('[%', '%]'), and you can use commands like INCLUDE, FOREACH, WHILE, and so on.
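For illustration, here is a minimal sketch using that TTerse syntax; the template and the variable names (title, users) are invented for the example, and render_string is used for brevity (for real use you would normally load templates from files with render() so the compiled form can be cached):

    use strict;
    use warnings;
    use Text::Xslate;

    # Ask for the Template Toolkit-compatible "TTerse" dialect instead of
    # the default Kolon syntax.
    my $tx = Text::Xslate->new(syntax => 'TTerse');

    my $template = q{
    [% IF title %]<h1>[% title %]</h1>[% END %]
    <ul>
    [% FOREACH user IN users %]
      <li>[% user.name %] ([% user.role %])</li>
    [% END %]
    </ul>
    };

    print $tx->render_string($template, {
        title => 'Team',
        users => [
            { name => 'Alice', role => 'admin' },
            { name => 'Bob',   role => 'editor' },
        ],
    });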
No, I didn't use plTenjin. From my experience, this looks almost like HTML::Mason minus the nice block syntax of Mason.
What site do you manage which is able to saturate any modern CPU during template processing? I don't think this would happen easily. In most cases, there are different bottlenecks to site performance than any CPU-bound template processing.
(BTW, from what I read in the plTenjin doc, you should give HTML::Mason a try.)
Regards,
rbo
I have done a couple of research jobs in bioinformatics, and I used Matlab for them. Matlab had a lot of powerful tools and was easy to use. I did things with genome sequencing and predicting metabolic pathways. I am wondering what other people think is best? Or perhaps there isn't one specific language, but a few that lend themselves best to bioinformatics work that is math-heavy and deals with a large amount of data.
You'll likely be interested in this thread over at BioStar:
Which are the best programming languages to study for a bioinformatician?
For most of us bioinformaticians, this includes Python, R, Perl, and bash command line utilities (like sed, awk, cut, sort, etc). There are also people who code in Java, Ruby, C++, and Matlab.
So the bottom line? Whichever language lets you get the work done most easily is the right one for you. Answering this question should include a careful survey of the libraries and other code that you can pull from, as well as information on your own preferences and experience. If you're doing microarray analysis, it's hard to beat the R/bioconductor libraries, but that's absolutely the wrong language for someone wrangling most types of large sequencing data sets.
There's no one right language for bioinformatics.
The important BLAST sequence alignment tool is written in C++
The MATT tool for aligning protein structures is written in C
Some of my colleagues in computational biology use Ruby.
In general, I see a lot of C and C++ for performance-critical code and a lot of scripting languages otherwise.
Python + scipy are decent (and FREE).
http://www.vetta.org/2008/05/scipy-the-embarrassing-way-to-code/
http://www.google.com/search?hl=en&source=hp&q=python+bioinformatics&aq=0&aqi=g9g-m1&aql=&oq=python+bio&gs_rfai=CeE1nPpMNTN2IJZ-yMZX6pcIKAAAAqgQFT9DLSgo
You do not even really need to learn new syntax when dropping Matlab for SciPy.
Best or not, SAS is the de facto programming environment in biopharmas. If you were to work for the Pfizers, Mercks, and Bayers of the world in bioinformatics, you had better have SAS skills. SAS programmers are in great demand.
What's the "best" language is both subjective and potentially different from task to task, but for bioinformatic work, I personally use R, Perl, Delphi and C (quite frequently a combination of several of these).
I work mainly with HMMs and protein sequences. I started out writing in C, but have since switched to Python, which I'm happy with. I find it easier to prototype something quickly, and it results in easier-to-maintain code.
Here's a freely available academic paper on the subject that evaluates the different languages in different situations: http://www.biomedcentral.com/1471-2105/9/82
They grouped 6 commonly used languages into 3 different levels.
2 compiled languages: C, C++
2 semi-compiled languages: C#, Java
2 interpreted languages: Perl, Python
Some general conclusions:
Compiled languages outperformed interpreted languages in global alignments and Neighbour-Joining programs
Interpreted languages generally used more memory
All languages performed roughly the same for BLAST computations, except for Python
Compiled languages require more written lines of code to perform the same tasks
Compiled languages tend to be better for algorithm implementation
Interpreted languages tend to be better for file parsing/manipulation
Here's another good free academic article discussing ways to build bioinformatics skills: http://dx.plos.org/10.1371/journal.pcbi.1000589