I am looking for a Scheme implementation with a reasonable BLAS and LAPACK interface package, i.e. one that supports the API subset described in Golub and Van Loan's "Matrix Computations". This would include, at the very least, all the BLAS operations, the major decompositions (SVD, LU, QR, Cholesky) and, for convenience, least squares. I can see that Chicken and Racket have something, but it doesn't cover the above. Does anyone know of either an implementation or a portable library that accomplishes this?
I don't think I remember any Scheme implementation that supports that subset of BLAS and LAPACK (although I could be wrong), but if I were you I'd consider Chicken with its FFI (which is quite simple). You could even write some wrappers in Scheme, or, if you're willing to do so, contribute some enhancements to the two Chicken eggs (packages) blas and atlas-lapack.
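For instance, wrapping a level-1 BLAS routine with Chicken's FFI only takes a few lines. Here is a rough, untested sketch assuming a CBLAS installation (the header name and link flags vary by system; compile with something like csc ddot.scm -L -lcblas):

    ;; Chicken 4 sketch: bind cblas_ddot and compute a dot product
    (use srfi-4)  ; f64vector

    #>
    #include <cblas.h>
    <#

    ;; double cblas_ddot(int n, const double *x, int incx,
    ;;                   const double *y, int incy)
    (define ddot
      (foreign-lambda double "cblas_ddot"
                      int f64vector int f64vector int))

    (print (ddot 3 (f64vector 1.0 2.0 3.0) 1
                   (f64vector 4.0 5.0 6.0) 1))  ; => 32.0

A full BLAS/LAPACK wrapper is essentially this pattern repeated per routine, plus some dimension checking on the Scheme side.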
If you're using Chicken Scheme, you can use http://wiki.call-cc.org/eggref/4/atlas-lapack . Yes, you have to build the ATLAS/LAPACK libraries before installing this egg.
I'm looking for a function in Julia to estimate coefficients for an ARMA process.
For example, something using the Prediction Error Method, as pem and armax in Matlab (part of the System Identification Toolbox) do; see the pem documentation and armax documentation.
I've looked at the following packages, but can't see that they do what I'm looking for:
TimeSeries.jl
TimeModels.jl
One solution is of course to use Matlab.jl and call the Matlab functions, but I was hoping to do it all in Julia.
If there isn't anything right now, does anyone know if there are any good Julia functions for multidimensional numerical minimisation (like Newton-Raphson) that could be used to implement a PEM function?
UPDATE: I've just pushed a module to GitHub called RARIMA.jl. This module can be used to estimate, forecast, and simulate ARIMA models (of which ARMA is a special case). Some of the functions are implemented in Julia; others (particularly estimation) call equivalent R functions using the RCall package, which you will need to install and verify works prior to using RARIMA. The package isn't officially registered (yet), so Pkg.add("RARIMA") won't work for now. If you want to use RARIMA, instead try Pkg.clone("https://github.com/colintbowers/RARIMA.jl"). If this fails, you can file an issue on the repository's GitHub page, but be sure to check that RCall is installed and working before doing so. Cheers, I'll come back and update here if/when the package is officially registered.
ORIGINAL ANSWER: I just had a glance at the source, and TimeModels does not appear to have any functionality for estimating ARIMA models, although it does have one function for simulating them. Given time though, I suspect this will be the package that deals with ARIMA modelling. The TimeSeries package is more about building the TimeSeries object type than implementing time series models, so I would be surprised if ARIMA modelling were ever merged into that package.
As near as I can tell, at this point if you want a fully functioning ARIMA package you'll need to use Matlab or R. The R option is very good (see the forecast package written by Rob Hyndman) and is probably easier to interface with from Julia than the Matlab option. Of course, the other option is to start it yourself and merge the code into the TimeModels package :-)
In terms of optimization procedures, Julia has a fair few that are written in Julia, and they can be found under the JuliaOpt umbrella. The Optim package in particular is quite popular and well developed. However, most of the people I know who are really into this stuff use NLopt, a free, open-source library callable from many languages (including Julia). I have heard nothing but good things about this library from people who work with this stuff 24/7.
I am looking for numeric computation tooling on the JVM. My major requirements are expressiveness/readability, ease of use, interactive evaluation, and a rich set of mathematical functions. I guess I am after something like the Matlab kernel (probably including some basic libraries and without graphics) on the JVM. I'd like to be able to "throw" computational code at a running JVM and have this code evaluated. I don't want to worry about types. Arbitrary precision and performance are not so important.
I guess there are some nice libraries out there, but I think an appropriate language on top is needed to get the expressiveness.
Which tooling would you guys suggest for expressive, feature-rich numeric computation on the JVM?
From the jGroovyLab page:
The GroovyLab environment aims to provide a Matlab/Scilab-like scientific computing platform that is supported by a scripting engine implemented in the Groovy language. The GroovyLab user can work either with a Matlab-like command console or with a flexible editor based on the jsyntaxpane (http://code.google.com/p/jsyntaxpane/) component, which offers more convenient code development. GroovyLab also supports computer algebra based on the symja (http://code.google.com/p/symja/) project.
And there is also GroovyLab:
GroovyLab is a collection of Groovy classes to provide matlab-like syntax and basic features (linear algebra, 2D/3D plots). It is based on jmathplot and jmatharray libs:
Groovy has a smooth learning curve for Java programmers and a flexible syntax similar to Ruby. It is also pretty easy to write a DSL in it.
Though Groovy's performance is pretty good for a dynamic language, you can use static compilation if you need it.
Most of MathWorks' Matlab is built on the Intel Math Kernel Library (MKL), which is (IMHO) the unbeatable champion in linear algebra computations. There is Java support, but it costs 500 dollars (for the MKL itself, not just the Java support)...
The best second option, if you want to use Java, is jblas, which uses BLAS and LAPACK, the industry standards for linear algebra.
Pure Java libraries' performance is apparently horrible; see here...
Spire sounds like it's aiming at the area you're looking at. It takes advantage of a lot of recent Scala features, such as macros, to get decent performance without sacrificing the expressiveness of a high-level language.
There's also breeze, which is targeted at machine learning but includes a fair amount of linear algebra stuff.
Depending on how much work you want to put in and which languages you're already familiar with, Incanter in the Clojure world might be worth a look. Also quickly evolving in Clojure right now is core.matrix, which aims to encapsulate high-level common abstractions in linear algebra, implemented with various methods or packages.
You highlighted expressiveness in your post, and the nice thing about Clojure is that, as a Lisp, it is possible to make or extend DSLs to closely match problem domains. This is one of the big draws of the language (and of Lisps in general).
I'm the original author of core.matrix for Clojure, so I have a clear affinity and much more knowledge in this specific space. That said, I'm still going to try and give you an honest answer :-)
I was in the same position as you a year or so back, looking for a solution for numeric computation that would be scalable, flexible, and suitable for deployment as a clustered cloud service.
I ended up going with Clojure for the following reasons:
Functional Programming: Clojure is a functional programming language at heart, more so than most other languages (although not as much as Haskell...). Lazy infinite sequences, persistent data structures, immutability throughout, etc. make for elegant code when you are dealing with big computations.
Metaprogramming: I saw a need to do code generation for vector / computational expressions. Hence being a Lisp was a big plus: once you have done code generation in a homoiconic language with a "whole language" macro system, it's hard to find anything else that comes close.
Concurrency: Clojure has an impressive and novel approach to multi-core concurrency. If you haven't seen it, then watch: http://www.infoq.com/presentations/Value-Identity-State-Rich-Hickey
Interactive REPL: Something I've always felt is very important for data work. You want to be able to work with your code / data "live" to get a real feel for its properties. Having a dynamically typed language with an interactive REPL works wonders here.
JVM based: a big advantage for pragmatic purposes, because of the huge library / tool ecosystem and the excellent engineering in the JVM as a runtime platform.
Community: I saw a lot of innovation going on in Clojure, particularly around the general area of data and analytics.
The main thing Clojure was lacking at that time was a good library / API for matrix operations. There were some nice tools in Incanter, but they weren't very general-purpose or performant. Hence I started developing core.matrix, which is shaping up to be an idiomatic, Clojure-flavoured equivalent of NumPy / SciPy. Right now it is still a work in progress, but it is good enough for production use if you are careful.
In terms of low-level matrix support, I also maintain vectorz-clj, which is my attempt to provide a core.matrix implementation that offers high-performance vector/matrix operations while remaining pure Java (i.e. no native dependencies). If you are interested in the performance of this, you may like to see:
http://clojurefun.wordpress.com/2013/03/07/achieving-awesome-numerical-performance-in-clojure/
My second choice after Clojure would have been Scala. I liked Scala's slightly greater maturity and decent static type system. Both languages are JVM-based, so the library / tool side was a tie. It was probably the Lisp features that clinched it.
If you happen to have access to Mathematica, then it's fairly easy to get it working with the JVM by means of J/Link. For Clojure, Clojuratica is an excellent library to make that as seamless as possible, although it hasn't been maintained for a while and it may take some effort to get it working in modern environments again.
A friend of mine needs to implement some statistical calculations in hardware.
She wants it to be accomplished using VHDL.
(cross my heart, I haven't written a line of code in VHDL and know nothing about its subtleties)
In particular, she needs a direct analogue of MATLAB's betainc function.
Is there a good package around for doing this?
Any hints on the implementation are also highly appreciated.
If it's not a good idea at all, please tell me about it as well.
Thanks a lot!
There isn't a core available that performs an incomplete beta function in the Xilinx toolset. I can't speak for the other toolsets available, although I would doubt that there is such a thing.
What Xilinx does offer is a set of signal-processing blocks, like multipliers, adders and RAM blocks (amongst other things: filters, FFTs), that can be used together to implement various custom signal transforms.
In order for this to be done, there needs to be a complete understanding of the inner workings of the transform to be applied.
A good first step is to implement the function "manually" in MATLAB as a proof of concept:
Instead of using the built-in function in MATLAB, your friend can try to implement the function using only fundamental operators like multipliers and adders (a rough sketch follows below).
The results can be compared with those produced by the built-in function for verification.
The concept can then be moved to VHDL using the building blocks that are provided.
Doing this for the incomplete beta function isn't something for the faint-hearted, but it can be done.
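To make the "manual" idea concrete, here is a rough, untested sketch (written in Scheme purely for illustration; the actual proof of concept would be in MATLAB as described above) of the regularized incomplete beta function computed with nothing fancier than a fixed-step integration rule. It assumes a > 1 and b > 1, where the integrand is well behaved; for smaller parameters the endpoint singularities make this naive rule break down:

    ;; Sketch: regularized incomplete beta via the trapezoidal rule.
    ;; expt stands in for the power block that would itself have to
    ;; be built from multipliers on the FPGA.
    (define (betainc x a b)
      ;; integrand t^(a-1) * (1-t)^(b-1)
      (define (f t)
        (* (expt t (- a 1)) (expt (- 1 t) (- b 1))))
      ;; trapezoidal rule with n panels over [lo, hi]
      (define (integrate lo hi n)
        (let ((h (/ (- hi lo) n)))
          (let loop ((i 1) (acc (* 0.5 (+ (f lo) (f hi)))))
            (if (= i n)
                (* h acc)
                (loop (+ i 1) (+ acc (f (+ lo (* i h)))))))))
      ;; regularize: divide by the complete beta function B(a,b)
      (/ (integrate 0.0 x 100000)
         (integrate 0.0 1.0 100000)))

The output can be compared against MATLAB's betainc(x,a,b) for verification, and every add, multiply, and power in the loop corresponds to a hardware block that has to be budgeted for in the FPGA design.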
As far as I know there is no tool which allows interfacing VHDL and MATLAB.
But interfacing VHDL and C is fairly easy, so if you can implement your code (MATLAB's betainc function) in C, then it can be done easily with an FLI (foreign language interface).
If you are using ModelSim, the link below can be helpful.
link
First of all, a word of warning: if you haven't done any VHDL/FPGA work before, this is probably not the best place to start. With VHDL (and other HDL languages) you are essentially describing hardware, rather than a sequence of commands to execute on a processor (as you are with C/C++, etc.). You thus need a completely different skill set and mindset for FPGA development. Just because something can be written in VHDL doesn't mean that it can actually work in an FPGA chip (that it is synthesizable).
With that said, Xilinx (one of the major manufacturers of FPGA chips and development tools) does provide the System Generator package, which interfaces with Matlab and can automatically generate code for FPGA chips from this. I haven't used it myself, so I'm not at all sure if it's usable in your friend's case - but it's probably a good place to start.
The System Generator User guide (link is on the previously linked page) also provides a short introduction to FPGA chips in general, and in the context of using it with Matlab.
You COULD write it yourself. However, the incomplete beta function is an integral. For many values of the parameters (as long as both are greater than 1) it is fairly well behaved. However, when either parameter is less than 1, a singularity arises at an endpoint, making the problem a bit nasty. The point is, don't write it yourself unless you have a solid background in numerical analysis.
Anyway, there are surely many versions in C available. Netlib must have something, or look in Numerical Recipes. Or compile it from MATLAB. Then link it in as nav_jan suggests.
As an alternative to VHDL, you could use MyHDL to write and test your beta function; MyHDL can produce synthesisable (i.e. able to go into an FPGA chip) VHDL (or Verilog, as you wish) out of the back end.
MyHDL is an extra set of modules on top of Python which allow hardware to be modelled, verified and generated. Python will be a much more familiar environment to write validation code in than VHDL (which is missing many of the abstract data types you might take for granted in a programming language).
The code under test will still have to be written with a "hardware mindset", but that is usually a smaller piece of code than the test environment, so in some ways less hassle than figuring out how to work around the verification limitations of VHDL.
Obviously, that will depend on what you want to do: numerical analysis, threading, databases, etc. I've seen the benchmarks; Larceny and Bigloo seem to come out ahead. Is there any implementation of Scheme that performs well across several different benchmarks? Are there any that can produce code that runs faster than that produced by SBCL? I don't see why SBCL should be so fast - Scheme is a far simpler language than Common Lisp!
http://community.schemewiki.org/?Stalin
http://en.wikipedia.org/wiki/Stalin_(Scheme_implementation)
From Wikipedia:
Stalin (STAtic Language ImplementatioN) is an aggressive optimizing batch whole-program Scheme compiler written by Jeffrey Mark Siskind. It uses advanced flow analysis and type inference and a variety of other optimization techniques to produce code. Stalin is intended for production use in generating an optimized executable.

The compiler itself runs slowly, and there is little or no support for debugging or other niceties. Full R4RS Scheme is supported, with a few minor and rarely encountered omissions. Interfacing to external C libraries is straightforward. The compiler itself does lifetime analysis and hence does not generate as much garbage as might be expected, but global reclamation of storage is done using the Boehm garbage collector.
It seems that Stalin is no longer being developed.
Among the Schemes that are fully standards compliant (at least with R5RS) and ready for prime-time use, Chez Scheme must be the fastest.
Based on these benchmarks, it looks like Chez Scheme, Gambit, and Racket are roughly tied for the title of Fastest Scheme.
I'm pretty intrigued by Gambit Scheme, in particular by its wide range of supported platforms and its ability to put C code right in your Scheme source when needed. That said, it is a Scheme, which has fewer "batteries included" compared to Common Lisp. Some people like coding lots of things from scratch (a.k.a. vigorous yak-shaving), but not me!
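For anyone who hasn't seen that inline-C facility, it looks roughly like this (a minimal sketch using Gambit's c-declare and c-lambda forms; compile with gsc):

    ;; pull in a C header and bind a C function directly to Scheme
    (c-declare "#include <math.h>")

    (define c-hypot
      (c-lambda (double double) double "hypot"))

    (display (c-hypot 3.0 4.0))  ; prints 5.
    (newline)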
This brings me to my two questions, geared to people who have used both Gambit and some flavor of Common Lisp:
1) Which effectively has better access to libraries? Scheme has fewer libraries than Common Lisp. However, Gambit Scheme has smoother access to C/C++ code & libraries, which far outnumber Common Lisp's libraries. In your opinion, does the smoothness of Gambit's FFI outweigh its lack of native libraries?
2) How do Scheme's object systems (e.g. TinyCLOS, Meroon) compare to Common Lisp's CLOS? If you found them lacking, what feature(s) did you miss most? Finally, how important is an object system in Lisp/Scheme in the first place? I have heard of entire lisp-based companies (e.g. ITA Software) forgoing CLOS altogether. Are objects really that optional in Lisp/Scheme? I do fear that if Gambit has no good object system, I may miss them (my programming background is purely object-oriented).
Thanks for helping an aspiring convert from C++/Python,
-- Matt
PS: Someone with more than 1500 rep, could you please create a "gambit" tag? :) Thanks Jonas!
Sure, Scheme as a whole has fewer libraries in the defined standard, but any given Scheme implementation usually builds on that standard to include more "batteries included" types of functions.
Gambit, for example, uses the Snow package system which will give you access to several support libraries.
Other Schemes fare even better, having access to more (or better) support libraries. Both Racket (with PlaneT) and Chicken (with eggs) immediately come to mind.
That said, Common Lisp is also quite rich, and a large number of interesting and useful libraries are a simple asdf-install away.
As for Scheme object systems, I personally tend to favor Chicken Scheme and have taken to favoring coops. That said, there's absolutely nothing wrong with TinyCLOS. Either would serve well, and I don't really find anything to be lacking, though that last statement might have more to do with the fact that I don't tend to rely on a lot of object-oriented-isms when writing Scheme. In my experience, both systems tend to surface when I want to write "protocols" and then have a way of specializing on the protocol, if that makes sense.
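For a flavour of that "protocol" style, a minimal coops sketch might look like this (class and generic names are illustrative, not from any real library):

    ;; coops sketch (Chicken): a generic "protocol" plus one specialization
    (use coops)

    (define-class <shape> () ())
    (define-class <circle> (<shape>)
      ((radius initform: 1)))

    (define-generic (area shape))

    (define-method (area (c <circle>))
      (* 3.14159265358979 (expt (slot-value c 'radius) 2)))

    (print (area (make <circle> 'radius 2)))  ; => 12.566...

The generic function is the "protocol"; each define-method is a specialization of it, much as in TinyCLOS or CLOS.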
1) I haven't used Gambit Scheme, so I cannot really tell how smooth the C/C++ integration is. But all the Common Lisps I have used have fully functional C FFIs, so the availability of C libraries is the same. It takes some work to integrate, but I assume this is the case with Gambit Scheme as well. After all, Lisp and C are different languages..? But maybe you have a different experience; I would like to learn more in that case.
You may be interested in Quicklisp, a really good new Common Lisp project - it makes it very easy to install a lot of quality libraries.
2) C++ and Python are designed to use OOP and classes as the typical means of encapsulating and structuring data. CLOS does not have this ambition at all. Instead, it provides generic functions that can be specialized for certain types of arguments - not necessarily classes. Essentially this enables OOP, but in Common Lisp, OOP is a convenient feature rather than something fundamental for getting things done.
I think CLOS is a lot better designed and more flexible than the C++ object model, and TinyCLOS should be no different in that respect.