Obviously F# would rule the model part, but would the choice be as clear for the VM part? Would the loss of tooling support (if any) be compensated by the gain in language flexibility in a large application?
Given that you don't rely on any major features of an MVVM framework, I'd say it mostly comes down to the general choice of F# over C#, if that's the comparison you're making.
The current F# compiler is missing one feature that is useful in VM code, though -- support for the CallerMemberName attribute, which is handy for INPC object properties. But this is probably a minor shortcoming compared to the benefits you get (e.g. all the cool stuff in F#).
Since the VM is often about transforming and processing the M data for use in the V, I don't yet see why F# would be a bad fit.
I am looking for numeric computation tooling on the JVM. My major requirements are expressiveness/readability, ease of use, interactive evaluation, and a rich set of mathematical functions. I guess I am after something like the Matlab kernel (probably including some basic libraries and w/o graphics) on the JVM. I'd like to be able to "throw" computational code at a running JVM and have that code evaluated. I don't want to worry about types. Arbitrary precision and performance are not so important.
I guess there are some nice libraries out there but I think an appropriate language on top is needed to get the expressiveness.
Which tooling would you guys suggest for expressive, feature-rich numeric computation on the JVM?
From the jGroovyLab page:
The GroovyLab environment aims to provide a Matlab/Scilab-like scientific computing platform that is supported by a scripting engine implemented in the Groovy language. The GroovyLab user can work either with a Matlab-like command console, or with a flexible editor based on the jsyntaxpane (http://code.google.com/p/jsyntaxpane/) component, that offers more convenient code development. Also, GroovyLab supports Computer Algebra based on the symja (http://code.google.com/p/symja/) project.
And there is also GroovyLab:
GroovyLab is a collection of Groovy classes to provide matlab-like syntax and basic features (linear algebra, 2D/3D plots). It is based on the jmathplot and jmatharray libs.
Groovy has a smooth learning curve for Java programmers and a flexible syntax similar to Ruby's. It is also pretty easy to write a DSL in it.
Though Groovy's performance is pretty good for a dynamic language, you can use static compilation if you need it.
Most of MathWorks Matlab is built on the Intel Math Kernel Library (MKL), which is (IMHO) the unbeatable champion in linear algebra computations. There is Java support, but it costs 500 dollars (for the MKL, not just the Java support)...
The second-best option if you want to use Java is jblas, which uses BLAS and LAPACK, the industry standards for linear algebra.
Pure Java libraries' performance is apparently horrible, see here...
Spire sounds like it's aiming at the area you're looking at. It takes advantage of a lot of recent Scala features, such as macros, to get decent performance without having to sacrifice the expressiveness of being in a high-level language.
There's also Breeze, which is targeted at machine learning but includes a fair amount of linear algebra stuff.
Depending on how much work you want to put in and what languages you're already familiar with, Incanter in the Clojure world might be worth a look. Also quickly evolving in Clojure right now is core.matrix, which aims to encapsulate high-level common abstractions in linear algebra implemented with various methods or packages.
You highlighted expressiveness in your post, and the nice thing about Clojure is that, as a Lisp, it is possible to make or extend DSLs to closely match problem domains. This is one of the big draws of the language (and of Lisps in general).
I'm the original author of core.matrix for Clojure, so I have a clear affinity for, and much more knowledge of, this specific space. That said, I'm still going to try and give you an honest answer :-)
I was in the same position as you a year or so back, looking for a solution for numeric computation that would be scalable, flexible and suitable for deployment as a clustered cloud service.
I ended up going with Clojure for the following reasons:
Functional Programming: Clojure is a functional programming language at heart, more so than most other languages (although not as much as Haskell....). Lazy infinite sequences, persistent data structures, immutability throughout, etc. make for elegant code when you are dealing with big computations.
Metaprogramming: I saw a need to do code generation for vector / computational expressions. Hence being a Lisp was a big plus: once you have done code generation in a homoiconic language with a "whole language" macro system, it's hard to find anything else that comes close.
Concurrency: Clojure has an impressive and novel approach to multi-core concurrency. If you haven't seen it then watch: http://www.infoq.com/presentations/Value-Identity-State-Rich-Hickey
Interactive REPL: Something I've always felt is very important for data work. You want to be able to work with your code / data "live" to get a real feel for its properties. Having a dynamically typed language with an interactive REPL works wonders here.
JVM based: a big advantage for pragmatic purposes, because of the huge library / tool ecosystem and the excellent engineering in the JVM as a runtime platform.
Community: I saw a lot of innovation going on in Clojure, particularly around the general area of data and analytics.
The main thing Clojure was lacking at that time was a good library / API for matrix operations. There were some nice tools in Incanter, but they weren't very general purpose or performant. Hence I started developing core.matrix, which is shaping up to be an idiomatic Clojure-flavoured equivalent of NumPy / SciPy. Right now it is still a work in progress, but good enough for production use if you are careful.
In terms of low-level matrix support, I also maintain vectorz-clj, which is my attempt to provide a core.matrix implementation that offers high-performance vector/matrix operations while remaining pure Java (i.e. no native dependencies). If you are interested in the performance of this, you may like to see:
http://clojurefun.wordpress.com/2013/03/07/achieving-awesome-numerical-performance-in-clojure/
My second choice after Clojure would have been Scala. I liked Scala's slightly greater maturity and decent static type system. Both languages are JVM-based, so the library / tool side was a tie. It was probably the Lisp features that clinched it.
If you happen to have access to Mathematica, then it's fairly easy to get it working with the JVM by means of J/Link. For Clojure, Clojuratica is an excellent library to make that as seamless as possible, although it hasn't been maintained for a while and it may take some effort to get it working in modern environments again.
A friend of mine needs to implement some statistical calculations in hardware.
She wants it to be accomplished using VHDL.
(cross my heart, I haven't written a line of code in VHDL and know nothing about its subtleties)
In particular, she needs a direct analogue of MATLAB's betainc function.
Is there a good package around for doing this?
Any hints on the implementation are also highly appreciated.
If it's not a good idea at all, please tell me about it as well.
Thanks a lot!
There isn't a core available that performs an incomplete beta function in the Xilinx toolset. I can't speak for the other toolsets available, although I would doubt that there is such a thing.
What Xilinx does offer is a set of signal processing blocks, like multipliers, adders and RAM Blocks (amongst other things, filters, FFTs), that can be used together to implement various custom signal transforms.
In order for this to be done, there needs to be a complete understanding of the inner workings of the transform to be applied.
A good first step is to implement the function "manually" in Matlab as a proof of concept:
Instead of using the built-in function in Matlab, your friend can try to implement the function using only fundamental operators like multipliers and adders.
The results can be compared with those produced by the built-in function for verification.
The concept can then be moved to VHDL using the building blocks that are provided.
Doing this for the incomplete beta function isn't something for the faint-hearted, but it can be done.
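To make the verification step concrete, here is a minimal sketch of that workflow in Python/NumPy (the answer above suggests Matlab; Python and scipy.special.betainc are used here purely as a stand-in reference, and betainc_midpoint is just an illustrative name):

    # Proof of concept: build the regularized incomplete beta from elementary
    # multiply/add/power operations, then check it against a library routine
    # before worrying about a fixed-point / VHDL version.
    import numpy as np
    from scipy.special import beta, betainc   # reference implementation

    def betainc_midpoint(a, b, x, n=100_000):
        """Regularized incomplete beta via midpoint-rule integration.

        The midpoint rule avoids evaluating the integrand exactly at t = 0
        or t = x, where it blows up when a < 1 or b < 1.
        """
        h = x / n
        t = (np.arange(n) + 0.5) * h              # midpoints of n sub-intervals
        integrand = t ** (a - 1) * (1.0 - t) ** (b - 1)
        return integrand.sum() * h / beta(a, b)

    for a, b, x in [(2.0, 3.0, 0.4), (5.0, 1.5, 0.9), (0.5, 0.5, 0.7)]:
        print(a, b, x, betainc_midpoint(a, b, x), betainc(a, b, x))
        # The last case (a = b = 0.5) is noticeably less accurate because the
        # integrand is singular at t = 0 -- the "nasty" behaviour that makes
        # this function hard to implement well.

The same structure (compute from fundamental operators, then compare against the trusted implementation) carries over directly once the arithmetic is re-expressed with the DSP building blocks.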
As far as I know there is no tool which allows interfacing VHDL and Matlab.
But interfacing VHDL and C is fairly easy, so if you can implement your code (MATLAB's betainc function) in C, then it can be done easily with the FLI (Foreign Language Interface).
If you are using ModelSim, the link below can be helpful.
link
First of all, a word of warning: if you haven't done any VHDL/FPGA work before, this is probably not the best place to start. With VHDL (and other HDL languages) you are basically describing hardware, rather than a sequence of commands to execute on a processor (as you are with C/C++, etc.). You thus need a completely different skill set and mindset when doing FPGA development. Just because something can be written in VHDL doesn't mean that it can actually work in an FPGA chip (that it is synthesizable).
With that said, Xilinx (one of the major manufacturers of FPGA chips and development tools) does provide the System Generator package, which interfaces with Matlab and can automatically generate code for FPGA chips from this. I haven't used it myself, so I'm not at all sure if it's usable in your friend's case - but it's probably a good place to start.
The System Generator User guide (link is on the previously linked page) also provides a short introduction to FPGA chips in general, and in the context of using it with Matlab.
You COULD write it yourself. However, the incomplete beta function is an integral. For many values of the parameters (as long as both are greater than 1) it is fairly well behaved. However, when either parameter is less than 1, a singularity arises at an endpoint, making the problem a bit nasty. The point is, don't write it yourself unless you have a solid background in numerical analysis.
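For reference, MATLAB's betainc computes the regularized incomplete beta function

    I_x(a, b) = \frac{1}{B(a, b)} \int_0^x t^{a-1} (1 - t)^{b-1} \, dt

so the integrand behaves like t^(a-1) near t = 0 and like (1-t)^(b-1) near t = 1; whenever a < 1 or b < 1 it is unbounded at the corresponding endpoint, which is where the numerical difficulty comes from.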
Anyway, there are surely many versions in C available. Netlib must have something, or look in Numerical Recipes. Or compile it from MATLAB. Then link it in as nav_jan suggests.
As an alternative to VHDL, you could use MyHDL to write and test your beta function - it can produce synthesizable (i.e. can go into an FPGA chip) VHDL (or Verilog, as you wish) out of the back end.
MyHDL is an extra set of modules on top of Python which allow hardware to be modelled, verified and generated. Python will be a much more familiar environment to write validation code in than VHDL (which is missing many of the abstract data types you might take for granted in a programming language).
The code under test will still have to be written with a "hardware mindset", but that is usually a smaller piece of code than the test environment, so in some ways less hassle than figuring out how to work around the verification limitations of VHDL.
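To give a feel for the flow, here is a minimal MyHDL sketch -- a toy combinational adder rather than betainc itself, so the names and widths are purely illustrative (the conversion call shown is from the classic MyHDL API; newer releases use a @block decorator and inst.convert(hdl='VHDL') instead):

    from myhdl import Signal, intbv, always_comb, toVHDL

    def adder(a, b, s):
        """Combinational adder: s = a + b, convertible to VHDL."""
        @always_comb
        def logic():
            s.next = a + b
        return logic

    a = Signal(intbv(0)[8:])    # 8-bit unsigned inputs
    b = Signal(intbv(0)[8:])
    s = Signal(intbv(0)[9:])    # one extra bit so the sum never overflows

    toVHDL(adder, a, b, s)      # writes adder.vhd for the synthesis tools

The real work for betainc would of course be choosing a number representation (fixed-point width and scaling) and an algorithm that maps onto adders and multipliers, as the other answers describe.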
I have been delivering training on Programming Practices and on Writing Quality Code to participants who have been working with Java for some time. Object-Oriented Analysis and Design is the base, and I cover the S.O.L.I.D. principles and excerpts from books like Clean Code, Code Complete 2 and so on.
I am scheduled to deliver training in two days to Perl programmers (with less than 1 yr. of experience in Perl), and they do not use Moose (an extension of the Perl 5 object system which brings modern object-oriented language features).
I am now confused as to how to structure my training, as they don't follow OOP.
Any suggestions?
Regards,
Shardul.
Even without Moose, object-oriented programming in Perl is quite possible, and very common. Many CPAN modules offer their functionality through an object-oriented API, even if many of these also offer a non–object-oriented API. (A good example of this duality is IO::Compress::Zip.) Obviously the norms of object-oriented design in Perl are somewhat different from those in some languages — encapsulation is not enforced by the language, for example — but the overall principles and practices are the same.
And even without any sort of object-oriented programming, Moosish or otherwise, there's plenty to talk about in terms of laying out packages, organizing code into functions/subroutines/modules, structuring data, taking advantage of use warnings (or -w) and use strict and -T and CPAN modules, and so on.
I'd also recommend Mark Jason Dominus's book Higher-Order Perl, which he has made available for free download. I don't know to what extent you can race through the whole book in a day and put together something useful in time for your presentation — functional programming is a bit of a paradigm-shift for someone who's not used to it (be it you, or the programmers you're presenting to!) — but you may find some useful things in there that you can use.
A lot of the answers here are about teaching OOP to Perl programmers who don't use it, but your question sounds like you're stymied on how to teach a course on code quality in light of the fact that your Perl programmers do not use OOP -- not that you specifically want to teach OOP to non-OO programmers and force them into that paradigm.
That leaves us with two other paradigms of programming which Perl supports well enough:
Good ol' fashioned Structured Programming, also Modular Programming
Functional programming support in Perl (also Higher-Order Perl)
I use both of these -- combined with a healthy dose of objects as well. So, I use objects for the same reason that I use good structure and modules and functional pipelines: using the tool that brings order and sanity to the programming process. For example, object-oriented programming is the main form of polymorphism -- but OOP is not polymorphism itself. Thus, if you are writing idioms that assist in polymorphism, they assist in polymorphism; they don't have to be stuck in some ad-hoc library "class" and called like UtilClass->meta_operator( $object ), which has little polymorphism itself.
Moose is a great object language, but you don't call Moose->has( attribute => is => 'rw', isa => 'object' ). You call the operator has. The power of Moose lies in a library of objects that encapsulate the meta-operations on classes--but also in simple expressive operators that the rather open syntax of Perl allows. I would call that the appreciation of solving the problems that OOP solves with objects.
Also, I guess I have a problem with your problem, because "not OOP" is a big field. It can range from everything-in-the-mainline coding to not-strictly-OOP (where the process of programming is not simply OOP analysis). So I think you have to know your audience and know what it is they use to keep that code structured and sane. I can't imagine a modern Perl audience that isn't at least object-users.
From there, Perl Best Practices (often abbreviated PBP) can help you. But so would learning that
simply because OOP is one of the best supports for polymorphism, it isn't polymorphism in itself
simply because OOP is one of the best supports for encapsulation, it isn't encapsulation in itself
that OOP has been assisted by structured and modular programming -- and is not by itself those things. Some of its power is simply those disciplines.
In addition, as big an object author and consumer as I am, OOP is not the way I think. Reusability is the way I think: What have I done before that I do not want to write again? What have I written that is similar? How can I make my current task just an adapter of what has been written before? (And often: how can I sneak my behavior branch into an established module in a single line?)
As a result, a number of my constructs would fail the pedestrian goals of OOP. To give you a better view: I divide code into two "domains": highly abstract and polymorphic library code, and the scripting that I need to do to deliver the particular functionality required in the current project. (This is essentially what "application" means, but I don't think it would be as clear.) As a result, polymorphism is mainly instrumental in providing adaptability, but the adaptation itself is whatever takes the least lines of code. My optimum system would be a library that allows scripting/adaptation at any juncture between library behavior and a set of configurations or scripts that address a particular problem. Again, if I had my druthers, configuration would be injected from the scripted domain, and no library code would say "I need a properties file" by itself, unless it was a library module encapsulating the algorithm of configuration instanced in properties files. It would just know that it needs "policies" (or decisions from the application domain) in order to fulfil its function.
Thus, my ideal application contains special-purpose "objects", which conform to "roles" but where classes are useless overhead -- except that the classes perform the behavior which allows injectable data and behavior. So some of my Perl "objects" violate OOP analysis, because they are simply encapsulations of one-off solutions, kind of like push-pin (expando) JavaScript objects.
I will often (later) revise a special-purpose object and push it further back into the library domain as I find that I need to write something like it again. All objects in the library domain are just at some level of the spectrum of specified behavior. Also, I arrange "data networks" where there is a Sourced type of class that simply encapsulates the behavior of accessing data either in the object itself or in another source object. This speeds up my solutions immensely, but I've never seen it addressed in any duck-cat-dog-car-truck OOP primer. Templating -- especially when combined with "data networks" -- is also immensely useful in coding solutions in a half-dozen lines or a half-day of work.
So I guess I'm saying: to the extent that you only know OOP for structuring programming, you won't be able to appreciate how much some older, sound practices or other paradigms do for you -- or how things that qualify as OOP can promote mediocre adaptability. (Besides, components are far more current than "objects".) Encapsulation solves many problems, but it also promotes the lack of data where you need it. The idea is to get data where you need it so that your canned behavior can realize the specifics of the problem and operate on that.
Reread some stuff on structured programming
Read some stuff on functional programming (assuming that you're not already familiar with it.)
Also it's possible that even an established, "productive" Perl team is writing ... crap. If they are not OOP programmers because they are simply writing crap code, then by all means teach them OOP and if they lack even structured programming *shove both of them down their throats* (I have a hard time considering the label "professional", here).
Take a good look at 'Perl Best Practices' by Damian Conway. It has lots of solid material in it, and you won't go far wrong taking his advice.
Be aware, though, that Getopt::Clade is only available as a placeholder package - it is vapourware, in other words.
You might want to look at what's covered in the "Modern Perl" book too:
http://onyxneon.com/books/modern_perl/
As the others say - plenty to cover without Moose.
Setting up modules/distros
Testing and TAP
Deployment with cpanm / cpan / local::lib
Important changes in 5.8 vs 5.10 vs 5.12 vs 5.14, autodie, etc.
Perl programmers must know about Perl's weakly functional features, like list contexts, map, grep, etc. A little functional style makes Perl infinitely more readable.
Perl programmers must also understand Perl's traditional OO features, especially modules, bless, and tie. Make them write an object or maybe tie a Cache::Memcached object around a query or something.
I am taking a compilers course at college and we must generate code for our invented language, targeting any platform we want. I think the simplest options are generating code for the Java JVM or the .NET CLR. Any suggestions on which one to choose, and which APIs out there can help me with this task? I already have all the semantic analysis done; I just need to generate code for a given program.
Thank you
From what I know, at a high level the two VMs are actually quite similar: both are classic stack-based machines with largely high-level operations (e.g. virtual method dispatch is an opcode). That said, the CLR lets you get down to the metal if you want, as it has raw data pointers with arithmetic, raw function pointers, unions, etc. It also has proper tail calls. So, if the implementation of the language needs any of the above (e.g. the Scheme spec mandates tail calls), or if it is significantly advantaged by having those features, then you would probably want to go the CLR way.
The other advantage is that you get a stock API to emit bytecode - System.Reflection.Emit - and even though it is somewhat limited for full-fledged compiler scenarios, it is still generally enough for a simple compiler.
With the JVM, the two main advantages you get are better portability and the fact that the bytecode itself is arguably simpler (because it has fewer features).
Another option that I came across is a library called RunSharp that can generate MSIL code at runtime using Reflection.Emit, but in a nicer, more user-friendly way that is more like C#. The latest version of the library can be found here:
http://code.google.com/p/runsharp/
In .NET you can use the Reflection.Emit namespace to generate MSIL code.
See the MSDN link: http://msdn.microsoft.com/en-us/library/3y322t50.aspx