Can I use gevent-socketio with cornice?

I need some web socket functionality for monitoring some tasks. Is gevent-socketio a good fit for the back-end websocket implementation? Also, can I use gevent-socketio with Pyramid's cornice (running on OpenShift)?

It should be possible, but it is not a very good idea overall, because the project hasn't found a leader or seen any updates since Dec 2015 (https://github.com/abourget/gevent-socketio/commit/1c84627980c0b77f8f9005fdbcc916ca33d0e4d1). In addition, gevent monkey-patches Python's stdlib (which I personally don't like).
Since Python 3.4, asyncio is part of the stdlib (https://docs.python.org/3/library/asyncio.html), so I'll investigate housleyjk/aiopyramid as an alternative.

Does scala offer async non-blocking IO when working with files?

I am using Scala 2.10 and I wonder if there is some package which offers async IO when working with files?
I did some searching on this topic but mostly found examples like the following:
import java.io.{BufferedWriter, File, FileWriter}

val file = new File(canonicalFilename)            // canonicalFilename: path to the target file
val bw = new BufferedWriter(new FileWriter(file))
bw.write(text)                                    // text: the String to write
bw.close()
which is essentially the java.io package with blocking IO operations (write, read, etc.). I also found the scala-io project with this intention, but it seems that project is dead; the last activity was in 2012.
What is best practice in this scenario? Is there any Scala package for this, or is the common way to wrap java.io code in Futures and Observables?
My use case is that I need to manipulate files on a local or remote file system from an Akka actor, and I need to avoid blocking. Or is there a better alternative?
Thanks for clarifying this.
Scala does not offer an explicit API for asynchronous file IO; however, the plain Java API is exactly the right thing to use in those cases (this is actually a good thing: we can use all these nice APIs without any wrapping!). You should look into using java.nio.channels.AsynchronousFileChannel, which is available since JDK 7 and makes use of the underlying system's async calls for file IO.
Akka IO, while not providing file IO in its core, has a module developed by Dario Rexin which allows you to use AsynchronousFileChannel with Akka IO in a very simple manner. Have a look at this library to make use of it: https://github.com/drexin/akka-io-file
In the near future Akka will provide file IO in its akka-streams module. It may ship as an external library for a while though; we're not exactly sure yet where to put it, as it will require users to have at least JDK 7, while most of Akka currently supports JDK 6. Having said that, streams-based asynchronous back-pressured file IO is coming soon :-)
If you're using scalaz-stream for your async support, it has file functionality built on the java.nio async APIs; that's probably the approach I'd recommend. If you're using standard Scala futures, possibly you can use akka-io, which I think uses Netty as a backend. Or you can call NIO directly: it only takes a couple of lines to adapt a callback-based API to scalaz or Scala futures.
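To illustrate that last point, here is a rough, untested sketch of wrapping AsynchronousFileChannel's callback-based read in a plain scala.concurrent.Future; the object and method names (AsyncFileRead, readAsync) are just placeholders of mine:

import java.nio.ByteBuffer
import java.nio.channels.{AsynchronousFileChannel, CompletionHandler}
import java.nio.file.{Paths, StandardOpenOption}
import scala.concurrent.{Future, Promise}

object AsyncFileRead {
  // Read (up to maxBytes of) a file asynchronously and expose the result as a Future.
  def readAsync(path: String, maxBytes: Int = 4096): Future[String] = {
    val channel = AsynchronousFileChannel.open(Paths.get(path), StandardOpenOption.READ)
    val buffer  = ByteBuffer.allocate(maxBytes)
    val promise = Promise[String]()

    // The Promise travels along as the NIO "attachment" and is completed in the callback.
    channel.read(buffer, 0L, promise, new CompletionHandler[Integer, Promise[String]] {
      def completed(bytesRead: Integer, p: Promise[String]): Unit = {
        buffer.flip()
        val bytes = new Array[Byte](buffer.remaining())
        buffer.get(bytes)
        channel.close()
        p.success(new String(bytes, "UTF-8"))
      }
      def failed(t: Throwable, p: Promise[String]): Unit = {
        channel.close()
        p.failure(t)
      }
    })
    promise.future
  }
}

The same pattern works for write, or for adapting any other CompletionHandler-based NIO call to a Future.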

Convert MIndiGolog fluents to the IndiGolog causes_val format

I am using Eclipse (version: Kepler Service Release 1) with Prolog Development Tool (PDT) plug-in for Prolog development in Eclipse. Used these installation instructions: http://sewiki.iai.uni-bonn.de/research/pdt/docs/v0.x/download.
I am working with Multi-Agent IndiGolog (MIndiGolog) 0 (the preliminary prolog version of MIndiGolog). Downloaded from here: http://www.rfk.id.au/ramblings/research/thesis/. I want to use MIndiGolog because it represents time and duration of actions very nicely (I want to do temporal planning), and it supports planning for multiple agents (including concurrency).
MIndiGolog is a high-level programming language based on situation calculus. Everything in the language is exactly according to situation calculus. This however does not fit with the project I'm working on.
This other high-level programming language, Incremental Deterministic (Con)Golog (IndiGolog) (download from here: http://sourceforge.net/p/indigolog/code/ci/master/tree/) (also written in Prolog), is also (loosely) based on situation calculus, but uses fluents in a very different way. It makes use of causes_val predicates to denote which action changes which fluent in what way, and it does not include the situation in the fluent!
However, this is what the rest of the team actually wants. I need to rewrite MIndiGolog so that it is still an offline planner, with the nice representation of time and duration of actions, but with the causes_val predicate of IndiGolog to change the values of the fluents.
I find this extremely hard to do, as my knowledge in Prolog and of situation calculus only covers the basics, but they see me as the expert. I feel like I'm in over my head and could use all the help and/or advice I can get.
I already removed the situations from my fluents, made a planning domain with causes_val predicates, and tried to add IndiGolog code into MIndiGolog. But with no luck. Running the planner just returns "false." And I can make little sense of the trace, even when I use the GUI-tracer version of the SWI-Prolog debugger or when I try to place spy points as strategically as possible.
Thanks in advance,
Best, PJ
If you are still interested (sounds like you might not be): this isn't actually very hard.
If you look at Reiter's book, you will find that causes_val clauses are just effect axioms, while the fluent definitions that mention the situation are usually successor state axioms. There is a deterministic way to convert from the former to the latter, and the correct interpretation of the causes_val clauses is done in the implementation of regression. This is always the same, and you can just copy that part of the Prolog code from IndiGolog to your flavor.
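For concreteness, the conversion for a functional fluent f has roughly this shape (my own paraphrase of Reiter's construction; each causes_val clause contributes one disjunct to the effect condition \gamma_f):

\gamma_f(\bar{x}, v, a, s) \rightarrow f(\bar{x}, do(a, s)) = v
\qquad \text{(effect axioms, i.e. what causes\_val encodes)}

f(\bar{x}, do(a, s)) = v \;\equiv\; \gamma_f(\bar{x}, v, a, s) \;\lor\; \bigl( f(\bar{x}, s) = v \,\land\, \neg\exists v'\, \gamma_f(\bar{x}, v', a, s) \bigr)
\qquad \text{(successor state axiom obtained by completion)}

Regression then rewrites a query about f in do(a, s) into a query about situation s using the right-hand side, which is where the causes_val clauses are actually consulted.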

Tooling for expressive, feature rich numeric computations on the JVM

I am looking for numeric computation tooling on the JVM. My major requirements are expressiveness/readability, ease of use, interactive evaluation, and a rich set of mathematical functions. I guess I am after something like the Matlab kernel (probably including some basic libraries and without graphics) on the JVM. I'd like to be able to "throw" computational code at a running JVM and have that code evaluated. I don't want to worry about types. Arbitrary precision and performance are not so important.
I guess there are some nice libraries out there but I think an appropriate language on top is needed to get the expressiveness.
Which tooling would you suggest to address expressive, feature-rich numeric computation on the JVM?
From the jGroovyLab page:
The GroovyLab environment aims to provide a Matlab/Scilab-like scientific computing platform that is supported by a scripting engine implemented in the Groovy language. The GroovyLab user can work either with a Matlab-like command console or with a flexible editor based on the jsyntaxpane (http://code.google.com/p/jsyntaxpane/) component, which offers more convenient code development. Also, GroovyLab supports computer algebra based on the symja (http://code.google.com/p/symja/) project.
And there is also GroovyLab:
GroovyLab is a collection of Groovy classes to provide matlab-like syntax and basic features (linear algebra, 2D/3D plots). It is based on jmathplot and jmatharray libs:
Groovy has a smooth learning curve for Java programmers and a flexible syntax similar to Ruby. It is also pretty easy to write a DSL on it.
Though Groovy's performance is pretty good for a dynamic language, you can use static compilation if you are in the need for it.
Most of Mathworks Matlab is built on the Intel Math Kernel Library (MKL), which is (IMHO) the unbeatable champion in linear algebra computations. There is Java support, but it costs 500 dollars (the MKL itself, not just the Java support)...
The best second option, if you want to use Java, is jblas, which uses BLAS and LAPACK, the industry standards for linear algebra.
The performance of pure Java libraries is apparently horrible; see here...
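If it helps to picture the jblas route, here is a rough sketch of calling it (from Scala, though any JVM language works the same way); I'm assuming org.jblas.DoubleMatrix and its column-major varargs constructor:

import org.jblas.DoubleMatrix

object JblasSketch extends App {
  // Data is column-major, so this is the matrix [[1.0, 3.0], [2.0, 4.0]].
  val a = new DoubleMatrix(2, 2, 1.0, 2.0, 3.0, 4.0)
  val b = DoubleMatrix.eye(2)              // 2x2 identity matrix
  println(a.mmul(b))                       // matrix product, delegated to native BLAS
  println(a.transpose())                   // plain transpose
}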
Spire sounds like it's aiming at the area you're looking at. It takes advantage of a lot of recent scala features such as macros to get decent performance without having to sacrifice the expressiveness of being in a high level language.
There's also breeze, which is targeted at machine learning but includes a fair amount of linear algebra stuff.
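For a flavour of breeze, a tiny untested sketch (the breeze.linalg names are the ones I remember and may have shifted between versions):

import breeze.linalg._

object BreezeSketch extends App {
  val a = DenseMatrix((1.0, 2.0), (3.0, 4.0))   // 2x2 matrix, given row by row
  val x = DenseVector(1.0, 1.0)

  val y = a * x            // matrix-vector product -> DenseVector(3.0, 7.0)
  println(y)
  println(sum(y))          // breeze.linalg.sum over a vector
  println(a.t)             // transpose
}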
Depending how much work you want to get into and what languages you're already familiar with, Incanter in the Clojure world might be worth a look. Also quickly evolving in Clojure right now is core.matrix, which aims to encapsulate high-level common abstractions in linear algebra implemented with various methods or packages.
You highlighted expressiveness in your post, and the nice thing about Clojure is that, as a Lisp, it is possible to make or extend DSLs to closely match problem domains. This is one of the big draws of the language (and of Lisps in general).
I'm the original author of core.matrix for Clojure, so I have a clear affinity for and much more knowledge of this specific space. That said, I'm still going to try and give you an honest answer :-)
I was in the same position as you a year or so back, looking for a solution for numeric computation that would be scalable, flexible and suitable for deployment as a clustered cloud service.
I ended up going with Clojure for the following reasons:
Functional Programming: Clojure is a functional programming language at heart, more so than most other languages (although not as much as Haskell...). Lazy infinite sequences, persistent data structures, immutability throughout, etc. make for elegant code when you are dealing with big computations.
Metaprogramming: I saw a need to do code generation for vector/computational expressions. Hence being a Lisp was a big plus: once you have done code generation in a homoiconic language with a "whole language" macro system, it's hard to find anything else that comes close.
Concurrency: Clojure has an impressive and novel approach to multi-core concurrency. If you haven't seen it, then watch: http://www.infoq.com/presentations/Value-Identity-State-Rich-Hickey
Interactive REPL: Something I've always felt is very important for data work. You want to be able to work with your code / data "live" to get a real feel for its properties. Having a dynamically typed language with an interactive REPL works wonders here.
JVM based: a big advantage for pragmatic purposes, because of the huge library/tool ecosystem and the excellent engineering in the JVM as a runtime platform.
Community: I saw a lot of innovation going on in Clojure, particularly around the general area of data and analytics.
The main thing Clojure was lacking at that time was a good library / API for matrix operations. There were some nice tools in Incanter, but they weren't very general purpose or performant. Hence I started developing core.matrix, which is shaping up to be an idiomatic Clojure-flavoured equivalent of NumPY / SciPY. Right now it is still work in progress but good enough for production use if you are careful.
In terms of low-level matrix support, I also maintain vectorz-clj, which is my attempt to provide a core.matrix implementation that offers high-performance vector/matrix operations while remaining pure Java (i.e. no native dependencies). If you are interested in the performance of this, you may like to see:
http://clojurefun.wordpress.com/2013/03/07/achieving-awesome-numerical-performance-in-clojure/
My second choice after Clojure would have been Scala. I liked Scala's slightly greater maturity and decent static type system. Both the languages are JVM based so the library / tool side was a tie. It was probably the Lisp features that clinched it.
If you happen to have access to Mathematica, then it's fairly easy to get it working with the JVM by means of J/Link. For Clojure, Clojuratica is an excellent library to make that as seamless as possible, although it hasn't been maintained for a while and it may take some effort to get it working in modern environments again.

How to make a code thread safe in scala?

I have some code in Scala that, for various reasons, contains a few lines that must not be executed by more than one thread at the same time.
How can I easily make it thread-safe? I know I could use the Actors model, but I find it a bit of an overkill for a few lines of code.
I would use some kind of lock, but I cannot find any concrete examples on either google or on StackOverflow.
I think that the simplest solution would be to use synchronized for critical sections (just like in Java). Here is the Scala syntax for it:
someObj.synchronized {
  // thread-safe part
}
It's easy to use, but it blocks and can easily cause deadlocks, so I encourage you to look at java.util.concurrent or Akka for, probably, more complicated, but better/non-blocking solutions.
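To make the java.util.concurrent suggestion concrete, one non-blocking pattern is a compare-and-set loop on an AtomicReference; this is only a sketch with placeholder names (SafeState, append), not taken from the question's code:

import java.util.concurrent.atomic.AtomicReference
import scala.annotation.tailrec

object SafeState {
  private val state = new AtomicReference(Vector.empty[String])

  // Lock-free update: read, compute the new value, and retry if another
  // thread changed the state in between (i.e. compareAndSet fails).
  @tailrec
  def append(item: String): Unit = {
    val old  = state.get()
    val next = old :+ item
    if (!state.compareAndSet(old, next)) append(item)
  }

  def snapshot: Vector[String] = state.get()
}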
You can use any Java concurrency construct, such as Semaphores, but I'd recommend against it, as semaphores are error prone and clunky to use. Actors are really the best way to do it here.
Creating actors is not necessarily hard. There is a short but useful tutorial on actors over at scala-lang.org: http://www.scala-lang.org/node/242
If it is really very simple you can use synchronized: http://www.ibm.com/developerworks/java/library/j-scala02049/index.html
Or you could use some of the classes from the concurrent package in the jdk: http://docs.oracle.com/javase/6/docs/api/java/util/concurrent/package-summary.html
If you want to use actors, you should use akka actors (they will replace scala actors in the future), see here: http://doc.akka.io/docs/akka/2.0.1/. They also support things like FSM (Finite State Machine) and STM (Software Transactional Memory).
In general, try to use pure functions or methods with immutable data structures; that should help with thread safety.
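To give a feel for the actor route recommended above, here is a minimal untested sketch using Akka classic actors (names like CounterActor are mine); because one actor processes its mailbox one message at a time, the mutable state inside it needs no explicit lock:

import akka.actor.{Actor, ActorSystem, Props}

// Messages sent to a single actor are processed one at a time, so the mutable
// state below is never touched by two threads at once.
class CounterActor extends Actor {
  private var count = 0

  def receive = {
    case "increment" => count += 1            // the formerly thread-unsafe lines live here
    case "print"     => println(s"count = $count")
  }
}

object Demo extends App {
  val system  = ActorSystem("demo")
  val counter = system.actorOf(Props[CounterActor](), "counter")

  (1 to 5).foreach(_ => counter ! "increment")
  counter ! "print"
  system.terminate()                          // shut the actor system down when done
}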

What is a recommended R interface for Perl integration?

I never dealt with R, so I was wondering if anyone can recommend (either from personal experience or some reviews/comparisons) which of the several Perl/R integration modules are considered "best practices"? Ideally something which could somehow qualify for production readiness.
Google shows several different modules, but I am not quite sure how to evaluate the options, having zero previous R or statistics experience (the question came from a co-worker who was interested in using R).
Yes, looks like Statistics::R is probably your best bet. It's been updated recently, Brian Cassidy is a competent developer, and it's passing its CPAN smoke tests.
There is also Statistics::useR, it has been touched relatively recently, but that one doesn't seem to be compliant with CPAN's smoke testing system, which makes me a bit nervous.
That said, I haven't used either of these.
I've personally not used it, but Statistics::R looks interesting. It's got a 3-star review on CPAN ratings and is currently going through a face-lift with a new maintainer.
/I3az/
What are your actual requirements in terms of
the OS that R is running on,
the OS that the Perl clients are running on,
the type of query you plan ('canned' or interactive),
etc.?
I have long been a fan of Rserve as a headless R backend but I can't recall if there was a Perl client.
If you want to just read R data files, my module Statistics::R::IO would fit the bill. It's a pure Perl implementation that reads both RDS and RData files.
Starting with version 0.4, released last week, you can also use it as an Rserve client.
I've just released Statistics::NiceR. It has support for pretty much all R data types including data.frames.
It's an early release, so I'd like feedback. This is what it looks like:
#!/usr/bin/env perl
use v5.16;

use Statistics::NiceR;
use Data::Frame::Rlike;

my $r = Statistics::NiceR->new;
my $iris = $r->get('iris');

say "Subset of Iris data set";
say $iris->subset( sub {             # like a SQL WHERE clause
        ( $_->('Sepal.Length') > 6.0 )
      & ( $_->('Petal.Width') < 2 )
    })->select_rows(0, 34);          # grab the first and last rows
which outputs
Subset of Iris data set
---------------------------------------------------------------------
       Sepal.Length  Sepal.Width  Petal.Length  Petal.Width  Species
---------------------------------------------------------------------
  51   7             3.2          4.7           1.4          versicolor
 147   6.3           2.5          5             1.9          virginica
---------------------------------------------------------------------
I have recently added Statistics::RserveClient to CPAN. This allows Perl applications to interact with a (possibly remote) Rserve server via a connection-oriented binary protocol. You send R code to the server as strings, and the results are returned as Perl data structures.
There are a number of shortcomings - we don't support long packets yet, or deal properly with certain heterogeneous structures, but the code is under active development, and it works quite nicely for our basic applications.
The code is GPL, hosted at https://github.com/djun-kim/Statistics--RserveClient