I am new to Apache Thrift, and while I am familiar with Scala, I have not seen any example or reference on the internet saying that Thrift supports Scala.
Can someone tell me whether there is a way to work with Scala on Thrift? Thank you.
Yes, Thrift can work with Scala smoothly. That is not surprising, since Scala runs on the JVM. One open-source example is Twitter's Scalding, a Scala DSL for Cascading. In Scalding you can handle various Cascading flows whose tuples are Thrift-based classes.
See LongThriftTransformer for an example.
I don't mean to be rude, but the very first result for the Google query apache thrift scala is Scrooge, which
notes that, due to the interoperability between Scala and Java, ordinary thrift-java will work fine, and
is itself a way to work with Thrift in a Scala-native way.
So yes, there are ways to work with Thrift in Scala.
Twitter's Scrooge is meant as a replacement for the standard Thrift generator and generates both Java and Scala code. It works with SBT and seems relatively mature.
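If it helps to see the wiring, here is a minimal sketch of adding Scrooge to an SBT build; the plugin and artifact coordinates are real, but treat the version numbers as placeholders to check against the current releases:

// project/plugins.sbt -- pulls in the Scrooge code generator
addSbtPlugin("com.twitter" % "scrooge-sbt-plugin" % "4.12.0")

// build.sbt -- .thrift files under src/main/thrift are compiled to Scala
libraryDependencies ++= Seq(
  "org.apache.thrift" % "libthrift" % "0.9.3",
  "com.twitter" %% "scrooge-core" % "4.12.0"
)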
I'm trying to write a Scala UDF for Hive which acts on a JSON array, extending org.apache.hadoop.hive.ql.exec.UDF and relying on play-json's play.api.libs.json.parse.
When attempting to call this from within Hive, I see java.lang.NoSuchMethodError: com.fasterxml.jackson.core.JsonToken.id()I.
I'm not sure what the cause is here. Is it some incompatibility between Jackson versions, and if so, how can I work around it?
The only component/version that I'm tied to is Hive 1.2.
Take a look at the JSON UDFs in Brickhouse (http://github.com/klout/brickhouse). Brickhouse has the UDFs to_json and from_json, as well as the convenience functions json_map and json_split to deal directly with maps and arrays.
Regarding your versioning problem: Brickhouse uses Jackson under the covers, at version 1.8.8 (among others), and I haven't come across this particular versioning problem.
The guess that this is a Jackson incompatibility makes sense.
Hive 1.2 uses Jackson 1.9.2, while recent versions (i.e. from the last couple of years) of Play-JSON depend on much later Jackson releases.
If reverting to an old enough version of Play-JSON doesn't make sense, then perhaps the simplest workaround is to use a Scala JSON parsing library that doesn't depend on Jackson; Rapture JSON can be used with multiple backends, so it might be a good choice.
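Another common workaround, if you package the UDF as a fat jar, is to shade Jackson so the classes Play-JSON loads cannot collide with the ones on Hive's classpath. A minimal sketch, assuming sbt-assembly is used (the rename target is illustrative):

// build.sbt -- relocate Jackson inside the assembled UDF jar
assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("com.fasterxml.jackson.**" -> "shaded.jackson.@1").inAll
)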
I have a kind of philosophical question.
I have been a very happy user of Play Framework for Java for a couple of years now. Now I am trying to dive into Scala and functional programming. In Java-based Play I have been using Ebean, so following the Play documentation I extended the Ebean Model class and implemented my own models. In each model I declared a static variable of type Finder in order to run queries. All of this is documented and works well.
However, in Scala-based Play (v2.5.x) there is not much documentation about the persistence layer. OK, I understood that Play Slick is recommended, as it follows the ideas of functional programming. I am quite excited about that, but there is almost no documentation on how to use it. I found out how to enable Slick, how to configure the data source and database server, and how to inject the db into a controller. There is also a very small example of running a simple query against the db.
The question is: how do I actually use Slick? I researched some third-party tutorials and blogs, and it seems there are multiple ways.
1) How do I define models? It seems that I should use case classes to define the model itself, and then define a class extending Table to describe the columns and their properties?
2) What is the project structure? Should I create a new Scala file for each model? By Java conventions I should, but I have also seen all models in one Scala file (as in Python's Django). I assume separate files are better.
3) Should I create DAOs for manipulating models, or something like a Service? The code would probably be much the same; what I am really asking about is the structure of the project.
Thank you in advance for any ideas.
I had the same questions about Slick and came up with a solution that works for me. Have a look at this example project:
https://github.com/nemoo/play-slick3-example
Most other example projects are too basic, so I created this project with a broader scope, similar to what I have found in real-life Play code. I tested out various approaches, including services. In the end I found the additional layer hard to work with, because I never knew where to put the code. You can see the thought process in the past commits :)
Let me quote from the readme: Repositories handle interactions with domain aggregates. All public methods are exposed as Futures. Internally, in some cases, we need to compose various queries into one block that is carried out within a single transaction. In this case the individual queries return DBIO query objects, and a single public method runs those queries and exposes a Future to the client.
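To make that pattern concrete, here is a minimal sketch of a model, its Table mapping, and a repository exposing Futures, assuming Slick 3.1 with the H2 driver; the Task/TaskRepository names are purely illustrative:

import scala.concurrent.Future
import slick.driver.H2Driver.api._

case class Task(id: Long, title: String)

// Maps the case class onto a table; each def describes one column.
class Tasks(tag: Tag) extends Table[Task](tag, "tasks") {
  def id    = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def title = column[String]("title")
  def *     = (id, title) <> (Task.tupled, Task.unapply)
}

// The repository is the only layer that talks to the database.
class TaskRepository(db: Database) {
  private val tasks = TableQuery[Tasks]

  def all(): Future[Seq[Task]] = db.run(tasks.result)

  def create(title: String): Future[Long] =
    db.run((tasks returning tasks.map(_.id)) += Task(0L, title))
}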
I can wholeheartedly recommend the Getting Started part of the Slick documentation.
There is also a Typesafe Activator template for Slick, Hello Slick, which you can find here and then explore and continue from.
To get started with Slick and Play you would need to add the dependency in your build.sbt file:
"com.typesafe.play" %% "play-slick" % "2.0.0"
Also add evolutions (which I recommend):
"com.typesafe.play" %% "play-slick-evolutions" % "2.0.0"
And of course the driver for the database:
"com.h2database" % "h2" % "${H2_VERSION}" // replace `${H2_VERSION}` with an actual version number
Then you would have to specify the configuration for your database:
slick.dbs.default.driver="slick.driver.H2Driver$"
slick.dbs.default.db.driver="org.h2.Driver"
slick.dbs.default.db.url="jdbc:h2:mem:play"
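With that in place, here is a minimal sketch of injecting the database into a controller, assuming play-slick 2.0's DatabaseConfigProvider; the Application name and the trivial query are illustrative:

import javax.inject.Inject
import play.api.mvc.{Action, Controller}
import play.api.db.slick.DatabaseConfigProvider
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import slick.driver.JdbcProfile

class Application @Inject()(dbConfigProvider: DatabaseConfigProvider) extends Controller {
  private val dbConfig = dbConfigProvider.get[JdbcProfile]
  import dbConfig.driver.api._

  // Runs a trivial query to prove the wiring works end to end.
  def index = Action.async {
    dbConfig.db.run(sql"select 1".as[Int]).map(r => Ok(r.mkString))
  }
}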
If you want a nice overview of all this and more, you should definitely take a look at THE BEST STARTING POINT, a complete project with models, DAOs, and controllers, adapted to Play 2.5.x.
Is there a straightforward way to generate RDBMS DDL for a set of Scala classes?
I.e., to derive a table DDL for each class (whereby each case class field would translate to a field of the table, with a corresponding RDBMS type).
Or, to directly create the database objects in the RDBMS.
I have found some documentation about Ebean being embedded in the Play framework, but I was not sure what side effects enabling Ebean in Play might have, and how much taming Ebean would require to avoid them. I have never even used Ebean before...
I would actually rather use something outside of Play, but if it's simple to accomplish in Play, I would dearly like to know a clean way. Thanks in advance!
Is there a straightforward way to generate rdbms ddl, for a set of
scala classes?
Yes
Ebean
Ebean is the default ORM provided by Play: you just have to create an entity and enable evolutions (which are enabled by default). It will create a .sql file in the conf/evolutions/default directory, and when you hit localhost:9000 it will show you an apply script. But your tags say you are using Scala, so you can't really use Ebean with Scala. If you do, you will have to
sacrifice the immutability of your Scala class, and use the Java
collections API instead of the Scala one.
Using Scala this way will just bring more trouble than using Java directly.
Source
JPA
JPA (using Hibernate as the implementation) is the default way to access and manage an SQL database in a standard Play Java application. It is still possible to use JPA from a Play Scala application, but it is probably not the best way, and it should be considered legacy and deprecated. Source
Anorm (if you want to write the DDL yourself)
Anorm is not an object-relational mapper, so you have to write the DDL manually. Source
Slick
Functional relational mapping for Scala; see the DDL sketch after this list. Source
Activate
Activate is a framework to persist objects in Scala. Source
Skinny
It is built upon the ScalikeJDBC library, which is a thin but powerful JDBC wrapper. Details1, Details2
Also check RDBMS with scala and Best data access option for play scala.
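Since Slick is the closest match to deriving DDL from classes, here is a minimal sketch, assuming Slick 3.x with the H2 driver; the User/Users names are illustrative:

import slick.driver.H2Driver.api._

case class User(id: Long, email: String)

class Users(tag: Tag) extends Table[User](tag, "users") {
  def id    = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def email = column[String]("email")
  def *     = (id, email) <> (User.tupled, User.unapply)
}

object PrintDdl extends App {
  // createStatements derives the CREATE TABLE DDL from the Table definition;
  // schema.create would instead execute it against a live database.
  TableQuery[Users].schema.createStatements.foreach(println)
}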
In my Scala application I need to use several Maps and Lists which get updated very often. Are there any thread-safe collections in Scala that maintain insertion order?
Yes, there is a trait in the scala.collection.concurrent package: concurrent.Map. It is just a trait, though, so you need a concrete implementation behind it; the standard one in the Scala library is scala.collection.concurrent.TrieMap.
If you need a good concurrent map, try Google's ConcurrentLinkedHashMap and convert it to a Scala Map using the Scala/Java converters; that will give you better performance than mixing in SynchronizedMap. For example, my favourite toolkit, Spray, uses it as the core structure for implementing its caching module. As you can see, Spray is the fastest Scala web toolkit.
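A minimal sketch of that conversion, assuming the com.googlecode.concurrentlinkedhashmap artifact is on the classpath; the key type and capacity are illustrative:

import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap
import scala.collection.JavaConverters._
import scala.collection.concurrent

// Build the underlying Java map, then view it as a Scala concurrent.Map.
val underlying = new ConcurrentLinkedHashMap.Builder[String, Int]()
  .maximumWeightedCapacity(1000)
  .build()

val map: concurrent.Map[String, Int] = underlying.asScala
map.putIfAbsent("visits", 1)
map.replace("visits", 1, 2)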
I'm trying to find the 'right' actor implementation. I realized there are a bunch of them, and it's a bit confusing to pick one. Personally, I'm especially interested in remote actors, but I guess a complete overview would be helpful to many others. This is a pretty general question, so feel free to answer just for the implementation you know about.
I know about the following Scala actor implementations (SAIs). Please add any missing ones.
Scala 2.7 (and its differences to 2.8)
Scala 2.8
Akka (http://www.akkasource.org/)
Lift (http://liftweb.net/)
Scalaz (http://code.google.com/p/scalaz/)
What are the target use cases for these SAIs (lightweight vs. "heavy" enterprise framework)?
Do they support remote actors? What shortcomings do remote actors have in each SAI?
How is their performance?
How active is their community?
How easy are they to get started with? How good is the documentation?
How easy are they to extend?
How stable are they? Which projects are using them?
What are their shortcomings?
What are their design principles?
Are they thread-based or event-based (receive/react) or both?
Nested receives?
Hotswapping the actor's message loop?
This is the most comprehensive comparison I have seen so far:
http://doc.akka.io/docs/misc/Comparison_between_4_actor_frameworks.pdf via http://klangism.tumblr.com/post/2497057136/all-actors-in-scala-compared
As of Scala 2.10, Scala actors are deprecated and Akka actors are part of the standard distribution.
Scala 2.7.7 vs. 2.8, according to the Scala 2.8.0 RC3 release notes:
New Reactors provide more lightweight, purely event-based actors with optional, implicit sender identification. Support for actors with daemon-style semantics was added. Actors can be configured to use the efficient JSR166y fork/join pool, resulting in significant performance improvements on 1.6 JVMs. Schedulers are now pluggable and easier to customize.
There's also a design document by Haller: Scala Actors: Unifying Thread-based and Event-based Programming.
As far as I know, only Scala and Akka support remote actors.
Akka is backed by Scalable Solutions, which offers commercial support and plug-ins for Akka.
Akka seems like a heavyweight solution, which targets integration with existing frameworks (Camel, AMQP, JTA, Comet, Spring, Redis) and, additionally, STMs and persistence.
Compared to Scala actors, Akka doesn't support nested receives, but it supports hotswapping the actor's message loop and has both thread-based and event-based actors, plus so-called "event-based single-threaded" ones.
I realized that Akka enforces exhaustive matches. So even though receive technically expects a partial function, the function must not be partial, which means you have to handle every message immediately.
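For illustration, here is a minimal sketch of hotswapping the message loop. The API shown is context.become from Akka 2.x, so treat the exact syntax as an assumption; the early versions discussed above exposed hotswapping differently:

import akka.actor.Actor

class Toggle extends Actor {
  def receive = off

  // Each behaviour handles every message, matching the exhaustiveness
  // expectation described above.
  def off: Receive = {
    case "switch" => context.become(on)
    case _        => // ignore anything else
  }

  def on: Receive = {
    case "switch" => context.become(off)
    case _        =>
  }
}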