What replaced RoutedHttpService in Spray - scala

I've been following this blog post about Spray and Akka, as it seems a reasonable way to separate the implementation from the routing in an async service. However, the post is over six months old, the Spray API appears to have changed since, and the RoutedHttpService it uses about halfway down is nowhere to be found.
I'm fairly new to Scala and very new to Spray, and the Spray docs are obtuse at best, so I've been struggling to work out what to replace that bit of the code with.
A couple of questions then:
Is the approach outlined in this post actually sensible?
If the answer to (1) is yes, then what should RoutedHttpService be replaced with?
If the answer to (1) is no, then is there some other docs of 'the right way' to do Spray?
For easy reference, the bit of code in question is this:
trait Api extends RouteConcatenation {
  this: CoreActors with Core =>

  private implicit val _ = system.dispatcher

  val routes =
    new RegistrationService(registration).route ~
    new MessengerService(messenger).route

  val rootService = system.actorOf(Props(new RoutedHttpService(routes))) // :-(
}

1) The approach described in this post is really nice, but it is already advanced Scala programming. My advice: don't use it if you don't understand it.
2) RoutedHttpService is actually from Eigengo's activator template, not from the Spray API; you can find the source code here.
3) You can also have a look at this project; it gives a nice skeleton with less cake-pattern composition.
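For reference, a minimal stand-in with the same shape can be written directly against spray-routing's HttpService. This is only a sketch of the idea, not the template's exact implementation (which also adds rejection and exception handling):

```scala
import akka.actor.Actor
import spray.routing.{HttpService, Route}

// Sketch: an actor that simply runs whatever Route it is given.
class RoutedHttpService(route: Route) extends Actor with HttpService {
  // HttpService needs an ActorRefFactory; inside an actor, that is its context.
  def actorRefFactory = context

  // runRoute turns the Route into this actor's message handler.
  def receive = runRoute(route)
}
```

With something like this in place, the `system.actorOf(Props(new RoutedHttpService(routes)))` line from the snippet above works unchanged.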

Related

How to create Source from Flow in Akka-stream? (Programming Reactive Systems activity)

I'm trying to complete the last assignment (named Reactive Followers) in the EPFL Programming Reactive Systems course on the edX platform.
I was able to implement all the functions except for outgoingFlow.
It seems to me that I should somehow create a new Source from an existing Flow, and after some reading I still haven't worked out how to run the Flow to generate elements for the new Source.
I've tried to use mapConcat but with no success.
I think the existing flow is this:
eventParserFlow
  .via(followersFlow)
  .filter(p => isNotified(userId)(p))
The types of the existing Flows and my tentative implementation of outgoingFlow are:
val eventParserFlow: Flow[ByteString, Event, NotUsed]
val followersFlow: Flow[Event, (Event, Followers), NotUsed]
def outgoingFlow(userId: Int): Source[ByteString, NotUsed] = {
  eventParserFlow
    .via(followersFlow)
    .filter(p => isNotified(userId)(p))
    .mapConcat { case (e, _) => e.render }
  ???
}
Can anyone point me to some reading or example of how do I solve similar problem in Akka, please?
Just a note: SO is not the best resource for this type of question; what you should use is the discussion section of the corresponding edX course.
Regarding your question, I will not give you an explicit answer, just a few hints.
In akka-streams you can't just create a Source out of a Flow. A Flow is responsible for transformation, while a Source produces new elements. In your assignment you simply forgot to use one of the available values.
Read carefully the comments in class Server (not object).
Look closely at val (inboundSink, broadcastOut) = ... and try to figure out what each of those vals is for and how they relate to each other and to the app itself. It will be helpful to work out what their types are.
These hints should be enough to understand how to implement outgoingFlow, which is the Source[ByteString, NotUsed].
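To illustrate the first hint without giving away the assignment: a Flow only describes a transformation, and it yields a Source once it is attached to one. A generic sketch, entirely unrelated to the course code:

```scala
import akka.NotUsed
import akka.stream.scaladsl.{Flow, Source}

object FlowVsSource {
  // A Flow on its own only describes a transformation...
  val doubler: Flow[Int, Int, NotUsed] = Flow[Int].map(_ * 2)

  // ...it becomes a Source only when attached to something that emits elements.
  val doubled: Source[Int, NotUsed] = Source(1 to 3).via(doubler)
}
```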

Deploy Verticle with Vertx 3 and Scala

I'm trying to deploy a Verticle from Scala using Vert.x 3, but since Scala is not a widely used language for Vert.x, I cannot find a good example.
Can anybody provide me with some GitHub examples, please?
Regards.
There is no official Vert.x support for Scala yet. Jochen Mader (@codepitbull) is currently working on vertx-lang-scala. In the meantime, it's possible to use the Java API of Vert.x.
If you are somewhat experienced with Scala you could take a look at our project tableaux on GitHub. It's a project which is still under development, but we already use it in production. I think the most important files to begin with are ScalaVerticle.scala and VertxExecutionContext.scala. With the latter you are able to use Scala's Future.
Edit: vertx-lang-scala is now officially part of Vert.x 3.4.0!
It's buried deep in the documentation but the key is to prefix with "scala:" or use the helper method ScalaVerticle.nameForVerticle.
See: http://vertx.io/docs/vertx-core/scala/#_accessing_the_vertx_instance_from_a_verticle
For example, if you're following the example at http://vertx.io/blog/scala-is-here/ you can add a main object with the body:
import scala.concurrent.ExecutionContext.Implicits.global // needed by onComplete
import scala.util.{Failure, Success}

import io.vertx.lang.scala.ScalaVerticle
import io.vertx.scala.core.Vertx

// HttpVerticle is the verticle defined in the blog post linked above.
object VertxMain extends App {
  val vertx = Vertx.vertx()
  val startFuture = vertx.deployVerticleFuture(ScalaVerticle.nameForVerticle[HttpVerticle])
  startFuture.onComplete {
    case Success(stat) => println(s"Successfully deployed verticle $stat")
    case Failure(ex)   => println(s"Failed to deploy verticle $ex")
  }
}

Configure play-slick and samples

I'm currently trying to use Play Framework 2.2 and play-slick (master branch).
In the play-slick code I would like to override the driver definition in order to add the Oracle driver (I'm using slick-extensions). In the Config.scala of play-slick I just saw /** Extend this to add driver or change driver mapping */ ...
I'm coming from far, far away (currently reading Programming in Scala), so there's a lot to learn. My questions are:
Can someone explain to me how to extend this Config object? This object is used in other classes... Is the cake pattern useful here?
Talking about the cake pattern, I read the computer-database example provided by play-slick. This sample uses the cake pattern and imports play.api.db.slick.Config.driver.simple._. If I'm using the Oracle driver I cannot use this import, am I wrong? How can I use the cake pattern to define an implicit session?
Thanks a lot.
Waiting for your advice; I'm still studying the play-slick code at home :)
To extend the Config trait I do not think the cake pattern is required. You should be able to create your Config object like this:
import scala.slick.driver.ExtendedDriver

object MyExtendedConfig extends play.api.db.slick.Config {
  override def driverByName: String => Option[ExtendedDriver] = { name: String =>
    super.driverByName(name) orElse Map("oracledriverstring" -> OracleDriver).get(name)
  }

  lazy val app = play.api.Play.current
  lazy val driver: ExtendedDriver = driver()(app)
}
To be able to use it you only need to do import MyExtendedConfig.driver._ instead of import play.api.db.slick.Config.driver._. BTW, I see that driverByName could have been typed as a Map instead of a Function, which would make it easier to extend. Changing that wouldn't break anything, but it would make this kind of extension simpler.
I think Jonas Bonér's old blog is a great place to read what the cake pattern is (http://jonasboner.com/2008/10/06/real-world-scala-dependency-injection-di/). My naive understanding of it is that you have a cake pattern when you have layers that uses the self types:
trait FooComponent { driver: ExtendedDriver =>
  import driver.simple._

  class Foo extends Table[Int]("") {
    //...
  }
}
There are two use cases for the cake pattern in slick/play-slick: 1) if you have tables that reference other tables (as in the computer-database sample); 2) to have control over exactly which database is used at which time, or if you use many different database types. By using the Config you do not really need the cake pattern as long as you only have two different DBs (one for prod and one for test), which is the point of the Config.
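To make the layering idea concrete without any Slick types, here is a hypothetical, library-free sketch of the same self-type mechanism (all names invented for illustration):

```scala
// A layer providing configuration.
trait DatabaseComponent {
  def driverName: String
}

// A layer that declares, via a self type, that it can only be mixed
// into something that also provides DatabaseComponent.
trait UserTableComponent { self: DatabaseComponent =>
  def describeUsers: String = s"users table on $driverName"
}

// The "cake" is assembled by mixing the layers together.
object AppDatabase extends DatabaseComponent with UserTableComponent {
  val driverName = "h2"
}
```

The compiler rejects any attempt to extend UserTableComponent without also providing DatabaseComponent, which is exactly how the slick samples guarantee every table layer has a driver in scope.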
Hope this answers your questions and good luck on reading Programming in Scala (loved that book :)

How to define a slick joinCondition

The Scala Days 2013 talk http://www.parleys.com/play/51c2e20de4b0d38b54f46243/chapter55/agenda mentions "joinCondition".
For example:
implicit def autojoin1 = joinCondition[Sites, Devices](_.id === _.siteId)
implicit def autojoin2 = joinCondition[Devices, Computers](_.computerId === _.id)

sites.autoJoin(devices).further(computers)
  : Query[_, (Site, Computer)]

sites.autoJoin(devices).autoJoinVia(computers)(_._2)
  : Query[_, ((Site, Device), Computer)]
I'm very new to Scala and can't figure out what joinCondition is. I can't find any method or anything by that name in Slick (1.0.0), and I can't get it to work. What is it?
As said in the talk (but not listed in the slides), the complete autoJoin feature is not currently offered by Slick; it is part of a demo Play project we prepared. The code is here: https://github.com/cvogt/play-slick/blob/scaladays2013/samples/computer-database/app/util/autojoin.scala (and in the other files in https://github.com/cvogt/play-slick/blob/scaladays2013/samples/computer-database/app/).
Check also this blog article, which presents a solution for Slick 2.0: http://tikokelottlegyenviz.blogspot.fr/2013/08/scala-slick-left-join.html

Scala Case Class Map Expansion

In Groovy one can do:
class Foo {
  Integer a, b
}

Map map = [a: 1, b: 2]
def foo = new Foo(map) // map expanded, object created
I understand that Scala is not, in any sense of the word, Groovy, but am wondering if map expansion in this context is supported.
Simplistically, I tried and failed with:
case class Foo(a: Int, b: Int)
val map = Map("a" -> 1, "b" -> 2)
Foo(map: _*) // no dice, always applied to first property
A related thread that shows possible solutions to the problem.
Now, from what I've been able to dig up, as of Scala 2.9.1 at least, reflection with regard to case classes is basically a no-op. The net effect appears to be that one is forced into some form of manual object creation, which, given the power of Scala, is somewhat ironic.
I should mention that the use case involves the servlet request parameters map. Specifically, using Lift, Play, Spray, Scalatra, etc., I would like to take the sanitized params map (filtered via the routing layer) and bind it to a target case class instance without needing to manually create the object or specify its types. This would require "reliable" reflection and implicits like "str2Date" to handle type conversion errors.
Perhaps in 2.10, with the new reflection library, implementing the above will be cake. I'm only 2 months into Scala, so just scratching the surface; I do not see any straightforward way to pull this off right now (for seasoned Scala developers it may be doable).
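For comparison, the manual construction the question wants to avoid looks like this. A minimal sketch with an invented helper name; real request-parameter binding would also need string-to-type conversion and error handling:

```scala
case class Foo(a: Int, b: Int)

// Hypothetical helper: bind a params map to Foo by hand,
// assuming both keys are present and already typed.
def fooFromMap(params: Map[String, Int]): Foo =
  Foo(params("a"), params("b"))

// Usage: fooFromMap(Map("a" -> 1, "b" -> 2)) builds Foo(1, 2).
```

Every new case class needs its own such helper, which is exactly the boilerplate a reflection-based approach would eliminate.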
Well, the good news is that Scala's Product interface, implemented by all case classes, actually doesn't make this very hard to do. I'm the author of a Scala serialization library called Salat that supplies some utilities for using pickled Scala signatures to get typed field information
https://github.com/novus/salat - check out some of the utilities in the salat-util package.
Actually, I think this is something that Salat should do - what a good idea.
Re: D.C. Sobral's point about the impossibility of verifying params at compile time: point taken, but in practice this should work at runtime just like deserializing anything else with no guarantees about structure, such as JSON or a Mongo DBObject. Also, Salat has utilities to leverage default args where supplied.
This is not possible, because it is impossible to verify at compile time that all parameters were passed in that map.