akka-http with multiple route configurations - scala

Quick Background
I am running through some examples learning the Akka HTTP stack for creating a new REST project (completely non-UI). I have been using and augmenting the Akka HTTP Microservice Example to work through a bunch of use cases and configurations and have been pleasantly surprised by how well Scala & Akka HTTP work.
Current Setup
Currently I have a configuration like this:
object AkkaHttpMicroservice extends App with Service {
  override implicit val system = ActorSystem()
  override implicit val executor = system.dispatcher
  override implicit val materializer = ActorMaterializer()
  override val config = ConfigFactory.load()
  override val logger = Logging(system, getClass)

  Http().bindAndHandle(routes, config.getString("http.interface"), config.getInt("http.port"))
}
The routes parameter is just a simple value that defines the usual route structure using path, pathPrefix, etc.
The Problem
Is there any way to set up routing in multiple Scala files or an example somewhere out there?
I would really like to define a set of classes that separate concerns, handle Actor setup and processing for the different areas of the application, and leave only the marshalling to the root App extension.
This might be me thinking too much in terms of how I did things in Java using annotations like #javax.ws.rs.Path("/whatever") on my classes. If that is the case, please feel free to point out the change in mindset.
I tried searching with a few different sets of keywords but believe I am asking the wrong question (e.g., 1, 2).

Problem 1 - combine routes in multiple files
You can combine routes from multiple files quite easily.
FooRouter.scala
import akka.http.scaladsl.model.StatusCodes.OK
import akka.http.scaladsl.server.Directives._

object FooRouter {
  val route = path("foo") {
    complete {
      OK -> "foo"
    }
  }
}
BarRouter.scala
import akka.http.scaladsl.model.StatusCodes.OK
import akka.http.scaladsl.server.Directives._

object BarRouter {
  val route = path("bar") {
    complete {
      OK -> "bar"
    }
  }
}
MainRouter.scala
import akka.http.scaladsl.server.Directives._
// plus imports for FooRouter and BarRouter if they live in other packages

object MainRouter {
  val routes = FooRouter.route ~ BarRouter.route
}

object AkkaHttpMicroservice extends App with Service {
  ...
  Http().bindAndHandle(MainRouter.routes, config.getString("http.interface"), config.getInt("http.port"))
}
Here are some docs:
http://doc.akka.io/docs/akka-http/current/scala/http/routing-dsl/overview.html
http://doc.akka.io/docs/akka-http/current/scala/http/routing-dsl/routes.html
Problem 2 - separate routing, marshalling, etc.
Yes, you can separate routing, marshalling and application logic. Here is an Activator example: https://github.com/theiterators/reactive-microservices
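For instance, one common shape (a sketch with assumed names, not code taken from that repository) keeps the route definition and marshalling in a thin router class and pushes the actual work into a separate service:

import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route
import scala.concurrent.Future

// Hypothetical service that owns the business logic / actor interaction
class UserService {
  def find(id: Long): Future[String] = Future.successful(s"user-$id")
}

// The router only declares routes and completes with the service's result
class UserRouter(userService: UserService) {
  val route: Route =
    pathPrefix("users" / LongNumber) { id =>
      get {
        complete(userService.find(id)) // marshalling stays at the route layer
      }
    }
}

The root App then only combines such routes with ~ and binds them, exactly as in Problem 1.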
Problem 3 - handle routes using annotations
I don't know of any library that lets you define akka-http routing with annotations. Try to learn more about the routing DSL; it is a different approach to HTTP routing, but a convenient tool too.

Related

Is there a way to "trickle down" implicits from top level applications to other imported modules?

I am trying to refactor some code for a program which uses an ActorSystem as the backbone for Http calls.
My specific goal is to make my code more modular so I can write libraries of functions which make http calls using an ActorSystem where the ActorSystem is later expected to be provided by the application.
This is a general question though as I tend to run into this problem a reasonable amount.
I have two goals:
Minimize the number of ActorSystems I create to simplify tracking of them (one per top level application is the goal)
Avoid explicitly passing around the ActorSystem and context everywhere it's needed.
Conceptually - the code below illustrates how I'm thinking about it (of course this code would not compile).
import akka.actor.ActorSystem
import intermediateModule._
import scala.concurrent.ExecutionContextExecutor

object MyApp extends App {
  // Create the actor system and place it into scope
  implicit val system = ActorSystem()
  implicit val context = system.dispatcher
  intermediateFunc1(300)
}

// Elsewhere in the intermediate module
object intermediateModule {
  import expectsActorSystemModule._

  def intermediateFunc1(x: Int) = {
    // Relies on the ActorSystem and execution context,
    // but won't compile because, of course, the application's ActorSystem and
    // ec are not in scope here
    usesActorSystem(x)
  }
}

// In this module, usesActorSystem needs an ActorSystem
object expectsActorSystemModule {
  def usesActorSystem(x: Int)
                     (implicit system: ActorSystem, context: ExecutionContextExecutor) = ???
  // ... does some stuff like sending http requests with the ActorSystem
}
Is there a way to "trickle down" implicits through the sub-modules to achieve the goal of the top level application providing the needed implicits?
Can this be done in a way such that the "depth" of module imports doesn't matter (e.g. if I added a few more intermediate libraries in between the top level app and the module which requires the ActorSystem)?
The answer here is dependency injection. Every object that has dependencies on other objects should receive them as constructor parameters. The important thing here is that higher layers only get their own dependencies, and not their dependencies' dependencies.
In your example, IntermediateModule doesn't use the ActorSystem itself; it only needs it to pass it on to ExpectsActorSystemModule. This is bad, because if the latter changes and requires another dependency, you will need to change the former as well – that is too much coupling. You can refactor it like so:
import akka.actor.ActorSystem
import scala.concurrent.ExecutionContextExecutor

object MyApp extends App {
  // Create the actor system and place it into scope,
  // then wire everything together
  implicit val system = ActorSystem()
  implicit val context = system.dispatcher
  val expectsActorSystemModule = new ExpectsActorSystemModule
  val intermediateModule = new IntermediateModule(expectsActorSystemModule)

  // run stuff
  intermediateModule.intermediateFunc1(300)
}

// Elsewhere in the intermediate module
class IntermediateModule(expectsActorSystemModule: ExpectsActorSystemModule) {
  def intermediateFunc1(x: Int) = {
    // Note: no ActorSystem or ExecutionContext is needed, because they were
    // injected into expectsActorSystemModule
    expectsActorSystemModule.usesActorSystem(x)
  }
}

// In this module, usesActorSystem needs an ActorSystem
class ExpectsActorSystemModule(
    implicit system: ActorSystem,
    context: ExecutionContextExecutor) {
  def usesActorSystem(x: Int) = ???
  // ... does some stuff like sending http requests with the ActorSystem
}
Note that IntermediateModule no longer needs an ActorSystem or an ExecutionContext, because those were provided directly to ExpectsActorSystemModule.
The slightly annoying part is that at some point you have to instantiate all these objects in your application and wire them all together. In the above example it's only 4 lines in MyApp, but it will get significantly longer for more substantial programs.
There are libraries like MacWire or Guice to help with this, but I would recommend against using them. They make it much less transparent what is going on, and they don't save all that much code either – in my opinion, it's a bad tradeoff. And these two specifically have more downsides. Guice comes from the Java world and gives you basically no compile-time guarantees, meaning that your code might compile just fine and then fail to start because of Guice's runtime wiring. MacWire is better in that regard (everything is done at compile time), but it's not future-proof because it's implemented as a Scala 2 macro – it will not work on Scala 3 in its current form.
Another approach that is popular among the purely functional programming community is to use ZIO's ZLayer. But since you're working on an existing codebase that is based on the Lightbend tech stack, this is unlikely to be the means of choice in this particular case.

Access Play Framework Router routing table at run time

I am writing an EssentialFilter so I can perform an operation on every request. However all that the filter receives is a RequestHeader object and I need to know information about the actual controller that will be handling this request later down the line.
This information is plain and clear in routes.conf:
GET /foobar controllers.MyController.foobar()
GET /bashbaz controllers.MyController.bashbaz()
And I can even see that in my target folder a generated routing table is laid out very neatly in a documentation object:
// This example is greatly simplified for clarity
class Routes() {
  def documentation = List(
    ("""GET""", prefix + """foobar""", """controllers.MyController.foobar()"""),
    ("""GET""", prefix + """bashbaz""", """controllers.MyController.bashbaz()""")
  )
}
My only question is: how do I access this during runtime?
This answer coincidentally shows that the routes used to be available via Play.maybeApplication.get.routes but that is now deprecated. How do I get a Routes object at run time?
Play actually makes the routes available via dependency injection (DI) of its Router object. If you already have DI set up in your app, then you only need to inject it into your filter's constructor:
import play.api.routing.Router
class YourFilter(router: Router) extends EssentialFilter { ... }
If you haven't set up DI yet, then I recommend reading the official reference on the subject. This third-party blog post also details some modern libraries that can be useful.
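For completeness, a minimal sketch of how such a filter is typically registered under Play's default runtime DI (Guice); the filter name is carried over from the snippet above and the body is a placeholder:

import javax.inject.Inject
import play.api.http.DefaultHttpFilters
import play.api.mvc.{EssentialAction, EssentialFilter}
import play.api.routing.Router

// Play's DI container injects the Router at construction time
class YourFilter @Inject() (router: Router) extends EssentialFilter {
  override def apply(next: EssentialAction): EssentialAction = next // real logic goes here
}

// One common way to enable the filter; alternatively point the
// play.http.filters configuration key at your own HttpFilters class
class Filters @Inject() (yourFilter: YourFilter) extends DefaultHttpFilters(yourFilter)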
However, if you want to see which controller handles a particular RequestHeader, then I recommend ignoring the Router and documentation objects entirely and making use of the convenient handlerDef implicit:
import play.api.routing.Router.RequestImplicits.WithHandlerDef
override def apply(next: EssentialAction) = { request: RequestHeader =>
  val handlerDefOpt = request.handlerDef
  handlerDefOpt.map { handlerDef =>
    // Would be "controllers.MyController" in your example
    handlerDef.controller
    // Would be "foobar" or "bashbaz" in your example
    handlerDef.method
    // Would be "GET" in your example
    handlerDef.verb
    // Would be "/foobar" or "/bashbaz" in your example
    handlerDef.path
  }
}
Alternatively, you can get the HandlerDef from the request's attrs:
val handlerDef: Option[HandlerDef] = request.attrs.get(Router.Attrs.HandlerDef)
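Putting that together, here is a minimal sketch of a complete filter that logs the matched route via the request attrs (the filter name and log output are illustrative):

import play.api.mvc.{EssentialAction, EssentialFilter}
import play.api.routing.{HandlerDef, Router}

class RouteLoggingFilter extends EssentialFilter {
  override def apply(next: EssentialAction): EssentialAction = EssentialAction { request =>
    // The router attaches a HandlerDef to the request when a route matches
    val handlerDef: Option[HandlerDef] = request.attrs.get(Router.Attrs.HandlerDef)
    handlerDef.foreach { hd =>
      println(s"${hd.verb} ${hd.path} -> ${hd.controller}.${hd.method}")
    }
    next(request) // continue with the normal action
  }
}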

Using traits for mixing core libraries in Scala

I'm working on a multi-project SBT build in Scala. I extracted the core things into a separate SBT project. This includes handling config, third-party libraries (initializing the RMQ client, the Redis client, etc.) and some models.
So I organised things like config loading into a trait; I then mix this trait in where I need it and just use the config method defined in the Configuration trait, which loads the config for a specific environment (based on an environment variable). I did the same for the database: I load PostgreSQL, open a connection, then mix that trait in where I need it and just use the database method for executing queries and so on.
Is this a good approach in your opinion? The advantage is that I do not have to handle database connections and initialisation in each project, and the code is much shorter. However, there is one issue with closing the connection: where should the connection be closed in a trait where Database is mixed in?
Any help on the topic is appreciated. Thanks
Amer
Regarding connections, you should be closing them (returning them to the pool) whenever you are done with them; that is orthogonal to the mix-in implementation.
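For example, a loan-style helper keeps the close (or return to pool) right next to the code that uses the connection. This is only a sketch: it assumes Scala 2.13's scala.util.Using and that the mixed-in Database trait exposes a DataSource.

import java.sql.Connection
import javax.sql.DataSource
import scala.util.Using

trait Database {
  def dataSource: DataSource // provided by the concrete initialisation code

  // The connection is closed (returned to the pool) even if f throws
  def withConnection[A](f: Connection => A): A =
    Using.resource(dataSource.getConnection())(f)
}

Callers then never hold a connection outside withConnection, so the trait itself never needs a separate "close" hook.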
As for configuration and such, I like this approach, but the problem with it is that most of the time you want things like the loaded config to be singletons. However, if you simply do something like
trait Configuration {
  val config = loadConfig
}

class Foo extends Configuration
class Bar extends Configuration

val f1 = new Foo
val f2 = new Foo
val b1 = new Bar
val b2 = new Bar
then, you'll end up loading four different copies of the config.
One way around this is to delegate loadConfig to a singleton object:
object Configuration {
  val config = loadConfig
}

trait Configuration {
  def config = Configuration.config
}
This works, but makes it much harder to unit-test and override the functionality (what if I want my config loaded from a database sometimes?)
Another possibility is a proxy class:
trait Configuration {
  def loadConfig: Config
  lazy val config: Config = loadConfig
}

class ConfigurationProxy(cfg: Configuration) extends Configuration {
  def loadConfig = cfg.config
}

object Main extends App with Configuration {
  def loadConfig = ??? // executed only once per application
  ...
}
class Foo extends ConfigurationProxy(Main)
class Bar extends ConfigurationProxy(Main)
val f1 = new Foo
val f2 = new Foo
val b1 = new Bar
val b2 = new Bar
Now, all four variables are looking at the same Config instance.
But if you have a function somewhere that wants a Configuration, you can still pass any of these in:
def connectToDB(cfg: Configuration) = ???
connectToDB(Main)
connectToDB(f1)
connectToDB(b2)
etc.
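A nice side effect of the proxy approach (sketch, with a hypothetical test object): a unit test can supply its own Configuration without touching Main, for example by parsing an in-memory config:

import com.typesafe.config.{Config, ConfigFactory}

// Hypothetical test stub: loadConfig builds an in-memory config instead of
// reading application.conf
object TestConfiguration extends Configuration {
  def loadConfig: Config = ConfigFactory.parseString("http.port = 8080")
}

connectToDB(TestConfiguration) // same shape of call as above, now with the test config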

Configuring actor behavior using typesafe Config and HOCON

I am building a large agent-based / multi-agent model of a stock exchange using Akka/Play/Scala, etc and I am struggling a bit to understand how to configure my application. Below is a snippet of code that illustrates an example of the type of problem I face:
class Exchange extends Actor {
  val orderRoutingLogic = new OrderRoutingLogic()

  val router = {
    val marketsForSecurities = securities.foreach { security =>
      val marketForSecurity = context.actorOf(Props[DoubleAuctionMarket](
        new DoubleAuctionMarket(security) with BasicMatchingEngine), security.name
      )
      orderRoutingLogic.addMarket(security, marketForSecurity)
    }
    Router(orderRoutingLogic)
  }
In the snippet above I inject a BasicMatchingEngine into the DoubleAuctionMarket. However I have written a number of different matching engines and I would like to be able to configure the type of matching engine injected into DoubleAuctionMarket in the application configuration file.
Can this level of application configuration be done using typesafe Config and HOCON configuration files?
Interesting case. If I understood you right, you want to configure the Market actor by mixing in some MatchingEngine type specified in config?
Some clarification: you can't simply mix in a dynamic type. If you move the MatchingEngine type to config, it will only be known at runtime, when the config is parsed, and at that point you won't be able to instantiate new DoubleAuctionMarket(security) with ???SomeClassInstance???. But maybe you could replace inheritance with aggregation: an instance of MatchingEngine could be passed to the Market as a parameter.
Now, how do you obtain an instance of MatchingEngine from config? In short, Typesafe Config has no parser for FQCN properties, but it's not hard to do it yourself using reflection. This technique is used in many places in Akka. Look here first: the provider property is set as an FQCN string and can be changed to another provider (e.g. RemoteActorRefProvider) in other configurations. Now look at how it's processed to obtain the Provider instance: first it's read as a plain string here, then ProviderClass is used to instantiate the actual (runtime) provider here. DynamicAccess is a utility that helps with these reflective calls. It isn't exposed on a plain ActorSystem, but you can reach it by casting to ExtendedActorSystem (or instantiate one yourself); I don't think that's a big issue.
With some modifications, your code might look like this:
class Exchange extends Actor {
  val orderRoutingLogic = new OrderRoutingLogic()

  val matchingEngineClass =
    context.system.settings.config.getString("stocks.matching-engine")

  // dynamicAccess lives on ExtendedActorSystem; createInstanceFor returns a Try
  val matchingEngine = context.system.asInstanceOf[ExtendedActorSystem]
    .dynamicAccess.createInstanceFor[MatchingEngine](matchingEngineClass, Nil).get

  val router = {
    securities.foreach { security =>
      val marketForSecurity = context.actorOf(DoubleAuctionMarket.props(security, matchingEngine))
      orderRoutingLogic.addMarket(security, marketForSecurity)
    }
    Router(orderRoutingLogic)
  }
I've moved props to the companion object of DoubleAuctionMarket, as described in the Recommended Practices section of the Akka docs. Using Props(new Actor()) inside another actor is a dangerous practice.
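For reference, a sketch of what that companion object might look like, assuming the market actor now takes the matching engine as a constructor parameter (as suggested above); Security and MatchingEngine are the question's own domain types, and the FQCN value is illustrative:

import akka.actor.Props

object DoubleAuctionMarket {
  // Props factory in the companion object, per the Akka recommended practices;
  // avoids closing over another actor's state
  def props(security: Security, matchingEngine: MatchingEngine): Props =
    Props(new DoubleAuctionMarket(security, matchingEngine))
}

// application.conf would then carry the engine's FQCN, e.g.:
// stocks.matching-engine = "com.example.BasicMatchingEngine"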

Where does Boot.scala fit in?

From time to time I see projects that have a boot.scala or Boot.scala file in them. Though this does not appear to be a hard and fast Scala rule, it does appear to be folklore of sorts to include a Boot.scala file in some projects. This specific project uses Akka and Spray, which translates to the actor pattern and REST services.
Will someone please explain what type of functionality can normally be expected in such a file and if this is a common pattern of sorts?
As an extension to this question (please answer the first bit first :-), I would be grateful to know how to read this code, which is in a project with multiple Boot.scala files.
Boot.scala in web package:
trait Web {
  this: Api with Core =>
  ....
}
Boot.scala in api package:
trait Api {
  this: Core =>
  ....
}
Boot.scala in core package:
trait Core {
  implicit def actorSystem: ActorSystem
  implicit val timeout = Timeout(30000)

  val application = actorSystem.actorOf(
    props = Props[ApplicationActor],
    name = "application"
  )

  Await.ready(application ? Start(), timeout.duration)
}
One can gather that one package depends on the other, and that Boot.scala files may be a common sight in actor-based systems, but how does one 'read' the relationships? For example, how would I read trait Web { this: Api with Core => ... } in English?
In the particular instance, the starting point of the application lies in a main file:
object Main extends App {
  implicit val system = ActorSystem("RESTService")

  class Application(val actorSystem: ActorSystem) extends Core with Api with Web {
  }

  new Application(system)

  sys.addShutdownHook {
    system.shutdown()
  }
}
I realize my questions may seem trivial to some, but I'm trying to get into the Scala tribe here, and the 'secret password' is not in any manual.
I don't know whether Boot.scala is a common pattern, but it is used in Lift and contains the main configuration of the application. Since Lift is rather old (relative to other Scala projects) it might have set a convention.
For the other question you should read up on the cake pattern and self types. For example
trait Web {
  this: Api with Core =>
  ....
}
defines a trait Web that can only be mixed into classes or traits that inherit from Api and Core or have those as part of their self types. In English: "Web requires that whatever it is mixed into is also an Api and a Core." Thus
class Application(val actorSystem: ActorSystem) extends Core with Api with Web {
}
would not type check if the Api trait wasn't mixed in, because it is required by the Web trait.
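To make that reading concrete, here is a stripped-down sketch (stand-in types, not the project's actual code) showing what the self type allows and forbids:

// Minimal stand-ins for the real traits
trait Core { def actorSystem: String }
trait Api  { this: Core => def api = s"api using $actorSystem" }
trait Web  { this: Api with Core => def web = s"web over $api" }

// Compiles: Api and Core are both mixed in, satisfying Web's self type
class Application extends Core with Api with Web {
  def actorSystem = "RESTService"
}

// Would NOT compile: Web's self type also requires Api
// class Broken extends Core with Web { def actorSystem = "RESTService" }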