Using traits for mixing core libraries in Scala

I'm working on a multi-project SBT build in Scala. I extracted the core pieces into a separate SBT project: config handling, third-party library setup (initializing the RMQ client, the Redis client, etc.) and some shared models.
I organized things like config loading into traits. For example, I mix a Configuration trait in wherever I need it and just use the config method it defines, which loads the config for a specific environment (based on an environment variable). I did the same for the database: the trait loads the PostgreSQL driver and opens a connection, and wherever I mix it in I can use its database method to execute queries and so on.
Is this a good approach in your opinion? The advantage is that I do not have to handle database connections and initialization in each project, and the code is much shorter. However, there is one issue with closing the connection: where should the connection be closed in a class where Database is mixed in?
Any help on the topic is appreciated. Thanks
Amer

Regarding connections, you should be closing them (returning them to the pool) whenever you are done with them; that is orthogonal to the mix-in implementation.
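One common way to keep that manageable is a loan-pattern helper on the Database trait itself, so code that mixes it in never calls close directly. A minimal sketch, assuming plain JDBC connections (getConnection stands in for however you actually obtain one, e.g. from a pool):

import java.sql.Connection

trait Database {
  // However you obtain connections: a pool, DriverManager, etc. (assumed)
  def getConnection(): Connection

  // Loan pattern: the connection is always closed (returned to the pool),
  // even if the body throws.
  def withConnection[A](body: Connection => A): A = {
    val conn = getConnection()
    try body(conn)
    finally conn.close()
  }
}

Callers then write withConnection { conn => /* run queries */ } and never have to decide where to close.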
As for configuration and such, I like this approach, but the problem with it is that most of the time you want things like a loaded config to be singletons. If you simply do something like
trait Configuration {
  val config = loadConfig
}

class Foo extends Configuration
class Bar extends Configuration

val f1 = new Foo
val f2 = new Foo
val b1 = new Bar
val b2 = new Bar
then, you'll end up loading four different copies of the config.
One way around this is to delegate loadConfig to a singleton object:
object Configuration {
  val config = loadConfig
}

trait Configuration {
  def config = Configuration.config
}
This works, but it makes the functionality much harder to unit-test and override (what if I sometimes want my config loaded from a database?).
Another possibility is a proxy class:
trait Configuration {
  def loadConfig: Config
  lazy val config: Config = loadConfig
}

class ConfigurationProxy(cfg: Configuration) extends Configuration {
  def loadConfig = cfg.config
}

object Main extends App with Configuration {
  def loadConfig = ??? // executed only once per application
  // ...
}

class Foo extends ConfigurationProxy(Main)
class Bar extends ConfigurationProxy(Main)

val f1 = new Foo
val f2 = new Foo
val b1 = new Bar
val b2 = new Bar
Now, all four variables are looking at the same Config instance.
But if you have a function somewhere that wants a Configuration, you can still pass any of these in:
def connectToDB(cfg: Configuration) = ???
connectToDB(Main)
connectToDB(f1)
connectToDB(b2)
etc.
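It also means a test can substitute its own configuration without touching Main, because loadConfig is abstract in the trait. A minimal sketch (how the canned Config is built is up to you):

// In a test: supply a canned Config instead of Main's.
val testConfiguration = new Configuration {
  def loadConfig = ??? // e.g. build a Config from a hard-coded string
}

connectToDB(testConfiguration)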

Related

Scala: how to avoid passing the same object instance everywhere in the code

I have a complex project which reads configurations from a DB through the object ConfigAccessor, which implements two basic APIs: getConfig(name: String) and storeConfig(c: Config).
Due to how the project is currently designed, almost every component needs to use the ConfigAccessor to talk to the DB. Since this component is an object, it is easy to just import it and call its static methods.
Now I am trying to build some unit tests for the project, with the configurations stored in an in-memory HashMap. So, first of all, I decoupled the config accessor logic from its storage (using the cake pattern). This way I can define my own ConfigDbComponent while testing:
class ConfigAccessor {
  this: ConfigDbComponent =>
  // ...
}
The "problem" is that now ConfigAccessor is a class, which means I have to instantiate it at the beginning of my application and pass it everywhere to whoever needs it. The first way I can think of for passing this instance around would be through other components constructors. This would become quite verbose (adding a parameter to every constructor in the project).
What do you suggest me to do? Is there a way to use some design pattern to overcome this verbosity or some external mocking library would be more suitable for this?
Yes, the "right" way is passing it in constructors. You can reduce verbosity by providing a default argument:
class Foo(config: ConfigAccessor = ConfigAccessor) { ... }
There are some "dependency injection" frameworks, like Guice or Spring, built around this, but I won't go there, because I am not a fan.
You could also continue utilizing the cake pattern:
trait Configuration {
  def config: ConfigAccessor
}

trait Foo { self: Configuration => /* ... */ }

class FooProd extends Foo with ProdConfig
class FooTest extends Foo with TestConfig
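For completeness, ProdConfig and TestConfig here are just two implementations of the Configuration trait. A sketch, assuming the two ConfigDbComponent implementations from your cake-pattern refactoring (the component names are illustrative):

trait ProdConfig extends Configuration {
  // Wire the accessor to the real DB component
  lazy val config = new ConfigAccessor with ProdConfigDbComponent
}

trait TestConfig extends Configuration {
  // Wire the accessor to the in-memory component
  lazy val config = new ConfigAccessor with InMemoryConfigDbComponent
}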
Alternatively, use a "static setter". It minimizes changes to existing code, but requires mutable state, which is really frowned upon in Scala:
object Config extends ConfigAccessor {
  @volatile private var accessor: ConfigAccessor = _

  def configurate(cfg: ConfigAccessor) = synchronized {
    val old = accessor
    accessor = cfg
    old
  }

  def getConfig(c: String) = Option(accessor).fold(
    throw new IllegalStateException("Not configurated!")
  )(_.getConfig(c))
}
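Wiring is then a single call at startup, and a test suite swaps the accessor in its setup. A sketch (both accessor instances are assumptions):

// At application startup (prodAccessor is an assumption):
Config.configurate(prodAccessor)

// In a test's setup, before exercising the code under test:
Config.configurate(inMemoryAccessor)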
You can retain a global ConfigAccessor and allow selectable accessors like this:
object ConfigAccessor {
  private lazy val accessor = GetConfigAccessor()

  def getConfig(name: String) = accessor.getConfig(name)

  // ...
}
For production builds you can put logic in GetConfigAccessor to select the appropriate accessor based on some global configuration, such as Typesafe Config.
For unit testing you can have a different version of GetConfigAccessor for different test builds, which returns the appropriate test implementation.
Making this value lazy lets you control the order of initialisation and, if necessary, do some non-functional mutable setup in the initialisation code before creating the components.
Update following comments
The production code would have an implementation of GetConfigAccessor something like this:
object GetConfigAccessor {
  private val useAws = System.getProperty("accessor.aws") == "true"

  def apply(): ConfigAccessor =
    if (useAws) new AwsConfigAccessor
    else new PostgresConfigAccessor
}
Both AwsConfigAccessor and PostgresConfigAccessor would have their own unit tests to prove that they conform to the correct behaviour. The appropriate accessor can be selected at runtime by setting the appropriate system property.
For unit testing there would be a simpler implementation of GetConfigAccessor, something like this:
def GetConfigAccessor() = new MockConfigAccessor
Unit testing is done within a unit testing framework which contains a number of libraries and mock objects that are not part of the production code. These are built separately and are not compiled into the final product. So this version of GetConfigAccessor would be part of that unit testing code and would not be part of the final product.
Having said all that, I would only use this model for reading static configuration data because that keeps the code functional. The ConfigAccessor is just a convenient way to access global constants without having them passed down in the constructor.
If you are also writing data then this is more like a real DB than a configuration. In that case I would create custom accessors for each component that give access to different parts of the DB. That way it is clear which parts of the data are updated by each component. These accessors would be passed down to the component and can then be unit tested with the appropriate mock implementation as normal.
You may need to partition your data into static config and dynamic config and handle them separately.
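A per-component accessor might look like this. A sketch with illustrative names:

case class User(id: String, name: String)

// Each component gets an accessor for only the slice of the DB it owns.
trait UserStore {
  def getUser(id: String): Option[User]
  def saveUser(u: User): Unit
}

class UserService(store: UserStore) {
  def rename(id: String, newName: String): Unit =
    store.getUser(id).foreach(u => store.saveUser(u.copy(name = newName)))
}

// In unit tests, pass an in-memory implementation:
class InMemoryUserStore extends UserStore {
  private val users = scala.collection.mutable.Map.empty[String, User]
  def getUser(id: String): Option[User] = users.get(id)
  def saveUser(u: User): Unit = users.update(u.id, u)
}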

Scala and Slick: DatabaseConfigProvider in standalone application

I have a Play 2.5.3 application which uses Slick for reading objects from the DB.
The service classes are built in the following way:
class SomeModelRepo @Inject()(protected val dbConfigProvider: DatabaseConfigProvider) {
  val dbConfig = dbConfigProvider.get[JdbcProfile]
  import dbConfig.driver.api._

  val db = dbConfig.db
  // ...
}
Now I need some standalone Scala scripts to perform operations in the background. I need to connect to the DB from them, and I would like to reuse my existing service classes to read objects from the DB.
To instantiate a SomeModelRepo object I need to pass some DatabaseConfigProvider as a parameter. I tried to run:
object SomeParser extends App {
  object testDbProvider extends DatabaseConfigProvider {
    def get[P <: BasicProfile]: DatabaseConfig[P] = {
      DatabaseConfigProvider.get("default")(Play.current)
    }
  }

  // ...
  val someRepo = new SomeModelRepo(testDbProvider)
}
However, I get the error "There is no started application" on the line with (Play.current). Moreover, the method current in object Play is deprecated and should be replaced with DI.
Is there any way to initialize my SomeModelRepo object within the standalone SomeParser?
Best regards
When you start your Play application, the PlaySlick module handles the Slick configurations for you. With it you have two choices:
inject DatabaseConfigProvider and get the driver from there, or
do a global lookup via DatabaseConfigProvider.get[JdbcProfile](Play.current), which is not preferred.
Either way, you must have your Play app running! Since this is not the case with your standalone scripts you get the error: "There is no started application".
So, you will have to use Slick's default approach, by instantiating db directly from config:
val db = Database.forConfig("default")
You have lots of examples in Lightbend's templates.
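Put together, a standalone script looks roughly like this. A sketch for the Slick 3.1.x that ships with Play 2.5, assuming a "default" database block in your configuration:

import slick.driver.PostgresDriver.api._

object SomeParser extends App {
  // Reads the "default" block from application.conf
  val db = Database.forConfig("default")
  try {
    // ... run queries via db.run(someAction) ...
  } finally db.close()
}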
EDIT: Sorry, I didn't read the whole question. Do you really need to have it as another application? You can run your background operations when your app starts, like here. In this example, the InitialData class is instantiated as an eager singleton, so its insert() method runs immediately when the app starts.
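For reference, the eager-singleton wiring from that kind of example is done in a Guice module, roughly like this (a sketch; InitialData is the class from the linked template):

import com.google.inject.AbstractModule

class OnStartup extends AbstractModule {
  override def configure(): Unit =
    // Instantiated as soon as the application starts, which runs
    // InitialData's constructor and hence its insert().
    bind(classOf[InitialData]).asEagerSingleton()
}

The module is then enabled via play.modules.enabled in application.conf.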

akka-http with multiple route configurations

Quick Background
I am running through some examples while learning the Akka HTTP stack for a new REST project (completely non-UI). I have been using and augmenting the Akka HTTP Microservice example to work through a bunch of use cases and configurations, and I have been pleasantly surprised by how well Scala and Akka HTTP work.
Current Setup
Currently I have a configuration like this:
object AkkaHttpMicroservice extends App with Service {
  override implicit val system = ActorSystem()
  override implicit val executor = system.dispatcher
  override implicit val materializer = ActorMaterializer()

  override val config = ConfigFactory.load()
  override val logger = Logging(system, getClass)

  Http().bindAndHandle(routes, config.getString("http.interface"), config.getInt("http.port"))
}
The routes parameter is just a value containing the usual routing definitions built with path, pathPrefix, etc.
The Problem
Is there any way to set up routing in multiple Scala files, or an example somewhere out there?
I would really like to define a set of classes that separate the concerns and handle actor setup and processing for different areas of the application, leaving just the marshalling to the root App extension.
This might be me thinking too much in terms of how I did things in Java, using annotations like @javax.ws.rs.Path("/whatever") on my classes. If that is the case, please feel free to point out the change in mindset.
I tried searching with a few different sets of keywords, but I believe I am asking the wrong question (e.g., 1, 2).
Problem 1 - combine routes in multiple files
You can combine routes from multiple files quite easily.
FooRouter.scala
import akka.http.scaladsl.model.StatusCodes.OK
import akka.http.scaladsl.server.Directives._

object FooRouter {
  val route = path("foo") {
    complete {
      OK -> "foo"
    }
  }
}
BarRouter.scala
import akka.http.scaladsl.model.StatusCodes.OK
import akka.http.scaladsl.server.Directives._

object BarRouter {
  val route = path("bar") {
    complete {
      OK -> "bar"
    }
  }
}
MainRouter.scala
import akka.http.scaladsl.server.Directives._
// ... other imports; FooRouter and BarRouter must be visible here
// (same package, or imported)

object MainRouter {
  val routes = FooRouter.route ~ BarRouter.route
}

object AkkaHttpMicroservice extends App with Service {
  // ...
  Http().bindAndHandle(MainRouter.routes, config.getString("http.interface"), config.getInt("http.port"))
}
Here you have some docs:
http://doc.akka.io/docs/akka-http/current/scala/http/routing-dsl/overview.html
http://doc.akka.io/docs/akka-http/current/scala/http/routing-dsl/routes.html
Problem 2 - separate routing, marshalling, etc.
Yes, you can separate routing, marshalling and application logic. Here is an Activator example: https://github.com/theiterators/reactive-microservices
Problem 3 - handle routes using annotations
I don't know of any lib that lets you define routing with annotations in akka-http. Try to learn more about DSL routing; it is a different approach to HTTP routing, but it is a convenient tool too.
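As a concrete change of mindset: where JAX-RS puts @Path("/whatever") on a class, in akka-http you give each router object its own pathPrefix and combine the results as shown above. A sketch:

import akka.http.scaladsl.server.Directives._

object WhateverRouter {
  // Plays the role of @Path("/whatever") on a JAX-RS resource class
  val route = pathPrefix("whatever") {
    path("items") {
      get {
        complete("items")
      }
    }
  }
}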

How to inject different Slick datasource for testing with Play controller objects?

I have two different kinds of slick.driver.PostgresDriver.simple.Database vals, e.g.

implicit val db: slick.driver.PostgresDriver.simple.Database = ProdDataSource.db

and

implicit val testdb: slick.driver.PostgresDriver.simple.Database = TestDataSource.db

The only difference is the database they point to: testdb points to a test DB, which all the tests use.
I have parameterized all APIs to accept an (implicit db: Database), e.g.

def save(emp: Employee)(implicit db: Database): Employee = db.withSession { implicit session => ... }

The reason for having Database as an implicit param is so that testing becomes easier and tests can pass in the test Database.
Now, when the above save def is called from a Play controller, the controller has to define an implicit val db = .... Given that Play controllers are objects, what's a better way to parameterize controllers (e.g., if they were classes we could pass a class param) so that they can be tested with the appropriate Database?
The current controller looks as below, which isn't testable given that its implicit val db uses the prod DB; for tests I need to inject the implicit val testdb.
object MyController extends Controller {
  implicit val db = ProdDataSource.db
  // ...
}
I don't want to use the cake pattern, which really makes the code hard to work with and ugly. What's the best way to achieve this?
If the controller were a class you could make the database an implicit constructor argument, but I doubt that you can tell Play to pass it into the controller that way. So either use mixin composition to have different controller instances for test and production, or use the Play config to swap out the one provided by Play for the two scenarios.
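Mixin composition could look roughly like this. A sketch with illustrative trait names, reusing the implicit db your APIs already expect:

import play.api.mvc._
import slick.driver.PostgresDriver.simple.Database

trait DbProvider {
  implicit def db: Database
}

trait MyControllerBase extends Controller { self: DbProvider =>
  def create = Action {
    // save(...) and friends resolve the implicit db from the provider
    Ok
  }
}

// Production: the object your routes point at.
object MyController extends MyControllerBase with DbProvider {
  implicit val db = ProdDataSource.db
}

// Tests: same logic, test datasource.
// val controller = new MyControllerBase with DbProvider {
//   implicit val db = TestDataSource.db
// }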

Scala Compiler Plugin Deconstruction

I've been trying to write a Scala (2.10.0) compiler plugin that analyzes some parts of the traversed code.
This is what I originally had:
class MyPlugin(val global: Global) extends Plugin {
  import global._

  val name = "myPlugin"
  val components = List[PluginComponent](MyComponent)

  private object MyComponent extends PluginComponent {
    val global: MyPlugin.this.global.type = MyPlugin.this.global
    val runsAfter = List("refchecks")
    val phaseName = "codeAnalysis"

    def newPhase(_prev: Phase) = new AnalysisPhase(_prev)

    class AnalysisPhase(prev: Phase) extends StdPhase(prev) {
      override def name = phaseName

      def apply(unit: CompilationUnit) {
        codeTraverser traverse unit.body
        // counters, out and printLinesToFile are helpers defined elsewhere
        printLinesToFile(counters.map { case (k, v) => k + "\t" + v }, out)
      }

      def codeTraverser = new ForeachTreeTraverser(tree => /* Analyze tree */ ())
    }
  }
}
This code works as expected; however, I don't like it because I cannot decouple the code-traverser logic from this object. I would like to write a separate CodeTraverser class that performs the analysis on a given Tree. Among other things, this would help me test the code better.
The main problem is that unit.body has the internal Tree type from scala.reflect.internal.Trees. If I could work with scala.reflect.api.Trees#Tree instead of the internal version, I could decouple the traverser functionality and even test it quite easily.
I've tried to find a way to convert between the two, but to no avail. Is it even possible? From looking at their source code, many things look too similar for this to be impossible.
You're probably struggling with the cake pattern that the compiler is implemented with, and the heavy path-dependency that comes with it. I went through this some time ago when I was writing a really beefy macro and wanted to refactor a bunch of functions out of the macro implementation into a separate utility class. I found this to be quite an annoying issue.
Here's how I would implement your Traverser in a separate class:
class MyPluginUtils[G <: Global with Singleton](val global: G) {
  import global._

  class AnalyzingTraverser extends ForeachTreeTraverser(tree => () /* analyze */)
}
Now, inside your plugin you have to use it like this:
val utils = new MyPluginUtils[global.type](global)
import utils.{global => _, _}
val traverser = new AnalyzingTraverser
As you can see, it's not the most intuitive thing in the world (i.e. this is confusing as hell), but this is the best that I could come up with that actually worked, and I tried a lot of things before finally settling on this one. I would be really happy to see some nicer way to do this.
AFAIK, such extensibility is one of the general problems with the cake pattern (as used in scalac implementation). I've seen other people also complain about this.
The component (the SubComponent or PluginComponent) must be created with the global member initialized early (that is, as an early definition).
Don't forget to review the one-question FAQ. I may go set Google Calendar to remind me to do that every Monday morning.
For an example, see the continuations plugin.
The component is defined with a utility class mixed in.
The utility class follows the usual cake recipe. (Leave global as an abstract dependency and let the compiler ensure that everything is mixed correctly.)
Here is a recent edit showing more early definitions, as a demonstration that this usage is not anomalous.
val anfPhase = new {
  val global = SelectiveCPSPlugin.this.global
  val cpsEnabled = pluginEnabled
  override val enabled = cpsEnabled
} with SelectiveANFTransform {
  val runsAfter = List("pickler")
}
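Applied to the question, the utility trait leaves global abstract and the component initializes it early when mixing the trait in. A sketch:

import scala.tools.nsc.Global
import scala.tools.nsc.plugins.PluginComponent

// The utility in its own file, cake-style: global stays abstract.
trait MyPluginUtils {
  val global: Global
  import global._

  class AnalyzingTraverser extends ForeachTreeTraverser(tree => () /* analyze */)
}

// Inside MyPlugin (as in the question): initialize global early,
// then mix the utility in alongside the component.
private object MyComponent extends {
  val global: MyPlugin.this.global.type = MyPlugin.this.global
} with PluginComponent with MyPluginUtils {
  // ... runsAfter, phaseName, newPhase and AnalysisPhase as before,
  // with the traverser now coming from MyPluginUtils:
  // def codeTraverser = new AnalyzingTraverser
}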
(In future, they plan to deprecate early definitions in favor of parameterized traits when they are available in the language.)
More generally, global, i.e. "the compiler", is routinely instantiated for testing the compiler itself. I haven't seen it mocked, but computeInternalPhases is the template method for picking which phases are assembled, modulo plugins.
There is a current effort to reduce internal dependencies, for the purpose of testing, as a window onto the difficulties involved.