I'm currently baking my first cake pattern, so please bear with me.
I took my working monolithic app and cut it into functional layers. The cut looks clean, but it left two of the layers depending on an implicit ActorSystem.
I tried to solve this dependency like this:
trait LayerA {
  this: ActorSystemProvider =>

  private implicit val implicitActorSystem: ActorSystem = actorSystem
  import implicitActorSystem.dispatcher // implicit execution context
  ...
}
... and similarly for LayerX
My assembly class looks like:
class Assembly extends LayerA with LayerB with LayerX with ActorSystemProvider
where ActorSystemProvider simply instantiates the actor system.
This does not work, given that the ActorSystem does not yet exist when the dependencies are resolved and the vals are initialized, resulting in an NPE. It also looks really ugly, and I'm sure there has to be a nicer/easier way to deal with it.
How should I deal with shared implicit dependencies among layers when using the cake pattern, like ActorSystem in this case?
Thanks
Self types are not a requirement for building a caked architecture; I actually use self types only when a trait is a component of a layer. So when I need to bring an implicit into scope (for example an ActorRefFactory for Spray Client), I just mix in a trait:
trait ActorSystemProvider {
  implicit def actorSystem: ActorSystem
}
And at the lowest layer (the so-called "end of the world") I have the following code structure:
trait ServiceStack
  extends SomeModule
  with SomeModule2
  with SomeModule3
  with ActorSystemProvider

object ServiceLauncher extends App with ServiceStack {
  val actorSystem = ActorSystem("ServiceName")
}
It's an oversimplified example (if you want a great example of a real system built on top of the Cake Pattern, you should definitely take a look at the Precog system, an example of how different modules/layers connect), but now you can mix in the implicit ActorSystem wherever you need it.
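For example, one of the modules could use it like this (the body of SomeModule is hypothetical, just to show where the implicit gets picked up):
import akka.actor.ActorSystem

trait SomeModule extends ActorSystemProvider {

  // a made-up helper standing in for a real API (e.g. a Spray client call)
  // that takes an implicit ActorSystem
  private def logStartup(msg: String)(implicit system: ActorSystem): Unit =
    system.log.info(msg)

  def start(): Unit = logStartup("module started") // actorSystem is supplied implicitly
}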
If you can initialize the vals lazily rather than eagerly, you can make implicitActorSystem a lazy val instead of a val, so it only gets evaluated the first time it is accessed. That should solve the NPE.
(Another little-known interesting fact posted by @ViktorKlang, FYI: if the initialization of a lazy val throws an exception, it will attempt to reinitialize the val at the next access.)
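Applied to the trait from the question, that would look roughly like this (same names as above, body elided):
import akka.actor.ActorSystem

trait LayerA { this: ActorSystemProvider =>
  // lazy: actorSystem is only dereferenced on first use, by which time
  // the Assembly has constructed the ActorSystem
  private implicit lazy val implicitActorSystem: ActorSystem = actorSystem
  import implicitActorSystem.dispatcher // implicit execution context
  ...
}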
Another way would be to make each of your methods that needs an execution context accept an implicit ExecutionContext, like:
trait LayerA {
  def getUser(id: Int)(implicit ec: ExecutionContext) = {
    ...
  }
}
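At the call site you then provide the ExecutionContext once, implicitly. A minimal illustration with a made-up Main object, assuming this version of LayerA has its method bodies filled in:
import akka.actor.ActorSystem
import scala.concurrent.ExecutionContext

object Main extends App with LayerA {
  val system = ActorSystem("app")
  implicit val ec: ExecutionContext = system.dispatcher

  getUser(42) // ec fills getUser's second parameter list implicitly
}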
Related
Consider a case class:
case class configuredThing[A, B](param: String) {
  val ...
  def ...
}
I was able to partially write test code for configuredThing, which has some methods that make external calls to other services.
This case class is used elsewhere:
object actionObject {
  private lazy val thingA = configuredThing[A, B]("A")
  private lazy val thingB = configuredThing[C, D]("B")
  def ...
}
Here the types A, B, C, and D are actually specified; they are not native Scala types but are defined in a third-party package we are leveraging to interface with some external services.
In trying to write tests for this object, the requirement is not to make the external calls, so as to test the logic in actionObject in isolation. This led to looking into how to mock out configuredThing within actionObject so assertions can be made on the different interactions with the configuredThing instances. However, it is unclear how to do this.
Looking at the ScalaMock documentation for ScalaTest, it seems this would need to be done with the Generated mocks system, which the docs say "rely on the ScalaMock compiler plugin". However, that plugin seems not to have been released since Scala 2.9.2, see here.
So, my question is this: how can this be tested?
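One option (not from the answers above, just a sketch reusing the question's names) is to sidestep mocking the case class and instead inject the configuredThing instances, so tests can substitute test doubles:
// the logic moves into a class that receives its collaborators;
// A, B, C, D are the same third-party types as in the question
class ActionService(thingA: configuredThing[A, B], thingB: configuredThing[C, D]) {
  // def ... the methods that previously lived on actionObject
}

// production wiring keeps the old entry point
object actionObject extends ActionService(configuredThing[A, B]("A"), configuredThing[C, D]("B"))

// tests can construct an ActionService with test doubles in place of the real
// configuredThing instances (for example via a small trait extracted from configuredThing)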
I'm rather new to Scala so this may be a trivial question. I'm trying to put together a simple project in Akka and I'm not sure how to handle a situation where I need to store a reference to an actor of a constrained type.
Let's assume I have an actor trait
trait MyActorTrait extends Actor
Then somewhere else I would like to define another trait with a member
def reference: ActorRef[MyActorTrait]
That obviously doesn't work since ActorRef doesn't care about the target actor's type (or does it?). Is there any way to constrain the reference to only accept references to actors that extend MyActorTrait?
By design, there is no way you can access the underlying Actor through an ActorRef (or, pretty much, any other way). So constraining the type as you describe is pointless; there would be absolutely no difference in what you can do with an ActorRef[Foo] vs. an ActorRef[Bar].
Not saying this is a good thing (few things in Akka can be characterized that way, IMO), but that's just the way it is.
The way you are trying to reference an actor from another actor is not right. Actors form a hierarchy, so each actor has a path you can use to refer to it. There are mainly three ways you could reference another actor:
Absolute Paths
context.actorSelection("/user/serviceA")
Relative Paths
context.actorSelection("../brother") ! msg
Querying the Logical Actor Hierarchy
context.actorSelection("../*") ! msg
You may have a look at the doc - https://doc.akka.io/docs/akka/2.5/general/addressing.html.
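If you need an actual ActorRef rather than an ActorSelection, resolveOne can be used; a small example with a hypothetical actor and path (classic Akka):
import akka.actor.Actor
import akka.util.Timeout
import scala.concurrent.duration._

class LookupActor extends Actor {
  import context.dispatcher
  implicit val resolveTimeout: Timeout = 3.seconds

  def receive: Receive = {
    case "lookup" =>
      // resolveOne completes with the ActorRef behind the path,
      // or fails if nothing matches within the timeout
      context.actorSelection("/user/serviceA").resolveOne().foreach { ref =>
        ref ! "hello"
      }
  }
}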
After some digging around I believe I've found a way.
First I create a trait which will preserve the actor type
trait TypedActorRef[T <: Actor] {
  def reference: ActorRef
  type t = T
}
Then I implement a generic apply that instantiates a private case class extending the trait:
object TypedActorRef {
  private case class Ref[T <: Actor](reference: ActorRef) extends TypedActorRef[T]

  def apply[T <: Actor](actor: T): TypedActorRef[T] = Ref[T](actor.self)
}
With that I can keep a reference that is restricted to an actor of the type I want.
trait OtherActor extends Actor

trait MyActor extends Actor {
  def otherActorsReference: TypedActorRef[OtherActor]
}
This seems OK to me, at least it doesn't upset the compiler, but I'm not sure whether this solution prevents creating other extensions of TypedActorRef that don't respect the same constraints.
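For what it's worth, a usage sketch (MyActorImpl is hypothetical): the wrapper only carries type information, and messages still go through the plain ActorRef inside it.
class MyActorImpl(val otherActorsReference: TypedActorRef[OtherActor]) extends MyActor {
  def receive: Receive = {
    // the type parameter records that the reference was built from an OtherActor,
    // but sending still happens on the untyped underlying reference
    case msg => otherActorsReference.reference ! msg
  }
}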
When using a macro to materialize an implementation of a trait, I'd like to create the implementation within a package so that it has access to other package-private classes.
import scala.language.experimental.macros
import scala.reflect.macros.blackbox

trait MyTrait[T]

object MyTrait {
  implicit def materialize[T]: MyTrait[T] = macro materializeImpl[T]

  def materializeImpl[T: c.WeakTypeTag](c: blackbox.Context): c.Expr[MyTrait[T]] = {
    import c.universe._
    val tt = weakTypeOf[T]
    c.Expr[MyTrait[T]](q"new MyTrait[$tt] {}")
  }
}
Is it possible to materialize new MyTrait[$tt] {} within a particular package?
A macro has to expand into an AST that would compile at the place where the macro call is. Since package declarations are only allowed at the top level, and method calls aren't allowed there, the expanded tree can't create anything in another package.
As Alexey Romanov pointed out, this is not possible directly. Still, if you only call a few methods (and if you are using a macro, that is most probably the case), one possible (but not perfect) workaround is to create a public abstract class or trait that extends the target trait and "publishes" all the required package-private methods as protected proxies. Your macro can then create instances by inheriting from that abstract class rather than from the trait. Obviously this trick effectively "leaks" those methods, but thanks to reflection anyone can call any method if they really want to, and abusing this trick would be as deliberate an effort to circumvent your separation as using reflection.
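A rough sketch of that workaround (all names made up, and assuming MyTrait is visible from the package):
package mypackage {

  // a package-private helper that generated code outside the package could not call
  private[mypackage] object Internals {
    private[mypackage] def secret(): String = "internal"
  }

  // public bridge: extends the target trait and republishes the
  // package-private call as a protected proxy
  abstract class MyTraitBase[T] extends MyTrait[T] {
    protected def secret(): String = Internals.secret()
  }
}

// the macro would then expand to something like
//   new mypackage.MyTraitBase[T] { ... }
// instead of new MyTrait[T] {}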
I'm currently working with a codebase that requires an explicit parameter to have implicit scope for parts of its implementation:
class UsesAkka(system: ActorSystem) {
  implicit val systemImplicit = system
  // code using the implicit ActorSystem ...
}
I have two questions:
Is there a neater way to 'promote' an explicit parameter to implicit scope without affecting the signature of the class?
Is the general recommendation to commit to always importing certain types through implicit parameter lists, like ActorSystem for an Akka application?
Semantically speaking, I feel there's a case where one type's explicit dependency may be another type's implicit dependency, but flipping the implicit switch appears to have a systemic effect on the entire codebase.
Why don't you make systemImplicit private?
class UsesAkka(system: ActorSystem) {
private implicit val systemImplicit = system
// ^^^^^^^
// ...
}
This way, you would not change the signature of UsesAkka.
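To see why that is enough, here is a small sketch with a made-up helper that asks for an implicit ActorSystem, the way many Akka-based APIs do:
import akka.actor.ActorSystem

// hypothetical stand-in for a library call with an implicit ActorSystem parameter
object Scheduler {
  def scheduleCleanup()(implicit system: ActorSystem): Unit =
    system.log.info("cleanup scheduled")
}

class UsesAkka(system: ActorSystem) {
  private implicit val systemImplicit: ActorSystem = system

  // the private implicit satisfies scheduleCleanup's implicit parameter
  // without changing UsesAkka's public signature
  def init(): Unit = Scheduler.scheduleCleanup()
}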
I've written a custom trait which extends Iterator[A] and I'd like to be able to use the methods I've written on an Iterator[A] which is returned from another method. Is this possible?
trait Increment[+A] extends Iterator[A] {
  def foo() = "something"
}

class Bar(source: BufferedSource) {
  // this ain't working
  def getContents(): Increment[+A] = source getLines
}
I'm still trying to get my head around the whole implicits thing and not having much luck writing a method in the Bar object definition. How would I go about wrapping such an item to work the way I'd like above?
Figured it out. Took me a few tries to understand:
object Increment {
  import scala.language.implicitConversions

  implicit def convert(input: Iterator[String]): Increment[String] = new Increment[String] {
    def next(): String = input.next()
    def hasNext: Boolean = input.hasNext
  }
}
and I'm done. So amazingly short.
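Because the conversion lives in Increment's companion object, it is found through the implicit scope of the expected type, so Bar can now be written as originally hoped (sketch):
import scala.io.BufferedSource

class Bar(source: BufferedSource) {
  // the Iterator[String] from getLines is converted to Increment[String] by Increment.convert
  def getContents(): Increment[String] = source.getLines()
}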
I don't think this is possible without playing tricks. Mixin inheritance happens at compile time, when it can be type-checked statically, and it is always targeted at another class, trait, etc. Here you are trying to tack a trait onto an existing object "on the fly" at runtime.
There are workarounds like implicit conversions, or maybe proxies. Probably the "cleanest" way would be to make Increment a wrapper class delegating to an underlying Iterator. Depending on your use case there might be other solutions.
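The wrapper alternative mentioned above would look something like this (a sketch):
// a thin wrapper delegating to the underlying Iterator
class IncrementWrapper[A](underlying: Iterator[A]) extends Increment[A] {
  def next(): A = underlying.next()
  def hasNext: Boolean = underlying.hasNext
}

// Bar then wraps explicitly instead of relying on an implicit conversion:
//   def getContents(): Increment[String] = new IncrementWrapper(source.getLines())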