Cannot mock the RedisClient class - method pipeline overrides nothing - scala

I have a CacheService class that uses an instance of the scala-redis library
class CacheService(redisClient: RedisClient) extends HealthCheck {
  private val client = redisClient

  override def health: Future[ServiceHealth] = {
    client.info
    ...
  }
}
In my unit test, I'm mocking the client instance and testing the service
class CacheServiceSpec extends AsyncFlatSpec with AsyncMockFactory {
  val clientMock = mock[RedisClient]
  val service = new CacheService(clientMock)

  "A cache service" must "return a successful future when healthy" in {
    (clientMock.info _).expects().returns(Option("blah"))
    service.health map { health =>
      assert(health.status == ServiceStatus.Running)
    }
  }
}
yet I'm getting this compilation error
Error:(10, 24) method pipeline overrides nothing.
Note: the super classes of <$anon: com.redis.RedisClient> contain the following, non final members named pipeline:
def pipeline(f: PipelineClient => Any): Option[List[Any]]
val clientMock = mock[RedisClient]
My research so far indicates ScalaMock 4 is NOT capable of mocking companion objects. The author suggests refactoring the code with Dependency Injection.
Am I doing DI correctly (I chose constructor args injection since our codebase is still relatively small and straightforward)? Seems like the author is suggesting putting a wrapper over the client instance. If so, I'm looking for an idiomatic approach.
Should I bother with swapping out for another redis library? The libraries being actively maintained, per redis.io's suggestion, use companion objects as well. I personally think this is not a problem with these libraries.
I'd appreciate any further recommendations. My goal here is to create a health check for our external services (redis, postgres database, emailing and more) that is at least testable. Criticism is welcomed since I'm still new to the Scala ecosystem.

Am I doing DI correctly (I chose constructor args injection since our
codebase is still relatively small and straightforward)? Seems like
the author is suggesting putting a wrapper over the client instance.
If so, I'm looking for an idiomatic approach.
Yes, you are right, and this seems to be a known issue (link1). Ideally, there needs to be a wrapper around the client instance. One approach could be to create a trait with a method, say connect, have your RedisCacheDao extend it, and implement connect to return the client instance whenever you need it. Then all you have to do is mock this connection interface and you will be able to test.
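A minimal sketch of that wrapper idea, assuming scala-redis's info returns Option[String]; the RedisConnection trait, the Down status, and the way health is derived are hypothetical, just for illustration:
import scala.concurrent.{ExecutionContext, Future}
import com.redis.RedisClient

trait RedisConnection {
  def info: Option[String]
}

class LiveRedisConnection(client: RedisClient) extends RedisConnection {
  override def info: Option[String] = client.info
}

// CacheService now depends only on the small trait, which ScalaMock can mock freely.
class CacheService(connection: RedisConnection)(implicit ec: ExecutionContext) extends HealthCheck {
  override def health: Future[ServiceHealth] = Future {
    connection.info match {
      case Some(_) => ServiceHealth(ServiceStatus.Running) // hypothetical constructor
      case None    => ServiceHealth(ServiceStatus.Down)    // hypothetical status
    }
  }
}
The test then mocks the trait instead of the concrete client: val connMock = mock[RedisConnection]; (connMock.info _).expects().returns(Some("ok")).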
Another approach could be to use embedded Redis for unit testing, though usually it is used for integration testing. You can start a simple Redis server from code where the tests are running and close it once the testing is done.
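For illustration, a sketch of the embedded approach using the embedded-redis library (assuming redis.embedded.RedisServer with start/stop; the spec name and port are arbitrary):
import com.redis.RedisClient
import org.scalatest.{AsyncFlatSpec, BeforeAndAfterAll}
import redis.embedded.RedisServer

class CacheServiceEmbeddedSpec extends AsyncFlatSpec with BeforeAndAfterAll {
  private val redisServer = new RedisServer(6380) // throwaway test port

  override def beforeAll(): Unit = redisServer.start()
  override def afterAll(): Unit = redisServer.stop()

  "A cache service" must "report healthy against the embedded server" in {
    val service = new CacheService(new RedisClient("localhost", 6380))
    service.health map { health => assert(health.status == ServiceStatus.Running) }
  }
}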
Should I bother with swapping out for another redis library? The
libraries being actively maintained, per redis.io's suggestion, use
companion objects as well. I personally think this is is not a problem
of these libraries.
You can certainly do that. I would prefer Jedis as it is easy to use and its performance is better than scala-redis (when performing mget).
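For reference, basic Jedis usage from Scala looks roughly like this (assuming redis.clients.jedis.Jedis on the classpath; host, port and key are just examples):
import redis.clients.jedis.Jedis

val jedis = new Jedis("localhost", 6379)
jedis.set("health-check", "ok")    // plain SET
println(jedis.get("health-check")) // prints "ok"
jedis.close()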
Let me know if it helps!!

Related

Idiomatic functional way to make in memory repository for tests in Scala

I have 2 questions.
I'm making a backend using Akka Typed and wanted to write some tests. Simple approach, no dependency injection, auto-wiring, etc. So I have a trait:
trait Repository {
  def create(h: Model): Future[Int]
  def get(id: Long): Future[Model]
}
So 2 classes extend the trait: DatabaseRepository and InMemoryRepository.
InMemoryRepository should be used for tests. The simplest solution is to create a mutable.Map member for storing entities and update it on each create operation. However, that is mutating state. I know that these are tests, but even in tests there might be a need to create entities concurrently.
The other, maybe more functional, approach is to make the create method return a tuple (InMemoryRepository, Int) so it can be passed around when composing Futures, or any other effects.
Maybe a solution is to create a simple State monad which would store the Map, implement a flatMap method that can be used in for comprehensions and in all the other places where needed, and which hides the mutating state.
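For illustration, a rough sketch of the second idea, where create returns the updated repository together with the generated id (purely a sketch that ignores the Future-based signatures above):
final case class InMemoryRepository(store: Map[Long, Model] = Map.empty) {
  def create(m: Model): (InMemoryRepository, Int) = {
    val id = store.size + 1
    (copy(store = store + (id.toLong -> m)), id)
  }

  def get(id: Long): Option[Model] = store.get(id)
}

// Usage threads the repository through explicitly:
// val (repo1, id1) = repo0.create(model1)
// val (repo2, id2) = repo1.create(model2)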
Do you maybe have a better approach to this?
What is the best approach to pass config values around? I created a package object with variables there like dbHost and thirdPartyUrl (loaded from config). Then I include this package object wherever needed.
Thanks in advance
There is not a single, well documented way, but here is mine. It is a bit long, so I put it on a gist.
Abstracting away the container type, as in my project I use both Future and IO:
https://gist.github.com/V-Lamp/c8862030a2f9bba4951db985b61719b8
If InMemoryRepository is only used for tests, you don't need to implement it at all. Just use a mock.
For example:
val repo = mock[Repository]
val models = Seq(mock[Model], mock[Model], mock[Model])
models.zipWithIndex.foreach { case (m, i) =>
  when(repo.create(m)).thenReturn(Future.successful(i + 1))
  when(repo.get(i + 1)).thenReturn(Future.successful(m))
}
Or even simpler, depending on what it is you are actually testing:
when(repo.create(any)).thenReturn(Future.successful(100500))
when(repo.get(any)).thenReturn(Future(mock[Model]))
As for passing config values around, something like (implicit config: Config) is often used.
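A minimal sketch of that style with Typesafe Config (the db.host key and the connectToDb function are hypothetical):
import com.typesafe.config.{Config, ConfigFactory}

def connectToDb()(implicit config: Config): Unit = {
  val dbHost = config.getString("db.host") // hypothetical key
  println(s"connecting to $dbHost")
}

// Load the config once at the edge of the application and pass it implicitly:
implicit val config: Config = ConfigFactory.load()
connectToDb()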

How do I create a "Locator" for Akka Actors

I am an akka newbie and I'm trying to build an application that is composed of Spray and Akka. As part of the application I would like to give my fellow developers (who are also new to akka) some prepackaged actors that do specific things which they can then "attach" to their actor systems.
Specifically :
Is there a recommended way to provide an "actor locator"/"Actor System locator" -- think a service-locator-like API to look up and send messages to actors? In other words, how can I implement a function like:
ActorLocator.GoogleLocationAPIActor
so that I can then use it like :
ActorLocator.GoogleLocationAPIActor ! "StreetAddress"
Assume that getGoogleLocationAPIActor returns an ActorRef that accepts Strings that are addresses and makes an HTTP call to Google to resolve each one to a lat/lon.
I could internally use actorSelection, but:
I would like to provide the GoogleLocationAPIActor as part of a library that my fellow developers can use.
#1 means that when my fellow developer builds an actor-based application, he needs a way to tell the library code where the actor system is, so that the library can go and attach the actor to it (in keeping with the one-actor-system-per-application practice). Of course, in a distributed environment, this could be a guardian for a cluster of actors that are running remotely.
Currently I define the ActorSystem in an object and access it everywhere, like:
object MyStage {
  val system: ActorSystem = ActorSystem("my-stage")
}
then
object ActorLocator {
  val GoogleLocationAPIActor = MyStage.system.actorOf(Props[GoogleLocationAPI])
}
This approach seems to be similar to this, but I'm not very sure it is a good thing. My concern is that the system seems too open for anyone to add children to, without any supervision hierarchy; it seems a bit ugly.
Is my ask a reasonable one, or am I thinking about this wrong?
How can we build up a library of actors that we can reuse across apps?
Since this is about designing an API, you're dangerously close to opinion territory, but anyway, here is how I would be tempted to structure this. Personally I'm quite allergic to global singletons, so:
Since ActorLocator is a service, I would organize it as a Trait:
trait ActorLocator {
  def GoogleLocationAPIActor: ActorRef
  def SomeOtherAPIActor: ActorRef
}
Then, you can have an implementation of the service such as:
class ActorLocatorLocalImpl(system: ActorSystem) extends ActorLocator {
  override lazy val GoogleLocationAPIActor: ActorRef =
    system.actorOf(Props[GoogleLocationAPI])
  // etc
}
And a Factory object:
object ActorLocator {
  def local(system: ActorSystem): ActorLocator =
    new ActorLocatorLocalImpl(system)
}
If you need to create more complex implementations of the service and more complex factory methods, the users, having constructed a service, still just deal with the interface of the Trait.
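For example, a user of the library would then wire things up roughly like this (the system name and message are arbitrary):
import akka.actor.ActorSystem

val system = ActorSystem("my-stage")
val locator: ActorLocator = ActorLocator.local(system)

locator.GoogleLocationAPIActor ! "1600 Amphitheatre Parkway" // any street address string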

Untyped vs TypedActors - why use untyped?

I am trying to understand why one would use untyped actors over typed actors.
I have read several posts on this, some of them below:
What is the difference between Typed and UnTyped Actors in Akka? When to use what?
http://letitcrash.com/post/19074284309/when-to-use-typedactors
I am interested in understanding why untyped actors are better in the context of:
a web server,
a distributed architecture,
scalability,
interoperability with applications written in other programming languages.
I am aware that untyped actors are better in the context of FSM because of the become/unbecome functionality.
I can see the possibilities of untyped actors in a load balancer, as it does not have to be aware of the contents of the messages, but just forwards them to other actors. However, this could be implemented with a typed actor as well.
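For comparison, a content-agnostic forwarder in untyped (classic) Akka is just a few lines; a minimal sketch:
import akka.actor.{Actor, ActorRef}

class Forwarder(target: ActorRef) extends Actor {
  def receive = {
    case msg => target forward msg // no knowledge of the message contents required
  }
}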
Can someone come up with a few use cases in the areas mentioned above where untyped actors are "better"?
There is a generic disadvantage for typed actors: they are hard to extend. When you use normal traits, you can easily combine them to build an object that implements both interfaces:
trait One {
  def callOne(arg: String)
}

trait Two {
  def callTwo(arg: Double)
}

trait Both extends One with Two
The Both trait supports the two calls combined from the two traits.
If you use the actor approach, processing messages instead of making direct calls, you are still able to extend interfaces, but you give up type safety as the price:
trait One {
  val receiveOne: PartialFunction[String, Unit] = {
    case msg: String => ()
  }
}

trait Two {
  val receiveTwo: PartialFunction[Double, Unit] = {
    case msg: Double => ()
  }
}

trait Both extends One with Two {
  val receive: PartialFunction[Any, Unit] = receiveOne orElse receiveTwo
}
The receive value in the Both trait combines two partial functions. The first accepts only Strings, the second only Doubles. They have a single common supertype: Any. So the extended version has to use Any as its argument and becomes effectively untyped. The flaw is in the Scala type system, which supports type multiplication using the with keyword but does not support union types: you cannot write Double or String.
Typed actors lose the ability for easy extension. Actors shift type checks to a contravariant position, and extending them requires union types. You can see how those work in the Ceylon programming language.
It is not that untyped and typed actors have different spheres of application. All of the functionality in question can be expressed in terms of either. The choice is more about methodology and convenience.
Typing allows you to catch some errors before you even get to unit testing, but it will cost you boilerplate for auxiliary protocol declarations. In the example above, you would have to declare the union type explicitly:
trait Protocol
final case class First(message : String) extends Protocol
final case class Second(message : Double) extends Protocol
And you lose the easy callback combination: no orElse method for you, only hand-written dispatch:
val receive: PartialFunction[Protocol, Unit] = {
  case First(msg) => receiveOne(msg)
  case Second(msg) => receiveTwo(msg)
}
And if you would like to add a bit of new functionality with a trait Three, you would be busy rewriting that boilerplate code, as sketched below.
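For illustration, adding such a Third trait means touching both the protocol and the dispatch again (Third and receiveThree are hypothetical names, just to show the boilerplate growing):
final case class Third(message: Boolean) extends Protocol

val receive: PartialFunction[Protocol, Unit] = {
  case First(msg)  => receiveOne(msg)
  case Second(msg) => receiveTwo(msg)
  case Third(msg)  => receiveThree(msg) // every new trait forces another edit here
}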
Akka provides some useful predefined enhancements for actors. They add new functionality either by mixin (e.g. the receive pipeline) or by delegation (e.g. the reliable proxy). Proxy patterns are used quite a lot in Akka applications, and they change the protocol on the fly, adding control commands to it. That cannot be done as easily with typed actors. So instead of the predefined utilities you would be forced to write your own implementations. And the forsaken utilities are not limited to FSM.
It is up to you to decide whether the typing improvement is worth the increased work. No one can give precise advice without a deep understanding of your project.
Typed actors are very new; they're explicitly marked as experimental and not ready for production use.
Warning
This module is currently experimental in the sense of being the subject of active research. This means that API or semantics can change without warning or deprecation period and it is not recommended to use this module in production just yet—you have been warned.
(as of the time this is written)
I'd like to point out a confusion that seems to have surfaced here.
Casper, the "typed actors" you refer to are deprecated and will be even removed eventually, I have explained in much detail why that's the case here: Akka typed actors in Java. The link you found with Viktor Klang answering, is talking about Akka 1.2 which is "ancient" as of now (when 2.4 is the stable release).
Having that said, there is a new experimental module called "Akka Typed", to which Daenyth is referring to in his reply. That module may indeed become a new core abstraction, however it's not yet ready for prime time.
I recommend you give the typed modules: Akka Streams (the latest addition to Akka, which will become not experimental very soon) and
Akka Typed to see how Actors may become typed in the near future (perhaps). Then, actually look again at Actors and see which model best works for your use case. Untyped Actors have the advantage of being a true and tried mature module / model, so you can really trust them in that sense, if you want more types - Akka Streams has you covered in many cases, but not all, so then you may consider the experimental module (but be aware, we most likely will change the Typed API while maturing it).

Is DAO pattern obsolete in Scala?

Let's consider a simple example of the DAO pattern. Let Person be a value object and PersonDAO the corresponding trait, which provides methods to store/retrieve a Person to/from the database:
trait PersonDAO {
  def create(p: Person)
  def find(id: Int)
  def update(p: Person)
  def delete(id: Int)
}
We use this pattern (as opposed to Active Record, for example), if we want to separate the business domain and persistence logic.
What if we use another approach instead ?
We will create a PersonDatabaseAdapter:
trait PersonDatabaseAdapter {
  def create
  def retrieve(id: Int)
  def update
  def delete
}
and an implicit conversion from Person to it:
implicit def toDatabaseAdapter(person: Person) = new PersonDatabaseAdapter {
  def create = ...
  def retrieve(id: Int) = ...
  def update = ...
  def delete = ...
}
Now if we import these conversions, we can write client code to manipulate Persons and store/retrieve them to/from the database in the following manner:
val person1 = new Person
...
person1.create
...
val person2 = new Person
...
person2.retrieve(id)
...
This code looks like Active Record but the business domain and persistence are still separated.
Does it make sense ?
Well, I don't know anything about "obsolete" patterns. A pattern is a pattern, and you use it where appropriate. Also, I don't know whether any pattern should be considered obsolete in a language unless the language itself implements it with the same functionality.
Data access object is not obsolete to my knowledge:
http://java.sun.com/blueprints/corej2eepatterns/Patterns/DataAccessObject.html
http://en.wikipedia.org/wiki/Data_access_object
It seems to me that you are still using the DAO pattern. You have just implemented it differently.
I actually found this question because I was researching whether the DAO pattern is dead in plain ol' Java, given the power of Hibernate. It seems like the Hibernate Session is a generic implementation of a DAO, defining operations such as create, save, saveOrUpdate, and more.
In practice, I have seen little value in using the DAO pattern. When using Hibernate, the DAO interfaces and implementations are redundant wrappers around one-liner Hibernate Session idioms, e.g., getSession().update(...);
What you end up with is duplicate interfaces - a Service interface that redefines all of the methods in the DAO interface plus a few others implemented in terms of those.
It seems that Spring + Hibernate has reduced persistence logic almost to a triviality. Persistence technology portability is NOT needed in almost all applications. Hibernate already gives you database portability. Sure, DAOs would give you the ability to change from Hibernate to Toplink, but in practice, one would never do this. Persistence technologies are already leaky abstractions and applications are built to deal with this fact - such as loading a proxy for setting associations vs. performing a database hit - which necessarily couples them to the persistence technology. Being coupled to Hibernate is not really so bad though since it does its best to get out of the way (no checked exceptions a la JDBC and other nonsense).
In summary, I think that your Scala implementation of the DAO pattern is fine, though you could probably create a completely generic mixin that would give basic CRUD operations to any entity (most competent devs tend to implement a "generic DAO" base class in Java, too). Is that what Active Record does?
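A rough sketch of what such a generic construct could look like in Scala (GenericDao and its type parameters are hypothetical, not from the question):
trait GenericDao[T, Id] {
  def create(entity: T): Unit
  def find(id: Id): Option[T]
  def update(entity: T): Unit
  def delete(id: Id): Unit
}

// A concrete DAO just fixes the entity and key types and fills in the persistence calls.
trait PersonDao extends GenericDao[Person, Int]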
/end of random comments/
I believe your PersonDatabaseAdapter mutates Person in its retrieve(id: Int) method. So this pattern forces your domain objects to be mutable, while the Scala community seems to favor immutability due to the functional nature (or features) of the language.
Otherwise, I think the DAO pattern still has the same advantages and disadvantages (listed here) in Scala as it does in Java.
Nowadays I notice the Repository pattern is quite popular, especially as its terminology makes it look like you're dealing with collections.
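For example, a collection-flavoured repository interface might read like this (a sketch; the names are made up):
trait PersonRepository {
  def all: Seq[Person]
  def byId(id: Int): Option[Person]
  def add(p: Person): Unit
  def remove(id: Int): Unit
}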

How do you do dependency injection with the Cake pattern without hardcoding?

I just read and enjoyed the Cake pattern article. However, to my mind, one of the key reasons to use dependency injection is that you can vary the components being used by either an XML file or command-line arguments.
How is that aspect of DI handled with the Cake pattern? The examples I've seen all involve mixing traits in statically.
Since mixing in traits is done statically in Scala, if you want to vary the traits mixed in to an object, create different objects based on some condition.
Let's take a canonical cake pattern example. Your modules are defined as traits, and your application is constructed as a simple Object with a bunch of functionality mixed in:
val application =
  new Object
    with Communications
    with Parsing
    with Persistence
    with Logging
    with ProductionDataSource

application.startup
Now all of those modules have nice self-type declarations which define their inter-module dependencies, so that line only compiles if all of your inter-module dependencies exist, are unique, and are well-typed. In particular, the Persistence module has a self-type which says that anything implementing Persistence must also implement DataSource, an abstract module trait. Since ProductionDataSource inherits from DataSource, everything's great, and that application construction line compiles.
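In code, that self-type constraint looks roughly like this (a sketch; the member names are made up):
trait DataSource {
  def query(sql: String): Seq[String]
}

trait ProductionDataSource extends DataSource {
  def query(sql: String): Seq[String] = Seq.empty // would talk to the real database
}

trait Persistence { this: DataSource =>
  // query is in scope because anything mixing in Persistence must also be a DataSource
  def save(record: String): Unit = {
    query(s"INSERT $record")
    ()
  }
}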
But what if you want to use a different DataSource, pointing at some local database for testing purposes? Assume further that you can't just reuse ProductionDataSource with different configuration parameters, loaded from some properties file. What you would do in that case is define a new trait TestDataSource which extends DataSource, and mix it in instead. You could even do so dynamically based on a command line flag.
val application =
  if (test)
    new Object
      with Communications
      with Parsing
      with Persistence
      with Logging
      with TestDataSource
  else
    new Object
      with Communications
      with Parsing
      with Persistence
      with Logging
      with ProductionDataSource

application.startup
Now that looks a bit more verbose than we would like, particularly if your application needs to vary its construction on multiple axes. On the plus side, you usually only have one chunk of conditional construction logic like that in an application (or at worst one per identifiable component lifecycle), so at least the pain is minimized and fenced off from the rest of your logic.
Scala is also a scripting language, so your configuration XML can be a Scala script. It is type-safe and not a different language.
Simply look at startup:
scala -cp first.jar:second.jar startupScript.scala
is not so different than:
java -cp first.jar:second.jar com.example.MyMainClass context.xml
You can always use DI, but you have one more tool.
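For illustration, startupScript.scala could then contain ordinary, type-checked Scala along these lines (reusing the trait names from the cake example above; otherwise hypothetical):
// startupScript.scala -- the "configuration" is just Scala code
val application = new Object
  with Communications
  with Parsing
  with Persistence
  with Logging
  with TestDataSource

application.startup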
The short answer is that Scala doesn't currently have any built-in support for dynamic mixins.
I am working on the autoproxy-plugin to support this, although it's currently on hold until the 2.9 release, when the compiler will have new features making it a much easier task.
In the meantime, the best way to achieve almost exactly the same functionality is by implementing your dynamically added behavior as a wrapper class, then adding an implicit conversion back to the wrapped member.
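A minimal sketch of that wrapper-plus-implicit-conversion idea (all names here are made up):
trait Service { def run(): Int }

// A wrapper class that adds behaviour without touching Service itself
class AuditedService(val underlying: Service) {
  def runAudited(): Int = {
    println("about to run")
    underlying.run()
  }
}

// Implicit conversion back to the wrapped member, so an AuditedService can be
// used wherever a plain Service is expected
implicit def toService(a: AuditedService): Service = a.underlying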
Until the AutoProxy plugin becomes available, one way to achieve the effect is to use delegation:
trait Module {
  def foo: Int
}

trait DelegatedModule extends Module {
  var delegate: Module = _
  def foo = delegate.foo
}

class Impl extends Module {
  def foo = 1
}

// later
val composed: Module with ... with ... = new DelegatedModule with ... with ...
composed.delegate = choose() // choose is linear in the number of `Module` implementations
But beware, the downside of this is that it's more verbose, and you have to be careful about initialization order if you use vars inside a trait. Another downside is that if there are path-dependent types within Module above, you won't be able to use delegation that easily.
But if there is a large number of different implementations that can be varied, it will probably cost you less code than listing cases with all possible combinations.
Lift has something along those lines built in. It's mostly in Scala code, but you have some runtime control. http://www.assembla.com/wiki/show/liftweb/Dependency_Injection