When we implement DI via Reader, we make a dependency a part of our method signature. Assume we have (without implementations):
trait Service1 { def f1: Int = ??? }
trait Service2 { def f2: Reader[Service1, Int] = ??? }
type Env = (Service1, Service2)
def c: Reader[Env, Int] = ??? // use Service2.f2 here
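For concreteness, here is one runnable reading of that setup, with a minimal hand-rolled Reader (an assumption, since the question doesn't name a library) and throwaway implementations:

```scala
// Minimal hand-rolled Reader, standing in for e.g. cats.data.Reader
// (an assumption: the question doesn't name a library).
case class Reader[R, A](run: R => A)

trait Service1 { def f1: Int = 1 }
trait Service2 { def f2: Reader[Service1, Int] = Reader(s1 => s1.f1 + 1) }

type Env = (Service1, Service2)

// c pulls the Service1 that f2 needs out of the wider Env.
def c: Reader[Env, Int] = Reader { case (s1, s2) => s2.f2.run(s1) }

val result = c.run((new Service1 {}, new Service2 {}))  // 2
```

Note how c must know exactly which slice of Env to hand to f2; that is what makes the next change painful.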
Now suppose f2 needs an additional service for its implementation, say:
trait Service3
type Service2Env = (Service1, Service3)
// new dependencies on both:
trait Service2 { def f2: Reader[Service2Env, Int] = ??? }
This breaks existing clients: they can no longer use Service2.f2 without also providing a Service3.
With DI via injection (constructor or setters), as is common in OOP, c would depend only on Service2; how Service2 is constructed and what its own dependencies are is not my concern. From that point on, any new dependencies in Service2 leave the signature of the c function unchanged.
How is this solved the FP way? Are there options? Is there a way to inject new dependencies, but somehow protect customers from the change?
Is there a way to inject new dependencies, but somehow protect customers from the change?
That would kind of defeat the purpose, as using Reader (or alternatively Final Tagless or ZIO Environment) is a way to explicitly declare (direct and indirect) dependencies in the type signature of each function. You are doing this to be able to track where in your code these dependencies are used -- just by looking at a function signature you can tell if this code might have a dramatic side-effect such as, say, sending an email (or maybe you are doing this for other reasons, but the result is the same).
You probably want to mix-and-match this with constructor-injection for the dependencies/effects that do not need that level of static checking.
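A sketch of that mix (hand-rolled Reader again; Service2Impl and the f3 method are illustrative, not from the question): Service1 stays in the Reader signature where static tracking matters, while Service3 is fed to the Service2 implementation through its constructor, so f2's signature never changes:

```scala
case class Reader[R, A](run: R => A)

trait Service1 { def f1: Int }
trait Service3 { def f3: Int }
trait Service2 { def f2: Reader[Service1, Int] }

// Service3 arrives via the constructor: invisible to f2's callers.
class Service2Impl(service3: Service3) extends Service2 {
  def f2: Reader[Service1, Int] = Reader(s1 => s1.f1 + service3.f3)
}

val service2: Service2 = new Service2Impl(new Service3 { def f3 = 10 })
val result = service2.f2.run(new Service1 { def f1 = 1 })  // 11
```

Clients of Service2 keep compiling when Service2Impl grows new constructor parameters; only the wiring site changes.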
Related
Consider a case class:
case class configuredThing[A, B](param: String) {
  val ...
  def ...
}
Tests could be partially written for configuredThing, which has some methods that make external calls to other services.
This case class is used elsewhere:
object actionObject {
  private lazy val thingA = configuredThing[A, B]("A")
  private lazy val thingB = configuredThing[C, D]("B")
  def ...
}
Here the types A, B, C, and D are actually specified; they are not native Scala types but are defined in a third-party package we are leveraging to interface with some external services.
In trying to write tests for this object, the requirement is that no external calls be made, so as to test only the logic in actionObject. This led to looking into how to mock out configuredThing within actionObject, to be able to make assertions on the different interactions with the configuredThing instances. However, it is unclear how to do this.
In looking at ScalaMock's documentation (via ScalaTest), this would need to be done with the Generated mocks system, which is said to "rely on the ScalaMock compiler plugin". However, that plugin appears not to have been released since Scala 2.9.2; see here.
So, my question is this: how can this be tested?
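One way around the mocking limitation is to invert the dependency by hand: put the external-calling methods behind a small trait and let the logic take its collaborators as constructor parameters, so tests can pass hand-written stubs. A hedged sketch with simplified stand-in types (ThingLike, ActionLogic and the String/Int parameters are all mine, since A, B, C and D come from the third-party package):

```scala
// Simplified stand-ins: the real A, B, C, D come from a third-party package.
trait ThingLike[A, B] {
  def call(input: A): B  // the method that would hit an external service
}

case class ConfiguredThing[A, B](param: String) extends ThingLike[A, B] {
  def call(input: A): B = sys.error("real external call")
}

// Turn the object's logic into a class that takes its collaborators,
// keeping a default-wired object for production code.
class ActionLogic(thingA: ThingLike[String, Int]) {
  def act(s: String): Int = thingA.call(s) + 1
}

object actionObject extends ActionLogic(ConfiguredThing[String, Int]("A"))

// In a test: a hand-written stub, no mocking framework required.
val stub  = new ThingLike[String, Int] { def call(input: String) = 41 }
val logic = new ActionLogic(stub)
val out   = logic.act("x")  // 42
```

This trades a little ceremony for testability without any compiler plugin.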
I am attempting to develop a small component for a Play-based service and am struggling with how to go about the design. I need a service that provides access to "file storage" in various environments. My initial idea is simply to create a trait that represents the two primary operations (write/read) and then autowire the implementation based on configuration at runtime. The interface looks something like the following:
trait StorageService {
  def saveFile(name: String): Sink[ByteString, Future[Try[FileHandle]]]
  def readFile(handle: FileHandle): Source[ByteString]
}
The service allows a user to save a file to disk and returns a FileHandle object that contains all of the information the service needs to identify the specific file. The issue I'm having is that the FileHandle needs different information for each concrete implementation. Now, I can use F-bound polymorphism on the trait to solve this like so:
trait StorageService[A <: FileHandle] {
  def saveFile(name: String): Sink[ByteString, Future[Try[A]]]
  def readFile(handle: A): Source[ByteString]
}
However, the issue here becomes autowiring as I cannot wire the concrete implementations as StorageService providers without specifying the type parameter. Does anyone have a suggestion on how to accomplish this autowiring or perhaps a better design?
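One design that sidesteps the wiring problem is to move the handle type from a type parameter to an abstract type member, so the service can be wired as a plain StorageService while each implementation still fixes its own handle type. A sketch with deliberately simplified signatures (Array[Byte] stands in for the Akka Sink/Source plumbing; LocalStorage and LocalHandle are hypothetical):

```scala
trait FileHandle
case class LocalHandle(path: String) extends FileHandle

trait StorageService {
  type Handle <: FileHandle  // each implementation fixes its own handle type
  def saveFile(name: String): Handle
  def readFile(handle: Handle): Array[Byte]
}

class LocalStorage extends StorageService {
  type Handle = LocalHandle
  def saveFile(name: String): LocalHandle = LocalHandle(s"/tmp/$name")
  def readFile(handle: LocalHandle): Array[Byte] = Array.emptyByteArray
}

// Wiring needs no type parameter:
val storage: StorageService = new LocalStorage
val handle = storage.saveFile("f.txt")  // handle: storage.Handle
val bytes  = storage.readFile(handle)   // path-dependent type keeps this safe
```

The path-dependent `storage.Handle` still prevents feeding a handle from one backend into another, which is the safety the F-bound was buying.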
I am using Intellij 14 and in the module settings, there is an option to export dependencies.
I noticed that when I write objects that extend traits, I need to select export in the module settings when other modules try to use these objects.
For example,
object SomeObj extends FileIO
would require me to export the FileIO dependency.
However, if I instead write a private class and a factory object that returns an instance of it when called, the exporting is no longer necessary.
object SomeObject {
  private val someObject = new SomeObject()
  def apply() = someObject
}

private[objectPkg] class SomeObject() extends FileIO {}
This code is more verbose and kind of a hack to the singleton pattern for Scala. Is it good to export third party dependencies with your module? If not, is my pattern the typical solution with Scala?
This all comes down to code design principles in general. Basically, if you may switch the underlying third-party library later, or your system must be flexible enough to be ported to other libraries, then hiding the implementation behind a facade is a must.
Often there is a ready-made set of interfaces in Java/Scala that third parties implement, and you can just use those as part of your facade to the rest of the system; overall, that is the Java way. If this is not the case, you will need to derive the interfaces yourself. How worthwhile that is, everyone estimates for themselves in context.
As for your case: keep in mind that in Java/Scala you export names, and if you use your class (which extends FileIO) in any way outside the code that defines it, that class is publicly accessible and its type is exported/leaked as well. Scala should raise a compile error if a private class escapes its visibility scope (so in your second version of SomeObject this may be the case).
Consider this example: I often use the Typesafe Config library in my applications. It has convenient methods, but I typically leave room for a possible separation (or rather my own extension):
package object config {
  object Config {
    private val conf: TypeSafeConfig = ConfigFactory.load()
    def toTypeSafeConfig: TypeSafeConfig = conf
  }

  @inline implicit def confToTypeSafeConfig(conf: Config.type): TypeSafeConfig = conf.toTypeSafeConfig
}
The implicit conversion just allows me to call all TypeSafeConfig methods on my Config, and it has a bunch of convenient methods. Theoretically, in the future I could remove my implicit and implement the methods I used directly in the Config object. But I can hardly imagine why I would spend the time on that. This is an example of leaked implementation that I don't consider problematic.
I have a function with the following signature:
def myFunc[T <: AnyRef](arg: T)(implicit m: Manifest[T]) = ???
How can I invoke this function if I do not know the exact type of the argument at the compile time?
For example:
val obj: AnyRef = new Foo() // At compile time obj is defined as AnyRef,
val objClass = obj.getClass // At runtime I can figure out that it is actually Foo
// Now I would need to call `myFunc[Foo](obj.asInstanceOf[Foo])`,
// but how would I do it without putting [Foo] in the square braces?
I would want to write something logically similar to:
myFunc[objClass](obj.asInstanceOf[objClass])
Thank you!
UPDATE:
The question is invalid. As @DaoWen, @Jelmo and @itsbruce correctly pointed out, the thing I was trying to do was complete nonsense! I had just severely overthought the problem.
THANK YOU guys! It's too bad I cannot accept all the answers as correct :)
So, the problem was caused by the following situation:
I am using Salat library to serialize the objects to/from BSON/JSON representation.
Salat has a Grater[T] class which is used for both serialization and deserialization.
The method call for deserialization from BSON looks this way:
val foo = grater[Foo].asObject(bson)
Here, the role of type parameter is clear. What I was trying to do then is to use the same Grater to serialize any entity from my domain model. So I wrote:
val json = grater[???].toCompactJSON(obj)
I immediately rushed for reflection and just didn't see the obvious solution lying on the surface, which is:
grater[Entity].toCompactJSON(obj) // where Entity...
@Salat trait Entity // is the root of the domain model hierarchy
Sometimes things are much easier than we think they are! :)
It appears that while I was writing this answer the author of the question realized that he does not need to resolve Manifests at runtime. However, in my opinion this is a perfectly legitimate problem, which I solved successfully while writing a YAML [de]serialization library, so I'm leaving the answer here.
It is possible to do what you want using ClassTags or even TypeTags. I can't say much about Manifests because that API is deprecated and I haven't worked with it, but I believe it would be easier with Manifests, since they weren't as sophisticated as the new Scala reflection. FYI, Manifest's successor is TypeTag.
Suppose you have the following functions:
def useClasstag[T: ClassTag](obj: T) = ...
def useTypetag[T: TypeTag](obj: T) = ...
and you need to call them with obj: AnyRef as an argument while providing either a ClassTag or a TypeTag for the obj.getClass class as the implicit parameter.
ClassTag is the easiest one. You can create ClassTag directly from Class[_] instance:
useClasstag(obj)(ClassTag(obj.getClass))
That's all.
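Put together, a self-contained version of the ClassTag route (the function body is illustrative; it just echoes the runtime class name):

```scala
import scala.reflect.ClassTag

def useClasstag[T: ClassTag](obj: T): String =
  implicitly[ClassTag[T]].runtimeClass.getName

val obj: AnyRef = "hello"  // statically just an AnyRef
val name = useClasstag(obj)(ClassTag(obj.getClass))  // "java.lang.String"
```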
TypeTags are harder. You need Scala reflection to obtain one from the object, and then you have to rely on some of Scala reflection's internals.
import scala.reflect.runtime.universe._
import scala.reflect.api
import api.{Universe, TypeCreator}
// Obtain runtime mirror for the class' classloader
val rm = runtimeMirror(obj.getClass.getClassLoader)
// Obtain instance mirror for obj
val im = rm.reflect(obj)
// Get obj's symbol object
val sym = im.symbol
// Get symbol's type signature - that's what you really want!
val tpe = sym.typeSignature
// Now the black magic begins: we create TypeTag manually
// First, make so-called type creator for the type we have just obtained
val tc = new TypeCreator {
  def apply[U <: Universe with Singleton](m: api.Mirror[U]) =
    if (m eq rm) tpe.asInstanceOf[U#Type]
    else throw new IllegalArgumentException(s"Type tag defined in $rm cannot be migrated to other mirrors.")
}
// Next, create a TypeTag using runtime mirror and type creator
val tt = TypeTag[AnyRef](rm, tc)
// Call our method
useTypetag(obj)(tt)
As you can see, this machinery is rather complex. It means that you should use it only if you really need it, and, as others have said, the cases when you really need it are very rare.
This isn't going to work. Think about it this way: You're asking the compiler to create a class Manifest (at compile time!) for a class that isn't known until run time.
However, I have the feeling you're approaching the problem the wrong way. Is AnyRef really the most you know about the type of Foo at compile time? If that's the case, how can you do anything useful with it? (You won't be able to call any methods on it except the few that are defined for AnyRef.)
It's not clear what you are trying to achieve and a little more context could be helpful. Anyway, here's my 2 cents.
Using Manifest will not help you here because the type parameter needs to be known at compile time. What I propose is something along these lines:
def myFunc[T](arg: AnyRef, klass: Class[T]) = {
  val obj: T = klass.cast(arg)
  // do something with obj... but what?
}
And you could call it like this:
myFunc(obj, classOf[Foo])
Note that I don't see how you can do something useful inside myFunc. At compile time, you cannot call any method on a object of type T beside the methods available for AnyRef. And if you want to use reflection to manipulate the argument of myFunc, then there is no need to cast it to a specific type.
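A runnable version of that idea, for reference (Foo and its toString are placeholders; classOf[Foo] is Scala's spelling of Java's Foo.class):

```scala
class Foo { override def toString = "a Foo" }

// Class[T].cast gives us a checked, type-safe cast without asInstanceOf.
def myFunc[T](arg: AnyRef, klass: Class[T]): T = klass.cast(arg)

val obj: AnyRef = new Foo
val foo: Foo = myFunc(obj, classOf[Foo])  // foo.toString == "a Foo"
```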
This is the wrong way to work with a type-safe OO language. If you need to do this, your design is wrong.
def myFunc[T <: AnyRef](arg: T)(implicit m: Manifest[T]) = ???
This is, of course, useless, as you have probably discovered. What kind of meaningful function can you call on an object which might be anything? You can't make any direct reference to its properties or methods.
I would want to write something logically similar to:
myFunc[objClass](obj.asInstanceOf[objClass])
Why? This kind of thing is generally only necessary for very specialised cases. Are you writing a framework that will use dependency injection, for example? If you're not doing some highly technical extension of Scala's capabilities, this should not be necessary.
I bet you know something more about the class, since you say you don't know the exact type. One big part of the way class-based OO works is that if you want to do something to a general type of objects (including all its subtypes), you put that behaviour into a method belonging to the class. Let subclasses override it if they need to.
Frankly, the way to do what you are attempting is to invoke the function in a context where you know enough about the type.
I was reading (ok, skimming) Dubochet and Odersky's Compiling Structural Types on the JVM and was confused by the following claim:
Generative techniques create Java interfaces to stand in
for structural types on the JVM. The complexity of such
techniques lies in that all classes that are to be used as
structural types anywhere in the program must implement
the right interfaces. When this is done at compile time, it
prevents separate compilation.
(emphasis added)
Consider the autoclose example from the paper:
type Closeable = Any { def close(): Unit }
def autoclose(t: Closeable)(run: Closeable => Unit): Unit = {
  try { run(t) }
  finally { t.close }
}
Couldn't we generate an interface for the Closeable type as follows:
public interface AnonymousInterface1 {
  public void close();
}
and transform our definition of autoclose to
// UPDATE: using a view bound here, so implicit conversion is applied on-demand
def autoclose[T <% AnonymousInterface1](t: T)(run: T => Unit): Unit = {
  try { run(t) }
  finally { t.close }
}
Then consider a call-site for autoclose:
val fis = new FileInputStream(new File("f.txt"))
autoclose(fis) { ... }
Since fis is a FileInputStream, which does not implement AnonymousInterface1, we need to generate a wrapper:
class FileInputStreamAnonymousInterface1Proxy(val self: FileInputStream)
    extends AnonymousInterface1 {
  def close() = self.close()
}

object FileInputStreamAnonymousInterface1Proxy {
  implicit def fis2proxy(fis: FileInputStream): FileInputStreamAnonymousInterface1Proxy =
    new FileInputStreamAnonymousInterface1Proxy(fis)
}
I must be missing something, but it's unclear to me what it is. Why would this approach prevent separate compilation?
As I recall from a discussion on the Scala-Internals mailing list, the problem with this is that object identity, which is preserved by the current approach to compilation, is lost when you wrap values.
Think about it. Consider class A
class A { def a1(i: Int): String = { ... }; def a2(s: String): Boolean = { ... } }
Some place in the program, possibly in a separately compiled library, this structural type is used:
{ def a1(i: Int): String }
and elsewhere, this one is used:
{ def a2(s: String): Boolean }
How, apart from global analysis, is class A to be decorated with the interfaces necessary to allow it to be used where those far-flung structural types are specified?
If every possible structural type that a given class could conform to is used to generate an interface capturing that structural type, there's an explosion of such interfaces. Remember that a structural type may mention more than one required member, so for a class with N public members (vals or defs), every subset of those N might be required; that's the power set of those members, whose cardinality is 2^N.
I actually use the implicit approach (using typeclasses) you describe in the Scala ARM library. Remember that this is a hand-coded solution to the problem.
The biggest issue here is implicit resolution. The compiler will not generate wrappers for you on the fly; you must do so ahead of time and make sure they are in the implicit scope. This means (for Scala-ARM) that we provide "common" wrappers for whatever resources we can, and fall back to reflection-based types when we can't find the appropriate wrapper. This has the advantage of allowing the user to specify their own wrapper using normal implicit rules.
See: the Resource type-trait and all of its predefined wrappers.
Also, I blogged about this technique describing the implicit resolution magic in more detail: Monkey Patching, Duck Typing and Type Classes.
In any case, you probably don't want to hand-encode a type class every time you use structural types. If you actually wanted the compiler to automatically create an interface and do the magic for you, it could get messy. Every time you define a structural type, the compiler would have to create an interface for it (somewhere in the ether, perhaps?). We now need namespaces for these things. Also, with every call the compiler would have to generate some kind of wrapper-implementation class (again with the namespace issue). Finally, if we have two different methods with the same structural type that are compiled separately, we've just exploded the number of interfaces we require.
Not that the hurdle couldn't be overcome, but if you want to have structural typing with "direct" access for particular types the type-trait pattern seems to be your best bet today.
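A minimal sketch of that type-trait pattern (the names are mine, loosely modeled on Scala ARM's Resource; this is not the library's actual API):

```scala
// The type trait: evidence that a T can be closed.
trait CloseLike[T] {
  def close(t: T): Unit
}

object CloseLike {
  // One hand-written instance covers the whole java.io.Closeable hierarchy;
  // a reflection-based fallback for arbitrary { def close(): Unit } could live here too.
  implicit def closeableIsCloseLike[T <: java.io.Closeable]: CloseLike[T] =
    new CloseLike[T] { def close(t: T): Unit = t.close() }
}

// autoclose asks for the evidence instead of a structural type.
def autoclose[T](t: T)(run: T => Unit)(implicit c: CloseLike[T]): Unit =
  try run(t) finally c.close(t)

val out = new java.io.ByteArrayOutputStream
autoclose(out)(_.write(42))  // out gets closed; no reflection, no wrapper identity issues
```

Users can add instances for their own types in normal implicit scope, which is exactly the "hand-coded solution" trade-off described above.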