In the typical first example of using the Reader monad for dependency injection we have something like the classic https://github.com/hermannhueck/composing-functions/blob/master/src/main/scala/demo/Demo08bDbReader.scala
Generally, at the core of it is the idea of returning a function that takes as a parameter the very thing we want to inject, e.g.
trait UserService {
  def getUserById(id: String): UserRepo => User
}
I have seen several toy examples here and there, and they all work fine for the purpose of explaining the main idea.
However, I am having a hard time translating that to a real-world example where you have a Repo that actually connects to a DB.
Indeed, in that scenario the Repo itself depends on something else, which is either a DB connection or an EnvConfig from which the DB connection is created.
This also means that every method of the UserRepo will depend on that DB connection or EnvConfig. If constructor injection is not used for the repo, then all the methods of the UserRepo will need it too, which can escalate back up to whatever Service calls the UserRepo.
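To make that concrete, here is a minimal sketch of the escalation; DbConnection, findUser, and the shape of User are hypothetical names for illustration:

trait DbConnection
case class User(id: String, name: String)

trait UserRepo {
  // every repo method now has to ask for the connection
  def findUser(id: String): DbConnection => User
}

trait UserService {
  val repo: UserRepo
  // the connection dependency escalates up into the service signature
  def getUserById(id: String): DbConnection => User =
    conn => repo.findUser(id)(conn)
}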
Am I missing something here? I must be, otherwise I do not understand all the buzz around it.
Can someone explain what I am missing?
Related
In ZIO we provide the environment by instantiating traits:
program.provide(
  new Console.Live with MyComponent {}
)
What I wanted to do is to inject MyComponent dynamically from a configuration file, analogous to Guice modules.
The whole scenario is described in this Blog.
I can inject a dependency and then create the Environment like:
program.provide(
  new Console.Live with Components.Live {
    def compsService: Components.Service[Console] = service
  }
)
Where service is injected.
This works but has one big disadvantage: we have to define the environment for all Service implementations. So, for example, if one of them wants to use Random, that is not possible, as we only provide Console.
Is there an alternative to this?
As an idea to solve this problem, you could check this concept. Maybe someday I'll write a library, but I feel it's enough to get the idea.
https://gist.github.com/holinov/50fbf349fcb9f6e6c2b89ce319c20bba
If you could wrap injector creation in RIO[Config, Injector] and injection in RIO[Injector, Service], it could fit your needs.
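A minimal sketch of that wrapping, assuming ZIO 1.x; the Config type carrying the Guice modules and the inject helper are hypothetical names for illustration:

import zio.{RIO, ZIO}
import com.google.inject.{Guice, Injector, Module}

object ZioGuice {
  // hypothetical: the configuration file decides which Guice modules to install
  final case class Config(modules: List[Module])

  // RIO[Config, Injector]: build the injector from the configured modules
  val makeInjector: RIO[Config, Injector] =
    ZIO.accessM[Config](cfg => ZIO.effect(Guice.createInjector(cfg.modules: _*)))

  // RIO[Injector, A]: resolve a service from the injector in the environment
  def inject[A](clazz: Class[A]): RIO[Injector, A] =
    ZIO.accessM[Injector](inj => ZIO.effect(inj.getInstance(clazz)))
}

Loading the Config at startup and feeding these through provide would then give you Guice-style modules chosen from configuration.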
This might be a special use case that I am dealing with here. Here is what my simple C# NUnit test that uses Moq looks like:
Mock<ISomeRepository> mockR = new Mock<ISomeRepository>();
mockR.Setup(x => x.GetSomething()).Returns(new Something(a: 1, b: 2));
// use the mocked repository here
Now, later in this same unit test or in another test case, I want to invoke the real implementation of the method GetSomething() on this mockR object.
Is there a way to do that? My repository is a Singleton at heart, so even if I create a new object, the GetSomething method still returns the same Moq'd object.
That would largely depend on your implementation of that GetSomething, which is something you're not showing here ;).
Mocks are used to represent the dependencies of a class, allowing that class to be tested without using its actual dependencies. Alternatively, you can write tests which involve the actual dependencies.
But using a mocked dependency and the real dependency within the same unit test sounds like you're not clear on what your test is testing.
If it's another test case, it shouldn't be a problem either. Each test should not impact another, so if you set up the class under test separately that should be fine, even with a singleton.
I'm assuming that you're injecting the singleton dependency. If not, do that.
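To sketch that last point (in Scala rather than C#, with hypothetical names; the shape is the same with Moq), constructor injection lets one test pass a stub and another test pass the real singleton, without either setup leaking into the other:

trait SomeRepository {
  def getSomething(): Something
}

case class Something(a: Int, b: Int)

// hypothetical stand-in for the real, singleton repository
object RealRepository extends SomeRepository {
  def getSomething(): Something = Something(0, 0)
}

// the class under test receives its dependency instead of reaching
// for the singleton itself
class ServiceUnderTest(repo: SomeRepository) {
  def describe(): String = {
    val s = repo.getSomething()
    s"a=${s.a}, b=${s.b}"
  }
}

object Tests {
  // one test wires in a hand-rolled stub...
  val withStub = new ServiceUnderTest(new SomeRepository {
    def getSomething(): Something = Something(1, 2)
  })

  // ...another wires in the real singleton
  val withReal = new ServiceUnderTest(RealRepository)
}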
I'm a newbie to Scala, and I have years of experience programming in Java.
Usually there are two patterns for passing some config:
Using a global object, something like a "ConfigManager", and getting the config directly from it every time I need it.
Passing the config through parameters. The config param may exist in many layers of the program.
When I'm writing Java, I choose one pattern depending on how the config will be used.
But in Scala, many people talk about eliminating side effects. This makes me wonder if I should use the second pattern at all costs.
Which pattern is better in Scala?
Global objects are bad: https://softwareengineering.stackexchange.com/questions/148108/why-is-global-state-so-evil
Make each component take its configuration (the individual pieces) as constructor parameters (possibly with some defaults). That prevents the creation of invalid components or components that have not been configured.
You can collect the initial processing of configuration values in a single class to centralize configuration code and to fail fast when things are missing. But don't make your components (the classes needing the configuration) depend on a global object or take an entire configuration as a parameter; give them just what they need as constructor params.
Example:
import com.typesafe.config.{Config, ConfigFactory}

// centralize the parsing of configuration
case class AppConfig(config: Config) {
  val timeInterval = config.getInt("time_interval")
  val someOtherSetting = config.getString("some_other_setting")
}

...

// don't depend on global objects
class SomeComponent(timeInterval: Int) {
  ...
}

object SomeApplication extends App {
  val config = AppConfig(ConfigFactory.load())
  val component = new SomeComponent(config.timeInterval)
}
Use a global object (this object stores only read-only, immutable data, so there are no issues) which loads the configuration object and the config values all at once. This has many benefits over loading the configuration deep inside the code.
object ConfigParams {
  val config = ConfigFactory.load()
  val timeInterval = config.getInt("time_interval")
  ....
}
Benefits:
Prevents runtime errors (fail-fast approach).
If you have misspelled any property name, your app fails during startup because you fetch the data eagerly. If the lookup were buried deep inside the codebase, it would only fail when control reaches that line, so it would be hard to detect without rigorous testing.
Central place for all configuration logic and configuration transformations, if any.
This serves as a central place for all config logic, making it easy to change and maintain.
Transformations can be done without needing to refactor the rest of the code.
Maintainable and readable.
Easy refactoring.
Functional programming point of view
Yes, loading the config file eagerly is a great idea from a fail-fast point of view, but it is not a good functional programming practice.
The important thing, though, is that you are not mixing the side effect with any other logic, and you keep it separate during the loading of the app. Since you isolate the side effect and perform it at the start of your program, this is not a problem.
Once the side effect is done and the app has started, your pure code base is not affected by it and remains pure and clean. So, although it is a side effect, it is isolated and does not affect your codebase. The benefits are worth it, so go ahead.
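As a sketch of that isolation (names are illustrative), the eager load happens once at the edge of the program and everything downstream only sees plain values:

object Main extends App {
  // the side-effecting config load is triggered here, once, at the edge
  val interval: Int = ConfigParams.timeInterval

  // downstream code stays pure: it receives plain values, not the config
  def schedule(intervalSeconds: Int): String =
    s"running every $intervalSeconds seconds"

  println(schedule(interval))
}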
As the simplest example, say I'm starting my application in a certain mode (e.g. test); then I want to be able to check in other parts of the application what mode I'm running in. This should be extremely simple, but I'm looking for the right Scala replacement for global variables. Please give me a bit more than "Scala objects are like global variables".
The ideal solution is that at start-up, the application will create an object, and at creation time, that object's 'mode' is set. After that, other parts of the application will just be able to read the state of 'mode'. How can I do this without passing a reference to an object all over the application?
My real scenario actually includes things such as selecting the database name, or a singleton database object, at start-up, and not allowing anything else to change that object afterwards. The catch is that I'm trying to achieve this without passing the database reference all over the application.
UPDATE:
Here is a simple example of what I would like to do, and my current solution:
object DB {
  class PDB extends ProductionDB
  class TDB extends TestComplianceDB

  lazy val pdb = new PDB
  lazy val tdb = new TDB

  def db = tdb // (or pdb) How can I set this once at initialisation?
}
So, I've created different database configurations as traits. Depending on whether I'm running in Test or Production mode, I would like to use the correct configuration where configurations look something like:
trait TestDB extends DBConfig {
  val m = new Model("H2", new DAL(H2Driver),
    Database.forURL("jdbc:h2:mem:testdb", driver = "org.h2.Driver"))
  // This is an in-memory database, so it will not yet exist.
  dblogger.info("Using TestDB")
  m.createDB
}
So now, whenever I use the database, I could use it like this:
val m = DB.db.m
m.getEmployees(departmentId)
My question really is: is this style bad, good, or OK (using a singleton to hold a handle to the database)? I'm using Slick, and I think this relates to having just one instance of Slick running. Could this lead to scalability issues?
Is there a better way to solve the problem?
You can use the Typesafe Config library, which is also used in projects like Play and Akka. Both the Play and the Akka documentation explain the basic parts of its usage. From the Play documentation (Additional configuration):
Specifying an alternative configuration file
The default is to load the application.conf file from the classpath. You can specify an alternative configuration file if needed:
Using -Dconfig.resource
-Dconfig.resource=prod.conf
Using -Dconfig.file
-Dconfig.file=/opt/conf/prod.conf
Using -Dconfig.url
-Dconfig.url=http://conf.mycompany.com/conf/prod.conf
Note that you can always reference the original configuration file in a new prod.conf file using the include directive, such as:
include "application.conf"
key.to.override=blah
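Tying this back to the question above, here is a minimal sketch that reads a mode key once at startup and picks the database accordingly; the app.mode key is an assumption, and it presumes ProductionDB and TestComplianceDB are concrete enough to instantiate:

import com.typesafe.config.ConfigFactory

object DB {
  // honours the -Dconfig.resource / -Dconfig.file / -Dconfig.url overrides above
  private val config = ConfigFactory.load()

  // the mode is read exactly once at startup; nothing can change db afterwards
  val db: DBConfig = config.getString("app.mode") match {
    case "test" => new TestComplianceDB {}
    case _      => new ProductionDB {}
  }
}

Because db is a plain val on an object, the choice is made once, when the object is first touched, and is effectively immutable afterwards.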
We are trying to use RequestFactory with an existing Java entity model. Our Java entities all implement a DomainObject interface and expose a getObjectId() method (this name was chosen because getId() can be ambiguous and can conflict with the domain object's actual ID from the domain being modeled).
The ServiceLayerDecorator interface allows for customization of ID and Version property lookup strategies.
public class MyServiceLayerDecorator extends ServiceLayerDecorator {
    @Override
    public Object getId(Object object) {
        DomainObject domainObject = (DomainObject) object;
        return domainObject.getObjectId();
    }
}
So far, so good. However, trying to deploy this solution yields runtime errors. In particular, RequestFactoryInterfaceValidator complains:
[ERROR] There is no getId() method in type com.mycompany.server.MyEntity
Then later on:
[ERROR] Type type com.mycompany.client.MyEntityProxy was previously marked as bad
[ERROR] The type com.mycompany.client.MyEntityProxy did not pass RequestFactory validation
[ERROR] Unexpected error
com.google.web.bindery.requestfactory.server.UnexpectedException: The type com.mycompany.client.MyEntityProxy did not pass RequestFactory validation
at com.google.web.bindery.requestfactory.server.ServiceLayerDecorator.die(ServiceLayerDecorator.java:212) ~[gwt-servlet.jar:na]
My question is - why does the ServiceLayerDecorator allow for customized ID and Version lookup strategies if RequestFactoryInterfaceValidator is hardcoding the convention of getId() and getVersion()?
I guess I could override ServiceLayerDecorator.resolveClass() to ignore "poisoned" proxy classes but at this point it seems like I'm fighting the framework too much...
A couple of options, some of which have already been mentioned:
Locator. I like to make a single Locator for the entire project, or at least for groups of related objects that have similar key types. The getId() call will be able to invoke your DomainObject.getObjectId() method and return that value. Note that the getDomainType() method is currently unused, and can return null or throw an exception.
ValueProxy. Instead of having your objects map to something RF can understand as an entity, map them to plain value objects - no id or version required. RF misses out on a lot of clever things it can do, especially with regard to avoiding sending redundant data to the server.
ServiceLayerDecorator. This worked pre-2.4, but with the annotation processing that goes on now, it works less well, since the processor tries to do some of the work for you. ServiceLayerDecorator seems to have lost a lot of its teeth in the last few months: in theory, you could use it to rebuild getters to talk directly to your persistence mechanism, but now that the annotation processing verifies your code, that is no longer an option.
The big issue in all of this is that RequestFactory is designed to solve a single problem, and solve it well: allow developers to use POJOs mapped to some persistence mechanism, and refer to those objects from the client, following certain conventions to avoid writing extra code or configuration.
As a result, it solves its own problem pretty well, and it ends up being a bad fit for many other problems and use cases. You might be finding that it isn't worth it; if so, a few thoughts you might consider:
RPC. It isn't perfect for much, but it does an okay job for a lot.
AutoBeans (which RF is based on) is still a pretty fast, lightweight way to send data over the wire and get it into the app. You could build your own wrapper around it, like RF has done, and slim down the problem it is trying to solve to just your use-case.