How do I initialize a val outside the object in Scala? - scala

A Redis cluster client should be shared in many places, am I right? After searching with Google, I use a RedisCli object:
import redis.clients.jedis.{HostAndPort, JedisCluster}

object RedisCli {
  val jedisClusterNodes = new java.util.HashSet[HostAndPort]()
  jedisClusterNodes.add(new HostAndPort("192.168.1.100", 6379))
  lazy val jedisCluster = new JedisCluster(jedisClusterNodes)
  // ...methods using jedisCluster
}
The problem is: how can I initialize jedisCluster from outside the object? I want to initialize the HostAndPort in the main method of another object, getting the IP from a properties file passed on the command line. Should I just use a class RedisCli in my circumstance?
I think I am totally lost between classes and objects.

In Scala, all members of a singleton object must be defined when the object itself is defined. While you are allowed to modify var members from the outside, take a step back and ask yourself: what is the point of having a singleton object in your case if each client can modify its members? You will only end up with spaghetti code.
I would highly recommend using a dependency injection framework (Spring for example) where you can create beans in a specific place then inject them where you need them.
In a nutshell, singleton objects should be used when you want to define methods and values (I have never seen a case where a var is used) that are not specific to each instance of a class (think Java static). In your case, you seem to want different instances (otherwise why would they be set from client code?) but want a certain instance to be shared across different clients, and this is exactly what dependency injection allows you to do.
If you don't want to use a DI framework and are okay with having clients modify your instances as they please, then simply use a class as opposed to an object. When you use the class keyword, different instances can be instantiated.
class RedisCli(val ip: String, val port: Int) {
  val hostAndPort: HostAndPort = new HostAndPort(ip, port)
  // etc...
}
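For example, here is a minimal sketch of wiring this up from a main method; the property names ("redis.ip", "redis.port") and the properties-file handling are illustrative, not part of the original question:
import java.io.FileInputStream
import java.util.Properties

object Main {
  def main(args: Array[String]): Unit = {
    // Load the properties file whose path is passed as the first command-line argument
    val props = new Properties()
    props.load(new FileInputStream(args(0)))

    // Hypothetical property names; adjust them to your own file
    val ip = props.getProperty("redis.ip")
    val port = props.getProperty("redis.port").toInt

    val redisCli = new RedisCli(ip, port)
    // pass redisCli (or its jedisCluster) to whatever needs it
  }
}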
Hope this helps.

Related

Scala: how to avoid passing the same object instance everywhere in the code

I have a complex project which reads configurations from a DB through the object ConfigAccessor which implements two basic APIs: getConfig(name: String) and storeConfig(c: Config).
Due to how the project is currently designed, almost every component needs to use the ConfigAccessor to talk with the DB. Since this component is an object, it is easy to just import it and call its methods as if they were static.
Now I am trying to build some unit tests for the project in which the configurations are stored in an in-memory HashMap. So, first of all, I decoupled the config accessor logic from its storage (using the cake pattern). This way I can define my own ConfigDbComponent while testing:
class ConfigAccessor {
  this: ConfigDbComponent =>
  // ...
}
The "problem" is that now ConfigAccessor is a class, which means I have to instantiate it at the beginning of my application and pass it everywhere to whoever needs it. The first way I can think of for passing this instance around would be through other components constructors. This would become quite verbose (adding a parameter to every constructor in the project).
What do you suggest me to do? Is there a way to use some design pattern to overcome this verbosity or some external mocking library would be more suitable for this?
Yes, the "right" way is passing it in constructors. You can reduce verbosity by providing a default argument:
class Foo(config: ConfigAccessor = ConfigAccessor) { ... }
There are some "dependency injection" frameworks, like guice or spring, built around this, but I won't go there, because I am not a fan.
You could also continue utilizing the cake pattern:
trait Configuration {
  def config: ConfigAccessor
}
trait Foo { self: Configuration => ... }
class FooProd extends Foo with ProConfig
class FooTest extends Foo with TestConfig
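ProConfig and TestConfig are not defined in the answer; presumably they are traits that implement Configuration. A hypothetical sketch, assuming ProductionDbComponent and InMemoryDbComponent are concrete ConfigDbComponent implementations from the rest of the code base:
// Hypothetical: ProductionDbComponent and InMemoryDbComponent are assumed
// to be ConfigDbComponent implementations available elsewhere.
trait ProConfig extends Configuration {
  val config: ConfigAccessor = new ConfigAccessor with ProductionDbComponent
}
trait TestConfig extends Configuration {
  val config: ConfigAccessor = new ConfigAccessor with InMemoryDbComponent
}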
Alternatively, use the "static setter". It minimizes changes to existing code, but requires mutable state, which is really frowned upon in Scala:
object Config extends ConfigAccessor {
  @volatile private var accessor: ConfigAccessor = _

  def configurate(cfg: ConfigAccessor) = synchronized {
    val old = accessor
    accessor = cfg
    old
  }

  def getConfig(c: String) = Option(accessor).fold(
    throw new IllegalStateException("Not configured!")
  )(_.getConfig(c))
}
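At application startup you would then wire in the real accessor once and leave the rest of the call sites untouched. A small usage sketch (ProductionConfigAccessor is a hypothetical concrete implementation):
object Main extends App {
  // Done once, early, before anything reads configuration
  Config.configurate(new ProductionConfigAccessor)

  // Existing call sites stay the same:
  val dbConfig = Config.getConfig("database")
}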
You can retain a global ConfigAccessor and allow selectable accessors like this:
object ConfigAccessor {
  private lazy val accessor = GetConfigAccessor()
  def getConfig(name: String) = accessor.getConfig(name)
  // ...
}
For production builds you can put logic in GetConfigAccessor to select the appropriate accessor based on some global config such as typesafe config.
For unit testing you can have a different version of GetConfigAccessor for different test builds which return the appropriate test implementation.
Making this value lazy allows you to control the order of initialisation and if necessary do some non-functional mutable stuff in the initialisation code before creating the components.
Update following comments
The production code would have an implementation of GetConfigAccessor something like this:
object GetConfigAccessor {
  private val useAws = System.getProperties.getProperty("accessor.aws") == "true"

  def apply(): ConfigAccessor =
    if (useAws) {
      new AwsConfigAccessor
    } else {
      new PostgresConfigAccessor
    }
}
Both AwsConfigAccessor and PostgresConfigAccessor would have their own unit tests to prove that they conform to the correct behaviour. The appropriate accessor can be selected at runtime by setting the appropriate system property.
For unit testing there would be a simpler implementation of GetConfigAccessor, something like this:
def GetConfigAccessor() = new MockConfigAccessor
Unit testing is done within a unit testing framework which contains a number of libraries and mock objects that are not part of the production code. These are built separately and are not compiled into the final product. So this version of GetConfigAccessor would be part of that unit testing code and would not be part of the final product.
Having said all that, I would only use this model for reading static configuration data because that keeps the code functional. The ConfigAccessor is just a convenient way to access global constants without having them passed down in the constructor.
If you are also writing data then this is more like a real DB than a configuration. In that case I would create custom accessors for each component that give access to different parts of the DB. That way it is clear which parts of the data are updated by each component. These accessors would be passed down to the component and can then be unit tested with the appropriate mock implementation as normal.
You may need to partition your data into static config and dynamic config and handle them separately.

How do I specify type parameters via a configuration file?

I am building a market simulator using Scala/Akka/Play. I have an Akka actor with two children. The children need to have specific types which I would like to specify as parameters.
Suppose that I have the following class definition...
case class SecuritiesMarket[A <: AuctionMechanismLike, C <: ClearingMechanismLike](instrument: Security)
  extends Actor with ActorLogging {

  val auctionMechanism: ActorRef = context.actorOf(Props[A], "auction-mechanism")
  val clearingMechanism: ActorRef = context.actorOf(Props[C], "clearing-mechanism")

  def receive: Receive = {
    case order: OrderLike => auctionMechanism forward order
    case fill: FillLike => clearingMechanism forward fill
  }
}
Instances of this class can be created as follows...
val stockMarket = SecuritiesMarket[DoubleAuctionMechanism, CCPClearingMechanism](Security("GOOG"))
val derivativesMarket = SecuritiesMarket[BatchAuctionMechanism, BilateralClearingMechanism](Security("SomeDerivative"))
There are many possible combinations of auction mechanism types and clearing mechanism types that I might use when creating SecuritiesMarket instance for a particular model/simulation.
Can I specify the type parameters that I wish to use in a given simulation in the application.conf file?
I see two questions here.
Can I get a Class instance from a String?
Yes.
val cls: Class[DoubleAuctionMechanism] = Class.forName("your.app.DoubleAuctionMechanism").asInstanceOf[Class[DoubleAuctionMechanism]]
You would still need the cast, as forName returns Class[_].
Can I instantiate a type whose type parameters are not known at compile time?
Well sort of, but not really.
object SecuritiesMarket {
  def apply[A, C](clsAuc: Class[A], clsClr: Class[C])(security: Security): SecuritiesMarket[A, C] = {
    SecuritiesMarket[A, C](security)
  }
}
I think auction mechanisms and clearing mechanisms are dependencies of SecurityMarket. I'm guessing you instantiate them in its constructor somehow (how?). If that's the case, why not just pass them in as constructor parameters?
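A hedged sketch of that suggestion (the constructor shape below is illustrative, not from the answer): the children are built from Props passed in by the caller, so no type parameters are needed.
import akka.actor.{Actor, ActorLogging, ActorRef, Props}

// Illustrative only: the child Props are injected through the constructor.
class SecuritiesMarket(instrument: Security, auctionProps: Props, clearingProps: Props)
  extends Actor with ActorLogging {

  val auctionMechanism: ActorRef = context.actorOf(auctionProps, "auction-mechanism")
  val clearingMechanism: ActorRef = context.actorOf(clearingProps, "clearing-mechanism")

  def receive: Receive = {
    case order: OrderLike => auctionMechanism forward order
    case fill: FillLike => clearingMechanism forward fill
  }
}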
Edit:
I don't see how I could create the child actors inside SecurityMarket
Answering this in the comments; Props[T] can also be written as Props[T](classOfT), which can be simplified as Props(classOfT). Those three are the same. So the following code:
val auctionMechanism: ActorRef = context.actorOf(Props[A], "auction-mechanism")
Can be replaced with:
val classOfA = Class.forName("path.to.A")
val auctionMechanism: ActorRef = context.actorOf(Props(classOfA), "auction-mechanism")
First, application.conf is a runtime artifact and its contents are as far as I know not normally parsed at compile time. When the file is parsed at runtime, the parser creates an instance of the class Config which then controls the Akka setup.
The Typesafe Config library project readme is quite nice and the linked documentation has all of the details:
https://github.com/typesafehub/config/blob/master/README.md.
Second, since type parameters are not available at runtime because of type erasure, you can't normally use application.conf to control type parameterization. You could create a custom build step to parse application.conf and modify your code before compilation, but this is maybe not what you want. (And if you do want a custom build step, perhaps a different .conf would be appropriate.)
Instead you might try simply eliminating the type parameters for the securities market class. Then create a single, simple implementation of the auction and clearing actors. Implement these actors by reading the names of the respective mechanisms from application.conf, instantiating the configured mechanism reflectively, and delegating to the instantiated mechanism. The mechanism classes could be independent of Akka, which is perhaps nice if that's where you keep most of your logic?
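To make that concrete, here is a minimal sketch (assumptions: the config keys market.auction-mechanism and market.clearing-mechanism are invented for illustration, and the mechanism actor classes have no-argument constructors):
import akka.actor.{Actor, ActorLogging, ActorRef, Props}
import com.typesafe.config.ConfigFactory

// application.conf (illustrative):
//   market.auction-mechanism  = "your.app.DoubleAuctionMechanism"
//   market.clearing-mechanism = "your.app.CCPClearingMechanism"
class SecuritiesMarket(instrument: Security) extends Actor with ActorLogging {
  private val conf = ConfigFactory.load()

  // Build Props reflectively from the class name found in the config
  private def mechanismProps(key: String): Props =
    Props(Class.forName(conf.getString(key)))

  val auctionMechanism: ActorRef =
    context.actorOf(mechanismProps("market.auction-mechanism"), "auction-mechanism")
  val clearingMechanism: ActorRef =
    context.actorOf(mechanismProps("market.clearing-mechanism"), "clearing-mechanism")

  def receive: Receive = {
    case order: OrderLike => auctionMechanism forward order
    case fill: FillLike => clearingMechanism forward fill
  }
}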

is it possible to load scala classes "dynamic"? (like pythons import_module)

At the moment I'm trying to write an API in Scala. This API should handle file backends, like SMB, S3, filesystem storage, etc.
So I wrote some classes: Storage, which is a base class for storage backends, and subclasses like FileSystemStorage and SmbStorage which extend Storage. From now on, I want to use those classes by specifying them in a settings file.
I want it to work like it does in Django: https://docs.djangoproject.com/en/1.6/ref/settings/#std:setting-DEFAULT_FILE_STORAGE where I can specify a string pointing to my default storage engine.
It should then "magically" work so that I can use DefaultStorage to access either FileSystemStorage or SmbStorage, and it should also be possible to create more "storage" classes. Is this even possible?
Currently I have something in mind for how I could realize this, but I'm unsure whether it is good practice in Scala.
JVM classes are already loaded dynamically. What you want is to choose an instance dynamically.
You can do something like:
def byName(name: String) = name match {
  case "FileSystemStorage" => FileSystemStorage
  case "SmbStorage" => SmbStorage
}
I am assuming these are objects. If they are classes, just add a new keyword.
Now, if the class name is unknown at compile time you can do Class.forName(fullyQualifiedClassName). But this will give you a Class object, not an instance of the class, in which case you will need to invoke newInstance (assuming it has an argument-less constructor). The way you described your problem suggests you don't want this approach.
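For completeness, a minimal sketch of that reflective approach, assuming the Storage subclasses have no-argument constructors (the settings lookup in the usage comment is hypothetical):
// Illustrative: instantiate a Storage backend from its fully qualified class name.
def storageByClassName(className: String): Storage =
  Class.forName(className)
    .getDeclaredConstructor()
    .newInstance()
    .asInstanceOf[Storage]

// Usage (hypothetical settings source):
// val defaultStorage = storageByClassName(settings.getString("default.storage"))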

Unity IoC Explicitly ask container for new instance

It appears that Unity IoC defaults to creating a new instance of an object when it resolves a type. But my question is: is there some way to be explicit and tell my container that whenever I have it resolve an object type, it should give me a new instance of said type?
I.e. I want to be explicit and force the container to make sure theInstance is a new instance each time it resolves the type MyNewObject (or all types, for that matter):
MyNewObject theInstance = container.Resolve<MyNewObject>();
Yes, it is easily configurable with a TransientLifetimeManager.
When you register the class you should have something like:
container.RegisterType<IMyNewObject, MyNewObject>(new TransientLifetimeManager());
// or
container.RegisterType<MyNewObject>(new TransientLifetimeManager());
If you're applying IoC principles properly, your class declares its dependencies and then the container handles the lifecycles of them. For example, you want to grab an HttpRequest object and the container handles providing the current thread-local one, or whatever.
Your code shouldn't really have to care about the life-cycle of its dependencies, as it should never be responsible for clearing up after them or what-have-you (all of that should be encapsulated in the dependency itself, and invoked by the container when it is shut down).
However, if you do need to care in your code about whether you get a singleton instance or a per-injected instance of the same type, I like to be explicit about it by using the type system itself, just as the Guice container for Java does with its Provider pattern. I've created a Guice-style IProvider<T> interface that I use to do this, and I just wire it up with a simple static factory method for them like so:
Provider.Of<Foo>(() => { /* Code to return a Foo goes here */})

Serialize Function1 to database

I know it's not directly possible to serialize a function/anonymous class to the database but what are the alternatives? Do you know any useful approach to this?
To present my situation: I want to award a user "badges" based on his scores. So I have different types of badges that can be easily defined by extending this class:
class BadgeType(id: Long, name: String, detector: Function1[List[UserScore], Boolean])
The detector member is a function that walks the list of scores and returns true if the user qualifies for a badge of this type.
The problem is that each time I want to add/edit/modify a badge type I need to edit the source code, recompile the whole thing and re-deploy the server. It would be much more useful if I could persist all BadgeType instances to a database. But how to do that?
The only thing that comes to mind is to have the body of the function as a script (ex: Groovy) that is evaluated at runtime.
Another approach (that does not involve a database) might be to have each badge type into a jar that I can somehow hot-deploy at runtime, which I guess is how a plugin-system might work.
What do you think?
My very brief advice is that if you want this to be truly data-driven, you need to implement a rules DSL and an interpreter. The rules are what get saved to the database, and the interpreter takes a rule instance and evaluates it against some context.
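For concreteness, a tiny hypothetical sketch of what such a rule DSL and interpreter might look like (the rule case classes and the UserScore.value field are invented for illustration):
// Illustrative rule AST: instances are plain data, so they can be serialized
// (e.g. as JSON) and stored in the database.
sealed trait Rule
case class ScoreAbove(threshold: Int) extends Rule
case class PlayedAtLeast(games: Int) extends Rule
case class And(left: Rule, right: Rule) extends Rule

// The interpreter evaluates a rule against the user's scores
// (assumes a hypothetical UserScore.value field).
def eval(rule: Rule, scores: List[UserScore]): Boolean = rule match {
  case ScoreAbove(t) => scores.exists(_.value > t)
  case PlayedAtLeast(n) => scores.size >= n
  case And(l, r) => eval(l, scores) && eval(r, scores)
}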
But that's overkill most of the time. You're better off having a little snippet of actual Scala code that implements the rule for each badge, give them unique IDs, then store the IDs in the database.
e.g.:
trait BadgeEval extends Function1[User, Boolean] {
  def badgeId: Int
}

object Badge1234 extends BadgeEval {
  def badgeId = 1234
  def apply(user: User) = {
    user.isSufficientlyAwesome // && ...
  }
}
You can either have a big whitelist of BadgeEval instances:
val weDontNeedNoStinkingBadges = Map(
  1234 -> Badge1234,
  5678 -> Badge5678
  // ...
)
def evaluator(id: Int): Option[BadgeEval] = weDontNeedNoStinkingBadges.get(id)
def doesUserGetBadge(user: User, id: Int) = evaluator(id).map(_(user)).getOrElse(false)
... or if you want to keep them decoupled, use reflection:
def badgeEvalClass(id: Int) = Class.forName("com.example.badge.Badge" + id + "$").asInstanceOf[Class[BadgeEval]]
... and if you're interested in runtime pluggability, try the service provider pattern.
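A rough sketch of the service provider approach using java.util.ServiceLoader (assuming each BadgeEval implementation is a public class with a no-argument constructor and is listed in a META-INF/services file for BadgeEval):
import java.util.ServiceLoader
import scala.collection.JavaConverters._

// Discover BadgeEval implementations registered on the classpath via
// META-INF/services/<fully.qualified.BadgeEval> entries.
val evaluators: Map[Int, BadgeEval] =
  ServiceLoader.load(classOf[BadgeEval]).asScala
    .map(eval => eval.badgeId -> eval)
    .toMap

def doesUserGetBadge(user: User, id: Int): Boolean =
  evaluators.get(id).exists(_(user))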
You can try and use Scala Continuations - they can give you the ability to serialize the computation and run it at later time or even on another machine.
Some links:
Continuations
What are Scala continuations and why use them?
Swarm - Concurrency with Scala Continuations
Serialization relates to data rather than methods. You cannot serialize functionality, because it lives in a class file; object serialization only serializes the fields of an object.
So like Alex says, you need a rule engine.
Try this one if you want something fairly simple, which is string based, so you can serialize the rules as strings in a database or file:
http://blog.maxant.co.uk/pebble/2011/11/12/1321129560000.html
Using a DSL has the same problems unless you interpret or compile the code at runtime.