Dependency Injection before generation - specman

This is a follow-up question to my previous question (Difference between "new" and "gen").
Is there a way to pass dependencies into a struct before generation occurs?
I'm interested in writing my code in a way that is easily tested. Currently, our codebase frequently uses get_enclosing_unit() to acquire pointers to helper structs such as a translator or params. This creates many bidirectional dependencies in our codebase, which makes it hard to test pieces independently of the other structs.
Here is an example of what I am trying to avoid:
pregenerate() is also {
    var translator : my_translator_s = get_enclosing_unit(some_enclosing_unit).get_translator_pointer();
};
I'm trying to avoid depending on some_enclosing_unit, since it doesn't relate to my struct and gets in the way of unit testing.
With the lack of a constructor in e, I'm lost as to how to pass a dependency in from the calling unit/struct without using get_enclosing_unit(). "new ... with" seems like it might help, but as I learned in my last question, it doesn't generate underlying fields, and "gen ... keeping" doesn't set my generation-time dependencies until after generation has completed.

There is no easy answer, because your architecture seems to be entangled already.
You are right to be suspicious of these bidirectional, vertical dependencies in your instance tree. In general, one should follow the constraints-from-above (CFA) strategy, where you pass dependencies down the hierarchy, as in:
unit child_u {
    p_tr: translator_s;
    keep soft p_tr == NULL; // safety catch, in case you forget to constrain it
};
unit parent_u {
    tr: translator_s;
    child: child_u is instance;
    keep child.p_tr == tr;
};
Also, I recommend not having generation dependencies between units at all. That way you can keep all your pointers to units non-generatable and connect them in a unit's connect_pointers() method, which is called after generation (see the documentation).
extend child_u {
    !p_parent: parent_u;
};
extend parent_u {
    connect_pointers() is also {
        child.p_parent = me;
    };
};
But then, of course, you can't have constraints in the child that point to the parent.
In the case where you absolutely need a generated pointer, use keep soft <ptr> == NULL to provoke failure in case you forgot to constrain it.
Just my 2 cents.

Related

DDD functional way: Why is it better to decouple state from behavior when applying DDD with a functional language?

I've read several articles (and also the book Functional domain modeling) where they propose decoupling the state of the domain object from its behavior, but I cannot understand the advantage of such an approach over a rich domain model.
Here is an example of a rich domain model:
case class Account(id: AccountId, balance: Money) {
  def activate: Account = {
    // check if it is already active, e.g., enforce the invariant
    ...
  }
  def freeze: Account = ???
}
I can chain operations on this account in the following way:
account.activate.freeze
Here is an example of the "anemic" approach which they suggest:
case class Account(id: AccountId, balance: Money)

object AccountService {
  def activate = (account: Account) => {
    // check if it is already active, e.g., enforce the invariant
    ...
  }
  def freeze = (account: Account) => {
    ...
  }
}
And here I can chain operations like this:
activate andThen freeze apply account
What is the advantage of the second approach, apart from the "elegant" syntax?
Also, with a rich domain model I enforce invariants in a single class, but with an "anemic" model the logic/invariants can spread across services.
I offer two thought processes that can help explain this puzzle:
The concepts of state in your example and in the book differ.
(I do hope we both are referring to Functional and Reactive Domain Modeling).
The states activate and freeze in your example are probably domain concepts, while the book talks about states that serve only as markers. They do not necessarily have a role in the domain logic and exist only to disambiguate the states of the workflow, e.g. applied, approved, and enriched.
Functional programming is all about implementing behaviors that are independent of the data passed into them.
There are two aspects of note while implementing such behaviors.
A behavior can be reusable across contexts. It can be an abstract trait, a monoid if you will, that takes any type T and performs the same operation on it. In your example, freeze could be such a behavior, applicable to Account, Loan, Balance, etc.
The behavior has no side effect whatsoever. One should be able to call the behavior again and again with the same data set and receive the same expected response without the system getting affected or throwing an error. Referencing your example, calling freeze repeatedly on an account should not throw an error.
Combining the two points, one could say it makes sense to implement a behavior as a reusable piece of code across different contexts (as a Service) while ensuring that the input is validated (i.e., validate the state of the object provided as input before processing).
By representing the acceptable state of the object as a separate type and parameterizing the model/object with this explicit type, we can enforce a static check of the input at compile time. Referring to the example provided in the book, you can only approve andThen enrich. Any other sequence will raise a compile-time error, which is far preferable to using defensive guards to check input at runtime.
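A minimal sketch of that idea (the names here are invented for illustration, not taken from the book): the workflow state becomes a phantom type parameter, so calling the steps out of order fails to compile:

sealed trait Applied
sealed trait Approved
sealed trait Enriched

// The state lives only in the type parameter; the runtime data is unchanged.
case class LoanApplication[State](id: Long)

def approve(a: LoanApplication[Applied]): LoanApplication[Approved] =
  LoanApplication[Approved](a.id)

def enrich(a: LoanApplication[Approved]): LoanApplication[Enriched] =
  LoanApplication[Enriched](a.id)

val loan = LoanApplication[Applied](42L)
val processed = (approve _ andThen enrich)(loan) // compiles
// enrich(loan) // does not compile: loan is Applied, not yet Approved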
Thus, the second approach is not just elegant syntax at the end of the day. It is a mechanism for building compile-time checks based on the state of an object.
So, while the output has the appearance of an anemic model, the second approach is taking advantage of some beautiful patterns brought forth by functional programming.
One advantage might be the ability to add another link to the chain without having to modify and recompile the domain model. For example, say we wanted to add another validation step to check for fraud:
object AccountService {
  def fraud = (account: Account) => ...
}
then we could compose this step like so:
(fraud andThen activate andThen freeze)(account)
Conceptually, adding the fraud validation step did not mutate the structure of the domain model case class Account, so why bother recompiling it? It is a form of separation of concerns, where we want to narrow the changes to the codebase down to the minimal relevant part.

Initializing the factory at compile time

I have a factory that should return an implementation depending on the name.
val moduleMap = Map(Modules.moduleName -> new ModuleImpl)

def getModule(moduleName: String): Module =
  moduleMap.get(moduleName) match {
    case Some(m) => m
    case _ =>
      throw new ModuleNotFoundException(
        s"$moduleName - Module could not be found.")
  }
In order for each call to the getModule method not to create a new instance, there is a map in which all the modules must be initialized in a bootstrap class.
I would like to get rid of the need to do this manually (also, all the classes have a distinctive feature).
List of options that came to my mind:
- Reflection (we can use the Scala Reflection API or any third-party library): an automated process, but it needs to initialize immediately at startup, and reflection is a pain.
- Metaprogramming (Scalameta) + Reflection: macros only change the code; the execution happens later.
Can we move the initialization process to compile time?
I know that the compiler can optimize and replace code. For example, the fragment
val a = 5 + 5
is replaced by 10 during compilation. Can we use some directives or other tools to evaluate and execute some code at compile time and use only the final value?
Do you use any framework, or do you write your own? I answered a similar question about Guice here. You can use the same approach without Guice as well: instead of a Module you will have your Factory, which you need to initialize from somewhere, and during initialization you will fill your map using reflection.
In general I think that is the easiest approach. Alternatively, you can write macros, which would just replace part of the reflective initialization, but I am not sure it will gain you much (if I understand your question correctly, this initialization will happen just once at startup).
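For illustration, here is a minimal sketch of what such a bootstrap could look like, using java.util.ServiceLoader for discovery instead of hand-rolled reflection. The name member on Module and the registration files are assumptions, not part of the question:

import java.util.ServiceLoader
import scala.jdk.CollectionConverters._

object ModuleFactory {
  // Sketch only: assumes Module exposes a `name: String` and that each
  // implementation is listed in META-INF/services/<fully.qualified.Module>.
  private val moduleMap: Map[String, Module] =
    ServiceLoader.load(classOf[Module]).asScala.map(m => m.name -> m).toMap

  def getModule(moduleName: String): Module =
    moduleMap.getOrElse(
      moduleName,
      throw new ModuleNotFoundException(s"$moduleName - Module could not be found."))
}

The registry is still built once at startup, but adding a new implementation only requires a new registration entry, not a change to the bootstrap class.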
I do not see how Scalameta can help you, except perhaps in the case where all your implementations are in a source tree available to you, so that you can analyze it and generate the initialization (similar to macros). That would add a plus, easier discovery of implementations, but also a minus: it will work only on implementations in your sources.
Your example of compile-time optimization is not applicable. There you talk about a compile-time constant (even with arithmetic it can be a problem, see this comment), but in your question you need specific run-time behavior. So from my point of view, compile time can only mean code generation, either from macros or based on Scalameta.
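To illustrate the distinction, a small sketch of what the Scala compiler will and will not fold on its own (assuming Scala 2 semantics for constant value definitions):

object FoldingSketch {
  final val folded = 5 + 5 // constant expression on a final val: inlined as 10 at use sites
  val computed = 5 + 5     // plain val: evaluated at runtime initialization, not folded

  // Run-time behavior like the module map cannot be folded this way;
  // producing it at compile time requires code generation (macros or similar).
  val moduleMap = Map("example" -> new AnyRef)
}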

How should I test the "isEqual" method of value object types in BDD?

I'm new to BDD, and indeed to the whole testing world.
I'm trying to adopt BDD practices while writing a simple linear algebra library in Swift. So there will be many value object types like Matrix, Vector, etc. When writing code, I suppose I still need to stick to the TDD principle (am I right?):
Never write a single line of code without a failing test.
To implement a value object type, I need to make it conform to the Equatable protocol and implement its == operator. This is adding code, so I need a failing test. How do I write a spec for this kind of scenario?
One may suggest an approach like this:
describe("Matrix") {
it("should be value object") {
let aMatrix = Matrix<Double>(rows: 3, cols:2)
let sameMatrix = Matrix<Double>(rows: 3, cols:2)
expect(sameMatrix) == aMatrix
let differentMatrix = Matrix<Double>(rows: 4, cols: 2)
expect(differentMatrix) != aMatrix
}
}
This would be ugly boilerplate, for two reasons:
There may be plenty of value object types, and I would need to repeat this for all of them.
There may be plenty of cases that can cause two objects to be unequal. Taking the spec above as an example, an implementation of == like return lhs.rows == rhs.rows would pass the test. To reveal this "bug", I need to add another expectation like expect(matrixWithDifferentColumnCount) != aMatrix. And again, this kind of repetition happens for all value object types.
So, how should I test this "isEqual" (or operator==) method elegantly? Or shouldn't I test it at all?
I'm using Swift and Quick as the testing framework. Quick provides a mechanism called SharedExample to reduce boilerplate. But since Swift is a statically typed language and Quick's shared examples don't support generics, I can't directly use a shared example to test value objects.
I came up with a workaround, but I don't consider it an elegant one.
Elegance is the enemy of test suites. Test suites are boring. Test suites are repetitive. Test suites are not "DRY." The more clever you make your test suites, the more you try to avoid boilerplate, the more you are testing your test infrastructure instead of your code.
The rule of BDD is to write the test before you write the code. It's a small amount of test code because it's testing a small amount of live code; you just write it.
Yes, that can be taken too far, and I'm not saying you never use a helper function. But when you find yourself refactoring your test suite, you need to ask yourself what the test suite was for.
As a side note, your test doesn't test what it says it does. You're testing that identical constructors create equal objects and that non-identical constructors create non-equal objects (which in principle is two tests). This doesn't test at all that it's a value object (though it's a perfectly fine thing to test). This isn't a huge deal. Don't let testing philosophy get in the way of useful testing, but it's good to have your titles match your test intent.

Configuration data in Scala -- should I use the Reader monad?

How do I create a properly functional configurable object in Scala? I have watched Tony Morris' video on the Reader monad and I'm still unable to connect the dots.
I have a hard-coded list of Client objects:
class Client(name: String, age: Int) { /* etc */ }

object Client {
  // Horrible!
  val clients = List(new Client("Bob", 20), new Client("Cindy", 30))
}
I want Client.clients to be determined at runtime, with the flexibility of either reading it from a properties file or from a database. In the Java world I'd define an interface, implement the two types of source, and use DI to assign a class variable:
trait ConfigSource {
  def clients: List[Client]
}

object ConfigFileSource extends ConfigSource {
  override def clients = buildClientsFromProperties(Properties("clients.properties"))
  // ...etc, read properties files
}

object DatabaseSource extends ConfigSource { /* etc */ }

object Client {
  @Resource("configuration_source")
  private var config: ConfigSource = _ // inject it at runtime
  val clients = config.clients
}
This seems like a pretty clean solution to me (not a lot of code, clear intent), but that var does jump out (OTOH, it doesn't seem really troublesome to me, since I know it will be injected once and only once).
What would the Reader monad look like in this situation and, explain it to me like I'm 5, what are its advantages?
Let's start with a simple, superficial difference between your approach and the Reader approach, which is that you no longer need to hang onto config anywhere at all. Let's say you define the following vaguely clever type synonym:
type Configured[A] = ConfigSource => A
Now, if I ever need a ConfigSource for some function, say a function that gets the nth client in the list, I can declare that function as "configured":
def nthClient(n: Int): Configured[Client] = {
  config => config.clients(n)
}
So we're essentially pulling a config out of thin air, any time we need one! Smells like dependency injection, right? Now let's say we want the ages of the first, second and third clients in the list (assuming they exist):
def ages: Configured[(Int, Int, Int)] =
  for {
    a0 <- nthClient(0)
    a1 <- nthClient(1)
    a2 <- nthClient(2)
  } yield (a0.age, a1.age, a2.age)
For this, of course, you need some appropriate definition of map and flatMap. I won't get into that here, but will simply say that Scalaz (or RĂșnar's awesome NEScala talk, or Tony's which you've seen already) gives you all you need.
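For reference, a hand-rolled version is tiny. Here is a minimal sketch (it must live inside some enclosing object, since implicit classes cannot be top-level) that is just enough to make the for-comprehension above compile without Scalaz:

// Configured[A] is just ConfigSource => A, so we thread the environment through.
implicit class ConfiguredOps[A](ca: Configured[A]) {
  def map[B](f: A => B): Configured[B] =
    config => f(ca(config))
  def flatMap[B](f: A => Configured[B]): Configured[B] =
    config => f(ca(config))(config)
}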
The important point here is that the ConfigSource dependency and its so-called injection are mostly hidden. The only "hint" that we can see here is that ages is of type Configured[(Int, Int, Int)] rather than simply (Int, Int, Int). We didn't need to explicitly reference config anywhere.
As an aside, this is the way I almost always like to think about monads: they hide their effect so it's not polluting the flow of your code, while explicitly declaring the effect in the type signature. In other words, you needn't repeat yourself too much: you say "hey, this function deals with effect X" in the function's return type, and don't mess with it any further.
In this example, of course, the effect is to read from some fixed environment. Other monadic effects you might be familiar with include error handling: we can say that Option hides the error-handling logic while making the possibility of errors explicit in your method's type. Or, sort of the opposite of reading, the Writer monad hides the thing we're writing to while making its presence explicit in the type system.
Now finally, just as we normally need to bootstrap a DI framework (somewhere outside our usual flow of control, such as in an XML file), we also need to bootstrap this curious monad. Surely we'll have some logical entry point to our code, such as:
def run: Configured[Unit] = // ...
It ends up being pretty simple: since Configured[A] is just a type synonym for the function ConfigSource => A, we can just apply the function to its "environment":
run(ConfigFileSource)
// or
run(DatabaseSource)
Ta-da! So, contrasting with the traditional Java-style DI approach, we don't have any "magic" occurring here. The only magic, as it were, is encapsulated in the definition of our Configured type and the way it behaves as a monad. Most importantly, the type system keeps us honest about which "realm" dependency injection is occurring in: anything with type Configured[...] is in the DI world, and anything without it is not. We simply don't get this in old-school DI, where everything is potentially managed by the magic, so you don't really know which portions of your code are safe to reuse outside of a DI framework (for example, within your unit tests, or in some other project entirely).
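As a concrete illustration of that last point, here is a sketch (reusing the names above) of a unit test with a stub environment, with no framework involved:

// A stub environment for tests; no DI container required.
object StubSource extends ConfigSource {
  override def clients = List(new Client("Test", 99))
}

// Any Configured[A] can be run against the stub directly.
val client = nthClient(0)(StubSource)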
Update: I wrote up a blog post which explains Reader in greater detail.

How to access declared script fields from within classes in Groovy?

Let's say I have the following Groovy code snippet:
def weightArg = args[0]

class Box {
    def width
    def height

    double weight() {
        // I want to return the value of weightArg here. How can I do that?
    }
}
I want to let my class Box use some variables from its environment. What's the correct way to do that?
It seems that weightArg should be static and that I should be able to get it from a Box static initializer, but I cannot manage to get past the compiler.
Regardless of whether it's "right" to do so or not, the way you can access your weight variable from within the Box class is simply to remove the word "def". The reason why is described here.
Declaring a class in the middle of a script and making it dependent on the script's local variables is a definite sign of bad design. If you can't design the whole system in an OO way, then stick to procedural programming. The main purpose of writing OO programs is factoring them into little independent pieces. In your case it's neither factored nor independent, and I'm pretty sure it has no purpose you could express in words.
In other words, either don't declare a Box type at all, or do it in a way similar to this:
class Box {
    Box(weight) { this.weight = weight }
    def width, height, weight
}
And use it like this:
def box = new Box(args[0])
Thus you get it abstracted from weightArg and args[0], and you also become able to reuse it in different scenarios.
Otherwise you doom your program to be unmanageable and therefore dead after the first revision. In the decades of existence of OO programming, this has been proven pretty thoroughly.
Another thing to note: when you get the feeling that you need to introduce classes in your script, it is a reliable sign that your program should be written as a normal application with packages and so on, not as a script.