We are working in a very complex solution using Drools 6 (Fusion), and I would like your opinion about the best way to read objects created as correlation results over time.
My first, basic approach was to poll Working Memory at fixed intervals, looking for new objects and reporting them to an external service (REST).
An AgendaEventListener does not seem to be the "best" approach because I don't care about most of the objects being inserted into working memory, so maybe the best approach would be to inject the particular "object" into some sort of service inside the DRL. Is this a good approach?
You have quite a lot of options. In decreasing order of my preference:
An event listener (note: fact insertions in Drools 6 are reported to a RuleRuntimeEventListener rather than the AgendaEventListener, which deals with rule activations) is probably the solution requiring the smallest amount of LOC. It might be useful for other tasks as well; all you have on the negative side is one additional method call and a class test per inserted fact. Peanuts.
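A minimal sketch of such a listener; CorrelationResult and RestReporter are placeholders for your fact class and your REST client wrapper:

import org.kie.api.event.rule.DefaultRuleRuntimeEventListener;
import org.kie.api.event.rule.ObjectInsertedEvent;

public class ResultReportingListener extends DefaultRuleRuntimeEventListener {

    private final RestReporter reporter; // placeholder for your REST client

    public ResultReportingListener(RestReporter reporter) {
        this.reporter = reporter;
    }

    @Override
    public void objectInserted(ObjectInsertedEvent event) {
        Object fact = event.getObject();
        if (fact instanceof CorrelationResult) {       // the one class test
            reporter.report((CorrelationResult) fact); // the one extra call
        }
    }
}

// Registered once, when the session is created:
// kieSession.addEventListener(new ResultReportingListener(reporter));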
You can wrap the insert macro in a DRL function and collect inserted facts of class X in a global List. The problem you have here is that you'll have to pass the KieContext as a second parameter to the function call.
If the creation of a class X object is inevitably linked with its insertion into WM, you could add the registry of new objects into a static List inside class X, to be done in a factory method (or the constructor).
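A sketch of that registry, with X standing in for your result class:

public class X {
    // Every new instance registers itself; a reporting thread drains the list.
    private static final java.util.List<X> NEW_INSTANCES =
            java.util.Collections.synchronizedList(new java.util.ArrayList<X>());

    public static X create() { // factory method: creation implies registration
        X x = new X();
        NEW_INSTANCES.add(x);
        return x;
    }

    public static java.util.List<X> drainNew() { // for the REST reporter
        synchronized (NEW_INSTANCES) {
            java.util.List<X> copy = new java.util.ArrayList<X>(NEW_INSTANCES);
            NEW_INSTANCES.clear();
            return copy;
        }
    }
}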
I'm putting your "basic approach" last because it requires many more cycles than the listener (#1) and tons of overhead for maintaining the set of X objects that have already been put to REST.
Currently I am trying to design an executable activity where a logical component sends a request to several other logical components to start initialization. The usual way in an activity diagram is to create a swimlane for each block. But as the number of LCs is high in this case, the diagram will be extremely big, and its later modification will be a hurdle.
In programming, the solution to this problem is polymorphism, where the objects are cast to the parent class and pushed into a vector, and then within a loop the abstract function of the parent class is called.
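For illustration, the idiom described above, in Java (all type names invented for the example):

import java.util.List;

abstract class LC { abstract void initialize(); }
class Router extends LC { void initialize() { /* start up */ } }
class Sensor extends LC { void initialize() { /* start up */ } }

class InitAll {
    public static void main(String[] args) {
        List<LC> components = List.of(new Router(), new Sensor());
        for (LC lc : components) {
            lc.initialize(); // dynamic dispatch replaces one code path per type
        }
    }
}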
I wonder if there is any similar solution to this problem in sysml or not?
I tried to assign one swimlane to all the LCs which receive the request, but it seems each swimlane can only be assigned to a single block.
Yes, there is a similar solution. Actually, it is even easier. Just let the ActivityPartition represent a property with a multiplicity greater than 1. Any InvocationAction in this partition will then mean a concurrent invocation on all instances held in this property.
The UML specification says about this in section 15.6.3.1, Activity Partitions:
If the Property holds more than one value, the invocations are treated as if they were made concurrently on each value and the invocation does not complete until all the concurrent instances of it complete.
Typical InvocationActions are CallActions and SendSignalActions. So, you could call the initialize() operation in a partition representing the property myLogicalComponents:LC[1..*].
If you don't have such a property defined already, you can derive it by making it a derived union and then defining all existing properties holding components to be initialized as subsets of it:
/myLogicalComponents:LC[1..*]{union}
LC1:LC {subsets myLogicalComponents}
LC2:LC {subsets myLogicalComponents}
Of course, this assumes that all your logical components specialize LC, so that they all inherit and redefine the initialize() operation.
In SysML you also have AllocateActivityPartitions. No clear semantics is described for it, so I think you are better served with the UML ActivityPartitions, which are also included in SysML.
We have a very large number of Autofac resolutions occurring in our application. We have always had a large share of processing time (about 50% of each web request) spent in the Autofac resolution stage.
It came to our attention how important the guide's advice to "Register Frequently-Used Components with Lambdas" is, and our testing shows it could bring extreme gains for us.
Convert from this
builder.RegisterType<Component>();
to
builder.Register(c => new Component());
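If the component has dependencies, the lambda resolves them explicitly, which is exactly what removes the reflective constructor search (ILogger is just a stand-in for a real dependency):

builder.Register(c => new Component(c.Resolve<ILogger>()));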
It seems the procedure is to use reflection to find the largest constructor, scan its parameters, determine whether Autofac can resolve each of them, and if not, move on to the next smallest, and so on. So specifying the constructor directly gives us massive improvements.
My question is: can there be, or should there be, some sort of caching for the found constructor? Since containers are immutable, you can't add or subtract registrations, so nothing would require a different constructor later on.
I just wanted to check before we start working on switching a lot of registrations over to lambdas; we have 1,550 of them and will try to find the core ones.
Autofac does cache a lot of what it can, but even with container immutability, there are things that keep resolution dynamic:
Registration sources: dynamic suppliers of registrations (that's how open generics get handled, for example)
Child lifetime scopes: the root container might not have the current HttpRequestMessage for the active WebAPI request, but you can add and override registrations in child scopes so the request lifetime does have it.
Parameters on resolve: you can pass parameters to a resolve operation that will get factored into that operation.
Given stuff like that, caching is a little more complicated than "find the right constructor once, cache it, never look again."
Lambdas will help, as you saw, so look into that.
I'd also look at lifetime scopes for components. If constructor lookup is happening a lot, it means you're constructing a lot, which means you're allocating a lot, which is going to cost you other time. Can more components be singletons or per-lifetime-scope? Using an existing instance will be cheaper still.
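For example (component names invented; these are the standard Autofac lifetime options):

builder.RegisterType<PriceCalculator>().SingleInstance();          // one instance for the container
builder.RegisterType<RequestContext>().InstancePerLifetimeScope(); // one per scope/request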
You also mentioned you have 1550 things registered. Do all of those actually get used and resolved? Don't register stuff that doesn't need to be. I've seen a lot of times where folks do assembly scanning and try to just register every type in every assembly in the whole app. I'm not saying you're doing this, but if you are, don't. Just register the stuff you need.
Finally, think about how many constructors you have. If there's only one (recommended) it'll be faster than trying many different ones. You can also specify the constructor with UsingConstructor during registration to force a choice and bypass searching.
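For example, assuming Component has an (ILogger, IConfig) constructor you want pinned:

builder.RegisterType<Component>()
       .UsingConstructor(typeof(ILogger), typeof(IConfig));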
Can I construct a value object in the event handler, or should I pass the parameters to the aggregate to construct the value object itself? Seller is the aggregate and Offer is the value object. Would it be better for the aggregate to pass the value object in the event?
public async Task HandleAsync(OfferCreatedEvent domainEvent)
{
var seller = await this.sellerRepository.GetByIdAsync(domainEvent.SellerId);
var offer = new Offer(domainEvent.BuyerId, domainEvent.ProductId, seller.Id);
seller.AddOffer(offer);
}
should I pass the parameters to the aggregate to construct the value object itself?
You should probably default to passing the assembled value object to the domain entity / root entity.
The supporting argument is that we want to avoid polluting our domain logic with plumbing concerns. Expressed another way, new is not a domain concept, so we'd like that expression to live "somewhere else".
Note that by passing the value to the domain logic, you protect that logic from changes to the construction of the values; for instance, how much code has to change if you later discover that there should be a fourth constructor argument?
That said, I'd consider this to be a guideline - in cases where you discover that violating the guideline offers significant benefits, you should violate the guideline without guilt.
Would it be better for the aggregate to pass the value object in the event?
Maybe? Let's try a little bit of refactoring....
// WARNING: untested code ahead
public async Task HandleAsync(OfferCreatedEvent domainEvent)
{
var seller = await this.sellerRepository.GetByIdAsync(domainEvent.SellerId);
Handle(domainEvent, seller);
}
static void Handle(OfferCreatedEvent domainEvent, Seller seller)
{
var offer = new Offer(domainEvent.BuyerId, domainEvent.ProductId, seller.Id);
seller.AddOffer(offer);
}
Note the shift - where HandleAsync needs to be aware of async/await constructs, Handle is just a single threaded procedure that manipulates two local memory references. What that procedure does is copy information from the OfferCreatedEvent to the Seller entity.
The fact that Handle here can be static, and has no dependencies on the async shell, suggests that it could be moved to another place; another hint being that the implementation of Handle requires a dependency (Offer) that is absent from HandleAsync.
Now, within Handle, what we are "really" doing is copying information from OfferCreatedEvent to Seller. We might reasonably choose:
seller.AddOffer(domainEvent);
seller.AddOffer(domainEvent.offer());
seller.AddOffer(new Offer(domainEvent));
seller.AddOffer(new Offer(domainEvent.BuyerId, domainEvent.ProductId, seller.Id));
seller.AddOffer(domainEvent.BuyerId, domainEvent.ProductId, seller.Id);
These are all "fine" in the sense that we can get the machine to do the right thing using any of them. The tradeoffs are largely related to where we want to work with the information in detail, and where we prefer to work with the information as an abstraction.
In the common case, I would expect that we'd use abstractions for our domain logic (therefore: Seller.AddOffer(Offer)) and keep the details of how the information is copied "somewhere else".
The OfferCreatedEvent -> Offer function can sensibly live in a number of different places, depending on which parts of the design we think are most stable, how much generality we can justify, and so on.
Sometimes, you have to do a bit of war gaming: which design is going to be easiest to adapt if the most likely requirements change happens?
I would also advocate for passing an already assembled value object to the aggregate in this situation. In addition to the reasons already mentioned by @VoiceOfUnreason, this also fits more naturally with the domain language. Also, when reading code and method APIs you can then focus on domain concepts (like an offer) without being distracted by details until you really need to know them.
This becomes even more important if you need to pass in more than one value object (or entity). Passing in all the values required for construction as separate parameters instead not only makes the API less resilient to refactoring but also burdens the reader with more details.
The seller is receiving an offer.
Assuming this is what is meant here, it fits better than something like the following:
The seller receives some buyer id, product id, etc.
This most probably would not be found in conversations using the ubiquitous language. In my opinion, code should be as readable as possible and express the behaviour and business logic as close to human language as possible: you compile code for machines to execute, but you write it for humans to easily understand.
Note: I would even consider using factory methods on value objects in certain cases to unburden the client code of knowing what else might be needed to assemble a valid value object - for instance, if there are different valid combinations and ways of constructing the same value object, where some values need reasonable defaults or are chosen by the value object itself. In more complex situations a separate factory might even make sense.
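A sketch of such a factory method; the expiry rule is invented purely to illustrate a default chosen by the value object itself:

public sealed class Offer
{
    public Guid BuyerId { get; }
    public Guid ProductId { get; }
    public Guid SellerId { get; }
    public DateTime ExpiresAtUtc { get; }

    private Offer(Guid buyerId, Guid productId, Guid sellerId, DateTime expiresAtUtc)
    {
        BuyerId = buyerId;
        ProductId = productId;
        SellerId = sellerId;
        ExpiresAtUtc = expiresAtUtc;
    }

    // Clients don't need to know the default validity period.
    public static Offer CreateStandard(Guid buyerId, Guid productId, Guid sellerId)
        => new Offer(buyerId, productId, sellerId, DateTime.UtcNow.AddDays(7));
}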
I am writing an app that requires access to a set of information from all classes and methods.
My query is, what's the most efficient way of doing this, so as to minimise memory usage?
I have taught myself coding and have, of course, come across numerous different methods to solve this whilst trawling the internet. For example:
I could create a global variable such as var info = ..., which I'd place above a class definition, thus giving access from anywhere in the app.
Or I could create a singleton, GameData and have the data stored there, e.g. GameData.shared.info, similarly available from anywhere in the app.
Or I could load the information in the first ViewController and then pass it around as a parameter.
There are no doubt more methods that I haven't come across, but I wonder which is the most memory-efficient method, if indeed there is such a thing. In my case, I won't need access to a huge amount of data - no more than say sixty or seventy records each with half a dozen fields of text or numbers.
Many thanks
Using dependency injection - that is, passing it as a parameter to any object that requires it - would be the more memory-conscious method, since the variable will stop existing "automatically" if you were to replace the whole hierarchy (as long as it's done correctly).
By using a singleton or a global variable, you would have to clear the value yourself.
If the value is not going to disappear during the lifetime of the application, then memory usage doesn't matter, but I would still advise against global variables and suggest using dependency injection.
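A minimal Swift sketch of the injection variant (types invented for the example):

struct GameInfo {
    let records: [String] // placeholder for your sixty-odd records
}

final class GameScreen {
    private let info: GameInfo

    init(info: GameInfo) { // handed in, not fetched from a global
        self.info = info
    }
}

// let info = GameInfo(records: loadRecords())
// let screen = GameScreen(info: info)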
I'm using the Asana REST API to iterate over workspaces, projects, and tasks. After I finished the initial crawl over the data, I was surprised to see that I had only retrieved the top-level tasks. Since I am required to provide the workspace and project information, I was hoping not to have to recurse any deeper. It appears that I can recurse on a single task with the /subtasks endpoint and re-query... wash/rinse/repeat... but that amounts to a potentially massive number of REST calls (one for each subtask to see if they, in turn, have subtasks to query - and so on).
I can partially mitigate this by adding to the opt_fields query parameter something like:
&opt_fields=subtasks,subtasks.subtasks
However, this doesn't scale well. It means I have to elongate the query for each layer of depth. I suppose I could say "don't put tasks deeper than x layers deep" - but that seems to fly in the face of Asana's functionality and design. Also, since I need lots of other properties, it requires me to make a secondary query for each node in the hierarchy to gather those. Ugh.
I can use the path method to try to mitigate this a bit:
&opt_fields=(this|subtasks).(id|name|etc...)
but again, I have to do this for every layer of depth. That's impractical.
There's documentation about this great REPEATER + operator. Supposedly it would work like this:
&opt_fields=this.subtasks+.name
That is supposed to apply to ALL subtasks anywhere in the hierarchy. In practice, this is completely broken, and the REST API chokes and returns only the ids of the top-level tasks. :( Apparently their documentation is just wrong here.
The only method that seems remotely functional (if not practical) is to iterate first on the top-level tasks, being sure to include opt_fields=subtasks. Whenever this is a non-empty array, I would need to recurse on that task, query for its subtasks, and continue in that manner, until I reach a null subtasks array. This could be of arbitrary depth. In practice, the first REST call yields me (hopefully) the largest number of tasks, so the individual recursion may be mitigated by real data... but it's a heck of an assumption.
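In outline, that recursion looks something like this (C# with HttpClient, as one possibility; the /subtasks endpoint shape is from the Asana docs, while auth setup and the field list are elided placeholders):

using System.Collections.Generic;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

async Task CollectAsync(HttpClient http, string taskId, List<JsonElement> results)
{
    // One extra request for every task that turns out to have children.
    var url = $"https://app.asana.com/api/1.0/tasks/{taskId}/subtasks?opt_fields=id,name";
    using var doc = JsonDocument.Parse(await http.GetStringAsync(url));
    foreach (var sub in doc.RootElement.GetProperty("data").EnumerateArray())
    {
        results.Add(sub.Clone()); // clone so it outlives the parsed document
        await CollectAsync(http, sub.GetProperty("id").ToString(), results);
    }
}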
I also noticed that the limit parameter applies ONLY to the top-level tasks. If I choose to expand the subtasks, say, I could get a thousand tasks back instead of 100. The call could time out if the data is too large. The safest thing to do would be to request only the ids of subtasks until recursion, and, as always, ask for all the desired top-level properties at that time.
All of this seems incredibly wasteful - what I really want is a flat list of tasks which include the parent.id and possibly a list of subtasks.id - but I don't want to query for them hierarchically. I also want to page my queries with rational data sizes in mind. I'd like to get 100 tasks at a time until Asana runs out - but that doesn't seem possible, since the limit only applies to top-level items.
Unfortunately the repeater didn't solve my problem, since it just doesn't work. What are other people doing to solve this problem? And, secondarily, can anyone with intimate Asana insight provide any hope of getting a better way to query?
While I'm at it, a suggested way to design this: the task endpoint should not require a workspace or project predicate. I should be able to filter by them, but not be required to. I am limited to 100 objects already, so why force me to filter unnecessarily? In the same vein, navigating the hierarchy of Asana seems an unnecessary tax for clients who are not Asana (and possibly even for the Asana UI itself).
Any ideas or insights out there?
Have you ensured that the + you send is URL-encoded? Whatever library you are using should usually handle this (which language are you using, btw? We have some first-party client libraries available).
Try &opt_fields=this.subtasks%2B.name if you're creating the URL manually, or (better yet) use a library that correctly encodes URL query parameters.
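For example, in C#:

var optFields = Uri.EscapeDataString("this.subtasks+.name"); // "this.subtasks%2B.name"
var url = "https://app.asana.com/api/1.0/tasks?opt_fields=" + optFields;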