I am modeling a scenario where my source is incoming orders, but each order may have different characteristics, such as the number of lines, units, and SKUs on the order. Based on these characteristics, my service/delay time may differ. For example, my service time may be 1s*lines + 5s*units + 30s*SKUs. How can I set up my source and delay block to model this scenario?
Create an agent type for orders and add parameters for the number of SKUs, lines, units, etc.
In your delay block's duration field, use agent.numSKU to refer to a parameter numSKU in your agent type Order.
In your source, make sure it creates Order agents (not the default agents).
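For example, a minimal sketch of the field contents (not standalone Java; the parameter names match the description above, but the distributions are purely illustrative):

```java
// Assumes an agent type Order with int parameters lines, units, numSKU.

// Source block -- "New agent": Order
// Source block -- "On exit" (illustrative way of setting the characteristics):
agent.lines  = uniform_discr(1, 10);
agent.units  = uniform_discr(1, 50);
agent.numSKU = uniform_discr(1, 5);

// Delay block -- "Delay time" (seconds), matching 1s*lines + 5s*units + 30s*SKUs:
agent.lines * 1.0 + agent.units * 5.0 + agent.numSKU * 30.0
```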
Lots of example models do this, please follow some of the basic tutorials and check the example models :)
I have been struggling to find information on how a resource that contains generated values should be modified. Below is a real-world example:
Let's say we have 2 endpoints:
/categories and /products.
A category is used to contain various parameters that define any product belonging to it. For example, based on a category a product expiration date might be calculated, or some other properties might or might not be attached to a product.
Let's say we create a new product by sending a POST request to /products, and among other fields we include the category ID. Based on the category, the server creates and stores a new product along with various generated properties (expiration date, delivery policies, etc.).
Now the problem arises when we need to modify (PATCH/PUT) the mentioned product.
How are generated values edited? We can, for example, change a delivery policy, but then the product will contain a field that doesn't match what its attached category describes. Likewise, it might be very handy to modify its generated expiration date, but yet again that can create confusion about why a category says a product should expire in 3 days while the product is set to expire in 20 days.
Another solution would be to make all these properties read-only and only allow regenerating them by changing the category, just like at creation.
However, that poses two problems:
The biggest one is that a different category might not contain the same policy layout. For example, one category might enable generating GPS coordinates to ease delivery, while another does not. If we change the category, what do we do with these valuable properties already present? Do we drop them for the sake of clarity?
Another issue is limited flexibility. There might be cases when a property needs to be changed but the category needs to remain the same.
I think these questions come up and get answered in probably every single REST API development effort, and I am probably just missing something very simple and obvious. Could you help me understand the right way of going about this?
Thank you very much.
You write code to ensure that all of the invariants hold for the server's copy of the resource.
That can mean either (a) inspecting the body of the request, and returning a client error if the body doesn't satisfy the constraints you need to maintain, or (b) changing your resource in a way that doesn't exactly match the request you've received.
In the second case, you need to have a little bit of care with the response metadata, so that you don't imply that the representation of the request has been adopted "as is".
The code you are writing here is part of the origin server's implementation, deliberately hidden by the HTTP facade you present. The general purpose components in the middle don't care about those details; they just want you to use messaging semantics consistent with the HTTP (and related) specifications.
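As a minimal sketch of both options (JAX-RS-style; Product, Category, ProductPatch, and the stores are hypothetical types, not from your API):

```java
import javax.ws.rs.core.Response;

public class ProductResource {

    private final ProductStore productStore;   // hypothetical persistence
    private final CategoryStore categoryStore; // hypothetical persistence

    public ProductResource(ProductStore productStore, CategoryStore categoryStore) {
        this.productStore = productStore;
        this.categoryStore = categoryStore;
    }

    public Response patchProduct(String productId, ProductPatch patch) {
        Product product = productStore.find(productId);
        Category category = categoryStore.find(product.getCategoryId());

        // (a) Reject a representation that violates the category's invariants.
        if (patch.getDeliveryPolicy() != null
                && !category.allowsDeliveryPolicy(patch.getDeliveryPolicy())) {
            return Response.status(409) // or 422, with a descriptive body
                    .entity("deliveryPolicy is not permitted by the product's category")
                    .build();
        }

        // (b) Accept the request but normalize the stored state, e.g. re-derive
        // the expiration date from the category rather than adopting the
        // client's value as-is.
        product.apply(patch);
        product.setExpirationDate(category.computeExpirationDate(product));
        productStore.save(product);

        // Return the server's representation so the client doesn't assume its
        // own representation was adopted verbatim.
        return Response.ok(product).build();
    }
}
```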
I know that one can utilize multiple KieBases and multiple KieSessions, but I don't understand under what scenarios one would use one approach vs the other (I am having some trouble in general understanding the definitions and relationships between KieContainer, KieBase, KieModule, and KieSession). Can someone clarify this?
You use multiple KieBases when you have multiple sets of rules doing different things.
KieSessions are the actual session for rule execution -- that is, they hold your data and some metadata and are what actually executes the rules.
Let's say I have an application for a school. One part of my application monitors students' attendance. The other part of my application tracks their grades. I have a set of rules which decides if students are truant and we need to talk to their parents. I have a completely unrelated set of rules which determines whether a student is having trouble academically and needs to be put on probation/a performance plan.
These rules have nothing to do with one another. They have completely separate concerns, different rule inputs, and are triggered in different parts of the application. The part of the application that is tracking attendance doesn't need to trigger the rules that monitor student performance.
For this application, I would have two different KieBases: one for attendance, and one for academics. When I need to fire the rules, I fire one or the other -- there is no use case for firing both at the same time.
The KieSession is the runtime for when we fire those rules. We add to it the data we need to trigger the rules, and it also tracks some other metadata that's really not relevant to this discussion. When firing the academics rules, I would be adding to it the student's grades, their classes, and maybe some information about the student (e.g. the grade level, whether they're an "honors" student, etc.). For the attendance rules, we would need the student information, plus historical tardiness/absence records. Those distinct pieces of data get added to the sessions.
When we decide to fire rules, we first get the appropriate KieBase -- academics or attendance. Then we get a session for that rule set, populate the data, and fire it. We technically "execute" the session, not the rules (and definitely not the rule base.) The rule base is just the collection of the rules; the session is how we actually execute it.
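A minimal sketch of that flow for the academics rules (the KieBase name "academicsKBase" would come from your kmodule.xml; Student and Grade are illustrative fact types):

```java
import java.util.List;

import org.kie.api.KieBase;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class AcademicsRuleRunner {

    private final KieContainer container =
            KieServices.Factory.get().getKieClasspathContainer();

    public void checkAcademics(Student student, List<Grade> grades) {
        // The rule base: just the compiled academics rules, nothing else.
        KieBase academics = container.getKieBase("academicsKBase");

        // The session: holds the data and actually executes the rules.
        KieSession session = academics.newKieSession();
        try {
            session.insert(student);
            grades.forEach(session::insert);
            session.fireAllRules();
        } finally {
            // Create, use, dispose -- consider session pooling at higher volume.
            session.dispose();
        }
    }
}
```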
There are two kinds of sessions -- stateful and stateless. As their names imply, they differ with how data is stored and tracked. In most cases, people use stateful sessions because they want their rules to do iterative work on the inputs. You can read more about the specific differences in the documentation.
For low-volume applications, there's generally little need to reuse your KieSessions. Create, use, and dispose of them as needed. There is, however, some inherent overhead in this process, so there comes a point at which reuse does become something you should consider. The documentation discusses the solution provided out of the box for Drools, which is session pooling.
(When trying to wrap your head around this, I like to use an analogy of databases. A session is like a JDBC connection: for small applications you can create them, use them, then close them as you need them. But as you scale you'll quickly find that you need to look into connection pooling to minimize this overhead. In this particular analogy, the rule base would be the database that the rules are executing against -- not the tables!)
In CQRS, when we need to create custom-tailored projections for our read models, we usually prefer "denormalized" projections (assume we are talking about projecting onto a DB). It is not uncommon for the information needed by the application/UI to come from different aggregates (possibly from different BCs).
Imagine we need a projected table to contain a customer's information together with her full address, and that Customer and Address are different aggregates in our system (possibly in different BCs). That means addresses are generated and maintained independently of customers. In other words, when a new customer is created, there is no guarantee that an AddressCreatedEvent will subsequently be produced by the system; this event may have already been processed prior to the creation of the customer. All we have at the time of CreateCustomerCommand is the UUID of an existing address.
We have several solutions here.
Enrich CreateCustomerCommand and the subsequent CustomerCreatedEvent to contain the full address of the customer (looking up this information on the fly from the UI or the controller). This way the projection handler will just update the table directly upon receiving CustomerCreatedEvent.
Use the addrUuid provided in CustomerCreatedEvent to perform an ad-hoc query in the projection handler to get the missing part of the address information before updating the table.
These are commonly discussed solutions to this problem. However, as noted by many others, there are problems with each approach. Enriching events can be difficult to justify, as Enrico Massone describes well in this question, for example. Querying other views/projections (a kind of JOIN) will work but introduces coupling (see the same link).
I would like to describe another method here which, I believe, nicely addresses these concerns. I apologize beforehand for not giving proper credit if this is a known technique; sincerely, I have not seen it described elsewhere (at least not as explicitly).
"A picture speaks a thousand words", as they say:
The idea is that:
We keep CreateCustomerCommand and CustomerCreatedEvent simple, with only the addrUuid attribute (no enriching).
In the API controller we send two commands to the command handler (aggregates): the first one, as usual, is CreateCustomerCommand, which creates the customer and projects the customer information together with addrUuid to the table, leaving the other columns (full address, etc.) empty for the time being. (Warning: see the update; we may have a concurrency issue here and need to issue the probe command from a Saga.)
Right after this, once we have obtained the custUuid of the newly created customer, we issue a special ProbeAddressCommand to the Address aggregate, triggering an AddressProbedEvent which will encapsulate the full state of the address together with the special attribute probeInitiatorUuid, which is, of course, our custUuid from the previous command.
The projection handler will then act upon AddressProbedEvent by simply filling in the missing pieces of information in the table, looking up the required row by matching the provided probeInitiatorUuid (i.e. custUuid) and addrUuid.
So we have two phases: create the Customer and probe for the related Address. They are depicted in the diagram as (1) and (2) respectively.
Obviously, we can send as many such "probe" commands (in parallel) as needed by our projection: ProbeBillingCommand, ProbePreferencesCommand, etc., effectively populating or "filling in" the denormalized projection with the missing data from each handled "probe" event.
The advantage of this method is that we keep the commands/events in the first phase simple (only UUIDs to other aggregates) while avoiding synchronous coupling (joining) of the projections. The whole approach has a nice EDA feel to it.
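A minimal sketch of the projection handler for the two phases (framework-agnostic; the event fields and the table abstraction are illustrative, not from an actual codebase):

```java
public class CustomerAddressProjectionHandler {

    private final CustomerAddressTable table; // the denormalized read model

    public CustomerAddressProjectionHandler(CustomerAddressTable table) {
        this.table = table;
    }

    // Phase (1): insert the row with the customer's own data only;
    // the address columns stay empty for the time being.
    public void on(CustomerCreatedEvent event) {
        table.insertRow(event.custUuid, event.customerName, event.addrUuid);
    }

    // Phase (2): the probe event carries the full address state plus
    // probeInitiatorUuid (our custUuid), so the missing columns can be filled in.
    public void on(AddressProbedEvent event) {
        table.fillAddress(event.probeInitiatorUuid, event.addrUuid,
                event.street, event.city, event.zipCode);
    }
}
```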
My question then is: is this a known technique? It seems I have not seen it described anywhere. And what can go wrong with this approach?
I would be more than happy to update this question with references to other sources that describe this method.
UPDATE 1:
There is one significant flaw with this approach that I can see already: ProbeAddressCommand cannot be issued before the projection handler has had a chance to process CustomerCreatedEvent. But this is impossible to know from the API gateway (or controller).
The solution would probably involve a Saga, say CustomerAddressJoinProjectionSaga, which will start upon receiving CustomerCreatedEvent and only then issue ProbeAddressCommand. The Saga will end upon registering AddressProbedEvent, or, if many other aggregates are involved in probing, when all such events have been received.
So here is the updated diagram.
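A minimal sketch of such a saga (the command gateway and the wiring are assumptions, not tied to a particular framework):

```java
public class CustomerAddressJoinProjectionSaga {

    private final CommandGateway commands; // hypothetical command-sending facade

    public CustomerAddressJoinProjectionSaga(CommandGateway commands) {
        this.commands = commands;
    }

    // Saga starts here: only once CustomerCreatedEvent has been processed can
    // the probe be issued safely.
    public void on(CustomerCreatedEvent event) {
        commands.send(new ProbeAddressCommand(event.addrUuid,
                /* probeInitiatorUuid */ event.custUuid));
    }

    // Saga ends here (or once all expected probe events have been received,
    // if several aggregates are being probed).
    public void on(AddressProbedEvent event) {
        // mark the saga as completed
    }
}
```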
UPDATE 2:
As noted by Levi Ramsey (see his answer below), my example is rather convoluted with respect to the choice of aggregates. Indeed, Customer and Address are often conceptualized as belonging together (same aggregate root). So it is a better illustration of the problem to think of something like Student and Course instead, assuming for the sake of simplicity that there is a straightforward relation between the two: a student is taking a course. This way it is more obvious that Student and Course are independent aggregates (students and courses can be created and maintained at different times and in different places in the system).
But the question still remains: how can we obtain a projection containing the full information about a student (full name, etc.) and the courses she is registered for (title, credits, the instructor's full name, prerequisites, etc.), all in the same table, if the UI requires it?
A couple of thoughts:
I question why address needs to be a separate aggregate much less in a different bounded context, in view of the requirement that customers have an address. If in some other bounded context customer addresses are meaningful (e.g. you want to know "which addresses have more customers" etc.), then that context can subscribe to the events from the customer service.
As an alternative, if there's a particularly strong reason to model addresses separately from customers, why not have the read side prospectively listen for events from the address aggregate and store the latest address for a given address UUID, in case there's a customer who ends up with that address? The reliability per unit of effort of that approach is likely to be somewhat greater, I would expect.
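As a rough sketch of that alternative (all names are illustrative): the read side keeps the latest address it has seen per UUID and completes the join whenever the matching customer shows up, in either order.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class CustomerWithAddressProjection {

    private final Map<UUID, Address> latestAddressByUuid = new HashMap<>();
    private final CustomerAddressTable table; // the denormalized read model

    public CustomerWithAddressProjection(CustomerAddressTable table) {
        this.table = table;
    }

    // Store every address prospectively, in case a customer ends up with it.
    public void on(AddressCreatedEvent event) {
        latestAddressByUuid.put(event.addrUuid, event.address());
        table.updateAddressWhereAddrUuid(event.addrUuid, event.address());
    }

    // When the customer arrives, the address may or may not be known yet.
    public void on(CustomerCreatedEvent event) {
        Address address = latestAddressByUuid.get(event.addrUuid); // possibly null
        table.insertRow(event.custUuid, event.customerName, event.addrUuid, address);
    }
}
```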
What I understand is that GameplayAbilities do not need to replicate to update GameplayAttributes across the network, since they don't influence attributes directly. Instead, this is the task of GameplayEffects.
So what is it that updates the GameplayAbilitySystem attribute values (FGameplayAttributeData) over the network:
do only attributes replicate?
or are the GameplayEffects sent only?
or both?
To give context: I have an attribute modification system where I need several “base values”. Those “base values” change very infrequently, while the “final value” changes often. There are two possibilities to do that with GAS:
use a separate attribute for each of the “base values” and “final value” or
add additional float members to the attribute struct FGameplayAttributeData, for all “base values” and the “final value”
If only GE are sent over the network (and not attributes), I would go for (2), since the size of an attribute doesn't matter for bandwidth then.
Both are replicated!
GameplayAttributes are replicated with an OnRep function, which calls FActiveGameplayEffectsContainer::SetBaseAttributeValueFromReplication(), which - despite the name - also updates the current value using the aggregators existing on the local machine. For that to work, ...
GameplayEffects replicate as well (see also Unreal GAS: GameplayEffect: Difference between minimal and full replication).
So, to save bandwidth in the example given, it makes more sense to use separate attributes for the “base values” (option (1)), since they are not updated very often. Then, when the “final value” changes, the constant “base values” won’t have to be replicated.
I want to simulate a supermarket with Arena to find the proper number of cashiers the market needs.
I want to start the simulation with one cashier and then increase the number of cashiers in subsequent simulations until the utilization of the cashiers is less than 70%.
Each cashier is a "resource module" and has a "process module" for its service time.
Do I make a separate model for each different number of cashiers (for example, a model for a supermarket with one cashier, another model for a supermarket with two cashiers, and so on), or is there a better way?
It's a little more advanced, but it sounds like Arena's Process Analyzer would help you determine the number of cashiers needed.
The Process Analyzer assists in the evaluation of alternatives presented by the execution of different simulation model scenarios. This is useful to simulation model developers, as well as decision-makers.
The Process Analyzer is focused at post-model development comparison of models. The role of the Process Analyzer then is to allow for comparison of the outputs from validated models based on different model inputs.
via pelincec.isep.pw.edu.pl/doc/Simulation_Warsaw%20Part%205.pdf
A Google search for Arena Process Analyzer provides plenty of lecture notes, book references and examples:
https://www.google.com/search?q=arena+process+analyzer
Also, it sounds like this model isn't very complicated so, although it may be tedious, it'll probably be quicker to alter your model and run n simulations for each solution {1 cashier, 2 cashiers, ...}.
Also, if the model is indeed pretty simple, why not create multiple independent models in the same simulation file. For instance, one simulation file has three independent models of 1, 2 and 3 cashiers. The next has 4, 5 and 6 cashiers and so on. This would consolidate the statistics a little more and make analysis easier.
There are several ways to do this without making multiple models. A cashier is simply a resource, but it could also be an entity.
You can build your model to require throughput (customers) to be processed when two entities are available - a register entity and a cashier entity. This could be done with a batch module.
Cashier entities would be set up according to a schedule you would like to test... from minimum cashier availability to full cashier availability.
Register entities would probably be held constant, but you could make them variable according to a schedule, too.
Your batched entity would then go into the process module until a schedule calls for the cashier to "leave" the system - split the batch and destroy the cashier entity. The register entity loops back to the batch to be grouped with another cashier, or waits.