I have two actors that are overlapping at BeginPlay. I can get the overlapping component(s) easily enough, but is there a way to manually call an OnComponentBeginOverlap event, since it doesn't get called at BeginPlay if the actors are already overlapping?
No, just create a separate event that follows the same execution as your OnComponentBeginOverlap event.
I'm working with code that uses a tumbling window of one day, and would like to send early results to a different DataStream on an hourly basis.
I understand that triggers are a way to go here, but don't really see how it would work.
The current code is as follows:
myStream
    .keyBy(...)
    .window(TumblingEventTimeWindows.of(Time.days(1)))
    .aggregate(new MyAggregateFunction(), new MyProcessWindowFunction())
In my understanding, I should register a trigger, and then in its onEventTime method get hold of a TriggerContext, from which I can send data to the labeled output. But how do I get the current state of MyAggregateFunction from there? Or would I need to do my own computation inside onEventTime()?
Also, the documentation states that "By specifying a trigger using trigger() you are overwriting the default trigger of a WindowAssigner." Would my one-day window then still fire correctly, or do I need to trigger it somehow differently?
Another way of doing this is creating two different operators - one that windows by 1 hour, and another that windows by 1 day. Would triggers be a preferred approach to that?
Rather than using a custom Trigger, it would be simpler to have two layers of windowing, where the hourly results are further aggregated into daily results. Something like this:
hourlyResults = myStream
    .keyBy(...)
    .window(TumblingEventTimeWindows.of(Time.hours(1)))
    .aggregate(new MyAggregateFunction(), new MyProcessWindowFunction())

dailyResults = hourlyResults
    .keyBy(...)
    .window(TumblingEventTimeWindows.of(Time.days(1)))
    .aggregate(new MyAggregateFunction(), new MyProcessWindowFunction())

hourlyResults.addSink(...)
dailyResults.addSink(...)
Note that the result of a window is not a KeyedStream, so you will need to use keyBy again, unless you can arrange to leverage reinterpretAsKeyedStream (docs).
Normally when I get to more complex behavior like this, I use a KeyedProcessFunction. You can aggregate (and save in state) hourly and daily results, set timers as needed, and use a side output for the hourly results versus the regular output for the daily results.
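For what it's worth, here is a minimal sketch of that KeyedProcessFunction approach, assuming a simple count-style aggregate, event-time timestamps assigned upstream, and hypothetical names (Event, HourlyAndDailyCount, the HOURLY output tag) that are not taken from the original code:

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

// Placeholder event type (hypothetical).
class Event { String key; }

public class HourlyAndDailyCount extends KeyedProcessFunction<String, Event, Long> {

    // Side output for the hourly partial results; daily results use the main output.
    public static final OutputTag<Long> HOURLY = new OutputTag<Long>("hourly") {};

    private transient ValueState<Long> hourlyCount;
    private transient ValueState<Long> dailyCount;

    @Override
    public void open(Configuration parameters) {
        hourlyCount = getRuntimeContext().getState(new ValueStateDescriptor<>("hourlyCount", Long.class));
        dailyCount = getRuntimeContext().getState(new ValueStateDescriptor<>("dailyCount", Long.class));
    }

    @Override
    public void processElement(Event event, Context ctx, Collector<Long> out) throws Exception {
        // Assumes event-time timestamps have been assigned upstream.
        long ts = ctx.timestamp();

        // Register timers for the end of the current hour and the current day
        // (duplicate registrations of the same timestamp are deduplicated by Flink).
        long hourEnd = ts - (ts % 3600_000L) + 3600_000L;
        long dayEnd = ts - (ts % 86_400_000L) + 86_400_000L;
        ctx.timerService().registerEventTimeTimer(hourEnd - 1);
        ctx.timerService().registerEventTimeTimer(dayEnd - 1);

        hourlyCount.update(hourlyCount.value() == null ? 1L : hourlyCount.value() + 1);
        dailyCount.update(dailyCount.value() == null ? 1L : dailyCount.value() + 1);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Long> out) throws Exception {
        if ((timestamp + 1) % 86_400_000L == 0) {
            // End of day: emit the daily result on the main output.
            out.collect(dailyCount.value() == null ? 0L : dailyCount.value());
            dailyCount.clear();
        }
        if ((timestamp + 1) % 3600_000L == 0) {
            // End of an hour (including the last hour of the day): emit a partial.
            ctx.output(HOURLY, hourlyCount.value() == null ? 0L : hourlyCount.value());
            hourlyCount.clear();
        }
    }
}

The hourly partials would then be consumed downstream via getSideOutput(HourlyAndDailyCount.HOURLY) on the resulting stream.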
There are quite a few questions here; I will try to answer all of them. First of all, if You specify Your own trigger using trigger(), You effectively override the default trigger, and thus the window may not work the way it would by default. For example, if You create a 1-day event-time tumbling window but override the trigger so that it fires for every 20th element, it will never fire based on event time.
Now, when Your custom trigger fires, the output of MyAggregateFunction will be passed to MyProcessWindowFunction, just as with the default trigger, so You don't need to access MyAggregateFunction from inside the trigger.
Finally, while it may be technically possible to implement a trigger that emits partial results every hour, my personal opinion is that You should probably go with the two separate windows. While that solution may create slightly more overhead and may result in a larger state, it should be much clearer, easier to implement, and much more error resistant.
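If You do want to keep a single daily window with early firings, Flink ships a ContinuousEventTimeTrigger (in org.apache.flink.streaming.api.windowing.triggers) that fires at a fixed event-time interval and, as far as I remember, also at the end of the window. A sketch of the wiring, reusing the names from the question, could look like this:

// import org.apache.flink.streaming.api.windowing.triggers.ContinuousEventTimeTrigger;
myStream
    .keyBy(...)
    .window(TumblingEventTimeWindows.of(Time.days(1)))
    .trigger(ContinuousEventTimeTrigger.of(Time.hours(1)))
    .aggregate(new MyAggregateFunction(), new MyProcessWindowFunction())

Note that the hourly partials and the final daily result then all arrive on the same output stream, so the downstream code has to tell them apart, which is another reason the two-window approach tends to be simpler.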
Say I have a job which needs to be executed at fixed times in a day, like below:
"05:00, 06:10, 07:30, 08:15, 09:05, 10:35"
How could I build a Trigger for this in Quartz?
I couldn't find a way to achieve this out of the box.
I see two ways to solve your problem:
1. Multiple triggers (recommended).
The most obvious and easy way to set up this unusual schedule for your job is to combine several triggers.
Quartz allows you to attach as many triggers as you want to a single JobDetail (see the sketch after this list).
2. Implement your own trigger.
This is a more complicated way, applicable only if you must use a single trigger.
You could implement the org.quartz.Trigger interface (or any of its subinterfaces) to set your own rules.
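A minimal sketch of option 1, assuming Quartz 2.x and a hypothetical MyJob class: the times from the question become one cron trigger each, all bound to the same JobDetail.

import static org.quartz.CronScheduleBuilder.cronSchedule;
import static org.quartz.JobBuilder.newJob;
import static org.quartz.TriggerBuilder.newTrigger;

import org.quartz.Job;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.impl.StdSchedulerFactory;

public class FixedTimesScheduling {

    // Hypothetical job; replace with your own Job implementation.
    public static class MyJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            System.out.println("Fired at " + new java.util.Date());
        }
    }

    public static void main(String[] args) throws Exception {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        JobDetail job = newJob(MyJob.class).withIdentity("fixedTimesJob").build();

        // One cron expression per fixed time of day (Quartz format: sec min hour dom mon dow).
        String[] crons = {
            "0 0 5 * * ?",  // 05:00
            "0 10 6 * * ?", // 06:10
            "0 30 7 * * ?", // 07:30
            "0 15 8 * * ?", // 08:15
            "0 5 9 * * ?",  // 09:05
            "0 35 10 * * ?" // 10:35
        };

        for (int i = 0; i < crons.length; i++) {
            Trigger trigger = newTrigger()
                    .withIdentity("fixedTime-" + i)
                    .withSchedule(cronSchedule(crons[i]))
                    .forJob(job)
                    .build();
            if (i == 0) {
                scheduler.scheduleJob(job, trigger); // store the job together with its first trigger
            } else {
                scheduler.scheduleJob(trigger);      // attach further triggers to the same JobDetail
            }
        }
        scheduler.start();
    }
}

A single cron expression can't express different hour-minute combinations such as 06:10 and 07:30, which is exactly why each time of day gets its own trigger here.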
In AnyLogic, how can I have the event triggered after running the simulation, so that each time I don't need to copy the table from the Log and paste it into Excel? I tried to use the database to store the variables, but it seems too complicated and I couldn't work with it!
When I run the model in AnyLogic, the event can't be triggered. It shows that the event is not scheduled. I have tried many ways, but the result is always the same.
To answer the question about calling an event after the simulation:
In Main, you can call a function in its "On destroy" action. At the experiment level, you can also call a function after each run, iteration, or the whole experiment.
I am defining a state machine and would like the machine to "run" when the object is created. With that in mind, I left out the triggers on all the transitions (and only defined guards). It seems, though, that a created object stays in the first state if it is not triggered further. How can I avoid having to call the trigger explicitly? And if I do execute a trigger, are all subsequent states passed by that (one) trigger call? Is there something "special" about the first state?
The first state is special in that it doesn't need a trigger: the transition from the start state is executed on object creation.
To mimic the behavior you are looking for, you can use the same trigger method on all other transitions. These transitions are guarded so that only one transition is valid at a time. But you will need to actually execute this single trigger to make anything happen.
You can now check whether triggering is possible and, if so, trigger, as in this pseudocode:
if self.trigger? then self.trigger
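To illustrate, here is a small framework-agnostic Java sketch (all names such as GuardedStateMachine, Transition, and advance are made up, not from any particular state-machine library) of transitions that carry only guards and are driven by one generic trigger; a single call advances at most one state, so a run-to-completion loop is what removes the need to call the trigger explicitly for each step:

import java.util.List;
import java.util.function.BooleanSupplier;

class GuardedStateMachine {

    // A transition with a guard but no trigger of its own.
    record Transition(String from, String to, BooleanSupplier guard) {}

    private String current;
    private final List<Transition> transitions;

    GuardedStateMachine(String initial, List<Transition> transitions) {
        // The initial state is entered on creation; no trigger is needed for it.
        this.current = initial;
        this.transitions = transitions;
    }

    // The single generic "trigger": fires the first transition out of the current
    // state whose guard evaluates to true. At most one state change per call.
    boolean advance() {
        for (Transition t : transitions) {
            if (t.from().equals(current) && t.guard().getAsBoolean()) {
                current = t.to();
                return true;
            }
        }
        return false; // no guard satisfied; stay in the current state
    }

    // Run-to-completion: keep triggering while some guard allows a move, so the
    // caller doesn't have to invoke the trigger once per state.
    void run() {
        while (advance()) { /* keep going */ }
    }

    String current() { return current; }
}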
I have a command that updates two aggregates. Since aggregate roots are transactional boundaries, I have a command that does a repository.Save() action on the first aggregate and then fires another command (from within the first command) which acts on the second aggregate. Each Save() action starts its own Event-Store transaction, commits the changes, and then publishes them.
First, is this correct, i.e. letting one command notify another aggregate via another command?
I noticed in Mark Nijhof's code that he uses event handlers, which is nice as you could register several event handlers for the same event. I tried doing this using J Oliver's Event-Store, but my commits.events in IDispatchCommit were referencing the first aggregate's values when processing the second. This caused some weird errors.
So should I find a way of making this work with EventHandlers or is firing off commands within commands okay?
JD
Edit - I have switched my wire-up to use .UsingAsynchronousDispatchScheduler() and am now allowing registered events to fire more than one event handler, which in turn fires a command on the other aggregate, and it seems to work. So, is this the correct way to do it, rather than having commands fire commands?
I think there are a million and one ways to skin this cat. I'm not sure firing a command from an event handler is the way to go; I have two command handlers respond to the same command in this instance.
I do find Documently good as a reference app. Have you looked at that?
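For reference, here is a bare-bones sketch of the wiring described in the edit (an event handler for the first aggregate's event dispatching a command aimed at the second aggregate), written in Java purely for illustration; every type below is hypothetical rather than taken from J Oliver's Event-Store or any other framework:

interface Command {}
interface Event {}

// Hypothetical event raised by the first aggregate and command handled by the second.
record OrderPlaced(String orderId, String customerId) implements Event {}
record ReserveCredit(String customerId, String orderId) implements Command {}

interface CommandBus {
    void send(Command command);
}

// Subscribed to the first aggregate's event; reacts by sending a command that the
// second aggregate's command handler picks up in its own transaction.
class OrderPlacedHandler {
    private final CommandBus commandBus;

    OrderPlacedHandler(CommandBus commandBus) {
        this.commandBus = commandBus;
    }

    void handle(OrderPlaced event) {
        // The two aggregates are only eventually consistent with each other.
        commandBus.send(new ReserveCredit(event.customerId(), event.orderId()));
    }
}

Whether the dispatch runs synchronously or through an asynchronous dispatcher is an infrastructure concern; the handler itself stays the same.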