Are Update() calls serialized or parallelized? - unity3d

I'm new to Unity5, and I'm trying to create a simple game.
By extending MonoBehaviour, I receive an Update() function. But I don't know how it works behind the scenes.
My question is: are the Update() functions serialized (called one after the other) or parallelized when many MonoBehaviours have their own Update functions?
For example, if I have two scripts, each with its own Update, would the Updates be called at the same time (parallelized) or one after the other (serialized)?
If they are serialized, how do I determine the order?

By default, new MonoBehaviour scripts are executed in an order that Unity chooses and that you should treat as arbitrary. They are not run at the same time; they run one after another on the main thread.
If you want to specify the execution order, you can do so under:
Edit > Project Settings > Script Execution Order.
Further reading: Execution Order of Event Functions

Related

Is Update execution order in Unity constant?

Assuming two things:
1. That there is no defined order for Update() execution, and that if a specific order is needed, you can define it yourself with the script execution order, as specified here and here.
2. That the game objects are not updated according to their hierarchy, as explained here.
Something I could not work out from the documentation: if you do not set any script execution order and Unity picks one 'randomly', does that order remain unaltered for the whole run of the game, or can the update execution order change over time?
If you do not specify anything, the order won't be consistent. If you want a specific script to execute before others, you have to use the script execution order or use an array, as suggested in your linked post.

Early firing in Flink - how to emit early window results to a different DataStream with a trigger

I'm working with code that uses a tumbling window of one day, and would like to send early results to a different DataStream on an hourly basis.
I understand that triggers are a way to go here, but don't really see how it would work.
The current code is as follows:
myStream
.keyBy(..)
.window(TumblingEventTimeWindows.of(Time.days(1)))
.aggregate(new MyAggregateFunction(), new MyProcessWindowFunction())
In my understanding, I should register a trigger and then, in its onEventTime method, get hold of a TriggerContext from which I can send data to the labeled output. But how do I get the current state of MyAggregateFunction from there? Or would I need to do my own computation inside onEventTime()?
Also, the documentation states that "By specifying a trigger using trigger() you are overwriting the default trigger of a WindowAssigner." Would my one-day window then still fire correctly, or do I need to trigger it somehow differently?
Another way of doing this is creating two different operators - one that windows by 1 hour, and another that windows by 1 day. Would triggers be a preferred approach to that?
Rather than using a custom Trigger, it would be simpler to have two layers of windowing, where the hourly results are further aggregated into daily results. Something like this:
hourlyResults = myStream
.keyBy(...)
.window(TumblingEventTimeWindows.of(Time.hours(1)))
.aggregate(new MyAggregateFunction(), new MyProcessWindowFunction())
dailyResults = hourlyResults
.keyBy(...)
.window(TumblingEventTimeWindows.of(Time.days(1)))
.aggregate(new MyAggregateFunction(), new MyProcessWindowFunction())
hourlyResults.addSink(...)
dailyResults.addSink(...)
Note that the result of a window is not a KeyedStream, so you will need to use keyBy again, unless you can arrange to leverage reinterpretAsKeyedStream (docs).
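If the hourly results carry their key, a hedged one-liner along these lines would avoid the extra shuffle (HourlyResult and its getKey() accessor are assumptions for illustration):
// Assumption: HourlyResult exposes the original key via getKey().
// Reinterpreting reuses the existing partitioning, avoiding a second network shuffle.
KeyedStream<HourlyResult, String> keyedHourly =
    DataStreamUtils.reinterpretAsKeyedStream(hourlyResults, HourlyResult::getKey);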
Normally when I get to more complex behavior like this, I use a KeyedProcessFunction. You can aggregate (and save in state) hourly and daily results, set timers as needed, and use a side output for the hourly results versus the regular output for the daily results.
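A rough sketch of that KeyedProcessFunction approach (not the poster's code; Event, HourlyResult, and DailyResult are placeholder types, the "aggregate" is a simple count, and events are assumed to carry event-time timestamps):
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class HourlyDailyAggregator extends KeyedProcessFunction<String, Event, DailyResult> {

    public static final OutputTag<HourlyResult> HOURLY = new OutputTag<HourlyResult>("hourly") {};
    private static final long HOUR = 3600_000L, DAY = 86_400_000L;

    private transient ValueState<Long> dailyCount;

    @Override
    public void open(Configuration parameters) {
        dailyCount = getRuntimeContext().getState(new ValueStateDescriptor<>("dailyCount", Long.class));
    }

    @Override
    public void processElement(Event event, Context ctx, Collector<DailyResult> out) throws Exception {
        Long count = dailyCount.value();
        dailyCount.update(count == null ? 1L : count + 1);
        // Register a timer at the end of the current hour; duplicate registrations coalesce.
        long hourEnd = ctx.timestamp() - (ctx.timestamp() % HOUR) + HOUR - 1;
        ctx.timerService().registerEventTimeTimer(hourEnd);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<DailyResult> out) throws Exception {
        Long count = dailyCount.value();
        if (count == null) return;
        // Hourly snapshot goes to the side output...
        ctx.output(HOURLY, new HourlyResult(ctx.getCurrentKey(), timestamp, count));
        // ...and the daily total to the main output at the end of the day.
        if ((timestamp + 1) % DAY == 0) {
            out.collect(new DailyResult(ctx.getCurrentKey(), timestamp, count));
            dailyCount.clear();
        }
    }
}
The hourly stream is then obtained with results.getSideOutput(HourlyDailyAggregator.HOURLY).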
There are quite a few questions here; I will try to answer all of them. First of all, if you specify your own trigger using trigger(), you effectively override the default trigger, and the window may no longer behave the way it would by default. For example, if you create a one-day event-time tumbling window but override the trigger so that it fires for every 20th element, it will never fire based on event time.
Now, after your custom trigger fires, the output of MyAggregateFunction is passed to MyProcessWindowFunction just as with the default trigger, so you don't need to access MyAggregateFunction from inside the trigger.
Finally, while it may be technically possible to implement a trigger that emits partial results every hour, my personal opinion is that you should go with the two separate windows. That solution may create slightly more overhead and a larger state, but it is much clearer, easier to implement, and much more robust.
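For reference, Flink ships with a built-in trigger that does roughly this: ContinuousEventTimeTrigger fires repeatedly at a fixed event-time interval and still fires at the end of the window. A hedged sketch on the original one-day window (note that each firing re-emits the cumulative aggregate on the same output stream, so downstream consumers must be able to overwrite earlier partials):
myStream
.keyBy(..)
.window(TumblingEventTimeWindows.of(Time.days(1)))
// fires every hour of event time and again at the end of the day;
// without a purging wrapper, each emitted result is cumulative
.trigger(ContinuousEventTimeTrigger.of(Time.hours(1)))
.aggregate(new MyAggregateFunction(), new MyProcessWindowFunction())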

Event is not scheduled in AnyLogic

In AnyLogic, how can I have an event triggered after running the simulation, so that I don't need to copy the table from the Log and paste it into Excel each time? I tried to use the database to store the variables, but it seems too complicated and I couldn't get it to work.
When I run the model in AnyLogic, the event is never triggered. It shows that the event is not scheduled. I have tried many things, but the result is always the same.
To answer the question about calling an event after the simulation:
In Main, you can call a function in the "On destroy" field. At the experiment level, you can also call a function after each run, iteration, or experiment.
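For example, a minimal sketch of a function you could call from Main's "On destroy" field, writing the rows to a CSV file instead of pasting them into Excel (exportLog and the collection myLog are assumptions for illustration):
// Hypothetical function exportLog() on Main, called from the "On destroy" field.
// myLog is an assumed collection of already-formatted String rows.
void exportLog() {
    try (java.io.PrintWriter out = new java.io.PrintWriter("results.csv")) {
        out.println("time,value"); // header row
        for (String row : myLog) out.println(row);
    } catch (java.io.FileNotFoundException e) {
        traceln("Could not write results.csv: " + e.getMessage());
    }
}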

quartz-scheduler dependent jobs

I'm working on a project with Quartz and have run into a problem with dependencies between jobs.
We have a setup where A and B aren't dependent on each other, though C is:
A and B can run at the same time, but C can only run when both A and B are complete.
Is there a way to set this kind of scenario up in Quartz, so that C will only trigger when A and B finish?
Not directly, AFAIK, but it should not be too hard to use a TriggerListener to implement such functionality (a TriggerListener runs at both the start and end of jobs, and you can set one up for individual triggers or trigger groups).
EDIT: there is even a specific FAQ Topic about this problem:
There currently is no "direct" or "free" way to chain triggers with Quartz. However there are several ways you can accomplish it without much effort. Below is an outline of a couple approaches:
One way is to use a listener (i.e. a TriggerListener, JobListener or SchedulerListener) that can notice the completion of a job/trigger and then immediately schedule a new trigger to fire. This approach can get a bit involved, since you'll have to inform the listener which job follows which - and you may need to worry about persistence of this information. See the listener org.quartz.listeners.JobChainingJobListener which ships with Quartz - as it already has some of this functionality.
Another way is to build a Job that contains within its JobDataMap the name of the next job to fire, and as the job completes (the last step in its execute() method) have the job schedule the next job. Several people are doing this and have had good luck. Most have made a base (abstract) class that is a Job that knows how to get the job name and group out of the JobDataMap using pre-defined keys (constants) and contains code to schedule the identified job. This abstract Job's implementation of execute() delegates to an abstract template method such as "doWork()" (where the extending Job class's real work goes) and then it contains the code for scheduling the follow-up job. Then they simply make extensions of this class that included the work the job should do. The usage of 'durable' jobs, or the overloaded addJob(JobDetail, boolean, boolean) method (added in Quartz 2.2) helps the application define all the jobs at once with their proper data, without yet creating triggers to fire them (other than one trigger to fire the first job in the chain).
In the future, Quartz will provide a much cleaner way to do this, but until then, you'll have to use one of the above approaches, or think of yet another that works better for you.
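Note that JobChainingJobListener only chains jobs one-to-one, so for the "C after both A and B" case a small custom JobListener is closer to what you need. A hedged sketch (the job names are assumptions; register it via scheduler.getListenerManager().addJobListener(...)):
import org.quartz.*;
import org.quartz.listeners.JobListenerSupport;
import java.util.*;

// Triggers job C once both A and B have completed successfully.
public class FanInJobListener extends JobListenerSupport {
    private final Set<JobKey> pending = Collections.synchronizedSet(
        new HashSet<>(Arrays.asList(new JobKey("jobA"), new JobKey("jobB"))));

    @Override
    public String getName() { return "fan-in-listener"; }

    @Override
    public void jobWasExecuted(JobExecutionContext context, JobExecutionException jobException) {
        if (jobException != null) return; // don't start C if A or B failed
        pending.remove(context.getJobDetail().getKey());
        if (pending.isEmpty()) {
            try {
                // assumes jobC was added beforehand as a durable job
                context.getScheduler().triggerJob(new JobKey("jobC"));
            } catch (SchedulerException e) {
                getLog().error("Could not trigger jobC", e);
            }
        }
    }
}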

CQRS/EventStore - changing two aggregates

I have a command that updates two aggregates. Since aggregate roots are transactional boundaries, I have a command that does a repository.Save() action on the first aggregate and then fires another command (from within the first command) which acts on the second aggregate. Each Save() action starts its own Event-Store transaction, commits the changes, and then publishes them.
First is this correct, i.e. letting one command notify another aggregate via another command?
I noticed in Mark Nijhof's code that he uses event handlers, which is nice as you can register multiple event handlers to the same event. I tried doing this using J Oliver's Event-Store, but my commits.events in IDispatchCommit were referencing the first aggregate's values when processing the second. This caused some weird errors.
So should I find a way of making this work with EventHandlers or is firing off commands within commands okay?
JD
Edit - I have switched my wire-up to use .UsingAsynchronousDispatchScheduler() and am now allowing registered events to fire more than one event handler, which in turn fires a command on the other aggregate, and it seems to work. So, is this the correct way to do it, rather than commands firing commands?
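In outline, the wiring described above looks something like the sketch below (written in Java like the other snippets in this digest; every type name here is an illustrative assumption, not J Oliver's API):
// Sketch: the dispatcher hands each committed event to registered handlers,
// and a handler reacts by sending a command to the second aggregate.
// EventHandler, CommandBus, FirstAggregateChanged and UpdateSecondAggregate
// are hypothetical names for illustration only.
public class FirstAggregateChangedHandler implements EventHandler<FirstAggregateChanged> {
    private final CommandBus commandBus;

    public FirstAggregateChangedHandler(CommandBus commandBus) {
        this.commandBus = commandBus;
    }

    @Override
    public void handle(FirstAggregateChanged event) {
        // The command goes through its own handler, load/save cycle and commit,
        // so each aggregate keeps its own transactional boundary.
        commandBus.send(new UpdateSecondAggregate(event.getSecondAggregateId()));
    }
}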
I think there are a million and one ways to skin this cat. I'm not sure firing a command from an event handler is the way to go; I have two command handlers respond to the same command in this instance.
I do find Documently good as a reference app. Have you looked at that?