Measuring Duration Of Service Fabric Actor Methods - azure-service-fabric

I wish to measure the time it takes for some of my Actor methods to execute. I plan to use OnPreActorMethodAsync and OnPostActorMethodAsync to log when these methods have begun and finished execution.
However, I also wish to measure how long these methods take to execute. My original plan was to start a Stopwatch in OnPreActorMethodAsync and stop it in OnPostActorMethodAsync, but I'm unsure how I can access the same Stopwatch instance in the second method. If there is a better way of doing this, I would be very interested.
If anyone could point me in the right direction on this matter that would be great.
Many Thanks

By using some dirty reflection tricks you can access the actor's call context.
More about it here: https://stackoverflow.com/a/39526751/5946937
(Disclaimer: Actor internals may have changed since then)
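If you'd rather avoid reflection, another option is to keep a Stopwatch on the actor instance itself and drive it from the two overrides. This is not what the linked answer does; it's a minimal sketch that assumes the default turn-based, non-reentrant actor concurrency (only one method call runs on an actor at a time), and the interface name and logging call are placeholders:

```csharp
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Runtime;

public interface IMyActor : IActor { }   // placeholder actor interface

internal class MyActor : Actor, IMyActor
{
    // Safe as an instance field only under the default non-reentrant,
    // turn-based concurrency: one method call per actor at a time.
    private readonly Stopwatch _methodTimer = new Stopwatch();

    public MyActor(ActorService actorService, ActorId actorId)
        : base(actorService, actorId) { }

    protected override Task OnPreActorMethodAsync(ActorMethodContext actorMethodContext)
    {
        _methodTimer.Restart();
        return Task.CompletedTask;
    }

    protected override Task OnPostActorMethodAsync(ActorMethodContext actorMethodContext)
    {
        // Replace with your preferred logging/telemetry sink.
        Debug.WriteLine(
            $"{actorMethodContext.MethodName} took {_methodTimer.ElapsedMilliseconds} ms");
        return Task.CompletedTask;
    }
}
```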

How to run Unreal blueprint nodes on BeginPlay that depend on another blueprint

I need blueprint A to run nodes in BeginPlay which rely on a variable in blueprint B, but that variable is null until it is set in B's BeginPlay. Of course, A's BeginPlay could run before B's, and then I would run into errors. I can think of two ways to work around this, but neither feels like a proper approach:
In A's BeginPlay, add a Delay node with a duration of a second or less, in the hope that B's variable has been initialized by then. It seems like this could easily break things and isn't clean.
Have an Event Dispatcher in B called "VariableSet". A binds an event to it in BeginPlay and that event runs the dependent code. This usually works but I haven't heard of anyone doing this.
Is there a proven, documented method to avoid null pointers in BeginPlay?
I've faced a similar problem in the past.
In my case I just used Event Dispatchers. I'm not too proud of that approach, but it did the job.
The Parent calls the Event Dispatcher at the end of its BeginPlay execution flow.
The Child binds the Event Dispatcher to a Custom Event at the beginning of its own BeginPlay execution flow.
When the Parent's BeginPlay finishes (which calls the Event Dispatcher), the logic in the Child's Custom Event runs.
For me it worked really well, because one Parent had a lot of Children, and those Children had their own Children, and so on,
and thanks to the dispatchers and events I was able to easily track what was going on.
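For reference, the same pattern can be written in C++ with a dynamic multicast delegate, which is what a Blueprint Event Dispatcher is under the hood. This is only an illustrative sketch; the class names, the ActorBInstance pointer, and the handler are hypothetical, and A's header is assumed to declare HandleVariableSet as a UFUNCTION:

```cpp
// ActorB.h: "B" declares a dispatcher-like delegate and broadcasts it
// once its variable has been initialized.
#include "GameFramework/Actor.h"
#include "ActorB.generated.h"

DECLARE_DYNAMIC_MULTICAST_DELEGATE(FOnVariableSet);

UCLASS()
class AActorB : public AActor
{
    GENERATED_BODY()

public:
    // C++ equivalent of the "VariableSet" Event Dispatcher.
    UPROPERTY(BlueprintAssignable)
    FOnVariableSet OnVariableSet;

protected:
    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        // ... initialize the variable that A depends on ...
        OnVariableSet.Broadcast();
    }
};

// ActorA.cpp: "A" binds a handler in its own BeginPlay. As with the Blueprint
// version, this only helps if A binds before B broadcasts.
void AActorA::BeginPlay()
{
    Super::BeginPlay();
    if (ActorBInstance) // hypothetical pointer to the AActorB instance
    {
        // The handler must be a UFUNCTION for AddDynamic to work.
        ActorBInstance->OnVariableSet.AddDynamic(this, &AActorA::HandleVariableSet);
    }
}

void AActorA::HandleVariableSet()
{
    // B's variable is set by the time this runs.
}
```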

Task scheduling in Flink: how can I place subtasks in a slot of a specified task manager?

Recently, I have been studying the task scheduling problem in Flink. My goal is to schedule subtasks onto a slot of a specific node, according to my own needs, by modifying parts of the scheduling source code. Through remote debugging and reading the source, I found the method call stack below, most of which I can't understand (there are few comments), especially this method: org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl#allocateMultiTaskSlot. I guess the code that allocates slots to tasks is around here. Because the source code is hard to read, I have to ask for help. Of course, if there is a better way to achieve what I need, please suggest one or two. I sincerely look forward to your reply. Thank you very much!
The method call stack is as follows (the Flink version I use is 1.11.1):
org.apache.flink.runtime.jobmaster.JobMaster#startJobExecution
org.apache.flink.runtime.jobmaster.JobMaster#resetAndStartScheduler
org.apache.flink.runtime.jobmaster.JobMaster#startScheduling
org.apache.flink.runtime.scheduler.SchedulerBase#startScheduling
org.apache.flink.runtime.scheduler.DefaultScheduler#startSchedulingInternal
org.apache.flink.runtime.scheduler.strategy.EagerSchedulingStrategy#startScheduling
(The call chain for the PipelinedRegionSchedulingStrategy class looks similar; for simplicity I write it as the call chain of the EagerSchedulingStrategy class, which should make no difference.)
org.apache.flink.runtime.scheduler.strategy.EagerSchedulingStrategy#allocateSlotsAndDeploy
org.apache.flink.runtime.scheduler.DefaultScheduler#allocateSlotsAndDeploy
org.apache.flink.runtime.scheduler.DefaultScheduler#allocateSlots
org.apache.flink.runtime.scheduler.DefaultExecutionSlotAllocator#allocateSlotsFor
org.apache.flink.runtime.executiongraph.SlotProviderStrategy.NormalSlotProviderStrategy#allocateSlot
org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl#allocateSlot
org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl#allocateSlotInternal
org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl#internalAllocateSlot
org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl#allocateSharedSlot
org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl#allocateMultiTaskSlot
(I feel this is the key method for allocating a slot to a subtask, i.e. an execution vertex, but there are no comments and I can't follow the logic.)

Understanding GLib Task and Context

I don't understand the GTask functionality. Why do I need it?
In my mind it is like a callback: you attach a callback to a source in some context, and the callback is called when the event happens.
In general, I'm a bit confused about what a context and a task are in GLib and why we need them.
In my understanding there is a main loop (only one?) that can run several contexts (what is a context?), and each context is related to several sources, which in turn have callbacks that act like handlers.
So can someone please help me make sense of it all?
I don't understand the GTask functionality. Why do I need it? In my mind it is like a callback: you attach a callback to a source in some context, and the callback is called when the event happens.
The main functionality GTask exposes is easily and safely running a task in a thread and returning the result back to the main thread.
In general, I'm a bit confused about what a context and a task are in GLib and why we need them. In my understanding there is a main loop (only one?) that can run several contexts (what is a context?), and each context is related to several sources, which in turn have callbacks that act like handlers.
For simplicity I think it's safe to consider contexts and loops the same thing, and there can be multiple of them. So, in order to be thread-safe, the task must know which context the result should be returned to.
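To make that concrete, here is a minimal sketch of the round trip: g_task_run_in_thread() executes the worker function on a pool thread, and the ready callback is invoked back in the GMainContext that was the thread-default when the task was created (here, the default context driven by the main loop). The GLib/GIO calls are real API; the rest of the program is illustrative:

```c
#include <gio/gio.h>

/* Runs in a worker thread. */
static void
compute_in_thread (GTask *task,
                   gpointer source_object,
                   gpointer task_data,
                   GCancellable *cancellable)
{
  g_usleep (G_USEC_PER_SEC);   /* stand-in for expensive work */
  g_task_return_int (task, 42);
}

/* Runs back in the main context that created the task. */
static void
on_done (GObject *source, GAsyncResult *res, gpointer user_data)
{
  GMainLoop *loop = user_data;
  GError *error = NULL;
  gssize value = g_task_propagate_int (G_TASK (res), &error);

  if (error != NULL)
    g_printerr ("task failed: %s\n", error->message);
  else
    g_print ("result: %" G_GSSIZE_FORMAT "\n", value);

  g_clear_error (&error);
  g_main_loop_quit (loop);
}

int
main (void)
{
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);

  GTask *task = g_task_new (NULL, NULL, on_done, loop);
  g_task_run_in_thread (task, compute_in_thread);
  g_object_unref (task);   /* the running task keeps its own reference */

  g_main_loop_run (loop);
  g_main_loop_unref (loop);
  return 0;
}
```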

Should a function returning a boolean be used in an if statement?

If I have a function call that returns true or false as the condition of an if statement, do I have to worry about Swift forking my code for efficiency?
For example,
if (resourceIsAvailable()) { //execute code }
If checking for the resource is computationally expensive, will Xcode wait or attempt to continue on with the code?
Is this worth using a completion handler?
What if the resource check must make a database call?
Good question.
First off... can a function be used? Absolutely.
Second... should it be used?
A lot of that depends on the implementation of the function. If the function is known (to the person who wrote it) to take a long time to complete then I would expect that person to deal with that accordingly.
Thankfully, on iOS a lot of that is (mostly) taken out of the developer's hands. Core Data and network requests normally come with a completion handler, so any function that uses them would also need to be async and take a completion handler.
There is no fixed rule for this. My best advice would be...
If you can see the implementation of the function then try to work out what it’s doing.
If you can’t, then give it a go. You could even use the Time Profiler instrument in Xcode to measure how long it takes to complete.
The worst that could happen is you find it is slow and then change it for something else.
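If the check does turn out to be expensive (for example, a database call), one common approach is to move it off the calling thread and deliver the answer through a completion handler. A rough sketch using Grand Central Dispatch; resourceIsAvailable() here is a stand-in for whatever the real check is:

```swift
import Dispatch

// Stand-in for a slow, blocking check (file system, network, database, ...).
func resourceIsAvailable() -> Bool {
    return true
}

// Runs the check on a background queue and reports back on the main queue.
func checkResourceAvailability(completion: @escaping (Bool) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        let available = resourceIsAvailable()   // potentially expensive
        DispatchQueue.main.async {
            completion(available)               // hand the result back on the main thread
        }
    }
}

// Usage: the branch runs later, without blocking the caller.
checkResourceAvailability { available in
    if available {
        // execute code
    }
}
```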

Clarification about Scala Future that never complete and its effect on other callbacks

While re-reading scala-lang.org's page detailing Future here, I stumbled upon the following sentence:
In the event that some of the callbacks never complete (e.g. the callback contains an infinite loop), the other callbacks may not be executed at all. In these cases, a potentially blocking callback must use the blocking construct (see below).
Why may the other callbacks not be executed at all? I may install a number of callbacks for a given Future. The thread that completes the Future may or may not execute the callbacks. But because one callback is misbehaving, the rest should not be penalized, I think.
One possibility I can think of is the way ExecutionContext is configured. If it is configured with one thread, then this may happen, but that is a specific behaviour and a not generally expected behaviour.
Am I missing something obvious here?
Callbacks are called within an ExecutionContext, which ultimately has a limited number of threads: if not limited by the specific context implementation, then by the underlying operating system and/or the hardware itself.
Let's say your system's limit is OS_LIMIT threads. You create OS_LIMIT + 1 callbacks. From those, OS_LIMIT callbacks immediately get a thread each - and none ever terminate.
How can you guarantee that the remaining 1 callback ever gets a thread?
Sure, there could be some detection mechanisms built into the Scala library, but it's not possible in the general case to make an optimal implementation: maybe you want the callback to run for a month.
Instead (and this seems to be the approach in the Scala library), you could provide facilities for handling situations that you, the developer, know are risky. This removes the element of surprise from the system.
Perhaps most importantly, it enables the developer to "bake in" the necessary information about handler/task characteristics directly into their program, rather than relying on some obscure piece of language functionality (which may change from version to version).
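To make the "blocking construct" mentioned in the quoted documentation concrete: wrapping the risky part of a callback in scala.concurrent.blocking signals the global ExecutionContext that the thread is about to block, so it can compensate (for example by adding a temporary worker thread) instead of letting callbacks like this starve the pool. A small sketch:

```scala
import scala.concurrent.{Future, blocking}
import scala.concurrent.ExecutionContext.Implicits.global

object BlockingCallbackExample extends App {
  val f = Future { 42 }

  f.foreach { value =>
    blocking {
      // Long-running or blocking work. Without `blocking`, enough callbacks
      // like this one could occupy every pool thread and prevent other
      // callbacks from ever running.
      Thread.sleep(5000)
      println(s"done with $value")
    }
  }

  Thread.sleep(6000) // keep the JVM alive long enough for the callback (demo only)
}
```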