I'm trying to measure/log the running time of a task.
I've looked into "wrapping" a task by adding one task before and one task after, but this would not work reliably, as sbt only guarantees a partial order of execution.
A better wrapping would be something along these lines:
wrappedTask := {
  startMeasuringTime()
  somehowInvoke(myTaskKey in SomeContext)
  endMeasuringTime()
}
What should this "somehowInvoke" be?
Measuring the time taken by a task needs support from the task executor.
As you imply, you cannot do this only by using task primitives.
I've pushed some sample code that I wrote a while back that shows the idea.
A complication that the sample code doesn't handle is that what the user conceptually thinks of as one task (compile, for example) may actually be implemented as several tasks, whose timings would need to be combined. Also, a task like internalDependencyClasspath "calls" other tasks (via flatMap), so its execution time includes the execution time of the "called" tasks.
EDIT: This was implemented for 0.13.0 as an experimental feature in 602c1759a1885 and 1cc2f57e158389759. The ExecuteProgress interface provides sufficient information that the previously described issues are not a problem.
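As a rough illustration of what "somehowInvoke" could look like outside the task system (this is not the ExecuteProgress mechanism above, just a sketch of a custom command; the name timedCompile and the timing logic are my own, and it measures compile together with all of its dependencies rather than a single task in isolation):

// build.sbt (sbt 0.13-style syntax) -- hypothetical command that times a full compile run
commands += Command.command("timedCompile") { state =>
  val start = System.nanoTime
  // Project.runTask runs the task and everything it depends on against the current state
  val result = Project.runTask(compile in Compile, state)
  val elapsedMs = (System.nanoTime - start) / 1000000
  state.log.info(s"compile (including dependencies) took $elapsedMs ms")
  result match {
    case Some((newState, _)) => newState // keep the state produced by the run
    case None                => state    // the key was not defined in this build
  }
}

Running timedCompile from the sbt shell then logs the wall-clock time of the whole run; for per-task numbers, the ExecuteProgress hook mentioned above is the right tool.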
Alright, so I'm learning CODESYS in school and I'm using function blocks. However, they didn't seem to update when updating local variables, so I made a test, which you can see below.
As you can see, in the FB below, "GVL.sw1" becomes True, but "a" doesn't. Why does it not become True? I tested a friend's code and his worked just fine, but mine doesn't...
https://i.stack.imgur.com/IpPPZ.png
Comment from Reddit:
You are showing the source code for a program called "main". You have a task running called "Main_Task". The program and task are not directly related.
Is "main" being called anywhere?
So I added main to the "main task" and it worked. I have no idea why it didn't work in the real assignment, but maybe I'll solve it now that I have gotten this far.
In your example you have 2 programs (PRG): main and PLC_PRG.
Creating a program doesn't mean that it will be executed/run. For that, you need to add the program to a Task in the Task Configuration. By default, each Task is executed on every cycle according to the priority it is configured with (you could also have it executed on an event, etc.). When a Task is executed, each program added to that Task is executed in the order in which they are listed (you can reorder them at any time).
With that said, if you look at your Task Configuration, the MainTask only has the program PLC_PRG added, so only that program will run. The main program that you are inspecting is never even run.
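For illustration (using the names from the question), the Task Configuration would need to look roughly like this for both programs to run on every cycle:

Task Configuration
  MainTask (cyclic)
    PLC_PRG
    main

Once main is added under MainTask, its body is executed every cycle, so the code you are inspecting actually runs and "a" can be updated.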
Recently, I have been studying the problem of task scheduling in Flink. My goal is to schedule subtasks onto a slot of a specific node, according to my own requirements, by modifying parts of the scheduling source code. Through remote debugging and reading the source code, I found the method call stack below, most of which I can't follow (there are few comments), especially this method: org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl#allocateMultiTaskSlot. I guess the code that allocates slots to tasks is somewhere around here. Because the source code is too difficult to read on my own, I have to ask for your help. Of course, if there is a better way to achieve what I need, please suggest one or two. I sincerely look forward to your reply, thank you very much!
The method call stack is as follows (the version of Flink I use is 1.11.1):
org.apache.flink.runtime.jobmaster.JobMaster#startJobExecution
org.apache.flink.runtime.jobmaster.JobMaster#resetAndStartScheduler
org.apache.flink.runtime.jobmaster.JobMaster#startScheduling
org.apache.flink.runtime.scheduler.SchedulerBase#startScheduling
org.apache.flink.runtime.scheduler.DefaultScheduler#startSchedulingInternal
org.apache.flink.runtime.scheduler.strategy.EagerSchedulingStrategy#startScheduling
(The call chain for the PipelinedRegionSchedulingStrategy class is similar; for simplicity I write it as the call chain of the EagerSchedulingStrategy class, which should make no difference.)
org.apache.flink.runtime.scheduler.strategy.EagerSchedulingStrategy#allocateSlotsAndDeploy
org.apache.flink.runtime.scheduler.DefaultScheduler#allocateSlotsAndDeploy
org.apache.flink.runtime.scheduler.DefaultScheduler#allocateSlots
org.apache.flink.runtime.scheduler.DefaultExecutionSlotAllocator#allocateSlotsFor
org.apache.flink.runtime.executiongraph.SlotProviderStrategy.NormalSlotProviderStrategy#allocateSlot
org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl#allocateSlot
org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl#allocateSlotInternal
org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl#internalAllocateSlot
org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl#allocateSharedSlot
org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl#allocateMultiTaskSlot
(I feel that this is the key method that allocates a slot to a subtask, i.e. an execution vertex, but there are no comments and I can't follow the logic of the process.)
I'd like to know why my build is sometimes much slower.
So I've decided to measure the time taken by long-running tasks.
When running pure Scala code, it is quite easy to do so:
def myMethod() = {
  val initTime = System.currentTimeMillis
  ...
  val elapsedTime = System.currentTimeMillis - initTime
}
But for tasks like packageBin or compile, whose source code I cannot change, I don't know how to measure it, because I cannot control when someTask.value is run.
Any hint?
Related questions:
Profiling sbt builds
Add -Dsbt.task.timings=true to your JAVA_OPTS when launching sbt.
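For example (assuming the standard sbt launcher script, which forwards -D options to the started JVM), the following prints a per-task timing report after the command finishes:

sbt -Dsbt.task.timings=true compile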
For a more complete analysis, you can also use jrudolph/sbt-optimizer.
sbt-optimizer is an experimental plugin that hooks into sbt's task execution engine and offers a graphical ASCII report once a tree of tasks has been run.
Add the plugin to project/plugins.sbt:
addSbtPlugin("net.virtual-void" % "sbt-optimizer" % "0.1.2")
and enable it in a project with:
enablePlugins(net.virtualvoid.optimizer.SbtOptimizerPlugin)
Each output line corresponds to one task that has been executed.
The first time is the total time this task was running.
The second time displayed in green is the actual execution time.
The third time displayed in red is time the task wanted to run but was waiting for the global ivy lock.
The last time displayed in cyan is the time the task was blocked waiting for Ivy downloads.
Here's the situation that I'm trying to describe, and would like to reflect in the org file:
Suppose you started a task, but while working on it you realized that the task's description must be amended in a way that justifies extending it. Still, you don't want to lose the original estimate, for historical purposes, so roughly what I would like to have is:
* STARTED Do something seemingly easy
  DEADLINE: <2013-06-09>
  This should be a breeze
  AMENDED: <2013-06-10>
  Looks like this will require more effort
The agenda buffer would use the latest date in the task, but it would still be "interesting" to know the original estimate after the task was finished.
There are many options in org-mode for logging or adding notes whenever a task's properties change.
Check for example lognotereschedule, which will ask you for a note when rescheduling a task.
You can choose to store all notes and status changes in a special drawer called LOGBOOK in order to avoid clutter.
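For example (standard org-mode options; worth double-checking against your org version): setting org-log-reschedule to 'note, or adding #+STARTUP: lognotereschedule to the file, prompts for a note whenever a scheduled date is changed, and org-log-redeadline / lognoteredeadline does the same for DEADLINE changes like the one in your example. Setting org-log-into-drawer to t (or #+STARTUP: logdrawer) files those notes and state changes into the LOGBOOK drawer mentioned above.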
Are pytest_sessionstart(session) and pytest_sessionfinish(session) valid hooks? They are not described in the dev hook docs or the latest hook docs.
What is the difference between them and pytest_configure(config)/pytest_unconfigure(config)?
In the docs it is said:
pytest_configure(config): called after command line options have been parsed and all plugins and initial conftest files have been loaded.
and
pytest_unconfigure(config): called before the test process is exited.
Session is the same, right?
Thanks!
The bad news is that the situation with sessionstart/configure is not very well specified. Sessionstart in particular is not much documented, because the semantics differ depending on whether one is in the xdist/distributed case or not. One can distinguish these situations, but it's all a bit too complicated.
The good news is that pytest-2.3 should make things easier. If you define a fixture with scope="session", you can implement a fixture that is called once per process within which tests execute.
For distributed testing, this means once per test slave. For single-process testing, it means once for the whole test run. In either case, if you do a "--collectonly" run, or "-h" or other options that do not involve the running of tests, then fixture functions will not execute at all.
Hope this clarifies.
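As a concrete sketch of that (the specifics are illustrative): a fixture defined with @pytest.fixture(scope="session", autouse=True) that registers its cleanup with request.addfinalizer will run its setup once per test process before the first test and its finalizer after the last one, which covers most of what pytest_sessionstart/pytest_sessionfinish were typically used for.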