As far as I know, the ramp-up function has been removed from Locust.
I'm just wondering whether the hatching process is the same as, or similar to, ramp-up. Or is there any way to simulate that behaviour?
I'm not sure which ramp-up function you are talking about. There are plenty of options for controlling ramp-up in Locust, including the new step-load mode:
--step-load           Enable Step Load mode to monitor how performance
                      metrics varies when user load increases. Requires
                      --step-clients and --step-time to be specified.
--step-clients STEP_CLIENTS
                      Client count to increase by step in Step Load mode.
                      Only used together with --step-load
--step-time STEP_TIME
                      Step duration in Step Load mode, e.g. (300s, 20m, 3h,
                      1h30m, etc.). Only used together with --step-load
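For example, with a minimal locustfile like the sketch below (Locust 1.x API; the user class, endpoint and host are placeholders, and the exact flag names vary between Locust versions), you could start a stepped run using those flags:

# locustfile.py -- run headless with step load, e.g. (Locust 1.x flags):
#   locust -f locustfile.py --headless -u 100 -r 10 \
#          --step-load --step-clients 20 --step-time 5m --host https://example.com
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    wait_time = between(1, 2)   # each simulated user pauses 1-2 s between tasks

    @task
    def index(self):
        self.client.get("/")    # placeholder endpoint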
I just found a workaround: using Taurus to schedule the load.
Is there a way to create a custom DispatchQueue quality of service with its own custom "speed"? For example, I want a QoS that's twice as slow as .utility.
Ideas on how to solve it
Somehow telling the CPU/GPU that we want to run the task every X operation cycles? Not sure if that's directly possible with iOS.
We could introduce a wait after every line of code; this is a really bad hack that produces messy code, and it doesn't really solve the issue if a single line runs for several seconds.
In SpriteKit/SceneKit, it's possible to slow down time. Is there a way to utilize that somehow to slow down an arbitrary piece of code?
Blocking the thread every X seconds so that it slows down - not sure if possible without sacrificing app speed
There is no mechanism in iOS or any other Cocoa platform to control the "speed" (for any meaning of that word) of a work item. The only tool offered to us is some control over scheduling. Once your work item is scheduled, it will get 100% (*) of the CPU core until it ends or is preempted. There is no way to ask to be preempted more often (and it would be expensive to allow that, since context switches are expensive).
The way to manage how much work is done is to manage the work directly, not the preemption. The best approach is to split the work into small pieces, schedule them over time, and combine the results at the end (see the sketch after this answer). If your algorithm doesn't support that kind of input segmentation, then the algorithm's main "loop" needs to limit the number of iterations it performs (or the amount of time it spends iterating) and return at that point, to be scheduled again later.
If you don't control the algorithm code, and you cannot work with whoever does, and you cannot slice your data into smaller pieces, this may be an unsolvable problem.
(*) With the rise of "performance" cores and other such CPU advances, this isn't completely true, but for this question it's close enough.
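The question is about iOS, but the slicing idea itself is language-agnostic. Here is a rough Python sketch of the pattern (process a bounded chunk, return, get rescheduled a little later); on the Cocoa side the rescheduling step would be something like DispatchQueue.asyncAfter. The names do_work, chunk_size and pause are purely illustrative:

import threading

def do_work(item):
    pass  # placeholder for the real per-item work

def process_in_chunks(items, start=0, chunk_size=100, pause=0.05):
    # Process a bounded slice, then return so the scheduler can run other work.
    end = min(start + chunk_size, len(items))
    for item in items[start:end]:
        do_work(item)
    if end < len(items):
        # Come back for the next slice after a short pause.
        threading.Timer(pause, process_in_chunks,
                        args=(items, end, chunk_size, pause)).start()

# process_in_chunks(list_of_items) kicks off the sliced processing.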
Technically you cannot alter the speed of a QoS such as .background, .utility, or any other QoS.
The way to handle this is to choose the right QoS based on the task you want to perform.
The higher the QoS, the more resources the OS will devote to the work; lower QoS levels get correspondingly fewer.
Following your advice, I'm building small models to learn how to use AnyLogic and construct my simulation.
I need a discrete-event diagram interacting with an agent-based one, where the agent-based part represents a "service process". Based on a previous recommendation, it was straightforward to trigger the agent-based activity, but I cannot stop, suspend, or delay the "delay" block. I tried to use the "until stopDelay() is called" option but could not make it work, and I also tested a cyclic event inside the discrete-event agent, without success. Maybe my approach is not correct and I need a different strategy to stop the discrete-event process while the agent-based process is running; however, since the agent-based part attempts to simulate some human behaviour, I'm interested in the time variations this could cause in the discrete-event process. So my question is: how do I stop or suspend the service delay or the delay blocks and restart them from the agent-based diagram?
If you just need to store an entity somewhere until the agent process is done, then I would recommend using a 'wait' block instead of a 'delay'. The whole point of a delay is to have a timed exit, so suspending it doesn't align with the intended use case. You can read more about the 'wait' block here.
I found that the Job Shop model example has some blocks using stopDelayForAll() inside an "if" block, and I noticed it was using a parameter. I made some changes, and the code I'm using, which works, is this:
if ( Inqueue >= queCap )
    delay.stopDelayForAll();  // release all agents currently held in the delay block
"Inqueue" is a variable capturing data from the delay block and queCap is a parameter telling the queue block capacity.
I have been trying to get Druid to fire a kill task periodically to clean up unused segments.
These are the configuration variables responsible for it:
druid.coordinator.kill.on=true
druid.coordinator.kill.period=PT45M
druid.coordinator.kill.durationToRetain=PT45M
druid.coordinator.kill.maxSegments=10
From the above configuration, my mental model is that once ingested data is marked unused, the kill task will fire and delete segments older than 45 minutes while retaining 45 minutes' worth of data. period and durationToRetain are the config vars that confuse me; I'm not quite sure how to leverage them. Any help would be appreciated.
The caveat for druid.coordinator.kill.on=true is that segments are deleted only from whitelisted datasources, and the whitelist is empty by default.
To populate the whitelist with all datasources, set killAllDataSources to true. Once I did that, the kill task fired as expected and deleted the segments from s3 (COS). This was tested for Druid version 0.18.1.
Now, while the above configuration properties can be set when you build your image, killAllDataSources needs to be set through an API. It can also be set via the Druid UI.
When you click the option, a modal appears that has Kill All Data Sources. Click on True and you should see a kill task (Ingestion ---> Tasks below) firing at the interval specified. It would be really nice to have this as part of runtime.properties or some sort of common configuration file that we could set the value in when building the Druid image.
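For reference, here is a rough sketch of setting that flag through the coordinator's dynamic-configuration API from Python. The endpoint and field name are what I believe 0.18.x exposes; verify them against the docs for your version, and fetch the current config first so you don't clobber other dynamic settings:

import requests

COORDINATOR = "http://localhost:8081"  # coordinator host:port -- placeholder

# Fetch the current dynamic config so that the other settings are preserved.
config = requests.get(COORDINATOR + "/druid/coordinator/v1/config").json()
config["killAllDataSources"] = True

# Push the updated dynamic config back to the coordinator.
resp = requests.post(COORDINATOR + "/druid/coordinator/v1/config", json=config)
resp.raise_for_status()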
Use crontab; it works quite well for us.
If you want control over segment removal from outside Druid, you can use a scheduled task that runs at your desired interval and registers kill tasks in Druid. This gives you more control over your segments, which matters because once they are gone you cannot recover them. You can use this script as a starting point:
https://github.com/mostafatalebi/druid-kill-task
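If you prefer to roll your own, a small script along these lines could be run from cron at whatever interval you like. The overlord endpoint and kill-task payload are assumptions based on recent Druid versions, so check them against your version's documentation (and remember that killed segments cannot be recovered):

import requests

OVERLORD = "http://localhost:8090"      # overlord (or router) host:port -- placeholder
DATASOURCE = "my_datasource"            # datasource to clean up -- placeholder
INTERVAL = "2020-01-01/2020-06-01"      # interval whose unused segments should be deleted

# Submit a kill task; it permanently deletes unused segments in the given interval.
task = {"type": "kill", "dataSource": DATASOURCE, "interval": INTERVAL}
resp = requests.post(OVERLORD + "/druid/indexer/v1/task", json=task)
resp.raise_for_status()
print("Submitted kill task:", resp.json().get("task"))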
I have been developing a MATLAB Simulink model for a client, and the model is required to be compatible with a time rate of 1 ms. My client will integrate this model into his parent model, which runs at this rate on a multi-core machine.
Even though this will run on a multi-core machine, my model will share a core with other tasks, which is all the more reason to make sure my model won't consume a lot of processing time and cause task overruns.
I believe there is a good-practice rule or conceptual definition that stipulates a desired turnaround time depending on the lead time. Is there?
I would like to know how far below the lead time (1 ms) I should aim to keep the turnaround time.
I would appreciate it if you could point me to any reference.
If your application needs precise timing, you should consider a trigger signal sent from hardware, like an Arduino, or the internal trigger/timer of the instrument itself. This will not be influenced by the load on your computer.
If you have to use software control, you can have the software send a trigger signal over a serial port. This works more like an on-demand mode, but a precise time interval is difficult to guarantee.
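For example, a software-side trigger over a serial port could look roughly like this in Python (a sketch assuming the pyserial package; the port name, baud rate and trigger byte are placeholders for your setup):

import time
import serial  # pyserial

# Open the serial port the trigger hardware listens on.
ser = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)

for _ in range(10):
    ser.write(b"T")   # one-byte trigger to the listening hardware
    time.sleep(1.0)   # spacing depends on OS scheduling, so it is only approximate

ser.close()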
I'm developing a real-time system with FreeRTOS on an STM3240G board.
The system contains several different tasks (GUI, KB, ModBus, Ctrl, etc.).
The tasks have different priorities.
The GUI seems to display a little slowly.
So I used profiler software to see what is going on between the different tasks during a run. The profiler shows me which task was running at each moment (with microsecond resolution) and which interrupts arrived. It also lets me "mark" different locations in the code so I know when execution was there. So I ran the program and made a recording.
Looking at the recording, I saw that (for example) the Ctrl task sat between two lines of code for 15 milliseconds (this duration varies). No task change occurred and no interrupt arrived during that time, and afterwards the system continued normally from that point, according to the recording and my marks.
I tried disabling different interrupts, without any success.
Does anyone have any idea what it could be?
On the eval board, there is a MIPI connector that supports ETM trace - a considerable luxury/advantage over other development boards!
If you also have one of the more expensive debug adapters that support ETM trace (for example, uTrace, J-Trace, ULINKpro or I-jet Trace), you should use it to trace the entire control flow without having to instrument tasks and ISRs.
Otherwise, you should re-check whether every IRQ handler has really been instrumented (credit to #RealtimeRik, who pointed this out) at a low-enough level that the profiler can actually spot it.
Especially if you are using third-party libraries, it may be that some IRQs are serviced by handlers whose code you (or the profiler) don't have.
Having had to learn this once myself, I suggest you carefully review the NVIC settings to re-check whether there is an ISR you haven't been aware of.
Another question is how the profiler internally works.
If it is based on ETM/TPIU or ITM/SWO tracing, see above.
If it creates and counts a huge number of snapshots, there might be some systematic cases that prevent snapshots from being taken in a particular part of the software:
Could it be that a non-maskable interrupt or exception handler is running in a way that cannot be interrupted by the mechanism that collects snapshots?
Could it be that the timing of the control task correlates (by some coincidence) to a timing signal used for snapshots?
What happens if you insert some time-consuming extra code in front of the unexpected "profiling gap" (e.g., some hundreds or thousands of NOPs)?