Set iteration count - locust

I'm building a locust script to be integrated into our CI/CD pipeline as a synthetic monitoring solution. It will be triggered every 15 minutes and run a single iteration each time. If the application fails, alerts will be raised and sent to the appropriate personnel.
Currently, I don't see an iteration-count command line option anywhere in the locust help. I do see a --run-time option, but that controls how long the test runs rather than how many times it runs.

If you add locust-plugins, there is now a way to do this, using the command line parameter -i. See https://github.com/SvenskaSpel/locust-plugins#command-line-options
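For context, a minimal sketch of what the CI step might look like, assuming locust-plugins is installed and imported in the locustfile; the file name, host, and user count below are placeholders:
# Hypothetical CI step: run a single iteration headlessly, then exit.
# The -i/--iterations flag comes from locust-plugins; the rest is stock locust.
pip install locust locust-plugins
locust -f locustfile.py --headless -u 1 -r 1 -i 1 --host https://my-app.example.com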

Is there an option to forever run a collection with newman, until stopped?

I'm currently using newman as a library, running a collection from the command line within a Node.js application.
I'm using it for monitoring responses in real time.
If I need to perform long-running testing, for example for an hour, is that possible?
I do not want to give a finite iteration count; I'd rather press Ctrl+C on the command line to make newman stop.

dotMemory command line scheduled snapshots

I'm running the dotMemory command line tool against an IoT Windows Forms application, which requires many hours of tests on a custom appliance.
My purpose is to get memory snapshots on a time basis while the application is running on the appliance. For example, if the test is designed to run for 24h, I want to get a 10-second memory snapshot each hour.
I found two ways of doing it:
1. Run dotMemory.exe and get a standalone snapshot on a time basis, using schtasks to schedule each execution;
2. Run dotMemory with the attach and trigger arguments and collect all the snapshots in a single file.
The first scenario is already working for me but, as is easy to see, the second one is much better for further analysis after collecting the data.
I'm able to start it with a command like this:
C:\dotMemory\dotMemory.exe attach $processId --trigger-on-activation --trigger-timer=10s --trigger-max-snapshots=24 --trigger-delay=3600s --save-to-dir=c:\dotMemory\Snapshots
Here comes my problem:
How can I make the command/process stop after it reaches the max-snapshot value without any human intervention?
Reference: https://www.jetbrains.com/help/dotmemory/Working_with_dotMemory_Command-Line_Profiler.html
If you start your app under profiling instead of attaching to an already running process, stopping the profiling session will kill the app under profiling. You can stop the profiling session by passing the ##dotMemory["disconnect"] command to the dotMemory console's stdin (e.g., a script can do that after some time).
See dotmemory help service-messages for details:
##dotMemory["disconnect"] Disconnect profiler.
If you started profiling with 'start*' commands, the profiled process will be killed.
If you started profiling with 'attach' command, the profiler will detach from the process.
P.S.
Some notes about your command line: with that command line, dotMemory will take a snapshot every 10 seconds, but only start doing so after one hour. There is also no such thing as a "10-second memory snapshot"; a memory snapshot is a momentary snapshot of the object graph in memory. The right command line for your task would be C:\dotMemory\dotMemory.exe attach $processId --trigger-on-activation --trigger-timer=1h --trigger-max-snapshots=24 --save-to-dir=c:\dotMemory\Snapshots
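As an illustration only, a rough wrapper along these lines (bash-style, e.g. under Git Bash or WSL; the sleep duration and paths are assumptions) could keep dotMemory's stdin open and send the disconnect message once the last snapshot should have been taken:
# Hypothetical wrapper: feed the disconnect service message to dotMemory's stdin
# after roughly 24 hours, so the profiler detaches without human intervention.
{
  sleep $((24 * 3600 + 300))        # assumed run length: 24h plus a small safety margin
  echo '##dotMemory["disconnect"]'  # service message listed by `dotmemory help service-messages`
} | /c/dotMemory/dotMemory.exe attach $processId \
      --trigger-on-activation --trigger-timer=1h --trigger-max-snapshots=24 \
      --save-to-dir='c:\dotMemory\Snapshots'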

How can one view the partial output of a job in PBS that has exceeded its walltime?

I'm new to using cluster computers to run experiments. I have a script running in python that should be regularly printing out information, but I find that when my job exceeds its walltime, I get no output at all except the notification that the job has been killed.
I've tried regularly flushing the buffer to no avail, and was wondering if there was something more basic that I'm missing.
Thanks!
I'm guessing you are having issues with a job cleanup script in the epilogue. You may want to ask the admins about it. You may also want to try a different approach.
If you redirect your output to a file on a shared filesystem, you should be able to avoid data loss. This assumes you have a shared filesystem to work with and you aren't required to stage in and stage out all of your data.
If you reuse your submission script you can avoid clobbering the output of other jobs by including the $PBS_JOBID environment variable in the output filename.
python script.py > $PBS_JOBID.out
I'm on mobile, so check the qsub man page for a list of job environment variables.
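As a rough sketch (the script name, job name, and walltime are placeholders), the submission script could look something like this; python -u additionally disables Python's output buffering so partial results reach the file as they are printed:
#!/bin/bash
#PBS -N my_experiment
#PBS -l walltime=02:00:00
# Hypothetical submission script: write stdout/stderr straight to a file on the
# shared filesystem, named after the job ID so reruns don't clobber each other.
cd $PBS_O_WORKDIR
python -u script.py > $PBS_JOBID.out 2>&1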

Matlab/Simulink: run batch of simulations in parallel?

I have to run a series of simulations and save the results. Since by default Matlab only uses one core, I wonder if it is possible to open multiple worker tasks and assign different simulation runs to them?
You could run each simulation in a separate MATLAB instance and let the OS handle the process-to-core assignment.
One master MATLAB instance could synchronize the child instances, for example by checking whether the simulation result files exist yet.
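A minimal sketch of that idea from a shell (run_sim.m, the case numbers, and the result-file naming are assumptions, not part of the original answer):
# Hypothetical launcher: one headless MATLAB per simulation case; the OS
# spreads the processes across the cores. run_sim(n) is assumed to save its
# results to a file such as results_n.mat before returning.
for n in 1 2 3 4; do
  matlab -nodisplay -nosplash -r "run_sim($n); exit" &
done
wait   # return once every child instance has finished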
I also had the same problem, but I did not manage to figure out how to do it in MATLAB itself; the MATLAB documentation was too advanced for me to work it out from.
Since I am working on Ubuntu, I found a way to do the work by calling the unix command from MATLAB and using GNU parallel.
That way I managed to run my simulations in parallel on 4 cores:
unix('parallel --progress -j4 flow > /dev/null :::: Pool.txt','-echo')
You can find more info in the linked question Shell, run four processes parallel.
Details of the syntax can be found at https://www.gnu.org/software/parallel/, but briefly:
--progress shows the status of the progress
-j4 sets the number of jobs you want to run in parallel
flow is the name of my simulator
> /dev/null just keeps the simulator's console output from showing up on screen
Pool.txt is a file I made with the required simulator input, which is basically the path to the main simulator file for each run
'-echo' I do not remember now what that was for :D
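To make the Pool.txt part concrete, here is a hypothetical standalone version of the same call (the paths and file names are made up); parallel reads Pool.txt one line at a time and passes each line as the argument to flow:
# Hypothetical Pool.txt: one simulator input path per line.
cat > Pool.txt <<'EOF'
/home/user/sims/case01/main.inp
/home/user/sims/case02/main.inp
/home/user/sims/case03/main.inp
/home/user/sims/case04/main.inp
EOF
# The same command the unix() call runs, straight from a shell:
parallel --progress -j4 flow :::: Pool.txt > /dev/null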

stackdriver gcloud log write throughput

I am looking into the gcloud logging command line tool. I started with a classic sample:
gcloud beta logging write --payload-type=struct my-test-log "{\"message\": \"My second entry\", \"weather\": \"aaaaa\"}"
It works fine, so I checked the throughput with the code below, but it runs very slowly (about 2 records per second). Is this the best way to do it?
Here is my sample code:
tail -F -q -n0 /root/logs/general/*.log | while read line
do
  echo $line
  b=`date`
  gcloud beta logging write --payload-type=struct my-test-log "{\"message\": \"My second entry $b\", \"weather\": \"aaaaa\"}"
done
If you assume each command execution takes around 150ms at best, you can only write a handful of entries every second. You can try using the API directly to send the entries in batches. Unfortunately, the command line can currently only write one entry at a time. We will look into adding the capability to write multiple entries at a time.
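For illustration, a batched write through the Logging API might look roughly like this with curl (the project ID, log name, and payloads are placeholders; authentication reuses the caller's gcloud credentials):
# Hypothetical batch write: several entries in a single entries:write request.
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  https://logging.googleapis.com/v2/entries:write \
  -d '{
        "logName": "projects/PROJECT_ID/logs/my-test-log",
        "resource": {"type": "global"},
        "entries": [
          {"jsonPayload": {"message": "entry 1", "weather": "aaaaa"}},
          {"jsonPayload": {"message": "entry 2", "weather": "aaaaa"}}
        ]
      }'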
If you want to stream a large number of messages fast, you may want to look into Pub/Sub.