How to disable thread groups externally in JMeter - PowerShell

I am using JMeter with my PowerShell script. My JMX file (JMeter's XML test plan) is already created, and I launch JMeter in non-GUI mode and pass the JMX to it.
It was working previously, but after I added more Thread Groups with multiple HTTP requests, there now seems to be a heap size issue.
So I thought of disabling some Thread Groups from the command line using my automation script (PowerShell).
How can I disable some Thread Groups in the JMX file through the command line?

Define the number of threads (virtual users) for your Thread Groups using the __P() function, like:
${__P(group1threads,)} - for 1st thread group
${__P(group2threads,)} - for 2nd thread group
etc.
If you want to disable a certain Thread Group, just set its "Number of Threads" to 0 via a -J command-line argument, like:
jmeter -Jgroup1threads=0 -Jgroup2threads=50
However, a better idea would be to increase the heap size, as JMeter ships with quite a low default (512 MB), which is fine for test development and debugging but definitely not enough for a real load test. To do so, locate the following line in the JMeter startup script:
HEAP=-Xms512m -Xmx512m
And update the values to something like 80% of your total available physical RAM. A JMeter restart is required to pick up the new heap size values. For more information on JMeter tuning, refer to the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure guide.
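As an alternative to editing the startup script, the JVM_ARGS environment variable is honored by the jmeter launcher and takes precedence over the HEAP line; a minimal sketch (the -Xms/-Xmx sizes here are examples, pick values suited to your machine):

```shell
# JVM_ARGS is read by the jmeter/jmeter.bat startup scripts and overrides
# the HEAP defaults; the sizes below are illustrative, not recommendations.
export JVM_ARGS="-Xms4g -Xmx4g"
# jmeter -n -t test.jmx -l results.jtl   # run as usual, now with the larger heap
```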

This is exactly explained in this article.
When you have multiple thread groups, you can execute specific thread groups from the command line: simply set the thread count to 0 for each thread group you want to skip.
Test Plan Design:
Let's say I have 5 thread groups like this. Instead of hardcoding the thread count values, use property variables, e.g. ${__P(user.registration.usercount)}.
Now I want to execute only User Login & Order Creation. This can be achieved by passing the properties directly through the command line, or by passing a property file name.
Properties:
Execution:
jmeter -n -t test.jmx -p mypropfile.properties
Check the JMeter command line options here.
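To make this concrete, here is a sketch of what mypropfile.properties could look like for the scenario above. The property names are hypothetical and must match the ${__P(...)} references in your JMX; thread groups you want to skip get a count of 0:

```shell
# Write an example property file (the names are assumptions mirroring the
# ${__P(...)} placeholders in the test plan); a count of 0 disables a group.
cat > mypropfile.properties <<'EOF'
user.registration.usercount=0
user.login.usercount=10
order.creation.usercount=10
EOF

# Then run non-GUI JMeter with the property file:
# jmeter -n -t test.jmx -p mypropfile.properties
```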

If you are working with Concurrency Thread Groups and want to disable them with a property, you could set the Hold Target Rate Time to zero (or the Target Concurrency to zero).
Set the property in user.properties
TG1-target_hold_rate_in_min=0
or
Set the property through the command line:
jmeter -JTG1-target_hold_rate_in_min=0

Related

Perf collection on Kubernetes pods

I am trying to find performance bottlenecks by using the perf tool on a kubernetes pod. I have already set the following on the instance hosting the pod:
"kernel.kptr_restrict" = "0"
"kernel.perf_event_paranoid" = "0"
However, I have two problems.
When I collect samples with perf record -a -F 99 -g -p <PID> --call-graph dwarf and feed them into speedscope or, similarly, into a flamegraph, I still see question marks (???). For the process whose CPU usage breakdown I want to see (C++ based), the ??? frames sit at the top of the stack and the system calls fall below them.
I tried running perf top and it says
Failed to mmap with 1 (Operation not permitted)
My questions are:
To run perf top, what permissions do I need to change on the pod's host instance?
Which other settings do I need to change at the instance level so that no more ??? frames show up in perf's output? I would like to see the function call stack of the process, not just the system calls. See the following stack:
The host OS is Ubuntu.
Zooming in on the first system call, you would see this, but it only gives me a fraction of the CPU time spent, and only the system calls.
UPDATE/ANSWER:
I was able to run perf top by setting
"kernel.perf_event_paranoid" = "-1". However, as seen in the image below, the process I'm trying to profile (name blacked out) shows only addresses, not function names. I tried running them through addr2line, but it says addr2line: 'a.out': No such file.
How can I get the addresses resolved to function names on the pod? Is it even possible?
I was also able to fix the address-to-function mapping with perf top. This was because I had been running perf from a different container than the one where the process was running (same pod, different container). There may be a way to supply the extra information across containers, but simply moving perf into the container running the process fixed it.
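The fix above can be sketched as follows (the pod and container names are assumptions, and <PID> is a placeholder): run perf from the container that owns the process, so that its binaries and /proc/<PID>/maps are visible and perf can resolve addresses to symbols.

```shell
# Open a shell in the SAME container as the target process
# (pod/container names are hypothetical):
kubectl exec -it my-pod -c app-container -- /bin/sh

# Inside that container, record with DWARF call graphs and inspect:
# perf record -F 99 -g --call-graph dwarf -p <PID> -- sleep 30
# perf report
```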

24-hour performance test execution stopped abruptly in a JMeter pod in AKS

I am running a 24-hour load test using JMeter in Azure Kubernetes Service. I am using the Throughput Shaping Timer in my JMX file. No listener is added as part of the JMX file.
My test stopped abruptly after 6 or 7 hours.
The jmeter-server.log file under the JMeter slave pod gives a warning: WARN k.a.j.t.VariableThroughputTimer: No free threads left in worker pool.
Below is a snapshot from the jmeter-server.log file.
Using JMeter version 5.2.1 and Kubernetes version 1.19.6.
I checked that the JMeter pods for master and slaves are continuously running (no restarts happened) in AKS.
I gave the JMeter slave pod 2 GB of memory, but the load test still stops abruptly.
I am using a Log Analytics workspace for logging; I checked the ContainerLog table and am not getting errors.
Snapshot of the JMX file.
Using the following elements: Thread Group, Throughput Controller, HTTP Request Sampler and Throughput Shaping Timer.
Please suggest.
It looks like your Schedule Feedback Function configuration is wrong in its last parameter.
The warning means that the Throughput Shaping Timer attempts to increase the number of threads to reach/maintain the desired concurrency, but it doesn't have enough threads to do so.
So either increase the spare threads ratio to be closer to 1 if you're using a float value as a percentage, or increase the absolute value to match the required number of threads.
Quote from documentation:
Example function call: ${__tstFeedback(tst-name,1,100,10)} , where "tst-name" is name of Throughput Shaping Timer to integrate with, 1 and 100 are starting threads and max allowed threads, 10 is how many spare threads to keep in thread pool. If spare threads parameter is a float value <1, then it is interpreted as a ratio relative to the current estimate of threads needed. If above 1, spare threads is interpreted as an absolute count.
More information: Using JMeter’s Throughput Shaping Timer Plugin
However, this doesn't explain the premature termination of the test, so make sure there are no errors in the JMeter/k8s logs; one possible reason is that the JMeter process is being terminated by the OOMKiller.
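One way to check the OOMKiller theory (the pod name here is an assumption):

```shell
# If a container was OOM-killed, its last state shows the reason OOMKilled:
kubectl describe pod jmeter-slave-0 | grep -A 3 "Last State"

# On the node itself, the kernel log would record the kill:
# dmesg | grep -i "killed process"
```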

How to fix Java heap space error in Talend?

I have an ETL flow through Talend where I:
Read the zipped files from a remote server with a job.
Take these files, unzip them, and parse them into HDFS with a job. Inside the job there is a schema check, so if something is not
My problem is that the TAC server stops the execution because of this error:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at org.talend.fileprocess.TOSDelimitedReader$ColumnBuffer4Joiner.saveCharInJoiner(TOSDelimitedReader.java:503)
at org.talend.fileprocess.TOSDelimitedReader.joinAndRead(TOSDelimitedReader.java:261)
at org.talend.fileprocess.TOSDelimitedReader.readRecord_SplitField(TOSDelimitedReader.java:148)
at org.talend.fileprocess.TOSDelimitedReader.readRecord(TOSDelimitedReader.java:125)
....
Is there any option to avoid and handle this error automatically?
There are only a few files which cause this error, but I want to find a solution for further similar situations.
In the TAC Job Conductor, for a selected job, you can add JVM parameters.
Add the -Xmx parameter to specify the maximum heap size. The default value depends on various factors like the JVM release/vendor, the actual memory of the machine, etc. In your situation, the java.lang.OutOfMemoryError: Java heap space reveals that the default value is not enough for this job, so you need to override it.
For example, specify -Xmx2048m for 2048 MB (2 GB).
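For illustration, this is roughly where the JVM parameter lands on the generated launch command (the classpath and job class below are placeholders, not Talend's actual values):

```shell
# Fragment of a generated _run.sh style launcher; -Xms/-Xmx are the knobs
# that the TAC "JVM parameters" field feeds (values and names are examples).
java -Xms1024m -Xmx2048m -cp "$ROOT_PATH/classpath.jar" my_project.my_job_0_1.my_job
```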
@DrGenius Talend has a Java-based environment, and a default JVM heap is allotted during initialization, as for any Java program. The Talend defaults are min 256 MB (-Xms) and max 1024 MB (-Xmx). Per your job's requirements, you can set the min/max JVM range, e.g. a min of 512 MB and a max of 8 GB.
This can be modified from the job's Run tab, under Advanced settings. It can even be parameterized and overwritten using variables set in the environment. The exact values can be seen in the build output (_run.sh).
But be careful not to set it too high, so that other jobs running on the same server are not starved of memory.
More details on heap errors and how to debug the issue:
https://dzone.com/articles/java-out-of-memory-heap-analysis

How can one view the partial output of a job in PBS that has exceeded its walltime?

I'm new to using cluster computers to run experiments. I have a Python script that should regularly print out information, but I find that when my job exceeds its walltime, I get no output at all except the notification that the job has been killed.
I've tried regularly flushing the buffer to no avail, and was wondering if there was something more basic that I'm missing.
Thanks!
I'm guessing you are having issues with a job cleanup script in the epilogue. You may want to ask the admins about it. You may also want to try a different approach.
If you were to redirect your output to a file in a shared filesystem you should be able to avoid data loss. This assumes you have a shared filesystem to work with and you aren't required to stage in and stage out all of your data.
If you reuse your submission script you can avoid clobbering the output of other jobs by including the $PBS_JOBID environment variable in the output filename.
script.py > $PBS_JOBID.out
I'm on mobile, so check the qsub man page for the full list of job environment variables.
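Putting the pieces together, a submission script could look like this sketch (the directives and names are examples; python -u disables output buffering so partial results survive a walltime kill):

```shell
#!/bin/bash
#PBS -N my_experiment
#PBS -l walltime=04:00:00

# Run from the directory the job was submitted from.
cd "$PBS_O_WORKDIR"

# -u = unbuffered stdout/stderr, so output is written as it is printed;
# $PBS_JOBID makes the filename unique so concurrent jobs don't clobber
# each other's output.
python -u script.py > "$PBS_JOBID.out" 2>&1
```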

MATLAB does not save variables in a parallel batch job

I was running (in a cluster) a batch job and in the end I was trying to save results using save(), but I had the following error:
ErrorMessage: The parallel job was cancelled because the task with ID 1
terminated abnormally for the following reason:
Cannot create 'results.mat' because '/home/myusername/experiments' does not exist.
Why is that happening? What is the correct way to save variables in a parallel job?
You can use SAVE in the normal way during execution of a parallel job, but you also need to be aware of where the code is running. If you are running via the MathWorks jobmanager on the cluster, then depending on the security level set on the jobmanager, you might not have access to the same set of directories as you normally would. More about that here: http://www.mathworks.co.uk/help/mdce/setting-job-manager-security.html