JMeter: thread-based reporting from a big JTL result file (command line)

I'm currently load testing a web-based application with a couple of PCs (4 older machines), each running about 200 threads over 6 hours. As recommended everywhere on the web, I've used best-practice settings for the saveservice and JTL output.
The issue is the file size I'm getting after a 6-hour run. The resulting JTL is about 1.2 GB and roughly 2 million lines, which can't easily be opened or edited with Windows tools. The point is that I need to report latency per thread count (e.g. 5, 50, 100, 200 threads). With one large file it's close to impossible to extract the samples for a given thread count and write them to a new JTL using Windows tools, and if I set up the command line to run a batch of JMeter runs with a separate JTL per thread count, each file is still too big to open. I'm not very experienced in programming or scripting and need a solution for thread-based reporting from a big JTL file.
Does anyone have a suggestion for handling this?
By the way, I've read about BlazeMeter, Loadsophia and co., but they are not a suitable solution for me.
4 PCs - need to run from the command line
2 GHz, 2 cores, <2 GB RAM, Win7
Sample batch file I've used:
call ..\jmeter -n -t versuch.jmx -l ${__time("yyMMddHHmmss")}_Threads-${__P(threads)}.jtl -Jthreads=5 -Jrampup=0 -Jloopcount=3000
call ..\jmeter -n -t versuch.jmx -l ${__time("yyMMddHHmmss")}_Threads-${__P(threads)}.jtl -Jthreads=50 -Jrampup=0 -Jloopcount=3000
call ..\jmeter -n -t versuch.jmx -l ${__time("yyMMddHHmmss")}_Threads-${__P(threads)}.jtl -Jthreads=100 -Jrampup=0 -Jloopcount=3000
jmeter.save.saveservice.output_format=csv
jmeter.save.saveservice.data_type=false
jmeter.save.saveservice.label=true
jmeter.save.saveservice.response_code=true
jmeter.save.saveservice.response_data.on_error=false
jmeter.save.saveservice.response_message=false
jmeter.save.saveservice.assertion_results_failure_message=false
jmeter.save.saveservice.successful=true
jmeter.save.saveservice.thread_name=true
jmeter.save.saveservice.time=true
jmeter.save.saveservice.subresults=false
jmeter.save.saveservice.assertions=true
jmeter.save.saveservice.latency=true
jmeter.save.saveservice.bytes=true
jmeter.save.saveservice.hostname=true
jmeter.save.saveservice.thread_counts=true
jmeter.save.saveservice.sample_count=true
jmeter.save.saveservice.timestamp_format=HH:mm:ss
jmeter.save.saveservice.default_delimiter=;
jmeter.save.saveservice.print_field_names=true
jmeter.save.saveservice.autoflush=false
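One way to handle this without special tools (just a sketch, not from the original thread): stream the CSV JTL with a small Python script, keep only the samples taken at the thread count you want to report on, and collect latency figures on the way, so the 1.2 GB file is never loaded as a whole. The column names "allThreads" and "Latency" are the usual JMeter CSV headers and the ";" delimiter matches default_delimiter above; check the header line of your own JTL, since the exact columns depend on the saveservice settings and JMeter version.

# split_jtl.py - filter a big semicolon-delimited JTL by active thread count
# and print basic latency statistics (assumed column names: allThreads, Latency)
import csv
import sys

def filter_jtl(src, dst, thread_count):
    latencies = []
    with open(src, newline="") as fin, open(dst, "w", newline="") as fout:
        reader = csv.DictReader(fin, delimiter=";")
        writer = csv.DictWriter(fout, fieldnames=reader.fieldnames, delimiter=";")
        writer.writeheader()
        for row in reader:                      # one row at a time, so memory use stays low
            if row.get("allThreads") == thread_count:
                writer.writerow(row)
                latencies.append(int(row["Latency"]))
    if latencies:
        latencies.sort()
        print("samples        :", len(latencies))
        print("average latency:", sum(latencies) / len(latencies))
        print("90th percentile:", latencies[int(len(latencies) * 0.9)])

if __name__ == "__main__":
    # usage: python split_jtl.py big.jtl threads_200.jtl 200
    filter_jtl(sys.argv[1], sys.argv[2], sys.argv[3])

The output file is a valid (much smaller) JTL, so it can still be loaded into a JMeter listener or opened in a spreadsheet for the per-thread-count latency report.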

I wrote a JMeter reporting backend which:
handles huge JTL files (multiple GBs)
creates an extended JMeter HTML report
parses CSV & XML JMeter result files
See https://github.com/sgoeschl/jmeter-sla-report
Cheers,
Siegfried Goeschl

Related

Profiling foswiki with NYTProf results in incomplete profile data

I have a Foswiki installation which is really slow (~60 seconds for an uncached page). I've tried to profile the installation with NYTProf, according to http://foswiki.org/Support/NYTProfDebugging, with the following command:
> sudo -u www-data NYTPROF="file=/tmp/nytprof.out:addpid=1:endatexit=1" perl -wTd:NYTProf view -topic Some.Topic -username MyUsername
The script fails with exit code 141 when I run it with the profiler. If I run it without the profiler (removing d:NYTProf) it exits successfully and produces output.
After profiling, I end up with a bunch of profile files in my /tmp directory:
nytprof.out.[841-1860]
But when I try to merge these files, I get an error for the first file:
> nytprofmerge nytprof.out.*
Profile data incomplete, inflate error -5 ((null)) at end of input file, perhaps the process didn't exit cleanly or the file has been truncated (refer to TROUBLESHOOTING in the documentation)
I can merge the files without the first file, but the results are useless and show only 87 calls to Foswiki::Sandbox::CORE:open, and that's it.
Do I have any chance to get a valid profiling result? Or is there another tool I can use in this case?
I'm not sure why you can't get NYTProf to work; we've used it to figure out some performance issues in Foswiki 2.0.2, which have been partially addressed in Foswiki 2.0.3. There are a couple of issues going on, but one major cause is our conversion to Unicode internally, plus some Perl regex issues in Perl versions before 5.20: https://rt.perl.org/Public/Bug/Display.html?id=66852
Foswiki 2.0.3 made the following performance updates:
Changed some heavily called internal functions from regular expressions to index().
Changed EditRowPlugin to generate less HTML that requires processing by regular expressions in the rendering module.
Made some other improvements to reduce excessive re-reading of topics.
If 2.0.3 doesn't significantly help, check whether the problem pages have large tables in them. If so, you might try disabling EditRowPlugin and using EditTablePlugin instead.
Other than that, you might try our official support channel #foswiki on IRC, http://irclogs.foswiki.org/
The script fails with exit code 141 when I run it with the profiler.
That suggests the process received a SIGPIPE signal. The sigexit option may help.
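For example (a sketch based on the command from the question, not verified on your setup), sigexit can be added to the NYTPROF environment string so NYTProf finishes writing the profile when the process is terminated by a signal:
> sudo -u www-data NYTPROF="file=/tmp/nytprof.out:addpid=1:endatexit=1:sigexit=1" perl -wTd:NYTProf view -topic Some.Topic -username MyUsername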
If I run it without the profiler ... it exits successfully and produces output.
You're using sudo so permissions might be an issue, but that's just a guess. You'll need to dig deeper to confirm if a SIGPIPE is being received and why.
I'm not familiar with foswiki. Perhaps someone in that community could be more helpful.

robocopy error with ERROR 32 (0x00000020)

I have two drives, A and B. Using a Python script I am creating some files on drive A, and I am running a PowerShell script which copies all the files from drive A to drive B at an interval of 1 second.
I am getting this error in my PowerShell window:
2015/03/10 23:55:35 ERROR 32 (0x00000020) Time-Stamping Destination File \\x.x.x.x\share1\source\Dummy_100.txt
The process cannot access the file because it is being used by another process.
Waiting 30 seconds...
How can I overcome this error?
This happens because the file is locked by a running process. To fix this, download Process Explorer, then use Find > Find Handle or DLL to find out which process has locked the file. Use 'taskkill' to kill that process from the command line and you will be fine.
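For example, if Process Explorer shows that the process holding the handle has PID 1234 (a placeholder value here):
taskkill /PID 1234 /F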
If you want to skip such files you can use /r:n, where n is the number of retries.
For example, /w:3 /r:5 will retry 5 times, waiting 3 seconds between attempts.
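Put together with a copy job like the one in the question (paths here are placeholders), that could look like:
robocopy A:\data B:\data /E /R:5 /W:3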
How can I overcome this error?
If backup is what you have in mind and you encounter in-use files frequently, look into Volume Shadow Copies (VSS), which allow files to be copied even while they are 'in use'. It's not a product, but a Windows technology used by various backup tools.
Sadly, it's not built into robocopy, but can be used in conjunction with it. See
➝ https://superuser.com/a/602833/75914
and especially:
➝ https://github.com/candera/shadowspawn
There could be many reasons.
In my case, I was running a CMD script to copy a heap of SQL Server backups and transaction logs from one server to another. I had the same problem because robocopy was trying to write to a log file that was supposedly opened by another process. It was not.
I ran so many IP checks and process ID checkers that I ran out of ideas about what was hogging the log file. Event Viewer said nothing.
I found out it was not even the log file that was being locked. I was able to delete it by logging into the server as a normal user with no admin privileges!
It was the backup files themselves that were locked by the SQL Server Agent. As #Oseack said, another tool may be needed while the backup files are still being used or locked by the SQL Server Agent.
The way I got around it was to force ROBOCOPY to wait.
/W:5
did it.

what's the purpose of the '--delete-after' option of wget?

I came across the "--delete-after" option when I was reading the manpage of wget.
What's the purpose of providing such an option? Is it just for testing that the page downloads OK? Or are there other situations where this option is useful? I hope you can give me some hints.
With reference to your comments above, I'm providing some examples of how we use it. We have a few websites running on Rackspace Cloud Sites, which is a managed cloud hosting solution where we don't have access to regular cron.
We had an issue with runaway usage on a site using WordPress because WP kept calling wp-cron.php. To give you a sense of the runaway usage: in one day it used up the CPU cycles allotted for a month. What I did was disable wp-cron.php being called from within WordPress and call it manually through wget instead. I'm not interested in the output from the process, so if I didn't use --delete-after with wget (wget ... > /dev/null 2>&1 works well too), the folder where wget runs would fill up with hundreds of useless copies of the output from each time the script was called.
We also have SugarCRM installed, and that system requires its cron script to be called to handle system maintenance. We use wget silently for that as well. Basically a lot of these kinds of web-based systems have cron scripts. If you can't call your scripts directly on the machine, say using php, then the other option is to call them silently with wget.
The command to call these cron scripts is quite basic - wget --delete-after http://example.com/cron.php?parameters=if+needed
I'm using wget (with cron) to automate commands to a web application, so I have no interest in the contents of the pages. --delete-after is ideal for this.
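For illustration (the URL and schedule are made up), a crontab entry for that kind of silent call could look like:
*/5 * * * * wget -q --delete-after http://example.com/cron.php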
You can use it for testing whether a page downloads OK, but usually it's used to force proxy servers to cache their contents.
If you're sitting on a connection where a network appliance caches content between the site and your endpoint, and you have a site that's popular among users on that network, then as a sysadmin you may want to use a machine just behind the proxy to script a recursive ("-r") or mirror ("-m") wget operation.
The proxy appliance will see this and pre-cache the site and its assets, making access to the site a bit faster for users behind that proxy.
You'd then specify "--delete-after" to free up the disk space used, unless you want to keep a local copy of every site you force into the cache.
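A sketch of such a cache-warming run, with a placeholder URL:
wget -m --delete-after http://example.com/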
Sometimes you only need to visit a website to set an IP address - say, if you are rolling your own dynamic DNS service.

Aggregation of IIS logs

We have an IIS .Net application deployed across several machines. We use IIS log information to do reporting of performance of the web application and navigation by the user. Currently the reporting is only required infrequently (once a day, for the previous day), so we just roll the logs every 24 hours, and move the old logs to our reporting server.
We have a new requirement that means we need much faster turnaround on the IIS log information, say every minute for the sake of the discussion.
There exist Apache tools like Facebook's Scribe to scalably move Apache web server logs across a network of servers.
Are there any similar tools available for IIS?
Is this the right question to ask?
Should we be doing something different, if the timing requirements have changed so much?
I've looked at this question and the answers, and the only one that seems to come close is this one.
Pointers appreciated!
Snare is a little old but worth mentioning.
Snare Agent for IIS Servers
http://www.intersectalliance.com/projects/SnareIIS/index.html
I used this old version a long time ago and it worked well by forwarding/sending/replicating IIS logs over a network via syslog.
Today, they have a newer version called Snare Epilog
http://www.intersectalliance.com/projects/EpilogWindows/index.html
The code is also open source; perhaps you might find it useful.
You might also want to try ...
http://nxlog.org
http://www.syslogserver.com/syslogagent.html
I tend to write a .bat file in conjunction with Log Parser 2.2. The .bat file determines the appropriate file dates and pulls the corresponding logs from multiple IIS server log locations into a single local directory. Once the files are across, I run a Log Parser command to query all of the log files and produce a single output file in .csv format. Finally, I run an SSIS job to import the new .csv file into a running log table which I can then query on an ongoing basis.
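For illustration, the Log Parser step of such a setup might look like the command below; the paths and the selected fields are assumptions to adapt to your own log columns:
LogParser.exe "SELECT date, time, s-computername, cs-uri-stem, sc-status, time-taken INTO C:\Reports\iis_combined.csv FROM C:\LocalLogs\*.log" -i:IISW3C -o:CSV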

Application Deployment with Powershell

I've developed a Powershell script to deploy updates to a suite of applications; including SQL Server database updates.
Next, I need a way to execute these scripts on 100+ servers without manually connecting to each server. "PowerShell v2 with remoting" is not an option, as it is still in CTP.
PowerShell v1 with WinRM looks the most promising, but I can't get feedback from my scripts. The scripts execute, but I need to know about exceptions. The scripts create a log file; is there a way to send the contents of the log file back to the "client" (the local computer making the remote calls)?
The quick answer is no. The long version: it's possible, but it will involve lots of hacks. I developed a very similar deployment script/system using PowerShell 2 last year. The remoting feature is the primary reason we put up with the CTP status. PowerShell 1 with WinRM is flaky at best and, as you said, gives no real feedback apart from OK or failed.
An alternative I considered was PsExec, which is very much non-standard and may be blocked by firewalls. The other approach involves system-management tools such as Microsoft's System Center, but that's a big hammer for a tiny nail. So you have to pick your poison...
Just a comment on this: the easiest way to capture PowerShell output is to use the Start-Transcript cmdlet to send console output to a file. We have a small snippet at the start of all our scripts that writes a log file with the console output from each script to a central file share, and names the log file with the script name and execution date so that we have an idea of what happened. It's not too hard to pipe all those log files into a database for further processing either. It probably won't solve all your problems, but it would definitely help with the "getting data back" part.
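A minimal sketch of that kind of snippet (the share path and naming scheme are assumptions, not the actual code referred to above):

# start of each deployment script: copy console output to a central share
$log = "\\fileserver\deploylogs\{0}_{1:yyyyMMdd_HHmmss}.log" -f $MyInvocation.MyCommand.Name, (Get-Date)
Start-Transcript -Path $log

# ... deployment steps and SQL updates run here ...

Stop-Transcript   # close the transcript so the log file is complete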
best regards,
Trond