LoadRunner SOAP tests as templates

I want to save my SOAP request as a template on the filesystem, with placeholders that can be replaced by substitute values as the contracts/WSDLs are edited.
Does LoadRunner support SOAP templates?

I do this all the time. I just use a standard web virtual user with a web_custom_request. I load the file in the init phase of the script (to avoid disk I/O on the load generator during the test) and then reuse it over and over again for multiple iterations with multiple parameter sets. See the C standard file I/O and memory-management routines for loading the file and managing the memory.
In the source request file, simply include your parameter markers ("{paramname}").
Works just fine.
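A minimal sketch of that approach in LoadRunner's C scripting; the file path, parameter name, endpoint URL and SOAPAction below are placeholders, not values from the original setup:

    /* Template buffer shared between the init and action sections */
    char *soap_template = NULL;

    vuser_init()
    {
        long fp;
        long size;

        /* Load the SOAP template once, before the measured iterations start */
        fp = (long)fopen("c:\\templates\\request.xml", "rb");   /* placeholder path */
        if (fp == 0) {
            lr_error_message("Could not open SOAP template");
            return -1;
        }
        fseek(fp, 0, 2);   /* seek to end to determine the size */
        size = ftell(fp);
        fseek(fp, 0, 0);   /* back to the beginning */

        soap_template = (char *)malloc(size + 1);
        fread(soap_template, 1, size, fp);
        soap_template[size] = '\0';
        fclose(fp);
        return 0;
    }

    Action()
    {
        char *resolved;
        char *body;

        /* lr_eval_string replaces the {paramname} markers embedded in the file
           with the values of the current parameter set */
        resolved = lr_eval_string(soap_template);

        /* web_custom_request expects the body as a single "Body=..." attribute */
        body = (char *)malloc(strlen(resolved) + 6);
        strcpy(body, "Body=");
        strcat(body, resolved);

        web_add_header("SOAPAction", "\"urn:placeholder#Operation\"");  /* only if your service requires it */
        web_custom_request("SoapCall",
            "URL=http://host/service",           /* placeholder endpoint */
            "Method=POST",
            "EncType=text/xml; charset=utf-8",
            body,
            LAST);

        free(body);
        return 0;
    }

    vuser_end()
    {
        free(soap_template);
        return 0;
    }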

Related

Load testing Elasticsearch API queries through CSV using JMeter

I want to perform load testing of Elasticsearch API queries using JMeter; the queries will be passed in through a CSV file.
Please give me suggestions: what should I consider before doing this, what kinds of graphs should I look into, and what plugins should be installed in JMeter?
1. Get familiar with the concepts of web application performance testing: load patterns, performance metrics, etc. See Performance Testing Guidance for Web Applications as an example of reference material.
2. Build your test plan "skeleton". Implement requests to the web service endpoints using HTTP Request samplers. You may also need to add an HTTP Header Manager to send at least the Content-Type header. See the Testing SOAP/REST Web Services Using JMeter article for details.
3. Once done, validate your script by running it with 1 virtual user and the View Results Tree listener enabled. Check request and response details to confirm your test is doing what it is supposed to do.
4. If your test works fine, add a CSV Data Set Config to your Test Plan and replace the values you would like to parameterize with the JMeter variables originating from the CSV file (a minimal example follows this list).
5. Repeat step 3 with 1-2 users to see whether your parameterization works as expected.
6. Now it's time to configure your load pattern (number of virtual users, ramp-up, test duration, etc.) and run your test.
7. Analyze the results using JMeter Listeners and/or the HTML Reporting Dashboard.
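As a minimal illustration of the parameterization step (the file name, variable name and query below are made up for this example, not taken from the question): if search_terms.csv contains one search term per line and the CSV Data Set Config declares the variable name "term", an HTTP Request sampler posting to an Elasticsearch _search endpoint could reference it in its body:

    search_terms.csv:
        first blog post
        second blog post

    HTTP Request sampler body (POST to /my_index/_search):
        {"query": {"match": {"title": "${term}"}}}

Each virtual user and iteration picks up the next line of the CSV, so the same sampler sends a different query every time.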

How to configure a second domain for images in TYPO3?

We have a configuration with Pound, Varnish, Apache and TYPO3. Every now and then we have a lot of traffic to the site and the Pound server reaches its limit.
One idea to reduce the stress was that all images could be fetched from another domain (which would get its own server).
So the HTML would be requested as http://www.my.domain/folder/page.html and inside the HTML the images would be referenced as http://images.my.domain/fileadmin/img/image.jpg
What needs to be done so that
the editors can work as before (just accessing files from /fileadmin/...),
in the HTML all image (and file) references are generated with the other domain, and
all generated (processed) images can be accessed via the new domain?
I assume you are using Pound as a reverse proxy that terminates TLS.
I suggest reducing the setup to just Varnish and Apache, so that Varnish can handle the requests directly.
Kevin Häfeli wrote an article about it some time ago (in German):
https://blog.snowflake.ch/2015/01/28/https-caching-mit-varnish-tlsssl-zertifikat-typo3/
In case you still want to serve the images from another web server, I suggest having a look at the File Abstraction Layer:
https://docs.typo3.org/typo3cms/FileAbstractionLayerReference/Index.html

Data Upload using REST

I am developing a REST application that can be used as a data upload service for large files. I create chunks of the file and upload each chunk. I would like to have multiple servers running this service (for load balancing), and I would like my REST service to be stateless (no information kept about each stored chunk), which helps me avoid server affinity. If I allowed server affinity, I could dedicate a server to each upload request, store the chunks in a temporary file on disk, and move them somewhere else once the upload is complete.
The straightforward approach would be a central place for the data to be stored, but I would like to avoid this as it is a single point of failure (bad in a distributed system). So I was thinking about using a distributed file system such as HDFS, but appending to a file is not very efficient there, so this is not an option.
Is it possible to use some kind of cache for storing the data? Since the data is quite big (2-3 GB files), traditional cache solutions like Memcached cannot be used.
Is there any other option to solve this problem, or a direction I am not looking in?
Any help will be greatly appreciated.

Lightweight HTTP application/server for static content

I am in need of a scalable and performant HTTP application/server that will be used for static file serving/uploading. So I only need support for GET and PUT operations.
However, there are a few extra features that I need:
Custom authentication: I need to check credentials against a database for each request, so I must be able to integrate proprietary database interaction.
Support for signed access keys: Access to resources via PUT should be signed using a key like http://uri/?key=foo. The key contains information about the request, such as md5(user + path + secret), which allows me to block unwanted requests. The application/server should allow me to check for this (a sketch of such a check follows this list).
Performance: I'd like to avoid piping content as much as possible. Otherwise the whole application could be implemented in Perl/etc. in a few lines as CGI.
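A sketch of what such a key check might look like, purely to illustrate the md5(user + path + secret) idea; it assumes OpenSSL's MD5() for the digest, and the function and parameter names are made up for the example:

    #include <stdio.h>
    #include <string.h>
    #include <openssl/md5.h>   /* assumed: OpenSSL provides the MD5 digest */

    /* Return 1 if the key from ?key=... matches md5(user + path + secret). */
    int key_is_valid(const char *user, const char *path,
                     const char *secret, const char *key)
    {
        unsigned char digest[MD5_DIGEST_LENGTH];
        char expected[2 * MD5_DIGEST_LENGTH + 1];
        char buf[1024];
        int i;

        /* Concatenate the request data exactly as the signer did */
        snprintf(buf, sizeof(buf), "%s%s%s", user, path, secret);
        MD5((const unsigned char *)buf, strlen(buf), digest);

        /* Hex-encode the digest so it can be compared to the query-string value */
        for (i = 0; i < MD5_DIGEST_LENGTH; i++)
            sprintf(expected + 2 * i, "%02x", digest[i]);
        expected[2 * MD5_DIGEST_LENGTH] = '\0';

        return strcmp(expected, key) == 0;
    }

Whatever server or framework is chosen would run such a check before accepting the PUT.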
Perlbal (in web server mode) looks nice; however, the single-threaded model does not fit my database lookups, and it also does not support query strings.
Lighttpd/nginx/… have some modules for these tasks; however, it is not feasible to put everything together without ending up writing my own extensions/modules.
So how would you solve this? Are there other lightweight web servers available for this?
Should I implement an application inside a web server (i.e. CGI)? How can I avoid/speed up piping content between the web server and my application?
Thanks in advance!
Have a look at Node.js: http://nodejs.org/
There are a few modules for static web servers and database interfaces:
http://wiki.github.com/ry/node/modules
You might have to write your own file upload handler, or use one from this example http://www.componentix.com/blog/13
nginx + spawn-fcgi + an FCGI application written in C + memcached + SQLite serves a similar task well; latency is about 20-30 ms for small data and fast connections from the same local network. As far as I know, the production server handles about 100-150 requests per second with no problem. On a test server I peaked at up to 20k requests per second, again with no problem; the average latency was about 60 ms. Aggressive caching and UNIX domain sockets are the key.
I do not know how that configuration would behave with frequent PUT requests; in our task they are very rare and typically batched.

Downloading multiple items in a package

I need to allow users to download multiple images in a single download. This download will include an SQL file and images. Once the download completes, the SQL will be executed, inserting text into an SQLite database. This text will include references to the downloaded images. The text and images are rendered in a UIWebView.
What is the best way to download everything in a single download? I was thinking of using a bundle, since it can be loaded at runtime, but I am not sure of any limitations/restrictions in this scenario. I have tested putting the bundle into the Documents folder and then accessing resources inside of it. That seems to work fine in a simple test.
You're downloading everything through a socket, which only knows about bytes, so a bundle, or even a file, doesn't "naturally" transfer through: the server side opens the files, encodes them and sends them into the connection, and the client reads from the socket and reconstructs the original file structure.
Assuming the application has a UI for picking which items need to be transferred, it could request all of them from the server, and the server could then send all the items through a single connection with some delimitation you invent, so that the iPhone app can split the stream back into the individual files.
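Just as a sketch of one possible delimitation (the framing format here, name length + name + data length + data, is invented for the example, not something iOS or the server dictates), the client-side splitting could look like this in plain C:

    #include <stdio.h>
    #include <stdint.h>

    /* Read one entry from the stream:
       [uint32 name length][name bytes][uint32 data length][data bytes]
       (lengths big-endian in this made-up format).
       Returns 0 on success, -1 at end of stream or on error. */
    static int read_entry(FILE *in, const char *out_dir)
    {
        uint32_t name_len, data_len;
        unsigned char hdr[4], buf[4096];
        char name[256], path[512];
        FILE *out;
        size_t n;

        if (fread(hdr, 1, 4, in) != 4) return -1;   /* end of stream */
        name_len = ((uint32_t)hdr[0] << 24) | ((uint32_t)hdr[1] << 16) |
                   ((uint32_t)hdr[2] << 8)  | hdr[3];
        if (name_len == 0 || name_len >= sizeof(name)) return -1;
        if (fread(name, 1, name_len, in) != name_len) return -1;
        name[name_len] = '\0';

        if (fread(hdr, 1, 4, in) != 4) return -1;
        data_len = ((uint32_t)hdr[0] << 24) | ((uint32_t)hdr[1] << 16) |
                   ((uint32_t)hdr[2] << 8)  | hdr[3];

        snprintf(path, sizeof(path), "%s/%s", out_dir, name);
        out = fopen(path, "wb");
        if (!out) return -1;

        /* Copy exactly data_len bytes into the output file */
        while (data_len > 0) {
            n = fread(buf, 1, data_len < sizeof(buf) ? data_len : sizeof(buf), in);
            if (n == 0) { fclose(out); return -1; }
            fwrite(buf, 1, n, out);
            data_len -= (uint32_t)n;
        }
        fclose(out);
        return 0;
    }

    /* Split a downloaded package stream back into individual files. */
    void unpack_stream(FILE *in, const char *out_dir)
    {
        while (read_entry(in, out_dir) == 0)
            ;
    }

The SQL file and the images then end up as separate files under out_dir, ready for the rest of the flow described in the question.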
Another option is that the client could just perform individual HTTP requests for the different files, through pretty straightforward use of NSURLConnection.
The former sounds like an attempt to optimize the latter. Have you already tested and verified that the latter is too slow/inefficient? It definitely is more complex to implement.
There is a latency issue with multiple HTTP connections run in sequence; however, you can perhaps mitigate it by running several download connections in parallel, for example through an NSOperationQueue with a limit of 2 to 5 concurrent download operations.