How to Use Command Line Parameters in JMeter - command-line

I'm using JMeter for testing APIs and I want to parameterize the project's path from the terminal and then use this parameter in JMeter.
The parameter that I've sent via the command line:
./jmeter -n -t your_script.jmx -Jurl=abcdef.com
The parameter that I've used in User Defined Variables:
${__P(url)}
But when I run my automation in JMeter, my test scripts are not going to the URL that's been defined. When I check the request body, I see POST https://1 as the URL.

Let's start clean:
In the User Defined Variables, configure a variable with the name url and the value ${__P(url,)}.
In the HTTP Request sampler (or even better, HTTP Request Defaults) put ${url} into the "Server Name or IP" field.
Run your test in command-line non-GUI mode like:
jmeter -n -t your_script.jmx -Jurl=abcdef.com -f -l result.jtl
Mind the -f argument, which tells JMeter to overwrite the existing results file (it might be the case that you're looking at "old" results where the url property value started with 1).
That's it. You should see the HTTP Request sampler making a call to abcdef.com in the .jtl results file, and if you change the url parameter you will see the impact there.
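As a quick sanity check you can run the plan twice with different property values and compare the result files (the host names below are just examples):
jmeter -n -t your_script.jmx -Jurl=staging.example.com -f -l staging.jtl
jmeter -n -t your_script.jmx -Jurl=prod.example.com -f -l prod.jtl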

Put ${__P(url)} inside the "Server Name or IP" field of the HTTP Request.
Domain name or IP address of the web server, e.g. www.example.com. [Do not include the http:// prefix.] Note: If the "Host" header is defined in a Header Manager, then this will be used as the virtual host name.
Don't use User Defined Variables
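Also note that when the url property is not defined at all (for example when the plan is opened in GUI mode without -J), the __P() function returns its built-in default of 1, which is where the POST https://1 in your request body comes from. Supplying your own default avoids that, e.g.:
${__P(url,abcdef.com)}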

Related

How to pass API parameters to GCP cloud build triggers

I have a large set of GCP Cloud Build Triggers that I invoke via a Cloud scheduler, all running fine.
Now I want to invoke these triggers by an external API call and pass them dynamic parameters that vary in values and number of parameters.
I was able to start a trigger by running an API request but any JSON parameters in the API request that I sent were ignored.
Google talks about substitution parameters at https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values. I defined these variables in the cloudbuild.yaml file; however, they were not propagated into my shell script from the API request.
I don't get any errors with authentication or authorization, so security may not be the issue.
Is my idea supported at all, or do I need to resort to another solution such as running a GKE cluster with containers that expose their own API (a very heavyweight solution)?
We do something similar -- we migrated from Jenkins to GCB but for some people we still need a nicer "UI" to start builds / pass variables.
I got scripts from here and modified them to our own needs: https://medium.com/@nieldw/put-your-build-triggers-into-source-control-with-the-cloud-build-api-ed0c18d6fcac
Here is their REST API: https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.triggers/run
For the script below, keep in mind you need the trigger ID of the trigger you want to run. (You can also get this by parsing the output of another REST API call.)
TRIGGER_ID=$1
# we need to specify AT LEAST the branch name or commit id (checked below)
BRANCH_OR_SHA=$2
# check if branch_name or commit_sha
if [[ $BRANCH_OR_SHA =~ ^[0-9a-f]{5,40}$ ]]; then
    # is COMMIT_HASH
    COMMIT_SHA=$BRANCH_OR_SHA
    BRANCH_OR_SHA="\"commitSha\": \"$COMMIT_SHA\""
else
    # is BRANCH_NAME
    BRANCH_OR_SHA="\"branchName\": \"$BRANCH_OR_SHA\""
fi
# This is the request we send to google so it knows what to build
# Here we're overriding some variables that we have already set in the default 'cloudbuild.yaml' file of the repo
cat <<EOF > request.json
{
  "projectId": "$PROJECT_ID",
  $BRANCH_OR_SHA,
  "substitutions": {
    "_MY_VAR_1": "my_value",
    "_MY_VAR_2": "my_value_2"
  }
}
EOF
# our curl post, we send 'request.json' with info, add our Token, and set the trigger_id
curl -X POST -T request.json -H "Authorization: Bearer $(gcloud config config-helper \
--format='value(credential.access_token)')" \
https://cloudbuild.googleapis.com/v1/projects/"$PROJECT_ID"/triggers/"$TRIGGER_ID":run
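For reference, a hypothetical invocation of the script above, assuming it is saved as run_trigger.sh (a made-up name) and that PROJECT_ID is exported in the environment:
export PROJECT_ID=my-gcp-project
./run_trigger.sh <your-trigger-id> my-feature-branch
Also note that the values in the "substitutions" block generally only reach your build if the repository's cloudbuild.yaml actually references them (e.g. $_MY_VAR_1 in a step's args); values passed this way override any defaults declared there.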

Routing to different ports based on environment variable stored in path of request

This is kind of a weird complicated question.
Context:
I have a bunch of docker containers that need to be routed to from haproxy dynamically. They are each running on different ports on the machine, and are stored in environment variables like this:
a=9873
b=9874
c=9875
These are available to the haproxy server. The request path that comes in will be in the form like this example:
/api/a/action
From that, the task is as follows:
The /api needs to be removed from the path.
The /a refers to the service, so the environment variable for a needs to be retrieved to get the port of the server.
The request needs to be routed to localhost:9873/a/action, where the port (9873) comes from the environment variable named after the first path element (after removing /api), and the remaining path (/a/action) is simply appended to the request.
My current config looks like this:
backend api
reqrep ^([^\ ]*\ /)api[/]?(.*) \1\2
server api_server localhost:9871
All this config is doing is removing the /api from the path of the request and sending it to a static port, 9871. I need this port to be the value held by the environment variable with the same name as the first element in the path (the /a above); the rest (passing the remaining path) is already working.
I would also like to be able to get the environment variable named prefix_a, where the path element is /a but a common prefix prefix_ has to be prepended to form the variable name. This can be a separate question or search though, unless it's simple to fold into the solution.
Please let me know if I can clarify or give more information that might help solve the problem.
(I've done a heck of a lot of googling. Here are some related URLs, but not quite the answer I need:
https://gist.github.com/meineerde/8bea63e64fc47f9a67c0
Dynamic routing to backend based on context path in HAProxy
How can I set up HAProxy to a backend based on a value in the url?
Haproxy route and rewrite based on URI path
haproxy: get the host name
https://serverfault.com/questions/818937/haproxy-is-giving-me-problems-with-regex-replace-is-this-a-bug-or-am-i-doing-so
https://serverfault.com/questions/668025/how-to-use-environment-variable-in-haproxy
http://cbonte.github.io/haproxy-dconv/1.9/configuration.html#7.2
How do I set a dynamic variable in HAProxy?
use environment variables in haproxy
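One approach that might work (an untested sketch, assuming HAProxy's "${VAR}" environment-variable expansion in the configuration file and one pre-declared backend per known service; it only covers the backend-selection part of the problem):
frontend fe_api
    bind :80
    use_backend api_a if { path_beg /api/a/ }
    use_backend api_b if { path_beg /api/b/ }
    use_backend api_c if { path_beg /api/c/ }
backend api_a
    reqrep ^([^\ ]*\ /)api[/]?(.*) \1\2
    server a_server "localhost:${a}"
backend api_b
    reqrep ^([^\ ]*\ /)api[/]?(.*) \1\2
    server b_server "localhost:${b}"
backend api_c
    reqrep ^([^\ ]*\ /)api[/]?(.*) \1\2
    server c_server "localhost:${c}"
For the prefix_ variant you would reference "localhost:${prefix_a}" instead. The variables are resolved when the configuration is loaded, so this only handles services known up front; routing on an arbitrary path element would need a map file or Lua.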

Jmeter SOAP/ XML-RPC request default URL

I am trying to test a web service for my project. The web service accepts a SOAP request and gives an appropriate response.
In JMeter I have chosen the SOAP/XML-RPC request. It works completely fine for me and gives me the correct response. However, I have hundreds of web services in my scope of testing and I have to test them in different environments. It is very cumbersome to change the URL value in each SOAP/XML-RPC sampler to point it to a different environment. Do we have something like HTTP Request Defaults for SOAP/XML-RPC requests?
I have also tried a BeanShell sampler where I set the value of a variable and then retrieve it in the SOAP sampler's URL parameter. However, it did not work for me. Below is the code.
BeanShell sampler code:
vars.put("baseURL","http://localhost:9191/ws/soap");
SOAP/XML-RPC Sampler URL value:
${__BeanShell(vars.get("baseURL"))}
Any suggestions? I read in the JMeter docs that this can be done via the HTTP sampler; however, I want to avoid using it if possible.
You should avoid using the SOAP/XML-RPC sampler in favor of the plain HTTP Request sampler.
Use the "Templates..." menu > Building a SOAP Webservice Test Plan:
This way you can use HTTP Request Defaults if you want.
But note, from what you describe, using a CSV Data Set Config would allow you to parameterize the URL.
Use JMeter properties to set the base URL, like:
In the user.properties file (under the /bin folder of your JMeter installation), add one line per property:
baseURL=http://localhost:9191/ws/soap
Alternatively, you can pass the property via the -J command-line argument:
jmeter -JbaseURL=http://localhost:9191/ws/soap -n -t /path/to/your/testplan.jmx -l /path/to/results.jtl
Reference the defined property in your test plan using the __P() function:
${__P(baseURL,)}
You can even provide a default value, i.e. if the property is not set via the user.properties file or a command-line argument, the default value will be used:
${__P(baseURL,http://localhost:9191/ws/soap)}
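With the property in place, the sampler's URL field can combine it with a per-service path, for example (MyService is a made-up endpoint name):
${__P(baseURL,http://localhost:9191/ws/soap)}/MyService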
See Apache JMeter Properties Customization Guide for more information on JMeter properties and ways of setting, overriding and using them.

Why does OpenShift interfere with my redirects?

I configured both europe.example.org and example.eu as domain aliases in OpenShift.
When example.eu is called (e.g. via curl -i http://example.eu), my OpenShift app's logic sends this HTTP Location header in order to perform a redirect:
Location: http://europe.example.org/?from=example.eu
However, OpenShift interferes with what I send, actually sending the following instead:
Location: http://example.eu/?from=example.eu
This creates an infinite redirect-loop.
How can I stop OpenShift from doing that and instead have it pass through what my app actually sends?
Try:
Location: http://#europe.example.org/?from=example.eu

How to check HTTP response code in zabbix?

I have a Zabbix server 2.2 and a few Linux hosts with websites. How can I get a notification from Zabbix if the HTTP(S) response code is not 200?
I've tried these triggers without any success:
{owncloud:web.test.rspcode[Availability of owncloud,owncloud availability].last(,10)}#200
{owncloud:web.test.error[Availability of owncloud].count(10,200)}<1
{owncloud:web.test.error[Availability of owncloud].last(#1,10)}=200
But nothing works. I never got a notification that the code was not 200 anymore, even though it was 404 (I had renamed ownCloud's index.php to index2.php to force an error).
I configured the Application and the Web Scenario as follows:
If you have already configured the host, go to step 1.
1) Select the host via Configuration -> Host groups -> select the host (for example server 1)
2) Go to Config > Hosts > [Host Created Above] > Applications and click on Create Application
3) Now you have to create the Web Scenario with the status code check; in my case I checked for status code 200. So go to Configuration > Hosts > [Host Created Above] > Web Scenarios and click on Create Web Scenario.
Remark: you have to select the application created in step 2.
4) After that, without clicking the Add button, go to the Steps tab and configure the host and parameters for the check. After that, click Add. In my case I check for a status code 200 response to the HTTP request.
I found the issue. You need to specify the URL to check including the file, for example like this in your web scenario:
https://owncloud.example.com/index.php
"Note that Zabbix frontend uses JavaScript redirect when logging in, thus first we must log in, and only in further steps we may check for logged-in features. Additionally, the login step must use full URL to index.php file." - https://www.zabbix.com/documentation/2.4/manual/web_monitoring/example
I also used the following expression as a trigger:
{owncloud:web.test.fail[Availability of owncloud].last()}>0
You can also set a trigger by expression:
{host name:web.test.rspcode[Scenario name,Steps name].last()}=200
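If the goal is to be notified when the code is not 200, the comparison usually needs to be inverted; with the item from the question that would look like the following (# is the not-equal operator in Zabbix 2.2, <> in 2.4+):
{owncloud:web.test.rspcode[Availability of owncloud,owncloud availability].last()}#200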
The question has been answered adequately, but I will provide a much more advanced solution that you can use for all HTTP status codes.
I've created an item that monitors all HTTP status codes of a proxy, graphs them, and then set up several types of triggers to watch the last value and counts over the last N minutes.
The regex I used to extract all the values from an Nginx or Apache access log is:
^(\S+) (\S+) (\S+) \[([\w:\/]+\s[+\-]\d{4})\] \"(\S+)\s?(\S+)?\s?(\S+)?\" (\d{3}|-) (\d+|-)\s?\"?([^\"]*)\"?\s?\"?([^\"]*)\"?\s
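As an illustration only, a hypothetical active-check item key that pulls just the status code out of such a log (the log path is a placeholder and the regexp is a simplified stand-in for the full one above; capture group \1 is the status code):
log[/var/log/nginx/access.log,"\" ([0-9]{3}) ",,,skip,\1]
A trigger can then count problem codes over a window, e.g. {myhost:log[/var/log/nginx/access.log,"\" ([0-9]{3}) ",,,skip,\1].count(10m,500,eq)}>0, where myhost is a placeholder host name.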
I then set many triggers relevant to my particular situation:
101 Switching Protocols
301 Moved Permanently
302 Redirect
304 Not Modified
400 Bad Request
401 Unauthorised
403 Forbidden
404 Not Found
500 Server Error
It's also important that your Zabbix agent has permission to read the log file on the host. You can add the zabbix agent user to the www-data group using this command:
$ sudo usermod -a -G www-data zabbix
See the tutorial for all the steps in greater detail.
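Depending on the distribution, the agent usually has to be restarted before the new group membership takes effect, for example:
$ sudo systemctl restart zabbix-agent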