I wrote a test plan on Windows 7 and remotely started a test on two machines, both running Windows Vista. A problem came up when I tried to do the same thing on Linux, using the same test plan.
I can log in a group of users and simulate their behaviour, but when I try to log them out nothing happens.
On Windows they are logged out, but Linux gives me empty response data. The listeners show green status, so I'm rather confused about what's going on. Should I change something in the properties, or is it a problem with my script?
EDIT:
Script:
Log in a user using authorization data; every user gets a different JSESSIONID.
Simulate user behaviour using the Access Log Sampler.
Log out the user.
On Windows, everything works fine, both login and logout. The listener shows the sample result, request data and response data for every sample.
On Linux, the response data is blank for every sample.
Examples of the sample result for Windows and Linux:
Request data is the same for both.
Response data for Linux is blank.
EDIT2:
Test Plan
setUp Thread Group
  Clean cache server
  Clean file with JSESSIONID
Thread Group
  HTTP Request Defaults
  Login (Once Only Controller)
  Access Log Sampler
    (using a BeanShell script I save the JSESSIONID cookie variable to a file)
  Cookie Manager
tearDown Thread Group
  HTTP Request Defaults
  read JSESSIONID from file
  logout all users
  Cookie Manager
View Results Tree
Summary Report
The logout must be performed after all samples from the access log are done. That's why I save the JSESSIONID to a file: to share the same session between thread groups.
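The file-sharing step can be sketched in plain Java (roughly what the BeanShell PostProcessor does); the file name, method names, and variable name are my own illustration, not taken from the actual test plan:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

// Hypothetical sketch of the session-sharing step: the main Thread Group
// appends each user's JSESSIONID to a file (in JMeter this would be a
// BeanShell PostProcessor reading the cookie variable), and the tearDown
// Thread Group reads the ids back to drive the logout requests.
public class SessionIdFile {
    static final Path STORE = Path.of("jsessionids.txt"); // assumed shared path

    // Called once per logged-in user, e.g. with the cookie variable's value.
    static void saveSessionId(String jsessionid) throws IOException {
        Files.writeString(STORE, jsessionid + System.lineSeparator(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // Called in the tearDown Thread Group to log every saved session out.
    static List<String> loadSessionIds() throws IOException {
        return Files.readAllLines(STORE);
    }
}
```

Note that with remote (distributed) testing the file lives on the agent machine, so each agent sees only its own ids.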
OK, somehow I eliminated the error with the response. Apparently there was a problem with the Java version on the Linux server.
The current problem is that when I start the script remotely on Linux, it doesn't follow redirects. The same script on Windows XP or Vista follows redirects and the user is logged out.
Example:
GET connection.rpc?logout=D5D076123FD6CCBF137FE1673F531006
On Windows I get two redirections and the user is logged out.
Thread Name: Logout 1-1
Sample Start: 2013-05-18 13:50:52 CEST
Load time: 15
Latency: 13
Size in bytes: 777
Headers size in bytes: 573
Body size in bytes: 204
Sample Count: 1
Error Count: 0
Response code: 200
Response message: OK
Response headers:
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
X-wkpl-server-name: OnlineRC2
Content-Type: text/html;charset=UTF-8
Content-Language: en-US
Content-Length: 204
Date: Sat, 18 May 2013 11:50:43 GMT
HTTPSampleResult fields:
ContentType: text/html;charset=UTF-8
DataEncoding: UTF-8
Thread Name:
Sample Start: 2013-05-18 13:50:52 CEST
Load time: 13
Latency: 13
Size in bytes: 374
Headers size in bytes: 374
Body size in bytes: 0
Sample Count: 1
Error Count: 0
Response code: 302
Response message: Moved Temporarily
Response headers:
HTTP/1.1 302 Moved Temporarily
Server: Apache-Coyote/1.1
Set-Cookie: JSESSIONID=6D3F7A3774ABB1411A5F8E1744004A71; Path=/WKPLOnline
CacheControl: no-cache
Pragma: no-cache, no-store
Expires: -1
Location: connection.rpc?logout=BE8C04D8538641675A8BFD2490CDDD4D
Content-Length: 0
Date: Sat, 18 May 2013 11:50:43 GMT
Thread Name: Logout 1-1
HTTPSampleResult fields:
ContentType:
DataEncoding: null
Sample Start: 2013-05-18 13:50:52 CEST
Load time: 2
Latency: 2
Size in bytes: 403
Headers size in bytes: 199
Body size in bytes: 204
Sample Count: 1
Error Count: 0
Response code: 200
Response message: OK
Response headers:
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
X-wkpl-server-name: OnlineRC2
Content-Type: text/html;charset=UTF-8
Content-Language: en-US
Content-Length: 204
Date: Sat, 18 May 2013 11:50:43 GMT
HTTPSampleResult fields:
ContentType: text/html;charset=UTF-8
DataEncoding: UTF-8
On Linux I don't get the redirects and the user is not logged out.
Thread Name: Logout 1-1
Sample Start: 2013-05-18 13:51:48 CEST
Load time: 18
Latency: 18
Size in bytes: 264
Headers size in bytes: 243
Body size in bytes: 21
Sample Count: 1
Error Count: 0
Response code: 200
Response message: OK
Response headers:
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Set-Cookie: JSESSIONID=D17A4ABCDE7FB90C1DF702BDCB3827D7; Path=/WKPLOnline
CacheControl: no-cache
Pragma: no-cache, no-store
Expires: -1
Content-Length: 21
Date: Sat, 18 May 2013 11:51:53 GMT
HTTPSampleResult fields:
ContentType:
DataEncoding: null
It is strange, because during authorization there are a few redirects and Linux performs them correctly.
You should check that JMeter accesses your jsessionid file correctly on Linux:
check that the path is correct
check read access
If you are using distributed testing, the issue may be that the file is not found by an agent, or that the file is overwritten by another agent.
Problem solved, yay :)
It turned out that the target server had set a lock for some machines, and the Linux machine was one of them. That is why I could not log users out in a separate thread.
Therefore, if someone encounters a similar problem (requests from one machine are handled correctly while requests from another are not), they should check whether their machine has the correct permissions; in my case I needed to make the correct entry in adm.list on the test server.
I have a server using the Google Drive API. I tried a curl PUT request to upload a simple file (test.txt) at http://myserver/test.txt. As you can see, I did the PUT request at the root of my server. The response I get is the following:
HTTP/1.1 200 OK
X-GUploader-UploadID: AEnB2UqANa4Bj6ilL7z5HZH0wlQi_ufxDiHPtb2zq1Gzcx7IxAEcOt-AOlWsbX1q_lsZUwWt_hyKOA3weAeVpQvPQTwbQhLhIA
ETag: "6e809cbda0732ac4845916a59016f954"
x-goog-generation: 1548877817413782
x-goog-metageneration: 1
x-goog-hash: crc32c=jwfJwA==
x-goog-hash: md5=boCcvaBzKsSEWRalkBb5VA==
x-goog-stored-content-length: 6
x-goog-stored-content-encoding: identity
Content-Type: text/html; charset=UTF-8
Accept-Ranges: bytes
Via: 1.1 varnish
Content-Length: 0
Accept-Ranges: bytes
Date: Wed, 30 Jan 2019 19:50:17 GMT
Via: 1.1 varnish
Connection: close
X-Served-By: cache-bwi5139-BWI, cache-cdg20732-CDG
X-Cache: MISS, MISS
X-Cache-Hits: 0, 0
X-Timer: S1548877817.232336,VS0,VE241
Vary: Origin
Access-Control-Allow-Methods: POST,PUT,PATCH,GET,DELETE,OPTIONS
Access-Control-Allow-Headers: Cache-Control,X-Requested-With,Authorization,Content-Type,Location,Range
Access-Control-Allow-Credentials: true
Access-Control-Max-Age: 300
I know you're not supposed to use the API that way; I did it for testing purposes. I understand every header returned, but I can't figure out whether my file has been uploaded, because I don't have enough knowledge of this API.
My question is very simple:
Just by looking at the response, can you tell me if my file has been uploaded?
If yes, can I retrieve it, and how?
The HTTP status code traditionally indicates, for any given request, if it was successful. The status code in the response is always on the first line:
HTTP/1.1 200 OK
Status codes in the 200 range mean success. You should take some time to familiarize yourself with HTTP status codes if you intend to work with HTTP APIs.
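The rule of thumb can be sketched as a tiny helper (my own illustration, not part of any library): the first digit of the status code gives its class.

```java
// Sketch: the first digit of the status code is its class.
// 2xx means success, 3xx redirection, 4xx client error, 5xx server error.
public class HttpStatus {
    static boolean isSuccess(int status) {
        return status / 100 == 2; // 200 OK, 201 Created, 204 No Content, ...
    }

    static boolean isRedirect(int status) {
        return status / 100 == 3; // 301, 302, 307, ...
    }
}
```

So a `200 OK` on the PUT indicates the server accepted the request, though with this API only a GET of the same URL would confirm the stored content.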
Edit: it seems the issue was caused by a dropped cookie. There should have been a session id cookie as well.
For posterity, here's the original question
When sending a request formed as this
GET https://<url>?<parameters>
Cache-Control: max-age=0
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Charset: iso-8859-1,utf-8,UTF-8
Accept-Encoding: gzip, x-gzip, deflate, x-bzip2
Accept-Language: en-US,en;q=0.5
If-None-Match: "6eb7d55abfd0546399e3245ad3a76090"
User-Agent: Mozilla/5.0 libwww-perl/6.13
Cookie: auth_token=<blah>; __cfduid=<blah>
Cookie2: $Version="1"
I receive the following response
response-type: text/html
charset=utf-8
HTTP/1.1 406 Not Acceptable
Cache-Control: no-cache
Connection: keep-alive
Date: Fri, 12 Feb 2016 18:34:00 GMT
Server: cloudflare-nginx
Content-Type: text/html; charset=utf-8
CF-RAY: 273a62969a9b288e-SJC
Client-Date: Fri, 12 Feb 2016 18:34:00 GMT
Client-Peer: <IP4>:443
Client-Response-Num: 10
Client-SSL-Cert-Issuer: /C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limite
d/CN=COMODO ECC Domain Validation Secure Server CA 2
Client-SSL-Cert-Subject: /OU=Domain Control Validated/OU=PositiveSSL Multi-Domai
n/CN=ssl<blah>.cloudflaressl.com
Client-SSL-Cipher: <some value>
Client-SSL-Socket-Class: IO::Socket::SSL
Client-SSL-Warning: Peer certificate not verified
Client-Transfer-Encoding: chunked
Status: 406 Not Acceptable
X-Runtime: 9
I'm not entirely sure why the response is 406 Not Acceptable. When downloaded with Firefox, the file in question is 996 KB (as reported by Windows 8.1's Explorer). It looks like I have a partially transferred file from my Perl script at 991 KB (again, Windows Explorer size), so it got most of the file before throwing the Not Acceptable response. Using the same URL pattern and request style, I was able to successfully download a 36 MB file from the server with this Perl library and request form, so the size of the file should not be magically past some max (chunk) size. As these files are being updated on approximately 15-minute intervals, I suppose it's possible that a write was performed on the server, invalidating the ETag before all chunks were complete on this file?
I tried adding chunked to Accept-Encoding, but that's not for transfer encoding, and it appears to have no effect on the server's behaviour. Additionally, as I've been able to download larger files (same format) from the same server, that alone shouldn't be the cause of my woes. LWP is supposed to be able to handle chunked data returned by a response to GET (as per this newsgroup post).
The server in question is running nginx with Rack::Lint. The particular server configuration (which I in no way control) throws 500 errors on its own attempts to send 304 Not Modified. This caused me to write a workaround for File::Download (sub lintWorkAround here), so I'm not above putting blame on the server in this instance also, if warranted. I don't believe I buggered up the chunk-handling code from File::Download 0.3 (see diff), but I suppose that's also possible. Is it possible to request a particular chunk size from the server?
I'm using LWP and libwww version 6.13 with Perl 5.18.2. The File::Download version is my own 0.4_050601.
So, what else could the 406 error mean? Is there a way to request that
the server temporarily cache/version control the entire file so that I
can download a given ETag'd file once the transfer begins?
I am trying to test a REST service through an HTTP sampler using JMeter. The first sampler generates a token, and I am using this token for authorization in the Header Manager of another HTTP sampler, "GetUserandPolicies" (a REST WS request), using a RegEx and a ForEach controller. I can see in the View Results Tree that the RegEx is working fine, passing the actual token to the next request. But the REST request is failing with response message Forbidden and response code 403, which means the server recognises the request but denies access. There is no port number for this HTTP sampler, which I suspect could be the culprit, but the same test passes with another tool (iTKO LISA) without any port value. Both samplers, "TokenGeneration" and "GetUserandPolicies", have no port values. I need some help on this. I am using the POST method in the HTTP sampler.
Please find the sampler result:
Thread Name: Thread Group 1-1
Sample Start: 2014-01-13 12:12:29 IST
Load time: 1390
Latency: 1390
Size in bytes: 382
Headers size in bytes: 354
Body size in bytes: 28
Sample Count: 1
Error Count: 1
Response code: 403
Response message: Forbidden
Response headers:
HTTP/1.1 403 Forbidden
Server: Apache-Coyote/1.1
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PUT, OPTIONS
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: Authorization, X-Requested-With, Content-Type
Content-Type: text/plain;charset=UTF-8
Content-Length: 28
Date: Mon, 13 Jan 2014 06:42:30 GMT
HTTPSampleResult fields:
ContentType: text/plain;charset=UTF-8
DataEncoding: UTF-8
Looking at the Access-Control-Allow-Headers: Authorization, X-Requested-With, Content-Type stanza, I guess that you're missing a proper Authorization header.
As regards the empty port, everything is fine: it defaults to 80 for HTTP and 443 for HTTPS.
There are two options for dealing with Basic HTTP authentication:
Pass the username and password in the URL, like protocol://username:password@host:port/path,
i.e. http://user:pass@your.server.com/somelocation
Use JMeter's HTTP Authorization Manager to construct the required "Authorization" header for you.
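For Basic auth, what the Authorization Manager produces under the hood can be sketched like this (the class and method names are my own illustration): the header value is "Basic " followed by base64("username:password").

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative sketch of a Basic "Authorization" header value,
// roughly what JMeter's HTTP Authorization Manager sends for you.
public class BasicAuth {
    static String header(String user, String pass) {
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + pass).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }
}
```

Seeing (or not seeing) such a header in the View Results Tree request tab is a quick way to tell whether the manager is being applied.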
If your authentication system uses another approach, i.e. cookie-based, NTLM or Kerberos, it's still possible but a little more tricky. If so, update this post with all the details you can get (i.e. request details) and don't hesitate to leave a comment asking for more input.
I am trying to list the users directory of the Apple Calendar Server on my localhost. I am getting an "access forbidden" error; it's the same for groups as well. My operating system is Ubuntu 12.04 LTS, and the package is from the repository.
Here is the log of the runshell.py command:
/calendars/users > ls
<-------- BEGIN HTTP CONNECTION -------->
Server: localhost
<-------- BEGIN HTTP REQUEST -------->
PROPFIND /calendars/users/ HTTP/1.1
Host: localhost:8008
Authorization: Digest username="test", realm="Test Realm", nonce="17913381079262023151194175611", uri="/calendars/users/", response="df3db481efdc68df9c39733a957f072a", algorithm="md5"
Content-Length: 145
Content-Type: text/xml; charset=utf-8
Depth: 1
Brief: t
<?xml version='1.0' encoding='utf-8'?>
<ns0:propfind xmlns:ns0="DAV:">
<ns0:prop>
<ns0:resourcetype />
</ns0:prop>
</ns0:propfind>
<-------- BEGIN HTTP RESPONSE -------->
HTTP/1.1 403 Forbidden
Date: Mon, 03 Jun 2013 06:48:12 GMT
DAV: 1, access-control
Content-Type: text/html;charset=utf-8
Content-Length: 139
Server: Twisted/8.2.0 TwistedWeb/8.2.0
<html><head><title>403 Forbidden</title></head><body><h1>Forbidden</h1>You don't have permission to access /calendars/users/.</body></html>
<-------- END HTTP RESPONSE -------->
<-------- END HTTP CONNECTION -------->
Ignoring error: 403
First of all, have you verified that the request URI you are using corresponds to the DAV:principal-collection-set property? See https://www.rfc-editor.org/rfc/rfc3744#section-5.8
Then, the principals namespace is typically not queried through PROPFIND, but rather through a DAV:principal-property-search REPORT query. See https://www.rfc-editor.org/rfc/rfc3744#section-9.4
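As a rough illustration (the property and match value here are made up, not taken from your setup), a principal-property-search REPORT body in the shape RFC 3744 §9.4 describes looks something like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Illustrative only: search principals whose displayname matches "test"
     and return their displayname in the response. -->
<D:principal-property-search xmlns:D="DAV:">
  <D:property-search>
    <D:prop><D:displayname/></D:prop>
    <D:match>test</D:match>
  </D:property-search>
  <D:prop><D:displayname/></D:prop>
</D:principal-property-search>
```

It would be sent as a REPORT request (with Depth: 0) against the principal collection instead of your PROPFIND.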
Now, if you want to retrieve all the users on the calendar server, I'm not sure that the server will actually let you do that, especially if you have a large number of users.
The Google Cloud Storage Developer Guide explains how to set Cache-Control headers and explains their critical impact on the consistency behaviour of the API, yet the Expires header isn't mentioned, nor did it appear to inherit from the Cache-Control configuration.
The Expires header appeared to always be equal to the request time plus 1 year, regardless of the Cache-Control setting, e.g.
$ gsutil setmeta -h "Cache-Control:300" gs://example-bucket/doc.html
A request was made to a document (doc.html) in the Google Cloud Storage bucket (example-bucket) via
$ curl -I http://example-bucket.storage.googleapis.com/doc.html
which produced the following headers
HTTP/1.1 200 OK
Server: HTTP Upload Server Built on Oct 3 2012 16:52:30 (1349308350)
Date: Sat, 13 Oct 2012 00:51:13 GMT
Cache-Control: 300, no-transform
Expires: Sun, 13 Oct 2013 00:51:13 GMT
Last-Modified: Fri, 12 Oct 2012 20:08:41 GMT
ETag: "28fafe4213ae34c7d3ebf9ac5a6aade8"
x-goog-sequence-number: 82
x-goog-generation: 1347601001449082
x-goog-metageneration: 1
Content-Type: text/html
Accept-Ranges: bytes
Content-Length: 7069
Vary: Origin
Not sure why you say the Expires header shows request time plus 1 year. In your example, the Expires header shows a timestamp one hour after the request date, which is to be expected.
I just did an experiment where I set an object's max age to 3600 and then 7200 via this command:
gsutil setmeta "Cache-Control:max-age=7200" gs://marc-us/xyz.txt
Then I retrieved the object using the gsutil cat command with the -D option to see the request/response details, like this:
gsutil -D cat gs://marc-us/xyz.txt
In both experiments, the Expires header produced the expected timestamp, as per the object's max-age setting (i.e. one hour after request time and two hours after request time).
Looks like this was caused by a malformed header. Duh.
Cache-Control: 300, no-transform
should be
Cache-Control: public, max-age=300, no-transform
When things are set correctly, they work. See RFC 2616 (HTTP/1.1) Section 14.9 (Cache-Control).