Doing black-box REST API fuzzing, I would like to verify after each fuzzed request that a non-fuzzed request still succeeds, since I have no other access to the target. Can session.post_send be used for this, and how?
Yes, post_send can be used this way. See Session.post_send documentation.
For example, you might add the following after line 50 of this example file:
session.post_send = my_post_send
Of course, you will need to define my_post_send, for example:
def my_post_send(target, fuzz_data_logger, session, sock, *args, **kwargs):
    target.send('some data')
    response = target.recv(10000)
    fuzz_data_logger.log_check('Checking response data ....')
    # if failure is found:
    fuzz_data_logger.log_fail('SUT responded with ___ indicating catastrophic failure')
See also documentation for Target and IFuzzLogger.
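To address the original question directly (re-sending a known-good, non-fuzzed request after every fuzz case), a minimal sketch could look like the following; the host, port, request bytes, and expected response marker are placeholders, and the Session/Target/SocketConnection wiring follows the usual boofuzz examples:

from boofuzz import Session, SocketConnection, Target

def health_check_post_send(target, fuzz_data_logger, session, sock, *args, **kwargs):
    # Re-send a request that is known to be valid and verify the SUT still answers it.
    target.send(b'GET /health HTTP/1.1\r\nHost: sut\r\n\r\n')  # placeholder known-good request
    response = target.recv(10000)
    fuzz_data_logger.log_check('Verifying the SUT still answers a known-good request')
    if b'200 OK' in response:  # placeholder for whatever a healthy reply looks like for your API
        fuzz_data_logger.log_pass('SUT still responds correctly')
    else:
        fuzz_data_logger.log_fail('SUT no longer answers the known-good request')

session = Session(target=Target(connection=SocketConnection('10.0.0.1', 8080)))  # placeholders
session.post_send = health_check_post_send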
[EDITED: I realized after reading the response that I had oversimplified my question.]
I am new to Locust and not sure how to solve this problem.
I have a function (call it "get_doc") that is passed a locust.HttpSession() and uses it to issue an HTTP request. It gets the response and parses it, returning the result up several layers of calls. One of these higher-level calls looks at the returned, parsed document to decide whether the response was what was expected. If not, I want Locust to mark the request/response as failed. A code sketch would be:
class MyUser(HttpUser):
    @task
    def mytask(self):
        behavior1(self.client)

def behavior1(session):
    doc = get_doc(session, url1)
    if not doc_ok(doc):
        # ??? how to register a failure with Locust here...
    doc2 = get_doc(session, url2)
    ...

def get_doc(http_session, url):
    page = http_session.get(url)
    doc = parse(page)
    return doc
There may be several behavior[n] functions and several Locust users calling them.
A constraint is that I would like to keep Locust-specific stuff out of behavior1() so that I can call it with an ordinary Requests session. I have tried to do something like this in get_doc() (the catch_response parameter and the success/fail stuff are actually conditionalized on 'session' being an HttpSession object):
def get_doc(session, meth, url):
    resp = session.request(meth, url, catch_response=True)
    doc = parse(resp.content)
    doc.logfns = resp.success, resp.failure
    return doc
and then in behavior1() or some caller higher up the chain I can call
doc.logfns[1]("Document not as expected")
or
doc.logfns[0]()  # Looks good!
Unfortunately this is not working; the calls to them produce no errors but Locust doesn't seem to record any successes or failures either. I am not sure if it should work or I bungled something in my code. Is this feasible? Is there a better way?
You can make get_doc a context manager, call .get with catch_response=True and yield instead of return inside it. Similar to how it is done here: https://github.com/SvenskaSpel/locust-plugins/blob/2cbbdda9ae37b6cbb0a11cf69aca80b164198aec/locust_plugins/users/rest.py#L22
And then use it like this:
def mytask(self):
    with get_doc(self.client, url) as doc:
        if not doc_ok(doc):
            doc.failure("doc was not ok :(")
If you want, you can add the parsed doc as a field on the response before yielding in your doc function, or call doc.failure() inside doc_ok.
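A minimal sketch of such a get_doc context manager, assuming parse() and doc_ok() are the question's own helpers and that the parsed doc object allows attribute assignment:

from contextlib import contextmanager

@contextmanager
def get_doc(session, url):
    # catch_response=True lets the caller decide whether the request passed or failed
    with session.get(url, catch_response=True) as resp:
        doc = parse(resp.content)
        # expose Locust's markers on the parsed doc so higher-level callers can use them
        doc.success = resp.success
        doc.failure = resp.failure
        yield doc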
Scrapy's duplication filter ignores already-seen URLs/requests. So far, so good.
The Problem
Even if a request is dropped I still want to keep the redirection history.
Example:
Request 1 : B
Request 2 : A --301--> B
In this case request 2 is dropped without letting me know that it is a 'hidden' duplicate of request 1.
Attempts
I already tried to catch the signal request_dropped. This works, but I don't see a way to send an item to the pipeline from the handler.
Best regards and thanks for your help :)
Raphael
You are probably looking for DUPEFILTER_DEBUG
Set it to True in your settings.py file and you will see all URLs that were ignored because they were duplicates.
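For example (this only makes every filtered duplicate visible in the log; it does not change the filtering itself):

# settings.py
DUPEFILTER_DEBUG = True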
I figured out a way to handle those 'hidden' redirects:
Catch the signal 'request_dropped' from 'from_crawler':
@classmethod
def from_crawler(cls, crawler, *args, **kwargs):
    spider = super(YourSpider, cls).from_crawler(crawler, *args, **kwargs)
    crawler.signals.connect(spider.on_request_dropped, signal=signals.request_dropped)
    return spider
Use 'self.crawler.engine.scraper.enqueue_scrape' to route the response to a callback which can yield items. enqueue_scrape expects a response so you can simply create a dummy response from the dropped request (I used TextResponse for this). With this response you can also define the callback.
def on_request_dropped(self, request, spider):
    """ handle dropped request (duplicates) """
    request.callback = self.parse_redirection_from_dropped_request
    response = TextResponse(url=request.url, request=request)
    self.crawler.engine.scraper.enqueue_scrape(request=request,
                                               response=response, spider=self)
Process the redirection history of the dropped request within the callback you defined. From here you can handle things exactly like within the regular parse callback.
def parse_redirection_from_dropped_request(self, response):
    ...
    yield item
I hope this might help you if you stumble upon the same problem.
I would like to use JMeter for API functional testing, but the JMeter dashboard reporting is not ideal for functional testing.
I have attempted to integrate Extent 2.41.2 reporting with a Groovy script that validates responses (HTTP and expected response code).
I have attempted to use the idea given in Using extentreports for jmeter test results
However, that has failed. I used a JSR223 Assertion to check for valid responses, but I get errors whenever I attempt to run.
I'm not sure whether it should be set up as a PostProcessor step instead of an Assertion.
Has anyone got any ideas on how this can be achieved?
You can assert the result by using prev, which is a SampleResult:
prev - (SampleResult) - gives access to the previous SampleResult (if any)
Here's an example of checking that a token exists in the response and, if not, adding a relevant assertion failure:
import org.apache.jmeter.assertions.AssertionResult;

boolean assertToken = prev.getResponseDataAsString().contains("token");
prev.setSuccessful(assertToken);
if (!assertToken) {
    AssertionResult assertionResult = new AssertionResult("Assertion expected to contain token");
    assertionResult.setFailureMessage("Assertion failure message: Test failed: text expected to contain /token/");
    assertionResult.setFailure(true);
    prev.addAssertionResult(assertionResult);
}
I'm load testing a local API that redirects a user based on a few conditions. Locust is not following redirects for the simulated users hitting the endpoints, and I know this because the app logs all redirects. If I manually hit the endpoints using curl, I can see the status is 302 and the Location header is set.
According to the embedded clients.HttpSession.request method, the allow_redirects option is set to True by default.
Any ideas?
We use redirection in our locust test, especially during the login phase. The redirects are handled for us without a hitch. Print the status_code of the response that you get back. Is it 200, 3xx or something worse?
Another suggestion: Don't throw your entire testing workflow into the locust file. That makes it too difficult to debug problems. Instead, create a standalone python script that uses the python requests library directly to simulate your workflow. Iron out any kinks, like redirection problems, in that simple, non-locust test script. Once you have that working, extract what you did into a file or class and have the locust task use the class.
Here is an example of what I mean. FooApplication does the real work. It is consumed by the locust file and a simple test script.
foo_app.py
class FooApplication():
    def __init__(self, client):
        self.client = client
        self.is_logged_in = False

    def login(self):
        self.client.cookies.clear()
        self.is_logged_in = False
        name = '/login'
        response = self.client.post('/login', {
            'user': 'testuser',
            'password': '12345'
        }, allow_redirects=True, name=name)
        if not response.ok:
            self.log_failure('Login failed', name, response)
        else:
            self.is_logged_in = True  # so the locust tasks can branch on login state

    def load_foo(self):
        name = '/foo'
        response = self.client.get('/foo', name=name)
        if not response.ok:
            self.log_failure('Foo request failed', name, response)

    def log_failure(self, message, name, response):
        pass  # add some logging
foo_test_client.py
# Use this test file to iron out kinks in your request workflow
import requests
from locust.clients import HttpSession
from foo_app import FooApplication
client = HttpSession('http://dev.foo.com')
app = FooApplication(client)
app.login()
app.load_foo()
locustfile.py
from locust import TaskSet, task

from foo_app import FooApplication

class FooTaskSet(TaskSet):
    def on_start(self):
        self.foo = FooApplication(self.client)

    @task(1)
    def login(self):
        if not self.foo.is_logged_in:
            self.foo.login()

    @task(5)  # 5x more likely to load a foo vs logging in again
    def load_foo(self):
        if self.foo.is_logged_in:
            self.foo.load_foo()
        else:
            self.login()
Since Locust uses the Requests HTTP library for Python, you might find your answer there.
The Response object can be used to evaluate whether a redirect has happened and what the history of redirects contains.
is_redirect:
True if this Response is a well-formed HTTP redirect that could have been
processed automatically (by Session.resolve_redirects).
This might be an indication that the redirect your API returns is not well-formed.
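A small sketch of how you might inspect redirect handling from within a task (the URL is a placeholder):

response = self.client.get('/some/redirecting/path')
print(response.status_code)   # final status after redirects were followed (e.g. 200)
print(response.history)       # intermediate 3xx responses; empty if no redirect was followed
print(response.is_redirect)   # True only if the final response itself is an unfollowed redirect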
I am using Scrapy to fetch some data from iTunes' AppStore database. I start with this list of apps: http://itunes.apple.com/us/genre/mobile-software-applications/id36?mt=8
In the following code I have used the simplest regex which targets all apps in the US store.
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule

class AppStoreSpider(CrawlSpider):
    domain_name = 'itunes.apple.com'
    start_urls = ['http://itunes.apple.com/us/genre/mobile-software-applications/id6015?mt=8']
    rules = (
        Rule(SgmlLinkExtractor(allow='itunes\.apple\.com/us/app'),
             'parse_app', follow=True,
        ),
    )

    def parse_app(self, response):
        ....

SPIDER = AppStoreSpider()
When I run it I receive the following:
[itunes.apple.com] DEBUG: Crawled (200) <GET http://itunes.apple.com/us/genre/mobile-software-applications/id6015?mt=8> (referer: None)
[itunes.apple.com] DEBUG: Filtered offsite request to 'itunes.apple.com': <GET http://itunes.apple.com/us/app/bloomberg/id281941097?mt=8>
As you can see, when it starts crawling the first page it says "Filtered offsite request to 'itunes.apple.com'", and then the spider stops.
It also returns this message:
[ScrapyHTTPPageGetter,client] /usr/lib/python2.5/cookielib.py:1577: exceptions.UserWarning: cookielib bug!
Traceback (most recent call last):
File "/usr/lib/python2.5/cookielib.py", line 1575, in make_cookies
parse_ns_headers(ns_hdrs), request)
File "/usr/lib/python2.5/cookielib.py", line 1532, in _cookies_from_attrs_set
cookie = self._cookie_from_cookie_tuple(tup, request)
File "/usr/lib/python2.5/cookielib.py", line 1451, in _cookie_from_cookie_tuple
if version is not None: version = int(version)
ValueError: invalid literal for int() with base 10: '"1"'
I have used the same script for other websites and I didn't have this problem.
Any suggestions?
When I hit that link in a browser, it automatically tries to open iTunes locally. That could be the "offsite request" mentioned in the error.
I would try:
1) Remove "?mt=8" from the end of the URL. It looks like it's not needed anyway and it could have something to do with the request.
2) Try the same request in the Scrapy Shell. It's a much easier way to debug your code and try new things. More details here: http://doc.scrapy.org/topics/shell.html?highlight=interactive
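For example, using the question's own start URL (the link-extractor call just mirrors the rule from the spider, so you can see which links would be followed):

scrapy shell "http://itunes.apple.com/us/genre/mobile-software-applications/id6015?mt=8"
>>> from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
>>> SgmlLinkExtractor(allow='itunes\.apple\.com/us/app').extract_links(response)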
I see this post is pretty old; if you haven't figured out the cause yet, here it is.
I ran into a similar issue working with itunesconnect using mechanize. After much frustration I found that there's a bug in cookielib that doesn't handle some cookies correctly. It's discussed here: http://bugs.python.org/issue3924
The fix at the bottom of that post worked for me. I'll repost it here for convenience.
Basically you create a custom subclass of cookielib.CookieJar, override _cookie_from_cookie_tuple, and use this CustomCookieJar in place of the cookielib jar:
import cookielib

class CustomCookieJar(cookielib.CookieJar):
    def _cookie_from_cookie_tuple(self, tup, request):
        name, value, standard, rest = tup
        version = standard.get("version", None)
        if version is not None:
            # Some servers add " around the version number; this module expects a pure int.
            standard["version"] = version.strip('"')
        return cookielib.CookieJar._cookie_from_cookie_tuple(self, tup, request)
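If you are on mechanize like the answer above, plugging the custom jar in might look like this (a sketch only, not tested against iTunes):

import mechanize

br = mechanize.Browser()
br.set_cookiejar(CustomCookieJar())  # use the patched jar instead of the default one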