How can I run a single "only once" request with Locust (Python)?

JMeter has the notion of a SetUp Thread Group with its own Number of Threads (users), but when using Locust, the on_start() method runs once for every user.
My workaround:
from locust import events
import requests

@events.test_start.add_listener
def _(environment, **kwargs):
    global token
    token = get_token(environment.host)

...

def get_token(host):
    r = requests.post(host + '/url/token', headers={}, ...)
    return r.text
To be honest, I don't really like that workaround.
I only need to get the token once. I can reuse it, and it's also a really heavy call (because of the amount of validations), so I don't want that call to be executed once per user.
Is there a way to create an "only once" request that handles this at the beginning of the test?
Any ideas 🤔
Edit: this question is about the Locust library, NOT JMeter.

I think your "workaround" is the correct solution.
But if you for some reason really don't like module-scoped variables and the global keyword, you can add your own fields to the environment (available as self.environment in the User instance).
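For example, something along these lines; the field name auth_token, the /some/endpoint path, and the bearer header are just placeholders for illustration:

from locust import HttpUser, task, events
import requests

@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    # Runs once per test, before users are spawned: fetch the token and
    # store it on the environment instead of in a module-level global.
    r = requests.post(environment.host + '/url/token', headers={})
    environment.auth_token = r.text  # custom field, not part of Locust's own API

class MyUser(HttpUser):
    @task
    def do_something(self):
        # Every user instance can read the shared token via self.environment.
        self.client.get(
            "/some/endpoint",
            headers={"Authorization": "Bearer " + self.environment.auth_token},
        )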

Why do I get ReadTimeout errors even with low load using Locust?

I'm running load-tests with Locust. I have two user-types that inherit from the HttpUser class, and they both call the same endpoints with the same parameters using the Python Requests library.
Type A users make calls more often and Type B users make calls less often, but pass longer query-strings.
No matter how many users I spawn (very few or very many), usually once the test has been running for 10 to 15 minutes, the same error starts occurring, for just one user type (Type B):
ReadTimeout(ReadTimeoutError("HTTPSConnectionPool(host='server.name.com', port=443): Read timed out. (read timeout=None)",),)
User Type A (on high and low load) continues making all requests with no failures.
Again, both user types are making the same calls, and all the individual calls have been QA'ed (they all work). It's only under load tests where they seem to fail intermittently.
Here is a view into some of the code. Below is a task, an intermediary function, and an implementation of one of my api calls. My code repeats this structure for all calls, for both users.
Task
@task(5)
def get_one_audio_file(self):
    self.do_task(self.get_one_audio_file_request)

def do_task(self, a_task):
    a_task()

def get_one_audio_file_request(self):
    path = "files/audio/"
    # 'headers' is defined elsewhere in the class/module
    self.client.get(path, name="files/audio", headers=headers, verify=False)
I'm at a loss to explain what's going on here.
If I run a load test with Type A users only, the test runs fine.
Any ideas?

Sequential scenarios each with a different protocol

I have a simulation where I first need an admin user to do certain things before a normal user can perform certain tasks.
val adminConf = http
  .baseURL(server)
  .headers(sentHeaders)
  .basicAuth(admin, password)

val normalUserConf = http
  .baseURL(server)
  .headers(sentHeaders)
  .basicAuth(normalUser, password)
At the moment I'm only able to run one scenario:
setUp(adminScenario
  .inject(atOnceUsers(1))
  .protocols(adminConf))
How can I run one scenario with the adminConf protocol and one scenario with the normalUserConf protocol sequentially?
Gatling does not have an API to run scenarios sequentially.
What you have described looks like a setup step. I would recommend using a before hook to perform the initial setup. Here is a relevant question.
Because inside before we don't have access to Gatling, we chose to use the sttp library, which has an API somewhat similar to Gatling's:
import com.softwaremill.sttp._
implicit val backend = HttpURLConnectionBackend()

sttp
  .cookie("login", "me")
  .body("This is a test")
  .post(uri"http://endpoint.com/secret")
  .send()

SugarCRM: How to find the caller of a logic hook

Suppose we have a before_save logic hook on Leads. How can we detect whether the hook was called because of:
a CRM user saving a lead form
a lead captured from one of the entry points
a save triggered by SOAP calls
a workflow modifying lead fields
a CSV import
...
I have checked some of the behaviors; it seems logic hooks are not called on workflows (at least in my tests).
I also hoped to figure this out from the global variables, but there are a lot of them.
So how can I detect the caller of a logic hook?
The best way that I found to figure this out is to add a:
$GLOBALS['log']->fatal(print_r($_REQUEST,true));
To your logic hook. Then test each scenario that you need to account for and see how the request differs. Also check $_SESSION. You'll be able to find a few things that you can depend upon for your logic.
That's what I did eventually. I'll share some of my observations so they may help others (depending on the case, only some of them may apply):
In third-party entry point calls $_SESSION is empty; in direct entry point calls it is not. In REST calls the session is also not empty.
REST calls have $_REQUEST['rest_data']; others don't.
Entry point calls have $_REQUEST['entryPoint'] available in the array.
The global $current_user is available, but the id ($current_user->id) is a string only when a user is submitting a form in the CRM.
In inline editing, $_REQUEST['action'] is equal to saveHTMLField.
In user calls $_SERVER['HTTP_USER_AGENT'] is available; in other calls it is not.
In a simple case this code shows how to detect user calls:
$trigger = false;
global $current_user;
if (!isset($current_user->id) || !(strlen($current_user->id) > 2)) {
    $trigger = true;
}
if ($trigger) {
    // my custom code here
}

QUnit and Sinon, testing XHR requests

I'm relatively new to unit testing and I'm trying to figure out a way to test an XHR request in a meaningful way.
1) The request pulls in various scripts and other resources onto the page. I want to make sure the correct number of resources are being loaded and that the request is successful.
2) Should I use an actual request to the service that is providing the resource? I looked at fakeServer and fakeXHR on sinonjs.org, but I don't really get how those can provide a meaningful test.
3) I'm testing existing code, which I realize is pretty pointless, but it's what I'm required to do. That being said, there is a lot of code in certain methods that could potentially be broken down into various tests. Should I break the existing code down and create tests for my interpreted expectation, or just write tests for what is actually there? ... if that makes any sense.
Thanks,
-John
I find it useful to use the sinon fakeServer to return various test responses that will exercise my client-side functions. You can set up a series of tests in which a fakeServer response returns data that you can use to subsequently check the behaviour of your code. For example, suppose you expect ten resource objects to be returned, you can create pre-canned xml or json to represent those resources and then check that your code has handled them properly. In another test, what does your code do when you only receive nine objects?
Begin writing your tests to cover your existing code. When those tests pass, begin breaking up your code into easier-to-understand and meaningful units. If the tests still pass, then great, you've just refactored your code and not inadvertently broken anything. Also, now you've got smaller chunks of code that can more readily be tested and understood. From this point on you'll never look back :-)

Hosting simple Python scripts in a container to handle concurrency, configuration, caching, etc.

My first real-world Python project is to write a simple framework (or re-use/adapt an existing one) which can wrap small Python scripts (used to gather custom data for a monitoring tool) with a "container" that handles boilerplate tasks like:
fetching a script's configuration from a file (keeping that info up to date if the file changes, and handling decryption of sensitive config data)
running multiple instances of the same script in different threads instead of spinning up a new process for each one
expose an API for caching expensive data and storing persistent state from one script invocation to the next
Today, script authors must handle the issues above, which usually means that most script authors don't handle them correctly, causing bugs and performance problems. In addition to avoiding bugs, we want a solution which lowers the bar to create and maintain scripts, especially given that many script authors may not be trained programmers.
Below are examples of the API I've been thinking of, and which I'm looking to get your feedback about.
A scripter would need to build a single method which takes (as input) the configuration that the script needs to do its job, and either returns a Python object or calls a method to stream back data in chunks. Optionally, a scripter could supply methods to handle startup and/or shutdown tasks.
HTTP-fetching script example (in pseudocode, omitting the actual data-fetching details to focus on the container's API):
def run(config, context, cache):
    results = http_library_call(config.url, config.http_method, config.username, config.password, ...)
    return {"html": results.html, "status_code": results.status, "headers": results.response_headers}

def init(config, context, cache):
    config.max_threads = 20                  # up to 20 URLs at one time (per process)
    config.max_processes = 3                 # launch up to 3 concurrent processes
    config.keepalive = 1200                  # keep the process alive for 20 mins without another call
    config.process_recycle.requests = 1000   # restart the process every 1000 requests (to avoid leaks)
    config.kill_timeout = 600                # kill the process if any call lasts longer than 10 minutes
A database-data-fetching script example might look like this (in pseudocode):
def run(config, context, cache):
    expensive = context.cache["something_expensive"]
    for record in db_library_call(expensive, context.checkpoint, config.connection_string):
        context.log(record, "logDate")  # log all properties, optionally specify name of timestamp property
        last_date = record["logDate"]
    context.checkpoint = last_date      # persistent checkpoint, used next time through

def init(config, context, cache):
    cache["something_expensive"] = get_expensive_thing()

def shutdown(config, context, cache):
    expensive = cache["something_expensive"]
    expensive.release_me()
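For illustration, here is a rough sketch of how I imagine the container driving these hooks; the loader function and names below are just assumptions, not an existing framework:

import importlib.util

def load_script(path):
    # Load the scripter's module from a file path (hypothetical loader).
    spec = importlib.util.spec_from_file_location("user_script", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

def run_script(module, config, context, cache):
    # Call the optional hooks only if the script defines them.
    init = getattr(module, "init", None)
    shutdown = getattr(module, "shutdown", None)
    if init:
        init(config, context, cache)
    try:
        return module.run(config, context, cache)
    finally:
        if shutdown:
            shutdown(config, context, cache)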
Is this API appropriately "pythonic", or are there things I should do to make this more natural to the Python scripter? (I'm more familiar with building C++/C#/Java APIs so I suspect I'm missing useful Python idioms.)
Specific questions:
Is it natural to pass a "config" object into a method and ask the callee to set various configuration options? Or is there another preferred way to do this?
When a callee needs to stream data back to its caller, is a method like context.log() (see above) appropriate, or should I be using yield instead (see the sketch after these questions)? (yield seems natural, but I worry it'd be over the head of most scripters)
My approach requires scripts to define functions with predefined names (e.g. "run", "init", "shutdown"). Is this a good way to do it? If not, what other mechanism would be more natural?
I'm passing the same config, context, cache parameters into every method. Would it be better to use a single "context" parameter instead? Would it be better to use global variables instead?
Finally, are there existing libraries you'd recommend to make this kind of simple "script-running container" easier to write?
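To make the yield question concrete, here is a rough sketch of the generator-based alternative I have in mind; the container-side consumption shown in the trailing comment is hypothetical:

def run(config, context, cache):
    # Generator-based streaming: the container pulls records as they are produced.
    expensive = cache["something_expensive"]
    for record in db_library_call(expensive, context.checkpoint, config.connection_string):
        yield record                              # hand each record back to the container
        context.checkpoint = record["logDate"]    # persist progress for the next invocation

# The container side might then consume the stream like this:
# for record in module.run(config, context, cache):
#     write_record_to_output(record)   # hypothetical container helper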
Have a look at SQLAlchemy for dealing with database stuff in Python. Also, to make script writing easier with respect to concurrency, look into Stackless Python.