How to pass custom parameters to a locust test class?

I'm currently passing custom parameters to my load test using environment variables. For example, my test class looks like this:
from locust import HttpLocust, TaskSet, task
import os

class UserBehavior(TaskSet):
    @task(1)
    def login(self):
        test_dir = os.environ['BASE_DIR']
        auth = tuple(open(test_dir + '/PASSWORD').read().rstrip().split(':'))
        self.client.request(
            'GET',
            '/myendpoint',
            auth=auth
        )

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
Then I'm running my test with:
locust -H https://myserver --no-web --clients=500 --hatch-rate=500 --num-request=15000 --print-stats --only-summary
Is there a more Locust-idiomatic way to pass custom parameters to the locust command-line application?

You could run the test as env <parameter>=<value> locust <options> and read <parameter> inside the locust script to get its value.
E.g.:
env IP_ADDRESS=100.0.1.1 locust -f locust-file.py --no-web --clients=5 --hatch-rate=1 --num-request=500, then read IP_ADDRESS inside the locust script; its value is 100.0.1.1 in this case.
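Inside the locustfile, the value can then be read with os.environ; a minimal stdlib sketch (IP_ADDRESS is just the example variable name from above):

```python
import os

# Read the custom parameter passed on the command line via `env`,
# with a fallback default in case the variable was not set
ip_address = os.environ.get("IP_ADDRESS", "127.0.0.1")
print("Target IP: %s" % ip_address)
```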

Nowadays it is possible to add custom parameters to Locust (it wasn't possible when this question was originally asked, at which time using env vars was probably the best option).
Since version 2.2, custom parameters are even forwarded to the workers in a distributed run.
https://docs.locust.io/en/stable/extending-locust.html#custom-arguments
from locust import HttpUser, task, events

@events.init_command_line_parser.add_listener
def _(parser):
    parser.add_argument("--my-argument", type=str, env_var="LOCUST_MY_ARGUMENT", default="", help="It's working")
    # Set `include_in_web_ui` to False if you want to hide the argument from the web UI
    parser.add_argument("--my-ui-invisible-argument", include_in_web_ui=False, default="I am invisible")

@events.test_start.add_listener
def _(environment, **kw):
    print("Custom argument supplied: %s" % environment.parsed_options.my_argument)

class WebsiteUser(HttpUser):
    @task
    def my_task(self):
        print(f"my_argument={self.environment.parsed_options.my_argument}")
        print(f"my_ui_invisible_argument={self.environment.parsed_options.my_ui_invisible_argument}")
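Locust's command-line parser extends argparse (via configargparse), which is what makes the env_var fallback work; the same idea can be sketched with plain stdlib argparse (the flag and variable names below are just the ones from the example, not a Locust API):

```python
import argparse
import os

parser = argparse.ArgumentParser()
# Emulate Locust's env_var= support: fall back to the environment
# variable when the flag is not given on the command line
parser.add_argument("--my-argument", type=str,
                    default=os.environ.get("LOCUST_MY_ARGUMENT", ""))

opts = parser.parse_args(["--my-argument", "hello"])
print(opts.my_argument)  # hello
```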

Running locust from the command line in --no-web mode is not recommended if you want to test with high concurrency: in that mode you can only use one CPU core, so you cannot make full use of your test machine.
Back to your question: there is no other way to pass custom parameters to locust on the command line.

Related

Karate- Gatling: Not able to run scenarios based on tags

I am trying to run a performance test on the scenario tagged perf from the feature file below:
@tag1 @tag2 @tag3
Background:
    user login

@tag4 @perf
Scenario: scenario1

@tag4
Scenario: scenario2
Below is my .scala file setup-
class PerfTest extends Simulation {
  val protocol = karateProtocol()
  val getTags = scenario("Name goes here").exec(karateFeature("classpath:filepath"))

  setUp(
    getTags.inject(
      atOnceUsers(1)
    ).protocols(protocol)
  )
}
I have tried passing the tags from the command line, as well as passing the tag as an argument to the exec method in the scala setup.
Terminal command:
mvn clean test-compile gatling:test "-Dkarate.env={env}" "-Dkarate.options= --tags @perf"
.scala update: I have also tried passing the tag as an argument in the karate execute call.
val getTags = scenario("Name goes here").exec(karateFeature("classpath:filepath", "@perf"))
Both scenarios are executed with either approach. Any pointers on how I can force only the test tagged perf to run?
I wanted to share my findings here. I realized it works fine when I pass the tag info in the .scala file.
My scenario with the perf tag was a combination of a GET and a POST call, as I needed some data from the GET call to pass into the POST call. That's why I was seeing both calls when running the performance test.
I did not find any reference in the karate-gatling documentation to passing tags in the terminal execution command, so I am assuming that might not be a valid case.

Upgrading celery 4.x.x to 5.x.x in a Django app - execute_from_commandline() replacement

The usage in 4.x.x was as follows:
from tenant_schemas_celery.app import CeleryApp

class TenantCeleryApp(CeleryApp):
    def create_task_cls(self):
        return self.subclass_with_self('...', abstract=True, name='...', attribute='_app')

tenant_celery = TenantCeleryApp()

base = celery.CeleryCommand(app=tenant_celery)
base.execute_from_commandline('...')
...
Now, after updating the celery lib to 5.x.x, the following error shows:
base = celery.CeleryCommand(app=tenant_celery)
TypeError: __init__() got an unexpected keyword argument 'app'
From the documentation, the new CeleryCommand uses the click.Command class. How do I change my code to fit, and what is the replacement usage for execute_from_commandline()?
EDIT:
After some hard tries, the following code works:
tenant_celery.worker_main(argv=['--broker=amqp://***:***@rabbitmq:5672//***',
                                '-A', f'{__name__}:tenant_celery',
                                'worker', '-c', '1', '-Q', 'c1,c2,c3,c4'])
You can do a few things here.
The typical way to invoke / start a worker from within python is discussed at this answer:
worker = tenant_celery.Worker(
include=['project.tasks']
)
worker.start()
In this case, you would be responsible for making the worker exit when you are done.
To execute the CeleryCommand / click.Command, you pass in the arguments to the main function
base = CeleryCommand()
base.main(args=['worker', '-A', f'{__name__}:tenant_celery'])
You would still be responsible for controlling how celery exits in this case, too. You may also choose a verb other than worker, such as multi, for whatever celery subcommand you were expecting to call.
You may also want to explicitly specify the name of the celery module for the -A parameter as discussed here.

How to stop Locust Load Test after all users complete their task?

According to the locust documentation, we can only stop a task using self.interrupt(), but that just hands control back to the parent class; it does not stop the load test. I want to stop the complete load test after all users have logged in and completed their tasks.
Locust Version: 1.1
class RegisteredUser(User):
    @task
    class Forum(TaskSet):
        @task(5)
        def view_thread(self):
            pass

        @task(1)
        def stop(self):
            self.interrupt()

    @task
    def frontpage(self):
        pass
You can call self.environment.runner.quit() to stop the whole run.
More info: https://docs.locust.io/en/stable/writing-a-locustfile.html#environment-attribute
My framework creates a list of tuples of credentials and support variables for every user. I have stored all my user credentials, tokens, support file names, etc. in those tuples as part of the list (this is actually done automatically before starting locust).
I import that list in the locustfile:
# creds is created before running the locust file and can be stored outside of, or as part of, the locust file
creds = [('demo_user1', 'pass1', 'lnla'),
         ('demo_user2', 'pass2', 'taam9'),
         ('demo_user3', 'pass3', 'wevee'),
         ('demo_user4', 'pass4', 'avwew')]

class RegisteredUser(SequentialTaskSet):
    def on_start(self):
        self.credentials = creds.pop()

    @task
    def task_one_name(self):
        task_one_commands

    @task
    def task_two_name(self):
        task_two_commands

    @task
    def stop(self):
        if len(creds) == 0:
            self.user.environment.reached_end = True
            self.user.environment.runner.quit()

class ApiUser(HttpUser):
    tasks = [RegisteredUser]
    host = 'hosturl'
I use self.credentials in the tasks, and I created the stop function in my class.
Also, note that RegisteredUser inherits from SequentialTaskSet so that all tasks run in sequence.
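Because creds is a plain module-level list shared by all simulated users, each on_start call pops a distinct tuple, and the stop task fires once the list is exhausted. The shared-pop behaviour itself is plain Python:

```python
# Stand-in credential list, same shape as in the locustfile above
creds = [('demo_user1', 'pass1', 'lnla'),
         ('demo_user2', 'pass2', 'taam9')]

first = creds.pop()   # each simulated user takes one tuple...
second = creds.pop()  # ...so no two users share credentials
print(first, second, len(creds))
```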

pytest implementing a logfile per test method

I would like to create a separate log file for each test method, and I would like to do this in the conftest.py file and pass the logfile instance to the test method. This way, whenever I log something in a test method, it goes to a separate log file and is very easy to analyse.
I tried the following.
Inside conftest.py file i added this:
logs_dir = pkg_resources.resource_filename("test_results", "logs")

def pytest_runtest_setup(item):
    test_method_name = item.name
    testpath = item.parent.name.strip('.py')
    path = '%s/%s' % (logs_dir, testpath)
    if not os.path.exists(path):
        os.makedirs(path)
    log = logger.make_logger(test_method_name, path)  # make_logger creates the logfile and returns the Python logging object.
The problem here is that pytest_runtest_setup has no way to return anything to the test method, at least none that I am aware of.
So I thought of creating a fixture method inside the conftest.py file with scope="function" and calling this fixture from the test methods. But the fixture method does not know about the pytest Item object, whereas pytest_runtest_setup receives the item parameter, from which we can find out the test method name and test method path.
Please help!
I found this solution by researching further based on webh's answer. I tried to use pytest-logger, but its file structure is very rigid and it was not really useful for me. I found the following code to work without any plugin. It is based on set_log_path, which is an experimental feature.
Tested with Pytest 6.1.1 and Python 3.8.4.
# conftest.py
# Required modules
import pytest
from pathlib import Path

# Configure logging
@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_runtest_setup(item):
    config = item.config
    logging_plugin = config.pluginmanager.get_plugin("logging-plugin")
    filename = Path('pytest-logs', item._request.node.name + ".log")
    logging_plugin.set_log_path(str(filename))
    yield
Note that Path can be substituted with os.path.join. Moreover, different tests can be set up in different folders, and a historical record of all tests can be kept by adding a timestamp to the filename. One could use the following filename, for example:
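For instance, the two spellings produce the same path string (the test name here is illustrative):

```python
import os.path
from pathlib import Path

name = "test_login"  # illustrative test name
p1 = str(Path('pytest-logs', name + ".log"))
p2 = os.path.join('pytest-logs', name + ".log")
print(p1, p2, p1 == p2)
```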
# conftest.py
# Required modules
import pytest
import datetime
from pathlib import Path

# Configure logging
@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_runtest_setup(item):
    ...
    filename = Path(
        'pytest-logs',
        item._request.node.name,
        f"{datetime.datetime.now().strftime('%Y%m%dT%H%M%S')}.log"
    )
    ...
Additionally, if one would like to modify the log format, one can change it in pytest configuration file as described in the documentation.
# pytest.ini
[pytest]
log_file_level = INFO
log_file_format = %(name)s [%(levelname)s]: %(message)s
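The %(message)s placeholder needs its trailing s; the format string can be sanity-checked against the stdlib logging module directly:

```python
import logging

# Same format string as the pytest log_file_format setting
fmt = logging.Formatter('%(name)s [%(levelname)s]: %(message)s')
record = logging.LogRecord('demo', logging.INFO, __file__, 1, 'hello', None, None)
print(fmt.format(record))  # demo [INFO]: hello
```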
My first stackoverflow answer!
I found the answer I was looking for. I was able to achieve it using a function-scoped fixture like this:
@pytest.fixture(scope="function")
def log(request):
    test_path = request.node.parent.name.strip(".py")
    test_name = request.node.name
    node_id = request.node.nodeid
    log_file_path = '%s/%s' % (logs_dir, test_path)
    if not os.path.exists(log_file_path):
        os.makedirs(log_file_path)
    logger_obj = logger.make_logger(test_name, log_file_path, node_id)
    yield logger_obj
    # Iterate over a copy: removeHandler() mutates logger_obj.handlers
    for handler in list(logger_obj.handlers):
        handler.close()
        logger_obj.removeHandler(handler)
In newer pytest versions this can be achieved with set_log_path.
@pytest.fixture(autouse=True)
def manage_logs(request):
    """Set log file name same as test name"""
    request.config.pluginmanager.get_plugin("logging-plugin")\
        .set_log_path(os.path.join('log', request.node.name + '.log'))

What is considered defaults for celery's app.conf.humanize(with_defaults=False)?

I'm trying to print celery's configuration using app.conf.humanize(with_defaults=False), following the example in the user guide, but I always get an empty string when using with_defaults=False. I know the configuration changes are in effect because I can see them when using .humanize(with_defaults=True) instead.
I'm guessing that adding configuration with app.conf.config_from_object('myconfig') loads the settings as "defaults", so is there a way to load the config in the myconfig module so that it is not treated as a default?
This is my source code:
# myconfig.py
worker_redirect_stdouts_level = 'INFO'
imports = ('tasks',)
and
# tasks.py
from celery import Celery

app = Celery()
app.config_from_object('myconfig')
print("config: %s" % app.conf.humanize(with_defaults=False))

@app.task
def debug(*args, **kwargs):
    print("debug task args : %r" % (args,))
    print("debug task kwargs: %r" % (kwargs,))
I start celery using env PYTHONPATH=. celery worker --loglevel=INFO and it prints config: (if I change to with_defaults=True I get the expected full output).
The configuration loaded with config_from_object() or config_from_envvar() is not considered defaults.
The behaviour observed was due to a bug, fixed by this commit in response to my bug report, so future versions of celery will work as expected:
from celery import Celery

app = Celery()
app.config_from_object('myconfig')
app.conf.humanize()  # returns only settings directly set by 'myconfig', omitting defaults
where myconfig is a python module in the PYTHONPATH:
# myconfig.py
worker_redirect_stdouts_level = 'DEBUG'