How to explicitly trigger MasterLocustRunner.quit() in locust

To simulate specific number of iterations in locust, I am using simple code flow as below :
class NcsoTest(TaskSet):
    REQ_HEADER = {
        "Accept": "application/json",
        "Accept-Encoding": "gzip, deflate",
        "Connection": "keep-alive",
        "Content-Length": "860",
        "Content-Type": "application/json",
        "User-Agent": "python-requests/2.21.0",
    }
    @seq_task(1)
    def user_workflow(self):
        locrunner = MasterLocustRunner()
        for i in range(1, 100, 1):
            self.send_post_request()
            self.send_get_request()
        Singleton.logger().info("Test Number reached. Stopping Locust.")
        # runners.locust_runner.quit()
        locrunner.quit()

    def send_post_request(self):
        response = self.client.post("/api/v2/services", data=Singleton.json_body, headers=NcsoTest.REQ_HEADER)
        print(response)

    def send_get_request(self):
        response = self.client.get("/api/v2/services", headers=NcsoTest.REQ_HEADER)
        print(response)
class NcsoLoad(HttpLocust):
    max_wait = 300
    min_wait = 300
    sleep_time = 10
    task_set = NcsoTest
To run this test, I am using command with below parameters :
--host https://10.123.123.123 --min_wait_time 300 --max_wait_time 300 --num_clients 1 --hatch_rate 1 --test_time 5m
I am able to execute the 100 requests, but if they finish before the 5 minutes are up, the Locust runner keeps raising an error stating that MasterLocustRunner needs more parameters, namely locust_class & options (which I did specify in the locust command).
If I un-comment runners.locust_runner.quit() and comment out locrunner.quit(), the test keeps waiting until the graceful terminate call that the master sends after the run time ends.
What I would like to know is how to manually terminate Locust gracefully, both master and slave.

This has been much improved in Locust 1.1. You are now able to call self.environment.runner.quit() from a User (or self.user.environment.runner.quit() from a TaskSet).
You can also call it from a completely separate greenlet (maybe not exactly what you are looking for, but I thought I’d mention it), like here https://docs.locust.io/en/stable/extending-locust.html#run-a-background-greenlet
You should not create a new Runner inside your user tasks (which your code does). That is a little too wild :)
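For illustration, a minimal sketch in the Locust 1.x style (the class name, endpoint, and iteration budget below are placeholders, not taken from the question): a task counts its own iterations and calls self.environment.runner.quit() once the target is reached.
from locust import HttpUser, task, constant

class QuitAfterNUser(HttpUser):
    wait_time = constant(0.3)
    completed = 0

    @task
    def user_workflow(self):
        self.client.get("/api/v2/services")
        QuitAfterNUser.completed += 1
        if QuitAfterNUser.completed >= 100:
            # stop the test run from within the task, as described above
            self.environment.runner.quit()
Note that the class-level counter is per worker process; with several workers you would need to coordinate the budget across them (or run a single worker).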

Related

Celery chain - if any tasks fail, do x, else y

I'm just getting into Celery chains in my Django project. I have the following function:
def orchestrate_tasks_for_account(account_id):
    # Get the account, set status to 'SYNC' until the chain is complete
    account = Account.objects.get(id=account_id)
    account.status = "SYNC"
    account.save()
    chain = task1.s(account_id) | task2.s() | task3.s()
    chain()
    # if any of the tasks in the chain failed, set account.status = 'ERROR'
    # else set the account.status = 'OK'
The chain works as expected, but I'm not sure how to take feedback from the chain and update the account based on the results.
In other words, I'd like to set the account status to 'ERROR' if any of the tasks in the chain fail; otherwise I'd like to set the account status to 'OK'.
I'm confused by the Celery documentation on how to handle an error with an if/else like I've commented in the last two lines above.
Does anyone have experience with this?
Ok - here's what I came up with
I've leveraged the waiting library in this solution
from celery import chain
from waiting import wait

def orchestrate_tasks_for_account(account_id):
    account = Account.objects.get(id=account_id)
    account.status = "SYNC"
    account.save()
    job = chain(
        task1.s(account_id),
        task2.s(),
        task3.s()
    )
    result = job.apply_async()
    wait(
        lambda: result.ready(),  # when the async job is completed...
        timeout_seconds=1800,    # wait up to 1800 seconds (30 minutes)
        waiting_for="task orchestration to complete"
    )
    if result.successful():
        account.status = 'OK'
    else:
        account.status = 'ERROR'
    account.save()
I am open to suggestions to make this better!
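One possible simplification, sketched under the assumption that orchestrate_tasks_for_account is a plain function and not itself a Celery task (calling .get() from inside a task is discouraged): AsyncResult.get() with propagate=False can replace the external waiting dependency.
from celery import chain
from celery.exceptions import TimeoutError

def orchestrate_tasks_for_account(account_id):
    account = Account.objects.get(id=account_id)
    account.status = "SYNC"
    account.save()
    result = chain(task1.s(account_id), task2.s(), task3.s()).apply_async()
    try:
        # Block until the whole chain finishes, without re-raising task errors.
        result.get(timeout=1800, propagate=False)
    except TimeoutError:
        pass  # treat a timeout like a failure below
    account.status = 'OK' if result.successful() else 'ERROR'
    account.save()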

How to check for proper format in my API response

Currently running tests for my REST API which:
takes an endpoint from the user
using that endpoint, grabs info from a server
sends it to another server to be translated
then proceeds to jsonify the data.
I've written a series of automated tests, and I cannot get one of them to pass - the test that actually checks the content of the response. I've tried several variations of what the test expects, but I feel the actual implementation is the issue. Here's the expected API response from the client request:
{ "name": "random_character", "description": "Translated description of requested character is output here" }
Here is the testing class inside my test_main.py:
class Test_functions(unittest.TestCase):
    # checking if response of 200 is returned
    def test_healthcheck_PokeAPI(self):
        manualtest = app.test_client(self)
        response = manualtest.get("/pokemon/")
        status_code = response.status_code
        self.assertEqual(status_code, 200)

    # the status code should be a redirect i.e. 308; so I made a separate test for this
    def test_healthcheck_ShakesprAPI(self):
        manualtest = app.test_client(self)
        response = manualtest.get("/pokemon/charizard")
        self.assertEqual(response.status_code, 308)

    def test_response_content(self):
        manualtest = app.test_client(self)
        response = manualtest.get("/pokemon/charizard")
        self.assertEqual(response.content_type,
                         'application/json')  # <<<< this test is failing

    def test_trans_shakespeare_response(self):
        manualtest = app.test_client(self)
        response = manualtest.get("/pokemon/charizard")
        self.assertFalse(b"doth" in response.data)
Traceback:
AssertionError: 'text/html; charset=utf-8' != 'application/json' - text/html; charset=utf-8 + application/json
Any help would be greatly appreciated
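One thing worth checking, as a guess based on the 308 asserted in the second test rather than a confirmed diagnosis: the redirect response itself is served as text/html, so the content-type assertion may be inspecting the redirect instead of the final JSON payload. The Flask test client can be told to follow redirects:
def test_response_content(self):
    manualtest = app.test_client(self)
    # follow_redirects=True makes the client chase the 308,
    # so the assertion runs against the final response.
    response = manualtest.get("/pokemon/charizard", follow_redirects=True)
    self.assertEqual(response.content_type, 'application/json')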

airflow http callback sensor

Our airflow implementation sends out http requests to get services to do tasks. We want those services to let airflow know when they complete their task, so we are sending a callback url to the service which they will call when their task is complete. I can't seem to find a callback sensor, however. How do people handle this normally?
There is no such thing as a callback or webhook sensor in Airflow. The sensor definition follows as taken from the documentation:
Sensors are a certain type of operator that will keep running until a certain criterion is met. Examples include a specific file landing in HDFS or S3, a partition appearing in Hive, or a specific time of the day. Sensors are derived from BaseSensorOperator and run a poke method at a specified poke_interval until it returns True.
This means that a sensor is an operator that performs polling behavior on external systems. In that sense, your external services should have a way of keeping state for each executed task - either internally or externally - so that a polling sensor can check on that state.
This way you can, for example, use the airflow.operators.HttpSensor, which polls an HTTP endpoint until a condition is met. Or even better, write your own custom sensor, which gives you the opportunity to do more complex processing and keep state.
Otherwise, if the service writes its output to a storage system, you can use a sensor that polls a database, for example. I believe you get the idea.
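For illustration, a minimal HttpSensor sketch, assuming the external service exposes some status endpoint (the connection id, endpoint, and response check below are placeholders; the exact import path varies between Airflow versions):
from airflow.sensors.http_sensor import HttpSensor

wait_for_service = HttpSensor(
    task_id='wait_for_service',
    http_conn_id='my_service',      # assumed Airflow HTTP connection
    endpoint='tasks/1234/status',   # hypothetical status endpoint
    response_check=lambda response: response.json().get('state') == 'done',
    poke_interval=30,               # seconds between polls
    timeout=60 * 30,                # fail the sensor after 30 minutes
    dag=dag,
)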
I'm attaching a custom operator example that I've written for integrating with the Apache Livy API. The sensor does two things: a) submits a Spark job through the REST API and b) waits for the job to be completed.
The operator extends SimpleHttpOperator and at the same time implements the HttpSensor-style poke behavior, thus combining both functionalities.
import json
from datetime import datetime
from time import sleep

from airflow.exceptions import AirflowException, AirflowSensorTimeout
from airflow.hooks.http_hook import HttpHook
from airflow.operators.http_operator import SimpleHttpOperator
from airflow.utils.decorators import apply_defaults


class LivyBatchOperator(SimpleHttpOperator):
    """
    Submits a new Spark batch job through
    the Apache Livy REST API.
    """
    template_fields = ('args',)
    ui_color = '#f4a460'

    @apply_defaults
    def __init__(self,
                 name,
                 className,
                 file,
                 executorMemory='1g',
                 driverMemory='512m',
                 driverCores=1,
                 executorCores=1,
                 numExecutors=1,
                 args=[],
                 conf={},
                 timeout=120,
                 http_conn_id='apache_livy',
                 *arguments, **kwargs):
        """
        If xcom_push is True, response of an HTTP request will also
        be pushed to an XCom.
        """
        super(LivyBatchOperator, self).__init__(
            endpoint='batches', *arguments, **kwargs)
        self.http_conn_id = http_conn_id
        self.method = 'POST'
        self.endpoint = 'batches'
        self.name = name
        self.className = className
        self.file = file
        self.executorMemory = executorMemory
        self.driverMemory = driverMemory
        self.driverCores = driverCores
        self.executorCores = executorCores
        self.numExecutors = numExecutors
        self.args = args
        self.conf = conf
        self.timeout = timeout
        self.poke_interval = 10

    def execute(self, context):
        """
        Executes the task
        """
        payload = {
            "name": self.name,
            "className": self.className,
            "executorMemory": self.executorMemory,
            "driverMemory": self.driverMemory,
            "driverCores": self.driverCores,
            "executorCores": self.executorCores,
            "numExecutors": self.numExecutors,
            "file": self.file,
            "args": self.args,
            "conf": self.conf
        }
        self.log.info(payload)
        headers = {
            'X-Requested-By': 'airflow',
            'Content-Type': 'application/json'
        }
        http = HttpHook(self.method, http_conn_id=self.http_conn_id)
        self.log.info("Submitting batch through Apache Livy API")
        response = http.run(self.endpoint,
                            json.dumps(payload),
                            headers,
                            self.extra_options)
        # parse the JSON response
        obj = json.loads(response.content)
        # get the new batch Id
        self.batch_id = obj['id']
        self.log.info('Batch successfully submitted with Id %s', self.batch_id)
        # start polling the batch status
        started_at = datetime.utcnow()
        while not self.poke(context):
            if (datetime.utcnow() - started_at).total_seconds() > self.timeout:
                raise AirflowSensorTimeout('Snap. Time is OUT.')
            sleep(self.poke_interval)
        self.log.info("Batch %s has finished", self.batch_id)

    def poke(self, context):
        '''
        Polls the Livy API for the current state of the submitted batch.
        '''
        http = HttpHook(method='GET', http_conn_id=self.http_conn_id)
        self.log.info("Calling Apache Livy API to get batch status")
        # call the API endpoint
        endpoint = 'batches/' + str(self.batch_id)
        response = http.run(endpoint)
        # parse the JSON response
        obj = json.loads(response.content)
        # get the current state of the batch
        state = obj['state']
        # check the batch state
        if (state == 'starting') or (state == 'running'):
            # if state is 'starting' or 'running'
            # signal a new polling cycle
            self.log.info('Batch %s has not finished yet (%s)',
                          self.batch_id, state)
            return False
        elif state == 'success':
            # if state is 'success' exit
            return True
        else:
            # for all other states
            # raise an exception and
            # terminate the task
            raise AirflowException(
                'Batch ' + str(self.batch_id) + ' failed (' + state + ')')
Hope this will help you a bit.

Matlab RESTful PUT Command - net.http - nesting body values

I am using Matlab's matlab.net.http library to launch get, put and post commands to a website. I can successfully launch get and post commands.
For example:
MyBody = matlab.net.http.MessageBody(struct('Id',YYYYYY,'WindfarmId',XXX,'Month','YYYY-MM-DD'));
Request = matlab.net.http.RequestMessage;
Request.Method = 'POST';
Request.Header = matlab.net.http.HeaderField('Content-Type','application/json','Authorization',['Basic ' matlab.net.base64encode([Username ':' Password])]);
Request.Body = MyBody;
uri = matlab.net.URI(ENTERURLHERE);
Response = Request.send(uri,MyHTTPOptions);
This works well. However, for a PUT command I have to enter the equivalent of this body (written in curl syntax):
-d '{ "InputValues": [ {"MetricLevelAId": 1, "MetricLevelBId": 1, "InputMetricId": 7, "Value": 56 } ] }'
I tried this:
data_InputValues = struct ('MetricLevelAId',1,'MetricLevelBId',1,'InputMetricId',7,'Value',56);
MyBody = matlab.net.http.MessageBody(struct('InputValues',data_InputValues));
However I keep receiving the following 'Bad Request' response from the server:
"Input values required"
I think this is linked to the way Matlab interprets the body part of the request and passes it to the server, i.e. it cannot pass the nested struct correctly. Anyone got any ideas how to solve this?
N.B. potentially linked to Translating curl into Matlab/Webwrite (it is dealing with a nested value)

What's going wrong when I try to create a review comment through Github's v3 API?

I'm trying to create a review comment through GitHub's v3 API and am not succeeding. Consider this repository. There's a single pull request, and for the purposes of this question let's say I want to leave a 'changes requested' review on that PR. Here's the code I've got:
#!/usr/bin/env python3
import requests
import json
TOKEN='YOUR_TOKEN_HERE'
REPO = "blt/experiment-repo"
PR_NUM = 1
COMMIT_SHA_1 = "4160bee478c3c985eaaa35f161cc922fe20b354a"
COMMIT_SHA_2 = "df9d13a2e35f9b6c228e1f30ea30585ed85af26a"
def main():
    pr_comment_headers = {
        'user-agent': 'benedikt/0.0.1',
        'Authorization': 'token %s' % TOKEN,
        # Accept header per
        # https://developer.github.com/changes/2016-12-16-review-requests-api/
        'Accept': 'application/vnd.github.black-cat-preview+json',
    }
    msg = "BLEEP BLOOP I AM A ROBOT"
    payload = { 'commit_id': COMMIT_SHA_2,
                'body': msg,
                'event': "REQUEST_CHANGES" }
    # Per https://developer.github.com/v3/pulls/reviews/#create-a-pull-request-review
    review_url = "https://api.github.com/repos/%s/pulls/%s/reviews" % (REPO, PR_NUM)
    res = requests.post(review_url, headers = pr_comment_headers,
                        json = json.dumps(payload))
    print(res)
    print(res.text)

if __name__ == '__main__':
    main()
I've marked in code comments where I discovered which API endpoints to hit and with what payloads. Evidently I must have goofed somewhere, because when I run the above program I receive:
<Response [422]>
{"message":"Validation Failed","errors":["Variable commitOID of type GitObjectID was provided invalid value","Variable event of type PullRequestReviewEvent was provided invalid value"],"documentation_url":"https://developer.github.com/v3/pulls/reviews/#create-a-pull-request-review"}
I've verified that the commit SHAs are the exact ones that Github shows and REQUEST_CHANGES is the string in the documentation.
What am I missing?
I think you need to let requests encode the request body instead of encoding it yourself with json.dumps(), something like this: requests.post(..., json=payload)
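Applied to the script above, the corrected call would look something like this (everything else unchanged):
    # Pass the dict itself; requests serializes it to JSON and sets the
    # Content-Type header, instead of sending a pre-encoded string.
    res = requests.post(review_url, headers=pr_comment_headers,
                        json=payload)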