How do I fail a test when timeout is reached - locust

When using the --run-time 0h01m argument I want the test to fail, but instead you get the standard:
10:59:36 [2020-07-15 09:59:32,838] jslave-traditional-v2-10-25-234-143/INFO/locust.main: Time limit reached. Stopping Locust.
10:59:36 [2020-07-15 09:59:32,838] jslave-traditional-v2-10-25-234-143/INFO/locust.main: Shutting down (exit code 0), bye.
10:59:36 [2020-07-15 09:59:32,838] jslave-traditional-v2-10-25-234-143/INFO/locust.main: Cleaning up runner...
10:59:36 [2020-07-15 09:59:32,838] jslave-traditional-v2-10-25-234-143/INFO/locust.main: Running teardowns...
Alternatively, when the process receives a SIGTERM signal the test should also fail. How do I do either?
EDIT:
I wasn't clear about the goal. When Locust is running and either times out or receives a SIGTERM signal to stop, I want Locust to first change its exit status code to 1.
So, just to test whether I could get this going, I tried to set a custom exit code using this code:
from locust import events

@events.quitting.add_listener
def _(environment, **kw):
    environment.process_exit_code = 1
and this is the stacktrace:
13:52:32 [2020-07-16 12:52:32,747] jslave-traditional-v2-10-25-213-101/ERROR/stderr:
    import context
  File "/var/build/predictive-routing-e2e-dev-2556/tests/locust/context.py", line 7, in <module>
    import util
  File "/var/build/predictive-routing-e2e-dev-2556/tests/locust/util.py", line 149, in <module>
    @events.quitting.add_listener
AttributeError: 'EventHook' object has no attribute 'add_listener'
If I can get this working, the goal is to set the environment exit code to 1 whenever Locust receives a SIGTERM to shut down.

I’m not sure exactly what you need, but you can override Locust’s exit code by setting self.environment.process_exit_code in a task.
For more info: https://docs.locust.io/en/stable/running-locust-without-web-ui.html?highlight=Exit#controlling-the-exit-code-of-the-locust-process
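In case it helps, here is a minimal sketch of such a listener. It assumes a Locust version whose events expose add_listener (1.x and later); the registration step is shown as a comment so the snippet stands alone:

```python
def set_exit_code(environment, **kwargs):
    # quitting-event listener: mark the run as failed no matter how it
    # was stopped (--run-time expiry and SIGTERM both trigger quitting)
    environment.process_exit_code = 1

# In a locustfile you would register it (assumes the Locust 1.x events API):
#   from locust import events
#   events.quitting.add_listener(set_exit_code)
```

Your AttributeError suggests an older Locust whose EventHook predates add_listener; upgrading Locust should make that API available.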

Related

Show() brings error after applying pandas udf to dataframe

I am having problems making this trial code work. The final line df.select(plus_one(col("x"))).show() doesn't work. I also tried saving it in a variable (vardf = df.select(plus_one(col("x"))) followed by vardf.show()) and that fails too.
import pyspark
import pandas as pd
from typing import Iterator
from pyspark.sql.functions import col, pandas_udf, struct

spark = pyspark.sql.SparkSession.builder.getOrCreate()
spark.sparkContext.setLogLevel("WARN")

pdf = pd.DataFrame([1, 2, 3], columns=["x"])
df = spark.createDataFrame(pdf)
df.show()

@pandas_udf("long")
def plus_one(batch_iter: Iterator[pd.Series]) -> Iterator[pd.Series]:
    for s in batch_iter:
        yield s + 1

df.select(plus_one(col("x"))).show()
Error message (parts of it):
  File "C:\bigdatasetup\anaconda3\envs\pyspark-env\lib\site-packages\spyder_kernels\py3compat.py", line 356, in compat_exec
    exec(code, globals, locals)
  File "c:\bigdatasetup\dataanalysiswithpythonandpyspark-trunk\code\ch09\untitled0.py", line 24, in <module>
    df.select(plus_one(col("x"))).show()
  File "C:\bigdatasetup\anaconda3\envs\pyspark-env\lib\site-packages\pyspark\sql\dataframe.py", line 494, in show
    print(self._jdf.showString(n, 20, vertical))
  File "C:\bigdatasetup\anaconda3\envs\pyspark-env\lib\site-packages\py4j\java_gateway.py", line 1321, in __call__
    return_value = get_return_value(
  File "C:\bigdatasetup\anaconda3\envs\pyspark-env\lib\site-packages\pyspark\sql\utils.py", line 117, in deco
    raise converted from None
PythonException:
  An exception was thrown from the Python worker. Please see the stack trace below.
...
...
ERROR 2022-04-21 09:48:24,423 7608 org.apache.spark.scheduler.TaskSetManager [task-result-getter-0] Task 0 in stage 3.0 failed 1 times; aborting job
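Independently of the Spark error (which the truncated trace doesn't explain), the iterator-UDF contract the code relies on can be checked in plain Python: the function consumes an iterator of batches and yields one transformed batch per input batch. A minimal sketch with plain lists standing in for pd.Series:

```python
from typing import Iterator, List

def plus_one(batch_iter: Iterator[List[int]]) -> Iterator[List[int]]:
    # Same shape as the pandas UDF above: one output batch per input batch
    for batch in batch_iter:
        yield [x + 1 for x in batch]

batches = [[1, 2], [3]]
result = list(plus_one(iter(batches)))  # [[2, 3], [4]]
```

If this pure-Python version of the logic behaves as expected, the failure is in the Spark worker environment rather than in the UDF body itself.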

JSON error while using Pandas output format

I am using the alpha_vantage TimeSeries API like below:
import pandas as pd
from alpha_vantage.timeseries import TimeSeries
from alpha_vantage.techindicators import TechIndicators
from matplotlib.pyplot import figure
import matplotlib.pyplot as plt
from pprint import pprint
#my key
key = 'mykey'
ts = TimeSeries(key, output_format='pandas')
def processMyBatch(batch, FD):
    for i in batch:
        df, meta_data = ts.get_quote_endpoint(i)
        FD = FD.append(df)
    return FD

# main code...
for i in batches:
    DF2 = processMyBatch(i, DF)
    DF = DF2
While the API worked for a few symbols (see the log below), somewhere in the middle of the list of symbols I suddenly got the following JSONDecodeError, even though I am using output_format='pandas'. Could you please throw some light on why this error occurred?
Thank you
================error===============
/opt/scripts
starting now. fileName is: /mnt/NAS/Documents/../../../dailyquote2020-03-03.xlsx
completed the batch: ['AAPL', 'ABBV', 'AMZN', 'BAC', 'BNDX']
Waiting to honor API requirement: for 1 min
Waited: 65 sec
completed the batch: ['C', 'CNQ', 'CTSH', 'EEMV', 'FBGRX']
Waiting to honor API requirement: for 1 min
Waited: 65 sec
completed the batch: ['FDVV', 'FFNOX', 'FSMEX', 'FXAIX', 'GE']
Waiting to honor API requirement: for 1 min
Waited: 65 sec
Traceback (most recent call last):
File "getQuotes.py", line 55, in <module>
DF2=processMyBatch(i, DF)
File "getQuotes.py", line 29, in processMyBatch
df, meta_data = ts.get_quote_endpoint(i)
File "/home/username/.local/lib/python3.6/site-packages/alpha_vantage/alphavantage.py", line 174, in _format_wrapper
self, *args, **kwargs)
File "/home/username/.local/lib/python3.6/site-packages/alpha_vantage/alphavantage.py", line 159, in _call_wrapper
return self._handle_api_call(url), data_key, meta_data_key
File "/home/username/.local/lib/python3.6/site-packages/alpha_vantage/alphavantage.py", line 287, in _handle_api_call
json_response = response.json()
File "/home/username/.local/lib/python3.6/site-packages/requests/models.py", line 898, in json
return complexjson.loads(self.text, **kwargs)
File "/usr/lib/python3/dist-packages/simplejson/__init__.py", line 518, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3/dist-packages/simplejson/decoder.py", line 370, in decode
obj, end = self.raw_decode(s)
File "/usr/lib/python3/dist-packages/simplejson/decoder.py", line 400, in raw_decode
return self.scan_once(s, idx=_w(s, idx).end())
simplejson.errors.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Added on 3/4/2020
..
..
completed the batch: ['FDVV', 'FFNOX', 'FSMEX', 'FXAIX', 'GE']
Waiting to honor API requirement: for 1 min
Waited: 65 sec
completed the batch: ['GOOGL', 'IGEB', 'IJH', 'IJR', 'IMTB']
Waiting to honor API requirement: for 1 min
Waited: 65 sec
Traceback (most recent call last):
File "getQuotes.py", line 55, in <module>
DF2=processMyBatch(i, DF)
..
..
Well, I was getting an error like that today, and it turns out the Alpha Vantage site is down!
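When the service itself is flaky, one option is to hedge each call with a retry instead of letting the whole batch run die. A sketch, where fetch_with_retry is a hypothetical helper and fetch stands in for your API call (e.g. ts.get_quote_endpoint):

```python
import time

def fetch_with_retry(fetch, symbol, retries=3, wait=65):
    # Both json's and simplejson's JSONDecodeError subclass ValueError,
    # so this catches the "Expecting value" failure seen in the traceback.
    for attempt in range(retries):
        try:
            return fetch(symbol)
        except ValueError:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(wait)
```

Inside processMyBatch the call would then become something like df, meta_data = fetch_with_retry(ts.get_quote_endpoint, i).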

Cannot cast ListType[tuple(float64 x 2)] to list(tuple(float64 x 2)) in numba

Hello, I am trying to use a typed List in numba 0.46:
>>> from numba.typed import List
>>> from numba import types
>>> mylist = List.empty_list(item_type=types.Tuple((types.f8, types.f8)))
>>> mylist2 = List.empty_list(item_type=types.List(dtype=types.Tuple((types.f8, types.f8))))
>>> mylist2.append(mylist)
but I got the following error. I am wondering how to fix it?
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.7/site-packages/numba/typed/typedlist.py", line 223, in append
    _append(self, item)
  File "/usr/local/lib/python3.7/site-packages/numba/dispatcher.py", line 401, in _compile_for_args
    error_rewrite(e, 'typing')
  File "/usr/local/lib/python3.7/site-packages/numba/dispatcher.py", line 344, in error_rewrite
    reraise(type(e), e, None)
  File "/usr/local/lib/python3.7/site-packages/numba/six.py", line 668, in reraise
    raise value.with_traceback(tb)
numba.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Internal error: Failed in nopython mode pipeline (step: nopython mode backend)
Cannot cast ListType[tuple(float64 x 2)] to list(tuple(float64 x 2)):
    %".24" = load {i8*, i8*}, {i8*, i8*}* %"item"
File "../../usr/local/lib/python3.7/site-packages/numba/listobject.py", line 434:
    def impl(l, item):
        casteditem = _cast(item, itemty)
The following should work; the item type of a typed list must itself be a typed-list type (types.ListType), not the reflected types.List:
mylist2 = List.empty_list(item_type=types.ListType(itemty=types.Tuple((types.f8, types.f8))))

RuntimeWarning while importing scipy.stats

While importing scipy.stats I got the following RuntimeWarning. How do I resolve it?
my versions are:
'3.7.3 (default, Mar 27 2019, 17:13:21) [MSC v.1915 64 bit (AMD64)]'
Anaconda version: 2019.03 Build: py37_0
import scipy.stats as stats
Traceback (most recent call last):
File "<ipython-input-2-7e938f6e949f>", line 1, in <module>
import scipy.stats as stats
File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\stats\__init__.py", line 367, in <module>
from .stats import *
File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\stats\stats.py", line 172, in <module>
import scipy.special as special
File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\special\__init__.py", line 641, in <module>
from ._ufuncs import *
File "__init__.pxd", line 918, in init scipy.special._ufuncs
RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility.
Expected 192 from C header, got 216 from PyObject

IPython fails to import pandas from Miniconda

I'm using IPython and pandas to work with Google BigQuery. I installed pandas using 'conda install pandas', and I believe Miniconda installed all dependencies. But when I try to import pandas in an IPython notebook, it gives me the following errors:
>
> ---------------------------------------------------------------------------
> ImportError Traceback (most recent call last)
> <ipython-input-1-a3826df0a77b> in <module>()
> ----> 1 import pandas as pd
> 2
> 3 projectid = "geotab-bigdata-test"
> 4 data_frame = pd.read_gbq('SELECT * FROM RawVin.T20141201', project_id = projectid)
>
> C:\Users\fionazhao\Installed\Continuum\Miniconda\lib\site-packages\pandas\__init__.pyc
> in <module>()
> 45
> 46 # let init-time option registration happen
> ---> 47 import pandas.core.config_init
> 48
> 49 from pandas.core.api import *
>
> C:\Users\fionazhao\Installed\Continuum\Miniconda\lib\site-packages\pandas\core\config_init.py
> in <module>()
> 15 is_instance_factory, is_one_of_factory,
> 16 get_default_val)
> ---> 17 from pandas.core.format import detect_console_encoding
> 18
> 19
>
> C:\Users\fionazhao\Installed\Continuum\Miniconda\lib\site-packages\pandas\core\format.py
> in <module>()
> 7 from pandas.core.base import PandasObject
> 8 from pandas.core.common import adjoin, notnull
> ----> 9 from pandas.core.index import Index, MultiIndex, _ensure_index
> 10 from pandas import compat
> 11 from pandas.compat import(StringIO, lzip, range, map, zip, reduce, u,
>
> C:\Users\fionazhao\Installed\Continuum\Miniconda\lib\site-packages\pandas\core\index.py
> in <module>()
> 13 import pandas.algos as _algos
> 14 import pandas.index as _index
> ---> 15 from pandas.lib import Timestamp, Timedelta, is_datetime_array
> 16 from pandas.core.base import PandasObject, FrozenList, FrozenNDArray, IndexOpsMixin, _shared_docs
> 17 from pandas.util.decorators import (Appender, Substitution, cache_readonly,
>
> ImportError: cannot import name Timedelta
I found the solution myself. When installing pandas with Miniconda, make sure all Python processes have been stopped first; otherwise the installation gets messed up and produces errors like these.
I stopped all Python processes, reinstalled pandas with 'conda install -f pandas', and the errors were gone.