intermittent "No result backend" exception - celery

Given the two very simple files:
Tasks.py
from celery import Celery

app = Celery('tasks', backend='redis://localhost', broker='pyamqp://')

@app.task
def add(x, y):
    return x + y
Webapp.py
from celery import Celery
from celery.execute import send_task
#random_name = Celery('tasks', backend='redis://localhost', broker='pyamqp://')
r1 = send_task('tasks.add',(1,1))
print(r1.get())
If I run the code as is, I'm getting the error
"NotImplementedError: No result backend is configured.".
If I just remove the # (before random_name = ...), everything is working fine (the program prints 2).
Why?

I got an official answer here:
https://github.com/celery/celery/issues/6744
Basically, follow https://docs.celeryproject.org/en/stable/userguide/configuration.html: you need a celeryconfig.py file on your PYTHONPATH so that Celery can configure the app correctly.
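As a minimal sketch (the setting names follow the configuration guide and are not taken from the issue), the default Celery app's loader looks for a module named celeryconfig on the PYTHONPATH, so a file like this should let send_task find the result backend even when Webapp.py never builds its own configured app:
#celeryconfig.py
broker_url = 'pyamqp://'
result_backend = 'redis://localhost'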

Related

how to pytest an app that can use ipython embed as arg parameter?

I have a Python application with a "-y" option that ends its procedure in an IPython terminal, with all created objects ready for interactive manipulation.
I'm trying to figure out how to design a pytest test that would let me, somehow, interact with this terminal, check that the objects exist in the Python session, exit, and then capture the results for an assert (I know how to use capsys, for example).
During my attempts (all failed so far) I got a suggestion to use the pytest -s option, which is obviously not what I need.
So I have this example:
go_to_python.py
import argparse
import random

parser = argparse.ArgumentParser()
parser.add_argument(
    "-y",
    "--ipython",
    action="store_true",
    dest="ipython",
    help="start iPython interpreter")
args = parser.parse_args()

if __name__ == "__main__":
    randomlist = []
    for i in range(0, 5):
        n = random.randint(1, 30)
        randomlist.append(n)
    if args.ipython:
        import IPython
        IPython.embed(colors="neutral")
How could I create a test that asserts that randomlist exists inside the IPython session?
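One possible sketch (this is not from the thread; it assumes pexpect is acceptable as a test dependency and that "python" on PATH runs the script): spawn the program in a pseudo-terminal, wait for the IPython prompt, and query the variable from inside the embedded session.
#test_go_to_python.py (hypothetical test file)
import pexpect

def test_randomlist_exists_in_ipython_session():
    # Spawn the script under a pty so IPython brings up its interactive prompt.
    child = pexpect.spawn("python go_to_python.py -y", encoding="utf-8", timeout=30)
    child.expect(r"In \[")                          # first IPython prompt
    child.sendline('print("LEN", len(randomlist))')
    child.expect("LEN 5")                           # randomlist holds 5 random ints
    child.sendline("exit")
    child.expect(pexpect.EOF)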

Is there a way to start mitmproxy v.7.0.2 programmatically in the background?

ProxyConfig and ProxyServer have been removed since version 7.0.0, and the code below isn't working.
from mitmproxy.options import Options
from mitmproxy.proxy.config import ProxyConfig
from mitmproxy.proxy.server import ProxyServer
from mitmproxy.tools.dump import DumpMaster
import threading
import asyncio
import time

class Addon(object):
    def __init__(self):
        self.num = 1

    def request(self, flow):
        flow.request.headers["count"] = str(self.num)

    def response(self, flow):
        self.num = self.num + 1
        flow.response.headers["count"] = str(self.num)
        print(self.num)

# see source mitmproxy/master.py for details
def loop_in_thread(loop, m):
    asyncio.set_event_loop(loop)  # This is the key.
    m.run_loop(loop.run_forever)

if __name__ == "__main__":
    options = Options(listen_host='0.0.0.0', listen_port=8080, http2=True)
    m = DumpMaster(options, with_termlog=False, with_dumper=False)
    config = ProxyConfig(options)
    m.server = ProxyServer(config)
    m.addons.add(Addon())

    # run mitmproxy in the background, especially when integrated with other servers
    loop = asyncio.get_event_loop()
    t = threading.Thread(target=loop_in_thread, args=(loop, m))
    t.start()

    # Other servers, such as a web server, might be started then.
    time.sleep(20)
    print('going to shutdown mitmproxy')
    m.shutdown()
(The code above is from BigSully's gist.)
You can put your Addon class into your_script.py and then run mitmdump -s your_script.py. mitmdump comes without the console interface and can run in the background.
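As a minimal sketch (not part of this answer; the counting logic is lifted from the question's Addon), your_script.py can expose the addon through the documented module-level addons list:
#your_script.py
class Addon:
    def __init__(self):
        self.num = 1

    def request(self, flow):
        flow.request.headers["count"] = str(self.num)

    def response(self, flow):
        self.num = self.num + 1
        flow.response.headers["count"] = str(self.num)

addons = [Addon()]
It can then be started in the background with something like mitmdump -s your_script.py (under nohup, a service manager, or a subprocess, as preferred).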
We (mitmproxy devs) officially don't support manual instantiation from Python anymore because that creates a massive amount of support burden for us. If you have some Python experience you can probably find your way around.
What if my addon has additional dependencies?
Approach 1: pip install mitmproxy is still perfectly supported and gets you the same functionality as the standalone binaries. Bonus tip: You can run venv/bin/mitmproxy or venv/Scripts/mitmproxy.exe to invoke mitmproxy in your virtualenv without having your virtualenv activated.
Approach 2: You can install mitmproxy with pipx and then run pipx inject mitmproxy <your dependency name>. See https://docs.mitmproxy.org/stable/overview-installation/#installation-from-the-python-package-index-pypi for details.
How can I debug mitmproxy itself?
If you are debugging from the command line (be it print statements or pdb), the easiest approach is to run mitmdump instead of mitmproxy, which provides the same functionality minus the console interface. Alternatively, you can use PyCharm's remote debug functionality, which also works while the console interface is active (https://github.com/mitmproxy/mitmproxy/blob/main/examples/contrib/remote-debug.py).
The example below should work fine with mitmproxy v7:
from mitmproxy.tools import main
from mitmproxy.tools.dump import DumpMaster
options = main.options.Options(listen_host='0.0.0.0', listen_port=8080)
m = DumpMaster(options=options)
# the rest is the same as in previous versions

from mitmproxy.addons.proxyserver import Proxyserver
from mitmproxy.options import Options
from mitmproxy.tools.dump import DumpMaster

options = Options(listen_host='127.0.0.1', listen_port=8080, http2=True)
m = DumpMaster(options, with_termlog=True, with_dumper=False)
m.server = Proxyserver()
m.addons.add(
    # addons here
)
m.run()
I think that should do it.

How to take screenshots in AWS Device farm for a run using appium python?

Even after a successful execution of my tests in Device Farm, I get an empty screenshots report. I have kept my code as simple as below:
from appium import webdriver
import time
import unittest
import os

class MyAndroidTest(unittest.TestCase):
    def setUp(self):
        caps = {}
        self.driver = webdriver.Remote("http://127.0.0.1:4723/wd/hub", caps)

    def test1(self):
        self.driver.get('http://docs.aws.amazon.com/devicefarm/latest/developerguide/welcome.html')
        time.sleep(5)
        screenshot_folder = os.getenv('SCREENSHOT_PATH', '/tmp')
        self.driver.save_screenshot(screenshot_folder + 'screen1.png')
        time.sleep(5)

    def tearDown(self):
        self.driver.quit()

if __name__ == '__main__':
    suite = unittest.TestLoader().loadTestsFromTestCase(MyAndroidTest)
    unittest.TextTestRunner(verbosity=2).run(suite)
I tested on a single device pool.
How can I make this work?
TIA.
You are missing a slash (/) before the filename (i.e., screen1.png). The save_screenshot line should be as below:
self.driver.save_screenshot(screenshot_folder + '/screen1.png')
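A variant worth noting (my addition, not part of the original answer): building the path with os.path.join sidesteps the missing-separator problem entirely.
import os

def save_screenshot(driver, name='screen1.png'):
    # os.path.join inserts the separator itself, so it does not matter
    # whether SCREENSHOT_PATH ends with a slash.
    folder = os.getenv('SCREENSHOT_PATH', '/tmp')
    driver.save_screenshot(os.path.join(folder, name))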
Though I'm not sure exactly how to write this to a file in Device Farm, here are the Appium docs for the screenshot endpoint and a Python example.
https://github.com/appium/appium/blob/master/docs/en/commands/session/screenshot.md
It returns a base64-encoded string, which we would then just need to save somewhere, such as the Appium screenshot directory the other answers mentioned. Otherwise, we could also save it in the /tmp directory and then export it using the custom artifacts feature.
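A rough sketch of that idea (my addition, not from the answer; the path is an assumption), using get_screenshot_as_base64() from the underlying Selenium client:
import base64

def save_screenshot_from_base64(driver, path='/tmp/screen1.png'):
    # Decode the base64-encoded PNG returned by the screenshot endpoint
    # and write it where Device Farm can export it, e.g. /tmp.
    png_bytes = base64.b64decode(driver.get_screenshot_as_base64())
    with open(path, 'wb') as f:
        f.write(png_bytes)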
Let me know if that link helps.
James

What is considered defaults for celery's app.conf.humanize(with_defaults=False)?

I'm trying to print out Celery's configuration using app.conf.humanize(with_defaults=False), following the example in the user guide. But I always get an empty string when using with_defaults=False. I know the configuration changes are in effect because I can see them when using .humanize(with_defaults=True) instead.
I'm guessing that loading the configuration with app.config_from_object('myconfig') registers the settings as "defaults", so is there a way to load the config from the myconfig module so that it is not treated as a default?
This is my source code:
#myconfig.py
worker_redirect_stdouts_level='INFO'
imports = ('tasks',)
and
#tasks.py
from celery import Celery
app = Celery()
app.config_from_object('myconfig')
print "config: %s" % app.conf.humanize(with_defaults=False)
@app.task
def debug(*args, **kwargs):
    print "debug task args : %r" % (args,)
    print "debug task kwargs: %r" % (kwargs,)
I start Celery using env PYTHONPATH=. celery worker --loglevel=INFO and it prints just config: with nothing after it (if I change to with_defaults=True I get the expected full output).
The configuration loaded with config_from_object() or config_from_envvar() is not considered defaults.
The behaviour observed was due to a bug, fixed by this commit in response to my bug report, so future versions of Celery will work as expected.
from celery import Celery

app = Celery()
app.config_from_object('myconfig')
app.conf.humanize()  # returns only settings directly set by 'myconfig', omitting defaults
where myconfig is a python module in the PYTHONPATH:
#myconfig.py
worker_redirect_stdouts_level='DEBUG'

How to load objects in memory and share across different executions of Celery worker?

I have set up Celery + RabbitMQ on a 3-machine cluster. I have also created a task which generates a regular expression based on data from a file and uses that information to parse text. However, I would like the file to be read only once per worker spawn, not on every execution of a task.
from celery import Celery
import re

celery = Celery('tasks', broker='amqp://localhost//')

@celery.task
def add(x, y):
    return x + y

def get_regular_expression():
    with open("text") as fp:
        data = fp.readlines()
    str_re = "|".join([x.split()[2] for x in data])
    return str_re

@celery.task
def analyse_json(tw):
    str_re = get_regular_expression()
    re.match(str_re, tw.text)
In the above code, I would like to open the file and read the output into the string only once per worker, and then the task analyse_json should just use the string.
Any help will be appreciated,
thanks,
Amit
Put the call to get_regular_expression at the module level:
str_re = get_regular_expression()

@celery.task
def analyse_json(tw):
    re.match(str_re, tw.text)
It will only be called once, when the module is first imported.
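Put together (merging the question's module with the fix for convenience; this consolidated version is my sketch, not quoted from the answer), the tasks module would read the file once per worker process, at import time:
import re
from celery import Celery

celery = Celery('tasks', broker='amqp://localhost//')

def get_regular_expression():
    # Read the pattern source file once and build the alternation.
    with open("text") as fp:
        data = fp.readlines()
    return "|".join(x.split()[2] for x in data)

str_re = get_regular_expression()  # runs once, when the worker imports the module

@celery.task
def analyse_json(tw):
    return re.match(str_re, tw.text)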
Additionally, if you must have only one instance of your worker running at a time (for example, with CUDA), you have to use the -P solo option:
celery worker --pool solo
Works with celery 4.4.2.