How to use multiple User classes in the locustfile.py file when running more than one worker - locust

I have an issue when running Locust in distributed mode when the locustfile.py has two User classes defined. If I run this with one worker, I can see tasks from both User classes getting executed. When I invoke two workers (with 5 users), one of the workers only runs tasks from one of the User classes in the locustfile.py and the other worker runs tasks from the other User class.
I expected that both workers would process tasks from both of the User classes defined in the locustfile.py.
If I kill one of the workers (Ctrl+C) then the remaining worker starts up the rest of the users and everything is OK.
When you have multiple User classes defined in the locustfile, is there any way to fix this issue?
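For reference, here is a minimal sketch of the kind of locustfile described above, with two User classes; the class names, endpoints, and wait times are hypothetical placeholders:
# locustfile.py -- hypothetical two-User-class setup as described above
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    wait_time = between(1, 3)

    @task
    def get_items(self):
        self.client.get("/api/items")

class WebUser(HttpUser):
    wait_time = between(1, 3)

    @task
    def load_homepage(self):
        self.client.get("/")
Distributed runs are started with locust --master on one machine and locust --worker --master-host=&lt;master-ip&gt; on each worker machine.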

Related

DASK with local files on WORKER systems

I am working with multiple systems as workers.
Each worker system has part of the data stored locally, and I want the computation done by each worker on its respective files only.
I have tried using:
distributed.scheduler.decide_worker()
send_task_to_worker(worker, key)
but I could not automate assigning the task for each file.
Also, is there any way I can access the local files of a worker? Using the TCP address, I only have access to a temp folder created on the worker for Dask.
You can target computations to run on certain workers using the workers= keyword to the various methods on the client. See http://distributed.readthedocs.io/en/latest/locality.html#user-control for more information.
You might run a function on each of your workers that tells you which files are present:
>>> client.run(os.listdir, my_directory)
{'192.168.0.1:52523': ['myfile1.dat', 'myfile2.dat'],
'192.168.0.2:4244': ['myfile3.dat'],
'192.168.0.3:5515': ['myfile4.dat', 'myfile5.dat']}
You might then submit computations to run on those workers specifically.
future = client.submit(load, 'myfile1.dat', workers='192.168.0.1:52523')
If you are using dask.delayed you can also pass workers= to the persist method. See http://distributed.readthedocs.io/en/latest/locality.html#user-control for more information.
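To illustrate, here is a hedged sketch of pinning delayed work to the worker that holds a file locally; the scheduler address, file names, and the load function are hypothetical placeholders:
# Hedged sketch: restrict delayed tasks to the worker that has the file on disk.
import dask
from dask.distributed import Client

client = Client('192.168.0.100:8786')  # hypothetical scheduler address

@dask.delayed
def load(path):
    # Runs on whichever worker the task is placed on, so restricting the
    # placement lets it read that machine's local disk.
    with open(path) as f:
        return f.read()

data = [load('myfile1.dat'), load('myfile2.dat')]

# The worker address comes from the client.run(os.listdir, ...) survey shown above.
persisted = client.persist(data, workers='192.168.0.1:52523')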

Run celery periodic tasks on one machine only

I am working on a Django project where I am using Celery. I have two big modules in the project named app1 and app2. I have created two Celery apps for that, which are running on two separate machines. In app1 and app2 there are different tasks which I want to run on different machines, and that is working fine. But my problem is that I have some periodic tasks. I have defined a queue named periodic_tasks for them. I want to run these periodic tasks on a separate third machine. In other words, on the third machine I want to run only the periodic tasks, and these periodic tasks shouldn't be executed on the other two machines. Is this possible using Celery?
On your third machine, make sure to start up celery with the -Q or --queues option with periodic_tasks. On app1 and app2, start up celery without the periodic_tasks queue. You can read more about queue handling here: http://docs.celeryproject.org/en/latest/reference/celery.bin.worker.html#cmdoption-celery-worker-Q
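As a rough sketch, assuming hypothetical project, queue, and task names, the routing and schedule could look like this (the worker commands for each machine are shown in the comments):
# celeryconfig.py -- hedged sketch; 'proj', queue names, and task names are hypothetical
task_routes = {
    'app1.tasks.periodic_*': {'queue': 'periodic_tasks'},
}

beat_schedule = {
    'hourly-cleanup': {
        'task': 'app1.tasks.periodic_cleanup',
        'schedule': 3600.0,  # run every hour
        'options': {'queue': 'periodic_tasks'},
    },
}

# Machine 3 (periodic tasks only):
#   celery -A proj worker -Q periodic_tasks
#   celery -A proj beat
# Machines 1 and 2 (never consume the periodic_tasks queue):
#   celery -A proj worker -Q app1_queue
#   celery -A proj worker -Q app2_queue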

PowerShell session/environment isolation - Jobs sharing same context?

I'm testing a workflow runbook that utilizes Add-Type to add some custom C# code.
All of a sudden I started getting 'type already exists' errors on subsequent test jobs, as if a new PSSession is not being created.
In other words, it looks like new jobs are sharing the same execution context. I only get this locally if I try to run the same command twice per PS instance.
The type in question is a static class with some Extension methods. Since it also happens to be the first type declared in the source block, I don't doubt other non-static types would throw errors as well.
I've executed this a handful of times already, so I fully expect that 'eventually' this will stop happening, but I can't seem to force it, and I have no idea what I could've done to trip it into this situation, either.
Seeing evidence of shared execution contexts across jobs like this - even (especially?) if only temporal - makes me wonder if some or all of the general execution inconsistencies we've seen in the past when making & deploying changes & performing subsequent tests soon-after are related to this.
I'm tempted to think that this is simply a part of the difference between a Test Job and a 'real' one, but that raises questions about the validity of the Test jobs themselves WRT mimicking Published Jobs.
Are all Azure Automation Jobs supposed to execute in Isolation? Can this be controlled/exploited by a developer?
Each automation account has its own isolated sandboxes where its jobs run, and those sandboxes are distributed among a number of worker machines. For test jobs, since the [make code change, retest] loop is very common, Automation tries to improve job start time by reusing the sandbox used for previous test jobs of the same runbook, if that sandbox has not been cleaned up yet; this way a sandbox does not have to be spun up for each unique test job (sandbox creation is one reason for a longer-than-desired job start time). Due to this behavior, if you execute test jobs of the same runbook within a short amount of time, you will get the behavior you're seeing above.
However, even for production jobs, jobs of the same automation account (across runbooks) can share the same sandboxes. We randomly distribute jobs across our worker machines, so it's possible that job A is queued for execution and is placed on worker W, then 5 minutes later job B is queued for execution and is placed on worker W as well. If job A and job B are of the same automation account and have the same "demands" in terms of modules / module versions, they will be placed in the same sandbox, if job A's sandbox is still around. "Module / module version demands" does not mean the modules used by the runbook, but the modules / latest module versions that existed in the automation account at the time the job was started / the runbook was scheduled (for jobs started via schedule) / the runbook was assigned to a webhook (for jobs started via webhook).
In terms of resolving your specific problem, you could surround Add-Type with a try/catch statement, or maybe use Add-Type -IgnoreWarnings.

Load and use a Service Worker in Karma test

We want to write a Service Worker that performs source code transformation on the loaded files. In order to test this functionality, we use Karma.
Our tests import source files, on which the source code transformation is performed. The tests only succeed if the Service Worker performs the transformation and fail when the Service Worker is not active.
Locally, we can start Karma with singleRun: false and watch for changed files to restart the tests. However, Service Workers are not active for the page that originally loaded them. Therefore, every test case succeeds but the first one.
However, for continuous integration we need a single-run mode. So our Service Worker is not active during the test run, and the tests fail accordingly.
Also, two consecutive runs do not solve this issue, as Karma restarts the used browser (so we lose the Service Worker).
So the question is: how can we make the Service Worker available in the test run?
E.g., by preserving the browser instance used by Karma.
Calling self.clients.claim() within your service worker's activate handler signals to the browser that you'd like your service worker to take control on the initial page load in which the service worker is first registered. You can see an example of this in action in Service Worker Sample: Immediate Control.
I would recommend that in the JavaScript of your controlled page, you wait for the navigator.serviceWorker.ready promise to resolve before running your test code. Once that promise does resolve, you'll know that there's an active service worker controlling your page. The test for the <platinum-sw-register> Polymer element uses this technique.

Execute external script on server with multiple play 2 framework instances

I have a Linux server with three Play 2 Framework instances on it, and I would like to regularly execute an external Scala script that has access to the full application environment (models) and that is executed only once at a time.
I would like to call this script from crontab, but I cannot find any documentation on how to do it. I know that we can schedule asynchronous tasks from the Global object, but I want the script executed only once across the three Play instances.
Basically I would like to do the same kind of thing as Ruby on Rails rake tasks, for those who know them.
Create a regular action for this task which will be accessible via HTTP; then you can use e.g. curl in a Unix crontab to call that action, and it will hit the first available instance.
Another possibility is using the Global object to schedule the task with Akka support. In this case, to make sure that only one instance will schedule the task, you need to determine somehow which one it should be. If you are starting all 3 instances with a specified port (always the same per instance), you can read http.port to allow or skip the execution.
Finally, you can use a database to inform the other instances that the task is already being executed: all 3 instances try to run the Akka-scheduled task, but before executing it they check whether it still has a TODO flag. If it does, the instance sets the TODO flag to false and continues execution; otherwise it just skips execution this time.
You can also use the filesystem for a similar approach: at the beginning of the execution, create a flag file to inform the other instances that they can skip the task this time.