NServiceBus worker endpoints - msmq

I'm currently evaluating the distributor in NSB and noticed that when I run the distributor and a couple of workers on my own machine, the queue name for each worker is appended with a GUID.
According to Udi, the master himself :), in this post: Distributor and worker end point queue in same machine
The reason is that NSB assumes you are running in a test setup.
Question:
But what happens if I run 4 workers on 1 separate machine?
Will the queue names on that machine again be appended with a GUID, or can the workers share the same queue because the distributor is on a remote machine?
The question is important because I expect to run multiple workers on 1 remote machine, and generating new queue names every time the machine is booted is not a good idea for maintenance purposes.
Kind regards

But what happens if I run 4 workers on 1 separate machine?
Why would you want to do that?
Each worker can be configured to run multiple worker threads, which is why it doesn't make sense to run multiple workers on a single machine.
I would increase the number of threads a single worker uses until the max throughput is reached on that machine, then scale out to another machine. So: one worker per box, multiple threads per worker.
See here for details on the NumberOfWorkerThreads configuration.
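As a rough illustration (the exact element and attribute names depend on your NServiceBus version; MsmqTransportConfig shown here is from the classic app.config-based era, and MyWorker is a placeholder queue name), the thread count is set in the worker endpoint's configuration:

```xml
<!-- app.config of the worker endpoint: increase NumberOfWorkerThreads
     until the box reaches its maximum throughput, then scale out -->
<MsmqTransportConfig
    InputQueue="MyWorker"
    ErrorQueue="error"
    NumberOfWorkerThreads="8"
    MaxRetries="5" />
```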

Related

Initial job has not accepted any resources; Error with spark in VMs

I have three Ubuntu VMs (clones) on my local machine which I wanted to use to make a simple cluster: one VM as a master and the other two as slaves. I can ssh to every VM from every other one successfully, I have the IPs of the two slaves in the conf/slaves file of the master, and the master's IP in the spark-env.sh of every VM. When I run
start-slave.sh spark://master-ip:7077
from the slaves, they appear in the Spark UI. But when I try to run things in parallel I always get the message about the resources. For testing code I use the Scala shell:
spark-shell --master spark://master-ip:7077 and sc.parallelize(1 until 10000).count.
Do you mean this warning: WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory?
This message will pop up any time an application is requesting more resources from the cluster than the cluster can currently provide.
Spark is only looking for two things: cores and RAM. Cores represent the number of open executor slots that your cluster provides for execution. RAM refers to the amount of free RAM required on any worker running your application.
Note that for both of these resources the maximum value is not your system's max; it is the max as set by your Spark configuration.
If you need to run multiple Spark apps simultaneously then you’ll need to adjust the amount of cores being used by each app.
If you are running multiple applications on the same node, you need to assign cores to each application so they can work in parallel: ResourceScheduling
If you use VMs (as in your situation), assign only one core to each VM when you first create it, or whatever fits your system's resource capacity. As it is now, Spark requests 4 cores for each of the 2 VMs = 8 cores, which you don't have.
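One way to cap what each application asks for (a sketch, assuming the standalone master; the values are examples, not recommendations) is via conf/spark-defaults.conf, or the equivalent --conf flags on spark-shell/spark-submit:

```
# conf/spark-defaults.conf -- cap this application's resource requests
spark.master           spark://master-ip:7077
# at most 2 executor cores in total across the cluster
spark.cores.max        2
# must fit in the free RAM of each worker
spark.executor.memory  512m
```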
Here is a tutorial I found that could help you: Install Spark on Ubuntu: Standalone Cluster Mode
Further Reading: common-spark-troubleshooting

Queries regarding celery scalability

I have a few questions regarding celery. Please help me with them.
Do we need to put the project code in every celery worker? If yes, and I am increasing the number of workers while also updating my code, what is the best way to update the code in all the worker instances (without manually pushing code to every instance every time)?
Does using -Ofair as a celery worker argument disable prefetching in workers, even if I have set PREFETCH_LIMIT=8 or so?
IMPORTANT: Does the rabbitmq broker assign tasks to the workers, or do workers pull tasks from the broker?
Does it make sense to have more than one celery worker (with as many subprocesses as the number of cores) in a system? I see some people run multiple celery workers in a single system.
To add to the previous question, what's the performance difference between the two scenarios: a single worker (8 cores) in a system, or two workers (with concurrency 4)?
Please answer my questions. Thanks in advance.
Do we need to put the project code in every celery worker? If yes, and I am increasing the number of workers while also updating my code, what is the best way to update the code in all the worker instances (without manually pushing code to every instance every time)?
Yes. A celery worker runs your code, and so naturally it needs access to that code. How you make the code accessible, though, is entirely up to you. Some approaches include:
Code updates and restarting of workers as part of deployment
If you run your celery workers in Kubernetes pods, this comes down to building a new Docker image and upgrading your workers to the new image. Using rolling updates, this can be done with zero downtime.
Scheduled synchronization from a repository and worker restarts by broadcast
If you run your celery workers in a more traditional environment, or for some reason you don't want to rebuild whole images, you can use some central file system available to all workers, where you update the files, e.g. by syncing a git repository on a schedule or by some trigger. It is important that you restart all celery workers so they reload the code. This can be done by remote control.
Dynamic loading of code for every task
For example, in omega|ml we provide lambda-style serverless execution of arbitrary Python scripts which are dynamically loaded into the worker process.
To avoid module loading and dependency issues it is important to keep max-tasks-per-child=1 and use the prefork pool. While this adds some overhead, it is a tradeoff that we find easy to manage (in particular, we run machine learning tasks, so the small overhead of loading scripts and restarting workers after every task is not an issue).
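For reference, a worker started this way would look something like the following (a sketch; proj is a placeholder application name):

```
celery -A proj worker --pool prefork --concurrency 4 --max-tasks-per-child 1
```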
Does using -Ofair as a celery worker argument disable prefetching in workers, even if I have set PREFETCH_LIMIT=8 or so?
-O fair stops workers from prefetching tasks unless there is an idle process. However, there is a quirk with rate limits which I recently stumbled upon. In practice I have not experienced a problem with either prefetching or rate limiting; however, as with any distributed system, it pays off to think about the effects of the asynchronous nature of execution (this is not particular to Celery but applies to all such systems).
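For example, in recent Celery versions the fair scheduling strategy and the prefetch multiplier can both be set on the command line (proj is a placeholder; --prefetch-multiplier 1 keeps each process to at most one reserved task):

```
celery -A proj worker -O fair --prefetch-multiplier 1
```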
IMPORTANT: Does the rabbitmq broker assign tasks to the workers, or do workers pull tasks from the broker?
RabbitMQ does not know about the workers (nor do any of the other brokers supported by celery); they just maintain a queue of messages. That is, it is the workers that pull tasks from the broker.
A concern that may come up with this is: what if my worker crashes while executing tasks? There are several aspects to this. There is a distinction between a worker and the worker processes. The worker is the single process started to consume tasks from the broker; it does not execute any of the task code itself. The task code is executed by one of the worker processes. When using the prefork pool (which is the default), a failed worker process is simply restarted without affecting the worker as a whole or the other worker processes.
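The pull model can be illustrated with a stdlib-only analogy (this is not Celery's implementation, just the shape of the idea): the broker is a passive queue of messages, and each worker grabs a task whenever it has a free slot.

```python
# Illustrative sketch of the pull model: the "broker" is a passive queue;
# workers pull tasks when ready, the broker never pushes or assigns.
import queue
import threading

broker = queue.Queue()   # stands in for the RabbitMQ queue
results = []
lock = threading.Lock()

def worker(name):
    while True:
        try:
            task = broker.get(timeout=0.2)  # worker pulls; broker is passive
        except queue.Empty:
            return  # no more work
        with lock:
            results.append((name, task * 2))  # "execute" the task
        broker.task_done()

# enqueue 10 tasks, then let 2 workers drain the queue concurrently
for i in range(10):
    broker.put(i)

threads = [threading.Thread(target=worker, args=(f"w{n}",)) for n in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(r for _, r in results))  # every task handled exactly once
```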
Does it make sense to have more than one celery worker (with as many subprocesses as the number of cores) in a system? I see some people run multiple celery workers in a single system.
That depends on the scale and type of workload you need to run. In general, CPU-bound tasks should be run on workers with a concurrency setting that doesn't exceed the number of cores. If you need to process more of these tasks than you have cores, run multiple workers to scale out. Note that if your CPU-bound task uses more than one core at a time (as is often the case in machine learning/numerical processing workloads), it is the total number of cores used per task, not the total number of tasks run concurrently, that should inform your decision.
To add to the previous question, what's the performance difference between the two scenarios: a single worker (8 cores) in a system, or two workers (with concurrency 4)?
Hard to say in general; it is best to run some tests. For example, if 4 concurrently running tasks use all the memory on a single node, adding another worker will not help. If, however, you have two queues, e.g. with different rates of arrival (say one for low-frequency but high-priority execution, another for high-frequency but low-priority), both of which can be run concurrently on the same node without concern for CPU or memory, a single node will do.

How to configure various supervisors for a nimbus in storm?

I have a nimbus server and 3 other supervisor servers, and I have 11 storm topologies running. But all of them are running on the Nimbus node only. How do I configure the other supervisors so that the topologies get distributed among them? Which configuration files do I have to change?
It seems that there is something funny going on. For the two hosts corona-stage-storm-supervisor-01 and corona-stage-storm-supervisor-02 there are two supervisors each. However, a host should have only one supervisor running. I would assume that this "confuses" Nimbus and it uses the remaining host (corona-storm-nimbus-01), which only has a single supervisor running.
See Storm documentation for more detail (and talk to your admin who did the setup):
https://storm.apache.org/releases/1.0.0/Setting-up-a-Storm-cluster.html
About the number of workers: this parameter defines how many worker JVMs are used for a topology (the supervisor JVM starts the worker JVMs that do the actual work; supervisors are basically a "host-local master" for coordination). You can set it in your job configuration via conf.setNumWorkers(int). If you want a topology to spread out over multiple hosts, you need to increase this parameter. Nevertheless, for multiple topologies as in your case, a value of one might also be OK: different topologies should run on different hosts, independently of this parameter.
See Storm documentation for more details:
https://storm.apache.org/releases/1.0.0/Understanding-the-parallelism-of-a-Storm-topology.html

Supervisors in STORM

I have a doubt about storm, and here it goes:
Can multiple supervisors run on a single node, or can we run only one supervisor on one machine?
Thanks.
In principle, there should be 1 supervisor daemon per physical machine. Why?
Answer: Nimbus receives the heartbeat of each supervisor daemon and tries to restart it in case the supervisor dies; if the restart attempt fails permanently, Nimbus will assign that work to another supervisor.
Imagine two supervisors going down at the same time because they are on the same physical machine: poor fault tolerance!
Running two supervisor daemons would also be a waste of memory resources.
If you have very high memory machines, simply increase the number of workers by adding more ports to supervisor.slots.ports in storm.yaml instead of adding another supervisor.
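Concretely, the slots a supervisor offers are just the list of ports in storm.yaml (the ports shown are Storm's conventional defaults):

```
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
```

Each port corresponds to one worker JVM that Nimbus can assign work to; adding a port adds a slot.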
Theoretically possible; practically you may not need to do it, unless you are doing a PoC/demo. I did this for one of the demos I gave, by making multiple copies of storm and changing the ports for one of the supervisors: you can do it by changing supervisor.slots.ports.
It is basically designed per node, so one node should have only one supervisor. This daemon manages the worker processes that you configure based on ports.
So there is no need for an extra supervisor daemon per node.
It is possible to run multiple supervisors on a single host. Have a look at this post in the storm-user mailing list:
Just copy Storm multiple times, and change storm.yaml to specify different ports for each supervisor (supervisor.slots.ports).
Supervisors are configured on a per-node basis. Running multiple supervisors on a single node does not make much sense. The sole purpose of the supervisor daemon is to start/stop the worker processes (each of these workers is responsible for running a subset of topologies). From the doc page:
The supervisor listens for work assigned to its machine and starts and stops worker processes as necessary based on what Nimbus has assigned to it.

Load distribution to instances of a perl script running on each node of a cluster

I have a perl script (call it worker) installed on each node/machine (4 total) of a cluster (each running RHEL). The script itself is configured as a RedHat Cluster service (which means the RH cluster manager ensures that exactly one instance of this script is running as long as at least one node in the cluster is up).
I have X amount of work to be done once a day, which this script does. So far X was small enough that only one instance of this script was sufficient. But now the load is going to increase, and along with high availability (already implemented using RHCS), I also need load distribution.
Question is how do I do that?
Of course, I have a way to split the work into n parts of size X/n each. Options I had in mind:
Create a new load distributor, which splits the work into jobs of X/n each, AND one of the following:
Create a named pipe on the network file system (which is mounted and visible on all nodes) and post all jobs to the pipe. Make each worker script on each node read (atomically) from the pipe and do the work. OR
Make each worker script on each node listen on a TCP socket, and have the load distributor send jobs to each socket in a round-robin (or some other) fashion.
The theoretical problem with #1 is that we've observed some nasty latency problems with NFS, and I'm not even sure if NFS would support IPC via named pipes across machines.
The theoretical problem with #2 is that I would have to implement some monitors to ensure that each worker is running and listening, and being a Perl noob, I'm not sure how easy that is.
I personally prefer the load distributor creating a pool and workers pulling from it, rather than the load distributor tracking each worker and pushing work to each. Any other options?
I'm open to new ideas as well. :)
Thanks!
-- edit --
Using Perl 5.8.8; to be precise: This is perl, v5.8.8 built for x86_64-linux-thread-multi
If you want to keep it simple, use a database to store the jobs, and have each worker lock the table, get the jobs it needs, then unlock and let the next worker do its thing. This isn't the most scalable solution since you'll have lock contention, but with just 4 nodes it should be fine.
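The lock-and-claim idea can be sketched as follows, with SQLite standing in for the shared database (illustrative only: a real setup would use a networked database server reachable from all 4 nodes, and the jobs table schema here is invented for the example):

```python
# Sketch of the shared-table approach: each worker atomically claims the
# next unclaimed job inside a transaction, so no two workers get the same job.
import sqlite3

def claim_job(conn, worker_id):
    """Atomically claim one pending job; return its id, or None if none left."""
    with conn:  # wraps the statements in a transaction (commit/rollback)
        row = conn.execute(
            "SELECT id FROM jobs WHERE claimed_by IS NULL LIMIT 1"
        ).fetchone()
        if row is None:
            return None
        conn.execute(
            "UPDATE jobs SET claimed_by = ? WHERE id = ? AND claimed_by IS NULL",
            (worker_id, row[0]),
        )
        return row[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, claimed_by TEXT)")
conn.executemany("INSERT INTO jobs (id) VALUES (?)", [(i,) for i in range(1, 9)])

# each of the 4 worker nodes claims a job from the shared pool
claims = {}
for worker in ("node1", "node2", "node3", "node4"):
    claims[worker] = claim_job(conn, worker)

print(claims)  # each node ends up with a distinct job
```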
But if you start going down this road, it might make sense to look at a dedicated job-queue system like Gearman.