Error while running multiple supervisors in the Apache Storm UI - CentOS

We have ZooKeeper running on one machine, Nimbus on a second, and two supervisors (workers) running on two further machines.
ZooKeeper is running on Windows 7 and all the others are running on CentOS.
The problem is that when we open the Storm UI on the machine running Nimbus, it displays only a single supervisor (which randomly changes between the two supervisors when the page is refreshed).
How can we display both of them in the UI simultaneously?
# ZooKeeper IP = 10.135.155.133
# Nimbus IP = 10.135.158.22
# Supervisor 1 IP = 10.135.156.63
# Supervisor 2 IP = 10.135.156.162
Below is the zoo.cfg file of ZooKeeper (on the first machine):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=D:\\tmp\\zookeeper
clientPort=2181
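Since ZooKeeper is on Windows 7 and the Storm daemons are on CentOS, it is worth confirming first that the CentOS nodes can actually reach it; a minimal check, assuming nc (netcat) is installed on the nodes:
# ruok is a built-in ZooKeeper "four letter word"; a healthy server answers imok
echo ruok | nc 10.135.155.133 2181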
Below is the storm.yaml configuration of the machine running Nimbus (the second machine):
storm.zookeeper.servers:
- "10.135.155.133"
storm.local.dir: "/storm/apache-storm-1.1.0/lib/"
nimbus.host: "10.135.158.22"
Below is the storm.yaml configuration of the machine running supervisor 1 (the third machine):
storm.zookeeper.servers:
- "10.135.155.133"
supervisor.slots.ports:
- 6700
- 6701
- 6702
- 6703
storm.local.dir: "/storm/apache-storm-1.1.0/new"
nimbus.host: "10.135.158.22"
Below is the storm.yaml configuration of the machine running supervisor 2 (the fourth machine):
storm.zookeeper.servers:
- "10.135.155.133"
supervisor.slots.ports:
- 6700
- 6701
- 6702
- 6703
storm.local.dir:"/storm/apache-storm-1.1.0/new 2"
nimbus.host: "10.135.158.22"

Symptoms:
Some supervisor processes are missing from the Storm UI.
The list of supervisors in the Storm UI changes between refreshes.
Solutions:
Make sure the supervisor local dirs are independent (e.g., not sharing a local dir over NFS)
Try deleting the local dirs for the supervisors and restarting the daemons.
Supervisors create a unique id for themselves and store it locally. When that id is copied to other nodes, Storm gets confused.
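For the setup in the question, that clean-up could look roughly like this (a sketch, assuming Storm is installed under /storm/apache-storm-1.1.0 as the local dirs suggest; stop each supervisor daemon however you normally manage it before wiping):
# on supervisor 1 (10.135.156.63): stop the supervisor daemon, then clear its local dir
rm -rf /storm/apache-storm-1.1.0/new/*
# on supervisor 2 (10.135.156.162): stop the supervisor daemon, then clear its local dir
rm -rf "/storm/apache-storm-1.1.0/new 2"/*
# restart each supervisor so it generates a fresh id, e.g.
/storm/apache-storm-1.1.0/bin/storm supervisor &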

The problem is the unique supervisor ID. Look in your
apache-storm-1.1.0/conf/storm.yaml
parameter
storm.local.dir: "/var/lib/storm/data"
If you cloned a machine together with this folder, this can happen. Delete the folder, then stop and start the supervisor process, and a new ID will be generated:
sudo rm -r /var/lib/storm/data
If you run more than one supervisor on the same machine, make sure they use different folders.

Related

Can protractor be run from two different logins/users on same host?

I have a few CI machines to run Protractor jobs; as the test case count increased, the job completion time grew with it.
Now, instead of adding more machines, I am thinking of adding new users to the same VMs, but before that I want to confirm whether the Protractor process can be invoked from two different logins on the same host.
Also, would Chrome and Firefox work simultaneously on two different user accounts, or is this not supported?
To answer your question generically: yes, you can distribute your Protractor execution processes across multiple machines or multiple users on the same machine. But I have a better suggestion that follows the latest trends :)
Set up a Selenium Grid using Docker containers. You can have a combination like the one below:
Docker container 1: hosts the hub, e.g. the selenium/hub image
Docker container 2 (can be multiple): hosts a Chrome node, e.g. selenium/node-chrome
Docker container 3 (can be multiple): hosts a Firefox node, e.g. selenium/node-firefox
The Protractor process will run only on the master machine, and the actual execution will happen in the Docker containers. This set of containers can be located on an external machine, outside the one that hosts Jenkins.
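A minimal sketch of such a grid using the classic linked-container images (exact flags and image tags differ between docker-selenium releases):
docker run -d -p 4444:4444 --name selenium-hub selenium/hub
docker run -d --link selenium-hub:hub selenium/node-chrome
docker run -d --link selenium-hub:hub selenium/node-firefox
# then point Protractor at the grid in its config: seleniumAddress: 'http://<grid-host>:4444/wd/hub'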

Issue using same zookeeper for Kafka and Mesos

I am trying to set up Kafka and Spark with Mesos on our 8-node cluster as follows, but I am having issues launching/starting the Mesos agents using the ZooKeeper endpoint of the Mesos masters.
Install and set up ZooKeeper on 3 nodes (server00, server01, server02), through $KAFKA_HOME/config/zookeeper.properties.
Install Kafka brokers on all 8 nodes and point them to the 3 ZooKeepers by setting the following property in $KAFKA_HOME/config/server.properties:
zookeeper.connect=server00:2181,server01:2181,server02:2181
Install the Mesos master on 3 nodes (server00, server01, server02) and update /etc/mesos/zk with the following line:
zk://server00:2181,server01:2181,server02:2181/mesos
Install Mesos agents on all 8 nodes.
Edit the /etc/mesos/zk file on all the other servers to contain the following line:
zk://server00:2181,server01:2181,server02:2181/mesos
Start the Mesos master on all 3 master servers as below (I verified that all Mesos masters are running and reachable by opening http://server00:5050/#/, http://server01:5050/#/, http://server02:5050/#/):
sudo /usr/sbin/mesos-master --cluster=server_mesos_cluster --log_dir=/var/log/mesos --work_dir=/var/lib/mesos
Start the Mesos agent on all 8 servers.
Example of launching this on server00:
sudo /usr/sbin/mesos-slave --work_dir=/var/lib/mesos --master=zk://server00:2181,server01:2181,server02:2181/mesos --ip=9.1.69.150
But the above does not launch the agent.
The following command does, however, which makes me think that perhaps the Mesos masters are not getting registered with the ZooKeepers:
sudo /usr/sbin/mesos-slave --work_dir=/var/lib/mesos --master=server00:5050 --ip=9.1.69.150
Could anyone shed some light on whether:
my configuration is not right, or
I have to set up separate ZooKeepers for the Mesos cluster?
And how can I verify whether the Mesos masters are getting registered with ZooKeeper?
Once this setup is working, I intend to run Spark on all 8 nodes.
On Ubuntu, at least, /etc/mesos/zk and the other config files under /etc/mesos are only read by /usr/bin/mesos-init-wrapper. Thus your master isn't seeing your ZK config.
You'll either need to launch it with the init script (service mesos-master start), run the wrapper manually, or use the --zk option to mesos-master:
sudo /usr/sbin/mesos-master --cluster=server_mesos_cluster --log_dir=/var/log/mesos --work_dir=/var/lib/mesos --zk=zk://server00:2181,server01:2181,server02:2181/mesos
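To check whether the masters actually registered, you can also inspect the /mesos znode in ZooKeeper, for example with the shell script that ships with Kafka (a sketch; znode names vary slightly between Mesos versions):
$KAFKA_HOME/bin/zookeeper-shell.sh server00:2181
ls /mesos
# each registered master appears as an ephemeral znode such as json.info_0000000001;
# get /mesos/json.info_0000000001 prints that master's address and id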

Storm Topology on Raspberry Pi

I have a group of Raspberry Pis, one of which is a Pi 2 while the others are Pi 1s (the Pi 2 uses ARMv7, the others ARMv6). On the Pi 2 I run ZooKeeper, Nimbus and the UI (Storm 0.10.0), and on the others I run supervisors (1 worker per device).
When I start the supervisors I get an error:
Raspberry pi server vm is only supported on armv7+ vfp
I managed to bypass this error by setting -client instead of -server in the storm.py file. The problem begins when I submit a topology to Storm. Nimbus (which runs on the Pi 2) tries to assign the topology to the workers. The workers download the topology, but I again encounter the same error:
Error occurred during initialization of VM
Server VM is only supported on ARMv7+ VFP
I ran
grep server * -R
to find out whether the '-server' setting is used on the workers. I did not notice any crucial file that uses this setting (some logs contained the word 'server').
So my question is: how can I bypass the -server option when a topology is submitted to the workers?
You would need to patch the Storm source code and build Storm yourself. The -server flag is hardcoded here: https://github.com/apache/storm/blob/2b7a758396c3a0529524b293a9c773e974f70b56/storm-core/src/clj/backtype/storm/daemon/supervisor.clj#L1075
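A rough sketch of that workflow, assuming Maven, the v0.10.0 source tag, and that the only change needed is swapping the hardcoded -server flag linked above for -client (extra build flags, e.g. for skipping GPG signing, may be required in your environment):
git clone https://github.com/apache/storm.git
cd storm
git checkout v0.10.0
# edit storm-core/src/clj/backtype/storm/daemon/supervisor.clj and replace the
# hardcoded "-server" worker JVM option with "-client"
mvn clean install -DskipTests
cd storm-dist/binary
mvn package   # produces the patched binary distribution under target/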

Mesos cluster does not recover when physical hosts restart

I'm using Mesosphere on 3 hosts running Ubuntu 14.04, as follows:
one with the Mesos master
two with Mesos slaves
Everything works fine, but after restarting all the physical hosts, all scheduled jobs were lost. Is that normal? I expected ZooKeeper to store the current jobs, so that whenever the system needs a restart, all jobs would be rescheduled after the master boots.
Update:
I'm running Marathon and Mesos on the same node, and I start Marathon with the --zk flag.
With Marathon's --zk and --ha enabled, Marathon should store its state in ZK and recover it on restart, as long as Mesos allows it to re-register with the same framework ID.
However, you'll also need to enable the Mesos registry (even for a single master), to ensure that Mesos persists information about what frameworkIds are registered in the event of master failover. This can be accomplished by setting the --registry=replicated_log (default), --quorum=1 (since you only have 1 master), and --work_dir=/path/to/registry (where to store the state).
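Put together, the master command line might look roughly like this (a sketch; the ZooKeeper host is a placeholder for your own):
sudo /usr/sbin/mesos-master \
  --zk=zk://<zk-host>:2181/mesos \
  --quorum=1 \
  --registry=replicated_log \
  --work_dir=/var/lib/mesos \
  --log_dir=/var/log/mesos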
I solved the problem by following these installation instructions: How To Configure a Production-Ready Mesosphere Cluster on Ubuntu 14.04.
Although you found a solution, I'd like to explain this issue a bit more :)
From the official doc: http://mesos.apache.org/documentation/latest/slave-recovery/
Note that if the operating system on the slave is rebooted, all
executors and tasks running on the host are killed and are not
automatically restarted when the host comes back up.
So all frameworks on Mesos will be killed after a reboot. One way to restart the frameworks is to run them all on Marathon, which will manage the other frameworks and restart them as needed.
However, you then need to auto-restart Marathon when it's killed. In the DigitalOcean link you mentioned, Marathon is installed with a script in init.d, so it can be restarted after a reboot. Otherwise, if you installed Marathon from source, you can use a tool like supervisord to monitor Marathon.
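A minimal supervisord sketch for that last option (the Marathon binary path and ZooKeeper URLs are assumptions; adjust them to your install):
[program:marathon]
command=/usr/local/bin/marathon --master zk://<zk-host>:2181/mesos --zk zk://<zk-host>:2181/marathon
autostart=true
autorestart=true
stdout_logfile=/var/log/marathon.out.log
stderr_logfile=/var/log/marathon.err.log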

Supervisors in STORM

I have a question about Storm, and here it goes:
Can multiple supervisors run on a single node? Or can we run only one supervisor per machine?
Thanks.
In principle, there should be 1 supervisor daemon per physical machine. Why?
Answer: Nimbus receives heartbeats from the supervisor daemon and tries to restart it if the supervisor dies; if the restart attempt fails permanently, Nimbus will assign that work to another supervisor.
Imagine two supervisors going down at the same time because they are on the same physical machine: poor fault tolerance!
Running two supervisor daemons is also a waste of memory resources.
If you have machines with a lot of memory, simply increase the number of workers by adding more ports to supervisor.slots.ports in storm.yaml instead of adding more supervisors.
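For example, to go from four worker slots to six on one machine, just extend the port list in storm.yaml (the port numbers are arbitrary as long as they are free):
supervisor.slots.ports:
- 6700
- 6701
- 6702
- 6703
- 6704
- 6705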
Theoretically possible; practically you may not need to do it, unless you are doing a PoC/demo. I did this for one of the demos I gave by making multiple copies of Storm and changing the ports for one of the supervisors; you can do it by changing supervisor.slots.ports.
It is basically designed per node, so one node should have only one supervisor. This daemon manages the worker processes that you configured based on ports.
So there is no need for an extra supervisor daemon per node.
It is possible to run multiple supervisors on a single host. Have a look at this post on the storm-user mailing list:
Just copy multiple Storm, and change the storm.yaml to specify
different ports for each supervisor (supervisor.slots.ports)
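A minimal sketch of what differs between the two copies (paths and port numbers here are hypothetical):
# copy 1: /opt/storm-a/conf/storm.yaml
storm.local.dir: "/opt/storm-a/data"
supervisor.slots.ports:
- 6700
- 6701
# copy 2: /opt/storm-b/conf/storm.yaml
storm.local.dir: "/opt/storm-b/data"
supervisor.slots.ports:
- 6800
- 6801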
The supervisor is configured on a per-node basis. Running multiple supervisors on a single node does not make much sense. The sole purpose of the supervisor daemon is to start/stop the worker processes (each of these workers is responsible for running a subset of topologies). From the doc page:
The supervisor listens for work assigned to its machine and starts and stops worker processes as necessary based on what Nimbus has assigned to it.