JMeter Master-Slave configuration is not working in CentOS

I set up a master-slave load-testing environment using JMeter. I am using three CentOS machines with the following IPs:
xxx.xxx.xxx.1 (Master)
xxx.xxx.xxx.2 (Slave1)
xxx.xxx.xxx.3 (Slave2)
Here are the steps I followed:
1) Added the following to the slaves' jmeter.properties file:
remote_hosts=xxx.xxx.xxx.1
2) Added the following to the master's jmeter-server file:
#RMI_HOST_DEF=-Djava.rmi.server.hostname=xxx.xxx.xxx.2
Then I execute the following command from the /apache-jmeter-2.13/bin folder of the xxx.xxx.xxx.2 slave machine (I don't have root access, only sudo):
sudo ./jmeter-server
I'm getting this error:
./jmeter-server: line 32: ./jmeter: Permission denied
Is my master-slave setup correct? Am I doing something wrong here?
Do I need to do anything else to set up master-slave?

Add the following to the client (master) jmeter.properties file:
remote_hosts=xxx.xxx.xxx.2,xxx.xxx.xxx.3
Add the following to the jmeter-server script on each slave machine:
RMI_HOST_DEF=-Djava.rmi.server.hostname=xxx.xxx.xxx.2 (on Slave1)
RMI_HOST_DEF=-Djava.rmi.server.hostname=xxx.xxx.xxx.3 (on Slave2)
Then start the JMeter server on both slave machines (xxx.xxx.xxx.2 and xxx.xxx.xxx.3) using this command:
./jmeter-server
Then run the following command from the client machine (xxx.xxx.xxx.1) to remotely start all the slaves:
./jmeter -n -t <testscript.jmx> -r
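Alternatively, the slave list can be passed on the command line with -R, which overrides remote_hosts for that run:
./jmeter -n -t <testscript.jmx> -R xxx.xxx.xxx.2,xxx.xxx.xxx.3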
See this Thread.
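As for the ./jmeter: Permission denied error itself: jmeter-server is just a wrapper that execs ./jmeter from the same directory, so the message means the jmeter script has lost its execute bit (common after copying or unzipping the archive). Restoring it should fix that part, and sudo is then no longer needed:
cd /apache-jmeter-2.13/bin
# restore the execute bits on the launcher scripts
chmod +x jmeter jmeter-server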

Related

WAL configuration SSL Off

I am trying to configure docker-compose to create master and slave servers locally, using WAL to replicate the databases, and it is not working because of some problems in the configuration.
I am receiving an error (shown as a screenshot in the original post).
All my code is here:
https://github.com/Oracy/postgres
To run it, just use docker-compose up -d.
Thanks in advance.
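For reference, streaming replication on the primary usually comes down to a handful of settings; a minimal sketch, not tied to the linked repo ("replicator" is a placeholder role name):
# postgresql.conf on the master:
wal_level = replica
max_wal_senders = 5
# pg_hba.conf on the master, allowing the slave to connect for replication:
host replication replicator 0.0.0.0/0 md5
The slave then points at the master via primary_conninfo (in recovery.conf on PostgreSQL 11 and earlier, in postgresql.conf from version 12 on).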

Failed to parse remote port from server output: bash: no job control in this shell

I am trying to use VSCode Insiders to run code in a Docker container on a remote AWS machine using the Remote - SSH plugin. I have opened a terminal and set up port forwarding like so: ssh -L 2201:localhost:2222 user@host -N -i ~/.ssh/id_rsa. Then in VSCode I try to connect to root@localhost and it starts up, but then gives me an error message:
> Found existing installation...
> Found running server...
>
> bash: no job control in this shell
"install" terminal command done
Received install output: bash: no job control in this shell
Failed to parse remote port from server output: bash: no job control in this shell
I started doing this process a couple of days ago and it worked. Yesterday it was in and out a bit, and today it's not working at all. I've tried turning it off and on again, but can't get it to work. In case it's relevant, I am on macOS Mojave.
Edit:
Magically, it worked today (the following day) the first time. I would still be interested in knowing how to fix this next time it breaks. In case this helps, here's the output from when it is working:
SSH Resolver called for "ssh-remote+7b22686..."
SSH Resolver called for host: root@localhost
Setting up SSH remote "localhost"
Using commit id "473af338..." and quality "insider" for server
Using SSH config file "/Users/user/config"
Install and start server if needed
> Found existing installation...
> Found running server...
>
> bash: no job control in this shell
> 368805d0-03...==38466==
"install" terminal command done
Received install output: 368805d0-03...==38466==
Server is listening on port 38466
Using SSH config file "/Users/user/config"
Spawning tunnel with: ssh -F /Users/user/config root@localhost -N -L localhost:39003:localhost:38466
Spawned SSH tunnel between local port 39003 and remote port 38466
Waiting for ssh tunnel to be ready
Tunneling remote port 38466 to local port 39003
Resolving "ssh-remote+7b22686f737..." to "localhost:39003", attempt: 1
Edit 2: And now (the following following day) it's not working again.
Edit 3: I have a config file at ~/config. Here are the contents:
Host *
User root
Port 2201
IdentityFile ~/id_rsa
In the specific setup shown above, you have User root in your config and are also logging in with root@localhost, so you are specifying your username twice. Leave the config file as is and just enter localhost in VSCode. This still doesn't solve the instability issue, but it does fix one problem.
I had the same issue when configuring my server, and it was solved by the issue linked below. After saving your config file for the remote server, change the remote shell path as described in that issue, then connect, and you will get in.
https://github.com/microsoft/vscode-remote-release/issues/220#issuecomment-490374437
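For reference, the workaround in that issue comment amounts to changing the login shell on the remote host (the "no job control in this shell" message comes from the server bootstrap running under a shell that lacks job control). A sketch, assuming bash is installed on the remote machine:
# On the remote host, switch the login shell of the user VSCode connects as:
chsh -s /bin/bash root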

Singularity failing to create slave on Rancher with RHEL 7 instances

I'm trying to deploy Singularity on Rancher with a set of RHEL 7.3 (kernel 3.10.0) instances. Everything else works fine, but the slave node keeps failing to start, giving the following error:
Failed to create a containerizer: Could not create MesosContainerizer:
Failed to create launcher: Failed to create Linux launcher: Failed to
determine the hierarchy where the subsystem freezer is attached
How can I resolve this?
Have you tried this solution?
Try the mesos-slave option --launcher=posix.
You can set it permanently with echo 'MESOS_SLAVE_PARAMS="--launcher=posix"' >> restricted/host
With the new image you can also do it the Mesos way: echo MESOS_LAUNCHER=posix >> restricted/env
On Ubuntu (Debian), you can instead add this value to /etc/default/mesos-slave:
MESOS_LAUNCHER=posix
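Equivalently, when launching the agent by hand you can pass the flag directly (the ZooKeeper URL here is a placeholder for your actual master address):
# start the agent with the posix launcher instead of the Linux launcher
mesos-slave --master=zk://xxx.xxx.xxx.1:2181/mesos --launcher=posix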

How to run MongoDB on AWS

I'm looking for a little direction on how to set up services on AWS. I have an application that is built with Node.js and uses MongoDB (with Mongoose as the ODM). I'm porting everything over to AWS and would like to set up an Auto Scaling group behind a load balancer. What I am not really understanding, however, is where my MongoDB instance should live. I know that with DynamoDB the setup can be fairly intuitive, but since I am not using it, my question is this: where and how should Mongo be set up to work with my app? Should it be on the same EC2 instance as my app, and if so, how does that work with new instances starting and being terminated? Should I set up an instance dedicated only to Mongo? And in addition to that, how do I create snapshots and backups of my data?
This is a good document for installing MongoDB on EC2 and managing backups: https://docs.mongodb.org/ecosystem/platforms/amazon-ec2/
If you aren't comfortable doing all this yourself, you might also want to look into MongoLab, which is a MongoDB-as-a-service offering that can run on AWS.
Your database should definitely live in a separate instance from your app, from all aspects.
A basic tiered application comprises the app-server cluster in a scaling group behind a load balancer in a public subnet, plus a separate database cluster (recommended in a different subnet that is not publicly accessible) which your app cluster talks to. Whether to use an ELB for Mongo or not depends on your Mongo configuration (replica set).
In regards to snapshots (I assume this will only be relevant for your DB), have a look at this.
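In practice that separation just means the Node app connects to the database instance's private address instead of localhost; for example, passed in as an environment variable (the hostname is a placeholder):
# point the app at the dedicated DB instance rather than a local mongod
export MONGODB_URI="mongodb://mongo-primary.internal:27017/myapp"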
You can easily install MongoDB in AWS Cloud9 using the process below.
First, create a Cloud9 environment in AWS. At the terminal you'll see this prompt:
ubuntu:~/environment $
Enter touch mongodb-org-3.6.repo into the terminal
Now open the mongodb-org-3.6.repo file in your code editor (select it from the left-hand file menu), paste the following into it, then save the file:
[mongodb-org-3.6]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/3.6/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc
Now run the following in your terminal:
sudo mv mongodb-org-3.6.repo /etc/yum.repos.d
sudo yum install -y mongodb-org
If the second command does not work, try:
sudo apt install mongodb-clients
Close the mongodb-org-3.6.repo file and press Close tab when prompted
Change directories back to your home directory by entering cd in the terminal; the prompt should now look like ubuntu:~ $. Then enter the following commands:
sudo mkdir -p /data/db
echo 'mongod --dbpath=data --nojournal' > mongod
chmod a+x mongod
Now test mongod with ./mongod
Remember, you must first enter cd to change back to your home directory before running ./mongod
Don't forget to shut down mongod with Ctrl+C each time you're done working.
If this error pops up while running mongod:
exception in initAndListen: IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db, terminating
Then run:
sudo chmod -R go+w /data/db
Reference

Wildfly 8 cluster in standalone mode on different machines

I am trying to create a standalone-mode cluster in WildFly 8, referring to http://middlewaremagic.com/jboss/?p=1952. I was able to create the cluster on a single machine, but can't do the same across different machines. I started the server on both machines as follows:
1) On machine 1, go to cmd > jboss-wildfly\bin and run the following command (10.10.54.27 is the IP of machine 1):
standalone.bat -c standalone-ha.xml -b 10.10.54.27 -u 230.0.0.4 -Djboss.server.base.dir=../standalone -Djboss.node.name=nodeOne
2) On machine 2, go to cmd > jboss-wildfly\bin and run the following command (10.10.52.42 is the IP of machine 2):
standalone.bat -c standalone-ha.xml -b 10.10.52.42 -u 230.0.0.4 -Djboss.server.base.dir=../standalone -Djboss.node.name=nodeTwo
The servers start without any problem, but the nodes can't see each other. I used ClusterWebApp.war (downloaded from the same site linked above) to test the cluster.
Am I missing something? Please help.
Most likely UDP broadcast/multicast is not allowed on your network (as of today, this is the case on AWS VPCs, for instance).
There is a test you can run to confirm this:
http://www.techstacks.com/howto/troubleshoot-jgroups-and-multicast-ip-issues.html
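That test boils down to JGroups' built-in multicast sender/receiver pair; roughly like this, assuming a jgroups jar is at hand (the address and port mirror the -u flag used above):
# On machine 1, start the receiver:
java -cp jgroups.jar org.jgroups.tests.McastReceiverTest -mcast_addr 230.0.0.4 -port 45588
# On machine 2, start the sender; lines typed here should appear on machine 1:
java -cp jgroups.jar org.jgroups.tests.McastSenderTest -mcast_addr 230.0.0.4 -port 45588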
If so, you may have to cluster over TCP. These links explain how this can be done:
http://middlewaremagic.com/jboss/?p=2015
http://www.redhat.com/summit/2011/presentations/jbossworld/whats_new/wednesday/ban_w_310_running_in_the_cloud.pdf
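In WildFly 8, switching to TCP means changing the jgroups subsystem in standalone-ha.xml from the udp stack to the tcp stack, and replacing MPING (which still uses multicast for discovery) with TCPPING and a static host list. A sketch of the relevant fragment, reusing the IPs from the question (the rest of the tcp stack stays as shipped):
<subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="tcp">
  <stack name="tcp">
    <transport type="TCP" socket-binding="jgroups-tcp"/>
    <!-- TCPPING replaces MPING so discovery works without multicast -->
    <protocol type="TCPPING">
      <property name="initial_hosts">10.10.54.27[7600],10.10.52.42[7600]</property>
      <property name="port_range">0</property>
    </protocol>
    <!-- keep the remaining protocols from the shipped tcp stack -->
  </stack>
</subsystem>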
I found the problem. For this kind of clustering to work, the multicast protocol must be supported by the network switches and routers. I had previously tried it on my laptop, where multicast is enabled, which is why the cluster worked on a single machine. The other machine, however, is one of the servers on my network, where multicast was not enabled, hence the failure across two different machines. A very basic problem! After correcting this, the cluster is working fine.