JBoss EAP6 - restart multiple hosts with one command in domain mode

I have an app that needs to be restarted through the JBoss CLI on multiple hosts during a deployment.
Is there a way to do this dynamically with a single restart(blocking=true) command? Or is there a different command that restarts all hosts while also supporting the blocking argument, so that it waits for the servers to restart?
Example code:
/host=devserver1/server-config=Group-devserver1:restart(blocking=true)
/host=devserver2/server-config=Group-devserver2:restart(blocking=true)
/host=devserver3/server-config=Group-devserver3:restart(blocking=true)

You can restart the servers by server group, which supports blocking, or you can restart all the servers on a host, which does not support blocking.
To restart via the server group you'd do something like:
/server-group=main-server-group:restart-servers(blocking=true)
To restart on the host you'd do something like:
/host=master:reload(restart-servers=true)
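If you want to issue all three host restarts in one go, one option (a sketch, assuming the standard jboss-cli.sh client and the host/server-config names from the question; the controller address is a placeholder) is to pass them all to a single non-interactive CLI invocation:

```shell
# Run the three blocking restarts in one jboss-cli session.
# The commands still execute and block one after the other.
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=master:9999 \
  --commands="/host=devserver1/server-config=Group-devserver1:restart(blocking=true),\
/host=devserver2/server-config=Group-devserver2:restart(blocking=true),\
/host=devserver3/server-config=Group-devserver3:restart(blocking=true)"
```

This is one shell command rather than one management operation, but it avoids three separate CLI sessions during a deployment.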

Related

Dynamic port mapping for ECS tasks

I want to run a socket program in AWS ECS with the client and server in one task definition. I am able to run it when I use awsvpc network mode and connect to the server on localhost every time. This is good because I don't need to know the IP address of the server. The issue is that the server has to start on some port, and if I run 10 of these tasks, only 3 tasks (= the number of running instances) run at a time. This is clearly because 10 tasks cannot open the same port. I can manually check for open ports before starting the server and somehow write the port to a Docker shared volume where the client can read it and connect. But this seems complicated, and it puts unnecessary code in my server. For Services there is dynamic port mapping using an Application Load Balancer, but there isn't anything for simply running tasks.
How can I run multiple socket programs without having to manage the port number in AWS ECS?
If you're using awsvpc mode, each task gets its own ENI and there shouldn't be any port conflict. But each instance type has a limited number of ENIs available. You can increase that limit by enabling ENI trunking, which, however, is supported by only a handful of instance types:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-instance-eni.html#eni-trunking-supported-instance-types
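For reference, ENI trunking is an account-level ECS setting; a minimal sketch of enabling it with the AWS CLI (assuming a recent awscli and the required IAM permissions) looks like:

```shell
# Opt all new container instances in this account into ENI trunking.
aws ecs put-account-setting-default --name awsvpcTrunking --value enabled

# Verify the effective setting.
aws ecs list-account-settings --effective-settings --name awsvpcTrunking
```

Note that the setting only applies to container instances launched after it is enabled, and only the instance types on the linked page actually receive trunk interfaces.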

Jboss multiple instances in Standalone mode on same pc

We are using JBoss 7 App Server and we are trying to run multiple server nodes on a single box, and also on another box (basically 2 boxes, each running 2 JBoss server nodes).
My question is about running multiple JBoss server nodes on a single box in standalone mode. Do I have to copy the server folder twice and use port offsets?
Or is it OK to start the servers just via port offsets, without copying the server folder?
What is the best practice for running multiple server nodes on the same box? Any advice would be greatly appreciated.
Thank you.
Just create multiple copies of the standalone directory (for example, standalone_PROD and standalone_SIT) so that each instance has its own log files and deployment directories. Then use the options below when starting each server instance:
-Djboss.server.base.dir=/path/to/standalone_SIT <-- Location of standalone dir
-Djboss.socket.binding.port-offset=10 <-- PortOffset to avoid port conflict
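Putting the two options together, a start script for two instances might look like this (a sketch; the paths and the offset are placeholders for your own layout):

```shell
# First instance: default ports, its own base directory.
$JBOSS_HOME/bin/standalone.sh \
  -Djboss.server.base.dir=/opt/jboss/standalone_PROD &

# Second instance: same installation, separate base directory,
# with every socket binding shifted by 10 to avoid conflicts.
$JBOSS_HOME/bin/standalone.sh \
  -Djboss.server.base.dir=/opt/jboss/standalone_SIT \
  -Djboss.socket.binding.port-offset=10 &
```

Because both instances share one $JBOSS_HOME, upgrades and patches only have to be applied once, while logs and deployments stay separate per base directory.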
We have had two instances of JBoss on the same computer for several years. Both instances were in the same domain. Each instance had its own ports and, of course, its own path. Our experience has been good.
You can have as many standalone instances as you want on a machine, depending upon the resources available.
All you need to do is copy the same folder twice and change all the ports used in standalone mode. Also, if you are setting any parameters, make sure they fit the memory available on the machine.

Jboss Server with same port on the same machine

Can we run more than one instance of Jboss Server with same port on the same machine ? If yes how ?
Thanks
Amar
Of course, the only way to have two services listening on the same port is to make sure that they bind to different IP addresses. If you consider it acceptable to configure multiple addresses on the same interface, simply start each instance of JBoss with the flag "-b <address>".
Yes you can. All you need is to also run an Apache server instance and use it as a load balancer in front of a JBoss cluster, using the mod_proxy or mod_ajp plugin to balance the load between the JBoss instances. To spin up multiple instances of JBoss 5 or JBoss 6 on Windows, use my script here (but you will have to extend the configuration yourself to enable clustering and the Apache load balancer). Also, my launch script requires you to download some components from the YAJSW service wrapper project.
I frequently run multiple JBoss servers as a cluster, and I always run an Apache server on ports 80 and 443 that load balances to the JBoss instances. Here is an example post from my blog.
Yes, you can do it if your machine has several network interfaces (IP addresses) and you bind each JBoss instance to a different IP. For example, if your machine has two network interfaces, 192.168.1.1 and 192.168.1.2, you could run each instance with the commands:
./run.sh -c instance1 -b 192.168.1.1
./run.sh -c instance2 -b 192.168.1.2
But the most common case is running several instances on the same machine using different ports for each instance; you can achieve that with JBoss port bindings.
For detailed information, see the JBoss page: Configuring Multiple JBoss Instances On One Machine.

How do I deploy an entire environment (group of servers) using Chef?

I have an environment (Graphite) that looks like the following:
N worker servers
1 relay server that forwards work to these worker servers
1 web server that can query the relay server.
I would like to use Chef to set up and deploy this environment in EC2 without having to create each worker server individually, get their IPs and set them as attributes in the relay cookbook, create that relay, get its IP, set it as an attribute in the web server cookbook, and so on.
Is there a way using chef in which I can make sure that the environment is properly deployed, configured and running without having to set the IPs manually? Particularly, I would like to be able to add a worker server and have the relay update its worker list, or swap the relay server for another one and have the web server update its reference accordingly.
Perhaps this is not what Chef is intended for and is more for per-server configuration and deployment, if that is the case, what would be a technology that facilitates this?
Things you will need are:
knife-ec2 - This is used to start/stop Amazon EC2 instances.
chef-server - To be able to use search in your recipes. It should be also accessible from your EC2 instances.
search - with this you will be able to find among the nodes provisioned by chef, exactly the one you need using different queries.
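As an illustration of the knife-ec2 piece, a hypothetical invocation that boots a worker and registers it with the Chef server under a role the recipes can then search for (the AMI ID, flavor and key name are placeholders, and the exact flags may vary between knife-ec2 versions):

```shell
# Launch an EC2 instance, bootstrap it with chef-client,
# and put it in the role the load balancer recipe searches for.
knife ec2 server create \
  --image ami-0123456789abcdef0 \
  --flavor t2.micro \
  --ssh-key my-keypair \
  --run-list "role[lr-node]"
```

Once the node has converged and registered, the search calls in the recipes below will find it on their next chef-client run, which is what lets the load balancer pick up new workers automatically.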
I recently wrote an article, How to Run Dynamic Cloud Tests with 800 Tomcats, Amazon EC2, Jenkins and LiveRebel. It involves installing a load balancer, and the load balancer must know all the IP addresses of the servers it balances. You can check out the recipe for a balanced node and how it looks up the load balancer:
search(:node, "roles:lr-loadbalancer").first
And check out the load balancer recipe, which looks up all the balanced nodes and updates the Apache config file:
lr_nodes = search(:node, "role:lr-node")
template ::File.join( node[:apache2][:home], 'conf.d', 'httpd-proxy-balancer.conf' ) do
mode 0644
variables(:lr_nodes => lr_nodes)
notifies :restart, 'service[apache2]'
end
Perhaps you are looking for this?
http://www.infochimps.com/platform/ironfan

Having Capistrano skip over down hosts

My setup
I am deploying a Ruby on Rails application to 70+ hosts. These hosts are located behind consumer-grade ADSL connections which may or may not be up. The probability of a host being up is around 99%, but definitely not 100%.
The deploy process works perfectly fine, and I have no problem specific to it.
The problem
When Capistrano encounters a down host, it stops the entire process. This is a problem because if host #30 is down, then the 40 other hosts after it do not get the deployment.
What I would like is definitely an error for the hosts that are down, but I would also like Capistrano to continue deploying to all the hosts that are up.
Is there any setting or configuration that would enable me to do this?
I ended up running a separate Capistrano instance for each IP and then parsing the logs to see which runs failed and which succeeded.
A little Python script adjusted to my needs does this fine.
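The per-host approach from this answer can be sketched as a small shell wrapper (the host list and the cap invocation are placeholders; adjust them to your own deploy task):

```shell
#!/bin/sh
# Deploy to each host in its own Capistrano run, so that one down
# host cannot abort the rest; collect failures for later inspection.
HOSTS="10.0.0.30 10.0.0.31 10.0.0.32"   # placeholder host list
FAILED=""
for h in $HOSTS; do
  if ! cap deploy HOSTS="$h" > "deploy-$h.log" 2>&1; then
    FAILED="$FAILED $h"                 # remember the host, keep going
  fi
done
[ -n "$FAILED" ] && echo "Deployment failed on:$FAILED"
```

Each host gets its own log file, so the "parse the logs" step reduces to checking the exit status recorded in FAILED and re-running just those hosts later.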