How to deploy WARs one after another by script after WildFly startup? - deployment

I have a problem starting WildFly. I have more than 40 WARs on the application server and an Xmx limitation. Sometimes the server starts, sometimes it does not...
Is there any way to deploy the WARs one after another, after WildFly has started?
The server has trouble starting when it deploys all the WARs during startup.

Yes, regardless of how you deploy, you can deploy one by one.
If you deploy by copying your WARs to standalone/deployments, then yes: you can (1) start an empty server and then (2) copy the WARs one by one, manually or with a shell script (see the sketch below). However, it is quite possible that your problem will reappear when you shut down and restart the fully deployed server. To solve that, you could have a script undeploy the WARs before shutting the server down.
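A minimal sketch of such a script, assuming the WARs sit in a local directory (~/wars is a placeholder) and relying on the marker files (.deployed / .failed) that the WildFly deployment scanner writes next to each archive:

    DEPLOY_DIR="$JBOSS_HOME/standalone/deployments"
    for war in ~/wars/*.war; do
        name=$(basename "$war")
        cp "$war" "$DEPLOY_DIR/"
        # wait until the scanner reports a result before copying the next WAR
        until [ -e "$DEPLOY_DIR/$name.deployed" ] || [ -e "$DEPLOY_DIR/$name.failed" ]; do
            sleep 2
        done
    done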
You can achieve a similar effect with jboss-cli: (1) start an empty server, (2) connect your CLI client with jboss-cli.sh --connect, and (3) deploy the WARs one by one: deploy ~/Desktop/my-app1.war ... See https://docs.wildfly.org/16/Admin_Guide.html#standalone-server-3
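The same can be scripted non-interactively with the CLI's --command option (again just a sketch; the ~/wars directory is a placeholder):

    for war in ~/wars/*.war; do
        "$JBOSS_HOME/bin/jboss-cli.sh" --connect --command="deploy $war"
    done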

Related

JBoss EAP 6.4 Unable to load topology

In my JBoss web console, the topology view in the Domain tab is empty, and I don't know why. Everything is up and running; I just can't see the domain topology in the JBoss console. It only shows "Unable to load topology".
I just got the "Unable to load topology" error today.
I have separate multi-node JBoss domain configurations in my environment: one with 6 nodes and one with 3 nodes, all running 6.4.22.GA.
The error came up for me when we were switching LDAP user authentication hosts and were trying to do that while leaving the servers up and running as much as possible.
When the domain node was changed to the new LDAP server and brought back up, we got the topology error.
The fix was to bounce jbossas-domain on the other nodes and point them to the new LDAP server. After we did that, the JBoss console was able to display the topology again.
In short, my solution was to make sure all the nodes in the JBoss domain had the same configuration and then bounce them.
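A sketch of that "bounce" step, assuming the jbossas-domain service name mentioned above and that the other nodes are reachable over SSH (the host names are placeholders):

    # restart the host controller on each remaining node so it reconnects with the new LDAP configuration
    for host in node2 node3 node4; do
        ssh "$host" "sudo service jbossas-domain restart"
    done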

How to redeploy SOA projects to a managed node using WebLogic Enterprise Manager

I have configured a SOA cluster with one admin node and two managed nodes, with all server nodes configured on three different machines. Once I deploy a BPEL to one managed node, it is automatically deployed to the other managed node as well (default behavior). In SOA Enterprise Manager, those deployed BPELs can be viewed under [Soa -> managed node -> Default -> ..], which is the same place where we deploy new BPELs. I accidentally undeployed all BPELs (you can do this by right-clicking a managed node and choosing the un-deploy option).
Now I'm having a hard time getting back to the previous state: how do I deploy all those projects again to a specific managed node? I tried restarting the node hoping it would sync again, yet the managed server went into the "admin" state (not the OK state).
Is there anything else that needs to be done?
Thanks, Hemal
You'll need to start the server from the command line; that will work.
For managing 'managed servers' from EM or the WLS console, there is one additional step that is needed during the installation process.
Please modify the nodemanager.properties of WLS and set the property StartScriptEnabled=true.
http://download.oracle.com/docs/cd/E12839_01/core.1111/e10105/start.htm#CIHBACFI
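A rough outline of those two steps, assuming the usual Oracle 11g directory layout (the paths, the admin URL and the server name MS1 are placeholders for your environment):

    # 1. in $WL_HOME/common/nodemanager/nodemanager.properties, make sure the property is set:
    #      StartScriptEnabled=true
    #    then restart the node manager so it picks up the change
    "$WL_HOME/server/bin/startNodeManager.sh" &

    # 2. start the managed server from the command line, pointing it at the admin server
    "$DOMAIN_HOME/bin/startManagedWebLogic.sh" MS1 t3://adminhost:7001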

Prevent deployment to entry node, only deploy to other nodes

I have a free OpenShift account with the default 3 gears. On this I have installed the WildFly 8.1 image using the OpenShift web console. I set the minimum and maximum scaling to 3.
What happens now is that OpenShift will create 3 JBoss WildFly instances:
One on the entry node (which is also running HAProxy)
One on an auxiliary node
One on another auxiliary node
The weird thing is that the JBoss WildFly instance on the entry node is by default disabled in the load balancer config (haproxy.conf). BUT, OpenShift still deploys the war archive to it whenever I commit to the associated git repo.
What's extra problematic here is that, because of the incredibly low limit on max user processes (250 via ulimit -u), this JBoss WildFly instance on the entry node cannot even start up. During startup, JBoss WildFly will throw random 'java.lang.OutOfMemoryError: unable to create new native thread' errors (and no, memory is fine; it's the OS process limit).
As a result, the deployment process will hang.
So to summarize:
A JBoss WildFly instance is created on the entry node, but disabled in the load balancer
JBoss WildFly in its default configuration cannot start up on the entry node, not even with a trivial war.
The deployer process attempts to deploy to JBoss WildFly on the entry node, despite it being disabled in the load balancer
Now my question:
How can I modify the deployer process (including the gear start command) to not attempt to deploy to the JBoss WildFly instance on the entry node?
When an app scales from 2 gears to 3, HAProxy stops routing traffic to your application on the head gear and routes it to the two other gears. This ensures that HAProxy gets as much CPU as possible, since the application on your head gear (where HAProxy is running) is no longer serving requests.
The out-of-memory message you're seeing might not be an actual out-of-memory issue but a bug relating to ulimit: https://bugzilla.redhat.com/show_bug.cgi?id=1090092
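A quick way to check whether that error really comes from the per-user process limit rather than from heap exhaustion (a small diagnostic sketch to run on the entry-node gear; the ps/awk pipeline is just one way to count threads):

    ulimit -u                                             # max user processes (250 on the entry gear, per the question)
    ps -u "$USER" -o nlwp= | awk '{s+=$1} END {print s}'  # total native threads currently used by this user

If the second number is close to the first, the JVM cannot create any more native threads no matter how much heap is still free.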

Deploying different ear files to different clusters of the same WebLogic server domain

Hi, I am new to this forum as well as to WebLogic server. My requirement is that I have an application that runs on a cluster having an admin server and three managed servers: MS1, MS2, MS3. Currently my application has two parts (two pieces of logic), both of which are packaged in a single ear file. Part1 always occupies one server, say MS1, and the rest runs on the other two, MS2 & MS3. I want to divide my code into two different ears, part1 and part2, with part1_ear deployed on MS1 and part2_ear deployed on MS2 and MS3, all running under the same admin server:
ear1 deployed on -----> MS1
ear2 deployed on -----> MS2 & MS3
All running under the same admin server.
Can this be done? If not, other suggestions are also welcome, but I can have only one admin server and 3 clusters.
Yes, when you deploy your .ear files you can target specific machines in a cluster. Your deployments don't have to go to all machines in a cluster.
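A minimal sketch of targeting each ear separately with the standard weblogic.Deployer tool (the admin URL, credentials and file paths here are placeholders):

    # deploy part1 only to MS1
    java weblogic.Deployer -adminurl t3://adminhost:7001 -username weblogic -password welcome1 \
         -deploy -name part1 -source /path/to/part1.ear -targets MS1

    # deploy part2 to MS2 and MS3
    java weblogic.Deployer -adminurl t3://adminhost:7001 -username weblogic -password welcome1 \
         -deploy -name part2 -source /path/to/part2.ear -targets MS2,MS3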
Also, if you really only want one server in the cluster to handle some specific event you might want to look into Singleton Services.
Have you had experience deploying applications in Weblogic before?

Having Capistrano skip over down hosts

My setup
I am deploying a Ruby on Rails application to 70+ hosts. These hosts sit behind consumer-grade ADSL connections which may or may not be up. The probability of a host being up is around 99%, but definitely not 100%.
The deploy process works perfectly fine and I have no problem specific to it.
The problem
When Capistrano encounters a down host, it stops the entire process. This is a problem because if host n°30 is down, then the 40 other hosts after it do not get the deployment.
What I would like is definitely an error for the hosts that are down, but I would also like Capistrano to continue deploying to all the hosts that are up.
Is there any setting or configuration that would enable me to do this?
I ended up running a Capistrano instance for each IP, then parsing the logs to see which ones failed and which ones succeeded.
A little Python script adjusted to my needs does this fine.
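A minimal sketch of that per-host approach as a shell wrapper, assuming Capistrano's HOSTS variable (Capistrano 2) and a hosts.txt file listing the IPs — both are placeholders for your own setup:

    mkdir -p logs
    # one Capistrano run per host, so a down host only fails its own run
    while read -r ip; do
        if HOSTS="$ip" cap deploy > "logs/$ip.log" 2>&1; then
            echo "$ip OK"     >> deploy-results.txt
        else
            echo "$ip FAILED" >> deploy-results.txt
        fi
    done < hosts.txt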