Fuse Fabric: How to read and maintain a configuration PID per environment?

I have configured a fabric profile app-ticketing with a configuration PID, using the maven plugin that bundles the dependencies and the configuration PIDs. When the Camel context is initialized and the Camel route starts up, it reads the configuration from the PID file for connection settings, port numbers, etc. The Camel polling route starts automatically as soon as the profile is deployed to a container.
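For reference, the route resolves these values through an OSGi Config Admin placeholder bound to the PID, roughly like this (the property names are illustrative):

    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
               xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0">
        <!-- binds placeholders such as ${endpoint.port} to the PID com.example.ticketing -->
        <cm:property-placeholder persistent-id="com.example.ticketing" update-strategy="reload"/>
        <!-- camelContext and routes using the ${...} placeholders go here -->
    </blueprint>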
We have 3 environments DEV, QA and Production with different connection parameters, port numbers etc.
How do I set up the profile such that it determines the current environment and uses a different PID file (for example com.example.ticketing.dev.properties if it is the DEV environment), instead of having to edit the PID every time I need to deploy to a different environment?

I have put together an example for this using the fabric8-maven-plugin; please refer to https://github.com/sundarmr/camelexamples/tree/master/camel-examples/camel-envbased-props .
See if this fits your use case.
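The core idea, sketched below with illustrative property names (the layout of the linked example may differ in detail), is to keep the PID name stable and let a per-environment profile supply the values:

    # profile app-ticketing-dev, file com.example.ticketing.properties
    endpoint.host=dev-ticketing.example.com
    endpoint.port=7001

    # profile app-ticketing-prod, same PID file name, different values
    endpoint.host=ticketing.example.com
    endpoint.port=9001

Because every environment overrides the same PID, the bundle and its placeholders never change; only the profile assigned to the container does.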

Related

Running a Spring Batch job with partitions in Cloud Foundry

I have created an app with a Spring Batch (partitioned) job, taking the example of https://github.com/mminella/S3JDBC as a starting point. My app reads some files from an object store, does some processing, and writes back to the object store. With local partitioning, my app works fine on my machine.
I changed the Maven configuration to run in Cloud Foundry, made the changes for the deployer partition handler and step execution listener, and deployed on PCF.
But while trying to push and run the app on PCF, I am getting an issue: Failing URI /v2/info. I tried to log the error and found that there is one call to my app, e.g. https://mypcf.com:443/v2/info, and after that it gives the error. I can't provide full logs because of some restrictions. So I want to know:
1. To deploy a Spring Batch app in PCF, is there any extra configuration needed except the Maven dependency and the code changes for the deployer partition handler, the step execution listener and #cloudtask?

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-deployer-cloudfoundry</artifactId>
        <version>1.1.0.M1</version>
    </dependency>

2. Is it mandatory to have a separate database service like MySQL for the partitioned job? Can't I use H2 (the default one, if I don't configure anything)?
3. Do I need to do any configuration in PCF to support running multiple partitions?
4. As I am running remote partitioning, can I run the app locally in STS or IntelliJ (not on PCF Dev) so that it runs my app in PCF (remote) and launches the workers? (Sorry for the stupid question, I am new to PCF.)
Thanks for checking out my example. To answer your questions:
1. You should be able to use the latest deployer release (instead of that rather old version).
2. Yes. Partitioned steps all need to be able to share the same job repository data store, so an in-memory database like H2 will not work for that use case (see the sketch after this list).
3. Besides defining your datasource, that's all that is required to run in PCF. That being said, there are other things that need to be configured, but you can use other mechanisms to do so (Spring Cloud Config Server, application.properties/yml, etc.).
4. Yes, you should be able to run the master locally and have it deploy the workers onto PCF if you're using the CF deployer.
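As an illustration of point 2, here is a minimal application.properties sketch for pointing the job repository at a shared MySQL service; the URL, credentials and schema name are placeholders, not values from the original answer:

    # shared job repository; the master and all worker partitions must see the same database
    spring.datasource.url=jdbc:mysql://<mysql-host>:3306/batchdb
    spring.datasource.username=batch
    spring.datasource.password=<secret>
    spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
    # let Spring Batch create its metadata tables on first run
    spring.batch.initialize-schema=always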

Per-host or per-deployment subsystem configuration in WildFly

I have a number of demo environments that I would like to set up for different groups of customers. These would contain the same deployed apps (WARs) but require different configurations. Currently I'm using:
3 datasources (accessed via JNDI) per application (so each environment would need different databases)
some simple Naming/JNDI bindings which would need to differ by environment
one ActiveMQ queue per environment, also identified via JNDI
Would it be possible, on WildFly 11, to configure the Naming, Datasources and ActiveMQ subsystems in a non-global manner? Maybe by configuring the subsystems at a server, host or deployment level? I don't mind having multiple server or host definitions with different network ports (8080, 8081, etc.).
I know that I can set up multiple standalone instances running on the same machine, each with a different configuration file, but I would really like to use the same WildFly installation to manage this scenario. Is this at all possible?
Thank you,
You should be using domain mode, where you can manage several servers and assign them different configuration profiles: https://docs.jboss.org/author/display/WFLY/Domain+Setup
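A minimal sketch of how that could look, assuming two hypothetical profiles demo-a and demo-b whose naming, datasource and messaging subsystems differ (all names and the port offset are illustrative):

    <!-- domain.xml: one server group per customer environment -->
    <server-groups>
        <server-group name="customers-a" profile="demo-a">
            <socket-binding-group ref="standard-sockets"/>
        </server-group>
        <server-group name="customers-b" profile="demo-b">
            <socket-binding-group ref="standard-sockets"/>
        </server-group>
    </server-groups>

    <!-- host.xml: two servers on the same host, separated by a port offset -->
    <servers>
        <server name="server-a" group="customers-a"/>
        <server name="server-b" group="customers-b">
            <socket-bindings port-offset="100"/>
        </server>
    </servers>

Each profile carries its own datasource, JNDI and messaging configuration, while a single domain controller manages both servers.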

How to redeploy SOA projects to a managed node using WebLogic Enterprise Manager

I have configured a SOA cluster with one admin node and two managed nodes, all server nodes configured on three different machines. Once I deploy a BPEL to one managed node, it automatically deploys to the other managed node as well (default behavior). In SOA Enterprise Manager those deployed BPELs can be viewed under [SOA -> managed node -> Default -> ...], which is the same place where we deploy new BPELs. I accidentally undeployed all BPELs (you can do it by right-clicking a managed node and choosing the un-deploy option).
Now I'm having a hard time getting back to the previous state: how do I deploy all those projects again to a specific managed node? I tried restarting the node hoping it would sync again, yet the managed server went to the "admin" state (not the OK state).
Is there anything that needs to be done?
Thanks, Hemal
You'll need to start the server from the command line; it will work.
For managing 'managed servers' from EM or the WLS console, there's one additional step needed during the installation process.
Please modify the nodemanager.properties of WLS and set the property StartScriptEnabled=true.
http://download.oracle.com/docs/cd/E12839_01/core.1111/e10105/start.htm#CIHBACFI
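For example (the path below is the usual default; adjust it to your installation):

    # $WL_HOME/common/nodemanager/nodemanager.properties
    StartScriptEnabled=true

Restart Node Manager afterwards so the change takes effect.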

Load balancing in JBoss with mod_cluster

Got a general question about load balancing setup in JBoss (7.1.1.Final). I'm trying to setup a clustered JBoss instance with a master and slave node and I'm using the demo app here (https://docs.jboss.org/author/display/AS72/AS7+Cluster+Howto) to prove the load balancing/session replication. I've basically followed through to just before the 'cluster configuration' section.
I've got the app deployed to the master and slave nodes, and if I hit their individual IPs directly I can access the application fine. According to the JBoss logs and admin console, the slave has successfully connected to the master. However, if I put something in the session on the slave and then take the slave offline, the master cannot read the item that the slave put in the session.
This is where I need some help with the general setup. Do I have to have a separate Apache httpd instance sitting in front of JBoss to do the load balancing? I thought there was a load-balancing capability built into JBoss that wouldn't need the separate server, or am I just completely wrong? If I don't need Apache, could you please point me in the direction of instructions for setting up the JBoss load balancing?
Thanks.
Yes, you need Apache or some other software or hardware that performs load balancing of HTTP requests; JBoss Application Server does not provide this functionality.
For session replication to work properly, you should check that both the server configuration and the application configuration are well defined.
The server must have the cache enabled for session replication (you can use the standalone-ha.xml or standalone-full-ha.xml file for the initial config).
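For example, with a default standalone install you could start the server with the HA profile (the path is illustrative):

    # enables the clustering/replication services used for HTTP sessions
    $JBOSS_HOME/bin/standalone.sh -c standalone-ha.xml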
Configuring the application to replicate the HTTP session is done by adding the <distributable/> element to web.xml.
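A minimal sketch of the web.xml change:

    <!-- marking the application as distributable enables HTTP session replication -->
    <web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
        <distributable/>
    </web-app>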
You can see a full example at http://blog.akquinet.de/2012/06/21/clustering-in-jboss-as7eap-6/

Prevent deployment to entry node, only deploy to other nodes

I have a free OpenShift account with the default 3 gears. On this I have installed the WildFly 8.1 image using the OpenShift web console. I set the minimum and maximum scaling to 3.
What happens now is that OpenShift will create 3 JBoss WildFly instances:
One on the entry node (which is also running HAProxy)
One on an auxiliary node
One on another auxiliary node
The weird thing is that the JBoss WildFly instance on the entry node is by default disabled in the load balancer config (haproxy.conf). BUT, OpenShift still deploys the war archive to it whenever I commit to the associated git repo.
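Roughly, haproxy.conf contains entries like the following (gear names and addresses are illustrative, not copied from a real config):

    # the gear on the entry node is present but marked disabled,
    # so HAProxy sends it no traffic
    server gear-local 127.0.0.1:8080 check disabled
    server gear-aux-1 <aux-node-1>:8080 check
    server gear-aux-2 <aux-node-2>:8080 check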
What's extra problematic here is that, because of the incredibly low limit on user processes (250, via ulimit -u), this JBoss WildFly instance on the entry node cannot even start up. During startup JBoss WildFly throws random 'java.lang.OutOfMemoryError: unable to create new native thread' errors (and no, memory is fine; it's the OS process limit).
As a result, the deployment process will hang.
So to summarize:
A JBoss WildFly instance is created on the entry node, but disabled in the load balancer
JBoss WildFly in its default configuration cannot start up on the entry node, not even with a trivial war.
The deployer process attempts to deploy to JBoss WildFly on the entry node, despite it being disabled in the load balancer
Now my question:
How can I modify the deployer process (including the gear start command) to not attempt to deploy to the JBoss WildFly instance on the entry node?
When an app scales from 2 gears to 3, HAProxy stops routing traffic to your application on the head gear and routes it to the two other gears. This ensures that HAProxy gets as much CPU as possible, since the application on your head gear (where HAProxy is running) is no longer serving requests.
The out-of-memory message you're seeing might not be an actual out-of-memory issue but a bug related to ulimit: https://bugzilla.redhat.com/show_bug.cgi?id=1090092