Camel application is taking more than one hour to start - jboss

We're using Apache Camel with JBoss Fuse in our application for integration. We have built almost 80 APIs in one bundle, and the Camel context contains 100+ routes.
When we deploy the bundle on JBoss Fuse, it takes almost an hour for all routes to come up, and the bundle only reaches the "started" state an hour after deployment.
We could have divided our bundle into parts, let's say each bundle with a maximum of 10 APIs, but we have already developed our application as one bundle and it is taking so much time to start.
Is there any way to reduce the time for the bundle to be "started" other than splitting the bundle into smaller bundles?

First, it should be noted: this is an anti-pattern and should never be repeated (looking at you, Camel developer who just Googled and found this article).
Karaf / JBoss Fuse can definitely scale to hundreds of routes per instance (I've successfully run up to 500 in a single instance), but putting hundreds of varying routes in one bundle is never advised. You lose a great deal of process control, startup-ordering management, and flexibility in adjusting your deployments across multiple containers as needed.
You would need to look into some sort of async startup of the routes. The bundle won't go active until all OSGi components are active (including bundle activators, Spring descriptors, SCR components and Blueprint descriptors).
If your routes are JMS consumers, look into the 'asyncStartListener' and 'asyncStopListener' options.
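For example, these options can be set directly on the endpoint URI. A minimal sketch, assuming an ActiveMQ component registered as "activemq" and a made-up queue name:

```java
import org.apache.camel.builder.RouteBuilder;

// Sketch only: assumes an "activemq" component is registered in the
// Camel context and that a queue named "orders.in" exists.
// asyncStartListener lets the bundle go active without waiting for
// every JMS listener container to finish starting.
public class AsyncJmsStartupRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("activemq:queue:orders.in?asyncStartListener=true&asyncStopListener=true")
            .routeId("orders-in")
            .to("log:orders");
    }
}
```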
Another option is to disable starting the routes at bundle activation time, allow the bundle to go active, and then have another process come through (in a threaded approach) and start calling route.start() on all your routes.
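A rough sketch of that deferred-start approach, assuming the Camel 2.x API shipped with JBoss Fuse (endpoint URIs and route IDs here are illustrative):

```java
import org.apache.camel.CamelContext;
import org.apache.camel.Route;
import org.apache.camel.builder.RouteBuilder;

// Sketch only: routes are marked noAutoStartup() so the bundle can go
// active quickly; a background thread then starts them one by one.
public class DeferredStartupRoutes extends RouteBuilder {
    @Override
    public void configure() {
        from("file:/data/in").routeId("file-in").noAutoStartup().to("log:in");
        // ...the remaining routes, also marked noAutoStartup()
    }

    // Call this once the bundle is active, e.g. from a separate startup thread.
    public static void startAllRoutes(final CamelContext context) {
        new Thread(new Runnable() {
            public void run() {
                for (Route route : context.getRoutes()) {
                    try {
                        context.startRoute(route.getId()); // Camel 2.x API
                    } catch (Exception e) {
                        // log and continue with the next route
                    }
                }
            }
        }, "deferred-route-starter").start();
    }
}
```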
Camel JMS Component
Camel Startup ordering

Related

How to reduce load on JBoss Fuse Karaf running Camel routes with ActiveMQ?

I have a broken system running JBoss Fuse 7.5 and 6.3 with Karaf, Camel routes, and ActiveMQ. The main purpose of the system is to move data from A to B using various protocols.
The system was designed over 10 years ago, when it worked fine handling a few dozen routes. However, demand has grown from a few dozen routes to a thousand or more, and the load is too much: routes are breaking and we are losing data, running out of memory, exhausting resources, etc.
I need to keep the system running without error until the replacement system comes online which will be a while. We are required to use the hardware we have. We cannot increase processing power by adding a VM.
Here are some ideas I have; I am not sure if these can be done:
Restrict the number of concurrent processes.
Configure Camel routes to run only during certain times of the day, or have some method to spread the load out over time.
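To make those two ideas concrete, here is a rough, unverified sketch of what I have in mind, assuming Camel 2.x route policies are available on our Fuse version (ThrottlingInflightRoutePolicy and, from camel-quartz2, CronScheduledRoutePolicy); the endpoint names, limits, and cron expressions are made up:

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.ThrottlingInflightRoutePolicy;
import org.apache.camel.routepolicy.quartz2.CronScheduledRoutePolicy;

// Sketch only: endpoint names and limits are illustrative.
public class LoadLimitedRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Idea 1: cap the number of in-flight exchanges on the route.
        ThrottlingInflightRoutePolicy throttle = new ThrottlingInflightRoutePolicy();
        throttle.setMaxInflightExchanges(10);

        // Idea 2: only run the route during a nightly window.
        CronScheduledRoutePolicy window = new CronScheduledRoutePolicy();
        window.setRouteStartTime("0 0 22 * * ?"); // start at 22:00
        window.setRouteStopTime("0 0 6 * * ?");   // stop at 06:00

        from("activemq:queue:transfer.in")
            .routeId("transfer-in")
            .noAutoStartup() // let the cron policy decide when to start
            .routePolicy(throttle, window)
            .to("file:/data/out");
    }
}
```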
Any suggestions on what can be done?

Spring restful services in websphere

Our application environment in WebSphere Application Server has 3 clusters:
1. UI Cluster
2. Service Cluster
3. Integration Cluster
We have around 50 WAR files (microservices) deployed to the Service cluster. All services are REST-based and exposed through the Spring API. Restarting the Service cluster takes close to 30 minutes. This time is critical during live incidents in production: if the Service cluster needs to be restarted for any reason, we need 30 minutes of downtime for all end users. We are looking to reduce the recycle time; please suggest any solution.
Is there a way to load all the Spring based jar files before the application starts?
I.e., for example, there is a service WAR file called xyz-1.0.war, and there are Spring-based JAR files as Maven dependencies. All 50 WAR files have the same set of dependencies, so I am wondering whether we can load all the Spring-based JARs before the application is started by the WebSphere server.
Please suggest.
I don't know that you can load them BEFORE the application starts (class loading is generally on-demand), but you might be able to speed things up through the use of shared libraries for your common files, so they'd be loaded by a single class loader rather than from each WAR's class loader. It won't eliminate the class loading activity, since each WAR would still need to load the necessary classes, but it'd speed up the mechanics of the class loads since the shared library loader would return the already-loaded class rather than searching its class path.
There are two different approaches you could take to this. Step one in both cases is to create a shared library with the classes that are shared among the applications. The options for step two:
1) Create a custom class loader on the server and associate the shared library with this new class loader. This will make the classes in the shared library visible to all applications running on the server.
2) In the shared library configuration, select "Use an isolated class loader for this shared library", then associate the shared library with any applications that require it. In the event that the shared classes are required only by some applications, this will provide them only to the applications that require them.
A couple points of caution:
If you require unique Class instances (for example, static values unique to each WAR), this approach won't work, because there will be only one instance of the Class loaded by the shared library loader. In that event, you'll have to stick with WAR-level packaging.
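For example, suppose a class like the following lives in the shared library (the class and field are purely illustrative): it is loaded once by the shared library loader, so every WAR sees the same Class instance and the same static field, which is exactly what breaks if each application expects its own copy.

```java
// Illustrative class placed in the shared library.
// With a single shared class loader there is only one copy of this
// static field: if WAR A sets tenantName = "A" and WAR B sets it to "B",
// they overwrite each other instead of keeping per-WAR values.
public class TenantConfig {
    public static String tenantName;
}
```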
If you use the isolated class loader solution, note that those loaders use "parent last" class loading, in which they are searched before server class loaders. If you have anything in those libraries that conflicts with classes provided by the server, it could open you up to ClassCastExceptions or LinkageErrors.
Note that the shared library loaders operate as parents of the WAR loaders, and as such, classes in the libraries will not be able to "see" classes in the WARs. You'll need to make sure that the libraries are essentially self-contained in order for these approaches to be successful.
More specific details on the configuration steps can be found in this blog post: https://www.ibm.com/developerworks/community/blogs/aimsupport/entry/create_shared_library_and_associate_it_with_the_application_server_or_application_on_websphere_application_server?lang=en_us
If you are doing microservices then your services should be independently deployable, so each should be in a separate cluster. Traditional WebSphere Application Server is a bit heavyweight for this solution (depending on how many resources you have on your nodes), so I'd suggest migrating your Service cluster to WebSphere Liberty; in that case you could have each service in a separate cluster. This would allow you to restart each service independently, in a much shorter time.
If you are doing microservices, then your UI cluster should be prepared for any service unavailability (that is a basic principle when doing microservices) and display some message to the end user that the service is temporarily unavailable.
Regarding your current setup, you could try the "Rollout update" option, which will restart your servers sequentially, so services remain available on the other nodes.
so-random-dude's advice to use blue/green deployment is also very good. You could have two cells and then switch the plugin configuration after redeployment. If your services are written in a way that allows different versions to run in parallel during an update, you would have no downtime.
If you want to reduce downtime further and improve performance, you should consider using Java EE/REST services instead of Spring, as that will significantly cut down the size of your app, the number of libraries to be scanned, and deployment and startup time. It is much better integrated and supported in WebSphere Liberty than the tons of JARs you have to include with Spring.
I have a simple solution for you: just ditch WebSphere and deploy those 50 "WARs" as independent JARs with an embedded Netty/Undertow/Tomcat/Jetty in each.
I am afraid that what you have currently is not at all a microservice architecture. Agreed, different teams/consultants/organizations have different interpretations of "micro"services, but this is an extreme you should avoid at any cost, because you are getting all the pain points of microservices and ZERO benefits (benefits such as independent scalability/deployability etc.).
Restarting Service cluster takes close to 30 mins. This time is critical during live incidents in Production. For reasons, if Service cluster needs to be restarted, we need to have 30 mins downtime for all end users
Have you looked at different deployment strategies like canary deployment / blue-green deployment? Do you have more than one instance behind a load balancer?

Micro services with JBOSS

I am new to JBoss and want to know whether a microservices architecture is a right choice on JBoss. I cannot change the application server, as it has been decided by the client architect and I have no choice.
I want to know whether we can develop microservices with an underlying JBoss application server.
I understand Spring Boot comes with an embedded Tomcat container, which makes it flexible to stop, start, and deploy an individual service with no impact on other services.
However, will that architecture work with JBoss too?
Please suggest.
Thanks,
I actually developed a feasibility study to investigate the solution you mentioned. My conclusion is that it is totally viable to use Micro Service principles in a JBoss Platform.
I used the combination of JBoss / Spring Boot / Netflix to create a successful microservice stack. I personally did that to find a solution to the transaction problem (multiple microservices collaborating) and the fan-out problem, which is caused by excessive network communication and serialization costs.
I also wrote a blog post about the subject; you might find more details there if you like. Here is the link:
Micro Services – Fan Out, Transaction Problems and Solutions with Spring Boot/JBoss and Netflix Eureka
By the definition of what microservices are, conceptually yes. A microservice is a service that is an independent unit; it can be deployed, updated, and undeployed independently without affecting any unrelated part of your application. That would mean having multiple instances of JBoss for the microservices, with your application calling them through some sort of gateway or other mechanism depending on your use case. If you plan to deploy all your microservices in the same JBoss instance, it defeats the very purpose of a microservice. Given that, JBoss wouldn't be a right choice for microservice deployment, because it will only make your microservice deployment infrastructure quite heavy.
Depending on what your client's requirements are, you could possibly keep your webapp in JBoss and deploy your microservice containers separately.
It depends on what you want to get out of microservices.
Some of the developers at my organisation looked at Spring Boot but concluded that it's best off being run as a standalone container rather than in JBoss, otherwise you've effectively got two container frameworks competing (SB and JBoss) and a range of associated issues.
Deploying microservices in JBoss won't give you the same flexibility as a true container system like Docker. With Docker you create standalone packages for your microservices that contain all the code, system tools, runtime environment, etc.; each can be as small or large as it needs to be. JBoss, on the other hand, is a large container running a single JVM designed to hold multiple applications. The level of isolation is not the same, and it's not efficient to have JBoss as a container for a single microservice, so you have to size the instance appropriately and then deploy to it in a way that makes use of the resources it has available.
If you're looking at microservices as a way to gain greater control over service lifecycle management (deployment, versioning, deprecating, etc.) as opposed to an automated, web-scale component deployment model a la Netflix or LinkedIn, you could do this adequately with JBoss.
I'm actually looking to do something along these lines here. It won't be true microservices, but by packaging and deploying individual, properly versioned APIs rather than monolithic applications, and following most of the other principles of microservice development (componentisation, business-function focus, statelessness, etc.), we will hopefully be better able to manage and benefit from our APIs.
Our APIs will all be behind an API gateway and load balancer, so we can choose how we distribute the microservices across the JBoss instances and balance resource usage as required. Note that our organisation is relatively small and has relatively low and predictable traffic, so this approach should work fine. Your needs, however, may be different.

Determining version of jboss programmatically between jboss 5 and 7

I'm trying to find the best way to programmatically determine whether my program is running on JBoss 5 or JBoss 7 (EAP 6.1). The ways I've found so far are JBoss 5 or JBoss 7 specific, which doesn't work because the code has to work on both. I tried both solutions from here: How do I programmatically obtain the version in JBoss AS 5.1? and they didn't work. One complained about org.jboss.Main not existing in JBoss 7, the other complained about not finding "jmx/rmi/RMIAdaptor".
The only way I can see is to do a Class.forName to look for "org.jboss.Version" (which should be found on JBoss 5) and, if that fails, do Class.forName("org.jboss.util.xml.catalog.Version") (JBoss 7). But that seems like a terrible idea.
The reason I need to know whether the WAR is running on JBoss 5 or 7 is that there are some custom files that are located in different places in each. So it's like "if JBoss 5, execute this piece of code; if JBoss 7, execute the other".
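For reference, the probe I'm describing would look roughly like this (the marker classes are the ones mentioned above; I'm not convinced they are reliable, hence the question):

```java
// Sketch of the Class.forName probe described above.
public final class JBossVersionProbe {

    public static boolean isJBoss5() {
        return classPresent("org.jboss.Version");
    }

    public static boolean isJBoss7() {
        return classPresent("org.jboss.util.xml.catalog.Version");
    }

    private static boolean classPresent(String name) {
        try {
            Class.forName(name);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }
}
```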
OK, I just saw what the problem is.
I would suggest you think about design issues / refactoring of your software.
If you want to provide your software within different environments, separate your logic from technology dependencies. Build facades and interfaces to meet the environmental requirements.
In my opinion that's much better than thinking we must support all integration platforms and all their versions; that is completely impossible.
So decouple your business logic and offer specific interfaces. These interfaces (adapters) are much simpler to implement and to maintain.
Hope it helps.
UPDATE DUE TO COMMENT.
I think a solution for JBoss 4 to 6 is to use the MBean server of JBoss to look up the registered web application that is associated with the deployed WAR file.
I suggest first looking up the registered MBean of the web application manually using the JBoss jmx-console. The name of the web application should be found under the section "web" or "web-deployment" within the jmx-console.
Once you have found that name, you can implement your own JMX-based lookup mechanism to check for it.
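A minimal sketch of such a lookup, assuming you have already confirmed in the jmx-console what ObjectName pattern your WAR is registered under (the pattern used below is only an illustration):

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Sketch only: replace the ObjectName pattern with whatever the
// jmx-console actually shows for the deployed WAR on your JBoss version.
// On older JBoss versions you may need
// javax.management.MBeanServerFactory.findMBeanServer(null)
// instead of the platform MBean server.
public class WarMBeanLookup {

    public static boolean isWarRegistered(String warName) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName pattern = new ObjectName("jboss.web:type=WebModule,*"); // illustrative
        Set<ObjectName> names = server.queryNames(pattern, null);
        for (ObjectName name : names) {
            if (name.getCanonicalName().contains(warName)) {
                return true;
            }
        }
        return false;
    }
}
```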
Here is a tutorial: it is pretty old, but I think it gives you an idea of how to do it. There must be more tutorials on this problem:
http://www.theserverside.com/news/1364648/Using-JMX-to-Manage-Web-Applications
For JBoss 7 I can only give you the hint that its architecture is based on OSGi, so to look up other services you should have a look at that mechanism.
In any case, you don't have direct access to the file system and the deployment directory from an application deployed within a JEE container, except by using the mechanisms provided by the container: JNDI lookup, the JMX managed bean mechanism, or the Java Connector Architecture (JCA) (which makes no sense in your case).
This is not an answer, just a suggestion, since the implementations are completely different.
One way could be to use "interceptors", which are executed during bootstrap and before any EJB invocation; there you have access to the invocation context, in other words the EJB container.
I can't give you a complete example, but this would be an access point to start from.
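To at least show the shape of that access point, a bare EJB 3 interceptor skeleton might look like the following; how the server version is actually detected is left as a placeholder, since that is exactly the open question:

```java
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

// Skeleton only: detectVersion() is a placeholder, not a real API.
public class ServerVersionInterceptor {

    @AroundInvoke
    public Object intercept(InvocationContext ctx) throws Exception {
        String version = detectVersion();
        ctx.getContextData().put("jboss.version", version);
        return ctx.proceed();
    }

    private String detectVersion() {
        // placeholder: the version check itself still has to be implemented
        return "unknown";
    }
}
```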
Another access point is to check for system-wide JMX beans by looking through the administration console of the JBoss server.
You can inject JMX bean state into your application through the Context mechanism.
For versions 4 to 6, take a look at the JMX managed bean mechanism. The JMX architecture is the main concept of JBoss 3 to 6, so at this point you can influence and maintain the JBoss behaviour.
Additionally, I think you have differences between the 4 to 6.x versions and 7.0, because since 7 it is a completely new architecture. Since 7.0 the JMX architecture doesn't exist anymore.

startup class (extends ServiceMBean) vs load-on-startup servlet

I am new to JBoss and would like to know what the differences are between a ServiceMBean and the load-on-startup servlet tag in web.xml. Also, I would like to know which one always gets loaded first, or whether they are loaded at the same time. In what situation should I use an MBean, and when should I use a startup servlet, or does it not matter?
I need to write a class/servlet to validate that all the required system properties (e.g. -DINSTALL_DIR=blah) are set. If not, then stop right there; else proceed and start the application.
Thanks in advance
-A
A ServiceMBean is JMX; it is part of your JVM. The load-on-startup servlet tag in web.xml is part of your J2EE application.
JMX is part of J2SE starting from JDK 1.5, so a ServiceMBean is per JVM, not per application. JMX is used mostly for monitoring and managing the JVM. It provides access to information such as the number of classes loaded and threads running, memory consumption, garbage collection statistics, on-demand deadlock detection, and others. Another common use is to refresh your cache.
JMX will allow you to instrument your application and control/monitor it using whatever management console your JMX container supports. An example would be a web application that implements a reference data cache...
A problem we had before was we would occasionally need to refresh the cache because a customer name changed in the database. If we had a refresh method on the MBean interface then we should be able to trigger this event using the JMX console. The JMX console may be a web or fat client that comes with our J2EE server. Our J2EE server may also support SNMP. This means that we may be able to invoke the method from a standard Tivoli or UniCenter console.
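A minimal sketch of such an MBean, following the standard MBean naming convention (the names and cache contents are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Standard MBean interface: the JMX console exposes refresh() as an
// operation an operator can invoke on demand, and getSize() as an attribute.
public interface ReferenceDataCacheMBean {
    void refresh();
    int getSize();
}

// Implementation registered on the MBean server under a matching ObjectName.
class ReferenceDataCache implements ReferenceDataCacheMBean {

    private volatile Map<String, String> cache = new HashMap<String, String>();

    public void refresh() {
        cache = loadFromDatabase(); // e.g. reload customer names
    }

    public int getSize() {
        return cache.size();
    }

    private Map<String, String> loadFromDatabase() {
        return new HashMap<String, String>(); // illustrative
    }
}
```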
http://www.theserverside.com/news/1364664/J2EE-Application-Management-The-Power-of-JMX
You don't need remote access to a ServiceMBean in order to trigger some asynchronous action. Moreover, you need validation at the scope of the application, not the whole JVM (although you can, theoretically, handle this issue in the ServiceMBean). So it is more natural to do it with a load-on-startup servlet tag in web.xml. This way, validation will happen on every startup of your application.
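A minimal sketch of such a servlet, assuming the property name from the question and a web.xml entry with <load-on-startup>1</load-on-startup> (whether a failed init() aborts the whole deployment depends on your container settings):

```java
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;

// Runs at application startup via <load-on-startup> in web.xml.
// Throwing from init() marks the servlet as unavailable and surfaces
// the missing configuration immediately.
public class SystemPropertyCheckServlet extends HttpServlet {

    private static final String[] REQUIRED = { "INSTALL_DIR" }; // from the question

    @Override
    public void init() throws ServletException {
        for (String name : REQUIRED) {
            if (System.getProperty(name) == null) {
                throw new ServletException("Missing required system property: -D" + name);
            }
        }
    }
}
```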
One more clarification: ServiceMBean is the JBoss way to write JMX. All MBeans are server-wide (not application-wide). That's why I use MBean and ServiceMBean interchangeably above.