Spring RESTful services in WebSphere

Our application environment in WebSphere Application Server has three clusters:
1. UI Cluster
2. Service Cluster
3. Integration Cluster
We have around 50 WAR files (microservices) deployed to the Service cluster. All services are REST based and exposed through Spring APIs. Restarting the Service cluster takes close to 30 minutes, and this time is critical during live incidents in production: if the Service cluster needs to be restarted for any reason, all end users face 30 minutes of downtime. We are looking to reduce this recycle time; please suggest any solutions.
Is there a way to load all the Spring based jar files before the application starts?
For example, a service WAR file called xyz-1.0.war pulls in Spring jars as Maven dependencies. All 50 WAR files share the same set of dependencies, so I am wondering whether we can load all the Spring jars before the applications are started by the WebSphere server.
Please suggest.

I don't know that you can load them BEFORE the application starts (class loading is generally on-demand), but you might be able to speed things up through the use of shared libraries for your common files, so they'd be loaded by a single class loader rather than from each WAR's class loader. It won't eliminate the class loading activity, since each WAR would still need to load the necessary classes, but it'd speed up the mechanics of the class loads since the shared library loader would return the already-loaded class rather than searching its class path.
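To see why that helps, here is a minimal sketch (illustrative Java, not WebSphere's actual implementation) of the parent-first delegation a WAR class loader performs when a shared-library loader is its parent; once any WAR has triggered loading of a Spring class, every other WAR gets it back as a cheap cache hit:

    // Illustrative sketch of parent-first delegation (not WebSphere internals).
    // With the shared library attached as a parent, the second WAR asking for
    // an org.springframework.* class gets a cache hit in the parent loader
    // instead of re-scanning its own class path.
    public class ParentFirstLoader extends ClassLoader {

        public ParentFirstLoader(ClassLoader parent) {
            super(parent);
        }

        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            synchronized (getClassLoadingLock(name)) {
                Class<?> c = findLoadedClass(name);      // already loaded? cheap cache hit
                if (c == null) {
                    try {
                        c = getParent().loadClass(name); // delegate to the shared-library loader
                    } catch (ClassNotFoundException e) {
                        c = findClass(name);             // fall back to this loader's class path
                    }
                }
                if (resolve) {
                    resolveClass(c);
                }
                return c;
            }
        }
    }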
There are two different approaches you could take to this. Step one in both cases is to create a shared library with the classes that are shared among the applications. The options for step two:
1) Create a custom class loader on the server and associate the shared library with this new class loader. This will make the classes in the shared library visible to all applications running on the server.
2) In the shared library configuration, select "Use an isolated class loader for this shared library", then associate the shared library with any applications that require it. If the shared classes are needed only by some applications, this approach makes them visible only to those applications.
A couple points of caution:
If you require unique Class instances (for example, static values unique to each WAR), this approach won't work, because there will be only one instance of the Class loaded by the shared library loader. In that event, you'll have to stick with WAR-level packaging.
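A hypothetical illustration of that caveat:

    // Hypothetical example: loaded from the shared library, there is exactly one
    // Class instance (and one copy of this static field) for the whole server.
    public class AppContextHolder {
        // With WAR-level packaging, each WAR's class loader has its own copy.
        // From a shared library, the last WAR to call setAppName() wins, and
        // every other WAR silently observes that value.
        private static String appName;

        public static void setAppName(String name) {
            appName = name;
        }

        public static String getAppName() {
            return appName;
        }
    }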
If you use the isolated class loader solution, note that those loaders use "parent last" class loading, in which they are searched before server class loaders. If you have anything in those libraries that conflicts with classes provided by the server, it could open you up to ClassCastExceptions or LinkageErrors.
Note that the shared library loaders operate as parents of the WAR loaders, and as such, classes in the libraries will not be able to "see" classes in the WARs. You'll need to make sure that the libraries are essentially self-contained in order for these approaches to be successful.
More specific details on the configuration steps can be found in this blog post: https://www.ibm.com/developerworks/community/blogs/aimsupport/entry/create_shared_library_and_associate_it_with_the_application_server_or_application_on_websphere_application_server?lang=en_us

If you are doing microservices, your services should be independently deployable, so each should be in a separate cluster. Traditional WebSphere Application Server is a bit heavyweight for this (depending on how many resources you have on your nodes), so I'd suggest migrating your service cluster to WebSphere Liberty; then you could have each service in its own cluster. That would allow you to restart each service independently, in much less time.
If you are doing microservices, your UI cluster should also be prepared for any service being unavailable - that is a basic premise of microservices - and display a message to the end user that the service is temporarily unavailable.
Regarding your current setup - you could try the "Rollout update" option, which restarts your servers sequentially, so services remain available on the other nodes.
so-random-dude's advice to use blue/green deployment is also very good. You could have two cells and switch the plugin configuration after redeployment. If your services are written in a way that lets different versions run in parallel during an update, you would have no downtime.
If you want to reduce downtime further and improve performance, you should consider using Java EE/REST services instead of Spring, as that will significantly cut down the size of your app, the number of libraries to be scanned, and deployment and startup time. Java EE is much better integrated and supported in WebSphere Liberty than the pile of jars you have to bundle with Spring.
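To give a feel for it, here is a minimal sketch of a plain JAX-RS endpoint (class and path names are made up); the WAR carries no framework jars because the server provides the JAX-RS runtime:

    // RestApplication.java - activates JAX-RS under /api, no web.xml required.
    // Class and path names here are illustrative.
    import javax.ws.rs.ApplicationPath;
    import javax.ws.rs.core.Application;

    @ApplicationPath("/api")
    public class RestApplication extends Application {
    }

    // StatusResource.java - a minimal resource class.
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    @Path("/status")
    public class StatusResource {

        @GET
        @Produces(MediaType.APPLICATION_JSON)
        public String status() {
            return "{\"status\":\"UP\"}";
        }
    }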

I have a simple solution for you: just ditch WebSphere and deploy those 50 "WARs" as independent jars, each with an embedded Netty/Undertow/Tomcat/Jetty.
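A minimal sketch of what one of those services could look like as a self-contained, independently restartable jar (Spring Boot with its default embedded Tomcat; the class and endpoint names are made up):

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    // Illustrative service: runs with "java -jar xyz-service.jar". Stopping or
    // redeploying this one service touches nothing else, unlike restarting a
    // shared application-server cluster.
    @SpringBootApplication
    @RestController
    public class XyzServiceApplication {

        @GetMapping("/xyz/health")
        public String health() {
            return "OK";
        }

        public static void main(String[] args) {
            SpringApplication.run(XyzServiceApplication.class, args);
        }
    }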
I am afraid what you currently have is not a microservice architecture at all. Agreed, different teams/consultants/organizations have different interpretations of "micro"services, but this is an extreme you should avoid at any cost, because you are getting all the pain points of microservices and ZERO benefits (such as independent scalability and deployability).
Restarting the Service cluster takes close to 30 minutes. This time is critical during live incidents in production: if the Service cluster needs to be restarted, we need 30 minutes of downtime for all end users.
Have you looked at deployment strategies like canary or blue/green deployments? Do you have more than one instance behind a load balancer?

Related

Is it possible to utilize the same service worker for two projects?

I have an issue with a service worker. I have two different projects on the same server but in different folders, and I want to precache the files of project number 2 using my service worker (it is already working on project number 1). Is it possible to do this? Is there any other way I can attack this? Any help is very much appreciated.
In general, yes, as long as the service worker is hosted at a URL at the same level as (or higher than) the root of each of those projects. That ensures that each project is within the service worker's scope.
I'm assuming that one of the challenges you're asking about relates to creating a precache manifest within that service worker that contains build artifacts from both projects. There are a few different ways to tackle that, but I think the most straightforward would be to ensure that you always run the build process for each project at the same time, and then when you use Workbox's build tooling to create the precache manifest, you ensure that you grab all the assets that were output by each of the projects.
The specifics of configuring that build process depend on what you're currently using. You mention that there's a service worker (presumably using Workbox's precaching) already in place for the first project, so I think using the same build setup, with tweaks to pick up the additional assets, would be easiest.

Is it possible to modify the test server configuration in each separate microservice project?

I am developing a number of microservices which will run on Open Liberty. I have set up a test server in my Eclipse environment which is configured to use all the features required by all the services I am currently working on.
Whilst this works, it seems a heavy-handed approach and it would be good to test each service in an environment which closely resembles the target server. The services can differ in the set of features they require as well as the JVM settings necessary.
Each service will run in its own Docker container, and the Docker configuration is defined in each project.
Is there a way to better test these services without explicitly setting up a new server for each individual service?
I am not aware of any way to segment the Liberty runtime (its features) or the JVM (for different JVM settings) for different applications running in a single Liberty instance.
You can set app-specific variables and retrieve them using MP Config, but that's not the same as JVM settings, and certainly not the same as segmenting specific features of the runtime to a specific application.
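For illustration, a minimal sketch of reading such an app-specific variable with MicroProfile Config (the property name is hypothetical):

    import javax.enterprise.context.ApplicationScoped;
    import javax.inject.Inject;
    import org.eclipse.microprofile.config.Config;
    import org.eclipse.microprofile.config.ConfigProvider;
    import org.eclipse.microprofile.config.inject.ConfigProperty;

    @ApplicationScoped
    public class ServiceSettings {

        // Hypothetical property, resolved from server.xml variables,
        // microprofile-config.properties, environment variables, or
        // system properties.
        @Inject
        @ConfigProperty(name = "xyz.service.timeout", defaultValue = "30")
        int timeoutSeconds;

        // Programmatic lookup also works outside CDI-managed beans.
        public static String lookup(String key) {
            Config config = ConfigProvider.getConfig();
            return config.getValue(key, String.class);
        }
    }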
However, in general when testing, I would highly recommend mimicking your production environment as much as possible. Since you're planning to deploy into Docker, I would do the same locally when testing. Given Liberty's lightweight, composable nature, it's unlikely that you'll hit resource issues locally when doing this (you should enable only the features each app is using, to minimize the size of each Liberty instance). This approach is one of the big benefits provided by containers and Liberty.
In other words, even if you could segment one Liberty instance per application, I would not recommend it for your testing because, as you said, "it would be good to test each service in an environment which closely resembles the target server".

Camel application is taking more than one hour to start

We're using Apache Camel with JBoss Fuse in our application for integration. We have built almost 80 APIs in one bundle, and the Camel context contains 100+ routes.
When we deploy the bundle on JBoss Fuse, it takes almost an hour for all the routes to come up, and the bundle only reaches "started" an hour after deployment.
We could have divided our bundle into parts, say each bundle with a maximum of 10 APIs, but we have already developed our application as one bundle, and it takes a very long time to start.
Is there any way to reduce the time for the bundle to reach "started", other than splitting the bundle into smaller bundles?
First it should be noted -- this is an anti-pattern and should never be repeated (looking at you, Camel developer who just Googled and found this article).
Karaf / JBoss Fuse can definitely scale to 100's of routes per instance (I've successfully run up to 500 in a single instance), but putting 100's of varying routes in one bundle is not advised-- ever. You lose a whole lot of process control, startup ordering management, and flexibility in adjusting your deployments to multiple containers as-needed.
You would need to look into some sort of asynchronous startup for the routes. The bundle won't go active until all OSGi components are active (including bundle activators, Spring descriptors, SCR components, and Blueprint descriptors).
If your routes are JMS consumers, look into the 'asyncStartListener' and 'asyncStopListener' options.
Another option is to disable starting the routes at bundle activation time, let the bundle go active, and then have another process come through (on a separate thread) and call route.start() on all your routes.
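A rough sketch of that approach in the Camel Java DSL, assuming the Camel 2.x API shipped with JBoss Fuse (the route id and endpoints are illustrative):

    import org.apache.camel.CamelContext;
    import org.apache.camel.builder.RouteBuilder;

    public class DeferredStartRoutes extends RouteBuilder {

        @Override
        public void configure() {
            // Illustrative route. noAutoStartup() lets the bundle go active
            // without waiting for the route; asyncStartListener=true keeps the
            // JMS consumer startup off the critical path once it is started.
            from("jms:queue:orders?asyncStartListener=true")
                .routeId("orders-route")
                .noAutoStartup()
                .to("bean:orderService");
        }

        // Call this later, from a separate thread, once the bundle is active.
        public static void startDeferredRoutes(CamelContext context) throws Exception {
            context.startRoute("orders-route");
        }
    }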
Camel JMS Component
Camel Startup ordering

Microservices with JBoss

I am new to JBoss and want to know whether a microservices architecture is the right choice on it. I cannot change the application server, as it was decided by the client's architect, so I have no choice.
I want to know whether we can develop microservices with an underlying JBoss application server.
I understand Spring Boot comes with an embedded Tomcat container, which makes it easy to stop and start, and to deploy an individual service with no impact on other services.
However, will that architecture work with JBoss too?
Please suggest.
Thanks,
I actually developed a feasibility study to investigate the solution you mentioned. My conclusion is that it is totally viable to apply microservice principles on a JBoss platform.
I used the combination of JBoss, Spring Boot, and Netflix to create a successful microservice stack. I personally did that to solve the transaction problem (multiple microservices collaborating) and the fan-out problem, which is caused by excessive network communication and serialization costs.
I also wrote a blog post about the subject; you might find more details there. Here is the link:
Micro Services – Fan Out, Transaction Problems and Solutions with Spring Boot/JBoss and Netflix Eureka
By the definition of what microservices are, then conceptually, yes. A microservice is an independent unit that can be deployed, updated, and undeployed without affecting any unrelated part of your application. That would mean having multiple JBoss instances for your microservices, and your application calling them through some sort of gateway or other mechanism, depending on your use case. If you plan to deploy all your microservices in the same JBoss instance, it defeats the very purpose of a microservice. Given that, JBoss wouldn't be the right choice for microservice deployment, because it will only make your deployment infrastructure quite heavy.
Depending on what your client's requirements are, you could possibly keep your webapp in JBoss and deploy your microservice containers separately.
It depends on what you want to get out of microservices.
Some of the developers at my organisation looked at Spring Boot but concluded that it's best run as a standalone container rather than in JBoss; otherwise you've effectively got two container frameworks competing (Spring Boot and JBoss), with a range of associated issues.
Deploying microservices in JBoss won't give you the same flexibility as a true container system like Docker. With Docker you create standalone packages for your microservices that contain all the code, system tools, runtime environment, etc., and each can be as small or as large as it needs to be. JBoss, on the other hand, is a large container running a single JVM, designed to host multiple applications. The level of isolation is not the same, and it's not efficient to use JBoss as a container for a single microservice, so you have to size each instance appropriately and then deploy enough to it to make use of the resources it has available.
If you're looking at microservices as a way to gain greater control over service lifecycle management (deployment, versioning, deprecating, etc.) as opposed to an automated, web-scale component deployment model a la Netflix or LinkedIn, you could do this adequately with JBoss.
I'm actually looking to do something along these lines here. It won't be true microservices, but by packaging and deploying individual, properly versioned APIs rather than monolithic applications, and following most of the other principles of microservice development (componentisation, business-function focus, statelessness, etc.), we will hopefully be better able to manage and benefit from our APIs.
Our APIs will all sit behind an API gateway and load balancer, so we can choose how we distribute the microservices across the JBoss instances and balance resource usage as required. Note that our organisation is relatively small and has relatively low and predictable traffic, so this approach should work fine. Your needs, however, may be different.

Is there a way to split Hybris modules onto different managed servers

I have a Hybris deployment on a single WebLogic Managed Server. During performance testing it was found that it would be better to split Hybris modules such as the Admin Cockpit and the Product Catalogue onto different Managed Servers.
EDIT
I suppose I should also mention that my infra team is asking me to separate out the EARs so that when code changes, only the affected module gets redeployed and not the whole bunch. So even leaving the performance aspect aside, I still need the split.
Now my problem is that the Hybris build produces a single EAR file.
Is there a way to break down the EAR file and include the modules optionally?
So the structure would be:
Managed Server 1
  - Hybris Core
  - Admin Cockpit
Managed Server 2
  - Hybris Core
  - Product Catalogue
The links to the deployments would then be redirected via URL configuration.
Any suggestions?
I'm not sure this will eliminate the problems you're encountering, as I don't think the admin cockpit by itself would be causing a performance bottleneck.
What is the performance issue? Quite often the performance impact comes from admin/backend-triggered functionality, e.g. cronjobs (updating the product catalog with stock/product information) or Solr indexing jobs.
One common approach I have seen in Hybris cluster environments is to set up a cluster of multiple nodes and reserve one node for backend activity (so that expensive cronjobs run on a dedicated node that is not served by the load balancer handling storefront requests).
But I think from a code deployment perspective the artifact would still be the same.
Hope this helps, at least as food for thought :)
EDIT
In short: multiple Hybris servers accessing the same DB need to be set up as a cluster.
Multiple Hybris servers with different sets of extensions can't share the same DB (as the DB layouts will differ).
To be honest, this doesn't sound like a good approach to me.
In Hybris you would use different localextensions.xml files (which define which extensions, i.e. modules, are part of your code artifact). That being said, if you have two vastly different localextensions.xml files (one for your product catalog and one for admin), the resulting "admin" deployment artifact would not contain the data model of the "catalog" deployment, so the persistence layers wouldn't match up. In other words, on your admin server you wouldn't even be able to see the data model defined on your "catalog" server, because the "catalog"-specific extensions are not installed.
And without a properly set up cluster environment, changes on one server (written to the DB) wouldn't be noticed on the other server unless you actively refresh/purge the Hybris cache there, so multiple Hybris servers sharing the same DB only work if the servers are set up as a cluster.
I think if your admin server is supposed to work on the actual "catalog" data, both need the same set of extensions defined in their localextensions.xml for this to work at all.
Sharing the same database without being aware that there is a cluster (or, basically, other Hybris servers accessing the same DB) is not going to work, IMO.
I still think your best shot would be to deploy the same code artifact everywhere (in cluster environments you can still set up different behavior/configuration per node). If you are 100% sure of it, you could still deploy a new release whose code changes affect only your "catalog" node to that node alone in order to reduce downtime, but it's always a risk to run a cluster with different deployments on each node.
Good luck :)