I need to write a microservice and am planning to use Karaf. I understand that Karaf is an OSGi-based container that gives microservices modularity. Now, my questions are:
Suppose I write a REST-based service (one that extends BundleActivator) and deploy the Java code as a bundle into Karaf, and say I later make changes to the service and deploy it again.
Will it continue without any downtime? Will Karaf hot-deploy the Java code/module without any downtime?
Does it have support for one module to invoke another as an API/service call?
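For context, a minimal sketch of the kind of bundle this question is about (the service interface, class names, and greeting logic are made up for illustration and would normally live in separate source files):

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceRegistration;

    // Hypothetical service contract that another bundle could call.
    interface GreetingService {
        String greet(String name);
    }

    // Activator that registers the service when the bundle starts
    // and unregisters it when the bundle stops or is refreshed.
    public class GreetingActivator implements BundleActivator {

        private ServiceRegistration<GreetingService> registration;

        @Override
        public void start(BundleContext context) {
            registration = context.registerService(
                    GreetingService.class, name -> "Hello, " + name, null);
        }

        @Override
        public void stop(BundleContext context) {
            registration.unregister();
        }
    }

A consuming bundle can then look the service up from its own BundleContext, which is the usual OSGi answer to the module-to-module call in the last question:

    ServiceReference<GreetingService> ref = context.getServiceReference(GreetingService.class);
    GreetingService greeter = context.getService(ref);
    String message = greeter.greet("Karaf");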
Thanks
Related
Our application environment in Websphere Application Server has 3 clusters
1. UI Cluster
2. Service Cluster
3. Integration Cluster
We have around 50 WAR files (microservices) deployed to the Service cluster. All services are REST based and exposed through Spring. Restarting the Service cluster takes close to 30 minutes. This time is critical during live incidents in production: if, for whatever reason, the Service cluster needs to be restarted, we have 30 minutes of downtime for all end users. We are looking to reduce the recycle time; please suggest any solution.
Is there a way to load all the Spring-based JAR files before the application starts?
For example, there is a service WAR file called xyz-1.0.war with Spring-based JARs as Maven dependencies. All 50 WAR files have the same set of dependencies, so I am wondering whether we can load all the Spring-based JARs before the applications are started by the WebSphere server.
Please suggest.
I don't know that you can load them BEFORE the application starts (class loading is generally on-demand), but you might be able to speed things up through the use of shared libraries for your common files, so they'd be loaded by a single class loader rather than from each WAR's class loader. It won't eliminate the class loading activity, since each WAR would still need to load the necessary classes, but it'd speed up the mechanics of the class loads since the shared library loader would return the already-loaded class rather than searching its class path.
There are two different approaches you could take to this. Step one in both cases is to create a shared library with the classes that are shared among the applications. The options for step two:
1) Create a custom class loader on the server and associate the shared library with this new class loader. This will make the classes in the shared library visible to all applications running on the server.
2) In the shared library configuration, select "Use an isolated class loader for this shared library", then associate the shared library with any applications that require it. In the event that the shared classes are required only by some applications, this will provide them only to the applications that require them.
A couple points of caution:
If you require unique Class instances (for example, static values unique to each WAR), this approach won't work, because there will be only one instance of the Class loaded by the shared library loader (see the sketch after these cautions). In that event, you'll have to stick with WAR-level packaging.
If you use the isolated class loader solution, note that those loaders use "parent last" class loading, in which they are searched before server class loaders. If you have anything in those libraries that conflicts with classes provided by the server, it could open you up to ClassCastExceptions or LinkageErrors.
Note that the shared library loaders operate as parents of the WAR loaders, and as such, classes in the libraries will not be able to "see" classes in the WARs. You'll need to make sure that the libraries are essentially self-contained in order for these approaches to be successful.
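As a rough illustration of the first caution (the class and field here are hypothetical, not from the original applications): if a holder like this moves from each WAR into the shared library, its static state stops being per-WAR and becomes server-wide.

    // Previously packaged inside every WAR, so each WAR's class loader
    // produced its own Class instance and its own copy of the static field.
    public class ServiceConfig {

        // Once loaded by the single shared-library class loader, every WAR
        // on the server reads and writes this same value.
        private static String serviceName;

        public static void setServiceName(String name) {
            serviceName = name;
        }

        public static String getServiceName() {
            return serviceName;
        }
    }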
More specific details on the configuration steps can be found in this blog post: https://www.ibm.com/developerworks/community/blogs/aimsupport/entry/create_shared_library_and_associate_it_with_the_application_server_or_application_on_websphere_application_server?lang=en_us
If you are doing microservices then your services should be independently deployable, so each should be in a separate cluster. Traditional WebSphere Application Server is a bit heavyweight for this (depending on how many resources you have on your nodes), so I'd suggest migrating your service cluster to WebSphere Liberty; in that case you could have each service in a separate cluster. This would allow you to restart each service independently, in a much shorter time.
If you are doing microservices, then your UI cluster should be prepared for any service unavailability - that is a prime concern when doing microservices - and display a message to the end user that the service is temporarily unavailable.
Regarding your current setup - you could try the "Rollout update" option, which will restart your servers sequentially, so services remain available on the other nodes.
so-random-dude's advice to use blue/green deployment is also very good. You could have 2 cells and switch the plugin configuration after redeployment. If your services are written in a way that allows different versions to run in parallel during an update, you would have no downtime.
If you want to reduce downtime further and improve performance, you should consider using Java EE/REST services instead of Spring, as that will significantly cut down the size of your app, the number of libraries to be scanned, and deployment and startup time. It is much better integrated and supported in WebSphere Liberty than the tons of JARs you have to include with Spring.
I have a simple solution for you: just ditch WebSphere and deploy those 50 "wars" as independent JARs with an embedded Netty/Undertow/Tomcat/Jetty in them.
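For example, a minimal sketch of one such self-contained service using embedded Jetty (the class, port, and endpoint are assumptions made up for illustration):

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.servlet.ServletHolder;

    public class StandaloneService {

        // Trivial REST-style endpoint standing in for one of the 50 services.
        static class PingServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                resp.setContentType("application/json");
                resp.getWriter().write("{\"status\":\"ok\"}");
            }
        }

        public static void main(String[] args) throws Exception {
            Server server = new Server(8080);  // embedded Jetty listening on 8080
            ServletContextHandler context = new ServletContextHandler();
            context.setContextPath("/");
            context.addServlet(new ServletHolder(new PingServlet()), "/ping");
            server.setHandler(context);
            server.start();                    // starts in seconds and restarts independently of the other 49
            server.join();
        }
    }

Packaged as an executable JAR, each service like this can be recycled on its own instead of taking a 30-minute cluster restart.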
I am afraid that what you have currently is not at all a microservice architecture. Agreed, different teams/consultants/organizations have different interpretations of "micro"services, but this is an extreme which you should avoid at any cost, because you are getting all the pain points of microservices and ZERO benefits (benefits such as independent scalability/deployability etc.).
Restarting Service cluster takes close to 30 mins. This time is critical during live incidents in Production. For reasons, if Service cluster needs to be restarted, we need to have 30 mins downtime for all end users
Have you looked at different deployment strategies like canary deployment / blue-green deployment? Do you have more than one instance behind a load balancer?
From the documentation and examples I've seen it seems that the preferred way to deploy Service Fabric services is to package them all up into an application package.
If I have 30 services and I make a change to one of them I'm not interested in having my CI server pull down the entire repository, build the solution, package it and then have Service Fabric decide that only one service changed and so only one should be updated.
Is there a way to create an application package with just one service?
You want to perform a partial upgrade (differential packaging). See the Service Fabric documentation on differential packaging.
I am new to JBoss and want to know whether a microservices architecture is a good fit on JBoss. I cannot change the application server, as it was decided by the client's architect and I have no choice.
I want to know whether we can develop microservices with JBoss as the underlying application server.
I understand Spring Boot comes with an embedded Tomcat container, which makes it easy to stop, start, and deploy an individual service with no impact on other services.
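As a point of reference, this is roughly what one such self-contained Spring Boot service looks like (a minimal sketch; the class name and endpoint are invented for illustration):

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @SpringBootApplication
    @RestController
    public class CustomerServiceApplication {

        @GetMapping("/customers/ping")
        public String ping() {
            return "ok";
        }

        public static void main(String[] args) {
            // Boots the embedded Tomcat and serves only this service,
            // so it can be stopped and redeployed without touching the others.
            SpringApplication.run(CustomerServiceApplication.class, args);
        }
    }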
However, will that architecture work with JBoss too?
Please suggest.
Thanks,
I actually conducted a feasibility study to investigate the solution you mentioned. My conclusion is that it is entirely viable to apply microservice principles on a JBoss platform.
I used the combination of JBoss / Spring Boot / Netflix to create a successful microservice stack. I did that personally to find a solution to the transaction problem (multiple microservices collaborating) and the fan-out problem, which is caused by excessive network communication and serialization costs.
I also wrote a blog post about the subject; you might find more details there if you like. Here is the link:
Micro Services – Fan Out, Transaction Problems and Solutions with Spring Boot/JBoss and Netflix Eureka
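To give a rough idea of how the Spring Boot / Netflix Eureka pieces fit together (this is only a sketch under my own assumptions, not code from the blog post; the class name is invented):

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

    @SpringBootApplication
    @EnableEurekaClient  // registers this service with a Eureka server so other services can discover it
    public class OrderServiceApplication {

        public static void main(String[] args) {
            SpringApplication.run(OrderServiceApplication.class, args);
        }
    }

With the Eureka client starter on the classpath and eureka.client.serviceUrl.defaultZone pointing at the Eureka server, other services resolve this one by its logical name instead of a hard-coded host.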
By the definition of what microservices are, then conceptually yes. A microservice is a service that is an independent unit; it can be deployed, updated, and undeployed independently without affecting any unrelated part of your application. That would mean having multiple instances of JBoss for the microservices, with your application calling them through some sort of gateway or other mechanism depending on your use case. If you plan to deploy all your microservices in the same JBoss instance, then it defeats the very purpose of a microservice. Given that, JBoss wouldn't be the right choice for microservice deployment, because it will only make your deployment infrastructure quite heavy.
Depending on what your client's requirements are, you could possibly keep your webapp in JBoss and deploy your microservice containers separately.
It depends on what you want to get out of microservices.
Some of the developers at my organisation looked at Spring Boot but concluded that it's best off being run as a standalone container rather than in JBoss, otherwise you've effectively got two container frameworks competing (SB and JBoss) and a range of associated issues.
Deploying microservices in JBoss won't give you the same flexibility as a true container system like Docker. With Docker you create standalone packages for your microservices that contain all the code, system tools, runtime environment, etc. They can be as small or large as they need to be. JBoss, on the other hand, is a large container running a single JVM designed to hold multiple applications. The level of isolation is not the same, and it's not efficient to use JBoss as a container for a single microservice, so you have to size each instance appropriately and deploy enough to it to make use of the resources it has available.
If you're looking at microservices as a way to gain greater control over service lifecycle management (deployment, versioning, deprecating, etc.) as opposed to an automated, web-scale component deployment model a la Netflix or LinkedIn, you could do this adequately with JBoss.
I'm actually looking to do something along these lines here. It won't be true microservices, but by packaging and deploying individual, properly versioned APIs rather than monolithic applications, and by following most of the other principles of microservice development (componentisation, business function focus, statelessness etc.), we will hopefully be better able to manage and benefit from our APIs.
Our APIs will all be behind an API gateway and load balancer, so we can choose how the microservices are distributed across the JBoss instances and balance resource usage as required. Note that our organisation is relatively small and has relatively low and predictable traffic, so this approach should work fine. Your needs may be different.
I'm currently experimenting with building a self-contained web app that runs on an embedded server. Running the app is as simple as executing
java -Dserver.port=80 -jar application.jar
The thing is, from executing the line above until the application can really listen to incoming requests takes a lot of time. In the sample above I'm using Spring Boot, but it could be another library or another language.
Given that, deploying by killing the current server PID, symlinking the new artifact, and then running the new artifact will cause a slight downtime.
Another constraint is that, I can only have one instance at one time. So provisioning a new instance then deploying the new artifact into the new instance and then switch the instances from the load-balancer is out of question.
So, how do I do this?
In fact, I'm trying to see what would be the best approach to achieve native Play Framework support on OpenShift.
Play has its own HTTP server built on Netty. Right now you can deploy a Play application to OpenShift, but you have to deploy it as a WAR, in which case Play uses a servlet container wrapper.
Being able to deploy it as a Netty application would let us use some advanced features, like asynchronous requests.
OpenShift uses JBoss, so this question also covers the recommended approach for deploying a Netty application on a JBoss server, using Netty instead of the servlet container provided by JBoss.
Here is the request for providing Play Framework native support on OpenShift. There's more info there, and if you like it you can also add your vote ;-)
Start by creating a 'raw-0.1' application.
SSH into the server and
cd $OPENSHIFT_DATA_DIR
download and install play into a directory here. $OPENSHIFT_DATA_DIR is supposed to survive redeploys of your application.
Now you can disconnect from SSH.
Clone the application repository. In the repository there is a file, .openshift/action_hooks/start. Its task is to start the application using the framework of your choice. The file will need to contain at least (from what I know about Play):
cd $OPENSHIFT_REPO_DIR
$OPENSHIFT_DATA_DIR/play-directory/play run --http.port=$OPENSHIFT_INTERNAL_PORT --some-other-parameters
Important
You have to bind to $OPENSHIFT_INTERNAL_IP:$OPENSHIFT_INTERNAL_PORT. Trying to bind to a different interface is not allowed, and most other ports are blocked.
To create some sort of template, save the installation steps into the .openshift/action_hooks/build file: check whether Play is installed; if it is, do nothing; if it isn't, run the installation process.