I am new to JBoss and want to know whether a microservices architecture is the right choice on JBoss. I cannot change the application server, as it was decided by the client's architect, so I have no choice there.
I want to know whether we can develop microservices with JBoss as the underlying application server.
I understand that Spring Boot comes with an embedded Tomcat container, which makes it easy to stop, start, and deploy an individual service with no impact on other services.
However, will that architecture work with JBoss too?
Please suggest.
Thanks,
I actually conducted a feasibility study to investigate the solution you mentioned. My conclusion is that it is entirely viable to apply microservice principles on a JBoss platform.
I used a combination of JBoss, Spring Boot, and Netflix components to create a successful microservice stack. I did that primarily to find a solution to the transaction problem (multiple microservices collaborating) and to the fan-out problem, which is caused by excessive network communication and serialization costs.
I also wrote a blog post about the subject; you can find more details there if you like. Here is the link:
Micro Services – Fan Out, Transaction Problems and Solutions with Spring Boot/JBoss and Netflix Eureka
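For reference (this is not taken from the blog post itself), here is roughly what the entry point of such a service looks like: a minimal sketch assuming Spring Boot 2.x with spring-boot-starter-web and spring-cloud-starter-netflix-eureka-client on the classpath and WAR packaging for JBoss. The package, class, and service names are illustrative.

```java
// Minimal sketch: a Spring Boot service packaged as a WAR for JBoss that also
// registers itself with Netflix Eureka. Assumes Spring Boot 2.x plus the
// spring-cloud-starter-netflix-eureka-client dependency; names are illustrative.
package com.example.orders;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@SpringBootApplication
@EnableDiscoveryClient  // register this service with the Eureka server on startup
public class OrderServiceApplication extends SpringBootServletInitializer {

    // JBoss (or any servlet container) bootstraps the application through this hook
    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
        return builder.sources(OrderServiceApplication.class);
    }

    // Still runnable standalone with the embedded container during development
    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }
}
```

Each such WAR can then be deployed to its own JBoss instance, which is what makes the independent stop/start described in the question possible.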
Going by the definition of what microservices are: conceptually, yes. A microservice is an independent unit that can be deployed, updated, and undeployed without affecting any unrelated part of your application. That would mean having multiple JBoss instances, one per microservice, with your application calling them through some sort of gateway or another mechanism, depending on your use case. If you plan to deploy all your microservices in the same JBoss instance, that defeats the very purpose of microservices. Given that, JBoss wouldn't be the right choice for microservice deployment, because it will only make your deployment infrastructure quite heavy.
Depending on what your client's requirements are, you could possibly keep your web app in JBoss and deploy your microservice containers separately.
It depends on what you want to get out of microservices.
Some of the developers at my organisation looked at Spring Boot but concluded that it's best off being run as a standalone container rather than inside JBoss; otherwise you've effectively got two container frameworks competing (Spring Boot and JBoss) and a range of associated issues.
Deploying microservices in JBoss won't give you the same flexibility as a true container system like Docker. With Docker you create standalone packages for your microservices that contain all the code, system tools, runtime environment, etc., and a package can be as small or as large as it needs to be. JBoss, on the other hand, is a large container running a single JVM, designed to host multiple applications. The level of isolation is not the same, and it's not efficient to dedicate a JBoss instance to a single microservice, so you have to size each instance appropriately and then deploy enough services to it to make use of the resources it has available.
If you're looking at microservices as a way to gain greater control over service lifecycle management (deployment, versioning, deprecating, etc.) as opposed to an automated, web-scale component deployment model a la Netflix or LinkedIn, you could do this adequately with JBoss.
I'm actually looking to do something along these lines here. It won't be true microservices, but by packaging and deploying individual, properly versioned APIs rather than monolithic applications, and following most of the other principles of microservice development (componentisation, business-function focus, statelessness, etc.), we will hopefully be better able to manage and benefit from our APIs.
Our APIs will all sit behind an API gateway and load balancer, so we can choose how the microservices are distributed across the JBoss instances and balance resource usage as required. Note that our organisation is relatively small and has relatively low and predictable traffic, so this approach should work fine. Your needs, however, may be different.
I will be building an Android application (not a game) soon. I have heard of containerized development and Docker/Kubernetes, but I'm not well versed in their functions and use cases.
Why should I build my Android application with Kubernetes?
Your question can be split up into two parts:
1. Why should I containerize my deployment?
I hope by "deployment", you are referring to the backend services that serve your Android application; not the application itself (not sure how one would do that...). Here is a good article.
Containerization is a powerful abstraction that can help you manage both your code and environment. Setting up a container with the correct dependencies, utilities etc., and securing them is a lot of work, as is the case with any server setup. However, once you have packaged everything into a container, you can deploy said container multiple times and build on-top of it. The value of the grunt work that you have done in the past is therefore carried forward in your future deployments; conversely, so are the bugs... Additionally, you can also leverage the Docker ecosystem and build on various community contributions greatly accelerating your workflows.
A possible unintended advantage is also protection against configuration drift. Whenever services fail or your application crashes, you can simply restart your container, and a fresh version of the service will be created again. However, to support these operations, you need to ensure that your containerized service behaves nicely across restarts and fails gracefully. There are many other caveats and advantages that are not listed here; you can find more discussion on Google.
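To make the "fails gracefully" point a bit more concrete, here is a hedged sketch in plain Java: container runtimes such as Docker and Kubernetes stop a container by sending SIGTERM before killing it, so a JVM shutdown hook is one place to do last-minute cleanup. The class name and cleanup steps are illustrative.

```java
// Sketch: react to the termination signal a container runtime sends on
// "docker stop" / pod termination so the service shuts down cleanly.
public class BackendService {

    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            // e.g. stop accepting new requests, flush buffers, close DB
            // connections, deregister from service discovery, ...
            System.out.println("Termination signal received, shutting down cleanly");
        }));

        // ... start the actual service (HTTP server, queue consumers, etc.) here ...
    }
}
```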
2. Why should I use Kubernetes for my container orchestration?
If you have many containers (think on the order of hundreds), then using a single-node solution like Docker/docker-compose to manage them becomes tedious.
If only there were a tool to manage containers across multiple nodes, implement service discovery between them, provide fault tolerance (i.e. automatic restarts, backoff policies), do health checking of your services, manage storage assets, and conveniently expose your containers to the public. That tool is Kubernetes.
Here is a more in-depth intro.
Hope this helps!
Our application environment in WebSphere Application Server has 3 clusters:
1. UI Cluster
2. Service Cluster
3. Integration Cluster
We have around 50 WAR files (microservices) deployed to the Service cluster. All services are REST based and exposed through the Spring API. Restarting the Service cluster takes close to 30 minutes, and this time is critical during live incidents in production. Whenever the Service cluster needs to be restarted, we have 30 minutes of downtime for all end users. We are looking to reduce this recycle time; please suggest any solutions.
Is there a way to load all the Spring-based JAR files before the application starts?
For example, there is a service WAR file called xyz-1.0.war, and it pulls in Spring-based JAR files as Maven dependencies. All 50 WAR files have the same set of dependencies, so I am wondering whether we can load all the Spring JARs before the applications are started by the WebSphere server.
Please suggest.
I don't know that you can load them BEFORE the application starts (class loading is generally on-demand), but you might be able to speed things up through the use of shared libraries for your common files, so they'd be loaded by a single class loader rather than from each WAR's class loader. It won't eliminate the class loading activity, since each WAR would still need to load the necessary classes, but it'd speed up the mechanics of the class loads since the shared library loader would return the already-loaded class rather than searching its class path.
There are two different approaches you could take to this. Step one in both cases is to create a shared library with the classes that are shared among the applications. The options for step two:
1) Create a custom class loader on the server and associate the shared library with this new class loader. This will make the classes in the shared library visible to all applications running on the server.
2) In the shared library configuration, select "Use an isolated class loader for this shared library", then associate the shared library with any applications that require it. In the event that the shared classes are required only by some applications, this will provide them only to the applications that require them.
A couple points of caution:
If you require unique Class instances (for example, static values unique to each WAR), this approach won't work, because there will be only one instance of the Class loaded by the shared library loader. In that event, you'll have to stick with WAR-level packaging.
If you use the isolated class loader solution, note that those loaders use "parent last" class loading, in which they are searched before server class loaders. If you have anything in those libraries that conflicts with classes provided by the server, it could open you up to ClassCastExceptions or LinkageErrors.
Note that the shared library loaders operate as parents of the WAR loaders, and as such, classes in the libraries will not be able to "see" classes in the WARs. You'll need to make sure that the libraries are essentially self-contained in order for these approaches to be successful.
More specific details on the configuration steps can be found in this blog post: https://www.ibm.com/developerworks/community/blogs/aimsupport/entry/create_shared_library_and_associate_it_with_the_application_server_or_application_on_websphere_application_server?lang=en_us
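Not part of the configuration itself, but as a quick sanity check you can drop something like the following into one of the WARs to confirm which class loader is actually serving a shared class after the change. The Spring class and URL pattern used here are just examples.

```java
// Sketch: report which class loader served a class that should now come from
// the shared library (Servlet 3.0+ annotation config; names are illustrative).
import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/classloader-check")
public class ClassLoaderCheckServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        Class<?> shared = org.springframework.context.ApplicationContext.class;
        // On WebSphere the loader's toString() identifies the shared library or
        // the WAR module, so you can see where the class really came from.
        resp.getWriter().println(shared.getName() + " loaded by " + shared.getClassLoader());
    }
}
```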
If you are doing microservices, then your services should be independently deployable, so each should be in a separate cluster. Traditional WebSphere Application Server is a bit heavyweight for this solution (depending on how many resources you have on your nodes), so I'd suggest you migrate your service cluster to WebSphere Liberty; in that case you could have each service in its own cluster. This would allow you to restart each service independently, in much less time.
If you are doing microservices, then your UI cluster should also be prepared for any service being unavailable - that is fundamental when doing microservices - and should display a message to the end user that the service is temporarily unavailable.
Regarding your current setup - you could try the "Rollout update" option, which restarts your servers sequentially, so services remain available on the other nodes.
so-random-dude's advice to use blue/green deployment is also very good. You could have two cells and then switch the plugin configuration after redeployment. If your services are written in such a way that different versions can run in parallel during an update, you would have no downtime.
If you want to reduce downtime further and improve performance, you should consider using Java EE/REST services instead of Spring, as that will significantly cut down the size of your app, the number of libraries to be scanned, and the deployment and startup time. Java EE is much better integrated and supported in WebSphere Liberty than the tons of JARs you have to include with Spring.
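To illustrate the Java EE/REST suggestion (this is only a sketch, not your actual services): with JAX-RS on WebSphere Liberty a service needs little more than a feature such as jaxrs-2.0 enabled in server.xml and a couple of annotated classes. The paths and class names below are made up.

```java
// Sketch of a plain JAX-RS (Java EE) endpoint - no Spring jars required.
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Application;
import javax.ws.rs.core.MediaType;

@ApplicationPath("/api")
class RestConfig extends Application { }  // activates JAX-RS for this WAR

@Path("/status")
public class StatusResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String status() {
        return "{\"status\":\"UP\"}";
    }
}
```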
I have a simple solution for you: just ditch WebSphere and deploy those 50 "WARs" as independent JARs, each with an embedded Netty/Undertow/Tomcat/Jetty in it (see the sketch at the end of this answer).
I am afraid that what you currently have is not a microservice architecture at all. Agreed, different teams/consultants/organizations have different interpretations of "micro"services, but this is an extreme you should avoid at any cost, because you get all the pain points of microservices and ZERO benefits (benefits such as independent scalability, deployability, etc.).
Restarting the Service cluster takes close to 30 minutes, and this time is critical during live incidents in production. Whenever the Service cluster needs to be restarted, we have 30 minutes of downtime for all end users.
Have you looked at different deployment strategies like canary deployment or blue-green deployment? Do you have more than one instance behind a load balancer?
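To make the "independent JAR with an embedded container" suggestion concrete, here is a minimal sketch using Spring Boot's embedded Tomcat (assuming spring-boot-starter-web is on the classpath; the service and endpoint names are illustrative). Each of the 50 services would become one such self-contained JAR that can be restarted on its own.

```java
// Sketch: one self-contained service started with "java -jar inventory-service.jar";
// it brings its own embedded Tomcat, so restarting it does not touch the other services.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class InventoryServiceApplication {

    @GetMapping("/inventory/ping")
    public String ping() {
        return "ok";
    }

    public static void main(String[] args) {
        SpringApplication.run(InventoryServiceApplication.class, args);
    }
}
```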
I'm currently learning how to develop a simple web application with NetBeans. When I create a new web application, the IDE asks me to choose one of the servers from the list below. I was just wondering what the pros and cons of each server are. Could someone share their expertise in this area?
Your question piqued my interest, so I decided to put in a bit of research.
Amazon Web Services Elastic Beanstalk is more of a cloud hosting platform than a simple web server. It's used for deploying infrastructure that orchestrates various AWS services. From the docs (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html):
Amazon Web Services (AWS) comprises dozens of services, each of which exposes an area of functionality. While the variety of services offers flexibility for how you want to manage your AWS infrastructure, it can be challenging to figure out which services to use and how to provision them.
With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without worrying about the infrastructure that runs those applications. AWS Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.
Tomcat is simply a servlet container, i.e., an implementation of the Java Servlet and JSP specifications only. The question you should ask is: can I use Tomcat for this project? If the answer is yes, it's probably the best choice.
Pros: lighter memory footprint (typically less than 100 MB).
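As a rough illustration of what "Servlet and JSP only" means in practice, everything you deploy to a bare Tomcat boils down to classes like this (Servlet 3.0+ annotation style; the class name and URL are made up):

```java
// Sketch: the kind of component a plain servlet container such as Tomcat runs.
import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/hello")
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello from a servlet container");
    }
}
```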
JBoss and GlassFish are full Java EE application servers, i.e., implementations of the Java EE specification that fully comply with, and support, all Java EE features.
JBoss has a larger community than GlassFish. GlassFish, however, performs better than JBoss and has a very slick GUI-based admin console, whereas JBoss can only be administered from the command line.
Cons compared to Tomcat: heavier memory footprint (hundreds of MB).
Oracle WebLogic is a full Java EE application server. It is, however, a proprietary product.
Pros compared to JBoss and Glassfish: very stable and robust.
Cons: licensing cost.
WildFly is just the next iteration of JBoss after JBoss AS 7.x - basically, it's JBoss AS 8.x with a different name.
Edit: here are a few other servers which can be of interest.
IBM WebSphere: IBM's application server.
Pros: integration with IBM's other products (IDEs, services, engines...)
Cons: licensing costs.
Jetty is a set of software components that offer HTTP and servlet services.
Pros compared to Tomcat: lighter memory footprint (circa 50MB), very flexible, very easy to set up.
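To give an idea of the "very easy to set up" point, here is a hedged sketch of Jetty run embedded, written against the Jetty 9 API; the class name and port are illustrative.

```java
// Sketch: starting Jetty embedded from a plain main() - no separate server install needed.
import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.AbstractHandler;

public class EmbeddedJettyExample {

    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);  // listen on port 8080
        server.setHandler(new AbstractHandler() {
            @Override
            public void handle(String target, Request baseRequest,
                               HttpServletRequest request, HttpServletResponse response)
                    throws IOException {
                response.setContentType("text/plain");
                response.getWriter().println("Hello from embedded Jetty");
                baseRequest.setHandled(true);
            }
        });
        server.start();
        server.join();
    }
}
```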
I use GlassFish, Java, and JSP over MySQL for my web applications. Many people use this web application online, and the website should not go down.
When I want to deploy a new WAR file, I have to undeploy the old one and deploy the new one on the server.
My question is this:
Is there any technology that doesn't require undeploying my application and instead just replaces the appropriate classes, so there is no need to redeploy the whole thing again?
There are Java technologies that allow you to replace classes on the fly (like JRebel). But since you're already using GlassFish, you should just start using the clustering that's built into GlassFish. You'll need either 2.1 or 3.1, as 3.0 does not support clustering. With a GlassFish cluster, you have a load balancer (Apache, Sun Web Server, or hardware such as Big IP or Coyote) distribute the load among your cluster nodes. When you want to upgrade the app, you can technically do it one node at a time. Setting up the cluster is not the easiest thing in the world, but it is doable, and it would get you some great benefits. You'll be able to scale the load by adding new hardware or even by using Amazon's (or another provider's) cloud services, and you'll be able to keep your site running even if the hardware fails on one of the nodes.
Personally I'm in the middle of converting from Glassfish 2.1 to 3.1. So far I like the management of the Glassfish 3.1 cluster much better, but I can't personally vouch for how it will run in production, though I have high expectations.
http://download.oracle.com/docs/cd/E18930_01/html/821-2432/gktqx.html#gktob
Jim is right, the best solution is currently to use a cluster and perform a manual rolling-upgrade.
But there is actually work ongoing to address your needs. We are working on a rolling-upgrade feature for a single standalone instance. In a nutshell (the specifications have not been published yet), it will let you switch from one application version to another (see application versioning and the enable command) with no downtime. Stay tuned.
For a variety of unfortunate management reasons (budget constraints, etc.), I, the developer, have been put in the position of deploying the app to a production environment. The catch is that I don't have any experience with production EJB application server deployment. That said, they are aware that there are no guarantees of success.
The context:
The dev server runs the latest version of NetBeans with GlassFish v3, on a Mac machine
98-99% uptime is OK; there are no financial/critical transactions
It is a client/server EJB 3 app, and the web tier, business tier, and resource tier currently run on the same machine
I have the liberty to choose the hardware/software infrastructure
Load estimates: 10 simultaneous connections on average, with rare peaks of 200
The outbound public data is text/small pictures (it's for iPhone clients); inbound is HTTP text only
Basic maintenance will be taken care of (backups, server reboots, etc.)
My questions for production deployment:
What are the must-haves infrastructure-wise? Minimum system specs, etc.?
Is it OK to keep GlassFish v3?
Which configuration aspects of the server should I focus on?
Worst-case scenario: if I deploy the same software infrastructure (NetBeans/GlassFish v3) as during development, would the server keep up?
Any piece of advice would be most welcome. Thanks!
For the architecture, you can start small with just a single GlassFish instance with no front web server (GlassFish has one built in that is very capable). If you can wait for the release of GlassFish 3.1 you'll be able to add instances (clustered or standalone) and offer scalability and centralized admin.
Most production instances of GlassFish I've seen run with 1GB-2GB of JVM heap (-Xmx) but your mileage may vary if you load lots of data in memory or if you use some frameworks. If you want better reliability, having them on separate machines is a plus obviously. With two instances on the same machine you can offer continuity of service if one instance fails (but not if the machine fails).
I'd suggest scripting the provisioning of resources (connection pools, JDBC data sources, etc.) and applications as much as possible using the "asadmin" command-line tool, and trying not to use NetBeans on the production platform.
Benchmarking with simulated load sounds like a wise thing to put together before going live, and this survival guide will probably come in handy.
You don't mention the database. Isn't there one?
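For the benchmarking step, even a tiny hand-rolled driver can approximate the load profile described in the question (about 10 concurrent users with occasional peaks of 200 requests). This is only a hedged sketch using standard JDK classes; the URL, thread count, and request count are illustrative, and a dedicated tool such as JMeter would give you better numbers.

```java
// Sketch: fire a burst of concurrent GET requests and print status codes and latencies.
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SimpleLoadTest {

    public static void main(String[] args) throws Exception {
        final URL url = new URL("http://localhost:8080/app/ping");  // illustrative endpoint
        ExecutorService pool = Executors.newFixedThreadPool(10);    // ~10 concurrent users

        for (int i = 0; i < 200; i++) {                             // simulate the rare 200-request peak
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        long start = System.nanoTime();
                        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                        int status = conn.getResponseCode();
                        InputStream in = conn.getInputStream();
                        while (in.read() != -1) { /* drain the response body */ }
                        in.close();
                        long millis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
                        System.out.println(status + " in " + millis + " ms");
                    } catch (Exception e) {
                        System.out.println("request failed: " + e.getMessage());
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
    }
}
```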
I suggest the following:
I'm not a Mac expert, but I'd say go with 6 GB or more of RAM
HDD space is not a problem these days
I don't know much about Mac processors (whatever the equivalent of a dual core is, etc.)
Personally, I have not used GlassFish 3 in production, but I hope it's stable now, so you should be OK.
System Architecture:
Receive all HTTP requests on a web server (Apache or Sun Web Server) and load balance across your GlassFish server(s).
Then, depending on your physical (or virtual) machines, create an instance of the GlassFish application server on each machine. If you have just one machine, create at least two instances of GlassFish. This will let you take one node down for maintenance while the other keeps going.
As far as deployment is concerned, make sure you turn off debug logging and fine-tune JPA logging, etc.
Use Ant or other scripts to deploy code and to take a backup of the existing code.
I hope this helps you get started; the rest you can ask about or figure out as you go along.
Good luck.