Simultaneous Deployment in WebSphere

I have a question. I actually have around 25 EARs to deploy onto WAS, and I am doing it one by one, which is costing me a lot of time.
Now I am wondering: is it a good idea to deploy a few EARs at one time? Meaning, I want to deploy 3-4 EARs in parallel to the same WAS server (to save time). Is my thinking correct? If yes, please guide me on how to proceed.
Regards

There are some ways to speed up application deployment, but they don't revolve around parallel deployment. For many reasons, I don't see parallel deployment being advised: deploying two EARs at the same time would likely just make both take twice as long, since they contend for the same server resources.
You could automate the deployment with a wsadmin script (see the sketch below). If the applications being deployed are large and have lots of binaries or packages that do not have JEE annotations, you can also look into annotation filtering, which can reduce deployment time.
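For illustration, a minimal wsadmin (Jython) sketch that installs every EAR from a directory in one session might look like this; the directory path and the use of default bindings are assumptions on my part, not part of the question:

    # Run with: wsadmin.sh -lang jython -f deployAll.py
    import os

    EAR_DIR = '/opt/builds/ears'  # hypothetical staging directory

    for name in os.listdir(EAR_DIR):
        if name.endswith('.ear'):
            print 'Installing %s' % name
            # AdminApp is provided by the wsadmin environment
            AdminApp.install(os.path.join(EAR_DIR, name),
                             '[-usedefaultbindings]')

    AdminConfig.save()  # persist the new configuration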
It wasn't specified, but if you are using WebSphere Liberty, then application deployment is inherently different: you can specify all of the application configuration in one place, the server.xml, and simply copy the EARs into place to deploy them.
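For example, a Liberty server.xml entry could look like the following sketch; the application id, name, and location are placeholders, not values from the question:

    <server>
        <!-- One entry per EAR; copying a new EAR into place redeploys it -->
        <application id="xyz" name="xyz" type="ear"
                     location="${server.config.dir}/apps/xyz-1.0.ear"/>
    </server>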

Related

Local development of microservices: methods and tools to work efficiently

I work with team members to develop a microservices architecture, but I have a problem with the workflow. I have too many microservices, and when I run them during development they consume too much memory, even on a good workstation. So I use Docker Compose to build and run my MSA, but it takes a long time. One often hears about how to technically build an MSA, but rarely about how to work on one efficiently. What do you do in this case? How do you work? Do you use tools to improve and facilitate your development? I've heard about Skaffold, but I don't see how it differs from Docker Compose, or from a simple CI/CD pipeline in a cluster environment, for example. Feel free to share tips and opinions. Thanks
I've had a fair amount of experience with microservices and local development, and here are some approaches I've seen:
Run everything locally on Docker or k8s. If using k8s, a tool like Skaffold can make it easier to run and debug a service locally in the IDE while deploying it into your local k8s cluster so that it can communicate with the other k8s services. It works OK, but running more than 4 or 5 full services locally in k8s or Docker requires dedicating a substantial amount of CPU and memory.
Build mock versions of all your services and use those locally and for integration tests (see the sketch after this list). The mock services are intentionally much simpler and therefore easier to run in large numbers locally. The obvious downside is that you have to build a mock version of every service, and you can easily miss bugs caused by a mock not behaving like the real service. Record/replay tools like Hoverfly can help in building mock services.
Give every developer their own cloud environment. Run most services in the cloud, but use a tool like Telepresence to swap locally running services in and out of the cloud cluster. This eliminates the problem of running too many services on a single machine, but it can be expensive to maintain separate cloud sandboxes for each developer, and you need a DevOps resource to help developers when their cloud sandbox gets out of whack.
Eliminate unnecessary microservice complexity and consolidate all your services into 1 or 2 monoliths. Enjoy being able to run everything locally as a single service. Accept the fact that a microservice architecture is overkill for most companies. Too many people choose a microservice architecture upfront before their needs demand it. Or they do it out of fear that they will need it in the future. Inevitably this leads to guessing how they should decompose the system into many microservices, and getting the boundaries and contracts wrong, which makes it just as hard or harder to fix in the future compared to a monolith. And they incur the costs of microservices years before they need to. Microservices make everything more costly and painful, from local development to deployment. For companies like Netflix and Amazon, it's necessary. For most of us, it's not.
I prefer option 4 if at all possible. Otherwise option 2 or 3 in that order. Option 1 should be avoided in my opinion but it is probably the option everyone tries first.
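To make option 2 concrete, here is a minimal sketch of a mock service using only the Python standard library; the /users endpoint and the canned payload are hypothetical, and a real mock would mirror your actual service contracts:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CANNED_USER = {"id": "42", "name": "Test User"}  # hypothetical fixture

    class MockUserService(BaseHTTPRequestHandler):
        # Stands in for the real user service during local development.
        def do_GET(self):
            if self.path.startswith("/users/"):
                body = json.dumps(CANNED_USER).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        # Each mock is tiny, so dozens can run on one workstation.
        HTTPServer(("localhost", 8081), MockUserService).serve_forever()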
In GKE, assuming you have a private cluster, you can use port forwarding while connected to the GKE environment through the CLI. Create a script that forwards your local ports to the GKE environment (see the sketch below). I believe the services tab in your cluster is where you will find the "port-forwarding" button that will give you the command. This way you can work on one microservice with all of its traffic being routed to the actual DEV cluster, which prevents you from having to run multiple projects at the same time.
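Such a script could be as simple as the following Python sketch, which shells out to kubectl; the service names, ports, and namespace are placeholders, and it assumes kubectl is already authenticated against your cluster:

    import subprocess

    # (service name, local port, service port) -- all hypothetical
    FORWARDS = [
        ("orders-svc", 8081, 80),
        ("billing-svc", 8082, 80),
    ]

    procs = [
        subprocess.Popen(["kubectl", "port-forward",
                          "service/%s" % name,
                          "%d:%d" % (local, remote),
                          "-n", "dev"])
        for name, local, remote in FORWARDS
    ]

    try:
        for p in procs:
            p.wait()  # keep the tunnels open until Ctrl+C
    except KeyboardInterrupt:
        for p in procs:
            p.terminate()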
I would say create a staging environment that has all services running, curated specifically for development. E.g., if it's deployed using k8s, then you expose some ports using a NodePort service if you need them for your specific microservice. And have a DevOps pipeline that always keeps this environment up to date with the code.
This environment should always be built from the master branch. Whether you have a single repo for the app or a repo per service, it's a fair assumption that it will always have the most recent code when you create your dev/feature branch.
Then, when you want to develop a feature or fix a bug, you check out your microservice. If you are following the microservice pattern appropriately, that single microservice should be an executable with its own Dockerfile, debuggable from your local IDE. Many enterprises follow this pattern and enforce at the organization level that the master branch is always production ready and high quality.
Let's say you discover a bug in some other microservice running in the k8s cluster. You will very likely be tempted to find a way to debug that remote microservice. However, that should be written up as a bug for the team that owns the microservice; if your team owns it, then you fix it first and then start working on your feature. If you really think you need to debug multiple microservices at once, then you either have really tight coupling between the services or you don't need a microservice architecture at all.

Spring RESTful services in WebSphere

Our application environment in WebSphere Application Server has 3 clusters:
1. UI Cluster
2. Service Cluster
3. Integration Cluster
We have around 50 WAR files (microservices) deployed to the Service cluster. All services are REST based and exposed through the Spring API. Restarting the Service cluster takes close to 30 minutes. This time is critical during live incidents in production: if, for any reason, the Service cluster needs to be restarted, we need 30 minutes of downtime for all end users. We are looking to reduce this recycle time; please suggest a solution.
Is there a way to load all the Spring-based JAR files before the application starts?
I.e., for example, there is a service WAR file called xyz-1.0.war that has Spring-based JARs as Maven dependencies. All 50 WAR files have the same set of dependencies, so I am wondering whether we can load all the Spring-based JARs before the applications are started by the WebSphere server.
Please suggest.
I don't know that you can load them BEFORE the application starts (class loading is generally on-demand), but you might be able to speed things up through the use of shared libraries for your common files, so they'd be loaded by a single class loader rather than from each WAR's class loader. It won't eliminate the class loading activity, since each WAR would still need to load the necessary classes, but it'd speed up the mechanics of the class loads since the shared library loader would return the already-loaded class rather than searching its class path.
There are two different approaches you could take to this. Step one in both cases is to create a shared library with the classes that are shared among the applications. The options for step two:
1) Create a custom class loader on the server and associate the shared library with this new class loader. This will make the classes in the shared library visible to all applications running on the server (see the wsadmin sketch at the end of this answer).
2) In the shared library configuration, select "Use an isolated class loader for this shared library", then associate the shared library with any applications that require it. In the event that the shared classes are required only by some applications, this will provide them only to the applications that require them.
A couple points of caution:
If you require unique Class instances (for example, static values unique to each WAR), this approach won't work, because there will be only one instance of the Class loaded by the shared library loader. In that event, you'll have to stick with WAR-level packaging.
If you use the isolated class loader solution, note that those loaders use "parent last" class loading, in which they are searched before server class loaders. If you have anything in those libraries that conflicts with classes provided by the server, it could open you up to ClassCastExceptions or LinkageErrors.
Note that the shared library loaders operate as parents of the WAR loaders, and as such, classes in the libraries will not be able to "see" classes in the WARs. You'll need to make sure that the libraries are essentially self-contained in order for these approaches to be successful.
More specific details on the configuration steps can be found in this blog post: https://www.ibm.com/developerworks/community/blogs/aimsupport/entry/create_shared_library_and_associate_it_with_the_application_server_or_application_on_websphere_application_server?lang=en_us
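For reference, option 1 can be scripted along these lines in wsadmin (Jython); the cell, node, server, library name, and class path below are placeholders for your own topology, so treat this as a sketch rather than a drop-in script:

    # Create a shared library at cell scope.
    cell = AdminConfig.getid('/Cell:myCell/')
    AdminConfig.create('Library', cell,
                       [['name', 'springLibs'],
                        ['classPath', '/opt/shared/spring']])

    # Attach it to a new server-scoped class loader so every
    # application on the server sees the shared classes.
    server = AdminConfig.getid('/Cell:myCell/Node:myNode/Server:server1/')
    appServer = AdminConfig.list('ApplicationServer', server)
    loader = AdminConfig.create('Classloader', appServer,
                                [['mode', 'PARENT_FIRST']])
    AdminConfig.create('LibraryRef', loader,
                       [['libraryName', 'springLibs']])
    AdminConfig.save()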
If you are doing microservices, then your services should be independently deployable, so each should be in a separate cluster. Traditional WebSphere Application Server is a bit heavyweight for this (depending on how many resources you have on your nodes), so I'd suggest migrating your Service cluster to WebSphere Liberty; then you could have each service in a separate cluster. This would allow you to restart each service independently, in much less time.
If you are doing microservices, then your UI cluster should be prepared for any service being unavailable - that is fundamental when doing microservices - and display a message to the end user that the service is temporarily unavailable.
Regarding your current setup - you could try the "Rollout update" option, which restarts your servers sequentially, so the services remain available on the other nodes.
so-random-dude's advice to use blue/green deployment is also very good. You could have 2 cells and switch the plugin configuration after redeployment. If your services are written so that different versions can run in parallel during an update, you would have no downtime.
If you want to reduce downtime further and improve performance, you should consider using Java EE/REST services instead of Spring, as that will significantly cut down the size of your app, the number of libraries to be scanned, and deployment and startup time. It is much better integrated and supported in WebSphere Liberty than the tons of JARs you have to include with Spring.
I have a simple solution for you: just ditch WebSphere and deploy those 50 "WARs" as independent JARs with an embedded Netty/Undertow/Tomcat/Jetty in them.
I am afraid what you have currently is not at all a microservice architecture. Agreed, different teams/consultants/organizations have different interpretations of "micro"services, but this is an extreme you should avoid at any cost, because you get all the pain points of microservices and ZERO benefits (benefits such as independent scalability, deployability, etc.).
Restarting Service cluster takes close to 30 mins. This time is critical during live incidents in Production. For reasons, if Service cluster needs to be restarted, we need to have 30 mins downtime for all end users
Have you looked at different deployment strategies like canary deployment or blue/green deployment? Do you have more than one instance behind a load balancer?

Microservices with JBoss

I am new to JBoss and want to know whether a microservices architecture is a right choice on JBoss. I cannot change the application server, as it was decided by the client architect, so I have no choice.
I want to know whether we can develop microservices with JBoss as the underlying application server.
I understand Spring Boot comes with an embedded Tomcat container, which makes it flexible to stop and start, and to deploy an individual service with no impact on other services.
However, will that architecture work with JBoss too?
Please suggest.
Thanks,
I actually carried out a feasibility study to investigate the solution you mention. My conclusion is that it is totally viable to use microservice principles on a JBoss platform.
I used the combination of JBoss, Spring Boot, and Netflix to create a successful microservice stack. I personally did that to find a solution to the transaction problem (multiple microservices collaborating) and the fan-out problem, which is caused by excessive network communication and serialization costs.
I also wrote a blog post about the subject; you might find more details there if you like. Here is the link:
Micro Services – Fan Out, Transaction Problems and Solutions with Spring Boot/JBoss and Netflix Eureka
By the definition of what microservices are, conceptually yes. A microservice is an independent unit that can be deployed, updated, and undeployed independently without affecting any unrelated part of your application. That would mean having multiple instances of JBoss for your microservices, with your application calling them through some sort of gateway or other mechanism depending on your use case. If you plan to deploy all your microservices in the same JBoss instance, it defeats the very purpose of a microservice. Given that, JBoss wouldn't be the right choice for microservice deployment, because it will only make your deployment infrastructure quite heavy.
Depending on what your client's requirements are, you could possibly keep your webapp in JBoss and deploy your microservice containers separately.
It depends on what you want to get out of microservices.
Some of the developers at my organisation looked at Spring Boot but concluded that it's best off being run as a standalone container rather than in JBoss, otherwise you've effectively got two container frameworks competing (SB and JBoss) and a range of associated issues.
Deploying microservices in JBoss won't give you the same flexibility as a true container system like Docker. With Docker you create standalone packages for your microservices that contain all the code, system tools, runtime environment, etc. They can be as small or large as they need to be. JBoss, on the other hand, is a large container running a single JVM, designed to hold multiple applications. The level of isolation is not the same, and it's not efficient to use JBoss as a container for a single microservice, so you have to size each instance appropriately and then deploy enough services to it to make use of the resources it has available.
If you're looking at microservices as a way to gain greater control over service lifecycle management (deployment, versioning, deprecating, etc.) as opposed to an automated, web-scale component deployment model a la Netflix or LinkedIn, you could do this adequately with JBoss.
I'm actually looking to do something along these lines here. It won't be true microservices, but by packaging and deploying individual, properly versioned APIs rather than monolithic applications, and by following most of the other principles of microservice development (componentisation, business-function focus, statelessness, etc.), we will hopefully be better able to manage and benefit from our APIs.
Our APIs will all be behind an API gateway and load balancer, so we can choose how we distribute the microservices across the JBoss instances and balance resource usage as required. Note that our organisation is relatively small and has relatively low and predictable traffic, so this approach should work fine. Your needs, however, may be different.

Deployment approach to reduce downtime for Java

We have a J2EE-based application; basically it is a small e-commerce app that runs globally (across multiple time zones). Whenever we have to deploy a patch, it takes around 3 hours (DB backup, DB changes, Java changes, QA smoke testing). I know that's too high. I want to bring this deployment time down to less than 30 minutes.
A brief on the application infrastructure: we have two JBoss servers and a single DB, with a load balancer configured for both JBoss servers. It is not a clustered environment.
Currently what we do:
We bring down both JBoss servers and the DB
Take a DB backup
Make the DB changes, run some scripts
Make the Java changes, apply the patches
The above steps take around 2 hours for us.
Then QA tests for one hour, and then we bring the servers back up.
Can you suggest a better approach to achieve this? My main question: when we have multiple JBoss servers and a single DB, how do we make deployment smooth?
One approach I've heard that Netflix uses, but that I have not had a chance to use myself:
Make all of your DB schema changes both forward and backward compatible with the current version of the software and the one you are about to deploy (see the sketch at the end of this answer). Have the new software version continue to write any data the old version needs. Hopefully this is a minimal set.
Back up your running DB (most DBs don't require downtime for backups), and deploy your database schema updates at least a week prior to your software deploy.
Once your DB changes have burnt in and seem to be bug free with the currently running version, reconfigure your load balancer to point to only one of your JBoss instances. Deploy your updated software to the other instance and have QA smoke test it offline while the first server continues to serve production requests.
When QA is happy with the results, point the LB to just the offline JBoss server (with the new software). When that comes online, update the software on the newly offline JBoss server, and have QA smoke test if desired. If successful, point the LB to both JBoss instances.
If QA finds major bugs, and a quick bug fix and "roll-forward" is not possible, roll back to the previous version of the deployed software. Since your schema and new code are backward compatible, you won't have lost data.
On your next deploy, remove any garbage from your schema (like columns unused by the current deploy) in a way that makes it still backward and forward compatible.
Although more complex than your current approach, this approach should reduce your deployment downtime with minimal risk.
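To illustrate the backward/forward-compatible schema step, here is a rough sketch using sqlite3 as a stand-in for the production database; the table, columns, and setup line are hypothetical demo details:

    import sqlite3  # stand-in for your production DB driver

    conn = sqlite3.connect("shop.db")
    cur = conn.cursor()

    # Demo table so the sketch runs standalone.
    cur.execute("CREATE TABLE IF NOT EXISTS orders "
                "(id INTEGER PRIMARY KEY, shipped INTEGER)")

    # Expand phase (deployed ~a week before the new code): add the new
    # column alongside the old one, so both app versions keep working.
    cur.execute("ALTER TABLE orders ADD COLUMN shipping_status TEXT")

    # The new code writes both representations, so rolling back to the
    # old version loses no data.
    cur.execute("UPDATE orders SET shipping_status = 'SHIPPED' "
                "WHERE shipped = 1")
    conn.commit()

    # Contract phase (a later deploy, once rollback is no longer needed):
    # cur.execute("ALTER TABLE orders DROP COLUMN shipped")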

GlassFish 3 EJB app deployment advice?

For a variety of unfortunate management reasons (budget constraints, etc.) I, the developer, have been put in the position of deploying the app to a production environment. The catch is that I don't have any experience with production EJB application server deployment. That said, they are aware that there are no guarantees of success.
The context:
The dev server runs on the latest version of NetBeans with GlassFish v3, on a Mac
98%/99% uptime is OK; there are no financial/critical transactions
It is a client/server EJB 3 app, and the web tier, business tier, and resource tier currently run on the same machine.
I have the liberty to choose the hw/sw infrastructure
Load estimate: 10 simultaneous connections on average, with rare peaks of 200
The outbound public data is text/small pics (it's for iPhone clients); inbound is HTTP text only
Basic maintenance will be taken care of (backup, server reboot, etc)
My questions for production deployment:
What are the must-haves infrastructure-wise? Minimum system specs, etc.?
Is it OK to keep GlassFish v3?
Which configuration aspects of the server should I focus on?
Worst-case scenario: if I deploy the same software infrastructure (NetBeans/GlassFish v3) as during development, would the server keep up?
Any piece of advice would be most welcome. Thanks!
For the architecture, you can start small with just a single GlassFish instance with no front web server (GlassFish has one built in that is very capable). If you can wait for the release of GlassFish 3.1 you'll be able to add instances (clustered or standalone) and offer scalability and centralized admin.
Most production instances of GlassFish I've seen run with 1GB-2GB of JVM heap (-Xmx) but your mileage may vary if you load lots of data in memory or if you use some frameworks. If you want better reliability, having them on separate machines is a plus obviously. With two instances on the same machine you can offer continuity of service if one instance fails (but not if the machine fails).
I'd suggest scripting as much as possible of the provisioning of resources (connection pools, JDBC datasources, etc.) and applications using the "asadmin" command-line tool, and trying not to use NetBeans on the production platform.
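As an illustration, the provisioning could be driven from a small Python script that calls asadmin; the install path, pool name, datasource class, credentials, and EAR path are all hypothetical:

    import subprocess

    ASADMIN = "/opt/glassfish3/bin/asadmin"  # hypothetical install path

    COMMANDS = [
        # JDBC connection pool and datasource (PostgreSQL assumed)
        ["create-jdbc-connection-pool",
         "--datasourceclassname", "org.postgresql.ds.PGSimpleDataSource",
         "--restype", "javax.sql.DataSource",
         "--property", "user=shop:password=secret:databaseName=shop",
         "shopPool"],
        ["create-jdbc-resource", "--connectionpoolid", "shopPool",
         "jdbc/shop"],
        # Deploy the application itself
        ["deploy", "--name", "shop-app", "/opt/builds/shop.ear"],
    ]

    for cmd in COMMANDS:
        subprocess.check_call([ASADMIN] + cmd)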
Benchmarking with simulated load sounds like a wise thing to put together before going live, and this survival guide will probably come in handy.
You don't mention the database. Isn't there one?
I suggest the following:
Not a Mac expert, but I'd say go with 6 GB or more of RAM
HDD space is not a problem these days
I don't know much about Mac processors (whatever the equivalent of dual core is, etc.)
Personally I have not used GF3 in production, but I hope it's stable now, so you should be OK.
System Architecture:
Receive all HTTP requests on a web server (Apache or Sun Web Server) and load balance across your GlassFish server(s).
Then, depending on your physical (or virtual) machines, create an instance of the GlassFish application server on each machine. If you have just one machine, create at least 2 instances of GlassFish. This lets you take one node down for maintenance while the other keeps going.
As far as deployment is concerned, make sure you turn off debug logging and fine-tune JPA logging, etc.
Use Ant or other scripts to deploy the code and to take a backup of the existing code.
I hope this helps you get started; the rest you can ask about or figure out as you go along.
Good luck.