Is there a way to split Hybris modules to different managed servers - deployment

I have a Hybris deployment on a single WebLogic Managed Server. During performance testing it was found that it would be better to split Hybris modules such as the Admin Cockpit and the Product Catalogue onto different Managed Servers.
EDIT
I suppose I should also mention that my infra team is asking me to separate out the EARs so that when code changes, only the affected module gets redeployed rather than the whole bunch. So even if we leave the performance aspect aside, I still need the split.
My problem is that the Hybris build produces a single EAR file.
Is there a way to break down the EAR file so that the modules are optionally included?
So the structure would be:
Managed Server 1
Hybris Core
Admin Cockpit
Managed Server 2
Hybris Core
Product Catalogue
After this, links to the deployments would be redirected via URL configuration.
Any suggestions?

I'm not sure this will eliminate the problems you're encountering, as I don't think the admin cockpit by itself is causing a performance bottleneck.
What is the performance issue? Quite often the performance impact comes from admin/backend-triggered functionality such as cronjobs (e.g. updating the product catalog with stock/product information) or Solr indexing jobs.
One common approach I have seen in hybris cluster environments is to set up a cluster of multiple nodes and reserve one node for backend activity (so that expensive cronjobs run on a dedicated node that is not served by the load balancer handling storefront requests).
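If you go that route, the cluster behaviour is driven by a few properties in each node's local.properties. A minimal sketch (property names as used in standard hybris cluster configuration; values are placeholders, so verify against your hybris version's documentation):

```
# local.properties on one node of the cluster (sketch)
clustermode=true
cluster.id=1          # unique per node, e.g. 0 for storefront, 1 for backend
cluster.maxid=2
# depending on your hybris version you can also assign the node to a group
# and bind heavy cronjobs to that group, e.g.:
# cluster.node.groups=backgroundProcessing
```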
But I think from a code deployment perspective the artifact would still be the same.
Hope this helps at least as food for thoughts :)
EDIT
In short: multiple hybris servers accessing the same db need to be set up as a cluster.
Multiple hybris servers with different sets of extensions can't share the same db (as the db layout will be different).
To be honest, this doesn't sound like a good approach to me.
In hybris you would use different localextensions.xml files (which define which extensions, i.e. modules, are part of your code artifact). That said, if you have two vastly different localextensions.xml files (one for your product catalog and one for admin), the resulting 'admin' deployment artifact would not contain the data model of the 'catalog' deployment, so the persistence layers wouldn't match up. In other words, on your admin server you wouldn't even be able to see the data model that is defined on your 'catalog' server, because the 'catalog'-specific extensions are not installed.
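To illustrate (extension names here are made-up placeholders, not your actual modules), a localextensions.xml is essentially just the list of extensions that end up in the artifact:

```xml
<!-- localextensions.xml (sketch; extension names are placeholders) -->
<hybrisconfig xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:noNamespaceSchemaLocation="resources/schemas/extensions.xsd">
    <extensions>
        <!-- standard extensions resolved from the platform directory -->
        <path dir="${HYBRIS_BIN_DIR}"/>
        <!-- custom extensions: removing one changes the generated type system -->
        <extension name="mycatalogcore"/>
        <extension name="mystorefront"/>
        <extension name="myadmincockpit"/>
    </extensions>
</hybrisconfig>
```

Since the set of extension entries drives the generated type system and database layout, two different files produce two incompatible persistence layers.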
And if you go without a properly set up cluster environment, changes on one server (written to the db) wouldn't be noticed on the other server unless you actively refresh/purge the hybris cache there, so multiple hybris servers sharing the same db only work if the servers are set up as a cluster.
I think if your admin server is supposed to work on the actual 'catalog' data, they both need to have the same set of extensions defined in their localextensions.xml in order for it to work at all.
Sharing the same database without being aware that there is a cluster (or basically other hybris servers accessing the same db) is not going to work IMO.
I still think your best shot would be to deploy the same code artifact everywhere (in cluster environments you can still set up different behavior/configuration per node). You could still (if you are 100% sure of it) deploy a release with code changes that affect only your 'catalog' node to just that node if you want to reduce downtime etc., but it's always a risk to run a cluster with different deployments on each node.
Good luck :)

Related

local development of microservices, methods and tools to work efficiently

I work with team members to develop a microservices architecture, but I have a problem with the way we work. I have too many microservices, and when I run them during development they consume too much memory, even on a good workstation. So I use docker compose to build and run my MSA, but it takes a long time. One often hears about how to technically build an MSA, but rarely about how to work on one efficiently. How do you handle this? How do you work? Do you use tools or anything else to improve and facilitate your development? I've heard about Skaffold, but I don't see how it differs from docker compose or from a simple CI/CD pipeline in a cluster environment, for example. Feel free to give tips and your opinion. Thanks
I've had a fair amount of experience with microservices and local development and here's been some approaches I've seen:
Run all the things locally on docker or k8. If using k8, then a tool like Skaffold can make it easier to run and debug a service locally in the IDE but put it into your local k8 so that it can communicate with other k8 services. It works OK, but running more than 4 or 5 full services locally in k8 or docker requires dedicating a substantial amount of CPU and memory.
Build mock versions of all your services. Use those locally and for integration tests. The mock services are intentionally much simpler and therefore easier to run in large numbers locally. The obvious downside is that you have to build a mock version of every service, and you can easily miss bugs that are caused by mock services not behaving like the real service. Record/replay tools like Hoverfly can help in building mock services.
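As a rough sketch of the record/replay idea with Hoverfly's hoverctl CLI (the service URL and file name are made up; the proxy port is Hoverfly's default):

```
# record real responses once by routing requests through the Hoverfly proxy
hoverctl start
hoverctl mode capture
curl --proxy http://localhost:8500 http://orders-service.internal/api/orders/42
hoverctl export orders-simulation.json

# later, replay the captured responses instead of running the real service
hoverctl import orders-simulation.json
hoverctl mode simulate
```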
Give every developer their own Cloud environment. Run most services in the cloud but use a tool like Telepresence to swap locally running services in and out of the cloud cluster. This eliminates the problem of running too many services on a single machine but can be spendy to maintain separate cloud sandboxes for each developer. You also need a DevOps resource to help developers when their cloud sandbox gets out of whack.
Eliminate unnecessary microservice complexity and consolidate all your services into 1 or 2 monoliths. Enjoy being able to run everything locally as a single service. Accept the fact that a microservice architecture is overkill for most companies. Too many people choose a microservice architecture upfront before their needs demand it. Or they do it out of fear that they will need it in the future. Inevitably this leads to guessing how they should decompose the system into many microservices, and getting the boundaries and contracts wrong, which makes it just as hard or harder to fix in the future compared to a monolith. And they incur the costs of microservices years before they need to. Microservices make everything more costly and painful, from local development to deployment. For companies like Netflix and Amazon, it's necessary. For most of us, it's not.
I prefer option 4 if at all possible. Otherwise option 2 or 3 in that order. Option 1 should be avoided in my opinion but it is probably the option everyone tries first.
In GKE, assuming you have a private cluster, you can use port forwarding while hooked up to the GKE environment through the CLI. Create a script that forwards your local ports to the GKE environment. I believe the services tab in your cluster is where you will find the "port-forwarding" button that will give you the CMD command. This way you can work on one microservice with all of its traffic being routed to the actual DEV cluster. This prevents you from having to run multiple projects at the same time.
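That button essentially hands you a kubectl command; a hedged example of what such a forwarding script might contain (cluster, zone, namespace, service name and ports are placeholders):

```
# authenticate against the dev cluster, then forward local port 8080 to the service
gcloud container clusters get-credentials dev-cluster --zone europe-west1-b
kubectl port-forward svc/orders-service 8080:80 -n dev
```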
I would say create a staging environment which has all services running. This staging environment is specifically curated for development. E.g. if it's deployed using k8s, then you expose some ports using a NodePort service if you need them for your specific microservice (a quick sketch follows below). And have a DevOps pipeline to always keep this environment up to date with the code.
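A minimal sketch of such a NodePort service (names, labels and ports are illustrative):

```yaml
# expose orders-service on a fixed port of every node in the staging cluster
apiVersion: v1
kind: Service
metadata:
  name: orders-service
spec:
  type: NodePort
  selector:
    app: orders
  ports:
    - port: 80          # service port inside the cluster
      targetPort: 8080  # container port
      nodePort: 30080   # port exposed on each node (must be in 30000-32767)
```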
This environment should always be built from the master branch. Whether you have a single repo for the app or a repo per service, it's a fair assumption that it will always have the most recent code when you create your dev/feature branch.
Then when you want to develop a feature or fix a bug, you check out your microservice. If you are following the microservice pattern appropriately, that single microservice should be an executable with its own Dockerfile and should be debuggable from your local IDE. Many enterprises follow this pattern and enforce at the organization level that the master branch is always production ready and of high quality.
Let's say you discover a bug in some other microservice running in the k8s cluster. You will very likely be tempted to find a way to debug that remote microservice. However, that should be written up as a bug for the team that owns the microservice. If your team owns it, then you fix it first and then start working on your feature. If you really think you need to debug multiple microservices at once, then you either have really tight coupling between the services or you don't really need a microservice architecture.

Spring restful services in websphere

Our application environment in Websphere Application Server has 3 clusters
1. UI Cluster
2. Service Cluster
3. Integration Cluster
We have around 50 WAR files (microservices) deployed to the Service cluster. All services are REST based and exposed through the Spring API. Restarting the Service cluster takes close to 30 minutes. This time is critical during live incidents in production: if the Service cluster needs to be restarted for any reason, we have 30 minutes of downtime for all end users. We are looking to reduce this recycle time; please suggest any solution.
Is there a way to load all the Spring based jar files before the application starts?
i.e. for example, there is a service WAR file called xyz-1.0.war, and the Spring-based jar files are Maven dependencies. All 50 WAR files have the same set of dependencies, so I am wondering whether we can load all the Spring-based jars before the applications are started by the WebSphere server.
Please suggest.
I don't know that you can load them BEFORE the application starts (class loading is generally on-demand), but you might be able to speed things up through the use of shared libraries for your common files, so they'd be loaded by a single class loader rather than from each WAR's class loader. It won't eliminate the class loading activity, since each WAR would still need to load the necessary classes, but it'd speed up the mechanics of the class loads since the shared library loader would return the already-loaded class rather than searching its class path.
There are two different approaches you could take to this. Step one in both cases is to create a shared library with the classes that are shared among the applications. The options for step two:
1) Create a custom class loader on the server and associate the shared library with this new class loader. This will make the classes in the shared library visible to all applications running on the server.
2) In the shared library configuration, select "Use an isolated class loader for this shared library", then associate the shared library with any applications that require it. In the event that the shared classes are required only by some applications, this will provide them only to the applications that require them.
A couple points of caution:
If you require unique Class instances (for example, static values unique to each WAR), this approach won't work, because there will be only one instance of the Class loaded by the shared library loader. In that event, you'll have to stick with WAR-level packaging.
If you use the isolated class loader solution, note that those loaders use "parent last" class loading, in which they are searched before server class loaders. If you have anything in those libraries that conflicts with classes provided by the server, it could open you up to ClassCastExceptions or LinkageErrors.
Note that the shared library loaders operate as parents of the WAR loaders, and as such, classes in the libraries will not be able to "see" classes in the WARs. You'll need to make sure that the libraries are essentially self-contained in order for these approaches to be successful.
More specific details on the configuration steps can be found in this blog post: https://www.ibm.com/developerworks/community/blogs/aimsupport/entry/create_shared_library_and_associate_it_with_the_application_server_or_application_on_websphere_application_server?lang=en_us
If you are doing microservices then your services should be independently deployable, so each should be in a separate cluster. Traditional WebSphere Application Server is a bit heavyweight for this (depending on how many resources you have on your nodes), so I'd suggest migrating your service cluster to WebSphere Liberty; in that case you could have each service in a separate cluster. This would allow you to restart each service independently, in much less time.
If you are doing microservices, then your UI cluster should be prepared for any service being unavailable - that is a basic rule when doing microservices - and display some message to the end user that the service is temporarily unavailable.
Regarding your current setup - you could try the "Rollout update" option, which restarts your servers sequentially, so services remain available on the other nodes.
so-random-dude's advice to use blue/green deployment is also very good. You could have 2 cells and then switch the plugin configuration after redeployment. If your services are written in a way that allows different versions to run in parallel during an update, you would have no downtime.
If you want to reduce downtime further and improve performance, you should consider using Java EE/REST services instead of Spring, as it will significantly cut down the size of your app, the number of libs to be scanned, and deployment and startup time. Java EE is much better integrated and supported in WebSphere Liberty than the tons of jars that you have to include with Spring.
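To give an idea of what "one service per Liberty server" looks like, a minimal server.xml might be something like the following (the feature list and ports are examples only; enable the features your service actually uses):

```xml
<!-- server.xml for a single service on WebSphere Liberty (sketch) -->
<server description="xyz service">
    <featureManager>
        <feature>jaxrs-2.0</feature>
        <feature>jsonp-1.0</feature>
    </featureManager>
    <httpEndpoint id="defaultHttpEndpoint" host="*" httpPort="9080" httpsPort="9443"/>
    <!-- only this one WAR lives here, so it can be restarted without touching the other 49 -->
    <webApplication location="xyz-1.0.war" contextRoot="/xyz"/>
</server>
```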
I have a simple solution for you: just ditch WebSphere and deploy those 50 "wars" as independent jars with an embedded Netty/Undertow/Tomcat/Jetty in them.
I am afraid that what you have currently is not a microservice architecture at all. Agreed, different teams/consultants/organizations have different interpretations of "micro"services. But this is an extreme which you should avoid at any cost, because you have all the pain points of microservices and ZERO benefits (benefits such as independent scalability/deployability etc.).
Restarting Service cluster takes close to 30 mins. This time is critical during live incidents in Production. For reasons, if Service cluster needs to be restarted, we need to have 30 mins downtime for all end users
Have you looked at different deployment strategies like Canary deployment / Blue-Green deployment? Do you have more than one instances behind a load balancer?

Is there any way to move individual entity from one server to another in Master data services?

I have a master data model with some entities, and it is deployed on the production server.
Now I have created 2 more new entities on the development server and want to move only these two entities.
If anyone has any idea, please share it with me.
Thanks!
You have two options.
Web app (easiest): on your Dev server, go to System Administration. Click on Deployment and create a package. You then deploy this package by going to the production server and following the same steps, but choosing deploy instead of create under the 'Deployment' button.
The alternative is to use the MDSModelDeploy.exe. You can find it on the server by going to the appropriate folder. Generally it's in this path: C:\Program Files\Microsoft SQL Server\130\Master Data Services\Configuration.
I recommend you use this method, as you have more control. You can choose to deploy with data or without, or clone your model. You can read more here: https://learn.microsoft.com/en-us/sql/master-data-services/deploy-a-model-deployment-package-by-using-mdsmodeldeploy
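For reference, the command-line flow is roughly as follows (model, service and package names are placeholders; double-check the switches against the linked documentation for your SQL Server version):

```
:: on the development server: create a package, optionally including master data
MDSModelDeploy createpackage -model "ProductModel" -version "VERSION_1" -service "MDS1" -package "C:\temp\ProductModel.pkg" -includedata

:: on the production server: deploy the package as an update to the existing model
MDSModelDeploy deployupdate -package "C:\temp\ProductModel.pkg" -version "VERSION_1"
```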
I can also recommend you consider the ModelPackageEditor when your model starts getting big. Then you have control over what you need to deploy, as in entities, views, business rules etc.
You need to have a deployment strategy in place, because if your development and production environments are not exactly the same, you run into deployment errors. It normally happens when you create, for example, business rules on the environment to which you are deploying that do not exist on your dev environment. MDS uses copious amounts of IDs, and if the models are not in sync you run into problems.

TFS Intranet Automated Deploy Strategy

I have introduced branching/merging to my team and have talked before about how it would be great to automatically build and deploy code checked into the staging/master branches, but I'm a junior dev, not very ops-y.
The trouble I'm having is that we create intranet applications and host them on our own VMs, which we have access to, but we also have load balancing, which is causing me grief!
I can get a build to automate (well, I haven't got all the bugs figured out but I'm working my way through them) - and I can even get the build to automatically create a zip file ready for deployment.
Is it possible to configure several servers for deployment?
i.e.
1) I check in some code to stage
***Automatically***
2) Code builds
3) Build completes, Unit tests run and they complete
4) Code is packaged into a .zip
5) .Zip is deployed across the three load balancing servers (all with the same file path).
Maybe worth noting: we currently have our TFS server running Visual Studio, so the code is built on the same server where it is stored, but this is not the server we run live code from.
Any help or tutorials specific to my setup would be GREATLY appreciated; I really want to turn this department's release strategy around!
I am going to address only the deployment aspect. There are a lot of different ways that this can be handled, such as:
Customizing the build template
Writing custom .Net code and inserting it into the build template (which would also involve customizing the template)
Creating a batch or PowerShell script set to run after the build completes
Using a separate tool such as Octopus Deploy or Release Manager to handle the deployments
The first thing you need to do is separate the build and deployment steps in your head. While they are tightly coupled in your model, they are two totally different tasks that need to be handled in different ways.
The second thing is to stop thinking like a developer when it comes to the deployment portion. While there will likely be a programmatic solution, you'll need to identify the manual steps first.
You stated that you're not very ops-y, by which I assume you mean you're more of a developer and not a systems analyst. If that is the case, then the third thing you'll need to do is get someone involved who is, such as your current release team.
There are 3 major things that need to be done then:
EVERYTHING needs to be standardized. If you can't standardize something, then standardize the way that it's non-standard (example: You have a bulk list of servers you need to deploy to, and you need to figure out which ones to deploy to based on their name, which can be anything. In that case, a rule needs to be put in place that all QA servers need to have QA in their name, User Acceptance servers need UAT, Production need PROD, etc.).
Figure out how you're going to communicate from the build to the deployment: which builds are going to be deployed, to which servers, and where the code is going to be picked up from
You need to document every manual step, and every exception to those steps, and every exception to those exceptions.
Once you have all those pieces in place, you need to then go through each manual step and automate it, whether that's through Batch, Powershell, or a custom-built application. Once you have all the steps automated, you'll have both the build and deploy pieces complete.
After you're able to execute a single "manual" automatic deployment to a single environment, you're then ready to figure out how you want to run it for multiple environments. This can be as complex as an XML file that is iterated through, or as simple as calling the same command multiple times with different parameters.
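As a hedged sketch of the "iterate over an XML file" approach (server names, share paths and the XML layout are invented for illustration; a matching environments.xml appears further down):

```powershell
# deploy.ps1 - copy the build output to every server configured for an environment
param(
    [string]$Environment = "QA",
    [string]$PackagePath = "\\buildserver\drops\MyApp\MyApp.zip"
)

# read the per-environment server list from the XML configuration
[xml]$config = Get-Content ".\environments.xml"
$servers = ($config.environments.environment |
    Where-Object { $_.name -eq $Environment }).server

foreach ($server in $servers) {
    $destination = "\\$($server.name)\$($server.sharePath)"
    Write-Host "Deploying $PackagePath to $destination"
    Copy-Item -Path $PackagePath -Destination $destination -Force
    # unzip / stop-start of IIS sites, Windows services and scheduled tasks would go here
}
```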
A quick summary of how I've done this at my current job (where using a third-party deployment tool was not an option):
Created a tool using .Net WinForms to allow us to "manually" run automated builds (we use the interface to determine the input parameters, and the custom classes under the hood do all the heavy lifting. These custom classes are in a separate project that builds to its own dll. This also allows us to test tweaks and changes to the process in a testing environment before we roll it out to our production build server)
Set up an XML file for each environment (QA, UAT, Prod, etc.) that contains all of the servers that need to be deployed to in that environment, including destination paths, scheduled tasks, and Windows services (a sketch of such a file follows this list)
Customize the TFS build template and include the custom classes created for the custom tool, which will read the XML file and iterate through each server entry to perform the deployments
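To make the XML file from the list above concrete, its structure could be as simple as this (purely illustrative; it pairs with the PowerShell sketch earlier):

```xml
<!-- environments.xml (illustrative only) -->
<environments>
  <environment name="QA">
    <server name="QAWEB01" sharePath="d$\inetpub\MyApp"/>
    <server name="QAWEB02" sharePath="d$\inetpub\MyApp"/>
  </environment>
  <environment name="PROD">
    <server name="PRODWEB01" sharePath="d$\inetpub\MyApp"/>
    <server name="PRODWEB02" sharePath="d$\inetpub\MyApp"/>
    <server name="PRODWEB03" sharePath="d$\inetpub\MyApp"/>
  </environment>
</environments>
```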
I'm more than happy to help with more specific examples and assistance, I look at things a bit different than most people and it helps when it comes to release management.

Should docker image be bundled with code?

We are building a SaaS application. I don't have (for now - for this app) high demands on availability. It's mostly going to be used in a specific time zone and for business purposes only, so scheduled restarting at 3 in the morning shouldn't be a problem at all.
It is an ASP.NET application running in mono with the FastCGI server. Each customer will have - for security reasons - his own application deployed. This is going to be done using docker containers, with an nginx server in front to distribute the requests based on URL. The possible ways to deploy it, as I see them, are:
Create a docker image with the fcgi server only and run the code from a mount point
Create a docker image with the fcgi server and the code
pros for 1. would seem
It's easier to update the code, since the docker containers can keep running
Configuration can be bundled with the code
I could easily (if I ever wanted to) add minor changes for specific clients
pros for 2. would seem
everything is in an image, no need to mess around with additional files, just pull it and run it
cons for 1.
a lot of folders for a lot of customers, in addition to the running containers
cons for 2.
Configuration can't be in the image (or can it? - should I create specific images per customer with their configuration?) => still additional files for each customer
Updating a container is harder since I need to restart it - but not a big deal, as stated in the beginning
For now - the first year - the number of customers will be low, and when the demand is low, any solution is good enough. I'm looking rather at what is going to work with >100 customers.
Also, in the future I want to set up CI for this project, so we wouldn't need to update all customer instances manually. Docker images can have automated builds, but I'm not sure that will be enough.
My concern is basically: which solution is less messy and maybe easier to automate?
I couldn't find any best practices with docker which cover a similar scenario.
It's likely that your application's dependencies will change along with the code, so you'll still have to rebuild the images and restart the containers sometimes (whenever you add a new dependency).
This means you would have two upgrade workflows:
One where you update just the code (when there are no dependency changes)
One where you update the images too, and restart the containers (when there are dependency changes)
This is most likely undesirable, because it's difficult to automate.
So, I would recommend bundling the code on the image.
You should definitely make sure that your application's configuration can be stored somewhere else, though (e.g. on a volume, or accessed through environment variables).
Ultimately, Docker is a platform to package, deploy and run applications, so packaging the application (i.e. bundling the code on the image) seems to be the better way to use it.
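As a rough sketch of option 2 with the configuration kept outside the image (base image, paths, ports and variable names are assumptions; check the fastcgi-mono-server options against your mono version):

```
# Dockerfile - bundle the published application code into the image
FROM mono:latest
WORKDIR /app
COPY ./publish /app
EXPOSE 9000
# serve the app over FastCGI so nginx can proxy requests to it
CMD ["fastcgi-mono-server4", "/applications=/:/app", "/socket=tcp:0.0.0.0:9000"]
```

Each customer then gets a container from the same image, with only configuration differing at run time, for example:

```
docker run -d --name customer-acme \
  -e CONNECTION_STRING="..." \
  -v /srv/customers/acme/config:/app/config:ro \
  -p 9001:9000 \
  mysaas/app:1.4.2
```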