How to know when to upgrade my pod when a container is updated? - kubernetes

I have a pod with 3 containers (ca, cb, cc).
This pod is owned by TeamA, which creates and owns ca. The other two containers are developed by two other teams: TeamB for cb and TeamC for cc.
Both cb and cc also run independently (outside the TeamA pod) as services within this same cluster.
How can the TeamA pod find out when cb or cc deploy a newer version in the cluster, and how can we ensure that it triggers a refresh?
As you may have guessed, cb and cc are services that ca relies on heavily, and they are also services in their own right.
Is there a way to ensure that the TeamA pod keeps cb and cc updated whenever TeamB and TeamC deploy new versions to the cluster?

This is not a task for Kubernetes. You should configure that in your CI/CD tool. For example, whenever a new commit is pushed to service A, it will first trigger the pipeline for A, and then trigger the corresponding pipelines for the services that depend on it (B and C). Every popular CI/CD system has this ability, and this is how it's normally done.
Proper CI/CD tests will also protect you from mistakes. If an update of service A breaks compatibility with B or C, your pipeline should fail and notify you about that.
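For instance, with GitHub Actions such a cross-repository trigger could look roughly like the sketch below; the repository names, secret name, and event type are all hypothetical, and the same idea exists in GitLab CI (trigger jobs), Jenkins (downstream jobs), and similar tools.

```yaml
# Hypothetical workflow in cb's repository: on every push to main,
# notify the repository that owns the TeamA pod so its pipeline can
# rebuild and redeploy the pod with the new cb image.
name: notify-downstream
on:
  push:
    branches: [main]
jobs:
  trigger-pod-rebuild:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger TeamA pod pipeline
        run: |
          curl -sf -X POST \
            -H "Authorization: Bearer ${{ secrets.DOWNSTREAM_TOKEN }}" \
            -H "Accept: application/vnd.github+json" \
            https://api.github.com/repos/team-a/pod-repo/dispatches \
            -d '{"event_type": "cb-released"}'
```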

There's no one specific answer. https://github.com/jetstack/version-checker provides a metrics/alerting approach. Numerous kubectl plugins give a CLI reporting approach. Stuff like https://fluxcd.io/docs/guides/image-update/ can do upgrades automatically within certain parameters.
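As a rough illustration of the Flux approach, the image-update guide above drives upgrades from objects like these (image name, namespace, and semver range are made up, and the API versions should be checked against the current Flux docs; an ImageUpdateAutomation object, not shown, is what actually writes the new tag back to git):

```yaml
# Sketch: track the cb image and accept any new semver release.
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository
metadata:
  name: cb
  namespace: flux-system
spec:
  image: registry.example.com/team-b/cb
  interval: 1m
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: cb
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: cb
  policy:
    semver:
      range: ">=1.0.0"
```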
Or maybe just actually talk to your coworkers; this is both a technological and a social problem, and you'll need answers from both sides.

Related

Whole Application level rolling update

My kubernetes application is made of several flavors of nodes, a couple of “schedulers” which send tasks to quite a few more “worker” nodes. In order for this app to work correctly all the nodes must be of exactly the same code version.
The deployment is performed using a standard ReplicaSet, and when my CI/CD kicks in it just does a simple rolling update. This causes a problem, though: during the rolling update, nodes of different code versions co-exist for a few seconds, so a few tasks during that window get wrong results.
Ideally, deploying a new version would create a completely new application that only communicates with itself and has time to warm its cache; then, at the flick of a switch, this new app would become active and start receiving client requests. The old app would remain active for a few more seconds and then shut down.
I’m using Istio sidecar for mesh communication.
Is there a standard way to do this? How is such a requirement usually handled?
I also had such a situation. Kubernetes alone cannot satisfy your requirement, and I was not able to find any tool that allows coordinating multiple deployments together (although Flagger looks promising).
So the only way I found was by using CI/CD: Jenkins in my case. I don't have the code, but the idea is the following (a rough shell sketch of the commands is shown after these steps):
1. Deploy all application Deployments using a single Helm chart. Every Helm release name and the corresponding Kubernetes labels must be based on some sequential number, e.g. the Jenkins $BUILD_NUMBER. The Helm release can be named like example-app-${BUILD_NUMBER} and all Deployments must carry the label version: $BUILD_NUMBER. The important part here is that your Services should not be part of your Helm chart, because they will be handled by Jenkins.
2. Start your build by detecting the currently deployed version of the app (using a bash script, or store it in a ConfigMap).
3. Run helm install example-app-${BUILD_NUMBER} with the --atomic flag set. The atomic flag makes sure the release is properly removed on failure. Don't delete the previous version of the app yet.
4. Wait for Helm to complete and, in case of success, run kubectl set selector service/example-app version=$BUILD_NUMBER. That will instantly switch the Kubernetes Service from one version to the other. If you have multiple Services you can issue multiple set selector commands (each command executes immediately).
5. Delete the previous Helm release and optionally update the ConfigMap with the new app version.
Depending on your app you may want to run tests on non-user-facing Services as part of step 4 (after the Helm release succeeds, before switching the selector).
Another good idea is to have preStop hooks on your worker pods so that they can finish their jobs before being deleted.
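Putting steps 2-5 together, the Jenkins job would run something like the following shell sketch (chart path, Service name, label key, and the ConfigMap used to record the current version are all examples, not a drop-in script):

```bash
#!/usr/bin/env bash
set -euo pipefail

BUILD_NUMBER="$1"   # passed in by Jenkins

# Step 2: detect the currently deployed version (stored in a ConfigMap here).
PREVIOUS=$(kubectl get configmap example-app-current -o jsonpath='{.data.version}')

# Step 3: install the new release next to the old one; --atomic removes it again on failure.
helm install "example-app-${BUILD_NUMBER}" ./example-app-chart \
  --atomic --set version="${BUILD_NUMBER}"

# (Optional tests against the not-yet-active release go here.)

# Step 4: flip the Service selector; traffic switches to the new version immediately.
kubectl set selector service/example-app "version=${BUILD_NUMBER}"

# Step 5: clean up the old release and record the new version.
helm uninstall "example-app-${PREVIOUS}"
kubectl create configmap example-app-current \
  --from-literal=version="${BUILD_NUMBER}" \
  --dry-run=client -o yaml | kubectl apply -f -
```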
You should consider a Blue/Green deployment strategy.

How to set automatic rollbacks in CodeDeploy with CloudFormation?

I'm creating a Deployment Group in CodeDeploy with a CloudFormation template.
The Deployment Group is successfully created and the application is deployed perfectly fine.
The CF resource that I defined (Type: AWS::CodeDeploy::DeploymentGroup) has the "Deployment" property set. The thing is that I would like to configure automatic rollbacks for this deployment, but as per CF documentation for "AutoRollbackConfiguration" property: "Information about the automatic rollback configuration that is associated with the deployment group. If you specify this property, don't specify the Deployment property."
So my understanding is that if I specify "Deployment", I cannot set "AutoRollbackConfiguration"... Then how are you supposed to configure any rollback for the deployment? I don't see any other resource property that relates to rollbacks.
Should I create a second DeploymentGroup resource and bind it to the same instances that the original Deployment Group has? I'm not sure this is possible or makes sense but I ran out of options.
Thanks,
Nicolas
First I'd like to describe why you cannot specify both the deployment and the rollback configuration:
Whenever you specify a deployment directly for the group, you already state which revision you would like to deploy. This conflicts with CloudFormation's idea of managing resources without drift between the template and the actual configuration of those resources.
I would recommend the following:
Use CloudFormation to deploy the 'underlying' infrastructure (the deployment group, application, roles, instances, etc.)
Create a CodePipeline within this infrastructure template, which then includes a CodeDeploy deployment action (https://docs.aws.amazon.com/codepipeline/latest/userguide/action-reference-CodeDeploy.html, https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-codepipeline-pipeline-stages-actions-actiontypeid.html)
The pipeline can be triggered whenever you have a new version inside your revision location.
This approach clearly separates the underlying infrastructure, which does not change dynamically, from the actual application deployment, which is done using a proper pipeline.
Additionally, this way you can specify how you would like to deploy (blue/green, canary) and how/when rollbacks should be handled. The status of your deployment can also be seen inside CodePipeline.
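A trimmed CloudFormation sketch of that split is shown below: the DeploymentGroup carries only the rollback configuration (no Deployment property), and revisions are pushed through a CodeDeploy action in the pipeline. Roles, the artifact store, and the Source/Build stages are omitted, and all names are placeholders.

```yaml
Resources:
  DeploymentGroup:
    Type: AWS::CodeDeploy::DeploymentGroup
    Properties:
      ApplicationName: !Ref CodeDeployApplication
      ServiceRoleArn: !GetAtt CodeDeployRole.Arn
      # No "Deployment" property here; revisions arrive via the pipeline below.
      AutoRollbackConfiguration:
        Enabled: true
        Events:
          - DEPLOYMENT_FAILURE
          - DEPLOYMENT_STOP_ON_ALARM

  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      # ... RoleArn, ArtifactStore, Source and Build stages omitted ...
      Stages:
        - Name: Deploy
          Actions:
            - Name: DeployToInstances
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Provider: CodeDeploy
                Version: "1"
              InputArtifacts:
                - Name: BuildOutput
              Configuration:
                ApplicationName: !Ref CodeDeployApplication
                DeploymentGroupName: !Ref DeploymentGroup
```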
I didn't mention it but what you are suggesting about CodePipeline is exactly what I did.
In fact, I have one CloudFormation template that creates all the infrastructure and includes the DeploymentGroup. With this, the application is deployed for the first time to my EC2 instances.
Then I have another CF template for CI/CD purposes with a CodeDeploy stage/action that references the previous DeploymentGroup. Whenever I push some code to my repository, the Pipeline is triggered, code is built and new version successfully deployed to the instances.
However, I don't see how/where in any of the CF templates to handle/configure the rollback for the DeploymentGroup as you were saying. I think I get the idea of your explanation about the conflict CF might have in case of a drift, but my impression is that in case of errors during the CF stack creation, the CF rollback should just remove the DeploymentGroup it was trying to create. In other words, for me there's no CodeDeploy deployment rollback involved in that scenario, just the removal of the resource (DeploymentGroup) CF was trying to create.
One thing that really surprises me is that you can enable/disable automatic rollbacks for the DeploymentGroup through the AWS Console. Just edit and go to Advanced Configuration for the DeploymentGroup and you have a checkbox. I tried it and triggered the Pipeline again and it worked perfectly. I made a faulty change to make the deployment fail on purpose, and then CodeDeploy automatically reverted to the previous version of my application... completely expected behavior. It doesn't make much sense that this simple boolean/flag option is not available through CF.
Hope this makes sense and helps clarifying my current situation. Any extra help would be highly appreciated.
Thanks again

How isUpgrade setting affects deployment process in Service Fabric Application Deployment task in Azure DevOps

Azure Devops has a standard task for deploying apps to ServiceFabric. The task is named Service Fabric Application Deployment and is documented here. Among other settings, it contains an optional boolean isUpgrade setting (default value 'true'). I tried to set it explicitly to true and false, but I did not find any difference in the behavior of the task. In both cases, the deployment was successful, all previously deployed packages were still provisioned, and Azure Pipelines logs were the same. The time of the deployment was the same, too.
My question is: what does this setting affect? Maybe somebody has used it in their CI pipelines.
There are 2 types of deployment in Service Fabric. The isUpgrade flag controls which type of deployment you are executing.
Regular
Basically this removes the old application and deploys the new version. So if you have stateful services, this will remove all state. You will have downtime when you do a regular deployment.
Upgrade
An upgrade does a lot of things: it keeps the state, performs health checking, makes sure the services stay available, rolls back when the health check fails, and so on.
If your application or services didn't change, nothing changes in your cluster.
Typically an upgrade will take more time (This is highly dependent on your health check rules). See the application upgrade flowchart
More info about the 2 types
If you look at the code of the task, you'll see that isUpgrade only takes effect if overridePublishProfileSettings is true. Otherwise the PublishProfile.xml is used.
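In a YAML pipeline the relevant inputs would look roughly like the sketch below; the connection name and package path are placeholders, and the input names should be checked against the task's documentation for the version you use.

```yaml
# Sketch of the Service Fabric deployment task inputs (values are placeholders).
- task: ServiceFabricDeploy@1
  inputs:
    applicationPackagePath: '$(Pipeline.Workspace)/drop/MyAppPkg'
    serviceConnectionName: 'my-sf-cluster'    # Service Fabric service connection
    overridePublishProfileSettings: true      # required for isUpgrade to take effect
    isUpgrade: true                           # false => remove and redeploy (state is lost)
    upgradeMode: 'Monitored'                  # health-checked rolling upgrade
    FailureAction: 'Rollback'                 # roll back automatically on failed health checks
```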

Deployment synchronization across microservices

Let's say there are two environments: staging and prod.
There are also two microservices: A and B.
Microservices are already deployed to both environments and each running service version 1. So we have:
staging: A1, B1
prod: A1, B1
Now one development team implements a new feature in microservice A (either a synchronous API or an async one through a message broker) and deploys A2 to staging. Then another team implements a new feature in B that makes use of the new feature from A2 and also deploys it to staging (B2). Now we have:
staging: A2, B2
prod: A1, B1
New features are tested in the staging environment by the client and are approved for deployment to production. The question is how to determine which services should be deployed to production first. Obviously B2 must not be deployed first because it depends on A2. Are there any tools/strategies to keep track of this?
What I can imagine is that all services keep track of which versions of other services they depend on, and during deployment these versions are checked against what is running in the target environment; if something is missing, the deployment is rejected.
This raises a question - should microservice deployment be parallel or one-microservice-at-a-time?
Also, what if, before A2 and B2 are deployed to prod, A3 is released to staging and depends on B2? Now we would have to schedule the deployment to production like this:
A2 => B2 => A3
Those are real-world examples that come to my mind, but maybe in a microservice architecture it's possible to avoid such situations by following some rules?
Versioning at integration points can be an option.
For example, if microservice-A gets information from microservice-B through a REST call, then when microservice-B wants to change the integration (a change in the REST contract, for instance), microservice-B can add a new endpoint with a new versioned mapping like "/getInformationFromMeV2" without deleting the old one. This way, when you deploy microservice-B, microservice-A can still use the old endpoint for a while. After microservice-A has been deployed too, you can remove the old endpoint from microservice-B in the next deployment.
Note: of course, if microservice-A wants to use the newly developed endpoint of microservice-B, microservice-B must be deployed before microservice-A.
For async communication you can still apply this approach. If you use broker-based communication, for example Kafka (suppose you want to deploy microservice-A first):
If microservice-A is the consumer, a new topic is created, microservice-A subscribes to the new topic as well, and the new version of microservice-A is deployed. At that point microservice-A consumes messages from both the new and the old topic (of course, until microservice-B is deployed, all messages are still sent to the old topic). After microservice-B is also deployed, the old topic is deleted.
If microservice-A is the producer, again a new topic is created, and after microservice-A is deployed new messages are sent to the new topic. When microservice-B is deployed too, it starts reading messages from the beginning of the new topic, and the old topic is deleted.
With this kind of approach you can deploy microservices independently, as fits a microservices architecture.
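As a rough illustration of the consumer-first variant with Kafka's command-line tools (topic names, broker address, and partition/replication counts are made up):

```bash
# Step 1: create the new, versioned topic alongside the old one.
kafka-topics.sh --bootstrap-server broker:9092 --create \
  --topic information-from-b.v2 --partitions 6 --replication-factor 3

# Step 2: deploy the new microservice-A, subscribed to BOTH
#         information-from-b.v1 and information-from-b.v2
#         (microservice-B still produces to .v1 only at this point).

# Step 3: deploy the new microservice-B, which now produces to .v2.

# Step 4: once .v1 is fully drained, delete it.
kafka-topics.sh --bootstrap-server broker:9092 --delete \
  --topic information-from-b.v1
```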

Does IBM Bluemix eliminate the need to maintain servers?

Currently we are maintaining a server for each environment, e.g. DEV, FVT, UAT and PROD.
I think we can create spaces in Bluemix to replicate the above setup, but does Bluemix completely remove the need for servers?
I think we at least need to maintain a Sandbox environment to test the code before pushing it to Bluemix.
And how does the deployment process differ in Bluemix compared to the traditional way?
#aryanRaj_kary
The concept of spaces[1] is perfect for separating out environments like DEV, FVT & PROD. I don't think there's anything wrong with having a sandbox as well, but the spaces concept in Bluemix should satisfy your needs.
In Bluemix, in terms of HA, you have the choice of two deployment methods. We use an intelligent update service called Active Deploy [2] and we also employ the zero-downtime concept of "Blue-green" deployments [3]. The difference between the two is that in Blue-Green deployments, both versions are never active at the same time, whereas with Active Deploy, minimal traffic is allowed to both versions during the ramp-up phase [4].
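For context, the blue-green idea on Cloud Foundry-based platforms like Bluemix usually comes down to a route switch with the cf CLI, roughly like the sketch below (app names, domain, and hostnames are examples):

```bash
# "myapp-blue" currently serves production traffic on myapp.mybluemix.net.

# 1. Push the new version alongside the old one, on its own temporary route.
cf push myapp-green -n myapp-green

# 2. After smoke-testing the temporary route, attach the production route to the new version.
cf map-route myapp-green mybluemix.net -n myapp

# 3. Detach the production route from the old version; it stops receiving new requests.
cf unmap-route myapp-blue mybluemix.net -n myapp

# 4. Once drained, remove the old version.
cf delete myapp-blue -f
```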
[1] https://console.ng.bluemix.net/docs/admin/orgs_spaces.html#spaceinfo
[2] https://console.ng.bluemix.net/docs/services/ActiveDeploy/index.html
[3] https://console.ng.bluemix.net/docs/manageapps/updapps.html#blue_green
[4] https://console.ng.bluemix.net/docs/services/ActiveDeploy/faq.html#bluegreendeployments