I have a requirement to support backward compatibility.
In the old implementation we fetch credentials through an init container;
in the new implementation we fetch them through a credentials request.
With the new implementation there is no need to run the init container any more, but we still want to support backward compatibility so that customers have the option to keep using the old implementation.
Is it possible to run a Kubernetes init container conditionally, just by providing a flag in the deployment file?
Thanks.
You can do this in two ways:
Use Helm and wrap the init container block in a conditional, setting the flag via the chart's values.yaml or a command-line --set argument (see the sketch below).
Run the init container every time, but pass it an environment variable that indicates whether it should pull credentials. If the variable is set, it pulls the credentials; if it is not set, it terminates without doing anything.
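For the Helm route, here is a minimal sketch of what the conditional could look like; the flag name (legacyCredentials.enabled), the image names, and the script path are assumptions, not your actual chart:

```yaml
# templates/deployment.yaml (excerpt) -- legacyCredentials.enabled is an assumed flag name
spec:
  template:
    spec:
      {{- if .Values.legacyCredentials.enabled }}
      initContainers:
        - name: fetch-credentials                              # old implementation: pull credentials before the app starts
          image: registry.example.com/credentials-fetcher:1.0  # placeholder image
          command: ["/bin/sh", "-c", "/opt/fetch-credentials.sh"]
      {{- end }}
      containers:
        - name: app
          image: registry.example.com/my-app:latest            # placeholder image
```

Customers who still need the old flow would install with --set legacyCredentials.enabled=true (or set it in values.yaml); everyone else gets no init container at all.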
During a pipeline run, providing a deployment environment in a deployment job eliminates the need to provide a service connection manually. My guess is that it either creates a new service connection (SC) at that point, or it created one when the environment was created and keeps reusing it.
Either way, is there a way to find out which service connection is being used, from the pipeline run logs or anywhere else?
In our setup I see a lot of service connections for one environment, and a cleanup is needed to get things in order.
I tried specifying the SC manually along with the environment and it works as expected, so going forward I can use this method. But for the cleanup I'd still like to know which one gets used when none is specified! (None of the auto-created SCs show any execution history, but I know the deployment has happened multiple times.)
Since a Kubernetes resource in an environment references a Kubernetes service connection, you can use this API to get the serviceEndpointId of the Kubernetes resource, which is the ID of the referenced service connection.
GET https://dev.azure.com/{organization}/{project}/_apis/distributedtask/environments/{environmentId}/providers/kubernetes/{resourceId}?api-version=7.0
Then, using the serviceEndpointId from the response of the API above as the endpointId, you can call this API to get the details of the referenced service connection.
GET https://dev.azure.com/{organization}/{project}/_apis/serviceendpoint/endpoints/{endpointId}?api-version=7.0
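For example, assuming a personal access token with read access and placeholder values for the organization, project, environment ID, and resource ID, the two calls can be chained roughly like this (jq is only used to pull serviceEndpointId out of the first response):

```bash
# Placeholders: ORG, PROJECT, ENV_ID, RESOURCE_ID; a PAT is expected in AZDO_PAT.
ORG=myorg PROJECT=myproject ENV_ID=12 RESOURCE_ID=34

# 1. Fetch the Kubernetes resource of the environment and extract its serviceEndpointId.
ENDPOINT_ID=$(curl -s -u ":$AZDO_PAT" \
  "https://dev.azure.com/$ORG/$PROJECT/_apis/distributedtask/environments/$ENV_ID/providers/kubernetes/$RESOURCE_ID?api-version=7.0" \
  | jq -r '.serviceEndpointId')

# 2. Look up the referenced service connection by that ID.
curl -s -u ":$AZDO_PAT" \
  "https://dev.azure.com/$ORG/$PROJECT/_apis/serviceendpoint/endpoints/$ENDPOINT_ID?api-version=7.0" \
  | jq '{id, name, type, url}'
```

The connection named in the second response is the one actually referenced by the environment; the remaining auto-created ones are candidates for cleanup.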
My Kubernetes application is made up of several flavors of nodes: a couple of “scheduler” nodes that send tasks to quite a few more “worker” nodes. For the app to work correctly, all nodes must run exactly the same code version.
The deployment is performed using a standard ReplicaSet, and when my CI/CD kicks in it just does a simple rolling update. This causes a problem, though: during the rolling update, nodes with different code versions coexist for a few seconds, so some tasks processed during that window get wrong results.
Ideally, deploying a new version would create a completely new instance of the application that only communicates with itself and has time to warm its cache; then, at the flick of a switch, this new instance would become active and start receiving new client requests. The old instance would remain active for a few more seconds and then shut down.
I’m using Istio sidecar for mesh communication.
Is there a standard way to do this? How is such a requirement usually handled?
I was in a similar situation. Kubernetes alone cannot satisfy this requirement, and I was not able to find any tool that coordinates multiple deployments together (although Flagger looks promising).
So the only way I found was through CI/CD, Jenkins in my case. I don't have the code, but the idea is the following:
1. Deploy all application Deployments from a single Helm chart. Every Helm release name and the corresponding Kubernetes labels must be based on some sequential number, e.g. the Jenkins $BUILD_NUMBER. The Helm release can be named something like example-app-${BUILD_NUMBER}, and all Deployments must carry the label version: $BUILD_NUMBER. The important part is that your Services should not be part of the Helm chart, because they will be handled by Jenkins.
2. Start your build by detecting the current version of the app (using a bash script, or you can store it in a ConfigMap).
3. Run helm install example-app-${BUILD_NUMBER} with the --atomic flag set. The atomic flag makes sure the release is properly removed on failure. Don't delete the previous version of the app yet.
4. Wait for Helm to complete and, on success, run kubectl set selector service/example-app version=$BUILD_NUMBER. That instantly switches the Kubernetes Service from one version to the other (a sketch follows this list). If you have multiple Services you can issue multiple set selector commands; each one takes effect immediately.
5. Delete the previous Helm release and optionally update the ConfigMap with the new app version.
Depending on your app, you may want to run tests against the non-user-facing Services as part of step 4 (after the Helm release succeeds).
Another good idea is to have preStop hooks on your worker pods so that they can finish their jobs before being deleted.
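To illustrate step 4, here is a minimal sketch of the kind of Service that Jenkins would manage outside the chart; the names mirror the example-app naming above, and the port numbers are placeholders:

```yaml
# Managed by Jenkins, not by the Helm chart.
apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  selector:
    app: example-app
    version: "41"        # currently live release; the pipeline rewrites this on each switch
  ports:
    - port: 80
      targetPort: 8080   # placeholder port
```

Once example-app-42 is installed and healthy, kubectl set selector service/example-app 'app=example-app,version=42' repoints the Service to the new pods in one step (note that set selector replaces the whole selector, so include every label you rely on).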
You should consider a blue/green deployment strategy.
Say I have a container image that contains a large command-line program that is executed from the shell. I have another container that contains a scheduler whose job it is to invoke the first container when it receives a certain signal. For various reasons I don't want to put them in the same container (mainly because the scheduler can invoke many different tools, and different versions of those tools, and I don't want to have to put all the tools and their versions in the same container image.)
I know how to put two containers in the same pod. However, the default behavior is to run both containers at startup. What I want is for the scheduler to decide when to invoke the other container, and to be able to specify its command-line arguments (and ideally environment variables). I also need to know the exit status. Extra credit for capturing stdout/stderr, but I can hack around that with volumes if I need to.
I also know how to do this if the second container were a server, but in this case it's a shell program.
A quick way to do this is:
Run kubectl proxy as part of your scheduler container's startup.
Then create a Kubernetes Job from that pod through the proxied API.
This gives you a lightweight solution in which the desired Job can be queried for its success state, which seems to fulfill your requirements (a sketch of such a Job follows).
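Here is a minimal sketch of the kind of Job the scheduler could POST through the proxy (for example to http://localhost:8001/apis/batch/v1/namespaces/default/jobs); the Job name, image, command, arguments, and environment variable are all placeholders chosen per invocation:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: run-my-tool-12345        # unique per invocation, e.g. suffixed with a run ID
spec:
  backoffLimit: 0                # do not retry, so the Job status reflects the tool's real outcome
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: tool
          image: registry.example.com/my-tool:2.3      # the specific tool and version to run
          command: ["/usr/local/bin/my-tool"]
          args: ["--input", "/data/in", "--verbose"]   # arguments chosen by the scheduler
          env:
            - name: TOOL_MODE
              value: "batch"
```

The scheduler can then poll the Job's status (succeeded/failed) through the same proxy and fetch the tool's output with kubectl logs job/run-my-tool-12345 or the equivalent API call.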
What is the meaning of "mode" of set-AzureDeployment?
-Mode
Specifies the mode of upgrade. Supported values are: "Auto", "Manual", and "Simultaneous".
What does "Auto","Manual", and "Simultaneous" mean?
I am particularly interested in "Simultaneous". Does it mean my package will be deployed to multiple instances simultaneously?
Thanks
Mode specifies the type of update to initiate. Role instances are allocated to update domains when the service is deployed. Updates can be initiated manually in each update domain or initiated automatically in all update domains.
If not specified, the default value is Auto. If set to Manual, WalkUpgradeDomain must be called to apply the update. If set to Auto, the update is automatically applied to each update domain in sequence.
To perform an automatic update of a deployment, call Upgrade Deployment or Change Deployment Configuration with the Mode element set to automatic. The update proceeds from that point without a need for further input. You can call Get Operation Status to determine when the update is complete.
To perform a manual update, first call Upgrade Deployment with the Mode element set to manual. Next, call Walk Upgrade Domain to update each domain within the deployment. You should make sure that the operation is complete by calling Get Operation Status before updating the next domain. For more information, please refer to this link.
One of the new deployment options we now support is the ability to do a “Simultaneous Update” of a Cloud Service (we sometimes also refer to this as the “Blast Option”). When you use this option we bypass the normal upgrade domain walk that is done by default with Cloud Services (where we upgrade parts of the Cloud Service sequentially to avoid ever bringing the entire service down) and instead upgrade all roles and instances simultaneously. With today's release this simultaneous update logic now happens within Windows Azure (on the cloud side), which enables the Cloud Service update to happen much faster. For more information, please refer to this link.
I am particularly interested in "Simultaneous". Does it mean my package will be deployed to multiple instances simultaneously?
The answer is yes.
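For reference, here is a hedged sketch of triggering such a simultaneous (blast) update with the classic Azure Service Management PowerShell module; the service name and the package and configuration paths are placeholders:

```powershell
# Classic (Service Management) Azure PowerShell module; names and paths are placeholders.
Set-AzureDeployment -Upgrade `
    -ServiceName "mycloudservice" `
    -Package ".\MyApp.cspkg" `
    -Configuration ".\ServiceConfiguration.Cloud.cscfg" `
    -Slot "Production" `
    -Mode "Simultaneous"   # all roles and instances are updated at once (the "Blast Option")
```

Use this mode only when a brief outage of the entire service during the update is acceptable, since the sequential update-domain walk that normally keeps part of the service running is skipped.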
I have installed Deis Workflow v2.11 in a GKE cluster, and some of our applications share common values, like a proxy URL and credentials. I can use these values by putting them into environment variables, or even in a .env file.
However, for every new application I need to create a .env file with the shared values and then call
deis config:push
If one of those shared values changes, I need to adjust the configuration of every app and restart them all. I would like to modify the value once in a ConfigMap and, after the change, have Deis restart the applications.
Does anyone know if it is possible to read values from a Kubernetes ConfigMap and put them into Deis environment variables? If so, how do I do it?
I believe what you're looking for is a way to set environment variables globally across all applications. That is currently not implemented. However, please feel free to hack up a PR and we'd likely accept it!
https://github.com/deis/controller/issues/383
https://github.com/deis/controller/issues/1219
Currently there is no support for ConfigMaps in Deis Workflow v2.18.0. We would appreciate a PR into the Hephy Workflow (the open-source fork of Deis Workflow): https://github.com/teamhephy/controller
There is no functionality right now to pick up a ConfigMap in the init scripts of the containers.
You could update the ConfigMap, but each of the applications would need to run kubectl replace -f path/accessible/for/everyone/configmap.yaml to get the variables updated.
So I would say yes, at the Kubernetes level you can do it; just figure out the best way for your apps to update the ConfigMap. I don't have the details of your use case, so I can't suggest specific approaches.
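At the plain Kubernetes level (outside of what Deis manages for you), shared values in a ConfigMap can be injected as environment variables with envFrom. A minimal sketch with hypothetical names (shared-config, example-app):

```yaml
# Hypothetical ConfigMap holding the shared values.
apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-config
data:
  PROXY_URL: "http://proxy.example.internal:3128"
---
# Any plain Deployment can inject every key of the ConfigMap as an environment variable.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: registry.example.com/example-app:latest   # placeholder image
          envFrom:
            - configMapRef:
                name: shared-config
```

Note that environment variables are only read at container start, so after editing the ConfigMap the pods still need a restart (for example kubectl rollout restart deployment/example-app) to see the new values.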