Helm rollback hook from current release - kubernetes

Let's say I have two versions of chart Foo - v1 and v2. I had installed v1 (as revision 1) then upgraded to v2 (revision 2).
Now I'd like to roll back to the first revision (helm rollback Foo 1). Is there any way to run a job defined in v2 at some point of the rollback, after the v1 resources are restored?
It must perform some actions on the v1 resources, because of backwards-incompatible changes made in v2.
I'd assume that a pre-rollback hook defined in v2 should do the job. Unfortunately, the chart lifecycle documentation is a bit confusing to me.
I tried to use
annotations:
  "helm.sh/hook": post-rollback
as suggested in the answers. Unfortunately, when I roll back from v2 to v1, v1's version of the pre/post-rollback job is executed. I need to execute the job defined in the v2 chart.

The following documentation and examples should clear up your confusion -
https://helm.sh/docs/topics/charts_hooks/#the-available-hooks
https://helm.sh/docs/topics/charts_hooks/#writing-a-hook
tl;dr
Add the following annotation to the job you want to execute:
annotations:
  "helm.sh/hook": post-rollback

pre-rollback executes before any resources are rolled back. Your desired state is to have that job run on the already-restored resources, so you have to use the post-rollback hook, as described in the documentation:
post-rollback: Executes on a rollback request after all resources have been modified
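For reference, a minimal Job carrying that annotation might look like the sketch below (the names, image, and delete policy are assumptions, not taken from the question):
apiVersion: batch/v1
kind: Job
metadata:
  name: adapt-v1-resources
  annotations:
    "helm.sh/hook": post-rollback
    "helm.sh/hook-delete-policy": hook-succeeded   # remove the Job once it has succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: adapt
          image: example/migration:latest          # placeholder image
          command: ["sh", "-c", "echo adapt the restored v1 resources here"]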

No, it's not possible: Helm always uses the hooks from the target release, in this case from v1.

Whole Application level rolling update

My Kubernetes application is made of several flavors of nodes: a couple of “schedulers” which send tasks to quite a few more “worker” nodes. For this app to work correctly, all the nodes must be running exactly the same code version.
The deployment is performed using a standard ReplicaSet, and when my CI/CD kicks in it just does a simple rolling update. This causes a problem, though: during the rolling update, nodes of different code versions co-exist for a few seconds, so a few tasks during this time get wrong results.
Ideally, what I would want is that deploying a new version creates a completely new application that only communicates with itself and has time to warm its cache; then, at the flick of a switch, this new app would become active and start receiving client requests. The old app would remain active for a few more seconds and then shut down.
I’m using Istio sidecar for mesh communication.
Is there a standard way to do this? How is such a requirement usually handled?
I also had such a situation. Kubernetes alone cannot satisfy your requirement, and I was not able to find any tool that can coordinate multiple Deployments together (although Flagger looks promising).
So the only way I found was through CI/CD, Jenkins in my case. I don't have the code, but the idea is the following:
1. Deploy all application Deployments using a single Helm chart. Every Helm release name and the corresponding Kubernetes labels must be based on some sequential number, e.g. the Jenkins $BUILD_NUMBER. The Helm release can be named like example-app-${BUILD_NUMBER} and all Deployments must have the label version: $BUILD_NUMBER. The important part here is that your Services should not be part of your Helm chart, because they will be handled by Jenkins.
2. Start your build by detecting the current version of the app (using a bash script, or you can store it in a ConfigMap).
3. Run helm install example-app-${BUILD_NUMBER} with the --atomic flag set. The atomic flag makes sure that the release is properly removed on failure. Don't delete the previous version of the app yet.
4. Wait for Helm to complete and, in case of success, run kubectl set selector service/example-app version=$BUILD_NUMBER. That will instantly switch the Kubernetes Service from one version to the other. If you have multiple Services you can issue multiple set selector commands (each command executes immediately).
5. Delete the previous Helm release and optionally update the ConfigMap with the new app version.
Depending on your app, you may want to run tests on non-user-facing Services as part of step 4 (after the Helm release succeeds).
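A rough shell sketch of those steps, assuming Helm 3, a chart in ./chart, and a ConfigMap named example-app-version for tracking the current version (all of those names are assumptions):
#!/usr/bin/env bash
set -euo pipefail
BUILD_NUMBER="$1"

# Step 2: detect the currently live version (empty on the first deploy)
PREVIOUS="$(kubectl get configmap example-app-version -o jsonpath='{.data.current}' 2>/dev/null || true)"

# Step 3: install the new stack next to the old one; --atomic removes the release on failure.
# The chart is assumed to template the version label from .Values.version.
helm install "example-app-${BUILD_NUMBER}" ./chart --atomic --set version="${BUILD_NUMBER}"

# Step 4: flip the Service selector to the new version (run smoke tests first if needed)
kubectl set selector service/example-app "version=${BUILD_NUMBER}"

# Step 5: remove the old release and record the new version
if [ -n "${PREVIOUS}" ]; then
  helm uninstall "example-app-${PREVIOUS}"
fi
kubectl create configmap example-app-version --from-literal=current="${BUILD_NUMBER}" \
  --dry-run=client -o yaml | kubectl apply -f -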
Another good idea is to have preStop hooks on your worker pods so that they can finish their jobs before being deleted.
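For example, a worker pod could drain its in-flight tasks in a preStop hook before termination (the drain script here is hypothetical):
spec:
  terminationGracePeriodSeconds: 120
  containers:
    - name: worker
      image: example/worker:latest
      lifecycle:
        preStop:
          exec:
            command: ["/bin/sh", "-c", "/drain.sh"]   # hypothetical script that waits for in-flight tasks to finish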
You should consider the Blue/Green deployment strategy.

Kubectl patch $deleteFromPrimitiveList directive

I was searching for a way to remove a specific value from a list on a pod through a patch, specifically from the securityContext.capabilities attribute. At first I came across the JSON patch remove limitation, which requires an index, but after some more digging I found the $deleteFromPrimitiveList directive used in the strategic merge patch type. The thing is, this directive is not documented anywhere in the official documentation and only has a couple of hits in forums and in the source code itself. This is what I ended up with, which is working for me:
patch.yaml:
spec:
  template:
    spec:
      containers:
        - name: test
          securityContext:
            capabilities:
              $deleteFromPrimitiveList/add: ["SYS_RAWIO"]
patch command:
kubectl patch deployment test --patch="$(cat patch.yaml)"
My question is: should I use this, and is it officially supported? If so, is there a minimum cluster version required?
And is there a reason it's not documented anywhere?
Thanks
It seems to be officially supported, but not well documented.
The best documentation I could find was a markdown file in the community repo that covers this and other strategic merge patch directives.
This documentation specifically calls out backward compatibility:
Changes to the strategic merge patch must be backwards compatible such that patch requests valid in previous versions continue to be valid. That is, old patch formats sent by old clients to new servers must continue to function correctly.
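To illustrate what the directive does: assuming the container originally grants a couple of added capabilities, the patch above removes only the listed value and leaves the rest of the list intact:
# before the patch (container fragment)
securityContext:
  capabilities:
    add: ["NET_ADMIN", "SYS_RAWIO"]

# after applying the $deleteFromPrimitiveList/add patch
securityContext:
  capabilities:
    add: ["NET_ADMIN"]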

Argo Workflows semaphore with value 0

I'm learning about semaphores in Argo Workflows, to avoid concurrent workflows using the same resource.
My use case is that I have several external resources, each of which only one workflow can use at a time. So far so good, but sometimes a resource needs some maintenance, and during that period I don't want Argo to start any workflow.
I guess I have two options:
I tested manually setting the semaphore value in the ConfigMap to 0, but Argo started one workflow anyway.
I can start a workflow that runs forever, until it is deleted, claiming the synchronization lock, but that adds the overhead of having workflows running that don't do anything.
So I wonder how it is supposed to work if I set the semaphore value to 0; I think it should not start any workflow then, since it says 0. Does anyone have any info about this?
These are the steps I carried out:
First I apply my ConfigMap with kubectl apply -f.
I then submit some workflows, and since they all use the same semaphore, Argo starts one and the rest are executed in order, one at a time.
I then change the value of the semaphore to 0 with kubectl edit configmap.
I submit a new workflow, which Argo then executes anyway.
Perhaps Argo does not reload the ConfigMap when I update it through kubectl edit? I would like to update the ConfigMap programmatically in the future, but used kubectl edit for testing now.
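For reference, the setup being described looks roughly like this; the ConfigMap, key, and workflow names here are assumptions:
# ConfigMap holding the semaphore limit
apiVersion: v1
kind: ConfigMap
metadata:
  name: semaphore-config
data:
  external-resource: "0"   # a limit of 0 should stop new workflows from acquiring the lock
---
# Workflow fragment referencing the semaphore
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: resource-user-
spec:
  entrypoint: main
  synchronization:
    semaphore:
      configMapKeyRef:
        name: semaphore-config
        key: external-resource
  templates:
    - name: main
      container:
        image: alpine:3.18
        command: [sh, -c, "echo using the external resource"]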
Quick fix: after applying the ConfigMap change, cycle the workflow-controller pod. That will force it to reload semaphore state.
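Assuming the controller runs as the workflow-controller Deployment in the argo namespace (the default install), cycling it can be as simple as:
kubectl -n argo rollout restart deployment workflow-controller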
I couldn't reproduce your exact issue. After using kubectl edit to set the semaphore to 0, any newly submitted workflows remained Pending.
I did encounter an issue where using kubectl edit to bump up the semaphore limit did not automatically kick off any of the Pending workflows. Cycling the workflow controller pod allowed the workflows to start running again.
Besides using the quick fix, I'd recommend submitting an issue. Synchronization is a newer feature, and it's possible it's not 100% robust yet.

upgrade helm chart upon patching one of its objects

Do we need to explicitly upgrade the Helm release "Release-1" when we patch one of its objects separately, e.g. the CronJob "CJ1"?
In my case, I have patched the CronJob to run every minute.
I did not, however, upgrade the Helm chart that deployed the CronJob.
kubectl get cj CJ1 -o yaml does show that the schedule has been changed from the older schedule to the new one: "* * * * *".
However, the job is now not running at "* * * * *".
When you say patch, I presume you are referring to editing the object with kubectl edit ... or in any other way that applies the change without going through helm upgrade?
Generally speaking, if you follow DevOps and GitOps best practices, any change you make should go through Git (be version controlled). If you patch an object separately/manually, then your code no longer represents what you have deployed, so the next time you upgrade the chart you are going to get the version without the patch (and lose your changes).
So if you want to keep the changes that you have applied separately/manually, then yes: change your code, then upgrade the chart.
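For example, assuming the schedule is exposed as a chart value (the value name, chart path, and release name below are assumptions), the Helm-managed way to make the same change would be:
# values.yaml of the chart
cronjob:
  schedule: "* * * * *"

# or override the value directly and upgrade the release
helm upgrade release-1 ./chart --set cronjob.schedule="* * * * *"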
If in the long term it doesn't matter and you are just playing around ... then you don't have to do anything as the change you want is already in Kubernetes.

Edit the REST API in Kubernetes source

I have modified kubectl's edit command (/pkg/kubectl/cmd/edit.go) to restart all active pods so that changes are reflected immediately after the edit is done (downtime is acceptable in my use case). Now I want to add this feature to the REST API, so that when I call
PATCH /api/v1/namespaces/{namespace}/replicationcontrollers/{name}
the patch is applied to the ReplicationController and all the pods maintained by that replication controller are restarted. However, I can't find the files that I should edit in order to alter the REST API. Where can I find these files, and is there a better way to achieve what I am currently doing? (Edits to the RC should be reflected immediately in the pods.)
We're actually implementing this feature in the Deployment API. You want the Recreate update strategy. It will be in Kubernetes 1.2, but you can try it now, in v1.2.0-alpha.6 or by building from HEAD.
https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/deployments.md
That documentation is a little out of date, since Deployment is under active development. For the current API, please see
https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/extensions/v1beta1/types.go
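For reference, the Recreate strategy on a Deployment looks roughly like this (shown with the current apps/v1 API rather than the extensions/v1beta1 API linked above; names and image are placeholders). All old pods are terminated before new ones are created, which gives the brief downtime you said is acceptable:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 3
  strategy:
    type: Recreate        # kill all existing pods before creating new ones
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: example
          image: example/app:latest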