Do we need to explicitly upgrade the Helm chart "Release-1" when we patch a particular object separately, e.g. CronJob "CJ1"?
In my case, I have patched the cron job to run every minute.
I did not, however, upgrade the Helm chart that deployed the cron job.
"kubectl get cj CJ1 -o yaml" does show that the schedule has been changed from the old value to the new one: "* * * * *".
However, the job is now not running at "* * * * *".
When you say patch, I presume you are referring to editing the object with kubectl edit ... or in any other way that applies the change without going through helm upgrade?
Generally speaking, if you follow DevOps and GitOps best practices, any change you make should go through git (be version controlled). If you patch an object separately/manually, then your code no longer represents what you have deployed, so the next time you upgrade the chart you are going to get the version without the patch (lose your changes).
So if you want to keep the changes that you have applied separately/manually then Yes ... change your code, then upgrade the chart.
If in the long term it doesn't matter and you are just playing around ... then you don't have to do anything as the change you want is already in Kubernetes.
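For example, if the schedule is templated from your values file, a minimal sketch of that flow (the chart path, release name, and value key here are just placeholders) would be:

# values file: set the new schedule that your chart templates into the CronJob
cronjob:
  schedule: "* * * * *"

# then roll it out through Helm so the chart/values state matches the cluster
helm upgrade Release-1 ./my-chart -f values.yaml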
My Kubernetes application is made up of several flavors of nodes: a couple of "schedulers" which send tasks to quite a few more "worker" nodes. For this app to work correctly, all the nodes must be running exactly the same code version.
The deployment is performed using a standard ReplicaSet, and when my CI/CD kicks in it just does a simple rolling update. This causes a problem, though, since during the rolling update nodes of different code versions co-exist for a few seconds, so a few tasks during this time get wrong results.
Ideally what I would want is that deploying a new version would create a completely new application that only communicates with itself and has time to warm its cache, then on a flick of a switch this new app would become active and start to get new client requests. The old app would remain active for a few more seconds and then shut down.
I’m using Istio sidecar for mesh communication.
Is there a standard way to do this? How is such a requirement usually handled?
I also had such a situation. Kubernetes alone cannot satisfy your requirement, and I was not able to find any tool that allows coordinating multiple deployments together (although Flagger looks promising).
So the only way I found was by using CI/CD: Jenkins in my case. I don't have the code, but the idea is the following:
1. Deploy all application deployments using a single Helm chart. Every Helm release name and the corresponding Kubernetes labels must be based on some sequential number, e.g. the Jenkins $BUILD_NUMBER. The Helm release can be named like example-app-${BUILD_NUMBER} and all deployments must have the label version: $BUILD_NUMBER. The important part here is that your Services should not be part of your Helm chart, because they will be handled by Jenkins.
2. Start your build by detecting the current version of the app (using a bash script, or you can store it in a ConfigMap).
3. Run helm install example-app-${BUILD_NUMBER} with the --atomic flag set. The atomic flag will make sure that the release is properly removed on failure. Don't delete the previous version of the app yet.
4. Wait for Helm to complete and, in case of success, run kubectl set selector service/example-app version=$BUILD_NUMBER. That will instantly switch the Kubernetes Service from one version to another. If you have multiple Services you can issue multiple set selector commands (each command executes immediately).
5. Delete the previous Helm release and optionally update the ConfigMap with the new app version.
Depending on your app, you may want to run tests on the non-user-facing Services as part of step 4 (after the Helm release succeeds).
Another good idea is to have preStop hooks on your worker pods so that they can finish their jobs before being deleted.
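Putting the steps above together, a rough sketch of what the Jenkins stages could run looks like this (the single Service example-app and the ConfigMap example-app-meta are assumptions; adapt them to your chart and services):

# 1. Detect the currently live version (stored in a ConfigMap in this sketch).
PREVIOUS=$(kubectl get configmap example-app-meta -o jsonpath='{.data.version}')

# 2. Install the new release side by side with the old one.
helm install "example-app-${BUILD_NUMBER}" ./chart --atomic --set version="${BUILD_NUMBER}"

# 3. (Optionally run tests against the new, not yet user-facing pods here.)

# 4. Flip the Service selector to the new version; the switch is instant.
kubectl set selector service/example-app "version=${BUILD_NUMBER}"

# 5. Clean up the old release and record the new version.
helm uninstall "example-app-${PREVIOUS}"
kubectl create configmap example-app-meta --from-literal=version="${BUILD_NUMBER}" \
  --dry-run=client -o yaml | kubectl apply -f -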
You should consider the Blue/Green deployment strategy.
Whenever I run my basic deploy command, everything is redeployed in my environment. Is there any way to tell Helm to only apply things if there were changes made or is this just the way it works?
I'm running:
helm upgrade --atomic MyInstall . -f CustomEnvironmentData.yaml
I didn't see anything in the Helm Upgrade documentation that seemed to indicate this capability.
I don't want to bounce my whole environment unless I have to.
There's no way to tell Helm to do this, but also no need. If you submit an object to the Kubernetes API server that exactly matches something that's already there, generally nothing will happen.
For example, say you have a Deployment object that specifies image: my/image:{{ .Values.tag }} and replicas: 3. You submit this once with tag: 20200904.01. Now you run the helm upgrade command you show, with that tag value unchanged in the CustomEnvironmentData.yaml file. This will in fact go through the deployment controller inside Kubernetes, which sees that it wants 3 pods to exist with the image my/image:20200904.01. Those 3 pods already exist, so it does nothing.
(This is essentially the same as the "don't use the latest tag" advice: if you try to set image: my/image:latest, and redeploy your Deployment with this tag, since the Deployment spec is unchanged Kubernetes won't do anything, even if the version of the image in the registry has changed.)
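The relevant part of such a Deployment template would look roughly like this (a sketch; the chart's value key is assumed to be named tag):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          # rendered by Helm; identical values produce an identical spec,
          # so the deployment controller has nothing to do
          image: my/image:{{ .Values.tag }}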
You should probably use helm diff upgrade
https://github.com/databus23/helm-diff
$ helm diff upgrade -h
Show a diff explaining what a helm upgrade would change.

This fetches the currently deployed version of a release
and compares it to a chart plus values.
This can be used to visualize what changes a helm upgrade will
perform.

Usage:
  diff upgrade [flags] [RELEASE] [CHART]

Examples:
  helm diff upgrade my-release stable/postgresql --values values.yaml

Flags:
  -h, --help                   help for upgrade
      --detailed-exitcode      return a non-zero exit code when there are changes
      --post-renderer string   the path to an executable to be used for post rendering. If it exists in $PATH, the binary will be used, otherwise it will try to look for the executable at the given path
      --reset-values           reset the values to the ones built into the chart and merge in any new values
      --reuse-values           reuse the last release's values and merge in any new values
      --set stringArray        set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
      --suppress stringArray   allows suppression of the values listed in the diff output
  -q, --suppress-secrets       suppress secrets in the output
  -f, --values valueFiles      specify values in a YAML file (can specify multiple) (default [])
      --version string         specify the exact chart version to use. If this is not specified, the latest version is used

Global Flags:
      --no-color   remove colors from the output
Because Kubernetes handles situations where there's a typo in the job spec (and therefore a container image that can't be found) by leaving the job in a running state forever, I've got a process that monitors job events to detect cases like this and deletes the job when one occurs.
I'd prefer to just stop the job so there's a record of it. Is there a way to stop a job?
1) According to the K8S documentation here.
Finished Jobs are usually no longer needed in the system. Keeping them around in the system will put pressure on the API server. If the Jobs are managed directly by a higher level controller, such as CronJobs, the Jobs can be cleaned up by CronJobs based on the specified capacity-based cleanup policy.
Here are the details for the failedJobsHistoryLimit property in the CronJobSpec.
This is another way of retaining the details of failed jobs for a specific duration. The failedJobsHistoryLimit property can be set based on the approximate number of jobs run per day and the number of days the logs have to be retained. Agreed, the Jobs will still be there and put pressure on the API server.
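For illustration, a CronJob spec with that property might look like this (a sketch; the schedule, limits, and image are placeholders, and on clusters older than 1.21 the apiVersion is batch/v1beta1):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cron
spec:
  schedule: "0 * * * *"
  failedJobsHistoryLimit: 5        # keep the last 5 failed Jobs around for inspection
  successfulJobsHistoryLimit: 3    # keep the last 3 successful Jobs
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: job
              image: busybox
              command: ["sh", "-c", "do-something"]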
This is interesting. Once the job completes with a failure, as in the case of a typo in the image name, the pod is getting deleted and the resources are not blocked or consumed anymore. I am not sure exactly what a kubectl job stop would achieve in this case. But when a Job with a proper image runs to success, I can still see the pod in kubectl get pods.
2) Another approach without using the CronJob is to specify the ttlSecondsAfterFinished as mentioned here.
Another way to clean up finished Jobs (either Complete or Failed) automatically is to use a TTL mechanism provided by a TTL controller for finished resources, by specifying the .spec.ttlSecondsAfterFinished field of the Job.
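A minimal sketch of a Job using that field (the TTL value and image are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  ttlSecondsAfterFinished: 3600    # the TTL controller deletes the Job (and its pods) an hour after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: job
          image: busybox
          command: ["sh", "-c", "echo done"]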
Not really, no such mechanism exists in Kubernetes yet afaik.
A workaround is to SSH into the machine and run the following (if you're using Docker):
# Save the logs
$ docker logs <container-id-that-is-running-your-job> > save.log 2>&1
$ docker stop <main-container-id-for-your-job>
It's better to stream logs with something like Fluentd, logspout, or Filebeat and forward them to an ELK or EFK stack.
In any case, I've opened this
You can suspend cronjobs by using the suspend attribute. From the Kubernetes documentation:
https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#suspend
Documentation says:
The .spec.suspend field is also optional. If it is set to true, all
subsequent executions are suspended. This setting does not apply to
already started executions. Defaults to false.
So, to pause a cron you could:
Run kubectl edit cronjob CRON_NAME (if it is not in the default namespace, add "-n NAMESPACE_NAME" at the end) and change "suspend" from false to true.
If you have several crons, you could potentially create a loop using "for" or whatever you like, and have them all changed at once.
Alternatively, you could save the YAML file locally, edit it, and then run:
kubectl create -f cron_YAML
and this would recreate the cron.
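If you prefer a non-interactive one-liner over kubectl edit, the same field can also be flipped with kubectl patch (same effect, just scriptable):

kubectl patch cronjob CRON_NAME -p '{"spec":{"suspend":true}}'
# and to resume it later:
kubectl patch cronjob CRON_NAME -p '{"spec":{"suspend":false}}'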
The other answers hint at the .spec.suspend solution for the CronJob API, which works, but since the OP asked specifically about Jobs it is worth noting the solution that does not require a CronJob.
As of Kubernetes 1.21, there is alpha support for the .spec.suspend field in the Job API as well (see docs here). The feature is behind the SuspendJob feature gate.
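A minimal sketch of a suspended Job using that field (requires the SuspendJob feature gate on 1.21; the image is a placeholder):

apiVersion: batch/v1
kind: Job
metadata:
  name: example-suspended-job
spec:
  suspend: true                    # no pods are created until this is set back to false
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: job
          image: busybox
          command: ["sh", "-c", "echo done"]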
I like the working methodology of Kubernetes: use a self-contained image and pass the configuration in a ConfigMap, mounted as a volume.
Now this worked great until I tried to do the same thing with a Liquibase container. The SQL is very long, ~1.5K lines, and Kubernetes rejects it as too long.
Error from Kubernetes:
The ConfigMap "liquibase-test-content" is invalid: metadata.annotations: Too long: must have at most 262144 characters
I thought of passing the .sql files via a hostPath, but as I understand it, the hostPath's content is probably not going to be there.
Is there any other way to pass configuration from the K8s directory to pods? Thanks.
The error you are seeing is not about the size of the actual ConfigMap contents, but about the size of the last-applied-configuration annotation that kubectl apply automatically creates on each apply. If you use kubectl create -f foo.yaml instead of kubectl apply -f foo.yaml, it should work.
Please note that in doing this you will lose the ability to use kubectl diff and do incremental updates (without replacing the whole object) with kubectl apply.
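For example (the file name here is just a placeholder):

kubectl create -f liquibase-configmap.yaml
# subsequent updates then go through replace rather than apply:
kubectl replace -f liquibase-configmap.yaml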
Since 1.18 you can use server-side apply to circumvent the problem.
kubectl apply --server-side=true -f foo.yml
where server-side=true runs the apply command on the server instead of the client.
This will properly show conflicts with other actors, including client-side apply, and thus fail:
Apply failed with 4 conflicts: conflicts with "kubectl-client-side-apply" using apiextensions.k8s.io/v1:
- .status.conditions
- .status.storedVersions
- .status.acceptedNames.kind
- .status.acceptedNames.plural
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
manifest to remove references to the fields that should keep their
current managers.
* You may co-own fields by updating your manifest to match the existing
value; in this case, you'll become the manager if the other manager(s)
stop managing the field (remove it from their configuration).
See http://k8s.io/docs/reference/using-api/api-concepts/#conflicts
If the changes are intended you can simply use the first option:
kubectl apply --server-side=true --force-conflicts -f foo.yml
You can use an init container for this. Essentially, put the .sql files on GitHub or S3 or really any location you can read from and populate a directory with it. The semantics of the init container guarantee that the Liquibase container will only be launched after the config files have been downloaded.
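A minimal sketch of that pattern (the image, download URL, and paths are all assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: liquibase
spec:
  volumes:
    - name: sql
      emptyDir: {}
  initContainers:
    - name: fetch-sql
      image: curlimages/curl
      # download the changelog from wherever you host it (S3, GitHub, an artifact store, ...)
      command: ["sh", "-c", "curl -fsSL https://example.com/changelog.sql -o /sql/changelog.sql"]
      volumeMounts:
        - name: sql
          mountPath: /sql
  containers:
    - name: liquibase
      image: liquibase/liquibase
      # only starts after the init container has succeeded; connection flags omitted in this sketch
      command: ["liquibase", "--changeLogFile=/sql/changelog.sql", "update"]
      volumeMounts:
        - name: sql
          mountPath: /sql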
I have modified kubectl's edit command (/pkg/kubectl/cmd/edit.go) to restart all active pods immediately after the edit is done, so that the new changes are reflected right away. (Downtime is acceptable in my use case.) Now I want to include this feature in the REST API, so that when I call
PATCH /api/v1/namespaces/{namespace}/replicationcontrollers/{name}
the patch should be applied to the replicationController and all the pods maintained by the corresponding replication controller should be restarted. However, I can't find the file that I should edit in order to alter the REST API. Where can I find these files, and is there a better way to achieve what I am currently doing? (Edits to the RC should be reflected immediately in the pods.)
We're actually implementing this feature in the Deployment API. You want the Recreate update strategy. It will be in Kubernetes 1.2, but you can try it now, in v1.2.0-alpha.6 or by building from HEAD.
https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/deployments.md
That documentation is a little out of date, since Deployment is under active development. For the current API, please see
https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/extensions/v1beta1/types.go
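In that API the strategy looks roughly like this (a sketch against the extensions/v1beta1 Deployment current at the time; names and image are placeholders):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-rc-replacement
spec:
  replicas: 3
  strategy:
    type: Recreate        # kill all existing pods before new ones are created
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my/image:v2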