sync     sync all resources from state file (repos, releases and chart deps)
apply    apply all resources from state file only when there are changes
sync
The helmfile sync sub-command syncs your cluster state as described in your helmfile ... Under
the covers, Helmfile executes helm upgrade --install for each release declared in the
manifest, by optionally decrypting secrets to be consumed as helm chart values. It also
updates specified chart repositories and updates the dependencies of any referenced local
charts.
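Roughly speaking, and only as a hedged sketch (the repository, release, and local chart path below are made up, and helmfile's actual internals differ), this is the kind of helm work sync performs for the entries in the state file:
# update the chart repositories declared in the helmfile
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
# refresh the dependencies of any referenced local chart
helm dependency update ./charts/my-local-chart
# run one upgrade per declared release
helm upgrade --install httpd bitnami/apache --namespace default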
For Helm 2.9+ you can use a username and password to authenticate to a remote repository.
apply
The helmfile apply sub-command begins by executing diff. If diff finds that there are any changes,
sync is executed. Adding --interactive instructs Helmfile to request your confirmation before sync.
An expected use-case of apply is to schedule it to run periodically, so that you can auto-fix skews
between the desired and the current state of your apps running on Kubernetes clusters.
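For example, a hedged sketch of such a schedule as a cron entry (the path, interval, and log destination are placeholders):
# reconcile the cluster with the state file every 5 minutes
*/5 * * * * cd /path/to/helmfile-repo && helmfile apply >> /var/log/helmfile-apply.log 2>&1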
I went through the Helmfile repo Readme to figure out the difference between helmfile sync and helmfile apply. It seems that unlike the apply command, the sync command doesn't do a diff and just helm-upgrades the hell out of all releases 😃. But from the word sync, you'd expect the command to apply only those releases that have changed. There is also mention of a potential application of helmfile apply: periodically syncing releases. Why not use helmfile sync for this purpose? Overall, the difference didn't become crystal clear, and I thought there could probably be more to it. So, I'm asking.
Consider the use case where you have a Jenkins job that gets triggered every 5 minutes and in that job you want to upgrade your helm chart, but only if there are changes.
If you use helmfile sync, which calls helm upgrade --install every five minutes, you will end up incrementing the chart revision every five minutes.
$ helm upgrade --install httpd bitnami/apache > /dev/null
$ helm list
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
httpd 1 Thu Feb 13 11:27:14 2020 DEPLOYED apache-7.3.5 2.4.41 default
$ helm upgrade --install httpd bitnami/apache > /dev/null
$ helm list
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
httpd 2 Thu Feb 13 11:28:39 2020 DEPLOYED apache-7.3.5 2.4.41 default
So, each helmfile sync will result in a new revision. Now if you were to run helmfile apply, which first checks for diffs and only then (if any are found) calls helmfile sync, which in turn calls helm upgrade --install, this will not happen.
Everything in the other answers is correct. However, there is one additional important thing to understand with sync vs apply:
sync will call helm upgrade on all releases. Thus, a new helm secret will be created for each release, and every release's revision will be incremented by one. Helm 3 uses a three-way strategic merge algorithm to compute patches, meaning it will revert manual changes made to resources in the live cluster that are handled by Helm;
apply will first use the helm-diff plugin to determine which releases changed, and only run helm upgrade on the changed ones. However, helm-diff only compares the previous Helm state to the new one; it doesn't take the state of the live cluster into account. This means that if you modified something manually, helmfile apply (or more precisely the helm-diff plugin) may not detect it, and will leave it as-is.
Thus, if you always modify the live cluster through helm, helmfile apply is the way to go. However, if you can have manual changes and want to ensure the live state is coherent with what is defined in Helm/helmfile, then helmfile sync is necessary.
If you want more details, check out my blog post helmfile: difference between sync and apply (Helm 3)
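As a hedged illustration of that difference (the deployment name is made up, and the exact behaviour depends on the chart):
# manual change made directly in the live cluster, outside Helm
kubectl scale deployment httpd-apache --replicas=5
# helm-diff compares the stored Helm state with the newly rendered chart, not the
# live cluster, so it typically reports no changes and the upgrade is skipped
helmfile apply
# sync always runs helm upgrade --install; per the three-way merge behaviour
# described above, the manual replica change is reverted to the chart's value
helmfile sync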
In simple words, this is how both of them work:
helmfile sync calls helm upgrade --install
helmfile apply calls helm upgrade --install if and only if helm diff returns some changes.
So, generally, helmfile apply would be faster and I suggest using it most of the time.
But take into consideration that if someone has manually deleted any deployments or configmaps from the chart, helm diff won't see any changes, hence helmfile apply won't do anything and these resources will stay deleted, while helmfile sync will recreate them, restoring the original chart configuration.
We have run into one significant issue that is worth keeping in mind.
Sometimes a sync or apply operation can fail due to:
A timeout with wait: true, e.g. the k8s cluster needs to add more nodes and the operation takes longer than expected (but eventually everything is deployed).
A temporary error in a postsync hook.
In these cases a simple retry of the deployment job in the pipeline would solve the issue, but a subsequent helmfile apply will skip both the helm upgrade and the hook execution, even if the release is in a failed status.
So my conclusions are:
apply is usually faster but can lead to situations where manual intervention (outside the CI/CD logic) is required.
sync is more robust (CI/CD friendly) but usually slower.
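One hedged workaround for the retry problem described above (this fallback is our own pipeline convention, not a helmfile feature):
# try the fast path first; if apply fails (e.g. a timed-out wait or a flaky postsync
# hook), fall back to an unconditional sync so the upgrade and hooks are re-run
if ! helmfile apply; then
  helmfile sync
fi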
Related
When I run kubectl delete -f deployment.yaml, the CLI reports that the deployment is deleted. The pod also gets into a terminating state. But a new pod is again created with the same deployment and replica set.
On further digging I found out that the deployment and RS are not being removed. Any reason why the deployment and RS wouldn't be removed? Why would the pod be terminated if the deployment isn't removed?
Any leads are appreciated.
As the OP confirmed in the comments that they are running Argo CD, the recreation of the resources is expected behaviour if Argo CD is running in auto-sync mode for the impacted namespace.
Here is a short snippet from the documentation:
Argo CD has the ability to automatically sync an application when it detects differences between the desired manifests in Git, and the live state in the cluster. A benefit of automatic sync is that CI/CD pipelines no longer need direct access to the Argo CD API server to perform the deployment. Instead, the pipeline makes a commit and push to the Git repository with the changes to the manifests in the tracking Git repo.
Solution: you can disable auto-sync, monitor the delta, and approve syncs manually. This is something decided at the project level. You can read about it here.
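If you go that route, the Argo CD CLI can switch an application to manual sync; the application name below is a placeholder:
# disable automated sync so out-of-band deletions are not auto-reverted
argocd app set my-app --sync-policy none
# review the drift and sync manually when ready
argocd app diff my-app
argocd app sync my-app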
I have a scenario like below,
Have two releases - Release-A and Release-B.
Currently, I am on Release-A and need an upgrade of all the microservices to Release-B.
I tried performing the helm upgrade of microservice - "mymicroservice" with the below command to deliver Release-B.
helm --kubeconfig /home/config upgrade --namespace testing --install --wait mymicroservice mymicroservice-release-b.tgz
Because of some issue, the deployment object failed to install and went into an error state.
Observing this, I perform the below rollback command.
helm --kubeconfig /home/config --namespace testing rollback mymicroservice
Due to some issue (maybe an intermittent system failure or user behaviour), Release-A's deployment object also went into a failed/CrashLoopBackOff state. Although the helm rollback reports success, the deployment object still has not entered the running state.
Once I have made the necessary corrections, I retry the rollback. As the deployment spec is already up to date according to Helm, it never attempts to re-install the deployment objects even if they are in a failed state.
Is there any option with Helm to handle the above scenarios?
I tried the --force flag, but there are other errors related to replacing the Service object in the microservice when using the --force approach.
Rollback "mymicroservice -monitoring" failed: failed to replace object: Service "mymicroservice-monitoring" is invalid: spec.clusterIP: Invalid value: "": field is immutable
Maybe this can help you out:
Always use the helm upgrade --install command. I've seen you're already using it, so you're doing well. This installs the charts if they're not present and upgrades them if they're present.
Use --atomic flag to rollback changes in the event of a failed operation during helm upgrade.
And the --cleanup-on-fail flag: it allows Helm to delete newly created resources during a rollback in case the rollback fails.
From doc:
--atomic: if set, upgrade process rolls back changes made in case of failed upgrade. The --wait flag will be set automatically if --atomic is used
--cleanup-on-fail allow deletion of new resources created in this upgrade when upgrade fails
There are cases where an upgrade creates a resource that was not present in the last release. Setting this flag allows Helm to remove those new resources if the release fails. The default is to not remove them (Helm tends to avoid destruction-as-default, and give users explicit control over this)
https://helm.sh/docs/helm/helm_upgrade/
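Putting those flags together with your original command, a hedged sketch would be:
helm --kubeconfig /home/config upgrade --install --atomic --cleanup-on-fail \
  --namespace testing --wait mymicroservice mymicroservice-release-b.tgz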
IIRC, helm rollback rolls back to the previous version, whether it is good or not, so if your previous attempts resulted in a failure and you try to roll back, you will roll back to a broken version.
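If you need to return to a specific known-good revision rather than just the previous one, you can pick it explicitly (the revision number below is illustrative):
# list the release's revisions, then roll back to a chosen one
helm --kubeconfig /home/config --namespace testing history mymicroservice
helm --kubeconfig /home/config --namespace testing rollback mymicroservice 1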
I'm using helm 3.4.2 to upgrade my charts on my AKS cluster and I saw that every time I deploy something new, it creates a new secret called sh.helm.v... This is the first time I'm using helm.
I was reading the docs and found that in version 3.x helm uses Secrets as the default storage driver. Cool, but every time I deploy it creates a new secret and I'm not sure if it is best to keep them all in my cluster.
So, should I keep them all in my cluster? Like, every time I deploy something, it creates a secret that lives there,
or
Can I remove the older ones? Like, deploy v5 now and erase v1, v2, v3 and keep v4 and v5. If it's OK to do that, does anyone have a clue how to do it? Using bash or kubectl?
Thanks a lot!
So yes, there are a few major changes in Helm 3 compared to Helm 2.
Secrets are now used as the default storage driver
In Helm 3, Secrets are now used as the default storage driver. Helm 2 used ConfigMaps by default to store release information. In
Helm 2.7.0, a new storage backend that uses Secrets for storing
release information was implemented, and it is now the default
starting in Helm 3.
Also
Release Names are now scoped to the Namespace
In Helm 3, information about a particular release is now stored in the
same namespace as the release itself. With this greater alignment to
native cluster namespaces, the helm list command no longer lists all
releases by default. Instead, it will list only the releases in the
namespace of your current kubernetes context (i.e. the namespace shown
when you run kubectl config view --minify). It also means you must
supply the --all-namespaces flag to helm list to get behaviour similar
to Helm 2.
So, should I keep them all in my cluster? Like, every time I deploy
something, it creates a secret that lives there, or
can I remove the older ones?
I don't think it's good practice to remove anything manually. If it is not strictly necessary, better not to touch them. However, you can delete unused ones if you are sure you will not need the old revisions in the future.
# To list all secrets created by helm:
kubectl get secret -l "owner=helm" --all-namespaces
# To delete a revision, you can simply remove the appropriate secret:
kubectl delete secret -n <namespace> <secret-name>
Btw (just FYI), taking into account the fact that Helm 3 is scoped to namespaces, you can simply delete a deployment by deleting its corresponding namespace.
And the last remark, maybe it will be useful for you: you can pass --history-max to helm upgrade to
limit the maximum number of revisions saved per release. Use 0 for no
limit (default 10)
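For example (the release and chart names are placeholders):
# keep at most 5 revision secrets per release
helm upgrade --install --history-max 5 httpd bitnami/apache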
In my CI/CD pipeline, I am:
generating a new image with a unique tag, e.g. foo:dev-1339, and pushing it to my image repo (ECR).
Then I am using a rolling update to update my deployment.
kubectl rolling-update frontend --image=foo:dev-1339
But I have a conflict here.
What if I also need to update some part of my deployment object as stored in a deployment.yaml file? Let's say harden a health check or add a parameter.
Then when I re-apply my deployment object as a whole, it will not be in sync with the current replica set; the tag will get reverted and I will lose the image update that exists in the cluster.
How do I avoid this race condition?
A typical solution here is to use a templating layer like Helm or Kustomize.
In Helm, you'd keep your Kubernetes YAML specifications in a directory structure called a chart, but with optional templating. You can specify things like
image: myname/myapp:{{ .Values.tag | default "latest" }}
and then deploy the chart with
helm install myapp --name myapp --set tag=20191211.01
Helm keeps track of these values (in Secret objects in the cluster) so they don't get tracked in source control. You could check in a YAML-format file with settings and use helm install -f to reference that file instead.
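A hedged sketch of that values-file flow (the file name and tag value are made up, using the same Helm 2 syntax as the command above):
# values file tracked in source control
cat > ci-values.yaml <<'EOF'
tag: "20191211.01"
EOF
helm install myapp --name myapp -f ci-values.yaml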
In Kustomize, your CI tool would need to create a kustomization.yaml file for per-deployment settings, but then could set
images:
- name: myname/myapp
newTag: 20191211.01
If you trust your CI tool to commit to source control then it can check this modified file in as part of its deployment sequence.
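A hedged alternative your CI tool could run instead of writing the file by hand, assuming the kustomize CLI is available and you are in the directory containing kustomization.yaml:
# update the image tag recorded in kustomization.yaml
kustomize edit set image myname/myapp=myname/myapp:20191211.01
# render and apply the result
kustomize build . | kubectl apply -f -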
Imperative vs Declarative workflow
There are two fundamental ways of using kubectl to apply changes to your cluster. The imperative way, where you run commands directly, is a good fit for experimentation and development environments. kubectl rolling-update is an example of an imperative command. See Managing Kubernetes using Imperative Commands.
For a production environment, it is recommended to use a declarative workflow: edit manifest files, store them in a Git repository, and automatically start a CI/CD job when you commit or merge. kubectl apply -f <file>, or more interestingly kubectl apply -k <dir>, is an example of this workflow. See Declarative Management using Config files or, more interestingly, Declarative Management using Kustomize.
CICD for building image and deployment
Building an artifact from source code, including a container image, may be done in a CI/CD pipeline. Managing application config and applying it to the Kubernetes cluster may also be done in a CI/CD pipeline. You may want to automate it all, e.g. for doing Continuous Deployment, and combine both pipelines into a single long pipeline. This is a more complicated setup and there is no single answer on how to do it. When the build part is done, it may trigger an update of the image field in the app configuration repository to trigger the configuration pipeline.
Unfortunately there is no solution, either from the command line or through the YAML files.
As per the doc here, "...a Deployment is a higher-level controller that automates rolling updates of applications declaratively, and therefore is recommended" over the use of Replication Controllers and kubectl rolling-update. Updating the image of a Deployment will trigger Deployment's rollout.
An approach could be to update the Deployment configuration YAML (or JSON) under version control in the source repo and apply the changed Deployment configuration from version control to the cluster.
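A minimal sketch of that flow, assuming the image line in deployment.yaml matches the sed pattern below (the tag comes from the question, the pattern itself is illustrative):
# CI step: bump the image tag in the checked-in manifest, then apply it
sed -i 's|image: foo:.*|image: foo:dev-1339|' deployment.yaml
kubectl apply -f deployment.yaml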
Is there any command to revert back to previous configuration on a resource?
For example, if I have a Service kind resource created declaratively, and then I change the ports manually, how can I discard live changes so the original definition that created the resource is reapplied?
Is there any tracking of previously applied configs? It would be even nicer if we could say: reconfigure my service to the currently applied config minus 2 versions.
EDIT: I know deployments have rollout options, but I am wondering about a Kind-wise mechanism
Since you're asking explicitly about the last-applied-configuration annotation...
Very simple:
kubectl apply view-last-applied deployment/foobar-module | kubectl apply -f -
Given that apply composes via stdin so flexibly, there's no dedicated kubectl apply revert-to-last-applied subcommand; it would be a redundant reimplementation of the simple pipe above.
One could also suspect that such a built-in revert could never be made perfect (as Nick_Kh notes), for complicated reasons. A subcommand named revert would evoke a lot of expectations from users which it could never fulfill.
So we get a simplified approximation: a spec.bak saved in the resource's annotations, ready to be re-apply'd.
Actually, Kubernetes does not support a rollback option for built-in resources besides Deployments and DaemonSets.
However, you can consider using Helm, which is a well-known package manager for Kubernetes. Helm provides a mechanism for restoring the previous state of your package release, covering all of the release's object resources.
Helm exposes this feature via the helm rollback command:
helm rollback [flags] [RELEASE] [REVISION]
Full command options you can find in the official Helm Documentation.