Observing weird kubernetes behavior while deleting using yaml - kubernetes

When I run kubectl delete -f deployment.yaml, the CLI reports that the deployment is deleted. The pod also goes into the Terminating state. But a new pod is then created again for the same Deployment and ReplicaSet.
On further digging I found that the Deployment and ReplicaSet are not actually being removed. Any reason why they wouldn't be removed? And why would the pod be terminated if the Deployment isn't removed?
Any leads are appreciated.

As the OP confirmed in the comments, they are running Argo CD, so the recreation of the resources is expected behaviour if Argo CD is running in auto-sync mode for the affected namespace.
Here is a short snippet from the documentation:
Argo CD has the ability to automatically sync an application when it detects differences between the desired manifests in Git, and the live state in the cluster. A benefit of automatic sync is that CI/CD pipelines no longer need direct access to the Argo CD API server to perform the deployment. Instead, the pipeline makes a commit and push to the Git repository with the changes to the manifests in the tracking Git repo.
Solution: you can disable auto-sync, monitor the delta, and approve each sync manually. This is something decided at the project level. You can read about it here.
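For illustration, this is roughly what the relevant part of an Argo CD Application manifest looks like (the application name, repo URL, and namespaces below are placeholders; the syncPolicy field names come from the Argo CD docs):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                                # placeholder
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/repo.git     # placeholder
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: my-namespace                   # placeholder
  # Leaving out the `automated` block disables auto-sync, so manual
  # deletions are not reverted and you approve each sync yourself:
  syncPolicy: {}
  # With auto-sync enabled it would instead look like:
  # syncPolicy:
  #   automated:
  #     prune: true
  #     selfHeal: true
```

With auto-sync off, you review the diff in the UI (or `argocd app diff`) and trigger the sync manually.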

Related

Deleting kubernetes yaml: how to prevent old objects from floating around?

I'm working on a continuous deployment routine for a Kubernetes application: every time I push a git tag, a GitHub Action is activated which calls kubectl apply -f kubernetes to apply a bunch of YAML Kubernetes definitions.
Let's say I add YAML for a new service and deploy it -- kubectl will add it.
But then later on, I simply delete the YAML for that service and redeploy -- kubectl will NOT delete it.
Is there any way that kubectl can recognize that the service YAML is missing, and respond by deleting the service automatically during continuous deployment? In my local test, the service remains floating around.
Does the developer have to know to connect kubectl to the production cluster and delete the service manually, in addition to deleting the YAML definition?
Is there a mechanism for Kubernetes to "know what's missing"?
You need to use a CI/CD tool for Kubernetes to achieve what you need. As mentioned by Sithroo, Helm is a very good option.
Helm lets you fetch, deploy and manage the lifecycle of applications, both 3rd party products and your own.
No more maintaining random groups of YAML files (or very long ones) describing pods, replica sets, services, RBAC settings, etc. With helm, there is a structure and a convention for a software package that defines a layer of YAML templates and another layer that changes the templates called values. Values are injected into templates, thus allowing a separation of configuration, and defines where changes are allowed. This whole package is called a Helm Chart.
Essentially you create structured application packages that contain everything they need to run on a Kubernetes cluster; including dependencies the application requires. Source
Before you start, I recommend these articles explaining its quirks and features:
The missing CI/CD Kubernetes component: Helm package manager
Continuous Integration & Delivery (CI/CD) for Kubernetes Using CircleCI & Helm
There's no such way. You can apply resources from a YAML file from anywhere, as long as you can reach the cluster and have a kubeconfig, so Kubernetes has no way of responding to a file deletion. If you still want to do this, you can write a program (e.g. in Go) which watches the files in one place and deletes the corresponding resource whenever a file gets deleted.
One way to do this within Kubernetes is with an operator: whenever there is any change in your files, you update the CRD the operator uses to deploy resources.
Before deleting the yaml file, you can run kubectl delete -f file.yaml, this way all the resources created by this file will be deleted.
However, what you are looking for is achieving a desired state with k8s. You can do this by using tools like Helmfile.
Helmfile allows you to specify all the resources you want in one file, and it will converge to that desired state every time you run helmfile apply.
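For illustration, a minimal helmfile.yaml might look like this (the release and chart names are placeholders; the repositories/releases layout follows the Helmfile README):

```yaml
repositories:
  - name: bitnami
    url: https://charts.bitnami.com/bitnami

releases:
  - name: httpd                 # release name (placeholder)
    namespace: default
    chart: bitnami/apache
    version: 7.3.5
    values:
      - values.yaml             # per-release overrides (placeholder file)
```

Running helmfile apply against this file converges the cluster to exactly the releases it declares.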

helmfile sync vs helmfile apply

sync sync all resources from state file (repos, releases and chart deps)
apply apply all resources from state file only when there are changes
sync
The helmfile sync sub-command sync your cluster state as described in your helmfile ... Under the covers, Helmfile executes helm upgrade --install for each release declared in the manifest, by optionally decrypting secrets to be consumed as helm chart values. It also updates specified chart repositories and updates the dependencies of any referenced local charts.
For Helm 2.9+ you can use a username and password to authenticate to a remote repository.
apply
The helmfile apply sub-command begins by executing diff. If diff finds that there are any changes, sync is executed. Adding --interactive instructs Helmfile to request your confirmation before sync.
An expected use-case of apply is to schedule it to run periodically, so that you can auto-fix skews
between the desired and the current state of your apps running on Kubernetes clusters.
I went through the Helmfile repo README to figure out the difference between helmfile sync and helmfile apply. It seems that unlike the apply command, the sync command doesn't do a diff and helm-upgrades the hell out of all releases 😃. But from the word sync, you'd expect the command to apply only those releases that have changed. There is also mention of a potential application of helmfile apply: periodically syncing releases. Why not use helmfile sync for this purpose? Overall, the difference didn't become crystal clear, and I thought there could probably be more to it. So, I'm asking.
Consider the use case where you have a Jenkins job that gets triggered every 5 minutes and in that job you want to upgrade your helm chart, but only if there are changes.
If you use helmfile sync, which calls helm upgrade --install, in that job, you will end up incrementing the chart revision every five minutes.
$ helm upgrade --install httpd bitnami/apache > /dev/null
$ helm list
NAME    REVISION  UPDATED                   STATUS    CHART         APP VERSION  NAMESPACE
httpd   1         Thu Feb 13 11:27:14 2020  DEPLOYED  apache-7.3.5  2.4.41       default
$ helm upgrade --install httpd bitnami/apache > /dev/null
$ helm list
NAME    REVISION  UPDATED                   STATUS    CHART         APP VERSION  NAMESPACE
httpd   2         Thu Feb 13 11:28:39 2020  DEPLOYED  apache-7.3.5  2.4.41       default
So, each helmfile sync will result in a new revision. Now if you were to run helmfile apply instead, which first checks for diffs and only then (if any are found) calls helmfile sync, which in turn calls helm upgrade --install, this will not happen.
Everything in the other answers is correct. However, there is one additional important thing to understand with sync vs apply:
sync will call helm upgrade on all releases. Thus, a new helm secret will be created for each release, and releases' revisions will all be incremented by one. Helm 3 uses three-way strategic merge algorithm to compute patches, meaning it will revert manual changes made to resources in the live cluster handled by Helm;
apply will first use the helm-diff plugin to determine which releases changed, and only run helm upgrade on the changed ones. However, helm-diff only compares the previous Helm state to the new one, it doesn't take the state of the live cluster into account. This means if you modified something manually, helmfile apply (or more precisely the helm-diff plugin) may not detect it, and leave it as-is.
Thus, if you always modify the live cluster through helm, helmfile apply is the way to go. However, if you can have manual changes and want to ensure the live state is coherent with what is defined in Helm/helmfile, then helmfile sync is necessary.
If you want more details, checkout my blog post helmfile: difference between sync and apply (Helm 3)
In simple words, this is how both of them works:
helmfile sync calls helm upgrade --install
helmfile apply calls helm upgrade --install if and only if helm diff returns some changes.
So, generally, helmfile apply would be faster and I suggest using it most of the time.
But take into consideration that if someone has manually deleted any deployments or configmaps from the chart, helm diff won't see any changes; hence helmfile apply won't do anything and these resources will stay deleted, while helmfile sync will recreate them, restoring the original chart configuration.
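As a rough sketch, the per-release logic of helmfile apply looks something like this (this assumes the helm-diff plugin is installed; its --detailed-exitcode flag makes helm diff exit with a non-zero code when the release would change):

```shell
# Approximation of what `helmfile apply` does for each release.
# Assumes the helm-diff plugin; `--detailed-exitcode` exits 2 when
# the rendered release differs from the deployed one.
apply_release() {
  release="$1"
  chart="$2"
  if helm diff upgrade "$release" "$chart" --detailed-exitcode >/dev/null 2>&1; then
    echo "no changes for $release, skipping upgrade"
  else
    helm upgrade --install "$release" "$chart"
  fi
}

# Usage against a real cluster, e.g.:
# apply_release httpd bitnami/apache
```

Note that, as described above, the diff is computed against Helm's stored state, not the live cluster, so manual deletions slip through.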
We have detected one significant issue that also has implications here. Sometimes a sync or apply operation can fail due to:
A timeout with wait: true, e.g. the k8s cluster needs to add more nodes and the operation takes longer than expected (but eventually everything is deployed).
A temporary error in a postsync hook.
In these cases a simple retry of the deployment job in the pipeline would solve the issue, but a successive helmfile apply will skip both the helm upgrade and the hook execution, even if the release is in FAILED status.
So my conclusions are:
apply is usually faster, but can lead to situations where manual intervention (outside the CI/CD logic) is required.
sync is more robust (CI/CD friendly) but usually slower.

Using kubectl roll outs to update my images, but need to also keep my deployment object in version control

In My CICD, I am:
generating a new image with a unique tag. foo:dev-1339 and pushing it to my image repo (ECR).
Then I am using a rolling update to update my deployment.
kubectl rolling-update frontend --image=foo:dev-1339
But I have a conflict here.
What if I also need to update some part of my deployment object as stored in a deployment.yaml file? Let's say I harden a health check or add a parameter.
Then when I re-apply my deployment object as a whole, it will not be in sync with the current replica set: the tag will get reverted and I will lose that image update as it exists in the cluster.
How do I avoid this race condition?
A typical solution here is to use a templating layer like Helm or Kustomize.
In Helm, you'd keep your Kubernetes YAML specifications in a directory structure called a chart, but with optional templating. You can specify things like
image: myname/myapp:{{ .Values.tag | default "latest" }}
and then deploy the chart with
helm install myapp --name myapp --set tag=20191211.01
Helm keeps track of these values (in Secret objects in the cluster) so they don't get tracked in source control. You could check in a YAML-format file with settings and use helm install -f to reference that file instead.
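For example, such a checked-in settings file might look like this (a hypothetical minimal values file matching the template above):

```yaml
# values.yaml -- checked into source control; supplies the `tag`
# value consumed by the image template shown above
tag: "20191211.01"
```

It would then be referenced with something like helm install myapp -f values.yaml ./mychart, keeping the per-deployment settings versioned alongside the chart.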
In Kustomize, your CI tool would need to create a kustomization.yaml file for per-deployment settings, but then could set
images:
  - name: myname/myapp
    newTag: "20191211.01"
If you trust your CI tool to commit to source control then it can check this modified file in as part of its deployment sequence.
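For illustration, the complete per-deployment file might look like this (Kustomize expects it to be named kustomization.yaml; the resource file names here are placeholders):

```yaml
# kustomization.yaml -- written or patched by the CI tool
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml     # placeholder base manifests
  - service.yaml
images:
  - name: myname/myapp
    newTag: "20191211.01"
```

It would then be applied with kubectl apply -k . from that directory.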
Imperative vs Declarative workflow
There are two fundamental ways of using kubectl to apply changes to your cluster. The imperative way, where you run commands directly, is good for experimentation and development environments. kubectl rolling-update is an example of an imperative command. See Managing Kubernetes using Imperative Commands.
For a production environment, it is recommended to use a declarative workflow: edit manifest files, store them in a Git repository, and automatically start a CI/CD job when you commit or merge. kubectl apply -f <file>, or more interestingly kubectl apply -k <dir>, is an example of this workflow. See Declarative Management using Config Files, or more interestingly, Declarative Management using Kustomize.
CICD for building image and deployment
Building an artifact from source code, including a container image, may be done in a CI/CD pipeline. Managing application config and applying it to the Kubernetes cluster may also be done in a CI/CD pipeline. You may want to automate it all, e.g. for Continuous Deployment, and combine both pipelines into a single long pipeline. This is a more complicated setup and there is no single answer on how to do it. When the build part is done, it may trigger an update of the image field in the app configuration repository to trigger the configuration pipeline.
Unfortunately, there is no built-in solution, either from the command line or through the yaml files.
As per the doc here, "...a Deployment is a higher-level controller that automates rolling updates of applications declaratively, and therefore is recommended" over the use of Replication Controllers and kubectl rolling-update. Updating the image of a Deployment will trigger Deployment's rollout.
An approach could be to update the Deployment configuration yaml (or json) under version control in the source repo and apply the changed Deployment configuration from the version control to the cluster.

How to fix k8s namespace permissions in gitlab ci

As I'm playing around with K8s deployment and Gitlab CI my deployment got stuck with the state ContainerStarting.
To reset that, I deleted the K8s namespace using kubectl delete namespaces my-namespace.
Now my Gitlab runner shows me
$ ensure_namespace
Checking namespace [MASKED]-docker-3
error: the server doesn't have a resource type "namespace"
error: You must be logged in to the server (Unauthorized)
I think that has something to do with RBAC and most likely Gitlab created that namespace with some arguments and permissions (but I don't know exactly when and how that happens), which are missing now because of my deletion.
Anybody got an idea on how to fix this issue?
In my case I had to delete the namespace in the Gitlab database, so Gitlab would re-add the service account and namespace.
On the Gitlab machine or task runner, enter the PostgreSQL console:
gitlab-rails dbconsole -p
Then select the database:
\c gitlabhq_production
The next step is to find the namespace that was deleted:
SELECT id, namespace FROM clusters_kubernetes_namespaces;
Take the id of the namespace and delete it:
DELETE FROM clusters_kubernetes_namespaces WHERE id IN (6,7);
Now you can restart the pipeline, and the namespace and service account will be re-added.
Deleting the namespace manually caused the necessary secrets from Gitlab to be removed. It seems they get auto-created on the first ever deployment, and it's impossible to repeat that process.
I had to create a new repo and push to it. Now everything works.
Another solution is removing the cluster from Gitlab (under operations/kubernetes in your repo) and re-adding it.
From GitLab 12.6 you can simply clear the cluster cache.
To clear the cache:
Navigate to your project’s Operations > Kubernetes page, and select your cluster.
Expand the Advanced settings section.
Click Clear cluster cache.
This avoids losing secrets and potentially affecting other applications.

Auto update pod on every image push to GCR

I have a docker image pushed to Container Registry with docker push gcr.io/go-demo/servertime and a pod created with kubectl run servertime --image=gcr.io/go-demo-144214/servertime --port=8080.
How can I enable automatic update of the pod everytime I push a new version of the image?
I would suggest switching to some kind of CI to manage the process and, instead of triggering on docker push, triggering the process on pushing a commit to the git repository. Also, if you switch to using a higher-level Kubernetes construct such as a Deployment, you will be able to run a rolling update of your pods to the new image version. Our process is roughly as follows:
git commit #triggers CI build
docker build yourimage:gitsha1
docker push yourimage:gitsha1
sed -i 's/{{TAG}}/gitsha1/g' deployment.yml
kubectl apply -f deployment.yml
Where deployment.yml is a template for our deployment that gets updated to the new tag version.
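As a self-contained illustration of the tag-substitution step (no cluster needed; the manifest content is a made-up minimal example):

```shell
# Write a minimal deployment template with a {{TAG}} placeholder,
# then substitute the real tag the way the pipeline step above does.
cat > deployment.yml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yourdeployment
spec:
  selector:
    matchLabels:
      app: yourapp
  template:
    metadata:
      labels:
        app: yourapp
    spec:
      containers:
        - name: app
          image: yourimage:{{TAG}}
EOF
sed -i 's/{{TAG}}/gitsha1/g' deployment.yml
grep 'image:' deployment.yml
```

In the real pipeline, gitsha1 would be the commit SHA of the triggering commit, and the file would then be fed to kubectl apply -f deployment.yml.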
If you do it manually, it might be easier to simply update the image in an existing deployment by running kubectl set image deployment/yourdeployment <containernameinpod>=yourimage:gitsha1
I'm on the Spinnaker team.
Might be a bit heavy, but without knowing your other areas of consideration, Spinnaker is a CD platform from which you can trigger k8s deployments from registry updates.
Here's a codelab to get you a started.
If you'd rather shortcut the setup process, you can get a starter Spinnaker instance with k8s and GCR integration pre-setup via the Cloud Launcher.
You can find further support on our slack channel (I'm #stevenkim).
It would need some glue, but you could use Docker Hub, which lets you define a webhook for each repository when a new image is pushed or a new tag created.
This would mean you'd have to build your own web API server to handle the incoming notifications and use them to update the pod. And you'd have to use Docker Hub, not Google Container Registry, which doesn't allow webhooks.
So, probably too many changes for the problem you're trying to solve.