Find manual changes in ConfigMap when working with "kubectl apply" - kubernetes

I have a Kubernetes cluster and do all my deployments declaratively, by executing the "apply" command in a CI/CD pipeline.
"Apply" works by merging the live state of an object with the manifest that comes with the command. As a result, manual changes can stick around - for example, if someone adds a new key-value pair to a ConfigMap, "apply" will leave it intact even though that key doesn't exist in source code.
So I wonder, how can I detect such issues? Doing "delete" and "create" is not an option since it disrupts availability. I don't want to move the deployment away from "apply" since it's production. I just want to find manual modifications in a namespace.

kubediff is what you are looking for.
You can run it as a command-line tool e.g.:
$ ./kubediff k8s
Checking ReplicationController 'kubediff'
*** .spec.template.spec.containers[0].args[0]: '-repo=https://github.com/weaveworks/kubediff' != '-repo=https://github.com/<your github repo>'
Checking Secret 'kubediff-secret'
Checking Service 'kubediff'
or as a service inside K8s cluster. This mode also gives you a simple UI showing the output.
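If you would rather stay with plain kubectl, kubectl diff serves a similar purpose: it compares the live objects against the manifests you pass in and prints any drift, including keys that were added by hand. A minimal sketch, assuming your manifests live in a k8s/ directory:
kubectl diff -f k8s/
echo $?   # 0 = no drift, 1 = differences found, >1 = kubectl or diff failed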

kubectl patch or kubectl replace are suitable for your use case. Take a look at this blog, which explains the difference between them.
If you always want to update the ConfigMap to whatever is in your manifest, go for replace.
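For example (configmap.yaml is a hypothetical path; note that replace requires the object to already exist):
kubectl replace -f configmap.yaml
This overwrites the live ConfigMap with exactly what is in the file, so any manually added keys are dropped.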

Related

How to check if k8s deployment is passed/failed manually?

I'm trying to understand what kubectl rollout status <deployment name> does.
I'm using k8s-node-api, and from this thread (https://github.com/kubernetes-client/javascript/issues/536), the maintainer suggests using the k8s-watch api to watch for changes in the deployment, but I'm not sure what to check.
Questions:
How can I make sure the new deployment succeeded?
How can I tell if the new deployment failed?
Is it safe to assume that if the spec/containers/0/image changes to something different than what I'm expecting, it means there is a new deployment and I should stop watching?
My questions are probably ambiguous because I'm new to k8s.
Any guidance will be great!
I can't use kubectl - I'm writing code that does this based on what kubectl does.
As we discussed in the comment section, to check any object or process in Kubernetes you have to use kubectl - see: kubernetes.io/docs/reference/kubectl/overview.
Take a look at how to execute the proper command to get the required information - kubectl-rollout.
If you want to see what the rollout process looks like behind the scenes, look at the source code - src-code-rollout-kubernetes.
Pay attention to this if you are using node-api:
The node-api group was migrated to a built-in API in the k8s.io/api repo with the v1.14 release. This repo is no longer maintained, and no longer synced with core kubernetes as of the v1.18 release.
I often use the following 2 commands to check a deployment's status:
kubectl describe deployment <your-deployment-name>
kubectl get deployment <your-deployment-name> -oyaml
The first will show you events about the process of scheduling the deployment.
The second is more detailed. It contains all of your deployment's resource info in YAML format.
Is that enough for your needs?
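If running a command is acceptable at all, a third one worth knowing (my-app is a hypothetical deployment name): kubectl rollout status blocks until the rollout finishes, and its exit code answers the succeeded/failed question directly.
kubectl rollout status deployment/my-app --timeout=120s
echo $?   # 0 = rollout completed, non-zero = it failed or the timeout was reached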
After digging through the k8s source code, I was able to implement this logic myself in Node.js:
How can I make sure the new deployment succeeded?
How can I tell if the new deployment failed?
https://github.com/stavalfi/era-ci/blob/master/packages/steps/src/k8s/utils.ts#L387
Basically, I'm subscribing to events about a specific deployment (AFTER changing something in it, for example, the image).
Is it safe to assume that if the spec/containers/0/image changes to something different than what I'm expecting, it means there is a new deployment and I should stop watching?
Yes. But https://github.com/stavalfi/era-ci/blob/master/packages/steps/src/k8s/utils.ts#L62 will also help to identify that there is a new deployment going on and that yours is no longer the "latest" deployment.
For More Info
I wrote an answer about how deployment works under the hood: https://stackoverflow.com/a/66092577/806963
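For reference, one of the signals worth watching (whether you read it with kubectl or through the watch API, it is the same data) is the Deployment's Progressing condition; rollout status checks it together with the replica counts. A quick way to inspect it (my-app is a hypothetical name; the reasons shown are the standard ones Kubernetes sets):
kubectl get deployment my-app -o jsonpath='{.status.conditions[?(@.type=="Progressing")].reason}'
# prints "NewReplicaSetAvailable" once the rollout has completed,
# "ProgressDeadlineExceeded" once it is considered failed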

Kubernetes - What exactly is imperative Vs Declarative

I am seeing multiple, differing explanations of imperative vs. declarative for Kubernetes - for example, that imperative means using yaml files to create resources and describe the state, and declarative the other way around.
What is the real and clear difference between these two? I would really appreciate it if you could also group the commands that fall under each approach - like create under the imperative way, etc.
"Imperative" is a command - like "create 42 widgets".
"Declarative" is a statement of the desired end result - like "I want 42 widgets to exist".
Typically, your yaml file will be declarative in nature: it will say that you want 42 widgets to exist. You'll give that to Kubernetes, and it will execute the steps necessary to end up with having 42 widgets.
"Create" is itself an imperative command, but what you're creating is a Kubernetes cluster. What the cluster should look like is determined by the declarations in the yaml file.
Imperative
Official docs on Managing Kubernetes Objects Using Imperative Commands.
Kubernetes objects can quickly be created, updated, and deleted directly using imperative commands built into the kubectl command-line tool.
kubectl run nginx --generator=run-pod/v1 --image=nginx
kubectl create service nodeport <myservicename>
kubectl delete pod
Declarative
Kubernetes objects can be created, updated, and deleted by storing multiple object configuration files in a directory and using kubectl apply to recursively create and update those objects as needed. This method retains writes made to live objects without merging the changes back into the object configuration files. kubectl diff also gives you a preview of what changes apply will make.
Official docs on Declarative Management of Kubernetes Objects Using Configuration Files.
Official docs on Declarative Management of Kubernetes Objects Using Kustomize
Define what you want in a yaml file and use kubectl apply:
kubectl apply -f app.yaml
kubectl apply -f <directory>/
kubectl apply -f https://k8s.io/examples/application/simple_deployment.yaml
Imperative commands: we do not create any yaml file; we change resources like pods, services, or networking directly via the command line.
Imperative object configuration: we create resources as per our requirements in a yaml file, stripping out default values we don't need and keeping only the required things, and then pass that file to the create command.
Declarative object configuration: we don't care about the individual operations, only the final result - in simple words, we take a yaml file (even one copied from the internet) whose only purpose is to describe the pod/resources we want, and we hand it to the apply command.
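A minimal side-by-side sketch of the three styles (nginx-pod.yaml is a hypothetical file name):
kubectl run nginx --image=nginx         # imperative command: no file at all
kubectl create -f nginx-pod.yaml        # imperative object configuration: a file, but you choose the verb
kubectl apply -f nginx-pod.yaml         # declarative object configuration: a file, kubectl works out create vs. update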

Revert any changes of a resource to kubectl.kubernetes.io/last-applied-configuration

Is there any command to revert back to previous configuration on a resource?
For example, if I have a Service kind resource created declaratively, and then I change the ports manually, how can I discard live changes so the original definition that created the resource is reapplied?
Is there any tracking of previously applied configs? It would be even nicer if we could say: reconfigure my service to the current applied config minus 2 versions.
EDIT: I know Deployments have rollout options, but I am wondering about a mechanism that works for any Kind.
Since you're asking explicitly about the last-applied-configuration annotation...
Very simple:
kubectl apply view-last-applied deployment/foobar-module | kubectl apply -f -
Given that apply composes so flexibly via stdin, there's no dedicated kubectl apply revert-to-last-applied subcommand; it would be a redundant reimplementation of the simple pipe above.
One could also suspect that such a built-in revert could never be made perfect (as Nick_Kh notes) for complicated reasons: a subcommand named revert raises expectations in users that it could never fulfill.
So we get a simplified approximation: a backup of the spec saved in the resource's annotations, ready to be re-applied.
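If you want to preview what the revert would change before actually applying it, the same pipe composes with kubectl diff as well (assuming, as with apply, that -f - reads the manifest from stdin; deployment/foobar-module is just the example name from above):
kubectl apply view-last-applied deployment/foobar-module | kubectl diff -f -
kubectl apply view-last-applied deployment/foobar-module | kubectl apply -f -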
Actually, Kubernetes does not support a rollback option for its built-in resources besides Deployments and DaemonSets.
However, you can consider using Helm, a well-known package manager for Kubernetes. Helm provides a mechanism for restoring a previous state of your package release, reverting all of the release's object resources.
Helm exposes this feature through the helm rollback command:
helm rollback [flags] [RELEASE] [REVISION]
Full command options you can find in the official Helm Documentation.
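A short usage sketch (my-release is a hypothetical release name): list the recorded revisions first, then roll back to the one you want.
helm history my-release
helm rollback my-release 2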

Kubernetes rolling update vs set image

After some intense Google and SO searching, I couldn't find any document that mentions both rolling update and set image and stresses the difference between the two.
Can anyone shed light? When would I rather use either of those?
EDIT: It's worth mentioning that I'm already working with Deployments (rather than ReplicationControllers directly) and that I'm using yaml configuration files. It would also be nice to know if there's a way to perform either of those using configuration files rather than direct commands.
In older k8s versions the ReplicationController was the only resource to manage a group of replicated pods. To update the pods of a ReplicationController you use kubectl rolling-update.
Later, k8s introduced the Deployment which manages ReplicaSet resources. The Deployment could be updated via kubectl set image.
Working with Deployment resources (as you already do) is the preferred way. I guess the ReplicationController and its rolling-update command are mainly still there for backward compatibility.
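For reference, a minimal sketch of the set image form (deployment, container, and image names are hypothetical):
kubectl set image deployment/my-app my-container=registry.example.com/my-app:v2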
UPDATE: As mentioned in the comments:
To update a Deployment I used kubectl patch, as it can also change things like adding new env vars, whereas kubectl set image is rather limited and can only change the image version. Also note that patch can be applied to all k8s resources and is not restricted to Deployments.
Later, I shifted my deployment processes to use helm - a really neat, k8s-native package management tool. I can highly recommend having a look at it.
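For completeness, a hedged sketch of the patch approach mentioned in the update, here adding an env var with a strategic merge patch (my-app and my-container are hypothetical names; the container name is required so the patch merges into the right entry of the containers list):
kubectl patch deployment my-app -p '{"spec":{"template":{"spec":{"containers":[{"name":"my-container","env":[{"name":"LOG_LEVEL","value":"debug"}]}]}}}}'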

Is there a way to make kubectl apply restart deployments whose image tag has not changed?

I've got a local deployment system that is mirroring our production system. Both are deployed by calling kubectl apply -f deployments-and-services.yaml
I'm tagging all builds with the current git hash, which means that for clean deploys to GKE, all the services have a new docker image tag, so apply will restart them. But locally on minikube the tag often doesn't change, which means new code is not run. Before, I was working around this by calling kubectl delete and then kubectl create when deploying to minikube, but as the number of services I'm deploying has increased, that is starting to stretch the dev cycle too far.
Ideally, I'd like a better way to tell kubectl apply to restart a deployment, rather than just depending on the tag.
I'm curious how people have been approaching this problem.
Additionally, I'm building everything with bazel which means that I have to be pretty explicit about setting up my build commands. I'm thinking maybe I should switch to just delete/creating the one service I'm working on and leave the others running.
But in that case, maybe I should just look at telepresence and run the service I'm dev'ing on outside of minikube altogether? What are best practices here?
I'm not entirely sure I understood your question but that may very well be my reading comprehension :)
In any case, here are a few thoughts that popped up while reading this (again, not sure what you're trying to accomplish):
Option 1: maybe what you're looking for is to scale down and back up, i.e. scale your deployment to 0 and then back up. If you're using a ConfigMap and only want to update that, the command would be kubectl scale --replicas=0 -f foo.yaml and then back to whatever count you need (see the sketch after these options)
Option 2: if you want to apply the deployment and not kill any pods, for example, you would use --cascade=false (google it)
Option 3: look up the rollout options for managing deployments, not sure if they work on services though
Finally, and this is just me, share some more details, like which version of k8s you're using, and maybe provide an actual use case example to better describe the issue.
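The scale-down-and-back-up variant from option 1, spelled out (my-app and the replica count are hypothetical):
kubectl scale --replicas=0 deployment/my-app
kubectl scale --replicas=3 deployment/my-app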
Kubernetes only triggers a deployment when something has changed. If your image pull policy is Always, you can delete your pods to get the new image. If you want Kubernetes to handle the rollout, you can update the yaml file to contain a constantly changing metadata field (I use seconds since epoch), which will trigger a change. Ideally, you should tag your images with unique tags from your CI/CD pipeline, using the commit reference they were built from; this gets around the issue and allows you to take full advantage of the Kubernetes rollback feature.
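A sketch of that constantly-changing-field trick done with a patch instead of editing the yaml by hand (my-app and the annotation key are hypothetical): bumping any field in the pod template, such as an annotation, makes the template differ and triggers a new rollout. Newer kubectl versions also ship kubectl rollout restart deployment/my-app, which does essentially the same thing.
kubectl patch deployment my-app -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"redeploy-epoch\":\"$(date +%s)\"}}}}}"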