Skaffold dev stream logs of pods created by helm hooks - kubernetes

I would like to see the output from my pre-install/post-install helm hooks when using skaffold dev, but this does not seem to work.
Which filters does skaffold use to get all the pods for log tailing? Is there a way to force skaffold to pick up the hooks by applying some labels (e.g. skaffold.dev/run-id: static)?
Context
Doing dev with local docker, the image building is pretty fast, so for some use cases there is no need to use file sync and special dev-mode container images with file watching inside.
There is this feature request: https://github.com/GoogleContainerTools/skaffold/issues/1441, but this is for adding hooks to skaffold itself.
The pods created by helm hooks are not removed (https://github.com/GoogleContainerTools/skaffold/issues/2876), but this is expected behavior for helm delete.

Thanks @acristu for the question. Skaffold dev here.
Currently, skaffold is unaware of pods deployed in the pre and post helm hooks.
The reason is that we don't parse the manifests in these hooks, and hence can't transform them to add the required skaffold.dev/run-id label.
Currently there is no way to force skaffold to pick up the logs from these pods/containers.
That said, there is a pending feature request to extend the current log configuration to include resourceType or resourceName, like the portForward section:
portForward: # describes user defined resources to port-forward.
  - resourceType: # Kubernetes type that should be port forwarded.
    resourceName:
Supporting this in skaffold would be a great idea.
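For context, the manifests skaffold skips here are the ones rendered only as helm hooks, i.e. anything carrying the helm.sh/hook annotation. A minimal sketch of such a hook (the job name and image are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate # hypothetical pre-install hook
  annotations:
    "helm.sh/hook": pre-install
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: my-migrations:latest # placeholder image

Because skaffold never rewrites this manifest, the resulting pod does not get the skaffold.dev/run-id label and its logs are not tailed.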

Related

Deploy Container in K8s in case of only config Map change argocd

I want to redeploy an application in k8s using GitOps (ArgoCD) when only a ConfigMap changes. How will ArgoCD understand that it needs to restart the container? As we all know, without restarting the container the new ConfigMap will not take effect.
Scenario: a container is running, deployed by ArgoCD, and I modify the ConfigMap YAML file in GitHub. ArgoCD will automatically sync the updated values, but the container will not restart because we are not modifying the Deployment YAML files. So how will the ConfigMap change take effect in the container?
Found a workaround for the above question. We can include a parameter (the Jenkins build number) as an env variable in the Deployment config; it is updated on every build by the CI pipeline. So in case of a config-only change in the Git repo, the Deployment will also be rolled out, because the build-number parameter changes after the pipeline runs, and, as we all know, ArgoCD is automatically triggered once any change lands in the Git repo it is connected to.
ArgoCD itself doesn't handle this, however other tools can. With Helm this is generally handled inside the chart by hashing the config content into an annotation in the pod template. Kustomize offers the ConfigMap and Secret generators, which put a hash in the object name and rewrite the pod template to include it. There are also operator solutions like Reloader, which does a similar trick to Helm's but via an operator.
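For illustration, the Helm-side trick usually looks like this inside the chart's Deployment template (a sketch assuming the ConfigMap is rendered from templates/configmap.yaml in the same chart):

spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}

Any change to the ConfigMap changes the hash, which changes the pod template, which in turn makes the Deployment roll its pods when ArgoCD syncs.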

Add Sidecar container to running pod(s)

I have helm deployment scripts for a vendor application which we are operating. For the logging solution, I need to add a sidecar container for Fluent Bit to push the logs to an aggregated log server (Splunk in this case).
Now, to define this sidecar container, I want to avoid changing the vendor-defined deployment scripts. Instead I want some alternative way to attach the sidecar container to the running pod(s).
So far I have understood that a sidecar container can be defined inside the same deployment script (deployment configuration).
Answering the question in the comments:
thanks @david. This has to be done before the deployment. I was wondering if I could attach a sidecar container to an already deployed (running) pod.
You can't attach an additional container to a running Pod. You can update (patch) the resource definition; this will force the resource to be recreated with the new specification.
There is a github issue about this feature which was closed with the following comment:
After discussing the goals of SIG Node, the clear consensus is that the containers list in the pod spec should remain immutable. #27140 will be better addressed by kubernetes/community#649, which allows running an ephemeral debugging container in an existing pod. This will not be implemented.
-- Github.com: Kubernetes: Issues: Allow containers to be added to a running pod
Answering the part of the post:
Now to define this sidecar container, I want to avoid changing vendor defined deployment scripts. Instead i want some alternative way to attach the sidecar container to the running pod(s).
Below I've included two methods to add a sidecar to a Deployment. Both of these methods will reload the Pods to match the new specification:
Use $ kubectl patch
Edit the Helm Chart and use $ helm upgrade
In both cases, I encourage you to check how Kubernetes handles updates of its resources. You can read more by following the links below:
Kubernetes.io: Docs: Tutorials: Kubernetes Basics: Update: Update
Medium.com: Platformer blog: Enable rolling updates in Kubernetes with zero downtime
Use $ kubectl patch
The way to completely avoid editing the Helm charts would be to use:
$ kubectl patch
This method will "patch" the existing Deployment/StatefulSet/Daemonset and add the sidecar. The downside of this method is that it's not automated like Helm and you would need to create a "patch" for every resource (each Deployment/Statefulset/Daemonset etc.). In case of any updates from other sources like Helm, this "patch" would be overridden.
Documentation about updating API objects in place:
Kubernetes.io: Docs: Tasks: Manage Kubernetes objects: Update api object kubectl patch
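As a sketch of the first method (the deployment name and sidecar image/version are placeholders, and the Fluent Bit configuration for Splunk is omitted), a strategic merge patch that appends a sidecar could look like:

# fluent-bit-sidecar.yaml
spec:
  template:
    spec:
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:1.9 # placeholder image/version
          # volumeMounts and output config for Splunk omitted for brevity

$ kubectl patch deployment vendor-app --patch "$(cat fluent-bit-sidecar.yaml)"

kubectl patch defaults to a strategic merge patch, and the containers list merges by container name, so the new entry is appended instead of replacing the existing containers.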
Edit the Helm Chart and use $ helm upgrade
This method will require editing the Helm charts. Changes made this way, like adding a sidecar, will persist through updates. After making the changes you will need to run $ helm upgrade RELEASE_NAME CHART.
You can read more about it here:
Helm.sh: Docs: Helm: Helm upgrade
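If you go the chart route, the change is just one more entry under the pod template's containers list, for example (an excerpt only; the existing container and values layout depend on the vendor chart, and the sidecar image/version is a placeholder):

# templates/deployment.yaml (excerpt)
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        - name: fluent-bit # added sidecar
          image: fluent/fluent-bit:1.9 # placeholder image/version

$ helm upgrade RELEASE_NAME CHART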
A Pod's container list is immutable, as mentioned by dawid-kruk. Therefore, modifying the pod description will cause the containers to be recreated.
You can modify the pod spec using the kubectl patch command; don't forget to reapply the patch as necessary.
Alternatively, the two following options will inject the sidecar without having to modify/fork the upstream chart or mangle deployed resources.
#1 mutating admission controller
A mutating admission controller (webhook) can modify resources; see https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
You can use a generic framework like OPA.
Or a specific webhook like fluentd-sidecar-injector (not tested).
#2 support arbitrary sidecars in helm
You could submit a feature request to the chart maintainer to support arbitrary sidecar injection, as is done for Prometheus; see https://stackoverflow.com/a/62910122/1260896
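If the maintainer accepts such a request, it typically surfaces as something like an extraContainers list in values.yaml that the chart renders into the pod template, so the sidecar becomes pure configuration (hypothetical; this only works if the chart actually exposes such a value):

# values.yaml
extraContainers:
  - name: fluent-bit
    image: fluent/fluent-bit:1.9 # placeholder image/version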

Deleting kubernetes yaml: how to prevent old objects from floating around?

I'm working on a continuous deployment routine for a Kubernetes application: every time I push a git tag, a GitHub Action is activated which calls kubectl apply -f kubernetes to apply a bunch of YAML Kubernetes definitions.
Let's say I add YAML for a new service and deploy it -- kubectl will add it.
But then later on, I simply delete the YAML for that service and redeploy -- kubectl will NOT delete it.
Is there any way that kubectl can recognize that the service YAML is missing and respond by deleting the service automatically during continuous deployment? In my local test, the service remains floating around.
Does the developer have to know to connect kubectl to the production cluster and delete the service manually, in addition to deleting the YAML definition?
Is there a mechanism for Kubernetes to "know what's missing"?
You need to use a CI/CD tool for Kubernetes to achieve what you need. As mentioned by Sithroo, Helm is a very good option.
Helm lets you fetch, deploy and manage the lifecycle of applications,
both 3rd party products and your own.
No more maintaining random groups of YAML files (or very long ones)
describing pods, replica sets, services, RBAC settings, etc. With
helm, there is a structure and a convention for a software package
that defines a layer of YAML templates and another layer that
changes the templates called values. Values are injected into
templates, thus allowing a separation of configuration, and defines
where changes are allowed. This whole package is called a Helm
Chart.
Essentially you create structured application packages that contain
everything they need to run on a Kubernetes cluster; including
dependencies the application requires. Source
Before you start, I recommend these articles explaining its quirks and features.
The missing CI/CD Kubernetes component: Helm package manager
Continuous Integration & Delivery (CI/CD) for Kubernetes Using CircleCI & Helm
There's no such way. You can deploy resources from a YAML file from anywhere, provided you can reach the cluster and have a kubeconfig, so Kubernetes will not know how to respond to a file deletion. If you still want to do this, you can write a program (e.g. in Go) which checks the availability of files in one place and deletes the corresponding resource whenever a file gets deleted.
One way to do it within Kubernetes is with a Kubernetes operator: whenever there is any change in your files, you update the CRD used to deploy resources via the operator.
Before deleting the YAML file, you can run kubectl delete -f file.yaml; this way all the resources created by this file will be deleted.
However, what you are looking for is achieving the desired state using k8s. You can do this by using tools like Helmfile.
Helmfile allows you to specify the resources you want to have, all in one file, and it will achieve the desired state every time you run helmfile apply.
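A minimal helmfile.yaml could look like this (a sketch; release and chart names are placeholders):

# helmfile.yaml
releases:
  - name: my-service
    namespace: default
    chart: ./charts/my-service
    values:
      - values.yaml

Flipping a release to installed: false in this file tells Helmfile to uninstall it on the next helmfile apply, which gives you the "delete what is no longer wanted" behaviour from Git.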

Using kubectl roll outs to update my images, but need to also keep my deployment object in version control

In My CICD, I am:
generating a new image with a unique tag, foo:dev-1339, and pushing it to my image repo (ECR).
Then I am using a rolling update to update my deployment.
kubectl rolling-update frontend --image=foo:dev-1339
But I have a conflict here.
What if I also need to update some part of my deployment object as stored in a deployment.yaml file? Let's say harden a health check or add a parameter.
Then, when I re-apply my deployment object as a whole, it will not be in sync with the current replica set; the tag will get reverted and I will lose that image update as it exists in the cluster.
How do I avoid this race condition?
A typical solution here is to use a templating layer like Helm or Kustomize.
In Helm, you'd keep your Kubernetes YAML specifications in a directory structure called a chart, but with optional templating. You can specify things like
image: myname/myapp:{{ .Values.tag | default "latest" }}
and then deploy the chart with
helm install myapp --name myapp --set tag=20191211.01
Helm keeps track of these values (in Secret objects in the cluster) so they don't get tracked in source control. You could check in a YAML-format file with settings and use helm install -f to reference that file instead.
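For example, the checked-in settings file could be as small as the following (the file name is a placeholder; the install flags follow the Helm 2 style used above):

# values-prod.yaml
tag: "20191211.01"

$ helm install myapp --name myapp -f values-prod.yaml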
In Kustomize, your CI tool would need to create a kustomization.yaml file for per-deployment settings, but then could set
images:
  - name: myname/myapp
    newTag: "20191211.01"
If you trust your CI tool to commit to source control then it can check this modified file in as part of its deployment sequence.
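In practice the CI step does not even have to template this file by hand; assuming the kustomize CLI is available and is run from the directory containing the kustomization, a single command rewrites the images: block shown above, ready to be committed:

$ kustomize edit set image myname/myapp:20191211.01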
Imperative vs Declarative workflow
There are two fundamental ways of using kubectl to apply changes to your cluster. The imperative way, where you run commands, is a good fit for experimentation and development environments. kubectl rolling-update is an example of an imperative command. See Managing Kubernetes using Imperative Commands.
For a production environment, it is recommended to use a declarative workflow: edit manifest files, store them in a Git repository, and automatically start a CI/CD run when you commit or merge. kubectl apply -f <file>, or more interestingly kubectl apply -k <dir>, is an example of this workflow. See Declarative Management using Config files or, more interestingly, Declarative Management using Kustomize.
CICD for building image and deployment
Building an artifact from source code, including a container image, may be done in a CI/CD pipeline. Managing application config and applying it to the Kubernetes cluster may also be done in a CI/CD pipeline. You may want to automate it all, e.g. for doing Continuous Deployment, and combine both pipelines into a single long pipeline. This is a more complicated setup and there is no single answer on how to do it. When the build part is done, it may trigger an update of the image field in the app configuration repository to trigger the configuration pipeline.
Unfortunately there is no solution, either from the command line or through the YAML files.
As per the doc here, "...a Deployment is a higher-level controller that automates rolling updates of applications declaratively, and therefore is recommended" over the use of Replication Controllers and kubectl rolling-update. Updating the image of a Deployment will trigger the Deployment's rollout.
An approach could be to update the Deployment configuration YAML (or JSON) under version control in the source repo and apply the changed Deployment configuration from version control to the cluster.

How to update kubernetes deployment without updating image

Background.
We are using k8s 1.7. We use deployment.yml to maintain/update the k8s cluster state. In deployment.yml, the pod's image is set to ${some_image}:latest. Once the deployment is created, the pod's image is updated to ${some_image}:${build_num} whenever code is merged into master.
What happens now is, let's say we need to modify the resource limits in deployment.yml and re-apply it. The image of the deployment will be updated to ${some_image}:latest as well. We want to keep the image as it is in the cluster state, without maintaining the actual tag in deployment.yml. We know that replicas can be omitted in the file, and it takes the value from the cluster state by default.
Question:
On 1.7, the spec.template.spec.containers[0].image is required.
Is it possible to apply deployment.yml without updating the image to ${some_image}:latest as well (an argument like --ignore-image-change, or a specific field in deployment.yml)? If so, how?
Also, I see the image is optional in 1.10 documentation.
Is that true? If so, since which version?
--- Updates ---
CI builds and deploys a new image on every merge into master. At deploy time, CI runs the command kubectl set image deployment/app container=${some_image}:${build_num}, where ${build_num} is the build number of the pipeline.
To apply deployment.yml, we run kubectl apply -f deployment.yml
However, in the deployment.yml file we specified the latest tag of the image, because it is impossible to keep this field up to date.
Using the ":latest" tag is against best practices in Kubernetes deployments for a number of reasons, rollback and versioning being some of them. To properly resolve this you should maybe rethink your CI/CD pipeline approach. We use the CI pipeline or CI job version to tag images, for example.
Is it possible to update the deployment without updating the image to the one specified in the file? If so, how?
To update the pod without changing the image you have some options, each with some constraints; they all require some Ops gymnastics and introduce additional points of failure, since this goes against the recommended approach.
k8s can pull the image from your remote registry (you must keep track of hashes, since your :latest is out of your direct control - potential issues here). You can check the hash that was used in the local Docker image cache of the node the pod is running on.
k8s can use the image already present in the local cache of the node (you must ensure that ":latest" refers to the same image in the local cache of every node that could run the pods - potential issues here). Once there, you can play with the container's imagePullPolicy such that when the CI tool is deploying it applies the YAML (in contrast to create) with the image pull policy set to Always, immediately followed by applying an image pull policy of Never (also a potential issue here), restricting the pull policy to the image already present in the local cache (as mentioned, potential issues here as well).
Here is an excerpt from documentation about this approach: By default, the kubelet will try to pull each image from the specified registry. However, if the imagePullPolicy property of the container is set to IfNotPresent or Never, then a local image is used (preferentially or exclusively, respectively).
If you want to rely on pre-pulled images as a substitute for registry authentication, you must ensure all nodes in the cluster have the same pre-pulled images.
More about how k8s handles images, and why :latest tagging can bite back, is given here: https://kubernetes.io/docs/concepts/containers/images/
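For reference, the knob being discussed is just one field on the container spec (a sketch; the image name follows the question's placeholder):

containers:
  - name: app
    image: some_image:latest
    imagePullPolicy: IfNotPresent # prefer a locally cached image; Never = local only, Always = always pull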
In case you don't want to deal with complex syntax in deployment.yaml in CI, you have the option to use a template processor, for example mustache. It would change the CI process a little bit:
update image version in template config (env1.yaml)
generate deployment.yaml from template deployment.mustache and env1.yaml
$ mustache env1.yaml deployment.mustache > deployment.yaml
apply configuration to cluster.
$ kubectl apply -f deployment.yaml
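To make the two files concrete, the relevant pieces could look like this (a sketch; the image value is a placeholder that CI keeps pointing at the latest master build):

# deployment.mustache (excerpt)
    containers:
      - name: app
        image: {{image}}

# env1.yaml
image: some_image:1234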
The main benefits:
env1.yaml always contains the latest master build image, so you are creating the deployment object using the correct image.
env1.yaml is easy to update or generate at the CI step.
deployment.mustache stays immutable, and you are sure that all that could possibly change in the final deployment.yaml is the image version.
There are many other template rendering solutions in case mustache doesn't fit well in your CI.
Like Const above, I highly recommend against using :latest in any Docker image, and instead using CI/CD to solve the version problem.
We have the same issue on the Jenkins X project, where we have many git repositories, and as we change things like libraries or base docker images we need to change lots of versions in pom.xml, package.json, Dockerfiles, helm charts, etc.
We use a simple CLI tool called UpdateBot which automates the generation of Pull Requests on all downstream repositories. We tend to think of this as Continuous Delivery for libraries and base images ;). E.g. here are the current Pull Requests that UpdateBot has generated on the Jenkins X organisation repositories.
Then here's how we update Dockerfiles / helm charts as we release, say, new base images:
https://github.com/jenkins-x/builder-base/blob/master/jx/scripts/release.sh#L28-L29
Are you aware of the repo.example.com/some-image@sha256:... syntax for pulling images from a docker registry by digest? It is almost exactly designed to solve the problem you are describing.
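Pinned by digest, the image reference in the pod spec looks like this (the digest value is a placeholder for the real sha256 of the build):

containers:
  - name: container
    image: repo.example.com/some_image@sha256:<digest-of-the-build>

The digest uniquely identifies a build, so re-applying the file cannot silently move you back to whatever :latest happens to point at.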
updated from a comment:
You're solving the wrong problem; the file is only used to load content into the cluster -- from that moment forward, the authoritative copy of the metadata is in the cluster. The kubectl patch command can be a surgical way of changing some content without resorting to sed (or worse), but one should not try to maintain cluster state outside the cluster.
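For the image bump itself, that surgical patch can be a one-liner (using the deployment and container names from the kubectl set image command above; the tag is a placeholder):

$ kubectl patch deployment app -p '{"spec":{"template":{"spec":{"containers":[{"name":"container","image":"some_image:1234"}]}}}}'

This is roughly what kubectl set image does for you, and it leaves every other field of the live object untouched.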