How to update kubernetes deployment without updating image - kubernetes

Background.
We are using k8s 1.7. We use deployment.yml to maintain/update the k8s cluster state. In deployment.yml, the pod's image is set to ${some_image}:latest. Once the deployment is created, the pod's image is updated to ${some_image}:${build_num} whenever code is merged into master.
What happens now is: if we need to modify, say, the resource limits in deployment.yml and re-apply it, the image of the deployment gets updated to ${some_image}:latest as well. We want to keep the image as it is in the cluster state, without maintaining the actual tag in deployment.yml. We know that replicas can be omitted in the file, in which case it takes the value from the cluster state by default.
Questions:
On 1.7, the spec.template.spec.containers[0].image is required.
Is it possible to apply deployment.yml without updating the image to ${some_image}:latest as well (an argument like --ignore-image-change, or a specific field in deployment.yml)? If so, how?
Also, I see the image field is optional in the 1.10 documentation.
Is that true? If so, since which version?
--- Updates ---
CI builds and deploys a new image on every merge into master. At deploy time, CI runs the command kubectl set image deployment/app container=${some_image}:${build_num}, where ${build_num} is the build number of the pipeline.
To apply deployment.yml, we run kubectl apply -f deployment.yml

However, in the deployment.yml file we specify the latest tag of the image, because it is impossible to keep this field up to date.
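For context, a minimal sketch of such a deployment.yml (the apiVersion and the container name app are illustrative; on 1.7, Deployments typically lived under extensions/v1beta1):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        # re-applying this file resets whatever tag CI set via `kubectl set image`
        image: ${some_image}:latest
        resources:
          limits:
            cpu: 500m
            memory: 256Mi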
Using the “:latest” tag is against best practices in Kubernetes deployments for a number of reasons - rollback and versioning being some of them. To properly resolve this you should maybe rethink your CI/CD pipeline approach. We use the ci-pipeline or ci-job version to tag images, for example.
Is it possible to update deployment without updating the image to the file specified. If so, how?
To update the pod without changing the image you have some options, each with its own constraints, and they all require some Ops gymnastics and introduce additional points of failure, since this goes against the recommended approach.
k8s can pull the image from your remote registry (you must keep track of hashes, since your latest is out of your direct control - potential issues here). You can check the hash in use in the local Docker registry of the node the pod is running on.
k8s can pull the image from the local node registry (you must ensure that “:latest” refers to the same image in the local registry of every node that could run the pod - potential issues here). Once it is there, you can play with the container's imagePullPolicy: when the CI tool is deploying, it uses apply of the yaml (in contrast to create) and sets imagePullPolicy to Always, immediately followed by an apply that sets imagePullPolicy to Never (also a potential issue here), restricting the pull policy to the image already present in the local repository (as mentioned, potential issues here as well).
Here is an excerpt from documentation about this approach: By default, the kubelet will try to pull each image from the specified registry. However, if the imagePullPolicy property of the container is set to IfNotPresent or Never, then a local image is used (preferentially or exclusively, respectively).
If you want to rely on pre-pulled images as a substitute for registry authentication, you must ensure all nodes in the cluster have the same pre-pulled images.
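A rough sketch of what that second option looks like at the container level (the container name app is a placeholder; this only illustrates where the field lives, not a drop-in fix):

spec:
  template:
    spec:
      containers:
      - name: app
        image: ${some_image}:latest
        # first apply: force a pull of the freshly built "latest" onto the node
        imagePullPolicy: Always
        # immediately afterwards, apply the same manifest with
        #   imagePullPolicy: Never
        # so that subsequent applies reuse the image already present on the node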
More about how k8s handles images and why the latest tag can bite back is given here: https://kubernetes.io/docs/concepts/containers/images/

In case you don't want to deal with complex syntax in deployment.yaml in CI, you have the option to use a template processor, for example mustache. It would change the CI process a little bit:
update image version in template config (env1.yaml)
generate deployment.yaml from template deployment.mustache and env1.yaml
$ mustache env1.yml deployment.mustache > deployment.yaml
apply configuration to cluster.
$ kubectl apply -f deployment.yaml
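For illustration, the two files could look roughly like this (the key name image and the container name app are placeholders; the rest of the manifest is omitted):

# env1.yaml - regenerated by CI on every master build
image: registry.example.com/some_image:1339

# deployment.mustache - kept immutable in the repo (fragment)
spec:
  template:
    spec:
      containers:
      - name: app
        image: {{image}}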
The main benefits:
env1.yaml always contains the latest master build image, so you are creating the deployment object using the correct image.
env1.yaml is easy to update or generate at the CI step.
deployment.mustache stays immutable, and you are sure that all that could possibly change in the final deployment.yaml is an image version.
There are many other template rendering solutions in case mustache doesn't fit well in your CI.

Like Const above I highly recommend against using :latest in any docker image and instead use CI/CD to solve the version problem.
We have the same issue on the Jenkins X project where we have many git repositories and as we change things like libraries or base docker images we need to change lots of versions in pom.xml, package.json, Dockerfiles, helm charts etc.
We use a simple CLI tool called UpdateBot which automates the generation of Pull Requests on all downstream repositories. We tend to think of this as Continuous Delivery for libraries and base images ;). E.g. here are the current Pull Requests that UpdateBot has generated on the Jenkins X organisation repositories.
Then here's how we update Dockerfiles / helm charts as we release, say, new base images:
https://github.com/jenkins-x/builder-base/blob/master/jx/scripts/release.sh#L28-L29

Are you aware of the repo.example.com/some-image@sha256:... syntax for pulling images from a Docker registry by digest? It is almost exactly designed to solve the problem you are describing.
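For example (the digest is a placeholder you would take from your registry, or from docker images --digests):

$ kubectl set image deployment/app container=repo.example.com/some-image@sha256:<digest>

or directly in the manifest:

image: repo.example.com/some-image@sha256:<digest>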
updated from a comment:
You're solving the wrong problem; the file is only used to load content into the cluster - from that moment forward, the authoritative copy of the metadata is in the cluster. The kubectl patch command can be a surgical way of changing some content without resorting to sed (or worse), but one should not try to maintain cluster state outside the cluster.
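For instance, a surgical patch of just the resource limits, leaving the image untouched (the deployment and container name app are assumptions):

$ kubectl patch deployment app -p '{"spec":{"template":{"spec":{"containers":[{"name":"app","resources":{"limits":{"memory":"512Mi"}}}]}}}}'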

Related

What is the root password of postgresql-ha/helm?

I installed PostgreSQL in AWS EKS through Helm: https://bitnami.com/stack/postgresql-ha/helm
I need to perform some tasks in the deployment with root rights, but
su -
asks for a password that I don't know and don't know where to find, and without it I can't access the folders I need, such as /opt/bitnami/postgresql/:
Error: Permission denied
How do I get the necessary rights, or what is the password?
Image attached: bitnami root error
I need [...] to place the .so libraries I need for postgresql in [...] /opt/bitnami/postgresql/lib
I'd consider this "extending" rather than "configuring" PostgreSQL; it's not a task you can do with a Helm chart alone. On a standalone server it's not something you could configure with only a text editor, for example, and while the Bitnami PostgreSQL-HA chart has a pretty wide swath of configuration options, none of them allow providing extra binary libraries.
The first step to doing this is to create a custom Docker image that includes the shared library. That can start FROM the Bitnami PostgreSQL image this chart uses:
ARG postgresql_tag=11.12.0-debian-10-r44
FROM bitnami/postgresql:${postgresql_tag}
# assumes the shared library is in the same directory as
# the Dockerfile
COPY whatever.so /opt/bitnami/postgresql/lib
# or RUN curl ..., or RUN apt-get, or ...
#
# You do not need EXPOSE, ENTRYPOINT, CMD, etc.
# These come from the base image
Build this image and push it to a Docker registry, the same way you do for your application code. (In a purely local context you might be able to docker build the image in minikube's context.)
When you deploy the chart, it has options to override the image it runs, so you can point it at your own custom image. Your Helm values could look like:
postgresqlImage:
  registry: registry.example.com:5000
  repository: infra/postgresql
  tag: 11.12.0-debian-10-r44
  # `docker run registry.example.com:5000/infra/postgresql:11.12.0-debian-10-r44`
and then you can provide this file via the helm install -f option when you deploy the chart.
You should almost never try to manually configure a Kubernetes pod by logging into it with kubectl exec. It is extremely routine to delete pods, and in many cases Kubernetes does this automatically (if the image tag in a Deployment or StatefulSet changes; if a HorizontalPodAutoscaler scales down; if a Node is taken offline); in these cases your manual changes will be lost. If there are multiple replicas of a pod (with an HA database setup there almost certainly will be) you also need to make identical changes in every replica.
Like they told you in the comments, you are using the wrong approach to the problem. Executing inside a container to make manual changes is (most of the time) useless, since Pods (and the containers which are part of such Pods) are ephemeral entities, which will be lost whenever the Pod restarts.
Unless the path you are trying to interact with is backed by a persistent volume, all your changes will be lost as soon as the container is restarted.
Helm charts, like the bitnami postgresql-ha chart, expose several ways to refine/modify the default installation:
You could build a custom docker image starting from the one used by default, adding the libraries and whatever else you need. This way the container is already "ready" the way you want as soon as it starts.
You could add an additional Init Container to perform operations such as preparing files for the main container on emptyDir volumes, which can then be mounted at the expected path (see the sketch below).
You could inject an entrypoint script which does what you want at startup, before calling the main entrypoint.
Check the Readme as it lists all the possibilities offered by the Chart (such as how to override the image with your custom one and more)
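A rough pod-spec-level sketch of the Init Container idea (the busybox version, the download URL and the library name are placeholders; the chart's README documents the exact keys for injecting init containers and volumes):

spec:
  volumes:
  - name: extra-libs
    emptyDir: {}
  initContainers:
  - name: fetch-libs
    image: busybox:1.35
    # fetch (or copy) the shared library into the shared emptyDir volume
    command: ["sh", "-c", "wget -O /extra-libs/whatever.so https://artifacts.example.com/whatever.so"]
    volumeMounts:
    - name: extra-libs
      mountPath: /extra-libs
  containers:
  - name: postgresql
    image: bitnami/postgresql:11.12.0-debian-10-r44
    volumeMounts:
    # subPath mounts just the one file, without shadowing the rest of lib/
    - name: extra-libs
      mountPath: /opt/bitnami/postgresql/lib/whatever.so
      subPath: whatever.so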

Best practices for storing images locally to avoid problems when the image source is down (security too)?

I'm using argocd and helm charts to deploy multiple applications in a cluster. My cluster happens to be on bare metal, but I don't think that matters for this question. Also, sorry, this is probably a pretty basic question.
I ran into a problem yesterday where one of the remote image sources used by one of my helm charts was down. This brought me to a halt because I couldn't stand up one of the main services for my cluster without that image and I didn't have a local copy of it.
So, my question is, what would you consider to be best practice for storing images locally to avoid this kind of problem? Can I store charts and images locally once I've pulled them for the first time so that I don't have to always rely on third parties? Is there a way to set up a pass-through cache for helm charts and docker images?
If your scheduled pods were unable to start on a specific node with a Failed to pull image "your.docker.repo/image" error, you should consider having these images already downloaded on the nodes.
Think about how you can docker pull the images onto your nodes. It may be a Linux cron job, a Kubernetes operator or any other solution that ensures the presence of the docker image on the node even if you have connectivity issues.
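For example, a crude cron-based pre-pull on each node could be as simple as (the image name is a placeholder):

# /etc/cron.d/prepull - refresh the critical images every 30 minutes
*/30 * * * * root docker pull your.docker.repo/image:tag >> /var/log/prepull.log 2>&1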
As one of the options:
Create your own helm chart repository to store helm charts locally (optional)
Create a local image registry and push the needed images there, tagging them accordingly for future simplicity (see the sketch after these steps)
On each node add the insecure registry by editing /etc/docker/daemon.json and adding
{
  "insecure-registries" : ["myregistrydomain.com:5000"]
}
restart the docker service on each node to apply the changes
change your helm chart templates to set the proper image path from the local repo
recreate the chart with the new properties, (optionally) push the chart to the local helm repo created in step 1
Finally, install the chart - this time it should pick up the images from the local repo.
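A rough sketch of the registry part, plus the pass-through cache you asked about (hostnames and image names are placeholders):

# run a plain local registry
$ docker run -d -p 5000:5000 --restart=always --name registry registry:2

# mirror an image into it
$ docker pull quay.io/some/image:1.2.3
$ docker tag quay.io/some/image:1.2.3 myregistrydomain.com:5000/some/image:1.2.3
$ docker push myregistrydomain.com:5000/some/image:1.2.3

# or run the registry as a pull-through cache for Docker Hub instead
$ docker run -d -p 5000:5000 --restart=always --name registry-mirror -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io registry:2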
You may also be interested in Kubernetes-Helm Charts pointing to a local docker image

Using kubectl rollouts to update my images, but need to also keep my deployment object in version control

In my CI/CD, I am:
generating a new image with a unique tag, foo:dev-1339, and pushing it to my image repo (ECR).
Then I am using a rolling update to update my deployment.
kubectl rolling-update frontend --image=foo:dev-1339
But I have a conflict here.
What if I also need to update some part of my deployment object as stored in a deployment.yaml file? Let's say harden a health check or add a parameter?
Then when I re-apply my deployment object as a whole it will not be in sync with the current replica set; the tag will get reverted and I will lose that image update as it exists in the cluster.
How do I avoid this race condition?
A typical solution here is to use a templating layer like Helm or Kustomize.
In Helm, you'd keep your Kubernetes YAML specifications in a directory structure called a chart, but with optional templating. You can specify things like
image: myname/myapp:{{ .Values.tag | default "latest" }}
and then deploy the chart with
helm install myapp --name myapp --set tag=20191211.01
Helm keeps track of these values (in Secret objects in the cluster) so they don't get tracked in source control. You could check in a YAML-format file with settings and use helm install -f to reference that file instead.
In Kustomize, your CI tool would need to create a kustomization.yaml file for per-deployment settings, but then could set
images:
- name: myname/myapp
  newTag: 20191211.01
If you trust your CI tool to commit to source control then it can check this modified file in as part of its deployment sequence.
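Put together, the generated kustomization.yaml might look roughly like this (the file layout is an assumption), and it gets applied with kubectl apply -k . or kustomize build . | kubectl apply -f -:

# kustomization.yaml - written/updated by CI
resources:
- deployment.yaml
- service.yaml
images:
- name: myname/myapp
  newTag: 20191211.01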
Imperative vs Declarative workflow
There are two fundamental ways of using kubectl to apply changes to your cluster. The Imperative way, where you run commands, is a good way for experimentation and development environments. kubectl rolling-update is an example of an imperative command. See Managing Kubernetes using Imperative Commands.
For a production environment, it is recommended to use a Declarative workflow: edit manifest files, store them in a Git repository, and automatically start a CI/CD job when you commit or merge. kubectl apply -f <file>, or more interestingly kubectl apply -k <dir>, is an example of this workflow. See Declarative Management using Config files or, more interestingly, Declarative Management using Kustomize.
CI/CD for building the image and for deployment
Building an artifact from source code, including a container image, may be done in a CI/CD pipeline. Managing the application config and applying it to the Kubernetes cluster may also be done in a CI/CD pipeline. You may want to automate it all, e.g. for doing Continuous Deployment, and combine both pipelines into a single long pipeline. This is a more complicated setup and there is no single answer on how to do it. When the build part is done, it may trigger an update of the image field in the app configuration repository to trigger the configuration pipeline.
Unfortunately there is no solution, either from the command line or through the yaml files
As per the doc here, "...a Deployment is a higher-level controller that automates rolling updates of applications declaratively, and therefore is recommended" over the use of Replication Controllers and kubectl rolling-update. Updating the image of a Deployment will trigger Deployment's rollout.
An approach could be to update the Deployment configuration yaml (or json) under version control in the source repo and apply the changed Deployment configuration from the version control to the cluster.

Kubernetes rolling update vs set image

After some intense Google and SO searching I couldn't find any document that mentions both rolling update and set image and stresses the difference between the two.
Can anyone shed light? When would I rather use either of those?
EDIT: It's worth mentioning that I'm already working with deployments (rather than replication controllers directly) and that I'm using yaml configuration files. It would also be nice to know if there's a way to perform any of those using configuration files rather than direct commands.
In older k8s versions the ReplicationController was the only resource to manage a group of replicated pods. To update the pods of a ReplicationController you use kubectl rolling-update.
Later, k8s introduced the Deployment which manages ReplicaSet resources. The Deployment could be updated via kubectl set image.
Working with Deployment resources (as you already do) is the preferred way. I guess the ReplicationController and its rolling-update command are mainly still there for backward compatibility.
UPDATE: As mentioned in the comments:
To update a Deployment I used kubectl patch, as it can also change things like adding new env vars, whereas kubectl set image is rather limited and can only change the image version. Also note that patch can be applied to all k8s resources and is not restricted to use with a Deployment.
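To make the difference concrete (the deployment, container and variable names are just examples):

# set image can only swap the image tag
$ kubectl set image deployment/myapp myapp=myname/myapp:20191211.01

# patch can change anything in the pod spec, e.g. add an environment variable
$ kubectl patch deployment myapp -p '{"spec":{"template":{"spec":{"containers":[{"name":"myapp","env":[{"name":"LOG_LEVEL","value":"debug"}]}]}}}}'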
Later, I shifted my deployment processes to use helm - a really neat and k8s-native package management tool. I can highly recommend having a look at it.

Is there a way to make kubectl apply restart deployments whose image tag has not changed?

I've got a local deployment system that is mirroring our production system. Both are deployed by calling kubectl apply -f deployments-and-services.yaml
I'm tagging all builds with the current git hash, which means that for clean deploys to GKE all the services have a new docker image tag, so apply will restart them; but locally on minikube the tag is often not changing, which means that new code is not run. Before, I was working around this by calling kubectl delete and then kubectl create when deploying to minikube, but as the number of services I'm deploying has increased, that is starting to stretch the dev cycle too far.
Ideally, I'd like a better way to tell kubectl apply to restart a deployment rather than just depending on the tag.
I'm curious how people have been approaching this problem.
Additionally, I'm building everything with bazel which means that I have to be pretty explicit about setting up my build commands. I'm thinking maybe I should switch to just delete/creating the one service I'm working on and leave the others running.
But in that case, maybe I should just look at telepresence and run the service I'm dev'ing on outside of minikube all together? What are best practices here?
I'm not entirely sure I understood your question but that may very well be my reading comprehension :)
In any case, here are a few thoughts that popped up while reading this (again, not sure what you're trying to accomplish):
Option 1: maybe what you're looking for is to scale down and back up, i.e. scale your deployment to, say, 0 and then back up; given you're using a configmap and maybe you only want to update that, the command would be kubectl scale --replicas=0 -f foo.yaml and then back to whatever it was (see the sketch after these options).
Option 2: if you want to apply the deployment and not kill any pods, for example, you would use cascade=false (google it).
Option 3: lookup the rollout option to manage deployments, not sure if it works on services though
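A sketch of Option 1, assuming a deployment named foo that normally runs 3 replicas:

$ kubectl scale --replicas=0 deployment/foo
# pods terminate; any config changes applied in the meantime are picked up on the way back up
$ kubectl scale --replicas=3 deployment/foo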
Finally, and that's only me talking, share some more details like which version of k8s are you using? maybe provide an actual use case example to better describe the issue.
Kubernetes only triggers a deployment rollout when something has changed. If you have the image pull policy set to Always, you can delete your pods to get the new image. If you want kube to handle the deployment, you can update the kubernetes yaml file to contain a constantly changing metadata field (I use seconds since epoch), which will trigger a change. Ideally you should be tagging your images with unique tags from your CI/CD pipeline, using the commit reference they were built from; this gets around the issue and allows you to take full advantage of the kubernetes rollback feature.
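A minimal sketch of the changing-metadata-field trick, assuming a deployment named my-app (the annotation key is arbitrary):

$ kubectl patch deployment my-app -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"redeploy-epoch\":\"$(date +%s)\"}}}}}"
# touching the pod template triggers a new rollout even though the image tag is unchanged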