updating the deployment, need to change multiple values - kubernetes

I am trying to automate updates to the deployment using
kubectl set
I have no issues using the kubectl set image command to push a new version of the Docker image out, but I also need to add a new persistent disk for the new image to use. I don't believe I can set two different options with the set command. What would be the best way to do this?

http://kubernetes.io/docs/user-guide/managing-deployments/#in-place-updates-of-resources has the different options you have.
You can use kubectl apply to modify multiple fields at once.
Apply a configuration to a resource by filename or stdin. This
resource will be created if it doesn't exist yet. To use 'apply',
always create the resource initially with either 'apply' or 'create
--save-config'. JSON and YAML formats are accepted.
Alternately, one can use kubectl patch.
Update field(s) of a resource using strategic merge patch. JSON and
YAML formats are accepted.
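For the original question (new image plus a new persistent disk), a single kubectl patch call can touch both fields. A minimal sketch, assuming a deployment my-app with a container named app and an existing PersistentVolumeClaim my-disk (all names hypothetical):

```shell
# Hypothetical names: deployment "my-app", container "app", PVC "my-disk".
# One strategic-merge patch updates the image AND mounts the new disk:
patch='{
  "spec": {"template": {"spec": {
    "containers": [{"name": "app",
                    "image": "myrepo/my-app:v2",
                    "volumeMounts": [{"name": "data", "mountPath": "/data"}]}],
    "volumes": [{"name": "data",
                 "persistentVolumeClaim": {"claimName": "my-disk"}}]
  }}}
}'

if command -v kubectl >/dev/null; then
  kubectl patch deployment my-app -p "$patch"
fi
```

Because strategic merge matches containers by name, only the named container is touched; the volumes list is merged into the existing pod spec.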

Related

Automatically use secret when pulling from private registry

Is it possible to globally (or at least per namespace), configure kubernetes to always use an image pull secret when connecting to a private repo?
There are two use cases:
when a user specifies a container in our private registry in a deployment
when a user points a Helm chart at our private repo (and so we have no control over the image pull secret tag).
I know it is possible to do this on a per-service-account basis, but without writing a controller to add it to every newly created service account, things would get messy.
Is there a way to set this globally, so that if kube tries to pull from registry X it uses secret Y?
Thanks
As far as I know, the default serviceAccount is usually responsible for pulling the images.
To easily add imagePullSecrets to a serviceAccount you can use the patch command:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "mySecret"}]}'
It's possible to use kubectl patch in a script that inserts imagePullSecrets on serviceAccounts across all namespaces.
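A sketch of such a script, reusing the secret name mySecret from the command above (the secret itself is assumed to already exist in every namespace):

```shell
# Attach the assumed pull secret "mySecret" to the default service account
# of every namespace in the cluster.
secret_patch='{"imagePullSecrets": [{"name": "mySecret"}]}'

if command -v kubectl >/dev/null; then
  for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
    kubectl patch serviceaccount default -n "$ns" -p "$secret_patch"
  done
fi
```

Note this only covers the default service account; pods that specify their own serviceAccountName would need the same patch applied to that account.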
If it's too complicated to manage multiple namespaces, you can have a look at kubernetes-replicator, which syncs resources between namespaces.
Solution 2:
This section of the doc explains how you can set the private registry on a node basis:
Here are the recommended steps to configuring your nodes to use a
private registry. In this example, run these on your desktop/laptop:
Run docker login [server] for each set of credentials you want to use. This updates $HOME/.docker/config.json.
View $HOME/.docker/config.json in an editor to ensure it contains just the credentials you want to use.
Get a list of your nodes, for example:
If you want the names:
nodes=$(kubectl get nodes -o jsonpath='{range .items[*].metadata}{.name} {end}')
If you want to get the IPs:
nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')
Copy your local .docker/config.json to one of the search paths listed above, for example:
for n in $nodes; do scp ~/.docker/config.json root@$n:/var/lib/kubelet/config.json; done
Solution 3:
A (very dirty!) way I discovered to not need to set up an imagePullSecret on a deployment / serviceAccount basis is to:
Set imagePullPolicy: IfNotPresent
Pull the image on each node:
2.1. manually, using docker pull myrepo/image:tag;
2.2. using a script or a tool like docker-puller to automate the process.
Well, I think I don't need to explain how ugly that is.
PS: If it helps, I found an issue on kubernetes/kops about the feature of creating a global configuration for private registry.
Two simple questions: where are you running your k8s cluster? Where is your registry located?
Here are a few approaches to your issue:
https://kubernetes.io/docs/concepts/containers/images/#using-a-private-registry

Using full Declarative approach in Kubernetes

We can create and update Kubernetes resources declaratively using kubectl apply -f. How can we do the same for recycling the resources that are no longer needed?
I have used kubectl delete, but that is imperative, and sometimes things need to be deleted in the proper order.
Is there a way to always use kubectl apply and have it figure out by itself which resources to keep and which to delete, just like Terraform does?
Or should we conclude that the declarative approach currently works only for resource creation and updates?
Use case:
For example, we have decided not to expose the K8s API to end users, and instead give them a repository where they keep and update their YAML files, which a bot applies to the cluster whenever a pull request is merged. So we need this declarative delete as well, so that we don't have to clean up after users. A Terraform provider might be a solution, but then things are locked to Terraform, and users need to learn one more tool instead of using the native K8s format.
Turns out they have added a declarative approach for pruning resources that are no longer present in the YAML manifests:
kubectl apply -f <directory/> --prune -l your=label
With plenty of caveats, though.
As an alternative to kubectl delete, you can use kubectl apply to
identify objects to be deleted after their configuration files have
been removed from the directory. Apply with --prune queries the API
server for all objects matching a set of labels, and attempts to match
the returned live object configurations against the object
configuration files. If an object matches the query, and it does not
have a configuration file in the directory, and it has a
last-applied-configuration annotation, it is deleted.
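A sketch of that workflow, under the assumption that every object in a ./manifests/ directory carries a label app=myapp (both names hypothetical):

```shell
# All objects in ./manifests/ are assumed to carry the label app=myapp.
label="app=myapp"

if command -v kubectl >/dev/null; then
  # 1. Delete the obsolete YAML file from ./manifests/ (do NOT kubectl delete
  #    the live object yourself).
  # 2. Re-apply with --prune: objects matching the label that carry a
  #    last-applied-configuration annotation but no longer have a file
  #    in the directory are deleted by the apply itself.
  kubectl apply -f ./manifests/ --prune -l "$label"
fi
```

The label selector is what scopes the deletion, so an overly broad label is the main footgun here: anything matching it without a backing file becomes a candidate for pruning.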

What's the differences between patch and replace the deployment in k8s?

I want to update the image of a k8s deployment, and I found two REST APIs in k8s for updating a deployment: PATCH and PUT.
The official documentation says that PATCH is for updating and PUT is for replacing, but after testing the two commands:
kubectl patch -p ...
kubectl replace -f ...
there seem to be no differences between the two methods.
Both of them can roll back, and the name of the new pod changes.
Is the only difference in the request body of the two commands (patch needs only the changed part, while put needs the whole object)?
According to the documentation:
kubectl patch
is to change the live configuration of a Deployment object. You do not change the configuration file that you originally used to create the Deployment object.
kubectl replace
If replacing an existing resource, the complete resource spec must be provided.
replace is a full replacement. You have to have ALL the fields present.
patch is partial.
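The difference shows up in what each command requires you to send. A sketch with an assumed deployment web containing a container named web:

```shell
# patch: send only the fields being changed.
img_patch='{"spec": {"template": {"spec": {"containers":
  [{"name": "web", "image": "nginx:1.9.1"}]}}}}'

if command -v kubectl >/dev/null; then
  kubectl patch deployment web -p "$img_patch"

  # replace: send the COMPLETE object. The usual round-trip is to dump the
  # live copy, edit it, and send the whole thing back:
  kubectl get deployment web -o yaml > web.yaml
  # ...edit web.yaml...
  kubectl replace -f web.yaml
fi
```

With replace, any field you omit from web.yaml is dropped from the live object, which is exactly why the full spec must be present.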

How to update kubernetes deployment without updating image

Background.
We are using k8s 1.7. We use deployment.yml to maintain/update the k8s cluster state. In deployment.yml, the pod's image is set to ${some_image}:latest. Once the deployment is created, the pod's image is updated to ${some_image}:${build_num} whenever code is merged into master.
What happens now is: say we need to modify the resource limits in deployment.yml and re-apply it. The image of the deployment will then be updated to ${some_image}:latest as well. We want to keep the image as it is in the cluster state, without maintaining the actual tag in deployment.yml. We know that replicas can be omitted in the file, in which case the value is taken from the cluster state by default.
Question,
On 1.7, the spec.template.spec.containers[0].image is required.
Is it possible to apply deployment.yml without updating the image to ${some_image}:latest as well (an argument like --ignore-image-change, or a specific field in deployment.yml)? If so, how?
Also, I see that the image field is optional in the 1.10 documentation.
Is that true? If so, since which version?
--- Updates ---
CI build and deploy new image on every merge into master. At deploy, CI run the command kubectl set image deployment/app container=${some_image}:${build_num} where ${build_num} is the build number of the pipeline.
To apply deployment.yml, we run kubectl apply -f deployment.yml
However, in the deployment.yml file we specified the latest tag of the image, because it is impossible to keep this field up to date.
Using the :latest tag is against best practices in Kubernetes deployments for a number of reasons, rollback and versioning being some of them. To properly resolve this you should maybe rethink your CI/CD pipeline approach. We use the CI pipeline or CI job version to tag images, for example.
Is it possible to update the deployment without updating the image to the one specified in the file? If so, how?
To update pod without changing the image you have some options, each with some constraints, and they all require some Ops gymnastics and introduce additional points of failure since it goes against recommended approach.
k8s can pull the image from your remote registry (you must keep track of hashes, since your latest is out of your direct control; potential issues here). You can check the hash in use in the local Docker registry of the node the pod is running on.
k8s can pull the image from the local node registry (you must ensure that :latest refers to the same image in the local registry of every node that could run the pods; potential issues here). Once it is there, you can play with the container's imagePullPolicy: when the CI tool deploys, it uses apply of the YAML (in contrast to create) with the policy set to Always, immediately followed by an apply that sets the policy to Never (also a potential issue here), restricting pulls to the image already present in the local repository (as mentioned, potential issues here as well).
Here is an excerpt from documentation about this approach: By default, the kubelet will try to pull each image from the specified registry. However, if the imagePullPolicy property of the container is set to IfNotPresent or Never, then a local image is used (preferentially or exclusively, respectively).
If you want to rely on pre-pulled images as a substitute for registry authentication, you must ensure all nodes in the cluster have the same pre-pulled images.
more about how k8s is handling images and why latest tagging can bite back is given here: https://kubernetes.io/docs/concepts/containers/images/
In case you don't want to deal with complex syntax in deployment.yaml in CI, you have the option to use a template processor. For example mustache. It would change the CI process a little bit:
update image version in template config (env1.yaml)
generate deployment.yaml from template deployment.mustache and env1.yaml
$ mustache env1.yml deployment.mustache > deployment.yaml
apply configuration to cluster.
$ kubectl apply -f deployment.yaml
The main benefits:
env1.yaml always contains the latest master build image, so you are creating the deployment object using correct image.
env1.yaml is easy to update or generate at the CI step.
deployment.mustache stays immutable, and you are sure that all that could possibly change in the final deployment.yaml is an image version.
There are many other template rendering solutions in case mustache doesn't fit well in your CI.
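A minimal sketch of the flow described above (file names follow the answer; {{image}} is an assumed placeholder name, and the mustache CLI is assumed to be installed):

```shell
# env1.yml holds the only value that changes per build.
cat > env1.yml <<'EOF'
image: myrepo/app:build-42
EOF

# deployment.mustache is the immutable template.
cat > deployment.mustache <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  template:
    spec:
      containers:
      - name: app
        image: {{image}}
EOF

# Render and apply (requires the mustache CLI and kubectl on the PATH).
if command -v mustache >/dev/null; then
  mustache env1.yml deployment.mustache > deployment.yaml
  kubectl apply -f deployment.yaml
fi
```

The CI step then only ever rewrites env1.yml, so the rendered deployment.yaml can differ from the template in nothing but the image version.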
Like Const above, I highly recommend against using :latest in any Docker image; instead, use CI/CD to solve the version problem.
We have the same issue on the Jenkins X project where we have many git repositories and as we change things like libraries or base docker images we need to change lots of versions in pom.xml, package.json, Dockerfiles, helm charts etc.
We use a simple CLI tool called UpdateBot which automates the generation of pull requests on all downstream repositories. We tend to think of this as Continuous Delivery for libraries and base images ;). E.g. here are the current pull requests that UpdateBot has generated on the Jenkins X organisation repositories.
Then here's how we update Dockerfiles / helm charts as we release, say, new base images:
https://github.com/jenkins-x/builder-base/blob/master/jx/scripts/release.sh#L28-L29
Are you aware of the repo.example.com/some-tag@sha256:... syntax for pulling images from a Docker registry? It is almost exactly designed to solve the problem you are describing.
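A sketch of that digest syntax in use (all names are placeholders; <digest> stands for the value your registry reports):

```shell
# A digest reference is immutable: unlike a tag, it can never move.
# <digest> is a placeholder; one way to obtain the real value after a push:
#   docker inspect --format='{{index .RepoDigests 0}}' myrepo/app:latest
ref='myrepo/app@sha256:<digest>'

if command -v kubectl >/dev/null; then
  kubectl set image deployment/app app="$ref"
fi
```

CI can run this after every build, so the cluster always records exactly which build is deployed, while any :latest tag in a stored YAML file stays irrelevant.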
updated from a comment:
You're solving the wrong problem; the file is only used to load content into the cluster. From that moment forward, the authoritative copy of the metadata is in the cluster. The kubectl patch command can be a surgical way of changing some content without resorting to sed (or worse), but one should not try to maintain cluster state outside the cluster.

Automated alternative for initiating a rolling update for a deployment

So in order to update the images running on a pod, I have to modify the deployment config (yaml file), and run something like kubectl apply -f deploy.yaml.
This means, if I'm not editing the yaml file manually I'll have to use some template / search and replace functionality. Which isn't really ideal.
Are there any better approaches?
It seems there is a kubectl rolling-update command, but I'm not sure if this works for 'deployments'.
For example running the following: kubectl rolling-update wordpress --image=eu.gcr.io/abcxyz/wordpress:deploy-1502443760
Produces an error of:
error: couldn't find a replication controller with source id == default/wordpress
I am using this for changing images in Deployments:
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
If you view the YAML files as the source of truth, then use a tag like stable in the YAML and only issue kubectl set image commands when the tag is moved (use the sha256 image digest to actually trigger a rollout; image names are matched as strings, so updating from :stable to :stable is a no-op even if the tag now points to a different image).
See updating a deployment for more details.
The above requires the deployment replica count to be set to more than 1, which is explained here: https://stackoverflow.com/a/45649024/1663462.