Un-hardcode deploy config image tag name - kubernetes

Right now our DC (deployment config) has this hardcoded in it:
# dc.yaml
image: containers.nabisco.com/cdt-org/cdt-dev:latest
Then we roll out the DC with:
$ oc rollout latest dc/cdtcae-prod-deployment
However, one problem I am noticing is that sometimes the "latest" tag refers to an old image and the newer one doesn't get pulled in - it might be a bug with OpenShift or Kubernetes or whatnot.
For the moment, we want to use git commit hashes to uniquely identify deployments.
My question is: is there a way to override/update the image: line above from the command line, so that this line:
image: containers.nabisco.com/cdt-org/cdt-dev:latest
would get overridden by something like this:
oc rollout --tag="$my_git_commit_hash" dc/cdtcae-prod-deployment

I heard that the best option would be to use the following setting in your YAML DC config:
imagePullPolicy: "Always"
Then you can hardcode some unique value:
image: containers.nabisco.com/cdt-org/cdt-dev:foobarbaz
and it will always pull that tag from the registry instead of reusing a cached copy.
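For the command-line override itself, oc set image can rewrite the image reference on a deployment config before you trigger the rollout. A sketch, assuming the container inside the DC is named cdt-dev (check the actual name with oc describe dc/cdtcae-prod-deployment):
$ oc set image dc/cdtcae-prod-deployment cdt-dev=containers.nabisco.com/cdt-org/cdt-dev:"$my_git_commit_hash"
$ oc rollout latest dc/cdtcae-prod-deployment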

Related

What does x-airflow-common do in the airflow docker-compose.yaml file

I decided to try and really understand the docker-compose.yaml file for Airflow. At the beginning of the file there is this piece of code:
x-airflow-common:
  &airflow-common
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.2.5}
What I'm gathering is that x-airflow-common is defining a variable, and that the &airflow-common says "any image in this file that points to *airflow-common should look here". That is why further down we see:
<<: *airflow-common
which says "look in this docker-compose file for the image declared in airflow-common". Then, when it runs a command against that image (scheduler, celery worker, etc.), the Airflow image sees those commands and knows what type of container to spin up.
Hoping someone can confirm/correct my assumptions or point me to good documentation for this. I've been searching the past two days, but have been unable to locate anything that "dissects" this file.
This uses a YAML feature called anchors, which Docker Compose supports. It allows you to create a sort of template block, and then create other services that are based on that template, overriding certain settings in it.
This section on the Compose specification docs can probably explain it better than I can.
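A minimal sketch of the same pattern outside the full Airflow file (the service names and the environment key here are made up for illustration):
x-airflow-common:
  &airflow-common
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.2.5}
  environment:
    TZ: UTC

services:
  airflow-scheduler:
    <<: *airflow-common    # merge in everything from the anchored block...
    command: scheduler     # ...then add or override keys per service
  airflow-worker:
    <<: *airflow-common
    command: celery worker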

Kubernetes + Helm - only restart pods if new version/change

Whenever I run my basic deploy command, everything is redeployed in my environment. Is there any way to tell Helm to only apply things if there were changes made or is this just the way it works?
I'm running:
helm upgrade --atomic MyInstall . -f CustomEnvironmentData.yaml
I didn't see anything in the Helm Upgrade documentation that seemed to indicate this capability.
I don't want to bounce my whole environment unless I have to.
There's no way to tell Helm to do this, but also no need. If you submit an object to the Kubernetes API server that exactly matches something that's already there, generally nothing will happen.
For example, say you have a Deployment object that specifies image: my/image:{{ .Values.tag }} and replicas: 3. You submit this once with tag: 20200904.01. Now you run the helm upgrade command you show, with that tag value unchanged in the CustomEnvironmentData.yaml file. This will in fact trigger the deployment controller inside Kubernetes. That sees that it wants 3 pods to exist with the image my/image:20200904.01. Those 3 pods already exist, so it does nothing.
(This is essentially the same as the "don't use the latest tag" advice: if you try to set image: my/image:latest, and redeploy your Deployment with this tag, since the Deployment spec is unchanged Kubernetes won't do anything, even if the version of the image in the registry has changed.)
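A sketch of the kind of template being described, with illustrative names (this is not the asker's chart):
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my/image:{{ .Values.tag }}
If .Values.tag is unchanged between releases, the rendered object matches what is already in the cluster, so nothing restarts.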
You should probably use helm diff upgrade
https://github.com/databus23/helm-diff
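It installs as a Helm plugin; per the project's README:
$ helm plugin install https://github.com/databus23/helm-diff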
$ helm diff upgrade -h
Show a diff explaining what a helm upgrade would change.

This fetches the currently deployed version of a release
and compares it to a chart plus values.
This can be used to visualize what changes a helm upgrade will
perform.

Usage:
  diff upgrade [flags] [RELEASE] [CHART]

Examples:
  helm diff upgrade my-release stable/postgresql --values values.yaml

Flags:
  -h, --help                   help for upgrade
      --detailed-exitcode      return a non-zero exit code when there are changes
      --post-renderer string   the path to an executable to be used for post rendering. If it exists in $PATH, the binary will be used, otherwise it will try to look for the executable at the given path
      --reset-values           reset the values to the ones built into the chart and merge in any new values
      --reuse-values           reuse the last release's values and merge in any new values
      --set stringArray        set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
      --suppress stringArray   allows suppression of the values listed in the diff output
  -q, --suppress-secrets       suppress secrets in the output
  -f, --values valueFiles      specify values in a YAML file (can specify multiple) (default [])
      --version string         specify the exact chart version to use. If this is not specified, the latest version is used

Global Flags:
      --no-color   remove colors from the output
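Applied to the command in the question, a dry check before upgrading might look like:
$ helm diff upgrade MyInstall . -f CustomEnvironmentData.yaml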

What is the best way of creating Helm Charts in a single repository for different deployment environments?

We are using Helm charts for deploying a service to several environments on a Kubernetes cluster. For each environment there is a list of variables like the database URL, Docker image tag, etc. What is the most obvious and correct way of defining the Helm values.yaml in such a case, where all the Helm template files remain the same for every environment except for some parameters as stated above?
One way to do this would be using multiple value files, which helm now allows. Assume you have the following values files:
values1.yaml:
image:
  repository: myimage
  tag: 1.3
values2.yaml:
image:
  pullPolicy: Always
These can both be used on the command line with Helm as:
$ helm install -f values1.yaml,values2.yaml <mychart>
In this case, these values will be merged into:
image:
  repository: myimage
  tag: 1.3
  pullPolicy: Always
You can see the values that will be used by giving the "--dry-run --debug" options to the "helm install" command.
Order is important. If the same value appears in both files, the values from values2.yaml will take precedence, as it was specified last. Each chart also comes with a values file. Those values will be used for anything not specified in your own values file, as if it were first in the list of values files you provided.
In your case, you could specify all the common settings in values1.yaml and override them as necessary with values2.yaml.
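Concretely, for the per-environment case in the question, one common layout (file names are illustrative) is a shared base file plus one small override file per environment, passed in order so the environment file wins:
# values-common.yaml: image repository, ports, everything shared
# values-prod.yaml: database URL, image tag, other per-environment settings
$ helm install -f values-common.yaml,values-prod.yaml <mychart>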

Kubectl rollout status freezes

I updated the image of the deployment using the command:
kubectl set image deployments/deployment_name deployment_name=image
I get the response:
deployment "bumblebee" image updated.
Then when I describe the deployment it points to the new image, but when I check the status using:
kubectl rollout status deployment/deployment_name
I get the message "Waiting for deployment spec update to be observed..." and it freezes after this.
And no new replica sets are created; I can see the same thing in my dashboard.
Thanks in advance for the help
The process itself is pretty straightforward, so I have to assume some human error or something was missed here. I would suggest the following debugging steps:
1) Start by getting to a place where you have a working deployment, as in reset your environment.
One important consideration would be the repository the image is being pulled from. I know you stated that you're seeing the new image name, but is this a publicly accessible repository or something like AWS ECR?
Is the rollout working at this point?
2) Once you have a working deployment, delete it and create a new deployment in the exact same way you got the first one to work, but with the new image - you want to see whether it's a problem with the image or something else (for example, YAML indentation).
3) If you get a new deployment with the new image working, then we can circle back to the original problem with the set command. One suggestion would be to use the edit command instead: kubectl edit deployment/deployment-name
Maybe manually edit the image in a text editor first; there could be stray spaces or a Windows/Linux line-ending issue (LF vs CRLF). Let me know if that helps.
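One more thing worth checking in this situation: in kubectl set image, the name on the left of the = must match the container name in the pod spec, which is not necessarily the deployment name. A sketch, using the deployment name from the question (the image reference is a placeholder):
$ kubectl get deployment bumblebee -o jsonpath='{.spec.template.spec.containers[*].name}'
$ kubectl set image deployment/bumblebee <container-name>=<repository>/<image>:<tag>
$ kubectl rollout status deployment/bumblebee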

Limit resources in docker-compose v3

mem_limit is no longer supported in version 3 of the docker-compose.yml file. The documentation says that I should use the deploy.resources key instead, but also that this part only takes effect with Swarm or docker stack.
cpu_shares, cpu_quota, cpuset, mem_limit, memswap_limit: These have been replaced by the resources key under deploy. Note that deploy configuration only takes effect when using docker stack deploy, and is ignored by docker-compose.
... as written in the docs.
How do I set memory/cpu limits with docker-compose with v3 format of the yml file?
I was wondering the same thing and found this:
https://github.com/docker/compose/issues/4513
So, in short, it's just not possible to do that in the v3 format: you have to use version 2.1 of the docker-compose format to be able to specify limits that are not ignored by docker-compose up.
Alternatively, you can try docker-compose --compatibility up, a CLI flag that converts v3 files to their v2 equivalent, with deploy options translated when possible.
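For completeness, a minimal sketch of the 2.x-format equivalent that plain docker-compose up does honor (the service and image names are made up):
version: "2.4"
services:
  app:
    image: myimage:latest
    mem_limit: 512m
    cpus: 0.5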
I was searching for this issue a while ago. I have found my answer here. At first, I tried to implement this functionality without using docker stack, but that did not work.
Here is the piece of configuration you would use to limit a container's CPU/memory consumption. For additional attributes, you can search the Docker documentation.
deploy:
  replicas: 5
  resources:
    limits:
      cpus: "0.1"
      memory: 50M
The Compose file's deploy attributes are not recognized unless you deploy the application as a stack.
This is not the case anymore. According to the newer documentation (https://docs.docker.com/compose/compose-file/compose-file-v3/#deploy), resources are now respected by docker compose.
I can confirm this now.