k8s/gke/gcr - scope an image to a namespace

I have a GKE cluster with a number of different namespaces. I would like, in effect, to namespace my images the same way my other resources are namespaced. That is, I would like pods in different namespaces to be able to reference an image using the same name but to get different images depending on which namespace they are in. One way to achieve this (if it were supported) might be to substitute the name of the namespace into the image name in the YAML, e.g.:
containers:
- image: eu.gcr.io/myproject/$(NAMESPACE)-myimage
  name: myimage
Then I could push eu.gcr.io/myproject/mynamespace-myimage to make my image available to namespace mynamespace.
Is there any tidy way to achieve this kind of thing? If not, and since I've been unable to find anybody else asking similar questions, is there some way in which this is a bad thing to want to do?

I don't think this is possible. Kubernetes supports expansion on fields like command and args. This lets you use substitutions with a variable set in the container's env field, which can come from a ConfigMap or Secret. Example: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#use-environment-variables-to-define-arguments
However, I don't think variable expansion works for the image field. :(
What you're trying to do does not seem like a great idea: having different images across multiple environments defeats the purpose of having test/staging environments.
Instead, you should probably use the same image on all platforms and vary behavior through env vars, ConfigMaps, etc.
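To make the supported mechanism concrete, here is a minimal sketch (pod and image names are illustrative) that pulls the namespace in via the Downward API and expands it in args, one of the places where $(VAR) expansion does work:

apiVersion: v1
kind: Pod
metadata:
  name: expansion-demo
spec:
  containers:
  - name: myimage
    image: eu.gcr.io/myproject/myimage   # no expansion possible in this field
    env:
    - name: NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace  # Downward API: the pod's own namespace
    command: ["/bin/sh", "-c"]
    args: ["echo running in $(NAMESPACE)"]  # $(VAR) expansion works in command/args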

argo env parameters and inheritance

I wasn't really sure how to title this question, because I'd be happy with either of the solutions mentioned (inheritance between containers, or defining parameters for the entire workflow without explicitly setting them in each step template).
I am currently working with Argo YAMLs and I want to define certain values that will be input once (and also be optional) and used by every pod in the YAML.
I'm sure there's a better way to do this than what I've found so far, but I can't find anything in the docs.
Currently, the way I found was to define each parameter as a workflow argument, and then, for each container defined, declare it as an input parameter/env parameter.
My question is this: isn't there a way to define those 'env' variables at the top level of the workflow, so that every pod will use them without me explicitly telling it to?
Or, maybe, to create one container that has those arguments defined, so that every other container I define inherits from it and I wouldn't have to write those parameters as input/env for each one I add?
I wouldn't want to add these three values to each container I define. It makes the YAML very big and hard to read and maintain.
container:
  env:
  - name: env_config
    value: "{{workflow.parameters.env_config}}"
  - name: flow_config
    value: "{{workflow.parameters.flow_config}}"
  - name: flow_type_config
    value: "{{workflow.parameters.flow_type_config}}"
Would love to get your input, even if it's pointing me in the direction of the right doc to read, as I haven't found anything close to it yet.
Thanks!
Just realised I never updated this, so for anyone interested, what I ended up doing is setting an anchor inside a container template:
templates:
# this template is here only to define, via an anchor, the env parameters each container has
- name: env-template
  container:
    env: &env_parameters
    - name: env_config
      value: "{{workflow.parameters.env_config}}"
    - name: flow_config
      value: "{{workflow.parameters.flow_config}}"
    - name: run_config
      value: "{{workflow.parameters.run_config}}"
And then using that anchor in each container:
container:
  image: image
  imagePullPolicy: Always
  env: *env_parameters
You could use a templating tool like Kustomize or Helm to cut down on the duplication.
You could also write the params to a JSON file, pull it into each Pod as an artifact, and then have a script loop over the values and assign them to env vars. But for this to be worth the additional write step and artifacts yaml, you'd need to be dealing with a lot of env vars.
If you're using the exact same inputs for a large number of steps, it's probably worth considering whether those steps are similar enough to abstract out into one parameterized template. Perhaps you could loop over an array like ["mode1", "mode2", "mode3"...] instead of writing the steps out in series, as in the sketch below.
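A minimal sketch of that loop idea, assuming an Argo Workflow (template and parameter names are made up):

templates:
- name: main
  steps:
  - - name: run-mode
      template: run
      arguments:
        parameters:
        - name: mode
          value: "{{item}}"
      withItems: ["mode1", "mode2", "mode3"]   # one step instance per item
- name: run
  inputs:
    parameters:
    - name: mode
  container:
    image: image
    args: ["--mode", "{{inputs.parameters.mode}}"]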
Honestly, duplication isn't the worst thing ever. An IDE with a nice find/replace feature should make it simple enough to make changes as necessary.

How to use Kustomize and create an env like: "http://${namePrefix}service-a/some-path" or "jdbc:db2://${namePrefix}service-b:${dbPort}/${dbName}"

Let's say I need to create environment variables or ConfigMap entries like this:
- name: JDBC_URL
  value: "jdbc:db2://alice-service-a:50000/db1"
- name: KEYCLOAK_BASE_URL
  value: "http://alice-keycloak:8080/auth"
Where alice- is the namePrefix. How do I do this using Kustomize?
The containers I use actually do need references to other containers that are string concatenations of "variables" like the above.
It doesn't look like Kustomize's vars can do this. The documentation section Unstructured Edits seems to describe this, and it sits under a heading called "Eschewed Features", so I guess that isn't going to happen. A similar feature request, #775 Support envsubst style variable expansion, was closed.
Coming from Helm, that was easy.
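In a Helm template this would be a one-liner (the value names are illustrative):

- name: JDBC_URL
  value: "jdbc:db2://{{ .Values.namePrefix }}service-b:{{ .Values.dbPort }}/{{ .Values.dbName }}"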
What are my options if I want to move from Helm to Kustomize, but need to create an env or ConfigMap entry like e.g. jdbc:db2://${namePrefix}-service-b:${dbPort}/${dbName} (admittedly a contrived example)?
I'm guessing I'll have to resort to functionality external to Kustomize, like envsubst. Are there any best practices for cobbling this together, or am I writing my own custom-deploy-script.sh?
I'm afraid I've come up against one of the limitations of Kustomize.
The State of Kubernetes Configuration Management: An Unsolved Problem (by Jesse Suen, Argo Project) has this to say under "Kustomize: The Bad":
No parameters & templates. The same property that makes kustomize applications so readable, can also make it very limiting. For example, I was recently trying to get the kustomize CLI to set an image tag for a custom resource instead of a Deployment, but was unable to. Kustomize does have a concept of “vars,” which look a lot like parameters, but somehow aren’t, and can only be used in Kustomize’s sanctioned whitelist of field paths. I feel like this is one of those times when the solution, despite making the hard things easy, ends up making the easy things hard.
Instead, I've started using gomplate ("A flexible commandline tool for template rendering") in addition to Kustomize to solve the challenge above, but having to use two tools that weren't designed to work together is not ideal.
EDIT: We ended up using ytt for this instead of gomplate.
I can heavily recommend the article: The State of Kubernetes Configuration Management: An Unsolved Problem. Nice to know I'm not the only one hitting this road block.
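If you do end up going the envsubst route the question mentions, the glue can be as small as this (overlay path and variable names are illustrative):

export namePrefix=alice- dbPort=50000 dbName=db1
kustomize build overlays/dev | envsubst | kubectl apply -f -

Beware that envsubst replaces every ${...} it finds, so keep literal dollar signs out of your manifests, or reach for gomplate/ytt as above.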

K8s: Editing vs Patching vs Updating

In the kubectl Cheat Sheet (https://kubernetes.io/docs/reference/kubectl/cheatsheet/), there are 3 ways to modify resources. You can either update, patch or edit.
What are the actual differences between them and when should I use each of them?
I would like to add a few things to night-gold's answer. I would say there are no better or worse ways of modifying your resources; everything depends on the particular situation and your needs.
It's worth emphasizing the main difference between editing and patching: the first is an interactive method, while the second is what we might call a batch method, which, unlike the first, can easily be used in scripts. Just imagine that you need to make a change in dozens or even a few hundred different Kubernetes resources/objects; it is much easier to write a simple script that patches all those resources in an automated way. Opening each of them for editing wouldn't be very convenient or effective. A short example:
kubectl patch resource-type resource-name --type json -p '[{"op": "remove", "path": "/spec/someSection/someKey"}]'
Although at first it may look unnecessarily complicated and less convenient than interactively editing and manually removing a specific line from a specific section, it is in fact a very quick and effective method that can easily be used in scripts and can save you a lot of work and time when you deal with many objects.
As to the apply command, you can read in the documentation:
apply manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster through running kubectl apply. This is the recommended way of managing Kubernetes applications on production.
It also gives you the possibility of modifying your running configuration by re-applying it from an updated YAML manifest, e.g. pulled from a git repository.
If by update you mean rollout (formerly known as rolling-update), then as you can see in the documentation it has a quite different function. It is mostly used for updating Deployments; you don't use it for making changes to arbitrary resource types.
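For a side-by-side feel of the different approaches (resource and container names are made up):

kubectl edit deployment/my-app                                  # interactive: opens the live object in your editor
kubectl patch deployment/my-app -p '{"spec":{"replicas":3}}'    # scriptable: strategic merge patch
kubectl apply -f my-app.yaml                                    # declarative: re-apply the manifest
kubectl rollout restart deployment/my-app                       # rollout family: restart/undo/status for Deployments
kubectl set image deployment/my-app app=myimage:v2              # imperative update of a single field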
I don't think I have the answer to this but I hope this will help.
All three methods do the same thing, modifying a resource's configuration, but the commands and the way you use them are not the same.
As described in the documentation:
Editing is when you open the YAML configuration of a resource that is in the Kubernetes cluster and edit it directly (with vim or another editor) to modify your cluster. I would not recommend this outside of testing purposes: re-applying the configuration from the original YAML file will wipe out those modifications.
Patching seems much the same to me, but without opening the file, and targeting specific parts of the resource.
Updating, in the documentation, seems to cover all the other methods of updating a resource without using patch or edit. Some of those can be used for debugging/testing, for example forcing a resource replacement or updating an image version; others are used to roll out new configurations.
From experience, I have only used editing and some of the update commands for testing; most of the time I re-apply the configurations.

Validate Kubernetes Object Creation

I would like to implement functionality (or, even better, reuse existing libraries/APIs!) that would intercept a kubectl command creating an object and perform some pre-creation validation tasks on it before allowing the kubectl command to proceed.
e.g.
- check various values in the YAML against an external DB
- check that a label conforms to the internal naming convention
- and so on...
Is there an accepted pattern or existing tools etc?
Any guidance appreciated
The way to do this is by creating a ValidatingAdmissionWebhook. It's not for the faint of heart, and even a brief example would be overkill as an SO answer. A few pointers to start:
https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook
https://banzaicloud.com/blog/k8s-admission-webhooks/
https://container-solutions.com/a-gentle-intro-to-validation-admission-webhooks-in-kubernetes/
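That said, the registration side is short enough to sketch. A hedged outline, where all names and the service path are placeholders:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: naming-convention-check
webhooks:
- name: naming.example.com
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: validation     # where your webhook server runs
      name: webhook-service
      path: /validate           # your endpoint that answers AdmissionReview requests
    # in practice you also need a caBundle here for TLS
  admissionReviewVersions: ["v1"]
  sideEffects: None

The real work is the webhook server behind it, which receives an AdmissionReview and responds with allowed: true/false.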
I hope this helps :-)
I usually append --dry-run to the kubectl command to check and validate the YAML config.
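With a recent kubectl, a server-side dry run goes further, since it also exercises admission webhooks without persisting anything:

kubectl apply -f object.yaml --dry-run=server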

What should be the name pattern for k8s service?

I was wondering what the best name pattern is for a Service object in a k8s environment.
Should it be %service-name%-service or just %service-name%?
workflow-service or just workflow?
What are the arguments for both sides?
In Kubernetes, service DNS follows the pattern below:
<service-name>.<namespace-name>.svc.cluster.local
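So a Service named redis in namespace prod, for example, resolves as redis.prod.svc.cluster.local.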
I have seen people append svc or service to the service name, with '-' as the delimiter, like below; say, for redis:
redis-service
redis-svc
redis
All three are perfectly fine, but the first one makes more sense in terms of readability and is the more standard way of representing a Service object.
In fact, when creating a Service there is no need to append "-service" to the name. The usual convention is to give the Service the same name as the Pods it points to. Hope this helps.
Thank you!
This is simply a matter of taste. If you want verbosity, add -service; but since resources are separate kinds anyway, why be verbose?