Kustomize: How to set metadata.name of a resource we need to patch [duplicate] - kubernetes

This question already has answers here: How can I create a namespace with kustomize?
How do you set metadata.name from a variable value when kustomizing a base resource?
For example, when creating a namespace whose name we don't know in advance, but that we still need to "kustomize", e.g. by adding commonLabels to it.
The way Kustomize operates is that you kustomize a base resource that is already defined with an apiVersion, kind, and metadata.name, so I haven't found a way to set the final resource name afterwards.
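For context, a minimal sketch of the setup being described (the file names and the placeholder namespace name are illustrative, not taken from the question):
# base/namespace.yaml -- the base resource already has a fixed name
apiVersion: v1
kind: Namespace
metadata:
  name: placeholder-name    # this is what we would like to set from a variable
# base/kustomization.yaml
resources:
  - namespace.yaml
commonLabels:
  team: my-team             # labels are easy to add; the final name is not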

If I understand you correctly, there are a few options depending on your needs:
Use Helm
Helm helps you manage Kubernetes applications — Helm Charts help you
define, install, and upgrade even the most complex Kubernetes
application.
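As a sketch of how the Helm option maps to this particular problem: a chart template can take the resource name from a value supplied at install time (the value names namespaceName and teamLabel below are just examples):
# templates/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespaceName }}
  labels:
    team: {{ .Values.teamLabel }}

# installed with, e.g.:
# helm install my-release ./my-chart --set namespaceName=staging --set teamLabel=my-team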
Use PodPreset
You can use a PodPreset object to inject information like secrets,
volume mounts, and environment variables etc into pods at creation
time.
Use ConfigMaps
ConfigMaps allow you to decouple configuration artifacts from image
content to keep containerized applications portable.
You can modify your deployments dynamically and then run the kubectl replace -f FILE command, or use the kubectl edit DEPLOYMENT command to apply the changes directly.
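A rough sketch of that workflow (the file and deployment names are placeholders):
# edit the manifest however you like, then replace the live object
kubectl replace -f my-deployment.yaml

# or edit the live object in place
kubectl edit deployment my-deployment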
Please let me know if that helped.


For how long should I keep the storage driver Secret in my cluster?

I'm using helm 3.4.2 to upgrade my charts on my AKS cluster, and I saw that every time I deploy something new, it creates a new secret called sh.helm.v... This is the first time I'm using helm.
I was reading the docs and found that in version 3.x helm uses secrets as the default storage driver. Cool, but every time I deploy it creates a new secret, and I'm not sure if it's best to keep them all in my cluster.
So, should I keep them all in my cluster? Like, every time I deploy something, it creates a secret and it lives there,
or
Can I remove the older ones? Like, deploy v5 now and erase v1, v2, v3 and keep v4 and v5 for some reason. If it's ok to do that, does anyone have a clue how to do it? Using bash or kubectl?
Thanks a lot!
So yes, there are a few major changes in Helm 3 compared to Helm 2.
Secrets are now used as the default storage driver
In Helm 3, Secrets are now used as the default storage driver. Helm 2 used ConfigMaps by default to store release information. In
Helm 2.7.0, a new storage backend that uses Secrets for storing
release information was implemented, and it is now the default
starting in Helm 3.
Also
Release Names are now scoped to the Namespace
In Helm 3, information about a particular release is now stored in the
same namespace as the release itself. With this greater alignment to
native cluster namespaces, the helm list command no longer lists all
releases by default. Instead, it will list only the releases in the
namespace of your current kubernetes context (i.e. the namespace shown
when you run kubectl config view --minify). It also means you must
supply the --all-namespaces flag to helm list to get behaviour similar
to Helm 2.
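For example (flags as in the Helm 3 docs; the output depends on your cluster):
# only releases in the namespace of the current context
helm list

# behaviour similar to Helm 2: releases in every namespace
helm list --all-namespaces

# shows which namespace the current context points at
kubectl config view --minify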
So, should I keep them all in my cluster? Like, every time I deploy
something, it creates a secret and it lives there, or
can I remove the older ones?
I don't think it's good practice to remove anything manually. If it's not strictly necessary, better not to touch them. However, you can delete unused ones if you are sure you will not need the old revisions in the future.
# To list all secrets that were created by Helm:
kubectl get secret -l "owner=helm" --all-namespaces
# To delete a revision, simply remove the corresponding secret:
kubectl delete secret -n <namespace> <secret-name>
Btw (just FYI), taking into account the fact that Helm 3 is scoped to namespaces, you can remove a whole deployment simply by deleting its corresponding namespace.
And one last remark that may be useful: you can pass --history-max to helm upgrade to limit the maximum number of revisions saved per release. Use 0 for no limit (the default is 10).
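For example (the release and chart names are placeholders):
# keep at most 5 release secrets for this release; older revisions are pruned automatically
helm upgrade my-release ./my-chart --history-max 5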

Manually creating and editing Kubernetes objects

Most Kubernetes objects can be created with kubectl create, but if you need e.g. a DaemonSet — you're out of luck.
On top of that, the objects being created through kubectl can only be customized minimally (e.g. kubectl create deployment allows you to only specify the image to run and nothing else).
So, considering that Kubernetes actually expects you to either edit a minimally configured object with kubectl edit to suit your needs or write a spec from scratch and then use kubectl apply to apply it, how does one figure out all possible keywords and their meanings to properly describe the object they need?
I expected to find something similar to the Docker Compose file reference, but when looking at the DaemonSet docs, I found only a single example spec that doesn't even explain most of its keys.
The spec of the resources in the .yaml files that you run kubectl apply -f on is described in the Kubernetes API reference.
As for DaemonSet, its spec is described here. Its template is actually the same as in the Pod resource.
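To illustrate, a minimal DaemonSet manifest might look like the sketch below (the names and image are placeholders; the full list of fields is in the API reference). kubectl explain daemonset.spec is also handy for printing field documentation on the command line.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-agent
spec:
  selector:
    matchLabels:
      app: example-agent
  template:              # same structure as a Pod template
    metadata:
      labels:
        app: example-agent
    spec:
      containers:
        - name: agent
          image: example/agent:latest   # placeholder image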

Automatically use secret when pulling from private registry

Is it possible to globally (or at least per namespace), configure kubernetes to always use an image pull secret when connecting to a private repo?
There are two use cases:
when a user specifies a container in our private registry in a deployment
when a user points a Helm chart at our private repo (and so we have no control over the image pull secret tag).
I know it is possible to do this on a service account basis, but without writing a controller to add this to every new service account created, it would get a bit messy.
Is there a way to set this globally, so that if kube tries to pull from registry X it uses secret Y?
Thanks
As far as I know, usually the default serviceAccount is responsible for pulling the images.
To easily add imagePullSecrets to a serviceAccount you can use the patch command:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "mySecret"}]}'
It's possible to use kubectl patch in a script that inserts imagePullSecrets on serviceAccounts across all namespaces.
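A rough sketch of such a script, assuming the secret (here called mySecret, a placeholder) already exists in every namespace:
# patch the default serviceAccount in every namespace
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  kubectl patch serviceaccount default -n "$ns" \
    -p '{"imagePullSecrets": [{"name": "mySecret"}]}'
done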
If it's too complicated to manage multiple namespaces, you can have a look at kubernetes-replicator, which syncs resources between namespaces.
Solution 2:
This section of the docs explains how you can configure the private registry on a per-node basis:
Here are the recommended steps to configuring your nodes to use a
private registry. In this example, run these on your desktop/laptop:
Run docker login [server] for each set of credentials you want to use. This updates $HOME/.docker/config.json.
View $HOME/.docker/config.json in an editor to ensure it contains just the credentials you want to use.
Get a list of your nodes, for example:
If you want the names:
nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')
If you want to get the IPs:
nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')
Copy your local .docker/config.json to one of the search paths listed above, for example:
for n in $nodes; do scp ~/.docker/config.json root@$n:/var/lib/kubelet/config.json; done
Solution 3:
A (very dirty!) way I discovered to avoid setting an imagePullSecret on a per-deployment / per-serviceAccount basis is to:
Set imagePullPolicy: IfNotPresent
Pull the image on each node
2.1. manually, using docker pull myrepo/image:tag.
2.2. using a script or a tool like docker-puller to automate that process.
Well, I think I don't need to explain how ugly that is.
PS: In case it helps, I found an issue on kubernetes/kops about the feature of creating a global configuration for a private registry.
Two simple questions: where are you running your k8s cluster, and where is your registry located?
Here are a few approaches to your issue:
https://kubernetes.io/docs/concepts/containers/images/#using-a-private-registry

How to bind kubernetes resource to helm release

If I run kubectl apply -f <some statefulset>.yaml separately, is there a way to bind the StatefulSet to a previous Helm release? (e.g. by specifying some tags in the YAML file)
As far as I know, you cannot do it.
You can always create resources via templates before installing the Helm chart; however, I have never seen a solution for what you are asking.

Passing variables to args field in a yaml file, kubernetes

I am writing a YAML file for Kubernetes and I am wondering how to pass variables to the args field.
I need to do something like this:
args: ['--arg1=http://12.12.12.12:8080','--arg2=11.11.11.11']
But I don't want to hard-code those values for --arg1 and --arg2; instead, it should be something like:
args: ['--arg1='$HOST1,'--arg2='$HOST2]
How should I do this?
You have two options that are quite different and really depend on your use-case, but both are worth knowing:
1) Helm allows you to create templates of Kubernetes definitions that can use variables.
Variables are supplied when you install the Helm chart, and before the resulting manifests are deployed to Kubernetes.
You can change the variables later on, but what that does is regenerate the YAML and re-deploy a "static" version of the result (template + variables = YAML that is sent to Kubernetes).
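A sketch of option 1 (the value names host1 and host2 are just examples, not required names):
# templates/deployment.yaml (fragment)
args:
  - "--arg1=http://{{ .Values.host1 }}:8080"
  - "--arg2={{ .Values.host2 }}"

# supplied at install time, e.g.:
# helm install my-release ./my-chart --set host1=12.12.12.12 --set host2=11.11.11.11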
2) ConfigMaps allow you to separate a configuration from the pod manifest, and share this configuration across several pods/deployments.
You can later reference the ConfigMap from your pod/deployment manifests.
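A sketch of option 2, relying on the fact that Kubernetes expands $(VAR) references to the container's environment variables inside args (the ConfigMap name and keys are placeholders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: hosts-config          # placeholder name
data:
  HOST1: "http://12.12.12.12:8080"
  HOST2: "11.11.11.11"

and in the container spec of the pod/deployment:

env:
  - name: HOST1
    valueFrom:
      configMapKeyRef:
        name: hosts-config
        key: HOST1
  - name: HOST2
    valueFrom:
      configMapKeyRef:
        name: hosts-config
        key: HOST2
args: ["--arg1=$(HOST1)", "--arg2=$(HOST2)"]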
Hope this helps!