How to access kubernetes namespace from a yaml file - kubernetes

I'm trying to create a YAML deployment manifest for Kubernetes. I am using the same manifest for different environments, which are separated by namespace. Now, I need to access the namespace name within the deployment YAML, such as
"name":"$(namespace)"
in the yaml file. Is it possible to do so?

Edit: sorry, I may have misunderstood your question. If you want access to the current namespace in which the Pod is running, you can inject it into the Pod's environment via an env: valueFrom: construct, described in greater detail here:
env:
- name: MY_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
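Inside the running container the value is then available like any other environment variable, for example:
# prints the namespace the Pod is running in
echo "$MY_NAMESPACE"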
Omit the namespace: from the YAML and provide it to kubectl as kubectl --namespace=foo1 create -f my-thing.yaml (assuming, of course, you're using kubectl; the idea behind the answer is the same, the mechanics will just change if you're using a different method).
You can also specify the default namespace in ~/.kube/config in the context, and address it that way: kubectl --context=server-foo1, which also allows associating different credentials with the different namespaces. They all boil down to the same effect in the end; it's just a matter of which is the most convenient for your case.
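A minimal sketch of that context-based approach (the cluster, user, and context names here are placeholders for entries that already exist in your kubeconfig):
# create a context whose default namespace is foo1
kubectl config set-context server-foo1 --cluster=my-cluster --user=my-user --namespace=foo1
# anything created through this context lands in foo1
kubectl --context=server-foo1 create -f my-thing.yaml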
The most extreme(?) form is that you can also have multiple configs and switch between them via env KUBECONFIG=$TMPDIR/foo1.yaml kubectl create -f my-thing.yaml

Related

How to use a node ip inside a configmap in k8s

I want to inject the value of the k8s node IP into a config map when a pod gets created.
Is there any way to do that?
A configmap is not bound to a host (multiple pods on different hosts can share the same configmap), but you can get those details inside a running pod.
You can get the host IP as an environment variable in the following way. Add this to your pod's spec section:
env:
- name: MY_NODE_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
Details about passing other values to env vars can be found in the official documentation.
Unfortunately you can't get the hostIP in a volume, as the downward API volume doesn't have access to status.hostIP (see the documentation).
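For completeness, a downwardAPI volume can still expose pod metadata fields (though, as noted, not status.hostIP) as files. A minimal sketch, with the volume name and mount path chosen only for illustration:
volumes:
- name: podinfo              # illustrative volume name
  downwardAPI:
    items:
    - path: "namespace"
      fieldRef:
        fieldPath: metadata.namespace
and mount it inside the container:
volumeMounts:
- name: podinfo
  mountPath: /etc/podinfo
  readOnly: true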

Accessing kubernetes namespace (value) from inside a pod

I am fairly new to Kubernetes. I wanted to know if a program running inside a pod can access the namespace in which the pod is running.
Let me explain my use case. There are two pods in my application's namespace. One pod has to be a StatefulSet and must have at least 3 replicas. The other pod (say POD-A) can be just a normal Deployment. Now POD-A needs to talk to a particular instance of the StatefulSet. I read in an article that it can be done using this address format -
<StatefulSet>-<Ordinal>.<Service>.<Namespace>.svc.cluster.local.
In my application, the namespace part changes dynamically with each deployment. So can this value be read dynamically from a program running inside a pod?
Please help me if I have misunderstood something here. Any alternate/simpler solutions are also welcome. Thanks in advance!
https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api has an example of this and more.
env:
- name: MY_POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
You can get the namespace of a pod using the downward API: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api (expose the namespace as an environment variable).
Or, if a service account is mounted in the pod, the namespace the pod is living in can be found in the file /var/run/secrets/kubernetes.io/serviceaccount/namespace.
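Putting that together with the address format from the question, a shell sketch might look like this (the StatefulSet, Service, and port names are hypothetical; only the namespace is read at runtime):
# read the namespace the pod was deployed into
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
# build the DNS name of a specific StatefulSet instance
TARGET="my-statefulset-0.my-headless-svc.${NAMESPACE}.svc.cluster.local"
curl "http://${TARGET}:8080/"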

Change the secret a Kubernetes deployment expects

I've been having a recurring problem with a deployment for a particular pod, fooserviced, recently. I usually get a CreateContainerConfigError when I update the pod, and the detail given is Error: secrets "fooserviced-envars" not found. I'm not sure when I named the file this poorly but so far the only solution I've found is to re-add the environment variables file using
kubectl create secret generic fooserviced-envars --from-env-file ./fooserviced-envvars.txt
So now, when I do kubectl get secrets I see both fooserviced-envars and fooserviced-envvars. I'd like to change the deployment to use fooserviced-envvars; how would I do this?
You can edit the deployment via kubectl edit deployment deploymentname, which will open an editor where you can change the secret name live.
Another way to do this would be to run kubectl get deployment deploymentname -o yaml > deployment.yaml, which gives you the YAML file; edit it in your editor and kubectl apply the modified YAML, as sketched below.
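A minimal sketch of that second approach, using the deployment name from the question (adjust the name and the secret reference to match your manifest):
# export the live spec
kubectl get deployment fooserviced -o yaml > deployment.yaml
# edit deployment.yaml so the secret reference points at fooserviced-envvars
kubectl apply -f deployment.yaml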
Make sure the secret is in the same namespace; otherwise you cannot use it.
If you want to change the deployment, change your Kubernetes deployment YAML file, e.g.:
env:
- name: POSTGRES_DB_URL
  valueFrom:
    secretKeyRef:
      key: postgres_db_url
      name: fooserviced-envvars
then run kubectl apply -f on your deployment file.

How to submit a kubectl job and pass the user as runas

I have a container that I want to run on Kubernetes, let's say image1.
When I run kubectl apply -f somePod.yml (which runs image1), how can I start the image with the user that ran the kubectl command?
It's not possible by design. Please find the explanation below.
In most cases Jobs create Pods, so I use Pods in my explanation.
For Jobs it just means a slightly different YAML file.
$ kubectl explain job.spec.
$ kubectl explain job.spec.template.spec
Users run kubectl using user accounts.
Pods run using service accounts. There is no way to run a pod "from a user account".
Note: in recent versions spec.ServiceAccount was replaced by spec.serviceAccountName
However, you can use user account credentials by running kubectl inside a Pod's container or making curl requests to Kubernetes api-server from inside a pod container.
Using Secrets is the most convenient way to do that.
What else you can do to differentiate users in the cluster:
create a namespace for each user
limit the user's permissions to that specific namespace
create a default service account in that namespace
This way, if the user creates a Pod without specifying spec.serviceAccountName, it will by default use the default service account from the Pod's namespace.
You can even give the default service account the same permissions as the user account; the only difference would be that service accounts exist in a specific namespace. A sketch of this per-user setup follows below.
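Assuming illustrative user and namespace names, the per-user setup could look like this; the rolebinding grants the namespace's default service account the built-in edit ClusterRole within that namespace only:
# one namespace per user
kubectl create namespace user-alice
# grant the namespace's default service account "edit" rights in that namespace
kubectl create rolebinding alice-edit --clusterrole=edit \
  --serviceaccount=user-alice:default --namespace=user-alice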
If you need to use the same namespace for all users, you can use Helm charts to set the correct serviceAccountName for each user (imagine you have service accounts with the same names as the users) by using --set command-line arguments as follows:
$ cat testchart/templates/job.yaml
...
serviceAccountName: {{ .Values.saname }}
...
$ export SANAME=$(kubectl config view --minify -o jsonpath='{.users[0].name}')
$ helm template ./testchart --set saname=$SANAME
---
# Source: testchart/templates/job.yaml
...
serviceAccountName: kubernetes-admin
...
You can also specify the namespace for each user in the same way.
I am still not sure whether I understood your question correctly.
However, kubectl doesn't have an option to pass a user or service account when creating jobs:
kubectl create job --help
Create a job with the specified name.
Examples:
# Create a job
kubectl create job my-job --image=busybox
# Create a job with command
kubectl create job my-job --image=busybox -- date
# Create a job from a CronJob named "a-cronjob"
kubectl create job test-job --from=cronjob/a-cronjob
Options:
--allow-missing-template-keys=true: If true, ignore any errors in templates when a field or
map key is missing in the template. Only applies to golang and jsonpath output formats.
--dry-run=false: If true, only print the object that would be sent, without sending it.
--from='': The name of the resource to create a Job from (only cronjob is supported).
--image='': Image name to run.
-o, --output='': Output format. One of:
json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file.
--save-config=false: If true, the configuration of current object will be saved in its
annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to
perform kubectl apply on this object in the future.
--template='': Template string or path to template file to use when -o=go-template,
-o=go-template-file. The template format is golang templates
[http://golang.org/pkg/text/template/#pkg-overview].
--validate=true: If true, use a schema to validate the input before sending it
Usage:
kubectl create job NAME --image=image [--from=cronjob/name] -- [COMMAND] [args...] [flags]
[options]
Use "kubectl options" for a list of global command-line options (applies to all commands).
You can specify many factors inside your YAML definition. For example, you could create a ServiceAccount or specify runAsUser in a pod configuration. However, this requires having a definition file instead of doing everything on the command line with kubectl.
Here you can find how to do it with the runAsUser parameter; a minimal sketch is shown below.
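The pod name, UID, and image in this sketch are only illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: runas-demo
spec:
  securityContext:
    runAsUser: 1000        # numeric UID the container processes run as
    runAsGroup: 1000       # optional primary group ID
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]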
You could also consider using a ServiceAccount. Here is an article which might help you. However, you would need to create a specific ServiceAccount.
It would look similar to this:
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo-sa
spec:
  serviceAccountName: demo-sa
  containers:
  - name: alpine
    image: alpine:3.9
    command:
    - "sleep"
    - "10000"
If this is for some labs or practice, you could also think about creating a customized Docker image using a Dockerfile.
Unfortunately, the previous options are hardcoded. Any other solution would need a specific script and many pipelines.
In addition, as you mentioned in the title, you can use a ConfigMap to pass values to the configuration.
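As a hedged example of that last point, a ConfigMap with illustrative names and keys, consumed through envFrom, could look like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config           # illustrative name
data:
  APP_MODE: "production"     # illustrative key/value
and in the container spec:
envFrom:
- configMapRef:
    name: app-config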

Pod status as `CreateContainerConfigError` in Minikube cluster

I am trying to run the Sonarqube service using the following helm chart.
The set-up is that it starts a MySQL and a Sonarqube service in the minikube cluster, and the Sonarqube service talks to the MySQL service to dump the data.
When I do helm install followed by kubectl get pods I see the MySQL pod status as running, but the Sonarqube pod status shows as CreateContainerConfigError. I reckon it has to do with the volume mounting thing: link. Although I am not quite sure how to fix it (pretty new to the Kubernetes environment and still learning :) )
This can be solved in various ways; I suggest running kubectl describe pod <pod name>, which should show the cause of why the service you've been trying to run is failing. In my case, I found that some of my key-values were missing from the configmap while doing the deployment.
I ran into this problem myself today as I was trying to create secrets and use them in my pod definition YAML file. It would help to check the output of kubectl get secrets and kubectl get configmaps, if you are using any of them, and validate whether the number of data items you expected is listed correctly.
I recognized that in my case the problem was that, when we create secrets with multiple data items, the output of kubectl get secrets <secret_name> had only 1 data item while I had specified 2 items in my secret_name_definition.yaml. This is because of the difference between using kubectl create -f secret_name_definition.yaml and kubectl create secret <secret_name> --from-file=secret_name_definition.yaml. In the former case, all the items listed in the data section of the YAML are treated as key-value pairs, and the number of items shows up correctly when we query with kubectl get secrets secret_name. In the latter case, only the first data item in secret_name_definition.yaml is evaluated for key-value pairs, so the output of kubectl get secrets secret_name shows only 1 data item, and this is when we see the error "CreateContainerConfigError".
Note that this problem wouldn't occur if we used kubectl create secret <secret_name> with the --from-literal= option, because then we have to repeat the prefix --from-literal= for every key-value pair we want to define.
Similarly, if we are using the --from-file= option, we still have to specify the prefix multiple times, once for each key-value pair. The difference is that --from-literal takes the raw value directly, whereas a value written in the data section of a secret manifest has to be the base64-encoded form (i.e. the value of the key becomes echo raw_value | base64).
For example, say the keys are "username" and "password": if creating the secret using kubectl create -f secret_definition.yaml, we need to have the values for both "username" and "password" base64-encoded, as mentioned in the "Create a Secret" section of https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/
I would like to highlight the "Note:" section in https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/ Also, https://kubernetes.io/docs/concepts/configuration/secret/ has a very clear explanation of creating secrets
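For comparison, the two imperative forms can look like this (the secret name, values, and file names are only illustrative); both create a secret with two data items:
# raw values on the command line
kubectl create secret generic my-secret \
  --from-literal=username=admin \
  --from-literal=password='S3cr3t!'
# or one file per key: the file name becomes the key, the file content the value
kubectl create secret generic my-secret \
  --from-file=username=./username.txt \
  --from-file=password=./password.txt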
Also make sure that the deployment.yaml now has the correct definition for this container:
env:
- name: DB_HOST
  value: 127.0.0.1
  # These secrets are required to start the pod.
  # [START cloudsql_secrets]
- name: DB_USER
  valueFrom:
    secretKeyRef:
      name: cloudsql-db-credentials
      key: username
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: cloudsql-db-credentials
      key: password
  # [END cloudsql_secrets]
As mentioned by others, kubectl describe pods pod_name would help, but in my case I only understood that the container wasn't being created in the first place, and the output of kubectl logs pod_name -c container_name didn't help much.
Recently, I encountered the same CreateContainerConfigError, and after a little debugging I found out that it was because I was using a Kubernetes secret in my Deployment YAML which was not actually present/created in the namespace where the pods were being created.
Also, after reading the previous answer, I think we can be fairly sure that this particular error is focused around Kubernetes secrets!
Check that your secrets and config maps (kubectl get [secrets|configmaps]) already exist and are correctly referenced in the YAML descriptor file; in both cases, an incorrect secret/configmap (not created, misspelled, etc.) results in CreateContainerConfigError.
As already pointed out in the other answers, you can check the error with kubectl describe pod [pod name], and something like this should appear at the bottom of the output:
Warning Failed 85s (x12 over 3m37s) kubelet, gke-****-default-pool-300d3c89-9jkz
Error: configmaps "config-map-1" not found
UPDATE from @alexis-wilke:
The list of events can be ephemeral in some versions, and this message disappears quickly. As a rule of thumb, check the events list immediately when booting a pod, or, if you have a CreateContainerConfigError without events, double-check the secrets and config maps, as they can leave the pod in this state with no trace at some point.
I also ran into this issue, and the problem was due to an environment variable using a field ref on a controller. The other controller and the worker were able to resolve the reference. We didn't have time to track down the cause of the issue and wound up tearing down the cluster and rebuilding it.
- name: DD_KUBERNETES_KUBELET_HOST
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
Apr 02 16:35:46 ip-10-30-45-105.ec2.internal sh[1270]: E0402 16:35:46.502567 1270 pod_workers.go:186] Error syncing pod 3eab4618-5564-11e9-a980-12a32bf6e6c0 ("datadog-datadog-spn8j_monitoring(3eab4618-5564-11e9-a980-12a32bf6e6c0)"), skipping: failed to "StartContainer" for "datadog" with CreateContainerConfigError: "host IP unknown; known addresses: [{Hostname ip-10-30-45-105.ec2.internal}]"
Try using the option --from-env-file instead of --from-file and see if the problem disappears. I got the same error, and looking at the pod events suggested that the key-value pairs inside the mysecrets.txt file were not read properly. If you have only one line, Kubernetes takes the content of the file as the value and the filename as the key. To avoid this issue, you need to read the file as an environment-variable file, as shown below.
mysecrets.txt:
MYSQL_PASSWORD=dfsdfsdfkhk
For example:
kubectl create secret generic secret-name --from-env-file=mysecrets.txt
kubectl create configmap configmap-name --from-env-file=myconfigs.txt
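To confirm that each line became its own data item, you can inspect the created objects (same names as above):
kubectl get secret secret-name -o yaml
kubectl get configmap configmap-name -o yaml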