How to submit a kubectl job and pass the user as runas - kubernetes

I have a container that I want to run on Kubernetes, let's say image1.
When I run kubectl apply -f somePod.yml (which runs image1), how can I start the image as the user who ran the kubectl command?

It's not possible by design. Please find the explanation below:
In most cases Jobs create Pods, so I use Pods in my explanation.
For Jobs it just means a slightly different YAML file:
$ kubectl explain job.spec.
$ kubectl explain job.spec.template.spec
Users run kubectl using user accounts.
Pods run using service accounts. There is no way to run a Pod "from a user account".
Note: in recent versions spec.ServiceAccount was replaced by spec.serviceAccountName
However, you can use user account credentials by running kubectl inside a Pod's container or by making curl requests to the Kubernetes api-server from inside a Pod's container.
Using Secrets is the most convenient way to do that.
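For example, a minimal sketch of that idea (all names, and the bitnami/kubectl image, are only examples; the kubeconfig you copy into the Secret must contain credentials you are willing to store in the cluster):
$ kubectl create secret generic user-kubeconfig --from-file=config=$HOME/.kube/config
apiVersion: v1
kind: Pod
metadata:
  name: kubectl-as-user
spec:
  containers:
  - name: kubectl
    image: bitnami/kubectl:latest
    command: ["sleep", "3600"]
    env:
    - name: KUBECONFIG              # kubectl inside the container reads the mounted user credentials
      value: /kubeconfig/config
    volumeMounts:
    - name: kubeconfig
      mountPath: /kubeconfig
      readOnly: true
  volumes:
  - name: kubeconfig
    secret:
      secretName: user-kubeconfig
Any kubectl command executed in that container (for example kubectl exec -it kubectl-as-user -- kubectl get pods) is then authenticated with the user's credentials instead of a service account.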
What else you can do to differentiate users in the cluster:
create a namespace for each user
limit the user's permissions to their namespace
create a default service account in that namespace.
This way, if the user creates a Pod without specifying spec.serviceAccountName, it will by default use the default service account from the Pod's namespace.
You can even give the default service account the same permissions as the user account. The only difference is that service accounts exist in a specific namespace.
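For example, a RoleBinding like the sketch below gives the default service account of a hypothetical per-user namespace team-a the built-in edit permissions in that namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-sa-edit
  namespace: team-a                 # the per-user namespace
subjects:
- kind: ServiceAccount
  name: default                     # the namespace's default service account
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                        # built-in role; use your own Role for tighter limits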
If you need to use the same namespace for all users, you can use Helm charts to set the correct serviceAccountName for each user (imagine you have service accounts with the same names as your users) by using --set command-line arguments as follows:
$ cat testchart/templates/job.yaml
...
serviceAccountName: {{ .Values.saname }}
...
$ export SANAME=$(kubectl config view --minify -o jsonpath='{.users[0].name}')
$ helm template ./testchart --set saname=$SANAME
---
# Source: testchart/templates/job.yaml
...
serviceAccountName: kubernetes-admin
...
You can also specify the namespace for each user in the same way.
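For example (the usernamespace value is a made-up name, in the spirit of the saname example above):
$ cat testchart/templates/job.yaml
...
  namespace: {{ .Values.usernamespace }}
...
$ helm template ./testchart --set saname=$SANAME --set usernamespace=team-a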

I am still not sure whether I understood your question correctly.
However, kubectl doesn't have an option to pass a user or service account when creating Jobs:
kubectl create job --help
Create a job with the specified name.
Examples:
# Create a job
kubectl create job my-job --image=busybox
# Create a job with command
kubectl create job my-job --image=busybox -- date
# Create a job from a CronJob named "a-cronjob"
kubectl create job test-job --from=cronjob/a-cronjob
Options:
--allow-missing-template-keys=true: If true, ignore any errors in templates when a field or
map key is missing in the template. Only applies to golang and jsonpath output formats.
--dry-run=false: If true, only print the object that would be sent, without sending it.
--from='': The name of the resource to create a Job from (only cronjob is supported).
--image='': Image name to run.
-o, --output='': Output format. One of:
json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file.
--save-config=false: If true, the configuration of current object will be saved in its
annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to
perform kubectl apply on this object in the future.
--template='': Template string or path to template file to use when -o=go-template,
-o=go-template-file. The template format is golang templates
[http://golang.org/pkg/text/template/#pkg-overview].
--validate=true: If true, use a schema to validate the input before sending it
Usage:
kubectl create job NAME --image=image [--from=cronjob/name] -- [COMMAND] [args...] [flags]
[options]
Use "kubectl options" for a list of global command-line options (applies to all commands).
You can specify many factors inside your YAML definition. For example, you could create a ServiceAccount or specify runAsUser in a Pod configuration. However, this requires a definition file instead of creating the Job purely from the kubectl command line.
Here you can find how to do it with the runAsUser parameter.
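For reference, a minimal sketch of such a definition (the UID 1000 and the image are arbitrary examples):
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo-runasuser
spec:
  securityContext:
    runAsUser: 1000          # all containers in this Pod run with this UID
  containers:
  - name: alpine
    image: alpine:3.9
    command: ["sleep", "10000"]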
You could also consider using a ServiceAccount. Here you have an article which might help you. However, you would need to create a specific ServiceAccount first.
It would look similar to this:
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo-sa
spec:
  serviceAccountName: demo-sa
  containers:
  - name: alpine
    image: alpine:3.9
    command:
    - "sleep"
    - "10000"
If this is for a lab or practice, you could also think about building a customized Docker image using a Dockerfile.
Unfortunately, the previous options are hardcoded; another solution would need a specific script and many pipelines.
In addition, as you mentioned in the title, you can use a ConfigMap to pass values to the configuration.
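A small sketch of that approach (the ConfigMap name and key are invented for the example):
$ kubectl create configmap run-settings --from-literal=RUN_AS=alice
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-config
spec:
  containers:
  - name: alpine
    image: alpine:3.9
    command: ["sleep", "10000"]
    envFrom:
    - configMapRef:
        name: run-settings   # exposes RUN_AS inside the container as an environment variable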

Related

Use Kustomize or kubectl to deploy 1-container pod or service from command line

Very new to Kubernetes. In the past I've used kubectl/Kustomize to deploy pods/services using the same repetitive pattern:
On the file system, in my project, I'll have two YAML files such as kustomization.yml and my-app-service.yml that look something like:
kustomization.yml
resources:
- my-app-service.yml
my-app-service.yml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
... etc.
From the command-line, I'll run something like the following:
kubectl apply -k /path/to/the/kustomization.yml
And if all goes well -- boom -- I've got the service running on my cluster. Neato.
I am now trying to troubleshoot something and want to deploy a single container of the google/cloud-sdk image on my cluster, so that I can SSH into it and do some network/authorization-related testing.
The idea is:
Deploy it
SSH in, do my testing
Delete the container/clean it up
I believe K8s doesn't deal with containers and so I'll probably need to deploy a 1-container pod or service (not clear on the difference there), but you get the idea.
I could just create 2 YAML files and run the kubectl apply ... on them like I do for everything else, but that feels a bit heavy-handed. I'm wondering if there is an easier way to do this all from the command-line without having to create YAML files each time I want to do a test like this. Is there?
You can create a pod running a container with the given image with this command:
kubectl run my-debug-pod --image=google/cloud-sdk
And if you want to create a Deployment without writing the YAML, you can do:
kubectl create deployment my-app --image=my-container-image
You could also just generate the YAML and save it to a file, then use kubectl apply -f deployment.yaml. Generate it with:
kubectl create deployment my-app --image=my-container-image \
--dry-run=client -o yaml > deployment.yaml
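For the troubleshoot-then-clean-up workflow you describe, the whole loop could look roughly like this (pod name taken from the answer above):
# shell into the debug pod, do your testing, then remove it
$ kubectl exec -it my-debug-pod -- bash
$ kubectl delete pod my-debug-pod
# or do it in one shot: with --rm the pod is deleted automatically when you exit the shell
$ kubectl run my-debug-pod --image=google/cloud-sdk -it --rm -- bash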

Cannot update Kubernetes pod from yaml generated from kubectl get pod pod_name -o yaml

I have a pod in my Kubernetes cluster which needed an update to add a securityContext. So I generated a YAML file using:
kubectl get pod pod_name -o yaml > mypod.yaml
After adding the required securityContext and executing:
kubectl apply -f mypod.yaml
no changes are observed in the pod.
Whereas a freshly created YAML file works perfectly fine.
New YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: default
spec:
  securityContext:
    runAsUser: 1010
  containers:
  - command:
    - sleep
    - "4800"
    image: ubuntu
    name: myubuntuimage
Immutable fields
In Kubernetes you can find information about Immutable fields.
A lot of fields in APIs tend to be immutable, they can't be changed after creation. This is true for example for many of the fields in pods. There is currently no way to declaratively specify that fields are immutable, and one has to rely on either built-in validation for core types, or have to build a validating webhooks for CRDs.
Why?
There are resources in Kubernetes which have immutable fields by design, i.e. after creation of an object, those fields cannot be mutated anymore. E.g. a pod's specification is mostly unchangeable once it is created. To change the pod, it must be deleted, recreated and rescheduled.
Editing existing pod configuration
If you try to apply the new config with the security context using kubectl apply, you will get an error like the one below:
The Pod "mypod" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
You will get the same output if you use kubectl patch:
kubectl patch pod mypod -p '{"spec":{"securityContext":{"runAsUser":1010}}}'
kubectl edit will also not change this specific configuration:
$ kubectl edit pod
Edit cancelled, no changes made.
Solution
If you need only one Pod, you must delete it and create a new one with the requested configuration.
A better solution is to use a resource that manages Pods and reconciles such changes for you, like a Deployment. After you change the configuration, the Deployment will create a new ReplicaSet, which will create new Pods with the new configuration:
Declare the new state of the Pods by updating the PodTemplateSpec of the Deployment. A new ReplicaSet is created and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment.
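A minimal sketch of the Pod from the question wrapped in a Deployment (the label is made up; everything else is copied from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myubuntu
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myubuntu
  template:
    metadata:
      labels:
        app: myubuntu
    spec:
      securityContext:
        runAsUser: 1010      # changing this later triggers a rolling update instead of an error
      containers:
      - name: myubuntuimage
        image: ubuntu
        command:
        - sleep
        - "4800"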

Change the secret a Kubernetes deployment expects

I've been having a recurring problem with a deployment for a particular pod, fooserviced, recently. I usually get a CreateContainerConfigError when I update the pod, and the detail given is Error: secrets "fooserviced-envars" not found. I'm not sure when I named the file this poorly but so far the only solution I've found is to re-add the environment variables file using
kubectl create secret generic fooserviced-envars --from-env-file ./fooserviced-envvars.txt
So now, when I do kubectl get secrets I see both fooserviced-envars and fooserviced-envvars. I'd like to change the deployment to use fooserviced-envvars; how would I do this?
You can edit the deployment via kubectl edit deployment deploymentname, which will open an editor, and you can change the secret reference there live.
Another way to do this would be to run kubectl get deployment deploymentname -o yaml > deployment.yaml, which will give you the YAML file; you can edit it in your editor and kubectl apply the modified YAML.
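If the deployment consumes the secret through environment variables, another shortcut can be kubectl set env, which imports all keys of a secret as env vars (the deployment name here is a guess based on the question, and you would still need to remove the old fooserviced-envars reference, e.g. with kubectl edit):
$ kubectl set env deployment/fooserviced --from=secret/fooserviced-envvars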
Make sure the secret is in the same namespace; otherwise you cannot use it.
If you want to change the deployment, change your Kubernetes deployment YAML file, e.g.:
env:
- name: POSTGRES_DB_URL
  valueFrom:
    secretKeyRef:
      key: postgres_db_url
      name: fooserviced-envars
then run kubectl apply -f your_deployment_file.

Zalando postgres operator issue with config

I am getting the below issue with the Zalando Postgres operator. The default manifests are applied on the Kubernetes cluster (hosted on-prem) as provided here:
https://github.com/zalando/postgres-operator/tree/4a099d698d641b80c5aeee5bee925921b7283489/manifests
I verified whether there are any issues in the operator names, the configmaps, or the service-account definitions, but couldn't figure out much.
kubectl logs -f postgres-operator-944b9d484-9h796
2019/10/24 16:31:02 Spilo operator v1.2.0
2019/10/24 16:31:02 Fully qualified configmap name: default/postgres-operator
panic: configmaps "postgres-operator" is forbidden: User "system:serviceaccount:default:zalando-postgres-operator" cannot get resource "configmaps" in API group "" in the namespace "default"
goroutine 1 [running]:
github.com/zalando/postgres-operator/pkg/controller.(*Controller).initOperatorConfig(0xc0004a6000)
/root/go/src/github.com/zalando/postgres-operator/pkg/controller/controller.go:102 +0x687
github.com/zalando/postgres-operator/pkg/controller.(*Controller).initController(0xc0004a6000)
/root/go/src/github.com/zalando/postgres-operator/pkg/controller/controller.go:253 +0x825
github.com/zalando/postgres-operator/pkg/controller.(*Controller).Run(0xc0004a6000, 0xc000464660, 0xc000047a70)
/root/go/src/github.com/zalando/postgres-operator/pkg/controller/controller.go:348 +0x2f
main.main()
/workspace/cmd/main.go:82 +0x256
Any help here?
I have set up postgres-operator in my environment and it is working perfectly in my case. Please make sure that you have followed these steps:
Clone postgres-operator repo:
$ git clone https://github.com/zalando/postgres-operator
$ cd postgres-operator
The operator from Zalando can be configured in two ways: using a classical ConfigMap, or using a CRD configuration object, which is more powerful:
$ kubectl create -f manifests/operator-service-account-rbac.yaml
serviceaccount/zalando-postgres-operator created
clusterrole.rbac.authorization.k8s.io/zalando-postgres-operator created
clusterrolebinding.rbac.authorization.k8s.io/zalando-postgres-operator created
In order to use the CRD config, you must change a value in the postgres-operator itself. Change the last few lines in manifests/postgres-operator.yaml so they read:
env:
# provided additional ENV vars can overwrite individual config map entries
#- name: CONFIG_MAP_NAME
#  value: "postgres-operator"
# In order to use the CRD OperatorConfiguration instead, uncomment these lines and comment out the two lines above
- name: POSTGRES_OPERATOR_CONFIGURATION_OBJECT
  value: postgresql-operator-default-configuration
The service account name given in that file does not match the one given by the operator service account definition, so you must adjust it and create the actual config object it references. This is placed in manifests/postgresql-operator-default-configuration.yaml. These are the values that must be set:
configuration:
  kubernetes:
    pod_environment_configmap: postgres-pod-config
    pod_service_account_name: zalando-postgres-operator
Let's create the operator and its configuration:
$ kubectl create -f manifests/postgres-operator.yaml
deployment.apps/postgres-operator created
Please wait a few minutes before typing the following command:
$ kubectl create -f postgresql-operator-default-configuration.yaml
operatorconfiguration.acid.zalan.do/postgresql-operator-default-configuration created
Now you will be able to see your Pod running:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-operator-599fd68d95-c8z67 1/1 Running 0 21m
You can also refer to this article; I hope it helps you.
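If you still see the forbidden error from the question after this, a quick check (adjust the namespace if you installed the operator elsewhere) is to ask the API server whether the service account may read configmaps:
$ kubectl auth can-i get configmaps -n default \
    --as=system:serviceaccount:default:zalando-postgres-operator
If this prints no, the ClusterRoleBinding from manifests/operator-service-account-rbac.yaml is either missing or bound to a different service account name than the one the deployment uses.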

Pod status as `CreateContainerConfigError` in Minikube cluster

I am trying to run the Sonarqube service using the following helm chart.
The set-up is that it starts a MySQL and a Sonarqube service in the minikube cluster, and the Sonarqube service talks to the MySQL service to dump the data.
When I do helm install followed by kubectl get pods, I see the MySQL pod status as running, but the Sonarqube pod status shows CreateContainerConfigError. I reckon it has to do with the volume mounting thingy: link. Although I am not quite sure how to fix it (pretty new to the Kubernetes environment and still learning :) )
This can be solved in various ways; I suggest you go for kubectl describe pod pod_name. You might now see the cause of why the service you've been trying to run is failing. In my case, I found that some of my key-values were missing from the configmap while doing the deployment.
I ran into this problem myself today as I was trying to create secrets and use them in my pod definition YAML file. It would help if you check the output of kubectl get secrets and kubectl get configmaps (if you are using any of them) and validate that the number of data items you wanted is listed correctly.
I recognized that in my case the problem occurred when creating a secret with multiple data items: the output of kubectl get secrets <secret_name> had only 1 data item, while I had specified 2 items in my secret_name_definition.yaml. This is because of the difference between using kubectl create -f secret_name_definition.yaml and kubectl create secret <secret_name> --from-file=secret_name_definition.yaml. In the former case, all the items listed in the data section of the YAML are treated as key-value pairs, so the number of items shows up correctly when we query with kubectl get secrets secret_name. In the latter case, only the first data item in secret_name_definition.yaml is evaluated for key-value pairs, so kubectl get secrets secret_name shows only 1 data item, and this is when we see the error "CreateContainerConfigError".
Note that this problem wouldn't occur if we used kubectl create secret <secret_name> with the --from-literal= option, because then we would have to use the prefix --from-literal= for every key-value pair we want to define.
Similarly, with the --from-file= option we still have to specify the prefix multiple times, once for each key-value pair; the difference is that with --from-literal we pass the raw value of the key, whereas in a YAML definition passed to kubectl create -f the values under data must be base64-encoded (i.e. the value of the key becomes echo -n raw_value | base64).
For example, say the keys are "username" and "password": if creating the secret using the command kubectl create -f secret_definition.yaml, we need to have the values for both "username" and "password" base64-encoded, as mentioned in the "Create a Secret" section of https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/
I would also like to highlight the "Note:" section in https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/
Also, https://kubernetes.io/docs/concepts/configuration/secret/ has a very clear explanation of creating Secrets.
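For example, the same two-key secret can be created either from a YAML definition with base64-encoded values or with --from-literal and raw values (the name test-secret and the example values come from the linked documentation):
# secret_definition.yaml -- values under data: must be base64-encoded
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  username: bXktYXBw              # echo -n 'my-app' | base64
  password: Mzk1MjgkdmRnN0pi      # echo -n '39528$vdg7Jb' | base64
# equivalent creation from raw literals; kubectl does the encoding for you
$ kubectl create secret generic test-secret \
    --from-literal=username='my-app' --from-literal=password='39528$vdg7Jb'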
Also make sure that the deployment.yaml now has the correct definition for this container:
env:
- name: DB_HOST
  value: 127.0.0.1
# These secrets are required to start the pod.
# [START cloudsql_secrets]
- name: DB_USER
  valueFrom:
    secretKeyRef:
      name: cloudsql-db-credentials
      key: username
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: cloudsql-db-credentials
      key: password
# [END cloudsql_secrets]
As quoted by others, "kubectl describe pods pod_name" would help, but in my case I only understood that the container wasn't being created in the first place, and the output of "kubectl logs pod_name -c container_name" didn't help much.
Recently, I encountered the same CreateContainerConfigError and, after a little debugging, I found that it was because I was using a Kubernetes secret in my Deployment YAML which was not actually present/created in the namespace where the pods were getting created.
Also, after reading the previous answer, I guess we can be fairly sure that this particular error is focused around Kubernetes secrets!
Check that your secrets and config maps (kubectl get [secrets|configmaps]) already exist and are correctly referenced in the YAML descriptor file; in both cases an incorrect secret/configmap (not created, misspelled, etc.) results in CreateContainerConfigError.
As already pointed out in the answers, you can check the error with kubectl describe pod [pod name], and something like this should appear at the bottom of the output:
Warning Failed 85s (x12 over 3m37s) kubelet, gke-****-default-pool-300d3c89-9jkz
Error: configmaps "config-map-1" not found
UPDATE: From #alexis-wilke
The list of events can be ephemeral in some versions and this message disappears quickly. As a rule of thumb, check the events list immediately when booting a pod; if you get CreateContainerConfigError without events, double-check secrets and config maps, as they can leave the pod in this state with no trace at some point.
I also ran into this issue, and the problem was due to an environment variable using a field ref, on a controller. The other controller and the worker were able to resolve the reference. We didn't have time to track down the cause of the issue and wound up tearing down the cluster and rebuilding it.
- name: DD_KUBERNETES_KUBELET_HOST
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
Apr 02 16:35:46 ip-10-30-45-105.ec2.internal sh[1270]: E0402 16:35:46.502567 1270 pod_workers.go:186] Error syncing pod 3eab4618-5564-11e9-a980-12a32bf6e6c0 ("datadog-datadog-spn8j_monitoring(3eab4618-5564-11e9-a980-12a32bf6e6c0)"), skipping: failed to "StartContainer" for "datadog" with CreateContainerConfigError: "host IP unknown; known addresses: [{Hostname ip-10-30-45-105.ec2.internal}]"
Try to use the option --from-env-file instead of --from-file and see if the problem disappears. I got the same error, and looking into the pod events suggested that the key-value pairs inside the mysecrets.txt file were not properly read. If you have only one line, Kubernetes takes the content of the file as the value and the filename as the key. To avoid this issue, you need to read the file as an environment-variable file, as shown below.
mysecrets.txt:
MYSQL_PASSWORD=dfsdfsdfkhk
For example:
kubectl create secret generic secret-name --from-env-file=mysecrets.txt
kubectl create configmap configmap-name --from-env-file=myconfigs.txt
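To verify that all key-value pairs were picked up (using the names from the example above), you can inspect the created objects:
$ kubectl get secret secret-name -o jsonpath='{.data}'
$ kubectl describe configmap configmap-name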