Can Kubernetes Cronjobs Reuse Environment Variables From Existing Deployment? - kubernetes

I am using a Kubernetes CronJob to run periodic database restores and post-restore scripts against the target environment, which includes tasks such as working with the database, Redis, and the file system.
The issue I am facing is that I have to re-define all the environment variables used in my Deployment within the CronJob (e.g., DATABASE_NAME, DATABASE_PASSWORD, REDIS_HOST, etc.).
While repeating all the environment variables works, it is error-prone; I have already forgotten to update the jobs once, which meant re-running the entire process, which takes 2-4 hours depending on the environment.
Is there a way to reference an existing Deployment and reuse its defined environment variables within my CronJob?

You can use a 'kind: PodPreset' object to define and inject common env variables into multiple Kubernetes objects like Deployments, StatefulSets, Pods, ReplicaSets, etc.
Follow the link for help: https://kubernetes.io/docs/tasks/inject-data-application/podpreset/
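A minimal sketch of what such a PodPreset could look like, assuming the settings.k8s.io/v1alpha1 API (PodPreset is an alpha feature and must be enabled on the cluster); the label selector and object names below are hypothetical:
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: shared-db-env
spec:
  selector:
    matchLabels:
      inject-db-env: "true"   # pods (Deployment and CronJob alike) carrying this label get the env injected
  envFrom:
  - configMapRef:
      name: db-config         # hypothetical ConfigMap holding DATABASE_NAME, REDIS_HOST, ...
  - secretRef:
      name: db-credentials    # hypothetical Secret holding DATABASE_PASSWORD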

I don't think you can reuse environment variables unless they come from Secrets or ConfigMaps. So if you don't want to use a Secret for non-sensitive data, you can use a ConfigMap, as below:
kubectl create configmap redis-uname --from-literal=username=jp
[root@master ~]# kubectl get cm redis-uname -o yaml
apiVersion: v1
data:
  username: jp
kind: ConfigMap
metadata:
  creationTimestamp: "2019-11-28T21:38:18Z"
  name: redis-uname
  namespace: default
  resourceVersion: "1090703"
  selfLink: /api/v1/namespaces/default/configmaps/redis-uname
  uid: 1a9e3cce-50b1-448b-8bae-4b2c6ccb6861
[root@master ~]#
[root@master ~]# echo -n 'K8sCluster!' | base64
SzhzQ2x1c3RlciE=
[root@master ~]# cat redis-sec.yaml
apiVersion: v1
kind: Secret
metadata:
  name: redissecret
data:
  password: SzhzQ2x1c3RlciE=
[root@master ~]#
[root@master ~]# kubectl apply -f redis-sec.yaml
secret/redissecret created
[root@master ~]# kubectl get secret redissecret -o yaml
apiVersion: v1
data:
  password: SzhzQ2x1c3RlciE=
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"password":"SzhzQ2x1c3RlciE="},"kind":"Secret","metadata":{"annotations":{},"name":"redissecret","namespace":"default"}}
  creationTimestamp: "2019-11-28T21:40:18Z"
  name: redissecret
  namespace: default
  resourceVersion: "1090876"
  selfLink: /api/v1/namespaces/default/secrets/redissecret
  uid: 2b6acdcd-d7c6-4e50-bd0e-8c323804155b
type: Opaque
[root@master ~]#
apiVersion: v1
kind: Pod
metadata:
  name: "redis-sec-env-pod"
spec:
  containers:
  - name: redis-sec-env-cn
    image: "redis"
    env:
    - name: username
      valueFrom:
        configMapKeyRef:
          name: redis-uname
          key: username
    - name: password
      valueFrom:
        secretKeyRef:
          name: redissecret
          key: password
[root@master ~]# kubectl apply -f reddis_sec_pod.yaml
pod/redis-sec-env-pod created
[root@master ~]# kubectl exec -it redis-sec-env-pod sh
# env | grep -i user
username=jp
# env | grep -i pass
password=K8sCluster!
#
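To tie this back to the original question: once the values live in a ConfigMap and a Secret, both the Deployment and the CronJob can reference the same objects instead of repeating literal values. A minimal sketch, assuming the batch/v1beta1 CronJob API (batch/v1 on newer clusters); the schedule and image are illustrative:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: db-restore
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: restore
            image: my-restore-image:latest   # hypothetical restore image
            envFrom:                         # exposes every key as an env variable
            - configMapRef:
                name: redis-uname
            - secretRef:
                name: redissecret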

Related

Define a standalone patch as YAML

I have a need to define a standalone patch as YAML.
More specifically, I want to do the following:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "registry-my-registry"}]}'
The catch is I can't use kubectl patch. I'm using a GitOps workflow with flux, and that resource I want to patch is a default resource created outside of flux.
In other terms, I need to do the same thing as the command above but with kubectl apply only:
kubectl apply -f patch.yaml
I wasn't able to figure out if you can define such a patch.
The key bit is that I can't predict the name of the default secret token on a new cluster (as the name is random, i.e. default-token-uudge)
Fields set and deleted from Resource Config are merged into Resources by Kubectl apply:
If a Resource already exists, Apply updates the Resources by merging the
local Resource Config into the remote Resources
Fields removed from the Resource Config will be deleted from the remote Resource
You can learn more about Kubernetes Field Merge Semantics.
If your limitation is not knowing the secret default-token-xxxxx name, no problem, just keep that field out of your yaml.
As long as the yaml has enough fields to identify the target resource (name, kind, namespace) it will add/edit the fields you set.
I created a cluster (minikube in this example, but it could be any) and retrieved the current default serviceAccount:
$ kubectl get serviceaccount default -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2020-07-01T14:51:38Z"
  name: default
  namespace: default
  resourceVersion: "330"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: a9e5ff4a-8bfb-466f-8873-58c2172a5d11
secrets:
- name: default-token-j6zx2
Then, we create a yaml file with the contents that we want to add:
$ cat add-image-pull-secrets.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
imagePullSecrets:
- name: registry-my-registry
Now we apply and verify:
$ kubectl apply -f add-image-pull-secrets.yaml
serviceaccount/default configured
$ kubectl get serviceaccount default -o yaml
apiVersion: v1
imagePullSecrets:
- name: registry-my-registry
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","imagePullSecrets":[{"name":"registry-my-registry2"}],"kind":"ServiceAccount","metadata":{"annotations":{},"name":"default","namespace":"default"}}
  creationTimestamp: "2020-07-01T14:51:38Z"
  name: default
  namespace: default
  resourceVersion: "2382"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: a9e5ff4a-8bfb-466f-8873-58c2172a5d11
secrets:
- name: default-token-j6zx2
As you can see, the imagePullSecrets entry was added to the resource.
I hope it fits your needs. If you have any further questions let me know in the comments.
Let's say your service account YAML looks like below:
$ kubectl get sa demo -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo
  namespace: default
secrets:
- name: default-token-uudge
Now, you want to add or change the imagePullSecrets for that service account. To do so, edit the YAML file and add imagePullSecrets.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo
  namespace: default
secrets:
- name: default-token-uudge
imagePullSecrets:
- name: myregistrykey
And finally, apply the changes:
$ kubectl apply -f service-account.yaml

Set container environment variables while creating a deployment with kubectl create

We can create a deployment with:
kubectl create deployment nginx-deployment --image=nginx
How can we pass an environment variable, say, key=value, for container while creating a deployment using kubectl?
Additionally, can we also use configmap or secret values as environment variables?
kubectl run nginx-pod --generator=run-pod/v1 --image=nginx --env="key1=value1" --env="key2=value2"...
Reference - run.
The kubectl create deployment command does not have an option to pass an environment variable as a flag on the imperative command. The available flags on the create deployment command are listed below (via autocomplete on the kubectl CLI):
$ kubectl create deployment nginx --image=nginx --
--add-dir-header --client-certificate= --insecure-skip-tls-verify --log-flush-frequency= --profile-output= --token
--allow-missing-template-keys --client-key --kubeconfig --logtostderr --request-timeout --token=
--alsologtostderr --client-key= --kubeconfig= --match-server-version --request-timeout= --user
--as --cluster --log-backtrace-at --namespace --save-config --user=
--as= --cluster= --log-backtrace-at= --namespace= --server --username
--as-group --context --log-dir --output --server= --username=
--as-group= --context= --log-dir= --output= --skip-headers --v
--cache-dir --dry-run --log-file --password --skip-log-headers --v=
--cache-dir= --generator --log-file= --password= --stderrthreshold --validate
--certificate-authority --generator= --log-file-max-size --profile --stderrthreshold= --vmodule
--certificate-authority= --image --log-file-max-size= --profile= --template --vmodule=
--client-certificate --image= --log-flush-frequency --profile-output --template=
Alternatively, the kubectl run command can be used to create a deployment, which allows you to pass the env flag on the imperative command; refer to the example below:
$ kubectl run nginx --image=nginx --env="TEST"="/var/tmp"
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
Now, to check that the env variable has been set correctly, you can connect to the pod and display its environment variables:
Connect
$ kubectl exec -it nginx /bin/bash
List env variables on the pod
root@nginx:/# env | grep -i test
TEST=/var/tmp
For the second part of your question, refer to the official doc example: link
When creating a pod you can specify the environment variables using the --env option eg.
kubectl run nginx-pod --restart Never --image=nginx --env=key1=value1,key2=value2
Checkout kubectl run documentation
However, you cannot do this with kubectl create deployment. I'd recommend using a declarative manifest instead.
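For instance, a minimal sketch of such a manifest, setting the key=value pair from the question directly on the container (the names here are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        env:
        - name: key       # plain literal environment variable
          value: "value"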
There is a possibility, as other community members pointed out, to pass a variable to a pod, but I would advise you to use a declarative approach to creating objects in Kubernetes. Why?
There is a nice comic explaining differences between imperative and declarative approach.
Below are examples to:
Create a basic NGINX deployment
Create a Configmap and Secret in a declarative approach
Apply above Configmap and Secret to already created NGINX deployment
Create a basic NGINX deployment
Create a YAML definition of NGINX similar to this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Apply it by running $ kubectl apply -f FILE_NAME.yaml
Create a basic ConfigMap
Create a YAML definition of ConfigMap similar to this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-for-nginx
data:
  port: "12345"
Apply it by running $ kubectl apply -f FILE_NAME.yaml
Create a basic Secret
Create a YAML definition of Secret similar to this:
apiVersion: v1
kind: Secret
metadata:
  name: password-for-nginx
type: Opaque
data:
  password: c3VwZXJoYXJkcGFzc3dvcmQK
Take a specific look at:
password: c3VwZXJoYXJkcGFzc3dvcmQK
This password is base64 encoded.
To create this password invoke command from your terminal:
$ echo "YOUR_PASSWORD" | base64
Paste the output to the YAML definition and apply it with:
$ kubectl apply -f FILE_NAME.yaml
Apply above Configmap and Secret to already created NGINX deployment
You can edit your previously created NGINX deployment and add the part responsible for adding the ConfigMap and Secret to be available as environmental variables:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        env:
        - name: NGINX_PASSWORD
          valueFrom:
            secretKeyRef:
              name: password-for-nginx
              key: password
        - name: NGINX_PORT
          valueFrom:
            configMapKeyRef:
              name: config-for-nginx
              key: port
        ports:
        - containerPort: 80
Please take a closer look at the part below, which adds the environment variables NGINX_PASSWORD and NGINX_PORT to the created pods:
env:
- name: NGINX_PASSWORD
  valueFrom:
    secretKeyRef:
      name: password-for-nginx
      key: password
- name: NGINX_PORT
  valueFrom:
    configMapKeyRef:
      name: config-for-nginx
      key: port
The secretKeyRef is a reference to the created Secret, and the configMapKeyRef is a reference to the created ConfigMap.
Apply it by running $ kubectl apply -f FILE_NAME.yaml once more.
It will terminate the old pods and create new ones with the new configuration.
To check whether the environment variables are configured correctly, invoke the commands below:
$ kubectl get pods
$ kubectl exec -it NAME_OF_THE_POD -- /bin/bash
$ echo $NGINX_PORT
$ echo $NGINX_PASSWORD
You should see the variables from ConfigMap and Secret accordingly.

How does K8S handle multiple remote docker registries in a POD definition using the imagePullSecrets list

I would like to access multiple remote registries to pull images.
In the k8s documentation they say:
(If you need access to multiple registries, you can create one secret
for each registry. Kubelet will merge any imagePullSecrets into a
single virtual .docker/config.json)
and so the POD definition should be something like this:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: ...
  imagePullSecrets:
  - name: secret1
  - name: secret2
  - ....
  - name: secretN
Now I am not sure how K8S will pick the right secret for each image. Will all secrets be tried one by one each time? How will K8S handle the failed retries? And could a certain number of unauthorized retries lead to some lock state in k8s or the docker registries?
/ Thanks
You can use the following script to add two authentications in one secret:
#!/bin/bash
u1="user_1_here"
p1="password_1_here"
auth1=$(echo -n "$u1:$p1" | base64 -w0)
u2="user_2_here"
p2="password_2_here"
auth2=$(echo -n "$u2:$p2" | base64 -w0)
cat <<EOF > docker_config.json
{
  "auths": {
    "repo1_name_here": {
      "auth": "$auth1"
    },
    "repo2_name_here": {
      "auth": "$auth2"
    }
  }
}
EOF
base64 -w0 docker_config.json > docker_config_b64.json
cat <<EOF | kubectl apply -f -
apiVersion: v1
type: kubernetes.io/dockerconfigjson
kind: Secret
data:
  .dockerconfigjson: $(cat docker_config_b64.json)
metadata:
  name: specify_secret_name_here
  namespace: specify_namespace_here
EOF
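With the merged secret created, a pod (or a deployment's pod template) would then reference that single secret, roughly like this (the secret and image names are the placeholders from the script above):
apiVersion: v1
kind: Pod
metadata:
  name: example
  namespace: specify_namespace_here
spec:
  imagePullSecrets:
  - name: specify_secret_name_here          # the single secret containing both registry auths
  containers:
  - name: app
    image: repo1_name_here/some-image:latest   # hypothetical image hosted on one of the registries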
Kubernetes isn't going to try all secrets until it finds the correct one. When you create the secret, you are declaring that it is for a docker registry:
$ kubectl create secret docker-registry user1-secret --docker-server=https://index.docker.io/v1/ --docker-username=user1 --docker-password=PASSWORD456 --docker-email=user1@email.com
$ kubectl create secret docker-registry user2-secret --docker-server=https://index.docker.io/v1/ --docker-username=user2 --docker-password=PASSWORD123 --docker-email=user2@email.com
$ kubectl get secrets user1-secret -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJkb2NrZXIuZXhhbXBsZS5jb20iOnsidXNlcm5hbWUiOiJrdWJlIiwicGFzc3dvcmQiOiJQV19TVFJJTkciLCJlbWFpbCI6Im15QGVtYWlsLmNvbSIsImF1dGgiOiJhM1ZpWlRwUVYxOVRWRkpKVGtjPSJ9fX0=
kind: Secret
metadata:
  creationTimestamp: "2020-01-13T13:15:52Z"
  name: user1-secret
  namespace: default
  resourceVersion: "1515301"
  selfLink: /api/v1/namespaces/default/secrets/user1-secret
  uid: d2f3bb0c-3606-11ea-a202-42010a8000ad
type: kubernetes.io/dockerconfigjson
As you can see, the type kubernetes.io/dockerconfigjson tells Kubernetes to treat this secret differently.
So, when you reference the address of your container as magic.example.com/magic-image on your yaml, Kubernetes will have enough information to connect the dots and use the right secret to pull your image.
apiVersion: v1
kind: Pod
metadata:
  name: busyboxes
  namespace: default
spec:
  imagePullSecrets:
  - name: user1-secret
  - name: user2-secret
  containers:
  - name: jenkins
    image: user1/jenkins
    imagePullPolicy: Always
  - name: busybox
    image: user2/busybox
    imagePullPolicy: Always
So as this example describes, it's possible to have 2 or more docker registry secrets with the same --docker-server value. Kubernetes will manage to take care of it seamlessly.

Kube / create deployment with config map

I'm new to Kubernetes, and I'm trying to create a deployment with a ConfigMap file. I have the following:
app-mydeploy.yaml
--------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-mydeploy
  labels:
    app: app-mydeploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mydeploy
  template:
    metadata:
      labels:
        app: mydeploy
    spec:
      containers:
      - name: mydeploy-1
        image: mydeploy:tag-latest
        envFrom:
        - configMapRef:
            name: map-mydeploy
map-mydeploy
-----
apiVersion: v1
kind: ConfigMap
metadata:
  name: map-mydeploy
  namespace: default
data:
  my_var: 10.240.12.1
I created the config and the deploy with the following commands:
kubectl create -f app-mydeploy.yaml
kubectl create configmap map-mydeploy --from-file=map-mydeploy
When I'm doing kubectl describe deployments, I'm getting, among other things:
Environment Variables from:
  map-mydeploy  ConfigMap  Optional: false
Also, kubectl describe configmaps map-mydeploy gives me the right results.
The issue is that my container is in CrashLoopBackOff. When I look at the logs, it says: time="2019-02-05T14:47:53Z" level=fatal msg="Required environment variable my_var is not set.
This log is from my container, which says that my_var is not defined in the env vars.
What am I doing wrong?
I think you are missing your key in the command:
kubectl create configmap map-mydeploy --from-file=map-mydeploy
Try this instead: kubectl create configmap map-mydeploy --from-file=my_var=map-mydeploy
Also, if you are just using one value, I highly recommend creating your ConfigMap from a literal, kubectl create configmap my-config --from-literal=my_var=10.240.12.1, and then referencing the ConfigMap in your deployment as you are currently doing.
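To illustrate why the env variable is missing: with --from-file and no explicit key, the key defaults to the file name and the value is the entire file content, so there is no my_var key for envFrom to expose. A rough sketch of the difference (output abbreviated, assuming the file contains the ConfigMap manifest shown above):
# key defaults to the file name, value is the whole file content
$ kubectl create configmap map-mydeploy --from-file=map-mydeploy
$ kubectl get cm map-mydeploy -o yaml
data:
  map-mydeploy: |
    apiVersion: v1
    kind: ConfigMap
    ...

# explicit key=value produces the key the container expects
$ kubectl create configmap my-config --from-literal=my_var=10.240.12.1
$ kubectl get cm my-config -o yaml
data:
  my_var: 10.240.12.1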

Secret volumes do not work on multinode docker setup

I have set up a multinode Kubernetes 1.0.3 cluster using instructions from https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker-multinode.md.
I create a secret volume using the following spec in myns namespace:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: myns
  labels:
    name: mysecret
data:
  myvar: "bUNqVlhCVjZqWlZuOVJDS3NIWkZHQmNWbXBRZDhsOXMK"
Create secret volume:
$ kubectl create -f mysecret.yml --namespace=myns
Check to see if secret volume exists:
$ kubectl get secrets --namespace=myns
NAME       TYPE     DATA
mysecret   Opaque   1
Here is the Pod spec of the consumer of the secret volume:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: myns
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    name: busybox
    volumeMounts:
    - name: mysecret
      mountPath: /etc/mysecret
      readOnly: true
  volumes:
  - name: mysecret
    secret:
      secretName: mysecret
Create the Pod
kubectl create -f busybox.yml --namespace=myns
Now, if I exec into the docker container to inspect the contents of the /etc/mysecret directory, I find it to be empty.
What namespace are your pod and secret in? They must be in the same namespace. Would you post a gist or pastebin of the Kubelet log? That contains information that can help us diagnose this.
Also, are you running the Kubelet on your host directly or in a container?
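A few commands that could help gather that information (a sketch; adjust names and namespaces to your setup):
# confirm both objects are in the same namespace
$ kubectl get secret mysecret --namespace=myns
$ kubectl get pod busybox --namespace=myns

# check the volume mount and any mount-related events
$ kubectl describe pod busybox --namespace=myns

# list the mounted directory from inside the container
$ kubectl exec busybox --namespace=myns -- ls -la /etc/mysecret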