Set container environment variables while creating a deployment with kubectl create - kubernetes

We can create a deployment with:
kubectl create deployment nginx-deployment --image=nginx
How can we pass an environment variable, say key=value, to the container while creating a deployment using kubectl?
Additionally, can we also use configmap or secret values as environment variables?

kubectl run nginx-pod --generator=run-pod/v1 --image=nginx --env="key1=value1" --env="key2=value2"...
(Note: the --generator flag has since been removed in recent kubectl versions; plain kubectl run with --env now creates a pod directly.)
Reference - run.

The kubectl create deployment command does not have an option to pass an environment variable as a flag on the imperative command. The available flags on the create deployment command are listed below (as shown by autocomplete in the kubectl CLI):
$ kubectl create deployment nginx --image=nginx --
--add-dir-header --client-certificate= --insecure-skip-tls-verify --log-flush-frequency= --profile-output= --token
--allow-missing-template-keys --client-key --kubeconfig --logtostderr --request-timeout --token=
--alsologtostderr --client-key= --kubeconfig= --match-server-version --request-timeout= --user
--as --cluster --log-backtrace-at --namespace --save-config --user=
--as= --cluster= --log-backtrace-at= --namespace= --server --username
--as-group --context --log-dir --output --server= --username=
--as-group= --context= --log-dir= --output= --skip-headers --v
--cache-dir --dry-run --log-file --password --skip-log-headers --v=
--cache-dir= --generator --log-file= --password= --stderrthreshold --validate
--certificate-authority --generator= --log-file-max-size --profile --stderrthreshold= --vmodule
--certificate-authority= --image --log-file-max-size= --profile= --template --vmodule=
--client-certificate --image= --log-flush-frequency --profile-output --template=
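As a workaround (not shown in the flag list above, and assuming a reasonably recent kubectl), you can create the deployment first and then add environment variables imperatively with kubectl set env, which can also source them from a ConfigMap or Secret; the resource names below are placeholders:

```shell
# Create the deployment, then set a literal env var:
kubectl create deployment nginx-deployment --image=nginx
kubectl set env deployment/nginx-deployment key=value

# Import all keys of a ConfigMap or Secret as env vars:
kubectl set env deployment/nginx-deployment --from=configmap/my-config
kubectl set env deployment/nginx-deployment --from=secret/my-secret
```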
Alternatively, the kubectl run command can be used to create a deployment, which allows you to pass the --env flag on the imperative command; see the example below:
$ kubectl run nginx --image=nginx --env="TEST"="/var/tmp"
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
Now, to check that the env variable has been set correctly, you can connect to the pod and display the env variables to verify it.
Connect
$ kubectl exec -it nginx -- /bin/bash
List env variables on the pod
root@nginx:/# env | grep -i test
TEST=/var/tmp
Refer to the official doc example for the second part of your question: link

When creating a pod, you can specify the environment variables using the --env option, e.g.:
kubectl run nginx-pod --restart Never --image=nginx --env=key1=value1,key2=value2
Checkout kubectl run documentation
However, you cannot do this with kubectl create deployment. I'd recommend you use a declarative manifest instead.

As other community members pointed out, it is possible to pass a variable to a pod, but I would advise you to use a declarative approach to creating objects in Kubernetes. Why?
There is a nice comic explaining differences between imperative and declarative approach.
Below are examples to:
Create a basic NGINX deployment
Create a Configmap and Secret in a declarative approach
Apply above Configmap and Secret to already created NGINX deployment
Create a basic NGINX deployment
Create a YAML definition of NGINX similar to this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Apply it by running $ kubectl apply -f FILE_NAME.yaml
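After applying, you can optionally confirm that the rollout succeeded (these commands assume the manifest above):

```shell
kubectl rollout status deployment/nginx-deployment
kubectl get pods -l app=nginx
```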
Create a basic ConfigMap
Create a YAML definition of ConfigMap similar to this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-for-nginx
data:
  port: "12345"
Apply it by running $ kubectl apply -f FILE_NAME.yaml
Create a basic Secret
Create a YAML definition of Secret similar to this:
apiVersion: v1
kind: Secret
metadata:
  name: password-for-nginx
type: Opaque
data:
  password: c3VwZXJoYXJkcGFzc3dvcmQK
Take a specific look at:
password: c3VwZXJoYXJkcGFzc3dvcmQK
This password is base64 encoded.
To create this password, invoke the command below from your terminal (note the -n, so that echo's trailing newline is not encoded into the value):
$ echo -n "YOUR_PASSWORD" | base64
Paste the output into the YAML definition and apply it with:
$ kubectl apply -f FILE_NAME.yaml
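To double-check an encoded value before pasting it into the manifest, you can round-trip it locally; mypass below is just a placeholder password:

```shell
# Encode without the trailing newline:
echo -n "mypass" | base64            # prints bXlwYXNz
# Plain echo would encode the newline too:
echo "mypass" | base64               # prints bXlwYXNzCg==
# Decode to verify:
echo -n "bXlwYXNz" | base64 --decode # prints mypass
```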
Apply above Configmap and Secret to already created NGINX deployment
You can edit your previously created NGINX deployment and add the part responsible for making the ConfigMap and Secret available as environment variables:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        env:
        - name: NGINX_PASSWORD
          valueFrom:
            secretKeyRef:
              name: password-for-nginx
              key: password
        - name: NGINX_PORT
          valueFrom:
            configMapKeyRef:
              name: config-for-nginx
              key: port
        ports:
        - containerPort: 80
Take a closer look at the part below, which adds the values to the created pods as the environment variables NGINX_PASSWORD and NGINX_PORT:
env:
- name: NGINX_PASSWORD
  valueFrom:
    secretKeyRef:
      name: password-for-nginx
      key: password
- name: NGINX_PORT
  valueFrom:
    configMapKeyRef:
      name: config-for-nginx
      key: port
The secretKeyRef references the created Secret, and the configMapKeyRef references the created ConfigMap.
Apply it by running $ kubectl apply -f FILE_NAME.yaml once more.
It will terminate the old pods and create new ones with the new configuration.
To check that the environment variables are configured correctly, invoke the commands below:
$ kubectl get pods
$ kubectl exec -it NAME_OF_THE_POD -- /bin/bash
$ echo $NGINX_PORT
$ echo $NGINX_PASSWORD
You should see the variables from the ConfigMap and Secret, respectively.

Related

Kubernetes deployment created but not listed

I've just started learning Kubernetes and I created a deployment using the command kubectl run demo1-k8s --image=demo1-k8s:1.0 --port 8080 --image-pull-policy=Never. I got the message that the deployment was created. But when I list the deployments (kubectl get deployments), the deployment is not listed; instead I get the message No resources found in default namespace.
Any idea guys?
From the docs, kubectl run creates a pod, not a deployment. So you can use the kubectl get pods command to check whether the pod was created. For creating a deployment, use kubectl create deployment as documented here.
For deployment creation you need to use kubectl create deployment; with kubectl run, a pod will be created.
kubectl create deployment demo1-k8s --image=demo1-k8s:1.0
The template is kubectl create deployment <deployment-name> --<flags>.
But it's usually better to use YAML to create a deployment or any other k8s resource. Just create a .yaml file.
deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo1-k8s
spec:
  selector:
    matchLabels:
      app: demo1-k8s
  replicas: 4 # Update the replicas from 2 to 4
  template:
    metadata:
      labels:
        app: demo1-k8s
    spec:
      containers:
      - name: demo1-k8s
        image: demo1-k8s:1.0
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
Then run: kubectl apply -f deploy.yaml

Kubectl imperative command for deployment

I used to create deployments quickly using imperative commands
kubectl run nginx --image=nginx --restart=Always --port=80 --replicas=3
Now the run command with deployment seems to have been deprecated. Is there any other way to do the same with kubectl create..., with replicas and ports?
Since Kubernetes v1.18, kubectl run no longer creates deployments, only pods.
What might be used instead is the imperative option of kubectl create deployment.
So the following command:
k create deploy nginx --image nginx
will do the trick for you.
It will create the Deployment object imperatively (no need for intermediate YAML files).
# Run:
kubectl create deploy nginx --image nginx && kubectl scale deploy nginx --replicas 3
# Check:
kubectl get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/3     3            3           14s
Note there is no --replicas flag on kubectl create deployment (in older versions; kubectl v1.19+ does support it, as a later answer shows), so the scaling is controlled separately here.
Try this:
kubectl create deploy nginx --image=nginx --dry-run=client -o yaml > webapp.yaml
Change the replicas to 5 in the YAML, then create it:
kubectl create -f webapp.yaml
Okay, the generators were deprecated because of how painful that code was to maintain. For easy deployment generation via the CLI, the best recommendation is Helm 3; it no longer needs Tiller and is very straightforward to use:
https://helm.sh/docs/intro/install/
Then, after installing, run an NGINX deployment via the CLI:
Add the repo
helm repo add bitnami https://charts.bitnami.com/bitnami
Also, you can first check what is going to get installed by adding --dry-run
helm install nginx bitnami/nginx --dry-run
Then run without --dry-run if you are satisfied with what is going to get deployed.
I am using Kubernetes v1.22.5.
Usage of the imperative command:
kubectl create deploy mydep --image=nginx --replicas=3 --dry-run=client -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mydep
  name: mydep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mydep
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mydep
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
The imperative way to do this, including setting the replicas on the command line without first saving the YAML and then editing it, is to run the following command:
kubectl create deploy nginx --image nginx --replicas 3 --port 80
You can add the --restart=Always switch to the above command if you need it.
And if you still want to save the YAML for any reason, like pushing it to git, you should be able to redirect the above command as usual; shell redirection works unchanged:
kubectl create deploy nginx --image nginx --replicas 3 --port 80 --output yaml --dry-run=client > file.yaml
Create an nginx-deployment.yaml file with the content below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
and run kubectl create -f nginx-deployment.yaml

Can Kubernetes Cronjobs Reuse Environment Variables From Existing Deployment?

I am using a Kubernetes CronJob to run periodic database restores and post-restore scripts against the target environment, which include tasks such as working with the database, Redis, and the file system.
The issue I am facing is that I have to re-define all the environment variables I use in my Deployment within the CronJob (e.g., DATABASE_NAME, DATABASE_PASSWORD, REDIS_HOST, etc.).
While repeating all the environment variables works, it is error-prone; I have already forgotten to update the jobs, which meant re-running the entire process, which takes 2-4 hours depending on the environment.
Is there a way to reference an existing Deployment and re-use the defined environment variables within my Cronjob?
You can use a 'kind: PodPreset' object to define and inject common env variables into multiple Kubernetes objects like deployments/statefulsets/pods/replicasets etc.
Follow the link for help --> https://kubernetes.io/docs/tasks/inject-data-application/podpreset/
(Note: PodPreset never left alpha and was removed in Kubernetes v1.20, so this only applies to older clusters.)
I don't think you can reuse environment variables unless they come from Secrets or ConfigMaps. So if you don't want to use Secrets for non-sensitive data, you can use ConfigMaps, like below:
kubectl create configmap redis-uname --from-literal=username=jp
[root@master ~]# kubectl get cm redis-uname -o yaml
apiVersion: v1
data:
  username: jp
kind: ConfigMap
metadata:
  creationTimestamp: "2019-11-28T21:38:18Z"
  name: redis-uname
  namespace: default
  resourceVersion: "1090703"
  selfLink: /api/v1/namespaces/default/configmaps/redis-uname
  uid: 1a9e3cce-50b1-448b-8bae-4b2c6ccb6861
[root@master ~]#
[root@master ~]# echo -n 'K8sCluster!' | base64
SzhzQ2x1c3RlciE=
[root@master ~]# cat redis-sec.yaml
apiVersion: v1
kind: Secret
metadata:
  name: redissecret
data:
  password: SzhzQ2x1c3RlciE=
[root@master ~]#
[root@master ~]# kubectl apply -f redis-sec.yaml
secret/redissecret created
[root@master ~]# kubectl get secret redissecret -o yaml
apiVersion: v1
data:
  password: SzhzQ2x1c3RlciE=
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"password":"SzhzQ2x1c3RlciE="},"kind":"Secret","metadata":{"annotations":{},"name":"redissecret","namespace":"default"}}
  creationTimestamp: "2019-11-28T21:40:18Z"
  name: redissecret
  namespace: default
  resourceVersion: "1090876"
  selfLink: /api/v1/namespaces/default/secrets/redissecret
  uid: 2b6acdcd-d7c6-4e50-bd0e-8c323804155b
type: Opaque
[root@master ~]#
apiVersion: v1
kind: Pod
metadata:
  name: "redis-sec-env-pod"
spec:
  containers:
  - name: redis-sec-env-cn
    image: "redis"
    env:
    - name: username
      valueFrom:
        configMapKeyRef:
          name: redis-uname
          key: username
    - name: password
      valueFrom:
        secretKeyRef:
          name: redissecret
          key: password
[root@master ~]# kubectl apply -f reddis_sec_pod.yaml
pod/redis-sec-env-pod created
[root@master ~]# kubectl exec -it redis-sec-env-pod sh
# env | grep -i user
username=jp
# env | grep -i pass
password=K8sCluster!
#
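Coming back to the original question: instead of copying individual variables into the CronJob, both the Deployment and the CronJob pod templates can load the same ConfigMap and Secret wholesale with envFrom, so the variables are defined in one place. A sketch using the objects created above (the container name and image are placeholders):

```yaml
spec:
  containers:
  - name: restore-job
    image: restore-image:latest   # placeholder image
    envFrom:
    - configMapRef:
        name: redis-uname
    - secretRef:
        name: redissecret
```

With envFrom, every key in the ConfigMap/Secret (here username and password) becomes an environment variable of the same name.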

How does K8s handle multiple remote Docker registries in a POD definition using the imagePullSecrets list?

I would like to access multiple remote registries to pull images.
In the k8s documentation they say:
(If you need access to multiple registries, you can create one secret
for each registry. Kubelet will merge any imagePullSecrets into a
single virtual .docker/config.json)
and so the POD definition should be something like this:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: ...
  imagePullSecrets:
  - name: secret1
  - name: secret2
  - ....
  - name: secretN
Now I am not sure how K8s will pick the right secret for each image. Will all secrets be tried one by one each time? How does K8s handle the failed attempts? And could a specific number of unauthorized retries lead to some lock state in K8s or the Docker registries?
You can use the following script to add two registry authentications in one secret:
#!/bin/bash
u1="user_1_here"
p1="password_1_here"
auth1=$(echo -n "$u1:$p1" | base64 -w0)
u2="user_2_here"
p2="password_2_here"
auth2=$(echo -n "$u2:$p2" | base64 -w0)
cat <<EOF > docker_config.json
{
  "auths": {
    "repo1_name_here": {
      "auth": "$auth1"
    },
    "repo2_name_here": {
      "auth": "$auth2"
    }
  }
}
EOF
base64 -w0 docker_config.json > docker_config_b64.json
cat <<EOF | kubectl apply -f -
apiVersion: v1
type: kubernetes.io/dockerconfigjson
kind: Secret
data:
  .dockerconfigjson: $(cat docker_config_b64.json)
metadata:
  name: specify_secret_name_here
  namespace: specify_namespace_here
EOF
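The auth field the script builds is just base64("username:password"), so you can sanity-check a single entry by hand; user:pass below is a placeholder credential:

```shell
# Build an auth string the same way the script does:
auth=$(printf '%s' "user:pass" | base64 | tr -d '\n')
echo "$auth"                            # prints dXNlcjpwYXNz
# Decode it back to confirm:
printf '%s' "$auth" | base64 --decode   # prints user:pass
```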
Kubernetes isn't going to try all secrets until it finds the correct one. When you create the secret, you specify that it's for a Docker registry:
$ kubectl create secret docker-registry user1-secret --docker-server=https://index.docker.io/v1/ --docker-username=user1 --docker-password=PASSWORD456 --docker-email=user1@email.com
$ kubectl create secret docker-registry user2-secret --docker-server=https://index.docker.io/v1/ --docker-username=user2 --docker-password=PASSWORD123 --docker-email=user2@email.com
$ kubectl get secrets user1-secret -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJkb2NrZXIuZXhhbXBsZS5jb20iOnsidXNlcm5hbWUiOiJrdWJlIiwicGFzc3dvcmQiOiJQV19TVFJJTkciLCJlbWFpbCI6Im15QGVtYWlsLmNvbSIsImF1dGgiOiJhM1ZpWlRwUVYxOVRWRkpKVGtjPSJ9fX0=
kind: Secret
metadata:
  creationTimestamp: "2020-01-13T13:15:52Z"
  name: user1-secret
  namespace: default
  resourceVersion: "1515301"
  selfLink: /api/v1/namespaces/default/secrets/user1-secret
  uid: d2f3bb0c-3606-11ea-a202-42010a8000ad
type: kubernetes.io/dockerconfigjson
As you can see, the type kubernetes.io/dockerconfigjson tells Kubernetes to treat this secret differently.
So, when you reference the address of your container as magic.example.com/magic-image in your YAML, Kubernetes will have enough information to connect the dots and use the right secret to pull your image.
apiVersion: v1
kind: Pod
metadata:
  name: busyboxes
  namespace: default
spec:
  imagePullSecrets:
  - name: user1-secret
  - name: user2-secret
  containers:
  - name: jenkins
    image: user1/jenkins
    imagePullPolicy: Always
  - name: busybox
    image: user2/busybox
    imagePullPolicy: Always
So as this example describes, it's possible to have 2 or more docker registry secrets with the same --docker-server value. Kubernetes will manage to take care of it seamlessly.

How to create a multi-container pod from the terminal, without a YAML config for a pod or deployment

Trying to figure out how to create a multi-container pod from the terminal with kubectl, without a YAML config for any resource.
I tried kubectl run --image=redis --image=nginx, but the second --image just overrides the first one .. :)
You can't do this in a single kubectl command, but you could do it in two: a kubectl run command followed by a kubectl patch command (note this assumes a kubectl version where kubectl run still creates a Deployment):
kubectl run mypod --image redis && kubectl patch deploy mypod --patch '{"spec": {"template": {"spec": {"containers": [{"name": "patch-demo", "image": "nginx"}]}}}}'
kubectl run is for running one or more instances of a container image on your cluster; see https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands
Go with a YAML config file and follow the steps below.
Create patch-demo.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: patch-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx
        image: nginx
---
Deploy patch-demo.yaml.
Create patch-containers.yaml as below:
---
spec:
  template:
    spec:
      containers:
      - name: redis
        image: redis
---
Patch the above deployment to include the redis container:
kubectl patch deployment patch-demo --patch "$(cat patch-containers.yaml)"
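To confirm that both containers ended up in the patched pod template, a jsonpath query works (assuming the patch-demo deployment above):

```shell
kubectl get deployment patch-demo \
  -o jsonpath='{.spec.template.spec.containers[*].name}'
# should print both container names: nginx redis
```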