Multi-container pod with command sleep (k8s) - kubernetes

I am trying out mock exams on Udemy and have created a multi-container pod, but the exam result says the command is not set correctly on container test2. I am not able to identify the issue.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: multi-pod
  name: multi-pod
spec:
  containers:
  - image: nginx
    name: test1
    env:
    - name: type
      value: demo1
  - image: busybox
    name: test2
    env:
    - name: type
       value: demo2
    command: ["sleep", "4800"]

An easy way to do this is to use an imperative kubectl command to generate the YAML for a single container, then edit the YAML to add the other container:
kubectl run nginx --image=nginx --command -oyaml --dry-run=client -- sh -c 'sleep 1d' > nginx.yaml
In this example, sleep 1d is the command. The generated YAML looks like this:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - command:
    - sh
    - -c
    - sleep 1d
    image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
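Continuing that approach, here is a sketch of how nginx.yaml might look after hand-adding a second container; the busybox/test2 container and its sleep 4800 command are taken from the question, and the field order is only illustrative:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - command:
    - sh
    - -c
    - sleep 1d
    image: nginx
    name: nginx
    resources: {}
  # second container added by hand; command sits at the same indent level as image and name
  - image: busybox
    name: test2
    command: ["sleep", "4800"]
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}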

Your issue is with your YAML on line 19.
Please keep in mind that YAML syntax is very sensitive to spaces and tabs.
Your issue:
  - image: busybox
    name: test2
    env:
    - name: type
       value: demo2 ### Issue is in this line, you have one extra space
    command: ["sleep", "4800"]
Solution:
Remove the space, so it will look like this:
    env:
    - name: type
      value: demo2
For validating YAML you can use external validators like yamllint.
If you paste your YAML into the mentioned validator, you will receive this error:
(<unknown>): mapping values are not allowed in this context at line 19 column 14
After removing the extra space you will get:
Valid YAML!
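As an additional local check (assuming the manifest is saved as multi-pod.yaml), a client-side dry run will surface the same parsing problem without touching the cluster:
kubectl apply -f multi-pod.yaml --dry-run=client
Once the extra space is removed, the same command should report the pod as created (dry run) instead of printing an error.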

Related

How to use a key with a hyphen in a kubernetes secret?

I want to inject the following secret key/value pairs into pods: test-with=1 and testwith=0. First I create the secret:
kubectl create secret generic test --from-literal=test-with=1 --from-literal=testwith=0
Then I create a yaml file for a pod with the following specification:
containers:
...
  envFrom:
  - secretRef:
      name: test
The pod is running, but the output of the env command inside the container only shows:
...
TERM=xterm
testwith=0
...
The test-with=1 does not show up. How can I declare the secret so that I can see the key/value?
Variables with delimiters in their names are displayed (at the top of the output) when viewed through printenv.
Checked:
$ kubectl create secret generic test --from-literal=test-with=1 --from-literal=testwith=0
$ kubectl get secret/test -o yaml
apiVersion: v1
data:
  test-with: MQ==
  testwith: MA==
kind: Secret
metadata:
  ...
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: check-env
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx
    envFrom:
    - secretRef:
        name: test
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  hostNetwork: true
  dnsPolicy: Default
EOF
$ kubectl exec -it check-env -- printenv | grep test
test-with=1
testwith=0
GKE v1.18.16-gke.502
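A side note on reading such variables from inside the container: a hyphen is not a valid character in a shell variable name, so $test-with will not expand the way you might expect, but the value is still in the process environment and external tools can read it. A minimal sketch, assuming the check-env pod from above:
# shell expansion won't work: $test-with is parsed as "$test" followed by the literal text "-with"
kubectl exec -it check-env -- printenv test-with
# or read it straight from the process environment
kubectl exec -it check-env -- sh -c 'tr "\0" "\n" < /proc/self/environ | grep ^test-with='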
I don't see any issue. Here is how I replicated it:
kubectl create secret generic test --from-literal=test-with=1 --from-literal=testwith=0
cat<<EOF | kubectl apply -f-
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: cent
  name: cent
spec:
  containers:
  - image: centos:7
    name: cent
    command:
    - sleep
    - "9999"
    envFrom:
    - secretRef:
        name: test
EOF
➜ ~ kubectl exec -it cent -- bash
[root@cent /]# env | grep test
test-with=1
testwith=0
Most probably this is an image issue.

Get value of configMap from mountPath

I created a configmap this way:
kubectl create configmap some-config --from-literal=key4=value1
After that I created a pod which looks like this.
I connect to this pod this way:
k exec -it nginx-configmap -- /bin/sh
I found the folder /some/path but I could not get the value of key4.
If you refer to your ConfigMap in your Pod this way:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: config-volume
  volumes:
  - name: config-volume
    configMap:
      name: some-config
it will be available in your Pod as a file /var/www/html/key4 with the content of value1.
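A quick way to see this (pod, path, and key names as above) would be:
kubectl exec mypod -- cat /var/www/html/key4
which should print value1.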
If you rather want it to be available as an environment variable you need to refer to it this way:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    envFrom:
    - configMapRef:
        name: some-config
As you can see, you don't need any volumes or volume mounts for this.
Once you connect to such a Pod by running:
kubectl exec -ti mypod -- /bin/bash
You will see that your environment variable is defined:
root@mypod:/# echo $key4
value1

Kubernetes ValidationError invalid type for io.k8s.api.core.v1.ContainerPort.containerPort: got "string", expected "integer";

I have the following pod manifest file. In it, I have defined some environment variables.
I want to assign an environment variable's value to the container port, like this:
- containerPort: $(PORT_HTTP)
but this YAML triggers an error when I try to create it:
ValidationError(Pod.spec.containers[0].ports[0].containerPort): invalid type for io.k8s.api.core.v1.ContainerPort.containerPort: got "string", expected "integer"; if you choose to ignore these errors, turn validation off with --validate=false
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: webapp
  name: webapp
spec:
  containers:
  - env:
    - name: PORT_HTTP
      value: 8080
    - name: PORT_HTTPS
      value: 8443
    image: nginx
    name: webapp
    ports:
    - containerPort: $(PORT_HTTP)
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
How can I convert the string value to an integer value in the YAML?
Environment variable substitution doesn't happen in Kubernetes manifests. To achieve this, you can use Helm, or you can use a shell command as follows:
( echo "cat <<EOF" ; cat pod.yaml; echo EOF ) | sh > pod-variable-resolved.yaml
And then use it to create the pod in Kubernetes:
kubectl apply -f pod-variable-resolved.yaml
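For completeness, and to the best of my knowledge, Kubernetes only performs $(VAR_NAME) expansion inside a container's command, args, and env value fields; containerPort has to be a literal integer in the manifest. A minimal sketch of the relevant part, assuming the port can simply be written out:
  containers:
  - env:
    - name: PORT_HTTP
      value: "8080"        # env values must be strings, hence the quotes
    image: nginx
    name: webapp
    ports:
    - containerPort: 8080  # literal integer; $(PORT_HTTP) is not expanded here
If the port has to stay configurable, a templating tool such as Helm (or the shell trick above) is the usual way to substitute it before the manifest reaches the API server.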

can i use a configmap created from an init container in the pod

I am trying to "pass" a value from the init container to a container. Since values in a configmap are shared across the namespace, I figured I can use it for this purpose. Here is my job.yaml (with faked-out info):
apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        - name: in_artifactoryUrl
          valueFrom:
            configMapKeyRef:
              name: test-config
              key: artifactorySnapshotUrl
      initContainers:
      - name: artifactory-snapshot
        image: busybox
        command: ['kubectl', 'create configmap test-config --from-literal=artifactorySnapshotUrl=http://artifactory.com/some/url']
      restartPolicy: Never
  backoffLimit: 0
This does not seem to work (EDIT: although the statements following this edit note may still be correct, this is not working because kubectl is not a recognizable command in the busybox image), and I am assuming that the pod can only read values from a configmap created BEFORE the pod is created. Has anyone else come across the difficulty of passing values between containers, and what did you do to solve this?
Should I deploy the configmap in another pod and wait to deploy this one until the configmap exists?
(I know I can write files to a volume, but I'd rather not go that route unless it's absolutely necessary, since it essentially means our docker images must be coupled to an environment where some specific files exist)
You can create an emptyDir volume and mount this volume into both containers. Unlike a persistent volume, emptyDir has no portability issues.
apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        volumeMounts:
        - name: tmp
          mountPath: /tmp/artifact
      initContainers:
      - name: artifactory-snapshot
        image: busybox
        command: ['/bin/sh', '-c', 'cp x /tmp/artifact/x']
        volumeMounts:
        - name: tmp
          mountPath: /tmp/artifact
      restartPolicy: Never
      volumes:
      - name: tmp
        emptyDir: {}
  backoffLimit: 0
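To make the hand-off concrete, here is one way the placeholder cp x /tmp/artifact/x above could be filled in: the init container writes the URL into a file on the shared emptyDir, and the main container reads that file at startup. The file name artifactorySnapshotUrl and the /entrypoint.sh wrapper are illustrative assumptions, not part of the original answer:
      initContainers:
      - name: artifactory-snapshot
        image: busybox
        # write the value into the shared volume
        command: ['/bin/sh', '-c', 'echo "http://artifactory.com/some/url" > /tmp/artifact/artifactorySnapshotUrl']
        volumeMounts:
        - name: tmp
          mountPath: /tmp/artifact
The main container can then pick the value up, for example with a small shell wrapper:
        command: ['/bin/sh', '-c', 'export in_artifactoryUrl=$(cat /tmp/artifact/artifactorySnapshotUrl) && exec /entrypoint.sh']
where /entrypoint.sh stands in for whatever the installer-test image normally runs.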
If, for various reasons, you don't want to use a shared volume and you want to create a ConfigMap or a Secret instead, here is a solution.
First you need to use a Docker image which contains kubectl, for example gcr.io/cloud-builders/kubectl:latest (a kubectl image maintained by Google).
Then this (init) container needs enough rights to create resources on the Kubernetes cluster. By default, Kubernetes injects a token for the service account named "default" into the container, but I prefer to make it explicit, so add this line:
...
    spec:
      # Already true by default, but I prefer to make it explicit
      # (automountServiceAccountToken is a pod-level field, not a container field)
      automountServiceAccountToken: true
      initContainers:
      - name: artifactory-snapshot
And bind the "edit" role to the "default" service account:
kubectl create rolebinding default-edit-rb --clusterrole=edit --serviceaccount=default:default --namespace=default
Then the complete example:
apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      # Already true by default, but I prefer to make it explicit.
      automountServiceAccountToken: true
      initContainers:
      - name: artifactory-snapshot
        # You need to use a docker image which contains kubectl
        image: gcr.io/cloud-builders/kubectl:latest
        command:
        - sh
        - -c
        # the "--dry-run -o yaml | kubectl apply -f -" is to make the command idempotent
        - kubectl create configmap test-config --from-literal=artifactorySnapshotUrl=http://artifactory.com/some/url --dry-run -o yaml | kubectl apply -f -
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        - name: in_artifactoryUrl
          valueFrom:
            configMapKeyRef:
              name: test-config
              key: artifactorySnapshotUrl
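Once the job has run, you can confirm that the init container actually created the ConfigMap (assuming everything lives in the default namespace):
kubectl get configmap test-config -o jsonpath='{.data.artifactorySnapshotUrl}'
which should print the Artifactory URL.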
First of all, kubectl is a binary. It was downloaded onto your machine before you could use the command, but in your pod the kubectl binary doesn't exist. So you can't use the kubectl command from a busybox image.
Furthermore, kubectl uses credentials saved on your machine (probably under the ~/.kube path). So if you try to use kubectl from inside an image, it will fail because of the missing credentials.
For your scenario, I suggest the same as @ccshih: use volume sharing.
Here is the official doc about volume sharing between an init container and a container.
The YAML used there is:
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
Here the init container saves a file in the volume, and later the file is available inside the main container. Try the tutorial yourself for a better understanding.
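As a quick sanity check once the pod from that example is running (pod name init-demo as in the manifest above), the file fetched by the init container should now be served from the shared volume:
kubectl exec init-demo -- cat /usr/share/nginx/html/index.html
which should print the HTML that wget downloaded into /work-dir.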

How does argument passing in Kubernetes work?

A problem:
Docker arguments pass from the command line:
docker run -it -p 8080:8080 joethecoder2/spring-boot-web -Dcassandra_ip=127.0.0.1 -Dcassandra_port=9042
However, Kubernetes pod arguments will not pass from the singlePod.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: spring-boot-web-demo
  labels:
    purpose: demonstrate-spring-boot-web
spec:
  containers:
  - name: spring-boot-web
    image: docker.io/joethecoder2/spring-boot-web
    env: ["name": "-Dcassandra_ip", "value": "127.0.0.1"]
    command: ["java","-jar", "spring-boot-web-0.0.1-SNAPSHOT.jar", "-D","cassandra_ip=127.0.0.1", "-D","cassandra_port=9042"]
    args: ["-Dcassandra_ip=127.0.0.1", "-Dcassandra_port=9042"]
  restartPolicy: OnFailure
when I do:
kubectl create -f ./singlePod.yaml
Why don't you pass the arguments as environment variables? It looks like you're using Spring Boot, so this shouldn't even require code changes, since Spring Boot picks up environment variables.
The following should work:
apiVersion: v1
kind: Pod
metadata:
  name: spring-boot-web-demo
  labels:
    purpose: demonstrate-spring-boot-web
spec:
  containers:
  - name: spring-boot-web
    image: docker.io/joethecoder2/spring-boot-web
    command: ["java","-jar", "spring-boot-web-0.0.1-SNAPSHOT.jar"]
    env:
    - name: cassandra_ip
      value: "127.0.0.1"
    - name: cassandra_port
      value: "9042"
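Two side notes, offered as general JVM and kubectl behaviour rather than anything specific to this image: if you really do want JVM system properties instead of environment variables, the -D flags must come before -jar, otherwise java passes them to the application as plain program arguments; and once the pod is running you can check that the variables are set. A sketch:
    # only if system properties are really required instead of env variables
    command: ["java", "-Dcassandra_ip=127.0.0.1", "-Dcassandra_port=9042", "-jar", "spring-boot-web-0.0.1-SNAPSHOT.jar"]
To verify the env-variable approach:
kubectl exec spring-boot-web-demo -- printenv cassandra_ip cassandra_port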