How does argument passing in Kubernetes work?

A problem:
Docker passes arguments from the command line:
docker run -it -p 8080:8080 joethecoder2/spring-boot-web -Dcassandra_ip=127.0.0.1 -Dcassandra_port=9042
However, the same arguments are not passed to the Kubernetes Pod defined in singlePod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: spring-boot-web-demo
  labels:
    purpose: demonstrate-spring-boot-web
spec:
  containers:
  - name: spring-boot-web
    image: docker.io/joethecoder2/spring-boot-web
    env: ["name": "-Dcassandra_ip", "value": "127.0.0.1"]
    command: ["java","-jar", "spring-boot-web-0.0.1-SNAPSHOT.jar", "-D","cassandra_ip=127.0.0.1", "-D","cassandra_port=9042"]
    args: ["-Dcassandra_ip=127.0.0.1", "-Dcassandra_port=9042"]
  restartPolicy: OnFailure
when I run:
kubectl create -f ./singlePod.yaml
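For context: in Kubernetes, command replaces the image's ENTRYPOINT and args replaces its CMD, and when both are set the args are appended after the command. A sketch of how the docker run flags above could be expressed (assuming the jar path used in the question, and noting that -D system properties must appear before -jar, or the JVM passes them on as plain program arguments):

```yaml
spec:
  containers:
  - name: spring-boot-web
    image: docker.io/joethecoder2/spring-boot-web
    # -D flags must precede -jar to be read as JVM system properties
    command: ["java",
              "-Dcassandra_ip=127.0.0.1",
              "-Dcassandra_port=9042",
              "-jar", "spring-boot-web-0.0.1-SNAPSHOT.jar"]
```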

Why don't you pass the arguments as environment variables? It looks like you're using Spring Boot, so this shouldn't even require code changes, since Spring Boot injects environment variables.
The following should work:
apiVersion: v1
kind: Pod
metadata:
  name: spring-boot-web-demo
  labels:
    purpose: demonstrate-spring-boot-web
spec:
  containers:
  - name: spring-boot-web
    image: docker.io/joethecoder2/spring-boot-web
    command: ["java","-jar", "spring-boot-web-0.0.1-SNAPSHOT.jar"]
    env:
    - name: cassandra_ip
      value: "127.0.0.1"
    - name: cassandra_port
      value: "9042"
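The reason no code change is needed: the application simply reads the variables from its environment (Spring Boot can bind an environment variable such as CASSANDRA_IP to the property cassandra.ip through relaxed binding). A language-neutral sketch of the same pattern in Python, using the variable names from the Pod spec:

```python
import os

# read the settings injected by the Pod's env: block; fall back to
# sensible defaults when running outside the cluster
cassandra_ip = os.environ.get("cassandra_ip", "127.0.0.1")
cassandra_port = int(os.environ.get("cassandra_port", "9042"))

print(cassandra_ip, cassandra_port)
```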

Related

Apache Flink Kubernetes operator - how to override the Docker entrypoint?

I tried to override the container entrypoint of a Flink application in a Dockerfile, but it looks like the Apache Flink Kubernetes operator ignores it.
The Dockerfile is the following:
FROM flink:1.14.2-scala_2.12-java11
ENV FLINK_HOME=/opt/flink
COPY custom-docker-entrypoint.sh /
RUN chmod a+x /custom-docker-entrypoint.sh
COPY --chown=flink:flink --from=build /target/*.jar /opt/flink/flink-web-upload/
ENTRYPOINT ["/custom-docker-entrypoint.sh"]
EXPOSE 6123 8081
CMD ["help"]
The definition of the FlinkDeployment uses the new image:
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: flink-example
  namespace: default
spec:
  image: "flink-example:0.1.0"
  #...
In the description of the pod
kubectl describe pod flink-example
I see the following output:
Containers:
  flink-main-container:
    Command:
      /docker-entrypoint.sh
I also tried to define the custom-docker-entrypoint.sh in the main container's command:
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: flink-example
  namespace: default
spec:
  flinkVersion: v1_14
  image: "flink-example:0.1.0"
  podTemplate:
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-template
    spec:
      containers:
      - name: flink-main-container
        # command: [ 'sh','-c','/custom-docker-entrypoint.sh' ]
        command: [ "/custom-docker-entrypoint.sh" ]
Thank you.
You can override it via:
flinkConfiguration:
  kubernetes.entry.path: "/custom-docker-entrypoint.sh"
The Operator (by default) uses Flink's native Kubernetes integration. See: https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#kubernetes-entry-path
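Folded into the FlinkDeployment manifest from the question, the override would look roughly like this (a sketch reusing the names above; kubernetes.entry.path is the documented Flink option):

```yaml
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: flink-example
  namespace: default
spec:
  flinkVersion: v1_14
  image: "flink-example:0.1.0"
  flinkConfiguration:
    # tell Flink's native Kubernetes integration which entrypoint to invoke
    kubernetes.entry.path: "/custom-docker-entrypoint.sh"
```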

Dockerfile and Kubernetes Jobs (assistance needed)

I have a Dockerfile based on the postgres:12 image, which I modified with some DDL scripts. I can build the image and run the container with docker run, but how can I use a Kubernetes Job to run the built image? I don't have much experience with k8s.
This is my Dockerfile; I build it with:
docker build . -t dockerdb
FROM postgres:12
ENV POSTGRES_PASSWORD xyz#123123!233
ENV POSTGRES_DB test
ENV POSTGRES_USER test
COPY ./Scripts /docker-entrypoint-initdb.d/
How can I customize the Job below for this requirement?
apiVersion: batch/v1
kind: Job
metadata:
  name: job-1
spec:
  template:
    metadata:
      name: job-1
    spec:
      containers:
      - name: postgres
        image: gcr.io/project/pg_12:dev
        command:
        - /bin/sh
        - -c
        - "not sure what command should i give in last line"
I'm not sure how you are running the Docker image.
If you run your Docker image without passing any command, you can run the image directly in a Job:
docker run <imagename>
Once your Docker image is built, you can run it directly; the Job will execute without passing any command:
apiVersion: batch/v1
kind: Job
metadata:
  name: job-1
spec:
  template:
    metadata:
      name: job-1
    spec:
      containers:
      - name: postgres
        image: gcr.io/project/pg_12:dev
      # a Job's pod template must set restartPolicy to Never or OnFailure
      restartPolicy: Never
If you want to pass a command or arguments, you can do so as follows:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: <CHANGE IMAGE URL>
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
Just to note: the template above is for a CronJob; a CronJob runs on a specific schedule.

Multi-container pod with sleep command (k8s)

I am trying out mock exams on Udemy and have created a multi-container pod, but the exam result says the command is not set correctly on container test2. I am not able to identify the issue.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: multi-pod
  name: multi-pod
spec:
  containers:
  - image: nginx
    name: test1
    env:
    - name: type
      value: demo1
  - image: busybox
    name: test2
    env:
    - name: type
       value: demo2
    command: ["sleep", "4800"]
An easy way to do this is to use an imperative kubectl command to generate the YAML for a single container, then edit the YAML to add the other container:
kubectl run nginx --image=nginx --command -oyaml --dry-run=client -- sh -c 'sleep 1d' > nginx.yaml
In this example sleep 1d is the command.
The generated YAML looks like this:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - command:
    - sh
    - -c
    - sleep 1d
    image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
Your issue is with your YAML, in line 19.
Please keep in mind that YAML syntax is very sensitive to spaces and tabs.
Your issue:
- image: busybox
  name: test2
  env:
  - name: type
     value: demo2 ### Issue is in this line, you have one extra space
  command: ["sleep", "4800"]
Solution:
Remove the space, so it looks like this:
env:
- name: type
  value: demo2
For validating YAML you can use an external validator like yamllint.
If you paste your YAML into the mentioned validator, you will receive this error:
(<unknown>): mapping values are not allowed in this context at line 19 column 14
After removing the extra space you will get:
Valid YAML!
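The same check can be scripted; a minimal sketch assuming the third-party PyYAML package (pip install pyyaml) is available:

```python
import yaml  # third-party: pip install pyyaml

# the snippet from the question, with the extra space before "value"
broken = """\
env:
- name: type
   value: demo2
command: ["sleep", "4800"]
"""

try:
    yaml.safe_load(broken)
    print("Valid YAML!")
except yaml.YAMLError as exc:
    print("Invalid YAML:", exc)
```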

k8s: configMap does not work in deployment

We recently ran into an issue using environment variables inside a container.
OS: windows 10 pro
k8s cluster: minikube
k8s version: 1.18.3
1. The way that doesn't work, though it's the preferred way for us
Here is the deployment.yaml using 'envFrom':
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  labels:
    app.kubernetes.io/name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: db
  template:
    metadata:
      labels:
        app.kubernetes.io/name: db
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
      - name: db
        image: "postgres:9.4"
        ports:
        - name: http
          containerPort: 5432
          protocol: TCP
        envFrom:
        - configMapRef:
            name: db-configmap
here is db.properties:
POSTGRES_HOST_AUTH_METHOD=trust
step 1:
kubectl create configmap db-configmap ./db.properties
step 2:
kubectl apply -f ./deployment.yaml
step 3:
kubectl get pod
Running the above commands gives the following result:
db-8d7f7bcb9-7l788 0/1 CrashLoopBackOff 1 9s
That indicates the environment variable POSTGRES_HOST_AUTH_METHOD was not injected.
2. The way that works (though we can't work with this approach)
Here is the deployment.yaml using 'env':
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  labels:
    app.kubernetes.io/name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: db
  template:
    metadata:
      labels:
        app.kubernetes.io/name: db
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
      - name: db
        image: "postgres:9.4"
        ports:
        - name: http
          containerPort: 5432
          protocol: TCP
        env:
        - name: POSTGRES_HOST_AUTH_METHOD
          value: trust
step 1:
kubectl apply -f ./deployment.yaml
step 2:
kubectl get pod
Run the above command; you get the following result:
db-fc58f998d-nxgnn 1/1 Running 0 32s
The above indicates the environment variable is injected, so the db starts.
What did I do wrong in the first case?
Thank you in advance for the help.
Update:
Provide the configmap:
kubectl describe configmap db-configmap
Name: db-configmap
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
db.properties:
----
POSTGRES_HOST_AUTH_METHOD=trust
For creating the ConfigMap for use case 1, please use the command below:
kubectl create configmap db-configmap --from-env-file db.properties
Are you missing the key? (See "key:" below.) You also need to provide the name of the env variable; people usually reuse the key name, but you don't have to. Below I've repeated the same value (POSTGRES_HOST_AUTH_METHOD) as both the environment variable name and the key name in the ConfigMap.
# start env .. where we add environment variables
env:
# Define the environment variable
- name: POSTGRES_HOST_AUTH_METHOD
  #value: "UseHardCodedValueToDebugSometimes"
  valueFrom:
    configMapKeyRef:
      # The ConfigMap containing the value you want to assign to the environment variable (above "name:")
      name: db-configmap
      # Specify the key associated with the value
      key: POSTGRES_HOST_AUTH_METHOD
My example (using your values) comes from this generic example:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data
pods/pod-single-configmap-env-variable.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    env:
    # Define the environment variable
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
          name: special-config
          # Specify the key associated with the value
          key: special.how
  restartPolicy: Never
PS
You can use "describe" to take a look at your ConfigMap after you think you have set it up correctly:
kubectl describe configmap db-configmap --namespace=IfNotDefaultNameSpaceHere
See, when you do it the way you described:
deployment# kubectl exec -it db-7785cdd5d8-6cstw -- bash
root@db-7785cdd5d8-6cstw:/# env | grep -i TRUST
db.properties=POSTGRES_HOST_AUTH_METHOD=trust
the env var that gets set is not POSTGRES_HOST_AUTH_METHOD; the filename ends up as the variable name. Create the ConfigMap via
kubectl create cm db-configmap --from-env-file db.properties
and it will actually put the env var POSTGRES_HOST_AUTH_METHOD in the pod.
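The difference between the two create modes can be sketched in a few lines; this simulates how kubectl builds the ConfigMap's data map from db.properties in each mode (hypothetical dict names, not kubectl's real internals):

```python
content = "POSTGRES_HOST_AUTH_METHOD=trust\n"

# --from-file db.properties: the file NAME becomes the key and the whole
# file body becomes the value, so envFrom injects a variable literally
# called "db.properties"
from_file = {"db.properties": content}

# --from-env-file db.properties: each KEY=VALUE line becomes its own
# entry, which is what envFrom expects
from_env_file = dict(
    line.split("=", 1)
    for line in content.strip().splitlines()
    if line and not line.startswith("#")
)

print(from_env_file)  # {'POSTGRES_HOST_AUTH_METHOD': 'trust'}
```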

Get value of configMap from mountPath

I created configmap this way.
kubectl create configmap some-config --from-literal=key4=value1
After that I created a pod which mounts this ConfigMap at /some/path.
I connect to this pod this way:
k exec -it nginx-configmap -- /bin/sh
I found the folder /some/path, but I could not get the value of key4.
If you refer to your ConfigMap in your Pod this way:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: config-volume
  volumes:
  - name: config-volume
    configMap:
      name: some-config
it will be available in your Pod as a file /var/www/html/key4 with the content of value1.
If you rather want it to be available as an environment variable you need to refer to it this way:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    envFrom:
    - configMapRef:
        name: some-config
As you can see, you don't need any volumes or volume mounts for it.
Once you connect to such Pod by running:
kubectl exec -ti mypod -- /bin/bash
You will see that your environment variable is defined:
root@mypod:/# echo $key4
value1
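The two delivery mechanisms can be contrasted in a short sketch: a volume mount turns each ConfigMap key into a file under the mount path, while envFrom turns each key into an environment variable. A simulation using a temp dir in place of the real mount, with the names from the example above:

```python
import os
import tempfile
from pathlib import Path

config = {"key4": "value1"}  # the some-config ConfigMap data

# volume mount: each key becomes a file whose content is the value
mount_path = Path(tempfile.mkdtemp())  # stands in for /var/www/html
for key, value in config.items():
    (mount_path / key).write_text(value)
print((mount_path / "key4").read_text())  # value1

# envFrom: each key becomes an environment variable
os.environ.update(config)
print(os.environ["key4"])  # value1
```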