Deploy a Pod in Kubernetes and start a Python file

apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    app: test-pod
spec:
  containers:
  - name: testserver
    image: test_server:2.5
    ports:
    - containerPort: 8080
    - containerPort: 5100
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: testserver
      mountPath: /app/test/csv
    # command: ["/bin/bash"]
    # args: ["-c", "python /app/api/Python_Rest.py"]
  - name: testdb
    image: lev_test_db:1.4
    ports:
    - containerPort: 1433
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: testdb
      mountPath: /var/opt/mssql/data
  volumes:
  - name: testserver
    hostPath:
      path: /usr/testhostpath/testserver
  - name: testdb
    hostPath:
      path: /usr/testhostpath/testdb
If I do it the way shown in the commented-out lines, Tomcat does not work properly because the Python server starts before Tomcat has come up.
I want to deploy the Tomcat container with a YAML file in a Kubernetes environment and, once Tomcat has started successfully, run the Python file. What should I do?

You can use the sleep command to delay the testserver start.
A slightly fancier solution can be:
command:
- "sleep"
- "100"
lifecycle:
  postStart:
    exec:
      command:
      - "sh"
      - "-c"
      - |
        python /app/api/Python_Rest.py
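A fixed sleep is fragile if Tomcat takes longer than expected. As a hedged alternative (not from the answer above, and assuming Tomcat listens on port 8080 inside the testserver container and the image ships wget; swap in curl otherwise), the postStart hook could poll the port before launching the script:
lifecycle:
  postStart:
    exec:
      command:
      - "sh"
      - "-c"
      - |
        # Poll Tomcat's HTTP port (assumed 8080) until it answers,
        # then start the Python REST server.
        until wget -q -O /dev/null http://localhost:8080/; do
          echo "waiting for tomcat"
          sleep 2
        done
        python /app/api/Python_Rest.py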

Related

Show Pod IP Address using environment variable

I want to display the pod IP address in an nginx pod. Currently I am using an init container to initialize the pod by writing to a volume.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox:1.28
    env:
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    command:
    - echo
    - $(POD_IP) >> /work-dir/index.html
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
This should work in theory, but the file redirect doesn't work and the mounted file in the nginx container is blank. There's probably an easier way to do this, but I'm curious why this doesn't work.
Nothing is changed except how the command is passed in the init container. In the exec (array) form there is no shell to interpret the >> redirection, so it is handed to echo as a literal argument and nothing is written to the file; wrapping the command in sh -c fixes that. See this for an explanation.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox:1.28
    env:
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    command:
    - 'sh'
    - '-c'
    - 'echo $(POD_IP) > /work-dir/index.html'
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
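To check the result, a hedged usage example (assuming the manifest is saved as init-demo.yaml):
# Create the pod and wait for the init container and nginx to come up
kubectl apply -f init-demo.yaml
kubectl wait --for=condition=Ready pod/init-demo
# The page served by nginx should now contain the pod IP
kubectl exec init-demo -c nginx -- cat /usr/share/nginx/html/index.html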

Mounting a volume results in an empty folder in Kubernetes minikube?

I have created a deployment and wanted to mount a host path into the container, but when I check the container I see only an empty folder.
Why is this happening? What could be the cause?
EDIT: I am using Windows OS.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservicepod6
  labels:
    app: servicepod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: servicepod
  template:
    metadata:
      labels:
        app: servicepod
    spec:
      containers:
      - name: php
        image: php:7.2-apache
        command: ["/bin/sh", "-c"]
        args: ["service apache2 start; sleep infinity"]
        ports:
        - name: serviceport
          containerPort: 80
        volumeMounts:
        - mountPath: /var/www/html/
          name: hostvolume
      volumes:
      - name: hostvolume
        hostPath:
          path: /C/Users/utkarsh/pentesting/learnings/kubernetes/app/objectmanagement/deployments/src/*
EDIT FOR THE ANSWER:
I start minikube with minikube start --mount-string="$HOME/test/src/code/file:/data"
Then I changed the deployment file as below (showing only the volume part):
spec:
  volumes:
  - name: hostvolume
    hostPath:
      path: /C/Users/utkarsh/pentesting/learnings/kubernetes/app/deployments/src
  containers:
  - name: php
    image: php:7.2-apache
    command: ["/bin/sh", "-c"]
    args: ["service apache2 start; sleep infinity"]
    ports:
    - name: serviceport
      containerPort: 80
    volumeMounts:
    - name: hostvolume
      mountPath: /test/src/code/file
When I log into the pod and go to the directory (/test/src/code/file), I find it empty.
Let me know what I am missing.
After a detailed search and some trial and error, I found the way (minikube only).
First, mount the host folder into the minikube node:
minikube mount src/:/var/www/html
Then define both hostPath and mountPath as /var/www/html, because the host folder is now mounted at that path inside the node:
volumes:
- name: hostvolume
  hostPath:
    path: /var/www/html
containers:
- name: php
  image: php:7.2-apache
  command: ["/bin/sh", "-c"]
  args: ["service apache2 start; sleep infinity"]
  workingDir: /var/www/html
  ports:
  - name: serviceport
    containerPort: 80
  volumeMounts:
  - name: hostvolume
    mountPath: /var/www/html
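To verify the mount, a hedged usage example (the deployment name and label come from the question; minikube mount must stay running in its own terminal, and kubectl exec on a deployment needs a reasonably recent kubectl, otherwise exec into the pod name from kubectl get pods):
# Terminal 1: keep the host folder mounted into the minikube node
minikube mount src/:/var/www/html

# Terminal 2: check that the files show up inside the container
kubectl get pods -l app=servicepod
kubectl exec deploy/myservicepod6 -- ls -la /var/www/html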

How to run Keycloak as the second container after the first container (a Postgres database) starts up, in a multi-container pod in Kubernetes?

In a multi-container pod:
Step 1: Deploy the first container, a Postgres database, and create a schema.
Step 2: Wait until Postgres comes up.
Step 3: Then start the second container, Keycloak.
I have written the deployment file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: idms
  namespace: default
  labels:
    app: idms
spec:
  replicas: 1
  selector:
    matchLabels:
      app: idms
  template:
    metadata:
      labels:
        app: idms
    spec:
      containers:
      - name: postgres
        image: registry.prod.srv.da.nsn-rdnet.net/edge/postgres:12.3-alpine
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 5432
        lifecycle:
          postStart:
            exec:
              command: ["/bin/bash","-c","sleep 5 && PGPASSWORD=$POSTGRES_PASSWORD psql $POSTGRES_DB -U $POSTGRES_USER -c \'CREATE SCHEMA IF NOT EXISTS keycloak;\'"]
        envFrom:
        - configMapRef:
            name: postgres-config
      - name: keycloak
        image: quay.io/keycloak/keycloak:10.0.1
        env:
        - name: KEYCLOAK_USER
          value: "admin"
        - name: KEYCLOAK_PASSWORD
          value: "admin"
        - name: REALM
          value: "ntc"
        - name: PROXY_ADDRESS_FORWARDING
          value: "true"
        - name: DB_ADDR
          value: "localhost"
        - name: DB_PORT
          value: "5432"
        - name: DB_DATABASE
          value: "postgresdb"
        - name: DB_USER
          value: "xxxxxxxxx"
        - name: DB_PASSWORD
          value: "xxxxxxxxx"
        - name: DB_SCHEMA
          value: "keycloak"
        - name: KEYCLOAK_IMPORT
          value: "/opt/jboss/keycloak/startup/elements/realm.json"
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgredb
        - mountPath: /opt/jboss/keycloak/startup/elements
          name: elements
        ports:
        - name: http
          containerPort: 8080
        - name: https
          containerPort: 8443
        readinessProbe:
          httpGet:
            path: /auth/realms/master
            port: 8080
      volumes:
      - name: elements
        configMap:
          name: keycloak-elements
      - name: postgredb
        persistentVolumeClaim:
          claimName: postgres-pv-claim
But Keycloak is starting with the embedded H2 database instead of Postgres. If I use an init container to nslookup Postgres in the deployment file, like below:
initContainers:
- name: init-postgres
  image: busybox
  command: ['sh', '-c', 'until nslookup postgres; do echo waiting for postgres; sleep 2; done;']
the pod gets stuck at "PodInitializing".
You forgot to add
- name: DB_VENDOR
  value: POSTGRES
to the Keycloak container's env in the deployment YAML file; without it, Keycloak defaults to the embedded H2 database.
Reference YAML file: https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment/blob/main/keycload-deployment.yaml
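As a side note on the init container: init containers must all finish before any app container starts, and here Postgres runs as an app container in the same pod (and there may be no Service named postgres to resolve), so the nslookup loop can never succeed, which is why the pod stays in PodInitializing. A minimal sketch of where the missing variable goes, using the env block from the question (only the first few entries shown):
- name: keycloak
  image: quay.io/keycloak/keycloak:10.0.1
  env:
  # Without DB_VENDOR the image falls back to the embedded H2 database,
  # regardless of the DB_* settings below.
  - name: DB_VENDOR
    value: "POSTGRES"
  - name: DB_ADDR
    value: "localhost"
  - name: DB_PORT
    value: "5432"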

Kubernetes NFS volume with dynamic path

I am trying to mount my application's logs directory to NFS dynamically, with the node name included in the path.
No success so far. I tried the following:
kind: Pod
apiVersion: v1
metadata:
  name: nfs-in-a-pod
spec:
  containers:
  - name: app
    image: alpine
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName
    volumeMounts:
    - name: nfs-volume
      mountPath: /var/nfs
      subPath: /$(NODE_NAME)
    command: ["/bin/sh"]
    args: ["-c", "sleep 500000"]
  volumes:
  - name: nfs-volume
    nfs:
      server: ip_adress_here
      path: /mnt/events
I think instead of subPath you should use subPathExpr, as mentioned in the documentation.
Use the subPathExpr field to construct subPath directory names from Downward API environment variables. This feature requires the VolumeSubpathEnvExpansion feature gate to be enabled. It is enabled by default starting with Kubernetes 1.15. The subPath and subPathExpr properties are mutually exclusive.
In this example, a Pod uses subPathExpr to create a directory pod1 within the hostPath volume /var/log/pods, using the pod name from the Downward API. The host directory /var/log/pods/pod1 is mounted at /logs in the container.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: container1
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    image: busybox
    command: [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ]
    volumeMounts:
    - name: workdir1
      mountPath: /logs
      subPathExpr: $(POD_NAME)
  restartPolicy: Never
  volumes:
  - name: workdir1
    hostPath:
      path: /var/log/pods
Hope that's it.
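Applied to the NFS volume from the question, a hedged sketch (only the changed parts are shown; the NODE_NAME env var and the NFS server placeholder come from the original manifest):
    volumeMounts:
    - name: nfs-volume
      mountPath: /var/nfs
      # subPathExpr replaces subPath and expands the env var per node
      subPathExpr: $(NODE_NAME)
  volumes:
  - name: nfs-volume
    nfs:
      server: ip_adress_here
      path: /mnt/events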

Getting Consul and Registrator to work in Kubernetes

I'm trying to use Consul with Registrator in GCE & K8s. Everything launches fine except Registrator.
Here is my deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  name: consul
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        service: consul
    spec:
      restartPolicy: Always
      containers:
      - name: consul
        image: eu.gcr.io/xxx/consul
        ports:
        - containerPort: 8300
          protocol: TCP
        - containerPort: 8400
          protocol: TCP
        - containerPort: 8500
          protocol: TCP
        - containerPort: 53
          protocol: UDP
        env:
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - -server
        - -bootstrap
        - -advertise=$(MY_POD_IP)
      - name: registrator
        args:
        - -internal
        - -ip=$(MY_POD_IP)
        - consul://localhost:8500
        env:
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        image: eu.gcr.io/xxx/registrator
        volumeMounts:
        - mountPath: /tmp/docker.sock
          name: registrator-claim0
      volumes:
      - name: registrator-claim0
        persistentVolumeClaim:
          claimName: registrator-claim0
status: {}
Here are the log outputs (Consul and Registrator):
In docker-compose everything works fine, but I haven't got my head completely around K8s and GCE. Thanks for the help!
I have switched to Linkerd, which works very well with K8s.
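Not part of the accepted workaround, but one hedged observation on the manifest above: Registrator expects the Docker socket at /tmp/docker.sock, and here that path is backed by a PersistentVolumeClaim rather than the node's socket, so Registrator likely cannot see any containers. A sketch of mounting the node's socket instead (assuming Docker-based nodes; the env block with MY_POD_IP is unchanged from the question and omitted here):
      - name: registrator
        image: eu.gcr.io/xxx/registrator
        args:
        - -internal
        - -ip=$(MY_POD_IP)
        - consul://localhost:8500
        volumeMounts:
        # Mount the node's Docker socket where Registrator looks for it
        - name: docker-sock
          mountPath: /tmp/docker.sock
      volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock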