The Kubernetes container mounts a directory from the host, but fails to write its log files there. Can you tell me the reason?
The Kubernetes YAML looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  namespace: test
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: db
          image: postgres:11.0-alpine
          command:
            - "docker-entrypoint.sh"
            - "postgres"
            - "-c"
            - "logging_collector=on"
            - "-c"
            - "log_directory=/var/lib/postgresql/log"
          ports:
            - containerPort: 5432
              protocol: TCP
          volumeMounts:
            - name: log-fs
              mountPath: /var/lib/postgresql/log
      volumes:
        - name: log-fs
          hostPath:
            path: /var/log
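Two things worth checking here: hostPath points at the filesystem of the node (or minikube VM) that runs the pod, not the machine where kubectl runs, and /var/log on a node is typically writable only by root while the postgres server process runs as the postgres user. A minimal diagnostic sketch, with the pod name left as a placeholder:
# find the pod created by the db Deployment and the node it runs on
kubectl -n test get pods -o wide
# is the logging collector writing anything inside the container?
kubectl -n test exec <db-pod-name> -- ls -l /var/lib/postgresql/log
# on the node that runs the pod (shown here for minikube), check ownership of the mounted directory
minikube ssh -- ls -ld /var/log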
I have created a deployment and wanted to mount a host path into the container, but when I check inside the container I only see an empty folder.
Why am I getting this behaviour? What could be the cause?
EDIT: I am using Windows OS.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservicepod6
  labels:
    app: servicepod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: servicepod
  template:
    metadata:
      labels:
        app: servicepod
    spec:
      containers:
        - name: php
          image: php:7.2-apache
          command: ["/bin/sh", "-c"]
          args: ["service apache2 start; sleep infinity"]
          ports:
            - name: serviceport
              containerPort: 80
          volumeMounts:
            - mountPath: /var/www/html/
              name: hostvolume
      volumes:
        - name: hostvolume
          hostPath:
            path: /C/Users/utkarsh/pentesting/learnings/kubernetes/app/objectmanagement/deployments/src/*
EDIT FOR THE ANSWER:
I start minikube with:
minikube start --mount-string="$HOME/test/src/code/file:/data"
Then I changed the deployment file as below (showing only the volume part):
spec:
  volumes:
    - name: hostvolume
      hostPath:
        path: /C/Users/utkarsh/pentesting/learnings/kubernetes/app/deployments/src
  containers:
    - name: php
      image: php:7.2-apache
      command: ["/bin/sh", "-c"]
      args: ["service apache2 start; sleep infinity"]
      ports:
        - name: serviceport
          containerPort: 80
      volumeMounts:
        - name: hostvolume
          mountPath: /test/src/code/file
When I log into the pod and go to the directory (/test/src/code/file), I find it empty.
Let me know what I am missing.
After a detailed search and some trial and error, I found the way (this applies only to minikube).
First we need to mount the host folder into the minikube VM:
minikube mount src/:/var/www/html
Then we need to define both hostPath and mountPath as /var/www/html, because we have now mounted the host folder to that path.
volumes:
  - name: hostvolume
    hostPath:
      path: /var/www/html
containers:
  - name: php
    image: php:7.2-apache
    command: ["/bin/sh", "-c"]
    args: ["service apache2 start; sleep infinity"]
    workingDir: /var/www/html
    ports:
      - name: serviceport
        containerPort: 80
    volumeMounts:
      - name: hostvolume
        mountPath: /var/www/html
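Note that minikube mount runs in the foreground, so the mount only exists while that command keeps running in its own terminal. A quick check that the files are actually visible, assuming the deployment above (pod name is a placeholder):
# inside the minikube VM, the mounted host folder should show up
minikube ssh -- ls /var/www/html
# inside the pod, the same files should appear at the mountPath
kubectl exec <php-pod-name> -- ls /var/www/html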
In a multi-container pod I want to:
step 1: deploy the first container, a Postgres database, and create a schema
step 2: wait until Postgres has come up
step 3: then start the second container, Keycloak
I have written the deployment file below to run this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: idms
  namespace: default
  labels:
    app: idms
spec:
  replicas: 1
  selector:
    matchLabels:
      app: idms
  template:
    metadata:
      labels:
        app: idms
    spec:
      containers:
        - name: postgres
          image: registry.prod.srv.da.nsn-rdnet.net/edge/postgres:12.3-alpine
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          lifecycle:
            postStart:
              exec:
                command: ["/bin/bash", "-c", "sleep 5 && PGPASSWORD=$POSTGRES_PASSWORD psql $POSTGRES_DB -U $POSTGRES_USER -c 'CREATE SCHEMA IF NOT EXISTS keycloak;'"]
          envFrom:
            - configMapRef:
                name: postgres-config
        - name: keycloak
          image: quay.io/keycloak/keycloak:10.0.1
          env:
            - name: KEYCLOAK_USER
              value: "admin"
            - name: KEYCLOAK_PASSWORD
              value: "admin"
            - name: REALM
              value: "ntc"
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
            - name: DB_ADDR
              value: "localhost"
            - name: DB_PORT
              value: "5432"
            - name: DB_DATABASE
              value: "postgresdb"
            - name: DB_USER
              value: "xxxxxxxxx"
            - name: DB_PASSWORD
              value: "xxxxxxxxx"
            - name: DB_SCHEMA
              value: "keycloak"
            - name: KEYCLOAK_IMPORT
              value: "/opt/jboss/keycloak/startup/elements/realm.json"
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
            - mountPath: /opt/jboss/keycloak/startup/elements
              name: elements
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          readinessProbe:
            httpGet:
              path: /auth/realms/master
              port: 8080
      volumes:
        - name: elements
          configMap:
            name: keycloak-elements
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
But Keycloak is starting with the embedded H2 database instead of Postgres. And if I add an init container to the deployment to wait for Postgres with nslookup, like below:
initContainers:
  - name: init-postgres
    image: busybox
    command: ['sh', '-c', 'until nslookup postgres; do echo waiting for postgres; sleep 2; done;']
the pod gets stuck at "PodInitializing".
You forgot to add
- name: DB_VENDOR
  value: POSTGRES
to the Keycloak container's env in the deployment YAML; because of that, Keycloak falls back to its default H2 database mode.
YAML ref file: https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment/blob/main/keycload-deployment.yaml
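As a quick alternative to editing the manifest, the variable can be injected into the running Deployment's keycloak container with kubectl; a sketch, assuming the idms Deployment above:
kubectl set env deployment/idms DB_VENDOR=POSTGRES -c keycloak
# then watch the keycloak container logs to confirm it connects to Postgres instead of H2
kubectl logs deployment/idms -c keycloak -f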
kubectl logs nfs-685944f556-r2pjr
Serving /exports
Serving /
rpcinfo: can't contact rpcbind: : RPC: Unable to receive; errno = Connection refused
Starting rpcbind
exportfs: / does not support NFS export
NFS started
nfs.deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs
  labels:
    app: nfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs
  template:
    metadata:
      labels:
        app: nfs
    spec:
      containers:
        - name: nfs-server
          image: gcr.io/google_containers/volume-nfs:0.8
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /exports
              name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: nfs-data
What does exportfs refer to? How can I diagnose this further?
Within the nfs pod, I am not sure why it is exporting /:
[root@nfs-685944f556-r2pjr /]# cat /etc/exports
/exports *(rw,fsid=0,insecure,no_root_squash)
/ *(rw,fsid=0,insecure,no_root_squash)
As for why it is exporting /: that is done by the run_nfs.sh script, which runs with two arguments:
/bin/bash /usr/local/bin/run_nfs.sh /exports /
There is an issue with the gcr.io/google_containers/volume-nfs image, so it is suggested to use the jsafrane/nfs-data image instead.
See the corresponding GitHub discussion.
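One way to try that suggestion in place, assuming the nfs Deployment shown above (the container is named nfs-server):
# swap in the suggested image and wait for the rollout
kubectl set image deployment/nfs nfs-server=jsafrane/nfs-data
kubectl rollout status deployment/nfs
# then check whether the exportfs error is gone
kubectl logs deployment/nfs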
I am trying to mount config files from a hostPath into a Kubernetes container. This works using minikube and a VirtualBox shared folder, but I am unable to make it work on Linux.
I am making use of AWS EKS with the following architecture: https://aws.amazon.com/quickstart/architecture/amazon-eks/. I think my problem is that the files need to live on each of the EKS node instances.
Below is the Deployment file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openhim-core-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: openhim-core
  template:
    metadata:
      labels:
        component: openhim-core
    spec:
      volumes:
        - name: core-config
          hostPath:
            path: /var/config/openhim-core
      containers:
        - name: openhim-core
          image: jembi/openhim-core:5.rc
          ports:
            - containerPort: 8080
            - containerPort: 5000
            - containerPort: 5001
          volumeMounts:
            - name: core-config
              mountPath: /usr/src/app/config
          env:
            - name: NODE_ENV
              value: development
After much pain I found that I was trying to place the configuration on the Linux bastion host where I have access to kubectl, but in fact the configuration would have to exist on each of the EC2 instances in every availability zone.
The solution for me was to make use of an initContainer.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openhim-core-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: openhim-core
  template:
    metadata:
      labels:
        component: openhim-core
    spec:
      containers:
        - name: openhim-core
          image: jembi/openhim-core:5
          ports:
            - containerPort: 8080
            - containerPort: 5000
            - containerPort: 5001
          volumeMounts:
            - name: core-config
              mountPath: /usr/src/app/config
          env:
            - name: NODE_ENV
              value: development
      initContainers:
        - name: install
          image: busybox
          command:
            - wget
            - "-O"
            - "/usr/src/app/config/development.json"
            - https://s3.eu-central-1.amazonaws.com/../development.json
          volumeMounts:
            - name: core-config
              mountPath: "/usr/src/app/config"
      volumes:
        - name: core-config
          emptyDir: {}
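To verify that the init container actually fetched the file into the shared emptyDir, something along these lines should show it from the main container (pod name is a placeholder):
kubectl get pods -l component=openhim-core
kubectl exec <openhim-core-pod-name> -- ls -l /usr/src/app/config
kubectl exec <openhim-core-pod-name> -- cat /usr/src/app/config/development.json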
Is there a way to deploy this Docker image to our Kubernetes cluster?
I have been trying to add it with the YAML file below, but when I run the status command it says the environment is not set up.
What I basically tried to do is convert the following docker run command into a Kubernetes deployment file:
docker run -d -it --privileged=true --net=host --name=Db2wh -v /mnt/clusterfs:/mnt/bludata0 -v /mnt/clusterfs:/mnt/blumeta0 store/ibmcorp/db2wh_ee:v2.10.0-db2wh-ppcle
Can you please help me?
# testreplicaset.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: db2whce
  name: db2whce
spec:
  # modify replicas according to your case
  replicas: 1
  selector:
    matchLabels:
      app: db2whce
  template:
    metadata:
      labels:
        app: db2whce
    spec:
      containers:
        - name: db2whce
          image: store/ibmcorp/db2wh_ee:v2.10.0-db2wh-linux
          ports:
            - containerPort: 8443
            - containerPort: 389
            - containerPort: 50022
            - containerPort: 50001
            - containerPort: 50000
            - containerPort: 9929
            - containerPort: 9300
            - containerPort: 8998
            - containerPort: 5000
            - containerPort: 22
          args:
            - "--privileged=true"
            - "--net=host"
            - "--name=Db2wh"
            - "-v /mnt/clusterfs:/mnt/bludata0"
            - "-v /mnt/clusterfs:/mnt/blumeta0"
          resources:
            requests:
              cpu: 3
              memory: 14Gi
          volumeMounts:
            - mountPath: /mnt/bludata0
              name: db2wh-pvc
            - mountPath: /mnt/clusterfs
              name: db2wh-pvc
      volumes:
        - name: db2wh-pvc
          persistentVolumeClaim:
            claimName: db2wh-pvc
      imagePullSecrets:
        - name: dockerstore
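For what it's worth, docker run flags such as --privileged, --net=host and -v are not container args (args are handed to the image's entrypoint); they correspond to pod-spec fields. A rough sketch of that mapping, reusing the image and PVC from the manifest above:
spec:
  template:
    spec:
      hostNetwork: true              # --net=host
      containers:
        - name: db2whce              # --name
          image: store/ibmcorp/db2wh_ee:v2.10.0-db2wh-linux
          securityContext:
            privileged: true         # --privileged=true
          volumeMounts:              # -v /mnt/clusterfs:/mnt/bludata0 and /mnt/clusterfs:/mnt/blumeta0
            - name: db2wh-pvc
              mountPath: /mnt/bludata0
            - name: db2wh-pvc
              mountPath: /mnt/blumeta0
      volumes:
        - name: db2wh-pvc            # backed by the existing claim rather than a host directory
          persistentVolumeClaim:
            claimName: db2wh-pvc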