Running consul agent with -config-dir causes a "not found" error - Kubernetes

In our docker-compose.yaml we have:
version: "3.5"
services:
  consul-server:
    image: consul:latest
    command: "agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -config-dir=./usr/src/app/consul.d/"
    volumes:
      - ./consul.d/:/usr/src/app/consul.d
In the consul.d folder we have statically defined our services. It works fine with docker-compose.
But when we try to run it on Kubernetes with this ConfigMap:
ahmad@ahmad-pc:~$ kubectl describe configmap consul-config -n staging
Name: consul-config
Namespace: staging
Labels: <none>
Annotations: <none>
Data
====
trip.json:
----
... omitted for clarity ...
and consul.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: consul-server
  name: consul-server
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: consul-server
  template:
    metadata:
      labels:
        io.kompose.service: consul-server
    spec:
      containers:
        - image: quay.io/bitnami/consul:latest
          name: consul-server
          ports:
            - containerPort: 8500
          #env:
          #- name: CONSUL_CONF_DIR # Consul does not seem to respect this env variable
          #  value: /consul/conf/
          volumeMounts:
            - name: config-volume
              mountPath: /consul/conf/
          command: ["agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -config-dir=/consul/conf/"]
      volumes:
        - name: config-volume
          configMap:
            name: consul-config
I got the following error:
ahmad@ahmad-pc:~$ kubectl describe pod consul-server-7489787fc7-8qzhh -n staging
...
Error: failed to start container "consul-server": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -config-dir=/consul/conf/\":
stat agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -config-dir=/consul/conf/:
no such file or directory": unknown
But when I run the container without the command: agent ... line and open a bash shell inside it, I can list the files mounted in the right place.
Why does Consul give me a "not found" error even though that folder exists?

To execute a command in the pod you have to define the command in the command field and the arguments for that command in the args field. The command field is the same as ENTRYPOINT in Docker, and the args field is the same as CMD. Your manifest fails because command: ["agent -server ..."] puts the entire string into a single array element, so the runtime looks for an executable literally named agent -server -bootstrap ... -config-dir=/consul/conf/ and cannot stat it.
In this case, define /bin/sh as the ENTRYPOINT and "-c", "consul agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -data-dir=/bitnami/consul/data/ -config-dir=/consul/conf/" as the arguments, so the shell executes consul agent ...:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: consul-server
  name: consul-server
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: consul-server
  template:
    metadata:
      labels:
        io.kompose.service: consul-server
    spec:
      containers:
        - image: quay.io/bitnami/consul:latest
          name: consul-server
          ports:
            - containerPort: 8500
          env:
            - name: CONSUL_CONF_DIR # Consul does not seem to respect this env variable
              value: /consul/conf/
          volumeMounts:
            - name: config-volume
              mountPath: /consul/conf/
          command: ["/bin/sh"]
          args: ["-c", "consul agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -data-dir=/bitnami/consul/data/ -config-dir=/consul/conf/"]
      volumes:
        - name: config-volume
          configMap:
            name: consul-config
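For reference, the mapping in isolation looks like the sketch below (the container name and image are illustrative, not taken from the question):

# Dockerfile:   ENTRYPOINT ["/bin/sh"]   CMD ["-c", "echo hello"]
# Equivalent Kubernetes container spec:
containers:
  - name: demo                    # hypothetical container name
    image: busybox:1.36           # any small image with a shell
    command: ["/bin/sh"]          # overrides the image ENTRYPOINT
    args: ["-c", "echo hello"]    # overrides the image CMD

If command is omitted, the image's own ENTRYPOINT is kept and args only replaces CMD, which is why bare arguments can work on images that already ship a launcher.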

Related

capsh command inside a Kubernetes container

The pod is in the Running state, but logging into the container and running capsh --print gives this error:
sh: capsh: not found
Running the same image as a Docker container with --cap-add SYS_ADMIN or --privileged gives the desired output.
What changes to the deployment, or what extra permissions, are needed for this to work inside a k8s container?
Deployment:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: sample-deployment
  namespace: sample
  labels:
    app: sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample
  template:
    metadata:
      labels:
        app: sample
    spec:
      containers:
        - name: sample
          image: alpine:3.17
          command:
            - sh
            - -c
            - while true; do echo Hello World; sleep 10; done
          env:
            - name: NFS_EXPORT_0
              value: /var/opt/backup
            - name: NFS_LOG_LEVEL
              value: DEBUG
          volumeMounts:
            - name: backup
              mountPath: /var/opt/backup
          securityContext:
            capabilities:
              add: ["SYS_ADMIN"]
      volumes:
        - name: backup
          persistentVolumeClaim:
            claimName: sample-pvc
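For comparison, the container-level counterpart of docker run --privileged is sketched below. Note, though, that sh: capsh: not found means the shell cannot locate the binary at all, so the more likely issue is that the alpine image simply does not ship capsh (it typically comes from alpine's libcap package, an assumption worth verifying with apk add libcap) rather than missing permissions:

spec:
  containers:
    - name: sample
      image: alpine:3.17
      securityContext:
        privileged: true   # counterpart of docker run --privileged; much broader than adding SYS_ADMIN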

NFS mount within K8s pods failing

I am trying to get an NFS share mounted into a k8s pod, but it is failing with the error below:
mount.nfs: rpc.statd is not running but is required for remote locking. mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
I tried to start rpcbind using the CMD command in the Docker container; that also did not work.
My deployment YAML is below:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-client-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-provisioner
            - name: NFS_SERVER
              value: NFS SERVER PATH
            - name: NFS_PATH
              value: /filesharepath
      volumes:
        - name: nfs-client-root
          nfs:
            server: <NFS IP>
            path: /filesharepath
I saw on GitHub that there is an identical issue, which says rpcbind needs to be running on the base system:
https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/issues/224
Please assist.
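The error message itself suggests one workaround: keep locks local with -o nolock. Inline nfs: pod volumes like the one above accept no mount options, but a PersistentVolume does via mountOptions; a minimal sketch, with a hypothetical name and size:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-client-root-pv      # hypothetical name
spec:
  capacity:
    storage: 10Gi               # hypothetical size
  accessModes:
    - ReadWriteMany
  mountOptions:
    - nolock                    # the '-o nolock' the error message suggests
  nfs:
    server: <NFS IP>
    path: /filesharepath

The alternative, per the linked issue, is to make sure rpcbind/statd are running on the node itself.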

K8s Docker container mounts the host, but fails to output log files

The k8s Docker container mounts the host but fails to output log files to the host. Can you tell me the reason?
The Kubernetes YAML is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  namespace: test
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: db
          image: postgres:11.0-alpine
          command:
            - "docker-entrypoint.sh"
            - "postgres"
            - "-c"
            - "logging_collector=on"
            - "-c"
            - "log_directory=/var/lib/postgresql/log"
          ports:
            - containerPort: 5432
              protocol: TCP
          volumeMounts:
            - name: log-fs
              mountPath: /var/lib/postgresql/log
      volumes:
        - name: log-fs
          hostPath:
            path: /var/log
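One likely culprit is permissions: the postgres entrypoint drops to the postgres user, while a hostPath directory such as /var/log is normally owned by root, so the logging collector cannot create files in the mounted directory. Assuming that theory holds, a sketch that prepares the directory with an init container (uid/gid 70 is the postgres user in the alpine image, an assumption to verify):

initContainers:
  - name: fix-log-perms
    image: busybox:1.36
    # make the host directory writable for the postgres user before the db starts
    command: ["sh", "-c", "chown 70:70 /var/lib/postgresql/log"]
    volumeMounts:
      - name: log-fs
        mountPath: /var/lib/postgresql/log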

How to run a command in a deployment?

I have the following deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: dev
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: keycloak
          image: "hub.svc.databaker.io/service/keycloak:0.1.8"
          imagePullPolicy: "IfNotPresent"
          command:
            - "-Dkeycloak.migration.action=import -Dkeycloak.migration.provider=dir -Dkeycloak.profile.feature.upload_scripts=enabled -Dkeycloak.migration.dir=/opt/jboss/keycloak/import-dir -Dkeycloak.migration.strategy=OVERWRITE_EXISTING"
It cannot be deployed. The error message is:
CrashLoopBackOff: back-off 5m0s restarting failed container=keycloak pod=keycloak-86c677456b-tqk6w_dev(6fb23dcc-9fe8-42fb-98d0-619a93f74da1)
I guess it is because of the command.
I would like to run a command analogous to Docker:
keycloak:
  networks:
    - auth
  image: hub.svc.databaker.io/service/keycloak:0.1.7
  container_name: keycloak
  command:
    - "-Dkeycloak.migration.action=import -Dkeycloak.migration.provider=dir -Dkeycloak.profile.feature.upload_scripts=enabled -Dkeycloak.migration.dir=/opt/jboss/keycloak/import-dir -Dkeycloak.migration.strategy=OVERWRITE_EXISTING"
How do I run a command in a K8s deployment?
Update
I have changed the deployment to:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: keycloak
          image: "hub.svc.databaker.io/service/keycloak:0.1.8"
          imagePullPolicy: "IfNotPresent"
          args:
            - "-Dkeycloak.migration.action=import"
            - "-Dkeycloak.migration.provider=dir"
            - "-Dkeycloak.profile.feature.upload_scripts=enabled"
            - "-Dkeycloak.migration.dir=/opt/jboss/keycloak/import-dir"
            - "-Dkeycloak.migration.strategy=OVERWRITE_EXISTING"
and receive the error:
RunContainerError: failed to start container "012966e22a00e23a7d1f2d5a12e19f6aa9fcb390293f806e840bc007a733c1b0": Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"-Dkeycloak.migration.action=import -Dkeycloak.migration.provider=dir -Dkeycloak.profile.feature.upload_scripts=enabled -Dkeycloak.migration.dir=/opt/jboss/keycloak/import-dir -Dkeycloak.migration.strategy=OVERWRITE_EXISTING\": stat -Dkeycloak.migration.action=import -Dkeycloak.migration.provider=dir -Dkeycloak.profile.feature.upload_scripts=enabled -Dkeycloak.migration.dir=/opt/jboss/keycloak/import-dir -Dkeycloak.migration.strategy=OVERWRITE_EXISTING: no such file or directory": unknown
If your container image already has an entrypoint, you can provide the arguments alone; this is done with args. To define or override the entrypoint, use command. Your update still fails because nothing supplies an executable: without a command, args only replaces the image's CMD, and since this image evidently provides no usable ENTRYPOINT, the runtime tries to execute the flag string itself, hence the stat ... no such file or directory error. Supply the launcher script as command and keep the flags in args; in the Deployment's container spec this looks like:
containers:
  - name: keycloak
    image: hub.svc.databaker.io/service/keycloak:0.1.7
    command: ["./standalone.sh"]
    args:
      - "-Dkeycloak.migration.action=import"
      - "-Dkeycloak.migration.provider=dir"
      - "-Dkeycloak.profile.feature.upload_scripts=enabled"
      - "-Dkeycloak.migration.dir=/opt/jboss/keycloak/import-dir"
      - "-Dkeycloak.migration.strategy=OVERWRITE_EXISTING"

Unable to mount a specific directory from a CouchDB pod in Kubernetes

Hi, I am trying to mount a directory from the pod where CouchDB is running. The directory is /opt/couchdb/data, and for mounting in Kubernetes I am using this config for the deployment.
apiVersion: v1
kind: Service
metadata:
  name: couchdb0-peer0org1
spec:
  ports:
    - port: 5984
      targetPort: 5984
  type: NodePort
  selector:
    app: couchdb0-peer0org1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: couchdb0-peer0org1
spec:
  selector:
    matchLabels:
      app: couchdb0-peer0org1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: couchdb0-peer0org1
    spec:
      containers:
        - image: hyperledger/fabric-couchdb
          imagePullPolicy: IfNotPresent
          name: couchdb0
          env:
            - name: COUCHDB_USER
              value: admin
            - name: COUCHDB_PASSWORD
              value: admin
          ports:
            - containerPort: 5984
              name: couchdb0
          volumeMounts:
            - name: datacouchdbpeer0org1
              mountPath: /opt/couchdb/data
              subPath: couchdb0
      volumes:
        - name: datacouchdbpeer0org1
          persistentVolumeClaim:
            claimName: worker1-incoming-volumeclaim
After applying these deployments, I always get this result for the pods:
couchdb0-peer0org1-b89b984cf-7gjfq 0/1 CrashLoopBackOff 1 9s
couchdb0-peer0org2-86f558f6bb-jzrwf 0/1 CrashLoopBackOff 1 9s
Now the strange thing: if I change the mounted directory from /opt/couchdb/data to /var/lib/couchdb, it works fine. But the issue is that I have to store the CouchDB data in a stateful manner.
If the PersistentVolumeClaim is backed by an NFS export with root squashing enabled, CouchDB most likely cannot take ownership of /opt/couchdb/data and crashes. Edit your /etc/exports on the NFS server with the following content:
/path/exported/directory *(rw,sync,no_subtree_check,no_root_squash)
and then restart the NFS server:
sudo /etc/init.d/nfs-kernel-server restart
When no_root_squash is used, remote root users are able to change any file on the shared file system. This is a quick solution, but it has some security concerns.
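To tie the export back to the deployment above: worker1-incoming-volumeclaim would bind to an NFS-backed PersistentVolume along these lines (a sketch; the PV name and size are hypothetical):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: worker1-incoming-volume    # hypothetical name
spec:
  capacity:
    storage: 5Gi                   # hypothetical size; must satisfy the PVC request
  accessModes:
    - ReadWriteOnce
  nfs:
    server: <NFS IP>
    path: /path/exported/directory   # the directory exported in /etc/exports above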