Cannot execute netcat inside a Docker/Kubernetes container

I am trying to deploy a few Docker containers via Kubernetes within an AWS cluster. I have no problems with the deployment itself, but I am unable to execute netcat from inside a container.
If I enter the container (kubectl exec -it example-0 bash) and I attempt to execute: nc -w 1 -q 1 127.0.0.1 2181, it fails with:
(UNKNOWN) [127.0.0.1] 2181 (?) : Connection refused
This becomes a problem for some of my containers because I am using the netcat command to implement the corresponding readiness probes.
Example:
Deploying a Zookeeper container:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
  labels:
    app: zookeeper
spec:
  serviceName: zookeeper
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      terminationGracePeriodSeconds: 1800
      nodeSelector:
        proc_host: "yes"
        host_role: "iw"
      containers:
      - name: zookeeper
        image: imageregistry:443/mydomain/zookeeper
        imagePullPolicy: Always
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: peer
        - containerPort: 3888
          name: leader-election
        resources:
          limits:
            memory: 120Mi
          requests:
            cpu: "10m"
            memory: 100Mi
        lifecycle:
          preStop:
            exec:
              command: ["sh", "-ce", "kill -s TERM 1; while $(kill -0 1 2>/dev/null); do sleep 1; done"]
        env:
        - name: LOGGING_LEVEL
          value: WARN
        - name: ID_OFFSET
          value: "4"
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - '[ "imok" = "$(echo ruok | nc -w 1 -q 1 127.0.0.1 2181)" ]'
          initialDelaySeconds: 15
          timeoutSeconds: 5
        volumeMounts:
        - name: zookeeper
          mountPath: /tmp/zookeeper
      volumes:
      - name: zookeeper
        hostPath:
          path: /var/lib/kafka/zookeeper
and it deploys fine, but the readiness probe fails with the following error:
Readiness probe failed: (UNKNOWN) [127.0.0.1] 2181 (?) : Connection refused
I am new to Kubernetes, does anybody know what I am missing?
Thank you for your help.

Related

kubernetes how to copy a cfg file into container before contaner running?

My service need a cfg file which need to be changed before containers start running. So it is not suitable to pack the cfg into docker image.
I need to copy from cluster to container, and then the service in container start and reads this cfg.
How can I do this?
I think Init Containers might be the best fit for your use case. Init containers are like small scripts that run before your own containers start in a Kubernetes pod, and they must exit. You can have this config file updated in a shared Persistent Volume between your init container and your container.
The following article gives a nice example of how this can be done:
https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519
UPDATE:
I found another answer on Stack Overflow which might be related and gives you a better approach to handling this:
can i use a configmap created from an init container in the pod
An example
Output:
NAME          READY   STATUS    RESTARTS   AGE
cassandra-0   1/1     Running   0          70s
[root@green--1 ~]# k exec -it cassandra-0 -n green -- /bin/bash
root@cassandra-0:/# ls /config/cassandra/
cassandra.yaml
How the directory is shared
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  namespace: green
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 1
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800
      initContainers:
      - name: copy
        image: busybox:1.28
        command: ["/bin/sh", "-c", "cp /config/cassandra.yaml /config/cassandra/"]
        volumeMounts:
        - name: tmp-config
          mountPath: /config/cassandra/
        - name: cassandraconfig
          mountPath: /config/
      containers:
      - name: cassandra
        #image: gcr.io/google-samples/cassandra:v13
        image: cassandra:3.11.6
        imagePullPolicy: Always
        #command: [ "/bin/sh","-c","su cassandra && mkdir -p /etc/cassandra/ && cp /config/cassandra/cassandra.yaml /etc/cassandra/" ]
        ports:
        - containerPort: 7000
          name: intra-node
        - containerPort: 7001
          name: tls-intra-node
        - containerPort: 7199
          name: jmx
        - containerPort: 9042
          name: cql
        resources:
          limits:
            cpu: "500m"
            memory: 4Gi
          requests:
            cpu: "500m"
            memory: 4Gi
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/sh
              - -c
              - "cp /config/cassandra/cassandra.yaml /etc/cassandra/"
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - nodetool drain
        env:
        - name: MAX_HEAP_SIZE
          value: 1G
        - name: HEAP_NEWSIZE
          value: 100M
        - name: CASSANDRA_SEEDS
          value: "cassandra-0.cassandra.green.svc.cluster.local"
        - name: CASSANDRA_CLUSTER_NAME
          value: "green"
        - name: CASSANDRA_DC
          value: "ee1-green"
        - name: CASSANDRA_RACK
          value: "Rack1-green"
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - /ready-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 55
        volumeMounts:
        - name: cassandradata
          mountPath: /cassandra_data
        - name: tmp-config
          mountPath: /config/cassandra/
      volumes:
      - name: cassandraconfig
        configMap:
          name: cassandraconfig
      - name: tmp-config
        emptyDir: {}
        # Creating a volume to move data from the init container to the main
        # container without making the mount read-only.
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  volumeClaimTemplates:
  - metadata:
      name: cassandradata
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: rook-ceph-block
      resources:
        requests:
          storage: 5Gi
You could use a ConfigMap. Create a ConfigMap resource and configure your container to load it accordingly. Your container will then be able to read those values from its environment variables.
Here is the reference: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
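For example, a minimal sketch (the ConfigMap name, file name, and image here are hypothetical, not from the question): create the ConfigMap from the cfg file with kubectl create configmap myservice-cfg --from-file=service.cfg, then consume it in the pod:
apiVersion: v1
kind: Pod
metadata:
  name: myservice
spec:
  containers:
  - name: myservice
    image: myservice:latest        # hypothetical image
    volumeMounts:
    - name: cfg
      mountPath: /etc/myservice    # service.cfg appears here as a read-only file
    # For simple key/value settings you could instead expose the ConfigMap
    # via envFrom/configMapRef and read the values as environment variables.
  volumes:
  - name: cfg
    configMap:
      name: myservice-cfg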

Istio direct Pod to Pod communication

I have a problem with communication to a Pod from a Pod deployed with Istio. I actually need it to make Hazelcast discovery work with Istio, but I'll try to generalize the issue here.
Let's have a sample hello world service deployed on Kubernetes. The service replies to HTTP requests on port 8000.
$ kubectl create deployment nginx --image=crccheck/hello-world
The created Pod has an internal IP assigned:
$ kubectl get pods -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP           NODE                                                  NOMINATED NODE
hello-deployment-84d876dfd-s6r5w   1/1     Running   0          8m    10.20.3.32   gke-rafal-test-istio-1-0-default-pool-91f437a3-cf5d   <none>
In the job curl.yaml, we can use the Pod IP directly.
apiVersion: batch/v1
kind: Job
metadata:
  name: curl
spec:
  template:
    spec:
      containers:
      - name: curl
        image: byrnedo/alpine-curl
        command: ["curl", "10.20.3.32:8000"]
      restartPolicy: Never
  backoffLimit: 4
Running the job without Istio works fine.
$ kubectl apply -f curl.yaml
$ kubectl logs pod/curl-pptlm
...
Hello World
...
However, when I try to do the same with Istio, it does not work. The HTTP request gets blocked by Envoy.
$ kubectl apply -f <(istioctl kube-inject -f curl.yaml)
$ kubectl logs pod/curl-s2bj6 curl
...
curl: (7) Failed to connect to 10.20.3.32 port 8000: Connection refused
I've played with Service Entries, MESH_INTERNAL, and MESH_EXTERNAL, but with no success. How can I bypass Envoy and make a direct call to a Pod?
EDIT: The output of istioctl kube-inject -f curl.yaml.
$ istioctl kube-inject -f curl.yaml
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  name: curl
spec:
  backoffLimit: 4
  template:
    metadata:
      annotations:
        sidecar.istio.io/status: '{"version":"dbf2d95ff300e5043b4032ed912ac004974947cdd058b08bade744c15916ba6a","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
      creationTimestamp: null
    spec:
      containers:
      - command:
        - curl
        - 10.16.2.34:8000/
        image: byrnedo/alpine-curl
        name: curl
        resources: {}
      - args:
        - proxy
        - sidecar
        - --domain
        - $(POD_NAMESPACE).svc.cluster.local
        - --configPath
        - /etc/istio/proxy
        - --binaryPath
        - /usr/local/bin/envoy
        - --serviceCluster
        - curl.default
        - --drainDuration
        - 45s
        - --parentShutdownDuration
        - 1m0s
        - --discoveryAddress
        - istio-pilot.istio-system:15010
        - --zipkinAddress
        - zipkin.istio-system:9411
        - --connectTimeout
        - 10s
        - --proxyAdminPort
        - "15000"
        - --concurrency
        - "2"
        - --controlPlaneAuthPolicy
        - NONE
        - --statusPort
        - "15020"
        - --applicationPorts
        - ""
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: ISTIO_META_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: ISTIO_META_CONFIG_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: ISTIO_META_INTERCEPTION_MODE
          value: REDIRECT
        image: docker.io/istio/proxyv2:1.1.1
        imagePullPolicy: IfNotPresent
        name: istio-proxy
        ports:
        - containerPort: 15090
          name: http-envoy-prom
          protocol: TCP
        readinessProbe:
          failureThreshold: 30
          httpGet:
            path: /healthz/ready
            port: 15020
          initialDelaySeconds: 1
          periodSeconds: 2
        resources:
          limits:
            cpu: "2"
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 128Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsUser: 1337
        volumeMounts:
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /etc/certs/
          name: istio-certs
          readOnly: true
      initContainers:
      - args:
        - -p
        - "15001"
        - -u
        - "1337"
        - -m
        - REDIRECT
        - -i
        - '*'
        - -x
        - ""
        - -b
        - ""
        - -d
        - "15020"
        image: docker.io/istio/proxy_init:1.1.1
        imagePullPolicy: IfNotPresent
        name: istio-init
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 10m
            memory: 10Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
      restartPolicy: Never
      volumes:
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - name: istio-certs
        secret:
          optional: true
          secretName: istio.default
status: {}
---
When a pod with an Istio sidecar is started, the following things happen:
- an init container changes the iptables rules so that all outgoing TCP traffic is routed to the sidecar container (istio-proxy) on port 15001,
- the containers of the pod are started in parallel (curl and istio-proxy).
If your curl container runs before istio-proxy is listening on port 15001, you get the error.
I started this container with a sleep command, exec'd into the container, and the curl worked.
$ kubectl apply -f <(istioctl kube-inject -f curl-pod.yaml)
$ k exec -it -n noistio curl -c curl bash
[root@curl /]# curl 172.16.249.198:8000
<xmp>
Hello World
## .
## ## ## ==
## ## ## ## ## ===
/""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~
\______ o _,/
\ \ _,'
`'--.._\..--''
</xmp>
[root@curl /]#
curl-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: curl
spec:
  containers:
  - name: curl
    image: centos
    command: ["sleep", "3600"]
Make sure that you have configured an ingress "Gateway", and after doing that you need to configure a "VirtualService". See the link below for a simple example.
https://istio.io/docs/tasks/traffic-management/ingress/#configuring-ingress-using-an-istio-gateway
Once you have deployed the gateway along with the virtual service, you should be able to curl your service from outside the cluster via an external IP.
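A minimal sketch along the lines of the linked example (it assumes a ClusterIP Service named hello-world exists in front of the pods and exposes port 8000; both names here are placeholders):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hello-gateway
spec:
  selector:
    istio: ingressgateway     # Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello
spec:
  hosts:
  - "*"
  gateways:
  - hello-gateway
  http:
  - route:
    - destination:
        host: hello-world     # the Kubernetes Service fronting the pods
        port:
          number: 8000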
But if you want to check traffic from INSIDE the cluster, you will need to use Istio's mirroring API to mirror the service (pod) from one pod to another pod, and THEN use your command (kubectl apply -f curl.yaml) to see the traffic.
See link below for mirroring example:
https://istio.io/docs/tasks/traffic-management/mirroring/
Hope this helps.

unknown host when lookup pod by name, resolved with pod restart

I have an installer that spins up two pods in my CI flow; let's call them web and activemq. When the web pod starts, it tries to communicate with the activemq pod using the Kubernetes-assigned amq-deployment-0.activemq pod name.
Randomly, the web pod will get an unknown host exception when trying to access amq-deployment1.activemq. If I restart the web pod in this situation, it has no problem communicating with the activemq pod.
I've logged into the web pod when this happens and the /etc/resolv.conf and /etc/hosts files look fine. The host machine's /etc/resolv.conf and /etc/hosts are sparse, with nothing that looks questionable.
Information:
There is only 1 worker node.
kubectl --version
Kubernetes v1.8.3+icp+ee
Any ideas on how to go about debugging this issue? I can't think of a good reason for it to happen randomly, nor to resolve itself on a pod restart.
If other useful information is needed, I can get it. Thanks in advance.
For ActiveMQ we do have this service file:
apiVersion: v1
kind: Service
metadata:
  name: activemq
  labels:
    app: myapp
    env: dev
spec:
  ports:
  - port: 8161
    protocol: TCP
    targetPort: 8161
    name: http
  - port: 61616
    protocol: TCP
    targetPort: 61616
    name: amq
  selector:
    component: analytics-amq
    app: myapp
    environment: dev
    type: fa-core
  clusterIP: None
And this ActiveMQ StatefulSet (this is the template):
kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
  name: pa-amq-deployment
spec:
  replicas: {{ activemqs }}
  updateStrategy:
    type: RollingUpdate
  serviceName: "activemq"
  template:
    metadata:
      labels:
        component: analytics-amq
        app: myapp
        environment: dev
        type: fa-core
    spec:
      containers:
      - name: pa-amq
        image: default/myco/activemq:latest
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 150m
            memory: 1Gi
        livenessProbe:
          exec:
            command:
            - /etc/init.d/activemq
            - status
          initialDelaySeconds: 10
          periodSeconds: 15
          failureThreshold: 16
        ports:
        - containerPort: 8161
          protocol: TCP
          name: http
        - containerPort: 61616
          protocol: TCP
          name: amq
        envFrom:
        - configMapRef:
            name: pa-activemq-conf-all
        - secretRef:
            name: pa-activemq-secret
        volumeMounts:
        - name: timezone
          mountPath: /etc/localtime
      volumes:
      - name: timezone
        hostPath:
          path: /usr/share/zoneinfo/UTC
The Web stateful set:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: pa-web-deployment
spec:
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  serviceName: "pa-web"
  template:
    metadata:
      labels:
        component: analytics-web
        app: myapp
        environment: dev
        type: fa-core
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: component
                  operator: In
                  values:
                  - analytics-web
              topologyKey: kubernetes.io/hostname
      containers:
      - name: pa-web
        image: default/myco/web:latest
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 1
            memory: 2Gi
        readinessProbe:
          httpGet:
            path: /versions
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 15
          failureThreshold: 76
        livenessProbe:
          httpGet:
            path: /versions
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 15
          failureThreshold: 80
        securityContext:
          privileged: true
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        envFrom:
        - configMapRef:
            name: pa-web-conf-all
        - secretRef:
            name: pa-web-secret
        volumeMounts:
        - name: shared-volume
          mountPath: /MySharedPath
        - name: timezone
          mountPath: /etc/localtime
      volumes:
      - nfs:
          server: 10.100.10.23
          path: /MySharedPath
        name: shared-volume
      - name: timezone
        hostPath:
          path: /usr/share/zoneinfo/UTC
This web pod also has a similar "unknown host" problem finding an external database we have configured. The issue is likewise resolved by restarting the pod. Here is the configuration of that external service; maybe it is easier to tackle the problem from this angle? ActiveMQ has no problem using the database service name to find the DB and start up.
apiVersion: v1
kind: Service
metadata:
  name: dbhost
  labels:
    app: myapp
    env: dev
spec:
  type: ExternalName
  externalName: mydb.host.com
Is it possible that it is a question of which pod, and the app in its container, is started up first and which second?
In any case, connecting using a Service and not the pod name would be recommended as the pod's name assigned by Kubernetes changes between pod restarts.
A way to test connectivity is to use telnet (or curl, for the protocols it supports), if found in the image:
telnet <host/pod/Service> <port>
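For instance, from inside the web pod you could try the Service name as well as the fully qualified forms (a sketch; substitute your actual namespace, and note that the StatefulSet above is named pa-amq-deployment, so its first pod would be pa-amq-deployment-0):
# Headless Service name (resolves to the pod IPs behind it)
telnet activemq 61616
telnet activemq.<namespace>.svc.cluster.local 61616
# Stable per-pod DNS name provided by the headless Service
telnet pa-amq-deployment-0.activemq.<namespace>.svc.cluster.local 61616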
Not able to find a solution, I created a workaround. I set up the entrypoint.sh in my image to look up the domain I need to access and write to the log, exiting on error:
#!/bin/bash
# disable command echo and exit-on-error
set +ex
#####################################
# verify that the db service can be found or exit the container
#####################################
# we do not want to install nslookup to determine if the db_host_name is a valid name
# we have ping available though
# 0 - success, 1 - error pinging but lookup worked (services cannot be pinged), 2 - unreachable host
ping -W 2 -c 1 ${db_host_name} &> /dev/null
if [ $? -le 1 ]
then
  echo "service ${db_host_name} is known"
else
  echo "${db_host_name} service is NOT recognized. Exiting container..."
  exit 1
fi
Next, since only a pod restart fixed the issue, in my Ansible deploy I do a rollout check, querying the log to see if I need to do a pod restart. For example:
rollout-check.yml
- name: "Rollout status for {{rollout_item.statefulset}}"
shell: timeout 4m kubectl rollout status -n {{fa_namespace}} -f {{ rollout_item.statefulset }}
ignore_errors: yes
# assuming that the first pod will be the one that would have an issue
- name: "Get {{rollout_item.pod_name}} log to check for issue with dns lookup"
shell: kubectl logs {{rollout_item.pod_name}} --tail=1 -n {{fa_namespace}}
register: log_line
# the entrypoint will write dbhost service is NOT recognized. Exiting container... to the log
# if there is a problem getting to the dbhost
- name: "Try removing {{rollout_item.component}} pod if unable to deploy"
shell: kubectl delete pods -l component={{rollout_item.component}} --force --grace-period=0 --ignore-not-found=true -n {{fa_namespace}}
when: log_line.stdout.find('service is NOT recognized') > 0
I repeat this rollout check 6 times as sometimes even after a pod restart the service cannot be found. The additional checks are instant once the pod is successfully up.
- name: "Web rollout"
include_tasks: rollout-check.yml
loop:
- { c: 1, statefulset: "{{ dest_deploy }}/web.statefulset.yml", pod_name: "pa-web-deployment-0", component: "analytics-web" }
- { c: 2, statefulset: "{{ dest_deploy }}/web.statefulset.yml", pod_name: "pa-web-deployment-0", component: "analytics-web" }
- { c: 3, statefulset: "{{ dest_deploy }}/web.statefulset.yml", pod_name: "pa-web-deployment-0", component: "analytics-web" }
- { c: 4, statefulset: "{{ dest_deploy }}/web.statefulset.yml", pod_name: "pa-web-deployment-0", component: "analytics-web" }
- { c: 5, statefulset: "{{ dest_deploy }}/web.statefulset.yml", pod_name: "pa-web-deployment-0", component: "analytics-web" }
- { c: 6, statefulset: "{{ dest_deploy }}/web.statefulset.yml", pod_name: "pa-web-deployment-0", component: "analytics-web" }
loop_control:
loop_var: rollout_item

Deploying DB2 Warehouse to IBM Cloud with Kubernetes

Is there a way to deploy the Docker image to our Kubernetes cluster?
I have been trying to add it with the YAML file below, but when I run status it says the environment is not set up.
What I basically tried to do is to convert the docker run command into a Kubernetes deployment file:
docker run -d -it --privileged=true --net=host --name=Db2wh -v /mnt/clusterfs:/mnt/bludata0 -v /mnt/clusterfs:/mnt/blumeta0 store/ibmcorp/db2wh_ee:v2.10.0-db2wh-ppcle
Can you please help me?
#testreplicaset.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: db2whce
  name: db2whce
spec:
  # modify replicas according to your case
  replicas: 1
  selector:
    matchLabels:
      app: db2whce
  template:
    metadata:
      labels:
        app: db2whce
    spec:
      containers:
      - name: db2whce
        image: store/ibmcorp/db2wh_ee:v2.10.0-db2wh-linux
        ports:
        - containerPort: 8443
        - containerPort: 389
        - containerPort: 50022
        - containerPort: 50001
        - containerPort: 50000
        - containerPort: 9929
        - containerPort: 9300
        - containerPort: 8998
        - containerPort: 5000
        - containerPort: 22
        args:
        - "--privileged=true"
        - "--net=host"
        - "--name=Db2wh"
        - "-v /mnt/clusterfs:/mnt/bludata0"
        - "-v /mnt/clusterfs:/mnt/blumeta0"
        resources:
          requests:
            cpu: 3
            memory: 14Gi
        volumeMounts:
        - mountPath: /mnt/bludata0
          name: db2wh-pvc
        - mountPath: /mnt/clusterfs
          name: db2wh-pvc
      volumes:
      - name: db2wh-pvc
        persistentVolumeClaim:
          claimName: db2wh-pvc
      imagePullSecrets:
      - name: dockerstore
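For reference, the docker run flags in that args list are passed to the container's entrypoint rather than translated into pod settings. A rough sketch of the Kubernetes-native equivalents (not a tested Db2 Warehouse deployment; the pod name is hypothetical, while the image and PVC names are reused from the question):
apiVersion: v1
kind: Pod
metadata:
  name: db2whce-test          # hypothetical name, for illustration only
spec:
  hostNetwork: true           # roughly docker run --net=host
  containers:
  - name: db2whce
    image: store/ibmcorp/db2wh_ee:v2.10.0-db2wh-linux
    securityContext:
      privileged: true        # roughly docker run --privileged=true
    volumeMounts:             # roughly the two -v bind mounts
    - mountPath: /mnt/bludata0
      name: db2wh-pvc
    - mountPath: /mnt/blumeta0
      name: db2wh-pvc
  volumes:
  - name: db2wh-pvc
    persistentVolumeClaim:
      claimName: db2wh-pvc
  imagePullSecrets:
  - name: dockerstore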

Kubernetes Storage on bare-metal/private cloud

I'm just starting with Kubernetes on a 2-node (master-minion) setup on 2 private cloud servers. I've installed it, did the basic config, and got it running some simple pods/services from the master to the minion.
My question is:
How can I use persistent storage with the pods when not using Google Cloud?
For my first tests I got a Ghost blog pod running, but if I tear down the pod the changes are lost. I tried adding a volume to the pod, but can't actually find any documentation about how it is done when not on GC.
My try:
apiVersion: v1beta1
id: ghost
kind: Pod
desiredState:
  manifest:
    version: v1beta1
    id: ghost
    containers:
    - name: ghost
      image: ghost
      volumeMounts:
      - name: ghost-persistent-storage
        mountPath: /var/lib/ghost
      ports:
      - hostPort: 8080
        containerPort: 2368
    volumes:
    - name: ghost-persistent-storage
      source:
        emptyDir: {}
Found this: Persistent Installation of MySQL and WordPress on Kubernetes
Can't figure out how to add storage (NFS?) to my test install.
In the new API (v1beta3), we've added many more volume types, including NFS volumes. The NFS volume type assumes you already have an NFS server running somewhere to point the pod at. Give it a shot and let us know if you have any problems!
NFS Example:
https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/nfs
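For example, a minimal pod using an NFS volume (shown here with the current v1 schema; the server address and export path are placeholders for your own NFS export):
apiVersion: v1
kind: Pod
metadata:
  name: ghost
spec:
  containers:
  - name: ghost
    image: ghost
    volumeMounts:
    - name: ghost-persistent-storage
      mountPath: /var/lib/ghost
  volumes:
  - name: ghost-persistent-storage
    nfs:
      server: nfs.example.com    # your NFS server
      path: /exports/ghost       # a directory exported by that server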
GlusterFS Example:
https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/glusterfs
Hope that helps!
You could try the https://github.com/suquant/glusterd solution: a GlusterFS server in the Kubernetes cluster.
The idea is very simple: the cluster manager listens to the Kubernetes API and adds each pod's "metadata.name" and IP address to /etc/hosts.
1. Create pods
gluster1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: gluster1
  namespace: mynamespace
  labels:
    component: glusterfs-storage
spec:
  nodeSelector:
    host: st01
  containers:
  - name: glusterfs-server
    image: suquant/glusterd:3.6.kube
    imagePullPolicy: Always
    command:
    - /kubernetes-glusterd
    args:
    - --namespace
    - mynamespace
    - --labels
    - component=glusterfs-storage
    ports:
    - containerPort: 24007
    - containerPort: 24008
    - containerPort: 49152
    - containerPort: 38465
    - containerPort: 38466
    - containerPort: 38467
    - containerPort: 2049
    - containerPort: 111
    - containerPort: 111
      protocol: UDP
    volumeMounts:
    - name: brick
      mountPath: /mnt/brick
    - name: fuse
      mountPath: /dev/fuse
    - name: data
      mountPath: /var/lib/glusterd
    securityContext:
      capabilities:
        add:
        - SYS_ADMIN
        - MKNOD
  volumes:
  - name: brick
    hostPath:
      path: /opt/var/lib/brick1
  - name: fuse
    hostPath:
      path: /dev/fuse
  - name: data
    emptyDir: {}
gluster2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: gluster2
  namespace: mynamespace
  labels:
    component: glusterfs-storage
spec:
  nodeSelector:
    host: st02
  containers:
  - name: glusterfs-server
    image: suquant/glusterd:3.6.kube
    imagePullPolicy: Always
    command:
    - /kubernetes-glusterd
    args:
    - --namespace
    - mynamespace
    - --labels
    - component=glusterfs-storage
    ports:
    - containerPort: 24007
    - containerPort: 24008
    - containerPort: 49152
    - containerPort: 38465
    - containerPort: 38466
    - containerPort: 38467
    - containerPort: 2049
    - containerPort: 111
    - containerPort: 111
      protocol: UDP
    volumeMounts:
    - name: brick
      mountPath: /mnt/brick
    - name: fuse
      mountPath: /dev/fuse
    - name: data
      mountPath: /var/lib/glusterd
    securityContext:
      capabilities:
        add:
        - SYS_ADMIN
        - MKNOD
  volumes:
  - name: brick
    hostPath:
      path: /opt/var/lib/brick1
  - name: fuse
    hostPath:
      path: /dev/fuse
  - name: data
    emptyDir: {}
2. Run pods
kubectl create -f gluster1.yaml
kubectl create -f gluster2.yaml
3. Manage glusterfs servers
kubectl --namespace=mynamespace exec -ti gluster1 -- sh -c "gluster peer probe gluster2"
kubectl --namespace=mynamespace exec -ti gluster1 -- sh -c "gluster peer status"
kubectl --namespace=mynamespace exec -ti gluster1 -- sh -c "gluster volume create media replica 2 transport tcp,rdma gluster1:/mnt/brick gluster2:/mnt/brick force"
kubectl --namespace=mynamespace exec -ti gluster1 -- sh -c "gluster volume start media"
4. Usage
gluster-svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: glusterfs-storage
  namespace: mynamespace
spec:
  ports:
  - name: glusterfs-api
    port: 24007
    targetPort: 24007
  - name: glusterfs-infiniband
    port: 24008
    targetPort: 24008
  - name: glusterfs-brick0
    port: 49152
    targetPort: 49152
  - name: glusterfs-nfs-0
    port: 38465
    targetPort: 38465
  - name: glusterfs-nfs-1
    port: 38466
    targetPort: 38466
  - name: glusterfs-nfs-2
    port: 38467
    targetPort: 38467
  - name: nfs-rpc
    port: 111
    targetPort: 111
  - name: nfs-rpc-udp
    port: 111
    targetPort: 111
    protocol: UDP
  - name: nfs-portmap
    port: 2049
    targetPort: 2049
  selector:
    component: glusterfs-storage
Run service
kubectl create -f gluster-svc.yaml
After that, you can mount NFS in the cluster via the hostname "glusterfs-storage.mynamespace".
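For example, a sketch of a pod consuming the exported volume through that Service (it assumes the "media" volume created above; the pod name and mount path are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: media-consumer          # placeholder name
  namespace: mynamespace
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: media
      mountPath: /mnt/media     # placeholder mount path
  volumes:
  - name: media
    nfs:
      server: glusterfs-storage.mynamespace
      path: /media              # the gluster volume exported over NFS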