How to deploy logstash with persistent volume on kubernetes?

I'm using GKE to deploy Logstash as a StatefulSet with a PVC, and I also need to install an output plugin.
When I don't add while true; do sleep 1000; done; to the container's command args, it can't deploy successfully with the PVC:
the pod ends up in a CrashLoopBackOff error.
Normal Created 13s (x2 over 14s) kubelet Created container logstash
Normal Started 13s (x2 over 13s) kubelet Started container logstash
Warning BackOff 11s (x2 over 12s) kubelet Back-off restarting failed container
From here I found the suggestion to add the sleep loop, and with it the StatefulSet with the PVC deploys successfully.
But checking its logs shows:
/bin/sh: bin/logstash-plugin: No such file or directory
/bin/sh: bin/logstash: No such file or directory
What is a good way to start the container so that it installs a Logstash output plugin?
The whole manifest file:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: logstash-to-gcs
  namespace: logging
spec:
  serviceName: "logstash"
  selector:
    matchLabels:
      app: logstash
  updateStrategy:
    type: RollingUpdate
  replicas: 3
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.10.0
        resources:
          limits:
            memory: 2Gi
        ports:
        - containerPort: 5044
        volumeMounts:
        - name: config-volume
          mountPath: /usr/share/logstash/config
        - name: logstash-pipeline-volume
          mountPath: /usr/share/logstash/pipeline
        - name: logstash-data
          mountPath: /usr/share/logstash
        command: ["/bin/sh","-c"]
        args:
        - bin/logstash-plugin install logstash-output-google_cloud_storage;
          bin/logstash -f /usr/share/logstash/pipeline/logstash.conf;
          while true; do sleep 1000; done;
      volumes:
      - name: config-volume
        configMap:
          name: logstash-configmap
          items:
          - key: logstash.yml
            path: logstash.yml
      - name: logstash-pipeline-volume
        configMap:
          name: logstash-configmap
          items:
          - key: logstash.conf
            path: logstash.conf
  volumeClaimTemplates:
  - metadata:
      name: logstash-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

For my containers I have a script that I run on container deployments, and it works to install the plugins and then run Logstash.
Post Arg> sh /usr/share/logstash/config/plugins.sh
#!/bin/bash
echo running post install scripts for plugins..;
logstash-plugin install logstash-filter-sentimentalizer
logstash-plugin install logstash-input-mysql
echo finished post install scripts for plugins..;
sleep 1
exec logstash
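A sketch of how that could be wired into the StatefulSet from the question, assuming the script above is added to the logstash-configmap under a plugins.sh key (and that logstash and logstash-plugin are on the PATH, as in the script), and with the PVC mounted one level down at /usr/share/logstash/data so the empty volume no longer hides the image's bin/ directory:

      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.10.0
        # Run the plugin-install script, which ends with "exec logstash",
        # so no while true; do sleep 1000; done loop is needed
        command: ["/bin/sh", "/usr/share/logstash/config/plugins.sh"]
        volumeMounts:
        - name: config-volume
          mountPath: /usr/share/logstash/config
        - name: logstash-pipeline-volume
          mountPath: /usr/share/logstash/pipeline
        # Mounting the PVC below the install directory keeps bin/logstash
        # and bin/logstash-plugin visible inside the container
        - name: logstash-data
          mountPath: /usr/share/logstash/data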

Related

deploying wazuh-manager and replace ossec.conf after pods running

I'm deploying wazuh-manager on my Kubernetes cluster and I need to disable some security-check features in ossec.conf, so I'm trying to replace the ossec.conf from the wazuh-manager image with my own from a ConfigMap. But if I create the volume mount at /var/ossec/etc/ossec.conf, it deletes everything from /var/ossec/etc/ (when the wazuh-manager pod is deployed, it copies all the files the manager needs there).
So I'm thinking of creating a new volume mount at /wazuh/ossec.conf together with a lifecycle postStart exec command (cp /wazuh/ossec.conf > /var/ossec/etc/), but I'm getting an error that /var/ossec/etc/ cannot be found.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: wazuh-manager
  labels:
    node-type: master
spec:
  replicas: 1
  selector:
    matchLabels:
      appComponent: wazuh-manager
      node-type: master
  serviceName: wazuh
  template:
    metadata:
      labels:
        appComponent: wazuh-manager
        node-type: master
      name: wazuh-manager
    spec:
      volumes:
      - name: ossec-conf
        configMap:
          name: ossec-config
      containers:
      - name: wazuh-manager
        image: wazuh-manager4.8
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "cp /wazuh/ossec.conf >/var/ossec/etc/ossec.conf"]
        resources:
        securityContext:
          capabilities:
            add: ["SYS_CHROOT"]
        volumeMounts:
        - name: ossec-conf
          mountPath: /wazuh/ossec.conf
          subPath: master.conf
          readOnly: true
        ports:
        - containerPort: 8855
          name: registration
  volumeClaimTemplates:
  - metadata:
      name: wazuh-disk
    spec:
      accessModes: ReadWriteOnce
      storageClassName: wazuh-csi-disk
      resources:
        requests:
          storage: 50
error:
$ kubectl get pods -n wazuh
wazuh-1670333556-0 0/1 PostStartHookError: command '/bin/sh -c cp /wazuh/ossec.conf > /var/ossec/etc/ossec.conf' exited with 1: /bin/sh: /var/ossec/etc/ossec.conf: No such file or directory...
Within the wazuh-kubernetes repository you have a file for each of the Wazuh manager cluster nodes:
wazuh/wazuh_managers/wazuh_conf/master.conf for the Wazuh Manager master node.
wazuh/wazuh_managers/wazuh_conf/worker.conf for the Wazuh Manager worker node.
With these files, ConfigMaps are created in the kustomization.yml file:
configMapGenerator:
- name: indexer-conf
  files:
  - indexer_stack/wazuh-indexer/indexer_conf/opensearch.yml
  - indexer_stack/wazuh-indexer/indexer_conf/internal_users.yml
- name: wazuh-conf
  files:
  - wazuh_managers/wazuh_conf/master.conf
  - wazuh_managers/wazuh_conf/worker.conf
- name: dashboard-conf
  files:
  - indexer_stack/wazuh-dashboard/dashboard_conf/opensearch_dashboards.yml
Then, in the deployment manifest, they are mounted to persist the configurations in the ossec.conf file of each cluster node:
wazuh/wazuh_managers/wazuh-master-sts.yaml:
...
spec:
  volumes:
  - name: config
    configMap:
      name: wazuh-conf
...
      volumeMounts:
      - name: config
        mountPath: /wazuh-config-mount/etc/ossec.conf
        subPath: master.conf
...
Note that the configuration files you need to copy into the /var/ossec/ directory must be mounted under the /wazuh-config-mount/ directory; the Wazuh Manager image entrypoint then takes care of copying them to their location when the container starts. For example, the configmap is mounted to /wazuh-config-mount/etc/ossec.conf and then copied to /var/ossec/etc/ossec.conf at startup.
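Applied to the StatefulSet from the question, a minimal sketch of that mount (keeping the ossec-conf volume and master.conf subPath from above, and dropping the postStart copy) could look like:

      containers:
      - name: wazuh-manager
        image: wazuh-manager4.8
        # No postStart hook needed: the image entrypoint copies everything
        # under /wazuh-config-mount/ into /var/ossec/ when the container starts
        volumeMounts:
        - name: ossec-conf
          mountPath: /wazuh-config-mount/etc/ossec.conf
          subPath: master.conf
          readOnly: true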

Kubernetes MountVolume.NewMounter initialization failed for volume [name] : path [name] does not exist

I am trying to deploy an Elasticsearch cluster on Kubernetes, and for that I am using local persistent volumes.
Here are my manifest files:
persistantvolume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 500Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kb/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - minikube
storage.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: es
  labels:
    service: elasticsearch
spec:
  clusterIP: None
  ports:
  - port: 9200
    name: serving
  - port: 9300
    name: node-to-node
  selector:
    service: elasticsearch
elasticsearch.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  labels:
    service: elasticsearch
spec:
  serviceName: es
  replicas: 3
  selector:
    matchLabels:
      service: elasticsearch
  template:
    metadata:
      labels:
        service: elasticsearch
    spec:
      terminationGracePeriodSeconds: 300
      initContainers:
      - name: fix-the-volume-permission
        image: busybox
        command:
        - sh
        - -c
        - chown -R 1000:1000 /usr/share/elasticsearch/data
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-the-vm-max-map-count
        image: busybox
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      - name: increase-the-ulimit
        image: busybox
        command:
        - sh
        - -c
        - ulimit -n 65536
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: tcp
        resources:
          requests:
            memory: 4Gi
          limits:
            memory: 6Gi
        env:
        - name: cluster.name
          value: elasticsearch-cluster
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.zen.ping.unicast.hosts
          value: "elasticsearch-0.es.default.svc.cluster.local,elasticsearch-1.es.default.svc.cluster.local,elasticsearch-2.es.default.svc.cluster.local"
        - name: ES_JAVA_OPTS
          value: -Xms4g -Xmx4g
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: local-storage
      resources:
        requests:
          storage: 10Gi
kubectl apply -f persistantvolume.yaml
kubectl apply -f storage.yaml
kubectl apply -f service.yaml
kubectl apply -f elasticsearch.yaml
My pod is in the Init:0/3 state, and kubectl describe pod podname shows:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 44s default-scheduler Successfully assigned default/elasticsearch-0 to minikube
Warning FailedMount 12s (x7 over 44s) kubelet MountVolume.NewMounter initialization failed for volume "example-local-pv" : path "/home/kb/data" does not exist
I am a beginner in Kubernetes; please help me understand what I am missing here. /home/kb/data does exist on my local drive.
Assuming you launched minikube with one of its VM drivers, the /home/kb/data directory exists in your local drive but probably NOT inside its VM. Does that make sense? The Kubernetes local-storage thing won't create missing directories. If you JUST want to "fix the error", then minikube ssh -- mkdir /home/kb/data might do the trick. This answer explains more background details about this.
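If you go that route, a quick way to create the path inside the VM and confirm it exists where the kubelet will look for it (paths taken from the question) is:

# Create the directory inside the minikube VM, not on the host;
# sudo and -p are used in case /home/kb does not exist yet
minikube ssh -- sudo mkdir -p /home/kb/data
# Verify it is now present inside the VM
minikube ssh -- ls -ld /home/kb/data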

Cannot run Pumba on Openshift

I want to try a Pumba YAML file on my OpenShift cluster. My pod is going into CrashLoopBackOff.
After checking the logs I found this error:
container_linux.go:247: starting container process caused "exec: \"pumba\": executable file not found in $PATH".
Has anyone ever faced an error like this?
The image doesn't contain any shell as an entrypoint to execute the pumba command.
So what you need to do is change the YAML as follows:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: pumba
spec:
  template:
    metadata:
      labels:
        name: pumba
    spec:
      containers:
      - image: orangesys/alpine-pumba:0.2.4
        name: pumba
        args:
        - pumba
        - --debug
        - --random
        - --interval
        - "30s"
        - kill
        - --signal
        - "SIGKILL"
        volumeMounts:
        - name: dockersocket
          mountPath: /var/run/docker.sock
      volumes:
      - hostPath:
          path: /var/run/docker.sock
        name: dockersocket
Works as expected
NAME READY STATUS RESTARTS AGE
pumba-qdqx6 1/1 Running 0 38s

hostPath as volume in kubernetes

I am trying to configure hostPath as the volume in Kubernetes. I have logged into the VM server from which I usually run kubectl commands.
Below is the pod yaml:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: helloworldanilhostpath
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: helloworldanilhostpath
    spec:
      volumes:
      - name: task-pv-storage
        hostPath:
          path: /home/openapianil/samplePV
          type: Directory
      containers:
      - name: helloworldv1
        image: ***/helloworldv1:v1
        ports:
        - containerPort: 9123
        volumeMounts:
        - name: task-pv-storage
          mountPath: /mnt/sample
On the VM server I have created the /home/openapianil/samplePV folder, and it has a sample.txt file in it.
Once I try to create this deployment, it fails with this error:
Warning FailedMount 28s (x7 over 59s) kubelet, aks-nodepool1-39499429-1 MountVolume.SetUp failed for volume "task-pv-storage" : hostPath type check failed: /home/openapianil/samplePV is not a directory.
Can anyone please help me in understanding the problem here.
hostPath type volumes refer to directories on the Node (VM/machine) where your Pod is scheduled for running (aks-nodepool1-39499429-1 in this case). So you'd need to create this directory at least on that Node.
To make sure your Pod is consistently scheduled on that specific Node you need to set spec.nodeSelector in the PodTemplate:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: helloworldanilhostpath
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: helloworldanilhostpath
    spec:
      nodeSelector:
        kubernetes.io/hostname: aks-nodepool1-39499429-1
      volumes:
      - name: task-pv-storage
        hostPath:
          path: /home/openapianil/samplePV
          type: Directory
      containers:
      - name: helloworldv1
        image: ***/helloworldv1:v1
        ports:
        - containerPort: 9123
        volumeMounts:
        - name: task-pv-storage
          mountPath: /mnt/sample
In most cases it's a bad idea to use this type of volume; there are some special use cases, but chances are yours is not one of them!
If you need local storage for some reason, then a slightly better solution is to use local PersistentVolumes.
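A minimal sketch of that alternative, reusing the path and node name from this question (the StorageClass and PersistentVolume names are illustrative):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sample-local-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /home/openapianil/samplePV
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - aks-nodepool1-39499429-1

A PersistentVolumeClaim with storageClassName: local-storage can then be mounted in the Deployment instead of the hostPath volume.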

Kubernetes statefulset ends in completed state

I'm running a k8s cluster on Google GKE where I have StatefulSets running Redis and Elasticsearch.
Every now and then the pods end up in a Completed state, so they aren't running anymore and the services that depend on them fail.
These pods also never restart by themselves; a simple kubectl delete pod x resolves the problem, but I want my pods to heal by themselves.
I'm running the latest available version, 1.6.4, and I have no clue why they aren't picked up and restarted like any other regular pod. Maybe I'm missing something obvious.
Edit: I've also noticed the pod gets a termination signal and shuts down properly, so I'm wondering where that is coming from. I'm not shutting it down manually, and I experience the same with Elasticsearch.
This is my statefulset resource declaration:
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: "redis"
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:3.2-alpine
        ports:
        - name: redis-server
          containerPort: 6379
        volumeMounts:
        - name: redis-storage
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: redis-storage
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
Check the version of docker you run, and whether the docker daemon was restarted during that time.
If the docker daemon was restarted, all the containers would be terminated (unless you use the new "live restore" feature in 1.12). In some docker versions, docker may incorrectly report "exit code 0" for all containers terminated in this situation. See https://github.com/docker/docker/issues/31262 for more details.
source: https://stackoverflow.com/a/43051371/5331893
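A few illustrative commands for checking this, assuming the redis-0 pod from this question and SSH access to its node:

# Container runtime and version reported by each node
kubectl get nodes -o wide
# On the node itself (over SSH), check when the docker daemon last restarted
systemctl status docker
sudo journalctl -u docker --no-pager | tail -n 50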
I am using the same configuration as you, but removing the annotation in the volumeClaimTemplates since I am trying this on minikube:
$ cat sc.yaml
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: "redis"
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:3.2-alpine
        ports:
        - name: redis-server
          containerPort: 6379
        volumeMounts:
        - name: redis-storage
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: redis-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
Now trying to simulate the case where redis fails, so execing into the pod and killing the redis server process:
$ k exec -it redis-0 sh
/data # kill 1
/data # $
Immediately after the process dies, you can see that the STATUS changes to Completed:
$ k get pods
NAME READY STATUS RESTARTS AGE
redis-0 0/1 Completed 1 38s
It took some time for me to get the redis up and running:
$ k get pods
NAME READY STATUS RESTARTS AGE
redis-0 1/1 Running 2 52s
But soon after that I could see it restarting the pod. Can you see the events triggered when this happened? For example, was there a problem re-attaching the volume to the pod?
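To answer that last question, something along these lines (resource names taken from this example) surfaces the events and the exit reason Kubernetes recorded for the previous run:

# Pod-level events: scheduling, volume attach/mount, restarts and back-off
kubectl describe pod redis-0
# Cluster events in chronological order, filtered to the pod
kubectl get events --sort-by=.metadata.creationTimestamp | grep redis-0
# Exit code and reason of the previous container instance
kubectl get pod redis-0 -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'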