Cannot run Pumba on Openshift - kubernetes

I want to try a Pumba YAML file on my OpenShift cluster. My pod is giving CrashLoopBackOff.
After checking the logs, I found the error to be this:
container_linux.go:247: starting container process caused "exec: \"pumba\": executable file not found in $PATH".
Has anyone ever faced an error like this?

The image doesn’t contain any shell as an entrypoint to execute the pumba command.
So, what you need to do is change the YAML as follows:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: pumba
spec:
  template:
    metadata:
      labels:
        name: pumba
    spec:
      containers:
      - image: orangesys/alpine-pumba:0.2.4
        name: pumba
        args:
        - pumba
        - --debug
        - --random
        - --interval
        - "30s"
        - kill
        - --signal
        - "SIGKILL"
        volumeMounts:
        - name: dockersocket
          mountPath: /var/run/docker.sock
      volumes:
      - hostPath:
          path: /var/run/docker.sock
        name: dockersocket
Works as expected:
NAME          READY   STATUS    RESTARTS   AGE
pumba-qdqx6   1/1     Running   0          38s
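To follow what Pumba is doing on each node, the DaemonSet pods can be checked using the label from the manifest above (a hedged example; OpenShift's oc client accepts the same subcommands):

# list the pumba pod on each node, then tail their logs by label
kubectl get pods -l name=pumba -o wide
kubectl logs -l name=pumba --tail=20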

Related

command to run existing image of RabbitMQ on a pod

Can anyone please help me find which command is used to run an existing image of RabbitMQ on a pod, and check if it is done?
"which command is used to run an existing image of RabbitMQ"
You can check by getting the YAML output from the cluster or from the YAML config file.
To get the YAML output of the RabbitMQ resource you can do:
kubectl get <deployment/statefulset/pod> <name of resource> -o yaml
For example:
kubectl get deployment rabbitmq -o yaml
In the YAML you can check the command or args being passed to the container that runs the image.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  replicas: 1
  serviceName: rabbitmq
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3-management
        command: ["test"]
        env:
        - name: "RABBITMQ_ERLANG_COOKIE"
          value: "1WqgH8N2v1qDBDZDbNy8Bg9IkPWLEpu79m6q+0t36lQ="
        volumeMounts:
        - mountPath: /var/lib/rabbitmq
          name: rabbitmq-data
      volumes:
      - name: rabbitmq-data
        hostPath:
          path: /data/rabbitmq
          type: DirectoryOrCreate
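If you only want the command and args, a jsonpath query can pull just those fields (a hedged sketch; the pod name rabbitmq-0 is an assumption based on a single-replica StatefulSet named rabbitmq):

# print the first container's command and args, one per line
kubectl get pod rabbitmq-0 -o jsonpath='{.spec.containers[0].command}{"\n"}{.spec.containers[0].args}{"\n"}'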
If you want to check the command being run inside the Docker image, here is the Dockerfile: Click here
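Alternatively, the image's default entrypoint and command can be read straight from the image itself (a hedged sketch using docker inspect; assumes the image has been pulled locally):

docker pull rabbitmq:3-management
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' rabbitmq:3-management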

Executing a Script using a CronJob in a Kubernetes Cluster

I have a 3-node K8s v1.21 cluster in AWS and am looking for a solid config to run a script using a CronJob. I have seen many documents on here and Google, from CronJob with hostPath, to Persistent Volumes/Claims, to ConfigMaps; the list goes on.
I keep getting "Back-off restarting failed container/CrashLoopBackOff" errors.
Any help is much appreciated.
cronjob.yaml
The script I am trying to run is basic, for testing only:
#!/bin/bash
kubectl create deployment nginx --image=nginx
Still getting the same error.
kubectl describe pod/xxxx
This hostPath pod in an AWS cluster created using eksctl works:
apiVersion: v1
kind: Pod
metadata:
  name: redis-hostpath
spec:
  containers:
  - image: redis
    name: redis-container
    volumeMounts:
    - mountPath: /test-mnt
      name: test-vol
  volumes:
  - name: test-vol
    hostPath:
      path: /test-vol
UPDATE
I tried running your config in GCP on a fresh cluster. The only thing I changed was /home/script.sh to /home/admin/script.sh.
Did you test this on your cluster?
Warning FailedPostStartHook 5m27s kubelet Exec lifecycle hook ([/home/mchung/script.sh]) for Container "busybox" in Pod "dumb-job-1635012900-qphqr_default(305c4ed4-08d1-4585-83e0-37a2bc008487)" failed - error: rpc error: code = Unknown desc = failed to exec in container: failed to create exec "0f9f72ccc6279542f18ebe77f497e8c2a8fd52f8dfad118c723a1ba025b05771": cannot exec in a deleted state: unknown, message: ""
Normal Killing 5m27s kubelet FailedPostStartHook
Assuming you're running it in a remote multi-node cluster (since you mentioned AWS in your question), hostPath is NOT a practical option there for a volume mount, because the file would have to exist on every node the job might be scheduled on. Your best choice would be to use a ConfigMap and mount it as a volume.
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-script
data:
  script.sh: |
    # write down your script here
And then:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: redis-job
spec:
  schedule: '*/5 * * * *'
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: redis-container
            image: redis
            args:
            - /bin/sh
            - -c
            - /home/user/script.sh
            volumeMounts:
            - name: redis-data
              mountPath: /home/user/script.sh
              subPath: script.sh
          volumes:
          - name: redis-data
            configMap:
              name: redis-script
              defaultMode: 0755 # mounted ConfigMap files default to 0644; the execute bit is needed to run the script directly
          restartPolicy: OnFailure # Job pod templates require Never or OnFailure
Hope this helps. Let me know if you face any difficulties.
Update:
I think you're doing something wrong. kubectl isn't something you should run from another container/pod, because it requires the kubectl binary to exist inside that container and an appropriate context (kubeconfig) to be set. I'm putting a working manifest below so you can understand the whole concept of running a script as part of a CronJob:
apiVersion: v1
kind: ConfigMap
metadata:
  name: script-config
data:
  script.sh: |-
    name=StackOverflow
    echo "I love $name <3"
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: dumb-job
spec:
  schedule: '*/1 * * * *' # every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: busybox
            image: busybox:stable
            lifecycle:
              postStart:
                exec:
                  command:
                  - /home/script.sh
            volumeMounts:
            - name: some-volume
              mountPath: /home/script.sh
              subPath: script.sh # mount just the file; without subPath this path becomes a directory
          volumes:
          - name: some-volume
            configMap:
              name: script-config
              defaultMode: 0755 # make the mounted script executable
          restartPolicy: OnFailure
What it'll do is print some text to STDOUT every minute. Please note that I have only put commands that the container is capable of executing, and kubectl, which does not exist in that container out of the box, is certainly not one of them. I hope that is enough to answer your question.
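To check that the CronJob is actually firing, a hedged sketch (resource names taken from the manifest above; the last line is just one convenient way to grab the newest pod):

kubectl get cronjob dumb-job   # LAST SCHEDULE updates once it has fired
kubectl get jobs               # one Job per scheduled run
kubectl describe "$(kubectl get pods -o name --sort-by=.metadata.creationTimestamp | tail -n 1)"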

How to deploy logstash with persistent volume on kubernetes?

I am using GKE to deploy Logstash as a StatefulSet with a PVC. I also need to install an output plugin.
When I don't use while true; do sleep 1000; done; in the container's command args, it can't be deployed with the PVC successfully.
The pod gets a CrashLoopBackOff error:
Normal Created 13s (x2 over 14s) kubelet Created container logstash
Normal Started 13s (x2 over 13s) kubelet Started container logstash
Warning BackOff 11s (x2 over 12s) kubelet Back-off restarting failed container
From here I found I can try to add the sleep, and then the StatefulSet with the PVC deploys successfully.
But when I check its logs I find:
/bin/sh: bin/logstash-plugin: No such file or directory
/bin/sh: bin/logstash: No such file or directory
What is a good way to start the container so that it installs a Logstash output plugin?
The whole manifest file:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: logstash-to-gcs
  namespace: logging
spec:
  serviceName: "logstash"
  selector:
    matchLabels:
      app: logstash
  updateStrategy:
    type: RollingUpdate
  replicas: 3
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.10.0
        resources:
          limits:
            memory: 2Gi
        ports:
        - containerPort: 5044
        volumeMounts:
        - name: config-volume
          mountPath: /usr/share/logstash/config
        - name: logstash-pipeline-volume
          mountPath: /usr/share/logstash/pipeline
        - name: logstash-data
          mountPath: /usr/share/logstash
        command: ["/bin/sh","-c"]
        args:
        - bin/logstash-plugin install logstash-output-google_cloud_storage;
          bin/logstash -f /usr/share/logstash/pipeline/logstash.conf;
          while true; do sleep 1000; done;
      volumes:
      - name: config-volume
        configMap:
          name: logstash-configmap
          items:
          - key: logstash.yml
            path: logstash.yml
      - name: logstash-pipeline-volume
        configMap:
          name: logstash-configmap
          items:
          - key: logstash.conf
            path: logstash.conf
  volumeClaimTemplates:
  - metadata:
      name: logstash-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
For my containers I have a script that I run on container deployment, and it works to install the plugins and then run Logstash.
Post arg: sh /usr/share/logstash/config/plugins.sh
#!/bin/bash
echo running post install scripts for plugins..;
logstash-plugin install logstash-filter-sentimentalizer
logstash-plugin install logstash-input-mysql
echo finished post install scripts for plugins..;
sleep 1
exec logstash
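For the StatefulSet above, one way this idea could be wired in is sketched below. This is an assumption-heavy sketch rather than the original answer's exact setup: it introduces a hypothetical ConfigMap named logstash-plugins, and it assumes the PVC is remounted at /usr/share/logstash/data instead of /usr/share/logstash, since mounting the PVC over the whole install directory is what hides bin/logstash and bin/logstash-plugin in the first place.

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-plugins
  namespace: logging
data:
  plugins.sh: |
    #!/bin/bash
    echo "running post install scripts for plugins.."
    logstash-plugin install logstash-output-google_cloud_storage
    echo "finished post install scripts for plugins.."
    exec logstash -f /usr/share/logstash/pipeline/logstash.conf

Then, in the container spec of the StatefulSet, something along these lines in place of the existing command/args and the logstash-data mount:

        command: ["/bin/bash", "-c", "sh /usr/share/logstash/plugins/plugins.sh"]
        volumeMounts:
        - name: plugins-volume
          mountPath: /usr/share/logstash/plugins
        - name: logstash-data
          mountPath: /usr/share/logstash/data # keep the PVC off the install directory
      volumes:
      - name: plugins-volume
        configMap:
          name: logstash-plugins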

Kubernetes Pod is changing status from Running to Completed very soon, how do I prevent that?

I created a pod using YAML, and once the pod is created I am running kubectl exec to run my Gatling perf test code:
kubectl exec gradlecommandfromcommandline -- ./gradlew gatlingRun-simulations.RuntimeParameters -DUSERS=500 -DRAMP_DURATION=5 -DDURATION=30
but this ends at the kubectl console with the below message:
command terminated with exit code 137
On investigation I found that the pod is changing status from Running to Completed.
How do I increase the lifespan of the pod so that it waits for my command to get executed? Here is the pod YAML:
apiVersion: v1
kind: Pod
metadata:
  name: gradlecommandfromcommandline
  labels:
    purpose: gradlecommandfromcommandline
spec:
  containers:
  - name: gradlecommandfromcommandline
    image: tarunkumard/tarungatlingscript:v1.0
    workingDir: /opt/gatling-fundamentals/
    command: ["./gradlew"]
    args: ["gatlingRun-simulations.RuntimeParameters", "-DUSERS=500", "-DRAMP_DURATION=5", "-DDURATION=30"]
  restartPolicy: OnFailure
Here is the YAML file to keep the pod running always:
apiVersion: v1
kind: Pod
metadata:
  name: gradlecommandfromcommandline
  labels:
    purpose: gradlecommandfromcommandline
spec:
  volumes:
  - name: docker-sock
    hostPath:
      path: /home/vagrant/k8s/pods/gatling/user-files/simulations # A file or directory location on the node that you want to mount into the Pod
  # command: [ "git clone https://github.com/TarunKDas2k18/PerfGatl.git" ]
  containers:
  - name: gradlecommandfromcommandline
    image: tarunkumard/tarungatlingscript:v1.0
    workingDir: /opt/gatling-fundamentals/
    command: ["./gradlew"]
    args: ["gatlingRun-simulations.RuntimeParameters", "-DUSERS=500", "-DRAMP_DURATION=5", "-DDURATION=30"]
  - name: gatlingperftool
    image: tarunkumard/gatling:FirstScript # Run the ubuntu 16.04
    command: [ "/bin/bash", "-c", "--" ] # You need to run some task inside a container to keep it running
    args: [ "while true; do sleep 10; done;" ] # Our simple program just sleeps inside an infinite loop
    volumeMounts:
    - mountPath: /opt/gatling/user-files/simulations # The mount path within the container
      name: docker-sock # Name must match the hostPath volume name
    ports:
    - containerPort: 80
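With the second container sleeping in an infinite loop, the pod stays in the Running state and kubectl exec can target it. A hedged example (the -c flag selects the container; the path comes from the volume mount defined above):

kubectl exec gradlecommandfromcommandline -c gatlingperftool -- ls /opt/gatling/user-files/simulations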

hostPath as volume in kubernetes

I am trying to configure hostPath as the volume in Kubernetes. I have logged into the VM server from where I usually run Kubernetes commands such as kubectl.
Below is the pod YAML:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: helloworldanilhostpath
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: helloworldanilhostpath
    spec:
      volumes:
      - name: task-pv-storage
        hostPath:
          path: /home/openapianil/samplePV
          type: Directory
      containers:
      - name: helloworldv1
        image: ***/helloworldv1:v1
        ports:
        - containerPort: 9123
        volumeMounts:
        - name: task-pv-storage
          mountPath: /mnt/sample
On the VM server, I have created the /home/openapianil/samplePV folder, and it has a sample.txt file in it.
Once I try to create this deployment, it fails with this error:
Warning FailedMount 28s (x7 over 59s) kubelet, aks-nodepool1-39499429-1 MountVolume.SetUp failed for volume "task-pv-storage" : hostPath type check failed: /home/openapianil/samplePV is not a directory.
Can anyone please help me understand the problem here?
hostPath type volumes refer to directories on the Node (VM/machine) where your Pod is scheduled to run (aks-nodepool1-39499429-1 in this case), so you need to create this directory on at least that Node.
To make sure your Pod is consistently scheduled on that specific Node you need to set spec.nodeSelector in the PodTemplate:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: helloworldanilhostpath
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: helloworldanilhostpath
    spec:
      nodeSelector:
        kubernetes.io/hostname: aks-nodepool1-39499429-1
      volumes:
      - name: task-pv-storage
        hostPath:
          path: /home/openapianil/samplePV
          type: Directory
      containers:
      - name: helloworldv1
        image: ***/helloworldv1:v1
        ports:
        - containerPort: 9123
        volumeMounts:
        - name: task-pv-storage
          mountPath: /mnt/sample
In most cases it's a bad idea to use this type of volume; there are some special use cases, but chances are yours is not one of them!
If you need local storage for some reason, then a slightly better solution is to use local PersistentVolumes.
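For reference, a minimal sketch of a local PersistentVolume (the PV name, storage size, storage class name, path, and node name are illustrative assumptions based on the question above):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: sample-local-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/openapianil/samplePV
  nodeAffinity: # local PVs must be pinned to the node that holds the data
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - aks-nodepool1-39499429-1

A PersistentVolumeClaim with a matching storageClassName can then bind to it and be mounted from the Deployment like any other PVC; the Kubernetes docs recommend a StorageClass with volumeBindingMode: WaitForFirstConsumer for local volumes.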