Worker unable to connect to the master and invalid args in web port for Locust on Kubernetes

I am trying to set up a load test for an endpoint. This is what I have followed so far:
Dockerfile
FROM python:3.8
# Copy the Locust files into /src
WORKDIR /src
ADD requirements.txt .
RUN pip install --no-cache-dir --upgrade locust==2.10.1
ADD run.sh .
ADD load_test.py .
ADD load_test.conf .
# Expose the required Locust ports
EXPOSE 5557 5558 8089
# Set script to be executable
RUN chmod 755 run.sh
# Start Locust using LOCUS_OPTS environment variable
CMD ["bash", "run.sh"]
# Modified from:
# https://github.com/scrollocks/locust-loadtesting/blob/master/locust/docker/Dockerfile
run.sh
#!/bin/bash
LOCUST="locust"
LOCUS_OPTS="--config=load_test.conf"
LOCUST_MODE=${LOCUST_MODE:-standalone}
if [[ "$LOCUST_MODE" = "master" ]]; then
LOCUS_OPTS="$LOCUS_OPTS --master"
elif [[ "$LOCUST_MODE" = "worker" ]]; then
LOCUS_OPTS="$LOCUS_OPTS --worker --master-host=$LOCUST_MASTER_HOST"
fi
echo "${LOCUST} ${LOCUS_OPTS}"
$LOCUST $LOCUS_OPTS
# Copied from
# https://github.com/scrollocks/locust-loadtesting/blob/master/locust/docker/locust/run.sh
This is how I have written the Locust load-test script:
import json

from locust import HttpUser, constant, task


class CategorizationUser(HttpUser):
    wait_time = constant(1)

    @task
    def predict(self):
        payload = json.dumps(
            {
                "text": "Face and Body Paint washable Rubies Halloween item 91#385"
            }
        )
        _ = self.client.post("/predict", data=payload)
I am invoking it with the following configuration file (load_test.conf):
locustfile = load_test.py
headless = false
users = 1000
spawn-rate = 1
run-time = 5m
host = IP
html = locust_report.html
So, after building and pushing the Docker image and creating a k8s cluster on GKE, I am deploying it. This is what deployment.yaml looks like:
# Copied from
# https://github.com/scrollocks/locust-loadtesting/blob/master/locust/kubernetes/templates/deployment.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: locust-master-deployment
  labels:
    name: locust
    role: master
spec:
  replicas: 1
  selector:
    matchLabels:
      name: locust
      role: master
  template:
    metadata:
      labels:
        name: locust
        role: master
    spec:
      containers:
        - name: locust
          image: gcr.io/PROJECT_ID/IMAGE_URI
          imagePullPolicy: Always
          env:
            - name: LOCUST_MODE
              value: master
            - name: LOCUST_LOG_LEVEL
              value: DEBUG
          ports:
            - name: loc-master-web
              containerPort: 8089
              protocol: TCP
            - name: loc-master-p1
              containerPort: 5557
              protocol: TCP
            - name: loc-master-p2
              containerPort: 5558
              protocol: TCP
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: locust-worker-deployment
  labels:
    name: locust
    role: worker
spec:
  replicas: 2
  selector:
    matchLabels:
      name: locust
      role: worker
  template:
    metadata:
      labels:
        name: locust
        role: worker
    spec:
      containers:
        - name: locust
          image: gcr.io/PROJECT_ID/IMAGE_URI
          imagePullPolicy: Always
          env:
            - name: LOCUST_MODE
              value: worker
            - name: LOCUST_MASTER
              value: locust-master-service
            - name: LOCUST_LOG_LEVEL
              value: DEBUG
After deployment, I am exposing the required ports like so:
kubectl expose pod locust-master-deployment-f9d4c4f59-8q6wk \
--type NodePort \
--port 5558 \
--target-port 5558 \
--name locust-5558
kubectl expose pod locust-master-deployment-f9d4c4f59-8q6wk \
--type NodePort \
--port 5557 \
--target-port 5557 \
--name locust-5557
kubectl expose pod locust-master-deployment-f9d4c4f59-8q6wk \
--type LoadBalancer \
--port 80 \
--target-port 8089 \
--name locust-web
The cluster and the nodes provision successfully. But the moment I hit the IP of locust-web, I get an error about invalid arguments for the web port, and the workers never connect to the master.
Any suggestions on how to resolve the bug?

Since you are exposing your pods and trying to access them from outside the cluster (your web application), you have to port-forward them or add an Ingress in order to reach your Locust pods.
My first approach would be to try to ping or send requests to your Locust pods with a simple port-forward.
More info about port forwarding is available in the Kubernetes documentation.
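For example, a quick check with port-forward could look like this (the pod name is the one from your kubectl expose commands; any free local port works):
# Forward a local port to the Locust master's web UI port
kubectl port-forward pod/locust-master-deployment-f9d4c4f59-8q6wk 8089:8089

# In a second terminal, verify that the web UI answers
curl -I http://localhost:8089/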

The environment variables injected by Kubernetes are probably colliding with Locust's own configuration variables (LOCUST_WEB_PORT specifically): the Service named locust-web makes Kubernetes inject LOCUST_WEB_PORT=tcp://... into the pods, which Locust then tries to parse as its --web-port setting. Change your setup so that no Services or containers are named with a "locust" prefix.
See https://github.com/locustio/locust/issues/1226 for a similar issue.
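A quick way to confirm the collision (a sketch, reusing the master pod name from the question) is to list the LOCUST_-prefixed variables inside the pod:
# Show the service-link variables Kubernetes injects into the master pod
kubectl exec locust-master-deployment-f9d4c4f59-8q6wk -- env | grep '^LOCUST_'
Renaming the Services (or, on Kubernetes 1.13+, setting enableServiceLinks: false in the pod spec) removes the colliding variables.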

Related

Is it possible to add dependent environment variables in Google Cloud Run?

I would like to specify dependent environment variables on a Cloud Run service.
If the environment variables have been defined in a .env file it would look like this
DATABASE_NAME=my-database
DATABASE_USER=root
DATABASE_PASSWORD=P4SSw0rd!
DATABASE_PORT=5432
DATABASE_HOST="/socket/my-database-socket"
DATABASE_URL="user=${DATABASE_USER} password=${DATABASE_PASSWORD} dbname=${DATABASE_NAME} host=${DATABASE_HOST}"
In this example, DATABASE_URL depends on every other environment variables.
To deploy the service I run the following command:
gcloud run deploy my-service \
--image gcr.io/my-project/my-image:latest \
--region europe-west1 \
--port 80 \
--platform managed \
--allow-unauthenticated \
--set-env-vars 'DATABASE_NAME=my-database' \
--set-env-vars 'DATABASE_USER=root' \
--set-env-vars 'DATABASE_PASSWORD=P4SSw0rd!' \
--set-env-vars 'DATABASE_PORT=5432' \
--set-env-vars 'DATABASE_HOST="/socket/my-database-socket"' \
--set-env-vars 'DATABASE_URL="user=$(DATABASE_USER) password=$(DATABASE_PASSWORD) dbname=$(DATABASE_NAME) host=$(DATABASE_HOST)"'
Here is the created YAML definition of the service (some values are omitted)
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    metadata:
      name: ...
    spec:
      containerConcurrency: 80
      timeoutSeconds: 300
      containers:
        - image: ...
          ports:
            - name: http1
              containerPort: 80
          env:
            - name: DATABASE_NAME
              value: my-database
            - name: DATABASE_USER
              value: root
            - name: DATABASE_PASSWORD
              value: P4SSw0rd!
            - name: DATABASE_HOST
              value: /socket/my-database-socket
            - name: DATABASE_URL
              value: user=$(DATABASE_USER) password=$(DATABASE_PASSWORD) dbname=$(DATABASE_NAME) host=$(DATABASE_HOST)
The problem is that when the service is running, the env vars in DATABASE_URL are not interpolated.
I read that Kubernetes supports dependent env vars, but I can't figure out how to make this work in Cloud Run.
I am wondering whether it is supported in Cloud Run at all.
It's likely that this works in open-source Knative (which uses Kubernetes to execute pods) but not on Google Cloud Run (fully managed), which runs on a proprietary execution engine.
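A common workaround (a sketch, not an official Cloud Run feature; entrypoint.sh is a hypothetical name) is to compose the dependent variable at container start-up and pass only the independent variables via --set-env-vars:
#!/bin/bash
# entrypoint.sh: build DATABASE_URL from the other variables,
# then hand control to the real application process.
export DATABASE_URL="user=${DATABASE_USER} password=${DATABASE_PASSWORD} dbname=${DATABASE_NAME} host=${DATABASE_HOST}"
exec "$@"
The image's ENTRYPOINT would point at this script, with the original start command passed as its arguments.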

Kubernetes pod in minikube can't access a service

I can't access a service from a pod. When I run the curl serviceIP:port command from my pod's console, I get the following error:
root@strongswan-deployment-7bc4c96494-qmb46:/# curl -v 10.111.107.133:80
* Trying 10.111.107.133:80...
* TCP_NODELAY set
* connect to 10.111.107.133 port 80 failed: Connection timed out
* Failed to connect to 10.111.107.133 port 80: Connection timed out
* Closing connection 0
curl: (28) Failed to connect to 10.111.107.133 port 80: Connection timed out
Here is my yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strongswan-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: strongswan
  template:
    metadata:
      labels:
        app: strongswan
    spec:
      containers:
        - name: strongswan-container
          image: 192.168.39.1:5000/mystrongswan
          ports:
            - containerPort: 80
          command: ["/bin/bash", "-c", "--"]
          args: ["while true; do sleep 30; done;"]
          securityContext:
            privileged: true
      imagePullSecrets:
        - name: dockerregcred
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: strongswan-service
spec:
  selector:
    app: strongswan
  ports:
    - port: 80        # Port exposed to the cluster
      protocol: TCP
      targetPort: 80  # Port on which the pod listens
I tried with an Nginx pod and this time it works; I am able to connect to the Nginx service with the curl command.
I don't see where the problem comes from, since it works for the Nginx pod. What did I do wrong?
I use minikube:
user@user-ThinkCentre-M91p:~/minikube$ minikube version
minikube version: v1.20.0
commit: c61663e942ec43b20e8e70839dcca52e44cd85ae
EDIT
My second pod yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: godart-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: godart
  template:
    metadata:
      labels:
        app: godart
    spec:
      containers:
        - name: godart-container
          image: 192.168.39.1:5000/mygodart
          ports:
            - containerPort: 9020
      imagePullSecrets:
        - name: dockerregcred
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: godart-service
spec:
  selector:
    app: godart
  ports:
    - port: 9020        # Port exposed to the cluster
      protocol: TCP
      targetPort: 9020  # Port on which the pod listens
The error:
[root@godart-deployment-648fb8757c-6mscv /]# curl -v 10.104.206.191:9020
* About to connect() to 10.104.206.191 port 9020 (#0)
* Trying 10.104.206.191...
* Connection timed out
* Failed connect to 10.104.206.191:9020; Connection timed out
* Closing connection 0
curl: (7) Failed connect to 10.104.206.191:9020; Connection timed out
The Dockerfile:
FROM centos/systemd
ENV container docker
RUN yum -y update; yum clean all
RUN yum -y install systemd; yum clean all; \
(cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
COPY /godart* /home
RUN yum install -y /home/GoDart-3.3-b10.el7.x86_64.rpm
RUN yum install -y /home/GoDartHmi-3.3-b10.el7.x86_64.rpm
CMD ["/usr/sbin/init"]
EDIT EDIT:
I solved my problem by adding a file that can respond to an HTTP request. This is the file:
var http = require('http');

var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello World!');
};

var www = http.createServer(handleRequest);
www.listen(9020, "0.0.0.0");
To make it work you must have a Node.js environment installed.
Run the script with the command: node filename.js
And after that I am able to curl my services.
I don't really understand why it works now. Does anyone have an explanation?
Thank you
Your strongswan-container container is using bash -c -- "while true; do sleep 30; done;" as its command.
The sleep command obviously does not listen to any port.
When you try to curl your service on port 80, a TCP connection is attempted towards the Pod on port 80, but since there is no such port listening in the Pod the connection attempt fails.
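You can confirm this from inside the pod (a sketch, assuming the pod name from the question and that ss is available in the image):
# List listening TCP sockets inside the strongswan pod;
# with only `sleep` running, nothing will be shown for port 80.
kubectl exec -it strongswan-deployment-7bc4c96494-qmb46 -- ss -tlnp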
How can I fix this error without using the sleep command?
If I understand your question correctly, there are two ways to approach this. The first is to understand how CrashLoopBackOff works and adjust the container restart policy; the most relevant status field there is lastProbeTime, the timestamp of when the Pod condition was last probed.
The second approach is to create a readiness probe; you can read more about it in the Kubernetes documentation.
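As a rough sketch of the second approach (the probe values are placeholders), a TCP readiness probe can be added to the existing deployment with a patch, so the pod is only marked Ready once something actually listens on port 80:
kubectl patch deployment strongswan-deployment --type strategic -p '
spec:
  template:
    spec:
      containers:
      - name: strongswan-container
        readinessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
'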

Cannot find module '/usr/src/app/server.js'

I have tested the app using minikube locally and it works. When I use the same Dockerfile with deployment.yml, the pod goes into an Error state with the reason below:
Error: Cannot find module '/usr/src/app/server.js'
Dockerfile:
FROM node:13-alpine
WORKDIR /api
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
Deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app-dep
  labels:
    app: nodejs-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
    spec:
      serviceAccountName: opp-sa
      imagePullSecrets:
        - name: xxx
      containers:
        - name: nodejs-app
          image: registry.xxxx.net/k8s_app
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
Assuming it could be a problem with node_modules, I ran ls on the WORKDIR inside the Dockerfile and it does show me node_modules. Does anyone know what else to check to resolve this issue?
Since I can't give this level of suggestions in a comment, I'm writing you a fully working example so you can compare it with yours and check if anything is different.
Sources:
Your Dockerfile:
FROM node:13-alpine
WORKDIR /api
COPY package*.json .
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
Sample package.json:
{
  "name": "docker_web_app",
  "version": "1.0.0",
  "description": "Node.js on Docker",
  "author": "First Last <first.last@example.com>",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.16.1"
  }
}
Sample server.js:
'use strict';

const express = require('express');

// Constants
const PORT = 8080;
const HOST = '0.0.0.0';

// App
const app = express();
app.get('/', (req, res) => {
  res.send('Hello World');
});

app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
Build image:
$ ls
Dockerfile package.json server.js
$ docker build -t k8s_app .
...
Successfully built 2dfbfe9f6a2f
Successfully tagged k8s_app:latest
$ docker images k8s_app
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s_app latest 2dfbfe9f6a2f 4 minutes ago 118MB
Your deployment sample + service for easy access (called nodejs-app.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app-dep
  labels:
    app: nodejs-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
    spec:
      containers:
        - name: web-app
          image: k8s_app
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-svc
spec:
  type: NodePort
  selector:
    app: nodejs-app
  ports:
    - port: 8080
      targetPort: 8080
Note: I'm using the minikube Docker registry for this example; that's why imagePullPolicy: Never is set.
Now I'll deploy it:
$ kubectl apply -f nodejs-app.yaml
deployment.apps/nodejs-app-dep created
service/web-app-svc created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nodejs-app-dep-5d75f54c7d-mfw8x 1/1 Running 0 3s
Whenever you need to troubleshoot inside a pod you can use kubectl exec -it <pod_name> -- /bin/sh (or /bin/bash depending on the base image.)
$ kubectl exec -it nodejs-app-dep-5d75f54c7d-mfw8x -- /bin/sh
/api # ls
Dockerfile node_modules package-lock.json package.json server.js
The pod is running and the files are in the WORKDIR folder as stated in the Dockerfile.
Finally let's test accessing from outside the cluster:
$ minikube service list
|-------------|-------------|--------------|-------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|-------------|--------------|-------------------------|
| default | web-app-svc | 8080 | http://172.17.0.2:31446 |
|-------------|-------------|--------------|-------------------------|
$ curl -i http://172.17.0.2:31446
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/html; charset=utf-8
Content-Length: 11
ETag: W/"b-Ck1VqNd45QIvq3AZd8XYQLvEhtA"
Date: Thu, 14 May 2020 18:49:40 GMT
Connection: keep-alive
Hello World$
The Hello World is being served as desired.
To Summarize:
I built the Docker image inside minikube ssh so it is cached locally.
Created the manifest containing the deployment pointing to the image, and added the Service part to allow external access using NodePort.
NodePort routes all traffic sent to the Minikube IP on the port assigned to the service (i.e. 31446) and delivers it to the pods matching the selector, which listen on port 8080.
A few pointers for troubleshooting:
kubectl describe pod <pod_name>: provides precious information when the pod status is in any kind of error.
kubectl exec is great to troubleshoot inside the container while it's running; it's pretty similar to the docker exec command.
Review your code files to ensure there is no hard-coded path in them.
Try using WORKDIR /usr/src/app instead of /api and see if you get a different result.
Try using a .dockerignore file with node_modules in its contents (see the sketch below).
Try it out and let me know in the comments if you need further help.
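For the .dockerignore pointer, a minimal sketch that keeps the host's node_modules out of the build context (so COPY . . does not overwrite the modules installed in the image) could be:
# Create a .dockerignore next to the Dockerfile
cat > .dockerignore <<'EOF'
node_modules
npm-debug.log
.git
EOF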
@willrof, thanks for the detailed write-up. A reply to your response is limited to 30 characters, so I'm posting this as a new answer.
My problem was resolved yesterday. It was with COPY . .
It works perfectly fine locally, but when I tried to deploy onto the cluster with the same Dockerfile, I kept running into the "cannot find module..." issue.
It finally worked when the directory paths were spelled out instead of . . while copying the files:
COPY /api /usr/app   # copy the project
WORKDIR /usr/app     # set WORKDIR just before npm install
RUN npm install
EXPOSE 3000
Moving the WORKDIR statement before installing node_modules worked in my case. I'm surprised this turned out to be the problem, since it worked locally with COPY . .

How to test if container is running postgres

I just deployed a Docker container with Postgres on it to AWS EKS.
Below are the details.
How do I access it or test whether Postgres is working? I tried both IPs with the port, from a worker node within the VPC:
psql -h #IP -U #defaultuser -p 55432
Below is the deployment.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 55432
          # envFrom:
          #   - configMapRef:
          #       name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: efs
Surprisingly, I am able to connect with psql, but on 5432. :( Not sure what I am doing wrong; I passed containerPort as 55432.
In short, you need to run the following command to expose your database on port 55432:
kubectl expose deployment postgres --port=55432 --target-port=5432 --name internal-postgresql-svc
From now on, you can connect to it via port 55432 from inside your cluster by using the service name as a hostname, or via its ClusterIP address:
kubectl get svc internal-postgresql-svc
What you did in your deployment manifest file was only attach additional information about the network connections the container uses, which is misleading here, because your container exposes port 5432 only (you can verify this yourself in the postgres image's Dockerfile). You should use a Kubernetes Service, an abstraction which enables access to your Pods and does the necessary port mapping behind the scenes.
Please also check the different Service types if you want to expose your PostgreSQL database outside of the Kubernetes cluster.
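For reference, the kubectl expose command above is roughly equivalent to applying a Service manifest like this sketch (the selector matches the app: postgres label from the deployment):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: internal-postgresql-svc
spec:
  selector:
    app: postgres
  ports:
    - port: 55432       # port exposed inside the cluster
      targetPort: 5432  # port Postgres actually listens on
      protocol: TCP
EOF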
To test whether Postgres is running fine inside the Pod's container:
kubectl run postgresql-postgresql-client --rm --tty -i --restart='Never' --namespace default --image bitnami/postgresql --env="PGPASSWORD=<HERE_YOUR_PASSWORD>" --command -- psql --host <HERE_HOSTNAME=SVC_OR_IP> -U <HERE_USERNAME>

Combining multiple Local-SSD on a node in Kubernetes (GKE)

The data required by my container is too large to fit on one local SSD. I also need to access the SSD's as one filesystem from my container. So I would need to attach multiple ones. How do I combine them (single partition, RAID0, etc) and make them accessible as one volume mount in my container?
This link shows how to mount a single SSD at a mount path: https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd. I am not sure how you would merge multiple ones.
edit
The question asks how one would "combine" multiple SSD devices, individually mounted, on a single node in GKE.
WARNING: this is experimental, not intended for production use unless you know what you are doing, and only tested on GKE version 1.16.x.
The approach includes a DaemonSet using a ConfigMap to run nsenter (with wait tricks) with host namespace and privileged access so you can manage the devices. Specifically for GKE local SSDs, we can unmount those devices and then RAID 0 them. An initContainer does the dirty work, since this type of task is something you need to mark as complete and then drop the privileged container access (or even the Pod). Here is how it is done.
The example assumes 16 SSDs; you'll want to adjust the hardcoded values as necessary. Also check your OS image requirements (I use Ubuntu), and make sure the version of GKE you use starts local SSDs at sd[b].
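If you want to double-check the device naming before running the setup (a sketch, assuming SSH access to the node; node and zone names are placeholders), list the block devices on a node:
# Local SSDs should appear as /dev/sdb, /dev/sdc, ... on the node
gcloud compute ssh <NODE_NAME> --zone <ZONE> -- lsblk -d -o NAME,SIZE,TYPE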
ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-ssds-setup
  namespace: search
data:
  setup.sh: |
    #!/bin/bash
    # returns exit codes: 0 = found, 1 = not found
    isMounted() { findmnt -rno SOURCE,TARGET "$1" >/dev/null;} # path or device

    # existing disks & mounts
    SSDS=(/dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq)

    # install mdadm utility
    apt-get -y update && apt-get -y install mdadm --no-install-recommends
    apt-get autoremove

    # OPTIONAL: determine what to do with existing, I wipe it here
    if [ -b "/dev/md0" ]
    then
        echo "raid array already created"
        if isMounted "/dev/md0"; then
            echo "already mounted - unmounting"
            umount /dev/md0 &> /dev/null || echo "soft error - assumed device was mounted"
        fi
        mdadm --stop /dev/md0
        mdadm --zero-superblock "${SSDS[@]}"
    fi

    # unmount disks from host filesystem
    for i in {0..15}
    do
        umount "${SSDS[i]}" &> /dev/null || echo "${SSDS[i]} already unmounted"
    done

    if isMounted "/dev/sdb";
    then
        echo ""
        echo "unmount failure - prevent raid0" 1>&2
        exit 1
    fi

    # raid0 array
    yes | mdadm --create /dev/md0 --force --level=0 --raid-devices=16 "${SSDS[@]}"
    echo "raid array created"

    # format
    mkfs.ext4 -F /dev/md0

    # mount, change /mnt/ssd-array to whatever
    mkdir -p /mnt/ssd-array
    mount /dev/md0 /mnt/ssd-array
    chmod a+w /mnt/ssd-array

  wait.sh: |
    #!/bin/bash
    while sudo fuser /var/{lib/{dpkg,apt/lists},cache/apt/archives}/lock >/dev/null 2>&1; do sleep 1; done
DaemonSet pod spec
spec:
  hostPID: true
  nodeSelector:
    cloud.google.com/gke-local-ssd: "true"
  volumes:
    - name: setup-script
      configMap:
        name: local-ssds-setup
    - name: host-mount
      hostPath:
        path: /tmp/setup
  initContainers:
    - name: local-ssds-init
      image: marketplace.gcr.io/google/ubuntu1804
      securityContext:
        privileged: true
      volumeMounts:
        - name: setup-script
          mountPath: /tmp
        - name: host-mount
          mountPath: /host
      command:
        - /bin/bash
        - -c
        - |
          set -e
          set -x

          # Copy the setup script to the host
          cp /tmp/setup.sh /host

          # Copy the wait script to the host
          cp /tmp/wait.sh /host

          # Make the wait script executable on the host
          /usr/bin/nsenter -m/proc/1/ns/mnt -- chmod u+x /tmp/setup/wait.sh

          # Make the setup script executable on the host
          /usr/bin/nsenter -m/proc/1/ns/mnt -- chmod u+x /tmp/setup/setup.sh

          # Wait for node package updates to complete
          /usr/bin/nsenter -m/proc/1/ns/mnt /tmp/setup/wait.sh

          # /tmp/setup is mounted on the host, so the host can run the script
          /usr/bin/nsenter -m/proc/1/ns/mnt /tmp/setup/setup.sh
  containers:
    - image: "gcr.io/google-containers/pause:2.0"
      name: pause
For high-performance use cases, use the Ephemeral storage on local SSDs GKE feature. All local SSDs will be configured as a (striped) RAID 0 array and mounted into the pod.
Quick summary:
Create the node pool or cluster with the option --ephemeral-storage local-ssd-count=X (see the sketch after this list).
Schedule pods to nodes with the cloud.google.com/gke-ephemeral-storage-local-ssd nodeSelector.
Add an emptyDir volume.
Mount it with volumeMounts.
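For the first step, node-pool creation might look like the sketch below (cluster, zone, machine type, and count are placeholders; the flag spelling follows the option quoted above, and newer gcloud releases may spell it --ephemeral-storage-local-ssd count=X instead):
# Create a node pool whose local SSDs are combined into ephemeral storage
gcloud container node-pools create ssd-pool \
  --cluster <CLUSTER_NAME> \
  --zone <ZONE> \
  --machine-type n1-standard-8 \
  --ephemeral-storage local-ssd-count=2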
Here's how I used it with a DaemonSet:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      nodeSelector:
        cloud.google.com/gke-ephemeral-storage-local-ssd: "true"
      volumes:
        - name: localssd
          emptyDir: {}
      containers:
        - name: myapp
          image: <IMAGE>
          volumeMounts:
            - mountPath: /scratch
              name: localssd
You can use a DaemonSet YAML file to deploy a pod that runs on startup, assuming you have already created a cluster with two local SSDs (this pod will be in charge of creating the RAID 0 disk):
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: ssd-startup-script
  labels:
    app: ssd-startup-script
spec:
  template:
    metadata:
      labels:
        app: ssd-startup-script
    spec:
      hostPID: true
      containers:
        - name: ssd-startup-script
          image: gcr.io/google-containers/startup-script:v1
          imagePullPolicy: Always
          securityContext:
            privileged: true
          env:
            - name: STARTUP_SCRIPT
              value: |
                #!/bin/bash
                sudo curl -s https://get.docker.com/ | sh
                echo Done
The pod that will have access to the disk array in the above example mounts it from the host path /mnt/disks/ssd-array:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: ubuntu
      volumeMounts:
        - mountPath: /mnt/disks/ssd-array
          name: ssd-array
      args:
        - sleep
        - "1000"
  nodeSelector:
    cloud.google.com/gke-local-ssd: "true"
  tolerations:
    - key: "local-ssd"
      operator: "Exists"
      effect: "NoSchedule"
  volumes:
    - name: ssd-array
      hostPath:
        path: /mnt/disks/ssd-array
After deploying the test-pod, open a shell in it from your Cloud Shell or any instance that can reach the cluster.
Then run:
kubectl exec -it test-pod -- /bin/bash
After that you should be able to write a file to the ssd-array mount and read it back, e.g. with cat test-file.txt.
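A minimal check from the shell opened above might look like this (the file name is just an example):
# Write to the RAID 0 array and read the file back
echo "hello from the ssd array" > /mnt/disks/ssd-array/test-file.txt
cat /mnt/disks/ssd-array/test-file.txt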