I have the below Pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  volumes:
  - name: mongodb-data
    awsElasticBlockStore:
      volumeID: vol-0c0d9800c22f8c563
      fsType: ext4
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP
I have created the volume in AWS and tried to mount it to the container. The container is not starting.
kubectl get po
NAME READY STATUS RESTARTS AGE
mongodb 0/1 ContainerCreating 0 6m57s
When I created the volume in the Availability Zone where the node is running and the pod was scheduled on that node, the volume was mounted successfully. If the pod is not scheduled on that node, the mount fails. How can I make sure that the volume can be accessed by all the nodes?
According to the documentation:
There are some restrictions when using an awsElasticBlockStore volume:
the nodes on which Pods are running must be AWS EC2 instances
those instances need to be in the same region and availability-zone as the EBS volume
EBS only supports a single EC2 instance mounting a volume
Make sure all of the above are met. If your nodes are in different zones, then you might need to create additional EBS volumes, for example:
aws ec2 create-volume --availability-zone=eu-west-1a --size=10 --volume-type=gp2
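Alternatively, you can pin the Pod to the zone that holds the existing volume. A minimal sketch, assuming your nodes carry the standard zone label for this Kubernetes era (the zone value here is just an example):
spec:
  # Schedule only onto nodes in the same AZ as the EBS volume.
  # Newer clusters use topology.kubernetes.io/zone instead.
  nodeSelector:
    failure-domain.beta.kubernetes.io/zone: eu-west-1a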
Please let me know if that helped.
I'm running into a weird issue with my newer deployments where volumes aren't mounting correctly.
Example..
There are PV/PVCs for three NFS directories that relate to one deployment:
NFS/in
NFS/out
NFS/config
In the deployment, those PVCs mount to the corresponding volumeMounts
volumeMounts/in
volumeMounts/out
volumeMounts/config
With my older deployments, this works as expected. With the new deployments, the NFS directories are mounting to the incorrect mount points... The contents of NFS/in are mounted in volumeMounts/config, and NFS/config is mounted in volumeMounts/in.
This is vanilla Kubernetes on a bare metal node. The only configuration change from default that has been made was yanking PVC protection due to PVCs not being deleted on request:
kubectl patch pvc PVC_NAME -p '{"metadata":{"finalizers": []}}' --type=merge
Any ideas on what causes the directories to mount in the incorrect volumeMounts?
You have to set claimName in your Deployment or StatefulSet:
...
apiVersion: apps/v1
kind: StatefulSet
.......
      containers:
      - name: container-name
        image: container-image:container-tag
        volumeMounts:
        - name: claim1
          mountPath: /path/to/directory
      volumes:
      - name: claim1
        persistentVolumeClaim:
          claimName: PVC_NAME
...
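After applying, you can double-check which claim each volume of a pod actually resolved to (a sketch; POD_NAME is a placeholder):
# Print volume name -> claimName for every volume in the pod:
kubectl get pod POD_NAME -o jsonpath='{range .spec.volumes[*]}{.name}{" -> "}{.persistentVolumeClaim.claimName}{"\n"}{end}'

# Or inspect the Volumes and Mounts sections of:
kubectl describe pod POD_NAME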
I deployed Prometheus in Kubernetes following this manual.
The storage scheme is as follows:
Prometheus inside Kubernetes stores the metrics for 24 hours.
Prometheus outside Kubernetes stores the metrics for 1 week.
A federation is set up between them.
Has anyone run into the issue where, after the pods are removed, the metrics go missing from it within a certain period of time (much less than 24 hours)?
This is perfectly normal if you do not have a persistent storage configured for your prometheus pod. You should use PV/PVC to define a stable place where you keep your prometheus data, otherwise if your pod is recreated, it starts with a clean slate.
PV/PVC typically needs dedicated storage servers in the cluster. If there is no money for storage servers, here is a cheaper approach:
Label a node:
$ kubectl label nodes <node name> prometheus=yes
Force all the Prometheus pods to be created on the same labeled node by using nodeSelector (quote the value so YAML treats it as a string, not a boolean):
nodeSelector:
  prometheus: "yes"
Create an emptyDir volume for each Prometheus pod. An emptyDir volume is first created when the Prometheus pod is assigned to the labeled node and exists as long as that pod is running on that node; it survives container crashes, but the data is deleted if the pod itself is removed from the node.
spec:
  containers:
  - image: <prometheus image>
    name: <prometheus pod name>
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
This approach makes all the Prometheus pods run on the same node, with node-local storage for the metrics - a cheaper approach that prays the Prometheus node does not crash.
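Putting the two pieces together, a minimal sketch of a pod spec pinned to the labeled node with an emptyDir volume (the image name and mount path are placeholders; adjust them to your Prometheus setup):
apiVersion: v1
kind: Pod
metadata:
  name: prometheus
spec:
  # Only schedule onto the node labeled prometheus=yes.
  nodeSelector:
    prometheus: "yes"
  containers:
  - image: prom/prometheus      # placeholder image
    name: prometheus
    volumeMounts:
    - mountPath: /prometheus    # Prometheus data directory (depends on your image/flags)
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}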
Kubernetes features quite a few types of volumes, including emptyDir:
An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. As the name says, it is initially empty. Containers in the pod can all read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
...
By default, emptyDir volumes are stored on whatever medium is backing the node.
Is the emptyDir actually mounted on the node, and accessible to a container outside the pod, or to the node FS itself?
Yes, it is also accessible on the node. It is bind-mounted into the container (sort of). The source directories are under /var/lib/kubelet/pods/PODUID/volumes/kubernetes.io~empty-dir/VOLUMENAME
You can find the location on the host like this:
sudo ls -l /var/lib/kubelet/pods/`kubectl get pod -n mynamespace mypod -o 'jsonpath={.metadata.uid}'`/volumes/kubernetes.io~empty-dir
You can list all emptyDir volumes mounted on the host using this command:
df
To view only the mounts for a specific volume:
df | grep -i cache-volume
where cache-volume is the volume name in your pod definition.
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
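To see the mapping for yourself, you can write a file from inside the container and read it back on the node at the path shown earlier (a sketch; assumes the container image ships a shell and that you have shell access to the node):
# Inside the container:
kubectl exec test-pd -- sh -c 'echo hello > /cache/hello.txt'

# On the node that runs the pod:
sudo cat /var/lib/kubelet/pods/$(kubectl get pod test-pd -o 'jsonpath={.metadata.uid}')/volumes/kubernetes.io~empty-dir/cache-volume/hello.txt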
I have a Kubernetes cluster up and running on AWS. Now when I'm trying to attach an AWS EBS volume to a pod, I get a "special device does not exist" problem.
Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-xxxxxxx does not exist
I did some research and found that the correct AWS EBS device path should look like this:
/var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-west-2a/vol-xxxxxxxx
My suspicion is that this might be because I set up the Kubernetes cluster according to this tutorial and did not set the cloud provider, and therefore the AWS device "does not exist". I wonder if my suspicion is correct, and if so, how to set the cloud provider after the cluster is already up and running.
You need to set the cloud provider to properly mount an EBS volume. To do that after the fact, set --cloud-provider=aws in the following services:
controller-manager
apiserver
kubelet
Restart everything and try mounting again.
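Where exactly the flag goes depends on how the cluster was installed. A sketch for a typical systemd/kubeadm-style setup (the file paths below are common defaults, not guaranteed for your tutorial's layout):
# kubelet: add the flag to its extra args (e.g. /etc/default/kubelet or /etc/sysconfig/kubelet):
KUBELET_EXTRA_ARGS=--cloud-provider=aws

# kube-apiserver and kube-controller-manager: if they run as static pods,
# add the flag to their command list in /etc/kubernetes/manifests/*.yaml:
- --cloud-provider=aws

# Then restart the kubelet so the changes are picked up:
sudo systemctl daemon-reload && sudo systemctl restart kubelet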
An example pod which mounts an EBS volume explicitly may look like this:
apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-ebs
      name: test-volume
  volumes:
  - name: test-volume
    # This AWS EBS volume must already exist.
    awsElasticBlockStore:
      volumeID: <volume-id>
      fsType: ext4
The Kubernetes version is an important factor here. EBS mounts were experimental in 1.2.x; I tried it then but without success. I never tried it again in the later releases, but be sure to check the IAM roles on the k8s VMs to make sure they have the rights to provision EBS disks.
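A quick way to check whether a node actually has an instance profile attached is to query the instance metadata from that node (a sketch; assumes the classic IMDSv1 endpoint is reachable):
# Lists the name of the IAM role attached to this EC2 instance, if any:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
If this returns nothing (or a 404), no instance profile is attached and the kubelet will not be able to attach or mount EBS volumes.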
Using Kubernetes on bare metal and trying to figure out how to mount an external block storage volume from an OpenStack cloud provider.
I understand I need to use the Cinder plugin.
https://github.com/kubernetes/kubernetes/tree/master/pkg/volume/cinder
I modified an example I found to build a test pod; the volume is simply defined as follows in the pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    name: test
spec:
  containers:
  - image: busybox
    name: busybox
    command:
    - "sleep"
    - "3600"
    volumeMounts:
    - name: persistent-storage
      mountPath: /var/lib/storage
  volumes:
  - name: persistent-storage
    cinder:
      volumeID: bd82f7e2-wece-4c01-a505-4acf60b07f4a
      fsType: ext4
I have a volumeID I got from the OpenStack volume API.
I put it there, but I am not sure the volume is actually being mounted:
I am not sure how to check, actually, but I would guess that df -h would show a remote volume mounted on the host and in the container; I don't see any.
I would think Kubernetes would give me an error if the volume was not mounted - the pod would fail or something... but it runs.
So, the question is: how do I verify the volume is mounted? And since I believe it is not mounted, what should I do to make this Cinder plugin work?
The conclusion of my search on this was that the nodes using the block storage also need to be on the same OpenStack cluster.
That is, it is not (easily or by standard means) possible to mount Cinder block storage into a cluster of nodes that is not running on that OpenStack cloud.
See:
Kubernetes: using OpenStack Cinder from one cloud provider while nodes on another
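As for verifying whether the volume actually mounted, a few checks that usually answer it (a sketch; the pod name and mount path are taken from the example above):
# Attach/mount failures show up as events on the pod:
kubectl describe pod test

# Check what is mounted at the mountPath inside the container:
kubectl exec test -- df -h /var/lib/storage

# On the node running the pod, look for the Cinder mount set up by the kubelet:
mount | grep -i cinder
If df inside the container only shows the node's root filesystem at /var/lib/storage, the Cinder volume was not mounted.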