I created two PVs on a USB drive with:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-alertmanager
spec:
  capacity:
    storage: 4Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: prometheus-alertmanager
  hostPath:
    path: /mnt/tinkerext/Prometheus
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-server
spec:
  capacity:
    storage: 14Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: prometheus-server
  hostPath:
    path: /mnt/tinkerext/Prometheus
After that, I used Helm to install Prometheus in the default namespace with helm install prometheus prometheus-community/prometheus --version 15.5.4, as described in the Helm Prometheus docs.
Immediately after a successful install, I ran into issues with prometheus-alertmanager and prometheus-server; I checked, and it was because of unbound PersistentVolumeClaims.
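To see why a claim stays Pending, kubectl describe pvc <name> (here, e.g., kubectl describe pvc prometheus-server) lists the binding events for it.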
My question is: I have been trying to get Prometheus to claim the PVs that I created, but no luck so far.
I even inspected the release's values with helm get values prometheus > helm-prometheus.yaml and made the changes below (adding storageClassName):
---
# Source: prometheus/templates/alertmanager/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    component: "alertmanager"
    app: prometheus
    release: release-name
    chart: prometheus-15.5.4
    heritage: Helm
  name: release-name-prometheus-alertmanager
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: "2Gi"
  storageClassName: prometheus-alertmanager
---
# Source: prometheus/templates/server/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    component: "server"
    app: prometheus
    release: release-name
    chart: prometheus-15.5.4
    heritage: Helm
  name: release-name-prometheus-server
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: "8Gi"
  storageClassName: prometheus-server
But this still doesn't work. How do I correctly tell Prometheus to claim the PVs that I created?
Note: after editing, I applied the change with helm upgrade -f helm-prometheus.yaml prometheus prometheus-community/prometheus.
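For what it's worth, the chart exposes the claims' StorageClass through its values rather than through edited PVC manifests; a minimal override file (a sketch, assuming the 15.x chart's alertmanager.persistentVolume.storageClass and server.persistentVolume.storageClass value names) passed via helm upgrade -f would be:

alertmanager:
  persistentVolume:
    storageClass: prometheus-alertmanager  # must match the PV's storageClassName
server:
  persistentVolume:
    storageClass: prometheus-server  # must match the PV's storageClassName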
Related
I'm deploying a Postgres DB using Helm with these steps:
Applying the PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: admin-4
  name: postgresql-pv-admin-4
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
Applying the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: admin-4
  name: postgresql-pvc-admin-4
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Running the Helm command:
helm install postgres bitnami/postgresql --set persistence.enabled=true --set persistence.existingClaim=postgresql-pvc-admin-4 --set volumePermissions.enabled=true -n admin-4
This is the output:
On the latest bitnami/postgresql chart (chart version 11.8.1), the fields to set are:
primary:
  persistence:
    enabled: true
    existingClaim: postgresql-pvc-admin-4
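Equivalently on the command line (assuming the same 11.8.1 value paths):
helm install postgres bitnami/postgresql --set primary.persistence.enabled=true --set primary.persistence.existingClaim=postgresql-pvc-admin-4 --set volumePermissions.enabled=true -n admin-4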
I'm running a k3s cluster in two different locations. A PV currently lives in one of them, and I'm trying to come up with a configuration that reads as a single PV but clones/mirrors that drive to the other location, all through k3s PVs and PVCs. Any clever ideas on how to achieve this?
My PV and PVC look like this:
# pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-data-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 32Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/srv/dev-disk-by-uuid-********-****-****-****-************/kubernetes"
# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 12Gi
EDIT: SEE BELOW
I am new to this and am trying to build a local cluster with two physical machines using kubeadm. I am following the steps at https://github.com/mongodb/mongodb-enterprise-kubernetes and everything is OK. First I installed the Kubernetes operator, but when I tried to install Ops Manager I got:
0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
The YAML I used to install Ops Manager is:
---
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: opsmanager1
spec:
  replicas: 2
  version: 4.2.0
  adminCredentials: mongo-db-admin1 # Should match metadata.name
                                    # in the Kubernetes secret
                                    # for the admin user
  externalConnectivity:
    type: NodePort
  applicationDatabase:
    members: 3
    version: 4.4.0
    persistent: true
    podSpec:
      persistence:
        single:
          storage: 1Gi
I can't figure out what the problem is. I am in a testing phase, and my goal is a scalable MongoDB deployment. Thanks in advance.
Edit: I made a few changes. I created a StorageClass like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: localstorage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Delete
allowVolumeExpansion: true
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mongo-01
  labels:
    type: local
spec:
  storageClassName: localstorage
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/master/mongo01"
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mongo-02
  labels:
    type: local
spec:
  storageClassName: localstorage
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/master/mongo02"
And now my YAML for Ops Manager is:
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager-localmode
spec:
  replicas: 2
  version: 4.2.12
  adminCredentials: mongo-db-admin1
  externalConnectivity:
    type: NodePort
  statefulSet:
    spec:
      # the Persistent Volume Claim will be created for each Ops Manager Pod
      volumeClaimTemplates:
        - metadata:
            name: mongodb-versions
          spec:
            storageClassName: localstorage
            accessModes: [ "ReadWriteOnce" ]
            resources:
              requests:
                storage: 2Gi
      template:
        spec:
          containers:
            - name: mongodb-ops-manager
              volumeMounts:
                - name: mongodb-versions
                  # this is the directory in each Pod where all MongoDB
                  # archives must be put
                  mountPath: /mongodb-ops-manager/mongodb-releases
  backup:
    enabled: false
  applicationDatabase:
    members: 3
    version: 4.4.0
    persistent: true
But I get a new error: Warning ProvisioningFailed 44s (x26 over 6m53s) persistentvolume-controller no volume plugin matched name: kubernetes.io/no-provisioner
At a quick glance, it looks like you don't have any provisioner on your cluster that can create a volume for that PVC; see https://v1-15.docs.kubernetes.io/docs/concepts/storage/volumes/
Your app needs a PersistentVolume to be provisioned, but your cluster doesn't know how to do that.
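For context, kubernetes.io/no-provisioner is a placeholder that explicitly disables dynamic provisioning: every PV must be created by hand, and a PVC can only bind to a matching pre-created PV. For local, manually managed volumes the upstream docs also recommend delaying binding until a pod is scheduled. A minimal sketch of such a class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: localstorage
provisioner: kubernetes.io/no-provisioner
# bind claims only once a consuming pod is scheduled,
# so a PV on the right node can be chosen
volumeBindingMode: WaitForFirstConsumer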
I am setting up a container through Google Cloud Platform (GCP) Kubernetes Engine. I have a requirement to mount multiple volumes, as the containers are created that way. These volumes have to be persistent, so I went with an NFS approach. I have a VM where an NFS service is running, and it exports a couple of directories.
I am giving sample YAML files below.
deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-branch
  labels:
    component: myapp-branch
spec:
  template:
    metadata:
      labels:
        component: myapp-branch
    spec:
      imagePullSecrets:
        - name: myprivatekey
      containers:
        - name: myapp-branch
          image: mydockerrepo/myapp/webapp:6.6
          command: ["/bin/sh", "-ec", "while :; do echo '.'; sleep 100 ; done"]
          env:
            - name: myapp_UID
              value: "1011"
            - name: myapp_GID
              value: "1011"
            - name: myapp_USER
              value: "myapp_branch"
            - name: myapp_XMS_G
              value: "1"
            - name: myapp_XMX_G
              value: "6"
          volumeMounts:
            - mountPath: /mypath1/path1
              name: pvstorestorage
            - mountPath: /mypath2/path2
              name: mykeys
      volumes:
        - name: pvstorestorage
          persistentVolumeClaim:
            claimName: standalone
        - name: mykeys
          persistentVolumeClaim:
            claimName: conf
PVAndPVC.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: standalone
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.2.1.6
    path: "/exports/path1"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: standalone
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: conf
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.2.1.6
    path: "/exports/path2"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: conf
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
After applying them, I see that both volume mounts of the container (/mypath1/path1 and /mypath2/path2) are mounted to the same NFS export (/exports/path2, the second one). This happens with persistentVolumeClaim; when I tried emptyDir, it worked fine.
If anyone has tried this approach and knows the solution, it would be really helpful.
You must add a rule in your PVC (PersistentVolumeClaim) definitions to make them match their correct respective PV (PersistentVolume).
Having the same name is not enough.
Change your PV and PVC definitions into something like this (untested):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: standalone
  labels:
    type: standalone
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.2.1.6
    path: "/exports/path1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: conf
  labels:
    type: conf
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.2.1.6
    path: "/exports/path2"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: standalone
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: standalone
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: conf
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: conf
(Typically, I added a metadata.labels.type to each PV and a spec.selector.matchLabels to each PVC.)
Also, use kubectl get pv and kubectl get pvc to see which claim bound to which volume and to ease debugging.
My pvc.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: database-disk
  labels:
    stage: production
    name: database
    app: mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
Running kubectl apply -f pvc.yaml in MicroK8s gave the following error:
error validating data: ValidationError(PersistentVolumeClaim): unknown field "storage" in io.k8s.api.core.v1.PersistentVolumeClaim; if you choose to ignore these errors, turn validation off with --validate=false
Edit: the storage indentation got mangled when I copied the text onto my VM :( It's working fine now.
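For reference, storage must sit under resources.requests; when the indentation collapses it ends up one level too high and fails validation. The correct nesting of the spec:

spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi  # storage belongs under resources.requests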
You forgot to specify the volumeMode. Add the volumeMode option and it should work.
Like this:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: database-disk
  labels:
    stage: production
    name: database
    app: mysql
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 20Gi
If you're using a StorageClass, either define one as the default or specify the storageClassName in the claim.
I have defined this in GCloud:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  name: slow
parameters:
  type: pd-standard
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Delete
volumeBindingMode: Immediate
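A claim that targets this class explicitly, instead of relying on the default-class annotation, would look like this sketch (the claim name is illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-disk
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: slow  # matches the StorageClass defined above
  resources:
    requests:
      storage: 20Gi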