How to enable persistence in prometheus-community chart? - kubernetes

I am trying to set up persistent storage with the new prometheus-community Helm chart. I have modified the Helm values files as seen below. Currently, when the chart is reinstalled (I use Tiltfiles for this), the PVC is deleted and therefore the data is not persisted.
I assume that the problem could have something to do with the fact that there is no StatefulSet running for the server, but I am not sure how to fix it.
(The solutions from here do not solve my problem, as they are for the old chart.)
server:
  persistentVolume:
    enabled: true
    storageClass: default
    accessModes:
      - ReadWriteOnce
    size: 8Gi

I enabled the statefulset on the prometheus server and now it seems to work.
server:
  persistentVolume:
    enabled: true
    storageClass: default-hdd-retain
    accessModes:
      - ReadWriteOnce
    size: 40Gi
  statefulSet:
    enabled: true

Related

Pod restarts alone and loses data (Minikube k8s)

I deployed HDFS into Minikube using a Helm chart; I am also using Minikube multi-node (3 nodes). After deploying HDFS, every time I run minikube start the pods restart and everything goes back to 0 (I lose the data that I put on HDFS). Meanwhile, I tried to apply a PVC:
persistence:
  nameNode:
    enabled: true
    storageClass: standard-ssd
    accessMode: ReadWriteOnce
    size: 50Gi
  dataNode:
    enabled: true
    storageClass: standard-hdd
    accessMode: ReadWriteOnce
    size: 200Gi
But I get this error: error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off.
How can I solve this problem?

Kubernetes external provisioning won't create volume

Hi there, I followed this tutorial back in March and got it up and running. I have been refreshing on the area and was doing it again... but now it no longer works. Has anything changed in Kubernetes in the meantime that could have caused this issue?
Here is the error code in PVC:
Another question addresses this by pointing to another link, but this link isn't working anymore!
I am running my kubernetes cluster on Minikube version v1.17.1
Check that your kubectl client version is compatible with the Minikube version and, if needed, upgrade the components. Also, check the PV status on your system and try defining the storage class, storageClassName: standard, in your PV definition as below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgresql-storage
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  hostPath:
    path: /data/postgresql/
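If binding still fails, make sure the claim requests the same class. A minimal sketch of a matching claim (the name postgresql-claim is illustrative, not from the original tutorial):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresql-claim      # hypothetical name, adjust to your deployment
spec:
  storageClassName: standard  # must match the PV above for static binding
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi            # no larger than the PV's capacity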

Airflow: Unable to get persistent logs using KubernetesExecutor and PV (official helm chart)

I feel a bit like an idiot but I cannot seem to get the logging working on persistent volumes when using the KubernetesExecutor and the freshly released official Helm chart.
After creating a simple PV and PVC manually, I changed the following on the bottom of values.yaml file:
logs:
  persistence:
    # Enable persistent volume for storing logs
    enabled: true
    # Volume size for logs
    size: 100Gi
    # If using a custom storageClass, pass name here
    storageClassName:
    ## the name of an existing PVC to use
    existingClaim: airflow-logs
This process is partly described in the official Helm documentation.
Still, the airflow-scheduler pod crashes with permission errors because it cannot write to the mounted logs folder: logs here.
When persistent logging is turned off, everything works, except for task logging, as those logs are deleted when the worker pod is deleted.
Any help towards a solution would be greatly appreciated!
I assumed that using the standard persistent volume approach was the easiest (I am still a k8s novice); I did not expect that setting one up using an azure-file StorageClass (SC) would be this easy. These mounts can be set up with 777 rights from the SC YAML file; I am not sure if this is the sole cure, as I also set the uid/gid in the SC mount options. Anyhow, all seems to be working perfectly.
As a reference for others, here is my azure-file-sc.yaml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: airflow-logs
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=50000
  - gid=0
  - mfsymlinks
  - cache=strict
  - actimeo=30
parameters:
  skuName: Standard_LRS
My azure-file-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: airflow-logs
  namespace: airflow
  labels:
    app: airflow-logs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: airflow-logs
  resources:
    requests:
      storage: 20Gi
The values.yaml is unchanged.
With this, the persistent logging works like a charm in Azure Kubernetes Service (AKS).
Hope this helps others!
Dennis

How to resize a PVC on Kubernetes when using a StorageClass

I changed the size of a PVC.
According to documents I found on the Internet, I went through the following steps.
I first added the following line to the StorageClass file:
allowVolumeExpansion: true
After changing the size with the following command, I removed the pod so it would be recreated with the PVC.
But at the end of these steps, the size of the PVC does not change.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-fp
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi
  storageClassName: rook-ceph-blockp
The result of these commands should be a resized PVC, but it is not changing.
What is the version of your Kubernetes cluster? The PVC resize feature is enabled by default only for k8s version 1.11 and above. For prior versions of k8s, the ExpandPersistentVolumes feature gate and the PersistentVolumeClaimResize admission controller need to be enabled explicitly.
What is the backend storage provider? Does it support volume resize with PVC?
As of now, the providers below support PVC resize:
AWS-EBS, GCE-PD, Azure Disk, Azure File, Glusterfs, Cinder, Portworx, and Ceph RBD
You can find more information at https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/
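As a rough sketch (assuming k8s 1.11+ and a CSI provisioner that supports expansion; the Rook Ceph provisioner name and omitted parameters are assumptions, only the class and PVC names come from the question):
# The StorageClass must allow expansion before the PVC size is raised
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-blockp
provisioner: rook-ceph.rbd.csi.ceph.com   # assumed Rook CSI RBD provisioner; check yours
# provisioner-specific parameters omitted
allowVolumeExpansion: true
---
# Then edit the PVC and increase spec.resources.requests.storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-fp
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  storageClassName: rook-ceph-blockp
  resources:
    requests:
      storage: 4Gi   # was 2Gi; raising the request triggers the expansion
Depending on the provisioner, the filesystem part of the expansion may only complete after the pod using the PVC is restarted, which matches the pod-deletion step in the question.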

Kubernetes Persistent Volume Claim Indefinitely in Pending State

I created a PersistentVolume sourced from a Google Compute Engine persistent disk that I already formatted and provision with data. Kubernetes says the PersistentVolume is available.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: models-1-0-0
  labels:
    name: models-1-0-0
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: models-1-0-0
    fsType: ext4
    readOnly: true
I then created a PersistentVolumeClaim so that I could attach this volume to multiple pods across multiple nodes. However, kubernetes indefinitely says it is in a pending state.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: models-1-0-0-claim
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 200Gi
  selector:
    matchLabels:
      name: models-1-0-0
Any insights? I feel there may be something wrong with the selector...
Is it even possible to preconfigure a persistent disk with data and have pods across multiple nodes all be able to read from it?
I quickly realized that PersistentVolumeClaim defaults the storageClassName field to standard when not specified. However, when creating a PersistentVolume, storageClassName does not have a default, so the selector doesn't find a match.
The following worked for me:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: models-1-0-0
  labels:
    name: models-1-0-0
spec:
  capacity:
    storage: 200Gi
  storageClassName: standard
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: models-1-0-0
    fsType: ext4
    readOnly: true
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: models-1-0-0-claim
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 200Gi
  selector:
    matchLabels:
      name: models-1-0-0
With dynamic provisioning, you shouldn't have to create PVs and PVCs separately. In Kubernetes 1.6+, there are default provisioners for GKE and some other cloud environments, which should let you just create a PVC and have it automatically provision a PV and an underlying Persistent Disk for you.
For more on dynamic provisioning, see:
https://kubernetes.io/blog/2017/03/dynamic-provisioning-and-storage-classes-kubernetes/
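For example, on a cluster with a default provisioner, a claim like the following is enough to get a PV and an underlying disk created automatically (a minimal sketch; the claim name is illustrative and standard is GKE's default class):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: models-dynamic-claim   # hypothetical name for illustration
spec:
  storageClassName: standard   # GKE default; omit to use whichever class is marked default
  accessModes:
    - ReadWriteOnce            # dynamically provisioned GCE PDs are typically single-writer
  resources:
    requests:
      storage: 200Gi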
I had the same issue, but for a different reason; that is why I am sharing it here to help the community.
If you have deleted a PersistentVolumeClaim and then re-created it with the same definition, it will be Pending forever. Why?
persistentVolumeReclaimPolicy is Retain by default in a PersistentVolume. If we have deleted the PersistentVolumeClaim, the PersistentVolume still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume.
So you need to manually reclaim the volume with the following steps:
Delete the PersistentVolume (associated underlying storage asset/resource like EBS, GCE PD, Azure Disk, ...etc will NOT be deleted, still exists)
(Optional) Manually clean up the data on the associated storage asset accordingly
(Optional) Manually delete the associated storage asset (EBS, GCE PD, Azure Disk, ...etc)
If you still need the same data, you may skip cleaning and deleting the associated storage asset (steps 2 and 3 above); simply re-create a new PersistentVolume with the same storage asset definition, as sketched below, and you should be good to create the PersistentVolumeClaim again.
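A minimal sketch of that re-creation for the GCE PD used earlier in this thread (fields mirror the example above, with persistentVolumeReclaimPolicy spelled out explicitly; adjust to your own asset):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: models-1-0-0
  labels:
    name: models-1-0-0
spec:
  capacity:
    storage: 200Gi
  storageClassName: standard            # keep consistent with the claim
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: models-1-0-0                # same underlying disk, data preserved
    fsType: ext4
    readOnly: true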
One last thing to mention: Retain is not the only option for persistentVolumeReclaimPolicy. Below are some other options that you may need to use or try depending on your use case:
Recycle: performs a basic scrub on the volume (e.g., rm -rf /thevolume/*) and makes it available again for a new claim. Only NFS and HostPath support recycling.
Delete: Associated storage asset such as AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder...etc volume is deleted
For more information, please check kubernetes documentation.
Still need more clarification or have any questions, please don't hesitate to leave a comment and I will be more than happy to clarify and assist.
If you're using Microk8s, you have to enable storage before you can start a PersistentVolumeClaim successfully.
Just do:
microk8s.enable storage
You'll need to delete your deployment and start again.
You may also need to manually delete the "pending" PersistentVolumeClaims because I found that uninstalling the Helm chart which created them didn't clear the PVCs out.
You can do this by first finding a list of names:
kubectl get pvc --all-namespaces
then deleting each name with:
kubectl delete pvc name1 name2 etc...
Once storage is enabled, reapplying your deployment should get things going.
I was facing the same problem, and realised that k8s actually does just-in-time provisioning, i.e.
When a pvc is created, it stays in PENDING state, and no corresponding pv is created.
The pvc & pv (EBS volume) are created only after you have created a deployment which uses the pvc.
I am using EKS with kubernetes version 1.16 and the behaviour is controlled by StorageClass Volume Binding Mode.
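A sketch of what that looks like on the StorageClass (modelled on the gp2 class EKS ships by default; your provisioner and parameters may differ):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs        # assumed in-tree EBS provisioner
parameters:
  type: gp2
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer   # PVC stays Pending until a pod references it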
I had the same problem. My PersistentVolumeClaim YAML was originally as follows:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  volumeName: pv
  resources:
    requests:
      storage: 1Gi
and my pvc status was:
After removing volumeName:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
I've seen this behaviour in microk8s 1.14.1 when two PersistentVolumes have the same value for spec/hostPath/path, e.g.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-name
  labels:
    type: local
    app: app
spec:
  storageClassName: standard
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/k8s-app-data"
It seems that microk8s is event-based (which isn't necessary on a one-node cluster) and throws away information about any failing operations, resulting in unnecessarily poor feedback for almost all failures.
I had this problem with the Helm chart of Apache Airflow (stable); setting storageClass to azurefile helped. What should you do in such cases with cloud providers? Just search for the storage classes that support the needed access mode. ReadWriteMany means that many processes will SIMULTANEOUSLY read and write to the storage. In this case (Azure) it is azurefile.
path: /opt/airflow/logs
## configs for the logs PVC
##
persistence:
  ## if a persistent volume is mounted at `logs.path`
  ##
  enabled: true
  ## the name of an existing PVC to use
  ##
  existingClaim: ""
  ## sub-path under `logs.persistence.existingClaim` to use
  ##
  subPath: ""
  ## the name of the StorageClass used by the PVC
  ##
  ## NOTE:
  ## - if set to "", then `PersistentVolumeClaim/spec.storageClassName` is omitted
  ## - if set to "-", then `PersistentVolumeClaim/spec.storageClassName` is set to ""
  ##
  storageClass: "azurefile"
  ## the access mode of the PVC
  ##
  ## WARNING:
  ## - must be: `ReadWriteMany`
  ##
  ## NOTE:
  ## - different StorageClass support different access modes:
  ##   https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
  ##
  accessMode: ReadWriteMany
  ## the size of PVC to request
  ##
  size: 1Gi
When you want to manually bind a PVC to a PV with an existing disk, the storageClassName should not be specified... but... the cloud provider sets the "standard" StorageClass as the default, so it keeps getting filled in no matter what you try when patching the PVC/PV.
You can check whether your provider has set it as the default with kubectl get storageclass (it will show "(default)").
To fix this, the best approach is to get your existing StorageClass YAML and add this annotation:
annotations:
  storageclass.kubernetes.io/is-default-class: "false"
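In context, the annotation sits under metadata of the existing default class; a sketch assuming GKE's standard class (your provider's class name and provisioner will differ):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"   # stop it from being auto-filled
provisioner: kubernetes.io/gce-pd   # assumption: GCE's built-in provisioner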
Apply and good :)
I am using microk8s.
I fixed the problem by running the commands below:
systemctl start open-iscsi.service
(I had installed open-iscsi earlier using apt install open-iscsi but had not started it.)
Then I enabled storage as follows:
microk8s.enable storage
Then I deleted the StatefulSets and the pending PersistentVolumeClaims from Lens so I could start over.
It worked well after that.
I faced the same issue, in which the PersistentVolumeClaim was in the Pending phase indefinitely. I tried providing the storageClassName as 'default' in the PersistentVolume, just like I did for the PersistentVolumeClaim, but it did not fix the issue.
I made one change in my persistentvolume.yml: I moved the PersistentVolumeClaim config to the top of the file, with the PersistentVolume as the second config in the file. That fixed the issue.
We need to make sure that the PersistentVolumeClaim is created first and the PersistentVolume is created afterwards to resolve this 'Pending' phase issue.
I am posting this answer after testing it a few times, hoping that it might help someone struggling with this.
Make sure your VM also has enough disk space.