I've installed Percona XtraDB on Kubernetes using the 1.3.0 operator.
After using it, I wanted to delete the namespace, so I deleted the resources in the order in which I applied them.
Everything is deleted and nothing shows up under svc or pods, but there are two resources stuck in an Error state that cannot be deleted.
~ kubectl get perconaxtradbclusters -n pxc
NAME       ENDPOINT   STATUS   PXC   PROXYSQL   AGE
cluster1              Error    0     0          4h1m
cluster2              Error    0     0          3h34m
I'm not able to delete either of them, and because of this I can't create a cluster with the same name.
When I run the delete command, it hangs forever:
~ kubectl delete perconaxtradbclusters -n pxc cluster1
perconaxtradbcluster.pxc.percona.com "cluster1" deleted
The command never completes.
The YAML of the object:
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"pxc.percona.com/v1-3-0","kind":"PerconaXtraDBCluster"}
  creationTimestamp: "2020-04-21T18:06:13Z"
  deletionGracePeriodSeconds: 0
  deletionTimestamp: "2020-04-21T18:38:33Z"
  finalizers:
  - delete-pxc-pods-in-order
  generation: 2
  name: cluster2
  namespace: pxc
  resourceVersion: "5445879"
  selfLink: /apis/pxc.percona.com/v1/namespaces/pxc/perconaxtradbclusters/cluster2
  uid: 8c100840-b7a8-40d1-b976-1f80c469622b
spec:
  allowUnsafeConfigurations: false
  backup:
    image: percona/percona-xtradb-cluster-operator:1.3.0-backup
    schedule:
    - keep: 3
      name: sat-night-backup
      schedule: 0 0 * * 6
      storageName: s3-us-west
    - keep: 5
      name: daily-backup
      schedule: 0 0 * * *
      storageName: fs-pvc
    serviceAccountName: percona-xtradb-cluster-operator
    storages:
      fs-pvc:
        type: filesystem
        volume:
          persistentVolumeClaim:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 6Gi
      s3-us-west:
        s3:
          bucket: S3-BACKUP-BUCKET-NAME-HERE
          credentialsSecret: my-cluster-name-backup-s3
          region: us-west-2
        type: s3
  pmm:
    enabled: false
    image: percona/percona-xtradb-cluster-operator:1.3.0-pmm
    serverHost: monitoring-service
    serverUser: pmm
  proxysql:
    affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    enabled: true
    gracePeriod: 30
    image: percona/percona-xtradb-cluster-operator:1.3.0-proxysql
    podDisruptionBudget:
      maxUnavailable: 1
    resources:
      requests:
        cpu: 600m
        memory: 1G
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 2Gi
  pxc:
    affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    gracePeriod: 600
    image: percona/percona-xtradb-cluster-operator:1.3.0-pxc
    podDisruptionBudget:
      maxUnavailable: 1
    resources:
      requests:
        cpu: 600m
        memory: 4G
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 60Gi
        storageClassName: local-storage
  secretsName: my-cluster-secrets
  sslInternalSecretName: my-cluster-ssl-internal
  sslSecretName: my-cluster-ssl
status:
  conditions:
  - lastTransitionTime: "2020-04-21T18:06:13Z"
    message: 'wrong PXC options: set version: new version: Malformed version: '
    reason: ErrorReconcile
    status: "True"
    type: Error
  message:
  - 'Error: wrong PXC options: set version: new version: Malformed version: '
  proxysql:
    ready: 0
  pxc:
    ready: 0
  state: Error
How can I get rid of them?
Your perconaxtradbclusters yaml example mentions pvc resources, so you'll probably have to delete the associated pvc first, if you haven't already done so.
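For example (the namespace is taken from the question; --all removes every claim in it, which is only reasonable here because the whole namespace is being torn down anyway):
kubectl get pvc -n pxc
kubectl delete pvc --all -n pxc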
Can you edit the resources to remove the finalizer blocks, and try deleting them again?
kubectl edit perconaxtradbclusters cluster1 -n pxc
and delete the finalizer block:
finalizers:
- delete-pxc-pods-in-order
If there's nothing left relying on those resources, that is.
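If the interactive edit hangs too, the same change can be made non-interactively with a patch; clearing the finalizers simply tells the API server to stop waiting for the (now absent) operator's cleanup:
kubectl patch perconaxtradbclusters cluster1 -n pxc \
  --type merge -p '{"metadata":{"finalizers":null}}'
kubectl patch perconaxtradbclusters cluster2 -n pxc \
  --type merge -p '{"metadata":{"finalizers":null}}'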
Edit:
I would generally only use this method once I've exhausted all other possibilities and I can't find the hanging resources that are blocking the deletion. I did some digging around, and this comment describes other steps to take before resorting to removing the finalizers (both checks are sketched as commands below):
- Check that the API services are available.
- Find any lingering resources that still exist and delete them.
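A rough way to run both checks against the namespace from the question:
# 1. any aggregated API service that is unavailable can block deletions
kubectl get apiservice | grep False
# 2. list everything that still exists in the namespace
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n pxc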
I'm using the bitnami/etcd chart, and it has the ability to create snapshots via an EFS-mounted PVC.
However, I get a permission error after the aws-efs-csi-driver is provisioned and the PVC is mounted into any non-root pod (the uid/gid is 1001).
I'm using the Helm chart https://kubernetes-sigs.github.io/aws-efs-csi-driver/ version 2.2.0.
Values of the chart:
# you can obtain the fileSystemId with
# aws efs describe-file-systems --query "FileSystems[*].FileSystemId"
storageClasses:
- name: efs
  parameters:
    fileSystemId: fs-exxxxxxx
    directoryPerms: "777"
    gidRangeStart: "1000"
    gidRangeEnd: "2000"
    basePath: "/snapshots"
# enable it after the following issue is resolved
# https://github.com/bitnami/charts/issues/7769
# node:
#   nodeSelector:
#     etcd: "true"
I then manually created the PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd-snapshotter-pv
  annotations:
    argocd.argoproj.io/sync-wave: "60"
spec:
  capacity:
    storage: 32Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-exxxxxxx
Then, if I mount that EFS PVC in a non-root pod, I get the following error:
➜ klo etcd-snapshotter-001-ph8w9
etcd 23:18:38.76 DEBUG ==> Using endpoint etcd-snapshotter-001-ph8w9:2379
{"level":"warn","ts":1633994320.7789018,"logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0005ea380/#initially=[etcd-snapshotter-001-ph8w9:2379]","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 10.120.2.206:2379: connect: connection refused\""}
etcd-snapshotter-001-ph8w9:2379 is unhealthy: failed to commit proposal: context deadline exceeded
Error: unhealthy cluster
etcd 23:18:40.78 WARN ==> etcd endpoint etcd-snapshotter-001-ph8w9:2379 not healthy. Trying a different endpoint
etcd 23:18:40.78 DEBUG ==> Using endpoint etcd-2.etcd-headless.etcd.svc.cluster.local:2379
etcd-2.etcd-headless.etcd.svc.cluster.local:2379 is healthy: successfully committed proposal: took = 1.6312ms
etcd 23:18:40.87 INFO ==> Snapshotting the keyspace
Error: could not open /snapshots/db-2021-10-11_23-18.part (open /snapshots/db-2021-10-11_23-18.part: permission denied)
As a result, I have to spawn a new "root" pod, get inside it, and manually adjust the permissions:
apiVersion: v1
kind: Pod
metadata:
  name: perm
spec:
  securityContext:
    runAsUser: 0
    runAsGroup: 0
    fsGroup: 0
  containers:
  - name: app1
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "sleep 3000"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /snapshots
    securityContext:
      runAsUser: 0
      runAsGroup: 0
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: etcd-snapshotter
  nodeSelector:
    etcd: "true"
k apply -f setup.yaml
k exec -ti perm -- ash
cd /snapshots
/snapshots # chown -R 1001.1001 .
/snapshots # chmod -R 777 .
/snapshots # exit
➜ k create job --from=cronjob/etcd-snapshotter etcd-snapshotter-001
job.batch/etcd-snapshotter-001 created
➜ klo etcd-snapshotter-001-bmv79
etcd 23:31:10.22 DEBUG ==> Using endpoint etcd-1.etcd-headless.etcd.svc.cluster.local:2379
etcd-1.etcd-headless.etcd.svc.cluster.local:2379 is healthy: successfully committed proposal: took = 2.258532ms
etcd 23:31:10.32 INFO ==> Snapshotting the keyspace
{"level":"info","ts":1633995070.4244702,"caller":"snapshot/v3_snapshot.go:68","msg":"created temporary db file","path":"/snapshots/db-2021-10-11_23-31.part"}
{"level":"info","ts":1633995070.4907935,"logger":"client","caller":"v3/maintenance.go:211","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1633995070.4908395,"caller":"snapshot/v3_snapshot.go:76","msg":"fetching snapshot","endpoint":"etcd-1.etcd-headless.etcd.svc.cluster.local:2379"}
{"level":"info","ts":1633995070.4965465,"logger":"client","caller":"v3/maintenance.go:219","msg":"completed snapshot read; closing"}
{"level":"info","ts":1633995070.544217,"caller":"snapshot/v3_snapshot.go:91","msg":"fetched snapshot","endpoint":"etcd-1.etcd-headless.etcd.svc.cluster.local:2379","size":"320 kB","took":"now"}
{"level":"info","ts":1633995070.5507936,"caller":"snapshot/v3_snapshot.go:100","msg":"saved","path":"/snapshots/db-2021-10-11_23-31"}
Snapshot saved at /snapshots/db-2021-10-11_23-31
➜ k exec -ti perm -- ls -la /snapshots
total 924
drwxrwxrwx  2 1001  1001    6144 Oct 11 23:31 .
drwxr-xr-x  1 root  root      46 Oct 11 23:25 ..
-rw-------  1 1001  root  319520 Oct 11 23:31 db-2021-10-11_23-31
Is there a way to automate this?
I have these settings in the storage class:
gidRangeStart: "1000"
gidRangeEnd: "2000"
but it has no effect.
PVC is defined as:
➜ kg pvc etcd-snapshotter -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: efs.csi.aws.com
  name: etcd-snapshotter
  namespace: etcd
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 32Gi
  storageClassName: efs
  volumeMode: Filesystem
  volumeName: etcd-snapshotter-pv
By default the StorageClass field provisioningMode is unset; set it to provisioningMode: "efs-ap" to enable dynamic provisioning with EFS access points. The directoryPerms and gidRange* parameters only take effect when access points are used.
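Roughly, the chart values from the question would become something like this (a sketch; with access-point provisioning the driver creates the PV per PVC and applies the directory permissions and GID range, so the hand-created PV above would no longer be needed):
storageClasses:
- name: efs
  parameters:
    provisioningMode: "efs-ap"
    fileSystemId: fs-exxxxxxx
    directoryPerms: "777"
    gidRangeStart: "1000"
    gidRangeEnd: "2000"
    basePath: "/snapshots"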
I have a Kubernetes cluster running on GKE, and I created a new namespace with a ResourceQuota:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: bot-quota
spec:
  hard:
    requests.cpu: '500m'
    requests.memory: 1Gi
    limits.cpu: '1000m'
    limits.memory: 2Gi
which I apply to my namespace (called bots). kubectl describe resourcequota --namespace=bots then gives me:
Name:       bot-quota
Namespace:  bots
Resource         Used  Hard
--------         ----  ----
limits.cpu       0     1
limits.memory    0     2Gi
requests.cpu     0     500m
requests.memory  0     1Gi

Name:       gke-resource-quotas
Namespace:  bots
Resource                    Used  Hard
--------                    ----  ----
count/ingresses.extensions  0     5k
count/jobs.batch            0     10k
pods                        0     5k
services                    0     1500
This is what I expect: the bots namespace is hard-limited to the limits above.
Now I would like to deploy a single pod into that namespace, using this simple YAML:
apiVersion: v1
kind: Pod
metadata:
  name: podname
  namespace: bots
  labels:
    app: someLabel
spec:
  nodeSelector:
    cloud.google.com/gke-nodepool: default-pool
  containers:
  - name: containername
    image: something-image-whatever:latest
    resources:
      requests:
        memory: '96Mi'
        cpu: '300m'
      limits:
        memory: '128Mi'
        cpu: '0.5'
    args: ['0']
Given the resources specified, I'd expect to be well within range when deploying a single instance. When I apply the YAML, though:
Error from server (Forbidden): error when creating "pod.yaml": pods "podname" is forbidden: exceeded quota: bot-quota, requested: limits.cpu=2500m, used: limits.cpu=0, limited: limits.cpu=1
If I change the pod's YAML to use a CPU limit of 0.3, the same error appears, with limits.cpu=2300m requested.
In other words: it seems to miraculously add 2000m (i.e. 2 CPU) to my limit.
We do NOT have any LimitRange applied.
What am I missing?
As discussed in the comments above, it is indeed related to Istio. How?
As is (now) obvious, the requests and limits are specified at the container level, NOT at the pod/deployment level. Why is that relevant?
When running Istio (in our case, managed Istio on GKE), the container is not alone in the workload: it is joined by istio-init (which terminates soon after starting) plus istio-proxy.
And these additional containers bring their own requests and limits; in the pod I am currently looking at, for example:
Limits:
  cpu:     2
  memory:  1Gi
Requests:
  cpu:     100m
  memory:  128Mi
on istio-proxy (using: kubectl describe pods <podid>)
This indeed explains why the WHOLE deployment has 2 CPU more in its limits than expected.
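If the quota itself cannot simply be raised to leave room for the sidecars, Istio's resource annotations can shrink the injected istio-proxy per pod. A sketch based on the pod from the question; annotation support and sensible values depend on the Istio version in use, so treat the numbers as placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: podname
  namespace: bots
  labels:
    app: someLabel
  annotations:
    # override the injected istio-proxy's resources for this pod
    # (illustrative values, not recommendations)
    sidecar.istio.io/proxyCPU: "100m"
    sidecar.istio.io/proxyCPULimit: "200m"
    sidecar.istio.io/proxyMemory: "128Mi"
    sidecar.istio.io/proxyMemoryLimit: "256Mi"
spec:
  containers:
  - name: containername
    image: something-image-whatever:latest
    resources:
      requests:
        memory: '96Mi'
        cpu: '300m'
      limits:
        memory: '128Mi'
        cpu: '0.5'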
I have a 1-node K8s cluster on DigitalOcean with 1 CPU / 2 GB RAM,
and a 3-node cluster on Google Cloud with 1 CPU / 2 GB RAM per node.
I ran two jobs separately on each cloud platform with autoscaling enabled.
The first job had a memory request of 200Mi:
apiVersion: batch/v1
kind: Job
metadata:
  name: scaling-test
spec:
  parallelism: 16
  template:
    metadata:
      name: scaling-test
    spec:
      containers:
      - name: debian
        image: debian
        command: ["/bin/sh","-c"]
        args: ["sleep 300"]
        resources:
          requests:
            cpu: "100m"
            memory: "200Mi"
      restartPolicy: Never
More (1 CPU / 2 GB RAM) nodes were added to the cluster automatically, and the extra nodes were deleted automatically after the job completed.
After that, I ran a second job with a memory request of 4500Mi:
apiVersion: batch/v1
kind: Job
metadata:
  name: scaling-test2
spec:
  parallelism: 3
  template:
    metadata:
      name: scaling-test2
    spec:
      containers:
      - name: debian
        image: debian
        command: ["/bin/sh","-c"]
        args: ["sleep 5"]
        resources:
          requests:
            cpu: "100m"
            memory: "4500Mi"
      restartPolicy: Never
When I checked later, the job remained in the Pending state. I checked the pod's Events log and saw the following:
0/5 nodes are available: 5 Insufficient memory (source: default-scheduler)
pod didn't trigger scale-up (it wouldn't fit if a new node is added): 1 Insufficient memory (source: cluster-autoscaler)
The cluster did not autoscale to provide the resources the job requested. Is this possible using Kubernetes?
The cluster autoscaler doesn't add nodes to the cluster if doing so wouldn't make a pod schedulable, and it only considers adding nodes to node groups it was configured for. So one reason it doesn't scale up may be that the pod is too large for any node it could add (e.g. a 4500Mi memory request on 2 GB nodes). Another possible reason is that all suitable node groups are already at their maximum size.
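On GKE, one way around the first case is to add an autoscaled node pool whose machines are large enough for the request; the existing 2 GB nodes can never fit a 4500Mi pod. A sketch with placeholder names (the machine type just needs enough allocatable memory):
gcloud container node-pools create big-mem-pool \
  --cluster my-cluster --zone us-central1-a \
  --machine-type e2-highmem-2 \
  --enable-autoscaling --min-nodes 0 --max-nodes 3 \
  --num-nodes 1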
I'm setting up my application with Kubernetes. I have 2 Docker images (Oracle and WebLogic) and 2 Kubernetes nodes: Node1 (20 GB storage) and Node2 (60 GB storage).
When I run kubectl apply -f oracle.yaml, it tries to create the Oracle pod on Node1, and after a few minutes it fails due to lack of storage. How can I force Kubernetes to check the free storage of a node before creating the pod there?
Thanks
First of all, you probably want to give Node1 more storage.
But if you don't want the pod to start at all, you can run a check in an initContainer that inspects how much space is used or free with something like du or df. It could be a script that checks a threshold and exits unsuccessfully if there is not enough space. Something like this:
#!/bin/bash
# Exit with an error if <dir> already uses more than 10000 blocks
# (du reports 1 KiB blocks by default)
if [ $(du <dir> | tail -1 | awk '{print $1}') -gt 10000 ]; then exit 1; fi
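A sketch of how such a check could be wired in as an initContainer; the image, mount path, and threshold below are placeholders, and the pod fails its init step (and is retried per its restartPolicy) instead of starting the main container when the check fails:
apiVersion: v1
kind: Pod
metadata:
  name: oracle-db
spec:
  initContainers:
  - name: check-disk
    image: busybox
    command: ["/bin/sh", "-c"]
    # fail when the data directory already uses more than ~10 MiB (10000 KiB blocks)
    args: ["if [ $(du /u01 | tail -1 | awk '{print $1}') -gt 10000 ]; then exit 1; fi"]
    volumeMounts:
    - name: oracle-data
      mountPath: /u01
  containers:
  - name: oracle
    image: my-oracle-image:latest   # placeholder image
    volumeMounts:
    - name: oracle-data
      mountPath: /u01
  volumes:
  - name: oracle-data
    hostPath:
      path: /data/oracle            # placeholder host directory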
Another option is to use a PersistentVolume (PV) with a PersistentVolumeClaim (PVC) that has enough space, together with the default StorageClass Admission Controller, and allocate the appropriate space in your volume definition:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 40Gi
  storageClassName: mytype
Then on your Pod:
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: mycontainer
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
The Pod will not start if your claim cannot be bound (i.e., there isn't enough space).
You may try specifying an ephemeral-storage requirement for the pod:
resources:
  requests:
    ephemeral-storage: "40Gi"
  limits:
    ephemeral-storage: "40Gi"
Then it would be scheduled only on nodes with sufficient ephemeral storage available.
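For context, a minimal sketch of where that block sits in a pod spec (the name and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: oracle-db
spec:
  containers:
  - name: oracle
    image: my-oracle-image:latest   # placeholder image
    resources:
      requests:
        ephemeral-storage: "40Gi"
      limits:
        ephemeral-storage: "40Gi"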
You can verify the amount of ephemeral storage available on each node in the output of "kubectl describe node".
$ kubectl describe node somenode | grep -A 6 Allocatable
Allocatable:
  attachable-volumes-gce-pd:  64
  cpu:                        3920m
  ephemeral-storage:          26807024751
  hugepages-2Mi:              0
  memory:                     12700032Ki
  pods:                       110
I need to deploy GitLab with Helm on Kubernetes.
I have a problem: the PVCs are Pending.
I see volume.alpha.kubernetes.io/storage-class: default in the PVC description, but I set the value gitlabDataStorageClass: gluster-heketi in values.yaml.
And I deployed the simple nginx example from https://github.com/gluster/gluster-kubernetes/blob/master/docs/examples/hello_world/README.md just fine.
Yes, I use the distributed storage GlusterFS: https://github.com/gluster/gluster-kubernetes
# kubectl get pvc
NAME                  STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
gitlab1-gitlab-data   Pending                                                                                         19s
gitlab1-gitlab-etc    Pending                                                                                         19s
gitlab1-postgresql    Pending                                                                                         19s
gitlab1-redis         Pending                                                                                         19s
gluster1              Bound     pvc-922b5dc0-6372-11e8-8f10-4ccc6a60fcbe   5Gi        RWO            gluster-heketi   43m
The structure of one of the pending claims:
# kubectl get pvc gitlab1-gitlab-data -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.alpha.kubernetes.io/storage-class: default
  creationTimestamp: 2018-05-29T19:43:18Z
  finalizers:
  - kubernetes.io/pvc-protection
  name: gitlab1-gitlab-data
  namespace: default
  resourceVersion: "263950"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/gitlab1-gitlab-data
  uid: 8958d4f5-6378-11e8-8f10-4ccc6a60fcbe
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
status:
  phase: Pending
In describe I see:
# kubectl describe pvc gitlab1-gitlab-data
Name:          gitlab1-gitlab-data
Namespace:     default
StorageClass:
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.alpha.kubernetes.io/storage-class=default
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
  Type    Reason         Age                From                         Message
  ----    ------         ---                ----                         -------
  Normal  FailedBinding  2m (x43 over 12m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
My values.yaml file:
# Default values for kubernetes-gitlab-demo.
# This is a YAML-formatted file.
# Required variables
# baseDomain is the top-most part of the domain. Subdomains will be generated
# for gitlab, mattermost, registry, and prometheus.
# Recommended to set up an A record on the DNS to *.your-domain.com to point to
# the baseIP
# e.g. *.your-domain.com. A 300 baseIP
baseDomain: my-domain.com
# legoEmail is a valid email address used by Let's Encrypt. It does not have to
# be at the baseDomain.
legoEmail: my#mail.com
# Optional variables
# baseIP is an externally provisioned static IP address to use instead of the provisioned one.
#baseIP: 95.165.135.109
nameOverride: gitlab
# `ce` or `ee`
gitlab: ce
gitlabCEImage: gitlab/gitlab-ce:10.6.2-ce.0
gitlabEEImage: gitlab/gitlab-ee:10.6.2-ee.0
postgresPassword: NDl1ZjNtenMxcWR6NXZnbw==
initialSharedRunnersRegistrationToken: "tQtCbx5UZy_ByS7FyzUH"
mattermostAppSecret: NDl1ZjNtenMxcWR6NXZnbw==
mattermostAppUID: aadas
redisImage: redis:3.2.10
redisDedicatedStorage: true
redisStorageSize: 5Gi
redisAccessMode: ReadWriteOnce
postgresImage: postgres:9.6.5
# If you disable postgresDedicatedStorage, you should consider bumping up gitlabRailsStorageSize
postgresDedicatedStorage: true
postgresAccessMode: ReadWriteOnce
postgresStorageSize: 30Gi
gitlabDataAccessMode: ReadWriteOnce
#gitlabDataStorageSize: 30Gi
gitlabRegistryAccessMode: ReadWriteOnce
#gitlabRegistryStorageSize: 30Gi
gitlabConfigAccessMode: ReadWriteOnce
#gitlabConfigStorageSize: 1Gi
gitlabRunnerImage: gitlab/gitlab-runner:alpine-v10.6.0
# Valid values for provider are `gke` for Google Container Engine. Leaving it blank (or any other value) will disable fast disk options.
#provider: gke
# Gitlab pages
# The following 3 lines are needed to enable gitlab pages.
# pagesExternalScheme: http
# pagesExternalDomain: your-pages-domain.com
# pagesTlsSecret: gitlab-pages-tls # An optional reference to a tls secret to use in pages
## Storage Class Options
## If defined, volume.beta.kubernetes.io/storage-class: <storageClass>
## If not defined, but provider is gke, will use SSDs
## Otherwise default: volume.alpha.kubernetes.io/storage-class: default
gitlabConfigStorageClass: gluster-heketi
gitlabDataStorageClass: gluster-heketi
gitlabRegistryStorageClass: gluster-heketi
postgresStorageClass: gluster-heketi
redisStorageClass: gluster-heketi
healthCheckToken: 'SXBAQichEJasbtDSygrD'
# Optional, for GitLab EE images only
#gitlabEELicense: base64-encoded-license
# Additional omnibus configuration,
# see https://docs.gitlab.com/omnibus/settings/configuration.html
# for possible configuration options
#omnibusConfigRuby: |
# gitlab_rails['smtp_enable'] = true
# gitlab_rails['smtp_address'] = "smtp.example.org"
gitlab-runner:
  checkInterval: 1
  # runnerRegistrationToken must equal initialSharedRunnersRegistrationToken
  runnerRegistrationToken: "tQtCbx5UZy_ByS7FyzUH"
  # resources:
  #   limits:
  #     memory: 500Mi
  #     cpu: 600m
  #   requests:
  #     memory: 500Mi
  #     cpu: 600m
  runners:
    privileged: true
    ## Build Container specific configuration
    ##
    # builds:
    #   cpuLimit: 200m
    #   memoryLimit: 256Mi
    #   cpuRequests: 100m
    #   memoryRequests: 128Mi
    ## Service Container specific configuration
    ##
    # services:
    #   cpuLimit: 200m
    #   memoryLimit: 256Mi
    #   cpuRequests: 100m
    #   memoryRequests: 128Mi
    ## Helper Container specific configuration
    ##
    # helpers:
    #   cpuLimit: 200m
    #   memoryLimit: 256Mi
    #   cpuRequests: 100m
    #   memoryRequests: 128Mi
You can see I have the StorageClass:
# kubectl get sc
NAME             PROVISIONER               AGE
gluster-heketi   kubernetes.io/glusterfs   48m
Without a link to the actual Helm chart you used, it's impossible for anyone to troubleshoot why the Go template isn't correctly consuming your values.yaml.
I see volume.alpha.kubernetes.io/storage-class: default in PVC description, but I set value gitlabDataStorageClass: gluster-heketi in values.yaml
I can appreciate you set whatever you wanted in values.yaml, but as long as that StorageClass doesn't match any existing StorageClass, I'm not sure what positive thing will materialize from there. You can certainly try creating a StorageClass named default containing the same values as your gluster-heketi SC, or update the PVC to use the correct SC.
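For the first option, a rough sketch of such a StorageClass, assuming you copy the real parameters out of kubectl get sc gluster-heketi -o yaml (the resturl below is a placeholder):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-endpoint:8080"   # placeholder; copy the real value from your gluster-heketi SC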
To be honest, this may be a bug in the Helm chart, but until it is fixed (and/or we get the link to the chart so we can help you adjust your YAML), you will need to work around this bad situation manually if you want your GitLab to deploy.