Zombie Kubernetes pod keeps being recreated by a zombie ReplicaSet - kubernetes

I'm managing a Kubernetes cluster and there is a duplicate pod that keeps coming back; the duplicate ReplicaSet controlling it also keeps coming back after deletion. It's very strange. I also can't set the ReplicaSet to desire 0 pods, but that might be by design.
I can't really think of more information to share.
Does anyone recognise this issue and know how to fix it?
Here's the ReplicaSet:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: cash-idemix-ca-6bc646cbcc
  namespace: boc-cash-portal
  uid: babc0236-2053-4088-b8e8-b8ae2ed9303c
  resourceVersion: '25466073'
  generation: 1
  creationTimestamp: '2022-11-28T13:18:42Z'
  labels:
    app.kubernetes.io/instance: cash-idemix-ca
    app.kubernetes.io/name: cash-idemix-ca
    pod-template-hash: 6bc646cbcc
  annotations:
    deployment.kubernetes.io/desired-replicas: '1'
    deployment.kubernetes.io/max-replicas: '2'
    deployment.kubernetes.io/revision: '7'
  ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment
      name: cash-idemix-ca
      uid: 2a3300ed-f666-4a30-98b7-7ab2ebcb2a0d
      controller: true
      blockOwnerDeletion: true
  managedFields:
    - manager: kube-controller-manager
      operation: Update
      apiVersion: apps/v1
      time: '2022-11-28T13:18:42Z'
      fieldsType: FieldsV1
    - manager: kube-controller-manager
      operation: Update
      apiVersion: apps/v1
      time: '2022-11-29T13:27:37Z'
      fieldsType: FieldsV1
      subresource: status
  selfLink: >-
    /apis/apps/v1/namespaces/boc-cash-portal/replicasets/cash-idemix-ca-6bc646cbcc
status:
  replicas: 1
  fullyLabeledReplicas: 1
  observedGeneration: 1
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: cash-idemix-ca
      app.kubernetes.io/name: cash-idemix-ca
      pod-template-hash: 6bc646cbcc
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: cash-idemix-ca
        app.kubernetes.io/name: cash-idemix-ca
        pod-template-hash: 6bc646cbcc
      annotations:
        kubectl.kubernetes.io/restartedAt: '2022-07-19T11:45:27Z'
    spec:
      volumes:
        - name: fabric-ca-server-home
          persistentVolumeClaim:
            claimName: cash-idemix-ca-fabric-ca-server-home
      containers:
        - name: cash-idemix-ca
          image: ca.icr.io/samara-dev-container-images/cash-idemix-ca:0.3.0
          command:
            - sh
          args:
            - '-c'
            - >-
              sleep 1 && fabric-ca-server start -b
              $(IDEMIX_ADMIN_USERNAME):$(IDEMIX_ADMIN_PASSWORD) --port 7054
              --idemix.curve gurvy.Bn254 --loglevel debug
          ports:
            - name: api
              containerPort: 7054
              protocol: TCP
          env:
            - name: FABRIC_CA_SERVER_HOME
              value: /idemix-config/fabric-ca-gurvy
            - name: IDEMIX_ADMIN_USERNAME
              valueFrom:
                secretKeyRef:
                  name: cash-idemix-ca-admin-credentials
                  key: IDEMIX_ADMIN_USERNAME
            - name: IDEMIX_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cash-idemix-ca-admin-credentials
                  key: IDEMIX_ADMIN_PASSWORD
          resources: {}
          volumeMounts:
            - name: fabric-ca-server-home
              mountPath: /idemix-config/fabric-ca-gurvy
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: cash-idemix-ca
      serviceAccount: cash-idemix-ca
      securityContext: {}
      imagePullSecrets:
        - name: all-icr-io
      schedulerName: default-scheduler
Edit: The Fool was correct, there is a Deployment that recreates the ReplicaSet. Though its settings seem to say that it only wants to create 1 replica, so I still don't see why it wants to create two of them.
I'm using Lens to manage my cluster, and it shows that the desired number of replicas is indeed 2. I can set it to 1, but the change won't persist. Is there anywhere else I could look?
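A few read-only kubectl checks may help narrow this down (a sketch using the names from the manifests here; adjust to your setup). They list every ReplicaSet the Deployment owns, show the state of the current rollout, and reveal whether something like a HorizontalPodAutoscaler is overriding spec.replicas:

kubectl -n boc-cash-portal get rs -l app.kubernetes.io/name=cash-idemix-ca -o wide
kubectl -n boc-cash-portal rollout status deployment/cash-idemix-ca
kubectl -n boc-cash-portal describe deployment cash-idemix-ca
kubectl -n boc-cash-portal get hpa

If an old and a new ReplicaSet each report one pod, that would suggest the "duplicate" pod belongs to a previous revision that an unfinished rollout is still keeping around.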
cash-idemix-ca Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cash-idemix-ca
  namespace: boc-cash-portal
  uid: 2a3300ed-f666-4a30-98b7-7ab2ebcb2a0d
  resourceVersion: '25467341'
  generation: 10
  creationTimestamp: '2022-07-18T14:13:57Z'
  labels:
    app.kubernetes.io/instance: cash-idemix-ca
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: cash-idemix-ca
    app.kubernetes.io/version: manual-0.2.1
    helm.sh/chart: cash-idemix-ca-0.12.0
  annotations:
    deployment.kubernetes.io/revision: '7'
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"cash-idemix-ca","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"cash-idemix-ca","app.kubernetes.io/version":"manual-0.2.1","helm.sh/chart":"cash-idemix-ca-0.12.0"},"name":"cash-idemix-ca","namespace":"boc-cash-portal"},"spec":{"replicas":1,"revisionHistoryLimit":0,"selector":{"matchLabels":{"app.kubernetes.io/instance":"cash-idemix-ca","app.kubernetes.io/name":"cash-idemix-ca"}},"template":{"metadata":{"labels":{"app.kubernetes.io/instance":"cash-idemix-ca","app.kubernetes.io/name":"cash-idemix-ca"}},"spec":{"containers":[{"args":["-c","sleep
      1 \u0026\u0026 fabric-ca-server start -b
      $(IDEMIX_ADMIN_USERNAME):$(IDEMIX_ADMIN_PASSWORD) --port 7054
      --idemix.curve gurvy.Bn254 --loglevel
      debug"],"command":["sh"],"env":[{"name":"FABRIC_CA_SERVER_HOME","value":"/idemix-config/fabric-ca-gurvy"},{"name":"IDEMIX_ADMIN_USERNAME","valueFrom":{"secretKeyRef":{"key":"IDEMIX_ADMIN_USERNAME","name":"cash-idemix-ca-admin-credentials"}}},{"name":"IDEMIX_ADMIN_PASSWORD","valueFrom":{"secretKeyRef":{"key":"IDEMIX_ADMIN_PASSWORD","name":"cash-idemix-ca-admin-credentials"}}}],"image":"ca.icr.io/samara-dev-container-images/cash-idemix-ca:0.3.0","imagePullPolicy":"IfNotPresent","name":"cash-idemix-ca","ports":[{"containerPort":7054,"name":"api","protocol":"TCP"}],"volumeMounts":[{"mountPath":"/idemix-config/fabric-ca-gurvy","name":"fabric-ca-server-home","readOnly":false}]}],"imagePullSecrets":[{"name":"all-icr-io"}],"serviceAccountName":"cash-idemix-ca","volumes":[{"name":"fabric-ca-server-home","persistentVolumeClaim":{"claimName":"cash-idemix-ca-fabric-ca-server-home"}}]}}}}
status:
  observedGeneration: 10
  replicas: 2
  updatedReplicas: 1
  readyReplicas: 1
  availableReplicas: 1
  unavailableReplicas: 1
  conditions:
    - type: Available
      status: 'True'
      lastUpdateTime: '2022-11-28T13:53:56Z'
      lastTransitionTime: '2022-11-28T13:53:56Z'
      reason: MinimumReplicasAvailable
      message: Deployment has minimum availability.
    - type: Progressing
      status: 'False'
      lastUpdateTime: '2022-11-29T13:37:38Z'
      lastTransitionTime: '2022-11-29T13:37:38Z'
      reason: ProgressDeadlineExceeded
      message: ReplicaSet "cash-idemix-ca-6bc646cbcc" has timed out progressing.
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: cash-idemix-ca
      app.kubernetes.io/name: cash-idemix-ca
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: cash-idemix-ca
        app.kubernetes.io/name: cash-idemix-ca
      annotations:
        kubectl.kubernetes.io/restartedAt: '2022-07-19T11:45:27Z'
    spec:
      volumes:
        - name: fabric-ca-server-home
          persistentVolumeClaim:
            claimName: cash-idemix-ca-fabric-ca-server-home
      containers:
        - name: cash-idemix-ca
          image: ca.icr.io/samara-dev-container-images/cash-idemix-ca:0.3.0
          command:
            - sh
          args:
            - '-c'
            - >-
              sleep 1 && fabric-ca-server start -b
              $(IDEMIX_ADMIN_USERNAME):$(IDEMIX_ADMIN_PASSWORD) --port 7054
              --idemix.curve gurvy.Bn254 --loglevel debug
          ports:
            - name: api
              containerPort: 7054
              protocol: TCP
          env:
            - name: FABRIC_CA_SERVER_HOME
              value: /idemix-config/fabric-ca-gurvy
            - name: IDEMIX_ADMIN_USERNAME
              valueFrom:
                secretKeyRef:
                  name: cash-idemix-ca-admin-credentials
                  key: IDEMIX_ADMIN_USERNAME
            - name: IDEMIX_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cash-idemix-ca-admin-credentials
                  key: IDEMIX_ADMIN_PASSWORD
          resources: {}
          volumeMounts:
            - name: fabric-ca-server-home
              mountPath: /idemix-config/fabric-ca-gurvy
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: cash-idemix-ca
      serviceAccount: cash-idemix-ca
      securityContext: {}
      imagePullSecrets:
        - name: all-icr-io
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 0
  progressDeadlineSeconds: 600
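Reading the status block above (an observation, not a confirmed diagnosis): replicas: 2 together with updatedReplicas: 1, unavailableReplicas: 1 and a Progressing condition of ProgressDeadlineExceeded is what a stalled rolling update looks like; the Deployment keeps a pod from the previous ReplicaSet while it waits for the new revision to become ready, which never happens. If that turns out to be the cause, the usual levers are (a sketch, not a guaranteed fix):

kubectl -n boc-cash-portal rollout undo deployment/cash-idemix-ca      # roll back to the last working revision
kubectl -n boc-cash-portal rollout restart deployment/cash-idemix-ca   # or retry once the unready pod is fixed

kubectl -n boc-cash-portal describe pod on the pod that never becomes ready should show why the new revision is stuck.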

Related

Local Airflow Instance with minikube

I'm trying to run a local Airflow instance on my laptop using minikube and a deployment.yml file, applied with the following command: kubectl apply -f ./deployment.yml.
After slightly tweaking this file I was able to end up with all three pods (postgres, webserver, scheduler) running fine; the output of kubectl get pods confirms this.
The content of the file:
---
# Source: airflow/templates/rbac/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: release-name-airflow
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
automountServiceAccountToken: true
---
# Source: airflow/charts/postgresql/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
name: release-name-postgresql
namespace: default
labels:
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-11.0.6
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
type: Opaque
data:
password: "**************"
# We don't auto-generate LDAP password when it's not provided as we do for other passwords
---
# Source: airflow/templates/config/secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: release-name-airflow
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
type: Opaque
data:
airflow-password: "*************"
# Airflow keys must be base64-encoded, hence we need to pipe to 'b64enc' twice
# The auto-generation mechanism available at "common.secrets.passwords.manage" isn't compatible with encoding twice
# Therefore, we can only use this function if the secret already exists
airflow-fernet-key: "TldwdU0zRklTREZ0VDFkamVWUjFaMlozWTFKdWNFNUxTRXRxVm5Oa1p6az0="
airflow-secret-key: "VldWaWQySkhSVUZQZDNWQlltbG1UVzUzVkdwWmVVTkxPR1ZCZWpoQ05tUT0="
---
apiVersion: v1
kind: ConfigMap
metadata:
name: airflow-dependencies
namespace: "default"
data:
requirements.txt: |-
apache-airflow==2.2.3
pytest==6.2.4
python-slugify<5.0
funcy==1.16
apache-airflow-providers-mongo
apache-airflow-providers-postgres
apache-airflow-providers-slack
apache-airflow-providers-amazon
airflow_clickhouse_plugin
apache-airflow-providers-sftp
surveymonkey-python
---
# Source: airflow/templates/rbac/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: release-name-airflow
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
rules:
- apiGroups:
- ""
resources:
- "pods"
verbs:
- "create"
- "list"
- "get"
- "watch"
- "delete"
- "patch"
- apiGroups:
- ""
resources:
- "pods/log"
verbs:
- "get"
- apiGroups:
- ""
resources:
- "pods/exec"
verbs:
- "create"
- "get"
---
# Source: airflow/templates/rbac/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: release-name-airflow
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: release-name-airflow
subjects:
- kind: ServiceAccount
name: release-name-airflow
namespace: default
---
# Source: airflow/charts/postgresql/templates/primary/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
name: release-name-postgresql-hl
namespace: default
labels:
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-11.0.6
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: primary
# Use this annotation in addition to the actual publishNotReadyAddresses
# field below because the annotation will stop being respected soon but the
# field is broken in some versions of Kubernetes:
# https://github.com/kubernetes/kubernetes/issues/58662
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
type: ClusterIP
clusterIP: None
# We want all pods in the StatefulSet to have their addresses published for
# the sake of the other Postgresql pods even before they're ready, since they
# have to be able to talk to each other in order to become ready.
publishNotReadyAddresses: true
ports:
- name: tcp-postgresql
port: 5432
targetPort: tcp-postgresql
selector:
app.kubernetes.io/name: postgresql
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: primary
---
# Source: airflow/charts/postgresql/templates/primary/svc.yaml
apiVersion: v1
kind: Service
metadata:
name: release-name-postgresql
namespace: default
labels:
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-11.0.6
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: primary
annotations:
spec:
type: ClusterIP
ports:
- name: tcp-postgresql
port: 5432
targetPort: tcp-postgresql
nodePort: null
selector:
app.kubernetes.io/name: postgresql
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: primary
---
# Source: airflow/templates/web/service.yaml
apiVersion: v1
kind: Service
metadata:
name: release-name-airflow
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
spec:
type: NodePort
ports:
- name: http
port: 8080
nodePort: 30303
selector:
app.kubernetes.io/name: airflow
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: web
---
# Source: airflow/templates/scheduler/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: release-name-airflow-scheduler
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: scheduler
spec:
selector:
matchLabels:
app.kubernetes.io/name: airflow
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: scheduler
replicas: 1
strategy:
rollingUpdate: {}
type: RollingUpdate
template:
metadata:
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: scheduler
annotations:
checksum/configmap: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
spec:
affinity:
podAffinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: airflow
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: scheduler
namespaces:
- "default"
topologyKey: kubernetes.io/hostname
weight: 1
nodeAffinity:
serviceAccountName: release-name-airflow
securityContext:
fsGroup: 1001
initContainers:
containers:
- name: airflow-scheduler
image: "docker.io/bitnami/airflow-scheduler:2.2.3-debian-10-r57"
imagePullPolicy: "IfNotPresent"
securityContext:
runAsNonRoot: true
runAsUser: 1001
env:
- name: AIRFLOW_FERNET_KEY
valueFrom:
secretKeyRef:
name: release-name-airflow
key: airflow-fernet-key
- name: AIRFLOW_SECRET_KEY
valueFrom:
secretKeyRef:
name: release-name-airflow
key: airflow-secret-key
- name: AIRFLOW_LOAD_EXAMPLES
value: "no"
- name: AIRFLOW_DATABASE_NAME
value: "bitnami_airflow"
- name: AIRFLOW_DATABASE_USERNAME
value: "bn_airflow"
- name: AIRFLOW_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: release-name-postgresql
key: password
- name: AIRFLOW_DATABASE_HOST
value: "release-name-postgresql"
- name: AIRFLOW_DATABASE_PORT_NUMBER
value: "5432"
- name: AIRFLOW_EXECUTOR
value: LocalExecutor
- name: AIRFLOW_WEBSERVER_HOST
value: release-name-airflow
- name: AIRFLOW_WEBSERVER_PORT_NUMBER
value: "8080"
- name: AIRFLOW__CORE__DAGS_FOLDER
value: /opt/bitnami/airflow/dags
- name: AIRFLOW__CORE__ENABLE_XCOM_PICKLING
value: "True"
- name: AIRFLOW__CORE__DONOT_PICKLE
value: "False"
resources:
limits: {}
requests: {}
volumeMounts:
- mountPath: /bitnami/python/requirements.txt
name: requirements
subPath: requirements.txt
- mountPath: /opt/bitnami/airflow/dags/src
name: airflow-dags
volumes:
- name: requirements
configMap:
name: airflow-dependencies
- name: airflow-dags
hostPath:
# directory location on host
path: /Users/admin/Desktop/FXC_Airflow/dags/src
# this field is optional
type: Directory
---
# Source: airflow/templates/web/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: release-name-airflow-web
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: web
spec:
selector:
matchLabels:
app.kubernetes.io/name: airflow
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: web
replicas: 1
strategy:
rollingUpdate: {}
type: RollingUpdate
template:
metadata:
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: web
annotations:
checksum/configmap: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
spec:
affinity:
podAffinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: airflow
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: web
namespaces:
- "default"
topologyKey: kubernetes.io/hostname
weight: 1
nodeAffinity:
serviceAccountName: release-name-airflow
securityContext:
fsGroup: 1001
initContainers:
containers:
- name: airflow-web
image: docker.io/bitnami/airflow:2.2.3-debian-10-r62
imagePullPolicy: "IfNotPresent"
securityContext:
runAsNonRoot: true
runAsUser: 1001
env:
- name: AIRFLOW_FERNET_KEY
valueFrom:
secretKeyRef:
name: release-name-airflow
key: airflow-fernet-key
- name: AIRFLOW_SECRET_KEY
valueFrom:
secretKeyRef:
name: release-name-airflow
key: airflow-secret-key
- name: AIRFLOW_LOAD_EXAMPLES
value: "no"
- name: AIRFLOW_DATABASE_NAME
value: "bitnami_airflow"
- name: AIRFLOW_DATABASE_USERNAME
value: "bn_airflow"
- name: AIRFLOW_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: release-name-postgresql
key: password
- name: AIRFLOW_DATABASE_HOST
value: "release-name-postgresql"
- name: AIRFLOW_DATABASE_PORT_NUMBER
value: "5432"
- name: AIRFLOW_EXECUTOR
value: LocalExecutor
- name: AIRFLOW_WEBSERVER_HOST
value: "0.0.0.0"
- name: AIRFLOW_WEBSERVER_PORT_NUMBER
value: "8080"
- name: AIRFLOW_USERNAME
value: airflow
- name: AIRFLOW_PASSWORD
valueFrom:
secretKeyRef:
name: release-name-airflow
key: airflow-password
- name: AIRFLOW_BASE_URL
value: "http://127.0.0.1:8080"
- name: AIRFLOW_LDAP_ENABLE
value: "no"
- name: AIRFLOW__CORE__DAGS_FOLDER
value: /opt/bitnami/airflow/dags
- name: AIRFLOW__CORE__ENABLE_XCOM_PICKLING
value: "True"
- name: AIRFLOW__CORE__DONOT_PICKLE
value: "False"
ports:
- name: http
containerPort: 8080
livenessProbe:
failureThreshold: 6
initialDelaySeconds: 180
periodSeconds: 20
successThreshold: 1
timeoutSeconds: 5
tcpSocket:
port: http
readinessProbe:
failureThreshold: 6
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
tcpSocket:
port: http
resources:
limits:
cpu: "2"
memory: 4Gi
requests: {}
volumeMounts:
- mountPath: /bitnami/python/requirements.txt
name: requirements
subPath: requirements.txt
- mountPath: /opt/bitnami/airflow/dags/src
name: airflow-dags
volumes:
- name: requirements
configMap:
name: airflow-dependencies
- name: airflow-dags
hostPath:
# directory location on host
path: /Users/admin/Desktop/FXC_Airflow/dags/src
# this field is optional
type: Directory
---
# Source: airflow/charts/postgresql/templates/primary/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: release-name-postgresql
namespace: default
labels:
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-11.0.6
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: primary
annotations:
spec:
replicas: 1
serviceName: release-name-postgresql-hl
updateStrategy:
rollingUpdate: {}
type: RollingUpdate
selector:
matchLabels:
app.kubernetes.io/name: postgresql
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: primary
template:
metadata:
name: release-name-postgresql
labels:
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-11.0.6
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: primary
annotations:
spec:
serviceAccountName: default
affinity:
podAffinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: postgresql
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: primary
namespaces:
- "default"
topologyKey: kubernetes.io/hostname
weight: 1
nodeAffinity:
securityContext:
fsGroup: 1001
initContainers:
containers:
- name: postgresql
image: docker.io/bitnami/postgresql:14.1.0-debian-10-r80
imagePullPolicy: "IfNotPresent"
securityContext:
runAsUser: 1001
env:
- name: BITNAMI_DEBUG
value: "false"
- name: POSTGRESQL_PORT_NUMBER
value: "5432"
- name: POSTGRESQL_VOLUME_DIR
value: "/bitnami/postgresql"
- name: PGDATA
value: "/bitnami/postgresql/data"
# Authentication
- name: POSTGRES_USER
value: "bn_airflow"
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: release-name-postgresql
key: password
- name: POSTGRES_DB
value: "bitnami_airflow"
# Replication
# Initdb
# Standby
# LDAP
- name: POSTGRESQL_ENABLE_LDAP
value: "no"
# TLS
- name: POSTGRESQL_ENABLE_TLS
value: "no"
# Audit
- name: POSTGRESQL_LOG_HOSTNAME
value: "false"
- name: POSTGRESQL_LOG_CONNECTIONS
value: "false"
- name: POSTGRESQL_LOG_DISCONNECTIONS
value: "false"
- name: POSTGRESQL_PGAUDIT_LOG_CATALOG
value: "off"
# Others
- name: POSTGRESQL_CLIENT_MIN_MESSAGES
value: "error"
- name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES
value: "pgaudit"
ports:
- name: tcp-postgresql
containerPort: 5432
livenessProbe:
failureThreshold: 6
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
exec:
command:
- /bin/sh
- -c
- exec pg_isready -U "bn_airflow" -d "dbname=bitnami_airflow" -h 127.0.0.1 -p 5432
readinessProbe:
failureThreshold: 6
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
exec:
command:
- /bin/sh
- -c
- -e
- |
exec pg_isready -U "bn_airflow" -d "dbname=bitnami_airflow" -h 127.0.0.1 -p 5432
[ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ]
resources:
limits: {}
requests:
cpu: 250m
memory: 256Mi
volumeMounts:
- name: dshm
mountPath: /dev/shm
- name: data
mountPath: /bitnami/postgresql
volumes:
- name: dshm
emptyDir:
medium: Memory
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "8Gi"
The idea is that after a successful deployment I would be able to access the webserver UI through localhost:30303, but I can't for some reason. It feels like there should be a minor change to fix it...
For now what I've tried is to connect to the webserver pod, kubectl exec -it <webserver pod name> -- /bin/bash, and run the two commands airflow db init and airflow webserver -p 8080.
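One thing that may explain it (a sketch, not a confirmed diagnosis): with minikube, a NodePort service is published on the minikube node's IP rather than on localhost, so localhost:30303 won't reach it with most drivers. Two common ways to reach the web UI, assuming the Service name from the manifest above (release-name-airflow):

minikube service release-name-airflow --url
kubectl port-forward svc/release-name-airflow 8080:8080   # then open http://localhost:8080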

cp: cannot stat '/opt/flink/opt/flink-metrics-prometheus-*.jar': No such file or directory in apache flink

I am upgrading Apache Flink 1.10 to Apache Flink 1.11 in Kubernetes, but the jobmanager pod log shows:
cp: cannot stat '/opt/flink/opt/flink-metrics-prometheus-*.jar': No such file or directory
This is my jobmanager Deployment YAML:
kind: Deployment
apiVersion: apps/v1
metadata:
name: report-flink-jobmanager
namespace: middleware
selfLink: /apis/apps/v1/namespaces/middleware/deployments/report-flink-jobmanager
uid: b7bd8f0d-cddb-44e7-8bbe-b96e68dbfbcd
resourceVersion: '13655071'
generation: 44
creationTimestamp: '2020-06-08T02:11:33Z'
labels:
app.kubernetes.io/instance: report-flink
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: flink
app.kubernetes.io/version: 1.10.0
component: jobmanager
helm.sh/chart: flink-0.1.15
annotations:
deployment.kubernetes.io/revision: '6'
meta.helm.sh/release-name: report-flink
meta.helm.sh/release-namespace: middleware
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/instance: report-flink
app.kubernetes.io/name: flink
component: jobmanager
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/instance: report-flink
app.kubernetes.io/name: flink
component: jobmanager
spec:
volumes:
- name: flink-config-volume
configMap:
name: report-flink-config
items:
- key: flink-conf.yaml
path: flink-conf.yaml.tpl
- key: log4j.properties
path: log4j.properties
- key: security.properties
path: security.properties
defaultMode: 420
- name: flink-pro-persistent-storage
persistentVolumeClaim:
claimName: flink-pv-claim
containers:
- name: jobmanager
image: 'flink:1.11'
command:
- /bin/bash
- '-c'
- >-
cp /opt/flink/opt/flink-metrics-prometheus-*.jar
/opt/flink/opt/flink-s3-fs-presto-*.jar /opt/flink/lib/ && wget
https://repo1.maven.org/maven2/com/github/oshi/oshi-core/3.4.0/oshi-core-3.4.0.jar
-O /opt/flink/lib/oshi-core-3.4.0.jar && wget
https://repo1.maven.org/maven2/net/java/dev/jna/jna/5.4.0/jna-5.4.0.jar
-O /opt/flink/lib/jna-5.4.0.jar && wget
https://repo1.maven.org/maven2/net/java/dev/jna/jna-platform/5.4.0/jna-platform-5.4.0.jar
-O /opt/flink/lib/jna-platform-5.4.0.jar && cp
$FLINK_HOME/conf/flink-conf.yaml.tpl
$FLINK_HOME/conf/flink-conf.yaml && $FLINK_HOME/bin/jobmanager.sh
start; while :; do if [[ -f $(find log -name '*jobmanager*.log'
-print -quit) ]]; then tail -f -n +1 log/*jobmanager*.log; fi;
done
workingDir: /opt/flink
ports:
- name: blob
containerPort: 6124
protocol: TCP
- name: rpc
containerPort: 6123
protocol: TCP
- name: ui
containerPort: 8081
protocol: TCP
- name: metrics
containerPort: 9999
protocol: TCP
env:
- name: JVM_ARGS
value: '-Djava.security.properties=/opt/flink/conf/security.properties'
- name: FLINK_POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: APOLLO_META
valueFrom:
configMapKeyRef:
name: pro-config
key: apollo.meta
- name: ENV
valueFrom:
configMapKeyRef:
name: pro-config
key: env
resources: {}
volumeMounts:
- name: flink-config-volume
mountPath: /opt/flink/conf/flink-conf.yaml.tpl
subPath: flink-conf.yaml.tpl
- name: flink-config-volume
mountPath: /opt/flink/conf/log4j.properties
subPath: log4j.properties
- name: flink-config-volume
mountPath: /opt/flink/conf/security.properties
subPath: security.properties
- name: flink-pro-persistent-storage
mountPath: /opt/flink/data/
livenessProbe:
tcpSocket:
port: 6124
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 15
successThreshold: 1
failureThreshold: 3
readinessProbe:
tcpSocket:
port: 6123
initialDelaySeconds: 20
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
serviceAccountName: jobmanager
serviceAccount: jobmanager
securityContext: {}
schedulerName: default-scheduler
strategy:
type: Recreate
revisionHistoryLimit: 10
progressDeadlineSeconds: 600
status:
observedGeneration: 44
replicas: 1
updatedReplicas: 1
unavailableReplicas: 1
conditions:
- type: Available
status: 'False'
lastUpdateTime: '2020-08-19T06:26:56Z'
lastTransitionTime: '2020-08-19T06:26:56Z'
reason: MinimumReplicasUnavailable
message: Deployment does not have minimum availability.
- type: Progressing
status: 'False'
lastUpdateTime: '2020-08-19T06:42:56Z'
lastTransitionTime: '2020-08-19T06:42:56Z'
reason: ProgressDeadlineExceeded
message: >-
ReplicaSet "report-flink-jobmanager-7b8b9bd6bb" has timed out
progressing.
Should I remove the missing jar from the copy command? How do I fix this?
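Before removing anything it may be worth checking where (or whether) the 1.11 image still ships that jar; as far as I recall, the metrics reporters were turned into plugins around Flink 1.11 and live under /opt/flink/plugins/ rather than /opt/flink/opt/ in newer images, but verify this against the actual image. A quick, non-destructive check (a sketch; run it anywhere you can pull the image):

docker run --rm flink:1.11 sh -c 'ls /opt/flink/opt; ls -R /opt/flink/plugins'

If the listing shows flink-metrics-prometheus under a different path, point the cp line at that path; if it is genuinely absent, drop it from the cp line or add the matching 1.11 jar yourself.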

Flowable not creating the directory in AKS volume for uploaded file

I created the YAML files (Deployment, PV, PVC, and Service) for Flowable to run on AKS. It's running and I can see the Flowable browser UI. The problem is that when I start a process that has a form for uploading a file, I get this error:
/data/uncategorized/a6506912-816c-11ea-8c98-e20c3b5b12e4 (No such file or directory)
Here are my YAML files
Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: null
generation: 1
labels:
k8s-app: flowable-app
name: flowable-app
selfLink: /apis/extensions/v1beta1/namespaces/ingress-basic/deployments/flowable-app
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: flowable-app
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
k8s-app: flowable-app
name: flowable-app
spec:
containers:
- env:
- name: FLOWABLE_CONTENT_STORAGE_ROOT-FOLDER
value: /data
- name: SPRING_DATASOURCE_DRIVER-CLASS-NAME
value: org.postgresql.Driver
- name: SPRING_DATASOURCE_URL
value: jdbc:postgresql://0.0.0.0:5432/flowable
- name: SPRING_DATASOURCE_USERNAME
value: xxxxx
- name: SPRING_DATASOURCE_PASSWORD
value: xxxxx
- name: FLOWABLE_COMMON_APP_IDM-ADMIN_USER
value: admin
- name: FLOWABLE_COMMON_APP_IDM-ADMIN_PASSWORD
value: test
- name: FLOWABLE_COMMON_APP_IDM-REDIRECT-URL
value: http://1.1.1.1:8080/flowable-idm
- name: FLOWABLE_COMMON_APP_REDIRECT_ON_AUTH_SUCCESS
value: http://1.1.1.1:8080/flowable-task/
volumeMounts:
- mountPath: /data
name: flowable-data
image: xxxxx
imagePullPolicy: Always
name: flowable-app
resources: {}
securityContext:
privileged: false
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumes:
- name: flowable-data
persistentVolumeClaim:
claimName: flowable-volume-claim
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: regcred
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status: {}
PersistentVolume and PersistentVolumeClaim:
kind: PersistentVolume
apiVersion: v1
metadata:
name: flowable-volume
labels:
type: local
app: flowable-app
spec:
storageClassName: managed-premium
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data/flowable/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: flowable-volume-claim
spec:
accessModes:
- ReadWriteOnce
storageClassName: managed-premium
resources:
requests:
storage: 5Gi
Service:
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
k8s-app: flowable-app
name: flowable-app
selfLink: /api/v1/namespaces/ingress-basic/services/flowable-app
spec:
externalTrafficPolicy: Cluster
ports:
- name: tcp-4000-4000-bj5xg
nodePort: 31789
port: 8080
protocol: TCP
targetPort: 8080
selector:
k8s-app: flowable-app
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer: {}
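The error path suggests the content store root (/data) is mounted, but the uncategorized subdirectory can't be created or written underneath it, which usually comes down to the directory not existing or not being writable by the container user. A few checks from inside the running pod may narrow it down (a sketch; substitute the real pod name):

kubectl exec -it <flowable-app-pod> -- ls -ld /data
kubectl exec -it <flowable-app-pod> -- touch /data/write-test
kubectl exec -it <flowable-app-pod> -- mkdir -p /data/uncategorized

If the mkdir succeeds, retry the upload to see whether Flowable simply never created the folder; if it fails with a permission error, look at the volume's ownership and at adding an fsGroup to the pod's securityContext.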

Next.js Environment variables work locally but not when hosted on Kubernetes

I have a Next.js project.
This is my next.config.js file, which I put together by following this guide: https://dev.to/tesh254/environment-variables-from-env-file-in-nextjs-570b
// Assumed require()s for the helpers used below: the original snippet omits them,
// and these package names (@zeit/next-css, @zeit/next-sass, dotenv-webpack) are an
// educated guess based on the guide linked above.
const path = require('path');
const webpack = require('webpack');
const Dotenv = require('dotenv-webpack');
const withCSS = require('@zeit/next-css');
const withSass = require('@zeit/next-sass');

module.exports = withCSS(withSass({
  webpack: (config) => {
    config.plugins = config.plugins || [];
    config.module.rules.push({
      test: /\.svg$/,
      use: ['@svgr/webpack', {
        loader: 'url-loader',
        options: {
          limit: 100000,
          name: '[name].[ext]'
        }
      }],
    });
    config.plugins = [
      ...config.plugins,
      // Read the .env file
      new Dotenv({
        path: path.join(__dirname, '.env'),
        systemvars: true
      })
    ];
    const env = Object.keys(process.env).reduce((acc, curr) => {
      acc[`process.env.${curr}`] = JSON.stringify(process.env[curr]);
      return acc;
    }, {});
    // Fixes npm packages that depend on `fs` module
    config.node = {
      fs: 'empty'
    };
    /** Allows you to create global constants which can be configured
     * at compile time, which in our case is our environment variables
     */
    config.plugins.push(new webpack.DefinePlugin(env));
    return config;
  }
}));
I have a .env file which holds the values I need, and it works when run on localhost.
In my Kubernetes environment I have the same environment variables set up in the deploy file (which I can modify), but when I try to read them they come back as undefined, so my application cannot run.
I refer to them like:
process.env.SOME_VARIABLE
which works locally.
Does anyone have experience getting environment variables to work in Next.js when deployed? It's not as simple as it is for a backend service. :(
EDIT:
This is what the environment variable section looks like.
EDIT 2:
Full deploy file, edited to remove some details
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "38"
creationTimestamp: xx
generation: 40
labels:
app: appname
name: appname
namespace: development
resourceVersion: xx
selfLink: /apis/extensions/v1beta1/namespaces/development/deployments/appname
uid: xxx
spec:
progressDeadlineSeconds: xx
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: appname
tier: sometier
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: appname
tier: sometier
spec:
containers:
- env:
- name: NODE_ENV
value: development
- name: PORT
value: "3000"
- name: SOME_VAR
value: xxx
- name: SOME_VAR
value: xxxx
image: someimage
imagePullPolicy: Always
name: appname
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 3000
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
requests:
cpu: 100m
memory: 100Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 1
conditions:
- lastTransitionTime: xxx
lastUpdateTime: xxxx
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 40
readyReplicas: 1
replicas: 1
updatedReplicas: 1
.env files work in Docker or docker-compose, but they do not work in Kubernetes. If you want to add the variables there, you can do it with ConfigMap objects or set them directly on each Deployment. An example (from the documentation):
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
    - name: envar-demo-container
      image: gcr.io/google-samples/node-hello:1.0
      env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
        - name: DEMO_FAREWELL
          value: "Such a sweet sorrow"
Also, the best and standard way is to use ConfigMaps, for example:
containers:
  - env:
      - name: DB_DEFAULT_DATABASE
        valueFrom:
          configMapKeyRef:
            key: DB_DEFAULT_DATABASE
            name: darwined-env
And the config map:
apiVersion: v1
data:
  DB_DEFAULT_DATABASE: darwined_darwin_dev_1
kind: ConfigMap
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: darwin-env
  name: darwined-env
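If the values already live in a .env file of plain KEY=VALUE pairs, one way to generate an equivalent ConfigMap without writing the YAML by hand is (a sketch):

kubectl create configmap darwined-env --from-env-file=.env

and then reference its keys from the Deployment exactly as shown above.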
Hope this helps.

Error creating replicaset - unknown field "replicas" in io.k8s.api.apps.v1.ReplicaSet

Team, I am trying to create a ReplicaSet but I am getting this error:
error validating data:
[ValidationError(ReplicaSet): unknown field "replicas" in
io.k8s.api.apps.v1.ReplicaSet, ValidationError(ReplicaSet): unknown
field "selector" in io.k8s.api.apps.v1.ReplicaSet,
ValidationError(ReplicaSet.spec): missing required field "selector" in
io.k8s.api.apps.v1.ReplicaSetSpec]; if you choose to ignore these
errors, turn validation off with --validate=false
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: test-pod-10sec-via-rc1
  labels:
    app: pod-label
spec:
  template:
    metadata:
      name: test-pod-10sec-via-rc1
      labels:
        app: feature-pod-label
      namespace: test-space
    spec:
      containers:
        - name: main
          image: ubuntu:latest
          command: ["bash"]
          args: ["-xc", "sleep 10"]
          volumeMounts:
            - name: in-0
              mountPath: /in/0
              readOnly: true
      volumes:
        - name: in-0
          persistentVolumeClaim:
            claimName: 123-123-123
            readOnly: true
      nodeSelector:
        kubernetes.io/hostname: node1
replicas: 1
selector:
  matchLabels:
    app: feature-pod-label
You have an indentation issue in your YAML file; replicas and selector need to sit under spec, next to template. The correct YAML is:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: test-pod-10sec-via-rc1
  labels:
    app: pod-label
spec:
  replicas: 1
  selector:
    matchLabels:
      app: feature-pod-label
  template:
    metadata:
      name: test-pod-10sec-via-rc1
      labels:
        app: feature-pod-label
      namespace: test-space
    spec:
      containers:
        - name: main
          image: ubuntu:latest
          command: ["bash"]
          args: ["-xc", "sleep 10"]
          volumeMounts:
            - name: in-0
              mountPath: /in/0
              readOnly: true
      volumes:
        - name: in-0
          persistentVolumeClaim:
            claimName: 123-123-123
            readOnly: true
      nodeSelector:
        kubernetes.io/hostname: node1
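A quick way to catch this kind of indentation problem before applying is a client-side dry run, and kubectl explain shows which fields belong at which level (replicaset.yaml below is just a placeholder for whatever file holds the manifest):

kubectl apply --dry-run=client -f replicaset.yaml
kubectl explain replicaset.spec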