kubernetes: Unable to change deployment strategy

I have a deployment up and running.
Here is its export:
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "2"
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{"deployment.kubernetes.io/revision":"1"},"creationTimestamp":"2019-04-14T15:32:12Z","generation":1,"name":"frontend","namespace":"default","resourceVersion":"3138","selfLink":"/apis/extensions/v1beta1/namespaces/default/deployments/frontend","uid":"796046e1-5eca-11e9-a16c-0242ac110033"},"spec":{"minReadySeconds":20,"progressDeadlineSeconds":600,"replicas":4,"revisionHistoryLimit":10,"selector":{"matchLabels":{"name":"webapp"}},"strategy":{"rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"},"type":"RollingUpdate"},"template":{"metadata":{"creationTimestamp":null,"labels":{"name":"webapp"}},"spec":{"containers":[{"image":"kodekloud/webapp-color:v2","imagePullPolicy":"IfNotPresent","name":"simple-webapp","ports":[{"containerPort":8080,"protocol":"TCP"}],"resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30}}},"status":{"availableReplicas":4,"conditions":[{"lastTransitionTime":"2019-04-14T15:33:00Z","lastUpdateTime":"2019-04-14T15:33:00Z","message":"Deployment has minimum availability.","reason":"MinimumReplicasAvailable","status":"True","type":"Available"},{"lastTransitionTime":"2019-04-14T15:32:12Z","lastUpdateTime":"2019-04-14T15:33:00Z","message":"ReplicaSet \"frontend-7965b86db7\" has successfully progressed.","reason":"NewReplicaSetAvailable","status":"True","type":"Progressing"}],"observedGeneration":1,"readyReplicas":4,"replicas":4,"updatedReplicas":4}}
    creationTimestamp: 2019-04-14T15:32:12Z
    generation: 2
    labels:
      name: webapp
    name: frontend
    namespace: default
    resourceVersion: "3653"
    selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/frontend
    uid: 796046e1-5eca-11e9-a16c-0242ac110033
  spec:
minReadySeconds: 20
progressDeadlineSeconds: 600
replicas: 4
revisionHistoryLimit: 10
selector:
matchLabels:
name: webapp
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
name: webapp
spec:
containers:
- image: kodekloud/webapp-color:v2
imagePullPolicy: IfNotPresent
name: simple-webapp
ports:
- containerPort: 8080
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 4
conditions:
- lastTransitionTime: 2019-04-14T15:33:00Z
lastUpdateTime: 2019-04-14T15:33:00Z
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: 2019-04-14T15:32:12Z
lastUpdateTime: 2019-04-14T15:38:01Z
message: ReplicaSet "frontend-65998dcfd8" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 2
readyReplicas: 4
replicas: 4
updatedReplicas: 4
kind: List
metadata:
resourceVersion: "
I am editing spec.strategy.type from RollingUpdate to Recreate.
However, running kubectl apply -f frontend.yml yields the following error. Why is that?
Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"deployment.kubernetes.io/revision":"1","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployme
nt\",\"metadata\":{\"annotations\":{\"deployment.kubernetes.io/revision\":\"1\"},\"creationTimestamp\":\"2019-04-14T15:32:12Z\",\"generation\":1,\"name\":\"frontend\",\"namespace
\":\"default\",\"resourceVersion\":\"3138\",\"selfLink\":\"/apis/extensions/v1beta1/namespaces/default/deployments/frontend\",\"uid\":\"796046e1-5eca-11e9-a16c-0242ac110033\"},\"
spec\":{\"minReadySeconds\":20,\"progressDeadlineSeconds\":600,\"replicas\":4,\"revisionHistoryLimit\":10,\"selector\":{\"matchLabels\":{\"name\":\"webapp\"}},\"strategy\":{\"rol
lingUpdate\":{\"maxSurge\":\"25%\",\"maxUnavailable\":\"25%\"},\"type\":\"Recreate\"},\"template\":{\"metadata\":{\"creationTimestamp\":null,\"labels\":{\"name\":\"webapp\"}},\"s
pec\":{\"containers\":[{\"image\":\"kodekloud/webapp-color:v2\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"simple-webapp\",\"ports\":[{\"containerPort\":8080,\"protocol\":\"
TCP\"}],\"resources\":{},\"terminationMessagePath\":\"/dev/termination-log\",\"terminationMessagePolicy\":\"File\"}],\"dnsPolicy\":\"ClusterFirst\",\"restartPolicy\":\"Always\",\
"schedulerName\":\"default-scheduler\",\"securityContext\":{},\"terminationGracePeriodSeconds\":30}}},\"status\":{\"availableReplicas\":4,\"conditions\":[{\"lastTransitionTime\":
\"2019-04-14T15:33:00Z\",\"lastUpdateTime\":\"2019-04-14T15:33:00Z\",\"message\":\"Deployment has minimum availability.\",\"reason\":\"MinimumReplicasAvailable\",\"status\":\"Tru
e\",\"type\":\"Available\"},{\"lastTransitionTime\":\"2019-04-14T15:32:12Z\",\"lastUpdateTime\":\"2019-04-14T15:33:00Z\",\"message\":\"ReplicaSet \\\"frontend-7965b86db7\\\" has
successfully progressed.\",\"reason\":\"NewReplicaSetAvailable\",\"status\":\"True\",\"type\":\"Progressing\"}],\"observedGeneration\":1,\"readyReplicas\":4,\"replicas\":4,\"upda
tedReplicas\":4}}\n"},"generation":1,"resourceVersion":"3138"},"spec":{"strategy":{"$retainKeys":["rollingUpdate","type"],"type":"Recreate"}},"status":{"$setElementOrder/conditio
ns":[{"type":"Available"},{"type":"Progressing"}],"conditions":[{"lastUpdateTime":"2019-04-14T15:33:00Z","message":"ReplicaSet \"frontend-7965b86db7\" has successfully progressed
.","type":"Progressing"}],"observedGeneration":1}}
to:
Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
Name: "frontend", Namespace: "default"
Object: &{map["kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensi
ons/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{\"deployment.kubernetes.io/revision\":\"1\"},\"creationTimestamp\":\"2019-04-14T15:32:12Z\",\"generation\":1,
\"name\":\"frontend\",\"namespace\":\"default\",\"resourceVersion\":\"3138\",\"selfLink\":\"/apis/extensions/v1beta1/namespaces/default/deployments/frontend\",\"uid\":\"796046e1-
5eca-11e9-a16c-0242ac110033\"},\"spec\":{\"minReadySeconds\":20,\"progressDeadlineSeconds\":600,\"replicas\":4,\"revisionHistoryLimit\":10,\"selector\":{\"matchLabels\":{\"name\"
:\"webapp\"}},\"strategy\":{\"rollingUpdate\":{\"maxSurge\":\"25%\",\"maxUnavailable\":\"25%\"},\"type\":\"RollingUpdate\"},\"template\":{\"metadata\":{\"creationTimestamp\":null
,\"labels\":{\"name\":\"webapp\"}},\"spec\":{\"containers\":[{\"image\":\"kodekloud/webapp-color:v2\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"simple-webapp\",\"ports\":[{
\"containerPort\":8080,\"protocol\":\"TCP\"}],\"resources\":{},\"terminationMessagePath\":\"/dev/termination-log\",\"terminationMessagePolicy\":\"File\"}],\"dnsPolicy\":\"Cluster
First\",\"restartPolicy\":\"Always\",\"schedulerName\":\"default-scheduler\",\"securityContext\":{},\"terminationGracePeriodSeconds\":30}}},\"status\":{\"availableReplicas\":4,\"
conditions\":[{\"lastTransitionTime\":\"2019-04-14T15:33:00Z\",\"lastUpdateTime\":\"2019-04-14T15:33:00Z\",\"message\":\"Deployment has minimum availability.\",\"reason\":\"Minim
umReplicasAvailable\",\"status\":\"True\",\"type\":\"Available\"},{\"lastTransitionTime\":\"2019-04-14T15:32:12Z\",\"lastUpdateTime\":\"2019-04-14T15:33:00Z\",\"message\":\"Repli
caSet \\\"frontend-7965b86db7\\\" has successfully progressed.\",\"reason\":\"NewReplicaSetAvailable\",\"status\":\"True\",\"type\":\"Progressing\"}],\"observedGeneration\":1,\"readyReplicas\":4,\"replicas\":4,\"updatedReplicas\":4}}\n" "deployment.kubernetes.io/revision":"2"] "name":"frontend" "namespace":"default" "selfLink":"/apis/extensions/v1beta1/namespaces/default/deployments/frontend" "resourceVersion":"3653" "labels":map["name":"webapp"] "uid":"796046e1-5eca-11e9-a16c-0242ac110033" "generation":'\x02' "creationTimestamp":"2019-04-14T15:32:12Z"] "spec":map["selector":map["matchLabels":map["name":"webapp"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"webapp"]] "spec":map["dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler" "containers":[map["terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"simple-webapp" "image":"kodekloud/webapp-color:v2" "ports":[map["containerPort":'\u1f90' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log"]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e']] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"]] "minReadySeconds":'\x14' "revisionHistoryLimit":'\n' "progressDeadlineSeconds":'\u0258' "replicas":'\x04'] "status":map["observedGeneration":'\x02' "replicas":'\x04' "updatedReplicas":'\x04' "readyReplicas":'\x04' "availableReplicas":'\x04' "conditions":[map["type":"Available" "status":"True" "lastUpdateTime":"2019-04-14T15:33:00Z" "lastTransitionTime":"2019-04-14T15:33:00Z" "reason":"MinimumReplicasAvailable" "message":"Deployment has minimum availability."] map["lastUpdateTime":"2019-04-14T15:38:01Z" "lastTransitionTime":"2019-04-14T15:32:12Z" "reason":"NewReplicaSetAvailable" "message":"ReplicaSet \"frontend-65998dcfd8\" has successfully progressed." "type":"Progressing" "status":"True"]]]]}
for: "frontend.yml": Operation cannot be fulfilled on deployments.extensions "frontend": the object has been modified; please apply your changes to the latest version and try again

This is a known issue with the Kubernetes Terraform provider; it has been present since at least version 0.11.7.
The issue has been fixed in the latest version, as a result of this merge request.
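Separately from the provider fix, one way to flip the strategy on the live Deployment without re-applying the full export (and without hitting the conflict on resourceVersion/status) is a merge patch that also clears the rollingUpdate block, which may not be set together with the Recreate strategy. A sketch, with the name and namespace taken from the export above:
kubectl patch deployment frontend -n default --type merge -p '{"spec":{"strategy":{"rollingUpdate":null,"type":"Recreate"}}}'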

Related

Grafana alert provisioning issue

I'd like to be able to provision alerts, and tried to follow these instructions, but no luck!
Grafana version: 9.2.0 (OSS), running in Kubernetes.
Steps that I take:
Created a new alert rule from the UI.
Extracted the alert rule from the API:
curl -k https://<my-grafana-url>/api/v1/provisioning/alert-rules/-4pMuQFVk -u admin:<my-admin-password>
It returns the following:
---
id: 18
uid: "-4pMuQFVk"
orgID: 1
folderUID: 3i72aQKVk
ruleGroup: cpu_alert_group
title: my_cpu_alert
condition: B
data:
- refId: A
queryType: ''
relativeTimeRange:
from: 600
to: 0
datasourceUid: _SaubQF4k
model:
editorMode: code
expr: system_cpu_usage
hide: false
intervalMs: 1000
legendFormat: __auto
maxDataPoints: 43200
range: true
refId: A
- refId: B
queryType: ''
relativeTimeRange:
from: 0
to: 0
datasourceUid: "-100"
model:
conditions:
- evaluator:
params:
- 3
type: gt
operator:
type: and
query:
params:
- A
reducer:
params: []
type: last
type: query
datasource:
type: __expr__
uid: "-100"
expression: A
hide: false
intervalMs: 1000
maxDataPoints: 43200
refId: B
type: classic_conditions
updated: '2022-12-07T20:01:47Z'
noDataState: NoData
execErrState: Error
for: 5m
Deleted the alert rule from the UI.
Made a ConfigMap from the above alert rule, like so:
apiVersion: v1
kind: ConfigMap
metadata:
name: grafana-alerting
data:
alerting.yaml: |-
apiVersion: 1
groups:
- id: 18
uid: "-4pMuQFVk"
orgID: 1
folderUID: 3i72aQKVk
ruleGroup: cpu_alert_group
title: my_cpu_alert
condition: B
data:
- refId: A
queryType: ''
relativeTimeRange:
from: 600
to: 0
datasourceUid: _SaubQF4k
model:
editorMode: code
expr: system_cpu_usage
hide: false
intervalMs: 1000
legendFormat: __auto
maxDataPoints: 43200
range: true
refId: A
- refId: B
queryType: ''
relativeTimeRange:
from: 0
to: 0
datasourceUid: "-100"
model:
conditions:
- evaluator:
params:
- 3
type: gt
operator:
type: and
query:
params:
- A
reducer:
params: []
type: last
type: query
datasource:
type: __expr__
uid: "-100"
expression: A
hide: false
intervalMs: 1000
maxDataPoints: 43200
refId: B
type: classic_conditions
updated: '2022-12-07T20:01:47Z'
noDataState: NoData
execErrState: Error
for: 5m
I mount the above ConfigMap in the Grafana container (at /etc/grafana/provisioning/alerting).
The full manifest of the Deployment is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "2"
meta.helm.sh/release-name: grafana
meta.helm.sh/release-namespace: monitoring
creationTimestamp: "2022-12-08T18:31:30Z"
generation: 4
labels:
app.kubernetes.io/instance: grafana
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: grafana
app.kubernetes.io/version: 9.3.0
helm.sh/chart: grafana-6.46.1
name: grafana
namespace: monitoring
resourceVersion: "648617"
uid: dc06b802-5281-4f31-a2b2-fef3cf53a70b
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/instance: grafana
app.kubernetes.io/name: grafana
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
annotations:
checksum/config: 98cac51656714db48a116d3109994ee48c401b138bc8459540e1a497f994d197
checksum/dashboards-json-config: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
checksum/sc-dashboard-provider-config: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
checksum/secret: 203f0e4d883461bdd41fe68515fc47f679722dc2fdda49b584209d1d288a5f07
creationTimestamp: null
labels:
app.kubernetes.io/instance: grafana
app.kubernetes.io/name: grafana
spec:
automountServiceAccountToken: true
containers:
- env:
- name: GF_SECURITY_ADMIN_USER
valueFrom:
secretKeyRef:
key: admin-user
name: grafana
- name: GF_SECURITY_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
key: admin-password
name: grafana
- name: GF_PATHS_DATA
value: /var/lib/grafana/
- name: GF_PATHS_LOGS
value: /var/log/grafana
- name: GF_PATHS_PLUGINS
value: /var/lib/grafana/plugins
- name: GF_PATHS_PROVISIONING
value: /etc/grafana/provisioning
image: grafana/grafana:9.3.0
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 10
httpGet:
path: /api/health
port: 3000
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 30
name: grafana
ports:
- containerPort: 3000
name: grafana
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /api/health
port: 3000
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/grafana/grafana.ini
name: config
subPath: grafana.ini
- mountPath: /etc/grafana/provisioning/alerting
name: grafana-alerting
- mountPath: /var/lib/grafana
name: storage
dnsPolicy: ClusterFirst
enableServiceLinks: true
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 472
runAsGroup: 472
runAsUser: 472
serviceAccount: grafana
serviceAccountName: grafana
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 420
name: grafana-alerting
name: grafana-alerting
- configMap:
defaultMode: 420
name: grafana
name: config
- emptyDir: {}
name: storage
However, Grafana fails to start with this error:
Failed to start grafana. error: failure to map file alerting.yaml: failure parsing rules: rule group has no name set
I fixed the above error by adding a group name, but similar errors about other missing elements kept showing up (to the point that I stopped chasing them, as I couldn't figure out what exactly the correct schema is). Digging in, it looks like the format/schema returned from the API in step 2 is different from the schema described in the documentation.
Why is the schema of the alert rule returned from the API different? Am I supposed to convert it, and if so, how? Otherwise, what am I doing wrong?
Edit: Replaced the StatefulSet with a Deployment, since I was able to reproduce this in a normal/minimal Grafana deployment too.
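For reference, the file-provisioning schema documented for Grafana 9 wraps rules inside a group that carries its own name, folder, and interval, which is why the raw API export (a single rule object) is rejected as-is. A rough, hand-converted sketch of the rule above in that format; the folder title and interval here are illustrative assumptions, and the full data section from the export would be carried over unchanged:
apiVersion: 1
groups:
  - orgId: 1
    name: cpu_alert_group        # the "rule group name" the error complains about
    folder: my_alerts            # must match an existing (or provisioned) folder title
    interval: 60s
    rules:
      - uid: "-4pMuQFVk"
        title: my_cpu_alert
        condition: B
        data:
          - refId: A
            relativeTimeRange:
              from: 600
              to: 0
            datasourceUid: _SaubQF4k
            model:
              expr: system_cpu_usage
              refId: A
          - refId: B
            # ... classic_conditions expression exactly as returned by the API
        noDataState: NoData
        execErrState: Error
        for: 5m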

Zombie kubernetes pod keeps being restarted by zombie Replica Set

I'm managing a Kubernetes cluster and there is a duplicate pod that keeps coming back, and the duplicate ReplicaSet controlling it also keeps coming back after deletion. It's very strange. I also can't set the ReplicaSet to desire 0 pods, but that might be by design.
I can't really think of more information to share.
Anyone recognise this issue and know how to fix it?
Here's the ReplicaSet
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: cash-idemix-ca-6bc646cbcc
namespace: boc-cash-portal
uid: babc0236-2053-4088-b8e8-b8ae2ed9303c
resourceVersion: '25466073'
generation: 1
creationTimestamp: '2022-11-28T13:18:42Z'
labels:
app.kubernetes.io/instance: cash-idemix-ca
app.kubernetes.io/name: cash-idemix-ca
pod-template-hash: 6bc646cbcc
annotations:
deployment.kubernetes.io/desired-replicas: '1'
deployment.kubernetes.io/max-replicas: '2'
deployment.kubernetes.io/revision: '7'
ownerReferences:
- apiVersion: apps/v1
kind: Deployment
name: cash-idemix-ca
uid: 2a3300ed-f666-4a30-98b7-7ab2ebcb2a0d
controller: true
blockOwnerDeletion: true
managedFields:
- manager: kube-controller-manager
operation: Update
apiVersion: apps/v1
time: '2022-11-28T13:18:42Z'
fieldsType: FieldsV1
- manager: kube-controller-manager
operation: Update
apiVersion: apps/v1
time: '2022-11-29T13:27:37Z'
fieldsType: FieldsV1
subresource: status
selfLink: >-
/apis/apps/v1/namespaces/boc-cash-portal/replicasets/cash-idemix-ca-6bc646cbcc
status:
replicas: 1
fullyLabeledReplicas: 1
observedGeneration: 1
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/instance: cash-idemix-ca
app.kubernetes.io/name: cash-idemix-ca
pod-template-hash: 6bc646cbcc
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/instance: cash-idemix-ca
app.kubernetes.io/name: cash-idemix-ca
pod-template-hash: 6bc646cbcc
annotations:
kubectl.kubernetes.io/restartedAt: '2022-07-19T11:45:27Z'
spec:
volumes:
- name: fabric-ca-server-home
persistentVolumeClaim:
claimName: cash-idemix-ca-fabric-ca-server-home
containers:
- name: cash-idemix-ca
image: ca.icr.io/samara-dev-container-images/cash-idemix-ca:0.3.0
command:
- sh
args:
- '-c'
- >-
sleep 1 && fabric-ca-server start -b
$(IDEMIX_ADMIN_USERNAME):$(IDEMIX_ADMIN_PASSWORD) --port 7054
--idemix.curve gurvy.Bn254 --loglevel debug
ports:
- name: api
containerPort: 7054
protocol: TCP
env:
- name: FABRIC_CA_SERVER_HOME
value: /idemix-config/fabric-ca-gurvy
- name: IDEMIX_ADMIN_USERNAME
valueFrom:
secretKeyRef:
name: cash-idemix-ca-admin-credentials
key: IDEMIX_ADMIN_USERNAME
- name: IDEMIX_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: cash-idemix-ca-admin-credentials
key: IDEMIX_ADMIN_PASSWORD
resources: {}
volumeMounts:
- name: fabric-ca-server-home
mountPath: /idemix-config/fabric-ca-gurvy
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
serviceAccountName: cash-idemix-ca
serviceAccount: cash-idemix-ca
securityContext: {}
imagePullSecrets:
- name: all-icr-io
schedulerName: default-scheduler
Edit: The Fool was correct; there is a Deployment that recreates the ReplicaSet. Though in its settings it seems to say that it only wants to create 1 replica, so I still don't see why it wants to create two of them.
I'm using Lens to manage my cluster, and it shows that the desired number of replicas is indeed 2. I can set it to 1, but the change won't persist. Anything else I could look at?
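A few plain kubectl checks can help narrow down where the second replica comes from (a sketch; the names and namespace are taken from the manifests in this post):
kubectl get rs -n boc-cash-portal -o wide
kubectl get rs cash-idemix-ca-6bc646cbcc -n boc-cash-portal -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'
kubectl rollout status deployment/cash-idemix-ca -n boc-cash-portal
kubectl describe deployment cash-idemix-ca -n boc-cash-portal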
cash-idemix-ca Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: cash-idemix-ca
namespace: boc-cash-portal
uid: 2a3300ed-f666-4a30-98b7-7ab2ebcb2a0d
resourceVersion: '25467341'
generation: 10
creationTimestamp: '2022-07-18T14:13:57Z'
labels:
app.kubernetes.io/instance: cash-idemix-ca
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: cash-idemix-ca
app.kubernetes.io/version: manual-0.2.1
helm.sh/chart: cash-idemix-ca-0.12.0
annotations:
deployment.kubernetes.io/revision: '7'
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"cash-idemix-ca","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"cash-idemix-ca","app.kubernetes.io/version":"manual-0.2.1","helm.sh/chart":"cash-idemix-ca-0.12.0"},"name":"cash-idemix-ca","namespace":"boc-cash-portal"},"spec":{"replicas":1,"revisionHistoryLimit":0,"selector":{"matchLabels":{"app.kubernetes.io/instance":"cash-idemix-ca","app.kubernetes.io/name":"cash-idemix-ca"}},"template":{"metadata":{"labels":{"app.kubernetes.io/instance":"cash-idemix-ca","app.kubernetes.io/name":"cash-idemix-ca"}},"spec":{"containers":[{"args":["-c","sleep
1 \u0026\u0026 fabric-ca-server start -b
$(IDEMIX_ADMIN_USERNAME):$(IDEMIX_ADMIN_PASSWORD) --port 7054
--idemix.curve gurvy.Bn254 --loglevel
debug"],"command":["sh"],"env":[{"name":"FABRIC_CA_SERVER_HOME","value":"/idemix-config/fabric-ca-gurvy"},{"name":"IDEMIX_ADMIN_USERNAME","valueFrom":{"secretKeyRef":{"key":"IDEMIX_ADMIN_USERNAME","name":"cash-idemix-ca-admin-credentials"}}},{"name":"IDEMIX_ADMIN_PASSWORD","valueFrom":{"secretKeyRef":{"key":"IDEMIX_ADMIN_PASSWORD","name":"cash-idemix-ca-admin-credentials"}}}],"image":"ca.icr.io/samara-dev-container-images/cash-idemix-ca:0.3.0","imagePullPolicy":"IfNotPresent","name":"cash-idemix-ca","ports":[{"containerPort":7054,"name":"api","protocol":"TCP"}],"volumeMounts":[{"mountPath":"/idemix-config/fabric-ca-gurvy","name":"fabric-ca-server-home","readOnly":false}]}],"imagePullSecrets":[{"name":"all-icr-io"}],"serviceAccountName":"cash-idemix-ca","volumes":[{"name":"fabric-ca-server-home","persistentVolumeClaim":{"claimName":"cash-idemix-ca-fabric-ca-server-home"}}]}}}}
status:
observedGeneration: 10
replicas: 2
updatedReplicas: 1
readyReplicas: 1
availableReplicas: 1
unavailableReplicas: 1
conditions:
- type: Available
status: 'True'
lastUpdateTime: '2022-11-28T13:53:56Z'
lastTransitionTime: '2022-11-28T13:53:56Z'
reason: MinimumReplicasAvailable
message: Deployment has minimum availability.
- type: Progressing
status: 'False'
lastUpdateTime: '2022-11-29T13:37:38Z'
lastTransitionTime: '2022-11-29T13:37:38Z'
reason: ProgressDeadlineExceeded
message: ReplicaSet "cash-idemix-ca-6bc646cbcc" has timed out progressing.
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/instance: cash-idemix-ca
app.kubernetes.io/name: cash-idemix-ca
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/instance: cash-idemix-ca
app.kubernetes.io/name: cash-idemix-ca
annotations:
kubectl.kubernetes.io/restartedAt: '2022-07-19T11:45:27Z'
spec:
volumes:
- name: fabric-ca-server-home
persistentVolumeClaim:
claimName: cash-idemix-ca-fabric-ca-server-home
containers:
- name: cash-idemix-ca
image: ca.icr.io/samara-dev-container-images/cash-idemix-ca:0.3.0
command:
- sh
args:
- '-c'
- >-
sleep 1 && fabric-ca-server start -b
$(IDEMIX_ADMIN_USERNAME):$(IDEMIX_ADMIN_PASSWORD) --port 7054
--idemix.curve gurvy.Bn254 --loglevel debug
ports:
- name: api
containerPort: 7054
protocol: TCP
env:
- name: FABRIC_CA_SERVER_HOME
value: /idemix-config/fabric-ca-gurvy
- name: IDEMIX_ADMIN_USERNAME
valueFrom:
secretKeyRef:
name: cash-idemix-ca-admin-credentials
key: IDEMIX_ADMIN_USERNAME
- name: IDEMIX_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: cash-idemix-ca-admin-credentials
key: IDEMIX_ADMIN_PASSWORD
resources: {}
volumeMounts:
- name: fabric-ca-server-home
mountPath: /idemix-config/fabric-ca-gurvy
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
serviceAccountName: cash-idemix-ca
serviceAccount: cash-idemix-ca
securityContext: {}
imagePullSecrets:
- name: all-icr-io
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 0
progressDeadlineSeconds: 600

TLS error: nginx-ingress-controller fails to start after AKS upgrade from 1.21 to v1.23.5 - Traefik still tries to get *v1beta1.Ingress

We deploy the service with Helm. The Ingress template looks like this:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ui-app-ingress
{{- with .Values.ingress.annotations}}
annotations:
{{- toYaml . | nindent 4}}
{{- end}}
spec:
rules:
- host: {{ .Values.ingress.hostname }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ include "ui-app-chart.fullname" . }}
port:
number: 80
tls:
- hosts:
- {{ .Values.ingress.hostname }}
secretName: {{ .Values.ingress.certname }}
As you can see, we already use networking.k8s.io/v1, but if I watch the Traefik logs, I find this error:
1 reflector.go:138] pkg/mod/k8s.io/client-go#v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1beta1.Ingress: failed to list *v1beta1.Ingress: the server could not find the requested resource (get ingresses.extensions)
which results in a TLS cert error:
time="2022-06-07T15:40:35Z" level=debug msg="Serving default certificate for request: \"example.de\""
time="2022-06-07T15:40:35Z" level=debug msg="http: TLS handshake error from 10.1.0.4:57484: remote error: tls: unknown certificate"
time="2022-06-07T15:40:35Z" level=debug msg="Serving default certificate for request: \"example.de\""
time="2022-06-07T15:53:06Z" level=debug msg="Serving default certificate for request: \"\""
time="2022-06-07T16:03:31Z" level=debug msg="Serving default certificate for request: \"<ip-adress>\""
time="2022-06-07T16:03:32Z" level=debug msg="Serving default certificate for request: \"<ip-adress>\""
I already found out that networking.k8s.io/v1beta1 is no longer served, but networking.k8s.io/v1 was defined as the apiVersion in the template the whole time.
Why does it still try to get v1beta1, and how can I fix this?
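As a first sanity check after the upgrade, it can help to confirm which Ingress API versions the cluster still serves (plain kubectl, nothing Traefik-specific; a sketch):
kubectl api-versions | grep -E 'networking.k8s.io|extensions'
kubectl get ingresses.v1.networking.k8s.io -A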
We use this TLSOption:
apiVersion: traefik.containo.us/v1alpha1
kind: TLSOption
metadata:
name: default
namespace: default
spec:
minVersion: VersionTLS12
maxVersion: VersionTLS13
cipherSuites:
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
We use the Traefik Helm chart, rolled out with Terraform:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "2"
meta.helm.sh/release-name: traefik
meta.helm.sh/release-namespace: traefik
creationTimestamp: "2021-06-12T10:06:11Z"
generation: 2
labels:
app.kubernetes.io/instance: traefik
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: traefik
helm.sh/chart: traefik-9.19.1
name: traefik
namespace: traefik
resourceVersion: "86094434"
uid: 903a6f54-7698-4290-bc59-d234a191965c
spec:
progressDeadlineSeconds: 600
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/instance: traefik
app.kubernetes.io/name: traefik
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/instance: traefik
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: traefik
helm.sh/chart: traefik-9.19.1
spec:
containers:
- args:
- --global.checknewversion
- --global.sendanonymoususage
- --entryPoints.traefik.address=:9000/tcp
- --entryPoints.web.address=:8000/tcp
- --entryPoints.websecure.address=:8443/tcp
- --api.dashboard=true
- --ping=true
- --providers.kubernetescrd
- --providers.kubernetesingress
- --providers.file.filename=/etc/traefik/traefik.yml
- --accesslog=true
- --accesslog.format=json
- --log.level=DEBUG
- --entrypoints.websecure.http.tls
- --entrypoints.web.http.redirections.entrypoint.to=websecure
- --entrypoints.web.http.redirections.entrypoint.scheme=https
- --entrypoints.web.http.redirections.entrypoint.permanent=true
- --entrypoints.web.http.redirections.entrypoint.to=:443
image: traefik:2.4.8
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /ping
port: 9000
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
name: traefik
ports:
- containerPort: 9000
name: traefik
protocol: TCP
- containerPort: 8000
name: web
protocol: TCP
- containerPort: 8443
name: websecure
protocol: TCP
readinessProbe:
failureThreshold: 1
httpGet:
path: /ping
port: 9000
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
resources: {}
securityContext:
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
readOnlyRootFilesystem: true
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /data
name: data
- mountPath: /tmp
name: tmp
- mountPath: /etc/traefik
name: traefik-cm
readOnly: true
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 65532
serviceAccount: traefik
serviceAccountName: traefik
terminationGracePeriodSeconds: 60
tolerations:
- effect: NoSchedule
key: env
operator: Equal
value: conhub
volumes:
- emptyDir: {}
name: data
- emptyDir: {}
name: tmp
- configMap:
defaultMode: 420
name: traefik-cm
name: traefik-cm
status:
availableReplicas: 3
conditions:
- lastTransitionTime: "2022-06-07T09:19:58Z"
lastUpdateTime: "2022-06-07T09:19:58Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2021-06-12T10:06:11Z"
lastUpdateTime: "2022-06-07T16:39:01Z"
message: ReplicaSet "traefik-84c6f5f98b" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 2
readyReplicas: 3
replicas: 3
updatedReplicas: 3
resource "helm_release" "traefik" {
name = "traefik"
namespace = "traefik"
create_namespace = true
repository = "https://helm.traefik.io/traefik"
chart = "traefik"
set {
name = "service.spec.loadBalancerIP"
value = azurerm_public_ip.pub_ip.ip_address
}
set {
name = "service.annotations.service\\.beta\\.kubernetes\\.io/azure-load-balancer-resource-group"
value = var.resource_group_aks
}
set {
name = "additionalArguments"
value = "{--accesslog=true,--accesslog.format=json,--log.level=DEBUG,--entrypoints.websecure.http.tls,--entrypoints.web.http.redirections.entrypoint.to=websecure,--entrypoints.web.http.redirections.entrypoint.scheme=https,--entrypoints.web.http.redirections.entrypoint.permanent=true,--entrypoints.web.http.redirections.entrypoint.to=:443}"
}
set {
name = "deployment.replicas"
value = 3
}
timeout = 600
depends_on = [
azurerm_kubernetes_cluster.aks
]
}
I found out that the problem was the version of the Traefik image.
I quick-fixed it by setting a newer image:
kubectl set image deployment/traefik traefik=traefik:2.7.0 -n traefik
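Since the Deployment is owned by the Helm release in Terraform, kubectl set image will be reverted the next time the release is upgraded. A more durable variant of the same fix would be to pin the tag in the release values (a sketch; image.tag is the value name used by the Traefik chart, worth verifying against the chart version in use):
set {
  name  = "image.tag"
  value = "2.7.0"
}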

Flowable not creating the directory in AKS volume for uploaded file

I created the YAML files (Deployment, PV, PVC, and Service) for Flowable to run on AKS. It's running and I can see the Flowable browser UI. The problem is that when I start a process that has a form to upload a file, I get this error:
/data/uncategorized/a6506912-816c-11ea-8c98-e20c3b5b12e4 (No such file or directory)
Here are my YAML files
Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: null
generation: 1
labels:
k8s-app: flowable-app
name: flowable-app
selfLink: /apis/extensions/v1beta1/namespaces/ingress-basic/deployments/flowable-app
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: flowable-app
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
k8s-app: flowable-app
name: flowable-app
spec:
containers:
- env:
- name: FLOWABLE_CONTENT_STORAGE_ROOT-FOLDER
value: /data
- name: SPRING_DATASOURCE_DRIVER-CLASS-NAME
value: org.postgresql.Driver
- name: SPRING_DATASOURCE_URL
value: jdbc:postgresql://0.0.0.0:5432/flowable
- name: SPRING_DATASOURCE_USERNAME
value: xxxxx
- name: SPRING_DATASOURCE_PASSWORD
value: xxxxx
- name: FLOWABLE_COMMON_APP_IDM-ADMIN_USER
value: admin
- name: FLOWABLE_COMMON_APP_IDM-ADMIN_PASSWORD
value: test
- name: FLOWABLE_COMMON_APP_IDM-REDIRECT-URL
value: http://1.1.1.1:8080/flowable-idm
- name: FLOWABLE_COMMON_APP_REDIRECT_ON_AUTH_SUCCESS
value: http://1.1.1.1:8080/flowable-task/
volumeMounts:
- mountPath: /data
name: flowable-data
image: xxxxx
imagePullPolicy: Always
name: flowable-app
resources: {}
securityContext:
privileged: false
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumes:
- name: flowable-data
persistentVolumeClaim:
claimName: flowable-volume-claim
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: regcred
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status: {}
PersistentVolume and PersistentVolumeClaim:
kind: PersistentVolume
apiVersion: v1
metadata:
name: flowable-volume
labels:
type: local
app: flowable-app
spec:
storageClassName: managed-premium
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data/flowable/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: flowable-volume-claim
spec:
accessModes:
- ReadWriteOnce
storageClassName: managed-premium
resources:
requests:
storage: 5Gi
Service:
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
k8s-app: flowable-app
name: flowable-app
selfLink: /api/v1/namespaces/ingress-basic/services/flowable-app
spec:
externalTrafficPolicy: Cluster
ports:
- name: tcp-4000-4000-bj5xg
nodePort: 31789
port: 8080
protocol: TCP
targetPort: 8080
selector:
k8s-app: flowable-app
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer: {}
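One thing worth checking with a hostPath-backed volume like this is whether the content root Flowable writes into (here /data/uncategorized) exists and is writable by the container user at all. A hedged sketch of an initContainer that pre-creates it; the busybox image and the permission bits are assumptions for illustration, not part of the original manifests:
      initContainers:
        - name: init-flowable-data
          image: busybox:1.36
          command: ["sh", "-c", "mkdir -p /data/uncategorized && chmod -R 0775 /data"]
          volumeMounts:
            - name: flowable-data
              mountPath: /data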

Next.js Environment variables work locally but not when hosted on Kubernetes

I have a Next.js project.
This is my next.config.js file, which I put together following this guide: https://dev.to/tesh254/environment-variables-from-env-file-in-nextjs-570b
// Plugin imports (package names assumed from the linked guide)
const withCSS = require('@zeit/next-css');
const withSass = require('@zeit/next-sass');
const Dotenv = require('dotenv-webpack');
const path = require('path');
const webpack = require('webpack');

module.exports = withCSS(withSass({
  webpack: (config) => {
    config.plugins = config.plugins || [];
    config.module.rules.push({
      test: /\.svg$/,
      use: ['@svgr/webpack', {
        loader: 'url-loader',
        options: {
          limit: 100000,
          name: '[name].[ext]'
        }
      }],
    });
    config.plugins = [
      ...config.plugins,
      // Read the .env file
      new Dotenv({
        path: path.join(__dirname, '.env'),
        systemvars: true
      })
    ];
    const env = Object.keys(process.env).reduce((acc, curr) => {
      acc[`process.env.${curr}`] = JSON.stringify(process.env[curr]);
      return acc;
    }, {});
    // Fixes npm packages that depend on the `fs` module
    config.node = {
      fs: 'empty'
    };
    /** Allows you to create global constants which can be configured
     * at compile time, which in our case is our environment variables
     */
    config.plugins.push(new webpack.DefinePlugin(env));
    return config;
  }
}));
I have a .env file which holds the values I need. It works when run on localhost.
In my Kubernetes environment, within the deployment file (which I can modify), I have the same environment variables set up. But when I try to access them they come back as undefined, so my application cannot run.
I refer to it like:
process.env.SOME_VARIABLE
which works locally.
Does anyone have experience making environment variables work in Next.js when deployed? It's not as simple as it is for a backend service. :(
EDIT:
This is what the environment variable section looks like.
EDIT 2:
Full deploy file, edited to remove some details
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "38"
creationTimestamp: xx
generation: 40
labels:
app: appname
name: appname
namespace: development
resourceVersion: xx
selfLink: /apis/extensions/v1beta1/namespaces/development/deployments/appname
uid: xxx
spec:
progressDeadlineSeconds: xx
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: appname
tier: sometier
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: appname
tier: sometier
spec:
containers:
- env:
- name: NODE_ENV
value: development
- name: PORT
value: "3000"
- name: SOME_VAR
value: xxx
- name: SOME_VAR
value: xxxx
image: someimage
imagePullPolicy: Always
name: appname
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 3000
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
requests:
cpu: 100m
memory: 100Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 1
conditions:
- lastTransitionTime: xxx
lastUpdateTime: xxxx
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 40
readyReplicas: 1
replicas: 1
updatedReplicas: 1
.env files work in Docker or docker-compose, but they do not work in Kubernetes. If you want to add the variables, you can do so via ConfigMap objects or add them directly to each Deployment; an example (from the documentation):
apiVersion: v1
kind: Pod
metadata:
name: envar-demo
labels:
purpose: demonstrate-envars
spec:
containers:
- name: envar-demo-container
image: gcr.io/google-samples/node-hello:1.0
env:
- name: DEMO_GREETING
value: "Hello from the environment"
- name: DEMO_FAREWELL
value: "Such a sweet sorrow
Also, the best and standard way is to use config maps, for example:
containers:
- env:
- name: DB_DEFAULT_DATABASE
valueFrom:
configMapKeyRef:
key: DB_DEFAULT_DATABASE
name: darwined-env
And the config map:
apiVersion: v1
data:
DB_DEFAULT_DATABASE: darwined_darwin_dev_1
kind: ConfigMap
metadata:
creationTimestamp: null
labels:
io.kompose.service: darwin-env
name: darwined-env
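If the values already live in a local .env file, one way to generate such a ConfigMap from it is with kubectl (a sketch; the ConfigMap name and namespace here just mirror the examples above):
kubectl create configmap darwined-env --from-env-file=.env -n development --dry-run=client -o yaml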
Hope this helps.