PostgreSQL database still created after changing the environment variable in a Kubernetes ReplicaSet - postgresql

I created a local Kubernetes cluster using minikube and changed the environment variable for the database name to a value other than postgresql. I see that every time I delete a pod and it is recreated, it creates the default database 'postgres' as well as the one I configured in the environment variables. Is that normal, or did I do something wrong? I only want a single database to be created, with the name I define in the environment variable.
Configuration file:
kind: ReplicaSet
apiVersion: apps/v1
metadata:
name: postgresql-kubernetes-1-7f6b9f97cf
namespace: default
uid: a50e813d-4110-47fc-9708-69e6990d0355
resourceVersion: '1818'
generation: 1
creationTimestamp: '2022-11-19T16:34:04Z'
labels:
k8s-app: postgresql-kubernetes-1
pod-template-hash: 7f6b9f97cf
annotations:
deployment.kubernetes.io/desired-replicas: '1'
deployment.kubernetes.io/max-replicas: '2'
deployment.kubernetes.io/revision: '1'
ownerReferences:
- apiVersion: apps/v1
kind: Deployment
name: postgresql-kubernetes-1
uid: bcc1d5e9-82b3-4edb-9b7b-a67baa7c1117
controller: true
blockOwnerDeletion: true
managedFields:
- manager: kube-controller-manager
operation: Update
apiVersion: apps/v1
time: '2022-11-19T16:34:04Z'
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:deployment.kubernetes.io/desired-replicas: {}
f:deployment.kubernetes.io/max-replicas: {}
f:deployment.kubernetes.io/revision: {}
f:labels:
.: {}
f:k8s-app: {}
f:pod-template-hash: {}
f:ownerReferences:
.: {}
k:{"uid":"bcc1d5e9-82b3-4edb-9b7b-a67baa7c1117"}: {}
f:spec:
f:replicas: {}
f:selector: {}
f:template:
f:metadata:
f:labels:
.: {}
f:k8s-app: {}
f:pod-template-hash: {}
f:name: {}
f:spec:
f:containers:
k:{"name":"postgresql-kubernetes-1"}:
.: {}
f:env:
.: {}
k:{"name":"POSTGRES_DB"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"POSTGRES_PASSWORD"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"POSTGRES_USER"}:
.: {}
f:name: {}
f:value: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources: {}
f:securityContext:
.: {}
f:privileged: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
- manager: kube-controller-manager
operation: Update
apiVersion: apps/v1
time: '2022-11-19T17:02:24Z'
fieldsType: FieldsV1
fieldsV1:
f:status:
f:availableReplicas: {}
f:fullyLabeledReplicas: {}
f:observedGeneration: {}
f:readyReplicas: {}
f:replicas: {}
subresource: status
spec:
replicas: 1
selector:
matchLabels:
k8s-app: postgresql-kubernetes-1
pod-template-hash: 7f6b9f97cf
template:
metadata:
name: postgresql-kubernetes-1
creationTimestamp: null
labels:
k8s-app: postgresql-kubernetes-1
pod-template-hash: 7f6b9f97cf
spec:
containers:
- name: postgresql-kubernetes-1
image: postgres
env:
- name: POSTGRES_DB
value: postgresql_kubernetes_1
- name: POSTGRES_USER
value: superuseryeababy
- name: POSTGRES_PASSWORD
value: superpasswordyeababy
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
securityContext:
privileged: false
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
securityContext: {}
schedulerName: default-scheduler
status:
replicas: 1
fullyLabeledReplicas: 1
readyReplicas: 1
availableReplicas: 1
observedGeneration: 1
Please answer my question and help me understand how a pod actually works.
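In case it helps with debugging, here is a minimal way to list the databases the container actually initialized; the pod name suffix is a placeholder, and the user and database names are taken from the manifest above:
# List all databases inside the running pod (replace the pod name
# suffix with the one shown by `kubectl get pods`).
kubectl exec -it postgresql-kubernetes-1-7f6b9f97cf-xxxxx -- \
  psql -U superuseryeababy -d postgresql_kubernetes_1 -c '\l'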

Related

Kubernetes replica does not receive traffic

I am trying to understand how Kubernetes replicas work, and I am getting unexpected (?) behavior. If I understand correctly, when a Service selects a Deployment it will distribute requests across all pods. My Deployment has 3 replicas and the pods are selected properly by the Service, but requests only go to one of them (the other two remain unused).
I followed this tutorial, and after scaling the Deployment I made multiple GET requests to the Service. I expected the requests to be distributed across the replicas, but only one replica received and handled all of them. I am not sure if that is how it works or whether I need to do something else.
I would like to add that my first test was a very simple endpoint that resolves the request immediately. When I added load to the endpoint (a delay before resolving), it did start sending requests to the other replicas. I would love to understand how this works; I haven't been able to find any docs about it.
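As a quick sanity check (a sketch, not a definitive diagnosis), the following confirms that all three pods are registered behind the Service and lets you see which pod answers each request; the external IP is a placeholder taken from `kubectl get service express-echo`:
# All three pod IPs should be listed here if the Service selector
# matches the Deployment's pods.
kubectl get endpoints express-echo

# Fire a handful of requests in a row; with the default iptables
# kube-proxy mode, the serving pod is picked per connection.
for i in $(seq 1 10); do curl -s http://<EXTERNAL-IP>/; echo; done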
This is the deployment yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2022-11-23T15:26:36Z"
generation: 2
labels:
app: express-echo
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:replicas: {}
manager: kubectl
operation: Update
subresource: scale
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:progressDeadlineSeconds: {}
f:revisionHistoryLimit: {}
f:selector: {}
f:strategy:
f:rollingUpdate:
.: {}
f:maxSurge: {}
f:maxUnavailable: {}
f:type: {}
f:template:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:containers:
k:{"name":"express-echo"}:
.: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
manager: kubectl-create
operation: Update
time: "2022-11-23T15:26:36Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:deployment.kubernetes.io/revision: {}
f:status:
f:availableReplicas: {}
f:conditions:
.: {}
k:{"type":"Available"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"Progressing"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
f:observedGeneration: {}
f:readyReplicas: {}
f:replicas: {}
f:updatedReplicas: {}
manager: kube-controller-manager
operation: Update
subresource: status
time: "2022-11-23T15:28:18Z"
name: express-echo
namespace: default
resourceVersion: "5192"
uid: 32288873-1e30-44a1-9226-0214c1becd35
spec:
progressDeadlineSeconds: 600
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app: express-echo
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: express-echo
spec:
containers:
- image: gcr.io/gcp-project/express-echo:1.0.0
imagePullPolicy: IfNotPresent
name: express-echo
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 3
conditions:
- lastTransitionTime: "2022-11-23T15:26:36Z"
lastUpdateTime: "2022-11-23T15:27:01Z"
message: ReplicaSet "express-echo-547f8bcfb5" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
- lastTransitionTime: "2022-11-23T15:28:18Z"
lastUpdateTime: "2022-11-23T15:28:18Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 2
readyReplicas: 3
replicas: 3
And this is the service
apiVersion: v1
kind: Service
metadata:
annotations:
cloud.google.com/neg: '{"ingress":true}'
creationTimestamp: "2022-11-23T15:26:48Z"
finalizers:
- service.kubernetes.io/load-balancer-cleanup
labels:
app: express-echo
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:allocateLoadBalancerNodePorts: {}
f:externalTrafficPolicy: {}
f:internalTrafficPolicy: {}
f:ports:
.: {}
k:{"port":80,"protocol":"TCP"}:
.: {}
f:port: {}
f:protocol: {}
f:targetPort: {}
f:selector: {}
f:sessionAffinity: {}
f:type: {}
manager: kubectl-expose
operation: Update
time: "2022-11-23T15:26:48Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
.: {}
v:"service.kubernetes.io/load-balancer-cleanup": {}
f:status:
f:loadBalancer:
f:ingress: {}
manager: kube-controller-manager
operation: Update
subresource: status
time: "2022-11-23T15:27:24Z"
name: express-echo
namespace: default
resourceVersion: "4765"
uid: 99346a8a-1e89-476e-a21f-0d9c98d86b7d
spec:
allocateLoadBalancerNodePorts: true
clusterIP: 10.0.8.195
clusterIPs:
- 10.0.8.195
externalTrafficPolicy: Cluster
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- nodePort: 31123
port: 80
protocol: TCP
targetPort: 3001
selector:
app: express-echo
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: 1.1.1.1

Getting "response 404 (backend NotFound), service rules for the path non-existent" Using Ingress Google Cloud

I want my backend service, which is deployed as a Kubernetes service, to be reachable through an Ingress with the path /sso-dev/. For that I have deployed it on a Kubernetes cluster; the Deployment, Service and Ingress manifests are given below. But when accessing the Ingress load balancer API with the path /sso-dev/ it throws a "response 404 (backend NotFound), service rules for the path non-existent" error.
I just need help accessing the backend service, which works fine via the Kubernetes load balancer IP.
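A check that can help here (just a sketch, run against the names in the manifests below) is to look at what the GCE ingress controller thinks its rules and backends are:
# Lists the resolved paths, backends and their reported health for
# this Ingress, which is where the 404 backend NotFound usually shows up.
kubectl describe ingress my-ingress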
Here is my Ingress configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/backends: '{"k8s-be-30969--6d0e236a1c7d6409":"HEALTHY","k8s1-6d0e236a-default-sso-dev-service-80-849fdb46":"HEALTHY"}'
ingress.kubernetes.io/forwarding-rule: k8s2-fr-uwdva40x-default-my-ingress-h98d0sfl
ingress.kubernetes.io/target-proxy: k8s2-tp-uwdva40x-default-my-ingress-h98d0sfl
ingress.kubernetes.io/url-map: k8s2-um-uwdva40x-default-my-ingress-h98d0sfl
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"nginx.ingress.kubernetes.io/backend-protocol":"HTTP","nginx.ingress.kubernetes.io/rewrite-target":"/"},"name":"my-ingress","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"service":{"name":"sso-dev-service","port":{"number":80}}},"path":"/sso-dev/*","pathType":"ImplementationSpecific"}]}}]}}
nginx.ingress.kubernetes.io/backend-protocol: HTTP
nginx.ingress.kubernetes.io/rewrite-target: /
creationTimestamp: "2022-06-22T12:30:49Z"
finalizers:
- networking.gke.io/ingress-finalizer-V2
generation: 1
managedFields:
- apiVersion: networking.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:nginx.ingress.kubernetes.io/backend-protocol: {}
f:nginx.ingress.kubernetes.io/rewrite-target: {}
f:spec:
f:rules: {}
manager: kubectl-client-side-apply
operation: Update
time: "2022-06-22T12:30:49Z"
- apiVersion: networking.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:ingress.kubernetes.io/backends: {}
f:ingress.kubernetes.io/forwarding-rule: {}
f:ingress.kubernetes.io/target-proxy: {}
f:ingress.kubernetes.io/url-map: {}
f:finalizers:
.: {}
v:"networking.gke.io/ingress-finalizer-V2": {}
f:status:
f:loadBalancer:
f:ingress: {}
manager: glbc
operation: Update
subresource: status
time: "2022-06-22T12:32:13Z"
name: my-ingress
namespace: default
resourceVersion: "13073497"
uid: 253e067f-0711-4d24-a706-497692dae4d9
spec:
rules:
- http:
paths:
- backend:
service:
name: sso-dev-service
port:
number: 80
path: /sso-dev/*
pathType: ImplementationSpecific
status:
loadBalancer:
ingress:
- ip: 34.111.49.35
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2022-06-22T08:52:11Z"
generation: 1
labels:
app: sso-dev
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:progressDeadlineSeconds: {}
f:replicas: {}
f:revisionHistoryLimit: {}
f:selector: {}
f:strategy:
f:rollingUpdate:
.: {}
f:maxSurge: {}
f:maxUnavailable: {}
f:type: {}
f:template:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:containers:
k:{"name":"cent-sha256-1"}:
.: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
manager: GoogleCloudConsole
operation: Update
time: "2022-06-22T08:52:11Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:deployment.kubernetes.io/revision: {}
f:status:
f:availableReplicas: {}
f:conditions:
.: {}
k:{"type":"Available"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"Progressing"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
f:observedGeneration: {}
f:readyReplicas: {}
f:replicas: {}
f:updatedReplicas: {}
manager: kube-controller-manager
operation: Update
subresource: status
time: "2022-06-22T11:51:22Z"
name: sso-dev
namespace: default
resourceVersion: "13051665"
uid: c8732885-b7d8-450c-86c4-19769638eb2a
spec:
progressDeadlineSeconds: 600
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app: sso-dev
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: sso-dev
spec:
containers:
- image: us-east4-docker.pkg.dev/centegycloud-351515/sso/cent#sha256:64b50553219db358945bf3cd6eb865dd47d0d45664464a9c334602c438bbaed9
imagePullPolicy: IfNotPresent
name: cent-sha256-1
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 3
conditions:
- lastTransitionTime: "2022-06-22T08:52:11Z"
lastUpdateTime: "2022-06-22T08:52:25Z"
message: ReplicaSet "sso-dev-8566f4bc55" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
- lastTransitionTime: "2022-06-22T11:51:22Z"
lastUpdateTime: "2022-06-22T11:51:22Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 1
readyReplicas: 3
replicas: 3
updatedReplicas: 3
Service.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
cloud.google.com/neg: '{"ingress":true}'
cloud.google.com/neg-status: '{"network_endpoint_groups":{"80":"k8s1-6d0e236a-default-sso-dev-service-80-849fdb46"},"zones":["us-central1-c"]}'
creationTimestamp: "2022-06-22T08:53:32Z"
finalizers:
- service.kubernetes.io/load-balancer-cleanup
labels:
app: sso-dev
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:allocateLoadBalancerNodePorts: {}
f:externalTrafficPolicy: {}
f:internalTrafficPolicy: {}
f:ports:
.: {}
k:{"port":80,"protocol":"TCP"}:
.: {}
f:port: {}
f:protocol: {}
f:targetPort: {}
f:selector: {}
f:sessionAffinity: {}
f:type: {}
manager: GoogleCloudConsole
operation: Update
time: "2022-06-22T08:53:32Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
.: {}
v:"service.kubernetes.io/load-balancer-cleanup": {}
f:status:
f:loadBalancer:
f:ingress: {}
manager: kube-controller-manager
operation: Update
subresource: status
time: "2022-06-22T08:53:58Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:cloud.google.com/neg-status: {}
manager: glbc
operation: Update
subresource: status
time: "2022-06-22T12:30:49Z"
name: sso-dev-service
namespace: default
resourceVersion: "13071362"
uid: 03b0cbe6-1ed8-4441-b2c5-93ae5803a582
spec:
allocateLoadBalancerNodePorts: true
clusterIP: 10.32.6.103
clusterIPs:
- 10.32.6.103
externalTrafficPolicy: Cluster
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- nodePort: 30584
port: 80
protocol: TCP
targetPort: 8080
selector:
app: sso-dev
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: 104.197.93.226
You need to change the pathType to Prefix in your Ingress, as follows:
pathType: Prefix
You are currently using pathType: ImplementationSpecific. With this value, matching depends on the IngressClass, so for your case pathType: Prefix should be more helpful. Additionally, you can find more information about the path types supported by Kubernetes Ingress in the Ingress documentation (Path types section).
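As a sketch of what that change looks like, the rules section of the Ingress above would become the following; the trailing /* is dropped because Prefix matching does not use wildcards:
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: sso-dev-service
            port:
              number: 80
        path: /sso-dev
        pathType: Prefix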

why the postgresql kubernetes statefulset did not claim the PVC

Today I wanted to change the PostgreSQL StatefulSet's PVC name, but to my surprise I could not find any claim on the PVC in the Kubernetes definition. This is the Kubernetes StatefulSet definition of PostgreSQL:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: reddwarf-postgresql-postgresql
namespace: reddwarf-storage
uid: 787a18c8-f6fb-4deb-bb07-3c3d123cf6f9
resourceVersion: '21931453'
generation: 30
creationTimestamp: '2021-08-05T05:29:03Z'
labels:
app.kubernetes.io/component: primary
app.kubernetes.io/instance: reddwarf-postgresql
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-10.9.1
annotations:
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{"meta.helm.sh/release-name":"reddwarf-postgresql","meta.helm.sh/release-namespace":"reddwarf-storage"},"creationTimestamp":"2021-08-05T05:29:03Z","generation":12,"labels":{"app.kubernetes.io/component":"primary","app.kubernetes.io/instance":"reddwarf-postgresql","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"postgresql","helm.sh/chart":"postgresql-10.9.1"},"managedFields":[{"apiVersion":"apps/v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/name":{},"f:helm.sh/chart":{}}},"f:spec":{"f:podManagementPolicy":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:serviceName":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/name":{},"f:helm.sh/chart":{},"f:role":{}},"f:name":{}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:automountServiceAccountToken":{},"f:containers":{"k:{\"name\":\"reddwarf-postgresql\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"BITNAMI_DEBUG\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"PGDATA\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_CLIENT_MIN_MESSAGES\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_ENABLE_LDAP\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_ENABLE_TLS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_LOG_CONNECTIONS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_LOG_DISCONNECTIONS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_LOG_HOSTNAME\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_PGAUDIT_LOG_CATALOG\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_PORT_NUMBER\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_SHARED_PRELOAD_LIBRARIES\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_VOLUME_DIR\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRES_PASSWORD\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:secretKeyRef":{".":{},"f:key":{},"f:name":{}}}},"k:{\"name\":\"POSTGRES_USER\"}":{".":{},"f:name":{},"f:value":{}}},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5432,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/bitnami/postgresql\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/dev/shm\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{".":{},"f:fsGroup":{}},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\
"name\":\"dshm\"}":{".":{},"f:emptyDir":{".":{},"f:medium":{}},"f:name":{}}}}},"f:updateStrategy":{"f:type":{}},"f:volumeClaimTemplates":{}}},"manager":"Go-http-client","operation":"Update","time":"2021-08-05T05:29:03Z"},{"apiVersion":"apps/v1","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:template":{"f:spec":{"f:containers":{"k:{\"name\":\"reddwarf-postgresql\"}":{"f:image":{}}}}}}},"manager":"kubectl-client-side-apply","operation":"Update","time":"2021-08-10T16:50:45Z"},{"apiVersion":"apps/v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:template":{"f:spec":{"f:containers":{"k:{\"name\":\"reddwarf-postgresql\"}":{"f:args":{}}}}}}},"manager":"kubectl","operation":"Update","time":"2021-08-11T01:46:21Z"},{"apiVersion":"apps/v1","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:collisionCount":{},"f:currentReplicas":{},"f:currentRevision":{},"f:observedGeneration":{},"f:replicas":{},"f:updateRevision":{},"f:updatedReplicas":{}}},"manager":"kube-controller-manager","operation":"Update","time":"2021-08-11T02:24:07Z"}],"name":"reddwarf-postgresql-postgresql","namespace":"reddwarf-storage","selfLink":"/apis/apps/v1/namespaces/reddwarf-storage/statefulsets/reddwarf-postgresql-postgresql","uid":"787a18c8-f6fb-4deb-bb07-3c3d123cf6f9"},"spec":{"podManagementPolicy":"OrderedReady","replicas":1,"revisionHistoryLimit":10,"selector":{"matchLabels":{"app.kubernetes.io/instance":"reddwarf-postgresql","app.kubernetes.io/name":"postgresql","role":"primary"}},"serviceName":"reddwarf-postgresql-headless","template":{"metadata":{"creationTimestamp":null,"labels":{"app.kubernetes.io/component":"primary","app.kubernetes.io/instance":"reddwarf-postgresql","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"postgresql","helm.sh/chart":"postgresql-10.9.1","role":"primary"},"name":"reddwarf-postgresql"},"spec":{"affinity":{"podAntiAffinity":{"preferredDuringSchedulingIgnoredDuringExecution":[{"podAffinityTerm":{"labelSelector":{"matchLabels":{"app.kubernetes.io/component":"primary","app.kubernetes.io/instance":"reddwarf-postgresql","app.kubernetes.io/name":"postgresql"}},"namespaces":["reddwarf-storage"],"topologyKey":"kubernetes.io/hostname"},"weight":1}]}},"automountServiceAccountToken":false,"containers":[{"env":[{"name":"BITNAMI_DEBUG","value":"false"},{"name":"POSTGRESQL_PORT_NUMBER","value":"5432"},{"name":"POSTGRESQL_VOLUME_DIR","value":"/bitnami/postgresql"},{"name":"PGDATA","value":"/bitnami/postgresql/data"},{"name":"POSTGRES_USER","value":"postgres"},{"name":"POSTGRES_PASSWORD","valueFrom":{"secretKeyRef":{"key":"postgresql-password","name":"reddwarf-postgresql"}}},{"name":"POSTGRESQL_ENABLE_LDAP","value":"no"},{"name":"POSTGRESQL_ENABLE_TLS","value":"no"},{"name":"POSTGRESQL_LOG_HOSTNAME","value":"false"},{"name":"POSTGRESQL_LOG_CONNECTIONS","value":"false"},{"name":"POSTGRESQL_LOG_DISCONNECTIONS","value":"false"},{"name":"POSTGRESQL_PGAUDIT_LOG_CATALOG","value":"off"},{"name":"POSTGRESQL_CLIENT_MIN_MESSAGES","value":"error"},{"name":"POSTGRESQL_SHARED_PRELOAD_LIBRARIES","value":"pgaudit"}],"image":"docker.io/bitnami/postgresql:13.3.0-debian-10-r75","imagePullPolicy":"IfNotPresent","livenessProbe":{"exec":{"command":["/bin/sh","-c","exec
pg_isready -U \"postgres\" -h 127.0.0.1 -p
5432"]},"failureThreshold":6,"initialDelaySeconds":30,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":5},"name":"reddwarf-postgresql","ports":[{"containerPort":5432,"name":"tcp-postgresql","protocol":"TCP"}],"readinessProbe":{"exec":{"command":["/bin/sh","-c","-e","exec
pg_isready -U \"postgres\" -h 127.0.0.1 -p 5432\n[ -f
/opt/bitnami/postgresql/tmp/.initialized ] || [ -f
/bitnami/postgresql/.initialized
]\n"]},"failureThreshold":6,"initialDelaySeconds":5,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":5},"resources":{"requests":{"cpu":"250m","memory":"256Mi"}},"securityContext":{"runAsUser":1001},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","volumeMounts":[{"mountPath":"/dev/shm","name":"dshm"},{"mountPath":"/bitnami/postgresql","name":"data"}]}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{"fsGroup":1001},"terminationGracePeriodSeconds":30,"volumes":[{"emptyDir":{"medium":"Memory"},"name":"dshm"}]}},"updateStrategy":{"type":"RollingUpdate"},"volumeClaimTemplates":[{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"creationTimestamp":null,"name":"data"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"8Gi"}},"volumeMode":"Filesystem"},"status":{"phase":"Pending"}}]}}
meta.helm.sh/release-name: reddwarf-postgresql
meta.helm.sh/release-namespace: reddwarf-storage
managedFields:
- manager: Go-http-client
operation: Update
apiVersion: apps/v1
time: '2021-08-05T05:29:03Z'
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
.: {}
'f:meta.helm.sh/release-name': {}
'f:meta.helm.sh/release-namespace': {}
'f:labels':
.: {}
'f:app.kubernetes.io/component': {}
'f:app.kubernetes.io/instance': {}
'f:app.kubernetes.io/managed-by': {}
'f:app.kubernetes.io/name': {}
'f:helm.sh/chart': {}
'f:spec':
'f:podManagementPolicy': {}
'f:replicas': {}
'f:revisionHistoryLimit': {}
'f:selector': {}
'f:serviceName': {}
'f:template':
'f:metadata':
'f:labels':
.: {}
'f:app.kubernetes.io/component': {}
'f:app.kubernetes.io/instance': {}
'f:app.kubernetes.io/managed-by': {}
'f:app.kubernetes.io/name': {}
'f:helm.sh/chart': {}
'f:role': {}
'f:name': {}
'f:spec':
'f:affinity':
.: {}
'f:podAntiAffinity':
.: {}
'f:preferredDuringSchedulingIgnoredDuringExecution': {}
'f:automountServiceAccountToken': {}
'f:containers':
'k:{"name":"reddwarf-postgresql"}':
.: {}
'f:env':
.: {}
'k:{"name":"BITNAMI_DEBUG"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"PGDATA"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_CLIENT_MIN_MESSAGES"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_ENABLE_LDAP"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_ENABLE_TLS"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_LOG_CONNECTIONS"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_LOG_DISCONNECTIONS"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_LOG_HOSTNAME"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_PGAUDIT_LOG_CATALOG"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_PORT_NUMBER"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_SHARED_PRELOAD_LIBRARIES"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_VOLUME_DIR"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRES_PASSWORD"}':
.: {}
'f:name': {}
'f:valueFrom':
.: {}
'f:secretKeyRef':
.: {}
'f:key': {}
'f:name': {}
'k:{"name":"POSTGRES_USER"}':
.: {}
'f:name': {}
'f:value': {}
'f:imagePullPolicy': {}
'f:livenessProbe':
.: {}
'f:exec':
.: {}
'f:command': {}
'f:failureThreshold': {}
'f:initialDelaySeconds': {}
'f:periodSeconds': {}
'f:successThreshold': {}
'f:timeoutSeconds': {}
'f:name': {}
'f:ports':
.: {}
'k:{"containerPort":5432,"protocol":"TCP"}':
.: {}
'f:containerPort': {}
'f:name': {}
'f:protocol': {}
'f:readinessProbe':
.: {}
'f:exec':
.: {}
'f:command': {}
'f:failureThreshold': {}
'f:initialDelaySeconds': {}
'f:periodSeconds': {}
'f:successThreshold': {}
'f:timeoutSeconds': {}
'f:resources':
.: {}
'f:requests':
.: {}
'f:cpu': {}
'f:memory': {}
'f:securityContext':
.: {}
'f:runAsUser': {}
'f:terminationMessagePath': {}
'f:terminationMessagePolicy': {}
'f:volumeMounts':
.: {}
'k:{"mountPath":"/bitnami/postgresql"}':
.: {}
'f:mountPath': {}
'f:name': {}
'k:{"mountPath":"/dev/shm"}':
.: {}
'f:mountPath': {}
'f:name': {}
'f:dnsPolicy': {}
'f:restartPolicy': {}
'f:schedulerName': {}
'f:securityContext':
.: {}
'f:fsGroup': {}
'f:terminationGracePeriodSeconds': {}
'f:volumes':
.: {}
'k:{"name":"dshm"}':
.: {}
'f:emptyDir':
.: {}
'f:medium': {}
'f:name': {}
'f:updateStrategy':
'f:type': {}
'f:volumeClaimTemplates': {}
- manager: kubectl-client-side-apply
operation: Update
apiVersion: apps/v1
time: '2021-08-10T16:50:45Z'
fieldsType: FieldsV1
fieldsV1:
'f:spec':
'f:template':
'f:spec':
'f:containers':
'k:{"name":"reddwarf-postgresql"}':
'f:image': {}
- manager: kubectl
operation: Update
apiVersion: apps/v1
time: '2021-08-11T02:29:20Z'
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
'f:kubectl.kubernetes.io/last-applied-configuration': {}
- manager: kube-controller-manager
operation: Update
apiVersion: apps/v1
time: '2021-11-27T03:07:58Z'
fieldsType: FieldsV1
fieldsV1:
'f:status':
'f:collisionCount': {}
'f:currentRevision': {}
'f:observedGeneration': {}
'f:replicas': {}
'f:updateRevision': {}
selfLink: >-
/apis/apps/v1/namespaces/reddwarf-storage/statefulsets/reddwarf-postgresql-postgresql
status:
observedGeneration: 30
replicas: 0
currentRevision: reddwarf-postgresql-postgresql-5695cb9676
updateRevision: reddwarf-postgresql-postgresql-5695cb9676
collisionCount: 0
spec:
replicas: 0
selector:
matchLabels:
app.kubernetes.io/instance: reddwarf-postgresql
app.kubernetes.io/name: postgresql
role: primary
template:
metadata:
name: reddwarf-postgresql
creationTimestamp: null
labels:
app.kubernetes.io/component: primary
app.kubernetes.io/instance: reddwarf-postgresql
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-10.9.1
role: primary
spec:
volumes:
- name: dshm
emptyDir:
medium: Memory
containers:
- name: reddwarf-postgresql
image: 'docker.io/bitnami/postgresql:13.3.0-debian-10-r75'
ports:
- name: tcp-postgresql
containerPort: 5432
protocol: TCP
env:
- name: BITNAMI_DEBUG
value: 'false'
- name: POSTGRESQL_PORT_NUMBER
value: '5432'
- name: POSTGRESQL_VOLUME_DIR
value: /bitnami/postgresql
- name: PGDATA
value: /bitnami/postgresql/data
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: reddwarf-postgresql
key: postgresql-password
- name: POSTGRESQL_ENABLE_LDAP
value: 'no'
- name: POSTGRESQL_ENABLE_TLS
value: 'no'
- name: POSTGRESQL_LOG_HOSTNAME
value: 'false'
- name: POSTGRESQL_LOG_CONNECTIONS
value: 'false'
- name: POSTGRESQL_LOG_DISCONNECTIONS
value: 'false'
- name: POSTGRESQL_PGAUDIT_LOG_CATALOG
value: 'off'
- name: POSTGRESQL_CLIENT_MIN_MESSAGES
value: error
- name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES
value: pgaudit
resources:
requests:
cpu: 250m
memory: 256Mi
volumeMounts:
- name: dshm
mountPath: /dev/shm
- name: data
mountPath: /bitnami/postgresql
livenessProbe:
exec:
command:
- /bin/sh
- '-c'
- exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432
initialDelaySeconds: 30
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 6
readinessProbe:
exec:
command:
- /bin/sh
- '-c'
- '-e'
- >
exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432
[ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f
/bitnami/postgresql/.initialized ]
initialDelaySeconds: 5
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 6
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 1001
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
automountServiceAccountToken: false
securityContext:
fsGroup: 1001
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/component: primary
app.kubernetes.io/instance: reddwarf-postgresql
app.kubernetes.io/name: postgresql
namespaces:
- reddwarf-storage
topologyKey: kubernetes.io/hostname
schedulerName: default-scheduler
volumeClaimTemplates:
- kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: data
creationTimestamp: null
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
volumeMode: Filesystem
status:
phase: Pending
serviceName: reddwarf-postgresql-headless
podManagementPolicy: OrderedReady
updateStrategy:
type: RollingUpdate
revisionHistoryLimit: 10
This StatefulSet is bound to the PVC named data-reddwarf-postgresql-postgresql-0 right now, but I did not find that PVC defined in this StatefulSet YAML. Where is the PVC binding defined? What should I do to bind the StatefulSet to a new PVC? I installed this PostgreSQL into Kubernetes from a Helm chart.
The PVC that gets created as part of a StatefulSet has a name which is an amalgamation of three components joined by -:
1. the name defined in the volumeClaimTemplates section, here data
2. the name of the StatefulSet in the metadata section, here reddwarf-postgresql-postgresql
3. its replica ordinal; for the first replica this is 0
So the name of the PVC that gets created for this StatefulSet is data-reddwarf-postgresql-postgresql-0, which is the PVC name you are seeing in your setup.
Please note that when you delete the StatefulSet, the PVC is not deleted automatically; you need to delete the PVC separately. When you recreate or scale up the StatefulSet, a new PVC is created only if no PVC matching the naming convention and spec above already exists. See the StatefulSet section of the Kubernetes documentation for details.
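As an illustration (the command output below is only a sketch of the expected shape), the pieces line up like this for this chart:
# <volumeClaimTemplate name>-<StatefulSet name>-<ordinal>
#  data-reddwarf-postgresql-postgresql-0
kubectl get pvc -n reddwarf-storage
# NAME                                    STATUS   VOLUME   CAPACITY   ...
# data-reddwarf-postgresql-postgresql-0   Bound    ...      8Gi        ...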

How to use SSL requests from Kubernetes ingress to pods

I am deploying a Kubernetes application with the GitLab Kubernetes integration.
I ran into an issue: after switching the pods (containers) to SSL, the browser responds with:
Bad Request
Your browser sent a request that this server could not understand.
Reason: You're speaking plain HTTP to an SSL-enabled server port.
Instead use the HTTPS scheme to access this URL, please.
Apache/2.4.38 (Debian) Server at docker.vm Port 80
I am accessing the URL in the browser as https://***********.eu/ and have no idea why the request is downgraded from HTTPS to HTTP inside Kubernetes on the way to the pods.
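One way to narrow down where the downgrade happens (just a sketch; the pod IP is a placeholder from `kubectl get pods -o wide`) is to talk to the container directly over TLS from inside the cluster:
# If this succeeds, the pod itself serves HTTPS correctly and the
# plain-HTTP request is being made by the ingress hop in front of it.
curl -vk https://<pod-ip>:443/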
My Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
field.cattle.io/publicEndpoints: '[{"addresses":["******"],"port":443,"protocol":"HTTPS","serviceName":"******","ingressName":"******","hostname":"******","path":"/","allNodes":true},{"addresses":["******"],"port":443,"protocol":"HTTPS","serviceName":"******","ingressName":"******","hostname":"******","path":"/","allNodes":true}]'
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
creationTimestamp: "2021-05-21T12:54:44Z"
generation: 1
labels:
app: development
chart: auto-deploy-app-1.0.7
heritage: Tiller
release: development
managedFields:
- apiVersion: extensions/v1beta1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubernetes.io/ingress.class: {}
f:kubernetes.io/tls-acme: {}
f:labels:
.: {}
f:app: {}
f:chart: {}
f:heritage: {}
f:release: {}
f:spec:
f:rules: {}
f:tls: {}
manager: Go-http-client
operation: Update
time: "2021-05-21T12:54:44Z"
- apiVersion: networking.k8s.io/v1beta1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:loadBalancer:
f:ingress: {}
manager: nginx-ingress-controller
operation: Update
time: "2021-05-21T12:55:25Z"
- apiVersion: extensions/v1beta1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:field.cattle.io/publicEndpoints: {}
manager: rancher
operation: Update
time: "2021-05-21T12:55:25Z"
name: development-auto-deploy
namespace: ******
resourceVersion: "******"
selfLink: /apis/networking.k8s.io/v1/namespaces/******
uid: ******
spec:
rules:
- host: ******
http:
paths:
- backend:
service:
name: development-auto-deploy
port:
number: 443
path: /
pathType: ImplementationSpecific
- host: ******
http:
paths:
- backend:
service:
name: development-auto-deploy
port:
number: 443
path: /
pathType: ImplementationSpecific
tls:
- hosts:
- ******
- ******
secretName: development-auto-deploy-tls
My Service.yaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2021-05-21T12:54:44Z"
labels:
app: development
chart: auto-deploy-app-1.0.7
heritage: Tiller
release: development
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:app: {}
f:chart: {}
f:heritage: {}
f:release: {}
f:spec:
f:ports:
.: {}
k:{"port":443,"protocol":"TCP"}:
.: {}
f:name: {}
f:port: {}
f:protocol: {}
f:targetPort: {}
f:selector:
.: {}
f:app: {}
f:tier: {}
f:sessionAffinity: {}
f:type: {}
manager: Go-http-client
operation: Update
time: "2021-05-21T12:54:44Z"
name: development-auto-deploy
namespace: ******
resourceVersion: "******"
selfLink: /api/v1/namespaces/******
uid: ******
spec:
clusterIP: ******
ports:
- name: web
port: 443
protocol: TCP
targetPort: 443
selector:
app: development
tier: web
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
And deployment.yaml for the pod deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
app.gitlab.com/app: *******
app.gitlab.com/env: development
deployment.kubernetes.io/revision: "1"
field.cattle.io/publicEndpoints: '[{"addresses":["*******"],"port":443,"protocol":"HTTPS","serviceName":"*******","ingressName":"*******","hostname":"*******","path":"/","allNodes":true},{"addresses":["*******"],"port":443,"protocol":"HTTPS","serviceName":"*******","ingressName":"*******","hostname":"*******","path":"/","allNodes":true}]'
creationTimestamp: "2021-05-21T12:54:44Z"
generation: 2
labels:
app: development
chart: auto-deploy-app-1.0.7
heritage: Tiller
release: development
tier: web
track: stable
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:app.gitlab.com/app: {}
f:app.gitlab.com/env: {}
f:labels:
.: {}
f:app: {}
f:chart: {}
f:heritage: {}
f:release: {}
f:tier: {}
f:track: {}
f:spec:
f:progressDeadlineSeconds: {}
f:replicas: {}
f:revisionHistoryLimit: {}
f:selector:
f:matchLabels:
.: {}
f:app: {}
f:release: {}
f:tier: {}
f:track: {}
f:strategy:
f:rollingUpdate:
.: {}
f:maxSurge: {}
f:maxUnavailable: {}
f:type: {}
f:template:
f:metadata:
f:annotations:
.: {}
f:app.gitlab.com/app: {}
f:app.gitlab.com/env: {}
f:checksum/application-secrets: {}
f:labels:
.: {}
f:app: {}
f:release: {}
f:tier: {}
f:track: {}
f:spec:
f:containers:
k:{"name":"auto-deploy-app"}:
.: {}
f:env:
.: {}
k:{"name":"DATABASE_URL"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"GITLAB_ENVIRONMENT_NAME"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"GITLAB_ENVIRONMENT_URL"}:
.: {}
f:name: {}
f:value: {}
f:envFrom: {}
f:image: {}
f:imagePullPolicy: {}
f:livenessProbe:
.: {}
f:failureThreshold: {}
f:httpGet:
.: {}
f:path: {}
f:port: {}
f:scheme: {}
f:initialDelaySeconds: {}
f:periodSeconds: {}
f:successThreshold: {}
f:timeoutSeconds: {}
f:name: {}
f:ports:
.: {}
k:{"containerPort":443,"protocol":"TCP"}:
.: {}
f:containerPort: {}
f:name: {}
f:protocol: {}
f:readinessProbe:
.: {}
f:failureThreshold: {}
f:httpGet:
.: {}
f:path: {}
f:port: {}
f:scheme: {}
f:initialDelaySeconds: {}
f:periodSeconds: {}
f:successThreshold: {}
f:timeoutSeconds: {}
f:resources: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:imagePullSecrets:
.: {}
k:{"name":"*******"}:
.: {}
f:name: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
manager: Go-http-client
operation: Update
time: "2021-05-21T12:54:44Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:deployment.kubernetes.io/revision: {}
f:status:
f:availableReplicas: {}
f:conditions:
.: {}
k:{"type":"Available"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"Progressing"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
f:observedGeneration: {}
f:readyReplicas: {}
f:replicas: {}
f:updatedReplicas: {}
manager: kube-controller-manager
operation: Update
time: "2021-05-21T12:54:55Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:field.cattle.io/publicEndpoints: {}
manager: rancher
operation: Update
time: "2021-05-21T12:55:25Z"
name: development
namespace: *******
resourceVersion: "*******"
selfLink: /apis/apps/v1/namespaces/*******
uid: *******
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: development
release: development
tier: web
track: stable
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
annotations:
app.gitlab.com/app: *******
app.gitlab.com/env: development
checksum/application-secrets: *******
creationTimestamp: null
labels:
app: development
release: development
tier: web
track: stable
spec:
containers:
- env:
- name: DATABASE_URL
value: ' '
- name: GITLAB_ENVIRONMENT_NAME
value: development
- name: GITLAB_ENVIRONMENT_URL
value: *******
envFrom:
- secretRef:
name: development-secret
image: *******
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /
port: 443
scheme: HTTPS
initialDelaySeconds: 15
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 15
name: auto-deploy-app
ports:
- containerPort: 443
name: web
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /
port: 443
scheme: HTTPS
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 3
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: *******
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2021-05-21T12:54:55Z"
lastUpdateTime: "2021-05-21T12:54:55Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2021-05-21T12:54:44Z"
lastUpdateTime: "2021-05-21T12:54:55Z"
message: ReplicaSet "*******" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 2
readyReplicas: 1
replicas: 1
updatedReplicas: 1
SSL is being terminated somewhere, and the Kubernetes ingress calls the pods over http:// instead of https://, but I do not know how to fix it.
So the question is: how do I remove SSL termination from the Kubernetes ingress?
If you want SSL termination to happen at the server instead of at the ingress/load balancer, you can use something called SSL passthrough.
The load balancer will then not terminate the SSL request at the ingress; instead, your server must be able to terminate those SSL requests itself.
Use one of these annotations in your ingress.yaml file, depending on your ingress class:
annotations:
  ingress.kubernetes.io/ssl-passthrough: "true"
  nginx.ingress.kubernetes.io/ssl-passthrough: "true"
There is one more annotation you can use with NGINX: the backend-protocol annotation indicates how NGINX should communicate with the backend service.
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
By default NGINX uses HTTP.
Read more about it here: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol
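One caveat worth adding as a hedge: with ingress-nginx, the ssl-passthrough annotation is only honored when the controller itself was started with passthrough support enabled, so both pieces are needed. A minimal sketch:
# 1. Argument on the ingress-nginx controller Deployment/DaemonSet:
#      --enable-ssl-passthrough
#
# 2. Annotation on the Ingress object:
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"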

Kubernetes pods are failing with a volume attach issue

We are seeing some pods fail while mounting/attaching volumes. This happens intermittently, and after bouncing the kubelet service the pods are able to reattach the volumes and succeed. We see the error below when a pod gets stuck.
I created a single PV and a corresponding PVC to provide volume mounts for the pods.
Error:
Warning FailedMount 16m (x4 over 43m) kubelet, ca1md-k8w03-02 Unable to attach or mount volumes: unmounted volumes=[workdir], unattached volumes=[default-token-pwvpc podmetadata docker-sock workdir]: timed out waiting for the condition
Warning FailedMount 7m32s (x5 over 41m) kubelet, ca1md-k8w03-02 Unable to attach or mount volumes: unmounted volumes=[workdir], unattached volumes=[docker-sock workdir default-token-pwvpc podmetadata]: timed out waiting for the condition
Warning FailedMount 3m2s (x10 over 45m) kubelet, ca1md-k8w03-02 Unable to attach or mount volumes: unmounted volumes=[workdir], unattached volumes=[podmetadata docker-sock workdir default-token-pwvpc]: timed out waiting for the condition
Warning FailedMount 45s (x2 over 21m) kubelet, ca1md-k8w03-02 Unable to attach or mount volumes: unmounted volumes=[workdir], unattached volumes=[workdir default-token-pwvpc podmetadata docker-sock]: timed out waiting for the condition
Version:
Client Version: v1.17.2
Server Version: v1.17.2
Host OS:
Centos 7.7
CNI:
Weave
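A couple of checks that often narrow down this kind of intermittent mount timeout (the pod name is a placeholder; the NFS server and export path are the ones from the PV below):
# Show the mount-related events for the stuck pod.
kubectl describe pod <stuck-pod-name> | grep -A 5 FailedMount

# From the affected worker node, confirm the export is visible and
# can be mounted manually.
showmount -e kubemaster01.rms.com
sudo mount -t nfs kubemaster01.rms.com:/k8fs03 /mnt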
apiVersion: v1
items:
- apiVersion: apps/v1
kind: StatefulSet
metadata:
creationTimestamp: "2020-07-17T21:55:47Z"
generation: 1
labels:
logicmonitor.com/collectorset: kubernetes-03
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:logicmonitor.com/collectorset: {}
f:spec:
f:podManagementPolicy: {}
f:replicas: {}
f:revisionHistoryLimit: {}
f:selector:
f:matchLabels:
.: {}
f:logicmonitor.com/collectorset: {}
f:template:
f:metadata:
f:labels:
.: {}
f:logicmonitor.com/collectorset: {}
f:namespace: {}
f:spec:
f:affinity:
.: {}
f:podAntiAffinity:
.: {}
f:requiredDuringSchedulingIgnoredDuringExecution: {}
f:containers:
k:{"name":"collector"}:
.: {}
f:env:
.: {}
k:{"name":"COLLECTOR_IDS"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"access_id"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:secretKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"access_key"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:secretKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"account"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:secretKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"collector_size"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"collector_version"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"kubernetes"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"use_ea"}:
.: {}
f:name: {}
f:value: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources:
.: {}
f:limits:
.: {}
f:memory: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:serviceAccount: {}
f:serviceAccountName: {}
f:terminationGracePeriodSeconds: {}
f:updateStrategy:
f:type: {}
f:status:
f:replicas: {}
manager: collectorset-controller
operation: Update
time: "2020-08-22T03:42:35Z"
name: kubernetes-03
namespace: default
resourceVersion: "10831902"
selfLink: /apis/apps/v1/namespaces/default/statefulsets/kubernetes-03
uid: 1296654b-77bc-4af8-9537-04f0a00bdd0c
spec:
podManagementPolicy: Parallel
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
logicmonitor.com/collectorset: kubernetes-03
serviceName: ""
template:
metadata:
creationTimestamp: null
labels:
logicmonitor.com/collectorset: kubernetes-03
namespace: default
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
logicmonitor.com/collectorset: kubernetes-03
topologyKey: kubernetes.io/hostname
containers:
- env:
- name: account
valueFrom:
secretKeyRef:
key: account
name: collectorset-controller
optional: false
- name: access_id
valueFrom:
secretKeyRef:
key: accessID
name: collectorset-controller
optional: false
- name: access_key
valueFrom:
secretKeyRef:
key: accessKey
name: collectorset-controller
optional: false
- name: kubernetes
value: "true"
- name: collector_size
value: small
- name: collector_version
value: "0"
- name: use_ea
value: "false"
- name: COLLECTOR_IDS
value: "205"
image: logicmonitor/collector:latest
imagePullPolicy: Always
name: collector
resources:
limits:
memory: 2Gi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: collector
serviceAccountName: collector
terminationGracePeriodSeconds: 30
updateStrategy:
type: RollingUpdate
status:
collisionCount: 0
currentReplicas: 1
currentRevision: kubernetes-03-655b46ff69
observedGeneration: 1
readyReplicas: 1
replicas: 1
updateRevision: kubernetes-03-655b46ff69
updatedReplicas: 1
kind: List
metadata:
resourceVersion: ""
selfLink: ""
kubectl get pv -o yaml
apiVersion: v1
items:
- apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
pv.kubernetes.io/bound-by-controller: "yes"
creationTimestamp: "2020-07-10T00:21:49Z"
finalizers:
- kubernetes.io/pv-protection
labels:
type: local
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:type: {}
f:spec:
f:accessModes: {}
f:capacity:
.: {}
f:storage: {}
f:nfs:
.: {}
f:path: {}
f:server: {}
f:persistentVolumeReclaimPolicy: {}
f:storageClassName: {}
f:volumeMode: {}
manager: kubectl
operation: Update
time: "2020-07-10T00:21:49Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:pv.kubernetes.io/bound-by-controller: {}
f:spec:
f:claimRef:
.: {}
f:apiVersion: {}
f:kind: {}
f:name: {}
f:namespace: {}
f:resourceVersion: {}
f:uid: {}
f:status:
f:phase: {}
manager: kube-controller-manager
operation: Update
time: "2020-07-10T00:21:52Z"
name: pv-modeldata
resourceVersion: "15764"
selfLink: /api/v1/persistentvolumes/pv-data
uid: 68521e84-0aa9-4643-a047-441a61451599
spec:
accessModes:
- ReadWriteMany
capacity:
storage: 5999Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: pvc-modeldata
namespace: default
resourceVersion: "15762"
uid: e4a1309e-339d-4ed5-8fe1-9ed32f779ea7
nfs:
path: /k8fs03
server: kubemaster01.rms.com
persistentVolumeReclaimPolicy: Retain
storageClassName: manual
volumeMode: Filesystem
status:
phase: Bound
kind: List
metadata:
resourceVersion: ""
selfLink: ""
# kubectl get pvc -o yaml
apiVersion: v1
items:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
creationTimestamp: "2020-07-10T00:21:52Z"
finalizers:
- kubernetes.io/pvc-protection
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:pv.kubernetes.io/bind-completed: {}
f:pv.kubernetes.io/bound-by-controller: {}
f:spec:
f:volumeName: {}
f:status:
f:accessModes: {}
f:capacity:
.: {}
f:storage: {}
f:phase: {}
manager: kube-controller-manager
operation: Update
time: "2020-07-10T00:21:52Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:accessModes: {}
f:resources:
f:requests:
.: {}
f:storage: {}
f:storageClassName: {}
f:volumeMode: {}
manager: kubectl
operation: Update
time: "2020-07-10T00:21:52Z"
name: pvc-modeldata
namespace: default
resourceVersion: "15766"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/pvc-data
uid: e4a1309e-339d-4ed5-8fe1-9ed32f779ea7
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5999Gi
storageClassName: manual
volumeMode: Filesystem
volumeName: pv-data
status:
accessModes:
- ReadWriteMany
capacity:
storage: 5999Gi
phase: Bound
kind: List
metadata:
resourceVersion: ""
selfLink: ""
# kubectl get sc -o yaml
apiVersion: v1
items: []
kind: List
metadata:
resourceVersion: ""
selfLink: ""
Thanks,
Chittu