External requests to NodePort ingress-nginx end with 404 - Kubernetes

I'm trying to run a K8s cluster on bare-metal servers and I have a problem accessing the apps (Pods) from outside the cluster.
Just to summarize my setup:
a NodePort-exposed ingress-nginx controller with ports for HTTP (30255) and HTTPS (32673), running on one node; netstat shows the ports are open
a simple Hello World pod which listens on port 8080 (tested via its own NodePort that it actually works)
When I try to open the site through the NodePort of the ingress-nginx controller, I receive a 404 error.
curl --verbose --header 'Host: hello-world.info' http://localhost:30255/
* Trying 127.0.0.1:30255...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 30255 (#0)
> GET / HTTP/1.1
> Host: hello-world.info
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Date: Mon, 20 Feb 2023 00:58:36 GMT
< Content-Type: text/html
< Content-Length: 146
< Connection: keep-alive
<
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host localhost left intact
I have no idea what I'm doing wrong, because it should be straightforward. Please see the configurations below.
Configuration of the Ingress for the hello-world app (the "web" Service):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"nginx.ingress.kubernetes.io/rewrite-target":"/$1"},"name":"example-ingress","namespace":"default"},"spec":{"rules":[{"host":"hello-world.info","http":{"paths":[{"backend":{"service":{"name":"web","port":{"number":8080}}},"path":"/","pathType":"Prefix"}]}}]}}
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  creationTimestamp: "2023-02-20T00:57:59Z"
  generation: 1
  managedFields:
  - apiVersion: networking.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
          f:nginx.ingress.kubernetes.io/rewrite-target: {}
      f:spec:
        f:rules: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2023-02-20T00:57:59Z"
  name: example-ingress
  namespace: default
  resourceVersion: "11184494"
  uid: 24e8a8c6-0b44-49e6-bef8-23f51ab549a6
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - backend:
          service:
            name: web
            port:
              number: 8080
        path: /
        pathType: Prefix
status:
  loadBalancer: {}
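A hedged editorial aside for anyone debugging the same symptom: this Ingress declares neither spec.ingressClassName nor the legacy kubernetes.io/ingress.class annotation, and recent ingress-nginx releases only serve Ingresses bound to their IngressClass; an unadmitted Ingress produces exactly this kind of controller-level 404. Separately, rewrite-target: /$1 references capture group $1, but path: / defines no capture group. A sketch of both checks/fixes, assuming the class is named nginx:
kubectl get ingressclass
kubectl patch ingress example-ingress --type merge -p '{"spec":{"ingressClassName":"nginx"}}'
# If the rewrite is kept, give $1 something to capture
# (with rewrite-target set, ingress-nginx treats paths as regexes):
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx   # assumption: the controller's class is "nginx"
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /(.*)          # capture group that $1 refers to
        pathType: ImplementationSpecific
        backend:
          service:
            name: web
            port:
              number: 8080
EOF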
YAML for the "WEB" service of the hello world app
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2023-02-19T23:25:50Z"
  labels:
    app: web
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:externalTrafficPolicy: {}
        f:internalTrafficPolicy: {}
        f:ports: {}
        f:selector: {}
        f:sessionAffinity: {}
    manager: kubectl-expose
    operation: Update
    time: "2023-02-19T23:25:50Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:allocateLoadBalancerNodePorts: {}
        f:ports:
          k:{"port":8080,"protocol":"TCP"}:
            .: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:type: {}
    manager: agent
    operation: Update
    time: "2023-02-20T00:21:28Z"
  name: web
  namespace: default
  resourceVersion: "11174849"
  uid: 68a3fb17-61c0-46c1-9c5f-fb120f66fcc0
spec:
  clusterIP: 10.105.221.158
  clusterIPs:
  - 10.105.221.158
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: web
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
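Aside: a quick sanity check that this Service actually selects the Hello World pod is to list its endpoints; an empty ENDPOINTS column would mean the app=web selector matches no ready pod (the default namespace is assumed):
kubectl get endpoints web
kubectl get pods -l app=web -o wide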
YAML for the ingress-nginx controller Service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx","app.kubernetes.io/version":"1.5.1"},"name":"ingress-nginx-controller","namespace":"ingress-nginx"},"spec":{"ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"appProtocol":"http","name":"http","port":80,"protocol":"TCP","targetPort":"http"},{"appProtocol":"https","name":"https","port":443,"protocol":"TCP","targetPort":"https"}],"selector":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/name":"ingress-nginx"},"type":"NodePort"}}
  creationTimestamp: "2023-01-24T15:55:44Z"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.5.1
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/component: {}
          f:app.kubernetes.io/instance: {}
          f:app.kubernetes.io/name: {}
          f:app.kubernetes.io/part-of: {}
          f:app.kubernetes.io/version: {}
      f:spec:
        f:externalTrafficPolicy: {}
        f:internalTrafficPolicy: {}
        f:ipFamilies: {}
        f:ipFamilyPolicy: {}
        f:ports:
          .: {}
          k:{"port":80,"protocol":"TCP"}:
            .: {}
            f:appProtocol: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
          k:{"port":443,"protocol":"TCP"}:
            .: {}
            f:appProtocol: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
        f:selector: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2023-01-24T15:55:44Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:ports:
          k:{"port":443,"protocol":"TCP"}:
            f:targetPort: {}
    manager: agent
    operation: Update
    time: "2023-02-20T00:12:12Z"
  name: ingress-nginx-controller
  namespace: ingress-nginx
  resourceVersion: "11172404"
  uid: cd4b69c6-7e8e-4b12-a5e9-16093737979c
spec:
  clusterIP: 10.102.120.71
  clusterIPs:
  - 10.102.120.71
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    nodePort: 30255
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    nodePort: 32673
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
It feels like the requests are not being recognized by the NodePort nginx; in the ingress-nginx controller pod logs I see only the message below for each request I send. The IP address 10.244.3.1 is the internal cluster address of the node where the ingress-nginx controller is running and from which I am issuing the curl requests.
2023/02/20 01:06:25 [info] 630#630: *282876682 client 10.244.3.1 closed keepalive connection
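That [info] line only records a closed keep-alive connection; it does not show whether the controller ever loaded the Ingress rule. A hedged way to verify admission (resource names assume a standard ingress-nginx install):
kubectl get ingress example-ingress
# An empty ADDRESS column suggests the controller never admitted the Ingress.
kubectl logs -n ingress-nginx deploy/ingress-nginx-controller | grep -i example-ingress
# Look for a sync event for default/example-ingress, or a message about
# the Ingress being ignored while validating its ingress class.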
This is my first K8s cluster, so it is probably something trivial in the config, but I have been sitting on this for two weeks straight and I am out of ideas and Google search possibilities. Thank you.

Related

Kubernetes replica does not receive traffic

I am trying to understand how Kubernetes replicas work, and I am getting unexpected (?) behavior. If I understand correctly, when a service selects a deployment it will distribute the requests across all pods. My deployment has 3 replicas and the pods are being selected properly by the service, but requests only go to one of them (the other two remain unused).
I followed this tutorial, and after scaling the deployment I made multiple GET requests to the service. I was expecting the requests to be distributed across the replicas, but only one replica received and handled all of them. I am not sure if that's how it works or whether I need to do something else.
I would like to add that my first test was a very simple endpoint that resolved the request immediately. When I tested with load on the endpoint (a delay before resolving), it did start sending requests to the other replicas. I would love to understand how this works; I haven't been able to find any docs about it.
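A likely explanation, offered as a hedged note: kube-proxy balances per TCP connection, not per HTTP request, so a client reusing one keep-alive connection keeps hitting the same pod; once the endpoint becomes slow, the client opens additional connections and the other replicas start receiving traffic. A small experiment to observe this, assuming the express-echo pods log each request:
# Terminal 1: stream logs from all replicas, prefixed with pod names
kubectl logs -f -l app=express-echo --prefix --max-log-requests=3
# Terminal 2: each curl run is a fresh process and thus a fresh connection,
# so requests should spread across the replicas
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  sh -c 'for i in $(seq 1 20); do curl -s http://express-echo/ > /dev/null; done'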
This is the Deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2022-11-23T15:26:36Z"
  generation: 2
  labels:
    app: express-echo
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:replicas: {}
    manager: kubectl
    operation: Update
    subresource: scale
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:revisionHistoryLimit: {}
        f:selector: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
          f:spec:
            f:containers:
              k:{"name":"express-echo"}:
                .: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: kubectl-create
    operation: Update
    time: "2022-11-23T15:26:36Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:availableReplicas: {}
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    subresource: status
    time: "2022-11-23T15:28:18Z"
  name: express-echo
  namespace: default
  resourceVersion: "5192"
  uid: 32288873-1e30-44a1-9226-0214c1becd35
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: express-echo
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: express-echo
    spec:
      containers:
      - image: gcr.io/gcp-project/express-echo:1.0.0
        imagePullPolicy: IfNotPresent
        name: express-echo
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: "2022-11-23T15:26:36Z"
    lastUpdateTime: "2022-11-23T15:27:01Z"
    message: ReplicaSet "express-echo-547f8bcfb5" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2022-11-23T15:28:18Z"
    lastUpdateTime: "2022-11-23T15:28:18Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 2
  readyReplicas: 3
  replicas: 3
And this is the Service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
  creationTimestamp: "2022-11-23T15:26:48Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app: express-echo
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:allocateLoadBalancerNodePorts: {}
        f:externalTrafficPolicy: {}
        f:internalTrafficPolicy: {}
        f:ports:
          .: {}
          k:{"port":80,"protocol":"TCP"}:
            .: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: kubectl-expose
    operation: Update
    time: "2022-11-23T15:26:48Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .: {}
          v:"service.kubernetes.io/load-balancer-cleanup": {}
      f:status:
        f:loadBalancer:
          f:ingress: {}
    manager: kube-controller-manager
    operation: Update
    subresource: status
    time: "2022-11-23T15:27:24Z"
  name: express-echo
  namespace: default
  resourceVersion: "4765"
  uid: 99346a8a-1e89-476e-a21f-0d9c98d86b7d
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.0.8.195
  clusterIPs:
  - 10.0.8.195
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 31123
    port: 80
    protocol: TCP
    targetPort: 3001
  selector:
    app: express-echo
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 1.1.1.1

Getting "response 404 (backend NotFound), service rules for the path non-existent" Using Ingress Google Cloud

I want my backend service, which is deployed in Kubernetes, to be accessible through an ingress with the path /sso-dev/. For that I have deployed my service to the cluster; the Deployment, Service and Ingress manifests are below. But when accessing the ingress load-balancer API with the path /sso-dev/ it throws a "response 404 (backend NotFound), service rules for the path non-existent" error.
I just need help accessing the backend service, which works fine via the Kubernetes load-balancer IP.
Here is my ingress configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/backends: '{"k8s-be-30969--6d0e236a1c7d6409":"HEALTHY","k8s1-6d0e236a-default-sso-dev-service-80-849fdb46":"HEALTHY"}'
    ingress.kubernetes.io/forwarding-rule: k8s2-fr-uwdva40x-default-my-ingress-h98d0sfl
    ingress.kubernetes.io/target-proxy: k8s2-tp-uwdva40x-default-my-ingress-h98d0sfl
    ingress.kubernetes.io/url-map: k8s2-um-uwdva40x-default-my-ingress-h98d0sfl
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"nginx.ingress.kubernetes.io/backend-protocol":"HTTP","nginx.ingress.kubernetes.io/rewrite-target":"/"},"name":"my-ingress","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"service":{"name":"sso-dev-service","port":{"number":80}}},"path":"/sso-dev/*","pathType":"ImplementationSpecific"}]}}]}}
    nginx.ingress.kubernetes.io/backend-protocol: HTTP
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: "2022-06-22T12:30:49Z"
  finalizers:
  - networking.gke.io/ingress-finalizer-V2
  generation: 1
  managedFields:
  - apiVersion: networking.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
          f:nginx.ingress.kubernetes.io/backend-protocol: {}
          f:nginx.ingress.kubernetes.io/rewrite-target: {}
      f:spec:
        f:rules: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2022-06-22T12:30:49Z"
  - apiVersion: networking.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:ingress.kubernetes.io/backends: {}
          f:ingress.kubernetes.io/forwarding-rule: {}
          f:ingress.kubernetes.io/target-proxy: {}
          f:ingress.kubernetes.io/url-map: {}
        f:finalizers:
          .: {}
          v:"networking.gke.io/ingress-finalizer-V2": {}
      f:status:
        f:loadBalancer:
          f:ingress: {}
    manager: glbc
    operation: Update
    subresource: status
    time: "2022-06-22T12:32:13Z"
  name: my-ingress
  namespace: default
  resourceVersion: "13073497"
  uid: 253e067f-0711-4d24-a706-497692dae4d9
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: sso-dev-service
            port:
              number: 80
        path: /sso-dev/*
        pathType: ImplementationSpecific
status:
  loadBalancer:
    ingress:
    - ip: 34.111.49.35
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2022-06-22T08:52:11Z"
  generation: 1
  labels:
    app: sso-dev
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
          f:spec:
            f:containers:
              k:{"name":"cent-sha256-1"}:
                .: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: GoogleCloudConsole
    operation: Update
    time: "2022-06-22T08:52:11Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:availableReplicas: {}
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    subresource: status
    time: "2022-06-22T11:51:22Z"
  name: sso-dev
  namespace: default
  resourceVersion: "13051665"
  uid: c8732885-b7d8-450c-86c4-19769638eb2a
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: sso-dev
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: sso-dev
    spec:
      containers:
      - image: us-east4-docker.pkg.dev/centegycloud-351515/sso/cent@sha256:64b50553219db358945bf3cd6eb865dd47d0d45664464a9c334602c438bbaed9
        imagePullPolicy: IfNotPresent
        name: cent-sha256-1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: "2022-06-22T08:52:11Z"
    lastUpdateTime: "2022-06-22T08:52:25Z"
    message: ReplicaSet "sso-dev-8566f4bc55" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2022-06-22T11:51:22Z"
    lastUpdateTime: "2022-06-22T11:51:22Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3
Service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
    cloud.google.com/neg-status: '{"network_endpoint_groups":{"80":"k8s1-6d0e236a-default-sso-dev-service-80-849fdb46"},"zones":["us-central1-c"]}'
  creationTimestamp: "2022-06-22T08:53:32Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app: sso-dev
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:allocateLoadBalancerNodePorts: {}
        f:externalTrafficPolicy: {}
        f:internalTrafficPolicy: {}
        f:ports:
          .: {}
          k:{"port":80,"protocol":"TCP"}:
            .: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: GoogleCloudConsole
    operation: Update
    time: "2022-06-22T08:53:32Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .: {}
          v:"service.kubernetes.io/load-balancer-cleanup": {}
      f:status:
        f:loadBalancer:
          f:ingress: {}
    manager: kube-controller-manager
    operation: Update
    subresource: status
    time: "2022-06-22T08:53:58Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:cloud.google.com/neg-status: {}
    manager: glbc
    operation: Update
    subresource: status
    time: "2022-06-22T12:30:49Z"
  name: sso-dev-service
  namespace: default
  resourceVersion: "13071362"
  uid: 03b0cbe6-1ed8-4441-b2c5-93ae5803a582
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.32.6.103
  clusterIPs:
  - 10.32.6.103
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 30584
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: sso-dev
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 104.197.93.226
You need to change the pathType to Prefix in your ingress, as follows:
pathType: Prefix
I noticed that you are using pathType: ImplementationSpecific. With this value, matching depends on the IngressClass, so I think pathType Prefix would be more helpful in your case. Additionally, you can find more information about the ingress path types supported in Kubernetes at this link.
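For reference, a hedged sketch of the re-applied rule with that change (the nginx.ingress.kubernetes.io/* annotations are dropped here, since the GCE ingress controller serving this Ingress, per the glbc manager entry above, ignores nginx-specific annotations anyway):
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
spec:
  rules:
  - http:
      paths:
      - path: /sso-dev
        pathType: Prefix   # matches /sso-dev/ and every subpath
        backend:
          service:
            name: sso-dev-service
            port:
              number: 80
EOF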

How to use SSL requests from Kubernetes ingress to pods

I am deploying a Kubernetes application with the GitLab Kubernetes integration.
I ran into an issue: after putting the pods (containers) on SSL, the browser responds with:
Bad Request
Your browser sent a request that this server could not understand.
Reason: You're speaking plain HTTP to an SSL-enabled server port.
Instead use the HTTPS scheme to access this URL, please.
Apache/2.4.38 (Debian) Server at docker.vm Port 80
I am accessing the URL https://***********.eu/ in the browser and have no idea why the request is downgraded from HTTPS to HTTP inside Kubernetes on the way to the pods.
My Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/publicEndpoints: '[{"addresses":["******"],"port":443,"protocol":"HTTPS","serviceName":"******","ingressName":"******","hostname":"******","path":"/","allNodes":true},{"addresses":["******"],"port":443,"protocol":"HTTPS","serviceName":"******","ingressName":"******","hostname":"******","path":"/","allNodes":true}]'
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
  creationTimestamp: "2021-05-21T12:54:44Z"
  generation: 1
  labels:
    app: development
    chart: auto-deploy-app-1.0.7
    heritage: Tiller
    release: development
  managedFields:
  - apiVersion: extensions/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubernetes.io/ingress.class: {}
          f:kubernetes.io/tls-acme: {}
        f:labels:
          .: {}
          f:app: {}
          f:chart: {}
          f:heritage: {}
          f:release: {}
      f:spec:
        f:rules: {}
        f:tls: {}
    manager: Go-http-client
    operation: Update
    time: "2021-05-21T12:54:44Z"
  - apiVersion: networking.k8s.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:loadBalancer:
          f:ingress: {}
    manager: nginx-ingress-controller
    operation: Update
    time: "2021-05-21T12:55:25Z"
  - apiVersion: extensions/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:field.cattle.io/publicEndpoints: {}
    manager: rancher
    operation: Update
    time: "2021-05-21T12:55:25Z"
  name: development-auto-deploy
  namespace: ******
  resourceVersion: "******"
  selfLink: /apis/networking.k8s.io/v1/namespaces/******
  uid: ******
spec:
  rules:
  - host: ******
    http:
      paths:
      - backend:
          service:
            name: development-auto-deploy
            port:
              number: 443
        path: /
        pathType: ImplementationSpecific
  - host: ******
    http:
      paths:
      - backend:
          service:
            name: development-auto-deploy
            port:
              number: 443
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - ******
    - ******
    secretName: development-auto-deploy-tls
My Service.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-05-21T12:54:44Z"
  labels:
    app: development
    chart: auto-deploy-app-1.0.7
    heritage: Tiller
    release: development
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
          f:chart: {}
          f:heritage: {}
          f:release: {}
      f:spec:
        f:ports:
          .: {}
          k:{"port":443,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector:
          .: {}
          f:app: {}
          f:tier: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: Go-http-client
    operation: Update
    time: "2021-05-21T12:54:44Z"
  name: development-auto-deploy
  namespace: ******
  resourceVersion: "******"
  selfLink: /api/v1/namespaces/******
  uid: ******
spec:
  clusterIP: ******
  ports:
  - name: web
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: development
    tier: web
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
And deployment.yaml for the pod deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.gitlab.com/app: *******
    app.gitlab.com/env: development
    deployment.kubernetes.io/revision: "1"
    field.cattle.io/publicEndpoints: '[{"addresses":["*******"],"port":443,"protocol":"HTTPS","serviceName":"*******","ingressName":"*******","hostname":"*******","path":"/","allNodes":true},{"addresses":["*******"],"port":443,"protocol":"HTTPS","serviceName":"*******","ingressName":"*******","hostname":"*******","path":"/","allNodes":true}]'
  creationTimestamp: "2021-05-21T12:54:44Z"
  generation: 2
  labels:
    app: development
    chart: auto-deploy-app-1.0.7
    heritage: Tiller
    release: development
    tier: web
    track: stable
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:app.gitlab.com/app: {}
          f:app.gitlab.com/env: {}
        f:labels:
          .: {}
          f:app: {}
          f:chart: {}
          f:heritage: {}
          f:release: {}
          f:tier: {}
          f:track: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector:
          f:matchLabels:
            .: {}
            f:app: {}
            f:release: {}
            f:tier: {}
            f:track: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:annotations:
              .: {}
              f:app.gitlab.com/app: {}
              f:app.gitlab.com/env: {}
              f:checksum/application-secrets: {}
            f:labels:
              .: {}
              f:app: {}
              f:release: {}
              f:tier: {}
              f:track: {}
          f:spec:
            f:containers:
              k:{"name":"auto-deploy-app"}:
                .: {}
                f:env:
                  .: {}
                  k:{"name":"DATABASE_URL"}:
                    .: {}
                    f:name: {}
                    f:value: {}
                  k:{"name":"GITLAB_ENVIRONMENT_NAME"}:
                    .: {}
                    f:name: {}
                    f:value: {}
                  k:{"name":"GITLAB_ENVIRONMENT_URL"}:
                    .: {}
                    f:name: {}
                    f:value: {}
                f:envFrom: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:livenessProbe:
                  .: {}
                  f:failureThreshold: {}
                  f:httpGet:
                    .: {}
                    f:path: {}
                    f:port: {}
                    f:scheme: {}
                  f:initialDelaySeconds: {}
                  f:periodSeconds: {}
                  f:successThreshold: {}
                  f:timeoutSeconds: {}
                f:name: {}
                f:ports:
                  .: {}
                  k:{"containerPort":443,"protocol":"TCP"}:
                    .: {}
                    f:containerPort: {}
                    f:name: {}
                    f:protocol: {}
                f:readinessProbe:
                  .: {}
                  f:failureThreshold: {}
                  f:httpGet:
                    .: {}
                    f:path: {}
                    f:port: {}
                    f:scheme: {}
                  f:initialDelaySeconds: {}
                  f:periodSeconds: {}
                  f:successThreshold: {}
                  f:timeoutSeconds: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:imagePullSecrets:
              .: {}
              k:{"name":"*******"}:
                .: {}
                f:name: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: Go-http-client
    operation: Update
    time: "2021-05-21T12:54:44Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:availableReplicas: {}
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-05-21T12:54:55Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:field.cattle.io/publicEndpoints: {}
    manager: rancher
    operation: Update
    time: "2021-05-21T12:55:25Z"
  name: development
  namespace: *******
  resourceVersion: "*******"
  selfLink: /apis/apps/v1/namespaces/*******
  uid: *******
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: development
      release: development
      tier: web
      track: stable
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        app.gitlab.com/app: *******
        app.gitlab.com/env: development
        checksum/application-secrets: *******
      creationTimestamp: null
      labels:
        app: development
        release: development
        tier: web
        track: stable
    spec:
      containers:
      - env:
        - name: DATABASE_URL
          value: ' '
        - name: GITLAB_ENVIRONMENT_NAME
          value: development
        - name: GITLAB_ENVIRONMENT_URL
          value: *******
        envFrom:
        - secretRef:
            name: development-secret
        image: *******
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 443
            scheme: HTTPS
          initialDelaySeconds: 15
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 15
        name: auto-deploy-app
        ports:
        - containerPort: 443
          name: web
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 443
            scheme: HTTPS
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 3
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: *******
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2021-05-21T12:54:55Z"
    lastUpdateTime: "2021-05-21T12:54:55Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2021-05-21T12:54:44Z"
    lastUpdateTime: "2021-05-21T12:54:55Z"
    message: ReplicaSet "*******" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 2
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
SSL is removed somewhere, and the Kubernetes ingress calls the pods with http:// instead of https://, but I do not know how to fix it.
So the question is: how do I remove SSL termination from the Kubernetes ingress?
If you want SSL termination to happen at the server instead of at the ingress/LoadBalancer, you can use something called SSL passthrough.
The load balancer will then not terminate the SSL request at the ingress; your server must then be able to terminate those SSL requests itself.
Use these annotations in your ingress.yaml file, depending on your ingress class:
annotations:
  ingress.kubernetes.io/ssl-passthrough: "true"
  nginx.ingress.kubernetes.io/ssl-passthrough: "true"
There is one more annotation that you can use with nginx: the backend-protocol annotation indicates how NGINX should communicate with the backend service.
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
By default NGINX uses HTTP.
Read more about it here https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol
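A hedged practical note on the passthrough option: with ingress-nginx, the ssl-passthrough annotation only takes effect if the controller binary was started with the --enable-ssl-passthrough flag, and passthrough routes on the TLS SNI value rather than on HTTP paths. A sketch of applying either option to the Ingress from the question (name taken from the manifest above; namespace omitted since it is redacted):
# Option A: forward TLS unterminated to the pod (requires the controller
# to run with --enable-ssl-passthrough)
kubectl annotate ingress development-auto-deploy \
  nginx.ingress.kubernetes.io/ssl-passthrough=true --overwrite
# Option B: terminate TLS at the ingress, then re-encrypt to the backend,
# which keeps the Apache pod listening on 443 happy
kubectl annotate ingress development-auto-deploy \
  nginx.ingress.kubernetes.io/backend-protocol=HTTPS --overwrite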

How to curl service like in the docs for kubernetes?

I am following this doc: https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
I am expecting to be able to curl some localhost:8080 or something like that.
What is the exact curl command and port that is expected FROM THE HOST? Not on the cluster, not on a node, but FROM THE HOST.
I am running microk8s.
This is the file I have applied, copied from the docs:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    run: my-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: nginxsecret
      - name: configmap-volume
        configMap:
          name: nginxconfigmap
      containers:
      - name: nginxhttps
        image: bprashanth/nginxhttps:1.0
        ports:
        - containerPort: 443
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume
        - mountPath: /etc/nginx/conf.d
          name: configmap-volume
This is the output of the k get deployment my-nginx -o yaml command:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"my-nginx","namespace":"default"},"spec":{"replicas":1,"selector":{"matchLabels":{"run":"my-nginx"}},"template":{"metadata":{"labels":{"run":"my-nginx"}},"spec":{"containers":[{"image":"bprashanth/nginxhttps:1.0","name":"nginxhttps","ports":[{"containerPort":443},{"containerPort":80}],"volumeMounts":[{"mountPath":"/etc/nginx/ssl","name":"secret-volume"},{"mountPath":"/etc/nginx/conf.d","name":"configmap-volume"}]}],"volumes":[{"name":"secret-volume","secret":{"secretName":"nginxsecret"}},{"configMap":{"name":"nginxconfigmap"},"name":"configmap-volume"}]}}}}
  creationTimestamp: "2021-01-31T19:25:30Z"
  generation: 1
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:run: {}
          f:spec:
            f:containers:
              k:{"name":"nginxhttps"}:
                .: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:ports:
                  .: {}
                  k:{"containerPort":80,"protocol":"TCP"}:
                    .: {}
                    f:containerPort: {}
                    f:protocol: {}
                  k:{"containerPort":443,"protocol":"TCP"}:
                    .: {}
                    f:containerPort: {}
                    f:protocol: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
                f:volumeMounts:
                  .: {}
                  k:{"mountPath":"/etc/nginx/conf.d"}:
                    .: {}
                    f:mountPath: {}
                    f:name: {}
                  k:{"mountPath":"/etc/nginx/ssl"}:
                    .: {}
                    f:mountPath: {}
                    f:name: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
            f:volumes:
              .: {}
              k:{"name":"configmap-volume"}:
                .: {}
                f:configMap:
                  .: {}
                  f:defaultMode: {}
                  f:name: {}
                f:name: {}
              k:{"name":"secret-volume"}:
                .: {}
                f:name: {}
                f:secret:
                  .: {}
                  f:defaultMode: {}
                  f:secretName: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2021-01-31T19:25:30Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:availableReplicas: {}
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-01-31T19:25:31Z"
  name: my-nginx
  namespace: default
  resourceVersion: "764711"
  selfLink: /apis/apps/v1/namespaces/default/deployments/my-nginx
  uid: 77061fd6-8a88-4e0d-891b-6dcc5df2c95e
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: my-nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: my-nginx
    spec:
      containers:
      - image: bprashanth/nginxhttps:1.0
        imagePullPolicy: IfNotPresent
        name: nginxhttps
        ports:
        - containerPort: 443
          protocol: TCP
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume
        - mountPath: /etc/nginx/conf.d
          name: configmap-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: secret-volume
        secret:
          defaultMode: 420
          secretName: nginxsecret
      - configMap:
          defaultMode: 420
          name: nginxconfigmap
        name: configmap-volume
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2021-01-31T19:25:31Z"
    lastUpdateTime: "2021-01-31T19:25:31Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2021-01-31T19:25:30Z"
    lastUpdateTime: "2021-01-31T19:25:31Z"
    message: ReplicaSet "my-nginx-5b6fb7fb46" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
There is also this default.conf, as in the docs:
cat default.conf
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    listen 443 ssl;

    root /usr/share/nginx/html;
    index index.html;

    server_name localhost;
    ssl_certificate /etc/nginx/ssl/tls.crt;
    ssl_certificate_key /etc/nginx/ssl/tls.key;

    location / {
        try_files $uri $uri/ =404;
    }
}
Try curl -k 127.0.0.1:80 or curl -k localhost:80. The 8080 port belongs to the Service, but when you use localhost or 127.0.0.1 you are not connecting through the Service, so the port needs to be the container's port, which is 80.
You can set up a proxy to your pod with:
kubectl port-forward [name of your pod] [port-on-the-host]:[pod-port]
Then you can access it via your host:
$ curl 127.0.0.1:pod-port
in your case:
$ curl 127.0.0.1:80
In your case, 80 is the targetPort, which is the port on the pod that the request gets sent to.
But this is a solution that does not go through the Service.
Read more: kubernetes-port-forward.
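Since the Service in the question is type NodePort, there is also a route from the host that does use the Service: read the randomly allocated node port and curl it on localhost. A sketch, assuming microk8s runs on the same machine:
microk8s kubectl get svc my-nginx
# PORT(S) will show something like 8080:3XXXX/TCP; extract the node port:
NODE_PORT=$(microk8s kubectl get svc my-nginx \
  -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}')
curl "http://localhost:${NODE_PORT}/"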

Installing database in Kubernetes

I'm trying to install CockroachDB with Rancher and I am running into a problem; the PVC shows:
FailedBinding (5) 14 sec ago no persistent volumes available for this claim and no storage class is set
How can this be solved?
Here is the configuration on my local machine:
PersistentVolumeClaim: datadir-cockroachdb-0
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: "2021-01-07T23:50:42Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app.kubernetes.io/component: cockroachdb
    app.kubernetes.io/instance: cockroachdb
    app.kubernetes.io/name: cockroachdb
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app.kubernetes.io/component: {}
          f:app.kubernetes.io/instance: {}
          f:app.kubernetes.io/name: {}
      f:spec:
        f:accessModes: {}
        f:resources:
          f:requests:
            .: {}
            f:storage: {}
        f:volumeMode: {}
      f:status:
        f:phase: {}
    manager: k3s
    operation: Update
    time: "2021-01-07T23:50:41Z"
  name: datadir-cockroachdb-0
  namespace: default
  resourceVersion: "188922"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/datadir-cockroachdb-0
  uid: ef83d3c7-0309-44a8-b379-0134835d97a9
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  volumeMode: Filesystem
status:
  phase: Pending
CockroachDB
clusterDomain: cluster.local
conf:
  attrs: []
  cache: 25%
  cluster-name: ''
  disable-cluster-name-verification: false
  http-port: 8080
  join: []
  locality: ''
  logtostderr: INFO
  max-disk-temp-storage: 0
  max-offset: 500ms
  max-sql-memory: 25%
  port: 26257
  single-node: false
  sql-audit-dir: ''
image:
  credentials: {}
  pullPolicy: IfNotPresent
  repository: cockroachdb/cockroach
  tag: v20.1.3
ingress:
  annotations: {}
  enabled: false
  hosts: []
  labels: {}
  paths:
  - /
  tls: []
init:
  affinity: {}
  annotations: {}
  labels:
    app.kubernetes.io/component: init
  nodeSelector: {}
  resources: {}
  tolerations: []
labels: {}
networkPolicy:
  enabled: false
  ingress:
    grpc: []
    http: []
service:
  discovery:
    annotations: {}
    labels:
      app.kubernetes.io/component: cockroachdb
  ports:
    grpc:
      external:
        name: grpc
        port: 26257
      internal:
        name: grpc-internal
        port: 26257
    http:
      name: http
      port: 8080
  public:
    annotations: {}
    labels:
      app.kubernetes.io/component: cockroachdb
    type: ClusterIP
statefulset:
  annotations: {}
  args: []
  budget:
    maxUnavailable: 1
  env: []
  labels:
    app.kubernetes.io/component: cockroachdb
  nodeAffinity: {}
  nodeSelector: {}
  podAffinity: {}
  podAntiAffinity:
    type: soft
    weight: 100
  podManagementPolicy: Parallel
  priorityClassName: ''
  replicas: 3
  resources: {}
  secretMounts: []
  tolerations: []
  updateStrategy:
    type: RollingUpdate
storage:
  hostPath: ''
  persistentVolume: volume1
    annotations: {}
    enabled: true
    labels: {}
    size: 1Gi
    storageClass: local-storage ''
tls:
  certs:
    clientRootSecret: cockroachdb-root
    nodeSecret: cockroachdb-node
    provided: false
    tlsSecret: false
  enabled: false
  init:
    image:
      credentials: {}
      pullPolicy: IfNotPresent
      repository: cockroachdb/cockroach-k8s-request-cert
      tag: '0.4'
  serviceAccount:
    create: true
    name: ''
Storage: 1Gi
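An editorial observation: the storage block pasted above does not look like values the chart can parse; persistentVolume: volume1 gives persistentVolume a scalar value and child keys at once, and storageClass: local-storage '' mixes two values on one line. For comparison, a hedged sketch of the shape the CockroachDB Helm chart documents (key names assumed from the chart's values.yaml; treat this as an assumption, not the poster's actual config):
cat > values-storage.yaml <<'EOF'
storage:
  hostPath: ""
  persistentVolume:
    enabled: true
    size: 1Gi
    storageClass: local-storage   # must name an existing StorageClass
    labels: {}
    annotations: {}
EOF
helm upgrade --install cockroachdb cockroachdb/cockroachdb -f values-storage.yaml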
PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: '{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"type":"local"},"name":"volume1"},"spec":{"accessModes":["ReadWriteOnce"],"capacity":{"storage":"10Gi"},"hostPath":{"path":"/data/volume1"}}}'
  creationTimestamp: "2021-01-07T23:11:43Z"
  finalizers:
  - kubernetes.io/pv-protection
  labels:
    type: local
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:phase: {}
    manager: k3s
    operation: Update
    time: "2021-01-07T23:11:43Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations: {}
        f:labels:
          .: {}
          f:type: {}
      f:spec:
        f:accessModes: {}
        f:capacity: {}
        f:hostPath:
          .: {}
          f:path: {}
          f:type: {}
        f:persistentVolumeReclaimPolicy: {}
        f:volumeMode: {}
    manager: kubectl
    operation: Update
    time: "2021-01-07T23:11:43Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:spec:
        f:capacity:
          f:storage: {}
    manager: Go-http-client
    operation: Update
    time: "2021-01-07T23:12:11Z"
  name: volume1
  resourceVersion: "173783"
  selfLink: /api/v1/persistentvolumes/volume1
  uid: 6e76984c-22cd-4219-9ff6-ba7f67c1ca72
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 4Gi
  hostPath:
    path: /data/volume1
    type: ""
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
status:
  phase: Available
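Another thing worth checking here (an aside, not from the original thread): the PVC above requests 100Gi while this PV offers only 4Gi, and a PV can only bind to a claim whose request fits within the PV's capacity. Commands to compare the two sides:
kubectl get pvc datadir-cockroachdb-0 -o jsonpath='{.spec.resources.requests.storage}{"\n"}'
kubectl get pv volume1 -o jsonpath='{.spec.capacity.storage}{"\n"}'
kubectl describe pvc datadir-cockroachdb-0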
StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2021-01-07T23:29:17Z"
  managedFields:
  - apiVersion: storage.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:provisioner: {}
      f:reclaimPolicy: {}
      f:volumeBindingMode: {}
    manager: rancher
    operation: Update
    time: "2021-01-07T23:29:17Z"
  name: local-storage
  resourceVersion: "180190"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/local-storage
  uid: 0a5f8b75-7fb5-4965-91ee-91b0a087339a
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
From the details provided, it looks like your storage class is missing on Rancher.
Without the storage class, the respective PVC won't get created, so it's giving an error. Storage classes vary with cloud providers and with the type of disk required (SSD, HDD).
You can get more ideas here: https://rancher.com/docs/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/
First check that your PV is available, and after that check the storage class and the PVC.
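One concrete way to address the "no storage class is set" part of the error, assuming local-storage is the class that should serve this claim: mark it as the cluster default, so PVCs created without an explicit storageClassName (like the chart's datadir claim here) pick it up:
kubectl patch storageclass local-storage -p \
  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
# With a no-provisioner class, a matching local PV must still exist, and
# WaitForFirstConsumer delays binding until a pod actually uses the claim.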
It looks like the issue was with Rancher this time (thank you @Harsh Manvar for answering). If you have more questions about CockroachDB, you can also join the CockroachDB community Slack channel, where you will find loads of experts who can answer your questions in a timely manner. (And be sure to join the #community channel also to have some fun!) :) https://go.crdb.dev/p/slack