Exposing Traefik dashboard - raspberry-pi

I know there are many similar posts, but given the slight differences between everyone's environment, I have not found a solution that has worked for me. I am trying to access the Traefik dashboard running on bare-metal (Pi cluster) k3s. I am using the default load balancer in k3s.
Other ingress resources have worked, for example the ingress to the Pi-hole dashboard. When I try to access the dashboard via https://www.traefik.localhost/dashboard/ I get an "unable to connect" error. I have traefik.localhost in /etc/hosts pointing to one of the LB ingress IPs, in this case .104.
In theory, I think the request should be picked up by the LB service on the respective node and forwarded to the Traefik service if the entrypoint (80 in this case) is open. The Traefik service should look at the available providers, find the IngressRoute I've made, match the hostname, and then forward the request to the api@internal service. I do not know how to check whether that service is running properly, which would be the last step in my debugging process if I knew how.
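One way to check that internal service directly (a sketch, assuming the chart's default internal "traefik" entrypoint on container port 9000) is to port-forward past the LB and IngressRoute and query Traefik's own API:
# bypass the LB and IngressRoute and talk to the Traefik pod directly
kubectl -n kube-system port-forward deploy/traefik 9000:9000
# in another shell: if the dashboard/API are enabled, they answer here
curl -s http://localhost:9000/api/http/routers
curl -s http://localhost:9000/dashboard/ | head
If that works, the api@internal service is fine and the problem sits in front of it (entrypoint, IngressRoute match, or name resolution).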
Here is the Traefik service:
kubectl describe service -n kube-system traefik
Name:                     traefik
Namespace:                kube-system
Labels:                   app.kubernetes.io/instance=traefik
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=traefik
                          helm.sh/chart=traefik-10.3.0
Annotations:              meta.helm.sh/release-name: traefik
                          meta.helm.sh/release-namespace: kube-system
Selector:                 app.kubernetes.io/instance=traefik,app.kubernetes.io/name=traefik
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.43.226.223
IPs:                      10.43.226.223
LoadBalancer Ingress:     192.168.4.101, 192.168.4.102, 192.168.4.103, 192.168.4.104, 192.168.4.105
Port:                     web  80/TCP
TargetPort:               web/TCP
NodePort:                 web  30690/TCP
Endpoints:                10.42.4.88:8000
Port:                     websecure  443/TCP
TargetPort:               websecure/TCP
NodePort:                 websecure  30328/TCP
Endpoints:                10.42.4.88:8443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Here is the IngressRoute:
kubectl describe ingressRoute -n kube-system dashboard
Name:         dashboard
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>
API Version:  traefik.containo.us/v1alpha1
Kind:         IngressRoute
Metadata:
  Creation Timestamp:  2022-01-18T03:42:49Z
  Generation:          9
  Managed Fields:
    API Version:  traefik.containo.us/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:entryPoints:
        f:routes:
    Manager:         kubectl-client-side-apply
    Operation:       Update
    Time:            2022-01-23T16:46:30Z
  Resource Version:  628002
  UID:               b96eb707-b1a9-4a6c-b94f-a8b975b4120b
Spec:
  Entry Points:
    web
  Routes:
    Kind:   Rule
    Match:  Host(`traefik.localhost`) && PathPrefix(`/`)
    Services:
      Kind:  TraefikService
      Name:  api@internal
Events:  <none>
Dynamic config:
cat traefik.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik-crd
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-crd-10.3.0.tgz
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-10.3.0.tgz
  api:
    insecure: true
  set:
    global.systemDefaultRegistry: ""
  valuesContent: |-
    rbac:
      enabled: true
    ports:
      websecure:
        tls:
          enabled: true
    podAnnotations:
      prometheus.io/port: "8082"
      prometheus.io/scrape: "true"
    providers:
      kubernetesCRD:
      kubernetesIngress:
        publishedService:
          enabled: true
    priorityClassName: "system-cluster-critical"
    image:
      name: "rancher/mirrored-library-traefik"
    tolerations:
    - key: "CriticalAddonsOnly"
      operator: "Exists"
    - key: "node-role.kubernetes.io/control-plane"
      operator: "Exists"
      effect: "NoSchedule"
    - key: "node-role.kubernetes.io/master"
      operator: "Exists"
      effect: "NoSchedule"
HelmChartConfig to modify the above HelmChart:
cat traefik-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  api:
    insecure: true
    dashboard: true
What can I try to solve this?
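One neutral check (a sketch; the label comes from the Service selector above) is to inspect the arguments the Traefik pod actually ended up with, to see whether the dashboard/API flags this config is meant to enable made it into the deployment:
# inspect the rendered container args on the running Traefik pod
kubectl -n kube-system get pod -l app.kubernetes.io/name=traefik \
  -o jsonpath='{.items[0].spec.containers[0].args}'
# look for --api.dashboard=true / --api.insecure=true in the output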

I also have k3s with the preinstalled Traefik and the default load balancer.
This works for me.
If you implement this, you only need to type traefik.localhost in your browser and it should take you to your dashboard. No need to add /dashboard to your URL.
The middleware will convert your HTTP request into an HTTPS request.
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: redirectscheme
  namespace: default
spec:
  redirectScheme:
    scheme: https
    permanent: true
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dash-http
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`traefik.localhost`)
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
      middlewares:
        - name: redirectscheme
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dash-https
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`traefik.localhost`)
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
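A quick way to verify this from a workstation (a sketch, assuming traefik.localhost resolves to one of the LoadBalancer IPs via /etc/hosts and that websecure serves Traefik's default self-signed certificate):
# the web entrypoint should redirect to https, then serve the dashboard
curl -ikL http://traefik.localhost/
# or hit the websecure entrypoint directly
curl -ik https://traefik.localhost/dashboard/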

Related

Kubernetes dashboard through traefik

I have a Kubernetes cluster (release 1.23.4) working with Traefik (release 2.7.0).
I would like to access the Kubernetes dashboard through a Traefik IngressRoute. Everything seems to work correctly, with no errors in the logs of the Traefik pod or the dashboard, but when I try to access the Kubernetes dashboard it cannot load the page https://k8sdash.kub.techlabnews.com/api/v1/login/status and I get a 404 error (logged in the Firefox console).
I used this code to create the IngressRoute:
apiVersion: traefik.containo.us/v1alpha1
kind: ServersTransport
metadata:
  name: kubernetes-dashboard-transport
  namespace: kubernetes-dashboard
spec:
  serverName: "k8sdash.kub.techlabnews.com"
  insecureSkipVerify: true
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`k8sdash.kub.techlabnews.com`)
      services:
        - kind: Service
          port: 443
          name: kubernetes-dashboard
          namespace: kubernetes-dashboard
          serversTransport: kubernetes-dashboard-transport
  tls:
    secretName: kub.techlabnews-com-cert-secret-replica
Does anyone have an idea of the problem?
Thanks
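One debugging step that might narrow it down (a sketch; the service name and port are taken from the IngressRoute above) is to bypass Traefik and ask the dashboard Service for that URL directly:
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard 8443:443
# in another shell
curl -k https://localhost:8443/api/v1/login/status
If this also returns 404, the problem is in the dashboard deployment itself rather than in the IngressRoute.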

Ingress return 404

Rancher ingress returns 404 for the service.
Setup: I have 6 VMs: one Rancher server at x.x.x.51 (where the DNS name domain.company points, with TLS), and 5 cluster VMs (one master and 4 workers, x.x.x.52-56).
My service, gvm-gsad, running in the gvm namespace:
apiVersion: v1
kind: Service
metadata:
  annotations:
    field.cattle.io/publicEndpoints: "null"
    meta.helm.sh/release-name: gvm
    meta.helm.sh/release-namespace: gvm
  creationTimestamp: "2021-11-15T21:14:21Z"
  labels:
    app.kubernetes.io/instance: gvm-gvm
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: gvm-gsad
    app.kubernetes.io/version: "21.04"
    helm.sh/chart: gvm-1.3.0
  name: gvm-gsad
  namespace: gvm
  resourceVersion: "3488107"
  uid: c1ddfdfa-3799-4945-841d-b6aa9a89f93a
spec:
  clusterIP: 10.43.195.239
  clusterIPs:
  - 10.43.195.239
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: gsad
    port: 80
    protocol: TCP
    targetPort: gsad
  selector:
    app.kubernetes.io/instance: gvm-gvm
    app.kubernetes.io/name: gvm-gsad
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
My ingress configuration (the ingress controller is the default one from Rancher):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/publicEndpoints: '[{"addresses":["172.16.12.53"],"port":443,"protocol":"HTTPS","serviceName":"gvm:gvm-gsad","ingressName":"gvm:gvm","hostname":"dtl.miproad.ad","path":"/gvm","allNodes":true}]'
  creationTimestamp: "2021-11-16T19:22:45Z"
  generation: 10
  name: gvm
  namespace: gvm
  resourceVersion: "3508472"
  uid: e99271a8-8553-45c8-b027-b259a453793c
spec:
  rules:
  - host: domain.company
    http:
      paths:
      - backend:
          service:
            name: gvm-gsad
            port:
              number: 80
        path: /gvm
        pathType: Prefix
  tls:
  - hosts:
    - domain.company
status:
  loadBalancer:
    ingress:
    - ip: x.x.x.53
    - ip: x.x.x.54
    - ip: x.x.x.55
    - ip: x.x.x.56
When I access it with https://domain.company/gvm I get a 404.
However, when I change the service to NodePort, I can access it with x.x.x.52:PORT normally, meaning the deployment is running fine and it's just some configuration issue in the ingress.
I checked this one: rancher 2.x thru ingress controller returns 404, but it does not help.
Thank you in advance!
Figured out the solution.
domain.company points to Rancher (x.x.x.51), while the ingress is running on x.x.x.53, .54, .55, .56.
So the solution is to create a new DNS record, gvm.domain.company, pointing to any of the ingress nodes (x.x.x.53-.56); you can put a load balancer in front or use round-robin DNS.
Then the ingress definition uses gvm.domain.company as the host and "/" as the path.
Hope it helps others!
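For reference, a minimal sketch of the adjusted Ingress under that approach (gvm.domain.company is the hypothetical new record pointing at one of the ingress nodes):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gvm
  namespace: gvm
spec:
  tls:
  - hosts:
    - gvm.domain.company
  rules:
  - host: gvm.domain.company
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: gvm-gsad
            port:
              number: 80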

Istio authorization policy not applying on child gateway

What I am trying to achieve: block all traffic to a service, keeping the code that handles this in the same namespace as the service.
Why: this is the first step in "locking down" a specific service to specific IPs/CIDRs.
I have a primary ingress GW called istio-ingressgateway which works for services.
$ kubectl describe gw istio-ingressgateway -n istio-system
Name:         istio-ingressgateway
Namespace:    istio-system
Labels:       operator.istio.io/component=IngressGateways
              operator.istio.io/managed=Reconcile
              operator.istio.io/version=1.5.5
              release=istio
Annotations:  API Version:  networking.istio.io/v1beta1
Kind:         Gateway
Metadata:
  Creation Timestamp:  2020-08-28T15:45:10Z
  Generation:          1
  Resource Version:    95438963
  Self Link:           /apis/networking.istio.io/v1beta1/namespaces/istio-system/gateways/istio-ingressgateway
  UID:                 ae5dd2d0-44a3-4c2b-a7ba-4b29c26fa0b9
Spec:
  Selector:
    App:    istio-ingressgateway
    Istio:  ingressgateway
  Servers:
    Hosts:
      *
    Port:
      Name:      http
      Number:    80
      Protocol:  HTTP
Events:  <none>
I also have another "primary" GW, the K8s ingress GW to support TLS (thought I'd include this, to be as explicit as possible)
k describe gw istio-autogenerated-k8s-ingress -n istio-system
Name:         istio-autogenerated-k8s-ingress
Namespace:    istio-system
Labels:       app=istio-ingressgateway
              istio=ingressgateway
              operator.istio.io/component=IngressGateways
              operator.istio.io/managed=Reconcile
              operator.istio.io/version=1.5.5
              release=istio
Annotations:  API Version:  networking.istio.io/v1beta1
Kind:         Gateway
Metadata:
  Creation Timestamp:  2020-08-28T15:45:56Z
  Generation:          2
  Resource Version:    95439499
  Self Link:           /apis/networking.istio.io/v1beta1/namespaces/istio-system/gateways/istio-autogenerated-k8s-ingress
  UID:                 edd46c17-9975-4089-95ff-a2414d40954a
Spec:
  Selector:
    Istio:  ingressgateway
  Servers:
    Hosts:
      *
    Port:
      Name:      http
      Number:    80
      Protocol:  HTTP
    Hosts:
      *
    Port:
      Name:      https-default
      Number:    443
      Protocol:  HTTPS
    Tls:
      Credential Name:     ingress-cert
      Mode:                SIMPLE
      Private Key:         sds
      Server Certificate:  sds
Events:  <none>
I want to be able to create another GW in namespace x and have an authorization policy attached to that GW.
If I create the authorization policy in the istio-system namespace, it comes back with RBAC: access denied, which is great, but that applies to all services using the primary GW.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: block-all
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  action: DENY
  rules:
  - from:
    - source:
        ipBlocks: ["0.0.0.0/0"]
What I currently have does not work. Any pointers would be highly appreciated. The following are all created in the x namespace by applying kubectl apply -f files.yaml -n x:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  annotations:
    app: x-ingress
  name: x-gw
  labels:
    app: x-ingress
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - x.y.com
    port:
      name: http
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
  - hosts:
    - x.y.com
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      privateKey: sds
      serverCertificate: sds
      credentialName: ingress-cert
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: x
  labels:
    app: x
spec:
  hosts:
  - x.y.com
  gateways:
  - x-gw
  http:
  - route:
    - destination:
        host: x
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: x-ingress-policy
spec:
  selector:
    matchLabels:
      app: x-ingress
  action: DENY
  rules:
  - from:
    - source:
        ipBlocks: ["0.0.0.0/0"]
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: x
  labels:
    app: x
spec:
  hosts:
  - x.y.com
  gateways:
  - x-gw
  http:
  - route:
    - destination:
        host: x
The above should block all traffic to the GW, as it matches the CIDR range 0.0.0.0/0.
Am I entirely misunderstanding the concept of GWs/AuthorizationPolicies, or have I missed something?
Edit
I ended up creating another GW and putting the IP restriction on that, as classic load balancers on AWS do not support IP forwarding.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istiocontrolplane
spec:
  profile: demo
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
    - name: admin-ingressgateway
      enabled: true
      label:
        istio: admin-ingressgateway
      k8s:
        serviceAnnotations:
          service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all-admin
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: admin-ingressgateway
  action: ALLOW
  rules:
  - from:
    - source:
        ipBlocks: ["176.252.114.59/32"]
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
I then used that gateway in my workload that I wanted to lock down.
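For reference, a sketch of how such a workload Gateway could bind to the new admin gateway (x-admin-gw is a hypothetical name; the host and credentialName are reused from the manifests above):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: x-admin-gw
  namespace: x
spec:
  selector:
    istio: admin-ingressgateway   # the label given to the extra gateway in the IstioOperator above
  servers:
  - hosts:
    - x.y.com
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ingress-cert
VirtualServices for that workload would then list x-admin-gw in their gateways field instead of the default gateway.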
As far as I know, you should rather use AuthorizationPolicy in 3 ways:
on the ingress gateway
on a namespace
on a specific service
I have tried to make it work on a specific gateway with annotations like you did, but I couldn't make it work for me.
e.g.
The following authorization policy denies all requests to workloads in the x namespace.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: x
spec:
  {}
The following authorization policy denies all requests on the ingress gateway.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
The following authorization policy denies all requests on httpbin in the x namespace.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-service-x
  namespace: x
spec:
  selector:
    matchLabels:
      app: httpbin
Let's say you deny all requests on the x namespace and allow only GET requests for the httpbin service.
Then you would use this AuthorizationPolicy to deny all requests:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: x
spec:
  {}
And this AuthorizationPolicy to allow only GET requests:
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
name: "x-viewer"
namespace: x
spec:
selector:
matchLabels:
app: httpbin
rules:
- to:
- operation:
methods: ["GET"]
And there is the main issue, which is ipBlocks. There is a related GitHub issue about that.
As mentioned here by @incfly:
I guess the reason why it stops working when in a non-ingress pod is because the sourceIP attribute will not be the real client IP then.
According to https://github.com/istio/istio/issues/22341 (not done yet), this aims at providing better support without setting k8s externalTrafficPolicy to local, and supports CIDR ranges as well.
I have tried this example from the Istio documentation to make it work, but it wasn't working for me, even if I changed externalTrafficPolicy. Then a workaround with an EnvoyFilter came from the Istio discuss thread above.
Answer provided by @hleal18 here:
Got an example working successfully using EnvoyFilters, specifically with a remote_ip condition applied on httpbin.
Sharing the manifest for reference.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: httpbin
  namespace: foo
spec:
  workloadSelector:
    labels:
      app: httpbin
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
            subFilter:
              name: "envoy.router"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.rbac
        config:
          rules:
            action: ALLOW
            policies:
              "ip-premissions":
                permissions:
                - any: true
                principals:
                - remote_ip:
                    address_prefix: xxx.xxx.xx.xx
                    prefix_len: 32
I have tried the above EnvoyFilter on my test cluster and as far as I can see it's working.
Take a look at the steps I made.
1. I have changed the externalTrafficPolicy with
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
2. I have created namespace x with istio-injection enabled and deployed httpbin there.
kubectl create namespace x
kubectl label namespace x istio-injection=enabled
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.7/samples/httpbin/httpbin.yaml -n x
kubectl apply -f https://github.com/istio/istio/blob/master/samples/httpbin/httpbin-gateway.yaml -n x
3. I have created the EnvoyFilter:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: httpbin
  namespace: x
spec:
  workloadSelector:
    labels:
      app: httpbin
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
            subFilter:
              name: "envoy.router"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.rbac
        config:
          rules:
            action: ALLOW
            policies:
              "ip-premissions":
                permissions:
                - any: true
                principals:
                - remote_ip:
                    address_prefix: xx.xx.xx.xx
                    prefix_len: 32
address_prefix is the CLIENT_IP; here are the commands I have used to get it:
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
curl "$INGRESS_HOST":"$INGRESS_PORT"/headers -s -o /dev/null -w "%{http_code}\n"
CLIENT_IP=$(curl "$INGRESS_HOST":"$INGRESS_PORT"/ip -s | grep "origin" | cut -d'"' -f 4) && echo "$CLIENT_IP"
4. I have tested it with curl and my browser.
curl "$INGRESS_HOST":"$INGRESS_PORT"/headers -s -o /dev/null -w "%{http_code}\n"
200
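As a counter-check (not run here, just the expected behaviour of the Envoy RBAC filter): running the same request from an address outside the allow-listed /32 should be denied with HTTP 403 rather than 200.
curl "$INGRESS_HOST":"$INGRESS_PORT"/headers -s -o /dev/null -w "%{http_code}\n"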
Let me know if you have any more questions, I might be able to help.

Exposing Redis with Ingress Nginx Controller

Hello, when I use NodePort to expose my Redis service it works fine; I am able to access it.
But if I try to switch to the Ingress Nginx controller, it refuses to connect. Other apps work fine with ingress.
Here is my service:
apiVersion: v1
kind: Service
metadata:
  name: redis-svc
spec:
  # type: NodePort
  ports:
    - name: http
      port: 6379
      targetPort: 6379
      protocol: TCP
      # nodePort: 30007
  selector:
    app: redis
And here is ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: redis-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    # nginx.ingress.kubernetes.io/enable-cors: "true"
    # nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    # nginx.ingress.kubernetes.io/cors-allow-origin: "https://test.hefest.io"
    # nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
spec:
  tls:
    - secretName: letsencrypt-prod
      hosts:
        - redis-dev.domain.com
  rules:
    - host: redis-dev.domain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: redis-svc
              servicePort: 6379
Any idea what the issue could be?
I am using this ingress controller: https://github.com/nginxinc/kubernetes-ingress
Redis runs on 6379, which is not an HTTP port (80, 443), so you need to enable TCP/UDP support in the nginx ingress controller. The minikube docs here show how to do it for Redis.
Update the TCP and/or UDP services configmaps
Borrowing from the tutorial on configuring TCP and UDP services with the ingress nginx controller we will need to edit the configmap which is installed by default when enabling the minikube ingress addon.
There are 2 configmaps, 1 for TCP services and 1 for UDP services. By default they look like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
Since these configmaps are centralized and may contain configurations, it is best if we only patch them rather than completely overwrite them.
Let’s use this redis deployment as an example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: default
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - image: redis
        imagePullPolicy: Always
        name: redis
        ports:
        - containerPort: 6379
          protocol: TCP
Create a file redis-deployment.yaml and paste the contents above. Then install the redis deployment with the following command:
kubectl apply -f redis-deployment.yaml
Next we need to create a service that can route traffic to our pods:
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  namespace: default
spec:
  selector:
    app: redis
  type: ClusterIP
  ports:
    - name: tcp-port
      port: 6379
      targetPort: 6379
      protocol: TCP
Create a file redis-service.yaml and paste the contents above. Then install the redis service with the following command:
kubectl apply -f redis-service.yaml
To add a TCP service to the nginx ingress controller you can run the following command:
kubectl patch configmap tcp-services -n kube-system --patch '{"data":{"6379":"default/redis-service:6379"}}'
Where:
6379 : the port your service should listen to from outside the minikube virtual machine
default : the namespace that your service is installed in
redis-service : the name of the service
We can verify that our resource was patched with the following command:
kubectl get configmap tcp-services -n kube-system -o yaml
We should see something like this:
apiVersion: v1
data:
  "6379": default/redis-service:6379
kind: ConfigMap
metadata:
  creationTimestamp: "2019-10-01T16:19:57Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: tcp-services
  namespace: kube-system
  resourceVersion: "2857"
  selfLink: /api/v1/namespaces/kube-system/configmaps/tcp-services
  uid: 4f7fac22-e467-11e9-b543-080027057910
The only value you need to validate is that there is a value under the data property that looks like this:
"6379": default/redis-service:6379
Patch the ingress-nginx-controller
There is one final step that must be done in order to obtain connectivity from outside the cluster. We need to patch the nginx controller so that it listens on port 6379 and can route traffic to your service. To do this we need to create a patch file.
spec:
  template:
    spec:
      containers:
      - name: ingress-nginx-controller
        ports:
        - containerPort: 6379
          hostPort: 6379
Create a file called ingress-nginx-controller-patch.yaml and paste the contents above.
Next apply the changes with the following command:
kubectl patch deployment ingress-nginx-controller --patch "$(cat ingress-nginx-controller-patch.yaml)" -n kube-system
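If you are following along on minikube (as the tutorial above does), a quick end-to-end check of the TCP route, assuming redis-cli is installed locally:
# Redis should now answer on the controller's host port
redis-cli -h "$(minikube ip)" -p 6379 ping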
The way I made it work is by enabling ssl-passthrough on the nginx-ingress controller.
Once my nginx-ingress controller was patched, I was able to connect.
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: redis-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
spec:
  tls:
    - hosts:
        - <host_address>
      secretName: <k8s_secret_name>
  rules:
    - host: <host_address>
      http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: redis-service
                port:
                  number: 6380
Python snippet to connect:
import redis

r = redis.StrictRedis(host='<host_address>',
                      port=443, db=0, ssl=True,
                      ssl_ca_certs='server.pem')
print(r.ping())
I do use Redis with TLS in my case.

Managed Certificate in Ingress, Domain Status is FailedNotVisible

I'm simply following the tutorial here: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#creating_an_ingress_with_a_managed_certificate
Everything works fine until I deploy my certificate and wait 20 minutes for it to show up as:
Status:
  Certificate Name:    daojnfiwlefielwrfn
  Certificate Status:  Provisioning
  Domain Status:
    Domain:  moviedecisionengine.com
    Status:  FailedNotVisible
That domain clearly works so what am I missing?
EDIT:
Here's the Cert:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: moviedecisionengine
spec:
  domains:
    - moviedecisionengine.com
The Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.gcp.kubernetes.io/pre-shared-cert: mcrt-14cb8169-25ba-4712-bca5-cb612562a00b
    ingress.kubernetes.io/backends: '{"k8s-be-31721--1cd1f38313af9089":"HEALTHY"}'
    ingress.kubernetes.io/forwarding-rule: k8s-fw-default-showcase-mde-ingress--1cd1f38313af9089
    ingress.kubernetes.io/https-forwarding-rule: k8s-fws-default-showcase-mde-ingress--1cd1f38313af9089
    ingress.kubernetes.io/https-target-proxy: k8s-tps-default-showcase-mde-ingress--1cd1f38313af9089
    ingress.kubernetes.io/ssl-cert: mcrt-14cb8169-25ba-4712-bca5-cb612562a00b
    ingress.kubernetes.io/target-proxy: k8s-tp-default-showcase-mde-ingress--1cd1f38313af9089
    ingress.kubernetes.io/url-map: k8s-um-default-showcase-mde-ingress--1cd1f38313af9089
    kubernetes.io/ingress.global-static-ip-name: 34.107.208.110
    networking.gke.io/managed-certificates: moviedecisionengine
  creationTimestamp: "2020-01-16T19:44:13Z"
  generation: 4
  name: showcase-mde-ingress
  namespace: default
  resourceVersion: "1039270"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/showcase-mde-ingress
  uid: 92a2f91f-3898-11ea-b820-42010a800045
spec:
  backend:
    serviceName: showcase-mde
    servicePort: 80
  rules:
  - host: moviedecisionengine.com
    http:
      paths:
      - backend:
          serviceName: showcase-mde
          servicePort: 80
  - host: www.moviedecisionengine.com
    http:
      paths:
      - backend:
          serviceName: showcase-mde
          servicePort: 80
status:
  loadBalancer:
    ingress:
    - ip: 34.107.208.110
And lastly, the load balancer:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-01-13T22:41:27Z"
  labels:
    app: showcase-mde
  name: showcase-mde
  namespace: default
  resourceVersion: "2298"
  selfLink: /api/v1/namespaces/default/services/showcase-mde
  uid: d5a77d7b-3655-11ea-af7f-42010a800157
spec:
  clusterIP: 10.31.251.46
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31721
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: showcase-mde
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 35.232.156.172
For the full output of kubectl describe managedcertificate moviedecisionengine:
Name:         moviedecisionengine
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"networking.gke.io/v1beta1","kind":"ManagedCertificate","metadata":{"annotations":{},"name":"moviedecisionengine","namespace...
API Version:  networking.gke.io/v1beta1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2020-01-17T16:47:19Z
  Generation:          3
  Resource Version:    1042869
  Self Link:           /apis/networking.gke.io/v1beta1/namespaces/default/managedcertificates/moviedecisionengine
  UID:                 06c97b69-3949-11ea-b820-42010a800045
Spec:
  Domains:
    moviedecisionengine.com
Status:
  Certificate Name:    mcrt-14cb8169-25ba-4712-bca5-cb612562a00b
  Certificate Status:  Provisioning
  Domain Status:
    Domain:  moviedecisionengine.com
    Status:  FailedNotVisible
Events:  <none>
I was successful in using ManagedCertificate with a GKE Ingress resource.
Let me elaborate on that.
Steps to reproduce:
Create an IP address with gcloud
Update the DNS entry
Create a deployment
Create a service
Create a certificate
Create an Ingress resource
Create an IP address with gcloud
Invoke the command below to create a static IP address:
$ gcloud compute addresses create example-address --global
Check the newly created IP address with the command below:
$ gcloud compute addresses describe example-address --global
Update the DNS entry
Go to GCP -> Network Services -> Cloud DNS.
Edit your zone with an A record pointing to the address created above.
Wait for it to apply.
Check with $ nslookup DOMAIN.NAME that the entry points to the appropriate address.
Create a deployment
Below is an example deployment which will respond to traffic:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  selector:
    matchLabels:
      app: hello
      version: 1.0.0
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
        version: 1.0.0
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:1.0"
        env:
        - name: "PORT"
          value: "50001"
Apply it with command $ kubectl apply -f FILE_NAME.yaml
You can change this deployment to suit your application but be aware of the ports that your application will respond to.
Create a service
Use the NodePort type, the same as in the provided link:
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: NodePort
  selector:
    app: hello
    version: 1.0.0
  ports:
  - name: hello-port
    protocol: TCP
    port: 50001
    targetPort: 50001
Apply it with command $ kubectl apply -f FILE_NAME.yaml
Create a certificate
As shown in the guide, you can use the example below to create a ManagedCertificate:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: example-certificate
spec:
  domains:
    - DOMAIN.NAME
Apply it with command $ kubectl apply -f FILE_NAME.yaml
The status FAILED_NOT_VISIBLE indicates that certificate provisioning failed for a domain because of a problem with DNS or the load balancing configuration. Make sure that DNS is configured so that the certificate's domain resolves to the IP address of the load balancer.
-- Google Cloud documentation
Creation of this certificate should be affected by DNS entry that you provided earlier.
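You can watch the certificate move out of the Provisioning / FailedNotVisible state with the same kind of commands used in the question (a quick check):
kubectl get managedcertificate example-certificate
kubectl describe managedcertificate example-certificate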
Create an Ingress resource
Below is an example Ingress resource which will use the ManagedCertificate:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: example-address
    networking.gke.io/managed-certificates: example-certificate
spec:
  rules:
  - host: DOMAIN.NAME
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-service
          servicePort: hello-port
Apply it with command $ kubectl apply -f FILE_NAME.yaml
It took about 20-25 minutes for it to fully work.
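Once the certificate reports Active, a final end-to-end check from outside the cluster (a sketch; DOMAIN.NAME as above):
curl -Iv https://DOMAIN.NAME/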