Managed Certificate in Ingress, Domain Status is FailedNotVisible - kubernetes

I'm simply following the tutorial here: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#creating_an_ingress_with_a_managed_certificate
Everything works fine until I deploy my certificate and wait 20 minutes for it to show up as:
Status:
  Certificate Name:    daojnfiwlefielwrfn
  Certificate Status:  Provisioning
  Domain Status:
    Domain:  moviedecisionengine.com
    Status:  FailedNotVisible
That domain clearly works, so what am I missing?
EDIT:
Here's the Cert:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: moviedecisionengine
spec:
  domains:
    - moviedecisionengine.com
The Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.gcp.kubernetes.io/pre-shared-cert: mcrt-14cb8169-25ba-4712-bca5-cb612562a00b
    ingress.kubernetes.io/backends: '{"k8s-be-31721--1cd1f38313af9089":"HEALTHY"}'
    ingress.kubernetes.io/forwarding-rule: k8s-fw-default-showcase-mde-ingress--1cd1f38313af9089
    ingress.kubernetes.io/https-forwarding-rule: k8s-fws-default-showcase-mde-ingress--1cd1f38313af9089
    ingress.kubernetes.io/https-target-proxy: k8s-tps-default-showcase-mde-ingress--1cd1f38313af9089
    ingress.kubernetes.io/ssl-cert: mcrt-14cb8169-25ba-4712-bca5-cb612562a00b
    ingress.kubernetes.io/target-proxy: k8s-tp-default-showcase-mde-ingress--1cd1f38313af9089
    ingress.kubernetes.io/url-map: k8s-um-default-showcase-mde-ingress--1cd1f38313af9089
    kubernetes.io/ingress.global-static-ip-name: 34.107.208.110
    networking.gke.io/managed-certificates: moviedecisionengine
  creationTimestamp: "2020-01-16T19:44:13Z"
  generation: 4
  name: showcase-mde-ingress
  namespace: default
  resourceVersion: "1039270"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/showcase-mde-ingress
  uid: 92a2f91f-3898-11ea-b820-42010a800045
spec:
  backend:
    serviceName: showcase-mde
    servicePort: 80
  rules:
  - host: moviedecisionengine.com
    http:
      paths:
      - backend:
          serviceName: showcase-mde
          servicePort: 80
  - host: www.moviedecisionengine.com
    http:
      paths:
      - backend:
          serviceName: showcase-mde
          servicePort: 80
status:
  loadBalancer:
    ingress:
    - ip: 34.107.208.110
And lastly, the load balancer:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-01-13T22:41:27Z"
  labels:
    app: showcase-mde
  name: showcase-mde
  namespace: default
  resourceVersion: "2298"
  selfLink: /api/v1/namespaces/default/services/showcase-mde
  uid: d5a77d7b-3655-11ea-af7f-42010a800157
spec:
  clusterIP: 10.31.251.46
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31721
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: showcase-mde
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 35.232.156.172
Here is the full output of kubectl describe managedcertificate moviedecisionengine:
Name:         moviedecisionengine
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"networking.gke.io/v1beta1","kind":"ManagedCertificate","metadata":{"annotations":{},"name":"moviedecisionengine","namespace...
API Version:  networking.gke.io/v1beta1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2020-01-17T16:47:19Z
  Generation:          3
  Resource Version:    1042869
  Self Link:           /apis/networking.gke.io/v1beta1/namespaces/default/managedcertificates/moviedecisionengine
  UID:                 06c97b69-3949-11ea-b820-42010a800045
Spec:
  Domains:
    moviedecisionengine.com
Status:
  Certificate Name:    mcrt-14cb8169-25ba-4712-bca5-cb612562a00b
  Certificate Status:  Provisioning
  Domain Status:
    Domain:  moviedecisionengine.com
    Status:  FailedNotVisible
Events:  <none>

I was able to use a ManagedCertificate with a GKE Ingress resource successfully.
Let me elaborate on that.
Steps to reproduce:
Create an IP address with gcloud
Update the DNS entry
Create a deployment
Create a service
Create a certificate
Create an Ingress resource
Create an IP address with gcloud
Run the command below to create a static IP address:
$ gcloud compute addresses create example-address --global
Check the newly created IP address with the command below:
$ gcloud compute addresses describe example-address --global
Update the DNS entry
Go to GCP -> Network Services -> Cloud DNS.
Edit your zone, adding an A record that points to the address created above.
Wait for it to apply.
Check with $ nslookup DOMAIN.NAME that the entry points to the appropriate address.
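If you prefer the CLI to the console, roughly the following should work with Cloud DNS. This is only a sketch: ZONE_NAME, DOMAIN.NAME and EXAMPLE_IP are placeholders for your zone, your domain, and the address reserved in the previous step.
$ gcloud dns record-sets transaction start --zone=ZONE_NAME
$ gcloud dns record-sets transaction add "EXAMPLE_IP" --name="DOMAIN.NAME." --ttl=300 --type=A --zone=ZONE_NAME
$ gcloud dns record-sets transaction execute --zone=ZONE_NAME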
Create a deployment
Below is an example deployment that will respond to traffic:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  selector:
    matchLabels:
      app: hello
      version: 1.0.0
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
        version: 1.0.0
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:1.0"
        env:
        - name: "PORT"
          value: "50001"
Apply it with command $ kubectl apply -f FILE_NAME.yaml
You can change this deployment to suit your application but be aware of the ports that your application will respond to.
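A quick sanity check that the pods came up and started cleanly (the app=hello label is taken from the example above):
$ kubectl get pods -l app=hello
$ kubectl logs -l app=hello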
Create a service
Use a NodePort service, as in the provided link:
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: NodePort
  selector:
    app: hello
    version: 1.0.0
  ports:
  - name: hello-port
    protocol: TCP
    port: 50001
    targetPort: 50001
Apply it with command $ kubectl apply -f FILE_NAME.yaml
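To double-check the service before moving on (names from the example above):
$ kubectl get service hello-service
$ kubectl describe service hello-service
The Endpoints field in the describe output should list the pod IPs of the hello deployment.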
Create a certificate
As shown in the guide, you can use the example below to create a ManagedCertificate:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: example-certificate
spec:
  domains:
    - DOMAIN.NAME
Apply it with command $ kubectl apply -f FILE_NAME.yaml
The status FAILED_NOT_VISIBLE indicates that certificate provisioning failed for a domain because of a problem with DNS or the load balancing configuration. Make sure that DNS is configured so that the certificate's domain resolves to the IP address of the load balancer.
-- Google Cloud documentation
Provisioning of this certificate depends on the DNS entry you configured earlier.
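A quick way to check that dependency (resource names and domain are from the examples above) is to compare what the domain currently resolves to with the reserved address:
$ dig +short DOMAIN.NAME
$ gcloud compute addresses describe example-address --global --format="value(address)"
While the two outputs differ, the certificate is likely to remain in FailedNotVisible.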
Create an Ingress resource
Below is an example Ingress resource that will use the ManagedCertificate:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: example-address
    networking.gke.io/managed-certificates: example-certificate
spec:
  rules:
  - host: DOMAIN.NAME
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-service
          servicePort: hello-port
Apply it with command $ kubectl apply -f FILE_NAME.yaml
It took about 20-25 minutes for it to fully work.
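You can watch it converge with something like the following (resource names from the examples above); once Certificate Status reports Active, the curl should complete the TLS handshake with the Google-managed certificate:
$ kubectl describe managedcertificate example-certificate
$ curl -v https://DOMAIN.NAME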

Related

503 Service Temporarily Unavailable - nginx, minikube, k8s

Hello, I am new to DevOps.
Problem: I am unable to access ticketing.dev from the browser (configured using nginx).
I am using nginx and minikube (running everything locally).
This is my service and deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: arshad/auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  type: NodePort
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
This is my ingress file:
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
spec:
  rules:
    - host: ticketing.dev
      http:
        paths:
          - path: /api/users/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-serv
                port:
                  number: 3000
This is my ingress:
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-service <none> ticketing.dev 192.168.99.101 80 45m
I also added the IP to /etc/hosts (ticketing.dev 192.168.99.101), but I am still getting 503 Service Temporarily Unavailable.
Can anyone please help?
Hi, you have a typo; that's why. Your service name is auth-srv, but in the ingress you reference auth-serv. Change the ingress to use auth-srv instead of auth-serv.
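If you want to confirm that kind of mismatch yourself, comparing the service list with the ingress backends usually surfaces it:
$ kubectl get svc
$ kubectl describe ingress ingress-service
The describe output typically flags a backend whose Service does not exist (for example, an endpoints-not-found error for auth-serv).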

Exposing Redis with Ingress Nginx Controller

Hello, when I use a NodePort to expose my redis service it works fine; I am able to access it.
But when I switch to the Ingress Nginx controller, it refuses to connect. Other apps work fine with ingress.
Here is my service:
apiVersion: v1
kind: Service
metadata:
  name: redis-svc
spec:
  # type: NodePort
  ports:
    - name: http
      port: 6379
      targetPort: 6379
      protocol: TCP
      # nodePort: 30007
  selector:
    app: redis
And here is ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: redis-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    # nginx.ingress.kubernetes.io/enable-cors: "true"
    # nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    # nginx.ingress.kubernetes.io/cors-allow-origin: "https://test.hefest.io"
    # nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
spec:
  tls:
    - secretName: letsencrypt-prod
      hosts:
        - redis-dev.domain.com
  rules:
    - host: redis-dev.domain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: redis-svc
              servicePort: 6379
Any idea what the issue could be?
I am using this ingress controller: https://github.com/nginxinc/kubernetes-ingress
Redis works on 6379, which is not an HTTP port (80, 443), so you need to enable TCP/UDP support in the nginx ingress controller. The minikube docs here show how to do it for redis.
Update the TCP and/or UDP services configmaps
Borrowing from the tutorial on configuring TCP and UDP services with the ingress nginx controller, we will need to edit the configmap that is installed by default when enabling the minikube ingress addon.
There are 2 configmaps, 1 for TCP services and 1 for UDP services. By default they look like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
Since these configmaps are centralized and may contain configurations, it is best if we only patch them rather than completely overwrite them.
Let’s use this redis deployment as an example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: default
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - image: redis
        imagePullPolicy: Always
        name: redis
        ports:
        - containerPort: 6379
          protocol: TCP
Create a file redis-deployment.yaml and paste the contents above. Then install the redis deployment with the following command:
kubectl apply -f redis-deployment.yaml
Next we need to create a service that can route traffic to our pods:
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  namespace: default
spec:
  selector:
    app: redis
  type: ClusterIP
  ports:
    - name: tcp-port
      port: 6379
      targetPort: 6379
      protocol: TCP
Create a file redis-service.yaml and paste the contents above. Then install the redis service with the following command:
kubectl apply -f redis-service.yaml
To add a TCP service to the nginx ingress controller you can run the following command:
kubectl patch configmap tcp-services -n kube-system --patch '{"data":{"6379":"default/redis-service:6379"}}'
Where:
6379 : the port your service should listen to from outside the minikube virtual machine
default : the namespace that your service is installed in
redis-service : the name of the service
We can verify that our resource was patched with the following command:
kubectl get configmap tcp-services -n kube-system -o yaml
We should see something like this:
apiVersion: v1
data:
  "6379": default/redis-service:6379
kind: ConfigMap
metadata:
  creationTimestamp: "2019-10-01T16:19:57Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: tcp-services
  namespace: kube-system
  resourceVersion: "2857"
  selfLink: /api/v1/namespaces/kube-system/configmaps/tcp-services
  uid: 4f7fac22-e467-11e9-b543-080027057910
The only value you need to validate is that there is a value under the data property that looks like this:
"6379": default/redis-service:6379
Patch the ingress-nginx-controller
There is one final step that must be done in order to obtain connectivity from outside the cluster. We need to patch our nginx controller so that it listens on port 6379 and can route traffic to your service. To do this we need to create a patch file.
spec:
  template:
    spec:
      containers:
      - name: ingress-nginx-controller
        ports:
        - containerPort: 6379
          hostPort: 6379
Create a file called ingress-nginx-controller-patch.yaml and paste the contents above.
Next apply the changes with the following command:
kubectl patch deployment ingress-nginx-controller --patch "$(cat ingress-nginx-controller-patch.yaml)" -n kube-system
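With the configmap and controller patched, a quick external connectivity check might look like this (assuming redis-cli is installed on your machine and you are running minikube); a PONG reply means traffic is reaching redis through the controller:
$ redis-cli -h $(minikube ip) -p 6379 ping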
The way I made it work is by enabling ssl-passthrough on the nginx-ingress controller.
Once my nginx-ingress controller was patched, I was able to connect.
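For reference, a minimal sketch of such a patch, assuming the kubernetes/ingress-nginx controller (which supports the --enable-ssl-passthrough flag) is deployed as ingress-nginx-controller in the ingress-nginx namespace; adjust names to your install:
$ kubectl patch deployment ingress-nginx-controller -n ingress-nginx --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-ssl-passthrough"}]'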
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: redis-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
spec:
  tls:
    - hosts:
        - <host_address>
      secretName: <k8s_secret_name>
  rules:
    - host: <host_address>
      http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: redis-service
                port:
                  number: 6380
Python snippet to connect:
import redis

r = redis.StrictRedis(host='<host_address>',
                      port=443, db=0, ssl=True,
                      ssl_ca_certs='server.pem')
print(r.ping())
I use Redis with TLS in my case.

Error {"message":"failure to get a peer from the ring-balancer"} using kong ingress

I get this error message when trying to access via the public IP:
"{"message":"failure to get a peer from the ring-balancer"}"
It looks like Kong is unable to reach the upstream services.
I am using the voting app.
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: telehealth-ingress
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: "kong"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: voting-service
          servicePort: 80
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: voting-service
  labels:
    name: voting-service
    app: voting-app
spec:
  ports:
  - targetPort: 80
    port: 80
  selector:
    name: voting-app-pod
    app: voting-app
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: voting-app-pod
  labels:
    name: voting-app-pod
    app: voting-app
spec:
  template:
    metadata:
      labels:
        name: voting-app-pod
        app: voting-app
    spec:
      containers:
      - name: voting-app
        image: dockersamples/examplevotingapp_vote
        ports:
        - containerPort: 80
  replicas: 2
  selector:
    matchLabels:
      app: voting-app
There could be one of many things wrong here, but essentially your ingress cannot reach your backend.
Is your backend up and running?
Check backend pods are "Running"
kubectl get pods
Check backend deployment has all replicas up
kubectl get deploy
Connect to the app pod and run a localhost:80 request
kubectl exec -it <pod-name> sh
# curl http://localhost
Connect to the ingress pod and see if you can reach the service from there
kubectl exec -it <ingress-pod-name> sh
# dig voting-service (can you DNS resolve it)
# telnet voting-service 80
# curl http://voting-service
This might shed some light on why you can't reach the backend service. What HTTP error code are you seeing?
The problem was resolved after deploying the services and deployments in the kong namespace instead of the default namespace. Now I can access the application via the Kong ingress public IP.
It looks like the Kong ingress is not able to resolve the service via headless DNS; we need to mention the FQDN in the ingress yaml.
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: telehealth-ingress
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: "kong"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: voting-service
          servicePort: 80
Try this, I think it will work.

Services can't communicate because there is not DNS resolving in Kubernetes

I configured my services with type ClusterIP, and I want to make them communicate.
Service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-backend-deployment
  name: app-backend
spec:
  type: ClusterIP
  ports:
    - port: 8020
      protocol: TCP
      targetPort: 8100
  selector:
    app: app-backend
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-backend
  name: app-backend-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-backend
  template:
    metadata:
      labels:
        app: app-backend
    spec:
      containers:
        - name: app-backend
          image: app-backend
          ports:
            - containerPort: 8100
          imagePullPolicy: Never
ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: backend-conf # name of configMap
data:
  BACKEND_SERVICE_HOST: app-backend:8020
That is what I pass to the frontend service, and I want to make a REST call through the DNS name, for example http://app-backend:8020/get/1. But as I can see in the console, the app cannot resolve the DNS name: net::ERR_NAME_NOT_RESOLVED.
I also checked nslookup from a pod:
busybox nslookup app-backend.default.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: app-backend.default.svc.cluster.local
Address: 10.106.41.36
And compared it to:
kubectl describe svc app-backend
Name: app-backend
Namespace: default
Labels: app=app-backend-deployment
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"...
Selector: app=app-backend
Type: ClusterIP
IP: 10.106.41.36
Port: <unset> 8020/TCP
TargetPort: 8100/TCP
As you can see, the Address is the same IP, but I don't know where to look for what is wrong or why the DNS resolver doesn't work. kubectl version: Client "v1.15.5", Server "v1.17.3".
Because the frontend is served to the local machine (that is how Angular works), its REST requests cannot reach the other backend service through Kubernetes DNS. I need to communicate with it through the Ingress. Due to different annotations, I have to use two Ingresses. Maybe there is a better way to use just one, but when I try to use only one Ingress I can't find a way to make them both work with the same annotation.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: app-backend-ingress
spec:
  rules:
    - host: app.io
      http:
        paths:
          - path: /api(/|$)(.*)
            backend:
              serviceName: app-backend
              servicePort: 8020
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: app-frontend-ingress
spec:
  rules:
    - host: app.io
      http:
        paths:
          - path: /
            backend:
              serviceName: app-frontend
              servicePort: 80

Kubernetes Ingress to External Service?

Say I have a service that isn't hosted on Kubernetes. I also have an ingress controller and cert-manager set up on my kubernetes cluster.
Because it's so much simpler and easier to use a Kubernetes ingress to control access to services, I wanted to have a Kubernetes ingress that points to a non-Kubernetes service.
For example, I have a service hosted at https://10.0.40.1:5678 (SSL required, but with a self-signed certificate) and want to access it at service.example.com.
You can do it by manually creating Service and Endpoints objects for your external server.
The objects will look like this:
apiVersion: v1
kind: Service
metadata:
  name: external-ip
spec:
  ports:
  - name: app
    port: 80
    protocol: TCP
    targetPort: 5678
  clusterIP: None
  type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-ip
subsets:
- addresses:
  - ip: 10.0.40.1
  ports:
  - name: app
    port: 5678
    protocol: TCP
Then, you can create an Ingress object which will point to Service external-ip with port 80:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: external-service
spec:
  rules:
  - host: service.example.com
    http:
      paths:
      - backend:
          serviceName: external-ip
          servicePort: 80
        path: /
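A quick way to verify the manual wiring (object names from the example above) is to check that the Endpoints object is attached to the Service and that the ingress picks it up:
$ kubectl get endpoints external-ip
$ kubectl describe ingress external-service
The endpoints output should show 10.0.40.1:5678.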
So I got this working using ingress-nginx to proxy a managed external service over a non-standard port:
apiVersion: v1
kind: Service
metadata:
  name: external-service-expose
  namespace: default
spec:
  type: ExternalName
  externalName: <external-service> # eg example.example.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: external-service-expose
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" # important
spec:
  rules:
  - host: <some-host-on-your-side> # eg external-service.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: external-service-expose
            port:
              number: <port of external service> # eg 4589
  tls:
  - hosts:
    - external-service.yourdomain.com
    secretName: <tls secret for your domain>
Of course, you need to make sure that the managed URL is reachable from inside the cluster; a simple check can be done by launching a debug pod and running:
curl -v https://example.example.com:4589
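One way to launch such a throwaway pod, if you do not already have one handy (the image is just a suggestion):
$ kubectl run debug --rm -it --restart=Never --image=curlimages/curl -- sh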
If your external service has a DNS entry configured, you can use a Kubernetes ExternalName service.
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: myexternal.http.service.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: externalnameservice
  namespace: prod
spec:
  rules:
  - host: service.example.com
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: 80
        path: /
In this way, Kubernetes creates a CNAME record for my-service pointing to myexternal.http.service.com.
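To see that record from inside the cluster, you can run a one-off lookup (the pod name and busybox image are arbitrary choices here); the answer should follow the CNAME to whatever myexternal.http.service.com resolves to:
$ kubectl run dns-test -n prod --rm -it --restart=Never --image=busybox -- nslookup my-service.prod.svc.cluster.local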
I just want to update @Moulick's answer here for Kubernetes v1.21.1, as the Ingress configuration has changed a little bit.
In my example I am using Let's Encrypt for my nginx controller:
apiVersion: v1
kind: Service
metadata:
  name: external-service
  namespace: default
spec:
  type: ExternalName
  externalName: <some-host-on-your-side> # eg managed.yourdomain.com
  ports:
    - port: <port of external service> # eg 4589
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: external-service
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" # important
spec:
  tls:
    - hosts:
        - <some-host-on-your-side> # eg managed.yourdomain.com
      secretName: tls-external-service
  rules:
    - host: <some-host-on-your-side> # eg managed.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: external-service
                port:
                  number: <port of external service> # eg 4589