We are running Istio 1.1.3 on 1.12.5-gke.10 cluster nodes.
We use cert-manager to manage our Let's Encrypt certificates.
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: certs.ourdomain.nl
  namespace: istio-system
spec:
  secretName: certs.ourdomain.nl
  renewBefore: 360h # 15d
  commonName: operations.ourdomain.nl
  dnsNames:
  - operations.ourdomain.nl
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer
  acme:
    config:
    - http01:
        ingressClass: istio
      domains:
      - operations.ourdomain.nl
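While the challenge is in flight, the Certificate and its secret can be inspected with plain kubectl (nothing assumed beyond the manifest above):

kubectl describe certificate certs.ourdomain.nl -n istio-system
kubectl get secret certs.ourdomain.nl -n istio-system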
Next we see the ACME solver backend, its service (NodePort) and an ingress deployed. The auto-generated ingress looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  generateName: cm-acme-http-solver-
  generation: 1
  labels:
    certmanager.k8s.io/acme-http-domain: "1734084804"
    certmanager.k8s.io/acme-http-token: "1476005735"
  name: cm-acme-http-solver-69vzw
  namespace: istio-system
  ownerReferences:
  - apiVersion: certmanager.k8s.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Certificate
    name: certs.ourdomain.nl
    uid: 751011d2-4fc8-11e9-b20e-42010aa40101
spec:
  rules:
  - host: operations.ourdomain.nl
    http:
      paths:
      - backend:
          serviceName: cm-acme-http-solver-fzk8q
          servicePort: 8089
        path: /.well-known/acme-challenge/dnrcr-LRRMdXhBaUefjqpHQx8ytYuk-feEfXu9gW-Ck
status:
  loadBalancer: {}
However, when we try to access the URL operations.ourdomain.nl/.well-known/acme-challenge/dnrcr-LRRMdXhBaUefjqpHQx8ytYuk-feEfXu9gW-Ck we get a 404.
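To rule out the solver itself, one check (using the auto-generated service name from the ingress above) is to confirm the solver service has endpoints and answers the challenge path directly, bypassing Istio:

kubectl get endpoints cm-acme-http-solver-fzk8q -n istio-system
kubectl -n istio-system port-forward svc/cm-acme-http-solver-fzk8q 8089:8089
# in a second shell; the solver may check the Host header, so pass it explicitly
curl -H "Host: operations.ourdomain.nl" http://localhost:8089/.well-known/acme-challenge/dnrcr-LRRMdXhBaUefjqpHQx8ytYuk-feEfXu9gW-Ck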
We do have a load balancer for Istio; the Gateway looks like this:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  labels:
    app: istio-ingress
    chart: gateways-1.1.0
    heritage: Tiller
    istio: ingress
    release: istio
  name: istio-ingress
  namespace: istio-system
spec:
  selector:
    app: istio-ingress
  servers:
  - hosts:
    - operations.ourdomain.nl
    #port:
    #  name: http
    #  number: 80
    #  protocol: HTTP
    #tls:
    #  httpsRedirect: true
  - hosts:
    - operations.ourdomain.nl
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: certs.ourdomain.nl
      mode: SIMPLE
      privateKey: sds
      serverCertificate: sds
This interesting article gives good insight into how the ACME challenge is supposed to work. For testing purposes we have removed port 80 and the HTTPS redirect from our custom gateway, and we have added the auto-generated k8s gateway, which listens only on port 80.
Istio is supposed to create a VirtualService for the ACME challenge. This seems to be happening, because now, when we request the ACME challenge URL, we get a 503: upstream connect error or disconnect/reset before headers. I believe this means the request reaches the gateway and is matched by a VirtualService, but there is no service / healthy pod to route the traffic to.
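One way to verify that with plain kubectl is to list the auto-generated VirtualServices (and Gateways) in istio-system and check which one matches the challenge path:

kubectl get virtualservices,gateways -n istio-system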
We do see some possibly interesting logging in the istio-pilot:
"ProxyStatus": {"endpoint_no_pod":
  {"cm-acme-http-solver-l5j2g.istio-system.svc.cluster.local":
    {"message": "10.16.57.248"}
I have double checked and the service mentioned above does have a pod it is exposing. So I am not sure whether this line is relevant to this issue.
The acme-challenge pods do not have an Istio sidecar. Could this be the issue? If so, why does it apparently work for others?
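A quick way to check whether the solver pods got a sidecar is to list each pod's containers; a solver pod without an istio-proxy container is running without one:

kubectl get pods -n istio-system -o custom-columns=NAME:.metadata.name,CONTAINERS:.spec.containers[*].name | grep cm-acme-http-solver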
Related
We currently use the AWS ALB Ingress Controller to front our ingress and terminate SSL using a certificate from AWS ACM. This works fine.
Is there a way to also encrypt the traffic from the load balancer to the cluster?
Here is what I attempted:
Install/Configure Cert Manager
Add a TLS Secret to the Ingress
Change the ingress annotation to set the backend protocol to HTTPS: alb.ingress.kubernetes.io/backend-protocol: HTTPS (see the sketch below)
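Concretely, the attempt boiled down to these two deltas against the working manifest shown below (a sketch of just the changed pieces, not a complete Ingress):

metadata:
  annotations:
    # switched from HTTP so the ALB speaks TLS to the targets
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
  tls:
  - hosts:
    - goldilocks.qa.realdomain.com
    secretName: goldilocks-qa-cert   # issued by cert-manager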
This results in a 502 Bad Gateway error.
Here is my current, working ingress, with only the relevant parts shown:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    alb.ingress.kubernetes.io/certificate-arn: real-cert-arn
    alb.ingress.kubernetes.io/group.name: public.monitor
    alb.ingress.kubernetes.io/group.order: "40"
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/load-balancer-name: monitoring-public-qa
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-FS-1-2-Res-2020-10
    cert-manager.io/cluster-issuer: cert-manager-r53-qa
    kubernetes.io/ingress.class: alb
spec:
  rules:
  - host: goldilocks.qa.realdomain.com
    http:
      paths:
      - backend:
          service:
            name: goldilocks-dashboard
            port:
              name: http
        path: /*
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - goldilocks.qa.realdomain.com
    secretName: goldilocks-qa-cert
status:
  loadBalancer:
    ingress:
    - hostname: real-lb-address.us-gov-west-1.elb.amazonaws.com
My (staging) cert exists and appears fine:
[ec2-user@ip-10-17-2-102 ~]$ kubectl get cert -n goldilocks goldilocks-qa-cert -o yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: goldilocks-qa-cert
  namespace: goldilocks
  ownerReferences:
  - apiVersion: networking.k8s.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: Ingress
    name: goldilocks-dashboard
    uid: 2494e5be-e624-471b-afd3-c8d56f5dc853
  resourceVersion: "81003934"
  uid: d494a1ec-7509-474e-b154-d5db8ceb86c0
spec:
  dnsNames:
  - goldilocks.qa.realdomain.com
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: cert-manager-r53-qa
  secretName: goldilocks-qa-cert
  usages:
  - digital signature
  - key encipherment
status:
  conditions:
  - lastTransitionTime: "2023-02-08T16:06:56Z"
    message: Certificate is up to date and has not expired
The fact that I can't figure out what to google to find this answer leads me to think I'm attempting something weird? I understand I could just terminate the TLS on the pod, but I didn't want to rely on Let's Encrypt to provide me good/valid certs; I just want the traffic encrypted.
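For what it's worth, alb.ingress.kubernetes.io/backend-protocol: HTTPS makes the ALB open TLS connections to the pods, so the backend itself has to serve TLS on its target port. A minimal sketch of mounting the cert-manager secret into the backend Deployment (container name, image and mount path are hypothetical, and the application would still need to be configured to load the certificate):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: goldilocks-dashboard
  namespace: goldilocks
spec:
  selector:
    matchLabels:
      app: goldilocks-dashboard
  template:
    metadata:
      labels:
        app: goldilocks-dashboard
    spec:
      containers:
      - name: dashboard                        # hypothetical container name
        image: example/dashboard:latest        # placeholder image
        volumeMounts:
        - name: tls
          mountPath: /etc/tls                  # illustrative path
          readOnly: true
      volumes:
      - name: tls
        secret:
          secretName: goldilocks-qa-cert       # tls.crt / tls.key issued by cert-manager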
I have a currently functioning Istio application. I would now like to add HTTPS using Google Cloud managed certificates. I set up the ingress like this...
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
  namespace: istio-system
spec:
  domains:
  - mydomain.co
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: managed-cert-ingress
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.global-static-ip-name: managed-cert
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: istio-ingressgateway
      port:
        number: 443
---
But when I try going to the site (https://mydomain.co) I get...
Secure Connection Failed
An error occurred during a connection to earth-615.mydomain.co. Cannot communicate securely with peer: no common encryption algorithm(s).
Error code: SSL_ERROR_NO_CYPHER_OVERLAP
The functioning virtual service/gateway looks like this...
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: ingress-gateway
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.global-static-ip-name: earth-616
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http2
      protocol: HTTP2
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: test-app
  namespace: foo
spec:
  hosts:
  - "*"
  gateways:
  - "istio-system/ingress-gateway"
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        host: test-app
        port:
          number: 8000
Pointing the k8s Ingress at the Istio ingress gateway would add latency and would additionally require the Istio gateway to use SNI passthrough to accept the (already TLS-terminated) HTTPS traffic.
Instead, the best practice here is to use the certificate directly with an Istio secure gateway.
You can use a certificate and key issued by a Google CA, e.g. from Certificate Authority Service, and create a k8s secret to hold them. Then configure the Istio secure gateway to terminate the TLS traffic as documented here.
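A minimal sketch of that approach, assuming the issued certificate and key are available as files (file and secret names below are illustrative, not from the original post):

# store the certificate and key as a TLS secret next to the ingress gateway
kubectl create -n istio-system secret tls mydomain-credential --cert=mydomain.crt --key=mydomain.key

# add an HTTPS server to the gateway, referencing the secret by name
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: ingress-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http2
      protocol: HTTP2
    hosts:
    - "*"
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: mydomain-credential   # must match the secret created above
    hosts:
    - "mydomain.co"

With this, TLS terminates at the Istio ingress gateway itself, and the GCE Ingress in front of it is no longer needed.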
What I am trying to achieve: block all traffic to a service, containing the code to handle this within the same namespace as the service.
Why: this is the first step in "locking down" a specific service to specific IPs/CIDRs
I have a primary ingress GW called istio-ingressgateway which works for services.
$ kubectl describe gw istio-ingressgateway -n istio-system
Name:         istio-ingressgateway
Namespace:    istio-system
Labels:       operator.istio.io/component=IngressGateways
              operator.istio.io/managed=Reconcile
              operator.istio.io/version=1.5.5
              release=istio
Annotations:
API Version:  networking.istio.io/v1beta1
Kind:         Gateway
Metadata:
  Creation Timestamp:  2020-08-28T15:45:10Z
  Generation:          1
  Resource Version:    95438963
  Self Link:           /apis/networking.istio.io/v1beta1/namespaces/istio-system/gateways/istio-ingressgateway
  UID:                 ae5dd2d0-44a3-4c2b-a7ba-4b29c26fa0b9
Spec:
  Selector:
    App:    istio-ingressgateway
    Istio:  ingressgateway
  Servers:
    Hosts:
      *
    Port:
      Name:      http
      Number:    80
      Protocol:  HTTP
Events:  <none>
I also have another "primary" GW, the K8s ingress GW to support TLS (thought I'd include this, to be as explicit as possible)
k describe gw istio-autogenerated-k8s-ingress -n istio-system
Name:         istio-autogenerated-k8s-ingress
Namespace:    istio-system
Labels:       app=istio-ingressgateway
              istio=ingressgateway
              operator.istio.io/component=IngressGateways
              operator.istio.io/managed=Reconcile
              operator.istio.io/version=1.5.5
              release=istio
Annotations:
API Version:  networking.istio.io/v1beta1
Kind:         Gateway
Metadata:
  Creation Timestamp:  2020-08-28T15:45:56Z
  Generation:          2
  Resource Version:    95439499
  Self Link:           /apis/networking.istio.io/v1beta1/namespaces/istio-system/gateways/istio-autogenerated-k8s-ingress
  UID:                 edd46c17-9975-4089-95ff-a2414d40954a
Spec:
  Selector:
    Istio:  ingressgateway
  Servers:
    Hosts:
      *
    Port:
      Name:      http
      Number:    80
      Protocol:  HTTP
    Hosts:
      *
    Port:
      Name:      https-default
      Number:    443
      Protocol:  HTTPS
    Tls:
      Credential Name:     ingress-cert
      Mode:                SIMPLE
      Private Key:         sds
      Server Certificate:  sds
Events:  <none>
I want to be able to create another GW in namespace x and have an authorization policy attached to that GW.
If I create the authorization policy in the istio-system namespace, it comes back with RBAC: access denied, which is great - but that applies to all services using the primary GW.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: block-all
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  action: DENY
  rules:
  - from:
    - source:
        ipBlocks: ["0.0.0.0/0"]
What I currently have does not work. Any pointers would be highly appreciated. The following are all created in the x namespace by applying kubectl apply -f files.yaml -n x:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  annotations:
    app: x-ingress
  name: x-gw
  labels:
    app: x-ingress
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - x.y.com
    port:
      name: http
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
  - hosts:
    - x.y.com
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      privateKey: sds
      serverCertificate: sds
      credentialName: ingress-cert
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: x
  labels:
    app: x
spec:
  hosts:
  - x.y.com
  gateways:
  - x-gw
  http:
  - route:
    - destination:
        host: x
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: x-ingress-policy
spec:
  selector:
    matchLabels:
      app: x-ingress
  action: DENY
  rules:
  - from:
    - source:
        ipBlocks: ["0.0.0.0/0"]
The above should be blocking all traffic to the gateway, as it matches the CIDR range 0.0.0.0/0.
Am I entirely misunderstanding the concept of gateways and AuthorizationPolicies, or have I missed something?
Edit
I ended up creating another gateway with the IP restriction on it, as classic load balancers on AWS do not support IP forwarding.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istiocontrolplane
spec:
  profile: demo
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
    - name: admin-ingressgateway
      enabled: true
      label:
        istio: admin-ingressgateway
      k8s:
        serviceAnnotations:
          service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all-admin
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: admin-ingressgateway
  action: ALLOW
  rules:
  - from:
    - source:
        ipBlocks: ["176.252.114.59/32"]
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
I then used that gateway in my workload that I wanted to lock down.
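For reference, a minimal sketch of the per-workload Gateway that selects this new ingress gateway instead of the default one (the host and names are the placeholder ones from the question):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: x-admin-gw
  namespace: x
spec:
  selector:
    istio: admin-ingressgateway   # the label given to the extra gateway in the IstioOperator above
  servers:
  - hosts:
    - x.y.com
    port:
      name: http
      number: 80
      protocol: HTTP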
As far as I know, you would rather use an AuthorizationPolicy in one of three ways:
on the ingress gateway
on a namespace
on a specific service
I have tried to make it work on a specific gateway with annotations like you did, but I couldn't make it work for me.
For example, the following authorization policy denies all requests to workloads in namespace x:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: x
spec:
  {}
The following authorization policy denies all requests on the ingress gateway:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
The following authorization policy denies all requests to httpbin in namespace x:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-service-x
  namespace: x
spec:
  selector:
    matchLabels:
      app: httpbin
Let's say you deny all requests in namespace x and allow only GET requests to the httpbin service.
Then you would use this AuthorizationPolicy to deny all requests:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: x
spec:
  {}
And this AuthorizationPolicy to allow only GET requests:
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
  name: "x-viewer"
  namespace: x
spec:
  selector:
    matchLabels:
      app: httpbin
  rules:
  - to:
    - operation:
        methods: ["GET"]
And there is the main issue, which is ipBlocks. There is a related GitHub issue about that.
As mentioned here by @incfly:
I guess the reason why it stops working when it's not in an ingress pod is because the sourceIP attribute will not be the real client IP then.
According to https://github.com/istio/istio/issues/22341 (not done yet), this aims at providing better support without setting k8s externalTrafficPolicy to local, and supports CIDR ranges as well.
I have tried this example from the Istio documentation to make it work, but it wasn't working for me, even after I changed externalTrafficPolicy. Then a workaround with an EnvoyFilter came from the Istio discuss thread above.
Answer provided by @hleal18 here:
Got an example working successfully using EnvoyFilters, specifically with a remote_ip condition applied on httpbin.
Sharing the manifest for reference.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: httpbin
  namespace: foo
spec:
  workloadSelector:
    labels:
      app: httpbin
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
            subFilter:
              name: "envoy.router"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.rbac
        config:
          rules:
            action: ALLOW
            policies:
              "ip-premissions":
                permissions:
                - any: true
                principals:
                - remote_ip:
                    address_prefix: xxx.xxx.xx.xx
                    prefix_len: 32
I have tried the above EnvoyFilter on my test cluster and, as far as I can see, it's working.
Take a look at the steps I took below.
1. I changed the externalTrafficPolicy with:
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
2. I created namespace x with istio-injection enabled and deployed httpbin there.
kubectl create namespace x
kubectl label namespace x istio-injection=enabled
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.7/samples/httpbin/httpbin.yaml -n x
kubectl apply -f https://github.com/istio/istio/blob/master/samples/httpbin/httpbin-gateway.yaml -n x
3. I created the EnvoyFilter:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: httpbin
  namespace: x
spec:
  workloadSelector:
    labels:
      app: httpbin
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
            subFilter:
              name: "envoy.router"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.rbac
        config:
          rules:
            action: ALLOW
            policies:
              "ip-premissions":
                permissions:
                - any: true
                principals:
                - remote_ip:
                    address_prefix: xx.xx.xx.xx
                    prefix_len: 32
address_prefix is the CLIENT_IP; these are the commands I used to get it:
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
curl "$INGRESS_HOST":"$INGRESS_PORT"/headers -s -o /dev/null -w "%{http_code}\n"
CLIENT_IP=$(curl "$INGRESS_HOST":"$INGRESS_PORT"/ip -s | grep "origin" | cut -d'"' -f 4) && echo "$CLIENT_IP"
4. I tested it with curl and my browser.
curl "$INGRESS_HOST":"$INGRESS_PORT"/headers -s -o /dev/null -w "%{http_code}\n"
200
Let me know if you have any more questions, I might be able to help.
I'm simply following the tutorial here: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#creating_an_ingress_with_a_managed_certificate
Everything works fine until I deploy my certificate and wait 20 minutes for it to show up as:
Status:
  Certificate Name:    daojnfiwlefielwrfn
  Certificate Status:  Provisioning
  Domain Status:
    Domain:  moviedecisionengine.com
    Status:  FailedNotVisible
That domain clearly works so what am I missing?
EDIT:
Here's the Cert:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: moviedecisionengine
spec:
  domains:
  - moviedecisionengine.com
The Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.gcp.kubernetes.io/pre-shared-cert: mcrt-14cb8169-25ba-4712-bca5-cb612562a00b
    ingress.kubernetes.io/backends: '{"k8s-be-31721--1cd1f38313af9089":"HEALTHY"}'
    ingress.kubernetes.io/forwarding-rule: k8s-fw-default-showcase-mde-ingress--1cd1f38313af9089
    ingress.kubernetes.io/https-forwarding-rule: k8s-fws-default-showcase-mde-ingress--1cd1f38313af9089
    ingress.kubernetes.io/https-target-proxy: k8s-tps-default-showcase-mde-ingress--1cd1f38313af9089
    ingress.kubernetes.io/ssl-cert: mcrt-14cb8169-25ba-4712-bca5-cb612562a00b
    ingress.kubernetes.io/target-proxy: k8s-tp-default-showcase-mde-ingress--1cd1f38313af9089
    ingress.kubernetes.io/url-map: k8s-um-default-showcase-mde-ingress--1cd1f38313af9089
    kubernetes.io/ingress.global-static-ip-name: 34.107.208.110
    networking.gke.io/managed-certificates: moviedecisionengine
  creationTimestamp: "2020-01-16T19:44:13Z"
  generation: 4
  name: showcase-mde-ingress
  namespace: default
  resourceVersion: "1039270"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/showcase-mde-ingress
  uid: 92a2f91f-3898-11ea-b820-42010a800045
spec:
  backend:
    serviceName: showcase-mde
    servicePort: 80
  rules:
  - host: moviedecisionengine.com
    http:
      paths:
      - backend:
          serviceName: showcase-mde
          servicePort: 80
  - host: www.moviedecisionengine.com
    http:
      paths:
      - backend:
          serviceName: showcase-mde
          servicePort: 80
status:
  loadBalancer:
    ingress:
    - ip: 34.107.208.110
And lastly, the load balancer:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-01-13T22:41:27Z"
  labels:
    app: showcase-mde
  name: showcase-mde
  namespace: default
  resourceVersion: "2298"
  selfLink: /api/v1/namespaces/default/services/showcase-mde
  uid: d5a77d7b-3655-11ea-af7f-42010a800157
spec:
  clusterIP: 10.31.251.46
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31721
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: showcase-mde
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 35.232.156.172
For the full output of kubectl describe managedcertificate moviedecisionengine:
Name:         moviedecisionengine
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"networking.gke.io/v1beta1","kind":"ManagedCertificate","metadata":{"annotations":{},"name":"moviedecisionengine","namespace...
API Version:  networking.gke.io/v1beta1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2020-01-17T16:47:19Z
  Generation:          3
  Resource Version:    1042869
  Self Link:           /apis/networking.gke.io/v1beta1/namespaces/default/managedcertificates/moviedecisionengine
  UID:                 06c97b69-3949-11ea-b820-42010a800045
Spec:
  Domains:
    moviedecisionengine.com
Status:
  Certificate Name:    mcrt-14cb8169-25ba-4712-bca5-cb612562a00b
  Certificate Status:  Provisioning
  Domain Status:
    Domain:  moviedecisionengine.com
    Status:  FailedNotVisible
Events:  <none>
I was successful in using a ManagedCertificate with a GKE Ingress resource.
Let me elaborate on that:
Steps to reproduce:
Create IP address with gcloud
Update the DNS entry
Create a deployment
Create a service
Create a certificate
Create an Ingress resource
Create IP address with gcloud
Invoke the command below to create a static IP address:
$ gcloud compute addresses create example-address --global
Check the newly created IP address with the command below:
$ gcloud compute addresses describe example-address --global
Update the DNS entry
Go to GCP -> Network Services -> Cloud DNS.
Edit your zone, adding an A record with the address created above.
Wait for it to apply.
Check with $ nslookup DOMAIN.NAME that the entry points to the appropriate address.
Create a deployment
Below is an example deployment that will respond to traffic:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  selector:
    matchLabels:
      app: hello
      version: 1.0.0
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
        version: 1.0.0
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:1.0"
        env:
        - name: "PORT"
          value: "50001"
Apply it with command $ kubectl apply -f FILE_NAME.yaml
You can change this deployment to suit your application but be aware of the ports that your application will respond to.
Create a service
Use a NodePort Service, the same as in the provided link:
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: NodePort
  selector:
    app: hello
    version: 1.0.0
  ports:
  - name: hello-port
    protocol: TCP
    port: 50001
    targetPort: 50001
Apply it with command $ kubectl apply -f FILE_NAME.yaml
Create a certificate
As shown in the guide, you can use the example below to create a ManagedCertificate:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: example-certificate
spec:
  domains:
  - DOMAIN.NAME
Apply it with command $ kubectl apply -f FILE_NAME.yaml
The status FAILED_NOT_VISIBLE indicates that certificate provisioning failed for a domain because of a problem with DNS or the load balancing configuration. Make sure that DNS is configured so that the certificate's domain resolves to the IP address of the load balancer.
-- Google Cloud documentation
Provisioning of this certificate depends on the DNS entry that you created earlier.
Create an Ingress resource
Below is an example Ingress resource that uses the ManagedCertificate:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: example-address
    networking.gke.io/managed-certificates: example-certificate
spec:
  rules:
  - host: DOMAIN.NAME
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-service
          servicePort: hello-port
Apply it with command $ kubectl apply -f FILE_NAME.yaml
It took about 20-25 minutes for it to fully work.
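Once everything is applied, you can confirm the result with the same tools used above: the managed certificate should eventually report an Active status, and the domain should resolve to the reserved address.

$ kubectl describe managedcertificate example-certificate
$ gcloud compute addresses describe example-address --global
$ nslookup DOMAIN.NAME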
I have an IKS cluster with AppID tied to it. I have a problem with redirecting a NodeJS app through the ingress. All the other apps work with both AppID and the ingress, but this one gives a 500 Internal Server Error when redirecting back from AppID. The service works fine when used as a NodePort and accessed on the server address and that nodePort.
When describing the ingress I'm only getting successful results back
Events:
  Type    Reason   Age  From                                                              Message
  ----    ------   ---  ----                                                              -------
  Normal  Success  58m  public-cr9c603a564cff4a27adb020dd40ceb65e-alb1-59fb8fc894-s59nd   Successfully applied ingress resource.
My ingress looks like:
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      ingress.bluemix.net/appid-auth: bindSecret=binding-raven3-app-id namespace=default requestType=web serviceName=r3-ui
      ingress.bluemix.net/rewrite-path: serviceName=r3-ui rewrite=/;
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"ingress.bluemix.net/appid-auth":"bindSecret=binding-raven3-app-id namespace=default requestType=api serviceName=r3-ui [idToken=false]"},"name":"myingress","namespace":"default"},"spec":{"rules":[{"host":"*host*","http":{"paths":[{"backend":{"serviceName":"r3-ui","servicePort":3000},"path":"/"}]}}],"tls":[{"hosts":["mydomain"],"secretName":"mytlssecret"}]}}
    creationTimestamp: "2019-06-20T10:31:57Z"
    generation: 21
    name: myingress
    namespace: default
    resourceVersion: "24140"
    selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/myingress
    uid: a15f74aa-9346-11e9-a9bf-f63d33811ba6
  spec:
    rules:
    - host: *host*
      http:
        paths:
        - backend:
            serviceName: r3-ui
            servicePort: 3000
          path: /
    tls:
    - hosts:
      - *host*
      secretName: raven3
  status:
    loadBalancer:
      ingress:
      - ip: 169.51.71.141
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
and my service looks like:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-06-20T09:25:30Z"
  labels:
    app: r3-ui
    chart: r3-ui-0.1.0
    heritage: Tiller
    release: r3-ui
  name: r3-ui
  namespace: default
  resourceVersion: "23940"
  selfLink: /api/v1/namespaces/default/services/r3-ui
  uid: 58ff6604-933d-11e9-a9bf-f63d33811ba6
spec:
  clusterIP: 172.21.180.240
  ports:
  - name: http
    port: 3000
    protocol: TCP
    targetPort: http
  selector:
    app: r3-ui
    release: r3-ui
  sessionAffinity: None
  type: ClusterIP
status:
What's weird is that I'm getting different results on port 80 and on port 443: on port 80 I get an HTTP 500 error, and on port 443 I get "Invalid Host header".
Are you using HTTPS to access your application? For security reasons, AppID authentication only supports back ends with TLS/SSL enabled.
If you are using SSL and still having trouble, can you kindly share your Ingress and application logs so we can figure out what went wrong?
Thanks.