GKE Managed Certificate not serving over HTTPS - kubernetes

I'm trying to spin up a Kubernetes cluster that I can access securely, and I can't seem to get that last part working. I am following this tutorial: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs
Here are the .yaml files I'm using for my Ingress, NodePort Service, and ManagedCertificate:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: client-v1-cert
spec:
  domains:
    - api.mydomain.com
---
apiVersion: v1
kind: Service
metadata:
  name: client-nodeport-service
spec:
  selector:
    app: myApp
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: api-v1
    networking.gke.io/managed-certificates: client-v1-cert
spec:
  backend:
    serviceName: client-nodeport-service
    servicePort: 80
No errors that I can see in the GCP console. I can also access my API at http://api.mydomain.com/, but it won't work when I try HTTPS. I've been banging my head on this for a few days and am wondering if there's some little thing I'm missing.
--- UPDATE ---
Output of kubectl describe managedcertificate
Name:         client-v1-cert
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
API Version:  networking.gke.io/v1beta1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2019-07-01T17:42:43Z
  Generation:          3
  Resource Version:    1136504
  Self Link:           /apis/networking.gke.io/v1beta1/namespaces/default/managedcertificates/client-v1-cert
  UID:                 b9b7bec1-9c27-33c9-a309-42284a800179
Spec:
  Domains:
    api.mydomain.com
Status:
  Certificate Name:    mcrt-286cdab3-b995-40cc-9b3a-28439285e694
  Certificate Status:  Active
  Domain Status:
    Domain:       api.mydomain.com
    Status:       Active
  Expire Time:    2019-09-29T09:55:12.000-07:00
Events:  <none>

I figured out a solution to this problem. I went into my GCP console, located the load balancer associated with the Ingress, and noticed that there was only one frontend protocol: HTTP serving over port 80. So I manually added a second frontend for HTTPS, selected the managed certificate from the list, waited about 5 minutes, and everything worked.
I have no idea why my ingress.yaml didn't do that automatically, though. So although the problem is fixed, if anyone out there knows why, I would love to hear it.
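For reference, here is a minimal sketch of the same Ingress on the current networking.k8s.io/v1 API, reusing the names from the manifests above. This is not a confirmed explanation for the missing HTTPS frontend, but re-applying the Ingress once the ManagedCertificate reports Active, on a non-deprecated API group, is a cheap thing to try before editing the load balancer by hand:
# Hedged sketch only: the Ingress from the question rewritten for the
# networking.k8s.io/v1 API; all names are taken from the manifests above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: api-v1
    networking.gke.io/managed-certificates: client-v1-cert
spec:
  defaultBackend:
    service:
      name: client-nodeport-service
      port:
        number: 80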

Related

Unable to log egress traffic HTTP requests with the istio-proxy

I am following this guide.
Ingress requests are getting logged. Egress traffic control is working as expected, except I am unable to log egress HTTP requests. What is missing?
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: myapp
spec:
  workloadSelector:
    labels:
      app: myapp
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY
  egress:
    - hosts:
        - default/*.example.com

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: example
spec:
  location: MESH_EXTERNAL
  resolution: NONE
  hosts:
    - '*.example.com'
  ports:
    - name: https
      protocol: TLS
      number: 443

apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
    - providers:
        - name: envoy
Kubernetes 1.22.2, Istio 1.11.4.
For ingress traffic logging I am using an EnvoyFilter to set the log format, and it works without any additional configuration. In the egress case, I had to set accessLogFile: /dev/stdout:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: config
  namespace: istio-system
spec:
  profile: default
  meshConfig:
    accessLogFile: /dev/stdout
AFAIK, Istio collects only ingress HTTP logs by default.
In the Istio documentation there is an old article (from 2018) describing how to enable egress traffic HTTP logs.
Please keep in mind that some of the information may be outdated; however, I believe this is the part that you are missing.
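Not from the original answer, but one hedged thing to check while you are there: with protocol: TLS on port 443 (as in the ServiceEntry above), the sidecar only sees opaque TLS, so its access logs for that traffic stay at the TCP level, with no HTTP method or path. Below is a sketch of a ServiceEntry variant that also declares a plain-HTTP port, so that requests the application sends as plain HTTP to those hosts are parsed, and therefore logged, as HTTP. The name example-http is made up for illustration:
# Hedged sketch: same external hosts as the question's ServiceEntry, with an
# extra plain-HTTP port so unencrypted egress requests produce HTTP access logs.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: example-http
spec:
  location: MESH_EXTERNAL
  resolution: NONE
  hosts:
    - '*.example.com'
  ports:
    - name: http
      protocol: HTTP
      number: 80
    - name: https
      protocol: TLS
      number: 443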

Kubernetes, Loadbalancing, and Nginx Ingress - AKS

Stack:
Azure Kubernetes Service
NGINX Ingress Controller - https://github.com/kubernetes/ingress-nginx
AKS Loadbalancer
Docker containers
My goal is to create a K8s cluster that will allow me to use multiple pods, under a single IP, to create a microservice architecture. After working through tons of tutorials and documentation, I'm not having any luck with my end goal. I got to the point of being able to access a single deployment using the LoadBalancer, but introducing the Ingress has not been successful so far. The services are separated into their respective files for readability and ease of control.
Additionally, the Ingress Controller was added to my cluster as described in the installation instructions using: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/cloud/deploy.yaml
LoadBalancer.yml:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  loadBalancerIP: x.x.x.x
  selector:
    app: ingress-service
    tier: backend
  ports:
    - name: "default"
      port: 80
      targetPort: 80
  type: LoadBalancer
IngressService.yml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - path: /api
            backend:
              serviceName: api-service
              servicePort: 80
api-deployment.yml:
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
spec:
  selector:
    matchLabels:
      app: api
      tier: backend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: api
        tier: backend
        track: stable
    spec:
      containers:
        - name: api
          image: image:tag
          ports:
            - containerPort: 80
          imagePullPolicy: Always
      imagePullSecrets:
        - name: SECRET
The API in the image is exposed on port 80 correctly.
After applying each of the above YAML services and deployments, I attempt a web request to one of the API resources via the LoadBalancer's IP and receive only a timeout on my requests.
Found my answer after hunting around enough. Basically, the problem was that the Ingress Controller already has a LoadBalancer Service built into its YAML, as mentioned in the comments above. However, the selector on that LoadBalancer requires your Ingress resources to be marked as part of the nginx class; that Ingress then points to each of the Services attached to your pods. I also had to make a small modification to use a static IP with the provided load balancer.
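For anyone hitting the same wall, here is a hedged sketch of what that change can look like. It assumes the Service name (ingress-nginx-controller), namespace (ingress-nginx), and labels used by the upstream ingress-nginx cloud manifests around v0.35.0, so verify against your own deploy.yaml before applying. The idea is to set the static IP on the controller's own LoadBalancer Service rather than creating a separate LoadBalancer Service in front of it:
# Hedged sketch: pin the ingress-nginx controller's built-in LoadBalancer
# Service to the pre-provisioned static IP (names and labels assumed from the
# upstream cloud manifest; check your deployed YAML).
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: x.x.x.x   # the static IP from LoadBalancer.yml above
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
With the static IP on the controller's Service, the separate backend LoadBalancer Service from LoadBalancer.yml is no longer needed; the nginx-class Ingress routes /api to api-service behind that single IP.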

Enable IAP on Ingress

I've followed the documentation on how to enable IAP on GKE.
I have:
configured the consent screen
created OAuth credentials
added the universal redirect URL
added myself as an IAP-secured Web App User
And I wrote my deployment like this:
apiVersion: v1
kind: Secret
metadata:
  name: backend-iap-secret
type: Opaque
data:
  client_id: <my_id>
  client_secret: <my_secret>
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  ports:
    - port: 443
      protocol: TCP
      targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: grafana
    spec:
      containers:
        - env:
            - name: GF_SERVER_HTTP_PORT
              value: "3000"
          image: docker.io/grafana/grafana:6.7.1
          name: grafana
          ports:
            - containerPort: 3000
              protocol: TCP
          readinessProbe:
            httpGet:
              path: /api/health
              port: 3000
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: backend-config-iap
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: backend-iap-secret
---
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: monitoring-tls
spec:
  domains:
    - monitoring.foo.com
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "backend-config-iap"}'
    kubernetes.io/ingress.global-static-ip-name: monitoring
    networking.gke.io/managed-certificates: monitoring-tls
  name: grafana
spec:
  backend:
    serviceName: grafana
    servicePort: 443
When I look at my Ingress I have this:
$ k describe ingress
Name:         grafana
[...]
Annotations:  beta.cloud.google.com/backend-config: {"default": "backend-config-iap"}
              ingress.kubernetes.io/backends: {"k8s-blabla":"HEALTHY"}
[...]
Events:  <none>
$
I can connect to the web page without any problem, and Grafana is up and running, but I can also connect without being authenticated (which is a problem).
So everything looks fine, but IAP is not activated. Why?
The worst part is that if I enable it manually, it works, but if I redo kubectl apply -f monitoring.yaml, IAP is disabled.
What am I missing?
Because my secret values are stored in Secret Manager (and retrieved at build time), I suspected my secrets of having some glitches (spaces, \n, etc.) in them, so I added a script to test it:
gcloud compute backend-services update \
--project=<my_project_id> \
--global \
$(kubectl get ingress grafana -o json | jq -r '.metadata.annotations."ingress.kubernetes.io/backends"' | jq -r 'keys[0]') \
--iap=enabled,oauth2-client-id=$(gcloud --project="<my_project_id>" beta secrets versions access latest --secret=Monitoring_client_id),oauth2-client-secret=$(gcloud --project="<my_project_id>" beta secrets versions access latest --secret=Monitoring_secret)
And now IAP is properly enabled with the correct OAuth client, so my secrets are "clean".
By the way, I also tried to rename secret variables like this (from client_id):
* oauth_client_id
* oauth-client-id
* clientID (like in the BackendConfig documentation)
I've also written the values directly in the BackendConfig like this:
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: backend-config-iap
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: backend-iap-secret
      clientID: <value>
      clientSecret: <value>
But that doesn't work either.
Erratum:
The fact that IAP is disabled when I deploy again (after enabling it in the web UI) is due to my deployment script in this test (I did a kubectl delete first).
But nevertheless, I can't enable IAP with my BackendConfig alone.
As suggested I've filed a bug report: https://issuetracker.google.com/issues/153475658
Solution given by Totem:
Change the given YAML to this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.global-static-ip-name: monitoring
    networking.gke.io/managed-certificates: monitoring-tls
  name: grafana
[...]
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "backend-config-iap"}'
[...]
The backend config is associated with the Service and not the Ingress...
Now it works!
You did everything right, with just one small change:
the annotation should be added on the Service resource.
apiVersion: v1
kind: Service
metadata:
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": { "443":"backend-config-iap"}}'
  name: grafana
Usually you need to associate it with a port, so I've added that in the example above; make sure it works with 443 as expected.
This is based on an internal example I'm using:
beta.cloud.google.com/backend-config: '{"ports": {"3000": "be-cfg"}}'
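Putting the pieces together, here is a minimal sketch of the Service from the question with the port-scoped annotation; names are reused from the manifests above, and the JSON key in the annotation is assumed to have to match the Service port the Ingress sends traffic to (443 in this setup):
# Hedged sketch: the question's Grafana Service combined with the port-scoped
# backend-config annotation from this answer.
apiVersion: v1
kind: Service
metadata:
  name: grafana
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"443": "backend-config-iap"}}'
spec:
  type: NodePort
  selector:
    k8s-app: grafana
  ports:
    - port: 443
      protocol: TCP
      targetPort: 3000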

Ingress doesn't redirect to my service after AppID on IKS

I have an IKS cluster with AppID tied to it. I have a problem with redirecting a NodeJS app with the Ingress. All the other apps work with both AppID and the Ingress, but this one gives a 500 Internal Server Error when redirecting back from AppID. The service works fine when used as a NodePort and accessed on the server address and that nodePort.
When describing the Ingress I'm only getting successful results back:
Events:
  Type    Reason   Age  From                                                              Message
  ----    ------   ---  ----                                                              -------
  Normal  Success  58m  public-cr9c603a564cff4a27adb020dd40ceb65e-alb1-59fb8fc894-s59nd   Successfully applied ingress resource.
My ingress looks like:
apiVersion: v1
items:
  - apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      annotations:
        ingress.bluemix.net/appid-auth: bindSecret=binding-raven3-app-id namespace=default requestType=web serviceName=r3-ui
        ingress.bluemix.net/rewrite-path: serviceName=r3-ui rewrite=/;
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"ingress.bluemix.net/appid-auth":"bindSecret=binding-raven3-app-id namespace=default requestType=api serviceName=r3-ui [idToken=false]"},"name":"myingress","namespace":"default"},"spec":{"rules":[{"host":"*host*","http":{"paths":[{"backend":{"serviceName":"r3-ui","servicePort":3000},"path":"/"}]}}],"tls":[{"hosts":["mydomain"],"secretName":"mytlssecret"}]}}
      creationTimestamp: "2019-06-20T10:31:57Z"
      generation: 21
      name: myingress
      namespace: default
      resourceVersion: "24140"
      selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/myingress
      uid: a15f74aa-9346-11e9-a9bf-f63d33811ba6
    spec:
      rules:
        - host: *host*
          http:
            paths:
              - backend:
                  serviceName: r3-ui
                  servicePort: 3000
                path: /
      tls:
        - hosts:
            - *host*
          secretName: raven3
    status:
      loadBalancer:
        ingress:
          - ip: 169.51.71.141
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
and my service looks like:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-06-20T09:25:30Z"
  labels:
    app: r3-ui
    chart: r3-ui-0.1.0
    heritage: Tiller
    release: r3-ui
  name: r3-ui
  namespace: default
  resourceVersion: "23940"
  selfLink: /api/v1/namespaces/default/services/r3-ui
  uid: 58ff6604-933d-11e9-a9bf-f63d33811ba6
spec:
  clusterIP: 172.21.180.240
  ports:
    - name: http
      port: 3000
      protocol: TCP
      targetPort: http
  selector:
    app: r3-ui
    release: r3-ui
  sessionAffinity: None
  type: ClusterIP
status:
What's weird is that I'm getting different results on port 80 and on port 443. On port 80 I get an HTTP 500 error, and on port 443 I get "Invalid Host header".
Are you using HTTPS to access your application? For security reasons, App ID authentication only supports back ends with TLS/SSL enabled.
If you are using SSL and still having trouble, can you kindly share your Ingress and application logs so we can figure out what went wrong?
Thanks.

Ingress is not getting address on GKE and GCE

When creating the Ingress, no address is generated, and when viewed from the GKE dashboard it is always in the "Creating ingress" status.
Describing the Ingress does not show any events, and I cannot see any clues on the GKE dashboard.
Has anyone had a similar issue, or any suggestions on how to debug this?
My deployment.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mobile-gateway-ingress
spec:
  backend:
    serviceName: mobile-gateway-service
    servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mobile-gateway-service
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  selector:
    app: mobile-gateway
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mobile-gateway-deployment
  labels:
    app: mobile-gateway
spec:
  selector:
    matchLabels:
      app: mobile-gateway
  replicas: 2
  template:
    metadata:
      labels:
        app: mobile-gateway
    spec:
      containers:
        - name: mobile-gateway
          image: eu.gcr.io/my-project/mobile-gateway:latest
          ports:
            - containerPort: 8080
Describing ingress shows no events:
mobile-gateway ➤ kubectl describe ingress mobile-gateway-ingress    git:master*
Name:             mobile-gateway-ingress
Namespace:        default
Address:
Default backend:  mobile-gateway-service:80 (10.4.1.3:8080,10.4.2.3:8080)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     mobile-gateway-service:80 (10.4.1.3:8080,10.4.2.3:8080)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"mobile-gateway-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"mobile-gateway-service","servicePort":80}}}
Events:  <none>
hello ➤
With a simple LoadBalancer service, an IP address is given. The problem is only with the ingress resource.
The problem in this case was that I had not included the HttpLoadBalancing addon when creating the cluster!
My fault, but it would have been nice to have an event informing me of this mistake in the Ingress resource.
Strangely, when I created a new cluster to follow the tutorial cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer using the default addons, including HttpLoadBalancing, I observed the same issue. Maybe I didn't wait long enough? Anyway, it's working now that I have included the addon.
To complete the accepted answer, it's worth noting that it's possible to activate the addon on an already existing cluster (from the Google console).
However, it will restart your cluster and cause downtime (in my case, it took several minutes on an almost empty cluster). Be sure that's acceptable in your case, and do tests.
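For reference, a hedged sketch of doing the same thing from the CLI instead of the console; the cluster name and zone below are placeholders, so check gcloud container clusters update --help for your SDK version before relying on it:
# Hedged sketch: enable the HTTP load balancing addon on an existing cluster.
# "my-cluster" and the zone are placeholders, not values from the question.
gcloud container clusters update my-cluster \
  --zone europe-west1-b \
  --update-addons=HttpLoadBalancing=ENABLED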