Keycloak deployment on Kubernetes (GKE): Ingress class -- nginx vs. gce

I am trying to deploy Keycloak on Google Kubernetes Engine and got it working using the ingress class nginx, as follows:
kubernetes.io/ingress.class: nginx
Full manifest can be found here
https://github.com/vsomasvr/keycloak-gke/blob/master/keycloak-gke-ingress/ingress.yaml
But my intent is to use the ingress class "gce". For that, I changed the ingress annotations from the following
annotations:
  ingress.kubernetes.io/ssl-redirect: "true"
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
to the following
annotations:
  kubernetes.io/ingress.class: gce
  kubernetes.io/ingress.allow-http: "false"
After the above change, I consistently get a message indicating that the ingress has unhealthy backend (0/3).
I wonder what other changes would "gce" require when "nginx" could run without any issues.
I ensured that it's not a firewall issue, as the ports that the app is using are allowed for all. I also have livenessProbe and readinessProbe settings in place.
Is there anything else that this configuration is missing?
I placed all the manifest files here
https://github.com/vsomasvr/keycloak-gke/tree/master/keycloak-gke-ingress
Any help is appreciated
EDIT
I had added the annotation
kubernetes.io/ingress.allow-http: "false"
to the nginx ingress, tested & ensured that it would not cause any conflict. The app worked without issues.
On the other hand, the gce ingress shows the same behavior even when I remove the above-mentioned annotation.

Your service seems to be configured to use HTTP traffic, which means the health checks for the load balancer will also use HTTP traffic (port 80), yet you are using an annotation that disables HTTP:
kubernetes.io/ingress.allow-http: "false"
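For context, the GCE ingress controller derives the load balancer health check from the container's HTTP readinessProbe (otherwise it falls back to GET / on the serving port), and Keycloak typically answers / with a redirect rather than a 200, which shows up exactly as "unhealthy backends". A minimal sketch of a probe the controller can pick up; the path and image are assumptions, not taken from the linked manifests:
containers:
- name: keycloak
  image: jboss/keycloak            # assumed image
  ports:
  - containerPort: 8080
  readinessProbe:
    httpGet:
      path: /auth/                 # assumed; must return a plain HTTP 200, not a redirect
      port: 8080
    initialDelaySeconds: 60
    periodSeconds: 10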

Related

mTLS (clientAuth) per Kubernetes Traefik Ingress Route

I have multiple services with working ingress routes using traefik 2.6.
All ingress routes work as expected using annotations, and I get no errors when applying the configuration with the file-provider args pointing to "dynamic.yml". After checking in the pod itself, traefik is running with the correct arguments, and the dynamic.conf file and cert.pem are mounted correctly.
#dynamic.yml
tls:
  options:
    default:
      clientAuth:
        caFiles:
          - /opt/traefik/cert.pem
        clientAuthType: RequireAndVerifyClientCert
The configuration above applies the TLS options to all ingress routes.
When applying the following ingress annotations for the service, clients do not get prompted for certs:
Changing the configuration to the following:
#dynamic.yml
tls:
  options:
    mtls:
      clientAuth:
        caFiles:
          - /opt/traefik/cert.pem
        clientAuthType: RequireAndVerifyClientCert
...
annotations:
  traefik.ingress.kubernetes.io/router.entrypoints: websecure
  traefik.ingress.kubernetes.io/router.tls: "true"
  traefik.ingress.kubernetes.io/tls.options: mtls
...
The ingress routes function; however, clients are able to view the site without certificate authentication on the specific ingress route with the tls.options value "mtls".
Found the answer here:
https://doc.traefik.io/traefik/routing/providers/kubernetes-ingress/#annotations
traefik.ingress.kubernetes.io/router.tls.options: foobar#file
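In other words, the annotation key in the snippet above was missing the "router." prefix and the provider suffix on the option name. A hedged sketch of the corrected annotations, assuming the option is named mtls in the file provider (recent Traefik versions write the suffix as @file; the docs excerpt above shows the # form):
annotations:
  traefik.ingress.kubernetes.io/router.entrypoints: websecure
  traefik.ingress.kubernetes.io/router.tls: "true"
  traefik.ingress.kubernetes.io/router.tls.options: mtls@file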

force http to https on GKE ingress cloud loadbalancer [duplicate]

Is there a way to force an SSL upgrade for incoming connections on the ingress load balancer? Or, if that is not possible, can I disable port :80? I haven't found a good documentation page that outlines such an option in the YAML file. Thanks a lot in advance!
https://github.com/kubernetes/ingress-gce#frontend-https
You can block HTTP through the annotation kubernetes.io/ingress.allow-http: "false" or redirect HTTP to HTTPS by specifying a custom backend. Unfortunately GCE doesn't handle redirection or rewriting at the L7 layer directly for you, yet. (see https://github.com/kubernetes/ingress-gce#ingress-cannot-redirect-http-to-https)
Update: GCP now handles redirection rules for load balancers, including HTTP to HTTPS. There doesn't appear to be a method to create these through Kubernetes YAML yet.
This was already correctly answered by a comment on the accepted answer. But since the comment is buried I missed it several times.
As of GKE version 1.18.10-gke.600 you can add a k8s frontend config to redirect from http to https.
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#https_redirect
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: ssl-redirect
spec:
  redirectToHttps:
    enabled: true
# add below to ingress
# metadata:
#   annotations:
#     networking.gke.io/v1beta1.FrontendConfig: ssl-redirect
The annotation has changed:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
...
Here is the annotation change PR:
https://github.com/kubernetes/contrib/pull/1462/files
If you are not bound to the GCLB Ingress Controller you could have a look at the Nginx Ingress Controller. This controller is different from the built-in one in multiple ways. First and foremost, you need to deploy and manage it yourself. But if you are willing to do so, you get the benefit of not depending on the GCE LB ($20/month) and gaining support for IPv6/websockets.
The documentation states:
By default the controller redirects (301) to HTTPS if TLS is enabled for that ingress. If you want to disable that behaviour globally, you can use ssl-redirect: "false" in the NGINX config map.
The recently released 0.9.0-beta.3 comes with an additional annotation for explicitly enforcing this redirect:
Force redirect to SSL using the annotation ingress.kubernetes.io/force-ssl-redirect
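For reference, a sketch of both knobs side by side; the ConfigMap name and namespace are assumptions that depend on how the controller was installed, and newer controller versions use the nginx.ingress.kubernetes.io/ annotation prefix instead:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration       # assumed name
  namespace: ingress-nginx        # assumed namespace
data:
  ssl-redirect: "false"           # global switch described in the quote above
# and per Ingress, to force the redirect explicitly:
# metadata:
#   annotations:
#     ingress.kubernetes.io/force-ssl-redirect: "true"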
Google has responded to our requests and is testing HTTP->HTTPS SSL redirection on their load balancers. Their latest answer said it should be in Alpha sometime before the end of January 2020.
Their comment:
Thank you for your patience on this issue. The feature is currently in testing and we expect to enter Alpha phase before the end of January. Our PM team will have an announcement with more details as we get closer to the Alpha launch.
My fingers are crossed that we'll have a straightforward solution to this very common feature in the near future.
UPDATE (April 2020):
HTTP(S) rewrites are now a Generally Available feature. It's still a bit rough around the edges and unfortunately does not work out-of-the-box with the GCE Ingress Controller. But time will tell, and hopefully a native solution will appear.
A quick update: a FrontendConfig can now be created to configure the ingress. Hope it helps.
Example:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT   # i.e. a 301 redirect
You'll need to make sure that your load balancer supports HTTP and HTTPS
Worked on this for a long time, so in case anyone isn't clear on the posts above: you would rebuild your ingress with the annotation kubernetes.io/ingress.allow-http: "false", then delete your ingress and redeploy. The annotation will have the ingress create a load balancer only for 443, instead of both 443 and 80.
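A minimal sketch of such a rebuilt ingress (names and the TLS secret are assumptions): with allow-http set to "false" the controller only provisions the 443 forwarding rule.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress                          # assumed name
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - secretName: my-tls-secret               # assumed TLS secret
  defaultBackend:
    service:
      name: my-service                      # assumed backend service
      port:
        number: 80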
Then you create a Compute Engine HTTP LB yourself, not one managed by GKE.
GUI directions:
1. Create a load balancer and choose HTTP(S) Load Balancing -- Start configuration.
2. Choose "From Internet to my VMs" and continue.
3. Choose a name for the LB.
4. Leave the backend configuration blank.
5. Under Host and path rules, select "Advanced host and path rules" with the action set to "Redirect the client to different host/path".
6. Leave the Host redirect field blank.
7. Select Prefix Redirect and leave the Path value blank.
8. Choose the redirect response code 308.
9. Tick the Enable box for HTTPS redirect.
10. For the Frontend configuration, leave HTTP and port 80; for the IP address, select the static IP address being used for your GKE ingress.
11. Create this LB.
You will now have all HTTP traffic go to this LB and get a 308 redirect to your HTTPS ingress for GKE. Super simple setup and it works well.
Note: if you just try to delete the port-80 LB that GKE makes (without making the annotation change and rebuilding the ingress) and then add the new redirect compute LB, it does work, but you will start to see error messages on your ingress saying error 400: invalid value for field 'resource.ipAddress', " " is in use and would result in a conflict. GKE is trying to spin up the port-80 LB and can't, because you already have an LB on port 80 using the same IP. It does work, but the error is annoying and GKE keeps trying to build it (I think).
Thanks to the comment by @Andrej Palicka and the page he provided, https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#https_redirect, I now have an updated and working solution.
First we need to define a FrontendConfig resource and then we need to tell the Ingress resource to use this FrontendConfig.
Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-app-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: myapp-prd
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
    networking.gke.io/v1beta1.FrontendConfig: myapp-frontend-config
spec:
  defaultBackend:
    service:
      name: myapp-app-service
      port:
        number: 80
---
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: myapp-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
You can disable HTTP on your cluster (note that you'll need to recreate your cluster for this change to be applied on the load balancer) and then set HTTP-to-HTTPS redirect by creating an additional load balancer on the same IP address.
I spent a couple of hours on the same question and ended up doing what I've just described. It works perfectly.
Redirecting to HTTPS in Kubernetes is somewhat complicated. In my experience, you'll probably want to use an ingress controller such as Ambassador or ingress-nginx to control routing to your services, as opposed to having your load balancer route directly to your services.
Assuming you're using an ingress controller, then:
If you're terminating TLS at the external load balancer and the LB is running in L7 mode (i.e., HTTP/HTTPS), then your ingress controller needs to use X-Forwarded-Proto, and issue a redirect accordingly.
If you're terminating TLS at the external load balancer and the LB is running in TCP/L4 mode, then your ingress controller needs to use the PROXY protocol to do the redirect.
You can also terminate TLS directly in your ingress controller, in which case it has all the necessary information to do the redirect.
Here's a tutorial on how to do this in Ambassador.
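For the first case (TLS terminated at an L7 external load balancer), the usual wiring with ingress-nginx, as a rough sketch, is to tell the controller to trust the forwarded headers via its ConfigMap; the ConfigMap name and namespace below are assumptions that depend on your installation:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed name
  namespace: ingress-nginx         # assumed namespace
data:
  use-forwarded-headers: "true"    # trust X-Forwarded-Proto set by the external LB
  ssl-redirect: "true"             # redirect requests that arrived as plain HTTP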

Define a fallback service for an isomorphic JavaScript app

I have an isomorphic JavaScript app that uses Vue's SSR plugin running on K8s. This app can either be rendered server-side by my Express server with Node, or it can be served straight to the client by Nginx and rendered in the browser. It works pretty flawlessly either way.
Running it in Express with SSR uses far more resources, however, and Express is more complicated and prone to failure if I misconfigure something. Serving it with Nginx to be rendered client-side, on the other hand, is dead simple and barely uses any resources in my cluster.
What I want to do is have a few replicas of a pod running my Express server that's performing the SSR, but if for some reason these pods go down, I want a fallback service on the ingress that will serve from a backup pod with just Nginx serving the client-renderable code.
Setting up the pods is easy enough, but how can I tell an ingress to serve from a different service than normal if the normal service is unreachable and/or responding too slowly to requests?
The easiest way to setup NGINX Ingress to meet your needs is by using the default-backend annotation.
This annotation is of the form nginx.ingress.kubernetes.io/default-backend: <svc name> to specify a custom default backend. This <svc name> is a reference to a service inside the same namespace in which you are applying this annotation. This annotation overrides the global default backend. This service will handle the response when the service in the Ingress rule does not have active endpoints. It will also handle the error responses if both this annotation and the custom-http-errors annotation are set.
Example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: "/"
    nginx.ingress.kubernetes.io/custom-http-errors: '404'
    nginx.ingress.kubernetes.io/default-backend: default-http-backend
spec:
  rules:
  - host: myapp.mydomain.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: custom-http-backend
          servicePort: 80
In this example, NGINX serves custom-http-backend as the primary resource and, if this service fails, redirects the end user to default-http-backend.
You can find more details on this example here.
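For the question's use case, the fallback service itself can be a plain Nginx deployment serving the pre-built client bundle. A hedged sketch (names and image are assumptions, and the bundle would still need to be baked into or mounted in the image):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      containers:
      - name: nginx
        image: nginx:alpine        # serves the client-renderable build
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: default
spec:
  selector:
    app: default-http-backend
  ports:
  - port: 80
    targetPort: 80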

Nginx Ingress Kube

I'm confused about nginx ingress with Kubernetes. I've been able to use it with "basic nginx auth" (unable to do so with oauth2 yet).
I've installed via helm:
helm install stable/nginx-ingress --name app-name --set rbac.create=true
This creates two services, an nginx-ingress-controller and an nginx-ingress-backend.
When I create an ingress, this ingress is targeted towards one and only one nginx-ingress-controller, but I have no idea how:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - foo"
    nginx.ingress.kubernetes.io/rewrite-target: /
  namespace: kube-system
spec:
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: tomcat-deployment-service
          servicePort: 8080
When I get this Ingress from the output of kubectl get ingress -n kube-system, it has a public, external IP.
What's concerning is that basic-auth DOESN'T APPLY to that external IP; it's wide open! Nginx authentication only kicks in when I try to visit the nginx-ingress-controller's IP.
I have a lot of questions.
1. How do I make an ingress created from kubectl apply -f ingress.yaml target a specific nginx-ingress-controller?
2. How do I keep this new ingress from having an external IP?
3. Why isn't nginx authentication kicking in?
4. What IP am I supposed to use (the nginx-ingress-controller's or the generated one)?
5. If I'm supposed to use the generated IP, what about the one from the controller?
I have been searching for decent, working examples (and poring over sparse, changing documentation and GitHub issues) for literally days.
EDIT:
In this "official" documentation, it's unclear as to weather or not http://10.2.29.4/ is the IP from the ingress or the controller. I assume the controller because when I run this, the other doesn't even authenticate (it let's me in without asking for a password). Both IP's I'm using are external IPs (publicly available) on GCP.
I think you might have some concept definition misunderstanding.
Ingress is not a job (nor a service, nor a pod). It is just configuration; it cannot have an "IP". Think of an ingress as a routing rule or a routing table in your cluster.
Nginx-ingress-controller is the Service of type LoadBalancer, with actual running pods behind it, that implements the ingress rules you created for your cluster.
Nginx-ingress-backend is likely a default backend that your nginx-ingress-controller will route to if no matching routes are found. See this.
In general, your nginx-ingress-controller should be the only entry point of your cluster. Other services in your cluster should have type ClusterIP so that they are not exposed outside the cluster and are only accessible through your nginx-ingress-controller. In your case, since your service can be accessed from outside directly, it is presumably not of type ClusterIP; just change the service type to get it protected.
Based on the above understanding, I will be glad to provide further help for the questions you have.
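To illustrate the ClusterIP point, a minimal sketch for the tomcat service from the question (the pod label is an assumption): with type ClusterIP the service gets no external IP of its own and is reachable only through the controller.
apiVersion: v1
kind: Service
metadata:
  name: tomcat-deployment-service
  namespace: kube-system
spec:
  type: ClusterIP                  # not NodePort/LoadBalancer, so no external exposure
  selector:
    app: tomcat                    # assumed pod label
  ports:
  - port: 8080
    targetPort: 8080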
Some readings:
What is ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/
K8s Services and external accessibility: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types

GKE Load Balancer - Ingress - Service - Session Affinity (Sticky Session)

I had sticky sessions working in my dev environment with minikube with the following configurations:
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gl-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "projects/oceanic-isotope-199421/global/addresses/web-static-ip"
spec:
  backend:
    serviceName: gl-ui-service
    servicePort: 80
  rules:
  - http:
      paths:
      - path: /api/*
        backend:
          serviceName: gl-api-service
          servicePort: 8080
Service:
apiVersion: v1
kind: Service
metadata:
  name: gl-api-service
  labels:
    app: gl-api
  annotations:
    ingress.kubernetes.io/affinity: 'cookie'
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: gl-api
Now that I have deployed my project to GKE, sticky sessions no longer function. I believe the reason is that the global load balancer configured by GKE does not have session affinity with the NGINX ingress controller. Anyone have any luck wiring this up? Any help would be appreciated. I want to establish session affinity: Client Browser > Load Balancer > Ingress > Service. The actual session lives in the pods behind the service. It's an API gateway (built with Zuul).
Session affinity is not available yet in the GCE/GKE Ingress controller.
In the meantime and as workaround, you can use the GCE API directly to create the HTTP load balancer. Note that you can't use Ingress at the same time in the same cluster.
1. Use NodePort for the Kubernetes Service. Set the value of the port in spec.ports[*].nodePort, otherwise a random one will be assigned.
2. Disable kube-proxy SNAT load balancing.
3. Create a load balancer from the GCE API, with cookie session affinity enabled. As the backend, use the port from step 1.
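A rough sketch of steps 1 and 2 on the Kubernetes side (the nodePort value and labels are assumptions); externalTrafficPolicy: Local is the current way to keep kube-proxy from re-balancing traffic to other nodes:
apiVersion: v1
kind: Service
metadata:
  name: gl-api-service
spec:
  type: NodePort
  externalTrafficPolicy: Local     # step 2: no SNAT/re-balancing across nodes
  selector:
    app: gl-api
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30080                # step 1: assumed fixed value in the 30000-32767 range
    protocol: TCP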
Good news! Finally they have support for these kinds of tweaks as beta features!
Beginning with GKE version 1.11.3-gke.18, you can use an Ingress to configure these properties of a backend service:
Timeout
Connection draining timeout
Session affinity
The configuration information for a backend service is held in a custom resource named BackendConfig, that you can "attach" to a Kubernetes Service.
Together with other sweet beta-features (like CDN, Armor, etc...) you can find how-to guides here:
https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service
Based on this: https://github.com/kubernetes/ingress-gce/blob/master/docs/annotations.md
there's no annotation available that could affect the session affinity setting of the Google Cloud Load Balancer (GCLB) that is created as a result of the ingress creation. As such:
This has to be turned on by hand: either, as suggested above, by creating the LB yourself, or by letting the ingress controller do so and then changing the backend configuration for each backend (either via the GUI or the gcloud CLI). IMHO the latter seems faster and less prone to errors. (Tested: the cookie "GCLB" was returned by the LB after the config change got propagated automatically, and subsequent requests including the cookie were routed to the same node.)
As rightfully pointed out by Matt-y-er, service.spec "externalTrafficPolicy" has to be set to "Local" to disable forwarding from the node the GCLB selected to another node. However:
One would still need to ensure that:
2. the GCLB does not send traffic to nodes which don't run the pod, or
3. there is a pod running on all nodes (and only a single pod, as the externalTrafficPolicy setting would not prevent load balancing over multiple local pods).
With regard to #3, the simple solution:
- Convert the deployment to a DaemonSet -> there will be exactly one pod on each node (https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) at all times.
The more complicated solution (which allows having fewer pods than nodes):
- It seems that the GCLB's health check doesn't need to be adjusted, as the Ingress rule definition automatically sets up a health check to the backend (and not to the default healthz service).
- Supply anti-affinity rules to make sure there's at most a single instance of the pod on each node (https://kubernetes.io/docs/concepts/configuration/assign-pod-node/); a sketch follows below.
Note: The above anti-affinity version was tested on 24th July 2018 with 1.10.4-gke.2 kubernetes version on a 2 node cluster running COS (default GKE VM image)
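For reference only (not part of the tested setup above), a sketch of such an anti-affinity rule, assuming the pods carry the label app: gl-api; it keeps at most one such pod per node, so the Local traffic policy never has several local pods to choose from:
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: gl-api
            topologyKey: kubernetes.io/hostname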
I was trying the GKE tutorial for that on version 1.11.6-gke.6 (the latest available).
Stickiness was not there... the only option that worked was setting externalTrafficPolicy: "Local" on the service...
spec:
  type: NodePort
  externalTrafficPolicy: Local
I opened a defect with Google about the same, and they accepted it, without committing to an ETA.
https://issuetracker.google.com/issues/124064870
For the BackendConfig of the ingress loadbalancer, documentation can be found here:
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features
An example snippet for the generated-cookie affinity type is:
spec:
  timeoutSec: 1800
  connectionDraining:
    drainingTimeoutSec: 1800
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 1800
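To take effect, the BackendConfig also has to be attached to the Service that backs the ingress. A hedged sketch (names are assumptions; recent GKE versions use the cloud.google.com/backend-config annotation, while the beta era used beta.cloud.google.com/backend-config):
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: gl-api-backendconfig       # assumed name
spec:
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 1800
---
apiVersion: v1
kind: Service
metadata:
  name: gl-api-service
  annotations:
    cloud.google.com/backend-config: '{"ports": {"8080": "gl-api-backendconfig"}}'
spec:
  type: NodePort
  selector:
    app: gl-api
  ports:
  - port: 8080
    protocol: TCP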