Keycloak running on Kubernetes | not able to browse admin console, giving "blocked mixed content" - kubernetes

Describe the bug
I am running Keycloak 19.0.1 on Kubernetes 1.21.4.
I have one internal nginx ingress controller installed in the cluster, not exposed to the outside, and another nginx server in the DMZ that is exposed externally and runs as a reverse proxy.
I run Keycloak with the option PROXY_ADDRESS_FORWARDING=true inside Kubernetes.
I configured the Ingress below to expose it through the nginx ingress controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
  labels:
    app: keycloak
  name: keycloak
  namespace: fms-preprod
spec:
  rules:
    - host: keycloak-test-ssl.stcs.com.sa
      http:
        paths:
          - backend:
              service:
                name: keycloak
                port:
                  number: 8443
            path: /
            pathType: Prefix
I used to define the below configuration inside the external nginx reverse proxy:
server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    server_name keycloak-test-ssl.stcs.com.sa;
    location / {
        proxy_pass http://ingress;
        proxy_set_header X-Forwarded-For $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        client_max_body_size 100M;
    }
    ssl_certificate /etc/ssl/blabla/bundle1.crt;
    ssl_certificate_key /etc/ssl/blabla/bundle1.key;
}
Everything works fine over plain HTTP without the SSL configuration, and I am able to browse the admin console. But once SSL is configured on the external nginx reverse proxy as shown above, the first Keycloak page loads fine while the admin console is blocked from loading with a "blocked mixed content" error.
Could anyone tell me what is wrong or missing in my configuration?
Thanks in advance.
Version
19.0.1
Expected behavior
Keycloak should work properly behind an nginx reverse proxy with SSL.
Actual behavior
Keycloak fails to load the admin console, giving a "blocked mixed content" error.
How to Reproduce?
See the details in the description above.
Anything else?
No response
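For reference, in the Quarkus-based Keycloak distribution (which 19.0.1 is), the legacy PROXY_ADDRESS_FORWARDING option has been replaced by the proxy mode setting. Below is a minimal sketch of the container environment for a setup like this one; the hostname is taken from the Ingress above, and whether these exact values fit your deployment is an assumption:

```yaml
# Sketch: env for a Keycloak 19 container when TLS terminates at the reverse proxy.
env:
  - name: KC_PROXY
    value: "edge"  # trust X-Forwarded-* headers from the proxy
  - name: KC_HOSTNAME
    value: "keycloak-test-ssl.stcs.com.sa"  # public hostname the console should use
```

With KC_PROXY=edge, Keycloak builds admin-console URLs from the forwarded scheme, which is what prevents the http:// asset URLs that trigger the mixed-content block.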

Related

Keycloak behind ingress with non-standard port

I successfully installed Keycloak with the bitnami helm chart.
The ingress settings are:
ingress:
  enabled: true
  hostname: "kc-test.local"
My ingress is listening on port 18000 (locally). If I now call
http://kc-test.local:18000 it works, but all links contain the url without port, e.g.:
http://kc-test.local/admin
The setting:
extraEnvVars:
  - name: KC_HOSTNAME_URL
    value: "http://kc-test.local:18000"
Any ideas, how to make my ingress (nginx) pass the requested port to keycloak?
EDIT
Following annotations on nginx ingress do not help:
annotations:
  nginx.ingress.kubernetes.io/server-snippet: |
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-Proto http;
Maybe replace it with, or add, the KEYCLOAK_FRONTEND_URL env var.
Source: Docker Hub documentation
https://hub.docker.com/r/jboss/keycloak/
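KEYCLOAK_FRONTEND_URL applies to the legacy WildFly-based images; for Quarkus-based Keycloak images, the admin console hostname is configured separately from the front-end one. A hedged sketch for the bitnami extraEnvVars (whether your chart version passes these through unchanged is an assumption):

```yaml
extraEnvVars:
  # Front-end base URL, including the non-standard port
  - name: KC_HOSTNAME_URL
    value: "http://kc-test.local:18000"
  # Admin console base URL; without it the console may fall back to the default port
  - name: KC_HOSTNAME_ADMIN_URL
    value: "http://kc-test.local:18000"
```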

How to create TLS load balancer service in K8s on Openstack

We are using OpenStack. I deployed the nginx service first on port 80 using the yaml below, and my application was working fine with the http url.
apiVersion: v1
kind: Service
metadata:
  namespace: app1
  labels:
    io.kompose.service: nginx
  name: nginx
spec:
  ports:
    - name: "http"
      port: 80
      targetPort: 80
    - name: "30443"
      port: 30443
      targetPort: 30443
  type: LoadBalancer
  selector:
    io.kompose.service: nginx
Then I edited my service.yaml and updated with SSL port 443 to enable https on my webpage:
apiVersion: v1
kind: Service
metadata:
  namespace: app1
  labels:
    io.kompose.service: nginx
  name: nginx
spec:
  ports:
    - name: "https"
      port: 443
      targetPort: 31303
    - name: "30443"
      port: 30443
      targetPort: 30443
  type: LoadBalancer
  selector:
    io.kompose.service: nginx
Now I am a little confused whether I am enabling SSL the right way, and also where I should place the certificates/key. Below is the relevant part of my nginx.conf:
upstream xyzserver {
    server xyz.app1.svc.cluster.local:40002;
}
server {
    listen [::]:443 ssl http2;
    listen 443 ssl http2;
    client_max_body_size 200M;
    access_log /var/log/nginx/xyz_access.log;
    error_log /var/log/nginx/xyz_error.log;
    # ssl_certificate <cert-path>;
    # ssl_certificate_key <key-path>;
    proxy_read_timeout 720s;
    proxy_connect_timeout 720s;
    proxy_send_timeout 720s;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        rewrite ^/(.*)$ https://xyz.net.abc.com/$1 redirect;
        error_page 502 /Maintenance.html;
        location = /Maintenance.html {
            root /opt/nginx/nginx-1.20.1/html/;
            internal;
        }
    }
    gzip_types text/css text/less text/plain text/xml application/xml application/json application/javascript;
    gzip on;
}
server {
    listen [::]:80;
    listen 80;
    client_max_body_size 200M;
    access_log /var/log/nginx/xyz_access.log;
    error_log /var/log/nginx/xyz_error.log;
    proxy_read_timeout 720s;
    proxy_connect_timeout 720s;
    proxy_send_timeout 720s;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    location / {
        proxy_buffering off;
        proxy_pass http://xyz.app1.svc.cluster.local:40002;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header referer "http://xyz.net.abc.com";
    }
    gzip_types text/css text/less text/plain text/xml application/xml application/json application/javascript;
    gzip on;
}
Any guidance would be appreciated.
Try the steps below to configure SSL certificates in the NGINX web server:
1) Get the SSL certificate:
You will get three files from the certificate authority, via email or by logging into their website:
Key (e.g. private.key) – your private key file. Don't share this with anyone publicly.
Certificate (e.g. certificate.crt) – the actual SSL/TLS certificate for your domain.
Intermediate (e.g. intermediate.crt) – the root/intermediate certificate.
2) Bundle the SSL files:
Create a directory to store the files mentioned above:
$ sudo mkdir /etc/nginx/ssl
$ cd /etc/nginx/ssl
Download the three SSL files from step 1 into this directory, then concatenate the certificate and intermediate files to create a bundle.crt file:
$ cat certificate.crt intermediate.crt > bundle.crt
3) Open the NGINX configuration:
Open a terminal and run the following command to open the NGINX server configuration file:
$ sudo vi /etc/nginx/nginx.conf
If you have configured separate virtual hosts for your website (e.g. www.example.com), such as /etc/nginx/sites-enabled/example.conf, then open its configuration with the following command:
$ sudo vi /etc/nginx/sites-enabled/example.conf
4) Configure the SSL certificate in NGINX:
Create a server block that listens on port 443 (the HTTPS port):
server {
    listen 443 ssl;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_certificate /etc/nginx/ssl/bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/private.key;
    ...
}
In the above configuration, the first line makes NGINX listen on port 443; the ssl parameter on the listen directive enables SSL (the separate `ssl on;` directive is deprecated in modern NGINX).
The next line specifies the SSL protocols NGINX should support. For our example we have enabled TLS versions 1.0, 1.1 and 1.2; you can change this as per your requirements (1.0 and 1.1 are themselves deprecated, so prefer 1.2+ where your clients allow).
The next two lines specify the file paths to the bundle.crt file created in step 2 and the private.key downloaded in step 1. Here's a sample server block for your reference:
server {
    listen 443 ssl;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_certificate /etc/nginx/ssl/bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/private.key;
    server_name www.example.com;
    access_log /path/to/nginx/access/log/file;
    error_log /path/to/nginx/error/log/file;
    location / {
        root /var/www/html/yoursite/;
        index index.html;
    }
}
5) Restart the NGINX server:
Finally, run the following command to check the syntax of your updated config file:
$ sudo nginx -t
If there are no errors, run the appropriate command to reload or restart the NGINX server:
$ sudo service nginx reload    # debian/ubuntu
$ systemctl restart nginx      # redhat/centos
6) Test the NGINX SSL configuration:
Open a browser and visit the https:// version of your domain (e.g. https://www.example.com).
You will see a lock symbol next to your URL in the browser's address bar, indicating that your website's SSL/TLS certificate is working properly.
Hopefully, you can now configure SSL certificates in NGINX for Ubuntu as well as other Linux systems.
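In a Kubernetes deployment like the one in the question, the certificate files don't have to live on the node's filesystem. A common pattern is to store them in a TLS Secret and mount it into the nginx pod; the sketch below assumes the bundle.crt/private.key files from the steps above and hypothetical names (the nginx-tls Secret, the app1 namespace):

```yaml
# First create the Secret from the prepared files, e.g.:
#   kubectl -n app1 create secret tls nginx-tls --cert=bundle.crt --key=private.key
# Then mount it into the nginx container (fragment of the Deployment pod spec):
spec:
  containers:
    - name: nginx
      image: nginx:1.20
      volumeMounts:
        - name: tls
          mountPath: /etc/nginx/ssl  # directory the nginx ssl_certificate paths point at
          readOnly: true
  volumes:
    - name: tls
      secret:
        secretName: nginx-tls
```

Note that `kubectl create secret tls` stores the files under the keys tls.crt and tls.key, so the ssl_certificate directives would then reference /etc/nginx/ssl/tls.crt and /etc/nginx/ssl/tls.key.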

Nginx ingress ignores ConfigMap and annotations

I've set up a k8s cluster (1 bare metal node for now, which is both master and worker). I've also set up Nginx ingress controller as described here: https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/ Below are the exact steps:
kubectl apply -f common/ns-and-sa.yaml https://github.com/nginxinc/kubernetes-ingress/blob/release-1.11/deployments/common/ns-and-sa.yaml (no modifications)
kubectl apply -f rbac/rbac.yaml https://github.com/nginxinc/kubernetes-ingress/blob/release-1.11/deployments/rbac/rbac.yaml (no modifications)
kubectl apply -f common/default-server-secret.yaml https://github.com/nginxinc/kubernetes-ingress/blob/release-1.11/deployments/common/default-server-secret.yaml (no modifications)
kubectl apply -f common/nginx-config.yaml https://github.com/nginxinc/kubernetes-ingress/blob/release-1.11/deployments/common/nginx-config.yaml Modified file:
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  ignore-invalid-headers: "false"
  use-forwarded-headers: "true"
  forwarded-for-header: "CF-Connecting-IP"
  proxy-real-ip-cidr: "...IPs go here..."
kubectl apply -f common/ingress-class.yaml https://github.com/nginxinc/kubernetes-ingress/blob/release-1.11/deployments/common/ingress-class.yaml Modified file:
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: nginx.org/ingress-controller
These commands:
kubectl apply -f common/crds/k8s.nginx.org_virtualservers.yaml
kubectl apply -f common/crds/k8s.nginx.org_virtualserverroutes.yaml
kubectl apply -f common/crds/k8s.nginx.org_transportservers.yaml
kubectl apply -f common/crds/k8s.nginx.org_policies.yaml
No modifications, links:
https://github.com/nginxinc/kubernetes-ingress/blob/release-1.11/deployments/common/crds/k8s.nginx.org_virtualservers.yaml
https://github.com/nginxinc/kubernetes-ingress/blob/release-1.11/deployments/common/crds/k8s.nginx.org_virtualserverroutes.yaml
https://github.com/nginxinc/kubernetes-ingress/blob/release-1.11/deployments/common/crds/k8s.nginx.org_transportservers.yaml
https://github.com/nginxinc/kubernetes-ingress/blob/release-1.11/deployments/common/crds/k8s.nginx.org_policies.yaml
kubectl apply -f daemon-set/nginx-ingress.yaml https://github.com/nginxinc/kubernetes-ingress/blob/release-1.11/deployments/daemon-set/nginx-ingress.yaml (no modifications)
I've also set up cert-manager, which works fine (pretty sure this does not matter much).
Now, when I create some Ingress resource, it almost works. I can access it from the outer Internet, certificate issuing works, etc. But the ConfigMap (common/nginx-config.yaml) is not applied, and annotations like nginx.org/rewrite-target: /$1 are not applied either.
Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-com
  namespace: example-com
  annotations:
    nginx.org/rewrite-target: /$1
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - example.com
      secretName: example-com-tls
  rules:
    - host: example.com
      http:
        paths:
          - path: /api/(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: api
                port:
                  number: 80
          - path: /(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: frontend
                port:
                  number: 80
Real domain names are used, of course. I get a 404 nginx error in this example. In another Ingress I pass a proxy-body-size annotation, which does not work either (I cannot upload large files).
I've exec'd into the ingress controller pod with kubectl -n nginx-ingress exec -it nginx-ingress-snjjp bash and looked at the files in /etc/nginx/conf.d. None of them contained the configuration specified in the ConfigMap or annotations.
This is what it looks like (I removed extra blank lines and replaced domain names):
# configuration for example-com/example-com
upstream example-com-example-com-example.com-api-80 {
    zone example-com-example-com-example.com-api-80 256k;
    random two least_conn;
    server 10.32.0.4:80 max_fails=1 fail_timeout=10s max_conns=0;
}
upstream example-com-example-com-example.com-frontend-80 {
    zone example-com-example-com-example.com-frontend-80 256k;
    random two least_conn;
    server 10.32.0.27:80 max_fails=1 fail_timeout=10s max_conns=0;
}
server {
    listen 80;
    listen 443 ssl;
    ssl_certificate /etc/nginx/secrets/example-com-example-com-tls;
    ssl_certificate_key /etc/nginx/secrets/example-com-example-com-tls;
    server_tokens on;
    server_name example.com;
    set $resource_type "ingress";
    set $resource_name "example-com";
    set $resource_namespace "example-com";
    if ($scheme = http) {
        return 301 https://$host:443$request_uri;
    }
    location /api/(.*) {
        set $service "api";
        proxy_http_version 1.1;
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 1m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering on;
        proxy_pass http://example-com-example-com-example.com-api-80;
    }
    location /(.*) {
        set $service "frontend";
        proxy_http_version 1.1;
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 1m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering on;
        proxy_pass http://example-com-example-com-example.com-frontend-80;
    }
}
I also tried nginx.ingress.kubernetes.io/ annotations (I'm not a pro as you can see, and it was what I googled). No success.
I am updating my cluster, and with the older version of k8s (1.15 I think it was) everything worked a couple of days ago. I used the exact same configuration for every service, except ingress controller, of course.
Any ideas?
I've found out what is wrong. With my old setup I was using the Kubernetes community NGINX Ingress Controller (https://kubernetes.github.io/ingress-nginx/), but now I am using the NGINX Inc. Ingress Controller (https://www.nginx.com/products/nginx-ingress-controller/). These implementations have different annotations (the latter is missing many useful ones). This is really very confusing, as the configuration is similar and one may think they are the same.
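To illustrate the difference, the same intent is expressed with differently named annotations in the two controllers. A sketch (service names from the Ingress above; exact annotation support varies by controller version, so treat these as illustrative):

```yaml
metadata:
  annotations:
    # Community controller (kubernetes.github.io/ingress-nginx):
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
    # NGINX Inc. controller (nginx.org) uses different names, e.g.:
    #   nginx.org/rewrites: "serviceName=api rewrite=/"
    #   nginx.org/client-max-body-size: "100m"
```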

Error Too many redirects when replicas of container secured by keycloak are higher than 1

I'm setting up a docker swarm with the following services:
* nginx (acting as reverse proxy) [docker image: alpine-14]
* wildfly (serving my secured app)
* keycloak (securing my app) [docker image: Keycloak 4.0.0.Final]
Everything goes fine and I can authenticate and access my app when I have only one replica of my app.
BUT when I try to scale my wildfly service to more than one replica, I can access the login page, but once credentials are entered it gives the error ERR_TOO_MANY_REDIRECTS.
I have tried changing my nginx proxy configuration to forward requests to the keycloak https and http ports, and on the keycloak side I tried adding:
environment:
  PROXY_ADDRESS_FORWARDING: "true"
Almost everything works when I only have one replica of my wildfly service, but the same error keeps appearing when it has more than one replica.
This is my nginx config file:
server {
    listen 443 ssl;
    ssl_certificate /etc/ssl/fullchain.pem;   # path to your cacert.pem
    ssl_certificate_key /etc/ssl/privkey.pem; # path to your privkey.pem
    server_name testsite.com;
    rewrite ^/$ /web/ permanent;
    location / {
        proxy_pass http://wildfly.service.com:8080/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto https;
        # to avoid the "upstream sent too big header while reading response header from upstream" error
        # thanks to https://ma.ttias.be/nginx-proxy-upstream-sent-big-header-reading-response-header-upstream/
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
        proxy_pass_header Set-Cookie;
    }
    location /auth {
        # proxy_pass http://keycloak.service.com:8080/auth;
        proxy_pass https://keycloak.service.com:8443/auth;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto https;
        # same buffer settings as above
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
        proxy_pass_header Set-Cookie;
    }
}
My keycloak config:
Enabled ON
Consent Required OFF
Login Theme --
Client Protocol openid-connect
Access Type Public
Standard Flow Enabled ON
Implicit Flow Enabled OFF
Direct Access Grants Enabled OFF
Authorization Enabled OFF
Root URL https://testsite.com/
* Valid Redirect URIs https://testsite.com/* http://testsite.com/*
Base URL https://testsite.com/
Admin URL --
Web Origins +
My keycloak config on wildfly app side:
{
  "realm": "realm_name",
  "auth-server-url": "https://testsite.com/auth",
  "ssl-required": "external",
  "resource": "client-web",
  "public-client": true,
  "use-resource-role-mappings": true,
  "confidential-port": 0
}
Expected result: authenticating without errors when the service is scaled to more than one container.
Finally, we got it working. The redirects issue appears when Keycloak accepts your authentication and sends you back to your back-end application: if you have more than one replica of the back-end, the swarm load balancer may send you to a different instance than the one you started with, and since that instance is not authenticated yet it redirects you to Keycloak again, in a loop.
In our case, we got rid of nginx and started using the traefik docker image, which can make connections to your services sticky. This would be the traefik service:
loadbalancer:
  image: traefik:1.7
  command: --docker \
    --docker.swarmmode \
    --docker.watch \
    --web \
    --loglevel=DEBUG
  ports:
    - 80:80
    - 9090:8080
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  deploy:
    restart_policy:
      condition: any
    mode: replicated
    replicas: 1
    update_config:
      delay: 2s
After this, you need to add these labels to the interacting services (keycloak and your back-end) in the docker-compose file:
labels:
  - "traefik.docker.network=your_network_name"
  - "traefik.port=your_service_port"
  - "traefik.frontend.rule=PathPrefix:/your_service_path;" # may not be necessary (play around)
  - "traefik.backend.loadbalancer.stickiness=true"
CAUTION!!! Passing docker.sock directly to your traefik service is considered a bad security practice; there are more secure ways to implement this (https://github.com/containous/traefik/issues/4174).
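If you would rather keep nginx instead of switching to traefik, stickiness can be approximated in open-source nginx with ip_hash on the upstream. A minimal sketch with hypothetical replica hostnames (cookie-based `sticky` sessions are an NGINX Plus feature, so ip_hash is the open-source option):

```nginx
upstream wildfly_backend {
    # Hash on the client IP so each client keeps hitting the same replica,
    # i.e. the instance that holds its authenticated session.
    ip_hash;
    server wildfly1.service.com:8080;
    server wildfly2.service.com:8080;
}
server {
    listen 443 ssl;
    location / {
        proxy_pass http://wildfly_backend;
    }
}
```

Note that ip_hash pins by client address, so clients behind a shared NAT all land on one replica; it trades even load distribution for session affinity.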
I'm using Kubernetes, and I was able to solve this problem of redirects failing with more than one replica by setting KC_CACHE_STACK to kubernetes:
- name: KC_CACHE_STACK
  value: "kubernetes"

http -> https redirect in Google Kubernetes Engine

I'm looking to redirect all traffic from http://example.com to https://example.com, the way nearly all websites do.
I've looked at this link with no success:
Kubernetes HTTPS Ingress in Google Container Engine
And have tried the following annotations in my ingress.yaml file.
nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($http_x_forwarded_proto != 'https') {
    return 301 https://$host$request_uri;
  }
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
kubernetes.io/ingress.allow-http: "false"
All without any success. To be clear, I can access https://example.com and http://example.com without any errors, I need the http call to redirect to https.
Thanks
GKE uses GCE L7. The rules that you referenced in the example are not supported, and the HTTP-to-HTTPS redirect should be controlled at the application level.
L7 inserts the x-forwarded-proto header, which you can use to determine whether the frontend traffic came in over HTTP or HTTPS. Take a look here: Redirecting HTTP to HTTPS
There is also an example in that link for Nginx (copied for convenience):
# Replace '_' with your hostname.
server_name _;
if ($http_x_forwarded_proto = "http") {
    return 301 https://$host$request_uri;
}
Currently, the documentation for how to do this properly (annotations, SSL/HTTPS, health checks, etc.) is severely lacking, and has been for far too long. I suspect it's because they prefer you to use App Engine, which is magical but stupidly expensive. For GKE, here are two options:
ingress with a google-managed SSL cert and additional NGINX server configuration in front of your app/site
the NGINX ingress controller with self-managed/third-party SSL certs
The following is steps to a working setup using the former.
1 The door to your app
nginx.conf: (ellipses represent other non-relevant, non-compulsory settings)
user nginx;
worker_processes auto;
events {
    worker_connections 1024;
}
http {
    ...
    keepalive_timeout 620s;
    ## Logging ##
    ...
    ## MIME Types ##
    ...
    ## Caching ##
    ...
    ## Security Headers ##
    ...
    ## Compression ##
    ...
    server {
        listen 80;
        ## HTTP Redirect ##
        if ($http_x_forwarded_proto = "http") {
            return 301 https://[YOUR DOMAIN]$request_uri;
        }
        location /health/liveness {
            access_log off;
            default_type text/plain;
            return 200 'Server is LIVE!';
        }
        location /health/readiness {
            access_log off;
            default_type text/plain;
            return 200 'Server is READY!';
        }
        root /usr/src/app/www;
        index index.html index.htm;
        server_name [YOUR DOMAIN] www.[YOUR DOMAIN];
        location / {
            try_files $uri $uri/ /index.html;
        }
    }
}
NOTE: One serving port only. The global forwarding rule adds the http_x_forwarded_proto header to all traffic that passes through it. Because ALL traffic to your domain is now passing through this rule (remember: one port on the container, service, and ingress), this header will (crucially!) always be set. Note the check and redirect above: it only continues with serving if the header value is 'https'.
The root, index, and location values may differ for your project (this is an Angular project). keepalive_timeout is set to the value recommended by Google.
I prefer using the main nginx.conf file, but most people add a custom.conf file to /etc/nginx/conf.d; if you do this, just make sure the file is imported into the main nginx.conf http block using an include statement. The comments highlight where other settings you may want to add when everything is working will go: gzip/brotli, security headers, where logs are saved, and so on.
Dockerfile:
...
COPY nginx.conf /etc/nginx/nginx.conf
CMD ["nginx", "-g", "daemon off;"]
NOTE: only the final two lines. Specifying an EXPOSE port is unnecessary. COPY replaces the default nginx.conf with the modified one. CMD starts a light server.
2 Create a deployment manifest and apply/create
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uber-dp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uber
  template:
    metadata:
      labels:
        app: uber
    spec:
      containers:
        - name: uber-ctr
          image: gcr.io/uber/beta:v1  # or some other registry
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 60
            httpGet:
              path: /health/liveness
              port: 80
              scheme: HTTP
          readinessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            httpGet:
              path: /health/readiness
              port: 80
              scheme: HTTP
          ports:
            - containerPort: 80
          imagePullPolicy: Always
NOTE: only one specified port is necessary, as we're going to point all (HTTP and HTTPS) traffic to it. For simplicity we're using the same path for the liveness and readiness probes; these checks will be dealt with on the NGINX server, but you can and should add checks that probe the health of your app itself (e.g. a dedicated page that returns a 200 if healthy). The readiness probe will also be picked up by GCE, which by default has its own irremovable health check.
3 Create a service manifest and apply/create
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: uber-svc
  labels:
    app: uber
spec:
  ports:
    - name: default-port
      port: 80
  selector:
    app: uber
  sessionAffinity: None
  type: NodePort
NOTE: default-port specifies port 80 on the container.
4 Get a static IP address
On GCP in the hamburger menu: VPC Network -> External IP Addresses. Convert your auto-generated ephemeral IP or create a new one. Take note of the name and address.
5 Create an SSL cert and default zone
In the hamburger menu: Network Service -> Load Balancing -> click 'advanced menu' -> Certificates -> Create SSL Certificate. Follow the instructions, create or upload a certificate, and take note of the name. Then, from the menu: Cloud DNS -> Create Zone. Following the instructions, create a default zone for your domain. Add a CNAME record with www as the DNS name and your domain as the canonical name. Add an A record with an empty DNS name value and your static IP as the IPV4. Save.
6 Create an ingress manifest and apply/create
ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mypt-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: [NAME OF YOUR STATIC IP ADDRESS]
    kubernetes.io/ingress.allow-http: "true"
    ingress.gcp.kubernetes.io/pre-shared-cert: [NAME OF YOUR GOOGLE-MANAGED SSL]
spec:
  backend:
    serviceName: mypt-svc
    servicePort: 80
NOTE: the backend property points to the service, which points to the container, which contains your app 'protected' by a server. Annotations connect your app with SSL, and force-allow HTTP for the health checks. Combined, the service and ingress configure the L7 load balancer (combined global forwarding rule, backend and frontend 'services', SSL certs, target proxies, etc.).
7 Make a cup of tea or something
Everything needs ~10 minutes to configure. Clear cache and test your domain with various browsers (Tor, Opera, Safari, IE etc). Everything will serve over https.
What about the NGINX Ingress Controller?
I've seen discussion it being better because it's cheaper/uses less resources and is more flexible. It isn't cheaper: it requires an additional deployment/workload and service (GCE L4). And you need to do more configuration. Is it more flexible? Yes. But in taking care of most of the work, the first option gives you a more important kind of flexibility — namely allowing you to get on with more pressing matters.
For everyone like me that searches this question about once a month, Google has responded to our requests and is testing HTTP->HTTPS SSL redirection on their load balancers. Their latest answer said it should be in Alpha sometime before the end of January 2020.
Their comment:
Thank you for your patience on this issue. The feature is currently in testing and we expect to enter Alpha phase before the end of January. Our PM team will have an announcement with more details as we get closer to the Alpha launch.
Update: HTTP to HTTPS redirect is now Generally Available: https://cloud.google.com/load-balancing/docs/features#routing_and_traffic_management
GKE uses its own Ingress Controller which does not support forcing https.
That's why you will have to manage NGINX Ingress Controller yourself.
See this post on how to do it on GKE.
Hope it helps.
For what it's worth, I ended up using a reverse proxy in NGINX.
You need to create secrets and sync them into your containers
You need to create a configmap in nginx with your nginx config, as well as a default config that references this additional config file.
Here is my configuration:
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    default_type application/octet-stream;
    # Logging Configs
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;
    # Puntdoctor Proxy Config
    include /path/to/config-file.conf;
    # PubSub allows 10MB files; let's allow 11 to give some space
    client_max_body_size 11M;
}
Then, the config.conf
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /certs/tls.crt;
    ssl_certificate_key /certs/tls.key;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-RC4-SHA:AES128-GCM-SHA256:HIGH:!RC4:!MD5:!aNULL:!EDH:!CAMELLIA;
    ssl_prefer_server_ciphers on;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $http_host;
        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass http://deployment-name:8080/;
        proxy_read_timeout 90;
        proxy_redirect http://deployment-name:8080/ https://example.com/;
    }
}
Create a deployment:
Here are the .yaml files
---
apiVersion: v1
kind: Service
metadata:
  name: puntdoctor-lb
spec:
  ports:
    - name: https
      port: 443
      targetPort: 443
    - name: http
      port: 80
      targetPort: 80
  selector:
    app: puntdoctor-nginx-deployment
  type: LoadBalancer
  loadBalancerIP: 35.195.214.7
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: puntdoctor-nginx-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: puntdoctor-nginx-deployment
    spec:
      containers:
        - name: adcelerate-nginx-proxy
          image: nginx:1.13
          volumeMounts:
            - name: certs
              mountPath: /certs/
            - name: site-config
              mountPath: /etc/site-config/
            - name: default-config
              mountPath: /etc/nginx/
          ports:
            - containerPort: 80
              name: http
            - containerPort: 443
              name: https
      volumes:
        - name: certs
          secret:
            secretName: nginxsecret
        - name: site-config
          configMap:
            name: nginx-config
        - name: default-config
          configMap:
            name: default
Hope this helps someone solve this issue; thanks to the other two answers, which both gave me valuable insight.
As of GKE version 1.18.10-gke.600, you can use FrontendConfig to create HTTP -> HTTPS redirection in Google Kubernetes Engine
HTTP to HTTPS redirects are configured using the redirectToHttps field
in a FrontendConfig custom resource. Redirects are enabled for the
entire Ingress resource so all services referenced by the Ingress will
have HTTPS redirects enabled.
The following FrontendConfig manifest enables HTTP to HTTPS redirects.
Set the spec.redirectToHttps.enabled field to true to enable HTTPS
redirects. The spec.responseCodeName field is optional. If it's
omitted a 301 Moved Permanently redirect is used.
For example:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: your-frontend-config-name
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
MOVED_PERMANENTLY_DEFAULT is one of the available RESPONSE_CODE field values; it returns a 301 redirect response code (the default if responseCodeName is unspecified).
You can find more options here: HTTP to HTTPS redirects
Then you have to link your FrontendConfig to the Ingress, like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: your-ingress-name
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: your-frontend-config-name
spec:
  tls:
    ...