Keycloak behind ingress with non-standard port

I successfully installed Keycloak with the Bitnami Helm chart.
The ingress settings are:

ingress:
  enabled: true
  hostname: "kc-test.local"

My ingress is listening on port 18000 (locally). If I now call
http://kc-test.local:18000 it works, but all links contain the URL without the port, e.g.:
http://kc-test.local/admin
The following setting did not help either:

extraEnvVars:
  - name: KC_HOSTNAME_URL
    value: "http://kc-test.local:18000"

Any ideas how to make my ingress (nginx) pass the requested port to Keycloak?
EDIT
The following annotations on the nginx ingress do not help:

annotations:
  nginx.ingress.kubernetes.io/server-snippet: |
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-Proto http;

Maybe replace or add the KEYCLOAK_FRONTEND_URL env var.
Source: Docker Hub documentation
https://hub.docker.com/r/jboss/keycloak/
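With the Bitnami chart, this variable can be supplied the same way as in the question; a minimal sketch (an assumption to verify: KEYCLOAK_FRONTEND_URL comes from the legacy jboss/keycloak image, while newer Quarkus-based images use KC_HOSTNAME_URL as already tried above, so which one applies depends on the image the chart deploys):

extraEnvVars:
  - name: KEYCLOAK_FRONTEND_URL
    value: "http://kc-test.local:18000"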

keycloak running on kubernetes | not able to browse admin console giving (blocked mixed content)

Describe the bug
I am running Keycloak 19.0.1 on Kubernetes 1.21.4.
I have one internal nginx ingress controller installed on Kubernetes that is not exposed to the outside, and another nginx server in the DMZ, exposed externally and running as a reverse proxy.
I run Keycloak with the option PROXY_ADDRESS_FORWARDING=true from inside Kubernetes.
I use the below Ingress file to expose it through the nginx ingress controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
  labels:
    app: keycloak
  name: keycloak
  namespace: fms-preprod
spec:
  rules:
  - host: keycloak-test-ssl.stcs.com.sa
    http:
      paths:
      - backend:
          service:
            name: keycloak
            port:
              number: 8443
        path: /
        pathType: Prefix
I define the below configuration inside the external nginx reverse proxy:

server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    server_name keycloak-test-ssl.stcs.com.sa;
    location / {
        proxy_pass http://ingress;
        proxy_set_header X-Forwarded-For $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        client_max_body_size 100M;
    }
    ssl_certificate /etc/ssl/blabla/bundle1.crt;
    ssl_certificate_key /etc/ssl/blabla/bundle1.key;
}
Everything works fine over plain HTTP without SSL and I am able to browse the admin console, but once SSL is configured on the external nginx reverse proxy as above, the first Keycloak page loads fine while the admin console is blocked from loading, giving a "blocked mixed content" error (shown in a screenshot not reproduced here).
Can anyone tell me what is wrong or missing in my configuration?
Thanks in advance.
Version
19.0.1
Expected behavior
Keycloak should work properly behind an nginx reverse proxy with SSL.
Actual behavior
Keycloak fails to load the admin console, giving a blocked mixed content error.
How to Reproduce?
Details are given in the description above.
Anything else?
No response
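A setup along the following lines typically resolves this class of mixed-content error when TLS terminates in front of Keycloak. This is a sketch, not a confirmed fix from the thread: it assumes the Quarkus-based distribution that Keycloak 19 ships as, where the legacy PROXY_ADDRESS_FORWARDING option is superseded by KC_PROXY. Note also that the reverse proxy above sets X-Forwarded-For to $host, whereas nginx's $proxy_add_x_forwarded_for is the conventional value:

# On the external nginx reverse proxy, inside the HTTPS server's location /:
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port 443;

# In the Keycloak container's environment (Quarkus distribution):
env:
- name: KC_PROXY
  value: "edge" # trust X-Forwarded-* headers; TLS is terminated upstream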

Nginx ingress ignores ConfigMap and annotations

I've set up a k8s cluster (1 bare metal node for now, which is both master and worker). I've also set up Nginx ingress controller as described here: https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/ Below are the exact steps:
kubectl apply -f common/ns-and-sa.yaml https://github.com/nginxinc/kubernetes-ingress/blob/release-1.11/deployments/common/ns-and-sa.yaml (no modifications)
kubectl apply -f rbac/rbac.yaml https://github.com/nginxinc/kubernetes-ingress/blob/release-1.11/deployments/rbac/rbac.yaml (no modifications)
kubectl apply -f common/default-server-secret.yaml https://github.com/nginxinc/kubernetes-ingress/blob/release-1.11/deployments/common/default-server-secret.yaml (no modifications)
kubectl apply -f common/nginx-config.yaml https://github.com/nginxinc/kubernetes-ingress/blob/release-1.11/deployments/common/nginx-config.yaml Modified file:
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  ignore-invalid-headers: "false"
  use-forwarded-headers: "true"
  forwarded-for-header: "CF-Connecting-IP"
  proxy-real-ip-cidr: "...IPs go here..."
kubectl apply -f common/ingress-class.yaml https://github.com/nginxinc/kubernetes-ingress/blob/release-1.11/deployments/common/ingress-class.yaml Modified file:
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: nginx.org/ingress-controller
These commands:
kubectl apply -f common/crds/k8s.nginx.org_virtualservers.yaml
kubectl apply -f common/crds/k8s.nginx.org_virtualserverroutes.yaml
kubectl apply -f common/crds/k8s.nginx.org_transportservers.yaml
kubectl apply -f common/crds/k8s.nginx.org_policies.yaml
No modifications, links:
https://github.com/nginxinc/kubernetes-ingress/blob/release-1.11/deployments/common/crds/k8s.nginx.org_virtualservers.yaml
https://github.com/nginxinc/kubernetes-ingress/blob/release-1.11/deployments/common/crds/k8s.nginx.org_virtualserverroutes.yaml
https://github.com/nginxinc/kubernetes-ingress/blob/release-1.11/deployments/common/crds/k8s.nginx.org_transportservers.yaml
https://github.com/nginxinc/kubernetes-ingress/blob/release-1.11/deployments/common/crds/k8s.nginx.org_policies.yaml
kubectl apply -f daemon-set/nginx-ingress.yaml https://github.com/nginxinc/kubernetes-ingress/blob/release-1.11/deployments/daemon-set/nginx-ingress.yaml (no modifications)
I've also set up cert-manager, which works fine (pretty sure this does not matter much).
Now, when I create some Ingress resource, it almost works. I can access it from the outer Internet, certificate issuing works, etc. But the ConfigMap (common/nginx-config.yaml) is not applied, and annotations like nginx.org/rewrite-target: /$1 are not applied either.
Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-com
  namespace: example-com
  annotations:
    nginx.org/rewrite-target: /$1
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    secretName: example-com-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /api/(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: api
            port:
              number: 80
      - path: /(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: frontend
            port:
              number: 80
Real domain names are used, of course. I get a 404 nginx error in this example. In another Ingress I pass the proxy-body-size annotation, which does not work either (cannot upload large files).
I've exec'd into the ingress controller pod with kubectl -n nginx-ingress exec -it nginx-ingress-snjjp bash and looked at the files in /etc/nginx/conf.d. None of the files contained the configuration specified in the ConfigMap or annotations.
This is what it looks like (I removed extra blank lines and replaced domain names):
# configuration for example-com/example-com
upstream example-com-example-com-example.com-api-80 {
    zone example-com-example-com-example.com-api-80 256k;
    random two least_conn;
    server 10.32.0.4:80 max_fails=1 fail_timeout=10s max_conns=0;
}
upstream example-com-example-com-example.com-frontend-80 {
    zone example-com-example-com-example.com-frontend-80 256k;
    random two least_conn;
    server 10.32.0.27:80 max_fails=1 fail_timeout=10s max_conns=0;
}
server {
    listen 80;
    listen 443 ssl;
    ssl_certificate /etc/nginx/secrets/example-com-example-com-tls;
    ssl_certificate_key /etc/nginx/secrets/example-com-example-com-tls;
    server_tokens on;
    server_name example.com;
    set $resource_type "ingress";
    set $resource_name "example-com";
    set $resource_namespace "example-com";
    if ($scheme = http) {
        return 301 https://$host:443$request_uri;
    }
    location /api/(.*) {
        set $service "api";
        proxy_http_version 1.1;
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 1m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering on;
        proxy_pass http://example-com-example-com-example.com-api-80;
    }
    location /(.*) {
        set $service "frontend";
        proxy_http_version 1.1;
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 1m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering on;
        proxy_pass http://example-com-example-com-example.com-frontend-80;
    }
}
I also tried the nginx.ingress.kubernetes.io/ annotations (I'm not a pro, as you can see; it was what I googled). No success.
I am updating my cluster; with the older version of k8s (1.15, I think) everything worked a couple of days ago. I used the exact same configuration for every service, except the ingress controller, of course.
Any ideas?
I've found out what was wrong. I was using the Kubernetes NGINX Ingress Controller (https://kubernetes.github.io/ingress-nginx/) in my old setup, and now I am using the NGINX Ingress Controller (https://www.nginx.com/products/nginx-ingress-controller/). These implementations have different annotations (the latter is missing many useful ones). This is really confusing, as the configuration is similar and one may think they are the same.
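In practice, the annotation prefix has to match the controller that is actually installed. A sketch of the two equivalents for the rewrite above (the nginx.org/rewrites value follows the NGINX Inc. controller's per-service syntax; treat the exact string as an assumption to verify against its docs):

metadata:
  annotations:
    # Read by the community controller (kubernetes/ingress-nginx):
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    # Read by the NGINX Inc. controller (nginxinc/kubernetes-ingress):
    nginx.org/rewrites: "serviceName=api rewrite=/"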

How can I configure NGINX ingress controller to work with Cloudflare and Digital Ocean Load Balancer?

I have tried the answers in this question. This is my current configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.13.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.35.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: 'true'
  enable-real-ip: "true"
  proxy-real-ip-cidr: "173.245.48.0/20,173.245.48.0/20,103.21.244.0/22,103.22.200.0/22,103.31.4.0/22,141.101.64.0/18,108.162.192.0/18,190.93.240.0/20,188.114.96.0/20,197.234.240.0/22,198.41.128.0/17,162.158.0.0/15,104.16.0.0/12,172.64.0.0/13,131.0.72.0/22,2400:cb00::/32,2606:4700::/32,2803:f800::/32,2405:b500::/32,2405:8100::/32,2a06:98c0::/29,2c0f:f248::/32"
  # use-forwarded-headers: "true"
  # compute-full-forwarded-for: "true"
  # forwarded-for-header: "Cf-Connecting-Ip"
  # forwarded-for-header: "X-Original-Forwarded-For"
  server-snippet: |
    real_ip_header CF-Connecting-IP;
None of the configurations I have tried actually gives the originating IP as the real IP.
Before I applied the configuration, I was getting:
Host: example.com
X-Request-ID: deadcafe
X-Real-IP: 162.158.X.X (A Cloudflare IP)
X-Forwarded-For: 162.158.X.X (Same as above)
X-Forwarded-Proto: https
X-Forwarded-Host: example.com
X-Forwarded-Port: 80
X-Scheme: https
X-Original-Forwarded-For: <The Originating IP that I want>
Accept-Encoding: gzip
CF-IPCountry: IN
CF-RAY: cafedeed
CF-Visitor: {"scheme":"https"}
user-agent: Mozilla/5.0
accept-language: en-US,en;q=0.5
referer: https://pv-hr.jptec.in/
upgrade-insecure-requests: 1
cookie: __cfduid=012dadfad
CF-Request-ID: 01234faddad
CF-Connecting-IP: <The Originating IP that I want>
CDN-Loop: cloudflare
After applying the config map, the headers are:
Host: example.com
X-Request-ID: 0123fda
X-Real-IP: 10.X.X.X (An IP that matches the private ip of the Digital Ocean droplets in the vpc, so guessing its the load balancer)
X-Forwarded-For: 10.X.X.X (Same as above)
X-Forwarded-Proto: http
X-Forwarded-Host: example.com
X-Forwarded-Port: 80
X-Scheme: http
X-Original-Forwarded-For: <Originating IP>
Accept-Encoding: gzip
CF-IPCountry: US
CF-RAY: 5005deeb
CF-Visitor: {"scheme":"https"}
accept: /
user-agent: Mozilla/5.0
CF-Request-ID: 1EE7af
CF-Connecting-IP: <Originating IP>
CDN-Loop: cloudflare
So the only change after the configuration is that the real IP now points to some internal resource in the Digital Ocean VPC. I haven't been able to track that down, but I am guessing it's the load balancer. I am confident it is a DO resource because it matches the IPs of the Kubernetes nodes. So I am not really sure why this is happening and what I should be doing to get the originating IP as the real IP.
The problem you are facing is here:
proxy-real-ip-cidr: "173.245.48.0/20,173.245.48.0/20,103.21.244.0/22,103.22.200.0/22,103.31.4.0/22,141.101.64.0/18,108.162.192.0/18,190.93.240.0/20,188.114.96.0/20,197.234.240.0/22,198.41.128.0/17,162.158.0.0/15,104.16.0.0/12,172.64.0.0/13,131.0.72.0/22,2400:cb00::/32,2606:4700::/32,2803:f800::/32,2405:b500::/32,2405:8100::/32,2a06:98c0::/29,2c0f:f248::/32"
However, the traffic nginx actually sees comes from your DO load balancer (10.x.x.x), which does not match any of these ranges, so it is ignored by this rule.
I did the following to get it functional:

apiVersion: v1
data:
  enable-real-ip: "true"
  server-snippet: |
    real_ip_header CF-Connecting-IP;
kind: ConfigMap
metadata:
[...]
Security notice: this applies to all traffic, even traffic that did not originate from Cloudflare, so someone could spoof the header on a request to impersonate another IP address.
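One way to narrow that exposure is to trust CF-Connecting-IP only for connections arriving from Cloudflare's published ranges. A sketch, assuming PROXY protocol is enabled on the DO load balancer so that nginx sees the Cloudflare edge as its peer (CIDR list abridged; take the current one from https://www.cloudflare.com/ips/):

data:
  use-proxy-protocol: 'true'
  enable-real-ip: "true"
  # Trust the real-ip header only from Cloudflare's ranges (abridged):
  proxy-real-ip-cidr: "173.245.48.0/20,103.21.244.0/22,2400:cb00::/32"
  server-snippet: |
    real_ip_header CF-Connecting-IP;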
I was having the same issue and the above accepted answer didn't help me.
This is what I did to solve the issue:
#1 - On the ingress-nginx-controller Service I added the below annotation, which instructs the DO load balancer to pass the Cloudflare IPs to the ingress instead of the DO load balancer IPs:
service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: 'true'
#2 - In the ConfigMap for ingress-nginx-controller I added the below under data:
- set use-proxy-protocol as well
- add set_real_ip_from (Cloudflare's IPs)
- the server-snippet could be as below, or real_ip_header X-Forwarded-For; see the docs for more
Please use Cloudflare's published IP list to get the updated ranges.
data:
  use-proxy-protocol: 'true'
  server-snippet: 'real_ip_header CF-Connecting-IP;'
  set_real_ip_from: '173.245.48.0/20,103.21.244.0/22,103.22.200.0/22,103.31.4.0/22,141.101.64.0/18,108.162.192.0/18,190.93.240.0/20,188.114.96.0/20,197.234.240.0/22,198.41.128.0/17,162.158.0.0/15,104.16.0.0/13,104.24.0.0/14,172.64.0.0/13,131.0.72.0/22,2400:cb00::/32,2606:4700::/32,2803:f800::/32,2405:b500::/32,2405:8100::/32,2a06:98c0::/29,2c0f:f248::/32'

Kong :: Client IP missing in X-FORWARDED-FOR

Using Kong ingress controller (v2.1) in Kubernetes. Running in Digital Ocean.
The problem is that client IP is missing in X-FORWARDED-FOR header for HTTPS request, yet present on the HTTP request. I need to be able to see the IP on the HTTPS requests as well.
The value received in X-FORWARDED-FOR header on HTTP request is "<the-client-ip>, <the-k8s-node-ip>".
Yet in case of HTTPS request the value is "<the-k8s-node-ip>". The client-ip is lost.
Kong was installed using Helm. The most relevant part of the config is:

proxy:
  enabled: true
  http:
    enabled: true
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-algorithm: "round_robin"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-tls-passthrough: "true"
The problem is with the SSL passthrough. You can't modify headers when using this configuration, because the load balancer never decrypts the traffic. You should use SSL termination instead, unless you have specific compliance requirements.
Check out the doc for more:
https://www.digitalocean.com/docs/networking/load-balancers/how-to/ssl-passthrough/
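For comparison, a sketch of terminating TLS at the DO load balancer instead, assuming a certificate has already been uploaded to DigitalOcean (the certificate ID below is a placeholder):

proxy:
  enabled: true
  annotations:
    # Terminate TLS at the load balancer so headers can be added to decrypted traffic:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-cert-id" # placeholder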

http -> https redirect in Google Kubernetes Engine

I'm looking to redirect all traffic from
http://example.com -> https://example.com, like nearly all websites do.
I've looked at this link without success:
Kubernetes HTTPS Ingress in Google Container Engine
And I have tried the following annotations in my ingress.yaml file:

nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($http_x_forwarded_proto != 'https') {
    return 301 https://$host$request_uri;
  }
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
kubernetes.io/ingress.allow-http: "false"

All without any success. To be clear: I can access https://example.com and http://example.com without any errors; I need the http call to redirect to https.
Thanks
GKE uses GCE L7. The rules that you referenced in the example are not supported, and the HTTP-to-HTTPS redirect should be controlled at the application level.
L7 inserts the x-forwarded-proto header, which you can use to determine whether the frontend traffic came in over HTTP or HTTPS. Take a look here: Redirecting HTTP to HTTPS.
There is also an example in that link for nginx (copied for convenience):
# Replace '_' with your hostname.
server_name _;
if ($http_x_forwarded_proto = "http") {
    return 301 https://$host$request_uri;
}
Currently, the documentation for how to do this properly (annotations, SSL/HTTPS, health checks, etc.) is severely lacking, and has been for far too long. I suspect it's because they prefer you to use App Engine, which is magical but stupidly expensive. For GKE, here are two options:
an Ingress with a Google-managed SSL cert and additional NGINX server configuration in front of your app/site
the NGINX Ingress Controller with self-managed/third-party SSL certs
The following are the steps to a working setup using the former.
1 The door to your app
nginx.conf: (ellipses represent other non-relevant, non-compulsory settings)
user nginx;
worker_processes auto;
events {
    worker_connections 1024;
}
http {
    ...
    keepalive_timeout 620s;
    ## Logging ##
    ...
    ## MIME Types ##
    ...
    ## Caching ##
    ...
    ## Security Headers ##
    ...
    ## Compression ##
    ...
    server {
        listen 80;
        ## HTTP Redirect ##
        if ($http_x_forwarded_proto = "http") {
            return 301 https://[YOUR DOMAIN]$request_uri;
        }
        location /health/liveness {
            access_log off;
            default_type text/plain;
            return 200 'Server is LIVE!';
        }
        location /health/readiness {
            access_log off;
            default_type text/plain;
            return 200 'Server is READY!';
        }
        root /usr/src/app/www;
        index index.html index.htm;
        server_name [YOUR DOMAIN] www.[YOUR DOMAIN];
        location / {
            try_files $uri $uri/ /index.html;
        }
    }
}
NOTE: One serving port only. The global forwarding rule adds the http_x_forwarded_proto header to all traffic that passes through it. Because ALL traffic to your domain now passes through this rule (remember: one port on the container, service, and ingress), this header will (crucially!) always be set. Note the check and redirect above: serving only continues if the header value is 'https'. The root, index, and location values may differ for your project (this is an Angular project). keepalive_timeout is set to the value recommended by Google. I prefer using the main nginx.conf file, but most people add a custom.conf file to /etc/nginx/conf.d; if you do this, just make sure the file is imported into the main nginx.conf http block using an include statement. The comments highlight where other settings you may want to add once everything is working will go, like gzip/brotli, security headers, where logs are saved, and so on.
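For the conf.d variant mentioned in the note, the import is a single directive inside the http block (stock nginx images already ship with an equivalent line by default):

http {
    # Pull in any per-site configuration files:
    include /etc/nginx/conf.d/*.conf;
}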
Dockerfile:
...
COPY nginx.conf /etc/nginx/nginx.conf
CMD ["nginx", "-g", "daemon off;"]
NOTE: only the final two lines matter. Specifying an EXPOSE port is unnecessary. COPY replaces the default nginx.conf with the modified one. CMD starts a lightweight server.
2 Create a deployment manifest and apply/create
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uber-dp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uber
  template:
    metadata:
      labels:
        app: uber
    spec:
      containers:
      - name: uber-ctr
        image: gcr.io/uber/beta:v1 # or some other registry
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 60
          httpGet:
            path: /health/liveness
            port: 80
            scheme: HTTP
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 30
          httpGet:
            path: /health/readiness
            port: 80
            scheme: HTTP
        ports:
        - containerPort: 80
        imagePullPolicy: Always
NOTE: only one specified port is necessary, as we're going to point all (HTTP and HTTPS) traffic to it. For simplicity we're using the same paths for the liveness and readiness probes; these checks will be dealt with on the NGINX server, but you can and should add checks that probe the health of your app itself (e.g. a dedicated page that returns a 200 if healthy). The readiness probe will also be picked up by GCE, which by default has its own irremovable health check.
3 Create a service manifest and apply/create
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: uber-svc
  labels:
    app: uber
spec:
  ports:
  - name: default-port
    port: 80
  selector:
    app: uber
  sessionAffinity: None
  type: NodePort
NOTE: default-port specifies port 80 on the container.
4 Get a static IP address
On GCP, in the hamburger menu: VPC Network -> External IP Addresses. Convert your auto-generated ephemeral IP to static, or create a new one. Take note of the name and address.
5 Create an SSL cert and default zone
In the hamburger menu: Network Services -> Load Balancing -> click 'advanced menu' -> Certificates -> Create SSL Certificate. Follow the instructions, create or upload a certificate, and take note of the name. Then, from the menu: Cloud DNS -> Create Zone. Following the instructions, create a default zone for your domain. Add a CNAME record with www as the DNS name and your domain as the canonical name. Add an A record with an empty DNS name value and your static IP as the IPv4. Save.
6 Create an ingress manifest and apply/create
ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mypt-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: [NAME OF YOUR STATIC IP ADDRESS]
    kubernetes.io/ingress.allow-http: "true"
    ingress.gcp.kubernetes.io/pre-shared-cert: [NAME OF YOUR GOOGLE-MANAGED SSL]
spec:
  backend:
    serviceName: mypt-svc
    servicePort: 80
NOTE: the backend property points to the service, which points to the container that holds your app, 'protected' by a server. The annotations connect your app with SSL and force-allow HTTP for the health checks. Combined, the service and ingress configure the GCE L7 load balancer (global forwarding rule, backend and frontend 'services', SSL certs, target proxies, etc.).
7 Make a cup of tea or something
Everything needs ~10 minutes to configure. Clear your cache and test your domain with various browsers (Tor, Opera, Safari, IE, etc.). Everything will serve over https.
What about the NGINX Ingress Controller?
I've seen discussion of it being better because it's cheaper and uses fewer resources, and because it's more flexible. It isn't cheaper: it requires an additional deployment/workload and service (GCE L4), and you need to do more configuration. Is it more flexible? Yes. But in taking care of most of the work, the first option gives you a more important kind of flexibility: it lets you get on with more pressing matters.
For everyone like me who searches this question about once a month: Google has responded to our requests and is testing HTTP->HTTPS SSL redirection on their load balancers. Their latest answer said it should be in Alpha sometime before the end of January 2020.
Their comment:
Thank you for your patience on this issue. The feature is currently in testing and we expect to enter Alpha phase before the end of January. Our PM team will have an announcement with more details as we get closer to the Alpha launch.
Update: HTTP to HTTPS redirect is now Generally Available: https://cloud.google.com/load-balancing/docs/features#routing_and_traffic_management
GKE uses its own Ingress controller, which does not support forcing HTTPS.
That's why you will have to manage the NGINX Ingress Controller yourself.
See this post on how to do it on GKE.
Hope it helps.
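For reference, once the NGINX Ingress Controller is installed, the redirect the question asks for reduces to a single annotation. A sketch with placeholder names (ssl-redirect also defaults to true whenever the Ingress has a tls section):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress # placeholder
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    secretName: example-com-tls # placeholder
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend # placeholder
            port:
              number: 80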
For what it's worth, I ended up using a reverse proxy in NGINX.
You need to create secrets and sync them into your containers.
You need to create a ConfigMap with your nginx config, as well as a default config that references this additional config file (a sketch of creating both follows).
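A sketch of creating both objects, assuming a TLS cert/key pair on disk; the names nginxsecret and nginx-config match the volumes in the deployment manifests further down:

# TLS secret, mounted at /certs/ by the deployment below:
kubectl create secret tls nginxsecret --cert=tls.crt --key=tls.key
# ConfigMap holding the site config included from nginx.conf:
kubectl create configmap nginx-config --from-file=config-file.conf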
Here is my configuration:
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    default_type application/octet-stream;
    # Logging Configs
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;
    # Puntdoctor Proxy Config
    include /path/to/config-file.conf;
    # PubSub allows 10MB files; let's allow 11 to give some space
    client_max_body_size 11M;
}
Then, the config.conf:

server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}
server {
    listen 443;
    server_name example.com;
    ssl_certificate /certs/tls.crt;
    ssl_certificate_key /certs/tls.key;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-RC4-SHA:AES128-GCM-SHA256:HIGH:!RC4:!MD5:!aNULL:!EDH:!CAMELLIA;
    ssl_prefer_server_ciphers on;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $http_host;
        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass http://deployment-name:8080/;
        proxy_read_timeout 90;
        proxy_redirect http://deployment-name:8080/ https://example.com/;
    }
}
Create a deployment:
Here are the .yaml files
---
apiVersion: v1
kind: Service
metadata:
  name: puntdoctor-lb
spec:
  ports:
  - name: https
    port: 443
    targetPort: 443
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: puntdoctor-nginx-deployment
  type: LoadBalancer
  loadBalancerIP: 35.195.214.7
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: puntdoctor-nginx-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: puntdoctor-nginx-deployment
    spec:
      containers:
      - name: adcelerate-nginx-proxy
        image: nginx:1.13
        volumeMounts:
        - name: certs
          mountPath: /certs/
        - name: site-config
          mountPath: /etc/site-config/
        - name: default-config
          mountPath: /etc/nginx/
        ports:
        - containerPort: 80
          name: http
        - containerPort: 443
          name: https
      volumes:
      - name: certs
        secret:
          secretName: nginxsecret
      - name: site-config
        configMap:
          name: nginx-config
      - name: default-config
        configMap:
          name: default
Hope this helps someone solve this issue. Thanks to the other two answers; they both gave me valuable insight.
As of GKE version 1.18.10-gke.600, you can use FrontendConfig to create HTTP -> HTTPS redirection in Google Kubernetes Engine.
HTTP to HTTPS redirects are configured using the redirectToHttps field in a FrontendConfig custom resource. Redirects are enabled for the entire Ingress resource, so all services referenced by the Ingress will have HTTPS redirects enabled.
The following FrontendConfig manifest enables HTTP to HTTPS redirects. Set the spec.redirectToHttps.enabled field to true to enable HTTPS redirects. The spec.responseCodeName field is optional. If it's omitted, a 301 Moved Permanently redirect is used.
For example:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: your-frontend-config-name
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
MOVED_PERMANENTLY_DEFAULT is one of the available RESPONSE_CODE field values; it returns a 301 redirect response code (the default if responseCodeName is unspecified).
You can find more options here: HTTP to HTTPS redirects
Then you have to link your FrontendConfig to the Ingress, like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: your-ingress-name
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: your-frontend-config-name
spec:
  tls:
  ...