How to make HAProxy read scss/images/icons files inside an assets folder referenced in an index.html file? - haproxy

How to make an HAProxy k8s pod read the scss/images/icons files inside the assets folder referenced in an index.html file?
I have a maintenance page whose text content renders perfectly.
We use the maintenance page when our backend application is down for some reason and we get a 503 error message.
Below is the haproxy.cfg file
global
    log 127.0.0.1 local1
    maxconn 4096
    ssl-default-bind-ciphers TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:TLS13-CHACHA20-POLY1305-SHA256:EECDH+AESGCM:EECDH+CHACHA20
    ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11

defaults
    mode http
    maxconn 2048

frontend e-store-app
    bind *:80
    bind *:443 ssl crt /usr/local/etc/haproxy/e-store-app.pem
    errorfile 503 /usr/local/etc/haproxy/errors/index.html
    # 16000000 seconds is a bit more than 6 months
    http-response set-header Strict-Transport-Security "max-age=16000000; includeSubDomains; preload;"
    redirect scheme https if !{ ssl_fc }
    mode http
    timeout connect 5s
    timeout client 5s
    timeout server 5s
    default_backend e-store-app

backend e-store-app
    redirect scheme https if !{ ssl_fc }
    server e-store-app e-store-app:8080 check inter 5s rise 2 fall 3
    compression algo gzip
    compression type text/css text/html text/javascript application/javascript text/plain text/xml application/json
Haproxy Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-haproxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-haproxy
  template:
    metadata:
      labels:
        app: test-haproxy
    spec:
      containers:
      - name: test-haproxy
        image: haproxy:1.7
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        - containerPort: 443
        volumeMounts:
        - mountPath: "/usr/local/etc/haproxy"
          name: haproxy-config
      volumes:
      - name: haproxy-config
        hostPath:
          path: /root/kubernetes/dev-cluster-manifest/haproxy
The text message in index.html is coming through as expected, but the scss/images referenced in index.html from the errors/assets directory are not.
Are we doing something wrong in the HAProxy config file?
How can we make HAProxy serve the images and icons in the errors/assets directory that are referenced inside the index.html file?

As HAProxy is a reverse proxy and not a web server, you can't serve the assets from disk via HAProxy.
What you can do is inline all of the asset data into the HTML file; this could require increasing tune.bufsize.
Please consider using a supported HAProxy version like 2.6 or 2.7, as the Deployment shows image: haproxy:1.7, which isn't supported by the community any more; see the table on https://www.haproxy.org/
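A rough sketch of that approach, assuming the CSS and images are inlined into the error page itself (e.g. as <style> blocks and base64 data: URIs) so that index.html is fully self-contained; the 128 kB value below is just an example, not a recommendation:

global
    log 127.0.0.1 local1
    maxconn 4096
    # an errorfile response must fit into a single buffer, so raise it
    # if the self-contained index.html grows beyond the 16 kB default
    tune.bufsize 131072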

Related

Kubernetes nginx controller: avoid loading assets (css, js, etc.) via 302 redirect to https in browser

I have a local kubernetes cluster (k3s) with an nginx controller (installed via helm). I have two services (Spring-Boot myapp and an auth-server (OAuth2)).
I'm trying to make my application work with http only. Therefore, I have defined an ingress resource in the following way:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/configuration-snippet: |-
      if ($uri = /){
        return 302 http://$http_host/myapp/;
      }
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.cloud
    http:
      paths:
      - path: /myapp
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: auth-server
            port:
              number: 8080
I have also added the following parameters to the nginx-controller config-map:
hsts: "false"
ssl-redirect: "false"
hsts-max-age: "0"
I have also cleared HSTS in my browsers (Safari & Firefox). SSL (server.ssl.enabled=false) is disabled for both of my backend services.
When loading http://myapp.cloud, I get redirected correctly to the login page of the auth-server (http://myapp.cloud/login). However, the page doesn't get loaded correctly, because the static assets (js, css) are not loaded. Instead the requests to load them are redirected with 302 to the same resources with https. Due to the fact that the default fake certificate of nginx is invalid, these don't get loaded.
If I access these assets directly in my browser (e.g. http://myapp.cloud/assets/style.css), I also get a 302 redirect to https://myapp.cloud/assets/style.css, and this doesn't load because the nginx certificate is invalid.
If I port-forward to the k8s service directly via http, they are loaded correctly.
Is there a possibility to make this work with http only or do I absolutely need to use a certificate manager etc. and make this work via https? What is missing in my configuration/settings?
I have decided to go with enabling HTTPS with a self-signed certificate; I think there's currently no way around it.
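For reference, a rough sketch of that route with a self-signed certificate (file, secret, and host names here are just examples matching the Ingress above):

# create a self-signed cert for the ingress host
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=myapp.cloud"

# store it as a TLS secret the ingress can reference
kubectl create secret tls myapp-tls --cert=tls.crt --key=tls.key

# then add a tls section to the Ingress spec:
#   spec:
#     tls:
#     - hosts:
#       - myapp.cloud
#       secretName: myapp-tls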

HAProxy peering in kubernetes

Background
Because our application needs to use stick tables keyed on a custom header, we decided to use HAProxy; our layout looks as follows:
Nginx Ingress -> HAProxy service -> headless services of a stateful application
So far stickiness works fine, but if a request is handled by the other HAProxy replica, it fails. We are trying to use peers to address this problem.
I use bitnami helm chart to deploy it, this is my values file:
metadata:
  chartName: bitnami/haproxy
  chartVersion: 0.3.7
service:
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
  - name: https
    protocol: TCP
    port: 443
    targetPort: 8080
  - name: peers
    protocol: TCP
    port: 10000
    targetPort: 10000
containerPorts:
- name: http
  containerPort: 8080
- name: https
  containerPort: 8080
- name: peers
  containerPort: 10000
configuration: |
  global
    log stdout format raw local0 debug
  defaults
    mode http
    option httplog
    timeout client 10s
    timeout connect 5s
    timeout server 10s
    timeout http-request 10s
    log global
  resolvers default
    nameserver dns1 172.20.0.10:53
    hold timeout 30s
    hold refused 30s
    hold valid 10s
    resolve_retries 3
    timeout retry 3s
  peers hapeers
    peer $(MY_POD_IP):10000    # I attempted to do something like this
    peer $(REPLICA_2_IP):10000 #
  frontend stats
    bind *:8404
    stats enable
    stats uri /
    stats refresh 10s
  frontend myfrontend
    mode http
    option httplog
    bind *:8080
    default_backend webservers
  backend webservers
    mode http
    log stdout local0 debug
    stick-table type string len 64 size 1m expire 1d peers hapeers
    stick on req.hdr(MyHeader)
    server s1 headless-service-1:8080 resolvers default check port 8080 inter 5s rise 2 fall 20
    server s2 headless-service-2:8080 resolvers default check port 8080 inter 5s rise 2 fall 20
    server s3 headless-service-3:8080 resolvers default check port 8080 inter 5s rise 2 fall 20
replicaCount: 2
extraEnvVars:
- name: LOG_LEVEL
  value: debug
- name: MY_POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
From what I read in the HAProxy documentation, it requires the peers' IPs, which in this case are the replicas' IPs. However, the ConfigMap does not allow injecting IPs from the HAProxy replicas.
I also thought of using an initContainer to modify haproxy.cfg at deployment time with the correct IPs, but the volume is read-only and I would have to alter a fork of the chart to customize it.
If anyone has an idea of a different approach or workaround, I would appreciate the comments. Thanks!
...the configmap does not allow injecting IPs from the HAProxy replicas.
HAProxy's configuration supports environment variables, e.g. peer $(MY_POD_IP):10000 => peer ${MY_POD_IP}:10000.
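So a working peers section could look roughly like this (a sketch; the peer names are hypothetical and must match what each HAProxy instance reports as its local peer name, e.g. via the -L option or the pod hostname):

peers hapeers
    # ${...} is expanded from the container environment at startup,
    # so MY_POD_IP / REPLICA_2_IP must be exported via extraEnvVars
    peer haproxy-0 ${MY_POD_IP}:10000
    peer haproxy-1 ${REPLICA_2_IP}:10000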

K8S with Traefik and HAP not getting real client IP on pods

I have an external LB using HAProxy in order to have a HA k8s cluster. My cluster is in K3s from Rancher and it's using Traefik LB internally.
I'm currently facing an issue where in my pods I'm getting the Traefik IP instead of the real client IP.
HAP Configuration:
# Ansible managed
defaults
    maxconn 1000
    mode http
    log global
    option dontlognull
    log stdout local0 debug
    option httplog
    timeout http-request 5s
    timeout connect 5000
    timeout client 2000000
    timeout server 2000000

frontend k8s
    bind *:6443
    bind *:80
    bind *:443
    mode tcp
    option tcplog
    use_backend masters-k8s

backend masters-k8s
    mode tcp
    balance roundrobin
    server master01 master01.k8s.int.ntw
    server master02 master02.k8s.int.ntw
# end Ansible managed
Traefik Service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: traefik
    meta.helm.sh/release-namespace: kube-system
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
  labels:
    app: traefik
    app.kubernetes.io/managed-by: Helm
    chart: traefik-1.81.0
    heritage: Helm
    release: traefik
spec:
  clusterIP: 10.43.250.142
  clusterIPs:
  - 10.43.250.142
  externalTrafficPolicy: Local
  ports:
  - name: http
    nodePort: 32232
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 30955
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: traefik
    release: traefik
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 10.0.1.1
    - ip: 10.0.1.11
    - ip: 10.0.1.12
    - ip: 10.0.1.2
With this configuration I never get my real IP on the pods. During some research I saw that people recommend using send-proxy in the HAP, like this:
server master01 master01.k8s.int.ntw check send-proxy-v2
server master02 master02.k8s.int.ntw check send-proxy-v2
But when I do so, all my cluster communication returns ERR_CONNECTION_CLOSED.
If I'm looking at it correctly, this means it's going from the HAP to the cluster and the cluster is rejecting the traffic somewhere.
Any clues what I'm missing here?
Thanks
Well, you have two options:
1. use the proxy protocol
2. use the X-Forwarded-For header
Option 1: proxy protocol
This option requires that both sides, HAProxy and Traefik, use the proxy protocol; that's the reason people recommend send-proxy-v2.
It also requires that all other clients which want to connect to Traefik MUST use the proxy protocol; if a client does not use the proxy protocol, then you get exactly what you got: a communication error.
As you configured HAProxy in TCP mode, this is the only option to get the client IP to Traefik.
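As a sketch of the Traefik side, assuming Traefik v2-style static configuration (the chart version in the question is older and uses different option names), the entrypoints would need to trust the PROXY protocol from the HAProxy address:

# traefik static configuration (v2 syntax; adjust to your install)
entryPoints:
  web:
    address: ":80"
    proxyProtocol:
      # hypothetical range covering the HAProxy load balancer
      trustedIPs:
        - "10.0.1.0/24"
  websecure:
    address: ":443"
    proxyProtocol:
      trustedIPs:
        - "10.0.1.0/24"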
Option 2: X-Forwarded-For
I personally would use this option because it makes it possible to connect to Traefik with any HTTP(S) client.
It requires that you use HTTP mode in HAProxy and some more parameters, like option forwardfor.
# Ansible managed
defaults
    maxconn 1000
    mode http
    log global
    option dontlognull
    log stdout local0 debug
    option httplog
    timeout http-request 5s
    timeout connect 5s
    timeout client 200s
    timeout server 200s
    # send client ip in the x-forwarded-for header
    option forwardfor

frontend k8s
    bind *:6443 v4v6 alpn h2,http/1.1 ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem crt /etc/ssl/haproxy/
    bind *:80 v4v6 alpn h2,http/1.1 ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem crt /etc/ssl/haproxy/
    bind *:443 v4v6 alpn h2,http/1.1 ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem crt /etc/ssl/haproxy/
    use_backend masters-k8s

backend masters-k8s
    balance roundrobin
    server master01 master01.k8s.int.ntw check
    server master02 master02.k8s.int.ntw check
# end Ansible managed
The file /etc/haproxy/letsencryptauthorityx3.pem contains the CAs for the backends; the directory /etc/ssl/haproxy/ contains the certificates for the frontends.
Please take a look into the documentation about the crt keyword.
You also have to configure Traefik to trust the X-Forwarded-For header coming from HAProxy: forwarded-headers
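A sketch of that Traefik setting, again assuming v2-style static configuration (option names differ for the Traefik v1 chart shown in the question):

# traefik static configuration (v2 syntax; adjust to your install)
entryPoints:
  websecure:
    address: ":443"
    forwardedHeaders:
      # hypothetical range covering the HAProxy that sets X-Forwarded-For
      trustedIPs:
        - "10.0.1.0/24"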

Custom Errors Backend Configuration for Nginx Ingress Controller

I have the Nginx Ingress controller deployed in the Nginx namespace of my Kubernetes cluster.
I am building a custom backend image and deploying it in the Nginx namespace as the nginx default backend. It should serve custom HTTP error pages like 404, 500 and 503, etc.
I built one and deployed it using a helm chart with the default.conf file below, but it serves index.html for the 404 error and just the default 503 error (not my custom 503.html page).
server {
    listen 8080 default_server;
    root /var/www/html;
    index index.html;

    location / {
    }
    location /healthz {
        access_log off;
        return 200 "healthy\n";
    }
    location /metrics {
        stub_status on;
    }

    error_page 404 /404.html;
    location = /404.html {
        internal;
    }
    error_page 500 /500.html;
    location = /500.html {
        internal;
    }
    error_page 503 /503.html;
    location = /503.html {
        internal;
    }
}
Dockerfile
FROM nginxinc/nginx-unprivileged
USER root
RUN rm /etc/nginx/conf.d/default.conf && \
    ln -sf /dev/stdout /var/log/nginx/access.log && \
    ln -sf /dev/stderr /var/log/nginx/error.log
COPY ./default.conf /etc/nginx/conf.d/default.conf
COPY content/ /var/www/html/
CMD ["nginx", "-g", "daemon off;"]
Please ask me if you need more info.
Your help will be much appreciated,
Thanks.
To update the default error pages, edit the ConfigMap of the nginx-ingress-controller. Insert a new key custom-http-errors with the HTTP status codes whose error pages you want to change, such as:
apiVersion: v1
kind: ConfigMap
data:
  custom-http-errors: 404,413,503
  enable-vts-status: "false"
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.6.0
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: Nginx
Now, fire the test command again,
curl -i "[http://err-test.192.168.64.5.nip.io/err?code=413](http://err-test.192.168.64.5.nip.io/err?code=413)"
HTTP/1.1 404 Not Found
Server: nginx/1.15.10
Date: Sun, 05 May 2019 11:36:04 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 21
Connection: closedefault backend - 404
This is NOT what you want. The Nginx ingress controller correctly captures the HTTP status code that we want to customize. However, the default Nginx "default backend" (image: k8s.gcr.io/defaultbackend:1.4) simply returns a 404 status regardless of the actual status code that the application intends to return. This will cause a problem if the status code is to be used for other purposes.
In the Nginx Ingress Controller documentation you may see:
The custom backend is expected to return the correct HTTP status code
instead of 200. NGINX does not change the response from the custom
default backend.
Then, in the Dockerfile, you have to add the path to your custom error backend:
ADD custom-error-pages /
CMD ["/custom-error-pages"]
Build the image and push it to Docker Hub. Redeploy the Nginx helm chart; in K3s it looks like:
apiVersion: k3s.cattle.io/v1
kind: HelmChart
metadata:
  name: nginx-ingress
  namespace: Nginx
spec:
  chart: stable/nginx-ingress
  targetNamespace: kube-system
  valuesContent: |-
    defaultBackend:
      enabled: true
      name: default-backend
      image:
        repository: your-image/custom-error-page
        tag: latest
In k8s simply execute:
helm upgrade nginx-ingress stable/nginx-ingress --set defaultBackendService.enabled=true --set defaultBackend.image.repository=your-image/custom-error-page --set defaultBackend.image.tag=latest
Then test again, execute the same curl command:
➜ curl -i "http://err-test.192.168.64.5.nip.io/err?code=413"
HTTP/1.1 413 Request Entity Too Large
Server: nginx/1.15.10
Date: Sun, 05 May 2019 12:09:21 GMT
Content-Type: */*
Transfer-Encoding: chunked
Connection: close

4xx html
You will have the custom error page with the correct HTTP status code.
Since you also trap the 503 status, scale down the replicas to test it.
Execute command:
$ kubectl scale deploy err-status-test --replicas=0
If you access the application again, you will see the custom error page with the status code shown as 503, which is expected.
More information you can find here: custom-backend-errors.
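Alternatively, if you want to keep the nginx-based backend from the question, a rough sketch of the idea (assuming the controller forwards the original status code to the default backend in the X-Code header, as described in the linked docs) could look like this; it is not the official custom-error-pages implementation:

server {
    listen 8080 default_server;
    root /var/www/html;

    location /healthz {
        access_log off;
        return 200 "healthy\n";
    }

    location / {
        # X-Code is set by the ingress controller; re-emit that status so
        # error_page serves the matching page with the original code
        if ($http_x_code = 503) { return 503; }
        if ($http_x_code = 500) { return 500; }
        return 404;
    }

    error_page 404 /404.html;
    error_page 500 /500.html;
    error_page 503 /503.html;
    location ~ ^/(404|500|503)\.html$ { internal; }
}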

http -> https redirect in Google Kubernetes Engine

I'm looking to redirect all traffic from
http://example.com -> https://example.com like how nearly all websites do.
I've looked at this link with no success:
Kubernetes HTTPS Ingress in Google Container Engine
And have tried the following annotations in my ingress.yaml file.
nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($http_x_forwarded_proto != 'https') {
    return 301 https://$host$request_uri;
  }
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
kubernetes.io/ingress.allow-http: "false"
All without any success. To be clear, I can access https://example.com and http://example.com without any errors, I need the http call to redirect to https.
Thanks
GKE uses GCE L7. The rules that you referenced in the example are not supported and the HTTP to HTTPS redirect should be controlled at the application level.
L7 inserts the x-forwarded-proto header that you can use to understand if the frontend traffic came using HTTP or HTTPS. Take a look here: Redirecting HTTP to HTTPS
There is also an example in that link for Nginx (copied for convenience):
# Replace '_' with your hostname.
server_name _;
if ($http_x_forwarded_proto = "http") {
    return 301 https://$host$request_uri;
}
Currently, the documentation for how to do this properly (annotations, SSL/HTTPS, health checks etc) is severely lacking, and has been for far too long. I suspect it's because they prefer you to use the App Engine, which is magical but stupidly expensive. For GKE, here's two options:
1. an Ingress with a Google-managed SSL cert and additional NGINX server configuration in front of your app/site
2. the NGINX ingress controller with self-managed/third-party SSL certs
The following are the steps to a working setup using the former.
1 The door to your app
nginx.conf: (ellipses represent other non-relevant, non-compulsory settings)
user nginx;
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    ...
    keepalive_timeout 620s;

    ## Logging ##
    ...
    ## MIME Types ##
    ...
    ## Caching ##
    ...
    ## Security Headers ##
    ...
    ## Compression ##
    ...

    server {
        listen 80;

        ## HTTP Redirect ##
        if ($http_x_forwarded_proto = "http") {
            return 301 https://[YOUR DOMAIN]$request_uri;
        }

        location /health/liveness {
            access_log off;
            default_type text/plain;
            return 200 'Server is LIVE!';
        }

        location /health/readiness {
            access_log off;
            default_type text/plain;
            return 200 'Server is READY!';
        }

        root /usr/src/app/www;
        index index.html index.htm;
        server_name [YOUR DOMAIN] www.[YOUR DOMAIN];

        location / {
            try_files $uri $uri/ /index.html;
        }
    }
}
NOTE: One serving port only. The global forwarding rule adds the http_x_forwarded_proto header to all traffic that passes through it. Because ALL traffic to your domain now passes through this rule (remember: one port on the container, service and ingress), this header will (crucially!) always be set. Note the check and redirect above: it only continues serving if the header value is 'https'. The root, index and location values may differ depending on your project (this is an Angular project). keepalive_timeout is set to the value recommended by Google. I prefer using the main nginx.conf file, but most people add a custom.conf file to /etc/nginx/conf.d; if you do this, just make sure the file is imported into the main nginx.conf http block using an include statement. The comments highlight where other settings you may want to add once everything is working will go, like gzip/brotli, security headers, where logs are saved, and so on.
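For reference, the include the note refers to is a single line inside the http block (the stock nginx image config already ships with it):

http {
    ...
    # pull in any per-site files dropped into conf.d
    include /etc/nginx/conf.d/*.conf;
}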
Dockerfile:
...
COPY nginx.conf /etc/nginx/nginx.conf
CMD ["nginx", "-g", "daemon off;"]
NOTE: only the final two lines. Specifying an EXPOSE port is unnecessary. COPY replaces the default nginx.conf with the modified one. CMD starts a light server.
2 Create a deployment manifest and apply/create
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uber-dp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uber
  template:
    metadata:
      labels:
        app: uber
    spec:
      containers:
      - name: uber-ctr
        image: gcr.io/uber/beta:v1 # or some other registry
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 60
          httpGet:
            path: /health/liveness
            port: 80
            scheme: HTTP
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 30
          httpGet:
            path: /health/readiness
            port: 80
            scheme: HTTP
        ports:
        - containerPort: 80
        imagePullPolicy: Always
NOTE: only one specified port is necessary, as we're going to point all (HTTP and HTTPS) traffic at it. For simplicity we're handling the liveness and readiness probes the same way; these checks are dealt with by the NGINX server, but you can and should add checks that probe the health of your app itself (e.g. a dedicated page that returns a 200 if healthy). The readiness probe will also be picked up by GCE, which by default has its own irremovable health check.
3 Create a service manifest and apply/create
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: uber-svc
  labels:
    app: uber
spec:
  ports:
  - name: default-port
    port: 80
  selector:
    app: uber
  sessionAffinity: None
  type: NodePort
NOTE: default-port specifies port 80 on the container.
4 Get a static IP address
On GCP in the hamburger menu: VPC Network -> External IP Addresses. Convert your auto-generated ephemeral IP or create a new one. Take note of the name and address.
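The same can be done from the CLI; a sketch assuming a global address with the hypothetical name uber-ip:

gcloud compute addresses create uber-ip --global
gcloud compute addresses describe uber-ip --global --format='value(address)'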
5 Create an SSL cert and default zone
In the hamburger menu: Network Service -> Load Balancing -> click 'advanced menu' -> Certificates -> Create SSL Certificate. Follow the instructions, create or upload a certificate, and take note of the name. Then, from the menu: Cloud DNS -> Create Zone. Following the instructions, create a default zone for your domain. Add a CNAME record with www as the DNS name and your domain as the canonical name. Add an A record with an empty DNS name value and your static IP as the IPV4. Save.
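If you prefer the CLI for the certificate part, a sketch assuming you upload your own cert/key pair (the certificate name and file names are hypothetical):

gcloud compute ssl-certificates create uber-ssl \
    --certificate=cert.pem --private-key=key.pem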
6 Create an ingress manifest and apply/create
ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mypt-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: [NAME OF YOUR STATIC IP ADDRESS]
    kubernetes.io/ingress.allow-http: "true"
    ingress.gcp.kubernetes.io/pre-shared-cert: [NAME OF YOUR GOOGLE-MANAGED SSL]
spec:
  backend:
    serviceName: mypt-svc
    servicePort: 80
NOTE: the backend property points to the service, which points to the container, which contains your app 'protected' by a server. The annotations connect your app with SSL and force-allow HTTP for the health checks. Combined, the service and ingress configure the GCE L7 load balancer (combined global forwarding rule, backend and frontend 'services', SSL certs, target proxies, etc).
7 Make a cup of tea or something
Everything needs ~10 minutes to configure. Clear cache and test your domain with various browsers (Tor, Opera, Safari, IE etc). Everything will serve over https.
What about the NGINX Ingress Controller?
I've seen discussion of it being better because it's cheaper/uses fewer resources and is more flexible. It isn't cheaper: it requires an additional deployment/workload and service (GCE L4). And you need to do more configuration. Is it more flexible? Yes. But in taking care of most of the work, the first option gives you a more important kind of flexibility: namely, letting you get on with more pressing matters.
For everyone like me that searches this question about once a month, Google has responded to our requests and is testing HTTP->HTTPS SSL redirection on their load balancers. Their latest answer said it should be in Alpha sometime before the end of January 2020.
Their comment:
Thank you for your patience on this issue. The feature is currently in testing and we expect to enter Alpha phase before the end of January. Our PM team will have an announcement with more details as we get closer to the Alpha launch.
Update: HTTP to HTTPS redirect is now Generally Available: https://cloud.google.com/load-balancing/docs/features#routing_and_traffic_management
GKE uses its own Ingress Controller which does not support forcing https.
That's why you will have to manage NGINX Ingress Controller yourself.
See this post on how to do it on GKE.
Hope it helps.
For what it's worth, I ended up using a reverse proxy in NGINX.
You need to create secrets and sync them into your containers.
You need to create a ConfigMap with your nginx site config, as well as a default config that includes this additional config file.
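A sketch of those two steps with kubectl (file names are hypothetical; the secret/ConfigMap names match the volumes referenced in the Deployment below):

# TLS cert/key mounted at /certs/ by the Deployment
kubectl create secret tls nginxsecret --cert=tls.crt --key=tls.key

# site config included by the main nginx.conf
kubectl create configmap nginx-config --from-file=config-file.conf

# main nginx.conf mounted at /etc/nginx/
kubectl create configmap default --from-file=nginx.conf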
Here is my configuration:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    default_type application/octet-stream;

    # Logging Configs
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;

    # Puntdoctor Proxy Config
    include /path/to/config-file.conf;

    # PubSub allows 10MB Files. lets allow 11 to give some space
    client_max_body_size 11M;
}
Then, the config.conf
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name example.com;

    ssl_certificate /certs/tls.crt;
    ssl_certificate_key /certs/tls.key;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-RC4-SHA:AES128-GCM-SHA256:HIGH:!RC4:!MD5:!aNULL:!EDH:!CAMELLIA;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $http_host;
        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass http://deployment-name:8080/;
        proxy_read_timeout 90;
        proxy_redirect http://deployment-name:8080/ https://example.com/;
    }
}
Create a deployment:
Here are the .yaml files
---
apiVersion: v1
kind: Service
metadata:
  name: puntdoctor-lb
spec:
  ports:
  - name: https
    port: 443
    targetPort: 443
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: puntdoctor-nginx-deployment
  type: LoadBalancer
  loadBalancerIP: 35.195.214.7
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: puntdoctor-nginx-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: puntdoctor-nginx-deployment
    spec:
      containers:
      - name: adcelerate-nginx-proxy
        image: nginx:1.13
        volumeMounts:
        - name: certs
          mountPath: /certs/
        - name: site-config
          mountPath: /etc/site-config/
        - name: default-config
          mountPath: /etc/nginx/
        ports:
        - containerPort: 80
          name: http
        - containerPort: 443
          name: https
      volumes:
      - name: certs
        secret:
          secretName: nginxsecret
      - name: site-config
        configMap:
          name: nginx-config
      - name: default-config
        configMap:
          name: default
Hope this helps someone solve this issue, thanks for the other 2 answers, they both gave me valuable insight.
As of GKE version 1.18.10-gke.600, you can use FrontendConfig to create HTTP -> HTTPS redirection in Google Kubernetes Engine
HTTP to HTTPS redirects are configured using the redirectToHttps field
in a FrontendConfig custom resource. Redirects are enabled for the
entire Ingress resource so all services referenced by the Ingress will
have HTTPS redirects enabled.
The following FrontendConfig manifest enables HTTP to HTTPS redirects.
Set the spec.redirectToHttps.enabled field to true to enable HTTPS
redirects. The spec.responseCodeName field is optional. If it's
omitted a 301 Moved Permanently redirect is used.
For example:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: your-frontend-config-name
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
MOVED_PERMANENTLY_DEFAULT is one of the available RESPONSE_CODE field values; it returns a 301 redirect response code (the default if responseCodeName is unspecified).
You can find more options here: HTTP to HTTPS redirects
Then you have to link your FrontendConfig to the Ingress, like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: your-ingress-name
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: your-frontend-config-name
spec:
  tls:
  ...