I'm looking to redirect all traffic from
http://example.com to https://example.com, as nearly all websites do.
I've looked at this link with no success:
Kubernetes HTTPS Ingress in Google Container Engine
I have also tried the following annotations in my ingress.yaml file:
nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($http_x_forwarded_proto != 'https') {
    return 301 https://$host$request_uri;
  }
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
kubernetes.io/ingress.allow-http: "false"
All without any success. To be clear, I can access both https://example.com and http://example.com without any errors; I need the HTTP call to redirect to HTTPS.
Thanks
GKE uses GCE L7. The rules that you referenced in the example are not supported, and the HTTP-to-HTTPS redirect should be controlled at the application level.
L7 inserts the x-forwarded-proto header, which you can use to determine whether the frontend traffic arrived over HTTP or HTTPS. Take a look here: Redirecting HTTP to HTTPS
There is also an example in that link for Nginx (copied for convenience):
# Replace '_' with your hostname.
server_name _;
if ($http_x_forwarded_proto = "http") {
    return 301 https://$host$request_uri;
}
Currently, the documentation for how to do this properly (annotations, SSL/HTTPS, health checks, etc.) is severely lacking, and has been for far too long. I suspect it's because they prefer you to use App Engine, which is magical but stupidly expensive. For GKE, here are two options:
ingress with a google-managed SSL cert and additional NGINX server configuration in front of your app/site
the NGINX ingress controller with self-managed/third-party SSL certs
The following are the steps to a working setup using the former.
1 The door to your app
nginx.conf: (ellipses represent other non-relevant, non-compulsory settings)
user nginx;
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    ...
    keepalive_timeout 620s;

    ## Logging ##
    ...
    ## MIME Types ##
    ...
    ## Caching ##
    ...
    ## Security Headers ##
    ...
    ## Compression ##
    ...

    server {
        listen 80;

        ## HTTP Redirect ##
        if ($http_x_forwarded_proto = "http") {
            return 301 https://[YOUR DOMAIN]$request_uri;
        }

        location /health/liveness {
            access_log off;
            default_type text/plain;
            return 200 'Server is LIVE!';
        }

        location /health/readiness {
            access_log off;
            default_type text/plain;
            return 200 'Server is READY!';
        }

        root /usr/src/app/www;
        index index.html index.htm;
        server_name [YOUR DOMAIN] www.[YOUR DOMAIN];

        location / {
            try_files $uri $uri/ /index.html;
        }
    }
}
NOTE: One serving port only. The global forwarding rule adds the http_x_forwarded_proto header to all traffic that passes through it. Because ALL traffic to your domain now passes through this rule (remember: one port on the container, service, and ingress), this header will (crucially!) always be set. Note the check and redirect above: serving only continues if the header value is 'https'.
Some further notes:
The root, index, and location values may differ for your project (this is an Angular project).
keepalive_timeout is set to the value recommended by Google.
I prefer using the main nginx.conf file, but most people add a custom.conf file to /etc/nginx/conf.d; if you do this, just make sure the file is imported into the main nginx.conf http block using an include statement.
The comments highlight where other settings you may want to add once everything is working will go: gzip/brotli, security headers, where logs are saved, and so on.
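If you take the custom.conf route, the include line in the http block of nginx.conf would look something like this (the conf.d path is the conventional default; adjust to your image):

```nginx
http {
    # Pull in any extra config files dropped into conf.d
    include /etc/nginx/conf.d/*.conf;
}
```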
Dockerfile:
...
COPY nginx.conf /etc/nginx/nginx.conf
CMD ["nginx", "-g", "daemon off;"]
NOTE: only the final two lines matter here. Specifying an EXPOSE port is unnecessary. COPY replaces the default nginx.conf with the modified one. CMD starts NGINX in the foreground.
2 Create a deployment manifest and apply/create
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uber-dp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uber
  template:
    metadata:
      labels:
        app: uber
    spec:
      containers:
      - name: uber-ctr
        image: gcr.io/uber/beta:v1 # or some other registry
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 60
          httpGet:
            path: /health/liveness
            port: 80
            scheme: HTTP
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 30
          httpGet:
            path: /health/readiness
            port: 80
            scheme: HTTP
        ports:
        - containerPort: 80
        imagePullPolicy: Always
NOTE: only one specified port is necessary, as we're going to point all (HTTP and HTTPS) traffic to it. For simplicity we're using the same paths for the liveness and readiness probes; these checks will be dealt with on the NGINX server, but you can and should add checks that probe the health of your app itself (e.g. a dedicated page that returns a 200 if healthy). The readiness probe will also be picked up by GCE, which by default has its own irremovable health check.
3 Create a service manifest and apply/create
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: uber-svc
  labels:
    app: uber
spec:
  ports:
  - name: default-port
    port: 80
  selector:
    app: uber
  sessionAffinity: None
  type: NodePort
NOTE: default-port specifies port 80 on the container.
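As a side note, when targetPort is omitted it defaults to the value of port, so the above is equivalent to spelling it out:

```yaml
ports:
- name: default-port
  port: 80        # port exposed by the Service
  targetPort: 80  # container port; defaults to `port` when omitted
```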
4 Get a static IP address
On GCP in the hamburger menu: VPC Network -> External IP Addresses. Convert your auto-generated ephemeral IP or create a new one. Take note of the name and address.
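If you prefer the CLI over the console, the same can be done with gcloud (my-static-ip is a placeholder name):

```shell
# Reserve a global static IP (the GCE L7 ingress requires a global address)
gcloud compute addresses create my-static-ip --global

# Print the reserved address so you can note it down
gcloud compute addresses describe my-static-ip --global --format='value(address)'
```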
5 Create an SSL cert and default zone
In the hamburger menu: Network Service -> Load Balancing -> click 'advanced menu' -> Certificates -> Create SSL Certificate. Follow the instructions, create or upload a certificate, and take note of the name. Then, from the menu: Cloud DNS -> Create Zone. Following the instructions, create a default zone for your domain. Add a CNAME record with www as the DNS name and your domain as the canonical name. Add an A record with an empty DNS name value and your static IP as the IPV4. Save.
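This step can also be scripted. For a Google-managed certificate, something like the following should work (the certificate name and domains are placeholders):

```shell
# Create a Google-managed SSL certificate for the domain
gcloud compute ssl-certificates create my-managed-cert \
    --domains=example.com,www.example.com --global

# The certificate becomes ACTIVE once DNS points at your static IP
gcloud compute ssl-certificates describe my-managed-cert --global
```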
6 Create an ingress manifest and apply/create
ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: uber-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: [NAME OF YOUR STATIC IP ADDRESS]
    kubernetes.io/ingress.allow-http: "true"
    ingress.gcp.kubernetes.io/pre-shared-cert: [NAME OF YOUR GOOGLE-MANAGED SSL]
spec:
  backend:
    serviceName: uber-svc
    servicePort: 80
NOTE: the backend property points to the service, which points to the container, which contains your app 'protected' by a server. Annotations connect your app with SSL and force-allow HTTP for the health checks. Combined, the service and ingress configure the L7 load balancer (global forwarding rule, backend and frontend 'services', SSL certs, target proxies, etc.).
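For readers on newer clusters: the extensions/v1beta1 Ingress API has since been removed, so the equivalent manifest today would be written against networking.k8s.io/v1. A sketch (field names changed; the annotations stay the same):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: uber-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: [NAME OF YOUR STATIC IP ADDRESS]
    kubernetes.io/ingress.allow-http: "true"
    ingress.gcp.kubernetes.io/pre-shared-cert: [NAME OF YOUR GOOGLE-MANAGED SSL]
spec:
  defaultBackend:
    service:
      name: uber-svc
      port:
        number: 80
```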
7 Make a cup of tea or something
Everything needs ~10 minutes to configure. Clear cache and test your domain with various browsers (Tor, Opera, Safari, IE etc). Everything will serve over https.
What about the NGINX Ingress Controller?
I've seen discussion of it being better because it's cheaper/uses fewer resources and is more flexible. It isn't cheaper: it requires an additional deployment/workload and service (GCE L4), and you need to do more configuration. Is it more flexible? Yes. But in taking care of most of the work, the first option gives you a more important kind of flexibility, namely allowing you to get on with more pressing matters.
For everyone like me who searches this question about once a month: Google has responded to our requests and is testing HTTP->HTTPS SSL redirection on their load balancers. Their latest answer said it should be in Alpha sometime before the end of January 2020.
Their comment:
Thank you for your patience on this issue. The feature is currently in testing and we expect to enter Alpha phase before the end of January. Our PM team will have an announcement with more details as we get closer to the Alpha launch.
Update: HTTP to HTTPS redirect is now Generally Available: https://cloud.google.com/load-balancing/docs/features#routing_and_traffic_management
GKE uses its own Ingress Controller, which does not support forcing HTTPS.
That's why you have to manage the NGINX Ingress Controller yourself.
See this post on how to do it on GKE.
Hope it helps.
For what it's worth, I ended up using NGINX as a reverse proxy.
You need to create secrets and sync them into your containers.
You need to create a ConfigMap with your NGINX config, as well as a default config that references this additional config file.
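As a sketch, the secret and the two ConfigMaps can be created with kubectl; the file names here are assumptions that should match whatever your cert and config files are actually called:

```shell
# TLS secret mounted into the proxy at /certs/
kubectl create secret tls nginxsecret --cert=tls.crt --key=tls.key

# The site config referenced by the include statement
kubectl create configmap nginx-config --from-file=config-file.conf

# The main nginx.conf, replacing the image default
kubectl create configmap default --from-file=nginx.conf
```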
Here is my configuration:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    default_type application/octet-stream;

    # Logging Configs
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;

    # Puntdoctor Proxy Config
    include /path/to/config-file.conf;

    # PubSub allows 10MB files; let's allow 11 to give some space
    client_max_body_size 11M;
}
Then, the config.conf
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name example.com;

    ssl_certificate /certs/tls.crt;
    ssl_certificate_key /certs/tls.key;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-RC4-SHA:AES128-GCM-SHA256:HIGH:!RC4:!MD5:!aNULL:!EDH:!CAMELLIA;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $http_host;

        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass http://deployment-name:8080/;
        proxy_read_timeout 90;

        proxy_redirect http://deployment-name:8080/ https://example.com/;
    }
}
Create a deployment:
Here are the .yaml files
---
apiVersion: v1
kind: Service
metadata:
  name: puntdoctor-lb
spec:
  ports:
  - name: https
    port: 443
    targetPort: 443
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: puntdoctor-nginx-deployment
  type: LoadBalancer
  loadBalancerIP: 35.195.214.7
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: puntdoctor-nginx-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: puntdoctor-nginx-deployment
    spec:
      containers:
      - name: adcelerate-nginx-proxy
        image: nginx:1.13
        volumeMounts:
        - name: certs
          mountPath: /certs/
        - name: site-config
          mountPath: /etc/site-config/
        - name: default-config
          mountPath: /etc/nginx/
        ports:
        - containerPort: 80
          name: http
        - containerPort: 443
          name: https
      volumes:
      - name: certs
        secret:
          secretName: nginxsecret
      - name: site-config
        configMap:
          name: nginx-config
      - name: default-config
        configMap:
          name: default
Hope this helps someone solve this issue. Thanks for the other two answers; they both gave me valuable insight.
As of GKE version 1.18.10-gke.600, you can use FrontendConfig to create HTTP -> HTTPS redirection in Google Kubernetes Engine
HTTP to HTTPS redirects are configured using the redirectToHttps field
in a FrontendConfig custom resource. Redirects are enabled for the
entire Ingress resource so all services referenced by the Ingress will
have HTTPS redirects enabled.
The following FrontendConfig manifest enables HTTP to HTTPS redirects.
Set the spec.redirectToHttps.enabled field to true to enable HTTPS
redirects. The spec.responseCodeName field is optional. If it's
omitted a 301 Moved Permanently redirect is used.
For example:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: your-frontend-config-name
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
MOVED_PERMANENTLY_DEFAULT is one of the available RESPONSE_CODE field values; it returns a 301 redirect response code (the default if responseCodeName is unspecified).
You can find more options here: HTTP to HTTPS redirects
Then you have to link your FrontendConfig to the Ingress, like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: your-ingress-name
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: your-frontend-config-name
spec:
  tls:
  ...
Related
I have a local kubernetes cluster (k3s) with an nginx controller (installed via helm). I have two services (Spring-Boot myapp and an auth-server (OAuth2)).
I'm trying to make my application work with http only. Therefore, I have defined an ingress resource in the following way:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/configuration-snippet: |-
      if ($uri = /){
        return 302 http://$http_host/myapp/;
      }
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.cloud
    http:
      paths:
      - path: /myapp
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: auth-server
            port:
              number: 8080
I have also added the following parameters to the nginx-controller config-map:
hsts: "false"
ssl-redirect: "false"
hsts-max-age: "0"
I have also cleared HSTS in my browsers (Safari & Firefox). SSL (server.ssl.enabled=false) is disabled for both of my backend services.
When loading http://myapp.cloud, I get redirected correctly to the login page of the auth-server (http://myapp.cloud/login). However, the page doesn't load correctly, because the static assets (JS, CSS) are not loaded. Instead, the requests to load them are redirected with 302 to the same resources over https. Because the default fake certificate of nginx is invalid, these don't get loaded.
If I access these assets directly in my browser (e.g. http://myapp.cloud/assets/style.css), I also get a 302 redirect to https://myapp.cloud/assets/style.css, which doesn't load because the nginx certificate is invalid.
If I port-forward to the k8s service directly via http, they are loaded correctly.
Is there a possibility to make this work with http only or do I absolutely need to use a certificate manager etc. and make this work via https? What is missing in my configuration/settings?
I have decided to go with enabling HTTPS with a self-signed certificate, I think there's currently no way around it.
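For anyone going the same way, a self-signed certificate and a matching TLS secret can be generated roughly like this (myapp.cloud taken from the question above; the secret name is a placeholder):

```shell
# Generate a self-signed cert/key pair valid for one year
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout tls.key -out tls.crt -subj "/CN=myapp.cloud"

# Store it as a TLS secret the ingress can reference
kubectl create secret tls myapp-tls --cert=tls.crt --key=tls.key
```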
Describe the bug
I am running Keycloak 19.0.1 on Kubernetes 1.21.4.
I have one internal nginx ingress controller installed on Kubernetes, not exposed to the outside, and another nginx server in the DMZ, exposed externally and running as a reverse proxy.
I run Keycloak with the option PROXY_ADDRESS_FORWARDING=true from inside Kubernetes.
I have configured the below Ingress file for it to be exposed through the nginx ingress controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
  labels:
    app: keycloak
  name: keycloak
  namespace: fms-preprod
spec:
  rules:
  - host: keycloak-test-ssl.stcs.com.sa
    http:
      paths:
      - backend:
          service:
            name: keycloak
            port:
              number: 8443
        path: /
        pathType: Prefix
I have defined the below configuration inside the external nginx reverse proxy:
server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name keycloak-test-ssl.stcs.com.sa;

    location / {
        proxy_pass http://ingress;
        proxy_set_header X-Forwarded-For $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        client_max_body_size 100M;
    }

    ssl_certificate /etc/ssl/blabla/bundle1.crt;
    ssl_certificate_key /etc/ssl/blabla/bundle1.key;
}
Everything works fine over HTTP without the SSL configuration, and I am able to browse the admin console. But once I configure SSL on the external nginx reverse proxy as shown above, the first page of Keycloak loads fine while the admin console is blocked from loading, giving the error described below (blocked mixed content).
If anyone can tell me what is wrong or missing in my configuration, that would be helpful.
Thanks in advance
Version
19.0.1
Expected behavior
keycloak has to work properly behind nginx Reverse proxy with SSL
Actual behavior
keycloak failed loading admin console giving blocked mixed content
How to Reproduce?
details given in above description
Anything else?
No response
We have HTTP and TCP services behind the Nginx Ingress Controller. The HTTP services are configured through an Ingress object where, when we get a request, we generate a header (Client-Id) through a snippet and pass it to the service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-pre
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($host ~ ^(?<client>[^\..]+)\.(?<app>pre|presaas).(host1|host2).com$) {
        more_set_input_headers 'Client-Id: $client';
      }
spec:
  tls:
  - hosts:
    - "*.pre.host1.com"
    secretName: pre-host1
  - hosts:
    - "*.presaas.host2.com"
    secretName: presaas-host2
  rules:
  - host: "*.pre.host1.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-front
            port:
              number: 80
  - host: "*.presaas.host2.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-front
            port:
              number: 80
The TCP service is configured to connect directly, through a ConfigMap. These services connect through a TCP socket.
apiVersion: v1
data:
  "12345": pre/service-back:12345
kind: ConfigMap
metadata:
  name: tcp-service
  namespace: ingress-nginx
All this config works fine: the TCP clients connect fine through TCP sockets, and the users connect fine through HTTP. The problem is that when the TCP clients establish a connection, they get the source IP address (their own IP, or in Nginx, $remote_addr) and report it back to an admin endpoint, where it is shown in a dashboard. So there is a dashboard with all the connected TCP clients and their IP addresses. What happens now is that all the IP addresses, instead of being the clients' own, are that of the Ingress Controller (the pod).
I set use-proxy-protocol: "true", and it seems to resolve the issue for the TCP connections, as in the logs I can see different external IP addresses connecting. But now the HTTP services do not work, including the dashboard itself. These are the logs:
while reading PROXY protocol, client: 1.2.3.4, server: 0.0.0.0:80
2022/04/04 09:00:13 [error] 35#35: *5273 broken header: "��d�hԓ�:�����ӝp��E�L_"�����4�<����0�,�(�$��
����kjih9876�w�s��������" while reading PROXY protocol, client: 1.2.3.4, server: 0.0.0.0:443
I know the broken header logs are from HTTP services, as if I do telnet to the HTTP port I get the broken header, and if I telnet to the TCP port I get clean logs with what I expect.
I hope the issue is clear. What I need is a way to configure the Nginx Ingress Controller to serve both HTTP and TCP services. I don't know if I can configure the use-proxy-protocol: "true" parameter for only one service; it seems to be a global parameter.
For now the solution we are thinking of is to set a new Network Load Balancer (this is running in a AWS EKS cluster) just for the TCP service, and leave the HTTP behind the Ingress Controller.
To solve this issue, go to the NLB target groups and enable proxy protocol version 2 in the Attributes tab: Network LB >> Listeners >> TCP80/TCP443 >> select Target Group >> Attributes tab >> Enable Proxy Protocol v2.
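If the NLB is provisioned by the Ingress Controller's Service, the same attribute can also be requested declaratively. With the in-tree AWS cloud provider, an annotation along these lines enables proxy protocol on the backends (a sketch; the Service name is a placeholder, and the exact annotation differs if you use the AWS Load Balancer Controller):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  annotations:
    # Enable proxy protocol on all ports of the provisioned load balancer
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
```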
I'm currently trying to wrap my head around what the typical application flow looks like for a Kubernetes application in combination with Istio.
So, for my app I have an asp.net application hosted within a Kubernetes cluster, and I added Istio on top. Here is my gateway & VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: appgateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: appvservice
spec:
  hosts:
  - "*"
  gateways:
  - appgateway
  tls:
  - match:
    - port: 443
      sniHosts:
      - "*"
    route:
    - destination:
        host: frontendservice.default.svc.cluster.local
        port:
          number: 443
This is what I came up with after reading through the Istio documentation.
Note that my frontendservice is a very basic ClusterIP service routing to an Asp.Net application which also offers standard 80 / 443 ports.
I have a few questions now:
Is this the proper approach to securing my application? In essence, I want to redirect incoming traffic on port 80 straight to HTTPS-enabled 443 right at the edge. However, when I try this, there's no redirect happening on port 80 at all.
Also, the tls route on my VirtualService does not work; no traffic ends up on my pod.
I'm also wondering, is it necessary to even manually add HTTPs to my internal applications, or is this something where Istios internal CA functionality comes in?
I have imagined it to work like this:
Request comes in. If it's on port 80, send a redirect to the client in order to send a https request. If it's on port 443, allow the request.
The VirtualService provides the instructions for what should happen with requests on port 443, and forwards them to the service.
The service now forwards the request to my app's 443 port.
Thanks in advance - I'm just learning Istio, and I'm a bit baffled why my seemingly proper setup does not work here.
Your Gateway terminates TLS connections, but your VirtualService is configured to accept unterminated TLS connections with a TLSRoute.
Compare the example without TLS termination with the example that terminates TLS. Most probably, the "default" setup would be to terminate the TLS connection and configure the VirtualService with an HTTPRoute.
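A sketch of that default setup, reusing the Gateway from the question but replacing the tls match with an http route (port 80 on the destination is an assumption; point it at whatever port your Service actually exposes):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: appvservice
spec:
  hosts:
  - "*"
  gateways:
  - appgateway
  http:
  - route:
    - destination:
        host: frontendservice.default.svc.cluster.local
        port:
          number: 80
```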
We are also using a similar setup.
SSL is terminated on ingress gateway, but we use mTLS mode via Gateway CR.
Services are listening on non-ssl ports but sidecars use mTLS between them so that any container without sidecar cannot talk to service.
VirtualService is routing to non-ssl port of service.
Sidecar CR intercepts traffic going to and from non-ssl port of service.
PeerAuthentication sets mTLS between sidecars.
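The mTLS piece of a setup like that is typically a namespace- or mesh-wide PeerAuthentication, roughly:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system  # applying to the root namespace makes it mesh-wide
spec:
  mtls:
    mode: STRICT  # plaintext from workloads without a sidecar is rejected
```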
I have a deployment with Istio injected that needs access to the Google Maps Distance Matrix API. If I run istioctl kube-inject with --includeIPRanges 10.0.0.0/8, it seems to work. If I remove this flag and instead apply an egress rule, it won't work:
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: google-egress-rule
  namespace: microservices
spec:
  destination:
    service: "maps.googleapis.com"
  ports:
  - port: 443
    protocol: https
  - port: 80
    protocol: http
Both the deployment and the egress rule are in the same namespace (microservices).
Any idea where my fault is?
From what I see by running curl maps.googleapis.com, it redirects to https://developers.google.com/maps/.
Two issues here:
You have to specify an additional EgressRule for developers.google.com.
Currently you have to access external HTTPS sites by issuing HTTP requests to port 443, like curl http://developers.google.com/maps:443. The Istio proxy will open an HTTPS connection to developers.google.com for you. Unfortunately, there is currently no other way to do it, except for using --includeIPRanges.
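The additional rule would mirror the one from the question, just pointed at the redirect target:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: google-developers-egress-rule
  namespace: microservices
spec:
  destination:
    service: "developers.google.com"
  ports:
  - port: 443
    protocol: https
  - port: 80
    protocol: http
```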