How to set up HTTPS on bare-metal Kubernetes using the Traefik ingress controller

I'm running a Kubernetes cluster of three nodes, and it works brilliantly, but it's time to make my web application secure, so I deployed an ingress controller (Traefik). However, I was unable to find instructions for setting up HTTPS with it. I know most of what I will have to do, like creating a Secret with the certificates, but I was wondering how to configure the ingress controller and all files related to it so that I can use a secure connection.
I have already configured the ingress controller and created some frontends and backends. I have also configured the nginx server (which is actually the web application I'm running) to listen on port 443.
My web application deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  replicas: 3 # tells deployment to run 3 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: ilchub/my-nginx
        ports:
        - containerPort: 443
      tolerations:
      - key: "primary"
        operator: Equal
        value: "true"
        effect: "NoSchedule"
Traefik ingress controller deployment code
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443 # must be a number; "secure" is the port's name, not a valid containerPort
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
Ingress for traefik dashboard
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
  - host: cluster.aws.ctrlok.dev
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-web-ui
          servicePort: web
External expose related config
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    nodePort: 30036
    name: web
  - protocol: TCP
    port: 443
    nodePort: 30035
    name: secure
  - protocol: TCP
    port: 8080
    nodePort: 30034
    name: admin
  type: NodePort
What I want is to secure the application that is already running. The final result has to be a web page served over HTTPS.

Actually, you have three ways to configure Traefik to use HTTPS to communicate with backend pods:
If the service port defined in the ingress spec is 443 (note that you can still use targetPort to use a different port on your pod).
If the service port defined in the ingress spec has a name that starts with https (such as https-api, https-web or just https).
If the ingress spec includes the annotation ingress.kubernetes.io/protocol: https.
If any of these configuration options exists, the backend communication protocol is assumed to be TLS, and Traefik will connect via TLS automatically.
Additional authentication annotations can also be added to the Ingress object, like:
ingress.kubernetes.io/auth-tls-secret: secret
And of course, add a TLS certificate to the Ingress, for example as sketched below.
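Putting it together, here is a minimal sketch that terminates TLS at Traefik and talks HTTPS to the backend. It assumes the certificate and key are in files tls.crt and tls.key, and that a Service named nginx exposes the application pods on port 443 (that Service is not shown in the question, so its name is an assumption):
# First create the TLS secret (file names are assumptions):
#   kubectl create secret tls my-tls-secret --cert=tls.crt --key=tls.key
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-secure
  annotations:
    kubernetes.io/ingress.class: traefik
    # tell Traefik the backend speaks TLS (option 3 above)
    ingress.kubernetes.io/protocol: https
spec:
  tls:
  - hosts:
    - cluster.aws.ctrlok.dev
    secretName: my-tls-secret
  rules:
  - host: cluster.aws.ctrlok.dev
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 443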

Related

GKE Ingress with Multiple Backend Services returns 404

I'm trying to create a GKE Ingress that points to two different backend services based on path. I've seen a few posts explaining this is only possible with an nginx Ingress because the GKE ingress doesn't support rewrite-target. However, this Google documentation, GKE Ingress - Multiple backend services, seems to imply otherwise. I've followed the steps in the docs but haven't had any success. Only the service that is available on the path prefix of / is returned; any other path prefix, like /v2, returns a 404 Not Found.
Details of my setup are below. Is there an obvious error here -- is the Google documentation incorrect, and is this only possible using the nginx ingress?
-- Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: app-static-ip
    networking.gke.io/managed-certificates: app-managed-cert
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: api-2-service
            port:
              number: 8080
-- Service 1
apiVersion: v1
kind: Service
metadata:
  name: api-service
  labels:
    app: api
spec:
  type: NodePort
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 5000
-- Service 2
apiVersion: v1
kind: Service
metadata:
  name: api-2-service
  labels:
    app: api-2
spec:
  type: NodePort
  selector:
    app: api-2
  ports:
  - port: 8080
    targetPort: 5000
GCP Ingress supports multiple paths. This is also well described in Setting up HTTP(S) Load Balancing with Ingress. For my test I used hello-world v1 and v2.
There are three possible issues.
The first issue is with the container ports opened. You can check them using netstat:
$ kubectl exec -ti first-55bb869fb8-76nvq -c container -- /bin/sh
/ # netstat -plnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 :::8080 :::* LISTEN 1/hello-app
The second possible issue is the firewall configuration. Make sure you have the proper settings. (In general, in a new cluster I didn't need to add anything, but if you have more infrastructure and specific firewall configurations, traffic might get blocked.) A quick check is shown below.
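For example, you can list the firewall rules in your project and confirm that traffic to the nodes is allowed (assuming the gcloud CLI is configured for the project; the command below only lists rules and changes nothing):
$ gcloud compute firewall-rules list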
The third possible issue is a misconfiguration between port, containerPort, and targetPort.
Below is my example.
1st deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: first
  labels:
    app: api
spec:
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: container
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
  labels:
    app: api
spec:
  type: NodePort
  selector:
    app: api
  ports:
  - port: 5000
    targetPort: 8080
2nd deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: second
  labels:
    app: api-2
spec:
  selector:
    matchLabels:
      app: api-2
  template:
    metadata:
      labels:
        app: api-2
    spec:
      containers:
      - name: container
        image: gcr.io/google-samples/hello-app:2.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-2-service
  labels:
    app: api-2
spec:
  type: NodePort
  selector:
    app: api-2
  ports:
  - port: 6000
    targetPort: 8080
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 5000
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: api-2-service
            port:
              number: 6000
Outputs:
$ curl 35.190.XX.249
Hello, world!
Version: 1.0.0
Hostname: first-55bb869fb8-76nvq
$ curl 35.190.XX.249/v2
Hello, world!
Version: 2.0.0
Hostname: second-d7d87c6d8-zv9jr
Please keep in mind that you can also use the nginx ingress on GKE by adding a specific annotation:
kubernetes.io/ingress.class: "nginx"
The main reasons people use the nginx ingress on GKE are its rewrite annotations and the possibility to use ClusterIP or NodePort as the serviceType, whereas the GCP ingress allows only the NodePort serviceType. A minimal example is sketched below.
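A minimal sketch of an nginx-class Ingress on GKE, reusing the service names from the example above (the rewrite-target annotation is the feature the GCE ingress lacks):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: api-2-service
            port:
              number: 6000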
Additional information can be found in GKE Ingress for HTTP(S) Load Balancing.

Google Identity Aware Proxy with Internal load balancer

Is it possible to add IAP on top of the internal load balancer?
In my setup I have:
- published and configured the OAuth consent screen as per https://cloud.google.com/iap/docs/enabling-kubernetes-howto#enabling_iap
- certificates for my internal domain
- a Kubernetes service set up with the backendConfig (as per https://cloud.google.com/iap/docs/enabling-kubernetes-howto#add-iap-to-backendconfig)
- an application running in Kubernetes
- a Kubernetes Ingress that creates an internal load balancer with the proper SSL certificates
On my local machine I have /etc/hosts pointing to the load balancer.
I am able to authenticate when accessing the app, but then I get a "You don't have access" page instead of my application.
In my GCP console I can't see the internal load balancer listed on the IAP page. I can only see external load balancers.
As per this documentation https://cloud.google.com/iap/docs/concepts-overview#your_responsibilities it looks like IAP is supported with HTTPS ILB.
Is an ILB with IAP supported on GCP?
My k8s config
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: config-default
  namespace: default
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: my-secret
---
apiVersion: v1
kind: Service
metadata:
  name: ilb-service
  annotations:
    cloud.google.com/backend-config: '{"default": "config-default"}'
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: hello
spec:
  type: NodePort
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: host1
  - port: 443
    targetPort: 8080
    protocol: TCP
    name: host2
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ilb-demo-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
    kubernetes.io/ingress.allow-http: "false"
    ingress.gcp.kubernetes.io/pre-shared-cert: "titan-testing-ilb"
    kubernetes.io/ingress.regional-static-ip-name: "my-internal-address"
spec:
  defaultBackend:
    service:
      name: ilb-service
      port:
        number: 443
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: "us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0"
        ports:
        - containerPort: 8080
          protocol: TCP
User type:
I experienced the same issue at some point, and it turned out that I had forgotten to set the user type to "external".
This applies if you have managed to get past the OAuth login screen.

Interplay between Ingress and Service

Ingress can be used to route traffic from an endpoint (or name) to a service, which then forwards it to a pod with matching labels. However, an ingress does not expose arbitrary ports or protocols, which is why services need to be of type LoadBalancer or NodePort.
So far, so good. I followed a guide to install Traefik as an ingress controller, and everything is up and running, but I don't understand why it even works. This question is not specific to Traefik; I have seen this pattern many times.
The first manifest consists of the ingress deployment and a service called traefik-ingress-service of type LoadBalancer (works with NodePort as well).
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  labels:
    k8s-app: traefik-ingress-lb-dep
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik:v1.7
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin
  # type: NodePort (works as well, but don't forget to add the port to the URI)
  type: LoadBalancer
In order to expose Traefik's UI, there is another manifest including a second service traefik-web-ui (of the default type, ClusterIP) and an Ingress which routes traffic from / to traefik-web-ui.
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
spec:
  #type: NodePort
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - name: web
    port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: dashboard.localhost # optional, otherwise the cluster's IP
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-web-ui
          servicePort: web
I don't understand why this works, because the service traefik-web-ui should not be reachable from outside the cluster. The other service, traefik-ingress-service, is, but the two are not connected. So how does this work?
Moreover, I'm curious why I would not just create one service traefik-web-ui of type NodePort or LoadBalancer.
That's how an ingress controller is supposed to work.
You have one load balancer managed by the ingress controller. Each time you create an Ingress resource, the Traefik configuration (running in your Traefik pod) is updated accordingly to route the traffic to your services internally (the traefik-web-ui service in your case).
The image in this very good article is quite illustrative.
The ingress controller (the service + deployment in your first manifest) is the big Ingress box, routing the traffic to the internal services.
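To make that concrete, any further Ingress is picked up by the same controller without exposing anything new externally. A minimal sketch (my-app-svc, app.localhost, and the path are illustrative assumptions):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: app.localhost
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app-svc # a plain ClusterIP service is enough here
          servicePort: 80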

Kubernetes ingress-nginx gives 502 error (Bad Gateway)

I have an EKS cluster for which I want:
- one load balancer per cluster,
- ingress rules to direct traffic to the right namespace and the right service.
I have been following this guide : https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
My deployments:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: IMAGENAME
        ports:
        - containerPort: 8000
          name: hello-world
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bleble
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: bleble
  template:
    metadata:
      labels:
        app: bleble
    spec:
      containers:
      - name: bleble
        image: IMAGENAME
        ports:
        - containerPort: 8000
          name: bleble
The services for those deployments:
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8000
  selector:
    app: hello-world
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: bleble-svc
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8000
  selector:
    app: bleble
  type: NodePort
My Load balancer:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
My ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: internal-lb.aws.com
    http:
      paths:
      - path: /bleble
        backend:
          serviceName: bleble-svc
          servicePort: 80
      - path: /hello-world
        backend:
          serviceName: hello-world-svc
          servicePort: 80
I've set up the Nginx Ingress Controller with this: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.24.1/deploy/mandatory.yaml
I am unsure why I get a 503 Service Temporarily Unavailable for one service and a 502 for the other... I would guess it's a problem with ports or namespaces? In the guide, they don't define a namespace for the deployment...
Every resource is created correctly, and I think the ingress is actually working, but it is getting confused about where to route.
Thanks for your help!
In general, use externalTrafficPolicy: Cluster instead of Local. You can gain some performance (latency) improvement by using Local, but you need to configure pod allocation carefully, and misconfigurations will lead to 5xx errors. In addition, Cluster is the default option for externalTrafficPolicy, as sketched below.
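For instance, a trimmed sketch of the ingress-nginx Service from the question with the traffic policy switched (annotations and labels omitted for brevity):
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Cluster # the default; avoids the 5xx pitfalls of Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http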
In your ingress, make sure the serviceName matches the actual Service name (bleble-svc, not bleble), and set the servicePort to 8080, since that is what you exposed in your service configuration; a corrected manifest is sketched after the next point.
For an internal service like bleble-svc, a ClusterIP is good enough in your case, as it does not need external access.
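Putting both fixes together, the ingress would look roughly like this (host and paths kept from the question):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: internal-lb.aws.com
    http:
      paths:
      - path: /bleble
        backend:
          serviceName: bleble-svc
          servicePort: 8080 # the Service port, not the pod's targetPort
      - path: /hello-world
        backend:
          serviceName: hello-world-svc
          servicePort: 8080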
Hope this helps.
Found it!
The containerPort in the Deployments was set to 8000, and so was the targetPort of the Services, but the person who wrote the Dockerfile exposed port 80 instead. That was the reason for the 502 Bad Gateway; the fix is sketched below.
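In other words, the Service's targetPort must match the port the container actually listens on. With the Dockerfile exposing port 80, the Service would need (a sketch for hello-world-svc; bleble-svc needs the same change):
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80 # the port the container actually listens on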
Thanks a lot as well to @Fei, who has been a fantastic helper!

Configure Ingress Kubernetes - accessible only on single node

I set up ingress on my Kubernetes cluster, running on VMware virtual machines, by following everything in the specifications here. All the ports are open and accessible.
https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example
My master is x.x.x.10 and nodes are x.x.x.12 and x.x.x.13.
After the creation of the ingress/controllers, I need to get the IP of the node where the nginx controller runs:
nginx-ingress-rc-kgfmd 1/1 Running 0 21h 172.16.5.5 x.x.x.12
So it usually runs on either x.x.x.12 or x.x.x.13, and then when I do this it hits my web service:
curl --resolve master.federated.fds:80:x.x.x.12 https://master.federated.fds/coffee
where master.federated.fds is the DNS resolvable name of Master.
I need to make it work without the help of an IP address, only with the DNS-resolvable name, or at least with any of the node IPs.
E.g. http://node2.federated.fds/coffee; when I curl this I get a Connection refused error.
Updating with specifications
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
  labels:
    app: coffee
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
    # nodePort: 30080
  type: NodePort
  selector:
    app: coffee
ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  rules:
  - host: jciamaster.federated.fds
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
nginx ingress controller
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-rc
  labels:
    app: nginx-ingress
spec:
  replicas: 1
  selector:
    app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - image: nginxdemos/nginx-ingress:0.8.1
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - containerPort: 80
          hostPort: 80
I see that port 80 is listening only on the node where the nginx pod runs and not on any other node. Could someone please let me know how to access the application through all node IPs, or through a URL like jciamaster.federated.fds?
Thanks,
Update:
I tried exposing the nginx controller with a Service, as suggested by Marc:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-rc
  labels:
    app: nginx-ingress
spec:
  replicas: 1
  selector:
    app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - image: nginxdemos/nginx-ingress:0.8.1
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - containerPort: 80
        # Uncomment the lines below to enable extensive logging and/or customization of
        # NGINX configuration with configmaps
        #args:
        #- -v=3
        #- -nginx-configmaps=default/nginx-config
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx-ingress-label
  name: nginx-ing-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
    nodePort: 30000
  type: NodePort
  selector:
    name: nginx-ingress
When I hit http://x.x.x.:30000/coffee it just hangs and does nothing. Is there anything I am doing wrong?
You can expose the nginx controller Pod with a NodePort Service; then you can access it on all nodes. A minimal sketch follows.
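A minimal sketch (note that the Service selector must match the Pod's labels, which are app: nginx-ingress in the manifests above, not name: nginx-ingress):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ing-svc
spec:
  type: NodePort
  selector:
    app: nginx-ingress # must match the pod template's labels
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30000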