Exposing kubernetes-dashboard through Istio [1.0.0] ingress (istio-ingressgateway) - Kubernetes

I have configured Istio ingress with a Let's Encrypt certificate.
I am able to access different services running on different ports over HTTPS by using Gateways and VirtualServices.
But kubernetes-dashboard runs on port 443 in the kube-system namespace with its own certificate. How can I expose it through Istio Gateways and VirtualServices?
I have defined a subdomain for the dashboard and created a Gateway and VirtualService directing 443 traffic to the kubernetes-dashboard service, but it is not working.
For the HTTPS VirtualService config I took reference from the Istio docs.

It sounds like you want to configure an ingress gateway to perform SNI passthrough instead of TLS termination. You can do this by setting the tls mode in your Gateway configuration to PASSTHROUGH, something like this:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: dashboard
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https-dashboard
      protocol: HTTPS
    tls:
      mode: PASSTHROUGH
    hosts:
    - dashboard.example.com
A complete passthrough example can be found here.
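If it helps, here is a minimal sketch of the matching VirtualService, assuming the dashboard Service is named kubernetes-dashboard in the kube-system namespace and serves TLS on port 443 as described in the question (the host and gateway names are placeholders that must match your Gateway):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: dashboard
spec:
  hosts:
  - dashboard.example.com
  gateways:
  - dashboard
  tls:
  - match:
    - port: 443
      sniHosts:
      - dashboard.example.com
    route:
    - destination:
        host: kubernetes-dashboard.kube-system.svc.cluster.local
        port:
          number: 443
With PASSTHROUGH the gateway routes on SNI and leaves the dashboard's own certificate intact, so no TLS secret is configured on the gateway for this host.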

Related

GKE Ingress configuration for HTTPS-enabled Applications leads to failed_to_connect_to_backend

I have serious problems with the configuration of Ingress on a Google Kubernetes Engine cluster for an application which expects traffic over TLS. I have configured a FrontendConfig, a BackendConfig and defined the proper annotations in the Service and Ingress YAML structures.
The Google Cloud Console reports that the backend is healthy, but if I connect to the given address, it returns 502 and a failed_to_connect_to_backend error appears in the Ingress logs.
These are my configurations:
FrontendConfig.yaml:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontendconfig
  namespace: my-namespace
spec:
  redirectToHttps:
    enabled: false
  sslPolicy: my-ssl-policy
BackendConfig.yaml:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
  namespace: my-namespace
spec:
  sessionAffinity:
    affinityType: "CLIENT_IP"
  logging:
    enable: true
    sampleRate: 1.0
  healthCheck:
    checkIntervalSec: 60
    timeoutSec: 5
    healthyThreshold: 3
    unhealthyThreshold: 5
    type: HTTP
    requestPath: /health
    # The containerPort of the application in Deployment.yaml (also for liveness and readiness probes)
    port: 8001
Ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.class: "gce"
    # Frontend Configuration Name
    networking.gke.io/v1beta1.FrontendConfig: "my-frontendconfig"
    # Static IP Address Rule Name (gcloud compute addresses create epa2-ingress --global)
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip"
spec:
  tls:
  - secretName: my-secret
  defaultBackend:
    service:
      name: my-service
      port:
        number: 443
Service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
  annotations:
    # Specify the type of traffic accepted
    cloud.google.com/app-protocols: '{"service-port":"HTTPS"}'
    # Specify the BackendConfig to be used for the exposed ports
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
    # Enables the Cloud Native Load Balancer
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: my-application
  ports:
  - protocol: TCP
    name: service-port
    port: 443
    targetPort: app-port # this port expects TLS traffic, no plain HTTP connections
The Deployment.yaml is omitted for brevity, but it defines a liveness and readiness Probe on another port, the one defined in the BackendConfig.yaml.
The interesting thing is: if I also expose this healthcheck port through the Service.yaml (mapped to port 80), point the default backend to port 80, and simply define a rule with path /* leading to port 443, everything seems to work just fine. But I don't want to expose the healthcheck port outside my cluster, since I also have some diagnostics information there.
Question: how can I be sure that, if I connect to the Ingress endpoint with https://MY_INGRESS_IP/, the traffic is routed exactly as-is to the HTTPS port of the service/application, without getting the 502 error? Where did I fail to configure the Ingress?
There are a few elements to your question; I'll try to answer them here.
I don't want to expose the healthcheck port outside my cluster
The health check endpoint is technically not exposed outside the cluster; it's exposed inside Google's backbone so that the Google load balancers (configured via Ingress) can reach it. You can verify that by running curl against https://INGRESS_IP/healthz, which will not work.
The traffic is routed exactly as it is to the HTTPS port of the service/application
The reason why 443 in your Service definition doesn't work but 80 does is that when you expose the Service on port 443, the load balancer will fail to connect to a backend without a proper certificate; your backend would also have to be configured to present a certificate to the load balancer to encrypt traffic. The secretName configured on the Ingress is the certificate used by clients to connect to the load balancer. The Google HTTP load balancer terminates SSL and initiates a new connection to the backend using whatever port you specify in the Ingress. If that port is 443 but the backend is not configured with SSL certificates, that connection will fail.
Overall you don't need to encrypt traffic between load balancers and backends; it's doable but not needed, as Google encrypts that traffic at the network level anyway.
Actually I solved it by attaching a managed certificate to the Ingress. It "magically" worked without any other change, using a Service of type ClusterIP.
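For anyone reproducing that, a rough sketch of the managed-certificate setup (resource name and domain are placeholders; on older GKE versions the apiVersion may be networking.gke.io/v1beta2):
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-managed-cert
  namespace: my-namespace
spec:
  domains:
  - my-domain.example.com
The certificate is then referenced from the Ingress with the annotation networking.gke.io/managed-certificates: "my-managed-cert", alongside the annotations already shown above.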

Kubernetes Ingress with external Reverse Proxy

I'm trying to make the kubernetes dashboard externally accessible.
For this I installed nginx-ingress as the ingress controller in my Kubernetes cluster, which is accessible on port 32012 on the Kubernetes node IP.
Now I want to make it accessible via a domain like k8s.xxx.xx. For that I created a TLS certificate from Let's Encrypt to secure the whole thing via HTTPS.
Now we have an Apache2 reverse proxy which handles the requests going to k8s.xxx.xx and proxies them to port 32012, but here I'm getting an error from nginx-ingress:
The plain HTTP request was sent to HTTPS port
The connection chain is:
User -> https://k8s.xxx.xx -> Apache2 reverse proxy (handles the Let's Encrypt certificate and proxies to port 32012 on the Kubernetes node) -> ingress controller -> kubernetes-dashboard pod.
My Kubernetes Ingress resource looks like the following:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: k8s.xxx.xx
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443

Kubernetes - Ingress TCP service SSL Termination

I'm doing SSL termination using Ingress for HTTPS traffic. But I also want to achieve the same thing for a custom port (HTTP virtual host). For example, https://example.com:1234 should go to http://example.com:1234.
Nginx Ingress has a ConfigMap where we can expose custom ports, but SSL termination doesn't work there.
Any workaround? I wonder if I could redirect the incoming HTTPS using .htaccess instead.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  1234: "test-web-services/httpd:1234"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: port-1234
    port: 1234
    protocol: TCP
    targetPort: 1234
SSL Termination for TCP traffic is not a feature directly supported by nginx-ingress.
It is more widely described in this Github issue:
Github.com: Kubernetes: Ingress-nginx: Issues: [nginx] Support SSL for TCP
You can also find in this thread that some people were successful in implementing a workaround allowing them to support terminating SSL with TCP services. Specifically:
Github.com: Kubernetes: Ingress-nginx: Issues: [nginx] Support SSL for TCP: Comment 749026036
As your example featured a "downgrade" from HTTPS communication to HTTP, it could be beneficial to add that you can alter the way the NGINX Ingress Controller connects to your backend. Let me elaborate on that.
Please consider this as a workaround:
By default your NGINX Ingress Controller will connect to your backend over HTTP. This can be changed with the following annotation:
nginx.ingress.kubernetes.io/backend-protocol:
Citing the official documentation:
Using backend-protocol annotations is possible to indicate how NGINX should communicate with the backend service. (Replaces secure-backends in older versions) Valid Values: HTTP, HTTPS, GRPC, GRPCS, AJP and FCGI
By default NGINX uses HTTP.
-- Kubernetes.github.io: Ingress-nginx: User guide: Nginx configuration: Annotations: Backend protocol
In this particular example the request path will be the following:
client -- (HTTPS:443) --> Ingress controller (TLS Termination) -- (HTTP:service-port) --> Service ----> Pod
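A minimal Ingress sketch for that path (the host and TLS secret are placeholders; the backend names follow the question's httpd service); switching the annotation to "HTTPS" would instead re-encrypt traffic towards the backend:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpd-ingress
  namespace: test-web-services
  annotations:
    # HTTP is the default; set to "HTTPS" only if the backend itself expects TLS.
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-tls   # certificate presented by the Ingress Controller
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: httpd
            port:
              number: 1234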
The caveat
You can use a Service of type LoadBalancer to send the traffic from port 1234 to either port 80 or 443 of your Ingress Controller. This would make TLS termination much easier, but it would force the client to use only one protocol. For example:
- name: custom
  port: 1234
  protocol: TCP
  targetPort: 443
This excerpt from the nginx-ingress Service could be used to forward HTTPS traffic to your Ingress Controller, where the request would be TLS terminated and forwarded as HTTP to your backend. Forcing plain HTTP through that port would yield error code 400: Bad Request.
In this particular example the request path will be the following:
client -- (HTTPS:1234) --> Ingress controller (TLS Termination) -- (HTTP:service-port) --> Service ----> Pod

Is there a way to configure an EKS service to use HTTPS?

Here is the config for our current EKS service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: main-api
  name: main-api-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: http-port
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: main-api
  sessionAffinity: None
  type: LoadBalancer
Is there a way to configure it to use HTTPS instead of HTTP?
To terminate HTTPS traffic on Amazon Elastic Kubernetes Service and pass it to a backend:
1. Request a public ACM certificate for your custom domain.
2. Identify the ARN of the certificate that you want to use with the load balancer's HTTPS listener.
3. In your text editor, create a service.yaml manifest file based on the following example. Then, edit the annotations to provide the ACM ARN from step 2.
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  annotations:
    # Note that the backend talks over HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    # TODO: Fill in with the ARN of your certificate.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:{region}:{user id}:certificate/{id}
    # Only run SSL on the port named "https" below.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  selector:
    app: echo-pod
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8080
4. To create the Service object, run the following command:
$ kubectl create -f service.yaml
5. To return the DNS URL of the service of type LoadBalancer, run the following command:
$ kubectl get service
Note: If you have many active services running in your cluster, be sure to get the URL of the right service of type LoadBalancer from the command output.
6. Open the Amazon EC2 console, and then choose Load Balancers.
7. Select your load balancer, and then choose Listeners.
8. For Listener ID, confirm that your load balancer port is set to 443.
9. For SSL Certificate, confirm that the SSL certificate that you defined in the YAML file is attached to your load balancer.
10. Associate your custom domain name with your load balancer name.
11. Finally, in a web browser, test your custom domain over HTTPS:
https://yourdomain.com
You should use an Ingress (and not a Service) to expose HTTP/S outside of the cluster.
I suggest using the ALB Ingress Controller.
There is a complete walkthrough here,
and you can see how to set up TLS/SSL here.
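For reference, a rough sketch of what an ALB-backed Ingress with an ACM certificate might look like (the annotations shown and the certificate ARN are placeholders; check the AWS Load Balancer Controller documentation for your version):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-api-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    # TODO: replace with the ARN of your ACM certificate
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:{region}:{account}:certificate/{id}
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: main-api-svc
            port:
              number: 80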

How to expose a Ingress for external access in Kubernetes?

I have a Kubernetes cluster on a private network (a private server, not AWS or Google Cloud) and I created a Service to be able to access it. However, I need to be able to access it from outside the cluster, and for this I created an Ingress and added ingress-nginx to the cluster.
This is the YAML I'm using after making several attempts:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: k8s.local
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP
  selector:
    name: nginx
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
#  selector:
#    app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: echoserver
        image: nginx
        ports:
        - containerPort: 80
I created the resources like this: kubectl create -f file.yaml
In the /etc/hosts file I added an entry mapping k8s.local to the IP of the master server.
When trying the command below, on or off the master server, a "Connection refused" message appears:
$ curl http://172.16.0.18:80/ -H 'Host: k8s.local'
I do not know if it's important, but I'm using Flannel in the cluster.
My idea is just to create a 'hello world' and expose it out of the cluster!
Do I need to change anything in the configuration to allow this access?
YAML file edited:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: k8s.local
    http:
      paths:
      - path: /teste
        backend:
          serviceName: nginx
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer # NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: echoserver
        image: nginx
        ports:
        - containerPort: 80
You can deploy the ingress controller as a DaemonSet with host port 80. The Service of the controller will not matter then. You can point your domain to every node in your cluster.
You can use a NodePort type Service, but that will force you to use some port in the 30k range; you will not be able to use port 80.
Of course the best solution is to use a cloud provider with a load balancer
You can make it work with a plain nginx pod, but the recommended method is to install a Kubernetes ingress controller; in your case you are using nginx, so you can install an nginx ingress controller.
Here is some information on how to install it.
If you want to allow external access you can also expose the nginx ingress controller as a LoadBalancer service. You can also use NodePort but you will have to manually point a load balancer to the port on your Kubernetes nodes.
And yes, the selector on the Service needs to be:
selector:
  app: nginx
In this case NodePort would work. It will open a high port number on every node (the same port on every node), so you can use any of these nodes. Place a load balancer in front if you want, and point its backend pool to those instances you have running. Do not use ClusterIP; it is just for internal usage.
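For example, assuming the nginx Deployment above, a NodePort Service sketch would look like this (the nodePort value is optional and just an example from the allowed 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    nodePort: 30080   # reachable on every node at <node-ip>:30080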
If you run your cluster on bare metal, you need to tell the nginx-ingress controller to use hostNetwork: true, to be added in the template/spec part of mandatory.yaml.
That way the pod running the ingress controller will listen on ports 80 and 443 of the host node.
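For illustration, the relevant excerpt of the controller Deployment in mandatory.yaml would look roughly like this (only the added fields are shown; the rest of the spec stays unchanged):
spec:
  template:
    spec:
      hostNetwork: true
      # recommended alongside hostNetwork so the pod still resolves cluster DNS
      dnsPolicy: ClusterFirstWithHostNet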
https://github.com/alexellis/inlets
is the easiest way of doing what you want.
Note: encryption requires wss://, which requires TLS certs. If you want fully automated encryption plus the ability to use Inlets as a Layer 4 LB, you should use Inlets Pro; it's very cheap compared to other cloud alternatives.
I've also been able to set up the OSS / non-Kubernetes-operator version of Inlets with encryption / wss (WebSockets Secure), using the open source version of Inlets as a Layer 7 LB. (It just took some manual configuration and wasn't fully automated like the pro version.)
https://blog.alexellis.io/https-inlets-local-endpoints/
I was able to get public internet HTTPS + the nginx ingress controller on minikube, and tested 2 sites routed using Ingress objects, in ~3-4 hours with no good guide to doing it, being new to Caddy/WebSockets but an expert on Kubernetes Ingress.
Basically:
Step 1.) Create a $0.007/hour or $5/month VPS on Digital Ocean with a public IP
Step 2.) Point mysite1.com, *.mysite1.com, mysite2.com, *.mysite2.com to the public IP of the VPS.
Step 3.) SSH into the machine and install Inlets + Caddy v1.0.3 + a Caddyfile; here's mine:
mysite1.com, *.mysite1.com, mysite2.com, *.mysite2.com
proxy / 127.0.0.1:8080 {
    transparent
}
proxy /tunnel 127.0.0.1:8080 {
    transparent
    websocket
}
tls {
    max_certs 10
}
Step 4.) Deploy one inlets client Deployment on the Kubernetes cluster, use wss to your VPS, and point the inlets deployment at an ingress controller Service of type ClusterIP.
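A rough sketch of such an inlets client Deployment, assuming the OSS inlets 2.x CLI (the image tag, flags, controller Service name and token handling here are assumptions; check the inlets docs and the blog post above for the exact invocation):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inlets-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inlets-client
  template:
    metadata:
      labels:
        app: inlets-client
    spec:
      containers:
      - name: inlets-client
        image: inlets/inlets:2.7.4   # assumed image/tag
        command: ["inlets"]
        args:
        - "client"
        - "--remote=wss://mysite1.com"   # the Caddy server above, which proxies /tunnel to the inlets server
        - "--upstream=http://ingress-nginx-controller.ingress-nginx.svc.cluster.local:80"   # assumed controller Service name
        - "--token=REPLACE_WITH_YOUR_TOKEN"   # in practice mount this from a Secret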
The basics of what's happening are:
1.) Caddy leverages Let's Encrypt (free) to automatically get HTTPS certs for every website you point at the Caddy server.
2.) Your inlets deployment starts a bidirectional VPN tunnel using WebSockets with the VPS that has a public IP. (Warning: the VPN tunnel will only be encrypted if you specify wss, and that requires the server to have a TLS cert, which it gets from Let's Encrypt.)
3.) Caddy is now a public L7 LB / reverse proxy that terminates HTTPS and forwards to your ingress controller over an encrypted WebSockets VPN tunnel. From there it's normal-ish ingress.
4.) Traffic flow: DNS -(resolves IP)-> (HTTPS) VPS / L7 reverse proxy -(encrypted VPN tunnel)-> inlets pod from the inlets Deployment -(L7 cleartext on the cluster network)-> Ingress Controller Service -> Ingress Controller Pod -(L7 redirect)-> ClusterIP Services / sites defined by Ingress objects.