How to expose an Ingress for external access in Kubernetes?

I have a Kubernetes cluster on a private network (a private server, not AWS or Google Cloud), and I created a Service to be able to access my app. However, I need to be able to access it from outside the cluster, and for this I created an Ingress and added ingress-nginx to the cluster.
This is the YAML I'm using after making several attempts:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: k8s.local
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP
  selector:
    name: nginx
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  # selector:
  #   app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: echoserver
        image: nginx
        ports:
        - containerPort: 80
I applied the YAML like this: kubectl create -f file.yaml
In the /etc/hosts file I added k8s.local pointing to the IP of the master server.
When I try the command below, on or off the master server, a "Connection refused" message appears:
$ curl http://172.16.0.18:80/ -H 'Host: k8s.local'
I do not know if it's important, but I'm using Flannel in the cluster.
My idea is just to create a 'hello world' and expose it out of the cluster!
Do I need to change anything in the configuration to allow this access?
Edited YAML file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: k8s.local
    http:
      paths:
      - path: /teste
        backend:
          serviceName: nginx
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer # NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: echoserver
        image: nginx
        ports:
        - containerPort: 80

You can deploy the ingress controller as a DaemonSet with host port 80; the Service in front of the controller does not matter then, and you can point your domain to any node in your cluster (see the sketch below).
You can also use a Service of type NodePort, but that forces you onto a port in the 30000+ range; you will not be able to use port 80.
Of course, the best solution is to use a cloud provider with a load balancer.
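A minimal sketch of the first option (DaemonSet with host ports), assuming the standard ingress-nginx namespace and service account from the mandatory manifest already exist, and using an example controller image tag:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      serviceAccountName: nginx-ingress-serviceaccount   # assumed to exist from the standard install
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1   # example tag
        args:
        - /nginx-ingress-controller
        ports:
        - name: http
          containerPort: 80
          hostPort: 80      # binds port 80 on every node the DaemonSet runs on
        - name: https
          containerPort: 443
          hostPort: 443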

You can make it work with a plain nginx pod, but the recommended method is to install a Kubernetes ingress controller; in your case you are using nginx, so you can install an nginx ingress controller.
Here is some information on how to install it.
If you want to allow external access you can also expose the nginx ingress controller as a LoadBalancer service. You can also use NodePort but you will have to manually point a load balancer to the port on your Kubernetes nodes.
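For example, one way to install and expose it on bare metal (a sketch; the manifest URLs depend on the ingress-nginx release you use):
# install the nginx ingress controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
# expose it; on bare metal a NodePort Service is the usual choice
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml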
And yes, the selector on the Service needs to be:
selector:
  app: nginx

In this case NodePort would work. It opens the same high port number on every node, so you can use any of those nodes. Place a load balancer in front if you want and point its backend pool at those nodes. Do not use ClusterIP; it is only for internal usage.
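For example, a NodePort version of the nginx Service from the question would look roughly like this (the nodePort value is an assumption; if omitted, Kubernetes picks one from the 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # assumed value, must be in 30000-32767
    protocol: TCP
Then the app is reachable from outside the cluster on http://<any-node-ip>:30080.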

If you run your cluster on bare metal, you need to tell the nginx-ingress controller to use hostNetwork: true, added in the template/spec part of mandatory.yml.
That way the pod running the ingress controller will listen on ports 80 and 443 of the host node.
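Roughly like this (a fragment of the controller Deployment in mandatory.yml; other fields are omitted, and the image tag is just an example):
spec:
  template:
    spec:
      hostNetwork: true   # the pod uses the node's network namespace, so it listens on node ports 80/443
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1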

https://github.com/alexellis/inlets
is the easiest way of doing what you want.
Note: encryption requires wss://, which requires TLS certs. If you want fully automated encryption plus the ability to use Inlets as a Layer 4 LB, you should use Inlets Pro; it's very cheap compared to other cloud alternatives.
I've also been able to set up the OSS / non-Kubernetes-operator version of Inlets with encryption / wss (WebSockets Secure), using the open source version of Inlets as a Layer 7 LB (it just took some manual configuration and wasn't fully automated like the Pro version):
https://blog.alexellis.io/https-inlets-local-endpoints/
I was able to get public internet HTTPS plus an nginx ingress controller in front of minikube, and tested two sites routed using Ingress objects, in about 3-4 hours with no good guide, being new to Caddy and WebSockets but experienced with Kubernetes Ingress.
Basically:
Step 1.) Create a $0.007/hour or $5/month VPS on Digital Ocean with a public IP
Step 2.) Point mysite1.com, *.mysite1.com, mysite2.com, *.mysite2.com to the public IP of the VPS.
Step 3.) SSH into the machine and install Inlets + Caddy v1.0.3 + a Caddyfile; here's mine:
mysite1.com, *.mysite1.com, mysite2.com, *.mysite2.com
proxy / 127.0.0.1:8080 {
  transparent
}
proxy /tunnel 127.0.0.1:8080 {
  transparent
  websocket
}
tls {
  max_certs 10
}
Step 4.) Deploy one Inlets Deployment on the Kubernetes cluster, use wss to your VPS, and point the Inlets Deployment at an ingress controller Service of type ClusterIP; a sketch follows below.
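A rough sketch of such an inlets client Deployment (the image tag, token Secret, VPS hostname, and ingress controller Service name are all assumptions; the OSS inlets 2.x client takes --remote, --upstream and --token flags):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inlets-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inlets-client
  template:
    metadata:
      labels:
        app: inlets-client
    spec:
      containers:
      - name: inlets
        image: inlets/inlets:2.6.3   # assumed tag
        command: ["inlets"]
        args:
        - client
        - --remote=wss://mysite1.com   # the VPS running Caddy + the inlets server; exact host depends on your Caddyfile
        - --upstream=http://ingress-nginx.ingress-nginx.svc.cluster.local:80   # assumed name of the ClusterIP ingress controller Service
        - --token=$(INLETS_TOKEN)
        env:
        - name: INLETS_TOKEN
          valueFrom:
            secretKeyRef:
              name: inlets-token   # assumed Secret holding the shared tunnel token
              key: token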
The basics of what's happening are:
1.) Caddy leverages Let's Encrypt (free) to automatically get HTTPS certs for every website you point at the Caddy server.
2.) Your Inlets Deployment starts a bidirectional VPN tunnel over WebSockets to the VPS that has a public IP. (Warning: the VPN tunnel will only be encrypted if you specify wss, and that requires the server to have a TLS cert, which it gets from Let's Encrypt.)
3.) Caddy is now a public L7 LB/reverse proxy that terminates HTTPS and forwards to your ingress controller over an encrypted WebSockets VPN tunnel. From there it's normal-ish ingress.
4.) Traffic flow: DNS -(resolves IP)-> (HTTPS) VPS/L7 reverse proxy -(encrypted VPN tunnel)-> Inlets pod from the Inlets Deployment -(L7 cleartext on the cluster network)-> Ingress Controller Service -> Ingress Controller Pod -(L7 redirect)-> ClusterIP Services/sites defined by Ingress objects.

Related

GKE Ingress configuration for HTTPS-enabled Applications leads to failed_to_connect_to_backend

I have serious problems with the configuration of Ingress on a Google Kubernetes Engine cluster for an application which expects traffic over TLS. I have configured a FrontendConfig, a BackendConfig and defined the proper annotations in the Service and Ingress YAML structures.
The Google Cloud console reports that the backend is healthy, but if I connect to the given address it returns 502, and a failed_to_connect_to_backend error appears in the Ingress logs.
These are my configurations:
FrontendConfig.yaml:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontendconfig
  namespace: my-namespace
spec:
  redirectToHttps:
    enabled: false
  sslPolicy: my-ssl-policy
BackendConfig.yaml:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
  namespace: my-namespace
spec:
  sessionAffinity:
    affinityType: "CLIENT_IP"
  logging:
    enable: true
    sampleRate: 1.0
  healthCheck:
    checkIntervalSec: 60
    timeoutSec: 5
    healthyThreshold: 3
    unhealthyThreshold: 5
    type: HTTP
    requestPath: /health
    # The containerPort of the application in Deployment.yaml (also for liveness and readiness probes)
    port: 8001
Ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.class: "gce"
    # Frontend Configuration Name
    networking.gke.io/v1beta1.FrontendConfig: "my-frontendconfig"
    # Static IP Address Rule Name (gcloud compute addresses create epa2-ingress --global)
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip"
spec:
  tls:
  - secretName: my-secret
  defaultBackend:
    service:
      name: my-service
      port:
        number: 443
Service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
  annotations:
    # Specify the type of traffic accepted
    cloud.google.com/app-protocols: '{"service-port":"HTTPS"}'
    # Specify the BackendConfig to be used for the exposed ports
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
    # Enables the Cloud Native Load Balancer
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: my-application
  ports:
  - protocol: TCP
    name: service-port
    port: 443
    targetPort: app-port # this port expects TLS traffic, no http plain connections
The Deployment.yaml is omitted for brevity, but it defines a liveness and readiness Probe on another port, the one defined in the BackendConfig.yaml.
The interesting thing is: if I also expose this health-check port through Service.yaml (mapped to port 80), point the default backend to port 80, and simply define a rule with a path /* leading to port 443, everything seems to work just fine. But I don't want to expose the health-check port outside my cluster, since I also have some diagnostics information there.
Question: How can I be sure that if I connect to the Ingress with `https://MY_INGRESS_IP/`, the traffic is routed as-is to the HTTPS port of the service/application, without getting the 502 error? Where did I go wrong configuring the Ingress?
There are a few elements to your question; I'll try to answer them here.
I don't want to expose the healthcheck port outside my cluster
The health-check endpoint is technically not exposed outside the cluster; it's exposed inside the Google backbone so that the Google load balancers (configured via the Ingress) can reach it. You can verify that by doing a curl against https://INGRESS_IP/healthz; this will not work.
The traffic is routed exactly as it is to the HTTPS port of the service/application
The reason why 443 in your Service definition doesn't work but 80 does is that when you expose the Service on port 443, the load balancer will fail to connect to a backend without a proper certificate; your backend would also have to be configured to present a certificate to the load balancer to encrypt traffic. The secretName configured on the Ingress is the certificate used by clients to connect to the load balancer. The Google HTTP(S) load balancer terminates the SSL connection and initiates a new connection to the backend using whatever port you specify in the Ingress. If that port is 443 but the backend is not configured with SSL certificates, that connection will fail.
Overall you don't need to encrypt traffic between the load balancer and the backends; it's doable but not needed, as Google encrypts that traffic at the network level anyway.
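As a sketch of that simpler setup, assuming the application also exposes a plain HTTP port (here the 8001 port already used for the health check, as in the experiment described in the question), the Service would look roughly like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
  annotations:
    cloud.google.com/app-protocols: '{"service-port":"HTTP"}'   # LB -> backend over plain HTTP
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: my-application
  ports:
  - protocol: TCP
    name: service-port
    port: 80
    targetPort: 8001   # assumed plain-HTTP port of the application
with the Ingress defaultBackend pointing at port 80 instead of 443.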
Actually I solved it by setting up a managed certificate connected to the Ingress. It "magically" worked without any other change, using a Service of type ClusterIP.
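For reference, a GKE managed certificate is declared roughly like this and referenced from the Ingress through the networking.gke.io/managed-certificates annotation (the resource name and domain are placeholders):
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-managed-cert
  namespace: my-namespace
spec:
  domains:
  - example.mydomain.com   # placeholder domain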

Minikube ingress controller not forwarding request to deployed service properly

I have the following setup in a minikube cluster:
A Spring Boot app deployed in the minikube cluster
name: opaapp and containerPort: 9999
A Service used to expose the app, as below:
apiVersion: v1
kind: Service
metadata:
  name: opaapp
  namespace: default
  labels:
    app: opaapp
spec:
  selector:
    app: opaapp
  ports:
  - name: http
    port: 9999
    targetPort: 9999
  type: NodePort
I created an ingress controller and an ingress resource as below:
apiVersion: networking.k8s.io/v1beta1 # for versions before 1.14 use extensions/v1beta1
kind: Ingress
metadata:
  name: opaapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: opaapp.info
    http:
      paths:
      - path: /
        backend:
          serviceName: opaapp
          servicePort: 9999
I have set up the hosts file as below:
172.17.0.2 opaapp.info
Now, if I access the service as
http://opaapp.info:32746/api/ping : I am getting the response back
But if I try to access
http://opaapp.info/api/ping : I am getting a 404 error
I am not able to find the error in the configuration.
The nginx ingress controller has been exposed via NodePort 32746, which means nginx is not listening on ports 80/443 on the host's (172.17.0.2) network; rather, nginx is listening on ports 80/443 on the Kubernetes pod network, which is different from the host network. Hence accessing it via http://opaapp.info/api/ping does not work. To make it work the way you expect, the nginx ingress controller needs to be deployed with the hostNetwork: true option so that it can listen on ports 80/443 directly on the host (172.17.0.2) network, which can be done as discussed here.
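One way to do that on an existing install is to patch the controller Deployment (a sketch; the namespace and Deployment name depend on how the controller was installed, so treat them as assumptions):
# switch the controller pod onto the host network so it binds 80/443 on the node
kubectl -n ingress-nginx patch deployment ingress-nginx-controller \
  --patch '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'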

Cannot access a LoadBalancer service at Kubernetes

I managed to deploy a Python app to the Kubernetes cluster. The Python app image is stored in AWS ECR (Elastic Container Registry).
My deployment is:
NAME:        charting-rest-server
READY:       1/1
UP-TO-DATE:  1
AVAILABLE:   1
AGE:         33m
CONTAINERS:  charting-rest-server
IMAGES:      *****.dkr.ecr.eu-west-2.amazonaws.com/charting-rest-server:latest
SELECTOR:    app=charting-rest-server
And my service is:
NAME:         charting-rest-server-service
TYPE:         LoadBalancer
CLUSTER-IP:   10.100.4.207
EXTERNAL-IP:  *******.eu-west-2.elb.amazonaws.com
PORT(S):      8765:32735/TCP
AGE:          124m
SELECTOR:     app=charting-rest-server
According to this AWS guide, when I do curl *****.us-west-2.elb.amazonaws.com:80 I should be able to externally access the load balancer, which is going to route me to my pod's IP.
But all I get is
(6) Could not resolve host: *******.eu-west-2.elb.amazonaws.com
And, come to think of it, if I want to access my pod and send some requests, I should have an external IP like 111.111.111.111 (obviously just an example).
EDIT
the deployment's yaml is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: charting-rest-server
spec:
  selector:
    matchLabels:
      app: charting-rest-server
  replicas: 1
  template:
    metadata:
      labels:
        app: charting-rest-server
    spec:
      containers:
      - name: charting-rest-server
        image: *****.eu-west-2.amazonaws.com/charting-rest-server:latest
        ports:
        - containerPort: 5000
the service's yaml:
apiVersion: v1
kind: Service
metadata:
  name: charting-rest-server-service
spec:
  type: LoadBalancer
  selector:
    app: charting-rest-server
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
I already tried the suggestions from the comments, using an Ingress instance, but I only ended up spending a huge amount of time trying to understand how Ingresses work and whether I was doing something wrong.
I will put the YAML file I used here, but it made no difference, since my ADDRESS field was empty (no IP to use).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: charting-rest-server-ingress
spec:
  rules:
  - host: charting-rest-server-service
    http:
      paths:
      - path: /
        backend:
          serviceName: charting-rest-server-service
          servicePort: 80
I have been stuck on this problem for so long that I would appreciate some help.
You already created a Service with type LoadBalancer, but it looks like you have incorrect ports configured.
Your Deployment is created with containerPort: 5000 and your Service is pointing to targetPort: 9376. Those need to match for the Deployment to be exposed.
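For example, the Service from the question can be fixed so that targetPort matches the container port (a sketch based on the YAML above):
apiVersion: v1
kind: Service
metadata:
  name: charting-rest-server-service
spec:
  type: LoadBalancer
  selector:
    app: charting-rest-server
  ports:
  - protocol: TCP
    port: 80          # port exposed by the ELB
    targetPort: 5000  # must match containerPort in the Deployment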
If you are having a hard time writing yaml for the Service you can expose the Deployment using following kubectl command:
kubectl expose --namespace=tick deployment charting-rest-server --type=LoadBalancer --port=8765 --target-port=5000 --name=charting-rest-server-service
Once you fix those ports you will be able to access the service from outside using its hostname:
status:
  loadBalancer:
    ingress:
    - hostname: aba02b223436111ea85ea06a051f04d8-1294697222.eu-west-2.elb.amazonaws.com
I also recommend this guide Tutorial: Expose Services on your AWS Quick Start Kubernetes cluster.
If you need more control over the http rules please consider using ingress, you can read more about ALB Ingress Controller on Amazon EKS also Using a Network Load Balancer with the NGINX Ingress Controller on Amazon EKS.

Nginx Ingress Failing to Serve

I am new to k8s.
I have a deployment file that goes as below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
      - name: jenkins
        image: jenkins
        ports:
        - containerPort: 8080
        - containerPort: 50000
My service file is as follows:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
    name: http
  selector:
    component: web
My Ingress File is
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: jenkins.xyz.com
    http:
      paths:
      - path: /
        backend:
          serviceName: jenkins-svc
          servicePort: 80
I am using the nginx ingress project and my cluster is created using kubeadm with 3 nodes
nginx ingress
I first ran the mandatory command
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
When I tried hitting jenkins.xyz.com it didn't work.
When I tried the command
kubectl get ing
the ing resource doesn't get an IP address assigned to it.
The ingress resource is nothing but the configuration of a reverse proxy (the Ingress controller).
It is normal that the Ingress doesn't get an IP address assigned.
What you need to do is connect to your ingress controller instance(s).
In order to do so, you need to understand how they're exposed in your cluster.
Considering the YAML you claim you used to get the ingress controller running, there is no sign of exposition to the outside network.
You need at least to define a Service to expose your controller (it might be a LoadBalancer if the provider where your cluster runs supports it); alternatively you can use hostNetwork: true or a NodePort.
To use the latest option (NodePort) you could apply this YAML:
https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/baremetal/service-nodeport.yaml
I suggest you read the Ingress documentation page to get a clearer idea about how all this stuff works.
https://kubernetes.io/docs/concepts/services-networking/ingress/
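Once the controller is exposed, you can check which node port it got and reach it through any node's IP (a sketch; the namespace and Service name come from the manifests above and may differ in your install):
# find the NodePort assigned to the ingress controller Service
kubectl get svc -n ingress-nginx ingress-nginx
# then hit any node on that port, sending the host name from the Ingress rule
curl http://<NODE_IP>:<NODE_PORT>/ -H 'Host: jenkins.xyz.com'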
In order to access your local Kubernetes cluster's pods, a NodePort needs to be created. The NodePort will publish your service on every node using its public IP and a port. Then you can access the service using any of the cluster's node IPs and the assigned port.
Defining a NodePort in Kubernetes:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-np
  labels:
    name: nginx-service-np
spec:
  type: NodePort
  ports:
  - port: 8082        # Cluster IP, i.e. http://10.103.75.9:8082
    targetPort: 8080  # Application port
    nodePort: 30000   # (EXTERNAL-IP VirtualBox IPs) i.e. http://192.168.50.11:30000/ http://192.168.50.12:30000/ http://192.168.50.13:30000/
    protocol: TCP
    name: http
  selector:
    app: nginx
See a full example with source code at Building a Kubernetes Cluster with Vagrant and Ansible (without Minikube).
The nginx ingress controller can also be replaced with Istio if you want to benefit from a service mesh architecture for:
Load balancing traffic, external or internal
Controlling failures, retries, routing
Applying limits and monitoring network traffic between services
Securing communication
See Installing Istio in Kubernetes under VirtualBox (without Minikube).

Global static IP name on NGINX Ingress

I'm having difficulties getting my Ingress controller running on Google Container Engine. I want to use an NGINX Ingress controller with Basic Auth and a reserved global static IP name (this can be created in the External IP addresses section of the Google Cloud admin interface). When I use the gce class everything works fine except for the Basic Auth (which I think is not supported by the gce class), and when I try to use the nginx class the Ingress controller launches, but the IP address that I reserved in the Google Cloud admin interface is not attached to the Ingress controller. Does anyone know how to get this working? Here is my config file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webserver
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "myreservedipname"
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-realm: "Auth required"
    ingress.kubernetes.io/auth-secret: htpasswd
spec:
  tls:
  - secretName: tls
  backend:
    serviceName: webserver
    servicePort: 80
I found a solution with helm.
helm install --name nginx-ingress stable/nginx-ingress \
--set controller.service.loadBalancerIP=<YOUR_EXTERNAL_IP>
You should use the external IP itself, not the name you gave it with gcloud.
In my case I also added --set rbac.create=true for permissions.
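To get the actual address from the reserved name (a sketch; the region is an assumption, and for an nginx LoadBalancer the address normally has to be a regional one, as the next answer notes):
# print the IP behind the reserved address name, to pass as controller.service.loadBalancerIP
gcloud compute addresses describe myreservedipname \
  --region europe-west1 --format='value(address)'   # use --global instead for a global address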
External IP address can be attached to the Load Balancer which you can point to your Ingress controller.
One major remark - the External IP address should be reserved in the same region as the Kubernetes cluster.
To do it, you just need to deploy your nginx-ingress Service with type: LoadBalancer and set the loadBalancerIP value, like this:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  loadBalancerIP: <YOUR_EXTERNAL_IP>
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
After deployment, Kubernetes will create a new Load Balancer with desired static IP which will be an entry-point for your Ingress.
@silgon, as I see, you already tried to do it, but without a positive result. It should work, though; if not, check the region of the IP address and the configuration once again.
Here's an example that I know works; it could be an issue with your syntax:
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - host: nginx.192.168.99.100.nip.io
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80