I really like the Kubernetes Ingress model. I currently run ingress-nginx controllers to route traffic into my Kubernetes pods.
I would like to use this to also route traffic to 'normal' machines, i.e. VMs or physical nodes that are not part of my Kubernetes infrastructure. Is this possible? How?
In Kubernetes you can define an ExternalName Service, in which you specify an FQDN pointing to an external server.
kind: Service
apiVersion: v1
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com
Then you can use my-service in your nginx rule.
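For illustration, a minimal Ingress rule referencing that service might look like the following sketch (the host, path, and port 80 here are assumptions, not part of the original answer):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service-ingress
  namespace: prod
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: my-service.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80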
You can create a static Service and corresponding Endpoints for external services that are not part of k8s, and then use the k8s Service in an Ingress to route traffic.
Also see the ingress-nginx docs on enabling custom upstream checks:
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-nginx-upstream-checks
In the example below, just change the port/IP according to your needs.
apiVersion: v1
kind: Service
metadata:
  labels:
    product: external-service
  name: external-service
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    product: external-service
  name: external-service
subsets:
- addresses:
  - ip: x.x.x.x
  - ip: x.x.x.x
  - ip: x.x.x.x
  ports:
  - name: http
    port: 80
    protocol: TCP
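To complete the picture, an Ingress routing to this Service could then look like the following sketch (the host name is a placeholder assumption):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: external-service
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: external.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: external-service
            port:
              number: 80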
I don't think it's possible directly: ingress-nginx gets pod info by watching Namespace, Service, Endpoints, and Ingress resources, and then redirects traffic to pods. Without these Kubernetes-specific resources, ingress-nginx has no way to find the IPs that need load balancing. ingress-nginx also has no health-check method of its own; it relies on Kubernetes' built-in mechanisms to check the health of the running pods.
I have the following template, with a deployment, a service and an Ingress. I previously ran minikube addons enable ingress locally to add an ingress controller.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastapi
  labels:
    app: fastapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fastapi
  template:
    metadata:
      labels:
        app: fastapi
    spec:
      containers:
      - name: fastapi
        image: datamastery/fastapi
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: fastapi-service
spec:
  selector:
    app: fastapi
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 3000
    nodePort: 30002
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: datamastery.com
    http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: fastapi-service
            port:
              number: 3000
When I run kubectl get services I get:
fastapi-service LoadBalancer 10.108.5.228 <pending> 5000:30002/TCP 5d22h
In my /etc/hosts file I added the following:
10.108.5.228 datamastery.com
Normally I would expect now to be able to open my service in the browser, but nothing happens. What did I do wrong? Did I miss something in the template? Is the IP wrong? Something in the hosts file?
Thank you!
fastapi-service LoadBalancer 10.108.5.228 5000:30002/TCP 5d22h
10.108.5.228 is an address within your SDN. Only members of your SDN can reach this address; it is unlikely your workstation has a route sending this traffic to one of your Kubernetes nodes.
<pending> means your cluster is not integrated with a cloud provider offering LoadBalancer capabilities. When in doubt, use ClusterIP as your service type; LoadBalancer only makes sense in specific cases. Setting a nodePort as you did is likewise not required (it would make sense with a NodePort service, which is itself useful in few use cases and should not be used otherwise).
You did create an Ingress. If you have an Ingress controller, you want to connect to that IP/port. The Host header tells your ingress controller where to route the request within your SDN.
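With minikube specifically, something along these lines should work (a sketch; the ingress addon exposes the controller on the node IP, port 80):

minikube ip
curl -H "Host: datamastery.com" http://<minikube-ip>/

Alternatively, map datamastery.com in /etc/hosts to the minikube IP (not the ClusterIP) and open it in the browser.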
I believe what you are doing here is trying to combine two different things.
NodePort is only sufficient if you have a single node OR you really control where your service pods get deployed. Otherwise it is not suitable to use the node IP to access services.
To overcome this issue we usually use ingress as a service proxy. Incoming traffic is routed to the correct service pods depending on the URL and port. Ingress also manages the SSL termination. So basically this is your first "load balancer", as ingress assigns traffic to services across nodes and pods.
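For example, SSL termination at the ingress might look like this sketch (the host and Secret name are assumptions; the Secret must hold the certificate and key):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fastapi-tls
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - datamastery.com
    secretName: datamastery-tls  # assumed kubernetes.io/tls Secret
  rules:
  - host: datamastery.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: fastapi-service
            port:
              number: 3000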
In a production environment you deploy the ingress controller with type: LoadBalancer in the kube-system namespace, for example for nginx-ingress:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http
    targetPort: 80
  - port: 443
    name: https
    targetPort: 443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
This spins up a cloud load balancer from your provider and links it to the ingress service in your cluster. So basically you now have a real load balancer in place balancing traffic between your nodes, while ingress routes it to your services and the services route it to your pods.
Back to your question:
In your config files you try to spin up a service with type: LoadBalancer. This would skip the ingress part and spin up a second cloud load balancer from your provider, dedicated to this single service.
You have to remove the type (and nodePort) to use the default ClusterIP for your service.
apiVersion: v1
kind: Service
metadata:
  name: fastapi-service
spec:
  selector:
    app: fastapi
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
In addition, the ports don't match: your Ingress object points at port 3000, but your Service object exposed port 5000, so we change this as well.
With this config, traffic to the FQDN is routed to the ingress, then to the ClusterIP service on port 3000, and on to your pods.
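One more thing worth checking, based on the manifests above: the Ingress was created in the kubernetes-dashboard namespace, but an Ingress can only reference Services in its own namespace, so it should live next to fastapi-service. A corrected sketch (assuming the Service is in default):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fastapi-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: datamastery.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: fastapi-service
            port:
              number: 3000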
Is there any way to limit access to a Kubernetes Service of type LoadBalancer from outside the cluster?
I would like to expose my database's pod to the Internet using a LoadBalancer service that is accessible only from my external IP address.
My Kubernetes cluster runs on GKE.
You can use loadBalancerSourceRanges to filter load balanced traffic as mentioned here.
Here is a simple example of such a Service in front of the NGINX Ingress controller:
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: external
    app.kubernetes.io/name: ingress-nginx
  name: external-ingress-nginx-controller
  namespace: kube-ingress
spec:
  loadBalancerSourceRanges:
  - <YOUR_IP_1>
  - <YOUR_IP_2>
  - <YOUR_IP_3>
  ports:
  - name: https
    nodePort: 32293
    port: 443
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: external
    app.kubernetes.io/name: ingress-nginx
  type: LoadBalancer
Yes, you can achieve that at the Kubernetes level with a native Kubernetes NetworkPolicy. There you can limit ingress traffic to your Kubernetes Service by specifying policies of the Ingress type.
An example could be:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    ports:
    - protocol: TCP
      port: 6379
More information can be found in the official documentation.
If you would rather block traffic from unwanted IP addresses at the load-balancer level, you have to define firewall rules and apply them to your GCP load balancer.
More information regarding the GCP firewall rules can also be found in the documentation.
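As a sketch of such a rule (the rule name, port, and target tag are placeholder assumptions; adjust them to your setup):

gcloud compute firewall-rules create allow-only-my-ip \
    --network default \
    --allow tcp:443 \
    --source-ranges <YOUR_IP>/32 \
    --target-tags <your-gke-node-tag>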
Is it possible to configure k8s nginx-ingress as a LB so that a K8s Service actively connects to an external backend hosted on external hosts/ports (where one will be enabled at a time, connecting back to the cluster service)?
Similar to Envoy proxy? This is on vanilla K8s, on-prem.
So rather than balance load from
client -> cluster -> service.
I am looking for
service -> nginx-ingress -> external-backend.
Define a Kubernetes Service with no selector. Then define an Endpoints object, in which you put the external IP and port. Normally you do not define Endpoints for Services, but because this Service has no selector you need to provide an Endpoints object with the same name as the Service.
Then you point the Ingress at the Service.
Here's an example that exposes an Ingress on the cluster and sends the traffic to 192.168.88.1 on TCP 8081.
apiVersion: v1
kind: Service
metadata:
  name: router
  namespace: default
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8081
---
apiVersion: v1
kind: Endpoints
metadata:
  name: router
  namespace: default
subsets:
- addresses:
  - ip: 192.168.88.1
  - ip: 192.168.88.2 # As per question below
  ports:
  - port: 8081
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: router
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: my-router.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: router
          servicePort: 80
While defining the Ingress, use the nginx.ingress.kubernetes.io/configuration-snippet annotation. Also enable proxy protocol by setting use-proxy-protocol: "true" (a key in the controller's ConfigMap).
Using this annotation you can add additional configuration to the NGINX location block.
Please take a look: ingress-nginx-issue, advanced-configuration-with-annotations.
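A sketch of both pieces (the snippet contents, host, and service name are placeholder assumptions; note that use-proxy-protocol lives in the controller's ConfigMap, not on the Ingress):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: snippet-example
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      # extra NGINX directives injected into the generated location block
      more_set_headers "X-Request-Start: t=${msec}";
spec:
  rules:
  - host: my-router.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: router
            port:
              number: 80
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"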
I am new to k8s
I have a deployment file, shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
      - name: jenkins
        image: jenkins
        ports:
        - containerPort: 8080
        - containerPort: 50000
My Service file is as follows:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
    name: http
  selector:
    component: web
My Ingress file is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: jenkins.xyz.com
    http:
      paths:
      - path: /
        backend:
          serviceName: jenkins-svc
          servicePort: 80
I am using the nginx ingress project, and my cluster was created using kubeadm with 3 nodes.
I first ran the mandatory command
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
When I tried hitting jenkins.xyz.com, it didn't work.
When I try the command
kubectl get ing
the Ingress resource doesn't get an IP address assigned to it.
The ingress resource is nothing but the configuration of a reverse proxy (the Ingress controller).
It is normal that the Ingress doesn't get an IP address assigned.
What you need to do is connect to your ingress controller instance(s).
In order to do so, you need to understand how they're exposed in your cluster.
Considering the YAML you used to get the ingress controller running, there is no sign of exposure to the outside network.
At minimum you need to define a Service exposing your controller (it could be a LoadBalancer, if the provider hosting your cluster supports it); alternatively you can use hostNetwork: true or a NodePort.
To use the latter option (NodePort), you could apply this YAML:
https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/baremetal/service-nodeport.yaml
I suggest you read the Ingress documentation page to get a clearer idea about how all this stuff works.
https://kubernetes.io/docs/concepts/services-networking/ingress/
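For reference, a minimal NodePort Service for the controller could look like this sketch (modeled on the linked manifest; the labels and namespace follow the standard ingress-nginx deployment and may differ in yours):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx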
In order to access your local Kubernetes cluster pods, a NodePort needs to be created. The NodePort publishes your service on every node using its public IP and a port. Then you can access the service using any of the cluster's node IPs and the assigned port.
Defining a NodePort in Kubernetes:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-np
  labels:
    name: nginx-service-np
spec:
  type: NodePort
  ports:
  - port: 8082        # Cluster IP, i.e. http://10.103.75.9:8082
    targetPort: 8080  # Application port
    nodePort: 30000   # (EXTERNAL-IP VirtualBox IPs) i.e. http://192.168.50.11:30000/ http://192.168.50.12:30000/ http://192.168.50.13:30000/
    protocol: TCP
    name: http
  selector:
    app: nginx
See a full example with source code at Building a Kubernetes Cluster with Vagrant and Ansible (without Minikube).
The nginx ingress controller can also be replaced with Istio if you want to benefit from a service mesh architecture, to:
Load balance traffic, external or internal
Control failures, retries, routing
Apply limits and monitor network traffic between services
Secure communication
See Installing Istio in Kubernetes under VirtualBox (without Minikube).
I have created a Kubernetes cluster with kubeadm, with 3 nodes. Their IP addresses and hostnames are 10.10.10.146/24 (k8s1, master), 10.10.10.135/24 (k8s2), and 10.10.10.170/24 (k8s3).
Now I create an nginx service which contains 3 pods with this YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-app
        image: nginx:1.14.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-app-srv
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  externalIPs:
  - 10.10.100.1
Finally one of the pods was scheduled to k8s2 and two of them to k8s3.
Then if I route 10.10.100.0/24 to k8s2, only one pod works; if to k8s3, only two pods work; and if to k8s1, no pods work.
How can I make all pods reachable through the external IP from outside, just as they are through the cluster IP from inside, no matter which node I route the external-IP subnet to? Or is that not possible, and do I need something else, such as a Kubernetes Ingress?
There are several options to expose your service outside the cluster:
The first option is to use the NodePort type of Kubernetes Service. This way the Service opens a port on the network interface of each node in the cluster, and all traffic coming to that port is forwarded to the service.
By default, the port range for this kind of service is limited to 30000–32767.
Here is an example of Service NodePort configuration:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
    nodePort: 30080
The second option is available if you are running your Kubernetes cluster in a cloud like AWS, GCP, or Azure. If you create a LoadBalancer type of service, it will create a new load balancer in the cloud and forward all traffic from that load balancer to the service.
The downside of this approach is that each service creates a separate load balancer, which will cost you additional money.
Here is an example of Service LoadBalancer configuration:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  type: LoadBalancer
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
The third option is to use an Ingress object. An Ingress controller must be running in the cluster to configure the networking according to the content of the Ingress objects you create. Ingress can route requests to different services depending on DNS name and URI path. You can also use it on a bare-metal Kubernetes cluster, but in that case you are responsible for routing network traffic to the Ingress controller, for example with firewall rules.
Here are a couple of examples of Ingress configuration:
# redirect all traffic to a service
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
# redirect traffic to a service for a specified URI path
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80