Whitelist an IP to access a deployment with Kubernetes Ingress and Istio

I'm trying to whitelist an IP to access a deployment inside my Kubernetes cluster.
I looked for some documentation online about this, but I only found the
ingress.kubernetes.io/whitelist-source-range
annotation, which grants Ingress access to certain IP ranges. Still, I couldn't manage to isolate the deployment.
Here is the Ingress configuration YAML file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-internal
  annotations:
    kubernetes.io/ingress.class: "istio"
    ingress.kubernetes.io/whitelist-source-range: "xxx.xx.xx.0/24, xx.xxx.xx.0/24"
spec:
  rules:
  - host: white.example.com
    http:
      paths:
      - backend:
          serviceName: white
          servicePort: 80
I can still access the deployment both from my whitelisted IP and from my mobile phone (a different IP that is not whitelisted in the config).
Has anyone run into the same problem using Ingress and Istio?
Any help, hint, docs or alternative configuration will be much appreciated.

Have a look at the annotation overview; it seems that whitelist-source-range is not supported by Istio (an Istio-native alternative is sketched after the quote):
whitelist-source-range: Comma-separated list of IP addresses to enable access to.
nginx, haproxy, trafficserver
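If you are on a newer Istio release, an alternative worth exploring is an AuthorizationPolicy applied to the ingress gateway. A minimal sketch, assuming the default istio: ingressgateway pod label and the placeholder CIDRs from the question:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ingress-ip-allowlist
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway            # default gateway label; adjust if yours differs
  action: ALLOW
  rules:
  - from:
    - source:
        ipBlocks: ["xxx.xx.xx.0/24", "xx.xxx.xx.0/24"]  # placeholder ranges from the question

Note that ipBlocks matches the source address the gateway actually sees, so the load balancer in front must preserve the client IP (e.g. externalTrafficPolicy: Local) for this to work as an allowlist.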

I managed to solve the IP whitelisting problem for my Istio-based service (an app that uses the istio-proxy sidecar and is exposed through the Istio ingress gateway via a public LB) using NetworkPolicy.
For my case, here is the topology:
Public Load Balancer (in GKE, using preserve-clientIP mode) ==> dedicated Istio ingress gateway controller pods (see my answer here) ==> my pods (istio-proxy sidecar container, my main container).
So, I set up two NetworkPolicies (a sketch of the first one follows):
A NetworkPolicy that guards incoming connections from the internet to my Istio ingress gateway controller pods. In its configuration, I just have to set the spec.podSelector.matchLabels field to the pod labels of the dedicated Istio ingress gateway controller pods.
Another NetworkPolicy that limits incoming connections to my Deployment so that they are accepted only from the Istio ingress gateway controller pods/deployments.
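A minimal sketch of the first policy, assuming a hypothetical gateway pod label (app: istio-ingressgateway) and placeholder CIDRs; both must be adjusted to your setup, and the cluster's CNI must actually enforce NetworkPolicy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-gateway-from-allowlist
  namespace: istio-system
spec:
  podSelector:
    matchLabels:
      app: istio-ingressgateway    # hypothetical label; use your gateway pods' labels
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: xxx.xx.xx.0/24       # placeholder range from the question
    - ipBlock:
        cidr: xx.xxx.xx.0/24

As in the answer above, this only allowlists correctly when the load balancer preserves the original client IP.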

Related

Can Ingress Controllers use Selector based rules?

I have deployed a StatefulSet in AKS; my goal is to load-balance traffic to my StatefulSet.
From my understanding, I can define a LoadBalancer Service that routes traffic based on selectors, something like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: web
  selector:
    app: nginx
However, I don't necessarily want to go down the LoadBalancer route; I would prefer Ingress to do this work for me. My question is: can any ingress controller support routing rules that do path-based routing to endpoints based on selectors, instead of routing to another Service?
Update
To elaborate more on the scenario: each pod in my StatefulSet is a stateless node doing data processing of an HTTP feed. I want my ingress service to be able to load-balance traffic across these StatefulSet pods (honoring keep-alives etc.); however, given the nature of StatefulSets in k8s, they are currently exposed through a headless Service. I am not sure whether a headless Service can load-balance traffic to my StatefulSet?
Update 2
A quick search reveals that a headless Service does not load-balance:
Sometimes you don't need load-balancing and a single Service IP. In this case, you can create what are termed "headless" Services, by explicitly specifying "None" for the cluster IP (.spec.clusterIP).
As far as I know, it's not possible to do selector-based routing with Ingress.
Selector-based routing is mostly used during a blue-green or canary deployment, and you can only achieve it by using a service mesh. You can use any service mesh, like Istio or App Mesh, and do the selector-based routing there.
I have deployed a statefulset in AKS - My goal is to load balance
traffic to my statefulset.
If your goal is just to load-balance traffic, you can use an ingress controller, though I'm still not sure about the scenario you are trying to explain.
By default, a Kubernetes Service also load-balances traffic across the pods.
The flow will be something like: DNS > Ingress > ingress controller > Kubernetes Service (load balancing happens here) > any pod of the StatefulSet. A minimal Service for that purpose is sketched below.
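For illustration, a sketch of a regular (non-headless) ClusterIP Service that load-balances across the StatefulSet's pods, assuming a hypothetical pod label app: nginx:

apiVersion: v1
kind: Service
metadata:
  name: nginx-lb          # hypothetical name
spec:
  selector:
    app: nginx            # must match the StatefulSet's pod template labels
  ports:
  - port: 80
    targetPort: 80

An Ingress can then point at nginx-lb, and kube-proxy spreads the connections across all matching pods.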
+1 to Harsh Manvar's answer, but let me also add my 3 cents.
My question is can any of the ingress controller support routing rules
which can do Path based routing to endpoints based on selectors?
Instead of routing to another service.
To the best of my knowledge, the answer to your question is no, it can't, and this doesn't even depend on a particular ingress controller implementation. Note that the various ingress controllers, no matter how different their implementations may be, must conform to the general specification of the Ingress resource described in the official Kubernetes documentation. You don't have different kinds of Ingresses depending on which controller is used.
Ingress and Service work on different layers of abstraction. While a Service exposes a set of pods using a selector, e.g.:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp 👈
the path-based routing performed by an Ingress is always done between Services:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test 👈
            port:
              number: 80
I am not sure if a headless service can load balance traffic to my statefulsets?
The first answer is "no". Why?
A k8s Service is implemented by kube-proxy. Kube-proxy itself can work in two modes:
iptables (also known as netfilter)
ipvs (also known as LVS/Linux Virtual Server)
In iptables mode, load balancing is a NAT iptables rule: from the ClusterIP address to the list of Endpoints.
In ipvs mode, load balancing is a VIP (LVS Virtual IP) with the Endpoints as upstreams.
So, when you create a k8s Service with clusterIP set to None, you are saying exactly:
"I need this service WITHOUT load balancing"
Setting clusterIP to None causes kube-proxy not to create the NAT rule (iptables mode) or the VIP (ipvs mode). There will then be nothing to load-balance traffic across the pods selected by this particular Service's selector.
The second answer is "it could be". Why?
You are free to create a headless Service with the desired pod selector. A DNS query for this Service will return the list of DNS A records for the selected pods, and you can then use this data to implement load balancing YOUR way. A sketch of such a Service follows.
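A minimal sketch of such a headless Service, again assuming the hypothetical pod label app: nginx:

apiVersion: v1
kind: Service
metadata:
  name: nginx-headless    # hypothetical name
spec:
  clusterIP: None         # headless: no ClusterIP, no kube-proxy load balancing
  selector:
    app: nginx
  ports:
  - port: 80

A DNS lookup for nginx-headless.<namespace>.svc.cluster.local returns one A record per ready pod, leaving the balancing strategy to the client.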

Google Kubernetes Engine ingress annotations

I am configuring Ingress on Google Kubernetes Engine. I am new to Ingress, but as I understand it, Ingress can be served by different load balancers, and different LBs have to be configured differently.
I have started with a simple ingress config on GKE :
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: web-np
          servicePort: 8080
      - path: /v2/keys
        backend:
          serviceName: etcd-np
          servicePort: 2379
It works fine, so I have two different NodePort services, web-np and etcd-np. But now I need to extend this logic with some rewrite rules, so that a request that points to /service1 is redirected to another service1-np service, but /service1/hello.html must first be rewritten to /hello.html. That's why I have the following questions:
How can I configure rewrites in Ingress, and is it possible with the default load balancer?
What is the default load balancer on GKE?
Where can I find a list of all annotations for it? I thought the full list was at https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/ but that is a completely different list, and there is no kubernetes.io/ingress.global-static-ip-name annotation, which is widely used in Google's examples.
Ingress - API object that manages external access to the services in a cluster, typically HTTP.
Ingress may provide load balancing, SSL termination and name-based virtual hosting.
Kubernetes.io: Ingress
Kubernetes can have multiple ingress controllers. These controllers are different from each other. The ingress controllers mentioned in this particular question are:
Ingress-GCE - a default Ingress resource for GKE cluster:
Github.com: Kubernetes: Ingress GCE
Ingress-nginx - an alternative Ingress controller which can be deployed to your GKE cluster:
Github.com: Kubernetes: Ingress-nginx
The Ingress configuration you pasted will use the Ingress-GCE controller. If you want to switch to the Ingress-nginx one, you will need to deploy it and set an annotation like:
kubernetes.io/ingress.class: "nginx"
How can I configure rewrite in ingress and if it is possible with default load balancer.
There is an ongoing feature request to support rewrites with Ingress-GCE here: Github.com: Ingress-GCE: Rewrite.
You can use Ingress-nginx to get support for rewrites (a sketch addressing your /service1 case follows the list below). There is official documentation about deploying it: Kubernetes.github.io: Ingress-nginx: Deploy
For more resources about rewrites you can use:
Kubernetes.github.io: Ingress nginx: Examples: Rewrite
Stackoverflow.com: Ingress nginx how to serve assests to application - this is an answer which shows an example on how to configure a playground for experimenting with rewrites
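To illustrate the /service1 scenario from the question, a minimal Ingress-nginx sketch, assuming a hypothetical service1-np NodePort Service; the second capture group strips the /service1 prefix, so /service1/hello.html reaches the backend as /hello.html:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: rewrite-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"            # requires Ingress-nginx, not Ingress-GCE
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /service1(/|$)(.*)                    # $2 captures everything after the prefix
        backend:
          serviceName: service1-np                  # hypothetical NodePort Service
          servicePort: 80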
What is default load balancer on GKE.
If you create an Ingress resource with the default Ingress-GCE option, you will create an L7 HTTP(S) load balancer.
If you create a Service of type LoadBalancer in GKE, you will create an L4 network load balancer.
If you deploy an Ingress-nginx controller in a GKE cluster, you will create an L4 network load balancer pointing at the Ingress-nginx controller, which will then route the traffic according to your Ingress definition. If you are willing to use Ingress-nginx, you will need to specify:
kubernetes.io/ingress.class: "nginx"
in your Ingress definition.
Please take a look at this article: Medium.com: Google Cloud: Kubernetes Nodeport vs Loadbalancer vs Ingress
Where can I find a list of all annotations for it? I thought the full list was at https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/ but that is a completely different list, and there is no kubernetes.io/ingress.global-static-ip-name annotation, which is widely used in Google's examples.
The link that you provided is specifically for Ingress-nginx annotations. These annotations will not work with Ingress-GCE.
The annotations used in the GCP examples are specific to Ingress-GCE.
You can create a Feature Request for a list of available annotations for Ingress-GCE on Issuetracker.google.com.
Answering an old question, but hopefully it can help someone.
I found the list of annotations for GCP Ingress in the source code for ingress-gce.

Why is there an ADDRESS for the ingress-service? What's the use of that ADDRESS?

I deployed my cluster on GKE with an ingress controller.
I used Helm to install the following:
the ingress controller
a LoadBalancer Service (which also creates a load balancer on GCP)
I also deployed the Ingress object (config below).
Then I observed the following status:
the ingress controller is exposed (by the LoadBalancer Service) with two endpoints: 35.197.XX.XX:80 and 35.197.XX.XX:443.
These two endpoints are exposed by the cloud load balancer.
I have no problem with that.
However, when I execute kubectl get ing ingress-service -o wide, it prints out the following info:
NAME              HOSTS           ADDRESS       PORTS     AGE
ingress-service   k8s.XX.com.tw   34.87.XX.XX   80, 443   5h50m
I really don't understand the use of the IP under the ADDRESS column.
I can also see that Google adds some extra info about the load balancer IP to the end of my Ingress config file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  ....(omitted)
spec:
  rules:
  - host: k8s.XX.com.tw
    http:
      paths:
      - backend:
          serviceName: client-cluster-ip-service
          servicePort: 3000
        path: /?(.*)
      - backend:
          serviceName: server-cluster-ip-service
          servicePort: 5000
        path: /api/?(.*)
  tls:
  - hosts:
    - k8s.XX.com.tw
    secretName: XX-com-tw
status:
  loadBalancer:
    ingress:
    - ip: 34.87.XX.XX
According to Google's docs, this (34.87.XX.XX) looks like an external IP, but I can't access it with http://34.87.XX.XX.
My question: since we already have an external IP (35.197.XX.XX) to receive the traffic, why do we need this ADDRESS for the ingress-service?
Is it an internal or an external IP ADDRESS?
What is this ADDRESS bound to?
What exactly is this ADDRESS used for?
Can anyone shed some light? Thanks a lot!
If you simply take a look at the documentation, you will have your answer.
What an Ingress resource is: https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress
So following the doc:
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
To be more precise: on a cloud provider, the Ingress will create a load balancer to expose the service to the internet. The documentation on the subject, specific to GKE: https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
That explains why you have an external IP for the Ingress.
What you should do now:
If you don't want to expose HTTP and/or HTTPS ports, just delete the Ingress resource; you don't use it, so it's pretty much useless.
If you are using HTTP/HTTPS resources, change your Service type to NodePort and leave the management of the load balancer to the Ingress.
My opinion is that, as you are deploying the ingress controller, you should pick the second option and leave the management of the load balancer to it. For the ingress controller's own Ingress, don't define rules, just the default backend pointing to the NodePort Service (a sketch follows); the rules should be defined in a specific Ingress for each app and be managed by the ingress controller.
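A minimal sketch of such a rule-less Ingress, assuming a hypothetical NodePort Service named ingress-nginx-controller listening on port 80:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: controller-entry                     # hypothetical name
spec:
  backend:
    serviceName: ingress-nginx-controller    # hypothetical NodePort Service
    servicePort: 80

Every request reaching the cloud load balancer is handed to the controller, which then applies the per-app Ingress rules.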

Does GKE support nginx-ingress with static ip?

I have been using the Google Cloud Load Balancer Ingress. However, I'm trying to install an nginxinc/kubernetes-ingress controller on a node with a static IP address in GKE.
Can I use Google's Cloud Load Balancer ingress controller in the same cluster?
How can we use the nginxinc/kubernetes-ingress with a static IP?
Thanks
In case you're using Helm to deploy nginx-ingress:
First create a static IP address. In Google, network load balancers (NLBs) only support regional static IPs:
gcloud compute addresses create my-static-ip-address --region us-east4
Then install nginx-ingress via Helm, passing the IP address as the loadBalancerIP parameter:
helm install --name nginx-ingress stable/nginx-ingress --namespace my-namespace --set controller.service.loadBalancerIP=35.186.172.1
First question
As Radek 'Goblin' Pieczonka already pointed out, it is possible to do so.
I just wanted to link you to the official documentation regarding this matter:
If you have multiple Ingress controllers in a single cluster, you can
pick one by specifying the ingress.class annotation, eg creating an
Ingress with an annotation like
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "gce"
will target the GCE controller, forcing the nginx controller to ignore
it, while an annotation like
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "nginx"
will target the nginx controller, forcing the GCE controller to ignore it.
Second question
Since you are using the Google Cloud Platform, I can give you further details regarding this implementation of Kubernetes in Google.
Consider that:
By default, Kubernetes Engine allocates ephemeral external IP
addresses for HTTP applications exposed through an Ingress.
However, you can of course use static IP addresses for your Ingress resource:
there is an official step-by-step guide showing how to set up HTTP load balancing with an Ingress resource, how to link a static IP to it, and how to promote an "ephemeral" IP already in use to static.
Try to go through it, and if you face any issue, update the question and ask!
For the nginx-ingress controller you have to set the external IP on the service:
spec:
  loadBalancerIP: "42.42.42.42"
  externalTrafficPolicy: "Local"
It is perfectly fine to run multiple ingress controllers inside Kubernetes, but they need to be aware of which Ingress objects they are supposed to instantiate. That is done with a special annotation like:
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
which tells that this Ingress is expected to be provided by, and only by, the nginx ingress controller.
As for the IP: some cloud providers allow the loadBalancerIP to be specified; with this you can control the public IP of a Service.
Create a static IP:
gcloud compute addresses create my-ip --global
Describe the static IP (this will show you the allocated address):
gcloud compute addresses describe my-ip --global
Now add these annotations:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: "gce" # <----
    kubernetes.io/ingress.global-static-ip-name: my-ip # <----
Apply the Ingress:
kubectl apply -f ingress.yaml
(Now wait for about 2 minutes.)
Then this will reflect the new IP:
kubectl get ingress

Preserving remote client IP with Ingress

My goal is to make my web application (deployed on a Kubernetes 1.4 cluster) see the IP of the client that originally made the HTTP request. Since I'm planning to run the application on a bare-metal cluster, GCE and the service.alpha.kubernetes.io/external-traffic: OnlyLocal Service annotation introduced in 1.4 are not applicable for me.
Looking for alternatives, I found this question, which proposes setting up an Ingress to achieve my goal. So, I set up the Ingress and the NginX Ingress Controller. The deployment went smoothly and I was able to connect to my web app via the Ingress address on port 80. However, in the logs I still see a cluster-internal IP (from the 172.16.0.0/16 range), which means that the external client IPs are not being passed through the Ingress properly. Could you please tell me what I need to configure in addition to the above to make it work?
My Ingress' config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myWebApp
spec:
  backend:
    serviceName: myWebApp
    servicePort: 8080
As a layer 4 proxy, Nginx cannot retain the original source IP address in the actual IP packets. You can work around this using the Proxy protocol (the link points to the HAProxy documentation, but Nginx also supports it).
For this to work, however, the upstream server (meaning the myWebApp service in your case) also needs to support this protocol. If your upstream application also uses Nginx, you can enable proxy protocol support in your server configuration as documented in the official documentation.
According to the Nginx Ingress Controller's documentation, this feature can be enabled in the Ingress Controller using a Kubernetes ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
data:
  use-proxy-protocol: "true"
Specify the name of the ConfigMap in your Ingress controller manifest by adding the --nginx-configmap=<insert-configmap-name> flag to the command-line arguments, as sketched below.
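For illustration, a sketch of the relevant fragment of the controller Deployment's pod spec, with a hypothetical image tag; only the args matter here:

containers:
- name: nginx-ingress-controller
  image: nginx-ingress-controller:0.9.0      # hypothetical image/tag
  args:
  - /nginx-ingress-controller
  - --nginx-configmap=default/nginx-ingress-controller  # <namespace>/<name> of the ConfigMap above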