Kubernetes Cluster with Ingress - Remote IP - ibm-cloud

We are hosting our cluster at IBM Bluemix and use the out-of-the-box setup, nothing really modified.
We have a few ingress controllers that handle routing to our internal microservices (around 12 of them).
We use all the standard ALBs, Ingress, etc. from Bluemix.
Inside our microservices we want to be able to log the IP address that some specific calls come from, for audit purposes. All the microservices are receiving the x-real-ip and x-forwarded-for headers, but they contain an incorrect IP address. I think it's the load balancers or something similar that is causing the incorrect IP.
How do I solve this?

Related

Connecting to many kubernetes services from local machine

From my local machine I would like to be able to port forward to many services in a cluster.
For example I have services named serviceA-type1, serviceA-type2, serviceA-type3, etc. None of these services are accessible externally, but they can be accessed using the kubectl port-forward command. However, there are so many services that port-forwarding to each one is infeasible.
Is it possible to create some kind of proxy service in Kubernetes that would allow me to connect to any of the serviceA-typeN services by specifying them in a URL? I would like to be able to port-forward to the proxy service from my local machine, and it would then forward the requests to the serviceA-typeN services.
So for example, if I have set up a port forward on 8080 to this proxy, then the URL to access the serviceA-type1 service might look like:
http://localhost:8080/serviceA-type1/path/to/endpoint?a=1
I could maybe create a small application that would do this but does kubernetes provide this functionality already?
The kubectl proxy command provides this functionality.
Read more here: https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/#manually-constructing-apiserver-proxy-urls
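A minimal sketch of how that looks for the example above (assuming the services live in the default namespace; if a service exposes multiple ports, append :<port-name> to the service name):

kubectl proxy --port=8080
# The apiserver now proxies URLs of this form to the named service:
curl "http://localhost:8080/api/v1/namespaces/default/services/serviceA-type1/proxy/path/to/endpoint?a=1"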
A good option is to use Ingress to achieve it.
Read more about what Ingress is.
Main concepts are:
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
In Kubernetes we have four types of Services, and the default type is ClusterIP, which means the service is only reachable within the cluster. Ingress exposes your service outside the cluster, so Ingress acts as the entry point into your cluster.
If you plan to move to the cloud with Ingress (I assume you will, since more and more applications run in the cloud), it will be compatible with cloud services, which eventually saves time and makes migrating from a local environment easier.
To start with ingress you need to install an Ingress controller first.
There are different ingress controllers which you can use.
You can start with most common ingress-nginx which is supported by kubernetes community.
If you're using minikube, then it can be enabled as an addon - see here.
Once you have installed an ingress controller in your cluster, you need to create a rule for it to work. Simple fanout is an example with two services and path-based routing to them, as sketched below.
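A minimal sketch of a simple-fanout Ingress for the nginx controller (the host, service names, and ports are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout
spec:
  ingressClassName: nginx
  rules:
  - host: example.local        # placeholder host
    http:
      paths:
      - path: /serviceA-type1  # path-based routing to the first service
        pathType: Prefix
        backend:
          service:
            name: serviceA-type1
            port:
              number: 80
      - path: /serviceA-type2  # ...and to the second
        pathType: Prefix
        backend:
          service:
            name: serviceA-type2
            port:
              number: 80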

Getting client IP using Knative and Anthos

We use Google Cloud Run on our K8s cluster on GCP, which is powered by Knative and Anthos. However, it seems the load balancer doesn't amend x-forwarded-for (not that this is expected of a TCP load balancer), and Istio doesn't do it either.
Do you have the same issue or it is limited to our deployment?
I understand Istio supports this as part of their upcoming Gateway Network Topology, but not in the current GCP version.
I think you are correct in assessing that the current Cloud Run for Anthos setup (unintentionally) does not let you see the origin IP address of the user.
As you said, the created gateway for Istio/Knative in this case is a Cloud Network Load Balancer (TCP) and this LB doesn’t preserve the client’s IP address on a connection when the traffic is routed to Kubernetes Pods (due to how Kubernetes networking works with iptables etc). That’s why you see an x-forwarded-for header, but it contains internal hops (e.g. 10.x.x.x).
I am following up with our team on this. It seems that it was not noticed before.

Different Firewall Rules for Kubernetes Cluster

I am running some internal services and also some customer-facing services in one K8s cluster. The internal ones should only be accessible from some specific IPs, and the customer-facing services should be accessible worldwide.
So I created my Ingresses and an nginx ingress controller and some K8s LoadBalancer Services with the proper IP filters.
Now I see that those firewall rules in GCP are created behind the scenes. But they conflict: the "customer facing" firewall rules overrule the "internal" ones, so everything in my K8s cluster is visible worldwide.
The use case doesn't sound that exotic to me - do you have an idea how to get some parts of a K8s cluster protected by firewall rules while keeping others accessible from everywhere?
As surprising as it is, the L7 (http/https) load balancer in GCP created by a Kubernetes Ingress object has no IP whitelisting capabilities by default, so what you described is working as intended. You can filter on your end using the X-Forwarded-For header (see Target Proxies under Setting Up HTTP(S) Load Balancing).
Whitelisting will be available through Cloud Armor, which is in private beta at the moment.
To make this situation slightly more complicated: the L4 (tcp/ssl) load balancer in GCP created by a Kubernetes LoadBalancer object (so, not an Ingress) does have IP filtering capability. You simply set .spec.loadBalancerSourceRanges on the Service for that (a sketch follows below). Of course, a Service will not give you URL/host-based routing, but you can achieve that by deploying an ingress controller like nginx-ingress. If you go this route, you can still create Ingresses for your internal services; you just need to annotate them so the new ingress controller picks them up. This is a fairly standard solution, and it is actually cheaper than creating L7s for each of your internal services (you will only have to pay for one forwarding rule for all of your internal services).
(By "internal services" above I meant services you need to be able to access from outside of the itself cluster but only from specific IPs, say a VPN, office, etc. For services you only need to access from inside the cluster you should use type: ClusterIP)

kubernetes on gke / why is the use of a load balancer enforced?

Made my way into Kubernetes through GKE; currently trying it out via kubeadm on bare metal.
In the latter environment, there is no need for any specific load balancer: using nginx-ingress and Ingresses lets one serve services to the web.
Conversely, on GKE, using the same nginx-ingress, or using the GKE-provided L7, you always end up with a billed load balancer.
What's the reason for that, as it seems not to be ultimately needed?
(Reposting my comment above)
In general, when one is receiving traffic from the outside world, that traffic is being sent to one or more non-ACLd public IP addresses.
If you run k8s on bare metal, those machines can have public IPs, and you can just run ingress on one or more of them.
A managed k8s environment, however, for security reasons, will not permit nodes to have public IPs.
Instead, managed load balancers are allowed to have public IPs. Those are configured to know the private node IPs hosting ingress for your cluster and will direct traffic accordingly.
Kubernetes Services have a few types, each building on the previous one: ClusterIP, NodePort, and LoadBalancer. Only the last one will provision a load balancer in a cloud environment, so you can avoid it on GKE without fuss. The question is, what then? In the best case you end up with an Ingress (I assume we expose ingress, as in your question) that is available on volatile IPs (nodes can be rolled at any time and new ones will get new IPs) and on high ports given by a NodePort service. That means not only do you have no fixed IP to use, but you would also need to open something like http://<node-ip>:31978, which is obviously crap. Hence, in the cloud, you have a simple solution of putting a cloud load balancer in front of it with the LoadBalancer service type. This LB ingests the traffic on port 80/443 and forwards it to the correct backing service/pods.
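A minimal sketch of the Service that triggers this (the name and selector are placeholders); it is the type: LoadBalancer line that makes GKE provision the billed cloud LB in front of the ingress controller:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
spec:
  type: LoadBalancer     # this is what provisions the cloud load balancer on GKE
  selector:
    app: nginx-ingress   # placeholder: the ingress controller pods
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443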

Limiting access by IP in kubernetes on GCP's GKE

I am running Kubernetes (k8s) on top of Google Cloud Platform's Container Engine (GKE) and Load Balancers (GLB). I'd like to limit access at a k8s Ingress to an IP whitelist.
Is this something I can do in k8s or GLB directly, or will I need to run things via a proxy which does it for me?
The way to whitelist source IPs in nginx-ingress is with the annotation below.
ingress.kubernetes.io/whitelist-source-range
But unfortunately, Google Cloud Load Balancer does not have support for it, AFAIK.
If you're using the nginx ingress controller, you can use it.
The value of the annotation can be comma-separated CIDR ranges.
More on whitelist annotation.
Issue tracker for progress on Google Cloud Load Balancer support for whitelisting source IPs.
Nowadays you can use nginx.ingress.kubernetes.io/whitelist-source-range as specified here: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#whitelist-source-range
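A minimal sketch using that annotation (the host, service, and CIDR ranges are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whitelisted-ingress
  annotations:
    # comma-separated CIDR ranges allowed to reach this Ingress
    nginx.ingress.kubernetes.io/whitelist-source-range: "203.0.113.0/24,198.51.100.7/32"
spec:
  ingressClassName: nginx
  rules:
  - host: internal.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: internal-service
            port:
              number: 80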
You need to be sure that you are forwarding external IPs to your services - https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
And if you are using NGINX Ingress, make sure you set externalTrafficPolicy: Local on your ingress controller's Service.
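For example (a sketch; the Service name and namespace depend on how the controller was installed):

kubectl patch svc ingress-nginx-controller -n ingress-nginx \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'

With Local, traffic is only routed to pods on the node that received it, so kube-proxy does not SNAT away the client's source IP.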
You can use Cloud Armor: add a policy, create your allow/deny rules, then simply attach the k8s LB as the target.
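On GKE, one way to attach such a policy (a sketch; the config and policy names are placeholders) is a BackendConfig, which the backing Service then references via the cloud.google.com/backend-config annotation:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: armor-config
spec:
  securityPolicy:
    name: my-cloud-armor-policy   # placeholder: the Cloud Armor policy name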
GCP's firewall rules cannot be applied to the Global Load Balancer that is attached to an Ingress created on GKE. If you want to restrict access to only specific IP addresses (for example, users connecting via VPN; in this case the VPN gateway's IP address), then there is no out-of-the-box solution on GCP, especially on GKE.
Nginx and the HTTP header "x-forwarded-for" to the rescue
If you are using GKE, chances are that you have a microservices architecture and are using an API gateway, and chances are that nginx is that API gateway. All that needs to be done is to configure nginx to only allow requests whose x-forwarded-for header carries the following IPs:
user.ext.static.ip → Public IP of the client
app.global.static.ip → Global static IP assigned to Ingress
nginx conf
location /my_service {
    # validate the caller's IP before proxying to the upstream
    rewrite_by_lua_file validate_ip.lua;
    proxy_pass http://my_service;
}
validate_ip.lua
local cjson = require "cjson"

local status = ""
local headers = ngx.req.get_headers()
-- x-forwarded-for as seen here: "client ip, ingress global static ip"
local source_ips = headers["x-forwarded-for"]
-- placeholder values standing in for user.ext.static.ip and app.global.static.ip above
if source_ips ~= "111.222.333.444, 555.666.777.888" then
    status = "NOT_ALLOWED"
end
if status ~= "" then
    ngx.status = ngx.HTTP_UNAUTHORIZED
    ngx.header.content_type = "application/json; charset=utf-8"
    ngx.say(cjson.encode({ status = "ERROR", message = status .. "YOUR_MESSAGE" }))
    return ngx.exit(ngx.HTTP_UNAUTHORIZED)
end
For more details read here
You could use CORS and only allow your frontend's origin to hit your microservices (note that CORS restricts by browser origin rather than by IP).