How to intercept requests to a service in Kubernetes?

Let's say I define a Service named my-backend in Kubernetes. I would like to intercept every request sent to this Service; what is the proper way to do it? For example, another container in the same namespace sends a request to http://my-backend.
I tried to use an admission controller with a validating webhook. However, it can only intercept CRUD operations on Service resources; it fails to intercept any connection made to a specific service.

There is no direct way to intercept requests to a Service in Kubernetes. As a workaround, this is what you can do:
- Create a sidecar container whose only job is to log each incoming request (a sketch follows this list).
- Run tcpdump -i eth0 -n in your containers and filter the captured traffic for the requests you care about.
- Use a distributed tracing system such as Zipkin.
- Services on cloud providers will have their own logging mechanism; for example, a load balancer service on AWS can be configured to deliver its access logs to S3.
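A minimal sketch of the sidecar idea, assuming a hypothetical pod, image, and port; containers in a pod share one network namespace, so a tcpdump sidecar sees all traffic addressed to the application container:
apiVersion: v1
kind: Pod
metadata:
  name: my-backend
  labels:
    app: my-backend
spec:
  containers:
    - name: app
      image: my-backend:latest        # hypothetical application image
      ports:
        - containerPort: 8080         # hypothetical application port
    - name: sniffer
      image: nicolaka/netshoot        # any image that ships tcpdump would do
      command: ["tcpdump", "-i", "eth0", "-n", "port", "8080"]
      securityContext:
        capabilities:
          add: ["NET_RAW"]            # tcpdump needs raw-socket capability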

You can use a service mesh such as Istio. An Istio service mesh deploys an Envoy proxy sidecar along with every pod. Envoy intercepts all incoming requests to the pod and can provide you with metrics such as the number of requests. A service mesh also brings in more features, such as distributed tracing and rate limiting.
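For example, with Istio installed, labeling a namespace turns on automatic sidecar injection, so every pod created in it afterwards gets the intercepting Envoy proxy (shown here for the default namespace):
apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio-injection: enabled   # tells Istio to inject an Envoy sidecar into new pods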

The Kubernetes NetworkPolicy object will help with this. A network policy controls how groups of pods can communicate with each other and with other network endpoints. You can allow ingress traffic to the my-backend pods only from selected pods, based on a pod selector. Below is an example that allows ingress traffic only from specific frontend pods:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-only-from-frontend-to-my-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      <my-backend pod label>
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              <Frontend web pod label>
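For concreteness, here is the same policy with hypothetical labels app: my-backend on the backend pods and app: frontend on the permitted callers:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-only-from-frontend-to-my-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: my-backend       # hypothetical label on the my-backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # hypothetical label on the allowed frontend pods
Note that NetworkPolicy is only enforced if the cluster's network plugin supports it (for example Calico or Cilium).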

Related

Istio access to container SSL endpoint

My application is running an SSL Node.js server with mutual authentication.
How do I tell k8s to access the container through HTTPS?
How do I forward the client SSL certificates to the container?
I tried to set up a Gateway and a virtual host without success. In every configuration I tried, I hit a 503 error.
The Istio sidecar proxy container (when injected into the pod) will automatically handle communication over HTTPS. The application code can continue to use HTTP, and the Istio sidecar will intercept the request, "upgrading" it to HTTPS. The sidecar proxy in the receiving pod will then handle "downgrading" the request to HTTP for the application container.
Simply put, there is no need to modify any application code. The Istio sidecar proxies requests and responses between Kubernetes pods with TLS/HTTPS.
UPDATE:
If you wish to use HTTPS at the application level, you can tell Istio to exclude certain inbound and outbound ports. To do so, you can add the traffic.sidecar.istio.io/excludeInboundPorts and traffic.sidecar.istio.io/excludeOutboundPorts annotations, respectively, to the Kubernetes deployment YAML.
Example:
...
spec:
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      annotations:
        traffic.sidecar.istio.io/excludeInboundPorts: "443"
        traffic.sidecar.istio.io/excludeOutboundPorts: "443"
      labels:
...
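Relatedly, if you want to require (rather than merely allow) mutual TLS between sidecars, Istio's PeerAuthentication resource can enforce it. A minimal sketch for a single namespace:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: default
spec:
  mtls:
    mode: STRICT   # reject plaintext traffic to sidecar-injected pods in this namespace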

Within a k8s cluster, should I always call the ingress rule or the NodePort service name?

I have a number of RESTful services within our system.
Some are within the Kubernetes cluster.
Others are on legacy infrastructure and are hosted on VMs.
Many of our RESTful services make synchronous calls to each other (so not asynchronously via message queues).
We also have a number of UIs (fat clients or web apps) that make use of these services.
We might define a simple k8s manifest file like this, containing a Pod, a Service, and an Ingress:
apiVersion: v1
kind: Pod
metadata:
  name: "orderManager"
spec:
  containers:
    - name: "orderManager"
      image: "gitlab-prem.com:5050/image-repo/orderManager:orderManager_1.10.22"
---
apiVersion: v1
kind: Service
metadata:
  name: "orderManager-service"
spec:
  type: NodePort
  selector:
    app: "orderManager"
  ports:
    - protocol: TCP
      port: 50588
      targetPort: 50588
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orderManager-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: "orderManager-service"
                port:
                  number: 50588
I am really not sure what the best way is for RESTful services on the cluster to talk to each other.
It seems there is only one good route for callers outside the cluster, which is to use the URL built by the ingress rule.
There are two options within the cluster. This table might illustrate it further with an example:

| Caller | Receiver | Example URL | Notes |
| --- | --- | --- | --- |
| UI | On cluster | http://clusterip/orders | The UI uses the cluster IP and the ingress rule to reach the order manager |
| Service off cluster | On cluster | http://clusterip/orders | Just like the UI |
| On cluster | On cluster | http://clusterip/orders | Could use the ingress rule, like the approaches above |
| On cluster | On cluster | http://orderManager-service:50588/ | Could use the service name and port directly |
I write "cluster ip" a few times above, but in real life we put something in front so there is a friendly name, like http://mycluster/orders.
So when caller and receiver are both on the cluster, is it either:
- use the ingress rule, which is also used by services and apps outside the cluster, or
- use the NodePort service name, which is used in the ingress rule,
or perhaps something else?
One benefit of using the NodePort service name is that you do not have to change your base URL. The ingress rule appends an extra element to the route (in the above case, orders), so when I move a RESTful service from legacy infrastructure to the k8s cluster it will increase the complexity.
It depends on whether you want requests to be routed through your ingress controller or not.
Requests sent to the full URL configured in your Ingress resource will be processed by your ingress controller. The controller itself (NGINX in this case) will proxy the request to the Service. The request will then be routed to a Pod.
Sending the request directly to the Service's URL simply skips your ingress controller. The request is routed directly to a Pod.
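For reference, inside the cluster a Service is reachable through cluster DNS. Assuming the Service from the question lives in the default namespace, either of these URLs would bypass the ingress controller (the short form works from within the same namespace, the fully qualified form from anywhere in the cluster):
http://orderManager-service:50588/
http://orderManager-service.default.svc.cluster.local:50588/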
The trade-offs between the two options depend on your setup.
Sending requests through your ingress controller will increase request latency and resource consumption. If your ingress controller does nothing other than route requests, I would recommend sending requests directly to the Service.
However, if you use your ingress controller for other purposes, like authentication, monitoring, logging, or tracing, then you may prefer that the controller process internal requests.
For example, on some of my clusters I use the NGINX ingress controller to measure request latency and track HTTP response statuses. I route requests between apps running in the same cluster through the ingress controller in order to have that information available. I pay the cost of increased latency and resource usage in order to have improved observability.
Whether the trade-offs are worth it in your case is up to you. If your ingress controller does nothing more than basic routing, then my recommendation is to skip it entirely. If it does more, then you need to weigh the pros and cons of routing internal requests through it.

Can Ingress Controllers use Selector based rules?

I have deployed a statefulset in AKS; my goal is to load balance traffic to my statefulset.
From my understanding, I can define a LoadBalancer Service that can route traffic based on selectors, something like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
    - port: 80
      name: web
  selector:
    app: nginx
However, I don't necessarily want to go down the LoadBalancer route; I would prefer Ingress to do this work for me. My question is: can any ingress controller support routing rules that do path-based routing to endpoints based on selectors, instead of routing to another Service?
Update
To elaborate more on the scenario: each pod in my statefulset is a stateless node doing data processing of an HTTP feed. I want my ingress to be able to load balance traffic across these statefulset pods (honoring keep-alives etc.); however, given the nature of statefulsets in k8s, they are currently exposed through a headless service. I am not sure if a headless service can load balance traffic to my statefulsets.
Update 2
A quick search reveals that a headless service does not load balance:
Sometimes you don't need load-balancing and a single Service IP. In this case, you can create what are termed "headless" Services, by explicitly specifying "None" for the cluster IP (.spec.clusterIP).
As far as I know, it's not possible to do selector-based routing with ingress.
Selector-based routing is mostly used during a blue-green or canary deployment, and you can only achieve it by using a service mesh. You can use any service mesh, such as Istio or AWS App Mesh, to do selector-based routing (a sketch follows).
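For illustration only, this is roughly what label-based (selector-based) routing looks like in Istio; the host myapp and the version labels are hypothetical:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myapp
spec:
  host: myapp            # the Kubernetes Service fronting the pods
  subsets:
    - name: v1
      labels:            # pods are selected by their labels
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
    - myapp
  http:
    - route:
        - destination:
            host: myapp
            subset: v1
          weight: 90     # e.g. canary: 90% of traffic to v1 pods...
        - destination:
            host: myapp
            subset: v2
          weight: 10     # ...and 10% to v2 pods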
I have deployed a statefulset in AKS - My goal is to load balance traffic to my statefulset.
If your goal is just to load balance traffic, you can use the ingress controller, though I'm still not sure about the exact scenario you are trying to explain.
By default, a Kubernetes Service also load balances traffic across the pods behind it.
The flow will be something like: DNS > ingress > ingress controller > Kubernetes Service (load balancing happens here) > any pod of the statefulset.
+1 to Harsh Manvar's answer, but let me also add my 3 cents.
My question is can any of the ingress controller support routing rules which can do Path based routing to endpoints based on selectors? Instead of routing to another service.
To the best of my knowledge, the answer to your question is no, it can't, and this doesn't even depend on a particular ingress controller implementation. Note that the various ingress controllers, no matter how different they may be in implementation, must conform to the general specification of the Ingress resource described in the official Kubernetes documentation. You don't have different kinds of ingresses depending on which controller is used.
Ingress and Service work on different layers of abstraction. While a Service exposes a set of pods using a selector, e.g.:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp 👈
path-based routing performed by Ingress is always done between Services:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /testpath
            pathType: Prefix
            backend:
              service:
                name: test 👈
                port:
                  number: 80
I am not sure if a headless service can load balance traffic to my statefulsets?
The first answer is "no". Why?
A k8s Service is implemented by kube-proxy. Kube-proxy itself can work in two modes:
- iptables (also known as netfilter)
- ipvs (also known as LVS/Linux Virtual Server)
In iptables mode, load balancing is a NAT iptables rule: from the ClusterIP address to the list of Endpoints.
In ipvs mode, load balancing is a VIP (an LVS Virtual IP) with the Endpoints as upstreams.
So, when you create a k8s Service with clusterIP set to None, you are saying exactly:
"I need this service WITHOUT load balancing"
Setting clusterIP to None causes kube-proxy NOT to create a NAT rule in iptables mode or a VIP in ipvs mode. There will be nothing to load balance traffic across the pods selected by this particular Service's selector.
The second answer is "it could be". Why?
You are free to create a headless Service with the desired pod selector. A DNS query for this Service will return the list of DNS A records for the selected pods. Then you can use this data to implement load balancing YOUR way (a sketch follows).
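A minimal sketch of such a headless Service, reusing the hypothetical app: MyApp label from the earlier example:
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service
spec:
  clusterIP: None    # headless: no ClusterIP, so kube-proxy does no load balancing
  selector:
    app: MyApp
  ports:
    - port: 80
A DNS lookup for my-headless-service.<namespace>.svc.cluster.local then returns one A record per ready pod, and your client can pick among them however it likes.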

Kubernetes load balancing

I'm studying Kubernetes (without regard to a specific cloud provider), and it's not clear to me whether the most generic Service (not the Service of type LoadBalancer) works as an internal load balancer among the various replicas of a single microservice.
So how do I implement internal load balancing among replicas without exposing the microservice to outside traffic?
You can use the Kubernetes Service object, which sits on top of the pods.
The Service object manages connections and traffic, and it can also be used as an internal load balancer.
You can create a Service with a YAML file:
kind: Service
apiVersion: v1
metadata:
  name: myapp-service
spec:
  selector:
    app: Myapp
  ports:
    - port: 80
      targetPort: 9376
On the basis of the same selector in the pod metadata, the Service diverts traffic to those pods.
Just use matching selectors in the Service spec and the pod labels (see the sketch below).
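For illustration, a hypothetical pod that the Service above would select; note that the label matches the Service's selector and the container port matches its targetPort:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: Myapp                  # matches the Service selector above
spec:
  containers:
    - name: myapp
      image: myapp:1.0          # hypothetical image
      ports:
        - containerPort: 9376   # matches the Service targetPort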
In order to create an internal load balancer, you will need to create a Service based on selectors, so it finds the correct pods to direct traffic to.
In order for the pods to be blocked from outside traffic, the Service should be of type ClusterIP.
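A minimal sketch of such a cluster-internal Service (ClusterIP is also the default when no type is given; the names and ports are hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: my-internal-service
spec:
  type: ClusterIP        # reachable only from inside the cluster
  selector:
    app: Myapp
  ports:
    - port: 80
      targetPort: 9376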

How to allow/deny http requests from other namespaces of the same cluster?

In a cluster with 2 namespaces (ns1 and ns2), I deploy the same app (deployment) and expose it with a service.
I thought separate namespaces would prevent executing curl http://deployment.ns1 from a pod in ns2, but apparently it's possible.
So my question is, how to allow/deny such cross namespaces operations? For example:
pods in ns1 should accept requests from any namespace
pods (or service?) in ns2 should deny all requests from other namespaces
It's good that you are working with namespace isolation.
Deploy a new NetworkPolicy in ns1 with an allow-all ingress rule. You can look up the documentation for defining a network ingress policy that allows all inbound traffic.
Likewise for ns2, create a new NetworkPolicy that denies ingress from other namespaces and deploy it in ns2. Again, the docs will come to the rescue with the YAML constructs (a sketch for ns2 follows the ns1 example below).
It may look something like this:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: ns1
  name: web-allow-all-namespaces
spec:
  podSelector:
    matchLabels:
      app: app_name_ns1
  ingress:
    - from:
        - namespaceSelector: {}
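And for ns2, a sketch of a policy that allows traffic only from pods in the same namespace, which effectively denies requests from all other namespaces (an empty podSelector under from: matches only pods in the policy's own namespace):
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: ns2
  name: deny-from-other-namespaces
spec:
  podSelector: {}           # applies to every pod in ns2
  ingress:
    - from:
        - podSelector: {}   # only pods in ns2 itself may connect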
It may not be the answer you want, but I can provide helpful information on the features that implement your requirements.
AFAIK, Kubernetes lets you define network policies to limit network access.
Refer to Declare Network Policy for more details on NetworkPolicy.
See also: Default policies, and Setting a Default NetworkPolicy for New Projects in the case of OpenShift.