Disable health check logs in ingress - Kubernetes

I would like to disable the logging of the health checks produced by my Ingress on my pods.
I have a GCE ingress distributing traffic to two pods, and I would like to clean up the logs I get from them.
Do you have any idea?
Thanks,

(It's not clear what you mean by disabling logs, so I'll make an assumption.)
If your application logs something when it gets a request, you can check the request's user agent to skip logging requests coming from Google Load Balancer health checking.
When you provision a GCE ingress, your app will get a Google Cloud HTTP Load Balancer (L7). This LB will make health requests with the header:
User-agent: GoogleHC/1.0
I recommend looking up the header case-insensitively ("user-agent") and then doing a case-insensitive check to see whether its value starts with "googlehc".
This way, you can distinguish Google HTTP (L7) load balancer health requests and leave them out of your logs.
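For illustration, here is a minimal sketch of that check, assuming a Flask application (the /healthz path and the logging setup are assumptions; adapt the idea to whatever framework and logger you actually use):

```python
# Sketch: skip access logging for GCLB health-check requests by inspecting the
# User-Agent header. Framework (Flask), route name and logger are assumptions.
import logging
from flask import Flask, request

app = Flask(__name__)
log = logging.getLogger("access")
logging.basicConfig(level=logging.INFO)

def is_google_health_check(req) -> bool:
    # Header lookup is case-insensitive; compare the value case-insensitively too.
    return req.headers.get("User-Agent", "").lower().startswith("googlehc")

@app.route("/healthz")
def healthz():
    return "ok", 200

@app.after_request
def access_log(response):
    if not is_google_health_check(request):
        log.info("%s %s -> %s", request.method, request.path, response.status_code)
    return response
```

If your access logs come from a reverse proxy or the ingress controller itself rather than the application, the same User-Agent condition can usually be applied in that layer's logging configuration instead.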

Related

NLB or HAProxy - Better way to perform SSL termination?

My architecture looks like this:
Here, HTTPS requests first go to Route 53 for DNS resolution, which resolves the domain to the Network Load Balancer. The NLB then forwards the traffic to HAProxy pods running inside a Kubernetes cluster.
The HAProxy servers need to read a specific request header and, based on its value, route the traffic to a backend. To keep things simple, I have kept a single Kubernetes backend cluster, but assume that there is more than one such backend cluster running.
Considering this architecture:
What is the best place to perform TLS termination? Should we do it at the NLB (green box) or at HAProxy (orange box)?
What are the advantages and disadvantages of each scenario?
Since you are using an NLB, you can also achieve end-to-end HTTPS; however, that forces the backend services to handle TLS as well.
You can terminate at the LB level if you have multiple LBs backed by clusters; leveraging AWS Certificate Manager with the LB is an easy way to manage certificates across multiple setups.
There is no guarantee that someone who gets into your network won't be able to exploit a bug and intercept traffic between services. The Software Defined Network (SDN) in your VPC is secure and protects against spoofing, but there are no absolute guarantees.
So there is an advantage to using TLS/SSL inside the VPC as well.

Azure Traffic Manager and Kubernetes Service showing Degraded

We're trying to implement Traffic Manager on top of our Azure Kubernetes services so we can run a cluster in two regions (UK West and UK South) and balance across both regions.
The Traffic Manager itself seems to be working OK, but in the Azure portal it shows as Degraded, and in the ingress controller logs on the k8s cluster I can see a request that looks like this:
[18/Sep/2019:10:40:58 +0000] "GET / HTTP/1.1" 404 153 "-" "Azure Traffic Manager Endpoint Monitor" 407 0.000 [-]
So the Traffic Manager is firing off a request; it's hitting the ingress controller, but it obviously can't resolve that path, so it returns a 404.
I've played around with the custom host header setting to point the probes at a health check endpoint in one of the pods. It did kind of work for a bit, but then it seemed to go back to doing a GET on / and went into Degraded again (yeah, I know, sounds odd).
Even if that worked, I don't really want to have to point it at a specific pod endpoint in case that pod is genuinely down for some reason. Is there something we can do in the ingress controller config to make it respond with a 200 so the Traffic Manager knows that it's up?
Cheers
I would suggest switching to TCP-based probing as a quick fix. You can change the probe protocol to TCP and choose the port your AKS ingress is listening on.
If the three-way handshake to the port fails, the probe is considered failed.
Why not expose a simple health check endpoint on the same pod where the app is hosted, rather than on a different pod? If you deploy a workaround that returns HTTP 200 from the ingress controller and the backend is down, traffic will still be routed to it, which defeats the purpose of having a probe.
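For illustration, a minimal health endpoint served by the same pod as the app might look like this (a hedged Flask sketch; the /healthz path and the dependency check are assumptions, and you would point the Traffic Manager probe path at it):

```python
# Sketch: a /healthz endpoint in the application pod itself, so the probe only
# returns 200 when the app is actually healthy. The health check body is an
# assumption; replace it with a real self-test (DB connectivity, etc.).
from flask import Flask

app = Flask(__name__)

def app_is_healthy() -> bool:
    # Placeholder: check database connectivity, required downstreams, etc.
    return True

@app.route("/healthz")
def healthz():
    if app_is_healthy():
        return "ok", 200
    return "unhealthy", 503
```

The ingress would then route the probe's host/path to this endpoint, so a failing backend genuinely shows up as Degraded instead of being masked by a blanket 200.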

Can I use the Ambassador to authenticate service-to-service communication inside a Kubernetes cluster?

I have a Kubernetes cluster with services and I use Ambassador as an API gateway between outside world and my services.
With Ambassador I know that I can use a service I already have to check authentication and authorization for incoming requests, but does this only apply to requests coming from outside the cluster?
I want to intercept service-to-service calls as well.
I would be surprised if you cannot.
This answer needs some terminology, to avoid getting lost in word-soup.
App-A is a consumer of an in-cluster Service, and the one which will be authenticating to Ambassador
App-Z is the provider of an in-cluster Service (the selector would target its Pods)
The k8s Service for app-Z we'll call z-service in the z namespace, for a FQDN of z-service.z.svc.cluster.local
It seems like you can use its v-host support and teach it to honor the in-cluster virtual host (the aforementioned FQDN), then update the z-service selector to target the Ambassador Pods rather than the underlying app-Z Pods.
From app-A's point of view, the only thing that would change is that it now must provide authentication for contacting z-service.z.svc.cluster.local.
Without studying Ambassador's setup more, it's hard to know whether Ambassador would Just Work™ at that point, or whether you would then need an "implementation" Service -- such as z-for-real.z.svc.cluster.local -- so that Ambassador knows how to find the actual app-Z Pods.
I have the same problem at the moment. Ambassador routes every request to an auth service (if one is configured); the auth service can be anything, so you can set up HTTP basic auth, OAuth, JWT auth, and so on.
The next important thing to mention is that your services can use header-based routing (https://www.getambassador.io/reference/headers). Only if a bearer token (or something similar) is present will the request reach your service; otherwise it will fail. In your service you can then check permissions and so on. So, all in all, Ambassador can help you, but you still have to program something yourself.
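As a rough illustration, the auth service Ambassador calls can be any small HTTP service. Here is a hedged Flask sketch that only lets requests through when a bearer token is present; the token validation itself is a placeholder you would have to implement:

```python
# Sketch of an external auth service that Ambassador could call before
# forwarding a request. A 200 response lets the original request through;
# any non-2xx response is returned to the caller and the request is blocked.
# Real token validation (JWT verification, introspection, ...) is a placeholder.
from flask import Flask, request

app = Flask(__name__)

def token_is_valid(token: str) -> bool:
    # Placeholder: verify a JWT signature, call an OAuth introspection endpoint, etc.
    return bool(token)

@app.route("/", defaults={"path": ""}, methods=["GET", "POST", "PUT", "DELETE"])
@app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def check_auth(path):
    auth = request.headers.get("Authorization", "")
    if auth.lower().startswith("bearer ") and token_is_valid(auth[7:].strip()):
        return "", 200
    return "unauthorized", 401
```

Because this applies to any traffic that passes through Ambassador, service-to-service calls get the same treatment as external ones, as long as they are routed through the gateway rather than hitting the target Service directly.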
If you want something that is ready out of the box or more advanced, you can try
https://github.com/ory/oathkeeper or https://istio.io.
If you have already found a solution, it would be interesting to know.

custom load balancing within kubernetes

I am trying to deploy an application with custom load balancing within Kubernetes.
Below is my intended deployment diagram:
Ideally, the application is deployed as a set of pods using a k8s Deployment labeled "backend".
Normally, user instances are stored in the archive. They are restored into one of the pods dynamically upon request, stay there for a TTL (say 30 minutes), and are then deleted and backed up into the archive.
Ideally, the load balancer is deployed as a set of pods using a k8s Deployment labeled "frontend".
Ideally, the frontend is configured for layer-7 session stickiness with "sticky = host", where the host equals the UID of a backend pod.
A user requests the service with a SOAP message, which contains the parameters "host" and "user" in its body.
When a SOAP message reaches the frontend, the "host" value is extracted from the message body.
If the "host" value is valid, the SOAP message is forwarded to the corresponding backend pod (whose UID equals the host value); otherwise, a random backend pod is assigned.
(Processing from here on is application specific.)
In a backend pod, the application checks for the availability of the user instance using the value of "user".
If it already exists, it is used directly; otherwise, the pod tries to restore it from the archive; if restoring fails (new user), a new user instance is created.
I searched around and did not find any similar examples,
especially for layer-7 session stickiness configuration and custom extraction of the sticky value from the incoming message body.
This sounds like a use case where you are doing authentication through the front-end load balancer. Have you looked at Istio and Ambassador? It seems like Istio and Envoy could provide the service mesh to route the requests to the pods. Then you would have to write a custom plugin module for Ambassador to create the specific routing and authentication mechanism you are seeking.
Example of Ambassador custom authentication service: https://www.getambassador.io/user-guide/auth-tutorial
https://www.getambassador.io/user-guide/with-istio
This custom sticky-session routing can also be done with other API gateways, while still using Istio for routing to the different pods. However, it would be best if the pods were defined as separate services, to allow easier segmentation by the API gateway (Ambassador, Kong, NGINX) based on the parameters of the message body.
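To make the required routing logic concrete, here is a hedged, standalone Python sketch of just the frontend decision described in the question (the SOAP element name "host", the pod-UID-to-IP map, and how it is kept in sync are all assumptions; a real setup would implement this inside the gateway or mesh):

```python
# Sketch of the sticky routing decision only: parse the SOAP body, extract the
# "host" value, and pick the matching backend pod, or a random one otherwise.
import random
import xml.etree.ElementTree as ET
from typing import Optional

# Hypothetical registry of backend pod UID -> pod IP, kept in sync via the k8s API.
BACKEND_PODS = {
    "pod-uid-1": "10.0.0.11",
    "pod-uid-2": "10.0.0.12",
}

def extract_host(soap_body: bytes) -> Optional[str]:
    """Pull the <host> element's text out of the SOAP body, ignoring namespaces."""
    root = ET.fromstring(soap_body)
    for elem in root.iter():
        if elem.tag.rsplit("}", 1)[-1] == "host":  # strip any namespace prefix
            return (elem.text or "").strip() or None
    return None

def pick_backend(soap_body: bytes) -> str:
    """Return the IP of the sticky backend pod, or a random pod if the host is unknown."""
    host = extract_host(soap_body)
    if host in BACKEND_PODS:
        return BACKEND_PODS[host]
    return random.choice(list(BACKEND_PODS.values()))
```

A production frontend would then proxy the request to the chosen pod and keep the registry current (for example by watching pods via the Kubernetes API); an Envoy/Istio filter or an Ambassador plugin would replace this hand-rolled logic.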

liveness probes for manually created Endpoints

Is this a thing?
I have some legacy services which will never run in Kubernetes that I currently make available to my cluster by defining a service and manually uploading an endpoints object.
However, the service is horizontally sharded and we often need to restart one of the endpoints. My google-fu might be weak, but I can't figure out whether Kubernetes is clever enough to stop the Service from repeatedly trying the dead endpoint.
The ideal behavior is that the proxy should detect the outage, mark the endpoint as failed, and at some point when the endpoint comes back re-admit it into the full list of working endpoints.
BTW, I understand that at present, liveness probes are HTTP only. This would need to be a TCP probe because it's a replicated database service that doesn't grok HTTP.
I think the design is for the thing managing the endpoint addresses to add/remove them based on liveness. For services backed by pods, the pod IPs are added to endpoints based on the pod's readiness check. If a pod's liveness check fails, it is deleted and its IP removed from the endpoint.
If you are manually managing endpoint addresses, the burden is currently on you (or your external health checker) to maintain the addresses/notReadyAddresses in the endpoint.
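As an illustration of what such an external health checker could look like, here is a hedged Python sketch using the official kubernetes client (the Endpoints name "legacy-db", the namespace, and running it on a timer are assumptions; it needs RBAC permission to read and replace Endpoints):

```python
# Sketch: TCP-probe each manually managed endpoint address and move failures to
# notReadyAddresses so the Service stops sending traffic to them, re-admitting
# them once they answer again. Resource names and scheduling are assumptions.
import socket
from kubernetes import client, config

def tcp_alive(ip: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def reconcile(name: str = "legacy-db", namespace: str = "default") -> None:
    config.load_kube_config()  # use config.load_incluster_config() inside the cluster
    v1 = client.CoreV1Api()
    ep = v1.read_namespaced_endpoints(name, namespace)
    for subset in ep.subsets or []:
        port = subset.ports[0].port
        ready, not_ready = [], []
        for addr in (subset.addresses or []) + (subset.not_ready_addresses or []):
            (ready if tcp_alive(addr.ip, port) else not_ready).append(addr)
        subset.addresses = ready or None
        subset.not_ready_addresses = not_ready or None
    v1.replace_namespaced_endpoints(name, namespace, ep)

if __name__ == "__main__":
    reconcile()  # run this periodically, e.g. from a CronJob
```

Run on a short interval, this approximates the TCP probe behaviour the question asks for: dead shards are pulled out of the address list and re-admitted when they come back.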