Fluency with forward plugin: how to add Kubernetes metadata to logs

Hey, I have a question.
I'm using logback-more-appenders (the Fluency plugin) to send logs to an EFK stack (Fluent Bit) running in a Kubernetes cluster, but the logs lack Kubernetes metadata (like node/pod names).
I know I can use <additionalField></additionalField> in logback.xml to add the service name (because that is static), but I cannot do that for dynamic parts like the node or pod name.
I tried to do it on the fluent-bit side using the kubernetes filter, but that works only with tail/systemd inputs, not the forward one (it parses the tag containing the filename, which includes the namespace and pod name). I'm using the forward plugin to send logs from Java software to Elasticsearch, and in logback.xml I cannot enter a dynamic pod name (or I don't know if I can).
Any tips on how I can do it? I would prefer to send logs using Fluency instead of sniffing the host's container logs.

In my case, the best I could think of was to change from the forward plugin to the tail plugin with structured logging (in JSON), as sketched below.
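For reference, a minimal sketch of that tail + kubernetes filter combination on the Fluent Bit side (the path, tag prefix and parser are assumptions for a typical setup and need adjusting for your runtime):

[INPUT]
    Name              tail
    Path              /var/log/containers/*.log
    Tag               kube.*
    Parser            docker           # use the cri parser on containerd-based nodes

[FILTER]
    Name              kubernetes
    Match             kube.*
    Kube_Tag_Prefix   kube.var.log.containers.
    Merge_Log         On               # merge the app's JSON log line into the record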

Have you tried passing the POD ID and NODE NAME as environment variables that logback.xml can pick up as additional fields, so that you can attach that metadata to the log events?
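For example, a minimal sketch of that idea using the downward API in the Deployment's pod template (container name and image are placeholders); the additionalField entries in logback.xml could then reference ${POD_NAME} and ${NODE_NAME} through logback's usual variable substitution:

      containers:
        - name: app                                # placeholder
          image: registry.example.com/app:latest   # placeholder
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name         # the pod's name
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName         # the node the pod runs on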

Related

Second NGINX installed but not performing load balancing

I have installed NGINX on Azure AKS as the default, from the repository, and set it up to handle HTTP and TCP traffic inside the namespace where the controller and services are installed. URLs are mapped to internal services. This works fine.
I then created another namespace and installed the same application with the same service names, but, again, in a different namespace. The installation seems to work.
I then tried to install another NGINX controller, this time in the new namespace, to control the services located there.
I used Helm and added --set controller.ingressClass="custom-class-nginx" and --set controller.ingressClassResource.name="custom-class-nginx" to the helm upgrade --install command line.
I also changed the rules configuration to use "custom-class-nginx" as the ingressClassName value (sketched below).
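Something along these lines (name, namespace, host and service are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                  # placeholder
  namespace: second-namespace   # placeholder
spec:
  ingressClassName: custom-class-nginx
  rules:
    - host: app.example.com     # placeholder
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service    # placeholder
                port:
                  number: 80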
I now see both ingresses, each in its own namespace.
The first instance, installed as default with class name "nginx" works fine.
The second instance does not load balance and I get the NGINX 404 error, when I try to go to the set URLs.
Also, when I look at the logs of the 1st (default) and the 2nd (custom) controllers in K9s, I see they both show events from the 1st controller. Yes, even in the custom controller's logs.
What am I missing? What am I doing wrong? I have read as much info as I could and it is supposed to be easy.
What configuration am I missing? What do I need to make the 2nd controller respond to the incoming URLs and route traffic?
Thanks in advance.
Moshe

How to expose cluster+project values to container in GKE (or current-context in k8s)

My container code needs to know in which environment it is running on GKE, more specifically which cluster and project. In standard Kubernetes this could be retrieved from the current-context value (gke_<project>_<cluster>).
Kubernetes has a Downward API that can push pod info to containers - see https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/ - but unfortunately nothing from "higher" entities.
Any thoughts on how this can be achieved?
Obviously I do not want to explicitly push any info at deployment time (e.g. as an env entry in the ConfigMap). I would rather deploy using a generic/common YAML and have the code retrieve the info at runtime from an env var or file and branch accordingly.
You can query the GKE metadata server from within your code. In your case, you'd want to query the /computeMetadata/v1/instance/attributes/cluster-name and /computeMetadata/v1/project/project-id endpoints to get the cluster and project. The client libraries for each supported language all have simple wrappers for accessing the metadata API as well.
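A minimal sketch of those queries from inside a pod or node, assuming plain curl is available (the Metadata-Flavor header is required by the metadata server):

curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/cluster-name
curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/project/project-id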

How to set MaxRevisionTimeoutSeconds in Knative?

I have deployed a service using Cloud Run on GKE, which uses Knative as an abstraction over k8s. The default MaxRevisionTimeoutSeconds is set to 600s in the Knative default config, but according to this PR it is customizable.
I couldn't find anything about it in the official Knative documentation; can anybody help me out here?
UPDATE:
After digging a bit more into the Knative source code and documentation, it looks like MaxRevisionTimeoutSeconds is defined in resource=ConfigMap/config-defaults, so I have to update it with a custom value.
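For reference, a sketch of what that override looks like, assuming the standard knative-serving namespace and the max-revision-timeout-seconds key (the value is only an example; a managed installation such as Cloud Run on GKE may reconcile this ConfigMap back to its own defaults):

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: knative-serving
data:
  max-revision-timeout-seconds: "1200"   # example value, in seconds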
From this it looks like we can use something called an operator to modify the ConfigMap resource, but it did not work, probably because GCP does not use the operator to install the Knative components. Anyway, I went on to install the operator and then used resource=knativeserving to overwrite config-defaults. But this also did not work when I tried re-deploying the service.
The next solution is to edit config-defaults directly using kubectl edit. I tried this too but encountered weird behavior: after editing the YAML, when I use kubectl describe to check the changed value, it sometimes shows the modified value, sometimes the old value, and sometimes doesn't show that key-value pair at all. Re-deploying the service after this edit doesn't work either.
If anyone can help me with this, it would be really great.
MaxRevisionTimeoutSeconds is a cluster-global setting which enforces the max value for TimeoutSeconds on each Revision. This value exists so that cluster administrators can set upper bounds on the amount of time a single HTTP request can be in the system. Knowing an upper bound can be useful when configuring graceful shutdown settings on the HTTP routing components to prevent dropped requests during upgrades.
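For context, the per-Revision timeout that this cap applies to sits in the Service's revision template, roughly like this (name and image are placeholders):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service                          # placeholder
spec:
  template:
    spec:
      timeoutSeconds: 600                   # per-request timeout for this Revision; capped by MaxRevisionTimeoutSeconds
      containers:
        - image: gcr.io/my-project/my-app   # placeholder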
It's possible that Cloud Run on GKE has overridden these configurations so that they can upgrade the underlying Istio and Knative components on a predictable schedule. (If you have a 10% upgrade budget and it takes 10m to drain a component, your minimum upgrade time is probably around 110m, taking into account additional scheduling / image fetch / startup time.)

Being notified of changes in the namespace of a pod

I have an application running on GCP. I want to set up a mechanism to be notified if there is any change in the namespace. There is an option to use the Kubernetes Watch to monitor changes in a namespace, but I'm looking for something that raises an event or notifies my Java application when such a change happens. I searched but could not find anything relevant; are there any options for being notified of such namespace changes?
If you are looking to forward these notifications to a third-party app, you can use the Botkube plugin.
If you want to create the application in Java, you can check the respective client library in the official documentation:
https://kubernetes.io/docs/reference/using-api/client-libraries/
Official Java client library for Kubernetes: https://github.com/kubernetes-client/java
These are good starting points, or you can use the default Kubernetes API directly, write custom code, and run that container in the same Kubernetes cluster to monitor changes in the namespace.
To do it, I would deploy an application that checks whether there are changes. For that you can use the Kubernetes API; you just need to install curl instead of kubectl, and the rest is RESTful.
curl http://localhost:8080/api/v1/namespaces/default/pods
Depending on your configuration you may need to use SSL or provide a client certificate.
You would then write a script around these Kubernetes API calls to check whether anything has changed.
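A minimal sketch of that kind of check, reusing the proxy-style address from the example above together with the API's watch parameter (a real cluster would normally require authenticating against the API server):

# one-shot list (compare resourceVersions between runs)
curl -s "http://localhost:8080/api/v1/namespaces/default/pods"
# or stream change events as they happen
curl -s -N "http://localhost:8080/api/v1/namespaces/default/pods?watch=true"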
I would use watches; it depends on your specific use case. You can start here:
https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes
https://engineering.bitnami.com/articles/kubernetes-async-watches.html
Let me know if this doesn't solve your use case, I can suggest other solutions.

echo container image:tag (URI) in kubernetes readinessProbe or livenessProbe

I have many versions and tags of containers used by Deployments in k8s (and hence many log groups).
It would be nice if I could display the container image URI and tag in a readinessProbe or livenessProbe, which would then flow into persisted logging.
Basically so that I know the Pod whose logs I am viewing is running the correct image.
I thought of simply echoing it, so I considered setting the container image URI as an environment variable in the Pod manifest.
The k8s docs on EnvVarSource say fieldRef only supports certain fields; importantly, it doesn't support grabbing the spec.containers image field.
Does anyone have smart ideas on how I might achieve this another way?
Or when/if will the Kubernetes team support this?
UPDATE:
I found that doing an echo under readinessProbe.exec.command works (the Pod reaches Ready status), but the echo output does not flow to the logs.
Only the application (server) output appears in the logs in my logging backend (CloudWatch).
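For reference, the attempt described above sketched out (name and image are placeholders); the output of an exec probe is read by the kubelet rather than written to the container's stdout, which is consistent with it never reaching the logging backend:

      containers:
        - name: app                                     # placeholder
          image: registry.example.com/my-app:v1.2.3     # placeholder URI:tag
          readinessProbe:
            exec:
              command: ["sh", "-c", "echo image=registry.example.com/my-app:v1.2.3"]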