EFK - Have preconfigured filters by container that will appear in Kibana - Kubernetes

I've got the EFK stack installed on kubernetes following this addon: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
What I want to achieve is having all the logs of the same pod together, and maybe some other filters as well. But I don't want to configure the filters in Kibana through the GUI; I'd like them to be preconfigured, so that the containers I already know I want to monitor are set up when Kibana is installed, rather than requiring an additional step to import/export them. I'd like to have the predefined filters in such a way that, immediately after the installation, I can go to "Discover", select the pod name that I want to see, and then see all the logs in the format:
My understanding, given that my experience with this stack is close to zero, is that editing the fluentd-configmap.yml with the correct parameters should do the trick, but none of my attempts has changed what I see in Kibana.
Am I looking in the correct place, or is this filter not meant for this use and I'm completely wasting my time? How could I build this filter in any case?
Any help, even if it is only a hint, would be appreciated.

Related

Kubernetes: Policy check before container execution

I am new to Kubernetes. I am looking to see whether it's possible to hook into the container execution lifecycle events in the orchestration process, so that I can call an API with the details of the container and check whether it is allowed to run in the given environment, location, etc.
An example check could be: a container may only be run in European or US data centers, so if someone tries to run this container in a data center outside those regions, it should not be allowed.
Is this possible and what is the best way to achieve this?
You could set up an ImagePolicy admission controller in the clusters, where you describe from which registries it is allowed to pull images.
kube-image-bouncer is an example of an ImagePolicy admission controller: a simple webhook endpoint server that can be used to validate the images being created inside the Kubernetes cluster.
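For illustration, wiring the ImagePolicyWebhook admission plugin on the API server to such a webhook (kube-image-bouncer or your own) uses an admission configuration roughly like the one below; the file paths, the TTL values and the exact apiVersion (it depends on your Kubernetes version) are assumptions:
# Passed to kube-apiserver via --admission-control-config-file (paths are assumptions).
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: ImagePolicyWebhook
    configuration:
      imagePolicy:
        # kubeconfig pointing at the webhook endpoint (e.g. kube-image-bouncer)
        kubeConfigFile: /etc/kubernetes/admission/image-bouncer.kubeconfig
        allowTTL: 50
        denyTTL: 50
        retryBackoff: 500
        defaultAllow: false   # reject images if the webhook cannot be reached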
If you don't want to start from scratch, there is a Cloud Native Computing Foundation (incubating) project, Open Policy Agent, with support for Kubernetes that seems to offer what you want. (I am not affiliated with the project.)

How to enable systemd collector in docker-compose.yml file for node exporter

Hi, I'm new to Prometheus. I have a task to make Prometheus show systemd service metrics (I use Grafana for visualization). I'm using the stefanprodan/dockprom example as my starting point, but I couldn't find how to enable the systemd collector for node exporter in the node exporter section of the docker-compose.yml while keeping all the collectors that are enabled by default. I also need help with getting that information into Grafana. I would appreciate example code, or a pointer to a clear, beginner-friendly explanation, because I'm not experienced. Thanks in advance.
In order to enable the systemd collector in node_exporter, the command line flag --collector.systemd needs to be passed to the exporter (reference). The default collectors will remain enabled, so you don't need to worry about that.
To pass that flag to the application, you need to add it to the command portion of the nodeexporter section of the Docker Compose file (here).
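A rough sketch of what that could look like, based on the dockprom layout (the image tag and the other flags, volumes and options in your copy will differ; the extra /run/systemd mount is an assumption, since the collector talks to systemd from inside the container):
nodeexporter:
  image: prom/node-exporter:latest
  volumes:
    - /proc:/host/proc:ro
    - /sys:/host/sys:ro
    - /:/rootfs:ro
    - /run/systemd:/run/systemd:ro   # may be required for the systemd collector
  command:
    - '--path.procfs=/host/proc'
    - '--path.sysfs=/host/sys'
    - '--path.rootfs=/rootfs'
    - '--collector.systemd'          # enables the systemd collector; defaults stay enabled
  restart: unless-stopped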
As for sending the data to Grafana: as long as you have your Prometheus data source configured in Grafana, those metrics will show up automatically -- you don't need to update the Prometheus->Grafana connection when adding or removing metrics (or really ever, after the initial setup).
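For reference, a Grafana provisioning file along these lines is what defines the Prometheus data source (a sketch; dockprom already sets this up for you, possibly in a slightly different form, and the URL assumes Prometheus is reachable as "prometheus" on the Compose network):
# e.g. grafana/provisioning/datasources/datasource.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true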

Logging Kubernetes with an external ELK stack

Is there any documentation out there on sending logs from containers in K8s to an external ELK cluster running on EC2 instances?
We're in the process of trying to set up Kubernetes, and I'm trying to figure out how to get the logging to work correctly. We already have an ELK stack set up on EC2 for current versions of the application, but most of the documentation out there seems to refer to ELK as it's deployed inside the K8s cluster.
I am also working on the same cause.
First, you should know which logging driver your Docker containers use to manage logs (json-file, journald, etc. - read here).
After that, you should use some log collector in your architecture to send the logs to the Logstash endpoint. You can use Filebeat or Fluent Bit; they are lightweight alternatives to Logstash and Fluentd respectively. You should use one of them rather than sending your logs to Logstash directly via syslog, since these log shippers can enrich your logs with the Kubernetes metadata of the respective containers.
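As a concrete sketch, a Filebeat configuration for a DaemonSet shipping container logs to an external Logstash could look roughly like this (the Logstash host is a placeholder, and the input type and paths depend on your Filebeat version):
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
processors:
  # Enrich each event with pod, namespace and container metadata from the API server.
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
      matchers:
        - logs_path:
            logs_path: "/var/log/containers/"
output.logstash:
  hosts: ["logstash.example.internal:5044"]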
There may be a lot of challenges after that: parsing log data (multiline logs, for example), and so on. For an efficient pipeline, it's better to do most of the work (e.g. extracting the date object from the logs) on the log-sender side rather than in the shared Logstash, which might become a bottleneck.
Note that if the container logs are not sent to stdout/stderr but written elsewhere, you might need to run Filebeat/Fluent Bit as a sidecar with your containers.
As far as documentation links are concerned, I didn't find this documented in a single place myself, but reading about the keywords I mentioned above taught me a lot.
Hope this helps.

StatefulSet - Possible to skip creation of pod 0 when it fails and proceed with the next one?

I currently have a problem with a StatefulSet under the following conditions:
I have a Percona SQL cluster running with persistent storage and 2 nodes.
Now I force both pods to fail:
First I force pod-0 to fail.
Afterwards I force pod-1 to fail.
Now the cluster is not able to recover without manual intervention and possible data loss.
Why:
The StatefulSet tries to bring up pod-0 first; however, this one will not come online, because of the following message:
[ERROR] WSREP: It may not be safe to bootstrap the cluster from this node. It was not the last one to leave the cluster and may not contain all the updates. To force cluster bootstrap with this node, edit the grastate.dat file manually and set safe_to_bootstrap to 1
What I could do alternatively, but don't really like:
I could change ".spec.podManagementPolicy" to "Parallel", but this could lead to race conditions when forming the cluster. I would like to avoid that; I basically like the idea of starting the nodes one after another.
What I would like to have:
The possibility to keep ".spec.podManagementPolicy": "OrderedReady" but with some way to adjust the order.
The ability to put specific pods into an "inactive" mode so they are ignored until I enable them again.
Is something like that available? Does someone have any other ideas?
Unfortunately, nothing like that is available in Kubernetes' standard functionality.
I see only 2 options here:
Use InitContainers to check the current state on relaunch (see the sketch below).
That allows you to run arbitrary code before the primary container is started, so you can use a custom script to try to resolve the problem.
Modify the database startup script so that it waits for an environment variable or a flag file, and use a PostStart hook to check the state before running the database.
But with both options, you have to write your own startup-order logic.
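A minimal sketch of the first option, assuming the Percona data directory lives on a PVC named "data" mounted at /var/lib/mysql (the actual recovery decision is left as your own logic, as noted above):
initContainers:
  - name: galera-state-check
    image: busybox:1.36
    command:
      - sh
      - -c
      - |
        # Inspect the Galera state before the database container starts.
        # Your own logic decides whether it is safe to set safe_to_bootstrap to 1.
        if [ -f /var/lib/mysql/grastate.dat ]; then
          cat /var/lib/mysql/grastate.dat
        fi
    volumeMounts:
      - name: data
        mountPath: /var/lib/mysql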

How to customise fluentd config when using the kubernetes fluentd-elasticsearch addon

We'd like to customise the fluentd config that comes out of the box with the Kubernetes fluentd-elasticsearch addon. It seems, however, that there is no easy way of doing this with the currently supplied Docker images.
The following file, td-agent.conf, is copied into the fluentd-es Docker image with no (apparent) way for us to customise it.
We need to customise this config file so that we can handle multi-line log entries as one event. Most likely this would involve making use of the multiline format (as detailed here: fluentd in_tail), which would obviously mean a change from the default config file.
Currently, a multi-line Java stack trace appears in Kibana as multiple entries, which is not ideal.
Unfortunately, I am not aware of any method to customize the config. You can either create your own image, open a feature request at issues.k8s.io, or even submit a PR to enhance fluentd.
You can still have a look at this guy, edit the td-agent.conf and build your own fluentd-elasticsearch image :)
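If rebuilding the image is not appealing, one possible workaround is to keep the config in a ConfigMap and mount it over the baked-in td-agent.conf in the fluentd-es DaemonSet. A rough sketch (the ConfigMap name, the firstline regex and the mount are assumptions, and with Docker's json-file driver each physical line arrives as a separate JSON record, so the exact parsing approach will depend on your setup):
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-es-config
  namespace: kube-system
data:
  td-agent.conf: |
    <source>
      type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      tag kubernetes.*
      # Lines that do not start with a timestamp (e.g. Java stack trace lines)
      # are treated as continuations of the previous event.
      format multiline
      format_firstline /^\d{4}-\d{2}-\d{2}/
      format1 /^(?<log>.*)/
    </source>
    # ... the rest of the original td-agent.conf (filters, elasticsearch output) ...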