What is the vertical line in a Datadog APM trace?

I'm not sure how to interpret the long vertical line in Datadog APM Traces

That indicates the span was called asynchronously.
This doc talks about Python, but it's true for all languages:
https://www.datadoghq.com/blog/tracing-async-python-code/
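For illustration, here is a minimal sketch of the kind of async code that post describes, assuming a reasonably recent ddtrace and its tracer.wrap decorator (the service and resource names are placeholders). Because the two coroutines are launched concurrently, their spans start at roughly the same moment, and Datadog marks that alignment with the long vertical line in the flame graph.

import asyncio

from ddtrace import tracer

@tracer.wrap(service="demo-service", resource="fetch_data")
async def fetch_data(delay: float) -> str:
    # Each call produces its own span; the awaited work runs asynchronously.
    await asyncio.sleep(delay)
    return "done"

async def main() -> None:
    # Both child spans begin at (almost) the same timestamp under the parent,
    # which is what the vertical line in the trace view lines up.
    await asyncio.gather(fetch_data(0.2), fetch_data(0.2))

asyncio.run(main())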

Related

Does Jaeger support Grafana Tempo backend?

I'm starting with tracing tools. I would like to use Grafana Tempo as the backend storage and Jaeger as the UI. Is it possible for this stack to work together? I'm running it in Docker via the official docker-compose files.
I checked the Jaeger documentation and did not find anything about Grafana Tempo support. There is only Elastic, Cassandra, Fluxdb, etc., but not Grafana Tempo.
Thanks
You have to remember when you use Tempo (or Loki) that these systems do not index data. This is why they are so inexpensive; the challenge is that you cannot do a full-text search across the data in bulk, which is why Jaeger does not support Tempo as a backend. The way all the Grafana projects work is that when you troubleshoot, you start with a metric, isolate down to a small timeframe or specific component, and then pivot to logs or traces. Unfortunately, when troubleshooting there are lots of good reasons to start with logs or traces, but this is not possible with their backends. That is the tradeoff between indexing and not indexing, and it is why they are inexpensive to operate in comparison to OpenSearch/Elasticsearch.
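For what it's worth, if you follow that model and use Grafana itself as the trace UI instead of Jaeger, pointing it at Tempo is just a datasource entry. A minimal provisioning sketch, assuming a Tempo container reachable as "tempo" with its HTTP API on port 3200 (the hostname and port depend on your compose setup and Tempo configuration):

apiVersion: 1
datasources:
  - name: Tempo
    type: tempo
    access: proxy
    url: http://tempo:3200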

EFK - Have preconfigured filter by container that will appear in Kibana

I've got the EFK stack installed on kubernetes following this addon: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
What I want to achieve is having all the logs of the same pod together, and maybe even some other filters. But I don't want to configure the filters in Kibana through the GUI; I'd like them preconfigured so that the containers I want to monitor are already set up when Kibana is installed, rather than needing an additional step to import/export them. I'd like to have the predefined filters in such a way that, immediately after the installation, I can go to "Discover", select the pod name that I want to see, and then see all its logs in the format:
In my understanding (which is close to zero, this being the first time I use this technology), editing the fluentd-configmap.yml with the correct parameters should do the trick, but none of my attempts has altered what I see in Kibana.
Am I looking in the correct place for doing this, or is this filter not meant for this use and I'm completely wasting my time? How could I create this filter in any case?
Any help, even if it is only a hint, would be appreciated.
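Not an authoritative answer, but for orientation: the fluentd-elasticsearch addon's fluentd-configmap.yml already ships a Kubernetes metadata filter roughly along these lines, and that filter is what attaches the kubernetes.pod_name / kubernetes.container_name fields you can then select in Discover. The saved searches and filters themselves live in Kibana, not in the fluentd config. Illustrative excerpt only; the tag pattern is an assumption:

<filter kubernetes.**>
  @type kubernetes_metadata
</filter>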

How to enable systemd collector in docker-compose.yml file for node exporter

Hi, I'm new to Prometheus. I have a task to make Prometheus show systemd service metrics (I use Grafana for visualization). I'm using the stefanprodan/dockprom example as my starting point; however, I couldn't find how to enable the systemd collector for node exporter in the node exporter section of the docker-compose.yml while leaving all the collectors that are enabled by default. I also need help with getting that data into Grafana. I would appreciate example code, or a place where I could find an adequate explanation of how to do it for dummies, because I'm not experienced. Thanks in advance.
In order to enable the systemd collector in node_exporter, the command line flag --collector.systemd needs to be passed to the exporter (reference). The default collectors will remain enabled, so you don't need to worry about that.
To pass it to the application, you need to add the flag to the command portion of the nodeexporter section of the Docker Compose file (here), for example:
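A rough sketch of what that section could look like; the image tag and the other flags are only illustrative of what dockprom ships, and the key addition is the last command entry. One caveat: the systemd collector talks to systemd over D-Bus, so when node_exporter runs in a container you will likely also need to give it access to the host's systemd socket (the /run/systemd mount below is that assumption).

  nodeexporter:
    image: prom/node-exporter:latest
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
      - /run/systemd:/run/systemd:ro   # assumption: lets the collector reach systemd over D-Bus
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--path.rootfs=/rootfs'
      - '--collector.systemd'          # the only new flag; default collectors stay enabled
    restart: unless-stopped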
As for sending the data to Grafana: as long as you have your Prometheus data source configured in Grafana, those metrics will show up automatically -- you don't need to update the Prometheus data source in Grafana when adding or removing metrics (or really ever, after the initial setup).

Ethstat / Interface Grafana CollectD not showing correct value (MB/s)

I use Grafana with CollectD (and Graphite) to monitor my network usage on my server.
I use the 'Interface' Plugin of CollectD and display the graphs like this:
alias(scale(nonNegativeDerivative(collectd.graph_host.interface-eth0.if_octets.rx), 0.00000095367431640625), 'download')
When I now initiate a download with a speed limit, the download runs for approx. 10 minutes, but only this is shown (the green line is the download), so it only shows a peak.
Do I have to use some other metrics? I also tried 'ethstat', but that has so many options, none of which I understand!
Is there any beginner documentation? I only found the CollectD docs, which I read, but they do not say anything about what the ethstat metrics actually mean.
No, there isn't any beginner documentation about the meaning of the ethstat metrics in collectd. This is because the ethstat plugin reports statistics collected by ethtool on your system, and ethtool stats are vendor-specific.
To point you in the right direction, run ethtool -S eth0
That should show you names and numbers like what collectd is reporting.
Now run ethtool -i eth0 and find your driver info.
Then, google your driver name and find out what statistics your card reports and what they mean. It may involve reading Linux driver source code, but don't be too scared of that. What you want is probably in the comments, not the code.
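Putting those steps together (eth0 is just the example interface name; substitute your own):

ethtool -S eth0   # per-NIC statistics; names and meanings are vendor-specific
ethtool -i eth0   # check the "driver:" line, then look up that driver's statistics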

How to customise fluentd config when using the kubernetes fluentd-elasticsearch addon

We'd like to customise the fluentd config that comes out of the box with the kubernetes fluentd-elasticsearch addon. It seems however that there is no easy way of doing this with the current supplied Docker images.
The following file: td-agent.conf is copied to the fluentd-es Docker image with no (apparent) way of us being able to customise it.
We need to customise this config file so that we can handle multi-line log entries as one event. Most likely this would involve making use of the multiline format (as detailed in the fluentd in_tail documentation), which would obviously mean a change from the default config file.
Currently, a multi-line Java stack trace appears in Kibana as multiple entries, which is not ideal.
Unfortunately, I am not aware of any method to customize the config. You can either create your own image, open a feature request at issues.k8s.io, or even submit a PR to enhance fluentd.
You can still have a look at this guy, edit the td-agent.conf, and build your own fluentd-elasticsearch image :)
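For reference, the kind of in_tail multiline change you would bake into your own image looks roughly like this. It is only a sketch: the path, tag, and regexes are placeholders, it assumes plain-text log files whose first line starts with a timestamp, and the Docker json-file format that the addon tails may additionally need a concat-style plugin to merge the wrapped lines.

<source>
  type tail
  path /var/log/myapp/*.log
  pos_file /var/log/es-myapp.log.pos
  tag myapp.*
  format multiline
  format_firstline /^\d{4}-\d{2}-\d{2}/
  format1 /^(?<message>.*)/
</source>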