Forward Jaeger traces to Datadog

Is there a way to get Jaeger traces into Datadog, whether through a proxy, by scraping traces from Jaeger and converting them to Datadog traces, or by some other means?
We have a vendor provided backend that only supports Jaeger, but the enterprise APM solution is Datadog.
Thanks!

Though Datadog's [OpenTelemetry collector][1] suggests it will ingest Jaeger traces, there is no documentation to explain how that might work.
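One route worth trying is to run the OpenTelemetry Collector (the contrib distribution) between the two: its `jaeger` receiver accepts spans from Jaeger clients/agents, and its `datadog` exporter forwards them to Datadog APM. A minimal sketch of the collector config, assuming the contrib build's receiver and exporter names (field names can change between collector versions, so check the current docs):

```yaml
receivers:
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250
      thrift_http:
        endpoint: 0.0.0.0:14268

exporters:
  datadog:
    api:
      key: ${DD_API_KEY}

service:
  pipelines:
    traces:
      receivers: [jaeger]
      exporters: [datadog]
```

The vendor backend would then be pointed at the collector's Jaeger endpoint instead of a real Jaeger agent.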

Related

How to properly monitor all ELK components with Prometheus?

I would like to monitor all ELK services running in our Kubernetes clusters to be sure they are still running properly.
I am able to monitor the Kibana portal via its URL, and Elasticsearch via Prometheus and its metrics (ES exposes some interesting metrics that show whether it is working well).
But does something similar exist for Filebeat, Logstash, ...? Do these daemons expose metrics that Prometheus can scrape to watch and analyse their state?
Thank you very much for all hints.
There is an exporter for ElasticSearch here: https://github.com/prometheus-community/elasticsearch_exporter and an exporter for Kibana here: https://github.com/pjhampton/kibana-prometheus-exporter. These will enable your Prometheus to scrape the endpoints and collect metrics.
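Once an exporter is running, wiring it into Prometheus is a small scrape-config addition. A sketch, assuming elasticsearch_exporter's default port (9114) and a hypothetical hostname:

```yaml
scrape_configs:
  - job_name: elasticsearch
    static_configs:
      - targets: ['elasticsearch-exporter:9114']  # exporter default port; hostname is an example
```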
We are also working on a new profiler inside OpenSearch which will provide much more detailed metrics and fix a lot of bugs, and will natively provide an endpoint for Prometheus to scrape. You can follow along here: https://github.com/opensearch-project/OpenSearch/issues/539. This is in active development, if you are looking for an open-source alternative to ElasticSearch and Kibana.
Yes, both the Beats and Logstash have a metrics endpoint for monitoring.
These monitoring endpoints are built to be consumed by Metricbeat, but since they return JSON you can use other tools to monitor them.
For Logstash the metrics endpoint is enabled by default, listening on localhost at port 9600, and from the documentation you have these two endpoints:
`node`
`node_stats`
For the Beats family you need to enable it, as if you were going to consume the metrics with Metricbeat; this documentation explains how to do that.
Then you will have two endpoints:
`stats`
`state`
So you just need to scrape those endpoints to collect the metrics.
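Before pointing a scraper at them, the endpoints can be checked by hand. The Logstash port below is the documented default, and the Beats port (5066) assumes you enabled the HTTP endpoint as described in that documentation:

```shell
# Logstash monitoring API (enabled by default on localhost:9600)
curl -s 'http://localhost:9600/_node?pretty'
curl -s 'http://localhost:9600/_node/stats?pretty'

# Filebeat, after enabling the endpoint in filebeat.yml:
#   http.enabled: true
#   http.port: 5066
curl -s 'http://localhost:5066/stats'
curl -s 'http://localhost:5066/state'
```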

Is it possible/fine to run Prometheus, Loki, Grafana outside of Kubernetes?

In one project, scaling and orchestration are implemented using the technologies of a local cloud provider, with no Docker & Kubernetes. But the project has poor logging and monitoring, so I'd like to install Prometheus, Loki, and Grafana for metrics, logs, and visualisation respectively. Unfortunately, I've found no articles with instructions about using Prometheus without K8s.
But is it possible? If so, is it a good approach, and how would one do it? I also know that Prometheus & Loki can automatically detect services in K8s to extract metrics and logs, but will the same work for a custom orchestration system?
Can't comment about Loki, but Prometheus is definitely doable.
Prometheus supports a number of service discovery mechanisms, k8s being just one of them. If you look at the list of options (the ones ending with `_sd_config`) you can see if your provider is there.
If it is not, then generic service discovery can be used. Maybe DNS-based discovery will work with your custom system? If not, then with some glue code a file-based service discovery will almost certainly work.
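A sketch of the file-based approach: some glue code (a cron job querying your orchestrator's API, for instance) rewrites a JSON targets file, and Prometheus picks up changes automatically. The file path and label names here are made up for illustration:

```yaml
# prometheus.yml
scrape_configs:
  - job_name: custom-orchestrator
    file_sd_configs:
      - files:
          - /etc/prometheus/targets/*.json
        refresh_interval: 1m
```

```json
[
  {
    "targets": ["10.0.0.5:9100", "10.0.0.6:9100"],
    "labels": { "env": "prod" }
  }
]
```

The glue code only ever has to rewrite that JSON file; Prometheus re-reads it without a restart.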
Yes, I'm running Prometheus, Loki etc. just fine in an AWS ECS cluster. It just requires a bit more configuration, especially regarding service discovery (if you are not already using something like ECS Service Discovery or HashiCorp Consul).

Istio (1.6.4) best practice for log aggregation on K8s

We plan to use Istio on our AWS EKS K8s cluster and have explored Ingress, Egress and auth via Keycloak so far, but we are a little lost as to how we can easily aggregate all logs in a single, easy-to-query place for monitoring. The Istio docs just mention that Mixer will be dead, but don't really explain what else could be done.
Scope: access logs, istiod logs, and application/microservice logs from stdout as well.
mTLS is enabled cluster-wide (which seems to cause problems when using log sidecars).
We use Kiali, but that's not exactly what we need.
We are looking more for something like an ELK stack or Graylog, but ideally more lightweight. We thought of Grafana Loki, but it's quite quiet when you google for Istio+Loki... it doesn't seem to be a common setup.
So my question: what would be a best practice for log aggregation with Istio on K8s for all these logs in one place, and what is needed to get it started (tutorial/how-to link?)
Thanks in advance!
> The Istio docs just mention that Mixer will be dead, but don't really explain what else could be done.

As mentioned in the documentation:

> Mixer is deprecated. The functionality provided by Mixer is being moved into the Envoy proxies.
If you take a look at the 1.5 release notes:

> **A new model for extensibility**
>
> Istio has long been the most extensible service mesh, with Mixer plugins allowing custom policy and telemetry support and Envoy extensions allowing data plane customization. In Istio 1.5 we’re announcing a new model that unifies Istio’s extensibility model with Envoy’s, using WebAssembly (Wasm). Wasm will give developers the ability to safely distribute and execute code in the Envoy proxy – to integrate with telemetry systems, policy systems, control routing and even transform the body of a message. It will be more flexible and more efficient, eliminating the need for running a Mixer component separately (which also simplifies deployments).
>
> Read our Wasm blog post, and look out for posts from Google, Solo.io and the Envoy community for much more detail about this exciting work!
With Mixer deprecated, there is a replacement called Telemetry V2:

> Telemetry V2 lacks a central component (Mixer) with access to K8s metadata; the proxies themselves require the metadata necessary to provide rich metrics. Additionally, features provided by Mixer had to be added to the Envoy proxies to replace the Mixer-based telemetry. Istio Telemetry V2 uses two custom Envoy plugins to achieve just that.

It's well described there. So it's not that Mixer is dead and there is nothing to replace it.
> What would be a best practice for log aggregation with Istio on K8s for all these logs in one place, and what is needed to get it started (tutorial/how-to link?)

I would start with Remotely Accessing Telemetry Addons, which shows how to configure Istio to expose and access the telemetry addons (Prometheus, Grafana, Kiali and Jaeger/Zipkin).
Everything depends on your use case; by default you can enable the Prometheus, Grafana, Kiali and Jaeger/Zipkin versions provided by Istio.
Additionally, take a look at the Istio documentation on metrics, logs and tracing.
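For the access-log part of the scope specifically: Istio can write Envoy access logs to each sidecar's stdout, where any node-level collector (Fluentd, Fluent Bit, promtail, ...) picks them up like ordinary container logs, regardless of mTLS. In Istio 1.6 this is a mesh-config setting; check the docs for your exact version:

```shell
istioctl install --set meshConfig.accessLogFile=/dev/stdout \
  --set meshConfig.accessLogEncoding=JSON
```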
> We use Kiali, but that's not exactly what we need. We are looking more for something like an ELK stack or Graylog, but ideally more lightweight. We thought of Grafana Loki, but it's quite quiet when you google for Istio+Loki... it doesn't seem to be a common setup.

As far as I know you should be able to configure Istio with ELK, but it's not easy and there is a lack of documentation about it.
There is information about what you have to do with Elasticsearch to make it work, and a related GitHub issue about that, so I assume ELK would work too. Take a look at this tutorial.

Istio missing metrics

I am testing Istio 1.1, but the collection of metrics is not working correctly.
I can not find what the problem is. I followed this tutorial and I was able to verify all the steps without problems.
If I access Prometheus I can see some requests being recorded.
On the other hand, if I access Jaeger, I cannot see any services (only one from Istio).
Grafana is also behaving strangely; most of the graphs do not show data.
In Istio 1.1, the default trace sampling rate is 1%, so you need to send at least 100 requests before the first trace is likely to be visible.
This can be configured through the `pilot.traceSampling` option.
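For example, with the Helm-based install that Istio 1.1 used (the chart path follows the default release layout; 100.0 samples every request, which is fine for testing but not for production):

```shell
helm upgrade istio install/kubernetes/helm/istio \
  --namespace istio-system \
  --set pilot.traceSampling=100.0
```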

How to install Fluentd plugins on k8s

I have set up EFK on Kubernetes. Currently I have access only to logs from Logstash, but I am wondering how I can install some plugins for Fluentd in order to get logs from e.g. NGINX, which I use as a reverse proxy. Can someone please point me to how exactly I can configure EFK on k8s and what the best practices around it are? On k8s I have e.g. an API service in Express JS.
You will find this article interesting for the beginning: Kubernetes Logging and Monitoring: The Elasticsearch, Fluentd, and Kibana (EFK) Stack – Part 1: Fluentd Architecture and Configuration
Also, there are a lot of Fluentd plugins for Kubernetes here: https://www.fluentd.org/plugins/all#stq=kubernetes&stp=1
Each plugin has installation instructions; see for example Kubernetes Logging with Fluentd.
Also, you may want to try Fluent Bit, a lightweight and extensible log processor.
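On k8s the common pattern for extra Fluentd plugins is to build a small custom image on top of the official DaemonSet image and install the gems there. The plugin below is only an example; swap in whichever one you need:

```dockerfile
# Dockerfile: extend the official Fluentd DaemonSet image with an extra plugin
FROM fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
USER root
RUN gem install fluent-plugin-rewrite-tag-filter --no-document
USER fluent
```

Then point your DaemonSet at this image and mount your updated fluent.conf via a ConfigMap.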