Helm: Datadog Agent with JDBC driver - Kubernetes

I would like to use the Datadog Oracle integration via the Datadog Helm chart. The Oracle integration docs state: "To use the Oracle integration, either install the Oracle Instant Client libraries, or download the Oracle JDBC Driver."
I do not want to use a custom image to package the JDBC driver; I want to use a standard image such as tag:7-jmx. Other options that come to mind (e.g. an EFS volume with the driver inside) seem like overkill as well.
The best option to me seems to be an init container that downloads the JDBC driver (sketched below), but the Datadog Helm chart does not support custom init containers for the agents.
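For illustration, here is roughly what that init-container pattern would look like in raw Kubernetes terms: an emptyDir volume shared between an init container that fetches the driver and the agent container. The download URL, driver file name, and mount paths are placeholders.

    # Sketch of the init-container pattern (driver URL and paths are placeholders)
    spec:
      volumes:
        - name: jdbc-driver
          emptyDir: {}
      initContainers:
        - name: fetch-jdbc-driver
          image: curlimages/curl:8.5.0
          command:
            - sh
            - -c
            - curl -fsSL -o /drivers/ojdbc8.jar https://repo.example.com/ojdbc8.jar
          volumeMounts:
            - name: jdbc-driver
              mountPath: /drivers
      containers:
        - name: agent
          image: gcr.io/datadoghq/agent:7-jmx
          volumeMounts:
            - name: jdbc-driver
              mountPath: /opt/oracle-jdbc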
What's the best way to do this, i.e. to get a Datadog Agent with a JDBC driver via Helm?

Answer from Datadog Support to this:
Thanks again for reaching out to Datadog!
From looking further into this, there does not seem to be a way we can package the JDBC driver with the Datadog Agent. I understand that this is not desirable as you would prefer to use a standard image but I believe the best way to have these bundled together would be to have a custom image for your deployment.
Apologies for any inconvenience this may cause.
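For what it's worth, however the driver ends up inside the container, the Oracle check itself can be configured through the chart's datadog.confd value. A minimal sketch, with placeholder connection details and driver path:

    # values.yaml sketch -- connection details and driver path are placeholders
    datadog:
      confd:
        oracle.yaml: |-
          init_config:
          instances:
            - server: oracle.example.com:1521
              service_name: orcl
              user: datadog
              password: "<PASSWORD>"
              jdbc_driver_path: /opt/oracle-jdbc/ojdbc8.jar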

Related

Falco pod init container is not working. curl: (22) The requested URL returned error: 404

I am trying to install Falco on my Kubernetes cluster with the Helm chart. I am deploying it as a DaemonSet and using eBPF, but I am getting an error in my init containers. What should I do?
This is my values.yaml.
You are getting this error message because the kernel headers needed to compile the eBPF driver are not installed.
Before compiling the eBPF driver, the loader script tries to download it from https://download.falco.org, but it doesn't find it because the Oracle Linux distribution is not officially supported (more precisely, no prebuilt driver is offered for it).
The quickest solution would be to install the kernel headers on each Kubernetes node, so that Falco can compile the driver the next time it starts.
It is also possible to use the Driverkit project to build the Falco drivers yourself (as the Falco project does) and host them somewhere else; you'd then need to pass the driver URL to the Helm chart, as sketched below. This avoids polluting the nodes with packages you'd need only once.
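A minimal sketch of pointing the driver loader at a self-hosted repository: the falco-driver-loader script consults the DRIVERS_REPO environment variable before falling back to https://download.falco.org/driver. The exact values keys vary between chart versions (check the chart's values.yaml), and the repository URL is a placeholder.

    # values.yaml sketch for the Falco Helm chart (keys vary by chart version)
    ebpf:
      enabled: true
    extra:
      env:
        # checked by falco-driver-loader before the default download URL
        - name: DRIVERS_REPO
          value: "https://artifacts.example.com/falco-drivers"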
You are also welcome to contribute to the project by adding support for the Oracle Linux distribution, which is relatively simple since it is quite similar to the Red Hat distribution. Once it is supported, the drivers will be available to anyone using the same kernel/distribution.
For further information, you can visit the Falco Slack channel and ask for help there, or ping anyone in the community.

Can I use different versions of Cassandra in a cluster?

Can I use different versions of Cassandra in a single cluster? My goal is to transfer data from one DC (A) to a new DC (B) and then decommission DC (A), but DC (A) is on version 3.11.3 and DC (B) is going to be 3.11.7+*
* I want to use a K8ssandra deployment with metrics and other features. The K8ssandra project cannot deploy versions of Cassandra older than 3.11.7.
Thank you!
K8ssandra is purposefully an "opinionated" stack, which is why you can only use certain recent Cassandra versions that are not known to have major issues.
But if you already have an existing cluster, that doesn't mean you can't migrate between them. Check out this blog post for an example of doing that: https://k8ssandra.io/blog/tutorials/cassandra-database-migration-to-kubernetes-zero-downtime/
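For reference, pinning the Cassandra version for the new DC in the K8ssandra 1.x chart values looks roughly like this; the datacenter name and size are placeholders, and the exact keys may differ between chart versions.

    # K8ssandra 1.x values.yaml sketch (dc name and size are placeholders)
    cassandra:
      version: "3.11.7"
      datacenters:
        - name: dc-b
          size: 3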

Generate docker-compose.yaml from Helm charts

I am interested in generating docker-compose.yaml files from Helm charts. Is there a good way or tool to do this?
I realize that this is in the opposite direction from what most people are doing. Why I want to do this:
Our production systems run Kubernetes via Helm charts. We've got a full-blown k8s and Helm setup already; there is no need for a tool like Kompose to get us there. The question is how to convert Helm to docker-compose, not the other way around.
We want our Helm charts to be the single authoritative source of container configuration. They are able to describe a superset of what docker-compose can.
Running a set of services using Helm on a development machine consumes more time and resources than running the same set of services via docker-compose. We do not want to slow development down by having engineers run everything under Helm/k8s.
We do not want to maintain two sets of configurations.
Can anybody recommend how to do this, or suggest a different solution to the time/resources issue encountered on development machines?

Helm charts vs ansible-playbook vs k8s operator in system installation

I have a big and fairly complex system to install into a k8s cluster:
60 microservices and 10 Helm charts installed into 5 namespaces.
Currently, we run 5 helm install/upgrade commands with a 30-second pause between them. However, this strategy puts a serious load on the nodes, because all the Docker images are pulled and all the applications start at roughly the same time. The execution timeline is long and unpredictable, and it often results in timeouts of components such as Consul and Elasticsearch, and of the applications that depend on them.
I would like to hear opinions about ways to turn this situation around. Here are the options we have considered so far:
Write a script that controls the installation of the Helm charts.
Write an Ansible playbook that runs the Helm charts and tracks the installation status of the components.
Write an Ansible playbook that installs the components directly (using either Jinja2 or Golang templates).
Write a k8s operator that installs the components and monitors the system status.
To answer my own question: I created an installer that can be used as a quick solution for fairly complex installations.
The solution relies on Ansible as an installation orchestrator and Helm as a package manager.
You can browse my GitHub repo, which contains the code; the core pattern is sketched below.
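A minimal sketch of that pattern using Ansible's kubernetes.core.helm module: install releases in dependency order and let Ansible wait for each release's resources to become ready before moving on. The chart references, release names, and namespace here are placeholders.

    # playbook.yaml -- install releases in order, waiting for readiness
    # (chart refs, release names, and namespace are placeholders)
    - hosts: localhost
      gather_facts: false
      tasks:
        - name: Install Consul and wait until its resources are ready
          kubernetes.core.helm:
            name: consul
            chart_ref: hashicorp/consul
            release_namespace: infra
            create_namespace: true
            wait: true

        - name: Install Elasticsearch only after Consul is up
          kubernetes.core.helm:
            name: elasticsearch
            chart_ref: elastic/elasticsearch
            release_namespace: infra
            wait: true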
There are a lot of ways of doing this, but you can also use the Kubernetes API directly. You can build a service on any stack, such as Spring Boot or Node.js, that controls the creation of the Kubernetes objects you want.
This way you'll essentially be building a customized Helm-like API, with the difference that you can tailor it to your own needs.

Azure Application Insights Application Map doesn't show PostgreSQL dependency by default

In the Application Map feature of Azure Application Insights, it seems that the PostgreSQL database dependency is not shown by default, whereas Azure Storage queues and blobs are shown, as are other HTTP dependencies. This doc by Microsoft doesn't explain why either.
Does anyone know why and when this feature will be available?
I believe it's because, for .NET, PostgreSQL is not an auto-collected dependency. You will have to wire it up manually, according to this article.