Does the Istio Envoy proxy sidecar have anything to do with the container filesystem? - mongodb

Recently I was adding Istio to my Kubernetes cluster. When I enabled Istio in one of the namespaces, where a MongoDB StatefulSet was deployed, MongoDB failed to start up.
The error message was "keyfile permissions too open".
When I analyzed what was going on, I found that the keyfile comes from /etc/secrets-volume, which is mounted into the StatefulSet pods from a Kubernetes Secret.
The file permissions were 440 instead of 400. Because of this, MongoDB complained that the permissions were too open and the pod went into CrashLoopBackOff.
When I disable Istio injection in that namespace, MongoDB starts fine.
What's going on here? Does Istio have anything to do with the container filesystem, especially default file permissions?

Istio sidecar injection is not meant for every kind of container, as mentioned in the Istio documentation; such containers should be excluded from sidecar injection.
In the case of databases deployed as StatefulSets, some containers may be temporary or act as operators, and injecting them can end up in a crash loop or other problematic states.
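If the namespace has injection enabled globally, individual workloads can still opt out with the `sidecar.istio.io/inject` pod annotation. A minimal sketch (the StatefulSet name and labels are placeholders, not from the question):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb          # placeholder name
spec:
  serviceName: mongodb
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
      annotations:
        # Opt this pod out of sidecar injection even though
        # the namespace is labeled for automatic injection.
        sidecar.istio.io/inject: "false"
    spec:
      containers:
      - name: mongodb
        image: mongo:4.4
```

Only the annotation matters here; the rest of the manifest just shows where it goes (the pod template metadata, not the StatefulSet's own metadata).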
There is also an alternative approach: do not inject the databases at all, and instead register them as external services with ServiceEntry objects. There is a blog post in the Istio documentation on how to do exactly that with MongoDB. The guide is a little outdated, so be sure to refer to the current ServiceEntry documentation page, which also has examples of using an external MongoDB.
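A ServiceEntry for an external MongoDB could look roughly like this (the hostname is a placeholder; check the current Istio reference for your version's schema):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-mongodb
spec:
  hosts:
  - mongodb.example.com     # placeholder external hostname
  location: MESH_EXTERNAL   # the database lives outside the mesh
  resolution: DNS
  ports:
  - number: 27017
    name: mongo
    protocol: MONGO         # Istio's protocol name for MongoDB traffic
```

With this in place, meshed workloads can reach the database without the database pods themselves carrying a sidecar.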
Hope it helps.

Related

Is it OK to use the ingress-nginx NodePort to access the cluster in production?

I'm trying to set up a simple k8s cluster on a bare-metal server.
I'm looking into ways to access the cluster.
I've been looking through the docs and read through the bare-metal considerations section.
So far I've found that setting external IPs and NodePorts isn't recommended.
I've heard MetalLB should be used in production, so I was about to go ahead with that.
Then I realised the ingress is already using a NodePort service and I can access that for development purposes.
Could I just use this in production too?
Of course you can. If you do not need routing rules or anything beyond what kube-proxy can offer, you don't need an ingress controller at all, and if NodePort access is acceptable to you, you don't need a load-balancer implementation like MetalLB either.
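For reference, exposing a workload via a plain NodePort service looks like this (names and the port number are illustrative, not from the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # placeholder service name
spec:
  type: NodePort
  selector:
    app: my-app           # placeholder pod label
  ports:
  - name: http
    port: 80              # cluster-internal service port
    targetPort: 8080      # container port
    nodePort: 30080       # must fall in the default 30000-32767 range
```

The main production trade-off is that clients must reach a node IP on a high port, and you handle node failover yourself; that is the gap MetalLB fills by handing out a stable LoadBalancer IP.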

CouchDB kubernetes operator

I need to deploy couchdb in our production environment, and I've been trying to use the couchdb operator (https://operatorhub.io/operator/couchdb-operator) in order to make the deployed cluster more resilient.
I've tried using the official Helm chart, but it's not stable enough for us.
Our couchdb deployment needs to use an image in our private repository, and we need to configure an internal sidecar application that will be deployed as a container in the same pod in each node in the cluster.
I went over the documentation here: https://cloud.ibm.com/docs/Cloudant?topic=Cloudant-apache-couchdb-operator
I couldn't find any reference there to using my own images or configuring sidecar applications. I also tried accessing the couchdb-operator pod and looked for something I could use, but found nothing helpful.
I'm not too experienced with Kubernetes operators as a whole, so this might seem counter-intuitive to how operators are used, but my question is: can I somehow edit the operator's code so I can apply external configuration, like defining my own repositories/images, sidecars, or even services?
Thanks in advance,
Dean.

Is it possible/fine to run Prometheus, Loki, Grafana outside of Kubernetes?

In one project, scaling and orchestration are implemented using the technologies of a local cloud provider, with no Docker or Kubernetes. But the project has poor logging and monitoring, so I'd like to install Prometheus, Loki, and Grafana for metrics, logs, and visualisation respectively. Unfortunately, I've found no articles with instructions about using Prometheus without K8s.
But is it possible? If so, is it a good approach, and how would I do it? I also know that Prometheus and Loki can automatically discover services in K8s to extract metrics and logs, but will the same work for a custom orchestration system?
Can't comment about Loki, but Prometheus is definitely doable.
Prometheus supports a number of service discovery mechanisms, k8s being just one of them. If you look at the list of options (the ones ending with _sd_config), you can see whether your provider is there.
If it is not, then a generic service discovery mechanism can be used. Maybe DNS-based discovery will work with your custom system? If not, then with some glue code, file-based service discovery will almost certainly work.
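File-based discovery boils down to your glue code writing target lists into JSON (or YAML) files that Prometheus watches. A minimal sketch (job name, paths, and the sample target are placeholders):

```yaml
# prometheus.yml (fragment) -- file-based service discovery
scrape_configs:
  - job_name: custom-orchestrator   # placeholder job name
    file_sd_configs:
      - files:
          - /etc/prometheus/targets/*.json   # written by your glue code
        refresh_interval: 30s                # how often to re-read the files
# A target file matching that glob would contain something like:
#   [{"targets": ["10.0.0.5:9100"], "labels": {"env": "prod"}}]
```

Prometheus also picks up changes to these files via inotify, so the refresh interval is just a fallback; your orchestrator only needs to rewrite the JSON whenever instances come and go.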
Yes, I'm running Prometheus, Loki, etc. just fine in an AWS ECS cluster. It just requires a bit more configuration, especially regarding service discovery (if you are not already using something like ECS Service Discovery or HashiCorp Consul).

Get request count from Kubernetes service

Is there any way to get statistics such as service / endpoint access for services defined in Kubernetes cluster?
I've read about Heapster, but it doesn't seem to provide these statistics. Plus, the whole setup is tremendously complicated and relies on a ton of third-party components. I'd really like something much, much simpler than that.
I've been looking into what may be available in the kube-system namespace, and there are a bunch of containers and services there, Heapster included, but they are effectively inaccessible because they require authentication I cannot provide, and kubectl doesn't seem to have any API to access them (or does it?).
Heapster is the agent that collects data, but then you need a monitoring agent to interpret that data. On GCP, for example, it's fluentd that gets these metrics and sends them to Stackdriver.
Prometheus is an excellent monitoring tool. I would recommend it if you are not on GCP.
If you are on GCP, then as mentioned above you have Stackdriver Monitoring, which is configured by default for K8s clusters. All you have to do is create a Stackdriver account (done with one click from the GCP Console), and you are good to go.

Rancher connect to kubernetes instead of start kubernetes

Rancher is designed (as best as I can tell) to own and run a kubernetes cluster. Rancher does provide a configuration so that kubectl can interact w/ the kubernetes cluster. Rancher seems like a nice tool. But as far as I can tell, there is no way to connect to an existing kubernetes cluster. Is there any way to do this?
If you are looking for a service that can connect to existing k8s cluster(s), then try Containership. You can use kubectl and/or the Containership UI to manage your workloads, config maps, etc. on multiple clusters.
Hope this helps!
I got this answer on the Rancher forums:
There is not, most of the value we can add at the moment is around configuring, managing, and controlling access to the installation we setup.
https://forums.rancher.com/t/rancher-connect-to-kubernetes-instead-of-start-kubernetes/3209