CouchDB Kubernetes operator

I need to deploy couchdb in our production environment, and I've been trying to use the couchdb operator (https://operatorhub.io/operator/couchdb-operator) in order to make the deployed cluster more resilient.
I've tried using the official helm chart but it’s not stable enough for us.
Our CouchDB deployment needs to use an image from our private registry, and we need to configure an internal sidecar application that will be deployed as an extra container in the same pod on each node of the cluster.
I went over the documentation here: https://cloud.ibm.com/docs/Cloudant?topic=Cloudant-apache-couchdb-operator
I couldn't find any reference there to using my own images or to configuring sidecar applications. I also tried accessing the couchdb-operator pod and looking for something I could use, but couldn't find anything helpful.
I'm not too experienced with Kubernetes operators as a whole, so this might run counter to the idea of using operators, but my question is: can I somehow edit the operator's code so that I can add my own configuration, such as defining my own registry/image, sidecars, or even services?
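To make it concrete, the kind of pod spec I'm trying to get the operator to produce would look roughly like this (the registry, image names, pull secret and sidecar are just placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: couchdb-node-example              # placeholder; the operator would normally manage naming
spec:
  imagePullSecrets:
    - name: private-registry-creds        # hypothetical pull secret for our private registry
  containers:
    - name: couchdb
      image: registry.example.internal/couchdb:3.1   # our private CouchDB image (placeholder)
      ports:
        - containerPort: 5984
    - name: internal-sidecar
      image: registry.example.internal/sidecar:1.0   # hypothetical internal sidecar container
```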
Thanks in advance,
Dean.

Related

Multiple apps in single K8S deployment

I'm exploring K8S possibilities and I'm wondering whether there is any way to create deployments for two or more apps in a single deployment so that it is transactional: when something goes wrong after the deployment, all apps are rolled back. I also want to mention that I'm not talking about a pod with multiple containers, because additional sidecar containers are rather intended for cross-cutting concerns like monitoring or authentication (like Kerberos), and it is not recommended to put different apps in a single pod. Having this in mind, is it possible to have a single deployment that can produce 2+ kinds of pods?
Is it possible to have single deployment that can produce 2+ kind of pods?
No. A Deployment creates only one kind of Pod. You can update a Deployment's contents, and it will incrementally replace existing Pods with new ones that match the updated Pod spec.
Nothing stops you from creating multiple Deployments, one for each kind of Pod, and that's probably the approach you're looking for here.
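For illustration, running two kinds of Pods means two Deployments, roughly like this (names and images are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-a
spec:
  replicas: 2
  selector:
    matchLabels: { app: app-a }
  template:
    metadata:
      labels: { app: app-a }
    spec:
      containers:
        - name: app-a
          image: example/app-a:1.0   # placeholder image
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-b
spec:
  replicas: 2
  selector:
    matchLabels: { app: app-b }
  template:
    metadata:
      labels: { app: app-b }
    spec:
      containers:
        - name: app-b
          image: example/app-b:1.0   # placeholder image
```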
... when something is wrong after deployment all apps are rollbacked.
Core Kubernetes doesn't have this capability on its own; indeed, it has somewhat limited capacity to tell that something has gone wrong, other than a container failing its health checks or exiting.
Of the various tools in #SYN's answer I at least have some experience with Helm. It's not quite "transactional" in the sense you might take from a DBMS, but it does have the ability to manage a collection of related resources (a "release" of a "chart") and it has the ability to roll back an entire version of a release across multiple Deployments if required. See the helm rollback command.
Helm
As pointed out in comments, one way to go about this would be to use something like Helm.
Helm is a client-side tool (as of v3; earlier versions also involved "Tiller", a controller running in your Kubernetes cluster, but that one is deprecated, so let's forget about it).
Helm uses "Charts" (more or less: templates, with default values you can override).
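A rough sketch of the idea (chart structure, names and values are made up for illustration): a template references values, and you override the defaults at install or upgrade time.

```yaml
# values.yaml -- defaults, which you can override with `-f my-values.yaml` or `--set`
image:
  repository: example/my-app    # placeholder image
  tag: "1.0"
replicaCount: 2

# templates/deployment.yaml (excerpt) -- rendered using the values above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-my-app
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-my-app
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-my-app
    spec:
      containers:
        - name: my-app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```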
Kustomize
Another solution, similar to Helm, is Kustomize. It works from plain YAML files (not templates) while making it simple to override / customize your objects before applying them to your Kubernetes cluster.
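A minimal kustomization.yaml might look roughly like this (the referenced file names and the image override are placeholders):

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml          # plain, complete manifests -- no templating
  - service.yaml
images:
  - name: example/my-app     # override the image referenced in deployment.yaml
    newTag: "2.0"
commonLabels:
  environment: production    # applied to every object above
```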
ArgoCD
While Kustomize and Helm are both standalone clients, we could also mention solutions such as ArgoCD.
The ArgoCD controller would run inside your Kubernetes cluster, allowing you to create "Application" objects.
Those Applications are processed by ArgoCD, driving deployment of your workloads (common sources for those applications would involve Helm Charts, Git repositories, ...).
The advantage of ArgoCD is that its controller may (depending on your configuration) be responsible for upgrading your applications over time (e.g. if your source is a git repository, branch XXX, and someone pushes changes into that branch, ArgoCD would apply those pretty much right away).
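Roughly, an ArgoCD Application object looks like this (repository URL, path and namespaces are placeholders; check the ArgoCD documentation for the exact schema of your version):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/my-org/my-app.git   # placeholder repository
    targetRevision: main                                 # branch/tag to track
    path: deploy/                                        # placeholder path to manifests or chart
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true       # remove objects deleted from git
      selfHeal: true    # re-apply if someone changes things in the cluster
```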
Operators
Most of those solutions, however, are pretty much unaware of how your application is running. Say you upgrade a deployment, driven by Helm, Kustomize or ArgoCD, and end up with some database pods stuck in CrashLoopBackOff: your application pods would get updated nevertheless; there is no automatic rollback to a previous working configuration.
Which brings us to another way to ship applications to Kubernetes: operators.
Operators are aware of the state of your workloads, and may be able to fix common errors (depending on how they were coded; there's no magic).
An operator is an application (it can be written in Go, Java, Python, Ansible playbooks, ... whatever comes with a library for talking to a Kubernetes cluster API).
An operator is constantly connected to your Kubernetes cluster API. You would usually find some CustomResourceDefinitions specific to your operator, allowing you to describe the deployment of some component in your cluster (e.g. the Elasticsearch operator introduces an "Elasticsearch" object kind, as well as a "Kibana" one).
The operator watches for instances of the objects it manages (e.g. Elasticsearch), creating Deployments/StatefulSets/Services as needed.
If someone deletes an object that was created by your operator, it would/should be re-created by that operator, in a timely manner (mileage may vary, depending on which operator we're talking about ...)
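As an illustration of such a custom resource, here is roughly what an Elasticsearch object looks like with the Elastic (ECK) operator (the version number is a placeholder and fields may differ between operator versions):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: my-cluster
spec:
  version: 8.5.0          # placeholder Elasticsearch version
  nodeSets:
    - name: default
      count: 3            # the operator creates the underlying StatefulSet with 3 pods
```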
A perfect example of operators would be something like OpenShift 4 (OKD4): a Kubernetes cluster that comes with tens of operators (SDN, DNS, machine configurations, ingress controller, Kubernetes API server, etcd database, ...). The whole cluster is an assembly of operators: when upgrading your cluster, each of those manages the upgrade of its corresponding services in an orchestrated way, one after the other, and if anything fails, you're usually still left with enough replicas running to troubleshoot the issue.
Depending on what you're looking for, each option has advantages and drawbacks. Now if you're looking for a "single deployment that can produce 2+ kinds of pods", then ArgoCD or some home-grown operator would qualify.

Does the Istio Envoy proxy sidecar have anything to do with the container filesystem?

Recently I was adding Istio to my Kubernetes cluster. When I enabled Istio for one of the namespaces where a MongoDB StatefulSet was deployed, MongoDB failed to start up.
The error message was "keyfile permissions too open"
When I analyzed what was going on, I found that the keyfile comes from the /etc/secrets-volume, which is mounted into the StatefulSet from a Kubernetes Secret.
The file permissions were 440 instead of 400. Because of this, MongoDB started to complain that the "permissions are too open" and the pod went into CrashLoopBackOff.
When I disable Istio injection in that namespace, MongoDB starts fine.
What's going on here? Does Istio have anything to do with the container filesystem, especially default permissions?
Istio sidecar injection is not meant for all kinds of containers, as mentioned in the Istio documentation guide. Those containers should be excluded from Istio sidecar injection.
In the case of databases deployed using StatefulSets, some of the containers might be temporary or act as operators, and these can end up in a crash loop or other problematic states.
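One common way to keep such a workload out of the mesh, while leaving injection enabled on the namespace, is the sidecar.istio.io/inject annotation on the pod template, roughly like this (names and the image version are placeholders):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
      annotations:
        sidecar.istio.io/inject: "false"   # skip Envoy sidecar injection for these pods
    spec:
      containers:
        - name: mongodb
          image: mongo:4.4                 # placeholder image/version
```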
There is also an alternative approach: don't inject the databases at all, and just add them as external services with ServiceEntry objects. There is an entire blog post in the Istio documentation on how to do that specifically with MongoDB. The guide is a little outdated, so be sure to refer to the current documentation page for ServiceEntry, which also has examples of using an external MongoDB.
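A ServiceEntry for an external MongoDB might look roughly like this (the hostname is a placeholder; check the current Istio reference for the exact fields):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-mongodb
spec:
  hosts:
    - mongodb.example.com      # placeholder external hostname
  location: MESH_EXTERNAL      # the database lives outside the mesh
  ports:
    - number: 27017
      name: mongo
      protocol: MONGO
  resolution: DNS
```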
Hope it helps.

OpenShift: MongoDB and mongo-express

I have some team members who don't have permission to pull Docker images locally on their machines and run local instances of MongoDB and mongo-express.
So I am planning to deploy MongoDB and mongo-express as pods in OpenShift so they can access them. Can anyone provide the steps to do that in OpenShift, or point me to a proper resource where I can find the information / steps?
I am new to openshift.
Can anyone provide the steps to do that in Openshift?
This tutorial explains how apps (containers) are deployed from images in OpenShift
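As a very rough sketch of the objects involved, assuming the stock mongo and mongo-express images (names and versions are placeholders; you may also need auth env vars and, on OpenShift, a Route to reach mongo-express from outside the cluster):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
spec:
  replicas: 1
  selector:
    matchLabels: { app: mongodb }
  template:
    metadata:
      labels: { app: mongodb }
    spec:
      containers:
        - name: mongodb
          image: mongo:4.4                   # placeholder version
          ports:
            - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb                              # stable in-cluster name for the database
spec:
  selector: { app: mongodb }
  ports:
    - port: 27017
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels: { app: mongo-express }
  template:
    metadata:
      labels: { app: mongo-express }
    spec:
      containers:
        - name: mongo-express
          image: mongo-express:latest
          env:
            - name: ME_CONFIG_MONGODB_SERVER
              value: mongodb                 # points at the Service defined above
          ports:
            - containerPort: 8081
```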
Or else proper resource where I can find information/steps
It depends on your needs; the main question is whether you really need container orchestration tools or not. If you need them, then you can consider installing them locally:
Docker for Windows
Minikube
or in cloud:
Google Kubernetes Engine aka GKE (it allows you creating basic Kubernetes cluster in a few clicks)
OpenShift (I haven't dealt with it yet)
From what I've already seen, Kubernetes provides a lot of documentation (with examples, etc.) on the topic.
Last but not least, there is a really nice step-by-step guide on how to create Kubernetes cluster from scratch "the hard way" if you need the cluster to be fully managed by you.

Multiple pods using same database on kubernetes

I would like to know if it is possible for multiple pods in the same Kubernetes cluster to access a database which is configured using persistent volumes on a Google cloud persistent disk.
Currently I am building a microservices-architecture web app which has 3 Node APIs in different pods, all accessing the same database. So how do I achieve this with Kubernetes?
Kindly let me know if my architecture is right as well.
You can certainly connect multiple node-based app pods to the same database. It is sometimes said that microservices shouldn't share a database but this depends on what your apps are doing, the project history and the extent to which you want the parts to be worked on separately.
There are questions you have to answer about running databases at scale, such as your future load and whether you want to use relational databases if you're going to try to span availability zones. And there are some specific to Kubernetes, especially around how you associate DB Pods with data. See https://stackoverflow.com/a/53980021/9705485. Another popular option is to use a managed DB service from a cloud provider. If you do run the DB in k8s then I'd suggest looking for a helm chart or looking at an operator, such as the KubeDB operator, to avoid crafting the kubernetes descriptors yourself and to get more guidance on running the DB and setting it up.
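For the simplest shape of this (one in-cluster database Service shared by several API Deployments), the wiring looks roughly like this (names, image, namespace and port are placeholders):

```yaml
# One Service fronting the database pods
apiVersion: v1
kind: Service
metadata:
  name: shared-db
spec:
  selector: { app: shared-db }
  ports:
    - port: 27017
---
# Each node API Deployment points at the same Service via an env var
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-one
spec:
  replicas: 2
  selector:
    matchLabels: { app: api-one }
  template:
    metadata:
      labels: { app: api-one }
    spec:
      containers:
        - name: api-one
          image: example/api-one:1.0                     # placeholder image
          env:
            - name: DB_HOST
              value: shared-db.default.svc.cluster.local # in-cluster DNS name of the Service (namespace is a placeholder)
```

The other two API Deployments would carry the same DB_HOST value, so all three talk to the database over the network rather than sharing its disk.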
If it's a new project and you've not used k8s before then you'll also have to decide where to host your code, your docker images and your deployment descriptors, and how to set up your CI pipelines. If you don't have answers to these questions already then I'd suggest looking at Jenkins-X, as it will provide you with out-of-the-box defaults for a whole cluster and CI setup, and a template ('build pack') for building node apps and deploying them to staging and prod environments through a pipeline.

Clusters and nodes formation in Kubernetes

I am trying to deploy my Docker images using Kubernetes orchestration tools. When reading about Kubernetes, I see documentation and many YouTube video tutorials on working with Kubernetes. In them I only found the creation of pods and services, and the creation of their .yml files. I have some doubts, which I am adding below:
When I am using Kubernetes, how can I create clusters and nodes?
Can I deploy my current docker-compose-built images directly using pods only? Why do I need to create a services .yml file?
I am new to the containerization, Docker and Kubernetes world.
My favorite way to create clusters is kubespray, because I find Ansible very easy to read and troubleshoot, unlike more monolithic "run this binary" mechanisms for creating clusters. The kubespray repo has a Vagrant configuration file, so you can even try out a full cluster on your local machine, to see what it will do "for real".
But with the popularity of kubernetes, I'd bet that if you ask 5 people you'll get 10 answers to that question, so ultimately pick the one you find easiest to reason about, because almost without fail you will need to debug those mechanisms when something inevitably goes wrong.
The short version, as Hitesh said, is "yes," but the long version is that one will need to be careful, because local docker containers and kubernetes clusters are trying to solve different problems, and (as a general rule) one cannot easily swap one in place of the other.
As for the second part of your question, a Service in kubernetes is designed to decouple the current provider of some networked functionality from the long-lived "promise" that such functionality will exist and work. That's because in kubernetes, the Pods (and Nodes, for that matter) are disposable and subject to termination at almost any time. It would be severely problematic if the consumer of a networked service needed to constantly update its IP address/ports/etc to account for the coming-and-going of Pods. This is actually the exact same problem that AWS's Elastic Load Balancers are trying to solve, and kubernetes will cheerfully provision an ELB to represent a Service if you indicate that is what you would like (and similar behavior for other cloud providers)
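A minimal sketch of that decoupling (labels, ports and the type are placeholders): consumers always talk to the Service's stable name, and Kubernetes routes traffic to whichever healthy Pods currently match the selector.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer          # on AWS this provisions an ELB; use ClusterIP for in-cluster traffic only
  selector:
    app: my-app               # traffic goes to whichever Pods carry this label right now
  ports:
    - port: 80                # stable port consumers use
      targetPort: 8080        # placeholder container port
```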
If you are not yet comfortable with containers and docker as concepts, then I would strongly recommend starting with those topics, and moving on to understanding how kubernetes interacts with those two things after you have a solid foundation. Else, a lot of the terminology -- and even the problems kubernetes is trying to solve -- may continue to seem opaque