Deploy DB+Proxy+SSL with Kubernetes

I have very little knowledge of how Kubernetes works and I'm trying to learn. I'm having difficulty understanding how I can use Kubernetes to deploy my DB (CouchDB), the reverse proxy (nginx), and the SSL certificate (Let's Encrypt with certbot-auto).
I run CentOS 8 and have installed podman for the containers. I can install each one in a separate container within the same pod and make them communicate properly.
What I don't understand is how I can use Kubernetes to properly deploy all of these containers and scale them in a cluster.
My questions are the following:
Where should I start to make Kubernetes work with these three components? Should I install the three containers first with their configuration? (The DB can be configured to handle clusters, but my understanding is that Kubernetes handles clustering, so I'm wondering whether I have to configure the DB for the cluster and hence install two nodes.)
Should I install Let's Encrypt with certbot? I don't understand how Kubernetes can deploy new pods and have them automatically configured to work with Let's Encrypt.
If anyone can give me the steps to get this done it would be really great...I just don’t really know where to start and the docs and tutorials are a bit confusing.

I think you need to deploy two applications, one for your DB and one for Nginx; for the certificates there are different methods to get Let's Encrypt working on Kubernetes.
For Let's Encrypt and nginx, these two articles could help you get some insight into what you need to do: Nginx & LetsEncrypt and Let's Encrypt on Kubernetes.
For CouchDB, this article may help you: CouchDB on Kubernetes. It mentions NFS as storage, but you can use your own.
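To make that concrete, here is a minimal sketch of how the pieces could fit together, assuming cert-manager and the NGINX ingress controller are already installed in the cluster; the hostname db.example.com, the issuer name letsencrypt-prod, the single replica, and the lack of persistent storage are illustrative assumptions, not a production setup:

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: couchdb
    spec:
      replicas: 1              # clustering CouchDB itself needs extra configuration beyond this sketch
      selector:
        matchLabels:
          app: couchdb
      template:
        metadata:
          labels:
            app: couchdb
        spec:
          containers:
          - name: couchdb
            image: couchdb:3
            ports:
            - containerPort: 5984
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: couchdb
    spec:
      selector:
        app: couchdb
      ports:
      - port: 5984
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: couchdb
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod   # cert-manager requests the Let's Encrypt cert; no certbot involved
    spec:
      ingressClassName: nginx
      tls:
      - hosts:
        - db.example.com
        secretName: couchdb-tls
      rules:
      - host: db.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: couchdb
                port:
                  number: 5984
    EOF

With this pattern the Ingress annotation is what triggers certificate issuance and renewal, so new pods never need any Let's Encrypt configuration of their own.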

Related

How to setup basic auth for Prometheus deployed on K8s cluster using yamls?

How to set up basic auth for Prometheus deployed on a K8s cluster using YAMLs?
I was able to achieve this easily when Prometheus was deployed locally on a host using a tar file, but when it is deployed as a pod in a K8s cluster I have tried almost everything on the internet with no luck.
Any kind of help would be really appreciated!
Thanks!
I'm not sure why the official documentation's approach would work only in a VM and not in a container, but if it truly doesn't work, you can put a web server in front of the web interface and set up authentication on it.
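If the cluster uses the NGINX ingress controller, one common way to put such a layer in front of Prometheus is basic auth on the Ingress. A minimal sketch, where the monitoring namespace, the prometheus-server Service name and port, and the hostname are all illustrative assumptions:

    # build an htpasswd file (the key in the Secret must be named "auth")
    htpasswd -c auth admin
    kubectl -n monitoring create secret generic prometheus-basic-auth --from-file=auth
    kubectl apply -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: prometheus
      namespace: monitoring
      annotations:
        nginx.ingress.kubernetes.io/auth-type: basic
        nginx.ingress.kubernetes.io/auth-secret: prometheus-basic-auth
        nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
    spec:
      ingressClassName: nginx
      rules:
      - host: prometheus.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: prometheus-server
                port:
                  number: 9090
    EOF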

Deploying GitLab on Minikube

I'm trying to deploy GitLab on Kubernetes using minikube through this tutorial, but I don't know what values to put in the fields global.hosts.domain, global.hosts.externalIP and certmanager-issuer.email.
The tutorial is very poor in explanations. I'm stuck at this step. Can someone tell me what these fields are and what I should put in them?
I'm trying to deploy GitLab on Kubernetes using minikube through this tutorial, but I don't know what values to put in the fields global.hosts.domain, global.hosts.externalIP and certmanager-issuer.email.
For the domain, you can likely use whatever you'd like; just be aware that when GitLab generates links that are designed to point to itself, they won't resolve. You can work around that with something like dnsmasq or by editing /etc/hosts, if it's important to you.
For the externalIP, that will be what minikube ip emits, and it is the IP through which you will communicate with GitLab (since you will not be able to use the Pods' IP addresses outside of minikube). If GitLab does not use a Service of type NodePort, you're in for some more hoop-jumping to expose those ports via minikube's IP.
The certmanager-issuer.email you can just forget about, because it 100% will not issue you a Let's Encrypt cert running on minikube unless they have fixed cert-manager to use the dns01 protocol. In order for Let's Encrypt to issue you a cert, they have to connect to the webserver for which they are issuing the cert, and (as you might guess) they will not be able to connect to your minikube IP. If you want to experience SSL on your GitLab instance, then issue the instance a self-signed cert and call it a draw.
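For a concrete picture of those three values, here is a hedged sketch using a wildcard DNS service such as nip.io so the generated links resolve; the IP shown is only an example of what minikube ip might print, and the email is a placeholder:

    helm repo add gitlab https://charts.gitlab.io/
    minikube ip                                        # e.g. 192.168.99.100
    helm upgrade --install gitlab gitlab/gitlab \
      --set global.hosts.domain=192.168.99.100.nip.io \
      --set global.hosts.externalIP=192.168.99.100 \
      --set certmanager-issuer.email=you@example.com   # required by the chart, but no real cert will be issued here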
The tutorial is very poor in explanations.
That's because what you are trying to do is perilous; minikube is not designed to run an entire gitlab instance, for the above and tens of other reasons. Google Cloud Platform offers generous credits to kick the tires on kubernetes, and will almost certainly have all the things you would need to make that stuff work.

Recommended way to install kubernetes

I was looking into the different ways of installing Kubernetes in https://kubernetes.io/docs/setup/pick-right-solution/ but I'm still not sure which one is the best for me.
I have access to a testbed that can provision CentOS 7.3 VMs through Vagrant. This testbed is basically a bare-metal environment in which the VMs are started up.
I can configure each host individually, so I suppose kubeadm (https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/) would be a good way to go?
Brandon,
While the Kubernetes community supports multiple cluster deployment solutions simultaneously (mainly because there is no single best solution that satisfies everyone's needs), kubeadm (https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/) is the solution we would suggest for you.
Kubeadm is a community-driven, cross-distribution cluster deployment and lifecycle management (LCM) tool that is widely recognized as a standard way to deploy Kubernetes clusters with a wide variety of options.
Also, feel free to check the article (https://medium.com/@lizrice/kubernetes-in-vagrant-with-kubeadm-21979ded6c63), which describes deploying a Kubernetes cluster with kubeadm and Vagrant.
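For orientation, the kubeadm flow on those VMs comes down to a handful of commands. A rough sketch; flannel is just one example of a pod network add-on, and the placeholders are filled in from the output of kubeadm init:

    # on the master VM
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    # install a pod network add-on, e.g. flannel
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    # on each worker VM, run the join command printed by kubeadm init
    sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>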

Deploy Kubernetes on Self-host Production environment

I am trying to install Kubernetes in a self-hosted production environment running on Ubuntu 16.04, and I am not able to find any helpful guide for setting up a production-grade Kubernetes master and connecting worker nodes to it.
Any help is much appreciated.
You can use Kubespray to self-host a production environment.
https://github.com/kubernetes-incubator/kubespray
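A rough sketch of the Kubespray flow; the inventory name is illustrative, and the exact inventory file format differs between releases:

    git clone https://github.com/kubernetes-incubator/kubespray.git
    cd kubespray
    pip install -r requirements.txt
    cp -r inventory/sample inventory/mycluster
    # list your Ubuntu 16.04 masters, workers and etcd nodes in the inventory file
    ansible-playbook -i inventory/mycluster/hosts.ini --become --become-user=root cluster.yml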
It depends on what you mean by "self-host". Most people take it to mean deploying Kubernetes in their own environment.
If you want to compare different approaches to deploying k8s in a custom environment, refer to this article, which covers a bunch of options suitable for that.
If you are interested in how to set up an HA Kubernetes cluster using kubeadm, refer to this article.
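For a sense of what that involves, the HA flow with kubeadm is roughly the following sketch, assuming a load balancer already sits in front of the control-plane nodes (lb.example.com and the placeholders are illustrative):

    # on the first control-plane node
    sudo kubeadm init --control-plane-endpoint "lb.example.com:6443" --upload-certs
    # on each additional control-plane node, run the join command printed above with --control-plane
    sudo kubeadm join lb.example.com:6443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --certificate-key <key>
    # worker nodes join with the same command minus the --control-plane and --certificate-key flags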
However, in Kubernetes there is a different definition of "self-hosted": running Kubernetes itself as a workload in Kubernetes. If you are interested in a real self-hosted approach (on a custom environment), refer to this article.
Hope this helps
You can use Typhoon to provision an HA Kubernetes cluster.
Here is a sample configuration which I used to bring up my own home cluster.
A few advantages of Typhoon are that you can choose your cloud provider for provisioning your infrastructure, which is done using Terraform, and the fact that it gives you upstream k8s is a big plus too.
Internally, it uses bootkube to bring up a temporary control plane, which consists of:
api-server
controller-manager
scheduler
Once the temporary control plane is up, the control-plane objects are injected into the API server to bring up the actual k8s cluster.
Have a look at this KubeCon talk given by CoreOS, which explains how this works.

Rancher connect to kubernetes instead of start kubernetes

Rancher is designed (as best as I can tell) to own and run a kubernetes cluster. Rancher does provide a configuration so that kubectl can interact w/ the kubernetes cluster. Rancher seems like a nice tool. But as far as I can tell, there is no way to connect to an existing kubernetes cluster. Is there any way to do this?
If you are looking for a service that can connect to existing k8s clusters, then try Containership. You can use kubectl and/or the Containership UI to manage your workloads, config maps, etc. on multiple clusters.
Hope this helps!
I got this answer on the Rancher forums:
There is not, most of the value we can add at the moment is around configuring, managing, and controlling access to the installation we setup.
https://forums.rancher.com/t/rancher-connect-to-kubernetes-instead-of-start-kubernetes/3209