We have no Kubernetes in our environment. Is there a relevant document that can be used for reference?
As I understand it, Kubernetes is a set of binaries that can form a new k8s cluster. There is an open-source Kubernetes repository on GitHub, but I still have some confusion:
Which core team maintains (has write permission to) the kubernetes repo: the Linux Foundation or the CNCF?
I see that there are multiple Kubernetes engines (RKE, EKS, ...). Do they just add some add-ons/plugins/tools, or do they modify the Kubernetes source code to build their own versions of the k8s components (apiserver, kube-proxy, kubelet)?
If I use the RKE binary to set up my cluster and it shows Kubernetes version "v1.17.2", does that mean the version is a release from the kubernetes repo, or is it just another fork maintained by the Rancher team? The same question applies to GKE, EKS, and so on.
Which core team maintains (has write permission to) the kubernetes repo: the Linux Foundation or the CNCF?
The Cloud Native Computing Foundation (CNCF) is one of the projects hosted by the Linux Foundation, and Kubernetes is one of the projects that has graduated from the CNCF. Day-to-day write access to the kubernetes repo is held by the project's own maintainers, organized into SIGs, under CNCF governance. Read more over here.
I see that there are multiple Kubernetes engines (RKE, EKS, ...). Do they just add some add-ons/plugins/tools, or do they modify the Kubernetes source code to build their own versions of the k8s components (apiserver, kube-proxy, kubelet)?
These are not really "multiple Kubernetes engines"; they are just Kubernetes offerings from different vendors. Another such example is GKE (Google Kubernetes Engine) from Google. The main advantage of GKE/EKS over plain Kubernetes is that GKE/EKS and the like are managed products, so the vendor is responsible for cluster management, availability of the master and worker nodes, and so on.
If I use the RKE binary to set up my cluster and it shows Kubernetes version "v1.17.2", does that mean the version is a release from the kubernetes repo, or is it just another fork maintained by the Rancher team? The same question applies to GKE, EKS, and so on.
At the core you still have Kubernetes, but once you are using managed products like GKE or EKS, it is better not to equate them with plain "Kubernetes" and to start thinking of them as GKE or EKS. They can have their own release cycles, and many other cloud products from the same vendor are integrated with them. Read more over here.
What I want to do is deploy a multi-container application:
On RHEL OS
Using a Red Hat-supportable product (if possible)
In a single-node K8s cluster (on a bare-metal machine)
I found several ways, but I have some concerns.
Minikube, Minishift, OKD, CodeReady Containers
First, they run in a VM, but what I want is to run on the host.
Second, their docs say they are not for production environments.
So, is there any PaaS for a single-node cluster that is suitable for a production environment?
Docker, Docker Compose
The deployment target OS will probably be RHEL 8. I guess it is not a good idea to use Docker, because Red Hat products are moving away from Docker; even in the RHEL 8 repositories there is no docker RPM for el8 yet.
My questions are:
Is there any PaaS for a single-node cluster that is suitable for a production environment?
If not, is Docker Compose the best option?
As was already mentioned, you should not use a single-node setup in a production environment.
You should not do that because if your server goes down, your service is offline. There is nothing to fail over to, nothing that can continue the work that was in progress.
If you still want to set up a single-node Kubernetes cluster, you can do that using kubeadm; I think that is as close to production grade as you can get (see the sketch below).
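A minimal sketch of what that looks like, assuming kubeadm, kubectl and a container runtime are already installed on the host; flag values and the taint key vary between Kubernetes versions, and the pod CIDR below is just an example:

    # initialize a single-node control plane (pod CIDR is an example value)
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16
    # make kubectl usable for your regular user
    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    # allow ordinary workloads to schedule on the control-plane node
    kubectl taint nodes --all node-role.kubernetes.io/master-
    # finally, install a pod network add-on of your choice (e.g. Flannel or Calico)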
Other than that, as an alternative you can play with Installing Kubernetes with Minikube or Installing a local Kubernetes with MicroK8s.
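Since you mentioned wanting to run on the host rather than in a VM: Minikube has a driver that runs the cluster components directly on the host (it needs root and a container runtime on the machine, and the flag name has changed between versions, --vm-driver=none vs --driver=none). A rough sketch, assuming Minikube and kubectl are installed:

    # run the cluster components directly on the host instead of in a VM
    sudo minikube start --vm-driver=none
    # verify that the single node is up
    kubectl get nodes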
It's up to you which one you choose, but you need to remember that this should not run as production; it should be a lab or test environment which, if it works as expected, can be migrated to a multi-node, production-grade cluster.
As for a PaaS on a single node, there is Dokku.
Docker powered mini-Heroku. The smallest PaaS implementation you've ever seen.
And if you would consider using a cloud PaaS, you can choose from AWS Cloud9, Azure App Service, or Google App Engine.
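For a sense of what the Dokku workflow looks like, a minimal sketch; the host name and app name here are placeholders, and the exact commands may differ between Dokku versions:

    # on the Dokku host: create an application
    dokku apps:create myapp
    # on your workstation: add the Dokku host as a git remote and push to deploy
    git remote add dokku dokku@dokku.example.com:myapp
    git push dokku master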
A single-node cluster is not recommended for production applications. You need scalability, high availability and fault tolerance for production apps, and you must have more than one node to get those properties.
Rancher uses environments as the most coarse-grained element for configuration.
We typically configure dev and prod environments.
Hosts (physical or virtual) are added to environments.
Rancher has templates for environments. One of the templates is for creating an environment with Kubernetes orchestration.
K8S has clusters with nodes. Apparently, when you create a Rancher environment following the K8S template, you establish a K8S cluster.
Questions I could not answer clearly looking at the documentation:
Is the Rancher environment and the K8S cluster the same thing? (For an environment that uses K8S.) Or can an environment contain more than one cluster?
Is the Rancher host and the K8S node the same thing? (Again, for an environment that uses K8S.)
An environment contains a set of machines (hosts/nodes), an orchestration engine (kubernetes being one of the 4 options), and a set of members with different roles defining access to them. You cannot have more than one kubernetes cluster in one environment.
Yes, they are the same thing; the Kubernetes term is node.
If you want kubernetes you should really be using Rancher 2.x instead of 1.x. 1.x is in maintenance-only mode and support will eventually be dropped. 2.x is entirely based around kubernetes and is far better integrated with it.
Is HA across multiple cloud providers possible, i.e. ONE Kubernetes cluster built from a mix of Azure nodes, AWS nodes and VMware nodes? (Assume they all use the same OS image.)
If so, how would dynamic provisioning work?
Can Kubernetes CSI (Container Storage Interface) help me with this?
That will not work very well. The cloud provider needs to be set on the apiserver & controller-manager and you can't run multiple copies of those in different configurations.
Now, if you don't need a cloud provider integration, as in you are just using these as generic VMs, you will not have access to cloud storage via the Kubernetes API. Apart from that limitation it's workable, but it is still not a great setup: it would essentially be a cross-region cluster, which is not a supported use case. You are meant to use one cluster per region and arrange load balancing across them somehow (yes, this is the tricky bit).
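To make the first point concrete: the legacy in-tree cloud provider is a single, cluster-wide setting on the control-plane components, so there is nowhere to say "these nodes are AWS and those are Azure". Roughly, with other flags omitted (and note that --cloud-provider is deprecated in newer releases in favour of external cloud controller managers):

    # the cloud integration is one cluster-wide setting, not a per-node choice
    kube-apiserver --cloud-provider=aws
    kube-controller-manager --cloud-provider=aws
    # you cannot run a second controller-manager configured for a different provider in the same cluster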
I'm trying to use Kubernetes to deploy Docker containers, and I found this tutorial.
So according to this tutorial, what are the prerequisites?
They said that "services that are typically on a separate Kubernetes master system and two or more Kubernetes node systems are all running on a single system."
But I don't understand how we can run both the master and the nodes on a single system (for example, I have one EC2 instance with IP address 52.192.x.x).
That is a guide about running Kubernetes specifically on RedHat Atomic nodes. There are lots of guides about running Kubernetes on other types of nodes; see the Creating a Kubernetes Cluster page on docs.k8s.io.
One of the guides on the Kubernetes site shows how to run a local docker-based cluster, which should also work for you on a single node in the cloud.
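The reason a single system can act as both master and node is that the control-plane services (apiserver, controller-manager, scheduler, etcd) and the kubelet are just processes, so they can all run on the same host, which then registers itself as the cluster's only node. Once such a cluster is up, you can confirm this with standard kubectl commands, for example:

    # the single host shows up as the cluster's only node
    kubectl get nodes -o wide
    # the control-plane components run on that same host (as static pods in kubeadm-style setups)
    kubectl get pods -n kube-system -o wide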