cri-o socket (crio.sock) vs dockershim socket (dockershim.sock) - kubernetes

I am very new to Kubernetes and trying to understand the difference between CRI-O and dockershim.
I was reading the manual on how to install Kubernetes and I see that CRI-O is recommended as a step (see link: Container runtimes/CRI-O).
Yet I got more confused when I first tried to launch the pilot and saw that by default Kubernetes uses another CRI tool (dockershim) as the default (see link: crictl/General usage).
My question is: is it worth going through the installation procedure of CRI-O? I have found bugs in the latest release available for CentOS 7 (1.15.1-2.el7).
I also tested crio-v1.18.0 and the bugs seem to be fixed, but in this case it seems that CRI-O can connect to port 10248 when using a private repo to pull the pilot images.
Can someone shed some light on this? Is it worth trying to fix those bugs, or am I spending too much time on this?

The kubelet (the node daemon of Kubernetes) communicates with the container runtime running on the node via the Container Runtime Interface (CRI). Both dockershim and crio implement the CRI and act as connectors between the kubelet and a runtime, but they target different container runtimes:
dockershim is a connector between the kubelet and Docker
crio is a connector between the kubelet and any runtime compliant with the OCI spec, for example runc (see the crictl sketch below for the socket paths)
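In practice the difference mostly shows up as which socket the CRI client talks to. A minimal sketch, assuming the default socket paths (which is also why crictl defaults to dockershim on a stock kubeadm install):

    # talk to Docker through dockershim (the endpoint crictl falls back to by default)
    crictl --runtime-endpoint unix:///var/run/dockershim.sock ps

    # the same command against CRI-O, if that is the runtime you installed
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps

    # or persist the choice once in /etc/crictl.yaml:
    #   runtime-endpoint: unix:///var/run/crio/crio.sock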
There are many ways of setting up container runtimes. Different Kubernetes distributions use different container runtimes as their defaults (for example, Google Kubernetes Engine installed the containerd runtime with containerd-shim when I tried it last time).
I'd say that if you want to start playing with Kubernetes and want it to be stable, you should start with Docker first (i.e. use dockershim as the CRI connector). It's the most commonly tested way of using K8S.

Related

External Chaincode Pod on Kubernetes in Hyperledger Fabric v1.4

From what I have seen so far, in a Hyperledger Fabric v1.4 network that has been deployed using Kubernetes, the chaincode container and the peer container coexist within the same pod. An example of this can be found in this link https://medium.com/#oap.py/deploying-hyperledger-fabric-on-kubernetes-raft-consensus-685e3c4bb0ad . Is it possible to have a deployment where the chaincode container and the peer container exist in two separate pods? If yes, how do I go about implementing this in Hyperledger Fabric v1.4? From my research, it is possible to do so in Hyperledger Fabric v2.1 using external chaincode launchers. However, I am restricted to Fabric v1.4 currently.
As you point out, Fabric v2.0 introduced external builders which are specifically targeted to allow operators to choose how their chaincodes are built and executed. With external builders it's certainly possible to trigger creation of a separate pod to launch the chaincode in.
Unfortunately, in Fabric v1.4.x there is a strong dependency on Docker. You could potentially launch your Docker daemon in a separate privileged pod, securely authenticate to it via TLS, and launch your chaincodes there. You can see the Docker daemon connection configuration in the sample core.yaml.
As a warning, I'm unaware of any users who are deploying peers that connect to a remote Docker daemon. I don't see any reason it should not work, but it's also not a well-tested path. As external builders are available in more recent versions of Fabric, I don't expect a large amount of community support for novel Docker configurations.
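For reference, the Docker daemon connection settings mentioned above live under the vm section of the peer's core.yaml. The stanza looks roughly like this; the endpoint value and certificate paths here are illustrative, so check the sample core.yaml shipped with your Fabric release:

    vm:
      # point the peer at a remote Docker daemon instead of the local socket
      endpoint: tcp://docker-dind.example.svc.cluster.local:2376
      docker:
        tls:
          enabled: true
          ca:
            file: /etc/hyperledger/fabric/docker-ca.crt
          cert:
            file: /etc/hyperledger/fabric/docker-tls.crt
          key:
            file: /etc/hyperledger/fabric/docker-tls.key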

Kubernetes - Calico CrashLoopBackOff on Containers

I just started experimenting with K8S a few days back, trying to learn K8S with a specific emphasis on networking, service mesh, etc.
I am running 2 worker nodes and 1 master on VMs with CentOS 7 and K8S, installed with kubeadm.
The default CNI was Flannel. The install was OK and everything except the networking was working. I could deploy containers etc., so a lot of the control plane was working.
However, networking was not working correctly, not even container to container on the same worker node. I checked all the usual suspects on a single worker (the veths, IPs, MACs, bridges) and everything seemed to check out, e.g. the MACs were on the correct bridges (i.e. cni0), IP address assignments, etc. Even when pinging from busybox to busybox, I would see the ARP caches being populated but the pings still did not work. I disabled all firewalls, IP forwarding was enabled, etc. I am not an expert on iptables but it looked OK. Also, when logged into the worker node shell I could ping the busybox containers, but they could not ping each other.
One question I have at this point: why is the docker0 bridge still present even when Flannel is installed? Can I delete it, or are there some dependencies associated with it? I did not notice the veths for the containers showing as connected to the docker0 bridge, and the docker0 bridge was down. However, I followed this website and it shows a different way of validating and shows the veths connected to cni0, which is very confusing and frustrating.
I gave up on Flannel, as I was only using it to experiment, and decided to try out Calico.
I followed the install procedures from the Calico site. I was not entirely clear on the tidy-up procedures for Flannel (not sure where this is documented?), and this is where it went from bad to worse.
I started getting crash loops on the calico containers and coredns stuck in ContainerCreating, with liveness issues reported on calico. This is where I am stuck and would like some help.
I have read and tried many things on the web and may have fixed some issues (as there may be many in play), but I would really appreciate any help.
=== install info and some output...
Some questions...
The ContainerCreating state for coredns: is this dependent on a successful install of Calico? Are the issues related, or should the coredns install work independently of the CNI install?
The ContainerCreating state for coredns: is this dependent on a successful install of Calico? Are the issues related, or should the coredns install work independently of the CNI install?
Yes, it is. You need to install a CNI for coredns to work.
When you set up your cluster with kubeadm there is a flag called --pod-network-cidr; depending on which CNI you intend to use, you need to specify the matching range with this flag.
For example, by default Calico uses the range 192.168.0.0/16 and Flannel uses the range 10.244.0.0/16.
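For example, a rough sketch for Calico with kubeadm (the manifest URL is illustrative; use whatever the Calico install docs currently point to):

    # initialise the control plane with the CIDR Calico expects by default
    sudo kubeadm init --pod-network-cidr=192.168.0.0/16

    # then install the Calico manifest (illustrative URL)
    kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml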
I have a guide on how to set up a cluster using kubeadm; maybe it will help you.
Please note that if you want to replace the CNI without deleting the entire cluster, extra steps need to be taken in order to clean up the firewall rules left by the old CNI.
See here how to replace Flannel with Calico, for example.
And here how to migrate from Flannel to Calico (a rough cleanup sketch follows below).
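Very roughly, the cleanup for a stock Flannel install tends to look something like this; the manifest name and interface names are assumptions based on the default kube-flannel deployment, so adapt them to what you actually applied:

    # remove the flannel daemonset (use the manifest you originally applied)
    kubectl delete -f kube-flannel.yml

    # on every node: remove flannel's CNI config and leftover interfaces
    sudo rm -f /etc/cni/net.d/10-flannel.conflist
    sudo ip link delete flannel.1
    sudo ip link delete cni0

    # flush leftover iptables rules (kube-proxy and Docker recreate theirs) and restart kubelet
    sudo iptables -F && sudo iptables -t nat -F
    sudo systemctl restart kubelet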

Kubernetes controller-manager and api server issue

I am new to Kubernetes and trying to set up a master and 2-node architecture using Oracle VirtualBox.
OS: Ubuntu 16.04.6 LTS
Docker: 17.03.2-ce
Kubernetes
Client Version: v1.17.4
Server Version: v1.17.4
When I run the join command on the worker node, the kube-controller-manager and kube-apiserver pods disappear and the worker nodes do not get joined (though the join command executes successfully).
I have set Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs" but I still get the same error.
Please see the snapshot below.
Thanks.
The link you have provided is no longer available. While learning and trying out Kubernetes for the first time I highly recommend using the official docs.
There you will find a detailed guide regarding Creating a single control-plane cluster with kubeadm. Note that:
To follow this guide, you need:
One or more machines running a deb/rpm-compatible Linux OS; for example: Ubuntu or CentOS.
2 GiB or more of RAM per machine–any less leaves little room for your apps.
At least 2 CPUs on the machine that you use as a control-plane node.
Full network connectivity among all machines in the cluster. You can use either a public or a private network.
You also need to use a version of kubeadm that can deploy the version of Kubernetes that you want to use in your new cluster.
Kubernetes' version and version skew support policy applies to kubeadm as well as to Kubernetes overall. Check that policy to learn about what versions of Kubernetes and kubeadm are supported. This page is written for Kubernetes v1.18.
The kubeadm tool's overall feature state is General Availability (GA). Some sub-features are still under active development. The implementation of creating the cluster may change slightly as the tool evolves, but the overall implementation should be pretty stable.
If you encounter any issues, first try the troubleshooting steps.
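For orientation, the flow that guide walks you through boils down to something like this (a sketch; the pod-network CIDR and the CNI manifest depend on the network add-on you pick, and the join token/hash are printed by kubeadm init):

    # on the control-plane node
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    kubectl apply -f <your-CNI-manifest.yaml>

    # on each worker node, using the values printed by kubeadm init
    sudo kubeadm join <control-plane-ip>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>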
Please let me know if that helped.

Kubelet in Windows 10

I want to ask something because I looked for it and couldn't find a clear answer anywhere.
Can kubelet be used in Windows 10?
All I found is usage of kubelet on Linux operating systems only.
Also, my theory is that kubectl is maybe the Windows version of kubelet?
I'm really confused about it and couldn't find any clear answer about kubelet on Windows, or a comparison between kubelet and kubectl.
I'll be really grateful if someone could explain that to me.
Can kubelet be used in Windows 10
The kubelet is one of the Node Components and is part of the Kubernetes infrastructure. It is required for Kubernetes to work properly, so it is used on Linux/Unix, Windows, and macOS.
Also, my theory is that kubectl is maybe the Windows version of kubelet?
kubelet is an agent that runs on each node in the cluster. It makes sure that containers are running in a pod.
kubectl is a command-line interface for running commands against Kubernetes clusters. More information can be found in the documentation.
Please visit Kubernetes Components to get familiar with the other Kubernetes components. Here you can find more information about the kubelet and here about the K8s infrastructure.
I'm really confused about it and couldn't find any clear answer about kubelet on Windows, or a comparison between kubelet and kubectl.
The two cannot really be compared. One is a component of the infrastructure; the other is a command-line tool for executing K8s commands.
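As a tiny illustration of the split (assuming you already have a working cluster and kubeconfig on your machine):

    # kubectl is the CLI you run from your workstation; it talks to the API server
    kubectl get nodes -o wide
    # each node in that list appears because the kubelet running on it
    # registered the node with the API server and keeps reporting its status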
===
To run Kubernetes on Linux/Windows/macOS you have to have a container manager like Docker. For Linux there is a special package to download; for Windows it is Docker for Windows. (The latest versions of Kubernetes also support Windows containers, but that's a different topic.)
To run Kubernetes on Windows, you have to use Minikube. It allows you to run a single-node Kubernetes cluster inside a virtual machine.
You can find how to configure Kubernetes on Windows in this tutorial.
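For example, a minimal sketch of bringing Minikube up on Windows 10 with Hyper-V (the driver flag has changed name between Minikube versions, so check minikube start --help):

    # from an elevated prompt, after installing minikube and kubectl
    minikube start --driver=hyperv
    kubectl get nodes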
Hope it helps you understand.
You can add a Windows node to the Kubernetes cluster following the instructions from the official documentation page. As mentioned in the documentation, you can get all the required components using the links from the Kubernetes CHANGELOG-1.15.md page:
Client binaries (kubectl.exe)
Server binaries (no Windows binaries, because Windows cannot be a master node at the moment)
Node binaries (kube-proxy.exe, kubeadm.exe, kubectl.exe, kubelet.exe)
If you need another version of the binaries, please find the CHANGELOG for that specific version on the Kubernetes Releases page.
You need to have the Docker engine installed on your Windows machine. Here is the manual on how to do it for Windows 10.
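Very roughly, once Docker and the node binaries are in place, joining the Windows machine looks like a normal kubeadm join run from Windows (a sketch; the token and hash come from kubeadm token create --print-join-command on the master):

    # run on the Windows node
    kubeadm.exe join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>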

How to install Kubernetes manually?

While getting familiar with Kubernetes I see tons of tools that should help me install Kubernetes anywhere, but I don't understand exactly what they do under the hood, and as a result I don't understand how to troubleshoot issues.
Can someone provide a link to a tutorial on how to install Kubernetes without any tools?
There are two good guides on setting up Kubernetes manually:
Kelsey Hightower's Kubernetes the hard way
Kubernetes guide on getting started from scratch
Kelsey's guide assumes you are using GCP or AWS as the infrastructure, while the Kubernetes guide is a bit more agnostic.
I wouldn't recommend running either of these in production unless you really know what you're doing. However, they are great for learning what is going on under the hood. Even if you just read the guides and don't use them to setup any infrastructure you should gain a better understanding of the pieces that make up a Kubernetes cluster. You can then use one of the helpful setup tools to create your cluster, but now you will understand what it is actually doing and can debug when things go wrong.
For simplicity, you can view k8s as three components:
etcd
k8s master, which includes kube-apiserver, controller, scheduler
node, which contains kubelet
You can install etcd and the k8s master together on one machine. The procedure is as follows (a consolidated command sketch follows these steps):
Install etcd. Download the etcd package and run it, which is quite simple. Remember the port of the etcd service, e.g. 2379/4001, or whatever you set.
Git clone the kubernetes project from GitHub and build it. You can then find the executable binaries; e.g. for k8s version 1.3, kube-apiserver, kube-controller-manager and kube-scheduler are in the src/k8s.io/kubernetes/_output/local/bin/linux/amd64 folder.
Then run kube-apiserver, specifying the etcd IP and port (e.g. --etcd_servers=http://127.0.0.1:4001).
Run the scheduler and controller, specifying the apiserver IP and port (e.g. --master=127.0.0.1:8080). There is no required order between the scheduler and the controller.
The master is now running. Make sure these processes run without errors: if etcd exits, the apiserver will exit; if the apiserver exits, the scheduler and controller will exit.
On another machine (virtual preferred, network connected), run the kubelet. The kubelet can also be found in the folder above (src/k8s.io/kubernetes/_output/local/bin/linux/amd64); specify the apiserver IP and port (e.g. --api-servers=http://10.10.10.19:8080). You may install Docker or something else on the node, to prove that you can create a container.
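Put together, the manual bring-up above looks roughly like the sketch below. The IPs, ports and flag spellings match the old 1.x era described here (e.g. --etcd_servers, --api-servers); newer releases renamed several of these flags, so check --help for the version you built:

    # 1. etcd (client port 4001 in this example)
    ./etcd --listen-client-urls http://127.0.0.1:4001 --advertise-client-urls http://127.0.0.1:4001 &

    # 2. API server, pointed at etcd
    ./kube-apiserver --etcd_servers=http://127.0.0.1:4001 --insecure-port=8080 &

    # 3. controller manager and scheduler, pointed at the API server (in either order)
    ./kube-controller-manager --master=127.0.0.1:8080 &
    ./kube-scheduler --master=127.0.0.1:8080 &

    # 4. on the node machine, kubelet pointed at the API server
    ./kubelet --api-servers=http://10.10.10.19:8080 &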