Installed minikube (Kubernetes) because I have only one master server/node, but it is not pointing to my IP; it points to 172.17.0.2

Good afternoon. I am new to Kubernetes and I am installing it for a development environment. I have a Red Hat (RHEL) server that will act as both master and node. I followed these steps to install it, following the tutorial on this page:
https://www.linuxtechi.com/install-kubernetes-k8s-minikube-centos-8/
sudo dnf update -y
sudo setenforce 0
sudo sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
sudo systemctl start docker
sudo systemctl enable docker
sudo dnf install conntrack -y
#Installing Kubectl
sudo cat <<EOF > /etc/yum.repos.d/kubernetes.repo   # run as root
yum install -y kubectl                               # run as root
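For reference, the full repo-file command from the linked tutorial looks roughly like this (a sketch; the repository URLs are the tutorial's and may have changed since):
# assumed content of /etc/yum.repos.d/kubernetes.repo, as in the linked tutorial (run as root)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF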
#Installing Minikube
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube
mkdir -p /usr/local/bin/
install minikube /usr/local/bin/
However, when I configure minikube, it does not point to the IP of my server, but to the IP 172.17.0.2:
minikube ip
172.17.0.2
kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 172.17.0.2:8443 was refused - did you specify the right host or port?
My IP is 10.154.7.209
What could I be doing wrong? If I can't use minikube to set up a server as master/node, what can I use?

You are not doing anything wrong 👌. minikube is using the docker driver, so 172.17.0.2 is the IP address of the container where your cluster is running.
But it looks like you are using a proxy somewhere in your system, so you need to include the 172.17.0.0/24 range in the NO_PROXY environment variable.
Something like this:
export HTTP_PROXY=http://<proxy hostname:port>
export HTTPS_PROXY=https://<proxy hostname:port>
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,172.17.0.0/24 👈
minikube start
# get pods
minikube kubectl -- get pods --all-namespaces
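If the proxy variables above are exported in the same shell before minikube start, a quick sanity check could look like this (a sketch; the output values are examples):
env | grep -i _proxy                # NO_PROXY must include the 172.17.0.0/24 range
minikube ip                         # still prints the container IP, e.g. 172.17.0.2
kubectl cluster-info                # should now reach https://172.17.0.2:8443 instead of refusing the connection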
✌️

Related

How can I get access to this cluster?

Hello, I created a k8s cluster in Vagrant using the following commands:
sudo apt-get update
sudo apt-get upgrade
sudo snap install microk8s --classic --channel=1.21/stable
sudo usermod -a -G microk8s $USER
newgrp microk8s
sudo chown -f -R $USER ~/.kube
microk8s enable dns storage ingress metallb:10.64.140.43-10.64.140.49
microk8s config > ~/.kube/config
sudo snap install juju --classic
juju bootstrap microk8s mycontroler
juju add-model kubeflow
juju deploy kubeflow-lite --trust
microk8s kubectl patch role -n kubeflow istio-ingressgateway-operator -p '{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"name":"istio-ingressgateway-operator"},"rules":[{"apiGroups":["*"],"resources":["*"],"verbs":["*"]}]}'
juju config dex-auth public-url=http://10.64.140.43.nip.io
juju config oidc-gatekeeper public-url=http://10.64.140.43.nip.io
juju config dex-auth static-username=admin
juju config dex-auth static-password=admin
I copied the .kube/config from the Vagrant VM to the host and set up a SOCKS proxy to the VM like this:
ssh -D 9999 vagrant@10.64.140.43
I cannot access http://10.64.140.43.nip.io either, because I get a 403 error.
Any idea how I can use kubectl from the host to access that k8s cluster?
Thanks
This is by design: it is a security control that doesn't allow outside access to Vagrant networking. You need to configure port forwarding in your Vagrantfile and then run vagrant reload for it to take effect.
You can find more details in this stackoverflow answer.
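A minimal sketch of what that could look like, assuming the MicroK8s API server listens on its default port 16443 (the port numbers and the TLS flag here are illustrative assumptions):
# inside the Vagrant.configure block of your Vagrantfile add:
#   config.vm.network "forwarded_port", guest: 16443, host: 16443
vagrant reload                      # apply the Vagrantfile change
# then, on the host, point kubectl at the forwarded port
# (the API server certificate may not cover 127.0.0.1, hence the TLS flag):
kubectl --server=https://127.0.0.1:16443 --insecure-skip-tls-verify get nodes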

Unable to connect to the internet/google.com from a pod. Docker and k8s are able to pull images

I am trying to learn Kubernetes.
I created a single-node Kubernetes cluster on Oracle Cloud using the steps here.
cat /etc/resolv.conf
>> nameserver 169.254.169.254
kubectl run busybox --rm -it --image=busybox --restart=Never -- sh
cat /etc/resolv.conf
>> nameserver 10.33.0.10
nslookup google.com
>>Server: 10.33.0.10
Address: 10.33.0.10:53
;; connection timed out; no servers could be reached
ping 10.33.0.10
>>PING 10.33.0.10 (10.33.0.10): 56 data bytes
kubectl get svc -n kube-system -o wide
>> CLUSTER-IP - 10.33.0.10
kubectl logs --namespace=kube-system -l k8s-app=kube-dns
>>[ERROR] plugin/errors: 2 google.com. A: read udp 10.32.0.9:57385->169.254.169.254:53: i/o timeout
I am not able to identify whether this is a CoreDNS error or a pod networking issue. Any direction would really help.
Kubernetes deprecated Docker as a container runtime after v1.20.
The Kubernetes project decided to deprecate Docker as an underlying runtime in favor of runtimes that use the Container Runtime Interface (CRI) created for Kubernetes.
To support this, Mirantis and Docker came to the rescue by agreeing to partner in maintaining the shim code as a standalone project.
More details here.
sudo systemctl enable docker
# -- Installing cri-dockerd
VER=$(curl -s https://api.github.com/repos/Mirantis/cri-dockerd/releases/latest|grep tag_name | cut -d '"' -f 4)
echo $VER
wget https://github.com/Mirantis/cri-dockerd/releases/download/${VER}/cri-dockerd-${VER}-linux-arm64.tar.gz
tar xvf cri-dockerd-${VER}-linux-arm64.tar.gz
install -o root -g root -m 0755 cri-dockerd /usr/bin/cri-dockerd
cp cri-dockerd /usr/bin/
# -- Verification
cri-dockerd --version
# -- Configure systemd units for cri-dockerd
wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.service
wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.socket
sudo cp cri-docker.socket cri-docker.service /etc/systemd/system/
sudo cp cri-docker.socket cri-docker.service /usr/lib/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable cri-docker.service
sudo systemctl enable --now cri-docker.socket
# -- Using cri-dockerd on new Kubernetes cluster
systemctl status docker | grep Active
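If you then bootstrap a new cluster with kubeadm, you have to point it at the cri-dockerd socket explicitly; a minimal sketch (the socket path is the default from the cri-docker.socket unit above):
# tell kubeadm to use cri-dockerd as the CRI instead of auto-detecting a runtime
sudo kubeadm init --cri-socket unix:///run/cri-dockerd.sock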
I ran into a similar issue with almost the same scenario as described above. The accepted solution https://stackoverflow.com/a/72104194/1119570 is wrong. This is a pure networking issue that is not related to any EKS upgrade in any way.
The root cause of our issue was that the worker node AWS EKS Linux 1.21 AMI had been hardened by our security department, which turned off the following setting in /etc/sysctl.conf:
net.ipv4.ip_forward = 0
After switching this setting to net.ipv4.ip_forward = 1 and rebooting the EC2 node, everything started working properly. Hope this helps!
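For anyone hitting the same symptom, a quick way to check and persist this on the node looks like the sketch below (the drop-in file name is just an example):
sysctl net.ipv4.ip_forward                        # 0 means forwarding is disabled
sudo sysctl -w net.ipv4.ip_forward=1              # enable it immediately
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
sudo sysctl --system                              # reload so the setting survives reboots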

How to deploy a microservice application using Kubernetes on three machines in my local network?

As I am new to Kubernetes, I still haven't figured out how to deploy an application on my local network using one or various physical machines.
Usually the tutorials on the internet describe setups using minikube, but that is only for local machine tests, isn't it? Or setups where the deployment is performed on cloud platforms, like Google.
I would really appreciate some guidance on where to begin. In my case, will I need to install only Kubernetes on the machines? Is it a trivial task?
One way to install Kubernetes is via kubeadm.
I've done this with my own VMs, but it should also work for your physical machines.
You can use kubeadm, here is a doc which will help you to install it.
Each server will need to have kubeadm, kubelet and kubectl installed.
On the master node you will run kubeadm init, and once the whole process is finished it will give you a command which you can run on the other machines to add them as worker nodes.
Of course those servers need to see each other on your network.
There are also other options to install Kubernetes. For example using kops or kubespray.
From https://kubernetes.io/docs/setup/learning-environment/minikube/
Minikube is a tool that makes it easy to run Kubernetes locally.
Minikube runs a single-node Kubernetes cluster inside a Virtual Machine (VM) on your laptop for users looking to try out Kubernetes or develop with it day-to-day.
If you want to set up your own cluster, there are quite a few options, including kubeadm or kubespray.
Personally I've used Vagrant and Ansible to set up a local Kubernetes cluster using kubeadm - there's a good tutorial here... and here's my implementation :)
Kubeadm allows you to create a cluster using any machine that runs Docker.
This guide will help you create your first cluster with 1 master and 1 worker node. (This is based on Debian.)
Execute the following commands on both machines:
Install Docker from the official repository:
$ sudo apt-get update
$ sudo apt-get -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common
$ curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg | sudo apt-key add -
$ sudo apt-key fingerprint 0EBFCD88
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable"
$ sudo apt-get update
$ sudo apt-get install -y docker-ce docker-ce-cli containerd.io
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
Install kubeadm, kubectl and kubelet from the official repository:
$ sudo apt-get update && sudo apt-get install -y apt-transport-https curl
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ sudo su -c "cat > /etc/apt/sources.list.d/kubernetes.list <<EOF
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF"
$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
Execute the following commands only on your master node.
Initialize your cluster:
$ sudo kubeadm init
When the init finishes, kubeadm will give you the command to add your workers to your cluster. Save this command for later use.
EXAMPLE:
kubeadm join 10.128.0.17:6443 --token 27evky.hoel95h16poqici6 \
--discovery-token-ca-cert-hash sha256:521f69cb935951bbfee142432108caeaeaf1682d8647208ba2f558624890ab63
After the kubeadm init command completes, run the following commands to start using your new cluster
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Your master is not ready yet; we need to create a CNI network (you can choose a different CNI):
$ kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
Check if your master node is ready. It needs a while to start all dependencies
$ kubectl get nodes
EXAMPLE:
user@kubemaster:~$ kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
kubemaster   NotReady   master   32s   v1.16.2
To check more details about your node:
$ kubectl describe node <NODENAME>
When your master node is ready you can proceed and run the kubeadm join command (saved previously) on your worker nodes:
$ sudo kubeadm join 10.128.0.17:6443 --token 27evky.hoel95h16poqici6 --discovery-token-ca-cert-hash sha256:521f69cb935951bbfee142432108caeaeaf1682d8647208ba2f558624890ab63
Check if your worker node is ready (command must be executed on master node)
$ kubectl get nodes
If you want to have more workers, just repeat the kubeadm join command on the new workers.
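Once both nodes show Ready, you can sanity-check the cluster with a throwaway deployment (the names below are just examples):
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --port=80 --type=NodePort
kubectl get pods -o wide            # the pod should land on the worker node
kubectl get svc hello               # note the assigned NodePort (30000-32767)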

How to install Kubernetes inside Kubernetes

I have a Kubernetes cluster with one master and 3 workers. I want to create another Kubernetes cluster on top of this one; I mean the other cluster's master and workers should run as containers, for example building a cluster with 1 master and 5 workers.
Since all components in Kubernetes except the kubelet can run as pods, you can deploy the Kubernetes apiserver, controller-manager and scheduler as pods in another Kubernetes cluster.
You will need to create a Service exposing the API server as a NodePort.
In the next step, this NodePort can be used as the apiserver URL for the second cluster's kubelets.
The only challenge you will face: in case you are running Calico, there can be only one instance of Calico on the master node, so if you are using operators, your API server pod will not be able to reach the operator controller pod.
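A rough sketch of the NodePort part, assuming the nested control plane runs as a Deployment named nested-kube-apiserver listening on 6443 (both the name and the ports are made up for illustration):
# expose the nested API server on a NodePort of the outer cluster
kubectl expose deployment nested-kube-apiserver --port=6443 --target-port=6443 --type=NodePort
kubectl get svc nested-kube-apiserver    # use https://<outer-node-ip>:<nodeport> as the apiserver URL for the second cluster's kubelets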
That is going to be very difficult. What are you using to run Kubernetes (minikube, Google Kubernetes Engine, etc.)?
If using minikube (local), minikube makes use of VirtualBox's hypervisor to create new "containers". If you create a cluster, you are already using that hypervisor. What hypervisor would your 'new' cluster use?
Right now, I don't think the question makes much sense either. You want to deploy a 'new' Kubernetes cluster as a pod? Where would this pod get its resources? Let's say you used GKE (Google Cloud) and had a really big node (100 vCPUs, 1000 RAM). Once you create another cluster inside a pod on this node (theoretically), would this pod act as the master node? What if this pod goes down? If the master node goes down, the cluster is lost. Pods are ephemeral. It is theoretically possible, but there is absolutely no logical reason one would implement this. This isn't even an answer, but more of a probe to help you answer our questions, so we can attempt to answer yours.
apt-get update
apt-get install apt-transport-https ca-certificates curl software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
apt-get update -y
apt-get install docker.io -y
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list
apt-get update -y
apt-get install kubelet kubeadm kubectl -y
kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
mkdir -p /root/.kube
sudo cp -i /etc/kubernetes/admin.conf /root/.kube/config
sudo chown $(id -u):$(id -g) /root/.kube/config
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl taint nodes --all node-role.kubernetes.io/master-
Use the above commands to install Kubernetes and make it run as a single-node cluster for demo purposes (OS family: Ubuntu 16.04/18.04).
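After removing the master taint, the single node should accept workloads; a quick check could look like this (the pod name is just an example):
kubectl get nodes                               # the node should report Ready
kubectl run test-nginx --image=nginx --restart=Never
kubectl get pod test-nginx -o wide              # should be Running on the single (master) node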

Why can't the host connect via NodePort to a Kubernetes cluster on Google Cloud VM instances?

I have installed a Kubernetes cluster using this tutorial.
When I set it up on a VirtualBox VM, my host could connect via NodePort normally. When I tried it on a Compute Engine VM instance, the host can't connect to the Kubernetes cluster via NodePort.
I have attached two pictures.
Thank you for your support.
Kubernetes cluster (bare metal) on Local VM Virtual Box
Kubernetes cluster (bare metal) on Google cloud Platform VM Instances
This took me a while to test, but I finally have a result. It turns out the reason for your issue is Calico and the GCP firewall. To be more specific, you have to add firewall rules before the connectivity will work.
Following this document on installing Calico for GCE:
GCE blocks traffic between hosts by default; run the following command to allow Calico traffic to flow between containers on different hosts (where the source-ranges parameter assumes you have created your project with the default GCE network parameters - modify the address range if yours is different):
So you need to allow the traffic to flow between containers:
gcloud compute firewall-rules create calico-ipip --allow 4 --network "default" --source-ranges "10.128.0.0/9"
Note that this IP range may need to be changed. For test purposes you can use 10.0.0.0/8, but that is far too wide a range, so please narrow it down to your needs.
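Since the question is about reaching a NodePort from outside the cluster, you will most likely also need a firewall rule for the NodePort range; a sketch (the rule name and source range are examples, narrow them down as above):
gcloud compute firewall-rules create allow-nodeports --allow tcp:30000-32767 --network "default" --source-ranges "0.0.0.0/0"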
Then proceed with setting up instances for master and nodes.
You can actually skip most of the steps from the tutorial you posted, as connectivity is handled by the cloud provider. Here is a really simple script I use for kubeadm on VMs. You can also perform it step by step.
#!/bin/bash
swapoff -a
echo net/bridge/bridge-nf-call-ip6tables = 1 >> /etc/ufw/sysctl.conf
echo net/bridge/bridge-nf-call-iptables = 1 >> /etc/ufw/sysctl.conf
echo net/bridge/bridge-nf-call-arptables = 1 >> /etc/ufw/sysctl.conf
apt-get install -y ebtables ethtool
apt-get update
apt-get install -y docker.io
apt-get install -y apt-transport-https
apt-get install -y curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
kubectl taint nodes --all node-role.kubernetes.io/master-
In my case I used a simple Redis application from the Kubernetes documentation:
root@calico-master:/home/xxx# kubectl get svc
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP    29m
redis-master   ClusterIP   10.107.41.117   <none>        6379/TCP   26m
root@calico-master:/home/xxx# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE
redis-master-57fc67768d-5lx92   1/1     Running   0          27m   192.168.1.4   calico   <none>
root@calico-master:/home/xxx# ping 192.168.1.4
PING 192.168.1.4 (192.168.1.4) 56(84) bytes of data.
64 bytes from 192.168.1.4: icmp_seq=1 ttl=63 time=1.48 ms
Before adding the firewall rules and performing a regular Calico installation I was not able to ping, nor wget from the service. After that, there is no problem with pinging the IP or hostname, and wget works as well:
root@calico-master:/home/xxx# wget http://10.107.41.117:6379
--2018-10-24 13:24:43--  http://10.107.41.117:6379/
Connecting to 10.107.41.117:6379... connected.
HTTP request sent, awaiting response... 200 No headers, assuming HTTP/0.9
Length: unspecified
Saving to: ‘index.html.2’
The steps above were also tested with type: NodePort and it works as well.
Another way is to use Flannel, which I also tested and it worked out of the box for reproducing your issue. Be sure to read more about CNIs so you can choose one that suits your needs.
Hope this solves your problem.
That's because in minikube there's only one node for everything and the VM is that node. So if you are in the VM you can connect to the NodePort locally or on localhost.
In the case of GCP, you don't usually run the master(s) and the worker nodes on the same VM, so you need to target one of the nodes (VMs) where your pod is listening to get a reply.
To get the list of nodes on your cluster you can simply run:
kubectl get nodes -o=wide
You should see, for example, an internal IP for each of your nodes. Then you can try:
curl http://<Internal IP>:<NodePort>
You can also get the IP details using:
kubectl describe nodes
Then try pinging or connecting from your host using the corresponding IP.
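Putting it together, a hypothetical end-to-end check (my-service is a placeholder for your NodePort service):
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
NODE_PORT=$(kubectl get svc my-service -o jsonpath='{.spec.ports[0].nodePort}')
curl http://$NODE_IP:$NODE_PORT                 # the NodePort is open on every node's internal IP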