Can somebody give me a starting point for installing Kubernetes on the Oracle Linux platform? I'd like to start comparing options for orchestrating Docker containers.
Regards
Walter
You can use kubeadm for the cluster setup. I haven't worked with Oracle Linux, but as long as it supports rpm and yum, you can install the Kubernetes software.
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
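As a rough sketch of the rpm-based install (the repo URL and package names below follow the upstream kubeadm install guide and are assumptions; Oracle Linux may ship its own Kubernetes packages instead):

# Add the upstream Kubernetes yum repo (gpgcheck disabled only to keep the sketch short; see the kubeadm docs for the signing keys)
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
# Install the components and start the kubelet; kubeadm init then bootstraps the control plane
sudo yum install -y kubelet kubeadm kubectl
sudo systemctl enable --now kubelet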
You can use this guide from Bitnami to prepare a Kubernetes cluster in Oracle Cloud: https://docs.bitnami.com/kubernetes/how-to/set-kubernetes-cluster-oracle/
The guide is meant for the Ubuntu image, but as @sfgroups said, with kubeadm the process should be similar.
I have established a kubernetes cluster on Oracle Linux 7.4 using Oracle VirtualBox and vagrant.
The GitHub repo can be found at https://github.com/bjarteb/ol-kubeadm
You need an Oracle account to follow along (it's free).
Oracle® Container Services for use with Kubernetes
vagrant up && vagrant ssh m - and you are ready for k8s!
[vagrant@m ~]$ kubectl get nodes -owide
NAME  STATUS  ROLES   AGE  VERSION           EXTERNAL-IP  OS-IMAGE                 KERNEL-VERSION                  CONTAINER-RUNTIME
m     Ready   master  1h   v1.8.4+2.0.1.el7  <none>       Oracle Linux Server 7.4  4.1.12-112.14.13.el7uek.x86_64  docker://17.12.0-ol
w1    Ready   <none>  1h   v1.8.4+2.0.1.el7  <none>       Oracle Linux Server 7.4  4.1.12-112.14.13.el7uek.x86_64  docker://17.12.0-ol
w2    Ready   <none>  57m  v1.8.4+2.0.1.el7  <none>       Oracle Linux Server 7.4  4.1.12-112.14.13.el7uek.x86_64  docker://17.12.0-ol
I have installed etcd and Kubernetes on CentOS, and now I want to install kube-apiserver. I installed kube-apiserver via snap:
sudo yum install epel-release            # snapd is packaged in the EPEL repository
sudo yum install snapd
sudo systemctl enable --now snapd.socket
sudo ln -s /var/lib/snapd/snap /snap     # enable classic snap support
sudo snap install kube-apiserver
I started kube-apiserver following the guide at this link.
Unfortunately, it failed with the error: etcd certificate file not found in /etc/kubernetes/apiserver/apiserver.pem. But that certificate file does exist, so how can I run kube-apiserver successfully?
I don't know the reason for your failure, but I suggest installing Kubernetes with kubeadm; it's a great k8s tool. If you install k8s with kubeadm, kube-apiserver will run as a k8s pod. The guide to installing kubeadm is at this link.
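As a minimal sketch of that bootstrap (the pod-network CIDR below assumes Flannel, which the output further down also shows; use whatever your CNI plugin expects):

sudo kubeadm init --pod-network-cidr=10.244.0.0/16   # 10.244.0.0/16 is Flannel's default CIDR
# Make kubectl usable for your regular user, as kubeadm's own output suggests
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config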
I ran the command kubectl get pods -A:
[karl@centos-linux ~]$ kubectl get pods -A
NAMESPACE     NAME                                          READY  STATUS   RESTARTS  AGE
kube-system   coredns-66bff467f8-64pt6                      1/1    Running  6         4d18h
kube-system   coredns-66bff467f8-xpnsr                      1/1    Running  6         4d18h
kube-system   etcd-centos-linux.shared                      1/1    Running  6         4d18h
kube-system   kube-apiserver-centos-linux.shared            1/1    Running  6         4d18h
kube-system   kube-controller-manager-centos-linux.shared   1/1    Running  6         4d18h
kube-system   kube-flannel-ds-amd64-48stf                   1/1    Running  8         4d18h
kube-system   kube-proxy-9w8gh                              1/1    Running  6         4d18h
kube-system   kube-scheduler-centos-linux.shared            1/1    Running  6         4d18h
kube-apiserver-centos-linux.shared is the kube-apiserver pod; it was installed successfully.
I suggest using a standard tool such as kubeadm to install Kubernetes on CentOS. kubeadm init will generate the necessary certificates and install all of the Kubernetes control plane components, including the Kubernetes API server.
Following this guide you should be able to install a single-control-plane Kubernetes cluster.
Kubeadm also supports clusters with multiple control-plane nodes, as well as clusters with completely separate etcd nodes.
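As a hedged sketch, a multi-control-plane cluster is typically bootstrapped by pointing kubeadm at a shared endpoint (the load balancer address here is a placeholder):

sudo kubeadm init --control-plane-endpoint "lb.example.com:6443" --upload-certs
# On each additional control-plane node, run the join command printed by init,
# adding --control-plane so it joins as a control-plane member rather than a worker.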
I want to set up a Kubernetes cluster locally with 1 master node and 2 worker nodes. I have managed to do that, but I am not able to access pods or see the logs of a specific pod because the internal IP address is the same for all nodes.
vagrant@k8s-head:~$ kubectl get nodes -o wide
NAME        STATUS  ROLES   AGE    VERSION  INTERNAL-IP  EXTERNAL-IP  OS-IMAGE            KERNEL-VERSION     CONTAINER-RUNTIME
k8s-head    Ready   master  5m53s  v1.15.0  10.0.2.15    <none>       Ubuntu 16.04.6 LTS  4.4.0-151-generic  docker://18.6.2
k8s-node-1  Ready   <none>  4m7s   v1.15.0  10.0.2.15    <none>       Ubuntu 16.04.6 LTS  4.4.0-151-generic  docker://18.6.2
k8s-node-2  Ready   <none>  2m28s  v1.15.0  10.0.2.15    <none>       Ubuntu 16.04.6 LTS  4.4.0-151-generic  docker://18.6.2
In order to resolve this problem, I found that the following things should be done:
- add KUBELET_EXTRA_ARGS=--node-ip=<IP_ADDRESS> to the /etc/default/kubelet file
- restart kubelet by running: sudo systemctl daemon-reload && sudo systemctl restart kubelet
The problem is that the /etc/default/kubelet file is missing at this location, so I am not able to add this additional parameter. I tried creating the file manually, but it does not seem to work: after I restart kubelet, the IP address is still the same.
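For reference, a rough sketch of the file I tried to create (the IP here is just a placeholder; each node would get its own host-only address):

# /etc/default/kubelet
KUBELET_EXTRA_ARGS=--node-ip=192.168.56.10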
Has anyone faced this issue with the missing /etc/default/kubelet file, or is there another, easier way to set different internal IP addresses?
It is normal to have the same IP on every node of a Kubernetes cluster running in VirtualBox. The reason is that this is a NAT network, not intended for communication between virtual machines; the 10.0.2.15 IP is NATed only when accessing the outside world.
The following diagram shows the networks that are created in a Kubernetes cluster on top of VirtualBox. As you can see, every node has the same IP on the NAT network but different IPs on the other networks:
In order to access the pods you can use a NodePort and the HOST-ONLY network.
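A minimal sketch of that approach (the deployment name, host-only IP and assigned port are placeholders):

# Expose a deployment on a NodePort; the assigned port appears in `kubectl get svc`
kubectl expose deployment my-app --type=NodePort --port=80
# Then reach it from the host through any node's host-only IP
curl http://192.168.56.10:30080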
See a full example and download the code at Building a Kubernetes Cluster with Vagrant and Ansible (without Minikube). It is a tutorial that explains how to launch a Kubernetes cluster using Ansible playbooks, Vagrant and VirtualBox.
It uses Calico for networking and includes another tutorial for installing Istio if you need a service mesh.
I am running into a weird issue while deploying a Kubernetes cluster.
Some background details:
I have deployed a Kubernetes cluster using kubeadm 1.10.5 on my on-premises hardware with the Weave network plugin.
Master : Centos 7.3
Node1 : Centos 7.3
Node2 : Centos 7.3
Node3 : Centos 7.3
Node4 : RHEL 7.2 Kernel `3.10.0-327.el7.x86_64`
Node5 : RHEL 7.3 Kernel `3.10.0-514.el7.x86_64`
Now everything works fine except on Node4.
On node4 I have successfully deployed all of the kube-system pods and my application pods, but the one issue I am facing is that any pod launched on node4 is unable to access or ping any external IP address.
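For what it's worth, a minimal sketch of how the symptom shows up (the pod name is hypothetical):

# From a pod that landed on node4, pings to external addresses fail
kubectl exec -it my-pod-on-node4 -- ping -c 3 8.8.8.8
# Compare with running the same ping directly on node4 itself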
Dear all, I have deployed a sample service such as this:
kubectl get svc
NAME          TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)         AGE
kubernetes    ClusterIP  10.233.0.1    <none>       443/TCP         1d
mynodejsapp   NodePort   10.233.2.225  <none>       3000:31209/TCP  43s
May I ask how I can access the app mynodejsapp on its cluster IP?
When I run kubectl get nodes -o wide, this is what I see:
$ kubectl get nodes -o wide
NAME                   STATUS  ROLES   AGE  VERSION           EXTERNAL-IP  OS-IMAGE                 KERNEL-VERSION                  CONTAINER-RUNTIME
controlplane-node-001  Ready   master  2d   v1.9.1+2.1.8.el7  <none>       Oracle Linux Server 7.2  4.1.12-112.14.13.el7uek.x86_64  docker://17.3.1
controlplane-node-002  Ready   master  2d   v1.9.1+2.1.8.el7  <none>       Oracle Linux Server 7.2  4.1.12-112.14.13.el7uek.x86_64  docker://17.3.1
controlplane-node-003  Ready   master  2d   v1.9.1+2.1.8.el7  <none>       Oracle Linux Server 7.2  4.1.12-112.14.13.el7uek.x86_64  docker://17.3.1
default-node-001       Ready   node    2d   v1.9.1+2.1.8.el7  <none>       Oracle Linux Server 7.2  4.1.12-112.14.13.el7uek.x86_64  docker://17.3.1
default-node-002       Ready   node    2d   v1.9.1+2.1.8.el7  <none>       Oracle Linux Server 7.2  4.1.12-112.14.13.el7uek.x86_64  docker://17.3.1
Any help is appreciated. Thanks.
May I ask how I can access the app mynodejsapp on its cluster IP?
Now, for a direct answer to your question with regard to your service overview:
To access the mynodejsapp service from outside the cluster, you need to target the IP of any of the nodes on port 31209 (kube-proxy will route the traffic to the mynodejsapp service for you).
To access the mynodejsapp service from within the cluster, meaning from another pod running on that same cluster, you need to target the ClusterIP 10.233.2.225 on port 3000 (or, with kube-dns running, you can use the service name directly: mynodejsapp:3000).
As detailed in the official documentation, the ClusterIP is tied to the service, and the service name is in turn resolved through kube-dns to the ClusterIP. In a nutshell, you can use the ClusterIP only from within pods running on that cluster (the same goes for the service name).
As for exposing services externally through a NodePort, you can also find more info in the official documentation.
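A minimal sketch of both access paths (the node IP is a placeholder, and the busybox image is just a throwaway client):

# From outside the cluster: any node's IP plus the NodePort
curl http://<node-ip>:31209
# From inside the cluster: the ClusterIP, or the service name via kube-dns
kubectl run test --rm -it --image=busybox --restart=Never -- wget -qO- http://mynodejsapp:3000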
I am comparatively new to Kubernetes, but I have successfully created many clusters before. Now I am facing an issue after trying to add a node to an already existing cluster. At first kubeadm join seemed to succeed, but even after initializing the pod network only the master became Ready.
root@master# kubectl get nodes
NAME                      STATUS    ROLES   AGE  VERSION
master-virtual-machine    Ready     master  18h  v1.9.0
testnode-virtual-machine  NotReady  <none>  16h  v1.9.0
OS: Ubuntu 16.04
Any help will be appreciated.
Thanks.
Try the following on the slave node, then check the node status again on the master.
sudo swapoff -a   # kubelet requires swap to be disabled
exit
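If that brings the node to Ready, a reasonable follow-up (an assumption, not part of the original answer) is to make the change persistent so it survives reboots:

# Comment out the swap entry in /etc/fstab so swap stays off across reboots
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab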