Confluent Platform setup on minikube multinode cluster - kubernetes

I am trying to set up Confluent Platform on a minikube multi-node cluster, following the Confluent documentation.
Below are the steps I followed:
Created a minikube multi-node cluster: minikube start --nodes 3
Verified the nodes: kubectl get nodes
Created a namespace and switched to it: kubectl create namespace confluent && kubectl config set-context --current --namespace confluent
Installed the Confluent operator: helm upgrade --install confluent-operator confluentinc/confluent-for-kubernetes
Pulled the images below into minikube with minikube ssh followed by docker pull <image-name> (see the loop sketch after the image list):
docker.io/confluentinc/cp-server:7.3.0
docker.io/confluentinc/cp-server-connect:7.3.0
docker.io/confluentinc/cp-schema-registry:7.3.0
docker.io/confluentinc/cp-ksqldb-server:7.3.0
docker.io/confluentinc/cp-kafka-rest:7.3.0
docker.io/confluentinc/cp-enterprise-control-center:7.3.0
docker.io/confluentinc/confluent-operator:0.581.34
docker.io/confluentinc/confluent-init-container:2.5.0
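On a multi-node cluster the images have to be present on every node, not just the primary one. A minimal loop sketch for that, assuming the default node names minikube, minikube-m02 and minikube-m03 (check yours with kubectl get nodes):
$ for node in minikube minikube-m02 minikube-m03; do
    for image in cp-server:7.3.0 cp-server-connect:7.3.0 cp-schema-registry:7.3.0 \
                 cp-ksqldb-server:7.3.0 cp-kafka-rest:7.3.0 \
                 cp-enterprise-control-center:7.3.0 \
                 confluent-operator:0.581.34 confluent-init-container:2.5.0; do
      # pull each image into the Docker daemon of the given node
      minikube ssh -n "$node" "docker pull docker.io/confluentinc/$image"
    done
  done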
Initiated the setup: kubectl apply -f https://raw.githubusercontent.com/confluentinc/confluent-kubernetes-examples/master/quickstart-deploy/confluent-platform.yaml
After initiating the setup, it starts by creating the pods for ZooKeeper and Connect.
These pods never succeed and keep restarting with Error and CrashLoopBackOff status.
I also looked into the logs of these pods, but no error or exception is logged there.
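For CrashLoopBackOff pods whose current log is empty, the previous container's log and the pod events usually say more than kubectl logs on the running container. A hedged debugging sketch (zookeeper-0 is an example pod name; list yours with the first command):
$ kubectl get pods -n confluent
$ kubectl describe pod zookeeper-0 -n confluent      # check the Events section at the bottom
$ kubectl logs zookeeper-0 -n confluent --previous   # log of the last crashed container
$ kubectl get events -n confluent --sort-by=.metadata.creationTimestamp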

Related

How to restore an accidentally deleted kube-proxy DaemonSet in a Kubernetes cluster?

I accidentally deleted the kube-proxy DaemonSet with the command kubectl delete -n kube-system daemonset kube-proxy, which should run kube-proxy pods in my cluster. What is the best way to restore it?
Kubernetes allows you to reinstall kube-proxy by running the following command, which installs the kube-proxy addon components via the API server:
$ kubeadm init phase addon kube-proxy --kubeconfig ~/.kube/config --apiserver-advertise-address <ip-address>
This will generate output like:
[addons] Applied essential addon: kube-proxy
The --apiserver-advertise-address flag takes the IP address the API server will advertise it's listening on; if not set, the default network interface is used.
Hence kube-proxy will be reinstalled in the cluster: kubeadm creates the DaemonSet and launches the pods.
The kube-proxy DaemonSet is created at cluster-creation time, so if kubeadm is not an option you need to write your own DaemonSet manifest, unless you have a backup to restore it from.
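One hedged way to obtain such a backup is to export the manifests from a healthy cluster running the same Kubernetes version (kube-proxy and kube-system are the kubeadm defaults):
$ kubectl get daemonset kube-proxy -n kube-system -o yaml > kube-proxy-daemonset.yaml
$ kubectl get configmap kube-proxy -n kube-system -o yaml > kube-proxy-configmap.yaml
# then, on the broken cluster, apply the config map first and the DaemonSet second
$ kubectl apply -f kube-proxy-configmap.yaml
$ kubectl apply -f kube-proxy-daemonset.yaml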

Failed to install metrics-server on minikube

I am trying to install metrics-server on my Kubernetes cluster, but it never reaches the READY state. I installed metrics-server like this:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
After installing I tried some commands, kubectl top pods and kubectl top nodes, but I got an error:
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)
The metrics server fails to start.
Enable the metrics-server addon in the minikube cluster. Try the following command:
minikube addons enable metrics-server
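To verify the addon came up, and as a commonly used workaround when the upstream components.yaml is applied on minikube instead of the addon (the --kubelet-insecure-tls flag is an assumption about the cause: it only helps when metrics-server refuses the kubelet's self-signed certificate):
$ kubectl -n kube-system get deployment metrics-server   # wait until READY shows 1/1
$ kubectl top nodes
# only if the upstream manifest was applied instead of the addon:
$ kubectl -n kube-system patch deployment metrics-server --type=json \
    -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'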

Kubectl port-forward not working with IBM Cluster

When I do a Kubernetes port-forward against my IBM Cloud cluster I get connection refused. I have access to other clusters such as Azure Kubernetes Service, and kubectl port-forward works fine there. Also, when I fetch a pod's log with kubectl logs {pod_name} I get a TLS handshake error, but other commands like kubectl get pod and kubectl describe pod work fine.
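Both port-forward and logs are tunnelled by the API server to the kubelet on the node, while get pod and describe pod only read from the API server, so this pattern usually points at the cluster's kubelet TLS setup or stale client credentials rather than at kubectl itself. A hedged first step, assuming the IBM Cloud CLI with the Kubernetes Service plugin is installed (the cluster name is a placeholder):
$ ibmcloud ks cluster config --cluster <my-cluster>   # refresh kubeconfig and certificates
$ kubectl port-forward pod/{pod_name} 8080:8080       # retry with the fresh credentials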

Firewall/Port requirements for Helm 2.16

We are installing Helm v2.16 on Kubernetes v1.14 in offline mode. We downloaded the tiller Docker image and loaded it on the server where we were installing Helm. Our constraints:
i. No access to the Internet from the application servers
ii. Limited port connectivity between the Kubernetes master and worker nodes (no "allow all" connectivity between the servers). The ports that are open between the application servers are:
a. 10250-10260
b. 6443
c. 443
d. 2379-2380
e. NodePort range 30000-32767
f. 44134-44135
We downloaded Helm 2.16 and installed it following the steps below. The tiller pod failed to come up until we allowed ALL communication between the Kubernetes master and worker nodes, which means there are specific firewall requirements for Helm/tiller to function in a Kubernetes cluster. Could someone please share the port/firewall details, since we do not want to open ALL traffic even between the nodes of a cluster (we would rather open specific ports)? The installation steps, with a connectivity-check sketch after them:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --skip-refresh
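For context: in Helm v2 the client reaches tiller through a port-forward tunnelled over the API server (6443/443), and tiller itself listens on 44134 (gRPC) and 44135 (probes) inside its pod, which matches the f. range above. A hedged sketch to test whether tiller is reachable at all, bypassing the automatic port-forward:
$ kubectl -n kube-system get pods -l app=helm,name=tiller        # is the tiller pod Running?
$ kubectl -n kube-system port-forward deploy/tiller-deploy 44134:44134 &
$ HELM_HOST=127.0.0.1:44134 helm version                         # talk to tiller via the tunnel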

How to remove Kubernetes with all its dependencies from CentOS 7

I want to uninstall Kubernetes from CentOS 7 along with all of its dependencies and files, such as:
kube-apiserver
kube-controller-manager
kubectl
kubelet
kube-proxy
kube-scheduler
Check this thread, or else these steps should help.
First clean up the nodes (and with them the pods) registered in your k8s cluster:
$ kubectl delete node --all
Then remove the data volumes, and back them up from your host system if they are still needed. Finally, you can stop all the k8s services using the script below:
$ for service in kube-apiserver kube-controller-manager kubelet kube-proxy kube-scheduler; do
    systemctl stop $service   # kubectl is a client binary, not a service, so it is omitted here
done
$ yum -y remove kubernetes   # if it was installed as a yum package
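If the cluster was set up with kubeadm (an assumption; the question does not say how it was installed), the supported teardown is roughly:
$ kubeadm reset                                    # undo kubeadm init/join on this node
$ yum -y remove kubeadm kubectl kubelet kubernetes-cni
$ rm -rf /etc/kubernetes ~/.kube /var/lib/etcd     # leftover configuration and state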