Is it possible to add existing GCP instances to a Kubernetes cluster?
The inputs I can see while creating the cluster in the graphical console are:
Cluster Name,
Location,
Zone,
Cluster Version,
Machine type,
Size.
On the command line:
gcloud container clusters create cluster-name --num-nodes=4
I have 10 running instances.
I need to create the Kubernetes cluster with these already existing running instances.
On your instance, run the following:
kubelet --api_servers=http://<API_SERVER_IP>:8080 --v=2 --enable_server --allow-privileged
kube-proxy --master=http://<API_SERVER_IP>:8080 --v=2
This will connect your slave node to your existing cluster. It's actually surprisingly simple.
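If the node registers successfully, you should be able to see it from the master, for example with (assuming kubectl is already configured against that API server):
kubectl get nodes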
We have an application that requires a different container runtime than the one currently used in our Kubernetes cluster. Our cluster is deployed via kubeadm on bare metal. The K8s version is 1.21.3.
Can we install a different container runtime or version on a single worker node?
I just wanted to verify this, despite knowing that the k8s design is modular enough (CRI, CNI, etc.).
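Runtimes are chosen per node through the kubelet's CRI socket, so in principle a single worker can run a different runtime than the rest of the cluster. A rough sketch of joining such a worker with kubeadm (the containerd socket path, endpoint, token, and hash are placeholders, not from the question):
# on the worker only: install the alternative runtime (e.g. containerd), then join
# the existing cluster pointing kubeadm at that runtime's CRI socket
kubeadm join <CONTROL_PLANE_ENDPOINT>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --cri-socket unix:///run/containerd/containerd.sock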
I have a kOps cluster running on AWS. I would like to extend the service node port range of that cluster without restarting the cluster.
Is it possible? If yes, how can it be done?
You can change it, but it does require a reboot of the control plane nodes (though not the worker nodes). This is due to the "immutable" nature of the kOps configuration.
To change the range, add this to your cluster spec:
spec:
  kubeAPIServer:
    serviceNodePortRange: <range>
See the cluster spec for more information.
Make sure you do not conflict with the ports kOps requires.
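As a rough sketch of the usual kOps workflow for applying a spec change like this (the cluster name and state store are placeholders):
# edit the cluster spec and add the serviceNodePortRange setting
kops edit cluster <cluster-name> --state s3://<state-store>
# preview and apply the change
kops update cluster <cluster-name> --state s3://<state-store> --yes
# roll the control plane nodes so the new apiserver flag takes effect
kops rolling-update cluster <cluster-name> --state s3://<state-store> --yes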
How do I run two Kubernetes masters without worker nodes, with one k8s master active and the other working as a slave?
You can find two solutions for Creating Highly Available clusters with kubeadm here.
It describes the steps to create two kinds of clusters (a rough kubeadm sketch for the first follows the list):
Stacked control plane and etcd nodes
External etcd nodes
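For the stacked control plane topology, the kubeadm flow looks roughly like this (the load balancer endpoint, token, hash, and certificate key are placeholders):
# on the first control plane node
kubeadm init --control-plane-endpoint "<LOAD_BALANCER_DNS>:6443" --upload-certs
# on the second control plane node, using the join command printed by init
kubeadm join <LOAD_BALANCER_DNS>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>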
Additional resources:
Install and configure a multi-master Kubernetes cluster with kubeadm - HAProxy as a load balancer
kubernetes-the-hard-way
Hope this helps.
Objective: establish a MongoDB replica set running across two different Kubernetes clusters on different physical hosts.
Host 1: Kubernetes cluster 1: a pod is running with, let's say, a mongo1 instance with replica set name "my-mongo-set". A service, say mongo1-service, is created with the pod running behind it.
Host 2: Kubernetes cluster 2: another pod is running with a mongo2 instance with the same replica set name "my-mongo-set". A service, say mongo2-service, is created with the pod running behind it.
Now, I am unable to set up MongoDB replication from one pod to the other pod.
Host 1: Kubernetes-Cluster-1    Host 2: Kubernetes-Cluster-2
Mongo1-service                  Mongo2-service
Mongo1-pod                      Mongo2-pod
I need pod-to-pod communication between nodes of two different Kubernetes clusters running on different machines.
I am unable to expose the Kubernetes mongo service IP using service type NodePort, LoadBalancer, an ingress controller (Kong), etc.
I am new to Kubernetes. I installed Kubernetes (kubectl, kubeadm, kubelet) through apt-get and then ran the kubeadm init command.
Any suggestions on how to achieve this?
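For illustration only, one common approach is to expose each mongod through a NodePort service and then initiate the replica set using the hosts' routable addresses; the port numbers, labels, and host IPs below are assumptions, not taken from the question:
apiVersion: v1
kind: Service
metadata:
  name: mongo1-service
spec:
  type: NodePort
  selector:
    app: mongo1          # must match the labels on the mongo1 pod
  ports:
  - port: 27017
    targetPort: 27017
    nodePort: 30017      # reachable from outside the cluster as <host-1-IP>:30017
With a matching service on cluster 2 (say nodePort 30018), the replica set can then be initiated from a mongo shell in either pod using the externally reachable addresses:
rs.initiate({
  _id: "my-mongo-set",
  members: [
    { _id: 0, host: "<host-1-IP>:30017" },
    { _id: 1, host: "<host-2-IP>:30018" }
  ]
})
This only works if each host IP is routable from the other host and each mongod is reachable at the address it advertises.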
I don't want to set up another etcd cluster.
How can I access the built-in etcd from a Kubernetes pod?
I suppose I first need to create a service account and use this account to launch the pod.
Then how can the container in this pod discover the URI of the built-in etcd?
Thank you
The etcd instance used by the Kubernetes apiserver is generally treated as an implementation detail of the apiserver and is not designed to be reused by user applications. By default it is installed to only listen for connections on localhost and run on a machine where no user applications are scheduled.
It isn't difficult to run a second etcd instance for your own use. For example, the DNS cluster add-on includes a private instance of etcd that is separate from the etcd used by the apiserver.
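If you do end up running your own instance, a minimal single-node etcd inside the cluster could look roughly like the sketch below; the names and image tag are placeholders, and no persistence or authentication is configured:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-etcd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-etcd
  template:
    metadata:
      labels:
        app: my-etcd
    spec:
      containers:
      - name: etcd
        image: quay.io/coreos/etcd:v3.4.13   # placeholder image tag
        command:
        - etcd
        - --listen-client-urls=http://0.0.0.0:2379
        - --advertise-client-urls=http://my-etcd:2379
        ports:
        - containerPort: 2379
---
apiVersion: v1
kind: Service
metadata:
  name: my-etcd
spec:
  selector:
    app: my-etcd
  ports:
  - port: 2379
    targetPort: 2379
Pods in the same namespace could then reach it at http://my-etcd:2379.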