Deploying K8S cluster without default worker pool in IBM Cloud - kubernetes

Good day to you.
I am implementing VPC and K8S modules for Terraform to deploy a complete virtual datacenter including compute resources in the IBM managed cloud. I would like to have full control of the worker pools attributes, like
name
flavor
zone
size
and therefore I would like to delete the default worker pool. This should ideally happen during the deployment by terraform.
Does anyone know, whether it is possible?
I tried setting the worker count to zero and defining a specific worker pool, but this creates a cluster with two worker pools and one worker in the default pool.
Best regards.
Jan

@Jan-Hendrik Palic unfortunately, the IBM Cloud Kubernetes Service API does not support this scenario at the moment. Because Terraform uses that API, there is currently no way to create a cluster without the default worker pool.
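As a workaround, a common pattern is to keep the default pool at the minimum size and attach fully controlled pools alongside it. The sketch below assumes the IBM Cloud Terraform provider's `ibm_container_vpc_cluster` and `ibm_container_vpc_worker_pool` resources; all names, flavors, and zone values are illustrative placeholders, not taken from the question.

```hcl
# Sketch only: resource names and attribute values are hypothetical.
# The default pool cannot be removed, so it is kept at the minimum size.
resource "ibm_container_vpc_cluster" "cluster" {
  name         = "my-vdc-cluster"   # placeholder name
  vpc_id       = ibm_is_vpc.vpc.id
  flavor       = "bx2.4x16"
  worker_count = 1                  # minimum for the default pool
  zones {
    subnet_id = ibm_is_subnet.zone1.id
    name      = "eu-de-1"
  }
}

# Additional pool with full control over name, flavor, zone, and size
resource "ibm_container_vpc_worker_pool" "custom" {
  cluster          = ibm_container_vpc_cluster.cluster.id
  worker_pool_name = "custom-pool"
  flavor           = "bx2.8x32"
  worker_count     = 3
  zones {
    subnet_id = ibm_is_subnet.zone1.id
    name      = "eu-de-1"
  }
}
```

Workloads can then be steered onto the custom pool with node selectors or taints, leaving the default pool's single worker idle.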

Related

Kubernetes multi-cloud Set up

Is it possible to have the Kubernetes master as an on-premise instance and another node on Google, for example, or is it required for all nodes to be on the same network?
If I understand correctly, the only blocking factor one has to take care of is opening the right ports for the master and workers.
Yes, it's possible, but you would have to use VM instances on GCP rather than the managed offering.
If the on-premise Kubernetes master can communicate with the VM instances on GCP, there is no problem.
But how would you handle HA?
I wouldn't recommend this architecture.

How to create a multi-master cluster in Azure

I need to create an Azure Kubernetes Service cluster with 3 master nodes. So far I have worked with single-master clusters; now I need a multi-master cluster for production environments.
Is there a way to create an AKS cluster with multiple control planes? Thanks in advance.
As Soundarya mentioned in the comment, the solution can be found here:
Since you are asking about AKS (the managed service from Azure) with HA-enabled clusters, you already have more than one master running. As AKS is a managed offering, you will not have visibility into or control over the masters.
Can I get a way to create an AKS with multiple control planes?
For this, you can check the AKS Uptime SLA: the Uptime SLA guarantees 99.95% availability of the Kubernetes API server endpoint for clusters.
Please check this document for more details.
If you are using AKS Engine (unmanaged service), then you can specify the number of masters. Please refer to this document for more details.
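With AKS Engine, the master count is set in the cluster definition (the "apimodel"). The fragment below is a minimal, illustrative sketch; the DNS prefix and VM size are placeholders, and the full definition requires additional profiles (agent pools, Linux profile, service principal) omitted here for brevity.

```json
{
  "apiVersion": "vlabs",
  "properties": {
    "masterProfile": {
      "count": 3,
      "dnsPrefix": "my-cluster",
      "vmSize": "Standard_D2_v3"
    }
  }
}
```

Passing this file to `aks-engine generate` produces ARM templates for a three-master cluster.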

How to add remote vm instance as worker node in kubernetes cluster

I'm new to kubernetes and trying to explore the new things in it. So, my question is
Suppose I have an existing kubernetes cluster with 1 master node and 1 worker node. Consider this setup is on AWS. Now I have 1 more VM instance available on Oracle Cloud Platform, and I want to configure that VM as a worker node and attach it to the existing cluster.
So, is it possible to do so? Does anybody have any suggestions regarding this?
I would instead divide your clusters up based on region (unless you have a good VPN between your Oracle and AWS infrastructure).
You can then run applications across clusters. If you absolutely must have one cluster that is geographically separated, I would create a master (etcd host) in each region that you have a worker node in.
Worker-to-master communication is very critical for a Kubernetes cluster. Adding nodes from on-prem to a cloud provider, or from a different cloud provider, will cause lots of issues from a network perspective.
A VPN connection between AWS and Oracle Cloud is needed, and every request from the worker node (probably) has to cross an ocean to reach the master node.
EDIT: From the Kubernetes docs: clusters cannot span clouds or regions (this functionality will require full federation support).
https://kubernetes.io/docs/setup/best-practices/multiple-zones/
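If you do decide to try it anyway, and the cluster was bootstrapped with kubeadm, the mechanics of joining a remote VM look like the sketch below. This assumes the API server is reachable from the Oracle Cloud VM (public IP or VPN) and that the required ports (6443 for the API server, plus the CNI ports) are open; the address, token, and hash are placeholders.

```shell
# On the AWS master: print a join command with a fresh bootstrap token.
kubeadm token create --print-join-command

# On the Oracle Cloud VM: run the printed command, which looks like
# (all values below are placeholders, not real):
kubeadm join <master-public-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```

Even when the join succeeds, pod networking and latency across providers remain the hard part, as the answers above point out.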

Resizing instance groups by schedule

I have a kubernetes cluster that contains two node pools. My task is to automate resizing the node pools to 0 nodes on weekends to save money.
I know that I can stop compute instances on a standard schedule.
But I can't stop instances that are members of instance pools; I can only resize the pool to 0. How can I do that on a gcloud schedule?
Cloud Scheduler won't allow you to resize the node pool directly. You can instead use Cloud Scheduler together with Cloud Functions to call the Container API and resize the node pool. There is an example in the Google public docs that does something similar for a Compute Engine instance; you'll have to convert the function call to use the Container API instead.
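A minimal sketch of such a function, assuming the `google-cloud-container` client library and placeholder project, zone, cluster, and pool names (all hypothetical), might look like this:

```python
# Hedged sketch of an HTTP-triggered Cloud Function that resizes a
# GKE node pool to 0. Project, location, cluster, and pool names are
# placeholders; requires the google-cloud-container client library
# and appropriate IAM permissions on the function's service account.
from google.cloud import container_v1

def resize_node_pool(request):
    client = container_v1.ClusterManagerClient()
    name = ("projects/my-project/locations/us-central1-a/"
            "clusters/my-cluster/nodePools/my-pool")
    client.set_node_pool_size(request={"name": name, "node_count": 0})
    return "resize requested"
```

Two Cloud Scheduler jobs (e.g. cron `0 0 * * 6` to scale down and `0 6 * * 1` to scale back up, with the desired count passed in the request) would complete the weekend automation.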
Here are some possible solutions:
Use GKE to manage your cluster, so you can resize the cluster or migrate to a different machine size.
Manage your own kubernetes cluster: if the nodes run in a Compute Engine instance group, you can resize the instance group directly without GKE's help.
If you want automation, you can use Jenkins or Airflow to schedule the resizing jobs.
Hope this can help you.

Does Kubernetes provision new VMs for pods on my cloud platform?

I'm currently learning about Kubernetes and still trying to figure it out. I get the general use of it, but I think there are still plenty of things I'm missing; here's one of them. If I want to run Kubernetes on my public cloud, like GCE or AWS, will Kubernetes spin up new VMs by itself in order to provide more compute for new pods that might be needed? Or will it only use a fixed set of VMs that were pre-configured as the compute pool? I heard Brendan say, in his talk at CoreOS Fest, that Kubernetes sees the VMs as a "sea of compute" and the user doesn't have to worry about which VM is running which pod. I'm interested to know where that pool of compute comes from: is it configured when setting up Kubernetes, or will it scale by itself and create new machines as needed?
I hope I managed to be coherent.
Thanks!
Kubernetes supports scaling, but not auto-scaling, out of the box. Note that scaling adds and removes pods, not VMs: the VMs are the nodes that pods run on, and Kubernetes itself does not provision them. The addition and removal of pods in a Kubernetes cluster is performed by replication controllers. The size of a replication controller can be changed by updating the replicas field. This can be done in a couple of ways:
Using kubectl, you can use the scale command.
Using the Kubernetes API, you can update your config with a new value in the replicas field.
Kubernetes was designed for auto-scaling to be handled by an external auto-scaler. This is discussed under the responsibilities of the replication controller in the Kubernetes docs.
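The two scaling methods above can be sketched as follows; the replication controller name `frontend` is illustrative, not from the question.

```shell
# Method 1: the kubectl scale command
kubectl scale rc frontend --replicas=5

# Method 2: update the replicas field through the API
# (kubectl patch sends the change to the API server for you)
kubectl patch rc frontend -p '{"spec":{"replicas":5}}'
```

Both end up setting the same `spec.replicas` field; the replication controller then creates or deletes pods until the observed count matches it.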