Does Kubernetes start new virtual machines on new hosts when autoscaling pods?

I'm reading about Kubernetes, but I haven't understood whether Kubernetes has the ability to start new virtual machines on new hosts and then schedule pods on them, or whether the set of machines it operates on is fixed and must always be running. I'll be using Kubernetes on top of OpenStack. Thanks.

Providing an answer without further details, as there are no comments from the author.
First of all, since you mentioned you will run Kubernetes (k8s) on top of OpenStack, I would recommend reading the Theory of Auto-Scaling article.
The conceptual diagram in that article shows how this looks in OpenStack, and below are the components that can be controlled with auto-scaling:
Compute Host
VM running on a Compute Host
Container running on a Compute Host
Network Attached Storage
Virtual Network Functions
So, to answer your question: yes.
I would also recommend reading about the Heat component and Autoscaling with Heat.
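To make that more concrete, here is a minimal, untested sketch of a Heat auto-scaling group that launches Nova VMs which could serve as Kubernetes workers. The image, flavor, key pair, and network names are placeholders, not values from the question:

```yaml
# Hypothetical Heat template fragment: an auto-scaling group of Nova servers
# that could act as Kubernetes worker nodes. All names/values are placeholders.
heat_template_version: 2016-10-14

resources:
  k8s_workers:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 5
      resource:
        type: OS::Nova::Server
        properties:
          image: ubuntu-k8s-worker      # placeholder image name
          flavor: m1.medium             # placeholder flavor
          key_name: my-keypair          # placeholder key pair
          networks:
            - network: private-net      # placeholder network

  scale_up_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: k8s_workers }
      cooldown: 300
      scaling_adjustment: 1             # add one VM per scale-up event
```

A policy like this only grows or shrinks the pool of VMs; you would still need something (e.g. cloud-init or a configuration tool) to join each new VM to the Kubernetes cluster.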

Related

How to simulate node joins and failures with a local Kubernetes cluster?

I'm developing a Kubernetes scheduler and I want to test its performance when nodes join and leave a cluster, as well as how it handles node failures.
What is the best way to test this locally on Windows 10?
Thanks in advance!
Unfortunately, you can't add nodes to Docker Desktop with Kubernetes enabled. Docker Desktop is single-node only.
I can think of two possible solutions, off the top of my head:
You could use any of the cloud providers. The major ones (AWS, GCP, Azure) have some kind of free tier (capped by usage or time). Adding nodes in those environments is trivial.
Create a local VM for each node. This is a less-than-perfect solution - it is very resource intensive. To make adding nodes easier, you could use kubeadm to provision your cluster; a rough sketch follows.
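As a sketch of that kubeadm flow (the endpoint, token, and hash below are placeholders; kubeadm init prints the exact join command to copy onto each worker VM):

```
# On the VM that will act as the control plane
sudo kubeadm init

# On each additional worker VM - use the exact command printed by 'kubeadm init'
sudo kubeadm join 192.168.56.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```

Simulating a node failure is then just a matter of powering off or pausing one of the worker VMs.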

Can you attach external worker nodes to Managed Kubernetes Control Plane? If yes, how to attach them?

I know many services provide a managed control plane: GKE, DigitalOcean, etc. I want a Kubernetes cluster that uses one of those services, as they provide reliability. But I also want to expand the cluster using a number of physical machines - old machines, Raspberry Pis, etc. - that I have in my local office.
Is it possible? If so, how can I do that?
I found two related questions on Stack Overflow, but both have unsatisfactory answers:
Add external node to GCP Kubernetes Cluster
Here the answer seems to be about using kubeadm init - which is really not about adding a worker node, but about making the control plane HA.
GKE with Aws worker nodes
Says it is possible, but does not mention how to add external worker nodes.

Kubernetes deployment using shared-disk FC HBA options

I have been looking at the available Kubernetes storage add-ons and have been unable to put together something that would work with our setup. The current situation is several nodes, each with an FC HBA controller connected to a single LUN. I realize that some sort of clustered filesystem will need to be implemented, but once that is in place I don't see how I would then connect it to Kubernetes.
We've discussed taking what we have and making an iSCSI or NFS host but in addition to requiring another dedicated machine, we lose all the advantages of having the storage directly available on each node. Is there any way to make use of our current infrastructure?
Details:
4x Kubernetes nodes (1 master) deployed via kubeadm on Ubuntu 16.04, using flannel as the network add-on; each system has the SAN LUN available as a block device (/dev/sdb)
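For reference, Kubernetes does ship an in-tree fc (Fibre Channel) volume plugin that can attach a LUN to a pod. The sketch below uses placeholder WWNs, LUN number, and filesystem, and on its own it does not address concurrent writes from multiple nodes - that is where the clustered filesystem would come in:

```yaml
# Hypothetical pod using the in-tree Fibre Channel volume plugin.
# targetWWNs, lun, and fsType are placeholders, not values from the question.
apiVersion: v1
kind: Pod
metadata:
  name: fc-test
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: fc-vol
      mountPath: /data
  volumes:
  - name: fc-vol
    fc:
      targetWWNs: ["500a0982991b8dc5"]
      lun: 0
      fsType: ext4
      readOnly: false
```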

Kubernetes network performance issue: moving a service from a physical machine to Kubernetes halves RPS

I set up a Kubernetes cluster with 2 powerful physical servers (32 cores + 64 GB memory each). Everything runs very smoothly except for the poor network performance I observed.
As a comparison: I ran my service directly on one of those physical machines (a single instance), with a client machine in the same network subnet calling it. The RPS easily reaches 10k. When I put the exact same service into Kubernetes 1.1.7, one pod (instance) of the service is launched and the service is exposed via externalIP in the service YAML file. With the same client, the RPS drops to 4k. Even after I switched kube-proxy to iptables mode, it doesn't seem to help much.
When I searched around, I found this document: https://www.percona.com/blog/2016/02/05/measuring-docker-cpu-network-overhead/
It seems that Docker port forwarding is the network bottleneck, while other Docker network modes - such as --net=host, bridge networking, or containers sharing a network namespace - don't have such a performance drop. Is the Kubernetes team already aware of this network performance drop, given that the Docker containers are launched and managed by Kubernetes? Is there any way to tune Kubernetes to use a different Docker network mode?
You can configure Kubernetes networking in a number of different ways when configuring the cluster, and a few different ways on a per-pod basis. If you want to try verifying whether the docker networking arrangement is the problem, set hostNetwork to true in your pod specification and give it another try (example here). This is the equivalent of the docker --net=host setting.
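A minimal sketch of such a pod spec, with a placeholder name and image:

```yaml
# Hypothetical pod running directly in the node's network namespace,
# equivalent to docker run --net=host. Name and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: hostnet-test
spec:
  hostNetwork: true
  containers:
  - name: app
    image: my-service:latest   # placeholder image
    ports:
    - containerPort: 8080      # reachable directly on the node's IP
```

If the RPS gap disappears with this setting, that points at the Docker/kube-proxy forwarding path rather than Kubernetes scheduling overhead.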

Does Kubernetes provision new VMs for pods on my cloud platform?

I'm currently learning about Kubernetes and still trying to figure it out. I get the general use of it, but I think there are still plenty of things I'm missing; here's one of them. If I run Kubernetes on a public cloud, like GCE or AWS, will Kubernetes spin up new VMs by itself in order to provide more compute for new pods that might be needed? Or will it only use a fixed set of VMs that were pre-configured as the compute pool? I heard Brendan say, in his talk at CoreOS Fest, that Kubernetes sees the VMs as a "sea of compute" and the user doesn't have to worry about which VM is running which pod. I'm interested to know where that pool of compute comes from: is it configured when setting up Kubernetes, or will it scale by itself and create new machines as needed?
I hope I managed to be coherent.
Thanks!
Kubernetes supports scaling, but not auto-scaling. The addition and removal of pods in a Kubernetes cluster is handled by replication controllers. The size of a replication controller can be changed by updating its replicas field. This can be done in a couple of ways:
Using kubectl, you can use the scale command.
Using the Kubernetes API, you can update your config with a new value in the replicas field.
Kubernetes has been designed for auto-scaling to be handled by an external auto-scaler. This is discussed in responsibilities of the replication controller in the Kubernetes docs.
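For example, manual scaling with kubectl typically looks like this (the replication controller name is hypothetical):

```
# Scale a replication controller named "my-service" to 5 replicas
kubectl scale rc my-service --replicas=5
```

The API route has the same effect: update the replication controller object with a new spec.replicas value. Note that this scales pods across the existing nodes; adding or removing the underlying VMs is left to the cloud platform or an external auto-scaler, as mentioned above.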