Does Kubernetes support VM / node provisioning & management?

To my understanding, Kubernetes is a container orchestration service comparable to AWS ECS or Docker Swarm. Yet there are several highly rated questions on Stack Overflow that compare it to Cloud Foundry, which is a platform orchestration service.
This means that Cloud Foundry can take care of the VM layer, updating and provisioning VMs while moving containers to avoid downtime. Given that, the comparison to Kubernetes makes limited sense to me.
Am I misunderstanding something? Does Kubernetes support provisioning and managing the VM layer too?

Yes, you can manage VMs with KubeVirt, as #AbdennourTOUMI pointed out. However, Kubernetes focuses on container orchestration; it also interacts with cloud providers to provision things like load balancers that direct traffic to a cluster.
Cloud Foundry is a PaaS and provides much more out of the box than Kubernetes, which sits at a lower level. Kubernetes can run on top of an IaaS like AWS, together with something like OpenShift.

As for VMs, my answer is YES; you can run VMs as workloads in a k8s cluster.
Indeed, the Red Hat team figured out how to run VMs in a Kubernetes cluster by adding the KubeVirt add-on.
Example from the KubeVirt documentation:
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachine
metadata:
  creationTimestamp: null
  labels:
    kubevirt.io/vm: vm-cirros
  name: vm-cirros
spec:
  running: false
  template:
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/vm: vm-cirros
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: registrydisk
            volumeName: registryvolume
          - disk:
              bus: virtio
            name: cloudinitdisk
            volumeName: cloudinitvolume
        machine:
          type: ""
        resources:
          requests:
            memory: 64M
      terminationGracePeriodSeconds: 0
      volumes:
      - name: registryvolume
        registryDisk:
          image: kubevirt/cirros-registry-disk-demo:latest
      - cloudInitNoCloud:
          userDataBase64: IyEvYmluL3NoCgplY2hvICdwcmludGVkIGZyb20gY2xvdWQtaW5pdCB1c2VyZGF0YScK
        name: cloudinitvolume
Then:
kubectl create -f vm.yaml
virtualmachine "vm-cirros" created
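The manifest above sets running: false, so the VirtualMachine object exists but no VM is actually started. A minimal follow-up sketch, assuming KubeVirt's virtctl CLI is installed alongside the cluster:
virtctl start vm-cirros     # creates a running VirtualMachineInstance from the VM definition
kubectl get vms             # lists the VirtualMachine objects
kubectl get vmis            # lists the running instances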

Related

How to connect VMware storage to Kubernetes built using Rancher 2.8

The cluster nodes are on-prem VMware servers; we used Rancher just to build the k8s cluster.
The build was successful, but when trying to host apps that use PVCs we have problems: dynamic volume provisioning isn't happening and the PVCs are stuck in the Pending state.
The VMware storage class is being used. Our vSphere admins confirmed that the VMs have visibility to the datastores, so ideally it should work.
While configuring the cluster we used the cloud provider credentials according to the Rancher docs.
cloud_provider:
  name: vsphere
  vsphereCloudProvider:
    disk:
      scsicontrollertype: pvscsi
    global:
      datacenters: nxs
      insecure-flag: true
      port: '443'
      soap-roundtrip-count: 0
      user: k8s_volume_svc#vsphere.local
Storage class yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nxs01-k8s-0004
parameters:
  datastore: ds1_K8S_0004
  diskformat: zeroedthick
reclaimPolicy: Delete
PVC yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: arango
  namespace: arango
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: nxs01-k8s-0004
Now I want to understand why my PVCs are stuck in the Pending state. Are there any other steps I have missed?
The Rancher documentation says a Storage Policy has to be given as an input:
https://rancher.com/docs/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere/#creating-a-storageclass
A VMware document refers to it as an optional parameter, and also has a statement at the top saying it doesn't apply to tools that use CSI (Container Storage Interface):
https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/storageclass.html
I found that Rancher is using a CSI driver called rshared.
So is this storage policy mandatory? Is it what's stopping me from provisioning a VMDK file?
I gave the documentation for creating the storage policy to the vSphere admins; they said it is for vSAN, while our datastores are on VMAX. I couldn't understand the difference or find an equivalent doc for VMAX.
It would be a great help if this could be fixed :)
The whole thing came down to the path defined for the storage end: in the cloud config YAML, the path was wrong. The vSphere admins gave us the path where the VMs reside, when they should have given the path where the storage resides.
Once this was corrected, the PVC moved to the Bound state.
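For reference, a rough sketch of where that path lives in the Rancher cloud provider config: the workspace block below is illustrative, and the server, folder, and datastore values are placeholders rather than the asker's real environment:
cloud_provider:
  name: vsphere
  vsphereCloudProvider:
    workspace:
      server: vcenter.example.local        # placeholder vCenter address
      datacenter: nxs
      default-datastore: ds1_K8S_0004      # must point at the datastore, not the VM folder
      folder: k8s-volumes                  # placeholder folder for dynamically created VMDKs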

Is it possible to mount a disk to a GKE pod and a Compute Engine instance?

Is it possible to mount a disk to a GKE pod and a Compute Engine instance at the same time?
I have an Ubuntu disk of 10 GB.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 10G
  accessModes:
  - ReadWriteOnce
  claimRef:
    name: pv-claim-demo
  gcePersistentDisk:
    pdName: pv-test1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim-demo
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10G
deployment.yaml:
spec:
  containers:
  - image: wordpress
    name: wordpress
    ports:
    - containerPort: 80
      name: wordpress
    volumeMounts:
    - name: wordpress-persistent-storage
      mountPath: /app/logs
  volumes:
  - name: wordpress-persistent-storage
    persistentVolumeClaim:
      claimName: pv-claim-demo
The idea is to mount the log files generated by the pod to a disk and access them from a Compute Engine instance.
I cannot use NFS or hostPath to solve the problem. The other challenge is that multiple pods will be writing to the same PV.
The other challenge is that multiple pods will be writing to the same PV.
Yes, this does not work well, unless you have a storage class similar to NFS. The default StorageClass in Google Kubernetes Engine only supports access mode ReadWriteOnce when dynamically provisioned, so only one replica can mount it.
The idea is to mount the log files generated by the pod to a disk and access them from a Compute Engine instance.
This is not a recommended solution for logs when using Kubernetes. An app on Kubernetes should follow the 12-factor principles, and there is a specific item about logs: the app should log to stdout. For apps that do not follow the 12-factor principles, this can be solved with a sidecar that tails the log files and prints them to stdout, as sketched below.
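A minimal sketch of such a sidecar, assuming the app writes to /app/logs/app.log on a shared emptyDir volume (the file name and sidecar image are illustrative):
spec:
  containers:
  - name: wordpress
    image: wordpress
    volumeMounts:
    - name: logs
      mountPath: /app/logs
  - name: log-tailer                  # sidecar that streams the log file to stdout
    image: busybox
    command: ["sh", "-c", "touch /app/logs/app.log; tail -f /app/logs/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /app/logs
  volumes:
  - name: logs
    emptyDir: {}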
Logs printed to stdout are typically forwarded by the platform to a log collection system as a service, so this is not something the app developer needs to be responsible for.
For how logs are handled by the platform in Google Kubernetes Engine, see the Google Cloud operations suite for GKE.
You can't have many writers on a persistent disk. If you set the disk to read-only, many pods can read from it (but not write to it, which doesn't match your use case).
The only solution for this is to use NFS-compliant storage. On Google Cloud, that is the Filestore service. It's designed exactly for your use case, and there is a tutorial for GKE; a sketch of the NFS wiring follows.
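A hedged sketch of wiring a Filestore share in as an NFS-backed volume that supports ReadWriteMany; the IP and share path are placeholders for a hypothetical Filestore instance:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: filestore-logs
spec:
  capacity:
    storage: 1Ti
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.0.0.2        # placeholder: IP of the Filestore instance
    path: /vol1             # placeholder: Filestore share name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filestore-logs-claim
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""      # bind to the pre-created PV above, not a dynamic provisioner
  resources:
    requests:
      storage: 1Ti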
Better to use Google Cloud's operations suite for GKE (formerly known as Stackdriver).
There are two APIs that can be used to access it from GCE:
Cloud Monitoring
Cloud Logging
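As an illustration, a hedged sketch of reading recent container logs from a GCE instance with the gcloud CLI (the cluster name in the filter is a placeholder):
gcloud logging read \
  'resource.type="k8s_container" AND resource.labels.cluster_name="my-cluster"' \
  --limit=10 --format=json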

Methods of Verifying Kubernetes Configuration

I've been working on a small side project to try and learn Kubernetes. I have a relatively simple cluster with two services, an ingress, and working on adding a Redis database now. I'm hosting this cluster in Google Kubernetes Engine (GKE), but using Minikube to run the cluster locally and try everything out before I commit any changes and push them to the prod environment in GKE.
During this project, I have noticed that GKE seems to have some slight differences in how it wants the configuration vs what works in Minikube. I've seen this previously with ingresses and now with persistent volumes.
For example, to run Redis with a persistent volume in GKE, I can use:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chatter-db-deployment
  labels:
    app: chatter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: chatter-db-service
  template:
    metadata:
      labels:
        app: chatter-db-service
    spec:
      containers:
      - name: master
        image: redis
        args: [
          "--save", "3600", "1", "300", "100", "60", "10000",
          "--appendonly", "yes",
        ]
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: chatter-db-storage
          mountPath: /data/
      volumes:
      - name: chatter-db-storage
        gcePersistentDisk:
          pdName: chatter-db-disk
          fsType: ext4
The gcePersistentDisk section at the end refers to a disk I created using gcloud compute disks create. However, this simply won't work in Minikube as I can't create disks that way.
Instead, I need to use:
volumes:
- name: chatter-db-storage
  persistentVolumeClaim:
    claimName: chatter-db-claim
I also need to include separate configuration for a PersistentVolume and a PersistentVolumeClaim.
I can easily get something working in either Minikube OR GKE, but I'm not sure what is the best means of getting a config which works for both. Ideally, I want to have a single k8s.yaml file which deploys this app, and kubectl apply -f k8s.yaml should work for both environments, allowing me to test locally with Minikube and then push to GKE when I'm satisfied.
I understand that there are differences between the two environments and that will probably leak into the config to some extent, but surely there is an effective means of verifying a config before pushing it? What are the best practices for testing a config? My questions mainly come down to:
Is it feasible to have a single Kubernetes config which can work for both GKE and Minikube?
If not, is it feasible to have a mostly shared Kubernetes config, which overrides the GKE and Minikube specific pieces?
How do existing projects solve this particular problem?
Is the best method to simply make a separate dev cluster in GKE and test on that, rather than bothering with Minikube at all?
Yes, you have found some parts of Kubernetes configuration that were not perfect from the beginning. But there are newer solutions.
Storage abstraction
The idea in newer Kubernetes releases is that your application configuration is a Deployment with Volumes that refer to a PersistentVolumeClaim for a StorageClass, while the StorageClass and PersistentVolume belong more to the infrastructure configuration.
See Configure a Pod to Use a PersistentVolume for Storage for how to configure a Persistent Volume for Minikube. For GKE you configure a Persistent Volume with GCEPersistentDisk, or if you want to deploy your app to AWS you may use a Persistent Volume for AWSElasticBlockStore. A sketch of the application-side claim follows.
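As a rough illustration of that split, the application side can carry only a claim and leave storageClassName unset so each cluster's default StorageClass does the provisioning; the claim name below matches the question's example and the size is illustrative, so treat this as a sketch rather than the only way to do it:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: chatter-db-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
# With storageClassName unset, Minikube's default StorageClass provisions a hostPath-backed
# volume while GKE's default StorageClass provisions a GCE persistent disk.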
Ingress and Service abstraction
Service with type LoadBalancer and NodePort in combination with Ingress does not work the same way across cloud providers and Ingress controllers. In addition, Service Mesh implementations like Istio have introduced VirtualService. The plan, as I understand it, is to improve this situation with Ingress v2.

How to save SQL storage data on a running preemptible instance?

I am trying to cut the costs of running a Kubernetes cluster on Google Cloud Platform.
I moved my node pool to preemptible VM instances. I have 1 pod for Postgres and 4 nodes for web apps.
For Postgres, I created a StorageClass to make the data persistent.
Surprisingly (or maybe not), all storage data was erased after a day.
How do I make a specific node in GCP not preemptible?
Or, could you advise what to do in that situation?
I guess I found a solution.
Create a disk on gcloud via:
gcloud compute disks create --size=10GB postgres-disk
gcloud compute disks create --size=[SIZE] [NAME]
Delete any StorageClasses, PVs, PVCs
Configure the deployment file:
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
        role: postgres
    spec:
      containers:
      - name: postgres
        image: postgres
        env:
          ...
        ports:
          ...
        # Especially this part should be configured!
        volumeMounts:
        - name: postgres-persistent-storage
          mountPath: /var/lib/postgresql
      volumes:
      - name: postgres-persistent-storage
        gcePersistentDisk:
          # This GCE PD must already exist.
          pdName: postgres-disk
          fsType: ext4
You can make a specific node not preemptible in a Google Kubernetes Engine cluster, as mentioned in the official documentation.
The steps to set up a cluster with both preemptible and non-preemptible node pools are:
Create a Cluster: In the GCP Console, go to Kubernetes Engine -> Create Cluster, and configure the cluster as you need.
On that configuration page, under Node pools, click on Add node pool. Enter the number of nodes for the default and the new pool.
To make one of the pools preemptible, click on the Advanced edit button under the pool name, check the Enable preemptible nodes (beta) box, and save the changes.
Click on Create.
Then you probably want to schedule specific pods only on non-preemptible nodes. For this, you can use node taints, as sketched below.
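A hedged sketch of that approach: GKE labels preemptible nodes with cloud.google.com/gke-preemptible=true, so you can taint that pool and additionally pin the Postgres pod to non-preemptible nodes with node affinity (the taint value and effect below are one reasonable choice, not the only one):
# Taint all preemptible nodes so only pods tolerating the taint are scheduled there
kubectl taint nodes -l cloud.google.com/gke-preemptible=true \
  cloud.google.com/gke-preemptible=true:NoSchedule

# In the Postgres pod template, keep the pod off preemptible nodes
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: cloud.google.com/gke-preemptible
          operator: DoesNotExist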
You can use the managed service from GCP named GKE (Google Kubernetes Engine); it's better to use the managed service, I think.
And the storage data was likely erased because the StorageClass's reclaim policy does not retain the volume when the PVC goes away; a sketch of a Retain policy follows.
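A minimal sketch of a StorageClass whose reclaim policy keeps the underlying disk after the PVC is deleted; the in-tree GCE provisioner is assumed and the name is illustrative:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: postgres-retain
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain       # the provisioned disk survives deletion of the PVC
parameters:
  type: pd-standard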

Windows containers on a Windows and Linux Kubernetes cluster

I'm kind of new to the Kubernetes world. In my project we are planning to use Windows containers (.NET Full Framework) in the short term and Linux containers (.NET Core) in the long run.
We have a K8s cluster provided by the infrastructure team, and the cluster has a mix of Linux and Windows nodes. I just wanted to know how my Windows containers will be deployed only to Windows nodes in the cluster. Is it handled by K8s, or do I need anything else?
Below are the details from the Kubernetes Windows documentation.
Because your cluster has both Linux and Windows nodes, you must explicitly set the nodeSelector constraint to be able to schedule pods to Windows nodes. You must set nodeSelector with the label kubernetes.io/os (beta.kubernetes.io/os on older clusters) to the value windows; see the following example:
apiVersion: v1
kind: Pod
metadata:
  name: iis
  labels:
    name: iis
spec:
  containers:
  - name: iis
    image: microsoft/iis:windowsservercore-1709
    ports:
    - containerPort: 80
  nodeSelector:
    "kubernetes.io/os": windows
You would need to add the following lines to your YAML file. Details are available here: https://kubernetes.io/docs/getting-started-guides/windows/
nodeSelector:
  "beta.kubernetes.io/os": windows