I'm kind of new to the Kubernetes world. In my project we are planning to use Windows containers (.NET Full Framework) in the short term and Linux containers (.NET Core) in the long run.
We have a K8s cluster provided by the infrastructure team, and the cluster has a mix of Linux and Windows nodes. I just wanted to know how my Windows containers will be deployed only to the Windows nodes in the cluster. Is this handled by Kubernetes, or do I need to do anything else?
Below are the details from the Kubernetes Windows Documentation.
Because your cluster has both Linux and Windows nodes, you must explicitly set the nodeSelector constraint to be able to schedule pods to Windows nodes. You must set nodeSelector with the label kubernetes.io/os (on older clusters, the deprecated beta.kubernetes.io/os) to the value windows; see the following example:
apiVersion: v1
kind: Pod
metadata:
  name: iis
  labels:
    name: iis
spec:
  containers:
  - name: iis
    image: microsoft/iis:windowsservercore-1709
    ports:
    - containerPort: 80
  nodeSelector:
    "kubernetes.io/os": windows
You would need to add the following lines to your YAML file; note that beta.kubernetes.io/os is the older, deprecated name for kubernetes.io/os, so use the latter on current clusters. Details are available here: https://kubernetes.io/docs/getting-started-guides/windows/

nodeSelector:
  "beta.kubernetes.io/os": windows
Summary
I'm trying to get minikube-test-ifs.com to map to my deployment using minikube.
What I Did
minikube start
minikube addons enable ingress
kubectl apply -f <path-to-yaml-below>
kubectl get ingress
Added the ingress IP mapping to my /etc/hosts file in the form <ip> minikube-test-ifs.com
I go to Chrome and enter minikube-test-ifs.com, and it doesn't load.
I get "site can't be reached, took too long to respond".
YAML file
Note: it's all in the default namespace, I don't know if that's a problem.
There may be a problem in this YAML, but I checked and double-checked and see no potential error... unless I'm missing something.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: nginx
        ports:
        - name: client
          containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  selector:
    app: test
  ports:
  - name: client
    protocol: TCP
    port: 3000
    targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: minikube-test-ifs.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-service
            port:
              number: 3000
OS
Windows 10
Other Stuff
I checked "Minikube with ingress example not working", but I already added the entry to my /etc/hosts, and I also tried removing the spec.host, but that still doesn't work...
I also checked "Minikube Ingress (Nginx Controller) not working", but that person already has his page loading, so it's not really relevant to me from what I can tell.
Any Ideas?
I watched so many YouTube tutorials on this and followed everything exactly. I'm still new to this, but I don't see a reason for it not working.
Edit
When I run kubectl describe ingress <ingress> I get:
Type    Reason  Age               From                      Message
----    ------  ----              ----                      -------
Normal  Sync    8s (x5 over 19m)  nginx-ingress-controller  Scheduled for sync
How do I get it to sync? Is there a problem, since it's been "Scheduled for sync" for a long time?
Overview
The ingress addon for minikube only works on Linux when using the docker driver.
Docker for Windows uses Hyper-V; therefore, if the Docker daemon is running, you will not be able to use VM platforms such as VirtualBox or VMware.
If you have Windows Pro, Enterprise or Education, you may be able to get it working if you use Hyper-V as your minikube driver (see Solution 1).
If you don't want to upgrade Windows, you can open a minikube cluster on a Linux virtual machine and run all your tests there. This will require you to configure some Windows VM settings in order to get your VMs to run (see Solution 2). Note that you can only run either Docker or a VM platform (other than Hyper-V), but not both (see The Second Problem for why this is the case).
The Problem
For those of you who are in the same situation as I was, the problem lies in the fact that the minikube ingress addon only works on Linux when using the docker driver (thanks to @rriovall for showing me this documentation).
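You can confirm which driver your cluster is using with minikube itself:

# Lists each cluster profile along with its driver and status
minikube profile list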
The Second Problem
So the solution should be simple, right? Just use a different driver and it should work. The problem here is that when Docker is installed on Windows, it uses the built-in Hyper-V virtualization technology, which by default seems to disable all other virtualization tech.
I have tested this hypothesis, and it seems to be the case. When the Docker daemon is running, I am unable to boot any virtual machine that I have. For instance, I got an error when I tried to run my VMs on VirtualBox and on VMware.
Furthermore, when I attempt to start a minikube cluster using the virtualbox driver, it gets stuck "booting the kernel" and then I get a This computer doesn't have VT-X/AMD-v enabled error. This error is wrong, as I do have VT-x enabled (I checked my BIOS). This is most likely due to the fact that when Hyper-V is enabled, all other types of virtualization tech seem to be disabled.
On my personal machine, when I do a search for "turn windows features on or off", I can see that Docker enabled "Virtual Machine Platform" and then asked me to restart my computer. This happened when I installed Docker. As a test, I turned off both the "Virtual Machine Platform" and "Windows Hypervisor Platform" features and restarted my computer.
What happened when I did that? The Docker daemon stopped running and I could no longer work with Docker; however, I was able to open my VMs and I was able to start my minikube cluster with virtualbox as the driver. The problem? Well, Docker doesn't work, so when my cluster tries to pull the Docker image I am using, it won't be able to.
So here lies the problem. Either you have VM tech enabled and Docker disabled, or you have VM tech (other than Hyper-V, I'll touch on that soon) disabled and Docker enabled. But you can't have both.
Solution 1 (Untested)
The simplest solution would probably be upgrading to Windows Pro, Enterprise, or Education. The Hyper-V platform is not accessible on Windows Home. Once you have upgraded, you should be able to use Hyper-V as your driver concurrently with the Docker daemon. This, in theory, should make the ingress work.
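If you do go this route, selecting the driver is a single flag; untested here for the same reason as the solution itself, but the flag is standard minikube:

# Create the cluster on Hyper-V instead of the docker driver, then enable ingress
minikube start --driver=hyperv
minikube addons enable ingress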
Solution 2 (Tested)
If you're like me and don't want to do a system upgrade for something so minuscule, there's another solution.
First, search your computer for the "turn windows features on or off" section, disable "Virtual Machine Platform" and "Windows Hypervisor Platform", and restart your computer. (See you in a bit :D)
After that, install a virtual machine platform on your computer. I prefer VirtualBox, but you can also use others such as VMware.
Once you have a VM platform installed, add a new Linux VM. I would recommend either Debian or Ubuntu. If you are unfamiliar with how to set up a VM, this video will show you how to do so. This will be the general set up for most iso images.
After you have your VM up and running, download minikube and Docker on it. Be sure to install the correct version for your VM (for Debian, install Debian versions; for Ubuntu, install Ubuntu versions; some downloads may just be generic Linux, which should work on most distributions).
Once you have everything installed, create a minikube cluster with docker as the driver and apply your Kubernetes configurations (deployment, service and ingress); a sketch of the commands follows below. Configure your /etc/hosts file, go to your browser, and it should work. If you don't know how to set up an ingress, you can watch this video for an explanation of what an ingress is, how it works, and an example of how to set it up.
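A rough sketch of that in-VM workflow, assuming your manifests are saved as app.yaml (a file name made up for illustration):

minikube start --driver=docker
minikube addons enable ingress
kubectl apply -f app.yaml
# Map the ingress hostname to the cluster IP, then test it
echo "$(minikube ip) minikube-test-ifs.com" | sudo tee -a /etc/hosts
curl http://minikube-test-ifs.com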
Typically, if I have a remote server, I can access it using ssh, and VS Code provides a beautiful extension for editing and debugging code on the remote server. But when I create pods in Kubernetes, I can't really ssh into the container, and so I cannot edit the code inside the pod or machine. The Kubernetes plugin in VS Code does not really help either, because that plugin is used to deploy the code. So I was wondering whether there is a way to edit code inside a pod using VS Code.
P.S. Alternatively, if there is a way to ssh into a pod in a Kubernetes cluster, that will do too.
If your requirement is for kubectl edit xxx to open VS Code, the solution is:
For Linux/macOS: export EDITOR='code --wait'
For Windows: set EDITOR=code --wait
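kubectl also honors its own KUBE_EDITOR variable, which takes precedence over EDITOR; a minimal check (my-app is a placeholder deployment name):

# Make kubectl edit open VS Code and block until the editor tab closes
export KUBE_EDITOR='code --wait'
kubectl edit deployment/my-app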
Kubernetes + Remote Development extensions now allow:
attaching to k8s pods
open remote folders
execute remotely
debug on remote
integrated terminal into remote
must have:
kubectl
docker (minimum = docker cli - Is it possible to install only the docker cli and not the daemon)
required VS Code extensions:
Kubernetes. https://marketplace.visualstudio.com/items?itemName=ms-kubernetes-tools.vscode-kubernetes-tools
Remote Development - https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.vscode-remote-extensionpack
Well, a pod is just a unit of deployment in Kubernetes, which means you can tune the containers inside it to receive an ssh connection.
Let's start by getting a docker image that allows ssh connections. The rastasheep/ubuntu-sshd:18.04 image is quite nice for this. Create a deployment with it:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: debugger
  name: debugger
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: debugger
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: debugger
    spec:
      containers:
      - name: debugger
        image: rastasheep/ubuntu-sshd:18.04
        imagePullPolicy: "Always"
      hostname: debugger
      restartPolicy: Always
Now let's create a service of type LoadBalancer such that we can access the pod remotely.
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  labels:
    app: debugger
  name: debugger
spec:
  type: LoadBalancer
  ports:
  - name: "22"
    port: 22
    targetPort: 22
  selector:
    app: debugger
status:
  loadBalancer: {}
Finally, get the external IP address by running kubectl get svc | grep debugger and use it to test the ssh connection: ssh root@<external_ip_address>
Note the user / pass of this docker image is root / root respectively.
UPDATE
NodePort example. I tested this and it worked running ssh -p 30036 root@<ip>, BUT I had to enable a firewall rule to make it work. So the nmap command that I gave you has the answer. Obviously the machines that run Kubernetes don't allow inbound traffic on weird ports. Talk to your administrators so that they can give you an external IP address, or at least open a port on a node.
---
apiVersion: v1
kind: Service
metadata:
  name: debugger
  namespace: default
  labels:
    app: debugger
spec:
  type: NodePort
  ports:
  - name: "ssh"
    port: 22
    nodePort: 30036
  selector:
    app: debugger
status:
  loadBalancer: {}
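With a NodePort, any node's IP works as the ssh target; a quick way to find one (root is the user of the image above):

# List node addresses, then connect to the NodePort on any of them
kubectl get nodes -o wide
ssh -p 30036 root@<node-ip>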
As mentioned in some of the other answers, you can do this, although it is fraught with danger: the cluster can (and will) replace pods regularly, and when it does, it starts a new pod idempotently from the configuration, which will not have your changes.
The command below will get you a shell session in your pod, which can sometimes be helpful for debugging if you don't have adequate monitoring/local testing facilities to recreate an issue.
kubectl --namespace=example exec -it my-cool-pod-here -- /bin/bash
Note: you can replace the command with any tool that is installed in your container (python3, sh, bash, etc). Also know that some base images like alpine won't have bash installed by default.
This will open a bash session in the running container on the cluster, assuming you have the correct k8s RBAC permissions.
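If the goal is specifically editing files, kubectl cp pairs well with that shell session; the namespace, pod name, and paths here are placeholders (and note that kubectl cp needs tar available inside the container):

# Copy a file out of the pod, edit it locally, and copy it back
kubectl cp example/my-cool-pod-here:/app/main.py ./main.py
code --wait ./main.py
kubectl cp ./main.py example/my-cool-pod-here:/app/main.py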
There is a Cloud Code extension available for VS Code that will serve your purpose.
You can install it in your Visual Studio Code to interact with your Kubernetes cluster.
It allows you to create a minikube, Google GKE, Amazon EKS or Azure AKS cluster and manage it from VS Code (you can access cluster information, stream/view logs from pods, and open an interactive terminal to the container).
You can also enable continuous deployment, so it will continuously watch for changes in your files, rebuild the container, and redeploy the application to the cluster.
It is well explained in the documentation.
Hope it will be useful for your use case.
With the support of Windows Server 2019 in Kubernetes 1.14, it seems possible to have nodes with different operating systems, for example an Ubuntu 18.04 node, a RHEL 7 node, and a Windows Server node within one cluster.
In my use case I would like to have a pre-configured queue system with a queue per OS type. The nodes would feed off their specific queues, processing the jobs.
With the above in mind, is it possible to configure a Job to go to a specific queue and in turn a specific OS node?
Kubernetes nodes come populated with a standard set of labels; these include kubernetes.io/os.
Pods can then be assigned to certain places via a nodeSelector, podAffinity and podAntiAffinity.
apiVersion: v1
kind: Pod
metadata:
  name: anapp
spec:
  containers:
  - image: docker.io/me/anapp
    name: anapp
    ports:
    - containerPort: 8080
  nodeSelector:
    kubernetes.io/os: linux
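Since the question is about Jobs: the same nodeSelector goes inside the Job's pod template. A sketch, reusing the hypothetical image above for a worker that should land on the Windows queue's nodes:

apiVersion: batch/v1
kind: Job
metadata:
  name: windows-queue-worker
spec:
  template:
    spec:
      containers:
      - name: worker
        image: docker.io/me/anapp   # hypothetical image, as in the pod example
      # Pin this Job's pod to Windows nodes
      nodeSelector:
        kubernetes.io/os: windows
      restartPolicy: Never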
If you need finer-grained control (for example choosing between Ubuntu/RHEL), you will need to add custom labels to your Kubernetes nodes to select from. This level of selection is rare, as container runtimes try to hide most of the differences from you, but if you have a particular case then add extra label metadata to the nodes.
I would recommend using the ID and VERSION_ID fields from cat /etc/*release*, as most Linux distros populate this information in some form.
kubectl label node thenode softey.com/release-id=debian
kubectl label node thenode softey.com/release-version-id=9
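A pod or Job can then select on those custom labels exactly as it would the built-in ones (label values are strings, so the version number needs quoting):

nodeSelector:
  softey.com/release-id: debian
  softey.com/release-version-id: "9"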
To my understanding, Kubernetes is a container orchestration service, comparable to AWS ECS or Docker Swarm. Yet there are several highly rated questions on Stack Overflow that compare it to Cloud Foundry, which is a platform orchestration service.
This means that Cloud Foundry can take care of the VM layer, updating and provisioning VMs while moving containers to avoid downtime. Therefore, to my understanding, the comparison to Kubernetes makes limited sense.
Am I misunderstanding something? Does Kubernetes support provisioning and managing the VM layer too?
Yes, you can manage VMs with KubeVirt, as @AbdennourTOUMI pointed out. However, Kubernetes focuses on container orchestration, and it also interacts with cloud providers to provision things like load balancers that can direct traffic to a cluster.
Cloud Foundry is a PaaS that provides much more than Kubernetes does at the lower level. Kubernetes can run on top of an IaaS like AWS, together with something like OpenShift.
This is a diagram showing some of the differences:
As for VMs, my answer is YES; you can run VMs as workloads in a k8s cluster.
Indeed, the Red Hat team figured out how to run VMs in a Kubernetes cluster with the KubeVirt add-on.
Example from the link above:
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachine
metadata:
  creationTimestamp: null
  labels:
    kubevirt.io/vm: vm-cirros
  name: vm-cirros
spec:
  running: false
  template:
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/vm: vm-cirros
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: registrydisk
            volumeName: registryvolume
          - disk:
              bus: virtio
            name: cloudinitdisk
            volumeName: cloudinitvolume
        machine:
          type: ""
        resources:
          requests:
            memory: 64M
      terminationGracePeriodSeconds: 0
      volumes:
      - name: registryvolume
        registryDisk:
          image: kubevirt/cirros-registry-disk-demo:latest
      - cloudInitNoCloud:
          userDataBase64: IyEvYmluL3NoCgplY2hvICdwcmludGVkIGZyb20gY2xvdWQtaW5pdCB1c2VyZGF0YScK
        name: cloudinitvolume
Then:
kubectl create -f vm.yaml
virtualmachine "vm-cirros" created
I have configured a Kubernetes cluster using kubeadm, by creating 3 VirtualBox nodes, each running CentOS (master, node1, node2). Each VirtualBox virtual machine is configured to use 'Bridged' networking.
As a result, I have the following setup:
Master node 'master.k8s' running at 192.168.19.87 (VirtualBox)
Worker node 1 'node1.k8s' running at 192.168.19.88 (VirtualBox)
Worker node 2 'node2.k8s' running at 192.168.19.89 (VirtualBox)
Now I would like to access services running in the cluster from my local machine (the physical machine where the virtualbox nodes are running).
Running kubectl cluster-info I see the following output:
Kubernetes master is running at https://192.168.19.87:6443
KubeDNS is running at ...
As an example, let's say I deploy the dashboard inside my cluster, how do I open the dashboard UI using a browser running on my physical machine?
The traditional way is to use kubectl proxy or a load balancer, but since you are on a development machine, a NodePort can be used to publish the applications, as a load balancer is not available in VirtualBox.
The following example deploys 3 replicas of an echo server running nginx and publishes the http port using a NodePort:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: my-echo
        image: gcr.io/google_containers/echoserver:1.8
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-np
  labels:
    name: nginx-service-np
spec:
  type: NodePort
  ports:
  - port: 8082        # Cluster IP http://10.109.199.234:8082
    targetPort: 8080  # Application port
    nodePort: 30000   # Example (EXTERNAL-IP VirtualBox IPs) http://192.168.50.11:30000/ http://192.168.50.12:30000/ http://192.168.50.13:30000/
    protocol: TCP
    name: http
  selector:
    app: nginx
You can access the servers using any of the VirtualBox IPs, like
http://192.168.50.11:30000 or http://192.168.50.12:30000 or http://192.168.50.13:30000
See a full example at Building a Kubernetes Cluster with Vagrant and Ansible (without Minikube).
The traditional way of getting access to the Kubernetes dashboard is documented in its readme and is to use kubectl proxy.
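In short, the flow looks roughly like this (the exact dashboard URL path depends on the dashboard version and namespace; this one matches recent releases deployed in the kubernetes-dashboard namespace):

# Proxy the API server to localhost, then open the dashboard through it
kubectl proxy
# then browse to:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/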
One should not have to ssh into the cluster to access any kubernetes service, since that would defeat the purpose of having a cluster, and would absolutely shoot a hole in the cluster's security model. Any ssh to Nodes should be reserved for "in case of emergency, break glass" situations.
More generally speaking, a well-configured Ingress controller will surface services en masse, and it also has the very pleasing side effect that your local cluster will operate exactly the same as your "for real" cluster, without any underhanded ssh-ery required.