Can I install Kubernetes on Amazon Linux 2?

I'm having trouble installing kubeadm on my Amazon Linux 2 instance, specifically when I try to create a cluster.
When I try installing a container runtime, I get to choose which one to use:
containerd
CRI-O
Docker Engine
Mirantis Container Runtime
First of all, I'm wondering which one of these is compatible with Amazon Linux 2, and second of all, whenever I run yum install for any CRI I get the same error.
This is the output of the command yum install cri-o:
The doc that I followed is: https://kubernetes.io/docs/setup/production-environment/container-runtimes/

Hi, hope you are enjoying your Kubernetes journey!
First off, I want to tell you that you can use whichever of these container runtimes you want.
You can use Docker if you are not familiar with the others, but containerd is in my opinion the best lightweight alternative (containerd is used inside Docker, but for Kubernetes you don't need all the layers that Docker provides, only the container runtime itself, i.e. containerd). You can read this for more info, but there is plenty of other documentation about this: https://www.tutorialworks.com/difference-docker-containerd-runc-crio-oci/
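Since you are on Amazon Linux 2, here is a minimal, hedged sketch of getting containerd running there; it assumes the package is available through the amazon-linux-extras docker topic, which may differ on your AMI:
# enable the extras topic that ships containerd (assumption: amazon-linux-extras is present on your AMI)
sudo amazon-linux-extras enable docker
sudo yum install -y containerd
# generate a default config and switch to the systemd cgroup driver, as the upstream container-runtimes doc recommends
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl enable --now containerd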
Second of all, I don't know how you are trying to install your Kubernetes cluster, but there are a few ways to do it:
The hardest but most instructive is Kubernetes the Hard Way (https://github.com/kelseyhightower/kubernetes-the-hard-way).
Next, you can use kubeadm (again, there is plenty of documentation on the internet, but you can follow one of the kubeadm tutorials, e.g. https://devopscube.com/setup-kubernetes-cluster-kubeadm/); a short kubeadm sketch follows this list.
Here is a list of tools you can use to install your Kubernetes cluster; you can look for tutorials for each of them on the internet: https://dzone.com/articles/50-useful-kubernetes-tools
Last but not least, since you are on AWS, you can use the EKS service to quickly set up a robust Kubernetes cluster (https://aws.amazon.com/fr/eks/).
That last one is for AWS. If you want a local k8s cluster, I strongly suggest you use kind (Kubernetes in Docker).
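As promised, a minimal kubeadm sketch for a single control-plane node; run it after a container runtime is installed and swap is disabled, and note that the pod CIDR below is just a commonly used example value:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# make kubectl usable for your regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# then install a CNI plugin of your choice and join workers with the printed "kubeadm join ..." command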
Bguess

Related

How to determine kubernetes version from within EKS node

We’re providing our own AMI node images for EKS using the self-managed node feature.
The challenge I’m currently having is how to fetch the kubernetes version from within the EKS node as it starts up.
I’ve tried IMDS - which unfortunately doesn’t seem to have it:
root@ip-10-5-158-249:/# curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/
ami-id
ami-launch-index
ami-manifest-path
autoscaling/
block-device-mapping/
events/
hostname
iam/
identity-credentials/
instance-action
instance-id
instance-life-cycle
instance-type
local-hostname
local-ipv4
mac
metrics/
network/
placement/
profile
reservation-id
It also doesn’t seem to be passed in by the EKS bootstrap script; it seems AWS is baking a single K8s version into each AMI (install-worker.sh).
This is different from Azure’s behaviour of baking a bunch of Kubelets into a single VHD.
I’m hoping for something like IMDS or a passed in user-data param which can be used at runtime to symlink kubelet to the correct kubelet version binary.
Assuming you build your AMI based on the EKS optimized AMI, one possible way is to use kubelet --version to capture the K8s version in your custom build; as you know, the EKS AMI is coupled with the control plane version. If you are not using the EKS AMI, you will need to make an aws eks describe-cluster call to get cluster information in order to join the cluster; the version is provided at cluster.version.
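A short sketch of both approaches; it assumes CLUSTER_NAME is made available to the node (e.g. via user data) and that the instance role is allowed to call eks:DescribeCluster:
# 1) on an EKS optimized AMI (or a custom AMI built from it), ask the bundled kubelet:
kubelet --version | awk '{print $2}'     # prints something like v1.21.5
# 2) otherwise, ask the control plane for the cluster version:
aws eks describe-cluster --name "$CLUSTER_NAME" --query 'cluster.version' --output text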

Specify containerd version on minikube or kind

I am trying to reproduce an issue that requires me to use containerd v1.4.4 as my container runtime and Kubernetes v1.19.8. When I try to use minikube to create a multi-node cluster locally, it allows me to specify the Kubernetes version, but I am unable to specify the containerd version (i.e. it always uses v1.4.9), and based on this GitHub discussion, it doesn't seem to support it. I then turned to kind but was unable to find a way to specify the same from the documentation. Is there a way, either in kind or in minikube, to specify the containerd version?
I ended up using kubeadm and set up a master and a worker node on 2 VMs. This allowed me to specify the versions I want on the worker node. Building a base image for kind should also work, as user Mikolaj.S mentioned.
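For reference, a hedged kind sketch: you can pin the Kubernetes version by choosing a node image, but the containerd inside it is whatever was baked in, so changing it means building your own base/node image; the image tag below is an assumption, check kind's release notes for tags that actually exist:
kind create cluster --name repro --image kindest/node:v1.19.8
docker exec repro-control-plane containerd --version   # check which containerd that image actually ships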

Change Container Runtime without destroying cluster

We are running multiple kubespray-deployed clusters with 10-100 nodes.
With 1.20, Kubernetes deprecates dockershim support -> https://github.com/kubernetes/kubernetes/blob/ab32085bf36fc7af1ded30456e2f09399dc1115f/CHANGELOG/CHANGELOG-1.20.md#deprecation
How do we change the container runtime to containerd, without removing nodes and without destroying the master?
I am not panicking, just want to be prepared; we are at 1.19 already, so 1.22 is not so far away.
Anyway, I tested it with a smaller cluster, and it was way easier than expected.
Change container_manager to containerd.
Run the kubespray cluster.yml playbook over all nodes, and boom.
I only needed a simple Ansible playbook to uninstall Docker et al., but it also works with Docker still installed.
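A hedged sketch of that flow; the inventory path and group_vars file below are just the common kubespray layout, yours may differ by version:
# in inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml set:
#   container_manager: containerd
ansible-playbook -i inventory/mycluster/hosts.yaml -b cluster.yml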
Please treat this answer as friendly advice.
First of all, as suggested in yesterday's fresh article Don't Panic: Kubernetes and Docker:
You do not need to panic :)
Kubernetes is only deprecating Docker as a container runtime after v1.20. They are currently only planning to remove Docker runtime support in the 1.22 release in late 2021 (almost a year away!), so please don't break your 100-node clusters until a working solution appears :)

Deploy Local Docker Image with Kubeadm

I have a single-node kubeadm deployment. I want to be able to run docker images the same way you can in Minikube with eval $(minikube docker-env). Is this possible?
I know that I can side-load a tarball and start a docker image to host my images, but I don't want that. Also, it seems that it would be most helpful to find out how to find the environment variables related to any program, not just this one. I will be looking into that but figured I would ask in case someone knew right away.
For the Docker image containing the kubeadm binary, you can find it on Docker Hub (e.g. for Kubernetes v1.14.0, use 'kindest/node:v1.14.0').
If you want to re-use the node's docker engine, follow one of the many online tutorials on how to 'connect the docker client to a remote docker daemon'. If you want an experience similar to 'minikube docker-env', install the standalone docker-machine tool and follow its setup guide.
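A minimal sketch of the 'remote docker daemon' approach; it assumes you have deliberately configured the node's dockerd to listen on a TCP socket, and the address and port are placeholders:
# point the local docker CLI at the node's daemon (plain-text 2375 is for trusted lab networks only)
export DOCKER_HOST=tcp://<node-ip>:2375
docker build -t myapp:dev .      # the image now lands directly in the node's image cache
docker images | grep myapp
unset DOCKER_HOST                # switch back to the local daemon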

How do you install Python libraries on gcloud kubernetes nodes?

I have a gcloud Kubernetes cluster running and a Google bucket that holds some data I want to run on the cluster.
In order to use the data in the bucket, I need gcsfs installed on the nodes. How do I install packages like this on the cluster using gcloud, kubectl, etc.?
Check if a recipe like "Launch development cluster on Google Cloud Platform with Kubernetes and Helm" could help.
Using Helm, you can define workers with additional pip packages:
worker:
  replicas: 8
  limits:
    cpu: 2
    memory: 7500000000
  pipPackages: >-
    git+https://github.com/gcsfs/gcsfs.git
    git+https://github.com/xarray/xarray.git
  condaPackages: >-
    -c conda-forge
    zarr
    blosc
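You would then pass those values to helm on install or upgrade; the release and chart names below are placeholders for whatever the linked recipe uses:
# values.yaml holds the worker block above
helm upgrade --install my-release <repo>/<chart> -f values.yaml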
I don't know if the suggestion given by VonC will actually work, but what I do know is that you're not really supposed to install stuff onto a Kubernetes Engine worker node. This is evident from the fact that it neither has a package manager nor allows updating individual programs separately.
Container-Optimized OS does not support traditional package managers (...) This design prevents updating individual software packages in the root filesystem independent of other packages.
Having said that, if the number of nodes in a node pool is static, you can customize its worker nodes via startup scripts. These still work as intended, but since you can't edit the instance template used by the node pool, you'll have to add the scripts to the instances manually. So again, this is clearly not a very good way of doing things.
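A hedged example of what adding a script to one node by hand could look like; the instance name, zone and script file are placeholders, and the script only runs on the next boot:
gcloud compute instances add-metadata gke-mypool-node-1 \
  --zone us-central1-a \
  --metadata-from-file startup-script=install-gcsfs.sh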
Finally, worker nodes have something called a "Toolbox", which is basically a special container you can run to get access to debugging tools. This container is run directly on Docker, not scheduled by Kubernetes. You can customize this container image, so you can add some extra tools into it.
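The toolbox is started from the node over SSH with a single command; it is a Debian-based container, so anything you install stays inside the toolbox rather than on the node itself (the package below is just an example):
toolbox                                           # drops you into the debugging container as root
apt-get update && apt-get install -y htop         # example package, installed only inside the toolbox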