Kubernetes 1.9.0 kubeadm init - crictl not found in system path

I am setting up a Kubernetes cluster on a CentOS 7 machine, and the kubeadm init command gives me the warning message below.
[init] Using Kubernetes version: v1.9.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.09.1-ce. Max validated version: 17.03
[WARNING FileExisting-crictl]: crictl not found in system path
How can I fix this crictl not found in system path warning? Do I need to install any additional software?

Yes, you need additional software. crictl is part of the cri-tools repo on github.
At least when I encountered this problem (Dec 20, 2017), cri-tools wasn't available in Kubernetes' package repo, so I had to download the source and build it. cri-tools is written in Go, so you may need to install Go on your system as well.

I installed crictl with
go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
If you don't have Go on your system, you can install crictl from a prebuilt release:
https://github.com/kubernetes-incubator/cri-tools/releases
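For example, grabbing a prebuilt binary might look like the following sketch. The version tag and asset name here are illustrative; check the releases page for the tag that matches your Kubernetes version before running it.

```shell
# Illustrative version/asset name -- verify against the actual release you need.
VERSION="v1.0.0-beta.0"
ASSET="crictl-${VERSION}-linux-amd64.tar.gz"
curl -LO "https://github.com/kubernetes-incubator/cri-tools/releases/download/${VERSION}/${ASSET}"
# Unpack the binary somewhere on your PATH so kubeadm's preflight check finds it.
sudo tar -C /usr/local/bin -xzf "${ASSET}"
crictl --version
```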

Related

how to change kubectl version in lens?

First I installed Lens on my Mac. When I tried to open a shell on one of the pods, a message said that I didn't have kubectl installed, so I installed kubectl and it worked properly.
Now I try to change configmaps but I get an error
kubectl/1.18.20/kubectl not found
When I check the kubectl folder, there are two kubectl versions: 1.18.20 and 1.21.
1.21 is the one that I installed before.
How can I change the kubectl version that Lens has defined (1.18.20) to 1.21?
Note:
Lens: 5.2.0-latest.20210908.1
Electron: 12.0.17
Chrome: 89.0.4389.128
Node: 14.16.0
Thanks in advance, sorry for bad English
You can set the kubectl path at File -> Preferences -> Kubernetes -> PATH TO KUBECTL BINARY. Or you can check "Download kubectl binaries matching the Kubernetes cluster version"; this way Lens will use the same version as your target cluster.
By the way, you should upgrade to the latest version, v5.2.5.

Elastic Beanstalk application deployment fails from EBExtension failing to install

We're using Elastic Beanstalk (Postgres, Node.js running on 64bit Amazon Linux/3.2.0), and I woke up today to a Severe Health warning, with all requests responding with a 502 Bad Gateway. I haven't manually deployed since 4/9/19, so I'm not sure why this happened all of a sudden.
The original error we got was:
Application deployment failed at 2019-04-18T15:39:51Z with exit status 1 and error: Package listed in EBExtension failed to install.
Yum does not have postgresql96-devel available for installation.
The repo I inherited is a little untidy, and I found instances of postgresql96-devel in three different files:
.ebextensions/config.yml
.ebextensions/proxy.config
proxy.config
My config.yml file looks like:
packages:
  rpm:
    postgresql: https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-6-x86_64/pgdg-ami201503-96-9.6-2.noarch.rpm
  yum:
    postgresql96-devel: []
    perl-CPAN: []
I noticed the rpm link returns a 404, and when looking for a better url, I saw this warning on the Postgres RPM page:
As of 15 April 2019, there is only one repository RPM per distro, and it includes repository information for all available PostgreSQL releases
What I've tried:
Redeploying the last successful build from 4/9/19
Changing the config.yml file to look like
// obviously I'm thrashing here
packages:
  rpm:
    postgresql: https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-6-x86_64/pgdg-redhat-repo-latest.noarch.rpm
  yum:
    postgresql11-devel: []
    perl-CPAN: []
According to this AWS support article, I terminated the instance and let EB bring up a new instance.
With all these trials plus redeploying, I'm still seeing errors like:
Application deployment failed at 2019-04-18T17:40:41Z with exit status 1 and error: Package listed in EBExtension failed to install.
Yum does not have postgresql96-devel available for installation.
Incorrect application version "app-v1_4_1-190418_084747" (deployment 98). Expected version "app-v1_4_1-190409_140626" (deployment 104).
Process default has been unhealthy for 42 minutes (Target.FailedHealthChecks).
I'm not sure why it's complaining about postgresql96-devel, since I changed my config file to point to postgresql11-devel.
Any ideas how to get things back up and running?
I was able to get everything back up and running. Here's what I think happened.
Our prod instances were running Linux 3.2. This did not contain the correct rpm package, so it relied on the rpm link from config.yml.
That url broke as of 4/15/19, so when EB went to deploy and pull that RPM, it failed, causing the entire deployment to fail.
The fix was simply to downgrade the yum package from postgresql96-devel to postgresql95-devel. Linux 3.2's yum directory contained postgresql95-devel, so the deployment was able to skip going out to the internet to download the rpm (which at this point was broken).
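Under that fix, the packages section of config.yml reduces to something like the sketch below. The broken rpm entry is dropped entirely, since the distro's own yum directory already carries the package (this is an illustration based on the files quoted above, not the exact file from the repo):

```yaml
packages:
  yum:
    postgresql95-devel: []
    perl-CPAN: []
```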
You can install PostgreSQL 9.6 using the amazon-linux-extras tool:
(if using docker, in a Dockerfile: )
RUN amazon-linux-extras install postgresql9.6

kubeadm throws "command not found" error. What to do?

I am new to Kubernetes. I have kubernetes and kubelet installed on my Linux (RHEL 7) system. I want to get kubeadm on my system, but due to my organization's policy, I can't install it via yum or apt-get, etc.
Now I am trying to find the kubeadm rpm file that is compatible with my Red Hat Linux system, which I can then install manually. I found the rpm files here, but trying to install them shows the following error:
"error: kubernetes-kubeadm-1.10.3-1.fc29.ppc64le.rpm: not an rpm package" for every rpm file.
How do I solve this? Or are these files compatible with Fedora instead?
You can find links to the official packages for all OSes, including RHEL 7, on the docs page: https://kubernetes.io/docs/setup/independent/install-kubeadm/
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet && systemctl start kubelet
As pointed out by @code-ranger, you can download packages from the Kubernetes repo directly, and the way to do that is:
The following link is the xml file which lists all the packages for kubernetes:
https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/primary.xml
This lists all the packages present in the Kubernetes repo. Search for kubeadm and you will find an entry that gives you a link to the kubeadm rpm package, which you can use as follows:
https://packages.cloud.google.com/yum/pool/5af5ecd0bc46fca6c51cc23280f0c0b1522719c282e23a2b1c39b8e720195763-kubeadm-1.13.1-0.x86_64.rpm
Note: these links expire after a few days or weeks and new hashes are generated, so it is better to download the rpm locally instead of using the link directly.
In a similar fashion, you can download other packages like kubelet, kubectl, etc.
Hope this helps.
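The download-then-install step above can be sketched like this, using the kubeadm URL quoted earlier (remember the hash in the link rotates, so fetch a fresh one from primary.xml first):

```shell
# Fetch the rpm while the link is still valid, then install from the local file.
URL="https://packages.cloud.google.com/yum/pool/5af5ecd0bc46fca6c51cc23280f0c0b1522719c282e23a2b1c39b8e720195763-kubeadm-1.13.1-0.x86_64.rpm"
curl -LO "$URL"
# curl -O saves under the last path component of the URL.
sudo rpm -ivh "$(basename "$URL")"
```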

[ERROR KubeletVersion]: the kubelet version is higher than the control plane version

I'm new to Kubernetes and I'm setting up my first testing cluster. However, I get this error when I set up the master node, and I'm not sure how to fix it.
[ERROR KubeletVersion]: the kubelet version is higher than the control plane version.
This is not a supported version skew and may lead to a malfunctional cluster.
Kubelet version: "1.12.0-rc.1" Control plane version: "1.11.3"
The host is fully patched to the latest levels
CentOS Linux release 7.5.1804 (Core)
Many Thanks
S
I hit the same problem and used the kubeadm option: --kubernetes-version=v1.12.0-rc.1
sudo kubeadm init --pod-network-cidr=172.16.0.0/12 --kubernetes-version=v1.12.0-rc.1
I'm using a VM image that was prepared a few weeks ago and have just updated the packages. kubeadm, kubectl and kubelet all now report version v1.12.0-rc.1, but when kubeadm init is called it kicks off with the previous version.
[init] using Kubernetes version: v1.11.3
Specifying the (control plane) version did the trick.
Install the same version of kubelet and kubeadm:
yum -y remove kubelet
yum -y install kubelet-1.11.3-0 kubeadm-1.11.3-0
I'm getting the same error on a clean CentOS 7 install, after fully updating with yum update and then applying the setup instructions from https://kubernetes.io/docs/setup/independent/install-kubeadm/.
Adding the option for --ignore-preflight-errors=KubeletVersion allows the installer to continue but the installation is non-working afterwards.
I was able to remove everything and reinstall matching versions with the following:
yum -y remove kubelet kubeadm kubectl
yum install -y --disableexcludes=kubernetes kubeadm-1.11.3-0.x86_64 kubectl-1.11.3-0.x86_64 kubelet-1.11.3-0.x86_64
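After reinstalling, a quick sanity check (a sketch; the exact output format varies between releases) confirms that the versions line up before retrying kubeadm init:

```shell
# All three packages should report the same 1.11.3 version.
rpm -q kubelet kubeadm kubectl
kubeadm version -o short
kubelet --version
```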

Error creating ubuntu 16 container under arch

I am trying to install an Ubuntu container on Arch Linux using LXC. I am following this guide: https://gist.github.com/manoj23/8a35849697945896cdaef77927c695a7
After I run this command:
lxc-create --name=ubuntu-16 --template=ubuntu -- --release xenial --arch amd64
I get the following error:
Bad template: ubuntu
Error creating container ubuntu-16
Why is this happening?
It says it in the error: bad template.
You can see that in the current version of LXC there is no ubuntu template. The gist is probably for a previous version.
The LXC documentation does not really have any clear examples of using the updated method, but the Ubuntu LXC documentation does: https://help.ubuntu.com/lts/serverguide/lxc.html
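For reference, recent LXC versions replace the per-distro templates with a generic download template. The equivalent of the original command would be something like the following sketch (untested here; dist/release/arch mirror the options from the original ubuntu-template command):

```shell
# The "download" template fetches prebuilt images from the LXC image server.
lxc-create --name=ubuntu-16 --template=download -- --dist ubuntu --release xenial --arch amd64
```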