How to fix calico.yaml for a Kubernetes cluster?

After trying several options to resolve the issue with weave-net (How to fix weave-net CrashLoopBackOff for the second node?), I have decided to try Calico instead of weave-net. The Kubernetes documentation says I only need one or the other. The command (as per the documentation here: https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes/manifests/kubeadm) fails:
vagrant@vm-master:~$ sudo kubectl create -f https://github.com/projectcalico/calico-containers/blob/master/docs/cni/kubernetes/manifests/kubeadm/calico.yaml
yaml: line 6: mapping values are not allowed in this context
What am I doing wrong? Is it a known issue? How can I fix or work around it?

You need to reference the raw YAML file in your command, instead of the full GitHub HTML document:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-containers/master/docs/cni/kubernetes/manifests/kubeadm/calico.yaml

Simply replace the HTML URL with the raw URL and it will work.
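If in doubt, you can check what a URL actually serves before passing it to kubectl; this is just a quick sanity check, not part of the official instructions:
# The raw URL returns plain YAML, while the github.com "blob" page returns an HTML document
curl -sL https://raw.githubusercontent.com/projectcalico/calico-containers/master/docs/cni/kubernetes/manifests/kubeadm/calico.yaml | head -n 5
# Or download the manifest first and apply the local copy
curl -sLo calico.yaml https://raw.githubusercontent.com/projectcalico/calico-containers/master/docs/cni/kubernetes/manifests/kubeadm/calico.yaml
kubectl create -f calico.yaml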

Related

unable to create a pv due to VOL_DIR: parameter not set

I'm running rke2 version v1.22.7+rke2r2 on 3 nodes. Today I decided to reinstall my application, and I'm no longer able to do it due to a problem claiming the PV.
I have never had this problem before, and I think it is due to an update of local-path-provisioner, but I'm not sure; I'm still a newbie with kube.
Anyway these are the commands I run before installing my solution:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
I omitted metallb. Then, as a test, I tried to install the example specified on the local-path-provisioner website (https://github.com/rancher/local-path-provisioner):
kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pvc/pvc.yaml
kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml
What I see is that the PVC stays in Pending status; when I check pod creation in the local-path-storage namespace, I see that helper-pod-create-pvc-xxxx goes into an error state.
I tried to get some logs, and the only thing I was able to grab is this:
kubectl -n local-path-storage logs helper-pod-create-pvc-dd8cecf3-d65b-48f7-9e04-d56a20573f8e -f
/script/setup: line 3: VOL_DIR: parameter not set
So it seems VOL_DIR is not set for whatever reason. But I never did any custom configuration; it always started without problems, and to be honest I don't know what to put in the VOL_DIR env variable, or where.
I'll just answer my own question. It seems to be a bug in local-path-provisioner;
they are fixing it.
In the meantime, instead of using the latest manifest from master, which has the bug, please use v0.0.21, like this:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.21/deploy/local-path-storage.yaml
I tested it and it works fine.
The deploy manifest in master branch is already fixed.
The master branch is for development, so please use a v0.0.x tag (e.g. v0.0.21, a stable release) for production use.
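As a quick sanity check after applying the manifest (generic kubectl queries, not something specific to this bug), you can verify the storage class, the provisioner pod, and whether the example PVC binds; local-path volumes only bind once a pod actually uses the claim:
# The local-path storage class should exist and be marked as default
kubectl get storageclass
# The provisioner deployment should be running, and its logs show provisioning activity
kubectl -n local-path-storage get pods
kubectl -n local-path-storage logs deploy/local-path-provisioner
# The example claim (named local-path-pvc in the project examples) stays Pending until the test pod is scheduled, then turns Bound
kubectl get pvc local-path-pvc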

I have difficulties in composing an image using kubectl

Please see the following image. I typed kubectl apply -k directory but there is an error regarding the root path.
I hope someone can help. Can it only be deployed in an Azure environment?
If I type kubectl -f xxx.yaml, the YAML file is found but there is still an error. Sorry, I am unable to reproduce the problem in kubelet once I start minikube.
Use -f to specify a YAML file.
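For context, the two flags take different inputs: -f points at a manifest file (or URL), while -k points at a directory containing a kustomization.yaml. A minimal sketch, where the file and directory names are only placeholders:
kubectl apply -f deployment.yaml   # -f: a single manifest file (or a URL)
kubectl apply -k ./my-app/         # -k: a directory that must contain kustomization.yaml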

kubectl set image error: arguments in resource/name form may not have more than one slash (kubernetes)

I want to deploy my project to the Kubernetes cluster. I want to deploy it by using this command:
- kubectl set image deployment/$CLUSTER_NAME gcr.io/$PROJECT_ID/$DOCKER_REPOSITORY:latest
But here I get an error:
It's a misleading error message.
Essentially, instead of abcxyz/abcxyz:example you also need to specify the container name that the image should be assigned to, for example example=abcxyz/abcxyz:example.
It's quite complicated and misleading, I've got to say. The public docs don't help much, but kubectl set image --help does.
The problem is that you might have multiple containers in the deployment. If you have only one, you can do something like this (note that this works, but it's not as specific as you might want):
# The part before '=' is spec.template.spec.containers[].name, the sibling field of image
kubectl set image deployments goliardiait-staging=gcr.io/goliardia-prod/goliardia-it-matrioska:2.12 --all
I'll update when I find what nails it. In your case:
kubectl set image deployment $CLUSTER_NAME=gcr.io/$PROJECT_ID/$DOCKER_REPOSITORY:latest --all
It is working when using a command like this:
- kubectl set image deployment/$CLUSTER_NAME $INSTANCE_NAME=gcr.io/$PROJECT_ID/$DOCKER_REPOSITORY:latest
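If you are not sure what the container name is, one way to look it up (a generic kubectl query, not part of the original answer) is:
# Print the container names defined in the deployment's pod template
kubectl get deployment/$CLUSTER_NAME -o jsonpath='{.spec.template.spec.containers[*].name}'
# Whatever it prints is what goes on the left-hand side of the '=' in kubectl set image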

Where is the Kubernetes YAML/JSON configuration files documentation?

Hi,
I'm looking for the documentation for Kubernetes's configuration files. The ones used by kubectl (e.g. kubectl create -f whatever.yaml).
Basically, the Kubernetes equivalent of this Docker Compose document.
I searched a lot, but I didn't find much, or only 404 links from old Stack Overflow questions.
You could use the official API docs, but a much more user-friendly way on the command line is the explain command. For example, I never remember exactly what goes into the spec of a pod, so I do:
$ kubectl explain Deployment.spec.template.spec
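You can drill down one field at a time or dump a whole subtree at once; a couple of usage examples (the field paths here are standard, the choice of fields is just for illustration):
# Documentation for a single field
$ kubectl explain deployment.spec.template.spec.containers.resources
# All fields under a subtree, recursively
$ kubectl explain deployment.spec.template.spec --recursive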

Kubernetes rolling update for same image

The Kubernetes documentation describes doing a rolling update with an updated Docker image. In my case I need to do a rolling update of my pods using the same image. Is it possible to do a rolling update of a replication controller with the same Docker image?
In my experience, you cannot. If you try to (e.g., using the method George describes), you get the following error:
error: must specify a matching key with non-equal value in Selector for api
see 'kubectl rolling-update -h' for help.
The above is with Kubernetes v1.1.
Sure you can. Try this command:
$ kubectl rolling-update <rc name> --image=<image-name>:<tag>
If your image:tag has been used before, you may want to do the following to make sure Kubernetes pulls the latest image:
$ docker pull <image-name>:<tag>
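Another option along the same lines (my own suggestion, not part of the answer above) is to set imagePullPolicy: Always on the container, so every new pod re-pulls the tag even when the tag itself has not changed:
containers:
- name: my-app                  # hypothetical container name
  image: <image-name>:<tag>
  imagePullPolicy: Always       # force a fresh pull each time a pod starts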