I have difficulties in composing an image using kubectl - kubernetes

Please see the image below. I typed kubectl apply -k directory but there is an error about the root path.
I hope someone can help. Can this only be deployed in an Azure environment?
If I type kubectl -f xxx.yaml, the YAML file is found but there is still an error. Sorry, I am unable to reproduce the problem that occurs in kubelet once I have started minikube.

Use -f to specify a YAML file.
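For example, -f points at a manifest file while -k points at a directory that contains a kustomization.yaml; a minimal sketch (the paths below are placeholders, not from the original question):

kubectl apply -f ./xxx.yaml     # apply a single manifest file
kubectl apply -k ./my-app/      # the directory must contain a kustomization.yaml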

Related

Launching containers in Kubernetes after converting the config from docker compose using kompose crashes with a "not a directory" error

Please help a Kubernetes novice.
I'm trying to convert the docker-compose file into Kubernetes configs with the kompose tool, following the official instructions: https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
The conversion of the config itself completes without problems, but when I try to start the containers with kubectl apply, they fail with the following error:
I understand the meaning of the error: the mount cannot be performed because the source is a file, not a directory. But I can't figure out why. As far as I know, a Kubernetes config lets you mount individual files. Does kompose produce a broken conversion, or what?
In the original docker-compose.yml file, the problem area looks like this:
In prometheus-deployment.yaml, after converting with kompose, the following is obtained:
My guess is that Kubernetes is trying to mount prometheus.yml as a persistentVolume and that this is the problem, but it is only a guess.
Can you please tell me what I'm doing wrong and what needs to be fixed?
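For comparison, mounting a single file is usually done with subPath rather than mounting it as its own volume root. The sketch below is only an illustration under that assumption, with made-up names (a ConfigMap built from prometheus.yml); it is not the manifest kompose generated:

apiVersion: v1
kind: Pod
metadata:
  name: prometheus-example             # hypothetical name
spec:
  containers:
    - name: prometheus
      image: prom/prometheus
      volumeMounts:
        - name: prometheus-config
          mountPath: /etc/prometheus/prometheus.yml
          subPath: prometheus.yml      # mount only this one file, not a directory
  volumes:
    - name: prometheus-config
      configMap:
        name: prometheus-config        # e.g. created with: kubectl create configmap prometheus-config --from-file=prometheus.yml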

unable to create a pv due to VOL_DIR: parameter not set

I'm running rke2 version v1.22.7+rke2r2 on 3 nodes. Today I decided to reinstall my application, and I'm no longer able to do it because of a problem claiming a PV.
I have never had this problem before; I think it is due to an update of local-path-provisioner, but I'm not sure, as I'm still a newbie with Kubernetes.
Anyway, these are the commands I ran before installing my solution:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
I omitted MetalLB. Then, as a test, I tried to install the example described on the local-path-provisioner site (https://github.com/rancher/local-path-provisioner):
kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pvc/pvc.yaml
kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml
What I see is that the PVC stays in a Pending status. I then check pod creation in the local-path-storage namespace and see that the helper-pod-create-pvc-xxxx pod goes into an error state.
I tried to get some logs, and the only thing I was able to grab is this:
kubectl -n local-path-storage logs helper-pod-create-pvc-dd8cecf3-d65b-48f7-9e04-d56a20573f8e -f
/script/setup: line 3: VOL_DIR: parameter not set
So it seems VOL_DIR is not set, for whatever reason. But I never did any custom configuration; it has always started without problems, and to be honest I don't know what to put in the VOL_DIR environment variable, or where.
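For reference, the pending claim and the failing helper pod can also be inspected like this (the PVC name is the one from the upstream example pvc.yaml, assuming it was not renamed; the pod name is the one from the logs above):

kubectl describe pvc local-path-pvc     # the events explain why the claim stays Pending
kubectl -n local-path-storage get pods
kubectl -n local-path-storage describe pod helper-pod-create-pvc-dd8cecf3-d65b-48f7-9e04-d56a20573f8e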
I'll just answer my own question. It seems to be a bug in local-path-provisioner,
and they are fixing it.
In the meantime, instead of using the latest manifest from master, which has the bug, please use v0.0.21, like this:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.21/deploy/local-path-storage.yaml
I tested it and it works fine.
The deploy manifest in the master branch has since been fixed.
The master branch is for development, so please use a v0.0.x tag (e.g. v0.0.21, a stable release) for production.
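To see which provisioner version a cluster is actually running, and therefore whether the buggy master manifest is installed, something like this can be used (assuming the default deployment name local-path-provisioner):

kubectl -n local-path-storage get deployment local-path-provisioner -o jsonpath='{.spec.template.spec.containers[0].image}'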

How to view the manifest file used to create a Kubernetes resource?

I have K8s deployed on an EC2-based cluster.
There is an application running in the deployment, and I am trying to figure out which manifest files were used to create the resources.
Deployment, service, and ingress files were used to create the app setup.
I tried the following command, but I'm not sure it's the correct one, as it also returns a lot of extra data like lastTransitionTime, lastUpdateTime and status:
kubectl get deployment -o yaml
What is the correct command to view the manifest yaml files of an existing deployed resource?
There is no specific way to do that. You should store your source files in source control like any other code. Think of it like decompiling: you can do it, but what you get back is not the same as what you put in. That said, check for the last-applied annotation; if you use kubectl apply, it will contain a JSON version of a more original-ish manifest, though again probably with some defaulted fields.
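If the resources were created with kubectl apply, that last-applied configuration can be printed directly, for example (the deployment name is a placeholder):

kubectl apply view-last-applied deployment <deployment-name> -o yaml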
You can try using the --export flag, but it is deprecated and may not work perfectly.
kubectl get deployment -o yaml --export
Refer: https://github.com/kubernetes/kubernetes/pull/73787
KUBE_EDITOR="cat" kubectl edit secrets rook-ceph-mon -o yaml -n rook-ceph 2>/dev/null >user.yaml
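The one-liner above works because KUBE_EDITOR="cat" makes kubectl edit print the live object instead of opening an editor; the same trick should work for other resource kinds, for example:

KUBE_EDITOR="cat" kubectl edit deployment <deployment-name> -o yaml 2>/dev/null > deployment.yaml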

Using kubectl to tear down from yaml

I created a .yaml file following this tutorial. You deploy the web service with kubectl apply -f shopfront-service.yaml. So far so good. The author, however, says nothing about how to tear everything down.
With Terraform or CloudFormation you use the same template to remove all the resources. I would think that K8s would also support cleaning up with the same .yaml file, but I can't find any way to do this.
Is there a way to delete resources with the same .yaml file used to create the deployment?
kubectl delete -f shopfront-service.yaml
See the kubectl delete docs.
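The same approach works for several files or a whole directory of manifests, for example (the directory name is a placeholder):

kubectl delete -f ./k8s/        # delete everything defined in the directory
kubectl delete -f ./k8s/ -R     # recurse into subdirectories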

How to fix calico.yaml for kubernetes cluster?

Trying several options to resolve the issue with weave-net (How to fix weave-net CrashLoopBackOff for the second node?), I have decided to try Calico instead of weave-net. The Kubernetes documentation says I need only one or the other. The command (per the documentation here: https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes/manifests/kubeadm) fails:
vagrant#vm-master:~$ sudo kubectl create -f https://github.com/projectcalico/calico-containers/blob/master/docs/cni/kubernetes/manifests/kubeadm/calico.yaml
yaml: line 6: mapping values are not allowed in this context
What am I doing wrong? Is it a known issue? How can I fix or work around it?
You need to reference the raw YAML file in your command, instead of the full GitHub HTML document:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-containers/master/docs/cni/kubernetes/manifests/kubeadm/calico.yaml
Simply replace the HTML URL with the raw data URL and it will work.
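A quick way to check what a URL actually serves is to fetch its first few lines: the github.com .../blob/... address returns an HTML page (presumably why the YAML parser complains), while the raw address returns the manifest itself:

curl -sL https://raw.githubusercontent.com/projectcalico/calico-containers/master/docs/cni/kubernetes/manifests/kubeadm/calico.yaml | head -n 5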