CustomResourceDefinition problem with VPA - kubernetes

I have the following problem with Kubernetes and VPA:
resource mapping not found for name: "verticalpodautoscalers.autoscaling.k8s.io" namespace: "" from "STDIN": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "verticalpodautoscalercheckpoints.autoscaling.k8s.io" namespace: "" from "STDIN": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
This happens when I try to run ./hack/vpa-up.sh from tag vertical-pod-autoscaler/v0.9.2. How can I fix this problem?
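A quick diagnostic sketch, assuming kubectl access to the cluster: list which apiextensions.k8s.io versions the API server still serves. On Kubernetes 1.22 and later only v1 is available, while the v0.9.2 VPA manifests still declare their CRDs under v1beta1.

# Show which CustomResourceDefinition API versions the cluster serves;
# if only apiextensions.k8s.io/v1 appears, v1beta1 CRD manifests cannot be applied.
kubectl api-versions | grep apiextensions
kubectl api-resources --api-group=apiextensions.k8s.io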

Related

error while installing controller and agent on k8s: "CRD must be installed"

resource mapping not found for name: "jenkins-master" namespace:
"jenkins" from "role.yaml": no matches for kind "RoleBinding" in
version "rbac.authorization.k8s.io/v1beta1" ensure CRDs are installed
first
i tried to install agnos but it didnt help.

helm upgrade mongodb fails with error "unable to build kubernetes objects"

In the process of project initialization I have a command:
helm upgrade mongodb mongodb/mongodb --install --set replicaSet.enabled=true
which fails with the error:
Release "mongodb" does not exist. Installing it now.
Error: unable to build kubernetes objects from release manifest: [resource mapping not found for name: "mongodb-arbiter" namespace: "" from "": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"
ensure CRDs are installed first, resource mapping not found for name: "mongodb-secondary" namespace: "" from "": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"
ensure CRDs are installed first]
Can you please suggest what to do?
The version of Kubernetes you are using is too new for your Helm chart.
In recent Kubernetes versions, PodDisruptionBudget lives in policy/v1, not in policy/v1beta1 where your chart is looking for it.
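For reference, a minimal sketch of a PodDisruptionBudget under the current API group; the name and label selector below are placeholders, not the actual mongodb chart values:

# PodDisruptionBudget declared under policy/v1 instead of policy/v1beta1.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: mongodb-arbiter
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: arbiter

In practice the simplest fix is usually to run helm repo update and install a newer chart version that already ships policy/v1 templates.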

getting error while installing kubernetes

ubuntu@kmaster:~$ kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
configmap/calico-config unchanged
service/calico-etcd unchanged
serviceaccount/calico-cni-plugin unchanged
serviceaccount/calico-kube-controllers unchanged
resource mapping not found for name: "calico-etcd" namespace: "kube-system" from "https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "calico-node" namespace: "kube-system" from "https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "calico-kube-controllers" namespace: "kube-system" from "https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "calico-cni-plugin" namespace: "" from "https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "calico-cni-plugin" namespace: "" from "https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "calico-kube-controllers" namespace: "" from "https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "calico-kube-controllers" namespace: "" from "https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1beta1"
ensure CRDs are installed first
I suspect you're trying to install outdated manifests. The API versions of the resources you're applying are no longer served by the Kubernetes version you have. See the Kubernetes deprecated API migration guide.
Use the latest Calico manifests, which declare these resources under their v1 API versions.
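A quick check, assuming kubectl access to the cluster; the groups queried below are the ones the failing manifest uses:

# On current clusters these v1beta1 groups are no longer served, so the old
# Calico manifest cannot be mapped; DaemonSet now lives in apps/v1.
kubectl api-versions | grep -E '^extensions|^rbac'
kubectl api-resources --api-group=apps | grep -i daemonset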

install dapr helm chart on a second namespace while already installed on another namespace in same cluster

I am trying to install a second dapr helm chart in namespace "test" while it is already installed in namespace "dev" in the same cluster.
helm upgrade -i --namespace $NAMESPACE \
dapr-uat dapr/dapr
An already-installed release exists with the following name:
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
dapr dev 1 2021-10-06 21:16:27.244997 +0100 +01 deployed dapr-1.4.2 1.4.2
I get the following error:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "dapr-operator-admin" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "dapr-uat": current value is "dapr"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "test": current value is "dev"
I tried specifying a different version for the installation, but with no success:
helm upgrade -i --namespace $NAMESPACE \
dapr-uat dapr/dapr \
--version 1.4.0
I am starting to think the current chart does not allow for multiple instances (development and testing) on the same cluster.
Has anyone faced the same issue?
Thank you,
The existing dapr chart applies cluster-wide resources whose names are generated without taking the namespace into account. So, when trying to install a second configuration, a name conflict occurs with the pre-existing cluster-wide resource:
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: ClusterRole "dapr-operator-admin" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "dapr-uat": current value is "dapr-dev"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "uat": current value is "dev"
I had to edit the chart:
git clone https://github.com/dapr/dapr.git
I edited the RBAC resources in the dapr_rbac subchart so that the resource names now include the namespace name, in dapr_rbac/templates/ClusterRoleBinding.yaml.
Previous file:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: dapr-operator
...
The edit changes the metadata name on all resources:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: dapr-operator-{{ .Release.Namespace }}
...
The same logic has been applied to the MutatingWebhookConfiguration in the dapr_sidecar_injector subchart, in the file dapr_sidecar_injector/templates/dapr_sidecar_injector_webhook_config.yaml.
For the full edits, please see the forked repo:
https://github.com/redaER7/dapr/tree/DEV/charts/dapr
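A usage sketch with the edited chart, assuming the clone sits in ./dapr and the chart lives under charts/dapr (release names and namespaces are illustrative):

# Each release now renders namespace-suffixed cluster-wide resources,
# so the two installs no longer collide on ClusterRole/ClusterRoleBinding names.
helm upgrade -i dapr ./dapr/charts/dapr --namespace dev
helm upgrade -i dapr-uat ./dapr/charts/dapr --namespace test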

no matches for kind "DaemonSet" in version "extensions/v1beta1"

I am trying to install flannel on the master node and getting the error below.
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
Conf file:
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16
apiServerExtraArgs:
  service-node-port-range: 8000-31274
You must be using Kubernetes version >= 1.16, where DaemonSet is no longer served from extensions/v1beta1.
Use the apps/v1 API group instead.
Try using https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml.
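For comparison, a minimal sketch of a DaemonSet declared under the current API group; the labels and image tag are placeholders, not the actual flannel manifest contents:

# apps/v1 requires an explicit spec.selector that matches the pod template labels.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        app: flannel
    spec:
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.12.0  # placeholder image tag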