Issues when setting up Kubeflow Pipelines on minikube

I have minikube running on macOS. When trying to set up Kubeflow Pipelines, I got the following output:
(base) ~/ml $ export PIPELINE_VERSION=1.7.0
(base) ~/ml $ kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=$PIPELINE_VERSION"
namespace/kubeflow created
customresourcedefinition.apiextensions.k8s.io/clusterworkflowtemplates.argoproj.io unchanged
customresourcedefinition.apiextensions.k8s.io/cronworkflows.argoproj.io unchanged
customresourcedefinition.apiextensions.k8s.io/workfloweventbindings.argoproj.io unchanged
customresourcedefinition.apiextensions.k8s.io/workflows.argoproj.io unchanged
customresourcedefinition.apiextensions.k8s.io/workflowtemplates.argoproj.io unchanged
serviceaccount/kubeflow-pipelines-cache-deployer-sa created
clusterrole.rbac.authorization.k8s.io/kubeflow-pipelines-cache-deployer-clusterrole unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubeflow-pipelines-cache-deployer-clusterrolebinding unchanged
unable to recognize "github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=1.7.0": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=1.7.0": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=1.7.0": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
(base) ~/ml $ kubectl wait --for condition=established --timeout=60s crd/applications.app.k8s.io
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "applications.app.k8s.io" not found
(base) ~/ml $
(base) ~/ml $ kubectl get crd -A
NAME CREATED AT
clusterworkflowtemplates.argoproj.io 2021-12-18T15:28:31Z
cronworkflows.argoproj.io 2021-12-18T15:28:31Z
workfloweventbindings.argoproj.io 2021-12-18T15:28:31Z
workflows.argoproj.io 2021-12-18T15:28:31Z
workflowtemplates.argoproj.io 2021-12-18T15:28:31Z
In particular, what does this mean:
unable to recognize "github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=1.7.0": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
Is it the root cause of the following error:
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "applications.app.k8s.io" not found
(base) ~/ml $ minikube version
minikube version: v1.24.0
commit: 76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b
(base) ~/ml $ kubectl api-resources --api-group=apiextensions.k8s.io -o wide
NAME SHORTNAMES APIVERSION NAMESPACED KIND VERBS
customresourcedefinitions crd,crds apiextensions.k8s.io/v1 false CustomResourceDefinition [create delete deletecollection get list patch update watch]

It turns out that Kubeflow Pipelines 1.7.0 does not work with Kubernetes 1.22 or higher. I used Kubernetes 1.21.8 with minikube and had no problem installing Kubeflow Pipelines 1.7.0.
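For reference, a minikube cluster pinned to that Kubernetes version can be created roughly like this (a sketch; use whichever 1.21.x patch you prefer):
$ minikube delete                               # drop the existing cluster if it runs a newer Kubernetes
$ minikube start --kubernetes-version=v1.21.8
$ kubectl version --short                       # the server version should now report v1.21.8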
Yes, this is correct behaviour.
You have mentioned:
In particular, what does this mean:
unable to recognize "github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=1.7.0": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
This is directly connected to the Kubernetes 1.22 API changes:
The v1.22 release will stop serving the API versions we've listed immediately below. These are all beta APIs that were previously deprecated in favor of newer and more stable API versions.
Beta versions of the ValidatingWebhookConfiguration and MutatingWebhookConfiguration API (the admissionregistration.k8s.io/v1beta1 API versions)
The beta CustomResourceDefinition API (apiextensions.k8s.io/v1beta1)
The beta APIService API (apiregistration.k8s.io/v1beta1)
The beta TokenReview API (authentication.k8s.io/v1beta1)
Beta API versions of SubjectAccessReview, LocalSubjectAccessReview, SelfSubjectAccessReview (API versions from authorization.k8s.io/v1beta1)
The beta CertificateSigningRequest API (certificates.k8s.io/v1beta1)
The beta Lease API (coordination.k8s.io/v1beta1)
All beta Ingress APIs (the extensions/v1beta1 and networking.k8s.io/v1beta1 API versions)
As of version 1.22 it is not possible to use apiextensions.k8s.io/v1beta1 (this API is no longer served), so if you want to install a pipeline that uses this API version, you can only use Kubernetes 1.21 or older.
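A quick way to confirm which CustomResourceDefinition API versions your cluster actually serves (a sketch; the exact output depends on the cluster version):
$ kubectl api-versions | grep apiextensions.k8s.io
apiextensions.k8s.io/v1
# On 1.21 and older clusters, apiextensions.k8s.io/v1beta1 is also listed.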

Related

Helm Grafana/Loki chart installation error. Rendered manifests contain a resource that already exists

I need to install Grafana Loki with Prometheus in my Kubernetes cluster, so I followed the steps below, which basically use Helm. This is the command I executed to install it:
helm upgrade --install loki grafana/loki-stack --set grafana.enabled=true,prometheus.enabled=true,prometheus.alertmanager.persistentVolume.enabled=false,prometheus.server.persistentVolume.enabled=false,loki.persistence.enabled=true,loki.persistence.storageClassName=standard,loki.persistence.size=5Gi -n monitoring --create-namespace
I followed the official Grafana website in this case.
But when I execute the above helm command, I get the error below. (I'm new to Helm, by the way.)
Release "loki" does not exist. Installing it now.
W0307 16:54:55.764184 1474330 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Error: rendered manifests contain a resource that already exists. Unable to continue with install: PodSecurityPolicy "loki-grafana" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "loki": current value is "loki-grafana"
I don't see any Grafana chart installed.
helm list -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
cert-manager cert-manager 1 2021-11-26 13:07:26.103036078 +0000 UTC deployed cert-manager-v0.16.1 v0.16.1
ingress-nginx ingress-basic 1 2021-11-18 12:23:28.476712359 +0000 UTC deployed ingress-nginx-4.0.8 1.0.5
Well, I was able to resolve my issue. The problem was the PodSecurityPolicy: I deleted the existing Grafana PodSecurityPolicy and it worked.
To get all releases in all namespaces, use the --all-namespaces flag with helm ls.
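For example, on Helm 3:
$ helm ls --all-namespaces
$ helm ls -A        # short form of the same flag in recent Helm 3 releases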
The problem is here:
rendered manifests contain a resource that already exists. Unable to continue with install: PodSecurityPolicy "loki-grafana" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "loki": current value is "loki-grafana"
Deleting the PodSecurityPolicy could be a solution, but a better option is to change the value of the annotation meta.helm.sh/release-name from loki-grafana to loki so that Helm can adopt the existing resource.
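A sketch of both options with kubectl, using the resource and release names from the error message (the release-namespace annotation below assumes you install into monitoring, as in the question; Helm also expects the managed-by label when adopting an existing resource):
# Option 1: delete the leftover PSP and let Helm recreate it
$ kubectl delete podsecuritypolicy loki-grafana

# Option 2: change the ownership metadata so Helm adopts the existing PSP into the "loki" release
$ kubectl annotate podsecuritypolicy loki-grafana meta.helm.sh/release-name=loki --overwrite
$ kubectl annotate podsecuritypolicy loki-grafana meta.helm.sh/release-namespace=monitoring --overwrite
$ kubectl label podsecuritypolicy loki-grafana app.kubernetes.io/managed-by=Helm --overwrite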
Additionally, I can see you are using a deprecated API:
policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
To solve it, look at this documentation:
The policy/v1beta1 API version of PodDisruptionBudget will no longer be served in v1.25.
Migrate manifests and API clients to use the policy/v1 API version, available since v1.21.
All existing persisted objects are accessible via the new API
Notable changes in policy/v1:
- an empty spec.selector ({}) written to a policy/v1 PodDisruptionBudget selects all pods in the namespace (in policy/v1beta1 an empty spec.selector selected no pods). An unset spec.selector selects no pods in either API version.
PodSecurityPolicy
PodSecurityPolicy in the policy/v1beta1 API version will no longer be served in v1.25, and the PodSecurityPolicy admission controller will be removed.
PodSecurityPolicy replacements are still under discussion, but current use can be migrated to 3rd-party admission webhooks now.
See also this documentation for more information about Dynamic Admission Control.
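To make the policy/v1 migration above concrete, a minimal, illustrative PodDisruptionBudget on the new API version looks like this (name and labels are placeholders; note the selector semantics described in the quote):
apiVersion: policy/v1          # was: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: example-pdb            # placeholder
spec:
  minAvailable: 1
  selector:                    # {} here would select all pods in the namespace under policy/v1
    matchLabels:
      app: example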

I am getting this error while installing Prometheus Operator with Helm

This chart is deprecated
Error: INSTALLATION FAILED: failed to install CRD crds/crd-alertmanager.yaml: unable to recognize "": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
helm install prometheus monitor/prometheus-operator --namespace prometheus
The chart prometheus-operator is deprecated!
Deprecation message:
DEPRECATED
This chart will be renamed, but first must be deprecated before the prometheus-community/helm-charts repo is indexed, so that it won't be listed in the hubs. See [this prometheus-community issue](https://github.com/prometheus-community/community/issues/28#issuecomment-670406329) for reasoning and next steps.
Try the latest one:
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --namespace prometheus
N.B.: The apiVersion for custom resource definitions (CRD) is apiextensions.k8s.io/v1 now.
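After installing the newer chart, a quick sanity check that its CRDs and the release came up (commands only; output omitted):
$ kubectl get crd | grep monitoring.coreos.com
$ helm list -n prometheus
$ kubectl get pods -n prometheus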

Cannot install kube-prometheus-stack in k8s 1.15

Running Kubernetes 1.15 in Azure.
I need a basic alert (e-mail/slack notification) when one or more of my applications/pods are down in kubernetes.
As an example I have https://cert-manager.io/docs/ running in multiple clusters (hosted in azure) and I would like to get an alert (e-mail/slack notification) if it stops running.
Based on this post:
How do I set up a hook to send an email on Kubernetes pod restart?
it seems that to get an e-mail alert I need to install Prometheus + Grafana, access the web UI and configure alerts. So, based on:
https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack
I have tried:
helm version
version.BuildInfo{Version:"v3.1.2", GitCommit:"d878d4d45863e42fd5cff6743294a11d28a9abce", GitTreeState:"clean", GoVersion:"go1.13.8"}
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm repo update
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --namespace monitoring
But that gives:
Error: failed to install CRD crds/crd-alertmanager.yaml: unable to recognize "": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1"
There is a guide here on how to create the CRDs manually:
https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#helm-fails-to-create-crds
but that should only be necessary when running Helm 2.x, which I am not; I am running 3.1.2.
Also if I try to install them manually I get:
$ kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.42/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml
error: unable to recognize "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.42/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1"
$ kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.42/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
error: unable to recognize "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.42/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1"
$ kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.42/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
error: unable to recognize "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.42/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1"
...
Also I found this kube-prometheus stack compatibility matrix:
https://github.com/prometheus-operator/kube-prometheus#compatibility
but the versions in that matrix do not match the ones I get:
$ helm search repo prometheus-community/kube-prometheus-stack --versions
NAME CHART VERSION APP VERSION DESCRIPTION
prometheus-community/kube-prometheus-stack 10.1.2 0.42.1 kube-prometheus-stack collects Kubernetes manif...
prometheus-community/kube-prometheus-stack 10.1.1 0.42.1 kube-prometheus-stack collects Kubernetes manif...
prometheus-community/kube-prometheus-stack 10.1.0 0.42.1 kube-prometheus-stack collects Kubernetes manif...
prometheus-community/kube-prometheus-stack 10.0.2 0.42.1 kube-prometheus-stack collects Kubernetes manif...
prometheus-community/kube-prometheus-stack 10.0.1 0.42.1 kube-prometheus-stack collects Kubernetes manif...
So it seems there might be a third way to install Prometheus.
Any input appreciated.
UPDATE:
Randomly selecting the previous major version (9.4.10) seems to work:
$ helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --namespace monitoring --version 9.4.10
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
NAME: kube-prometheus-stack
LAST DEPLOYED: Fri Oct 23 15:15:03 2020
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
kubectl --namespace monitoring get pods -l "release=kube-prometheus-stack"
Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
I guess trial and error is the way to go when installing things on older k8s versions; compatibility matrices would be great to have, though.
Based on the kube-prometheus-stack repo, this Helm chart is restricted to K8s versions 1.16.0 or above;
kubeVersion: ">=1.16.0-0"
Even though the GitHub README lists the prerequisite as Kubernetes 1.10+ with Beta APIs, internally the Helm chart requires the kube version to be 1.16.0 or above.
So I believe you will need to try this on an upgraded K8s cluster.
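You can check that constraint yourself for any chart version before installing; helm show chart prints the chart metadata, for example:
$ helm show chart prometheus-community/kube-prometheus-stack --version 10.1.2 | grep kubeVersion
kubeVersion: ">=1.16.0-0"    # matches the constraint quoted above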
If upgrading the cluster is not an option, maybe you could try the deprecated old version of this.
https://github.com/helm/charts/tree/master/stable/prometheus

Unable to install istio on Kops instance

I am trying to install Istio on a Kubernetes cluster. I created a three-node cluster and installed istioctl version 1.1.0. The Istio installation comes with an istio-demo.yaml file located at install/kubernetes/istio-demo.yaml. When I ran the kubectl apply -f install/kubernetes/istio-demo.yaml command, I got the output below.
Then I listed the services using kubectl get svc -n istio-system and I can see the services.
But when I list the pods using kubectl get pod -n istio-system I cannot see the pods. Where am I going wrong?
rule.config.istio.io/tcpkubeattrgenrulerule created
kubernetes.config.istio.io/attributes created
destinationrule.networking.istio.io/istio-policy created
destinationrule.networking.istio.io/istio-telemetry created
unable to recognize "install/kubernetes/istio-demo.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"
unable to recognize "install/kubernetes/istio-demo.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"
unable to recognize "install/kubernetes/istio-demo.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"
unable to recognize "install/kubernetes/istio-demo.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"
unable to recognize "install/kubernetes/istio-demo.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"
unable to recognize "install/kubernetes/istio-demo.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"
unable to recognize "install/kubernetes/istio-demo.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"
istio-1.1.0]$ kubectl get namespaces
NAME STATUS AGE
default Active 11m
istio-system Active 100s
kube-node-lease Active 11m
kube-public Active 11m
kube-system Active 11m
kubectl get pod -n istio-system
NAME READY STATUS RESTARTS AGE
istio-cleanup-secrets-1.1.0-fbr87 0/1 Completed 0 3m27s
istio-grafana-post-install-1.1.0-kwz58 0/1 Completed 0 3m27s
istio-security-post-install-1.1.0-mc9wk 0/1 Completed 0 3m27s
P.S.: Update on the question:
1.
$ kubectl api-resources | grep deployment
deployments   deploy   apps   true   Deployment
2.
Client Version: version.Info{Major:"1", Minor:"17"
You can check which API group serves a given Kubernetes object using:
$ kubectl api-resources | grep deployment
deployments deploy apps true Deployment
So you are trying to use the deprecated apiVersion extensions/v1beta1. This API version for Deployment was removed in Kubernetes 1.16, so you seem to have a Kubernetes cluster at version 1.16 or above.
Two solutions:
In the istio-demo.yaml, wherever you have a Deployment, change the apiVersion from extensions/v1beta1 to apps/v1 (as sketched below).
Istio 1.1 is pretty old, so the suggestion is to upgrade to the latest version, which should fix the issue.
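For reference, this is roughly what option 1 means for each Deployment in the file (all names here are placeholders); note that apps/v1 also requires spec.selector, which the old API version defaulted for you:
apiVersion: apps/v1            # was: extensions/v1beta1
kind: Deployment
metadata:
  name: example                # placeholder
spec:
  replicas: 1
  selector:                    # required in apps/v1
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example           # must match spec.selector.matchLabels
    spec:
      containers:
      - name: example
        image: nginx           # placeholder image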
Also, verify that the kubectl client version and the Kubernetes server version match by running kubectl version.

Why does helm upgrade --install fail when the previous install failed?

This is the helm and tiller version:
> helm version --tiller-namespace data-devops
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
The previous helm installation failed:
helm ls --tiller-namespace data-devops
NAME REVISION UPDATED STATUS CHART NAMESPACE
java-maven-app 1 Thu Aug 9 13:51:44 2018 FAILED java-maven-app-1.0.0 data-devops
When I tried to install it again using this command, it failed:
helm --tiller-namespace data-devops upgrade java-maven-app helm-chart --install \
--namespace data-devops \
--values helm-chart/values/stg-stable.yaml
Error: UPGRADE FAILED: "java-maven-app" has no deployed releases
Is the helm upgrade --install command going to fail if the previous installation failed? I expected it to force the install. Any idea?
This is, or has been, a Helm issue for a while. It only affects the situation where the first install of a chart fails; up to Helm 2.7 this required a manual delete of the failed release before correcting the issue and installing again. However, there is now a --force flag available to address this case: https://github.com/helm/helm/issues/4004
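A sketch of the forced upgrade based on the command from the question; whether --force recovers from a failed first install depends on your exact Helm 2.x version (see the linked issue), and note that --force may delete and recreate resources:
$ helm --tiller-namespace data-devops upgrade java-maven-app helm-chart --install --force \
    --namespace data-devops \
    --values helm-chart/values/stg-stable.yaml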
This happens when a deployment fails unexpectedly.
First, check the status of the helm release deployment;
❯ helm ls -n $namespace
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
Most probably you will see nothing about the problematic helm deployment. So, check the status of the deployment with the -a option;
❯ helm list -n $namespace -a
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
$release_name $namespace 7 $update_date pending-upgrade $chart_name $app_version
As you can see the deployment stuck with the pending-upgrade status.
Check the helm deployment secrets;
❯ kubectl get secret -n $namespace
NAME TYPE DATA AGE
sh.helm.release.v1.$release_name.v1 helm.sh/release.v1 1 2d21h
sh.helm.release.v1.$release_name.v2 helm.sh/release.v1 1 21h
sh.helm.release.v1.$release_name.v3 helm.sh/release.v1 1 20h
sh.helm.release.v1.$release_name.v4 helm.sh/release.v1 1 19h
sh.helm.release.v1.$release_name.v5 helm.sh/release.v1 1 18h
sh.helm.release.v1.$release_name.v6 helm.sh/release.v1 1 17h
sh.helm.release.v1.$release_name.v7 helm.sh/release.v1 1 16h
and describe the last one;
❯ kubectl describe secret sh.helm.release.v1.$release_name.v7
Name: sh.helm.release.v1.$release_name.v7
Namespace: $namespace
Labels: modifiedAt=1611503377
name=$release_name
owner=helm
status=pending-upgrade
version=7
Annotations: <none>
Type: helm.sh/release.v1
Data
====
release: 792744 bytes
You will see that the secret has the same status as the failed deployment. So delete the secret;
❯ kubectl delete secret sh.helm.release.v1.$release_name.v7
Now, you should be able to upgrade the helm release. You can check the status of the helm release after the upgrade;
❯ helm list -n $namespace -a
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
$release_name $namespace 7 $update_date deployed $chart_name $app_version
Try:
helm delete --purge <deployment>
This will do the trick
For Helm 3 onwards you need to uninstall instead, e.g.:
helm uninstall <deployment> -n <namespace>
Just to add...
I have often seen the Error: UPGRADE FAILED: "my-app" has no deployed releases error in Helm 3.
Almost every time, the error was in either kubectl, aws-cli or aws-iam-authenticator, not Helm. A lot of problems seem to bubble up to this exception, which is not ideal.
To diagnose the true issue, you can run simple commands in one or more of these tools (if you are using them), and you should be able to quickly pinpoint your problem.
For example:
aws-cli - aws --version to ensure you have the cli installed.
aws-iam-authenticator - aws-iam-authenticator version to check that this is correctly installed.
kubectl - kubectl version will show if the tool is installed.
kubectl - kubectl config current-context will show if you have provided a valid config that can connect to Kubernetes.