Running Kubernetes 1.15 in Azure.
I need a basic alert (e-mail/Slack notification) when one or more of my applications/pods are down in Kubernetes.
As an example, I have cert-manager (https://cert-manager.io/docs/) running in multiple clusters (hosted in Azure), and I would like to get an alert (e-mail/Slack notification) if it stops running.
Based on this post:
How do I set up a hook to send an email on Kubernetes pod restart?
it seems that to get an e-mail alert I need to install Prometheus + Grafana, access the web UI, and configure alerts. So based on:
https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack
I have tried:
helm version
version.BuildInfo{Version:"v3.1.2", GitCommit:"d878d4d45863e42fd5cff6743294a11d28a9abce", GitTreeState:"clean", GoVersion:"go1.13.8"}
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm repo update
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --namespace monitoring
But that gives:
Error: failed to install CRD crds/crd-alertmanager.yaml: unable to recognize "": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1"
Here there is some guide on how to create the crds manually:
https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#helm-fails-to-create-crds
but that should only be necessary when running Helm 2.x, which I am not; I am running 3.1.2.
Also, if I try to install them manually, I get:
$ kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.42/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml
error: unable to recognize "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.42/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1"
$ kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.42/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
error: unable to recognize "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.42/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1"
$ kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.42/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
error: unable to recognize "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.42/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1"
...
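Presumably the cluster does not serve that API version at all; apiextensions.k8s.io/v1 was only introduced in Kubernetes 1.16. On a 1.15 cluster, this check should list only the beta version:
$ kubectl api-versions | grep apiextensions
apiextensions.k8s.io/v1beta1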
Also, I found this kube-prometheus compatibility matrix:
https://github.com/prometheus-operator/kube-prometheus#compatibility
but the versions in that matrix do not match the ones I get:
$ helm search repo prometheus-community/kube-prometheus-stack --versions
NAME                                        CHART VERSION  APP VERSION  DESCRIPTION
prometheus-community/kube-prometheus-stack  10.1.2         0.42.1       kube-prometheus-stack collects Kubernetes manif...
prometheus-community/kube-prometheus-stack  10.1.1         0.42.1       kube-prometheus-stack collects Kubernetes manif...
prometheus-community/kube-prometheus-stack  10.1.0         0.42.1       kube-prometheus-stack collects Kubernetes manif...
prometheus-community/kube-prometheus-stack  10.0.2         0.42.1       kube-prometheus-stack collects Kubernetes manif...
prometheus-community/kube-prometheus-stack  10.0.1         0.42.1       kube-prometheus-stack collects Kubernetes manif...
So it seems there might be a third way to install Prometheus.
Any input appreciated.
UPDATE:
Randomly selecting the previous major version (9.4.10) seems to work:
$ helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --namespace monitoring --version 9.4.10
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
NAME: kube-prometheus-stack
LAST DEPLOYED: Fri Oct 23 15:15:03 2020
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
kubectl --namespace monitoring get pods -l "release=kube-prometheus-stack"
Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
I guess trial and error is the way to go when installing charts on older k8s versions; compatibility matrices would be great, though.
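For anyone landing here with the same goal: once the stack is running, the built-in KubePodNotReady/KubePodCrashLooping rules already fire when pods go down, so a minimal sketch is just routing Alertmanager to Slack through the chart's alertmanager.config value (untested on this exact chart version; the webhook URL and channel below are placeholders):
# values-alerting.yaml
alertmanager:
  config:
    route:
      receiver: 'slack-notifications'
    receivers:
      - name: 'slack-notifications'
        slack_configs:
          - api_url: 'https://hooks.slack.com/services/XXX/YYY/ZZZ'  # placeholder webhook
            channel: '#alerts'
            send_resolved: true
$ helm upgrade kube-prometheus-stack prometheus-community/kube-prometheus-stack --namespace monitoring --version 9.4.10 -f values-alerting.yaml
E-mail works the same way, with email_configs (and SMTP settings) in place of slack_configs.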
Based on the kube-prometheus-stack repo, this Helm chart is restricted to K8s versions 1.16.0 or above:
kubeVersion: ">=1.16.0-0"
Even though the GitHub README lists the prerequisite as Kubernetes 1.10+ with Beta APIs, internally the Helm chart checks that the kube version is 1.16.0 or above.
So I believe you will need to try this on an upgraded K8s cluster.
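You can verify the constraint without cloning the repo (Helm 3):
$ helm show chart prometheus-community/kube-prometheus-stack | grep kubeVersion
kubeVersion: ">=1.16.0-0"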
If upgrading the cluster is not an option, maybe you could try the old, deprecated version of this chart:
https://github.com/helm/charts/tree/master/stable/prometheus
This chart is deprecated
When running:
$ helm install prometheus monitor/prometheus-operator --namespace prometheus
I get:
Error: INSTALLATION FAILED: failed to install CRD crds/crd-alertmanager.yaml: unable to recognize "": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
The chart prometheus-operator is deprecated!
Deprecation message:
DEPRECATED
This chart will be renamed, but first must be deprecated before the prometheus-community/helm-charts repo is indexed, so that it won't be listed in the hubs. See [this prometheus-community issue](https://github.com/prometheus-community/community/issues/28#issuecomment-670406329) for reasoning and next steps.
Try the latest one:
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --namespace prometheus
N.B.: The apiVersion for custom resource definitions (CRDs) is apiextensions.k8s.io/v1 now.
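Concretely, the relevant difference for this error is the API group version in the CRD header: v1 was introduced in Kubernetes 1.16, and v1beta1 was removed in 1.22.
# old header, served only up to Kubernetes 1.21
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
# new header, requires Kubernetes 1.16+
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition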
I have downloaded the latest stable release of Istio, i.e. 1.11.4, and am executing the command below from the root of the Istio release folder:
helm install istio install/kubernetes/helm/istio --namespace istio-system --set grafana.enabled=True --set kiali.enabled=True
When I do, I get the error:
Error: INSTALLATION FAILED: failed to download "install/kubernetes/helm/istio"
My helm version: version.BuildInfo{Version:"v3.7.1"
How can I resolve this error?
You are using an old command to install Istio.
Check out the latest installation docs: https://istio.io/latest/docs/setup/install/helm/#installation-steps
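At the time of writing (Istio 1.11), the documented Helm flow looks roughly like this (repo URL and chart names are taken from the linked docs; double-check there, as they may change between releases):
$ helm repo add istio https://istio-release.storage.googleapis.com/charts
$ helm repo update
$ kubectl create namespace istio-system
$ helm install istio-base istio/base -n istio-system
$ helm install istiod istio/istiod -n istio-system --wait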
Additionally, the addons (Grafana, Kiali, Prometheus) are no longer part of Istio and need to be installed separately, as shown here:
Prometheus: https://istio.io/latest/docs/ops/integrations/prometheus/
Grafana: https://istio.io/latest/docs/ops/integrations/grafana/
Kiali: https://istio.io/latest/docs/ops/integrations/kiali/
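For example, with a 1.11 release the addons can be applied straight from the samples directory of the matching release branch (paths assumed from the Istio repo layout):
$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.11/samples/addons/prometheus.yaml
$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.11/samples/addons/grafana.yaml
$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.11/samples/addons/kiali.yaml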
The IBM MQ Helm chart installation fails to create the Pod, which shows a "CrashLoopBackOff" error.
Pod error message:
Error setting admin password: /usr/bin/sudo: exit status 1: sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?
Infrastructure: Google Cloud Platform
Kubectl version:
Client Version: v1.18.6
Server Version: v1.16.13-gke.1.
Helm chart repo:
helm repo add ibm-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable/
Helm command:
$ helm install mq . \
    --set license=accept \
    --set service.type=LoadBalancer \
    --set queueManager.dev.secret.name=mysecret \
    --set queueManager.dev.secret.adminPasswordKey=adminPassword \
    --set security.initVolumeAsRoot=true
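For context, the secret referenced above would have been created with something along these lines (the secret name and key must match the --set values):
$ kubectl create secret generic mysecret --from-literal=adminPassword='<your-password>'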
I'm using Minikube to tinker with Helm.
I understand Helm installs tiller in the kube-system namespace by default:
The easiest way to install tiller into the cluster is simply to run
helm init...
Once it connects, it will install tiller into the kube-system
namespace.
But instead it's trying to install tiller in a namespace named after me:
$ ~/bin/minikube start
* minikube v1.4.0 on Ubuntu 18.04
* Tip: Use 'minikube start -p ' to create a new cluster, or 'minikube delete' to delete this one.
* Starting existing virtualbox VM for "minikube" ...
* Waiting for the host to be provisioned ...
* Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
* Relaunching Kubernetes using kubeadm ...
* Waiting for: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"
$ helm init
$HELM_HOME has been configured at /home/mcrenshaw/.helm.
Error: error installing: namespaces "mcrenshaw" not found
$
I can specify the tiller namespace, but then I have to specify it in every subsequent use of helm.
$ helm init --tiller-namespace=kube-system
$HELM_HOME has been configured at /home/mcrenshaw/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
$ helm upgrade --install some-thing .
Error: could not find tiller
$ helm upgrade --install some-thing . --tiller-namespace=kube-system
Release "some-thing" does not exist. Installing it now.
I suppose specifying the namespace in each command is fine. But it feels incorrect. Have I done something to corrupt my Helm config?
Update:
Per Eduardo's request, here's my helm version:
$ helm version --tiller-namespace=kube-system
Client: &version.Version{SemVer:"v2.15.0", GitCommit:"c2440264ca6c078a06e088a838b0476d2fc14750", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.15.0", GitCommit:"c2440264ca6c078a06e088a838b0476d2fc14750", GitTreeState:"clean"}
There are two ways of setting the Tiller default namespace:
Using the --tiller-namespace flag (as you are already doing).
By setting the $TILLER_NAMESPACE environment variable.
The flag takes precedence over the environment variable. You probably have this environment variable set (you can check with printenv TILLER_NAMESPACE). If so, unset it and subsequent helm commands should point to the kube-system namespace.
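A minimal check, assuming the variable is indeed set:
$ printenv TILLER_NAMESPACE   # if this prints a namespace, that is where helm defaults to
$ unset TILLER_NAMESPACE
$ helm init                   # should now install Tiller into kube-system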
I'm using the instructions from Zero to JupyterHub with Kubernetes to install JupyterHub in Minikube.
When I run the command in step 2, shown below:
RELEASE=jhub
NAMESPACE=jhub
~/minik$ helm upgrade --install $RELEASE jupyterhub/jupyterhub --namespace $NAMESPACE --version 0.7.0 --values config.yaml --debug --dry-run
I get this error:
[debug] Created tunnel using local port: '42995'
[debug] SERVER: "127.0.0.1:42995"
[debug] Fetched jupyterhub/jupyterhub to /home1/chrisj/.helm/cache/archive/jupyterhub-0.7.0.tgz
Release "jhub" does not exist. Installing it now.
[debug] CHART PATH: /home1/chrisj/.helm/cache/archive/jupyterhub-0.7.0.tgz
Error: render error in "jupyterhub/templates/proxy/autohttps/service.yaml": template: jupyterhub/templates/proxy/autohttps/service.yaml:1:26: executing "jupyterhub/templates/proxy/autohttps/service.yaml" at <.Values.proxy.https....>: can't evaluate field https in type interface {}
I deployed JupyterHub on Minikube correctly using the provided tutorial, then deleted it using helm delete and tried to deploy it again with helm upgrade --install. I got a similar error to the one you posted. For me, running:
$ helm delete --purge jhub
solved the problem.
PS: If this does not help, please provide some more details, like the output of helm version and kubectl get pods --all-namespaces.
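For reference, a minimal config.yaml for the 0.7.0 chart is roughly just the required proxy token (placeholder shown; generate a real one with openssl rand -hex 32, and only add a proxy.https block if you actually configure HTTPS):
proxy:
  secretToken: "<32-byte hex string>"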