I am not able to install KEDA with Helm on AKS. I am getting the error below.
Any help is greatly appreciated.
Error: unable to convert to CRD type: unable to convert unstructured object to apiextensions.k8s.io/v1beta1, Kind=CustomResourceDefinition: cannot convert int64 to float64
I reproduced your problem, and this is the solution.
You need to use
helm fetch kedacore/keda-edge --devel
to download the KEDA chart files to your PC.
Unpack it:
tar -xvzf keda-edge-xxx.tgz
Then you need to change the hook in scaledobject-crd.yaml:
nano keda-edge/templates/scaledobject-crd.yaml
"helm.sh/hook": crd-install need to be changed to "helm.sh/hook": pre-install
Then install it with Helm:
helm install ./keda-edge --name keda
NAME: keda
LAST DEPLOYED: Mon Sep 30 12:13:14 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ClusterRoleBinding
NAME AGE
hpa-controller-custom-metrics 1s
keda-keda-edge 1s
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
keda-keda-edge 0/1 1 0 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
keda-keda-edge-6b55bf7674-j5kgc 0/1 ContainerCreating 0 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
keda-keda-edge ClusterIP 10.110.59.143 <none> 443/TCP,80/TCP 1s
==> v1/ServiceAccount
NAME SECRETS AGE
keda-serviceaccount 1 1s
==> v1beta1/APIService
NAME AGE
v1beta1.external.metrics.k8s.io 0s
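To sanity-check the release afterwards, you can query Helm and watch the operator deployment become available (a sketch reusing the names from the output above):
helm status keda
kubectl get deployment keda-keda-edge --watch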
I tried installing Ansible AWX. However, AWX also installs PostgreSQL on the system (I am using kubernetes for AWX btw). I understand that PostgreSQL is one of the requirements for AWX.
Now, for another project, I have to install PostgreSQL (on Kubernetes itself). I looked up a method online and it is working. However, is there some way I can do it automatically, just like the installation of AWX?
Thanks,
Suhas
This can be achieved by using the awx-operator. Below is a demo installation via Helm. By default, AWX and the PG DB are located on the same worker node, but this requires a default StorageClass (SC).
Helm Deployment
Configuring Helm sources for awx-operator
┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$helm repo add awx-operator https://ansible.github.io/awx-operator/
"awx-operator" has been added to your repositories
┌──[root@vms81.liruilongs.github.io]──[~/AWK]
└─$helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "liruilong_repo" chart repository
...Successfully got an update from the "elastic" chart repository
...Successfully got an update from the "prometheus-community" chart repository
...Successfully got an update from the "azure" chart repository
...Unable to get an update from the "ali" chart repository (https://apphub.aliyuncs.com):
failed to fetch https://apphub.aliyuncs.com/index.yaml : 504 Gateway Time-out
...Successfully got an update from the "awx-operator" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Search the repo for the awx-operator chart:
┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$helm search repo awx-operator
NAME CHART VERSION APP VERSION DESCRIPTION
awx-operator/awx-operator 0.30.0 0.30.0 A Helm chart for the AWX Operator
To install with custom parameters: helm install my-awx-operator awx-operator/awx-operator -n awx --create-namespace -f myvalues.yaml.
If you use a custom installation, you need to enable the corresponding switches in myvalues.yaml; you can configure HTTPS, a standalone PG database, an LB, LDAP authentication, etc. A template for this file can be found in the pulled chart package; use the values.yaml inside it as the starting point.
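One way to get that template is to pull the chart locally and copy its values.yaml (a sketch; the untarred directory is named after the chart):
# download and unpack the chart, then copy its default values
helm pull awx-operator/awx-operator --untar
cp awx-operator/values.yaml myvalues.yaml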
We install with the default configuration here, so there is no need to specify a values file.
┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$helm install -n awx --create-namespace my-awx-operator awx-operator/awx-operator
NAME: my-awx-operator
LAST DEPLOYED: Mon Oct 10 16:29:24 2022
NAMESPACE: awx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
AWX Operator installed with Helm Chart version 0.30.0
┌──[root@vms81.liruilongs.github.io]──[~/AWK]
└─$
Then look at the Pod status:
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
awx-demo-postgres-13-0 0/1 Pending 0 105s
awx-operator-controller-manager-79ff9599d8-2v5fn 2/2 Running 0 128m
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
awx-demo-postgres-13 ClusterIP None <none> 5432/TCP 5m48s
awx-operator-controller-manager-metrics-service ClusterIP 10.107.17.167 <none> 8443/TCP 132m
The corresponding PG pod, awx-demo-postgres-13-0, is now Pending; look at the events:
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl describe pods awx-demo-postgres-13-0 | grep -i -A 10 event
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 23s (x8 over 7m31s) default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
postgres-13-awx-demo-postgres-13-0 Pending 10m
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl describe pvc postgres-13-awx-demo-postgres-13-0 | grep -i -A 10 event
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 82s (x42 over 11m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get sc
No resources found
OK, the reason for the Pending state is that there is no default SC.
For stateful applications, we need a default SC (dynamic volume provisioning) in place before the StatefulSet is created; it dynamically handles the creation of PVs for the PVCs and provides the data storage for PG, so we need to create an SC here.
Here, for convenience, we use local storage as the backend. In general, a PV should be network storage that does not belong to any particular node, so NFS is the more common approach. The SC designates its allocator through the provisioner field; once the StorageClass is created, any PVC that does not set a class uses the default SC's storage.
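One common way to create such an SC is the Rancher local-path provisioner used below (a sketch; the manifest URL may move between releases):
# install the local-path provisioner and its local-path StorageClass
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml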
To confirm it was created successfully:
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-path rancher.io/local-path Delete WaitForFirstConsumer false 2m6s
Set it as the default SC:
https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/change-default-storage-class/
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class": "true"}}}'
storageclass.storage.k8s.io/local-path patched
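kubectl marks the annotated class with a "(default)" suffix, so a quick check looks like this (a sketch of the expected output):
kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 5m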
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
awx-demo-postgres-13-0 0/1 Pending 0 46m
awx-operator-controller-manager-79ff9599d8-2v5fn 2/2 Running 0 173m
The existing PVC was created before the default SC existed, so it will not pick it up automatically. Export its YAML file, then delete and recreate it:
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pvc postgres-13-awx-demo-postgres-13-0 -o yaml > postgres-13-awx-demo-postgres-13-0.yaml
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl delete -f postgres-13-awx-demo-postgres-13-0.yaml
persistentvolumeclaim "postgres-13-awx-demo-postgres-13-0" deleted
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl apply -f postgres-13-awx-demo-postgres-13-0.yaml
persistentvolumeclaim/postgres-13-awx-demo-postgres-13-0 created
Check the status of the PVC. You need to wait a while here; Bound means it has been bound.
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
postgres-13-awx-demo-postgres-13-0 Pending local-path 3s
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl describe pvc postgres-13-awx-demo-postgres-13-0 | grep -i -A 10 event
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForPodScheduled 42s persistentvolume-controller waiting for pod awx-demo-postgres-13-0 to be scheduled
Normal ExternalProvisioning 41s persistentvolume-controller waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
Normal Provisioning 41s rancher.io/local-path_local-path-provisioner-7c795b5576-gmrx4_d69ca393-bcbe-4abb-8b22-cd8db3b26bf8 External provisioner is provisioning volume for claim "awx/postgres-13-awx-demo-postgres-13-0"
Normal ProvisioningSucceeded 39s rancher.io/local-path_local-path-provisioner-7c795b5576-gmrx4_d69ca393-bcbe-4abb-8b22-cd8db3b26bf8 Successfully provisioned volume pvc-44b7687c-de18-45d2-bef6-8fb2d1c415d3
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
postgres-13-awx-demo-postgres-13-0 Bound pvc-44b7687c-de18-45d2-bef6-8fb2d1c415d3 8Gi RWO local-path 53s
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$
┌──[root@vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-44b7687c-de18-45d2-bef6-8fb2d1c415d3 8Gi RWO Delete Bound awx/postgres-13-awx-demo-postgres-13-0 local-path 54s
Look at the Pod status; the PG DB pod has now been created successfully.
You need to wait a while here before you see all the Pods running normally:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
awx-demo-65d9bf775b-hc58x 4/4 Running 0 79m
awx-demo-postgres-13-0 1/1 Running 0 143m
awx-operator-controller-manager-79ff9599d8-m7t8k 2/2 Running 0 81m
View the SVC and test access:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
awx-demo-postgres-13 ClusterIP None <none> 5432/TCP 143m
awx-demo-service NodePort 10.104.176.210 <none> 80:30066/TCP 79m
awx-operator-controller-manager-metrics-service ClusterIP 10.108.71.67 <none> 8443/TCP 82m
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$curl 192.168.26.82:30066
<!doctype html><html lang="en"><head><script nonce="cw6jhvbF7S5bfKJPsimyabathhaX35F5hIyR7emZNT0=" type="text/javascript">window.....
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
Get the password:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get secrets
NAME TYPE DATA AGE
awx-demo-admin-password Opaque 1 146m
awx-demo-app-credentials Opaque 3 82m
awx-demo-broadcast-websocket Opaque 1 146m
awx-demo-postgres-configuration Opaque 6 146m
awx-demo-receptor-ca kubernetes.io/tls 2 82m
awx-demo-receptor-work-signing Opaque 2 82m
awx-demo-secret-key Opaque 1 146m
awx-demo-token-sc92t kubernetes.io/service-account-token 3 82m
awx-operator-controller-manager-token-tpv2m kubernetes.io/service-account-token 3 84m
default-token-864fk kubernetes.io/service-account-token 3 4h32m
redhat-operators-pull-secret Opaque 1 146m
sh.helm.release.v1.my-awx-operator.v1 helm.sh/release.v1 1 84m
┌──[root@vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$echo $(kubectl get secret awx-demo-admin-password -o jsonpath="{.data.password}" | base64 --decode)
tP59YoIWSS6NgCUJYQUG4cXXJIaIc7ci
┌──[root@vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$
Access test
The default service is published as a NodePort, so we can access it from any node IP on the subnet plus the port: http://192.168.26.82:30066/#/login
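If the NodePort is not reachable (for example due to firewall rules), a port-forward is an alternative (a sketch; the service name matches the SVC listing above):
# forward local port 8080 to the AWX service
kubectl -n awx port-forward svc/awx-demo-service 8080:80
Then open http://localhost:8080/#/login in a browser.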
For certain reasons, I cannot use the Helm chart given here on my premises. Is there any reference for how we can do this?
Yes, you can deploy JupyterHub without using Helm.
Follow the tutorial on the JupyterHub GitHub installation page.
But note:
The Helm installation was created to automate a large part of the installation process.
I know you can't maintain external Helm repositories on your premises, but you can download the package manually and install it.
That will be much easier and faster than creating the whole setup manually.
TL;DR: The only difference from the documentation will be this command:
helm upgrade --install jhub jupyterhub-0.8.2.tgz \
--namespace jhub \
--version=0.8.2 \
--values config.yaml
Below is my full reproduction of the local installation.
user@minikube:~/jupyterhub$ openssl rand -hex 32
e278e128a9bff352becf6c0cc9b029f1fe1d5f07ce6e45e6c917c2590654e9ee
user@minikube:~/jupyterhub$ cat config.yaml
proxy:
secretToken: "e278e128a9bff352becf6c0cc9b029f1fe1d5f07ce6e45e6c917c2590654e9ee"
user@minikube:~/jupyterhub$ wget https://jupyterhub.github.io/helm-chart/jupyterhub-0.8.2.tgz
2020-02-10 13:25:31 (60.0 MB/s) - ‘jupyterhub-0.8.2.tgz’ saved [27258/27258]
user@minikube:~/jupyterhub$ helm upgrade --install jhub jupyterhub-0.8.2.tgz \
--namespace jhub \
--version=0.8.2 \
--values config.yaml
Release "jhub" does not exist. Installing it now.
NAME: jhub
LAST DEPLOYED: Mon Feb 10 13:27:20 2020
NAMESPACE: jhub
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing JupyterHub!
You can find the public IP of the JupyterHub by doing:
kubectl --namespace=jhub get svc proxy-public
It might take a few minutes for it to appear!
user@minikube:~/jupyterhub$ k get all -n jhub
NAME READY STATUS RESTARTS AGE
pod/hub-68d9d97765-ffrz6 0/1 Pending 0 19m
pod/proxy-56694f6f87-4cbgj 1/1 Running 0 19m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hub ClusterIP 10.96.150.230 <none> 8081/TCP 19m
service/proxy-api ClusterIP 10.96.115.44 <none> 8001/TCP 19m
service/proxy-public LoadBalancer 10.96.113.131 <pending> 80:31831/TCP,443:31970/TCP 19m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/hub 0/1 1 0 19m
deployment.apps/proxy 1/1 1 1 19m
NAME DESIRED CURRENT READY AGE
replicaset.apps/hub-68d9d97765 1 1 0 19m
replicaset.apps/proxy-56694f6f87 1 1 1 19m
NAME READY AGE
statefulset.apps/user-placeholder 0/0 19m
If you have any problem in the process, just let me know.
The output of helm status mychart shows the NAMESPACE in which the chart is deployed, i.e. NAMESPACE: default.
#=> helm status mychart
LAST DEPLOYED: Tue Sep 24 21:32:45 2019
NAMESPACE: default
STATUS: DEPLOYED
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
nginx-web-stg-55f55958-v2cxm 0/1 Pending 0 28m
tomcat-api-stg-6d54498fdd-cqctr 1/1 Running 0 28m
and if I run kubectl get all -A, it shows the NAMESPACE along with each resource name:
#=> kubectl get all -A
NAMESPACE NAME READY STATUS RESTARTS AGE
nginx pod/nginx-web-stg-55f55958-v2cxm 0/1 Pending 0 20m
tomcat pod/tomcat-api-stg-6d54498fdd-cqctr 1/1 Running 0 20m
In the kubectl output, a NAMESPACE column is included, but not in the helm status mychart output. I want to print resources along with their NAMESPACE in the helm status mychart output.
The output formats of kubectl and helm are completely unrelated. I'm not aware of any way to modify the output of helm status to make it display the namespace with each resource.
Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.
According to this issue, simply execute:
$ kubectl api-resources -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -l release=your-chart-name --all-namespaces
Sample output:
user#home:~$ kubectl api-resources -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -l release=terrific-ferret --all-namespaces
NAME STATUS MESSAGE ERROR
componentstatus/scheduler Healthy ok
componentstatus/etcd-0 Healthy {"health": "true"}
componentstatus/etcd-1 Healthy {"health": "true"}
componentstatus/controller-manager Healthy ok
NAMESPACE NAME DATA AGE
default configmap/terrific-ferret-mysql-test 1 12m
NAMESPACE NAME ENDPOINTS AGE
default endpoints/terrific-ferret-mysql aa.bb.cc.dd:port 12m
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
default persistentvolumeclaim/terrific-ferret-mysql Bound pvc-896382d2 8Gi RWO standard 12m
NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/terrific-ferret-mysql-86588b4646 1/1 Running 0 2m55s
NAMESPACE NAME TYPE DATA AGE
default secret/terrific-ferret-mysql Opaque 2 13m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/terrific-ferret-mysql ClusterIP xx.yy.zz.ww <none> 3306/TCP 13m
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
default deployment.apps/terrific-ferret-mysql 1 1 1 1 13m
NAMESPACE NAME DESIRED CURRENT READY AGE
default replicaset.apps/terrific-ferret-mysql-86 1 1 1 13m
We are using kubectl api-resources to list all supported resource types along with their short names.
Useful information can be found here: api-resources.
Useful blog: kubectl cheat sheet.
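For repeated use, the one-liner can be wrapped in a small shell function (a convenience sketch; helm_resources is a made-up name):
helm_resources() {
  # List every resource type the cluster knows, then fetch objects
  # labelled with the given Helm release name across all namespaces.
  kubectl api-resources -o name \
    | xargs -n 1 kubectl get --show-kind --ignore-not-found \
        -l release="$1" --all-namespaces
}
helm_resources terrific-ferret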
I have followed the steps described in this link.
When I am at the helm install section (Step 2) and try to run:
helm install --name web ./demo
I am getting the following error:
Get https://10.96.0.1:443/version?timeout=32s: dial tcp 10.96.0.1:443: i/o timeout
Expected Result: It should install and deploy the chart.
This issue relates to your Kubernetes configuration, not to Helm itself.
I assume you are also not able to see output from other helm commands like helm list, etc.
Lots of people hit this issue because of an improperly configured CNI (typically Calico). Sometimes it also happens because the kubeconfig is missing.
Solutions are:
migrate from Calico to Flannel (a Flannel sketch follows below)
change the --pod-network-cidr for Calico from 192.168.0.0/16 to 172.16.0.0/16 when using kubeadm to init the cluster, e.g. kubeadm init --pod-network-cidr=172.16.0.0/16
More related info can be found in the similar GitHub Helm issue.
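If you go the Flannel route, here is a minimal sketch (Flannel expects the 10.244.0.0/16 pod CIDR; the manifest URL is the one that was current at the time of this thread):
# Flannel's manifest assumes this pod CIDR
kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml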
Simple single-machine example:
1) kubeadm init --pod-network-cidr=172.16.0.0/16
2) kubectl taint nodes --all node-role.kubernetes.io/master-
3) kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
4) Install Helm
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
5) Create and install the chart
$ helm create demo
Creating demo
$ helm install --name web ./demo
NAME: web
LAST DEPLOYED: Tue Jul 16 10:44:15 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
web-demo 0/1 1 0 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
web-demo-6986c66d7d-vctql 0/1 ContainerCreating 0 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web-demo ClusterIP 10.106.140.176 <none> 80/TCP 0s
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=demo,app.kubernetes.io/instance=web" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
6) Result
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/web-demo-6986c66d7d-vctql 1/1 Running 0 75s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11m
service/web-demo ClusterIP 10.106.140.176 <none> 80/TCP 75s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/web-demo 1/1 1 1 75s
NAME DESIRED CURRENT READY AGE
replicaset.apps/web-demo-6986c66d7d 1 1 1 75s
You can find more info on how to configure Helm and Kubernetes itself in the Get Started With Kubernetes Using Minikube article.
I'm trying to install Prometheus to my EKS cluster, using the default prometheus helm chart located at https://github.com/kubernetes/charts/tree/master/stable/prometheus. It deploys successfully, but in the Kubernetes Dashboard the AlertManager and Server deployments say:
pod has unbound PersistentVolumeClaims (repeated 3 times)
I've tried tinkering with the values.yaml file to no avail.
I know this isn't much to go on, but I'm not really sure what else I can look up when it comes to logging.
Here's the output from running helm install stable/prometheus --name prometheus --namespace prometheus
root@fd9c3cc3f356:~/charts# helm install stable/prometheus --name prometheus --namespace prometheus
NAME: prometheus
LAST DEPLOYED: Wed Jun 20 14:55:41 2018
NAMESPACE: prometheus
STATUS: DEPLOYED
RESOURCES:
==> v1beta1/ClusterRole
NAME AGE
prometheus-kube-state-metrics 1s
prometheus-server 1s
==> v1/ServiceAccount
NAME SECRETS AGE
prometheus-alertmanager 1 1s
prometheus-kube-state-metrics 1 1s
prometheus-node-exporter 1 1s
prometheus-pushgateway 1 1s
prometheus-server 1 1s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
prometheus-alertmanager Pending 1s
prometheus-server Pending 1s
==> v1beta1/ClusterRoleBinding
NAME AGE
prometheus-kube-state-metrics 1s
prometheus-server 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
prometheus-alertmanager ClusterIP 10.100.3.32 <none> 80/TCP 1s
prometheus-kube-state-metrics ClusterIP None <none> 80/TCP 1s
prometheus-node-exporter ClusterIP None <none> 9100/TCP 1s
prometheus-pushgateway ClusterIP 10.100.243.103 <none> 9091/TCP 1s
prometheus-server ClusterIP 10.100.144.15 <none> 80/TCP 1s
==> v1beta1/DaemonSet
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
prometheus-node-exporter 3 3 2 3 2 <none> 1s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
prometheus-alertmanager 1 1 1 0 1s
prometheus-kube-state-metrics 1 1 1 0 1s
prometheus-pushgateway 1 1 1 0 1s
prometheus-server 1 1 1 0 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
prometheus-node-exporter-dklx8 0/1 ContainerCreating 0 1s
prometheus-node-exporter-hphmn 1/1 Running 0 1s
prometheus-node-exporter-zxcnn 1/1 Running 0 1s
prometheus-alertmanager-6df98765f4-l9vq2 0/2 Pending 0 1s
prometheus-kube-state-metrics-6584885ccf-8md7c 0/1 ContainerCreating 0 1s
prometheus-pushgateway-5495f55cdf-brxvr 0/1 ContainerCreating 0 1s
prometheus-server-5959898967-fdztb 0/2 Pending 0 1s
==> v1/ConfigMap
NAME DATA AGE
prometheus-alertmanager 1 1s
prometheus-server 3 1s
NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-server.prometheus.svc.cluster.local
Get the Prometheus server URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace prometheus -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace prometheus port-forward $POD_NAME 9090
The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-alertmanager.prometheus.svc.cluster.local
Get the Alertmanager URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace prometheus -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace prometheus port-forward $POD_NAME 9093
The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
prometheus-pushgateway.prometheus.svc.cluster.local
Get the PushGateway URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace prometheus -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace prometheus port-forward $POD_NAME 9091
For more information on running Prometheus, visit:
https://prometheus.io/
Turns out, EKS clusters are not created with any persistent storage enabled:
Amazon EKS clusters are not created with any storage classes. You must
define storage classes for your cluster to use and you should define a
default storage class for your persistent volume claims.
This guide explains how to add a Kubernetes StorageClass for EKS.
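A sketch of the StorageClass that guide adds (annotated as the default class so the pending PVCs can bind; gp2 was the standard EBS volume type for EKS at the time):
# create an EBS-backed gp2 StorageClass and mark it as the default
kubectl apply -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
EOF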
After adding the StorageClass as instructed, deleting my Prometheus deployment with helm delete prometheus --purge, and re-creating the deployment, all of my pods are now fully functional.
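For reference, the redeploy sequence (Helm 2 syntax, matching the commands above):
helm delete prometheus --purge
helm install stable/prometheus --name prometheus --namespace prometheus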