For various reasons, I cannot use the Helm chart given here on my premises (in my on-premises environment). Is there any reference for how we can do this?
Yes, you can deploy JupyterHub without using Helm.
Follow the tutorial on the JupyterHub GitHub installation page.
However, the Helm chart was created to automate a large part of the installation process.
I know you can't maintain external Helm repositories on your premises, but you can download the chart package manually and install it from the local file.
That will be much easier and faster than creating the whole setup manually.
TL;DR: The only difference from the documentation will be this command:
helm upgrade --install jhub jupyterhub-0.8.2.tgz \
  --namespace jhub \
  --version=0.8.2 \
  --values config.yaml
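If the machine you download from can still reach the chart repository, helm fetch is an alternative to wget; a rough sketch, assuming you then copy the tarball across to the offline machine:
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm fetch jupyterhub/jupyterhub --version 0.8.2
This produces jupyterhub-0.8.2.tgz in the current directory, which you install exactly as shown above.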
Below is my full reproduction of the local installation.
user@minikube:~/jupyterhub$ openssl rand -hex 32
e278e128a9bff352becf6c0cc9b029f1fe1d5f07ce6e45e6c917c2590654e9ee
user@minikube:~/jupyterhub$ cat config.yaml
proxy:
  secretToken: "e278e128a9bff352becf6c0cc9b029f1fe1d5f07ce6e45e6c917c2590654e9ee"
user@minikube:~/jupyterhub$ wget https://jupyterhub.github.io/helm-chart/jupyterhub-0.8.2.tgz
2020-02-10 13:25:31 (60.0 MB/s) - ‘jupyterhub-0.8.2.tgz’ saved [27258/27258]
user@minikube:~/jupyterhub$ helm upgrade --install jhub jupyterhub-0.8.2.tgz \
  --namespace jhub \
  --version=0.8.2 \
  --values config.yaml
Release "jhub" does not exist. Installing it now.
NAME: jhub
LAST DEPLOYED: Mon Feb 10 13:27:20 2020
NAMESPACE: jhub
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing JupyterHub!
You can find the public IP of the JupyterHub by doing:
kubectl --namespace=jhub get svc proxy-public
It might take a few minutes for it to appear!
user@minikube:~/jupyterhub$ k get all -n jhub
NAME READY STATUS RESTARTS AGE
pod/hub-68d9d97765-ffrz6 0/1 Pending 0 19m
pod/proxy-56694f6f87-4cbgj 1/1 Running 0 19m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hub ClusterIP 10.96.150.230 <none> 8081/TCP 19m
service/proxy-api ClusterIP 10.96.115.44 <none> 8001/TCP 19m
service/proxy-public LoadBalancer 10.96.113.131 <pending> 80:31831/TCP,443:31970/TCP 19m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/hub 0/1 1 0 19m
deployment.apps/proxy 1/1 1 1 19m
NAME DESIRED CURRENT READY AGE
replicaset.apps/hub-68d9d97765 1 1 0 19m
replicaset.apps/proxy-56694f6f87 1 1 1 19m
NAME READY AGE
statefulset.apps/user-placeholder 0/0 19m
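Note: since this is minikube, the proxy-public LoadBalancer EXTERNAL-IP will stay in <pending>; as a quick check, you can reach the hub through the NodePort shown above (31831 for port 80) instead:
curl http://$(minikube ip):31831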
If you have any problem in the process, just let me know.
I was trying to install the Weave Cloud Agents for my minikube. I used the provided command
curl -Ls https://get.weave.works | sh -s -- --token=xxx
but keep getting the following error:
There was an error while performing a DNS check: checking DNS failed, the DNS in the Kubernetes cluster is not working correctly. Please check that your cluster can download images and run pods.
I have the following DNS pods running:
kube-system coredns-6955765f44-7zt4x 1/1 Running 0 38m
kube-system coredns-6955765f44-xdnd9 1/1 Running 0 38m
I tried different suggestions, such as https://www.jeffgeerling.com/blog/2019/debugging-networking-issues-multi-node-kubernetes-on-virtualbox and https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/, but none of them resolved my issue.
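For reference, the basic check from the Kubernetes DNS debugging guide above looks like this (the dnsutils pod comes from that guide):
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl exec -i -t dnsutils -- nslookup kubernetes.default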
It seems to be an issue that has happened before: https://github.com/weaveworks/launcher/issues/285.
My Kubernetes version is v1.17.3.
I reproduced your issue and got the same error:
minikube v1.7.2 on Centos 7.7.1908
Docker 19.03.5
vm-driver=virtualbox
Connecting cluster to "Old Tree 34" (id: old-tree-34) on Weave Cloud
Installing Weave Cloud agents on minikube at https://192.168.99.100:8443
Performing a check of the Kubernetes installation setup.
There was an error while performing a DNS check: checking DNS failed, the DNS in the Kubernetes cluster is not working correctly. Please check that your cluster can download images and run pods.
I wasn't able to fix this problem, but I found a workaround: use Helm. There is a second tab, 'Helm', in 'Install the Weave Cloud Agents' with the provided command, like:
helm repo update && helm upgrade --install --wait weave-cloud \
  --set token=xxx \
  --namespace weave \
  stable/weave-cloud
Let's install Helm and use it:
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
.....
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
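To confirm Tiller is up before continuing (the label selector is the one helm init applies, as an assumption worth verifying on your cluster):
helm version
kubectl get pods -n kube-system -l app=helm,name=tiller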
helm repo update
helm upgrade --install --wait weave-cloud \
> --set token=xxx \
> --namespace weave \
> stable/weave-cloud
Release "weave-cloud" does not exist. Installing it now.
NAME: weave-cloud
LAST DEPLOYED: Thu Feb 13 14:52:45 2020
NAMESPACE: weave
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME AGE
weave-agent 35s
==> v1/Pod(related)
NAME AGE
weave-agent-69fbf74889-dw77c 35s
==> v1/Secret
NAME AGE
weave-cloud 35s
==> v1/ServiceAccount
NAME AGE
weave-cloud 35s
==> v1beta1/ClusterRole
NAME AGE
weave-cloud 35s
==> v1beta1/ClusterRoleBinding
NAME AGE
weave-cloud 35s
NOTES:
Weave Cloud agents had been installed!
First, verify all Pods are running:
kubectl get pods -n weave
Next, login to Weave Cloud (https://cloud.weave.works) and verify the agents are connect to your instance.
If you need help or have any question, join our Slack to chat to us – https://slack.weave.works.
Happy hacking!
Check (wait around 10 minutes for everything to deploy):
kubectl get pods -n weave
NAME READY STATUS RESTARTS AGE
kube-state-metrics-64599b7996-d8pnw 1/1 Running 0 29m
prom-node-exporter-2lwbn 1/1 Running 0 29m
prometheus-5586cdd667-dtdqq 2/2 Running 0 29m
weave-agent-6c77dbc569-xc9qx 1/1 Running 0 29m
weave-flux-agent-65cb4694d8-sllks 1/1 Running 0 29m
weave-flux-memcached-676f88fcf7-ktwnp 1/1 Running 0 29m
weave-scope-agent-7lgll 1/1 Running 0 29m
weave-scope-cluster-agent-8fb596b6b-mddv8 1/1 Running 0 29m
[vkryvoruchko@nested-vm-image1 bin]$ kubectl get all -n weave
NAME READY STATUS RESTARTS AGE
pod/kube-state-metrics-64599b7996-d8pnw 1/1 Running 0 30m
pod/prom-node-exporter-2lwbn 1/1 Running 0 30m
pod/prometheus-5586cdd667-dtdqq 2/2 Running 0 30m
pod/weave-agent-6c77dbc569-xc9qx 1/1 Running 0 30m
pod/weave-flux-agent-65cb4694d8-sllks 1/1 Running 0 30m
pod/weave-flux-memcached-676f88fcf7-ktwnp 1/1 Running 0 30m
pod/weave-scope-agent-7lgll 1/1 Running 0 30m
pod/weave-scope-cluster-agent-8fb596b6b-mddv8 1/1 Running 0 30m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/prometheus ClusterIP 10.108.197.29 <none> 80/TCP 30m
service/weave-flux-memcached ClusterIP None <none> 11211/TCP 30m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/prom-node-exporter 1 1 1 1 1 <none> 30m
daemonset.apps/weave-scope-agent 1 1 1 1 1 <none> 30m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kube-state-metrics 1/1 1 1 30m
deployment.apps/prometheus 1/1 1 1 30m
deployment.apps/weave-agent 1/1 1 1 31m
deployment.apps/weave-flux-agent 1/1 1 1 30m
deployment.apps/weave-flux-memcached 1/1 1 1 30m
deployment.apps/weave-scope-cluster-agent 1/1 1 1 30m
NAME DESIRED CURRENT READY AGE
replicaset.apps/kube-state-metrics-64599b7996 1 1 1 30m
replicaset.apps/prometheus-5586cdd667 1 1 1 30m
replicaset.apps/weave-agent-69fbf74889 0 0 0 31m
replicaset.apps/weave-agent-6c77dbc569 1 1 1 30m
replicaset.apps/weave-flux-agent-65cb4694d8 1 1 1 30m
replicaset.apps/weave-flux-memcached-676f88fcf7 1 1 1 30m
replicaset.apps/weave-scope-cluster-agent-8fb596b6b 1 1 1 30m
Log in to https://cloud.weave.works/ and check the same:
Started installing agents on Kubernetes cluster v1.17.2
All Weave Cloud agents are connected!
I am not able to install Keda with Helm on AKS. I am getting the error below; any help is greatly appreciated.
Error: unable to convert to CRD type: unable to convert unstructured object to apiextensions.k8s.io/v1beta1, Kind=CustomResourceDefinition: cannot convert int64 to float64
I reproduced your problem, and here is the solution.
You need to use
helm fetch kedacore/keda-edge --devel
to download the Keda chart files to your PC.
Unpack it:
tar -xvzf keda-edge-xxx.tgz
Then you need to change the hook in scaledobject-crd.yaml:
nano keda-edge/templates/scaledobject-crd.yaml
"helm.sh/hook": crd-install need to be changed to "helm.sh/hook": pre-install
And install it with Helm:
helm install ./keda-edge --name keda
NAME: keda
LAST DEPLOYED: Mon Sep 30 12:13:14 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ClusterRoleBinding
NAME AGE
hpa-controller-custom-metrics 1s
keda-keda-edge 1s
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
keda-keda-edge 0/1 1 0 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
keda-keda-edge-6b55bf7674-j5kgc 0/1 ContainerCreating 0 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
keda-keda-edge ClusterIP 10.110.59.143 <none> 443/TCP,80/TCP 1s
==> v1/ServiceAccount
NAME SECRETS AGE
keda-serviceaccount 1 1s
==> v1beta1/APIService
NAME AGE
v1beta1.external.metrics.k8s.io 0s
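To confirm the install worked, you can check that the CRDs and the deployment exist (the deployment name is taken from the output above):
kubectl get crd | grep -i keda
kubectl get deployment keda-keda-edge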
I have followed the steps as described in this link.
When I am on the helm install section (Step 2) and try to run:
helm install --name web ./demo
I am getting the following error:
Get https://10.96.0.1:443/version?timeout=32s: dial tcp 10.96.0.1:443: i/o timeout
Expected Result: It should install and deploy the chart.
This issue relates to your Kubernetes configuration, not to Helm itself.
I assume you are also not able to see output from other helm commands, like helm list, etc.
Many people have this issue because of an improperly configured CNI (typically Calico), and sometimes it happens because the kubeconfig is absent.
Solutions are:
Migrate from Calico to flannel (see the sketch below)
Change the --pod-network-cidr for Calico from 192.168.0.0/16 to 172.16.0.0/16 when using kubeadm to init the cluster, like kubeadm init --pod-network-cidr=172.16.0.0/16
You can find more related info in a similar GitHub Helm issue.
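If you go the flannel route instead, the usual sequence (per the flannel documentation, which expects flannel's own default CIDR) would be:
kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml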
Simple single-machine example (keeping Calico, with the changed CIDR):
1) kubeadm init --pod-network-cidr=172.16.0.0/16
2) kubectl taint nodes --all node-role.kubernetes.io/master-
3) kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
4) Install Helm:
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
5) Create and install a chart:
$ helm create demo
Creating demo
$ helm install --name web ./demo
NAME: web
LAST DEPLOYED: Tue Jul 16 10:44:15 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
web-demo 0/1 1 0 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
web-demo-6986c66d7d-vctql 0/1 ContainerCreating 0 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web-demo ClusterIP 10.106.140.176 <none> 80/TCP 0s
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=demo,app.kubernetes.io/instance=web" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
6) Result:
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/web-demo-6986c66d7d-vctql 1/1 Running 0 75s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11m
service/web-demo ClusterIP 10.106.140.176 <none> 80/TCP 75s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/web-demo 1/1 1 1 75s
NAME DESIRED CURRENT READY AGE
replicaset.apps/web-demo-6986c66d7d 1 1 1 75s
You can find more info on how to configure Helm and Kubernetes itself in the Get Started With Kubernetes Using Minikube article.
I'm following the tutorial for Istio on the Google Cloud Platform and have been able to get my cluster up and running. I get to the part where I start the demo app by running kubectl apply -f install/kubernetes/istio-demo-auth.yaml, but a number of the pods won't come up.
I'm running Istio 1.0.3
kubectl version --short
Client Version: v1.11.1
Server Version: v1.9.7-gke.6
When I run the command kubectl get service -n istio-system
to verify the Istio pods are deployed and the containers are running, many of them are in crash cycles. Any tips on how to debug this?
grafana-7b6d98d887-9dgdc 1/1 Running 0 17h
istio-citadel-778747b96d-cw78t 1/1 Running 0 17h
istio-cleanup-secrets-2vjlf 0/1 Completed 0 17h
istio-egressgateway-7b8f4ccb6-rl69j 1/1 Running 123 17h
istio-galley-7557f8c985-jp975 0/1 ContainerCreating 0 17h
istio-grafana-post-install-n45x4 0/1 Error 202 17h
istio-ingressgateway-5f94fdc55f-dc2q5 1/1 Running 123 17h
istio-pilot-d6b56bf4d-czp9w 1/2 CrashLoopBackOff 328 17h
istio-policy-6c7d8454b-dpvfj 1/2 CrashLoopBackOff 500 17h
istio-security-post-install-qrzpq 0/1 CrashLoopBackOff 201 17h
istio-sidecar-injector-75cf59b857-z7wbc 0/1 ContainerCreating 0 17h
istio-telemetry-69db5c7575-4jp2d 1/2 CrashLoopBackOff 500 17h
istio-tracing-77f9f94b98-vjmhc 1/1 Running 0 17h
prometheus-67d4988588-gjmcp 1/1 Running 0 17h
servicegraph-57d8ff7686-x2r8r 1/1 Running 0 17h
That looks like the output for kubectl -n istio-system get pods
Tips: check the output of these:
$ kubectl -n istio-system logs istio-pilot-d6b56bf4d-czp9w
$ kubectl -n istio-system logs istio-policy-6c7d8454b-dpvfj
$ kubectl -n istio-system logs istio-grafana-post-install-n45x4
$ kubectl -n istio-system logs istio-telemetry-69db5c7575-4jp2d
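For the pods stuck in ContainerCreating, kubectl describe usually shows the reason in the Events section, for example:
$ kubectl -n istio-system describe pod istio-galley-7557f8c985-jp975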
Check the Deployment/Service/ConfigMap definitions in install/kubernetes/istio-demo-auth.yaml for the pods that are crashing.
Try installing with Helm via template.
You would usually want to have Grafana, Zipkin & Kiali along. This is what worked for me:
1) kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
2) helm template install/kubernetes/helm/istio --name istio --namespace istio-system --set grafana.enabled=true --set servicegraph.enabled=true --set tracing.enabled=true --set kiali.enabled=true --set sidecarInjectorWebhook.enabled=true --set global.tag=1.0.5 > $HOME/istio.yaml
3) kubectl create namespace istio-system
4) kubectl apply -f $HOME/istio.yaml
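Since the template above enables sidecarInjectorWebhook, you will also want to label each namespace that should get automatic sidecar injection, for example:
kubectl label namespace default istio-injection=enabled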
I had a similar issue; it turned out that my NAT gateway wasn't configured correctly. The Terraform I used to create the private cluster had created an additional default internet gateway that I needed to delete.
Some pods came up and some didn't; I think some of the images were cached somewhere the cluster could reach, like a Google repo.
I'm trying to install Prometheus on my EKS cluster, using the default Prometheus Helm chart located at https://github.com/kubernetes/charts/tree/master/stable/prometheus. It deploys successfully, but in the Kubernetes Dashboard the AlertManager and Server deployments say:
pod has unbound PersistentVolumeClaims (repeated 3 times)
I've tried tinkering with the values.yaml file to no avail.
I know this isn't much to go on, but I'm not really sure what else I can look up when it comes to logging.
Here's the output from running helm install stable/prometheus --name prometheus --namespace prometheus
root#fd9c3cc3f356:~/charts# helm install stable/prometheus --name prometheus --namespace prometheus
NAME: prometheus
LAST DEPLOYED: Wed Jun 20 14:55:41 2018
NAMESPACE: prometheus
STATUS: DEPLOYED
RESOURCES:
==> v1beta1/ClusterRole
NAME AGE
prometheus-kube-state-metrics 1s
prometheus-server 1s
==> v1/ServiceAccount
NAME SECRETS AGE
prometheus-alertmanager 1 1s
prometheus-kube-state-metrics 1 1s
prometheus-node-exporter 1 1s
prometheus-pushgateway 1 1s
prometheus-server 1 1s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
prometheus-alertmanager Pending 1s
prometheus-server Pending 1s
==> v1beta1/ClusterRoleBinding
NAME AGE
prometheus-kube-state-metrics 1s
prometheus-server 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
prometheus-alertmanager ClusterIP 10.100.3.32 <none> 80/TCP 1s
prometheus-kube-state-metrics ClusterIP None <none> 80/TCP 1s
prometheus-node-exporter ClusterIP None <none> 9100/TCP 1s
prometheus-pushgateway ClusterIP 10.100.243.103 <none> 9091/TCP 1s
prometheus-server ClusterIP 10.100.144.15 <none> 80/TCP 1s
==> v1beta1/DaemonSet
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
prometheus-node-exporter 3 3 2 3 2 <none> 1s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
prometheus-alertmanager 1 1 1 0 1s
prometheus-kube-state-metrics 1 1 1 0 1s
prometheus-pushgateway 1 1 1 0 1s
prometheus-server 1 1 1 0 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
prometheus-node-exporter-dklx8 0/1 ContainerCreating 0 1s
prometheus-node-exporter-hphmn 1/1 Running 0 1s
prometheus-node-exporter-zxcnn 1/1 Running 0 1s
prometheus-alertmanager-6df98765f4-l9vq2 0/2 Pending 0 1s
prometheus-kube-state-metrics-6584885ccf-8md7c 0/1 ContainerCreating 0 1s
prometheus-pushgateway-5495f55cdf-brxvr 0/1 ContainerCreating 0 1s
prometheus-server-5959898967-fdztb 0/2 Pending 0 1s
==> v1/ConfigMap
NAME DATA AGE
prometheus-alertmanager 1 1s
prometheus-server 3 1s
NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-server.prometheus.svc.cluster.local
Get the Prometheus server URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace prometheus -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace prometheus port-forward $POD_NAME 9090
The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-alertmanager.prometheus.svc.cluster.local
Get the Alertmanager URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace prometheus -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace prometheus port-forward $POD_NAME 9093
The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
prometheus-pushgateway.prometheus.svc.cluster.local
Get the PushGateway URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace prometheus -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace prometheus port-forward $POD_NAME 9091
For more information on running Prometheus, visit:
https://prometheus.io/
Turns out, EKS clusters are not created with any persistent storage enabled:
Amazon EKS clusters are not created with any storage classes. You must
define storage classes for your cluster to use and you should define a
default storage class for your persistent volume claims.
This guide explains how to add a Kubernetes StorageClass for EKS.
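For reference, a gp2 StorageClass along the lines of that guide looks roughly like this (check the current EKS documentation for the exact manifest):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
Save it as gp2-storage-class.yaml and apply it with kubectl apply -f gp2-storage-class.yaml; the Pending PersistentVolumeClaims should then bind.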
After adding the StorageClass as instructed, deleting my prometheus deployment using helm delete prometheus --purge and re-creating the deployment, all of my pods are now fully functional.