I'm following the Zero to Kubernetes GKE installation guide.
I've successfully installed Kubernetes and Helm, but when it comes to running the install:
helm upgrade --cleanup-on-fail \
--install $RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--create-namespace \
--version=0.9.0 \
--values config.yaml
The error I keep getting is:
Release "jhub" does not exist. Installing it now.
Error: failed pre-install: timed out waiting for the condition
I've tried adding the --timeout 1200s flag, as per the GKE troubleshooting instructions, but it still hangs.
Any ideas on how to resolve this? If it continues to fail I'll probably just try deploying on Azure (AKS) instead, but ideally GKE should work.
(For reference I've actually done this deploy twice on GKE successfully but had to delete those clusters so I'm really not sure what the issue is.)
EDIT: I resolved the issue.
Further down in the config.yaml file I had somehow changed the tag from 1.0.7, which was the version registered for my Jupyter image, to 1.1.0, so there was a conflict between my Jupyter registry and my JupyterHub config.
I'm still inexperienced with Jupyter & JupyterHub so apologies if this seemed obvious to others.
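For anyone hitting the same thing, a minimal config.yaml containing just the image section looks roughly like this (the field layout follows the Zero to JupyterHub chart; the image name is a placeholder for whatever image you actually registered):
cat > config.yaml <<'EOF'
singleuser:
  image:
    name: your-registry/your-notebook-image   # placeholder, use your own image
    tag: "1.0.7"                               # must match a tag that exists in your registry
EOF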
As mentioned in comments:
The issue was solved by cleaning old configurations off the cluster.
All the information in the posted guide works fine.
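For reference, the kind of cleanup meant here is removing leftover releases and namespaces from earlier attempts before re-running the install. A rough sketch, assuming the release and namespace are both called jhub as in the error above (substitute your own names):
# show every release, including failed ones, across all namespaces
helm list --all-namespaces --all
# remove the stale release and its namespace, then re-run the install
helm uninstall jhub --namespace jhub
kubectl delete namespace jhub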
helm install airflow . --namespace airflow -f my_values.yaml -f my_other_values.yaml
I executed the command above but had to interrupt it; now I cannot re-execute it because it gives me the error:
Error: cannot re-use a name that is still in use
How can I fix it?
Thank you
Either helm uninstall the existing release
helm uninstall airflow
helm install airflow . -n airflow -f values.dev.yaml ...
or use helm upgrade to replace it with a new one
helm upgrade airflow . -n airflow -f values.dev.yaml ...
Both will have almost the same effect. With helm upgrade you can later helm rollback to the previous revision, whereas helm uninstall discards that history.
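For example, a short sketch of how that history works in practice, using the airflow release from the question:
# after an upgrade, Helm keeps prior revisions you can inspect and roll back to
helm history airflow -n airflow
helm rollback airflow 1 -n airflow
# after an uninstall, that history is discarded (unless you pass --keep-history)
helm uninstall airflow -n airflow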
Mechanically, helm install and helm upgrade just send Kubernetes manifests to the cluster, and from there the cluster takes responsibility for actually doing the work. Unless the chart has time-consuming hook jobs, it's actually possible that your current installation is fine and you don't need to do any of this (even if helm install --wait didn't report the Deployments were ready yet).
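A quick way to check whether the existing release is actually healthy before deciding anything, assuming the airflow release and namespace from the question:
helm status airflow -n airflow
kubectl get deployments,statefulsets,pods -n airflow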
(The commands above assume you're using the current version 3 of Helm. Helm 2 has slightly different syntax and commands, but at this point is unsupported and end-of-lifed.)
I am trying to install JanusGraph on Google Cloud using the tutorial available at https://cloud.google.com/architecture/running-janusgraph-with-bigtable
But I am getting an error, unable to recognize "": no matches for kind "Deployment" in version "apps/v1beta1", as well as a "chart deprecated" warning. Let me know if anyone has been able to install it.
As per the documentation:
gcloud container clusters create janusgraph-tutorial \
--cluster-version=1.15 \
--machine-type=n1-standard-4 \
--scopes=\
"https://www.googleapis.com/auth/bigtable.admin",\
"https://www.googleapis.com/auth/bigtable.data"
The error was ERROR: (gcloud.container.clusters.create) ResponseError: code=400, message=No valid versions with the prefix "1.15" found.
So I tried:
gcloud container ...
--cluster-version=1.20 \ ..
and I was able to create the cluster.
Later, for the JanusGraph installation:
helm upgrade --install --wait --timeout 600s janusgraph stable/janusgraph -f values.yaml
Release "janusgraph" does not exist. Installing it now.
WARNING: This chart is deprecated
Error: unable to build kubernetes objects from release manifest: [unable to recognize "": no matches for kind "Deployment" in version "apps/v1beta1", unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta1"]
Kubernetes versions its resources, such as Deployment. Until 1.16, Deployment was still served under the apps/v1beta1 API version. As of Kubernetes 1.16 that API version has been removed, and you must now use apps/v1 (a quick way to check what your cluster serves is shown below).
See:
Deprecated APIs Removed in 1.16
Kubernetes API Reference 1.21
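One quick check of what your cluster actually serves (on 1.16+ you should see apps/v1 listed but not apps/v1beta1):
kubectl api-versions | grep '^apps/'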
If you're able to revise these references in the tutorial, then you should do so. You may encounter other APIs that are deprecated and/or removed, but only the latter should cause similar problems.
It may be preferable to "Send feedback" (bottom of tutorial) to Google asking that someone there upgrade the tutorial or provide caveats.
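As a stopgap, since the deprecated chart is unlikely to be fixed, one approach people sometimes use is rendering the chart locally and rewriting the API versions before applying the result. This is not an officially supported path, and it only works if the rendered manifests already contain the fields apps/v1 requires (for example spec.selector on Deployments):
helm template janusgraph stable/janusgraph -f values.yaml \
  | sed 's#apps/v1beta1#apps/v1#g' \
  | kubectl apply -f -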
Curiously, I noticed that the tutorial includes creating a Kubernetes v1.15 cluster:
gcloud container clusters create janusgraph-tutorial \
--cluster-version=1.15 \
...
Per the above, that version should still support apps/v1beta1 Deployment; did that not work?
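If 1.15 is no longer offered (as the ResponseError in your output suggests), you can list the versions GKE currently supports before picking one; the zone below is only an example:
gcloud container get-server-config --zone=us-central1-a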
Your question would benefit from additional context/detail. You write "I am getting an error" but you do not include the specific step that caused this issue. I assume it was when you attempted to deploy JanusGraph to the cluster using Helm?
helm upgrade --install ... 600s janusgraph stable/janusgraph -f values.yaml
If I run
helm upgrade --cleanup-on-fail \
$RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.9.0 \
--values config.yaml
It fails with this error: Error: UPGRADE FAILED: pre-upgrade hooks failed: timed out waiting for the condition. It just hangs for a bit and ultimately times out. It seems like too small a change to cause a true timeout. I found this command in the Zero to JupyterHub docs, where it describes how to apply changes to the configuration file.
I've tried several permutations, including leaving out cleanup, leaving out version, etc. The only thing I could get to work was helm upgrade jhub jupyterhub/jupyterhub, but I don't think it's producing the desired effect.
For example, when I add a line in my config.yaml to change the default interface to JupyterLab, it doesn't work if I run helm upgrade jhub jupyterhub/jupyterhub. I believe I need to specify config.yaml using --values or -f.
My overall project is to set up JupyterHub on a cloud Kubernetes environment. I'm using GKE and the online terminal.
Thanks
Solved: I specified the tag incorrectly in config.yaml. I had put the image digest rather than the actual tag. Here are the images on Docker Hub.
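For anyone else confused by the difference: the tag is the human-readable label (e.g. 1.0.7) and the digest is the sha256 content hash; singleuser.image.tag in config.yaml needs the former. A quick way to see both for an image (image name below is a placeholder):
docker pull your-dockerhub-user/your-notebook-image:1.0.7
docker images --digests your-dockerhub-user/your-notebook-image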
While trying to install IBM MQ in Google Kubernetes Engine using Helm charts, I got the error shown in the figure above. Can anyone help me out with this?
Infrastructure: Google Cloud Platform
Kubectl version:
Client Version: v1.18.6
Server Version: v1.16.13-gke.1.
Helm version: v3.2.1+gfe51cd1
helm chart:
helm repo add ibm-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable/
Helm command:
$ helm install mqa ibm-charts/ibm-mqadvanced-server-dev --version 4.0.0 --set license=accept --set service.type=LoadBalancer --set queueManager.dev.secret.name=mysecret --set queueManager.dev.secret.adminPasswordKey=adminPassword --set security.initVolumeAsRoot=true
First, it appears it's not installing the right version of the Helm chart. You can follow the official installation instructions for the Chart.
Secondly, the messages are inconsistent. The error shows a GKE v1.15.12-gke.2 and also a GKE v1.16.13-gke.1. So I would make sure your client K8s context is pointing to the right cluster.
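A quick way to confirm which cluster your client is actually pointed at, and to re-point it at the intended GKE cluster if needed (cluster name and zone below are placeholders):
$ kubectl config current-context
$ gcloud container clusters get-credentials your-cluster --zone your-zone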
It also appears that the kubectl versions are not matching.
For example, you can download the v1.16.13 client so that it matches (Assuming that your client is on Linux):
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.16.13/bin/linux/amd64/kubectl
$ chmod +x kubectl
$ ./kubectl version
✌️
IBM has provided a new sample MQ Helm chart here. Included are a number of samples for different Kubernetes distributions, and the GKE one can be found here. It is worth highlighting that this sample deploys IBM MQ in its cloud-native high-availability topology, called Native HA.
I am trying to upgrade Istio to 1.4.0 from 1.3.2 and I am running into the following issue when I run:
$ istioctl manifest apply --set values.global.mtls.enabled=true --set values.grafana.enabled=true --set values.kiali.enabled=true
I'm following the instructions in the documentation:
$ curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.4.0 sh -
$ cd istio-1.4.0
$ export PATH=$PWD/bin:$PATH
When I run istioctl manifest apply I'm able to install the majority of the components, but I keep getting the following message for each Istio-specific CRD:
error: unable to recognize "STDIN": no matches for kind "DestinationRule" in version "networking.istio.io/v1alpha3" (repeated 1 times)
Is there a step I'm missing? I'm simply following the documentation so I'm not sure where I'm going wrong here.
If anyone runs into this issue, check what k8s version your nodes are on (kubectl get nodes). Upgrading my EKS cluster from 1.11 to 1.12 fixed the issue when installing with istioctl.
Also, I didn't notice this in their docs for installing 1.4.0 with istioctl.
Before you can install Istio, you need a cluster running a compatible version of Kubernetes. Istio 1.4 has been tested with Kubernetes releases 1.13, 1.14, 1.15.
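So before running istioctl, it's worth confirming that both the control plane and the nodes fall in that range, for example:
$ kubectl version --short
$ kubectl get nodes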