I am trying to install JanusGraph on Google Cloud using the tutorial available at https://cloud.google.com/architecture/running-janusgraph-with-bigtable
But I am getting the error unable to recognize "": no matches for kind "Deployment" in version "apps/v1beta1", along with a chart-deprecated warning. Let me know if anyone has been able to install it.
As per the documentation:
gcloud container clusters create janusgraph-tutorial \
--cluster-version=1.15 \
--machine-type=n1-standard-4 \
--scopes=\
"https://www.googleapis.com/auth/bigtable.admin",\
"https://www.googleapis.com/auth/bigtable.data"
The error was:
ERROR: (gcloud.container.clusters.create) ResponseError: code=400, message=No valid versions with the prefix "1.15" found.
So I tried:
gcloud container ...
--cluster-version=1.20 \ ..
and I was able to create the cluster.
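For reference, you can list the versions GKE currently offers and then reuse the tutorial's command with one of them. A sketch (the zone is only an example, and 1.20 is simply the prefix that happened to work here; substitute whatever get-server-config reports):
# List the cluster versions GKE currently offers (example zone)
gcloud container get-server-config --zone=us-central1-a
# Same command as the tutorial, with a currently supported version
gcloud container clusters create janusgraph-tutorial \
--cluster-version=1.20 \
--machine-type=n1-standard-4 \
--scopes=\
"https://www.googleapis.com/auth/bigtable.admin",\
"https://www.googleapis.com/auth/bigtable.data"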
Later, for the JanusGraph installation:
helm upgrade --install --wait --timeout 600s janusgraph stable/janusgraph -f values.yaml
Release "janusgraph" does not exist. Installing it now.
WARNING: This chart is deprecated
Error: unable to build kubernetes objects from release manifest: [unable to recognize "": no matches for kind "Deployment" in version "apps/v1beta1", unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta1"]
Kubernetes versions resources such as Deployment. Until recently (clusters older than 1.16), Deployment was available under the apps/v1beta1 API version. As of Kubernetes 1.16, that API version has been removed, and you must now use apps/v1.
See:
Deprecated APIs Removed in 1.16
Kubernetes API Reference 1.21
If you're able to revise these references in the tutorial, then you should do so. You may encounter other APIs that are deprecated and/or removed, but only the latter should cause similar problems.
It may be preferable to "Send feedback" (bottom of tutorial) to Google asking that someone there upgrade the tutorial or provide caveats.
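In the meantime, a possible local workaround is to pull the deprecated chart, update the removed API version in its templates, and install from the patched copy. This is only a sketch, assuming stable/janusgraph can still be pulled from the repo you already have configured and that the templates only need the apps/v1beta1 change (apps/v1 also requires spec.selector on Deployments and StatefulSets, so check the templates set it):
# Download and unpack the deprecated chart into ./janusgraph
helm pull stable/janusgraph --untar
# Replace the removed API version throughout the chart templates
grep -rl "apps/v1beta1" janusgraph/templates | xargs sed -i 's|apps/v1beta1|apps/v1|g'
# Install from the patched local copy instead of the remote chart
helm upgrade --install --wait --timeout 600s janusgraph ./janusgraph -f values.yaml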
Curiously, I noticed that the tutorial includes creating a Kubernetes v1.15 cluster:
gcloud container clusters create janusgraph-tutorial \
--cluster-version=1.15 \
...
Per the above, that version should still support apps/v1beta1 Deployments. Did that not work?
Your question would benefit from additional context and detail. You write "I am getting an error" but do not include the specific step that causes the issue. I assume it was when you attempted to deploy JanusGraph to the cluster using Helm?
helm upgrade --install ... 600s janusgraph stable/janusgraph -f values.yaml
Related
I am attempting to install the Azure Policy extension on a newly deployed Azure Arc-enabled Kubernetes cluster.
az k8s-extension delete --cluster-type connectedClusters --cluster-name azure-arc-test-01 --resource-group arc-enabled-kubernetes-poc --name azurepolicy
However, I am getting the following error:
Code: ExtensionOperationFailed
Message: The extension operation failed with the following error:
Error: {failed to install chart from path [] for release [azurepolicy]:
err [unable to build kubernetes objects from release manifest:
unable to recognize "": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"]} occurred while doing the operation :
{Installing the extension} on the config.
The Kubernetes version is 1.25.
I believe the error might be caused by the fact that PodSecurityPolicy is located in extensions/v1beta1 and not in policy/v1beta1, as discussed here: https://github.com/helm/charts/issues/8789#issuecomment-433811260
I am looking for suggestions on how I could get around this issue. Specifically, could I download the required Helm chart and point it at extensions/v1beta1?
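Note that PodSecurityPolicy was removed entirely in Kubernetes 1.25, so neither policy/v1beta1 nor extensions/v1beta1 is served there. A quick way to confirm what the cluster actually serves (generic kubectl checks, not specific to the extension):
# API versions served for the policy group
kubectl api-versions | grep ^policy
# Does the cluster know about the PodSecurityPolicy kind at all?
kubectl api-resources | grep -i podsecuritypolicy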
If I run
helm upgrade --cleanup-on-fail \
$RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.9.0 \
--values config.yaml
It fails with this error: Error: UPGRADE FAILED: pre-upgrade hooks failed: timed out waiting for the condition. It just hangs for a bit and ultimately times out. It seems like too small a change to cause a true timeout. I found this command in the Zero to JupyterHub docs, which describe how to apply changes to the configuration file.
I've tried several permutations, including leaving out --cleanup-on-fail, leaving out --version, etc. The only thing I could get to work was helm upgrade jhub jupyterhub/jupyterhub, but I don't think it's producing the desired effect.
For example, when I add a line to my config.yaml to change the default interface to JupyterLab, it doesn't work if I run helm upgrade jhub jupyterhub/jupyterhub. I believe I need to specify config.yaml using --values or -f.
My overall project is to set up JupyterHub on a cloud Kubernetes environment. I'm using GKE and the online terminal.
Thanks
Solved: I had specified the tag incorrectly in config.yaml; I used the image digest rather than the actual tag. Here are the images on Docker Hub.
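For anyone hitting the same thing, the image is selected by name and tag in config.yaml, not by digest. A minimal sketch following the Zero to JupyterHub chart's singleuser.image layout (the image name and tag below are placeholders, not values from the original post):
singleuser:
  image:
    name: jupyter/datascience-notebook   # placeholder image name
    tag: "1.0.7"                         # the tag shown on Docker Hub, not the sha256 digest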
I'm following the Zero to JupyterHub GKE installation guide.
I've successfully set up Kubernetes and Helm, but when it comes to the install:
helm upgrade --cleanup-on-fail \
--install $RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--create-namespace \
--version=0.9.0 \
--values config.yaml
The error I keep getting is:
Release "jhub" does not exist. Installing it now.
Error: failed pre-install: timed out waiting for the condition
I've tried adding the flag --timeout 1200s as per the troubleshooting instructions for GKE, but it still hangs.
Any ideas on how to resolve this? If it continues to fail I'll probably just try deploying on Azure AD but ideally GKE should work.
(For reference I've actually done this deploy twice on GKE successfully but had to delete those clusters so I'm really not sure what the issue is.)
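One way to see why the hook hangs is to inspect the hook pods directly while the install is stuck; a sketch, assuming the release namespace is jhub:
# Check whether the hook pods are Pending, ImagePullBackOff, etc.
kubectl get pods --namespace jhub
# Inspect a stuck hook pod (use a name from the previous command)
kubectl describe pod <hook-pod-name> --namespace jhub
# Or review recent events for the whole namespace
kubectl get events --namespace jhub --sort-by='.lastTimestamp'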
EDIT: I resolved the issue.
Further down the config.yaml file, I had somehow changed the tag from 1.0.7 (the version registered via Jupyter) to 1.1.0, so there was a conflict between my Jupyter registry and my JupyterHub config.
I'm still inexperienced with Jupyter and JupyterHub, so apologies if this seemed obvious to others.
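A quick way to catch this kind of mismatch is to compare the tag in config.yaml with the images the pods are actually running; a sketch, again assuming the jhub namespace:
# Show the image (and tag) used by each pod in the namespace
kubectl get pods --namespace jhub -o custom-columns=NAME:.metadata.name,IMAGES:.spec.containers[*].image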
As mentioned in the comments:
The issue was solved by cleaning old configurations from the cluster.
All the information in the posted guide works fine.
I get this error when the previous upgrade has failed.
I cannot upgrade without manually deleting all my pods and services.
Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists.
Unable to continue with update: existing resource conflict: namespace: ns-xy, name: svc-xy, existing_kind: /v1, Kind=Service, new_kind: /v1, Kind=Service
I tried helm upgrade --force, but with no success.
One solution is to delete all the updated services and deployments, but that takes a while and causes a long interruption.
How can I force the upgrade?
The OP doesn't mention which version of Helm is currently being used, so assuming you are using a version earlier than 3.1.0:
Upgrade Helm to 3.2.4 (the current 3.2 version).
Label and annotate the resource you want to upgrade (as per #7649):
KIND=deployment
NAME=my-app-staging
RELEASE=staging
NAMESPACE=default
kubectl annotate $KIND $NAME meta.helm.sh/release-name=$RELEASE --overwrite
kubectl annotate $KIND $NAME meta.helm.sh/release-namespace=$NAMESPACE --overwrite
kubectl label $KIND $NAME app.kubernetes.io/managed-by=Helm
Run your helm upgrade command as you were before.
This should tell Helm that it is okay to take over the existing resource and begin managing it. The procedure also works for API upgrades (like "apps/v1beta2" changing to "apps/v1") or for onboarding existing resources into a namespace.
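Applied to the conflicting Service from the error message above, it would look something like this (the namespace ns-xy and service svc-xy come from the error; the release name is whatever you pass to helm upgrade):
KIND=service
NAME=svc-xy
RELEASE=my-release        # replace with your actual release name
NAMESPACE=ns-xy
kubectl -n $NAMESPACE annotate $KIND $NAME meta.helm.sh/release-name=$RELEASE --overwrite
kubectl -n $NAMESPACE annotate $KIND $NAME meta.helm.sh/release-namespace=$NAMESPACE --overwrite
kubectl -n $NAMESPACE label $KIND $NAME app.kubernetes.io/managed-by=Helm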
List the services:
kubectl get service
Delete the conflicting ones:
kubectl delete service <service-name>
And then run helm upgrade as normal.
I am trying to install Istio 1.4.0 (coming from 1.3.2) and I am running into the following issue when I run:
$ istioctl manifest apply --set values.global.mtls.enabled=true --set values.grafana.enabled=true --set values.kiali.enabled=true
I'm following the instructions in the documentation:
$ curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.4.0 sh -
$ cd istio-1.4.0
$ export PATH=$PWD/bin:$PATH
When I run istioctl manifest apply, I'm able to install the majority of the components, but I keep getting the following message for each Istio-specific CRD:
error: unable to recognize "STDIN": no matches for kind "DestinationRule" in version "networking.istio.io/v1alpha3" (repeated 1 times)
Is there a step I'm missing? I'm simply following the documentation so I'm not sure where I'm going wrong here.
If anyone runs into this issue, check what Kubernetes version your nodes are on (kubectl get nodes). Upgrading my EKS cluster from 1.11 to 1.12 fixed the issue when installing with istioctl.
Also, I hadn't noticed this in their docs for installing 1.4.0 with istioctl:
"Before you can install Istio, you need a cluster running a compatible version of Kubernetes. Istio 1.4 has been tested with Kubernetes releases 1.13, 1.14, 1.15."
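Before retrying, you can check the node versions and whether the cluster is actually serving the Istio networking API group; a couple of generic checks:
# Kubernetes version of each node
kubectl get nodes
# Is the networking.istio.io group served (i.e. are the Istio CRDs established)?
kubectl api-resources --api-group=networking.istio.io
kubectl get crd destinationrules.networking.istio.io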