I am trying to download a release of the aws-efs-csi-driver, but I'm getting the following error
Error: Kubernetes cluster unreachable: Get "http://localhost:8080/version": dial tcp 127.0.0.1:8080: connect: connection refused
I tried export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
I tried kubectl config view --raw >~/.kube/config
Both resulted in the same error as when they were not set. I'm new to Helm and EKS. Looking for any suggestions, thanks!
helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
helm repo update --kubeconfig ./cluster_config
kubectl config view --raw >~/.kube/config
helm upgrade -i aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
--namespace kube-system \
--set image.repository=602401143452.dkr.ecr.$1.amazonaws.com/eks/aws-efs-csi-driver \
--set controller.serviceAccount.create=false \
--set controller.serviceAccount.name=efs-csi-controller-sa
You can simply use the --kubeconfig flag in your command and point it to the existing kubeconfig file that you've created in step 3 at ~/.kube/config, like this:
helm upgrade -i aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
--namespace kube-system \
--set image.repository=602401143452.dkr.ecr.$1.amazonaws.com/eks/aws-efs-csi-driver \
--set controller.serviceAccount.create=false \
--set controller.serviceAccount.name=efs-csi-controller-sa \
--kubeconfig ~/.kube/config
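Since this is an EKS cluster, the kubeconfig can also be generated directly with the AWS CLI rather than copied from k3s. A minimal sketch, where my-cluster and us-east-1 are placeholders for your actual cluster name and region:
aws eks update-kubeconfig --name my-cluster --region us-east-1
kubectl get nodes
If kubectl get nodes answers, Helm will reach the same cluster even without the --kubeconfig flag, since both tools read KUBECONFIG (or ~/.kube/config) by default.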
So I am trying to deploy Rancher on my K3s cluster.
I installed it using Helm, following the Rancher documentation.
While I can reach it through my load balancer, I cannot find the secret to insert into the setup.
They describe the following command for getting the token:
kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'
When I run this I get the following error
Error from server (NotFound): secrets "bootstrap-secret" not found
I also cannot find the bootstrap-secret inside the cattle-system namespace.
Can somebody help me out with where I need to look?
I had the same problem and figured it out with the following commands:
I installed the Helm chart with --set bootstrapPassword=Changeme123!, for example:
helm upgrade --install \
--namespace cattle-system \
--set hostname=rancher.example.com \
--set replicas=3 \
--set bootstrapPassword=Changeme123! \
rancher rancher-stable/rancher
I forced a hard reset, because even though I had set the bootstrap password in the Helm install command, I was not able to log in. So, I used the following command to hard reset:
kubectl -n cattle-system exec $(kubectl -n cattle-system get pods -l app=rancher | grep '1/1' | head -1 | awk '{ print $1 }') -- reset-password
So, I hope that can help you.
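Once the chart is installed with an explicit bootstrapPassword, the bootstrap-secret from the original question should exist and the documented command can be used to read the password back:
kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'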
I am trying to do a helm upgrade dry run.
1.
helm upgrade -i $xyz-abc-ms xyz-abc-exe/target/classes/helm/xyz-abc \
--set jobs.helmServiceAccount=jenkins,csbEnabledLocal=false,jacoco.enabled=true,containerinfo.imageTag=${DOCKER_BUILD_NUMBER},pki.sslenabled=false,pki.kafkaEnabled=true,runtimeContainerInfo.image=fnd-base-images/ocp-os-java-msnext,couchbase.serviceName=oc-cb-02 \
--tiller-namespace=$(oc project -q) \
--namespace $(oc project -q) \
--debug \
--dry-run
But I get the error below:
Error: unknown flag: --tiller-namespace
helm.go:81: [debug] unknown flag: --tiller-namespace
2.
I think the tiller-namespace flag was removed in Helm 3. So I tried the below:
helm upgrade -i $xyz-abc-ms xyz-abc-exe/target/classes/helm/xyz-abc \
--set jobs.helmServiceAccount=jenkins,csbEnabledLocal=false,jacoco.enabled=true,containerinfo.imageTag=${DOCKER_BUILD_NUMBER},pki.sslenabled=false,pki.kafkaEnabled=true,runtimeContainerInfo.image=fnd-base-images/ocp-os-java-msnext,couchbase.serviceName=oc-cb-02 \
--namespace $(oc project -q) \
--debug \
--dry-run
But now I am getting the error below:
Error: unknown shorthand flag: 'q' in -q)
helm.go:81: [debug] unknown shorthand flag: 'q' in -q)
Can someone help me with the correct command here?
Without -q, when I try as below:
helm upgrade -i $xyz-abc-ms xyz-abc-exe/target/classes/helm/xyz-abc \
--set jobs.helmServiceAccount=jenkins,csbEnabledLocal=false,jacoco.enabled=true,containerinfo.imageTag=${DOCKER_BUILD_NUMBER},pki.sslenabled=false,pki.kafkaEnabled=true,runtimeContainerInfo.image=fnd-base-images/ocp-os-java-msnext,couchbase.serviceName=oc-cb-02 ) \
--namespace $(oc project) \
--debug \
--dry-run
It fails with the error below:
Error: "helm upgrade" requires 2 arguments
Usage: helm upgrade [RELEASE] [CHART] [flags]
helm.go:81: [debug] "helm upgrade" requires 2 arguments
What's the proper command for this?
Yeah, Tiller is not even used by Helm 3.
This article talks about why it was needed in Helm 2 and why they eventually removed it, but if you want a very short summary, here it is:
Helm takes your YAML and template files and has to add the resulting objects to Kubernetes, right? Tiller did that job, but in order to do it, it needed close to maximum permissions. In Helm 3, they dropped Tiller and rely on the authorization that comes with Kubernetes itself.
Now let's go back to your problem. You should drop the --tiller-namespace flag, as you have already done. As for the q flag, helm upgrade doesn't even use it; it seems like the oc project -q command substitution is the part that's failing.
I was able to do it with this command:
helm upgrade -i xyz-abc xyz-abc-exe/target/classes/helm/xyz-abc --debug --dry-run
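If the namespace still has to come from the current OpenShift project, one sketch (assuming oc is logged in and a project is selected) is to resolve it into a shell variable first, so the command substitution stays out of the helm arguments:
NS=$(oc project -q)
helm upgrade -i xyz-abc xyz-abc-exe/target/classes/helm/xyz-abc \
--namespace "$NS" \
--debug \
--dry-run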
istioctl kube-inject \
--injectConfigFile inject-config.yaml \
--meshConfigFile mesh-config.yaml \
--valuesFile inject-values.yaml \
--filename samples/sleep/sleep.yaml \
| kubectl apply -f -
While trying to inject the Istio sidecar container manually into a pod, I got this error:
Error: template: inject:469: function "appendMultusNetwork" not defined
https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/
As mentioned in the comments, I have tried to reproduce your issue on GKE with Istio 1.7.4 installed.
I've followed the documentation you mentioned and it worked without any issues.
1. Install istioctl and the Istio default profile
curl -sL https://istio.io/downloadIstioctl | sh -
export PATH=$PATH:$HOME/.istioctl/bin
istioctl install
2. Create the samples/sleep directory and create sleep.yaml, for example with vi.
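Instead of typing it by hand, a sketch that fetches the upstream sample manifest (assuming the Istio 1.7 release branch matches your installation):
mkdir -p samples/sleep
curl -sL https://raw.githubusercontent.com/istio/istio/release-1.7/samples/sleep/sleep.yaml -o samples/sleep/sleep.yaml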
3. Create local copies of the configuration.
kubectl -n istio-system get configmap istio-sidecar-injector -o=jsonpath='{.data.config}' > inject-config.yaml
kubectl -n istio-system get configmap istio-sidecar-injector -o=jsonpath='{.data.values}' > inject-values.yaml
kubectl -n istio-system get configmap istio -o=jsonpath='{.data.mesh}' > mesh-config.yaml
4. Apply it with istioctl kube-inject
istioctl kube-inject \
--injectConfigFile inject-config.yaml \
--meshConfigFile mesh-config.yaml \
--valuesFile inject-values.yaml \
--filename samples/sleep/sleep.yaml \
| kubectl apply -f -
5. Verify that the sidecar has been injected
kubectl get pods
NAME READY STATUS RESTARTS AGE
sleep-5768c96874-m65bg 2/2 Running 0 105s
So there are a few things worth checking, as they might be causing this issue:
Could you please check whether you executed all of the commands correctly?
Maybe you are running an older version of Istio and should follow the older documentation? (A quick check is sketched below.)
Maybe you changed something in the local copies of the configuration above, and that caused the issue? If so, what exactly did you change?
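As a quick sketch for the version question above, comparing the client and control-plane versions and confirming the injector ConfigMap exists can narrow things down:
istioctl version
kubectl -n istio-system get configmap istio-sidecar-injector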
When I try to install DB2 with the command:
helm install --name stocktrader-db2 ibm-charts/ibm-db2oltp-dev --tls \
--set db2inst.instname=db2inst1 --set db2inst.password=start1a \
--set options.databaseName=STRADER --set peristence.useDynamicProvisioning=true \
--set dataVolume.size=20Gi --set dataVolume.storageClassName=ibmc-block-gold
I get the following error message:
could not read x509 key pair (cert: "/Users/name/.helm/cert.pem", key:
"/Users/name/.helm/key.pem"): can't load key pair from cert
/Users/name/.helm/cert.pem and key /Users/name/.helm/key.pem: open
/Users/name/.helm/cert.pem: no such file or directory
=> What is the default directory for the files cert.pem and key.pem?
I think you are following their README.md; the installation instructions there assume you have Tiller set up in your cluster with TLS enabled.
If you remove the --tls flag from the command (helm install --name stocktrader-db2 ibm-charts/ibm-db2oltp-dev --set db2inst.instname=db2inst1 --set db2inst.password=start1a --set options.databaseName=STRADER --set peristence.useDynamicProvisioning=true --set dataVolume.size=20Gi --set dataVolume.storageClassName=ibmc-block-gold) it will not attempt to find the certificates.
If you need TLS between helm and tiller, follow this link. Also, per this link, copy the certificates into helm's home directory:
$ cp ca.cert.pem $(helm home)/ca.pem
$ cp helm.cert.pem $(helm home)/cert.pem
$ cp helm.key.pem $(helm home)/key.pem
Then, run the helm install --name stocktrader-db2 ... command.
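As a quick check (a sketch, not part of the chart's own instructions): with Helm 2, helm version talks to Tiller, so if it reports both client and server versions without the --tls flag, Tiller is most likely not enforcing TLS and you can simply leave the flag off.
helm version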
I removed the --tls flag from the following command:
helm install --name stocktrader-db2 ibm-charts/ibm-db2oltp-dev \
--tls \
--set db2inst.instname=db2inst1 \
--set db2inst.password=ThisIsMyPassword \
--set options.databaseName=STRADER \
--set peristence.useDynamicProvisioning=true \
--set dataVolume.size=20Gi \
--set dataVolume.storageClassName=glusterfs
If TLS is needed, the Helm configuration can be done via the following installation procedure:
https://helm.sh/docs/using_helm/#securing-your-helm-installation
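For reference, that procedure essentially comes down to initializing Tiller with the certificate material; a sketch, assuming ca.pem, cert.pem, and key.pem were generated as the guide describes:
helm init \
--tiller-tls \
--tiller-tls-verify \
--tiller-tls-cert cert.pem \
--tiller-tls-key key.pem \
--tls-ca-cert ca.pem \
--service-account tiller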
I have a Kubernetes (v1.10) cluster with Istio installed, and I'm trying to install Fission following the Enabling Istio on Fission guide. When I run
helm install --namespace $FISSION_NAMESPACE --set enableIstio=true --name istio-demo \
https://github.com/fission/fission/releases/download/0.9.1/fission-all-0.9.1.tgz
It throws an error saying:
Error: the server has asked for the client to provide credentials
(My cluster has two nodes and one master, created using Kubespray, all Ubuntu 16.04 machines.)
I think that error is probably an authentication failure between helm and the cluster. Are you able to run kubectl version? How about helm ls?
If you have follow up questions, could you ask them on the fission slack? You'll get quicker answers there.
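For example, two quick checks:
kubectl version
helm ls
If kubectl version can reach the API server but helm ls fails, the problem is more likely on the Helm/Tiller side than with your kubeconfig.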
I think the problem is with Helm.
Solution:
Remove the .helm folder:
rm -rf .helm
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
helm init --service-account=tiller
kubectl get pods -n kube-system
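Once the tiller-deploy pod shows Running, a quick sanity check (a sketch) is to confirm Helm can reach Tiller again and then retry the Fission install:
helm version
helm install --namespace $FISSION_NAMESPACE --set enableIstio=true --name istio-demo \
https://github.com/fission/fission/releases/download/0.9.1/fission-all-0.9.1.tgz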