AKS Ingress controller install, helm error - kubernetes-helm

I am very new to Azure Kubernetes and am just trying to get a test cluster set up. I am at the point where I need to add an ingress controller, so I am following the Microsoft guide here:
https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-powershell
I am attempting to create the ingress controller in PowerShell using the Helm script from the guide:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$AcrUrl = (Get-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $RegistryName).LoginServer
helm install nginx-ingress ingress-nginx/ingress-nginx `
--namespace kuber-ut `
--set controller.replicaCount=2 `
--set controller.nodeSelector."kubernetes\.io/os"=linux `
--set controller.image.registry=$AcrUrl `
--set controller.image.image=$ControllerImage `
--set controller.image.tag=$ControllerTag `
--set controller.image.digest="" `
--set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux `
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz `
--set controller.admissionWebhooks.patch.image.registry=$AcrUrl `
--set controller.admissionWebhooks.patch.image.image=$PatchImage `
--set controller.admissionWebhooks.patch.image.tag=$PatchTag `
--set controller.admissionWebhooks.patch.image.digest="" `
--set defaultBackend.nodeSelector."kubernetes\.io/os"=linux `
--set defaultBackend.image.registry=$AcrUrl `
--set defaultBackend.image.image=$DefaultBackendImage `
--set defaultBackend.image.tag=$DefaultBackendTag `
--set defaultBackend.image.digest=""
When I run this, I get the error:
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: IngressClass "nginx" in namespace "" exists and cannot be imported into the current
release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "nginx-ingress": current value is "ingress-nginx"; annotation validation error: key
"meta.helm.sh/release-namespace" must equal "kuber-ut": current value is "ingress-basic"
It says that the resource already exists, but in the Azure portal I see no ingress controllers, and I also do not see the namespace ingress-basic. I just wanted to create an ingress controller in my namespace kuber-ut, but apparently one already exists? I just can't see it in the portal?

Please follow the steps below:
Open PowerShell and sign in to Azure by using the az login command.
After logging in, connect to your AKS cluster by using the command below:
az aks get-credentials --resource-group "your resource group name" --name "your aks cluster name"
After connecting to the AKS cluster, use the command below to list the Ingress resources in your namespace:
kubectl get ingress -n kuber-ut
After getting the Ingress list, delete the conflicting one and reinstall:
kubectl delete ingress -n kuber-ut "youringressname"
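Note that the error message itself points at the cluster-scoped IngressClass `nginx` left behind by an earlier release (`ingress-nginx` in namespace `ingress-basic`). Cluster-scoped resources do not belong to any namespace, which is likely why nothing shows up in the portal or in your namespace. A possible cleanup, assuming no other release still depends on that IngressClass:

```shell
# Show the leftover IngressClass and its Helm ownership annotations
kubectl get ingressclass nginx -o yaml

# Delete it so the new release can create and own its own copy
kubectl delete ingressclass nginx
```

After that, rerunning the helm install command from the guide should no longer hit the ownership-metadata conflict.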

Related

AKS nginx-ingress controller ACR

I'm unable to install the nginx ingress controller on AKS. Since I'm using userDefinedRouting as the outboundType for egress, when running
helm install nginx-ingress nginx-stable/nginx-ingress -n ingress --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"='"true"' --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal-subnet"=x-x-x-aks-ingress-sub01 --set controller.service.loadBalancerIP="10.240.137.40"
I can see that it fails to download the image because the root CA is not on the worker node, and hence it is unable to verify the SSL certificate. This is actually expected, so I've uploaded the nginx image to my ACR:
docker pull nginx/nginx-ingress:2.2.2
docker tag nginx/nginx-ingress:2.2.2 nameofacr.azurecr.io/hub/nginx/nginx-ingress:2.2.2
docker push nameofacr.azurecr.io/hub/nginx/nginx-ingress:2.2.2
If I look in the values.yaml file I see the image settings. I've followed "how to helm install using a private registry" and think that I've added the tag as required, but I can't figure out how to run the command now so that it will pull the image from my ACR.
What I've tried:
helm install nginx-ingress nameofacr.azurecr.io/hub/nginx/nginx-ingress -n ingress --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"='"true"' --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal-subnet"=x-x-x-x-aks-ingress-sub01 --set controller.service.loadBalancerIP="10.240.137.40"
failed with Error: INSTALLATION FAILED: failed to download "nameofacr.azurecr.io/hub/nginx/nginx-ingress"
or
helm install nginx-ingress --set Image=nameofacr.azurecr.io nginx/nginx-ingress -n ingress --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"='"true"' --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal-subnet"=x-x-x-x-aks-ingress-sub01 --set controller.service.loadBalancerIP="10.240.137.40"
resulted in Error: INSTALLATION FAILED: failed to download "nginx/nginx-ingress"
I can't get this to work. Any help please?
Use this chart: ingress-nginx/ingress-nginx. The official documentation explains how to import images into a private ACR.
You can use these commands to import the images:
az acr import --name nameofacr --source k8s.gcr.io/ingress-nginx/controller:v1.2.1 --force
az acr import --name nameofacr --source k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1 --force
Also, pin the helm chart version (helm install nginx-ingress ingress-nginx/ingress-nginx --version 4.1.3) to be sure you use the images matching a specific values.yaml.
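Putting it together, a possible install command pointing the chart at the imported ACR images. This is a sketch: the registry name, namespace, and annotation are taken from your question, and it assumes az acr import kept the default repository paths (ingress-nginx/controller and ingress-nginx/kube-webhook-certgen), which match the chart's default image names:

```shell
helm install nginx-ingress ingress-nginx/ingress-nginx --version 4.1.3 \
  --namespace ingress \
  --set controller.image.registry=nameofacr.azurecr.io \
  --set controller.image.tag=v1.2.1 \
  --set controller.image.digest="" \
  --set controller.admissionWebhooks.patch.image.registry=nameofacr.azurecr.io \
  --set controller.admissionWebhooks.patch.image.tag=v1.1.1 \
  --set controller.admissionWebhooks.patch.image.digest="" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"='"true"'
```

Clearing controller.image.digest is needed because the chart pins images by digest by default, and the digest of your imported copy may differ.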

Rancher helm chart, cannot find secret bootstrap-secret

So I am trying to deploy rancher on my K3S cluster.
I installed it using the documentation and helm: Rancher documentation
While I can access it through my load balancer, I cannot find the secret to insert into the setup.
They describe the following command for getting the token:
kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'
When I run this I get the following error:
Error from server (NotFound): secrets "bootstrap-secret" not found
And I also cannot find the bootstrap-secret inside the cattle-system namespace.
So can somebody help me out with where I need to look?
I had the same problem and figured it out with the following commands:
I installed the helm chart with "--set bootstrapPassword=Changeme123!", for example:
helm upgrade --install \
--namespace cattle-system \
--set hostname=rancher.example.com \
--set replicas=3 \
--set bootstrapPassword=Changeme123! \
rancher rancher-stable/rancher
I then forced a hard reset, because even though I had set the bootstrap password in the helm install command, I was not able to log in. So I used the following command to hard reset:
kubectl -n cattle-system exec $(kubectl -n cattle-system get pods -l app=rancher | grep '1/1' | head -1 | awk '{ print $1 }') -- reset-password
So, I hope that can help you.
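Before resorting to a reset, it may also be worth checking which secrets Rancher actually created in its namespace (a diagnostic sketch; the second command is the one from the question and only works if the secret exists):

```shell
# List every secret in Rancher's namespace to see what actually exists
kubectl -n cattle-system get secrets

# If bootstrap-secret is present, decode the password as the docs describe
kubectl get secret --namespace cattle-system bootstrap-secret \
  -o go-template='{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'
```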

Istio installation failed with private docker registry

Bug description
Installation gets timeout errors and in kubectl get pods -n istio-system shows ImagePullBackOff.
kubectl describe pod istiod-xxx-xxx -n istio-system
Failed to pull image "our-registry:5000/pilot:1.10.3": rpc error: code = Unknown desc = Error response from daemon: Head https://our-registry:5000/v2/pilot/manifests/1.10.3: no basic auth credentials
Affected product area (please put an X in all that apply)
[x] Installation
Expected behavior
Successful installation with istioctl install --set profile=demo --set hub=our-registry:5000
Steps to reproduce the bug
Create istio-system namespace.
Set docker-registry user credentials for istio-system namespace.
istioctl manifest generate --set profile=demo --set hub=our-registry:5000 > new-generated-manifest.yaml
Verify it has proper images with our-registry:5000
Pull and push required images to our-registry:5000
istioctl install --set profile=demo --set hub=our-registry:5000
Version
Kubernetes : v1.21
Istio : 1.10.3 / 1.7.3
How was Istio installed?
istioctl install --set profile=demo --set hub=our-registry:5000
[References]
1. Tried to set up imagePullSecrets as described here, but it gives a JSON object error.
2. This page describes using it in charts, but I don't know how they applied it.
Originally posted as an issue.
There are two ways to circumvent this issue.
If installing with istioctl install
Using istioctl install, provide a secret with the docker-registry auth details via --set values.global.imagePullSecrets, like this:
istioctl install [other options] --set values.global.imagePullSecrets[0]=<auth-secret>
where <auth-secret> is a secret created beforehand on the cluster.
You can read more about using secrets with a docker registry here.
If installing using the Istio operator
When installing Istio with the operator from a private registry, you have to pass the proper YAML:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
...
spec:
  profile: demo # as an example
  values:
    global:
      imagePullSecrets:
        - <auth-secret>
...
Again, <auth-secret> must be created beforehand.
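For completeness, the <auth-secret> referenced above can be created with kubectl's built-in docker-registry secret type. A sketch, where the secret name, username, and password are placeholders to replace with your own values:

```shell
# Create a docker-registry pull secret in the istio-system namespace;
# the credentials are placeholders for your private registry's auth
kubectl create secret docker-registry <auth-secret> \
  --namespace istio-system \
  --docker-server=our-registry:5000 \
  --docker-username=<username> \
  --docker-password=<password>
```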

Error: This command needs 1 argument: chart name

I am following the Install OneAgent on Kubernetes official instructions, and while doing this I am getting the error mentioned in the title. When I add --name after helm install, I am getting:
Error: apiVersion 'v2' is not valid. The value must be "v1"
helm instructions:
helm install dynatrace-oneagent-operator \
  dynatrace/dynatrace-oneagent-operator \
  -n dynatrace --values values.yaml
Well, if you're using this helm chart it's stated in its description that it requires helm 3:
The Dynatrace OneAgent Operator Helm Chart which supports the rollout
and lifecycle of Dynatrace OneAgent in Kubernetes and OpenShift
clusters.
This Helm Chart requires Helm 3. 👈
and you use Helm 2:
Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"} Server: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
As to your error message:
Error: apiVersion 'v2' is not valid. The value must be "v1"
it can be expected on helm 2 when running a chart that requires helm 3, as the apiVersion has been incremented from v1 to v2 only in helm 3. In fact, this is one of the major differences between the two releases of helm. You can read more about it here:
Chart apiVersion:
Helm decided to increment the chart apiVersion to v2 in Helm 3:
# Chart.yaml
-apiVersion: v1 # Helm2
+apiVersion: v2 # Helm3
...
You can install Helm 3 easily by following this official guide.
Note that apart from using helm chart, you can also deploy OneAgent Operator on Kubernetes with kubectl and as you can read in the official dynatrace docs this is actually the recommended way of installation:
We recommend installing OneAgent Operator on Kubernetes with kubectl.
If you prefer Helm, you can use the OneAgent Helm chart as a basic
alternative.
These errors were resolved for me!
#This command needs 1 argument: chart name
#apiVersion 'v2' is not valid. The value must be "v1"
#release seq-charts failed: namespaces "seq" is forbidden: User
"system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API
group "" in the namespace "seq"
I started using local PowerShell for Azure Kubernetes.
These errors started when I made some changes to the Windows environment, but my solution might work for you too.
PS C:\Users\{User}> Connect-AzAccount
PS C:\Users\{User}> Set-AzContext 'Subscription Name or ID'
PS C:\Users\{User}> az configure --defaults group=AKS
PS C:\Users\{User}> kubectl create namespace seq
PS C:\Users\{User}> kubectl create namespace prometheus-log
PS C:\Users\{User}> C:\ProgramData\chocolatey\choco upgrade chocolatey
PS C:\Users\{User}> C:\ProgramData\chocolatey\choco install kubernetes-helm
After that.
PS C:\Users\{User}> helm install --name prometheus prometheus-community/kube-prometheus-stack --namespace prometheus-log
Error: This command needs 1 argument: chart name
After that, I tried this.
PS C:\Users\{User}> C:\Users\vahem\.azure-helm\helm install --name prometheus prometheus-community/kube-prometheus-stack --namespace prometheus-log
Error: apiVersion 'v2' is not valid. The value must be "v1"
After that, I tried this.
PS C:\Users\{User}> helm install --name seq-charts --namespace seq --set persistence.existingClaim=seq-pvc stable/seq
Error: release seq-charts failed: namespaces "seq" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "seq"
After much trial and error, I discovered that there are two different versions of helm on the system:
C:\Users\{User}\.azure-helm => v2.x.x
C:\ProgramData\chocolatey\lib\kubernetes-helm\tools\windows-amd64\helm => v3.x.x
Finally I tried this and it worked great: using helm v3.x.x and no '--name' parameter.
PS C:\Users\{User}> C:\ProgramData\chocolatey\lib\kubernetes-helm\tools\windows-amd64\helm repo update
PS C:\Users\{User}> C:\ProgramData\chocolatey\lib\kubernetes-helm\tools\windows-amd64\helm install seq-charts --namespace seq --set persistence.existingClaim=seq-pvc stable/seq
PS C:\Users\{User}> C:\ProgramData\chocolatey\lib\kubernetes-helm\tools\windows-amd64\helm install prometheus prometheus-community/kube-prometheus-stack --namespace prometheus-log --set persistence.existingClaim=prometheus-pvc
It worked great for me!
Please upgrade your helm version to 3, unless you are using a Tillerless version of helm 2.
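If you are not sure which helm binary your shell resolves first, you can check the path and version explicitly before installing anything (a quick diagnostic; the first command is for Windows shells):

```shell
# Show which helm executable is first on the PATH
where.exe helm        # PowerShell/cmd; use `which helm` on Linux/macOS

# Print the client version; v2.x.x requires --name, v3.x.x does not
helm version --short
```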

Pass Annotation (To create Private load balancer ) to helm while installing ISTIO on Azure Kubernetes service

Hi, I am trying to install Istio with helm on Azure Kubernetes Service.
I want to pass the value below to Istio so that it will request a private IP on Azure:
annotations: {"service.beta.kubernetes.io/azure-load-balancer-internal": "true"}
Can someone let me know how I can pass this in the helm command so that it will override the annotations in values.yaml?
This is the helm command I am using, but it gives me an error:
helm install /opt/istio/istio-1.0.4/install/kubernetes/helm/istio --name istio --namespace istio-system --set gateways.istio-ingressgateway.serviceAnnotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"=true
I was able to create a private LB with the following helm command. Note the annotation value is passed as '"true"' (a quoted string) rather than the bare true from your command: Kubernetes annotation values must be strings, so the extra quoting is what makes this version work:
helm install /opt/istio-1.0.4/install/kubernetes/helm/istio --name istio --namespace istio-system --set servicegraph.enabled=true --set servicegraph.enable=true --set tracing.enabled=true --set grafana.enabled=true --set kiali.enabled=true --set prometheus.enabled=true --set gateways.istio-ingressgateway.serviceAnnotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"='"true"'