Kubernetes service deploying in default namespace instead of defined namespace using Helm

I am trying to deploy my microservice on a Kubernetes cluster in two different environments, dev and test, and I am using a Helm chart to deploy the Kubernetes service. I deploy the chart from a Jenkinsfile, where I added the helm command inside a stage like the following:
stage('helmchartinstall') {
    steps {
        sh 'helm upgrade --install kubekubedeploy --namespace test pipeline/spacestudychart'
    }
}
Here I am passing the --namespace test parameter, but when it deploys, the console output shows the default namespace. I have already created the test and prod namespaces.
When I checked the Helm version, I got the following response:
docker@mildevdcr01:~$ helm version
Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
Have I made any mistake here in defining the namespace?

The most likely issue here is that the chart already specifies default as metadata.namespace, which in Helm 2 is not overridden by the --namespace parameter.
If this is the cause, the solution is to remove the namespace from the template metadata, or to make it a template parameter (i.e. take it from the release).
Also see https://stackoverflow.com/a/51137448/1977182.
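For illustration, a minimal sketch of the second option (taking the namespace from the release), assuming a hypothetical templates/deployment.yaml in the chart: drop any hard-coded "namespace: default" and reference the release namespace instead, so --namespace test takes effect.
# templates/deployment.yaml (hypothetical sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-spacestudychart
  # instead of a hard-coded "namespace: default", use the release namespace
  namespace: {{ .Release.Namespace }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: app
          image: nginx   # placeholder image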

Approach 1:
export TILLER_NAMESPACE=your_namespace
helm upgrade -i -n release_name chart.tgz
Approach 2:
helm upgrade -i -n release_name --namespace your_namespace chart.tgz

Related

Failed to create NodePort error, after deploying ingress
I have an Ingress defined as in the screenshot below:
Screenshot
The two replicas of the Ingress server are not spinning up due to the Failed to create NodePort error. Please advise.
Just like the error says, you are missing the NodePortPods CRD. It looks like that CRD existed at some point in time, but I don't see it in the repo anymore. You didn't specify how you deployed the ingress operator, but you can make sure you install the latest:
helm repo add appscode https://charts.appscode.com/stable/
helm repo update
helm search repo appscode/voyager --version v13.0.0
# Generate the template to check, or use helm install
helm template voyager-operator appscode/voyager --version v13.0.0 \
  --namespace kube-system --no-hooks \
  --set cloudProvider=baremetal   # use the right cloud provider here
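If the rendered template looks right, the install itself follows the same pattern (a sketch based on the command above; adjust cloudProvider and the namespace to match your environment):
helm install voyager-operator appscode/voyager --version v13.0.0 \
  --namespace kube-system \
  --set cloudProvider=baremetal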

Error: This command needs 1 argument: chart name

I am following the Install OneAgent on Kubernetes official instructions, and while doing this I am getting the error mentioned in the title. When I add --name after helm install, I get:
Error: apiVersion 'v2' is not valid. The value must be "v1"
The helm instructions are:
helm install dynatrace-oneagent-operator \
  dynatrace/dynatrace-oneagent-operator \
  -n dynatrace --values values.yaml
Well, if you're using this Helm chart, it's stated in its description that it requires Helm 3:
The Dynatrace OneAgent Operator Helm Chart which supports the rollout
and lifecycle of Dynatrace OneAgent in Kubernetes and OpenShift
clusters.
This Helm Chart requires Helm 3.
and you use Helm 2:
Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"} Server: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
As to your error message:
Error: apiVersion 'v2' is not valid. The value must be "v1"
it can be expected on Helm 2 when running a chart that requires Helm 3, as the chart apiVersion was incremented from v1 to v2 only in Helm 3. In fact this is one of the major differences between the two releases of Helm. You can read more about it here:
Chart apiVersion:
Helm decided to increment the chart apiVersion to v2 in Helm 3:
# Chart.yaml
-apiVersion: v1 # Helm2
+apiVersion: v2 # Helm3
...
You can install Helm 3 easily by following this official guide.
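For reference, a rough sketch of the script-based install described in that guide (double-check the guide itself in case the URL or steps have changed):
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version --short   # should now report v3.x.x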
Note that apart from using the Helm chart, you can also deploy OneAgent Operator on Kubernetes with kubectl, and as you can read in the official Dynatrace docs, this is actually the recommended way of installation:
We recommend installing OneAgent Operator on Kubernetes with kubectl.
If you prefer Helm, you can use the OneAgent Helm chart as a basic
alternative.
I resolved these errors as follows:
# This command needs 1 argument: chart name
# apiVersion 'v2' is not valid. The value must be "v1"
# release seq-charts failed: namespaces "seq" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "seq"
I started using local PowerShell for Azure Kubernetes. These errors started after I made some changes to my Windows environment, but the same fix might work for you too.
PS C:\Users\{User}> Connect-AzAccount
PS C:\Users\{User}> Set-AzContext 'Subscription Name or ID'
PS C:\Users\{User}> az configure --defaults group=AKS
PS C:\Users\{User}> kubectl create namespace seq
PS C:\Users\{User}> kubectl create namespace prometheus-log
PS C:\Users\{User}> C:\ProgramData\chocolatey\choco upgrade chocolatey
PS C:\Users\{User}> C:\ProgramData\chocolatey\choco install kubernetes-helm
After that:
PS C:\Users\{User}> helm install --name prometheus prometheus-community/kube-prometheus-stack --namespace prometheus-log
Error: This command needs 1 argument: chart name
After that, I tried this.
PS C:\Users\{User}> C:\Users\{User}\.azure-helm\helm install --name prometheus prometheus-community/kube-prometheus-stack --namespace prometheus-log
Error: apiVersion 'v2' is not valid. The value must be "v1"
After that, I tried this.
PS C:\Users\{User}> helm install --name seq-charts --namespace seq --set persistence.existingClaim=seq-pvc stable/seq
Error: release seq-charts failed: namespaces "seq" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "seq"
After much trial and error, I discovered that there are two different versions of helm on the system:
C:\Users\{User}\.azure-helm => v2.x.x
C:\ProgramData\chocolatey\lib\kubernetes-helm\tools\windows-amd64\helm => v3.x.x
Finally I tried the following with helm v3.x.x and without the --name parameter, and it worked great:
PS C:\Users\{User}> C:\ProgramData\chocolatey\lib\kubernetes-helm\tools\windows-amd64\helm repo update
PS C:\Users\{User}> C:\ProgramData\chocolatey\lib\kubernetes-helm\tools\windows-amd64\helm install seq-charts --namespace seq --set persistence.existingClaim=seq-pvc stable/seq
PS C:\Users\{User}> C:\ProgramData\chocolatey\lib\kubernetes-helm\tools\windows-amd64\helm install prometheus prometheus-community/kube-prometheus-stack --namespace prometheus-log --set persistence.existingClaim=prometheus-pvc
It worked great for me!
Please upgrade your Helm version to 3, unless you are using a Tillerless version of Helm 2.
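If you already have releases deployed with Helm 2, the helm-2to3 plugin can migrate them; a rough sketch (the release name is a placeholder, and you should read the plugin docs before running cleanup):
helm plugin install https://github.com/helm/helm-2to3
helm 2to3 move config              # migrate Helm 2 repos and plugins to Helm 3
helm 2to3 convert <release-name>   # convert a Helm 2 release to Helm 3
helm 2to3 cleanup                  # remove Helm 2 config and Tiller once everything is migrated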

Use Gitlab-installed Helm from CLI. Could not find tiller

I've created a kubernetes cluster on AWS using Kops, and I've correctly configured the cluster on Gitlab.
I've installed Helm Tiller and Ingress from Gitlab's panel, but I now wish to uninstall the Ingress chart.
I'm not sure how to uninstall the Ingress chart. What I'm trying now is to configure my Helm CLI to delete the Ingress release, but I'm not getting the Helm CLI correctly configured. The Tiller components are deployed in the gitlab-managed-apps namespace, so I'm trying the following command:
$ helm init --tiller-namespace gitlab-managed-apps --service-account tiller --upgrade
HELM_HOME has been configured at C:\Users\danie\.helm.
Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
But then when I'm trying to issue the helm ls command I'm getting the following error:
$ helm ls
Error: could not find tiller
But the service account exists on the namespace:
$ kubectl get serviceAccounts -n gitlab-managed-apps
NAME                    SECRETS   AGE
default                 1         23h
ingress-nginx-ingress   1         23h
tiller                  1         23h
Any ideas how to get the CLI correctly configured?
You have installed Tiller to a namespace that is not the default namespace.
By default, the Helm CLI assumes Tiller is installed in default and that this is the namespace you want to talk to.
This can be fixed by using the --tiller-namespace flag; for your example that would be:
helm list --tiller-namespace gitlab-managed-apps
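Alternatively, you can export the Tiller namespace once so every Helm 2 command in the session targets it (a sketch; the ingress release name is a guess based on the service accounts above, so check the helm ls output first):
export TILLER_NAMESPACE=gitlab-managed-apps
helm ls
helm delete ingress --purge   # hypothetical release name taken from the helm ls output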
Try using Helm version 3 onward. Helm versions 1 and 2 are actually composed of two pieces: the Helm CLI, and Tiller, the Helm server-side component. It is important to note that Helm 3 removes the Tiller component, and thus is more secure.
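For comparison, a rough Helm 3 equivalent needs no Tiller at all, and the release namespace is passed directly (again, the release name here is an assumption):
helm ls --namespace gitlab-managed-apps
helm uninstall ingress --namespace gitlab-managed-apps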

Deploying Images from gitlab in a new namespace in Kubernetes

I have integrated GitLab with a Kubernetes cluster hosted on AWS. Currently it builds the code from GitLab into the default namespace. I have created two namespaces in Kubernetes, one for production and one for development. What are the steps if I want the code to be deployed to the dev or production namespace? Do I need to make changes at the GitLab level or at the Kubernetes level?
This is done at the kubernetes level. Whether you're using helm or kubectl, you can specify the desired namespace in the command.
As in:
kubectl create -f deployment.yaml --namespace <desired-namespace>
helm install stable/gitlab-ce --namespace <desired-namespace>
Alternatively, you can just change your current namespace to the desired namespace and install as you did before. By default, Helm charts or Kubernetes YAML files are installed into your current namespace unless specified otherwise, as in the sketch below.
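For example, to make dev the current namespace for everything that follows (a sketch; the namespace and file names are placeholders):
kubectl config set-context --current --namespace=dev
# subsequent commands without an explicit --namespace now land in dev
kubectl create -f deployment.yaml
helm install stable/gitlab-ce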

Is there something like `helm exec`?

I use the following helm (2.4.2) commands in my gitlab-ci.yml script:
- helm upgrade --install myapp-db --wait --set postgresUser=postgres,postgresPassword=postgres,postgresDatabase=myapp stable/postgresql
- helm upgrade --install myapp-web ./myapp-chart --wait --set env.DATABASE_URL="${DATABASE_URL}"
It's part of a deployment to my staging/review environment. After the above commands complete, I would like to execute commands against the my-app pod to create/migrate the database. At the moment this is achieved through the use of an initContainer (defined in the referenced YAML file), but I would prefer the logic to be part of the CI script, so I don't have to maintain a separate deployment file for production.
Is there a way to do this with helm? Or is my only option to use kubectl exec? If I use kubectl exec, is there an easy way to get the name of the pod using helm?
This GitHub issue addresses how you might use kubectl to find out the name of a pod based on a label:
https://github.com/kubernetes/kubernetes/issues/8876
I implemented the following:
- export POD_NAME=`kubectl get pod -l "app=myapp-web-chart" -o jsonpath='{.items[0].metadata.name}'`
- kubectl exec $POD_NAME -- mix ecto.migrate
Still, it would be much nicer if there were a way to do this with Helm.
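One Helm-native alternative (not from the answers above, just a sketch) is to ship the migration as a post-install/post-upgrade hook Job inside myapp-chart, so it runs on every helm upgrade without any kubectl exec; the image name below is a placeholder:
# templates/migrate-job.yaml -- hypothetical hook in myapp-chart
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-migrate
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: myapp:latest                 # placeholder image
          command: ["mix", "ecto.migrate"]    # same migration command as in the answer
          env:
            - name: DATABASE_URL
              value: {{ .Values.env.DATABASE_URL | quote }}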