Should jx step helm apply create/produce a helm release - kubernetes

I'm struggling with jx, Kubernetes and Helm. I run a Jenkinsfile on jx that executes commands in the env directory:
sh 'jx step helm build'
sh 'jx step helm apply'
It finishes successfully and deploys pods, creates deployments, etc.; however, helm list is empty.
When I execute something like helm install ... or helm upgrade --install ... it creates a release and helm list shows that.
Is it correct behavior?
More details:
EKS installed with:
eksctl create cluster --region eu-west-2 --name integration --version 1.12 \
--nodegroup-name integration-nodes \
--node-type t3.large \
--nodes 3 \
--nodes-min 1 \
--nodes-max 10 \
--node-ami auto \
--full-ecr-access \
--vpc-cidr "172.20.0.0/16"
Then I set up ingresses (external and internal) with some kubectl apply commands (won't share the files). Then I set up routes and VPC-related stuff.
JX installed with:
jx install --provider=eks --ingress-namespace='internal-ingress-nginx' \
--ingress-class='internal-nginx' \
--ingress-deployment='nginx-internal-ingress-controller' \
--ingress-service='internal-ingress-nginx' --on-premise \
--external-ip='#########' \
--git-api-token=######### \
--git-username=######### --no-default-environments=true
Details from the installation:
? Select Jenkins installation type: Static Jenkins Server and Jenkinsfiles
? Would you like wait and resolve this address to an IP address and use it for the domain? No
? Domain ###########
? Cloud Provider eks
? Would you like to register a wildcard DNS ALIAS to point at this ELB address? Yes
? Your custom DNS name: ###########
? Would you like to enable Long Term Storage? A bucket for provider eks will be created No
? local Git user for GitHub server: ###########
? Do you wish to use GitHub as the pipelines Git server: Yes
? A local Jenkins X versions repository already exists, pull the latest? Yes
? A local Jenkins X cloud environments repository already exists, recreate with latest? Yes
? Pick default workload build pack: Kubernetes Workloads: Automated CI+CD with GitOps Promotion
Then I set up helm:
kubectl apply -f tiller-rbac-config.yaml
helm init --service-account tiller
where tiller-rbac-config.yaml is:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
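To confirm Tiller actually came up under that service account, a quick check along these lines should work (assuming helm init created the usual tiller-deploy deployment in kube-system):
# wait for the Tiller deployment created by `helm init` to roll out
kubectl -n kube-system rollout status deployment/tiller-deploy
# check that it really runs as the tiller service account
kubectl -n kube-system get deployment tiller-deploy -o jsonpath='{.spec.template.spec.serviceAccountName}{"\n"}'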
helm version says:
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
jx version says:
NAME                 VERSION
jx                   2.0.258
jenkins x platform   2.0.330
Kubernetes cluster   v1.12.6-eks-d69f1b
helm client          Client: v2.13.1+g618447c
git                  git version 2.17.1
Operating System     Ubuntu 18.04.2 LTS
Applications were imported this way:
jx import --branches="devel" --org ##### --disable-updatebot=true --git-api-token=##### --url git@github.com:#####.git
And environment was created this way:
jx create env --git-url=##### --name=integration --label=Integration --domain=##### --namespace=jx-integration --promotion=Auto --git-username=##### --git-private --branches="master|devel|test"

Going through the changelog, it seems that tillerless mode has been the default since version 2.0.246.
In Helm v2, Helm relies on its server-side component called Tiller. The Jenkins X tillerless mode means that instead of using Helm to install charts, the Helm client is only used for templating and generating the Kubernetes manifests; those manifests are then applied with plain kubectl, not helm/tiller.
The consequence is that Helm won't know about these installations/releases, because they were made by kubectl. That's why you don't see any releases with helm list. This is the expected behavior, as you can read in the Jenkins X docs:
What --no-tiller means is to switch helm to use template mode which
means we no longer internally use helm install mychart to install a
chart, we actually use helm template mychart instead which generates
the YAML using the same helm charts and the standard helm
configuration management via --set and values.yaml files.
Then we use kubectl apply to apply the YAML.
As mentioned by James Strachan in the comments, when using the tillerless mode, you can view your deployments using jx step helm list
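For intuition, the tillerless flow is roughly equivalent to the following sketch; the chart path, namespace and values file are placeholders, not the exact commands jx runs internally:
# render the chart to plain Kubernetes manifests (no Tiller involved)
helm template ./env --namespace jx-integration --values values.yaml > manifests.yaml
# apply the rendered manifests directly
kubectl apply --namespace jx-integration -f manifests.yaml
# list what jx deployed, instead of relying on `helm list`
jx step helm list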

Related

istio v1.11.4 - install via helm chart; how to enable envoy proxy logging?

This is probably a very basic question. I am looking at Install Istio with Helm and Enable Envoy’s access logging.
How do I enable Envoy access logging if I install Istio via its Helm charts?
The easiest, and probably only, way to do this is to install Istio with IstioOperator using Helm.
The steps to do so are almost the same, but instead of the base chart, you need to use the istio-operator chart.
First create istio-operator namespace:
kubectl create namespace istio-operator
then deploy IstioOperator using Helm (assuming you have downloaded Istio, and changed current working directory to istio root):
helm install istio-operator manifests/charts/istio-operator -n istio-operator
Having installed IstioOperator, you can now install Istio. This is a step where you can enable Envoy’s access logging:
kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istiocontrolplane
spec:
  profile: default
  meshConfig:
    accessLogFile: /dev/stdout
EOF
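To verify that access logging took effect, you can check the applied mesh config and tail the Envoy sidecar of any injected workload; the deployment name below is just a placeholder:
# confirm the setting landed in the IstioOperator resource
kubectl -n istio-system get istiooperator istiocontrolplane -o yaml | grep accessLogFile
# follow the access log on an injected pod's sidecar (replace my-app with your own deployment)
kubectl logs deploy/my-app -c istio-proxy -f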
I tried enabling Envoy's access logging with the base chart, but could not succeed, no matter what I did.

How can I pass the correct parameters to Helm, using Ansible to install GitLab?

I'm writing an Ansible task to deploy GitLab in my k3s environment.
According to the doc, I need to execute this to install GitLab using Helm:
$ helm install gitlab gitlab/gitlab \
  --set global.hosts.domain=DOMAIN \
  --set certmanager-issuer.email=me@example.com
But the community.kubernetes.helm module doesn't handle --set parameters and only calls helm with the --values parameter.
So my Ansible task looks like this:
- name: Deploy GitLab
  community.kubernetes.helm:
    update_repo_cache: yes
    release_name: gitlab
    chart_ref: gitlab/gitlab
    release_namespace: git
    release_values:
      global.hosts.domain: example.com
      certmanager-issuer.email: info@example.com
But the Helm chart still returns the error You must provide an email to associate with your TLS certificates. Please set certmanager-issuer.email.
I've tried manually in a terminal, and it seems that the GitLab Helm chart requires --set parameters and fails with --values. But community.kubernetes.helm doesn't support --set.
What can I do?
Is there a bug on GitLab helm chart side?
it seems that the GitLab helm chart requires --set parameters and fails with --values
That is an erroneous assumption; what you are running into is that --set splits on . because otherwise providing fully-formed YAML on the command line would be painful.
The correct values use sub-objects wherever the . occurs:
- name: Deploy GitLab
  community.kubernetes.helm:
    update_repo_cache: yes
    release_name: gitlab
    chart_ref: gitlab/gitlab
    release_namespace: git
    release_values:
      global:
        hosts:
          # https://gitlab.com/gitlab-org/charts/gitlab/-/blob/v4.4.5/values.yaml#L47
          domain: example.com
      # https://gitlab.com/gitlab-org/charts/gitlab/-/blob/v4.4.5/values.yaml#L592-595
      certmanager-issuer:
        email: info@example.com
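To sanity-check the equivalence, those nested release_values correspond to a values file like the one below, which is effectively what the module hands to helm via --values (the file name is just an example):
# gitlab-values.yaml -- equivalent to the two --set flags from the GitLab docs
global:
  hosts:
    domain: example.com
certmanager-issuer:
  email: info@example.com
# manual equivalent: helm install gitlab gitlab/gitlab --namespace git -f gitlab-values.yaml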

no matches for kind "Deployment" in version "extensions/v1beta1"

While deploying mojaloop, Kubernetes responds with the following errors:
Error: validation failed: [unable to recognize "": no matches for kind "Deployment" in version "apps/v1beta2",
unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1",
unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta2",
unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta1"]
My Kubernetes version is 1.16.
How can I fix the problem with the API version?
From investigating, I have found that Kubernetes 1.16 no longer supports apps/v1beta2 or apps/v1beta1.
How can I make Kubernetes use a non-deprecated, supported API version?
I am new to Kubernetes, so any help is appreciated.
In Kubernetes 1.16 some deprecated APIs have been removed.
You can check which API group serves a given Kubernetes object using:
$ kubectl api-resources | grep deployment
deployments   deploy   apps   true   Deployment
This means that only the apps API group is valid for Deployments (extensions no longer serves Deployment). The same applies to StatefulSet.
You need to change the Deployment and StatefulSet apiVersion to apiVersion: apps/v1.
If this does not help, please add your YAML to the question.
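For reference, a minimal Deployment under apps/v1 looks roughly like the sketch below (name and image are placeholders); note that spec.selector with matchLabels is mandatory in apps/v1 and must match the pod template labels:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:              # required in apps/v1
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app      # must match the selector above
    spec:
      containers:
        - name: my-app
          image: nginx:1.17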
EDIT
As the issue is caused by Helm templates that use old apiVersions in Deployments, which are not supported in version 1.16, there are 2 possible solutions:
1. git clone the whole repo and replace the apiVersion with apps/v1 in every templates/deployment.yaml using a script
2. Use an older version of Kubernetes (1.15), whose validator still accepts extensions as the apiVersion for Deployment and StatefulSet.
To convert an older Deployment to apps/v1, you can run:
kubectl convert -f ./my-deployment.yaml --output-version apps/v1
You can make the change manually as an alternative. Fetch the helm chart:
helm fetch --untar stable/metabase
Access the chart folder:
cd ./metabase
Change API version:
sed -i 's|extensions/v1beta1|apps/v1|g' ./templates/deployment.yaml
Add spec.selector.matchLabels:
spec:
  [...]
  selector:
    matchLabels:
      app: {{ template "metabase.name" . }}
  [...]
Finally install your altered chart:
helm install ./ \
-n metabase \
--namespace metabase \
--set ingress.enabled=true \
--set ingress.hosts={metabase.$(minikube ip).nip.io}
Enjoy!
I prefer kubectl explain.
# kubectl explain deploy
KIND:     Deployment
VERSION:  apps/v1
DESCRIPTION:
     Deployment enables declarative updates for Pods and ReplicaSets.
FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
   kind         <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
   metadata     <Object>
     Standard object metadata.
   spec         <Object>
     Specification of the desired behavior of the Deployment.
   status       <Object>
     Most recently observed status of the Deployment.
With kubectl explain you can also see specific parameters of an object:
# kubectl explain Service.spec.externalTrafficPolicy
KIND:     Service
VERSION:  v1
FIELD:    externalTrafficPolicy <string>
DESCRIPTION:
     externalTrafficPolicy denotes if this Service desires to route external
     traffic to node-local or cluster-wide endpoints. "Local" preserves the
     client source IP and avoids a second hop for LoadBalancer and Nodeport type
     services, but risks potentially imbalanced traffic spreading. "Cluster"
     obscures the client source IP and may cause a second hop to another node,
     but should have good overall load-spreading.
To put it simply, you don't force the current installation to use an outdated version of the API; you fix the version in your config files.
If you want to check which API versions your current cluster supports, run:
root@ubn64:~# kubectl api-versions | grep -i apps
apps/v1
I was getting the error below:
error: unable to recognize "deployment.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"
The solution that worked for me was to modify the line from apiVersion: extensions/v1beta1 to apiVersion: apps/v1 in deployment.yaml.
Reason: we had upgraded the K8s cluster, hence this error occurred.
This was annoying me because I am testing lots of Helm packages, so I wrote a quick script that could perhaps be adapted to your workflow; see below.
New workflow
First fetch the chart as a tgz to your working directory
helm fetch repo/chart
then in your working directory run the bash script below, which I named helmk
helmk myreleasename mynamespace chart.tgz [any parameters for kubectl create]
Contents of helmk (you need to edit the kubeconfig cluster name for it to work):
#!/bin/bash
echo usage $0 releasename namespace chart.tgz [createparameter1] [createparameter2] ... [createparameter n]
echo This will use your namespace then shift back to default so be careful!!
kubectl create namespace $2  # produces a harmless error if the namespace already exists; ignore it
kubectl config set-context MYCLUSTERNAME --namespace $2
helm template -n $1 --namespace $2 $3 | kubectl convert -f /dev/stdin | kubectl create --save-config=true ${@:4} -f /dev/stdin
# note: the --namespace parameter in helm template above seems to be ignored, so we have to manually switch context
kubectl config set-context MYCLUSTERNAME --namespace default
It's a slightly dangerous hack, since it manually switches to the new desired namespace context and then back again, so it should really only be used by single-user devs, or comment that part out.
You will get a warning about using the kubectl convert facility.
If you need to edit the YAML to customise it, just replace one of the /dev/stdin references with intermediate files, but it's probably better to bring it up using "create" with --save-config as I have and then simply "apply" your changes, which means they will be recorded in Kubernetes too.
Good luck
I was facing the same issue on a cluster that was upgraded to a version that no longer serves certain API versions (v1.17 and apps/v1beta2).
$ helm get manifest some-deployment
...
# Source: some-deployment/templates/deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: some-deployment
  labels:
...
Looking at the helm docs, it seems that the manifest is stored in the cluster for helm to reference, and it may include invalid api versions, leading to errors.
The 2 proposed methods are to either manually edit the manifest (a rather tedious multi-stage process), or use a helm plugin called mapkubeapis that does it automatically.
$ helm plugin install https://github.com/helm/helm-mapkubeapis
It can be run with the --dry-run flag to simulate the effects:
$ helm mapkubeapis --dry-run some-deployment
2021/02/15 09:33:29 NOTE: This is in dry-run mode, the following actions will not be executed.
2021/02/15 09:33:29 Run without --dry-run to take the actions described below:
2021/02/15 09:33:29
2021/02/15 09:33:29 Release 'some-deployment' will be checked for deprecated or removed Kubernetes APIs and will be updated if necessary to supported API versions.
2021/02/15 09:33:29 Get release 'some-deployment' latest version.
2021/02/15 09:33:30 Check release 'some-deployment' for deprecated or removed APIs...
2021/02/15 09:33:30 Found deprecated or removed Kubernetes API:
"apiVersion: apps/v1beta2
kind: Deployment"
Supported API equivalent:
"apiVersion: apps/v1
kind: Deployment"
2021/02/15 09:33:30 Finished checking release 'some-deployment' for deprecated or removed APIs.
2021/02/15 09:33:30 Deprecated or removed APIs exist, updating release: some-deployment.
2021/02/15 09:33:30 Map of release 'some-deployment' deprecated or removed APIs to supported versions, completed successfully.
and then run without the flag to apply the changes.
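For the release above, that would be something along the lines of:
# rewrite the stored release manifest to supported API versions
helm mapkubeapis some-deployment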

Namespace deployment issue in Kubernetes Helm Chart

I am now testing the deployment into different namespace using Kubernetes. Here I am using Kubernetes Helm Chart for that. In my chart, I have deployment.yaml and service.yaml.
When I define the "namespace" parameter with the Helm command helm upgrade --install, it is not working. When I read about that, I found the statement that "Helm 2 is not overwritten by the --namespace parameter".
I tried the following command:
helm upgrade --install kubedeploy --namespace=test pipeline/spacestudychart
NB: Here my service is deploying into the default namespace.
(Screenshot of describe pod omitted.)
My helm version command output is as follows:
docker@mildevdcr01:~$ helm version
Client: &version.Version{SemVer:"v2.14.3",
GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3",
GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
For this reason, I tried adding the namespace in deployment.yaml, under metadata.namespace, like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "spacestudychart.fullname" . }}
  namespace: test
I created 2 namespaces, test and prod. But this isn't working either: when I add it like this, my service doesn't come up and I am not able to access it, and there is no error in the Jenkins console. When I defined the namespace on the helm upgrade --install command it at least deployed into the default namespace; here it doesn't deploy at all.
After this, I removed the namespace from deployment.yaml and then added metadata.namespace back in the same way. There too I am not able to access the deployed service, but the Jenkins console output still shows success.
Why is the namespace not working with my Helm deployment? What changes do I need to make to deploy to test/prod instead of the default namespace?
Remove namespace: test from all of your chart files and helm install --namespace=namespace2 ... should work.
On Helm 3.2+, I would suggest (based on this thread) moving the namespace creation to the CLI:
1 ) Add the --create-namespace after the -n flag:
helm upgrade --install <name> <repo> -n <namespace> --create-namespace
2 ) Inside the different resources - pass the Release namespace:
namespace: {{ .Release.Namespace }}
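Put together, a template that follows the release namespace could look roughly like this, reusing the chart names from the question:
# deployment.yaml -- picks up whatever namespace is passed on the CLI
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "spacestudychart.fullname" . }}
  namespace: {{ .Release.Namespace }}
which you would then install with, for example, helm upgrade --install kubedeploy pipeline/spacestudychart -n test --create-namespace.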

How to set a different namespace for child helm charts?

When you install a chart with a child chart that doesn't specify a namespace, Helm will use the one specified on command line via --namespace. Is it possible to override this flag for a specific child chart?
For example if I have chart A which depends on chart B and I specify --namespace foo, I want to be able to customize the resources of chart B to be installed into some other namespace bar instead of foo.
Update 2:
Helm 3 added support for multi namespaces https://github.com/helm/helm/issues/2060
Update 1:
If a resource template specifies a metadata.namespace, then it will be installed in that namespace. For example, if I have a pod with metadata.namespace: x and I run helm install mychart --namespace y, that pod will be installed in x. I guess you could use regular helm templates with the namespace to parameterize it.
Original answer:
We do not plan on fully supporting multi-namespaced releases until Helm 3.0
https://github.com/kubernetes/helm/issues/2060#issuecomment-306847365
As a workaround, you can install into each namespace individually using --skip-dependencies or with dependency conditions.
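A rough sketch of the dependency-condition approach, assuming chart A declares chart B as an optional dependency (names and the repository URL are made up):
# chart-a/Chart.yaml
dependencies:
  - name: chart-b
    version: "1.0.0"
    repository: "https://charts.example.com"
    condition: chart-b.enabled
# install A into foo with B disabled, then install B on its own into bar
# helm install a ./chart-a -n foo --set chart-b.enabled=false
# helm install b chart-b --repo https://charts.example.com -n bar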
If you already have different charts then you can use helmfile to achieve this.
Step 1: create the following folder structure:
my-awesome-infrastructure/
    helm
    helmfile
    helmfile.yaml
Where helm and helmfile are the binary executables.
Step 2: install the helm diff plugin, which is needed by helmfile.
helm plugin install https://github.com/databus23/helm-diff
Step 3: declare your charts in the helmfile.yaml.
helmBinary: ./helm
repositories:
  - name: ingress-nginx
    url: https://kubernetes.github.io/ingress-nginx
  - name: bitnami
    url: https://charts.bitnami.com/bitnami
releases:
  - name: nginx-ingress
    namespace: nginx-ingress
    createNamespace: true
    chart: ingress-nginx/ingress-nginx
    version: ~4.1.0
  - name: jupyterhub
    namespace: jupyterhub
    createNamespace: true
    chart: bitnami/jupyterhub
    version: ~1.1.12
  - name: metrics-server
    namespace: metrics-server
    createNamespace: true
    chart: bitnami/metrics-server
    version: ~5.11.9
Step 4: run helmfile to deploy all charts.
./helmfile apply
In the above example, you are deploying three separate charts to three separate namespaces.
Under the covers, helmfile will run helm install separately and create separate releases.
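If you ever need to act on just one of those releases, helmfile's label selectors should do the trick, since each release automatically carries a name label, e.g.:
# diff and apply only the jupyterhub release from the helmfile above
./helmfile -l name=jupyterhub diff
./helmfile -l name=jupyterhub apply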