How to create dependency between releases in helmfile - kubernetes-helm

I have the following helmfile, and I want the nexus, teamcity-server, and hub releases to depend on the certificates chart:
releases:
  - name: certificates
    createNamespace: true
    chart: ./charts/additional-dep
    namespace: system
    values:
      - ./environments/default/system-values.yaml
      - ./environments/{{ .Environment.Name }}/system-values.yaml
  - name: hub
    chart: ./charts/hub
    namespace: system
    values:
      - ./environments/default/system-values.yaml
  - name: nexus
    chart: ./charts/nexus
    namespace: system
    values:
      - ./environments/default/system-values.yaml
      - ./environments/{{ .Environment.Name }}/system-values.yaml
    dependsOn:
      - certificates
  - name: teamcity-server
    chart: ./charts/teamcity-server
    namespace: system
    values:
      - ./environments/default/system-values.yaml
      - ./environments/{{ .Environment.Name }}/system-values.yaml
    dependsOn:
      - certificates
I tried using dependsOn in helmfile.yaml, but it resulted in errors.

Helmfile calls this functionality needs:, so
releases:
  - name: certificates
    ...
  - name: nexus
    needs:
      - certificates
    ...
This means the certificates release needs to be successfully installed before Helmfile will move on to nexus or teamcity-server. This is specific to Helmfile: you're still allowed to helm uninstall certificates, and Helm itself won't know about the dependency. It also doesn't establish any sort of runtime dependency between the two charts, so if something happens later that causes certificates to fail, nexus and the other dependents won't be automatically stopped.
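Applied to the helmfile in the question, the change is just swapping dependsOn for needs; a minimal sketch (the elided values lists stay the same as above):
releases:
  - name: certificates
    createNamespace: true
    chart: ./charts/additional-dep
    namespace: system
    ...
  - name: hub
    chart: ./charts/hub
    namespace: system
    needs:
      - certificates
    ...
  - name: nexus
    chart: ./charts/nexus
    namespace: system
    needs:
      - certificates
    ...
  - name: teamcity-server
    chart: ./charts/teamcity-server
    namespace: system
    needs:
      - certificates
    ...
If a dependency lives in a different namespace, needs also accepts a namespace-qualified form such as system/certificates.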

Related

Build Kustomize with Helm Fails to Build

kustomize build --enable-helm .
I have the following project structure:
project
- helm-k8s
  - values.yml
  - Chart.yml
  - templates
    - base
      - project-namespace.yml
      - grafana
        - grafana-service.yml
        - grafana-deployment.yml
        - grafana-datasource-config.yml
      - prometheus
        - prometheus-service.yml
        - prometheus-deployment.yml
        - prometheus-config.yml
        - prometheus-roles.yml
      - kustomization.yml
    - prod
      - kustomization.yml
    - test
      - kustomization.yml
I'm trying to build my kustomization file using helm like below:
project/helm-k8s/templates/base$ kubectl kustomize build . --enable-helm > dummy.yml
I get an error message like this:
project/helm-k8s/templates/base$ kubectl kustomize . --enable-helm
error: accumulating resources: accumulation err='accumulating resources from 'project-namespace.yml': missing metadata.name in object {{v1 Namespace} {{ } map[name:] map[]}}': must build at directory: '/home/my-user/project/helm-k8s/templates/base/project-namespace.yml': file is not directory
Is it not possible for kustomize to use the values.yml which is located directly under helm-k8s folder and create the final manifest for my cluster? What am I doing wrong here?
EDIT: Here is what my kustomization.yml looks like:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: open-electrons-monitoring-kustomization
resources:
  # 0. Get the namespaces first
  - project-namespace.yml
  # 1. Set up monitoring services (prometheus)
  #- monitoring/prometheus/prometheus-roles.yml
  - prometheus/prometheus-config.yml
  - prometheus/prometheus-roles.yml
  - prometheus/prometheus-deployment.yml
  - prometheus/prometheus-service.yml
  # 2. Set up monitoring services (grafana)
  - grafana/grafana-datasource-config.yml
  - grafana/grafana-deployment.yml
  - grafana/grafana-service.yml
I think you may have misunderstood the use of the --enable-helm parameter. It does not allow kustomize to perform helm-style templating on files, so when you write:
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.app.namespace }}
  labels:
    name: {{ .Values.app.namespace }}
That doesn't do anything useful. It just generates invalid YAML output.
The --enable-helm option allows you to explode Helm charts using Kustomize (see the Kustomize documentation for details); for example, it allows you to process a kustomization.yaml file like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: traefik
    repo: https://helm.traefik.io/traefik
    includeCRDs: true
    releaseName: example
    version: 20.8.0
    valuesInline:
      deployment:
        replicas: 3
      logs:
        access:
          enabled: true
Running kubectl kustomize --enable-helm will cause kustomize to fetch the helm chart and run helm template on it, producing YAML manifests on stdout.
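Kustomize can also inflate a chart from the local filesystem instead of fetching it from a repo. A minimal sketch, assuming the chart directory is moved out of templates/ into a charts/ directory next to the kustomization.yaml (the layout and the release name here are assumptions, not part of the question):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmGlobals:
  chartHome: charts        # look for charts under ./charts instead of downloading
helmCharts:
  - name: helm-k8s         # directory name of the chart under chartHome
    releaseName: monitoring  # hypothetical release name
With this layout, kubectl kustomize --enable-helm templates the local chart with its own values file and emits the rendered manifests.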

How to get specific branch from chart repository for Helmfile

Do I need to include the repositories: field in a Helmfile if the chart is a local Helm chart and doesn't need to be downloaded?
Right now I have the following helmfile.yaml:
repositories:
  - name: system-test
    url: https://github.com/test/test.system.configuration.git
releases:
  - name: system-test-release
    chart: ./charts/test
    namespace: system-test
    values:
      - ./charts/test/values.yaml
The repositories: block only gets used if you're actually pulling a chart from that repository; for example:
releases:
  - name: end-to-end
    chart: system-test/end-to-end
If you're only referring to local charts by filesystem path, the repositories: block isn't used at all.
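So in this case the Helmfile can be reduced to just the release, with the unused block removed:
releases:
  - name: system-test-release
    chart: ./charts/test
    namespace: system-test
    values:
      - ./charts/test/values.yaml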

Templates and Values in different repos via ArgoCD

I'm looking for insights on the following situation:
- I have one ArgoCD application pointing to a Git repo (A), where there's a values.yaml.
- I would like to use the Helm templates stored in a different repo (B).
Any suggestions/alternatives on how to make this work?
I think helm dependency can help solve your problem.
In the Chart.yaml of repo (A), declare a dependency on the chart from repo (B):
# Chart.yaml
dependencies:
  - name: chartB
    version: "0.0.1"
    repository: "https://link_to_chart_B"
Reference: https://github.com/argoproj/argocd-example-apps/tree/master/helm-dependency
P.S.: You also need to add the chart repository to ArgoCD.
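Values for the dependency are then namespaced under its chart name in repo (A)'s values.yaml. A hypothetical example (replicaCount is a placeholder for whatever chartB actually exposes):
# values.yaml in repo (A)
chartB:
  replicaCount: 2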
The way we solved it was by writing a very simple Helm plugin and passing it the URL of the Helm chart location (ChartMuseum, in our case) as an environment variable:
server:
  name: server
  config:
    configManagementPlugins: |
      - name: helm-yotpo
        generate:
          command: ["sh", "-c"]
          args: ["helm template --version ${HELM_CHART_VERSION} --repo ${HELM_REPO_URL} --namespace ${NAMESPACE} $HELM_CHART_NAME --name-template=${HELM_RELEASE_NAME} -f $(pwd)/${HELM_VALUES_FILE} "]
You can run the helm command with the --repo flag, and in the ArgoCD Application YAML you call the new plugin:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: application-test
  namespace: infra
spec:
  destination:
    namespace: infra
    server: https://kubernetes.default.svc
  project: infra
  source:
    path: "helm-values-files/telegraf"
    repoURL: https://github.com/YotpoLtd/argocd-example.git
    targetRevision: HEAD
    plugin:
      name: helm-yotpo
      env:
        - name: HELM_RELEASE_NAME
          value: "telegraf-test"
        - name: HELM_CHART_VERSION
          value: "1.8.18"
        - name: NAMESPACE
          value: "infra"
        - name: HELM_REPO_URL
          value: "https://helm.influxdata.com/"
        - name: HELM_CHART_NAME
          value: "telegraf"
        - name: HELM_VALUES_FILE
          value: "telegraf.yaml"
You can read more about it in the following blog post.

terraform-cli : Duplicate value: "step-init". Maybe missing or invalid Task default/terraform-cli error

I've been attempting to use Tekton to deploy some AWS infrastructure via Terraform, but I'm not having much success.
The pipeline clones a GitHub repo containing TF code, then attempts to use the terraform-cli task to provision the AWS infrastructure. For initial testing I just want to perform the initial terraform init and provision the AWS VPC.
Expected behaviour
- Clone the GitHub repo
- Perform terraform init
- Create the VPC using a targeted terraform apply
Actual Result
task terraform-init has failed: failed to create task run pod "my-infra-pipelinerun-terraform-init": Pod "my-infra-pipelinerun-terraform-init-pod" is invalid: spec.initContainers[1].name: Duplicate value: "step-init". Maybe missing or invalid Task default/terraform-cli
pod for taskrun my-infra-pipelinerun-terraform-init not available yet
Tasks Completed: 2 (Failed: 1, Cancelled 0), Skipped: 1
Steps to Reproduce the Problem
Prerequisites: install the Tekton command-line tool and the git-clone and terraform-cli tasks.
Create this pipeline in Minikube:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: my-infra-pipeline
spec:
  description: Pipeline for TF deployment
  params:
    - name: repo-url
      type: string
      description: Git repository URL
    - name: branch-name
      type: string
      description: The git branch
  workspaces:
    - name: tf-config
      description: The workspace where the tf config code will be stored
  tasks:
    - name: clone-repo
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: tf-config
      params:
        - name: url
          value: $(params.repo-url)
        - name: revision
          value: $(params.branch-name)
    - name: terraform-init
      runAfter: ["clone-repo"]
      taskRef:
        name: terraform-cli
      workspaces:
        - name: source
          workspace: tf-config
      params:
        - name: terraform-secret
          value: "tf-auth"
        - name: ARGS
          value:
            - init
    - name: build-vpc
      runAfter: ["terraform-init"]
      taskRef:
        name: terraform-cli
      workspaces:
        - name: source
          workspace: tf-config
      params:
        - name: terraform-secret
          value: "tf-auth"
        - name: ARGS
          value:
            - apply
            - "-target=aws_vpc.vpc -auto-approve"
Run the pipeline by creating a PipelineRun resource in k8s (see the sketch after these steps).
Review the logs: tkn pipelinerun logs my-tf-pipeline -a
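A minimal sketch of such a PipelineRun (the repo URL, branch, and volume settings are hypothetical placeholders):
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: my-infra-pipelinerun
spec:
  pipelineRef:
    name: my-infra-pipeline
  params:
    - name: repo-url
      value: https://github.com/example/my-tf-repo.git  # placeholder
    - name: branch-name
      value: main                                       # placeholder
  workspaces:
    - name: tf-config
      volumeClaimTemplate:          # a fresh PVC backs the shared workspace
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi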
Additional Information
Pipeline version: v0.35.1
There is a known issue regarding "step-init" in some earlier versions; I suggest you upgrade to the latest version (0.36.0) and try again.

Use of Umbrella Chart in CI/CD Pipeline w/ Multiple Contractors

I am new to this group. Glad to have connected.
I am wondering if someone has experience in using an umbrella helm chart in a CI/CD process?
In our project, we have 2 separate developer contractors. Each contractor is responsible for specific microservices.
We are using Harbor as our repository for charts and accompanying container images and GitLab for our code repo and CI/CD orchestrator...via GitLab runners.
The plan is to use an umbrella chart to deploy all approx 60 microservices as one system.
I am interested in hearing from any groups that have taken a similar approach and how they treated/handled the umbrella chart in their CI/CD process.
Thank you for any input/guidance.
VR,
We use a similar pattern; we have 30+ microservices.
We have a GitHub repo for base charts.
The base-microservice chart has all sorts of Kubernetes templates (HPA, ConfigMap, Secret, Deployment, Service, Ingress, etc.), each of which can be enabled or disabled.
Note: the base chart can even contain other charts. For example, this base chart has a dependency on the nginx-ingress chart:
apiVersion: v2
name: base-microservice
description: A base helm chart for deploying a microservice in Kubernetes
type: application
version: 0.1.6
appVersion: 1
dependencies:
  - name: nginx-ingress
    version: "~1.39.1"
    repository: "alias:stable"
    condition: nginx-ingress.enabled
Below is an example secrets.yaml template:
{{- if .Values.secrets.enabled -}}
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "base-microservice.fullname" . }}
type: Opaque
data:
{{- toYaml .Values.secrets.data | nindent 2 }}
{{- end }}
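With values like the following in the consuming service (a hypothetical example; Kubernetes Secret data must be base64-encoded), the template above renders a Secret, and with enabled: false it renders nothing:
secrets:
  enabled: true
  data:
    DB_PASSWORD: cGFzc3dvcmQ=   # base64 for "password"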
Now when a commit happens in this base-charts repo, as part of the CI process (along with other things) we:
- Check if a Helm index already exists in the charts repository.
- If it exists, download the existing index and merge the currently generated index with it: helm repo index --merge oldindex/index.yaml .
- If it does not exist, create a new Helm index: helm repo index .
Then we upload the archived charts and index.yaml to our charts repository.
Now in each of our microservices we have a charts directory, inside which there are only 2 files (this is the entire directory structure of a sample microservice's chart):
- Chart.yaml
- values.yaml
The Chart.yaml for this microservice A looks like:
apiVersion: v2
name: my-service-A
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: 1
dependencies:
  - name: base-microservice
    version: "0.1.6"
    repository: "alias:azure"
And the values.yaml for microservice A holds only the values that need to be overridden from the base-microservice defaults, e.g.:
base-microservice:
  nameOverride: my-service-A
  image:
    repository: myDockerRepo/my-service-A
  resources:
    limits:
      cpu: 1000m
      memory: 1024Mi
    requests:
      cpu: 300m
      memory: 500Mi
  probe:
    initialDelaySeconds: 120
  nginx-ingress:
    enabled: true
  ingress:
    enabled: true
Now while doing Continuous Deployment of this microservice, we have these steps (among others):
- Fetch helm dependencies: helm dependency update ./charts/my-service-A
- Deploy the release to Kubernetes: helm upgrade --install my-service-a ./charts/my-service-A