Spinnaker multiple auto-triggers for multi-image Helm Chart - kubernetes-helm

Hello Ladies & Gentlemen,
My Helm chart consists of 9 Docker images. I would like to be able to deploy it to 4 environments using Spinnaker, which is installed on Ubuntu. I use GitHub and ECR auto-triggers; the images are in ECR and the Helm chart is in GitHub.
values.yaml looks something like this:
image:
  repository: 123123123.dkr.ecr.eu-central-1.amazonaws.com/docker/app1
  tag: ""
image:
  repository: 123123123.dkr.ecr.eu-central-1.amazonaws.com/docker/app3
  tag: ""
image:
  repository: 123123123.dkr.ecr.eu-central-1.amazonaws.com/docker/app2
  tag: ""
My pipeline is auto-triggered by multiple ECR repositories and multiple images.
The triggers work: Spinnaker gets the payload and the deployment starts.
The problem is that I am not able to place the image/tag transmitted by the ECR webhook into the right place in the baked manifest.
For example, Spinnaker Igor gets image information from the ECR webhook that looks like this:
Found 1 new images for svc-spinnaker-ecr. Images: [{imageId=igor:dockerRegistry:v2:svc-spinnaker-ecr:docker/app3:0.1.0-dev.9, sendEvent=true}]
This string, app3:0.1.0-dev.9, or just the tag 0.1.0-dev.9, must be placed into the matching image/tag line of the values shown above so it ends up in the baked manifest.
How can I accomplish it? Could you please advise?
Thanks & Regards
I tried following SpEL expressions:
${ #stage('Find Image from Staging Environment').outputs.artifacts[0].reference.split(':')[1] }
${ trigger['tag']}
${ #triggerResolvedArtifactByType("docker/image")["reference"]}
${ #stage('bake')['outputs']['artifacts'].?[type == 'docker/image'].![reference] }
${ trigger['artifacts'].?[type == 'docker/image'].![reference] }
${ trigger['artifacts'].?[type == 'docker/image'].![reference.split(':')[1]] }
${ trigger['artifacts'].?[type == 'docker/image'].![reference.replace("[", "").replace("]", "").split(':')[1]] }
...but none of them constrains the result by image name, so they are not usable as a variable or as an override.
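The closest I have come is constraining the selection by repository name inside the SpEL filter, so that each values override only picks up its own image (an untested sketch; docker/app3 stands for whichever ECR repository feeds that values entry):
${ trigger['artifacts'].?[type == 'docker/image' && reference.contains('/docker/app3:')].![reference.split(':')[1]][0] }
The idea would be to put one such expression per image into the values overrides of the Bake (Manifest) stage, each filtered on a different repository name.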

Diff values file using helm provider

I hope you can help me shed some more light on my issue.
Currently I'm using:
Terraform v1.3.5
Helm provider v2.8.0
Kubernetes provider v2.16.1
Lately I've been adopting the Helm provider in Terraform to help me manage Helm releases with more resiliency. It helps a lot to be able to plan changes and see what has changed and what stays the same. It ties in really well with the rest of the infrastructure details, and I can manage everything with just one tool; it has been great.
There is one thing that bothers me a little: the terraform plan preview of the values file. It just shows that some changes have been made, but not where or which ones. Let me add an example.
File main.tf:
# I'm using the "manifest" setting to calculate manifest diffs
provider "helm" {
  kubernetes {
    config_path    = "~/.kube/config"
    config_context = "clusterconfig"
  }

  experiments {
    manifest = true
  }
}

# The helm chart lives locally on my repo. I'm passing the values file and an
# override for the image tag.
resource "helm_release" "release" {
  name      = "example"
  chart     = "../../helm/chart"
  namespace = "example"
  wait      = true

  set {
    name  = "image.tag"
    value = "latest"
  }

  values = [
    file("helm/values-example.yaml")
  ]
}
This works great; the problem comes when I make a change to the values file. The plan shows the whole file instead of just the changes. For example, in my values file I change the replicas from 1 to 2:
File values-example.yaml:
replicaCount: 1
image:
  repository: test
  pullPolicy: ifNotPresent
The execution:
$ terraform plan
(...)
Terraform will perform the following actions:
  # helm_release.example will be updated in-place
  ~ resource "helm_release" "example" {
      ~ manifest = jsonencode(
          ~ {
              ~ "deployment.apps/apps/v1/deployname" = {
                  ~ spec = {
                      ~ replicas = 1 -> 2
                    }
                }
            }
        )
      ~ values = [
          - <<-EOT
                replicaCount: 1
                image:
                  repository: test
                  pullPolicy: ifNotPresent
            EOT,
          + <<-EOT
                replicaCount: 2
                image:
                  repository: test
                  pullPolicy: ifNotPresent
            EOT,
        ]
    }
This makes it very difficult to see which values have actually changed when the values file is bigger.
So then, my question: do you know if there is a way to diff the values? I would like to see only the changes instead of the whole file.
What I've seen online:
It's been asked for on GitHub, but the issue was closed: https://github.com/hashicorp/terraform-provider-helm/issues/305
Maybe something like this can be implemented: Use different values in helm deploy through Terraform (for_each)
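One workaround I have been considering (an untested sketch; it assumes the machine-readable plan keeps the full before/after values strings under resource_changes, and that the resource address is helm_release.release as in main.tf above) is to extract both versions of the values from the plan JSON and diff them myself:
terraform plan -out=tfplan
terraform show -json tfplan > plan.json
# Pull the old and new contents of the first values entry out of the plan
jq -r '.resource_changes[] | select(.address == "helm_release.release") | .change.before.values[0]' plan.json > values-before.yaml
jq -r '.resource_changes[] | select(.address == "helm_release.release") | .change.after.values[0]' plan.json > values-after.yaml
diff -u values-before.yaml values-after.yaml
It is more of a band-aid than a proper plan rendering, but at least it would show line-level changes instead of the whole file.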
Thanks in advance for the help. Let me know if I can help with any more information.

Is it possible to fetch the image tag from a deployment in EKS using terraform kubernetes provider?

Context:
I'm reusing Terraform modules and I deploy microservices using the Helm provider within Terraform.
Problem:
I'm trying to translate this line into Terraform code, to get the current image tag live from prod (in the interest of reusing it). I'm already using the Kubernetes provider's auth, and it doesn't make sense to pull kubectl into my CI just for this.
k get deploy my-deployment -n staging -o jsonpath='{$.spec.template.spec.containers[:1].image}'
The Kubernetes Terraform provider doesn't seem to support data blocks for this, nor does the Helm provider offer outputs blocks.
Does anyone know how we could get (read) the image tag of a deployment using Terraform?
EDIT:
My deployment looks like this:
resource "helm_release" "example" {
name = "my-redis-release"
repository = "https://charts.bitnami.com/bitnami"
chart = "redis"
version = "6.0.1"
values = [
"${file("values.yaml")}"
]
set {
name = "image.tag"
value = "latest"
}
}
The tag will be a hash that changes often and is passed in from another repo.
latest in this case should be replaced by the tag currently running in the cluster. I can get it with kubectl using the line above, but I'm not sure how to do it with Terraform.
It turns out there are multiple ways of doing it; the easiest one for me is to reference the set argument of the helm_release resource:
output "helm_image_tag" {
value = [ for setting in helm_release.example.set : setting.value if setting.name == "image.tag" ]
}
The output will then be a list that you can reference in a shell script (or another scripting language):
+ helm_image_tag = [
    + "latest",
  ]
If the list format does not suit you, you can create a map output:
output "helm_image_tag" {
value = { for setting in helm_release.example.set : setting.name => setting.value if setting.name == "image.tag" }
}
This produces the following output:
+ helm_image_tag = {
    + "image.tag" = "latest"
  }
By using terraform output helm_image_tag you can access this output value and decide what to do with it in the CI.
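For example, a CI step could pick the tag out of the map-style output with jq (a minimal sketch; helm_image_tag refers to the map output defined above):
# Read the Terraform output as JSON and extract the image.tag entry
IMAGE_TAG=$(terraform output -json helm_image_tag | jq -r '."image.tag"')
echo "Currently configured tag: ${IMAGE_TAG}"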

How to render only selected template in Helm?

I have ~20 YAMLs in my Helm chart plus tons of dependencies, and I want to check the rendered output of one specific template. helm template renders all the YAMLs and produces a hundred lines of output. Is there a way (even a regex would be nice) to render only a selected template (by file or, e.g., by name)?
From the helm template documentation:
-s, --show-only stringArray only show manifests rendered from the given templates
To render only one resource, use helm template -s templates/deployment.yaml .
If you have multiple charts in one directory:
|helm-charts
|-chart1
|--templates
|---deployment.yaml
|--values.yaml
|--Chart.yaml
|...
|-chart2
If you want to generate only one file, e.g. chart1/deployment.yaml, using values from the file chart1/values.yaml, follow these steps:
Enter the chart folder:
cd chart1
Run this command:
helm template . --values values.yaml -s templates/deployment.yaml --name-template myReleaseName > chart1-deployment.yaml
The generated manifest will be in the file chart1-deployment.yaml.
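Since --show-only is a stringArray, it can be repeated to render several templates in one run, and it can also reach into packaged dependencies via their charts/ path (a sketch; the template and subchart names below are hypothetical):
# Render two templates from the current chart in a single pass
helm template . --values values.yaml -s templates/deployment.yaml -s templates/service.yaml
# Render a template that lives in a dependency (hypothetical subchart "subchart1")
helm template . -s charts/subchart1/templates/deployment.yaml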

rules_k8s - k8s_deploy template away images

We have a project that consists of more than 20 small services, all residing in the same repository and built with Bazel.
To reduce management overhead, we would like to automagically generate as much as possible, including our images and k8s deployments.
So the question is:
Is there a way to avoid setting the image key in the k8s_deploy step by a rule or function?
We already have a rule that templates the image into our manifest so that the image name (and k8s object name) is based on the label:
_TEMPLATE = "//k8s:deploy.yaml"

def _template_manifest_impl(ctx):
    name = '{}'.format(ctx.label).replace("//cmd/", "").replace("/", "-").replace(":manifest", "")
    ctx.actions.expand_template(
        template = ctx.file._template,
        output = ctx.outputs.source_file,
        substitutions = {
            "{NAME}": name,
        },
    )

template_manifest = rule(
    implementation = _template_manifest_impl,
    attrs = {
        "_template": attr.label(
            default = Label(_TEMPLATE),
            allow_single_file = True,
        ),
    },
    outputs = {"source_file": "%{name}.yaml"},
)
This way the service under //cmd/endpoints/customer/log would result in the image eu.gcr.io/project/endpoints-customer-log.
While this works fine so far, we still have to manually set the images dict for k8s_deploy like this:
k8s_deploy(
    name = "dev",
    images = {
        "eu.gcr.io/project/endpoints-customer-log:dev": ":image",
    },
    template = ":manifest",
)
It would be great to get rid of this, but I have not found a way yet.
Using a rule does not work because images does not take a label, and using a function does not work because I found no way of accessing the context there.
Am I missing something?
The solution I found to get the container registry names out of the build step was to use Bazel for the build and Skaffold for the deploy. Both steps are performed in the same CI pipeline.
My skaffold.yaml is very simple and provides the mapping of Bazel targets to GCR image names.
apiVersion: skaffold/v2alpha4
kind: Config
metadata:
  name: my_services
build:
  tagPolicy:
    gitCommit:
      variant: AbbrevCommitSha
  artifacts:
    - image: gcr.io/jumemo-dev/service1
      bazel:
        target: //server1/src/main/java/server1:server1.tar
    - image: gcr.io/jumemo-dev/service2
      bazel:
        target: //server2/src/main/java/server2:server2.tar
It is invoked using:
$ skaffold build
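If you would rather stay entirely inside Bazel, a macro (rather than a rule) might also work, since macros can read the package path via native.package_name() (an untested sketch; the macro name, registry prefix, and default tag are assumptions, and k8s_deploy is expected to be load()ed in the same .bzl file the way your BUILD files already load it):
# deploy.bzl (sketch): derive the image name from the package path, mirroring
# the replace() logic of the manifest rule above, and forward it to k8s_deploy.
def auto_k8s_deploy(name, image = ":image", registry = "eu.gcr.io/project", tag = "dev", **kwargs):
    # e.g. "cmd/endpoints/customer/log" -> "endpoints-customer-log"
    service = native.package_name().replace("cmd/", "").replace("/", "-")
    k8s_deploy(
        name = name,
        images = {
            "{}/{}:{}".format(registry, service, tag): image,
        },
        **kwargs
    )
A BUILD file would then only need auto_k8s_deploy(name = "dev", template = ":manifest").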

overriding values in kubernetes helm subcharts

I'm building a helm chart for my application, and I'm using stable/nginx-ingress as a subchart. I have a single overrides.yml file that contains (among other overrides):
nginx-ingress:
  controller:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: "*.{{ .Release.Name }}.mydomain.com"
So, I'm trying to use the release name in the overrides file, and my command looks something like helm install mychart --values overrides.yml, but the resulting annotation does not get interpolated and instead ends up as something like
Annotations: external-dns.alpha.kubernetes.io/hostname=*.{{ .Release.Name }}.mydomain.com
I installed the subchart by using helm fetch, and I'm under the (misguided?) impression that it would be best to leave the fetched chart as-is and override values in it; however, if variable interpolation isn't available with that method, I will have to put my values in the subchart's values.yaml.
Is there a best practice for this? Is it OK to put my own values in the fetched subchart's values.yaml? If I someday helm fetch this subchart again, I'll have to put those values back in by hand instead of leaving them in an untouched overrides file...
Thanks in advance for any feedback!
I found the issue on GitHub -- it is not supported yet:
https://github.com/kubernetes/helm/issues/2133
Helm 3.x (Q4 2019) now includes more about this, but for the chart only, not for subcharts (see TBBle's comment).
Milan Masek adds as a comment:
Thankfully, the latest Helm manual says how to achieve this.
The trick is:
enclosing the variable in quotes " or in a YAML block scalar |-, and
then referencing it in a template as {{ tpl .Values.variable . }}
This seems to make Helm happy.
Example:
$ cat Chart.yaml | grep appVersion
appVersion: 0.0.1-SNAPSHOT-d2e2f42
$ cat platform/shared/t/values.yaml | grep -A2 image:
image:
  tag: |-
    {{ .Chart.AppVersion }}
$ cat templates/deployment.yaml | grep image:
image: "{{ .Values.image.repository }}:{{ tpl .Values.image.tag . }}"
$ helm template . --values platform/shared/t/values.betradar.yaml | grep image
image: "docker-registry.default.svc:5000/namespace/service:0.0.1-SNAPSHOT-d2e2f42"
imagePullPolicy: Always
image: busybox
Otherwise, an error is thrown:
$ cat platform/shared/t/values.yaml | grep -A1 image:
image:
  tag: {{ .Chart.AppVersion }}
$ helm template . --values platform/shared/t/values.yaml | grep image
Error: failed to parse platform/shared/t/values.yaml: error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Chart.AppVersion":interface {}(nil)}
For Helm subcharts, TBBle adds to issue 2133:
@MilanMasek's solution won't work in general for subcharts, because the context . passed into tpl will have the subchart's values, not the parent chart's values.
It happens to work in the specific example this ticket was opened for, because .Release.Name should be the same in all the subcharts.
It won't work for .Chart.AppVersion as in the tpl example.
There was a proposal to support tval in #3252 for interpolating templates in values files, but that was dropped in favour of a lua-based Hook system which has been proposed for Helm v3: #2492 (comment)
That last issue, 2492, includes workarounds like this one:
You can put a placeholder in the text that you want to template and then replace that placeholder with the template that you would like to use in yaml files in the template.
For now, what I've done in the CI job is run helm template on the values.yaml file.
It works pretty well atm.
cp values.yaml templates/
helm template $CI_BUILD_REF_NAME ./ | sed -ne '/^# Source: templates\/values.yaml/,/^---/p' > values.yaml
rm templates/values.yaml
helm upgrade --install ...
This breaks if you have multiple -f values.yml files, but I'm thinking of writing a small helm wrapper that essentially runs that bash script for each values.yaml file.
fsniper illustrates the issue again:
There is a use case where you would need to pass the deployment name to dependency charts over which you have no control.
For example, I am trying to set podAffinity for zookeeper, and I have an application Helm chart which sets zookeeper as a dependency.
In this case, I am passing the pod anti-affinity to zookeeper via values. So in my app's values.yaml file I have a zookeeper.affinity section.
If I had the ability to get the release name inside the values YAML, I would just set this as the default and be done with it.
But now for every deployment I have to override this value, which is a big problem.
Update Oct. 2022, from issue 2133:
lazychanger proposes
I submitted a plugin to override values.yaml with additional templates.
See lazychanger/helm-viv: "Helm-variable-in-values" and its example.
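For completeness, the placeholder idea from issue 2492 can also be applied entirely outside Helm by pre-processing the overrides file before installing (a sketch; RELEASE_NAME_PLACEHOLDER is an arbitrary marker, not a Helm feature, and Helm 3 install syntax is assumed):
# overrides.yml carries the marker instead of a Go template, e.g.:
#   external-dns.alpha.kubernetes.io/hostname: "*.RELEASE_NAME_PLACEHOLDER.mydomain.com"
sed "s/RELEASE_NAME_PLACEHOLDER/${RELEASE_NAME}/g" overrides.yml > overrides.rendered.yml
helm install "${RELEASE_NAME}" mychart --values overrides.rendered.yml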