Diff values file using helm provider - kubernetes

I hope you can help me shed some more light on my issue.
Currently I'm using:
Terraform v1.3.5
Helm provider v2.8.0
Kubernetes provider v2.16.1
Lately I've been adopting the Helm provider in Terraform to help me manage Helm releases with more resiliency. It helps a lot to be able to plan changes and see what has changed and what stays the same. It integrates really well with the rest of the infrastructure, and I can manage everything with just one tool; it has been great.
There is one thing that bothers me a little: the terraform plan preview of the values file. It just shows that some changes have been made, but not where or which ones. Let me add an example.
File main.tf:
# I'm using the "manifest" setting to calculate manifest diffs
provider "helm" {
  kubernetes {
    config_path    = "~/.kube/config"
    config_context = "clusterconfig"
  }

  experiments {
    manifest = true
  }
}
# The helm chart lives locally on my repo. I'm passing the values file and an
# override for the image tag.
resource "helm_release" "release" {
  name      = "example"
  chart     = "../../helm/chart"
  namespace = "example"
  wait      = true

  set {
    name  = "image.tag"
    value = "latest"
  }

  values = [
    file("helm/values-example.yaml")
  ]
}
This works great; the problem comes when I make a change to the values file. The plan shows the whole file instead of just the changes. For example, in my values file I change the replicas from 1 to 2:
File values-example.yaml:
replicaCount: 1
image:
  repository: test
  pullPolicy: ifNotPresent
The execution:
$ terraform plan
(...)
Terraform will perform the following actions:

  # helm_release.example will be updated in-place
  ~ resource "helm_release" "example" {
      ~ manifest = jsonencode(
          ~ {
              ~ "deployment.apps/apps/v1/deployname" = {
                  ~ spec = {
                      ~ replicas = 1 -> 2
                    }
                }
            }
        )
      ~ values   = [
          - <<-EOT
                replicaCount: 1
                image:
                  repository: test
                  pullPolicy: ifNotPresent
            EOT,
          + <<-EOT
                replicaCount: 2
                image:
                  repository: test
                  pullPolicy: ifNotPresent
            EOT,
        ]
    }
This makes it very difficult to see which values have actually changed when the values file is bigger.
So, my question: do you know if there is a way to diff the values? I would like to see only the changes instead of the whole file.
What I've seen online:
It has been asked for on GitHub, but the issue was closed: https://github.com/hashicorp/terraform-provider-helm/issues/305
Maybe something like this can be implemented: Use different values in helm deploy through Terraform (for_each)
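Another idea I've been toying with (untested, and the extra file name below is made up) is splitting the values across several smaller files in the values list, so that a change at least only rewrites the list element it lives in:

resource "helm_release" "release" {
  name      = "example"
  chart     = "../../helm/chart"
  namespace = "example"

  # Helm merges the list entries in order, so frequently changed settings
  # could live in their own small file.
  values = [
    file("helm/values-example.yaml"),
    file("helm/values-replicas.yaml"), # hypothetical file holding only replicaCount
  ]
}

That still shows whole elements in the diff, just smaller ones, so it's only a partial workaround.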
Thanks in advance for the help. Let me know if I can help with any more information.

Related

Spinnaker multiple auto-triggers for multi-image Helm Chart

Hello Ladies & Gentlemen,
My Helm Chart consists of 9 Docker images. I would like to be able to deploy it to 4 environments using Spinnaker, which is installed on Ubuntu. I use GitHub & ECR auto-triggers; the images are in ECR, and the Helm Chart is in GitHub.
values.yaml is something like that:
image:
  repository: 123123123.dkr.ecr.eu-central-1.amazonaws.com/docker/app1
  tag: ""
image:
  repository: 123123123.dkr.ecr.eu-central-1.amazonaws.com/docker/app3
  tag: ""
image:
  repository: 123123123.dkr.ecr.eu-central-1.amazonaws.com/docker/app2
  tag: ""
My pipeline is auto-triggered by multiple ECR repositories, multiple images.
Triggers work, Spinnaker gets the payload and deployment starts.
The problem is that I am not able to place the image/tag transmitted by the ECR webhook into the right place in the baked manifest.
For example, Spinnaker Igor gets image information from the ECR webhook, something like this:
Found 1 new images for svc-spinnaker-ecr. Images: [{imageId=igor:dockerRegistry:v2:svc-spinnaker-ecr:docker/app3:0.1.0-dev.9, sendEvent=true}]
This string app3:0.1.0-dev.9, or this one 0.1.0-dev.9, must be placed/replaced in the baked manifest in the right line shown above.
How can I accomplish it? Could you please advise?
Thanks & Regards
I tried the following SpEL expressions:
${ #stage('Find Image from Staging Environment').outputs.artifacts[0].reference.split(':')[1] }
${ trigger['tag']}
${ #triggerResolvedArtifactByType("docker/image")["reference"]}
${ #stage('bake')['outputs']['artifacts'].?[type == 'docker/image'].![reference] }
${ trigger['artifacts'].?[type == 'docker/image'].![reference] }
${ trigger['artifacts'].?[type == 'docker/image'].![reference.split(':')[1]] }
${ trigger['artifacts'].?[type == 'docker/image'].![reference.replace("[", "").replace("]", "").split(':')[1]] }
${ trigger['artifacts'].?[type == 'docker/image'].![reference.split(':')[1]] }
...but none of them has an image-name constraint, so they are not usable as a variable or override.
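What I think is still missing is something like the following (an untested sketch that only combines the working filter above with a name constraint; app3 is just an example):

${ trigger['artifacts'].?[type == 'docker/image' and reference.contains('app3')].![reference.split(':')[1]] }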

Is it possible to fetch the image tag from a deployment in EKS using terraform kubernetes provider?

Context:
I'm reusing terraform modules and I deploy microservices using helm provider within terraform.
Problem:
I'm trying to translate this line into Terraform code, to get the current image tag live from prod (in the interest of reusing it). I'm already using the Kubernetes provider's auth, and it doesn't make sense to pull kubectl into my CI just for this.
k get deploy my-deployment -n staging -o jsonpath='{$.spec.template.spec.containers[:1].image}'
The Kubernetes Terraform provider doesn't seem to support data blocks for this, nor does the Helm provider support output blocks.
Does anyone know how we could get (read) the image tag of a deployment using Terraform?
EDIT:
My deployment looks like this:
resource "helm_release" "example" {
name = "my-redis-release"
repository = "https://charts.bitnami.com/bitnami"
chart = "redis"
version = "6.0.1"
values = [
"${file("values.yaml")}"
]
set {
name = "image.tag"
value = "latest"
}
}
The tag will be a hash that changes often and is passed on from another repo.
latest in this case should be replaced by the tag currently running in the cluster. I can get it with kubectl using the line above, but I'm not sure how to do it with Terraform.
It turns out there are multiple ways of doing it; the easiest one for me is to reference the set argument of the helm_release resource:
output "helm_image_tag" {
value = [ for setting in helm_release.example.set : setting.value if setting.name == "image.tag" ]
}
The output will then be a list, which you can reference in a shell script (or another scripting language):
+ helm_image_tag = [
    + "latest",
  ]
If the list format does not suit you, you can create a map output:
output "helm_image_tag" {
value = { for setting in helm_release.example.set : setting.name => setting.value if setting.name == "image.tag" }
}
This produces the following output:
+ helm_image_tag = {
    + "image.tag" = "latest"
  }
By using terraform output helm_image_tag you can access this output value and decide what to do with it in the CI.
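For example, a CI step could look roughly like this (the jq usage and the variable name are just an illustration):

# Read the map output produced above and extract the tag.
IMAGE_TAG=$(terraform output -json helm_image_tag | jq -r '."image.tag"')
echo "Currently deployed tag: ${IMAGE_TAG}"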

explanation of Service.get from pulumi

I am using a Pulumi Release to deploy a Helm chart that includes many services, and I am trying to get one of the deployed services. https://www.pulumi.com/blog/full-access-to-helm-features-through-new-helm-release-resource-for-kubernetes/#how-do-i-use-it shows we can use Service.get to achieve this goal, but I failed to find any information on the parameters of the method. Could someone explain it a bit or point me to the correct documentation on Service.get?
Thanks
I think there's a bug in that post; it should be -master, not -redis-master:
...
srv = Service.get("redis-master-svc", Output.concat(status.namespace, "/", status.name, "-master"))
As for what's going on here, I'll try to explain. You're right that this doesn't seem to be documented in a way that's easy to find, since it isn't part of the Kubernetes provider API but rather part of the core Pulumi resource API.
If you change up the example to use -master instead, you should be able to run the Pulumi program as otherwise quoted in that blog post. Here's the complete, modified program I'm using for reference:
import pulumi
from pulumi import Output
from pulumi_random.random_password import RandomPassword
from pulumi_kubernetes.core.v1 import Namespace, Service
from pulumi_kubernetes.helm.v3 import Release, ReleaseArgs, RepositoryOptsArgs

namespace = Namespace("redis-ns")
redis_password = RandomPassword("pass", length=10)

release_args = ReleaseArgs(
    chart="redis",
    repository_opts=RepositoryOptsArgs(
        repo="https://charts.bitnami.com/bitnami"
    ),
    version="13.0.0",
    namespace=namespace.metadata["name"],
    # Values from Chart's parameters specified hierarchically,
    # see https://artifacthub.io/packages/helm/bitnami/redis/13.0.0#parameters
    # for reference.
    values={
        "cluster": {
            "enabled": True,
            "slaveCount": 3,
        },
        "metrics": {
            "enabled": True,
            "service": {
                "annotations": {
                    "prometheus.io/port": "9127",
                }
            },
        },
        "global": {
            "redis": {
                "password": redis_password.result,
            }
        },
        "rbac": {
            "create": True,
        },
    },
    # By default Release resource will wait till all created resources
    # are available. Set this to true to skip waiting on resources being
    # available.
    skip_await=False)

release = Release("redis-helm", args=release_args)

# We can look up resources once the release is installed. The release's
# status field is set once the installation completes, so this, combined
# with `skip_await=False` above, will wait to retrieve the Redis master
# ClusterIP till all resources in the Chart are available.
status = release.status

pulumi.export("namespace", status.namespace)

srv = Service.get("redis-master-svc", Output.concat(status.namespace, "/", status.name, "-master"))

pulumi.export("redisMasterClusterIP", srv.spec.cluster_ip)
When you deploy this program with pulumi up (e.g., locally with Minikube), you'll have a handful of running services:
$ pulumi up --yes
...
Updating (dev)
...
     Type                               Name              Status
 +   pulumi:pulumi:Stack                so-71802926-dev   created
 +   ├─ kubernetes:core/v1:Namespace    redis-ns          created
 +   ├─ random:index:RandomPassword     pass              created
 +   ├─ kubernetes:helm.sh/v3:Release   redis-helm        created
     └─ kubernetes:core/v1:Service      redis-master-svc

Outputs:
    namespace           : "redis-ns-0f9e4b1e"
    redisMasterClusterIP: "10.103.98.199"

Resources:
    + 4 created

Duration: 1m13s

$ minikube service list
|-------------------|------------------------------|--------------|-----|
|     NAMESPACE     |             NAME             | TARGET PORT  | URL |
|-------------------|------------------------------|--------------|-----|
| default           | kubernetes                   | No node port |     |
| kube-system       | kube-dns                     | No node port |     |
| redis-ns-0f9e4b1e | redis-helm-b5f3ea12-headless | No node port |     |
| redis-ns-0f9e4b1e | redis-helm-b5f3ea12-master   | No node port |     |
| redis-ns-0f9e4b1e | redis-helm-b5f3ea12-metrics  | No node port |     |
| redis-ns-0f9e4b1e | redis-helm-b5f3ea12-slave    | No node port |     |
|-------------------|------------------------------|--------------|-----|
Getter functions like Service.get are explained here, in the Resources docs: https://www.pulumi.com/docs/intro/concepts/resources/get/
Service.get takes two arguments. The first is the logical name you want to use to refer to the fetched resource in your stack; it can generally be any string, as long as it's unique among other resources in the stack. The second is the "physical" (i.e., provider-native) ID by which to look it up. It looks like the Kubernetes provider wants that ID to be of the form {namespace}/{name}, which is why you need to use Output.concat to assemble a string composed of the eventual values of status.namespace and status.name (as these values aren't known until the update completes). You can learn more about Outputs and Output.concat in the Resources docs as well: https://www.pulumi.com/docs/intro/concepts/inputs-outputs/
Hope that helps! Let me know if you have any other questions. I've also submitted a PR to get that blog post fixed up.

Kubernetes secret with Flux and Terraform

I am new to Terraform and DevOps in general. First I need to get the SSH host key from a URL into known_hosts, to later use it for Flux.
data "helm_repository" "fluxcd" {
name = "fluxcd"
url = "https://charts.fluxcd.io"
}
resource "helm_release" "flux" {
name = "flux"
namespace = "flux"
repository = data.helm_repository.fluxcd.metadata[0].name
chart = "flux"
set {
name = "git.url"
value = "git.project"
}
set {
name = "git.secretName"
value = "flux-git-deploy"
}
set {
name = "syncGarbageCollection.enabled"
value = true
}
set_string {
name = "ssh.known_hosts"
value = Need this value from url
}
}
Then I need to generate a key and use it to create a Kubernetes secret for communicating with the GitLab repository.
resource "kubernetes_secret" "flux-git-deploy" {
metadata {
name = "flux-git-deploy"
namespace = "flux"
}
type = "Opaque"
data = {
identity = tls_private_key.flux.private_key_pem
}
}
resource "gitlab_deploy_key" "flux_deploy_key" {
title = "Title"
project = "ProjectID"
key = tls_private_key.flux.public_key_openssh
can_push = true
}
I am not sure if I am on the right track. Any advice will help.
There are a few approaches you could use. These can be divided into two categories:
1. Generate the ssh_known_hosts content manually and pass it in through variables or files.
2. Create the file on the machine where you're running Terraform with the command ssh-keyscan <git_domain> and set the path as the value for ssh.known_hosts.
You can also use the file function directly in the variable, or use the file contents directly as an environment variable. Personally I would not recommend that, because the value is saved directly in the Terraform state, but in this case it is not a critical issue. It would be critical if you were handling SSH keys or credentials this way.
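As a small sketch of the first category, assuming you generated the file beforehand with ssh-keyscan gitlab.com > known_hosts (the host and path are just examples), the set_string block could read the file directly:

set_string {
  name  = "ssh.known_hosts"
  value = file("${path.module}/known_hosts") # file generated outside of Terraform
}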
Another approach would be to use the local-exec provisioner with a null_resource before you create the Helm resource for Flux, and create the file directly from Terraform. But in addition to that, you have to take care of accessing the file you created and of managing the triggers so the command reruns if a setting changes.
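A minimal sketch of that idea (the host, path and trigger are illustrative, and you still need to make sure it runs before the file is read):

resource "null_resource" "known_hosts" {
  triggers = {
    git_host = "gitlab.com"
  }

  # Writes the host key to a local file that the helm_release can then read.
  provisioner "local-exec" {
    command = "ssh-keyscan gitlab.com > ${path.module}/known_hosts"
  }
}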
In general, I would not use Terraform for such things. It is fine for providing infrastructure like AWS resources or services which are directly bound to the infrastructure, but in order to create and run services you need a provisioning tool like Ansible, where you can run commands like ssh-keyscan directly as a module. In the end you need a stable pipeline where you run Ansible (or your favorite provisioning tool) after a Terraform change.
But if you want to use only Terraform, you're going the right way.

rules_k8s - k8s_deploy template away images

We have a project which consists of more than 20 small services that all reside inside the same repository and are built using Bazel.
To reduce management overhead we would like to automagically generate as much as possible, including our images and k8s deployments.
So the question is:
Is there a way to avoid setting the image key in the k8s_deploy step by a rule or function?
We already have a rule which templates the image inside our manifest so that the image name (and k8s object name) is based on the label:
_TEMPLATE = "//k8s:deploy.yaml"
def _template_manifest_impl(ctx):
name = '{}'.format(ctx.label).replace("//cmd/", "").replace("/", "-").replace(":manifest", "")
ctx.actions.expand_template(
template = ctx.file._template,
output = ctx.outputs.source_file,
substitutions = {
"{NAME}": name,
},
)
template_manifest = rule(
implementation = _template_manifest_impl,
attrs = {
"_template": attr.label(
default = Label(_TEMPLATE),
allow_single_file = True,
),
},
outputs = {"source_file": "%{name}.yaml"},
)
This way the service under //cmd/endpoints/customer/log would result in the image eu.gcr.io/project/endpoints-customer-log.
While this works fine so far, we still have to manually set the images dict for k8s_deploy like this:
k8s_deploy(
    name = "dev",
    images = {
        "eu.gcr.io/project/endpoints-customer-log:dev": ":image",
    },
    template = ":manifest",
)
It would be great to get rid of this, but so far I have failed to find a way.
Using a rule does not work because images does not take a label, and using a function does not work because I found no way of accessing the context in there.
Am I missing something?
The solution I found to get the container registry names out of the build step was to use Bazel for build and Skaffold for deploy. Both steps are performed in the same CI pipeline.
My skaffold.yaml is very simple, and provides the mapping of Bazel targets to GCR names.
apiVersion: skaffold/v2alpha4
kind: Config
metadata:
  name: my_services
build:
  tagPolicy:
    gitCommit:
      variant: AbbrevCommitSha
  artifacts:
    - image: gcr.io/jumemo-dev/service1
      bazel:
        target: //server1/src/main/java/server1:server1.tar
    - image: gcr.io/jumemo-dev/service2
      bazel:
        target: //server2/src/main/java/server2:server2.tar
It is invoked using:
$ skaffold build
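In the CI pipeline the two steps can then be chained, for example like this (treat it as a sketch; the flags are how I understand them from the Skaffold docs, and the file name is arbitrary):

# Build the Bazel targets, record the pushed image references,
# then deploy the rendered manifests with exactly those references.
skaffold build --file-output=build.json
skaffold deploy --build-artifacts=build.json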