Is it possible to fetch the image tag from a deployment in EKS using the Terraform Kubernetes provider?

Context:
I'm reusing terraform modules and I deploy microservices using helm provider within terraform.
Problem:
I'm trying to translate this line into Terraform code, to read the current image tag live from prod (in the interest of reusing it). I'm already using the Kubernetes provider's auth, and it doesn't make sense to pull kubectl into my CI just for this.
k get deploy my-deployment -n staging -o jsonpath='{$.spec.template.spec.containers[:1].image}'
The Kubernetes Terraform provider doesn't seem to support a data block for this, and the Helm provider doesn't expose it as an output either.
Does anyone know how we could get (read) the image tag of a deployment using Terraform?
EDIT:
My deployment looks like this:
resource "helm_release" "example" {
  name       = "my-redis-release"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "redis"
  version    = "6.0.1"

  values = [
    file("values.yaml")
  ]

  set {
    name  = "image.tag"
    value = "latest"
  }
}
The tag will be a hash that changes often and is passed in from another repo.
latest in this case should be replaced by the tag currently running in the cluster. I can get it with kubectl, using the line above, but I'm not sure how to do it with Terraform.

It turns out there are multiple ways of doing it; the easiest one for me is to reference the set argument of the helm_release resource:
output "helm_image_tag" {
  value = [for setting in helm_release.example.set : setting.value if setting.name == "image.tag"]
}
The output will then be a list, which you can reference from a shell script (or another scripting language):
+ helm_image_tag = [
    + "latest",
  ]
If the list format does not suit you, you can create a map output:
output "helm_image_tag" {
  value = { for setting in helm_release.example.set : setting.name => setting.value if setting.name == "image.tag" }
}
This produces the following output:
+ helm_image_tag = {
    + "image.tag" = "latest"
  }
By using terraform output helm_image_tag you can access this output value and decide what to do with it in the CI.
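In a CI shell script, one way to consume the list output is `terraform output -json` piped through jq (a sketch; assumes jq is installed). Since the real command needs a Terraform state behind it, the snippet below simulates the JSON it would print:

```shell
# In CI, after `terraform apply`, you would run:
#   tag=$(terraform output -json helm_image_tag | jq -r '.[0]')
# jq -r '.[0]' extracts the first element of the JSON list, e.g.:
printf '%s' '["latest"]' | jq -r '.[0]'   # prints: latest
```

The extracted tag can then be re-fed into the next deploy, e.g. via `-var "image_tag=$tag"`.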

Related

Diff values file using helm provider

I hope you can help me shed some more light on my issue.
Currently I'm using:
Terraform v1.3.5
Helm provider v2.8.0
Kubernetes provider v2.16.1
Lately I've been adopting the Helm provider in Terraform, to help me manage Helm releases with more resiliency. It helps a lot to be able to plan changes and see what has changed and what remains the same. It meshes really well with the infrastructure details, and I can manage everything with just one tool; it has been great.
There is one thing that bothers me a little: the terraform plan preview of the values file. It just shows you that some changes have been made, but not where or which ones. Let me add an example.
File main.tf:
# I'm using the "manifest" setting to calculate manifest diffs
provider "helm" {
  kubernetes {
    config_path    = "~/.kube/config"
    config_context = "clusterconfig"
  }

  experiments {
    manifest = true
  }
}
# The helm chart lives locally on my repo. I'm passing the values file and an
# override for the image tag.
resource "helm_release" "release" {
  name      = "example"
  chart     = "../../helm/chart"
  namespace = "example"
  wait      = true

  set {
    name  = "image.tag"
    value = "latest"
  }

  values = [
    file("helm/values-example.yaml")
  ]
}
This works great; the problem comes when I make a change to the values file. The plan shows the whole file instead of just the changes. For example, in my values file I change the replicas from 1 to 2:
File values-example.yaml:
replicaCount: 1
image:
  repository: test
  pullPolicy: ifNotPresent
The execution:
$ terraform plan
(...)
Terraform will perform the following actions:
  # helm_release.example will be updated in-place
  ~ resource "helm_release" "example" {
      ~ manifest = jsonencode(
          ~ {
              ~ "deployment.apps/apps/v1/deployname" = {
                  ~ spec = {
                      ~ replicas = 1 -> 2
                    }
                }
            }
        )
      ~ values = [
          - <<-EOT
                replicaCount: 1
                image:
                  repository: test
                  pullPolicy: ifNotPresent
            EOT,
          + <<-EOT
                replicaCount: 2
                image:
                  repository: test
                  pullPolicy: ifNotPresent
            EOT,
        ]
    }
This makes it very difficult to see which values settings have changed once the values file gets bigger.
So then, my question: do you know if there is a way to diff the values? I would like to see only the changes instead of the whole file.
What I've seen online:
It's been asked on GitHub, but the issue was closed: https://github.com/hashicorp/terraform-provider-helm/issues/305
Maybe something like this can be implemented: Use different values in helm deploy through Terraform (for_each)
Thanks in advance for the help. Let me know if I can help with any more information.
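One workaround worth mentioning (a sketch of a pattern, not a provider feature): keep rarely-changing defaults in the values file, and pass frequently-changed settings as individual set blocks, which terraform plan diffs attribute by attribute:

```hcl
resource "helm_release" "release" {
  name      = "example"
  chart     = "../../helm/chart"
  namespace = "example"

  values = [
    file("helm/values-example.yaml") # rarely-changing defaults
  ]

  # Settings that change often get their own set block, so the plan
  # shows a per-attribute diff instead of the whole rendered file.
  set {
    name  = "replicaCount"
    value = "2"
  }
}
```

This doesn't diff the values file itself, but it moves the noisy keys out of it, which makes the heredoc diff much smaller in practice.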

Trigger multiple Azure DevOps pipelines in parallel to create VMs on Azure using the same Terraform module

I have a Terraform module, for example, to create a VM on Azure, and it works when I trigger the pipeline.
But when I trigger the pipeline twice, it fails to create two VMs. How do I manipulate the Terraform state file? The only way I can think of is to run multiple pipelines on different agents; does that work?
What we have done is create Terraform "common" modules (basically a subdirectory with .tf files), which we source into a Terraform environment multiple times with different parameters.
We usually put these into a list and loop over them.
In your environments terraform:
locals {
  azure_vms = [
    { name = "vm1", size = "Standard_B2s" },
    { name = "vm2", size = "Standard_B4s" }
  ]
}

module "my_azure_vm" {
  source   = "./common/my_azure_vm"
  for_each = { for vm in local.azure_vms : vm.name => vm }

  size = each.value.size
  name = each.value.name
}
In common/my_azure_vm, you can define inputs for size and name, then use those to create the VMs with your standard parameters.
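For illustration, the common module might declare matching inputs like this (a sketch; the module path and resource arguments are assumptions, and the VM resource is heavily abbreviated):

```hcl
# ./common/my_azure_vm/variables.tf
variable "name" {
  type = string
}

variable "size" {
  type = string
}

# ./common/my_azure_vm/main.tf (abbreviated)
resource "azurerm_linux_virtual_machine" "this" {
  name = var.name
  size = var.size
  # ... resource group, image, network interface, etc.
}
```

Because for_each keys each module instance by VM name, both VMs get distinct addresses in one shared state file, so a second run adds a VM instead of clashing with the first.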

Kubernetes secret with Flux and Terraform

I am new to Terraform and DevOps in general. First, I need to get the SSH host key from a URL into known_hosts, to later use for Flux.
data "helm_repository" "fluxcd" {
  name = "fluxcd"
  url  = "https://charts.fluxcd.io"
}

resource "helm_release" "flux" {
  name       = "flux"
  namespace  = "flux"
  repository = data.helm_repository.fluxcd.metadata[0].name
  chart      = "flux"

  set {
    name  = "git.url"
    value = "git.project"
  }

  set {
    name  = "git.secretName"
    value = "flux-git-deploy"
  }

  set {
    name  = "syncGarbageCollection.enabled"
    value = true
  }

  set_string {
    name  = "ssh.known_hosts"
    value = "" # Need this value from the URL
  }
}
Then I need to generate key and use it to create kubernetes secret to communicate with gitlab repository.
resource "kubernetes_secret" "flux-git-deploy" {
  metadata {
    name      = "flux-git-deploy"
    namespace = "flux"
  }

  type = "Opaque"

  data = {
    identity = tls_private_key.flux.private_key_pem
  }
}

resource "gitlab_deploy_key" "flux_deploy_key" {
  title    = "Title"
  project  = "ProjectID"
  key      = tls_private_key.flux.public_key_openssh
  can_push = true
}
I am not sure if I am on the right track. Any advice will help.
There are a few approaches you could use. These can be divided into two categories:
generate the ssh_known_hosts content manually and pass it in through variables or files, or
create the file on the machine where you're running Terraform with the command ssh-keyscan <git_domain>, and set the path as the value for ssh.known_hosts.
You can also use the file function directly in the variable, or pass the command output directly as an environment variable. Personally I would not recommend that, because the value is saved directly in the Terraform state; in this case that is not a critical issue, though. It would be critical if you were handling SSH keys or credentials this way.
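As a concrete sketch of the file-based approach (the file name and path are assumptions): generate the file once with `ssh-keyscan gitlab.com > known_hosts`, keep it next to your configuration, and reference it with the file function inside the helm_release:

```hcl
set_string {
  name  = "ssh.known_hosts"
  value = file("${path.module}/known_hosts")
}
```

Since the known_hosts content is public host-key data, committing it to the repo is generally acceptable, unlike private keys.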
Another approach would be to use the local-exec provisioner with a null_resource before you create the Helm resource for Flux, and create the file directly from Terraform. But in addition to that, you have to take care of accessing the file you created, and of managing the triggers so the command re-runs when a setting changes.
In general, I would not use Terraform for such things. It is fine for providing infrastructure like AWS resources or services directly bound to the infrastructure, but in order to create and run services you need a provisioning tool like Ansible, where you can run commands like ssh-keyscan directly as a module. In the end you need a stable pipeline where you run Ansible (or your favorite provisioning tool) after a Terraform change.
But if you want to use only Terraform, you're on the right track.

Bazel rules_k8s - How to Apply External Configuration Files? (From URL)

I am trying to fully automate the deployment to my Kubernetes Cluster with Bazel and rules_k8s.
But I don't know how to apply external configurations to my cluster.
Usually I would run a command like
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml
But I want this to happen automatically when I run my
k8s_objects(
    name = "kubernetes_deployment",
    objects = [
        "//kubernetes:nginx",
        "//services/gateway:k8s",
        "//services/ideas:k8s",
        # ...
    ],
)
rule to deploy everything to Kubernetes.
Try this in your BUILD file. I'm not sure it's the best way, as it will be re-run on every build. It would be nice if we could use an http_file here instead of a genrule.
genrule(
    name = "extyaml",
    srcs = [],
    outs = ["certman-k8s.yaml"],
    cmd = "curl -L https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml > $@",
)
k8s_object(
    name = "certman",
    cluster = "minikube",
    template = ":certman-k8s.yaml",
)
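For reference, the http_file repository rule (declared in the WORKSPACE) could in principle replace the genrule, so Bazel downloads and caches the YAML instead of re-fetching it on every build. A sketch, not tested against rules_k8s; the repository name `certman_yaml` is an assumption:

```python
# WORKSPACE
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_file")

http_file(
    name = "certman_yaml",
    urls = ["https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml"],
    # sha256 = "...",  # pin the file for reproducible, cacheable builds
)

# BUILD
k8s_object(
    name = "certman",
    cluster = "minikube",
    template = "@certman_yaml//file",
)
```

Pinning the sha256 also guards against the upstream file changing silently.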

rules_k8s - k8s_deploy template away images

We have a project consisting of more than 20 small services that all reside in the same repository and are built using Bazel.
To reduce management overhead, we would like to automagically generate as much as possible, including our images and k8s deployments.
So the question is:
Is there a way to avoid setting the image key in the k8s_deploy step by a rule or function?
We already have a rule which templates the image into our manifest, so that the image name (and k8s object name) is based on the label:
_TEMPLATE = "//k8s:deploy.yaml"

def _template_manifest_impl(ctx):
    name = "{}".format(ctx.label).replace("//cmd/", "").replace("/", "-").replace(":manifest", "")
    ctx.actions.expand_template(
        template = ctx.file._template,
        output = ctx.outputs.source_file,
        substitutions = {
            "{NAME}": name,
        },
    )

template_manifest = rule(
    implementation = _template_manifest_impl,
    attrs = {
        "_template": attr.label(
            default = Label(_TEMPLATE),
            allow_single_file = True,
        ),
    },
    outputs = {"source_file": "%{name}.yaml"},
)
This way the service under //cmd/endpoints/customer/log would result in the image eu.gcr.io/project/endpoints-customer-log.
While this works fine so far, we still have to manually set the images dict for k8s_deploy like this:
k8s_deploy(
    name = "dev",
    images = {
        "eu.gcr.io/project/endpoints-customer-log:dev": ":image",
    },
    template = ":manifest",
)
It would be great to get rid of this, but I haven't found a way yet.
Using a rule does not work because images does not take a label, and using a function does not work because I found no way of accessing the context in there.
Am I missing something?
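One avenue that might be worth exploring before reaching for another tool: a macro (as opposed to a rule) runs at loading time and can call native.package_name(), so it can compute the images key with the same naming scheme the manifest rule uses. A hypothetical sketch; the helper name and load path are assumptions:

```python
# k8s/deploy.bzl (hypothetical helper)
load("@io_bazel_rules_k8s//k8s:objects.bzl", "k8s_deploy")

def service_k8s_deploy(name, image = ":image", tag = "dev"):
    # //cmd/endpoints/customer/log -> endpoints-customer-log,
    # mirroring the replace() chain in the manifest rule above.
    service = native.package_name().replace("cmd/", "").replace("/", "-")
    k8s_deploy(
        name = name,
        images = {
            "eu.gcr.io/project/" + service + ":" + tag: image,
        },
        template = ":manifest",
    )
```

Each service's BUILD file would then only need `service_k8s_deploy(name = "dev")`.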
The solution I found to get the container registry names out of the build step was to use Bazel for the build and Skaffold for the deploy. Both steps are performed in the same CI pipeline.
My skaffold.yaml is very simple, and provides the mapping of Bazel targets to GCR names.
apiVersion: skaffold/v2alpha4
kind: Config
metadata:
  name: my_services
build:
  tagPolicy:
    gitCommit:
      variant: AbbrevCommitSha
  artifacts:
    - image: gcr.io/jumemo-dev/service1
      bazel:
        target: //server1/src/main/java/server1:server1.tar
    - image: gcr.io/jumemo-dev/service2
      bazel:
        target: //server2/src/main/java/server2:server2.tar
It is invoked using:
$ skaffold build