Terraform append tags

I create a VPC and subnets to which I add tags.
Later, I create an EKS cluster, which appends its own tags to those subnets; if I apply again, the EKS-added tags are overwritten.
I need some way to read the current tags and then merge them with my custom tags. The problem is that if the VPC resources are being created for the first time, I can't query whether any tags exist yet.
Here is my subnet definition:
resource "aws_subnet" "k8s" {
count = "${var.create_vpc && length(var.k8s_subnets) > 0 ? length(var.k8s_subnets) : 0}"
vpc_id = "${local.vpc_id}"
cidr_block = "${var.k8s_subnets[count.index]}"
availability_zone = "${element(var.azs, count.index)}"
tags = "${merge(map("Name", format("subnet-%s-${var.k8s_subnet_suffix}-%s", var.name, element(var.azs, count.index))), var.tags, var.k8s_subnet_tags)}"
}
This is the tag EKS adds:
kubernetes.io/cluster/eks-cluster : shared
I'm stuck with a chicken-and-egg problem. Any ideas or suggestions?
-- Edited
Something like self.tags could be the solution, but unfortunately that is not possible; the self.ATTRIBUTE syntax is only allowed and valid within provisioners, and using it here produces an error:
Error: resource 'aws_subnet.k8s' config: cannot contain self-reference self.tags

This is what I do:
I define the common tags in env.sh, which wraps terraform.
When creating another component, I do this: tags = "${merge(var.default_tags, map("Name", format("%s-Jenkins-ELB", var.env)))}"
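As a rough sketch of that wiring, assuming env.sh feeds the tags in through a TF_VAR_ environment variable (the variable name and tag values here are purely illustrative, Terraform 0.11-style syntax to match the snippet above):
# env.sh (the terraform wrapper) exports the shared tags before running terraform:
#   export TF_VAR_default_tags='{ Environment = "dev", Owner = "platform" }'

variable "default_tags" {
  type        = "map"
  description = "Common tags injected by the env.sh wrapper"
}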
You could write the VPC tags to SSM Parameter Store and retrieve them later for use by the EKS cluster.
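A hedged sketch of that approach (the parameter name is our own invention, and jsondecode requires Terraform 0.12+, so this uses 0.12 syntax):
# In the stack that owns the network, publish the tags once the subnet exists:
resource "aws_ssm_parameter" "subnet_tags" {
  name  = "/network/k8s-subnet-tags" # hypothetical parameter name
  type  = "String"
  value = jsonencode(aws_subnet.k8s[0].tags)
}

# In the stack that owns the EKS cluster, read them back and merge:
data "aws_ssm_parameter" "subnet_tags" {
  name = "/network/k8s-subnet-tags"
}

locals {
  merged_tags = merge(jsondecode(data.aws_ssm_parameter.subnet_tags.value), var.tags)
}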

Related

Terraform Azure DevOps Provider

We are trying to automate Azure DevOps using Terraform. We are able to create projects and repos, but we need to create multiple projects, each with repos specific to that project.
My terraform.tfvars file is given below:
Proj1_Repos = ["Repo1","Repo2","Repo3"]
Proj2_Repos = ["Repo4","Repo5","Repo7"]
Project_Name = ["Proj1","Proj2"]
How can I write my Terraform configuration file so that it creates Proj1_Repos in Proj1 and Proj2_Repos in Proj2?
I think you will have an easier time restructuring the variables to look something like:
"Projects" = {
"Proj1" = {
"repos" = ["Repo1","Repo2","Repo3"]
},
"Proj2" = {
"repos" = ["Repo4","Repo5","Repo6"]
}
}
This way you can more cleanly iterate over your declarations using the for_each meta-argument on your DevOps repo resources, as sketched below.
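A minimal, hedged sketch of that iteration, assuming the microsoft/azuredevops provider and a var.projects variable shaped like the structure above (resource and argument names may differ slightly between provider versions):
variable "projects" {
  type = map(object({
    repos = list(string)
  }))
}

resource "azuredevops_project" "all" {
  for_each = var.projects
  name     = each.key
}

# Flatten the project/repo pairs into one map entry per repository.
locals {
  project_repos = merge([
    for proj, cfg in var.projects : {
      for repo in cfg.repos : "${proj}/${repo}" => {
        project = proj
        repo    = repo
      }
    }
  ]...)
}

resource "azuredevops_git_repository" "all" {
  for_each   = local.project_repos
  project_id = azuredevops_project.all[each.value.project].id
  name       = each.value.repo

  initialization {
    init_type = "Clean"
  }
}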
Alternatively, if restructuring the input variables isn't an option, you can use a locals block to construct an association map from your variables, something like the sketch after this paragraph.
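For example, a hedged sketch that keeps the original flat variable names and pairs them up with zipmap:
locals {
  # Produces { Proj1 = ["Repo1", "Repo2", "Repo3"], Proj2 = ["Repo4", "Repo5", "Repo7"] }
  projects = zipmap(var.Project_Name, [var.Proj1_Repos, var.Proj2_Repos])
}
From here the same for_each pattern as above applies.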
If you are looking for a way for one variable's value to reference another variable, you will not be able to do so without constructing a custom data object from the keys and values of your variables. That route can get pretty wonky and is not recommended.

Best practice for using variables to configure and create new Github repository instance in Terraform instead of updating-in-place

I am trying to set up a standard Github repository template for my organization that uses Terraform to spin up new repos with the configured settings.
Every time I try to update the configuration file to create a new instance of the repository with a new name, it instead tries to update in place the repo that was already created using that file.
My question is: what is the best practice for making my configuration file reusable with input variables like the repo name? Should I make a module, or is there some other way of reusing that file?
Thanks for the help.
Terraform is a desired-state-configuration system, which means that your configuration should represent the full set of objects that should exist rather than an instruction to create a single object.
Therefore the typical way to add a new repository is to add a new resource block declaring that new repository, and leave the existing ones unchanged. Terraform will then see that there's a new resource not currently tracked in the state and will propose to create it.
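For example, a hedged sketch (repository names are illustrative) where the second block was added later and the first is left untouched:
resource "github_repository" "existing" {
  name        = "existing-repo"
  description = "Already tracked in state; unchanged"
  private     = true
}

# Added later; Terraform will propose to create only this one.
resource "github_repository" "new" {
  name        = "new-repo"
  description = "Newly declared repository"
  private     = true
}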
If your repositories are configured in some systematic way that you can describe using a mechanical rule rather than manual configuration then you can potentially use the for_each meta-argument to declare multiple resource instances from the same resource block, using Terraform language expressions to describe the systematic rule.
For example, you could create a local value with a higher-level data structure that describes what should be different between your repositories and then use that data structure with for_each on a single resource block:
locals {
  repositories = tomap({
    example_1 = {
      description = "First example repository"
    }
    example_2 = {
      description = "Second example repository"
    }
  })
}
resource "github_repository" "all" {
for_each = local.repositories
name = each.key
description = each.value.description
private = true
}
For simplicity in this example I've only made the name and description variable between the instances, but you can add whatever extra attributes you need for each of the elements of local.repositories and then access them via each.value inside the resource block.
The private argument above illustrates how this approach can avoid the need to re-state argument values that will be the same for each declared repository, and have your local.repositories data structure focus only on the minimum attributes needed to describe the variations you need for your local policies around GitHub repositories.
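For instance, a hedged variation of the block above that adds a per-repository has_issues flag (the flag is just an illustrative extra attribute):
locals {
  repositories = tomap({
    example_1 = {
      description = "First example repository"
      has_issues  = true
    }
    example_2 = {
      description = "Second example repository"
      has_issues  = false
    }
  })
}

resource "github_repository" "all" {
  for_each    = local.repositories
  name        = each.key
  description = each.value.description
  has_issues  = each.value.has_issues
  private     = true
}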
A resource block with for_each set appears as a map of objects when used in expressions elsewhere, using the same keys as in the map given in for_each. Therefore if you need to access the repository ids, or any other attribute of the systematically-declared objects, you can write Terraform expressions that work with maps. For example, if you want to output all of the repository ids as a map of strings:
output "repository_ids" {
value = tomap({
for k, r in github_repository.all : k => r.repo_id
})
}

Resolution error: Cannot use resource 'x' in a cross-environment fashion, the resource's physical name must be explicit set

I'm trying to pass an ECS cluster from one stack to another.
I get this error:
Error: Resolution error: Resolution error: Resolution error: Cannot use resource 'BackendAPIStack/BackendAPICluster' in a cross-environment fashion, the resource's physical name must be explicit set or use `PhysicalName.GENERATE_IF_NEEDED`.
The cluster is defined as below in BackendAPIStack:
this.cluster = new ecs.Cluster(this, 'BackendAPICluster', {
  vpc: this.vpc
});
The stacks are defined as follows:
const backendAPIStack = new BackendAPIStack(app, `BackendAPIStack${settingsForThisEnv.stackVersion}`, {
  env: {
    account: process.env.CDK_DEFAULT_ACCOUNT,
    region: process.env.CDK_DEFAULT_REGION
  },
  digicallPolicyQueue: digicallPolicyQueue,
  environmentName,
  ...settingsForThisEnv
});
const metabaseStack = new MetabaseStack(app, 'MetabaseStack', backendAPIStack.vpc, backendAPIStack.cluster, {
  vpc: backendAPIStack.vpc,
  cluster: backendAPIStack.cluster
});
metabaseStack.addDependency(backendAPIStack);
Here's the constructor for metabaseStack:
constructor(scope: cdk.Construct, id: string, vpc: ec2.Vpc, cluster: ecs.Cluster, props: MetabaseStackProps) {
  super(scope, id, props);
  console.log('cluster', cluster);
  this.vpc = vpc;
  this.cluster = cluster;
  this.setupMetabase();
}
and then I'm using the cluster here:
const metabaseService = new ecs_patterns.ApplicationLoadBalancedFargateService(this, 'Metabase', {
  assignPublicIp: false,
  cluster: this.cluster,
  ...
I can't find documentation on how to do what I'm trying to do.
You're creating a Region/Account-specific Stack with BackendAPIStack because you're binding the stack to a specific account and region via the env prop value.
Then you're creating a Region/Account-agnostic stack by creating the MetabaseStack without any env prop value.
In general, having two independent stacks like this is fine, but here you're linking them together by passing a reference from the BackendAPIStack to the MetabaseStack, which won't work.
This is a problem because CDK normally links Stacks together by performing Stack Exports and Imports of values, but CloudFormation does not support cross-region or cross-account Stack references.
So, your possible solutions are:
(A) Set up your MetabaseStack to use the same account/region as your BackendAPIStack
Under the hood this will set up the Cluster's ARN as a Stack export from the BackendAPIStack, which the MetabaseStack will then be able to import.
(B1) Create BackendAPICluster with a clusterName that you pick.
i.e. new Cluster(..., {vpc: this.vpc, clusterName: 'backendCluster' })
By not providing a name, you're using the default of a CloudFormation-generated name, which is the basis of the issue that CDK is reporting, albeit in a confusing way.
When you do provide a name, then the ARN for the cluster is deterministic (not picked by CloudFormation at deployment time) so CDK then has enough information at build time to determine what the Cluster's ARN will be and can provide that to your MetabaseStack.
(B2) Create BackendAPICluster with a clusterName and let CDK pick
This is done by setting the clusterName to PhysicalName.GENERATE_IF_NEEDED
i.e. new Cluster(..., {clusterName: PhysicalName.GENERATE_IF_NEEDED })
PhysicalName.GENERATE_IF_NEEDED is a marker that indicates that a physical name will only be generated by the CDK if it is needed for cross-environment references. Otherwise, it will be allocated by CloudFormation.
This is what the error is trying to tell you, but I didn't understand it either...
If possible, I would go with (A). I suspect it was just an oversight that you weren't passing the same env values to the MetabaseStack, and you probably want both of these stacks in the same region anyway, to reduce latency and all that.
If not, then I would personally go with (B2) next, because I try not to give any of my resources explicit names unless they are part of some contract with another group, e.g. "Assume the role named 'ServiceWorker' in Account XYZ" or "Download the data from Bucket 'ABC'".

Error using CLI for cloud functions with IAM namespaces

I'm trying to create an IBM Cloud Function web action from some python code. This code has a dependency which isn't in the runtime, so I've followed the steps here to package the dependency with my code. I now need to create the action on the cloud for this package, using the steps described here. I've got several issues.
The first is that I want to check that this will be going into the right namespace. However, though I have several namespaces, none of them show up when I run ibmcloud fn namespace list; I just get an empty table with headers. I checked that I was targeting the right region using ibmcloud target -r eu-gb.
The second is that when I try to bypass the problem above by creating a namespace from the command line with ibmcloud fn namespace create myNamespaceName, it works, but when I then check the web UI, the new namespace has been created in the Dallas region instead of London. I can't seem to get it to create a namespace in the region I am currently targeting; it's always Dallas.
The third problem is that when I try to follow steps 2 and 3 from here regardless, accepting that the action will end up in the unwanted Dallas namespace, by running the equivalent of ibmcloud fn action create demo/hello <filepath>/hello.js --web true, it keeps telling me I need to target an org and a space. But my namespace is an IAM namespace; it doesn't have an org and a space, so there are none to give.
Please let me know if I'm missing something obvious or have misunderstood something, because to me it feels like the CLI is not respecting the region I'm targeting and is not handling IAM namespaces correctly.
Edit: adding code as suggested, but this code runs fine locally; it's the CLI part that I'm struggling with.
import sys
import requests
import pandas as pd
import json
from ibm_ai_openscale import APIClient

def main(dict):
    # Get AI OpenScale GUID
    AIOS_GUID = None
    token_data = {
        'grant_type': 'urn:ibm:params:oauth:grant-type:apikey',
        'response_type': 'cloud_iam',
        'apikey': 'SOMEAPIKEYHERE'
    }
    response = requests.post('https://iam.bluemix.net/identity/token', data=token_data)
    iam_token = response.json()['access_token']
    iam_headers = {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer %s' % iam_token
    }
    resources = json.loads(requests.get('https://resource-controller.cloud.ibm.com/v2/resource_instances', headers=iam_headers).text)['resources']
    for resource in resources:
        if "aiopenscale" in resource['id'].lower():
            AIOS_GUID = resource['guid']
    AIOS_CREDENTIALS = {
        "instance_guid": AIOS_GUID,
        "apikey": 'SOMEAPIKEYHERE',
        "url": "https://api.aiopenscale.cloud.ibm.com"
    }
    if AIOS_GUID is None:
        print('AI OpenScale GUID NOT FOUND')
    else:
        print('AI OpenScale FOUND')
    # Get OpenScale subscription
    ai_client = APIClient(aios_credentials=AIOS_CREDENTIALS)
    subscriptions_uids = ai_client.data_mart.subscriptions.get_uids()
    for sub in subscriptions_uids:
        if ai_client.data_mart.subscriptions.get_details(sub)['entity']['asset']['name'] == "MYMODELNAME":
            subscription = ai_client.data_mart.subscriptions.get(sub)
    # Explainability test
    sample_transaction_id = "SAMPLEID"
    run_details = subscription.explainability.run(transaction_id=sample_transaction_id, cem=False)
    # Format results
    run_details_json = json.dumps(run_details)
    return run_details_json
I know the OP said they were 'targeting the right region'. But I want to make it clear that the 'right region' is the exact region in which the namespaces you want to list or target are located.
Unless you target this region, you won't be able to list or target any of those namespaces.
This is counterintuitive because
You are able to list Service IDs of namespaces in regions other than the one you are targeting.
The web portal allows you to see namespaces in all regions, so why shouldn't the CLI?
I was having an issue very similar to the OP's first problem, but once I targeted the correct region it worked fine.

Custom CloudFormation Resources in Terraform

I'm trying out Terraform, and am in the process of translating one of my more interesting CloudFormation stacks to TF. Included as a key part of the stack is the following declaration that specifies a custom resource for the template - a Lambda that queries a list of AMIs and selects the latest one for the context, based on the description as a filter.
LatestAMI:
  Type: Custom::LatestAMI
  Properties:
    ServiceToken: arn:aws:lambda:us-east-1:XXXXXXX:function:GetLatestAMI
    Description: ubuntu-16.04
I've looked around the Terraform docs, but I can't seem to find out how I can specify this resource. Is there a Terraform analog for custom resources in CloudFormation?
The CloudFormation code you posted calls a Lambda function to get the latest AMI ID (filtering on Description: ubuntu-16.04). There is a simpler way to do this in Terraform.
You need the aws_ami data source:
https://www.terraform.io/docs/providers/aws/d/ami.html
Use this data source to get the ID of a registered AMI for use in other resources.
data "aws_ami" "latest_ami" {
most_recent = true
executable_users = ["all"]
filter {
name = "owner-alias"
values = ["amazon"]
}
filter {
name = "name"
values = ["*ubuntu-16.04*"]
}
}
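You can then reference the AMI wherever the CloudFormation template consumed the custom resource's output, for example (the resource name and instance type here are illustrative):
resource "aws_instance" "example" {
  ami           = "${data.aws_ami.latest_ami.id}"
  instance_type = "t2.micro"
}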