Creating Issue Labels with Terraform using the GitHub Provider

I'm trying to automate my repository setup with Terraform. The first thing is creating issue labels for a bunch of repos using the Terraform GitHub provider.
It works when I explicitly state the repo and the labels:
terraform {
  required_providers {
    github = {
      source  = "integrations/github"
      version = "~> 5.0"
    }
  }
}

# Use env-var
provider "github" {}

data "github_repository" "trashbox" {
  full_name = "sebastian-sommerfeld-io/trashbox"
}

resource "github_issue_label" "bug" {
  repository  = data.github_repository.trashbox.id
  name        = "bug"
  description = "Something is not working"
  color       = "B60205"
}

resource "github_issue_label" "security" {
  repository  = data.github_repository.trashbox.id
  name        = "security"
  description = "CVEs, code scan violations, etc."
  color       = "cd3ad7"
}
But this would mean duplicating everything for every additional repo, or at least updating my Terraform config manually whenever I add another repo. I'd prefer to have all relevant repos auto-detected.
Auto-detecting works with this snippet, which returns all repos I want to configure:
data "github_repositories" "repos" {
  query           = "user:sebastian-sommerfeld-io archived:false"
  include_repo_id = true
}
But now I cannot create the labels. When I run terraform apply I always get this error:
Error: POST https://api.github.com/repos/sebastian-sommerfeld-io/sebastian-sommerfeld-io/website-sommerfeld-io/labels: 404 Not Found []

  with github_issue_label.bug["sebastian-sommerfeld-io/website-sommerfeld-io"],
  on issues.tf line 1, in resource "github_issue_label" "bug":
   1: resource "github_issue_label" "bug" {
The odd thing is that terraform plan does not hint at any error:
  # github_issue_label.bug["sebastian-sommerfeld-io/website-sommerfeld-io"] will be created
  + resource "github_issue_label" "bug" {
      + color       = "B60205"
      + description = "Something is not working"
      + etag        = (known after apply)
      + id          = (known after apply)
      + name        = "bug"
      + repository  = "sebastian-sommerfeld-io/website-sommerfeld-io"
      + url         = (known after apply)
    }
My complete Terraform config which generates the outputs from terraform plan and terraform apply is this:
terraform {
  required_providers {
    github = {
      source  = "integrations/github"
      version = "~> 5.0"
    }
  }
}

# Use env-var
provider "github" {}

data "github_repositories" "repos" {
  query           = "user:sebastian-sommerfeld-io archived:false"
  include_repo_id = true
}

resource "github_issue_label" "bug" {
  for_each    = toset(data.github_repositories.repos.full_names)
  repository  = each.value
  name        = "bug"
  description = "Something is not working"
  color       = "B60205"
}
The repositories are queried correctly. I confirmed this via:
output "affected_repos" {
  value       = data.github_repositories.repos.full_names
  description = "Github Repos"
}
This lists all repos correctly:
affected_repos = tolist([
"sebastian-sommerfeld-io/website-sommerfeld-io",
"sebastian-sommerfeld-io/jarvis",
"sebastian-sommerfeld-io/github-action-generate-docs",
"sebastian-sommerfeld-io/configs",
"sebastian-sommerfeld-io/website-tafelboy-de",
"sebastian-sommerfeld-io/website-numero-uno-de",
"sebastian-sommerfeld-io/website-masterblender-de",
"sebastian-sommerfeld-io/monitoring",
"sebastian-sommerfeld-io/github-action-update-antora-yml",
"sebastian-sommerfeld-io/github-action-generate-readme",
"sebastian-sommerfeld-io/docker-image-tf-graph-beautifier",
"sebastian-sommerfeld-io/docker-image-jq",
"sebastian-sommerfeld-io/docker-image-git",
"sebastian-sommerfeld-io/docker-image-ftp-client",
"sebastian-sommerfeld-io/docker-image-folderslint",
"sebastian-sommerfeld-io/docker-image-adoc-antora",
"sebastian-sommerfeld-io/trashbox",
"sebastian-sommerfeld-io/provinzial",
])
I guess I don't get the for_each stuff right. Can anyone help me? I want to query all my repos that fit my criteria and add labels to them.
UPDATE: I just noticed that with my static approach I pass the id, not the full_name. I updated my code to this (snippet from above):
resource "github_issue_label" "bug" {
  for_each    = data.github_repositories.repos.repo_ids
  repository  = each.value
  name        = "bug"
  description = "Something is not working"
  color       = "B60205"
}
Now at least the error message is different:
│ Error: Invalid for_each argument
│
│ on issues.tf line 2, in resource "github_issue_label" "bug":
│ 2: for_each = data.github_repositories.repos.repo_ids
│ ├────────────────
│ │ data.github_repositories.repos.repo_ids is list of number with 18 elements
│
│ The given "for_each" argument value is unsuitable: the "for_each" argument
│ must be a map, or set of strings, and you have provided a value of type
│ list of number.
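For what it's worth, the 404 from the first attempt shows the owner twice in the URL (repos/sebastian-sommerfeld-io/sebastian-sommerfeld-io/website-sommerfeld-io/labels), which suggests the provider already prefixes the configured owner and the repository argument only wants the bare repo name. Below is a minimal sketch along those lines, keeping for_each over the full names (a valid set of strings) and stripping the owner; this is an assumption based on the error above, not something verified against the provider docs:

resource "github_issue_label" "bug" {
  # full_names is a list of strings, so toset() makes it a valid for_each value
  for_each = toset(data.github_repositories.repos.full_names)

  # "owner/repo" -> "repo" (assumption: the provider expects the bare repo name)
  repository  = split("/", each.value)[1]
  name        = "bug"
  description = "Something is not working"
  color       = "B60205"
}

If you would rather key off repo_ids, for_each still needs a set of strings, e.g. toset([for id in data.github_repositories.repos.repo_ids : tostring(id)]), but judging by the 404 above the repository argument seems to expect a name rather than a numeric id.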

Related

Error when adding tags to Snowflake resource (role)

I am using:
Terraform v1.2.9
on windows_amd64
provider registry.terraform.io/snowflake-labs/snowflake v0.42.1
My main.tf file is:
terraform {
  required_version = ">= 1.1.7"

  backend "http" {
  }

  required_providers {
    snowflake = {
      source  = "Snowflake-Labs/snowflake"
      version = "~> 0.42"
    }
  }
}

provider "snowflake" {
  username = "xxxxxx"
  account  = "yyyyyy"
  region   = "canada-central.azure"
}
When I add the following tags to a Snowflake role, I get an error. Can you help?
resource "snowflake_role" "operations_manager" {
  name    = "OPERATIONS_MANAGER"
  comment = "A comment"
  tag = {
    managed-with = "Terraform",
    owner        = "Joe Smith"
  }
}
Error: Unsupported argument
│
│ on functional_roles.tf line 35, in resource "snowflake_role" "operations_manager":
│ 35: tag = {
│
│ An argument named "tag" is not expected here. Did you mean to define a block of type "tag"?
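The error message itself hints at the fix: tag is a block type on this resource, not an argument, so it takes block syntax rather than a map. A minimal sketch, assuming the block takes name/value pairs (verify the exact attribute names against the Snowflake-Labs provider docs for your 0.42.x version):

resource "snowflake_role" "operations_manager" {
  name    = "OPERATIONS_MANAGER"
  comment = "A comment"

  # One tag block per tag; attribute names are assumed, check the provider docs
  tag {
    name  = "managed-with"
    value = "Terraform"
  }

  tag {
    name  = "owner"
    value = "Joe Smith"
  }
}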

Create Azure AKS with Managed Identity using Terraform gives AutoUpgradePreview not enabled error

I am trying to create an AKS cluster with managed identity using Terraform. This is my code so far, pretty basic and standard, based on a bit of documentation and a few blog posts I found online.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "2.79.1"
    }
  }
}

provider "azurerm" {
  features {}
  use_msi = true
}

resource "azurerm_resource_group" "rg" {
  name     = "prod_test"
  location = "northeurope"
}

resource "azurerm_kubernetes_cluster" "cluster" {
  name                = "prod_test_cluster"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "weak"

  default_node_pool {
    name       = "default"
    node_count = "4"
    vm_size    = "standard_ds3_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}
And this is the error message that I can't find a way around. Any thoughts on it?
Error: creating Managed Kubernetes Cluster "prod_test_cluster" (Resource Group "prod_test"): containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="BadRequest" Message="Feature Microsoft.ContainerService/AutoUpgradePreview is not enabled. Please see https://aka.ms/aks/previews for how to enable features."
│
│ with azurerm_kubernetes_cluster.cluster,
│ on main.tf line 19, in resource "azurerm_kubernetes_cluster" "cluster":
│ 19: resource "azurerm_kubernetes_cluster" "cluster" {
│
I tested it in my environment and faced the same issue.
To give some background on the issue: AutoChannelUpgrade went to public preview in August 2021. As of terraform azurerm provider 2.79.0, the provider passes that value as none to the backend by default, but because the feature has not been registered, the request fails with the error Feature Microsoft.ContainerService/AutoUpgradePreview is not enabled.
To confirm you don't have the feature registered, you can use the command below:
az feature show -n AutoUpgradePreview --namespace Microsoft.ContainerService
You will see that it is not registered.
Now, to overcome this, you can try one of two solutions.
You can use terraform azurerm provider 2.78.0 instead of 2.79.1, as sketched below.
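A minimal sketch of that version pin (standard required_providers syntax; 2.78.0 is simply the version suggested above, not a separately verified fix):

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      # Pin to 2.78.0 until the AutoUpgradePreview feature is registered
      version = "= 2.78.0"
    }
  }
}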
The other solution is to register for the feature; then you can keep using the same code that you are using.
You can follow the steps below.
Use the command below to register the feature (it will take around 5 minutes to get registered):
az login --identity
az feature register --namespace Microsoft.ContainerService -n AutoUpgradePreview
After that is done, you can check the registration status with the command below:
az feature registration show --provider-namespace Microsoft.ContainerService -n AutoUpgradePreview
Once the feature status becomes Registered, you can run terraform apply on your code.
I tested it using the below code on my VM:
provider "azurerm" {
  features {}
  subscription_id = "948d4068-xxxxx-xxxxxx-xxxx-e00a844e059b"
  tenant_id       = "72f988bf-xxxxx-xxxxxx-xxxxx-2d7cd011db47"
  use_msi         = true
}

resource "azurerm_resource_group" "rg" {
  name     = "terraformtestansuman"
  location = "west us 2"
}

resource "azurerm_kubernetes_cluster" "cluster" {
  name                = "prod_test_cluster"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "weak"

  default_node_pool {
    name       = "default"
    node_count = "4"
    vm_size    = "standard_ds3_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}
Outputs: (screenshot of the successful run omitted)
Reference:
Github Issue
Install Azure CLI if not installed on the VM using Microsoft Installer

Azure app registration creation error through Terraform Azure DevOps YAML pipeline [duplicate]

This question already has answers here: json.Marshal(): json: error calling MarshalJSON for type msgraph.Application (2 answers). Closed 1 year ago.
I have very simple Terraform code.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.46.0"
    }
    azuread = {
      source  = "hashicorp/azuread"
      version = "~> 2.0.0"
    }
  }
}

provider "azurerm" {
  features {}
}

provider "azuread" {
  tenant_id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}

terraform {
  backend "azurerm" {
    resource_group_name  = "xxxx"
    storage_account_name = "xxxxxxxxx"
    container_name       = "xxxxxxxxxxxxx"
    key                  = "xxxxxxxxxxxxxxxxx"
  }
}

data "azuread_client_config" "current" {}

resource "azurerm_resource_group" "test" {
  name     = "test-rg-005"
  location = "East US"
}

resource "azuread_application" "example" {
  display_name = "Example-app"
}
However, when I run this through a YAML pipeline on Azure DevOps, I get this error during the apply stage.
Plan: 1 to add, 0 to change, 0 to destroy.
azuread_application.example: Creating...
│ Error: Could not create application
│
│ with azuread_application.example,
│ on terraform.tf line 42, in resource "azuread_application" "example":
│ 42: resource "azuread_application" "example" {
│
│ json.Marshal(): json: error calling MarshalJSON for type
│ msgraph.Application: json: error calling MarshalJSON for type
│ *msgraph.Owners: marshaling Owners: encountered DirectoryObject with nil
│ ODataId
##[error]Error: The process '/opt/hostedtoolcache/terraform/1.0.5/x64/terraform' failed with
exit code 1
Any clue would be helpful; it's not really clear to me what this error is about.
Thanks.
There is a bug in the Azure Active Directory provider after an MSFT update. It impacts any azuread provider usage that creates new resources; it seems to keep working for already deployed resources, i.e. changing and upgrading the configuration of resources that already exist in Azure AD. Following is the link for the bug updates.
https://github.com/hashicorp/terraform-provider-azuread/issues/588
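If you want to experiment while waiting for the fix, one idea (purely a sketch, not a confirmed workaround; follow the linked issue for the actual resolution) is to set the application's owners explicitly, reusing the azuread_client_config data source already declared in your config:

resource "azuread_application" "example" {
  display_name = "Example-app"

  # Sketch only: set owners explicitly from the client config data source
  # declared earlier; verify against the linked GitHub issue before relying on it.
  owners = [data.azuread_client_config.current.object_id]
}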

Issue creating an RDS PostgreSQL instance with the AWS Terraform module

I am attempting to run this module: https://registry.terraform.io/modules/azavea/postgresql-rds/aws/latest. Here is the main.tf file created based on the information found there:
provider "aws" {
  region     = "us-east-2"
  access_key = "key_here"
  secret_key = "key_here"
}

module "postgresql_rds" {
  source                    = "github.com/azavea/terraform-aws-postgresql-rds"
  vpc_id                    = "vpc-2470994d"
  instance_type             = "db.t3.micro"
  database_name             = "tyler"
  database_username         = "admin"
  database_password         = "admin1234"
  subnet_group              = "tyler-subnet-1"
  project                   = "Postgres-ts"
  environment               = "Staging"
  alarm_actions             = ["arn:aws:sns:us-east-2:304831784377:tyler-test"]
  ok_actions                = ["arn:aws:sns:us-east-2:304831784377:tyler-test"]
  insufficient_data_actions = ["arn:aws:sns:us-east-2:304831784377:tyler-test"]
  database_identifier       = "jl23kj32sdf"
}
I am getting an error:
Error: Error creating DB Instance: DBSubnetGroupNotFoundFault: DBSubnetGroup 'tyler-subnet-1' not found.
│ status code: 404, request id: a95975dd-5456-444a-8f64-440fc4c1782f
│
│ with module.postgresql_rds.aws_db_instance.postgresql,
│ on .terraform/modules/postgresql_rds/main.tf line 46, in resource "aws_db_instance" "postgresql":
│ 46: resource "aws_db_instance" "postgresql" {
I have tried the example from the page, under "Usage", i.e.:
subnet_group = aws_db_subnet_group.default.name
I have also used the subnet ID from AWS, and I assigned a name to the subnet and used that name ("tyler-subnet-1" in the main.tf above). I am getting the same basic error with all three attempted inputs. Is there something I'm not understanding about the information that is being requested here?
Assuming you have a default subnet group, you can just use it:
subnet_group = "default"
If not, you have to create a custom subnet group using aws_db_subnet_group:
resource "aws_db_subnet_group" "default" {
  name       = "my-subnet-group"
  subnet_ids = [<subnet-id-1>, <subnet-id-2>]

  tags = {
    Name = "My DB subnet group"
  }
}
and use the custom group:
subnet_group = aws_db_subnet_group.default.name
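If you don't want to hard-code the subnet IDs, here is a minimal sketch that looks them up from the VPC already referenced in the module call; it assumes a recent hashicorp/aws provider (which provides the aws_subnets data source), and the group name is just a placeholder:

data "aws_subnets" "db" {
  # Subnets of the VPC passed to the module above
  filter {
    name   = "vpc-id"
    values = ["vpc-2470994d"]
  }
}

resource "aws_db_subnet_group" "db" {
  name       = "tyler-db-subnet-group" # placeholder name
  subnet_ids = data.aws_subnets.db.ids
}

You would then pass subnet_group = aws_db_subnet_group.db.name to the module.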

Features block terraform

terraform init succeeds, but terraform plan does not. The error is related to the features block; I'm unsure where to add it:
Insufficient features blocks (source code not available) At least 1 "features" blocks are required.
My configuration looks like this:
terraform {
  required_version = ">= 0.11"

  backend "azurerm" {
    features {}
  }
}
I tried removing and adding the features block as described on the GitHub page.
When you run an updated version of Terraform, you need to define a separate provider block, as shown below:
provider "azurerm" {
  features {}
}
Another reason for the message could be that an aliased (named) provider is in use:
provider "azurerm" {
  alias = "some_name" # <- here
  features {}
}
However, it is not specified on the resource:
resource "azurerm_resource_group" "example" {
  # maybe this block is missing:
  # provider = azurerm.some_name
  name     = var.rg_name
  location = var.region
}
Error message:
terraform plan
╷
│ Error: Insufficient features blocks
│
│ on <empty> line 0:
│ (source code not available)
│
│ At least 1 "features" blocks are required.
In Terraform >= 0.13, here's what a sample versions.tf looks like (note the provider config being in a separate block):
# versions.tf
terraform {
  required_providers {
    azurerm = {
      # ...
    }
  }
  required_version = ">= 0.13"
}

# This block goes outside of the required_providers block!
provider "azurerm" {
  features {}
}
Please check that the provider "azurerm" block with features {} shown above is present in your template.