Azure Role Assignment to AAD group fails with Terraform - azure-devops

I am trying to assign contributor rights on a resource group to an Azure Active Directory Group using Terraform. The Terraform script I use looks like this:
# Deploy Resource Groups
resource "azurerm_resource_group" "rg" {
  name     = "rg-companyname-syn-env-100"
  location = "westeurope"
}

# Retrieve data for the AAD CloudAdmin group
data "azuread_group" "cloud_admin" {
  display_name     = "AAD-GRP-companyname-CloudAdministrators-env"
  security_enabled = true
}

# Add the "Contributor" role to the CloudAdmin AAD group
resource "azurerm_role_assignment" "cloud_admin" {
  scope                = azurerm_resource_group.rg.id
  role_definition_name = "Contributor"
  principal_id         = data.azuread_group.cloud_admin.id

  depends_on = [azurerm_resource_group.rg]
}
If I run this I receive the following error:
╷
│ Error: authorization.RoleAssignmentsClient#Create: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailed" Message="The client '<application_object_id>' with object id '<application_object_id>' does not have authorization to perform action 'Microsoft.Authorization/roleAssignments/write' over scope '/subscriptions/<subscription_id>/resourceGroups/rg-companyname-syn-env-100/providers/Microsoft.Authorization/roleAssignments/<role_assignment_id>' or the scope is invalid. If access was recently granted, please refresh your credentials."
│
│ with azurerm_role_assignment.cloud_admin["syn"],
│ on rg.tf line 15, in resource "azurerm_role_assignment" "cloud_admin":
│ 15: resource "azurerm_role_assignment" "cloud_admin" {
│
╵
Note the AAD Group (AAD-GRP-companyname-CloudAdministrators-env) already has the Owner role on the subscription used.
Does anybody know a fix for this problem?

I had this issue occur today after someone else on the team changed the service principal my deployment runs under to a Contributor rather than an Owner of the subscription. Assign the "Owner" role on the subscription to the service account your deployments run under.
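In Terraform, that assignment could look like the following sketch, where var.subscription_id and var.deploy_sp_object_id are hypothetical placeholders for your subscription ID and the object ID of the service principal your deployments run under:
# Hypothetical sketch: grant the deployment service principal "Owner" on the
# subscription; both variables are placeholders, not values from the question.
resource "azurerm_role_assignment" "deploy_sp_owner" {
  scope                = "/subscriptions/${var.subscription_id}"
  role_definition_name = "Owner"
  principal_id         = var.deploy_sp_object_id
}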

Whichever principal runs Terraform (you, a service principal, or a managed identity assigned to a build agent) likely has the built-in Azure Contributor role, which can manage resources but not role-based access control (RBAC). That is why you are getting the 403 AuthorizationFailed response.
The Contributor role is allowed to read role assignments, but not write them. Since you are using Terraform, I would suggest creating a custom role definition that allows write as well as delete, so you can also use terraform destroy.
You can create the custom role definition by clicky-clicking in the portal, with the Azure CLI, or with Terraform (snippet below); it must be executed by someone with the Owner role.
Once you have a custom role definition with the appropriate permissions, assign the principal that executes terraform apply to that custom role.
data "azurerm_client_config" "current" {
}
resource "azurerm_role_definition" "role_assignment_write_delete" {
name = "RBAC Owner"
scope = data.azurerm_client_config.current.subscription_id
description = "Management of role assignments"
permissions {
actions = [
"Microsoft.Authorization/roleAssignments/write",
"Microsoft.Authorization/roleAssignments/delete",
]
not_actions = []
}
assignable_scopes = [
data.azurerm_client_config.current.subscription_id //or management group
]
}
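Once the definition exists, the assignment itself can also be done in Terraform. A minimal sketch, assuming the principal running terraform apply is the one returned by azurerm_client_config (swap in the object ID of your own service principal otherwise):
# Minimal sketch: assign the custom role above to the principal Terraform runs
# as, assuming that principal is the one azurerm_client_config returns.
resource "azurerm_role_assignment" "terraform_rbac_owner" {
  scope              = "/subscriptions/${data.azurerm_client_config.current.subscription_id}"
  role_definition_id = azurerm_role_definition.role_assignment_write_delete.role_definition_resource_id
  principal_id       = data.azurerm_client_config.current.object_id
}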

How to enable a preview feature of an Azure resource provider

I would like to enable an Azure preview feature via Terraform. I have configured skip_provider_registration, but when I try to apply I still get a 'provider already exists' error, and I have to import manually as a workaround.
QUERY:
Must we import manually to avoid the 'provider already exists' error when registering a preview feature? I already set skip_provider_registration, but it doesn't seem to work.
Thanks!
======== configuration ========
# Configure the Azure provider
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.99"
    }
  }
  required_version = ">= 1.1.0"
}

provider "azurerm" {
  features {}
  skip_provider_registration = true
}

resource "azurerm_resource_provider_registration" "example" {
  name = "Microsoft.Network"

  feature {
    name       = "AFWEnableNetworkRuleNameLogging"
    registered = true
  }
}
======== error log ========
terraform apply main.tfplan
azurerm_resource_provider_registration.example: Creating…
╷
│ Error: A resource with the ID "/subscriptions/xxxx-xxxx/providers/Microsoft.Network" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_resource_provider_registration" for more information.
Is there any solution for the above requirement, i.e. enabling the preview feature of the corresponding resource provider namespace?
If the resource provider is already registered in Azure but missing from the Terraform state file, we must import it before making any changes. Only then will Terraform read the respective resource from the state file.
Step 1:
Add the below code to your provider .tf and main .tf files.
Provider .tf file:
terraform {
  required_version = ">= 1.1.0"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.99"
    }
  }
}

provider "azurerm" {
  features {}
  skip_provider_registration = true
}
Main .tf file:
resource "azurerm_resource_provider_registration" "example" {
name = "Microsoft.Network"
feature {
name = "AFWEnableNetworkRuleNameLogging"
registered = true
}
}
Step 2:
Run the below commands:
terraform plan
terraform apply -auto-approve
NOTE:
If you get an error saying "already exists - to be managed via Terraform this resource needs to be imported into the State.", run the below command to import the respective service into the Terraform state:
terraform import azurerm_resource_provider_registration.example /subscriptions/************************/providers/Microsoft.Network
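On Terraform 1.5 or later, the same import can also be expressed declaratively with an import block instead of the CLI command; a sketch, keeping the subscription ID elided as above:
# Sketch: declarative import, available from Terraform 1.5 onwards.
import {
  to = azurerm_resource_provider_registration.example
  id = "/subscriptions/************************/providers/Microsoft.Network"
}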
Step 3:
Run the below commands:
terraform plan
terraform apply -auto-approve

Terraform apply error 'The number of path segments is not divisible by 2' for Azure App Feature Flag

Why am I seeing this error? It's hard to find any answer to this anywhere. I am using the azurerm provider v2.93.0, and I also tried 2.90.0 and 2.56.0 and got the same problem. I was adding configs just fine, but as soon as I tried to configure a Feature Flag, it broke the Terraform project AND I was forced to rebuild and re-init from scratch. Terraform is not able to recover on its own if I remove the config and run plan again.
╷
│ Error: while parsing resource ID: while parsing resource ID:
| The number of path segments is not divisible by 2 in
| "subscriptions/{key}/resourceGroups/my-config-test/providers/Microsoft.AppConfiguration/configurationStores/my-app-configuration/AppConfigurationFeature/.appconfig.featureflag/DEBUG/Label/my-functions-test"
│
│ while parsing resource ID: while parsing resource ID:
| The number of path segments is not divisible by 2 in
│ "subscriptions/{key}/resourceGroups/my-config-test/providers/Microsoft.AppConfiguration/configurationStores/my-app-configuration/AppConfigurationFeature/.appconfig.featureflag/DEBUG/Label/my-functions-test"
╵
╷
│ Error: obtaining auth token for "https://my-app-configuration.azconfig.io": getting authorization token for endpoint https://my-app-configuration.azconfig.io:
| obtaining Authorization Token from the Azure CLI: parsing json result from the Azure CLI: waiting for the Azure CLI: exit status 1: ERROR: The command failed with an unexpected error. Here is the traceback:
│ ERROR: [Errno 2] No such file or directory
WHY is the slash missing from the front of the ID????
And here is the config that breaks it:
resource "azurerm_app_configuration_feature" "my_functions_test_DEBUG" {
configuration_store_id = azurerm_app_configuration.my_app_configuration.id
description = "Debug Flag"
name = "DEBUG"
label = "my-functions-test"
enabled = false
}
When it is healthy, the apply on configs works, and looks like this:
Plan: 4 to add, 0 to change, 0 to destroy.
Do you want to perform these actions in workspace "my-app-config-test"?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
azurerm_resource_group.my_config_rg_test: Creating...
azurerm_resource_group.my_config_rg_test: Creation complete after 0s [id=/subscriptions/{key}/resourceGroups/my-config-test]
Ok, I figured it out. There is a bug: when you create an azurerm_app_configuration_key resource, the key can be something like key = "/application/config.EXSTREAM_DOMAIN". BUT when you create an azurerm_app_configuration_feature, you will HOSE your Terraform state if you set the name field to name = ".appconfig.featureflag/DEBUG". Instead, just set the name field to DEBUG. If you don't, you have to completely reset your Terraform and re-initialize all the resources. I had to learn that the hard way. The error message was not helpful, but could be updated to be helpful in this respect.
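To make the fix concrete, here is the resource from above with the problematic value shown for contrast (a sketch reusing the names from the question):
resource "azurerm_app_configuration_feature" "my_functions_test_DEBUG" {
  configuration_store_id = azurerm_app_configuration.my_app_configuration.id
  description            = "Debug Flag"
  # name = ".appconfig.featureflag/DEBUG"  # breaks the state: the provider prepends this prefix itself
  name    = "DEBUG"                        # correct: the plain feature name only
  label   = "my-functions-test"
  enabled = false
}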

Error prompt when validating the Azure DevOps git configuration for the Azure Data Factory service in the Azure portal

I used Terraform to configure Azure DevOps git in Azure Data Factory, but after deployment, on validating the connection, I get a prompt with an error.
data "azurerm_client_config" "current" {}
resource "azurerm_resource_group" "example" {
name = "testadfrg"
location = "West Europe"
}
resource "azurerm_data_factory" "df" {
name = "testadfadf"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
public_network_enabled = "true"
vsts_configuration {
account_name = "organizationaz440"
branch_name = "Development"
project_name = "TestProject"
repository_name = "DataOps"
root_folder = "/ADF"
tenant_id = data.azurerm_client_config.current.tenant_id
}
}
It seems the user account trying to open ADF Studio to validate the changes doesn't have access to the Git repo. Kindly work with the Git repo admins, sharing the error details; they should be able to give you the relevant permissions.

Nextflow doesn't use the right service account to deploy workflows to kubernetes

We're trying to use nextflow on a k8s namespace other than our default, the namespace we're using is nextflownamespace. We've created our PVC and ensured the default service account has an admin rolebinding. We're getting an error that nextflow can't access the PVC:
"message": "persistentvolumeclaims \"my-nextflow-pvc\" is forbidden:
User \"system:serviceaccount:mynamespace:default\" cannot get resource
\"persistentvolumeclaims\" in API group \"\" in the namespace \"nextflownamespace\"",
In that error we see that system:serviceaccount:mynamespace:default is incorrectly pointing to our default namespace, mynamespace, not nextflownamespace which we created for nextflow use.
We tried adding debug.yaml = true to our nextflow.config but couldn't find the YAML it submits to k8s to validate the error. Our config file looks like this:
profiles {
    standard {
        k8s {
            executor = "k8s"
            namespace = "nextflownamespace"
            cpus = 1
            memory = 1.GB
            debug.yaml = true
        }
        aws {
            endpoint = "https://s3.nautilus.optiputer.net"
        }
    }
}
We did verify that when we changed the namespace to another arbitrary value, the error message used the new arbitrary namespace, but the service account name continued to point to the user's default namespace erroneously.
We've tried every variant of profiles.standard.k8s.serviceAccount = "system:serviceaccount:nextflownamespace:default" that we could think of but didn't get any change with those attempts.
I think it's best to avoid using nested config profiles with Nextflow. I would either remove the 'standard' layer from your profile or just make 'standard' a separate profile:
profiles {
    standard {
        process.executor = 'local'
    }
    k8s {
        executor = "k8s"
        namespace = "nextflownamespace"
        cpus = 1
        memory = 1.GB
        debug.yaml = true
    }
    aws {
        endpoint = "https://s3.nautilus.optiputer.net"
    }
}

Terraform - AWS - API Gateway dependency conundrum

I am trying to provision some AWS resources, specifically an API Gateway which is connected to a Lambda. I am using Terraform v0.8.8.
I have a module which provisions the Lambda and returns the lambda function ARN as an output, which I then provide as a parameter to the following API Gateway provisioning code (which is based on the example in the TF docs):
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
}
# Variables
variable "myregion" { default = "eu-west-2" }
variable "accountId" { default = "" }
variable "lambdaArn" { default = "" }
variable "stageName" { default = "lab" }
# API Gateway
resource "aws_api_gateway_rest_api" "api" {
name = "myapi"
}
resource "aws_api_gateway_method" "method" {
rest_api_id = "${aws_api_gateway_rest_api.api.id}"
resource_id = "${aws_api_gateway_rest_api.api.root_resource_id}"
http_method = "GET"
authorization = "NONE"
}
resource "aws_api_gateway_integration" "integration" {
rest_api_id = "${aws_api_gateway_rest_api.api.id}"
resource_id = "${aws_api_gateway_rest_api.api.root_resource_id}"
http_method = "${aws_api_gateway_method.method.http_method}"
integration_http_method = "POST"
type = "AWS"
uri = "arn:aws:apigateway:${var.myregion}:lambda:path/2015-03-31/functions/${var.lambdaArn}/invocations"
}
# Lambda
resource "aws_lambda_permission" "apigw_lambda" {
statement_id = "AllowExecutionFromAPIGateway"
action = "lambda:InvokeFunction"
function_name = "${var.lambdaArn}"
principal = "apigateway.amazonaws.com"
source_arn = "arn:aws:execute-api:${var.myregion}:${var.accountId}:${aws_api_gateway_rest_api.api.id}/*/${aws_api_gateway_method.method.http_method}/resourcepath/subresourcepath"
}
resource "aws_api_gateway_deployment" "deployment" {
rest_api_id = "${aws_api_gateway_rest_api.api.id}"
stage_name = "${var.stageName}"
}
When I run the above from scratch (i.e. when none of the resources exist) I get the following error:
Error applying plan:
1 error(s) occurred:
* aws_api_gateway_deployment.deployment: Error creating API Gateway Deployment: BadRequestException: No integration defined for method
status code: 400, request id: 15604135-03f5-11e7-8321-f5a75dc2b0a3
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
If I perform a second terraform apply it consistently succeeds, but every time I destroy I then receive the above error upon the first apply.
This caused me to wonder if there's a dependency that I need to explicitly declare somewhere; I discovered #7486, which describes a similar pattern (although relating to an aws_api_gateway_integration_response rather than an aws_api_gateway_deployment). I tried manually adding an explicit dependency from the aws_api_gateway_deployment to the aws_api_gateway_integration, but this had no effect.
Grateful for any thoughts, including whether this may indeed be a TF bug in which case I will raise it in the issue tracker. I thought I'd check with the community before doing so in case I'm missing something obvious.
Many thanks,
Edd
P.S. I've asked this question on the Terraform user group, but that gets very little in the way of responses; I'm yet to figure out the cause of the issue, hence now asking here.
You are right about the explicit dependency declaration.
Normally Terraform is able to figure out the relationships and schedule create/update/delete operations accordingly; this is mostly possible because of the interpolation mechanisms under the hood (${resource_type.ref_name.attribute}). You can display the relationships affecting this in a graph via terraform graph.
Unfortunately, in this specific case there's no direct relationship between API Gateway Deployments and Integrations - meaning the API for managing API Gateway resources doesn't require you to reference an integration ID or anything like that to create a deployment, and the aws_api_gateway_deployment resource in turn doesn't require it either.
The documentation for aws_api_gateway_deployment does mention this caveat at the top of the page. Admittedly the Deployment requires not only the method to exist, but the integration too.
Here's how you can modify your code to get around it:
resource "aws_api_gateway_deployment" "deployment" {
rest_api_id = "${aws_api_gateway_rest_api.api.id}"
stage_name = "${var.stageName}"
depends_on = ["aws_api_gateway_method.method", "aws_api_gateway_integration.integration"]
}
Theoretically the "aws_api_gateway_method.method" is redundant since the integration already references the method in the config:
http_method = "${aws_api_gateway_method.method.http_method}"
so it will be scheduled for creation/update prior to the integration either way, but if you were to change that to something like
http_method = "GET"
then it would become necessary.
I have submitted a PR to update the docs accordingly.