Terraform backend cannot connect to storage account - azure-devops

This is my terraform.tf:
terraform {
  backend "azurerm" {
    resource_group_name  = "tstate"
    storage_account_name = "strorageaccount1"
    container_name       = "terraform.tfstate"
    access_key           = "asdfg45454..."
  }
}
This fails when my storage account firewall is not set to "All networks"; my storage account's network settings are restricted to selected networks. Whether the blob storage is private or public makes no difference, so that is not the problem. But "All networks" must be enabled for it to work. How can I make it work with "All networks" disabled? The error I get is as follows:
Error: Failed to get existing workspaces: storage: service returned
error: StatusCode=403, ErrorCode=AuthorizationFailure,
ErrorMessage=This request is not authorized to perform this operation.
No IP or VNet should be needed, since the default Azure-hosted agent is running the DevOps pipeline. And the SPN has Owner access on the subscription. What am I missing?

Well, you have explicitly forbidden almost any service (or server) from accessing your storage account, with the exception of "trusted Microsoft services". However, your Azure DevOps build agent does not fall under that category.
So, you need to whitelist your build agent first. There are two ways you can do this:
1. Use a self-hosted agent that runs inside a VNet, then allow access from that VNet in the firewall rules of your storage account.
2. If you want to stick with Microsoft-hosted build agents: run an Azure CLI or Azure PowerShell script first that fetches the public IP of your build agent (e.g. from https://api.ipify.org) and adds it to your firewall rules. After Terraform has finished, have another script remove that IP exception again. A sketch of this is shown below.
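A minimal PowerShell sketch of the second option, reusing the resource group and storage account names from the question; the network-rule subcommands are standard Azure CLI:

# Fetch the public IP of the hosted build agent (ipify is one of several such services)
$agentIp = Invoke-RestMethod -Uri "https://api.ipify.org?format=json" | Select-Object -ExpandProperty ip

# Temporarily allow that IP on the storage account firewall
az storage account network-rule add --resource-group tstate --account-name strorageaccount1 --ip-address $agentIp

# Give the rule a moment to propagate before Terraform runs
Start-Sleep -Seconds 30

# ... run terraform init / plan / apply here ...

# Remove the exception again once Terraform has finished
az storage account network-rule remove --resource-group tstate --account-name strorageaccount1 --ip-address $agentIp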

Related

GitHub CI/CD cannot deploy to Azure SQL as it cannot add firewall rule due to "deny public network access" being set to Yes

I have an Azure SQL server to which I wish to deploy my database via dacpac using GitHub CI/CD. I am using the Azure SQL Deploy action together with a service principal for the Azure Login action.
Due to policy restrictions on the client side, "Deny Public Network Access" is always enabled; therefore, even though the service principal login works, the GitHub action is unable to add the IP address to the firewall rules.
We are using self-hosted GitHub runners. Is there any workaround for deploying the database via CI/CD under such circumstances, where we cannot add a firewall rule to whitelist the agent/runner IP address?
The solution was to do away with the Azure Login action and instead add the self-hosted runner's virtual network to the Azure SQL firewall settings:
The Azure Login action attempts to add the runner's IP address to the Azure SQL firewall, so this action must not be used. I removed it and relied on the second step to access the Azure SQL Database instead.
The Azure SQL Deploy action requires either the login step above or "Allow access to Azure services" to be turned on, neither of which was an option for me. I decided to go with an action that runs sqlpackage.exe instead: https://learn.microsoft.com/en-us/sql/tools/sqlpackage/sqlpackage-publish?view=sql-server-ver15
I am using self-hosted runners, which are hosted within a virtual network configured in Azure. The point I had missed, however, was to add that virtual network to the firewall settings of the Azure SQL server. Once I did that, I no longer needed an explicit login or any whitelisting of runner IP addresses.
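For illustration, a minimal sqlpackage publish invocation might look like the following; the dacpac path, server, database, and credential variables are placeholders, not values from the original setup:

# Publish the dacpac directly; no firewall rule is needed because the
# runner's virtual network is already allowed on the SQL server.
sqlpackage.exe /Action:Publish `
  /SourceFile:"artifacts/MyDatabase.dacpac" `
  /TargetServerName:"myserver.database.windows.net" `
  /TargetDatabaseName:"MyDatabase" `
  /TargetUser:"$env:SQL_USER" `
  /TargetPassword:"$env:SQL_PASSWORD"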

How to Manage IBM Cloud Key-Protect Instance from CLI when Private Network Only Policy is Applied?

While doing some testing of the IBM Cloud Security and Compliance items, specifically the CIS Benchmarks for Best Practices, one item I was non-compliant on was in Key Protect, for the goal "Check whether Key Protect is accessible only by using private endpoints".
My Key Protect instance was indeed set to "Public and Private", so I changed it to "Private". This change now requires me to manage my Key Protect instance from the CLI.
When I try to even look at my Key Protect instance policies from the CLI, I receive the following error:
ibmcloud kp instance -i my_instance_id policies
Retrieving policy details for instance: my_instance_id...
Error while getting instance policy: kp.Error: correlation_id='cc54f61d-4424-4c72-91aa-d2f6bc20be68', msg='Unauthorized: The user does not have access to the specified resource'
FAILED
Unauthorized: The user does not have access to the specified resource
Correlation-ID:cc54f61d-4424-4c72-91aa-d2f6bc20be68
I'm confused - I am running the CLI logged in as the tenant admin with an access policy of "All resources in account (including future IAM enabled services)".
What am I doing wrong here?
Private endpoints are only accessible from within IBM Cloud. If you connect from the public internet, access is blocked.
There are multiple ways to work with such a policy in place. One is to deploy (a VPC with) a virtual machine on a private network, then connect to it with a VPN or Direct Link. That way, your resources are not accessible from the public internet, but only through private connectivity. You can continue to use the IBM Cloud CLI, but set it to use private endpoints.
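For example, from a machine with private connectivity you can log the CLI in against IBM Cloud's private API endpoint instead of the public one. This is only a sketch; the private login endpoint is documented by IBM, but check the Key Protect plugin documentation for any additional private-endpoint configuration it may need:

# Log in via the IBM Cloud private endpoint (only reachable over VPC/VPN/Direct Link)
ibmcloud login -a https://private.cloud.ibm.com --apikey $env:IBMCLOUD_API_KEY

# Then retry the Key Protect command from the question
ibmcloud kp instance -i my_instance_id policies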

Accessing Amazon RDS Postgresql from Azure DevOps Hosted Agent

How can I allow an Azure DevOps hosted agent to access my Amazon RDS PostgreSQL instance without opening the security group to "Anywhere"? I was looking for an IP range or something similar to whitelist the Azure DevOps agents, but I can't find it.
In Azure, I can check a box to grant all "Azure DevOps Services" access to my Azure SQL Database, but of course that option is not present in AWS.
I don't think we can access Amazon RDS for PostgreSQL directly from an Azure DevOps hosted agent, that is, using the hosted service account.
However, Amazon RDS for PostgreSQL supports user authentication with Kerberos and Microsoft Active Directory, so we can try writing a script that accesses it with specific credentials, then run the script in the pipeline by adding the corresponding tasks (e.g. AWS CLI or AWS PowerShell).
Also check How do I allow users to connect to Amazon RDS with IAM credentials?
For the IP ranges, please refer to Allowed address lists and network connections and Microsoft-hosted Agents for details.
The IP ranges used by the hosted agents are linked there. I have not had much success using them for hosted agents; the list is big, and the documentation is not really clear about which types of services you need to whitelist.
I would go with whitelisting the hosted agent IP just-in-time during the pipeline run, then removing it as a final step. First, grab the IP of the hosted agent:
$hostedIPAddress = Invoke-RestMethod http://ipinfo.io/json | Select -exp ip
Then you can use the AWS CLI or the AWS PowerShell module to add that specific IP. The Azure DevOps AWS Tools task includes the CLI.
Do the needed work against the DB, then make sure you clean up the rule / temporary security group at the end. A sketch of the whole flow:
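The following reuses $hostedIPAddress from above; the security group ID is a placeholder, and 5432 is assumed as the PostgreSQL port:

# Temporarily allow the agent to reach PostgreSQL (sg-0123456789abcdef0 is a placeholder)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 5432 --cidr "$hostedIPAddress/32"

# ... run the database work here ...

# Revoke the temporary rule again as a final step
aws ec2 revoke-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 5432 --cidr "$hostedIPAddress/32"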

How can I deploy content to a static website in Azure Storage that has IP restrictions enabled?

I'm getting an error in my release pipeline (Azure DevOps) when I deploy content to a static website in Azure Storage with IP restrictions enabled.
Error parsing destination location "https://MYSITE.blob.core.windows.net/$web": Failed to validate destination. The remote server returned an error: (403) Forbidden.
The release was working fine until I added IP restrictions to the storage account to keep the content private. Today we use IP restrictions to control access; soon we will remove them in favor of a VPN and VNets. My expectation, however, is that I will have the same problem.
My assumption is that Azure DevOps cannot access the storage account because it is not whitelisted in the IP address list. My release pipeline uses the AzureBlob File Copy task.
steps:
- task: AzureFileCopy@2
  displayName: 'AzureBlob File Copy'
  inputs:
    SourcePath: '$(System.DefaultWorkingDirectory)/_XXXXX/_site'
    azureSubscription: 'XXXX'
    Destination: AzureBlob
    storage: XXXX
    ContainerName: '$web'
I have already enabled "trusted Microsoft services", but that doesn't help.
Whitelisting the IP addresses for Azure DevOps is not a good option, because there are tons of them and they change regularly.
I've seen suggestions to remove the IP restrictions and re-enable them after the publish step. This is risky: if something were to fail after the IP restrictions are removed, my site would be publicly accessible.
I'm hoping someone has other ideas! Thanks.
You can add a step to whitelist the agent IP address, then remove it from the whitelist at the end of the deployment. You can get the IP address by making a REST call to something like ipify.
I have done that for similar scenarios and it works well.
I would recommend a different approach: running an Azure DevOps agent with a static IP and/or inside a private VNet.
Why I consider this a better choice:
- audit logs will be filled with additions and removals of rules, making analysis harder in case of an attack
- the Azure connection must be more powerful than needed, specifically able to change rules in security groups, firewalls, application gateways and the like, while it only needs deploy permissions
- it opens traffic from the outside, even if only temporarily, while a private agent always initiates from the inside
No solution is perfect, so it is important to choose the best one for your specific scenario.
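If you go the private-agent route, allowing the agent's subnet on the storage account is a one-time setup instead of a per-run firewall change. A sketch, where the resource group, VNet, and subnet names are placeholders (the subnet also needs the Microsoft.Storage service endpoint enabled):

# One-time: allow the subnet hosting the self-hosted agents on the storage account
az storage account network-rule add --resource-group my-rg --account-name MYSITE --vnet-name agents-vnet --subnet agents-subnet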

Deployed static website to Azure via terraform - but the blade is inaccessible with permission error?

I've got a very basic Terraform deployment going through Azure DevOps, which defines a storage account and a static website. However, when I go into the Azure Portal, the static website blade gives me "Access Denied. You do not have access". All other aspects of the storage account are accessible, though, so it doesn't appear to be a general permissions issue.
Terraform doesn't support this configuration in the AzureRM provider, so I'm using the local-exec pattern to configure the static website.
Running in DevOps, my Terraform uses a service connection and runs as a service user. However, I've also tried destroying the storage account and re-running the Terraform as my own user; this doesn't make any difference.
I've tried adding myself to the IAM on the storage account; that also doesn't make any difference.
The definition for the storage account is:
resource "azurerm_storage_account" "website-storage" {
  name                     = "website"
  resource_group_name      = "${azurerm_resource_group.prod.name}"
  location                 = var.region
  account_kind             = "StorageV2"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  provisioner "local-exec" {
    # https://github.com/terraform-providers/terraform-provider-azurerm/issues/1903
    command = "az storage blob service-properties update --account-name ${self.name} --static-website --index-document index.html --404-document 404.html"
  }
}
I'm expecting to be able to get to the static website blade within the Portal - is there some reason why this wouldn't work?
I don't yet have an explanation for why this happened. I had earlier tried removing the storage account and re-creating it, but I removed it via the portal. This evening I tried renaming the resource in Terraform, which forced it to be destroyed and recreated, and that works.
I had previously messed about with a StorageV1 resource / container / blob definition of the same name; potentially there was something "invisible" in Azure that was causing this oddness...
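For what it's worth, the rename works because it changes the resource address, which makes Terraform destroy the old object and create a new one. The same replacement can be forced without a rename, e.g. with taint (newer Terraform versions prefer terraform apply -replace=...):

# Force destroy-and-recreate of the storage account on the next apply
terraform taint azurerm_storage_account.website-storage
terraform apply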