How can I deploy content to a static website in Azure Storage that has IP restrictions enabled? - azure-devops

I'm getting an error in my release pipeline (Azure DevOps) when I deploy content to a static website in Azure Storage with IP restrictions enabled.
Error parsing destination location "https://MYSITE.blob.core.windows.net/$web": Failed to validate destination. The remote server returned an error: (403) Forbidden.
The release was working fine until I added IP restrictions to the storage account to keep the content private. Today we use IP restrictions to control access; soon we will replace them with a VPN and VNets. However, I expect I will have the same problem either way.
My assumption is that Azure DevOps cannot access the storage account because it is not whitelisted in the IP Address list. My release pipeline uses the AzureBlob File Copy task.
steps:
- task: AzureFileCopy@2
  displayName: 'AzureBlob File Copy'
  inputs:
    SourcePath: '$(System.DefaultWorkingDirectory)/_XXXXX/_site'
    azureSubscription: 'XXXX'
    Destination: AzureBlob
    storage: XXXX
    ContainerName: '$web'
I have already enabled "trusted Microsoft services", but that doesn't help.
Whitelisting the IP Addresses for Azure DevOps is not a good option because there are TONS of them and they change regularly.
I've seen suggestions to remove the IP restrictions and re-enable them after the publish step. This is risky because if something were to fail after the IP restrictions are removed, my site would be publicly accessible.
I'm hoping someone has other ideas! Thanks.

You can add a step to whitelist the agent IP address, then remove it from the whitelist at the end of the deployment. You can get the IP address by making a REST call to something like ipify.
I have done that for similar scenarios and it works well.
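A minimal sketch of this approach as pipeline steps, assuming the AzureFileCopy setup from the question and hypothetical names (`my-rg` for the resource group, `mystorageacct` for the storage account); the service connection needs permission to change the storage account's network rules:

```yaml
steps:
# Look up the agent's public IP (api.ipify.org returns it as plain text)
# and add it to the storage account firewall.
- task: AzureCLI@2
  displayName: 'Whitelist agent IP'
  inputs:
    azureSubscription: 'XXXX'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      agentIp=$(curl -s https://api.ipify.org)
      echo "##vso[task.setvariable variable=agentIp]$agentIp"
      az storage account network-rule add \
        --resource-group my-rg --account-name mystorageacct --ip-address "$agentIp"
      sleep 30   # rule changes can take a short while to propagate

- task: AzureFileCopy@2
  displayName: 'AzureBlob File Copy'
  inputs:
    SourcePath: '$(System.DefaultWorkingDirectory)/_XXXXX/_site'
    azureSubscription: 'XXXX'
    Destination: AzureBlob
    storage: mystorageacct
    ContainerName: '$web'

# condition: always() removes the rule even if the copy step fails.
- task: AzureCLI@2
  displayName: 'Remove agent IP from whitelist'
  condition: always()
  inputs:
    azureSubscription: 'XXXX'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az storage account network-rule remove \
        --resource-group my-rg --account-name mystorageacct --ip-address "$(agentIp)"
```

Because only the single agent IP is added and the cleanup step runs even on failure, this avoids the risk of the whole site being left publicly accessible.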

I would recommend a different approach: running an Azure DevOps agent with a static IP and/or inside the private VNet.
Why I consider this a better choice:
audit logs fill up with rule additions and removals, making analysis harder in the event of an attack
the Azure service connection must be granted more permissions than deployment needs, specifically the right to change rules in Network Security Groups, Firewalls, Application Gateways, and so on, when it only needs deploy permissions
it opens traffic from the outside, even if only temporarily, whereas a private agent always initiates connections from the inside
No solution is perfect, so it is important to choose the best one for your specific scenario.
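For illustration, pointing the pipeline at a self-hosted pool is a small change (`PrivateVNetPool` is a hypothetical pool name you would register the agent into):

```yaml
# Run on a self-hosted agent inside the private VNet instead of a
# Microsoft-hosted agent; no firewall changes are needed at deploy time.
pool:
  name: 'PrivateVNetPool'

steps:
- task: AzureFileCopy@2
  displayName: 'AzureBlob File Copy'
  inputs:
    SourcePath: '$(System.DefaultWorkingDirectory)/_XXXXX/_site'
    azureSubscription: 'XXXX'
    Destination: AzureBlob
    storage: XXXX
    ContainerName: '$web'
```

The storage account then only needs a network rule allowing the agent's VNet/subnet, which also fits the planned move to VPN and VNets.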

Related

Whitelisting Azure DevOps Pipeline

I have a server in AWS hosting a security tool. Azure DevOps supports this tool and I've installed the add-on for it. I've added the step to my pipeline and configured the service connection.
We are using Hosted Agents in a Cloud AZD instance.
When I run my pipeline, I get the following error:
##[error][TOOL] API GET '/api/server/version' failed, error was: {"errno":"ETIMEDOUT","code":"ETIMEDOUT","syscall":"connect","address":"1.1.1.1","port":443}
In my AWS security group, I have allowed the inbound IPs for Azure DevOps listed here: https://learn.microsoft.com/en-us/azure/devops/organizations/security/allow-list-ip-url?view=azure-devops&tabs=IP-V4#ip-addresses-and-range-restrictions
I have also allowed the geographical IPs listed in the JSON file here: https://www.microsoft.com/en-us/download/details.aspx?id=56519
If I allow all traffic for 443 through the security group as a test, this works as expected. This is not a solution however as this is a security tool and should not be public.
In my pipeline, I added a task that runs a curl command to inspect the pipeline's outbound IPs. Neither of the following ranges appears in any published list I can find:
51.142.72.0/24
51.142.229.0/24
I was advised to post here by AzureDevOps on Twitter for some help, so hopefully someone can assist me here.

GitHub CI/CD cannot deploy to Azure SQL as it cannot add firewall rule due to "deny public network access" being set to Yes

I have an Azure SQL server to which I wish to deploy my database via dacpac using GitHub CI/CD. I am using the Azure SQL Deploy action, with a service principal for the Azure Login action.
Due to policy restrictions on the client side, the "Deny Public Network Access" is always enabled and therefore while deploying even though the service principal login works, the GitHub action is unable to add the IP Address to the firewall rule.
We are using Self-Hosted GitHub runners. Is there any workaround to deploying the database via CI/CD under such circumstances where we cannot add the firewall rule to whitelist the agent/runners IP Address?
The solution was to do away with the Azure Login action and instead add the self-hosted runner's virtual network to the Azure SQL firewall settings:
The Azure Login action attempts to add the runner's IP address to the Azure SQL firewall, so this action must not be used. I removed it and relied on the second step for seamlessly accessing Azure SQL Database instead.
The Azure SQL Deploy action requires either the login step above or "Allow access to Azure services" to be turned on. Neither was an option for me, so I decided to go with an action that runs sqlpackage.exe: https://learn.microsoft.com/en-us/sql/tools/sqlpackage/sqlpackage-publish?view=sql-server-ver15
I am using self-hosted runners hosted within a virtual network configured in Azure. The point I had missed was to add that virtual network to the firewall settings in Azure SQL. Once I did that, I did not need an explicit login or to whitelist runner IP addresses.
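A sketch of what such a workflow can look like (server, database, dacpac path, and secret names are placeholders; it assumes sqlpackage is installed on the self-hosted runner, whose VNet is already allowed in the Azure SQL firewall):

```yaml
# Hypothetical GitHub Actions workflow: publish the dacpac directly with
# sqlpackage, with no Azure Login step and no firewall-rule changes.
name: deploy-database
on: [push]
jobs:
  deploy:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v3
      - name: Publish dacpac with sqlpackage
        run: |
          sqlpackage /Action:Publish \
            /SourceFile:"./db/MyDatabase.dacpac" \
            /TargetServerName:"myserver.database.windows.net" \
            /TargetDatabaseName:"MyDatabase" \
            /TargetUser:"${{ secrets.SQL_USER }}" \
            /TargetPassword:"${{ secrets.SQL_PASSWORD }}"
```

SQL authentication is shown for simplicity; the same sqlpackage call can use Azure AD authentication if the runner has a managed identity.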

Terraform backend cannot connect to storage account

This is from my terraform.tf:
terraform {
  backend "azurerm" {
    resource_group_name  = "tstate"
    storage_account_name = "strorageaccount1"
    container_name       = "terraform.tfstate"
    access_key           = "asdfg45454..."
  }
}
This fails when my storage account is not set to "All networks". My storage account network settings are given below. Whether blob access is private or public, it works, so no problem there. But "All networks" must be enabled for it to work. How can I make it work with "All networks" disabled? The error I get is as follows:
Error: Failed to get existing workspaces: storage: service returned error: StatusCode=403, ErrorCode=AuthorizationFailure, ErrorMessage=This request is not authorized to perform this operation.
No IP or VNet is needed, as the default Azure-hosted agent is running the DevOps pipeline. And the SPN has Owner access on the subscription. What am I missing?
Well, you explicitly forbid almost any service (or server) from accessing your storage account, with the exception of "trusted Microsoft services". However, your Azure DevOps build agent does not fall under that category.
So, you need to whitelist your build agent first. There are two ways you can do this:
Use a self-hosted agent that you run inside a VNet, then allow access from that VNet in the firewall rules of your storage account.
If you want to stick with managed build agents: first run an Azure CLI or Azure PowerShell script that fetches the public IP of your build agent (https://api.ipify.org) and adds it to your firewall. After Terraform finishes, have another script remove that IP exception again.
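A sketch of that second option as pipeline steps (resource names are taken from the backend block in the question, with `tstate` assumed to be the resource group; the service connection must be allowed to change storage network rules):

```yaml
steps:
- task: AzureCLI@2
  displayName: 'Open storage firewall for agent'
  inputs:
    azureSubscription: 'XXXX'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      ip=$(curl -s https://api.ipify.org)
      echo "##vso[task.setvariable variable=agentIp]$ip"
      az storage account network-rule add \
        --resource-group tstate --account-name strorageaccount1 --ip-address "$ip"
      sleep 30   # allow the rule to propagate before Terraform runs

- script: |
    terraform init
    terraform apply -auto-approve
  displayName: 'Terraform'

# Runs even if the Terraform step fails, so the exception never lingers.
- task: AzureCLI@2
  displayName: 'Close storage firewall'
  condition: always()
  inputs:
    azureSubscription: 'XXXX'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az storage account network-rule remove \
        --resource-group tstate --account-name strorageaccount1 --ip-address "$(agentIp)"
```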

Trying to run VSTS agent through a proxy which limits sites

I have installed a VSTS agent in a very locked-down environment. It makes a connection to VSTS and gets the job, but fails when downloading the artefact, giving this error:
Error: in getBuild, so retrying => retries pending : 4.
It retries 4 times and then fails.
The agent goes through a proxy. I have set up the proxy using ./config --proxyurl and also set the HTTP_PROXY and HTTPS_PROXY system environment variables.
The proxy is very limiting in that URLs are locked down; no authentication is required. Does anybody know what URLs the agent accesses? I'm hoping that a definitive list will solve the issue. If anybody knows how to get such a list, that would be great. Or maybe I have misconfigured something?
Any ideas?
Trying to run VSTS agent through a proxy which limits sites
According to the document
I'm running a firewall and my code is in Azure Repos. What URLs does the agent need to communicate with?:
To ensure your organization works with any existing firewall or IP restrictions, ensure that dev.azure.com and *.dev.azure.com are open, and update your allow-listed IPs to include the following IP addresses, based on your IP version. If you're currently allow-listing the 13.107.6.183 and 13.107.9.183 IP addresses, leave them in place, as you don't need to remove them.
With just the organization's name or ID, you can get its base URL using the global Resource Areas REST API (https://dev.azure.com/_apis/resourceAreas). This API doesn't require authentication and provides the location (URL) of the organization as well as the base URLs for REST APIs, which can live on different domains.
Please check this document Best practices for working with URLs in Azure DevOps extensions and integrations for some more details.
Hope this helps.
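For reference, a sketch of setting the agent proxy and checking connectivity to one of those endpoints through it (the proxy URL is a placeholder):

```shell
# Re-configure the agent to use the proxy (run from the agent directory;
# on Windows use config.cmd instead of config.sh).
./config.sh --proxyurl http://proxy.example.com:8080

# Quick check that the proxy lets a key endpoint through:
curl -x http://proxy.example.com:8080 https://dev.azure.com/_apis/resourceAreas
```

If the curl call times out, the proxy is still blocking the domain and its allow list needs to be extended.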

VSTS What User Account does Hosted Agent run under?

We are trying to set up Release (continuous deployment) from our VSTS in the cloud. After the build is done, the Hosted Agent VS2017 tries to deploy the artifacts to the target server.
Firstly, it failed because our firewall blocked the target server from receiving the artifact (a .zip containing all the stuff). In fact, if I connect to the server via RDP and try to download the artifact from a browser, it's blocked.
Our security team temporarily disabled this firewall rule and it worked (this also means the hosted agent has line of sight to the target server). Now they don't want this rule off; they would like to know what user account tries to download/publish the artifact from the hosted agent, so they can allow the download of the .zip only for that specific user. I'm not sure if it's the account that runs the service on the hosted agent, Network Service (i.e. the target server's own credentials), or some other account.
How do I know what user account should be granted rights in our firewall to download anything?
Use the Windows Machine File Copy task; you can provide a username/password to use for copying the files.
However, it uses RoboCopy over SMB to copy the files. As a result, it's probably safer to set up a private agent within your network that has line of sight to the target servers, rather than opening up a whole slew of ports in your firewall.
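A sketch of that task in YAML form (the machine name, target path, and credential variables are placeholders):

```yaml
- task: WindowsMachineFileCopy@2
  displayName: 'Copy artifacts to target server'
  inputs:
    SourcePath: '$(System.DefaultWorkingDirectory)/drop'
    MachineNames: 'targetserver.contoso.local'
    AdminUserName: '$(deployUser)'
    AdminPassword: '$(deployPassword)'
    TargetPath: 'C:\deploy'
```

Store the password as a secret pipeline variable; since the copy runs over SMB (port 445), this is the port the firewall would have to allow, which is why a private agent is usually the safer choice.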