google-cloud-dns Secondary DNS support

I am trying to determine whether Google Cloud DNS supports acting as a secondary DNS server (AXFR/IXFR zone transfer triggered by NOTIFY requests). I cannot find anything online, and Google does not explicitly state that it is not supported.

Google Cloud DNS currently does not support running as a secondary DNS server (with TSIG signed AXFR/IXFR on receipt of NOTIFY messages).
Nor does it answer incoming AXFR/IXFR requests or send NOTIFY messages itself, in case that's what you meant.
If you are interested in secondary DNS because you want to spread your DNS authority across multiple providers, you could try the alternative "multiple master" or "split authority" configuration, in which two (or more) independent DNS services are kept in synchronization from another source (a rough sketch of that idea appears at the end of this answer).
There are several DNS-specific tools to synchronize multiple DNS services, from Netflix’s Denominator, StackExchange’s DNSControl, and GitHub’s OctoDNS to Men & Mice’s commercial xDNS.
You can also use HashiCorp’s Terraform to manage multiple DNS providers as well as many other cloud resources.
All of these support many different DNS providers and DNS name server software such as BIND. The support for specific record types and features varies by provider (and tool). DNSControl has a useful feature matrix showing support for specific features.
The following list of DNS providers and server software shows which tools supported each as of November 2017:
AWS Route 53: Denominator, DNSControl, OctoDNS, Terraform
Azure: OctoDNS, Terraform
BIND: DNSControl, Terraform (RFC 2136)
CloudFlare: DNSControl, OctoDNS, Terraform
Digitalocean: DNSControl, Terraform
DNSimple: DNSControl, OctoDNS, Terraform
DnsMadeEasy: Terraform
Dyn: Denominator, OctoDNS, Terraform
Gandi: DNSControl
Google Cloud DNS: DNSControl, OctoDNS, Terraform
Knot: Terraform (RFC 2136)
Microsoft Active Directory: DNSControl, OctoDNS
Namecheap: DNSControl
Name.com: DNSControl
NS1: DNSControl, OctoDNS, Terraform
OpenStack Designate: Denominator
OVH: OctoDNS
PowerDNS: OctoDNS, Terraform
Rackspace Cloud DNS: Denominator
SoftLayer: DNSControl
UltraDNS: Denominator, Terraform
Vultr: DNSControl
Terraform can use RFC 2136 DNS Update to make changes to existing zones, but not to provision entirely new ones.
If you need support for another DNS provider, there are GitHub repositories for all the open source tools. Denominator is written in Java, OctoDNS is written in Python, and DNSControl and Terraform are written in Go.
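As a rough illustration of the "kept in synchronization from another source" idea (not a substitute for any of the tools above), here is a minimal Python sketch that copies ordinary record sets from a Route 53 zone into a Cloud DNS zone using boto3 and the google-cloud-dns client. The zone ID, zone name, and project below are placeholders, and a real tool would also diff and delete records instead of blindly adding them.

import boto3
from google.cloud import dns

# Hypothetical identifiers; replace with your own zone IDs and project.
ROUTE53_ZONE_ID = "Z123EXAMPLE"
GCP_PROJECT = "my-project"
GCP_ZONE_NAME = "example-zone"
GCP_ZONE_DNS_NAME = "example.com."

r53 = boto3.client("route53")
gcp = dns.Client(project=GCP_PROJECT)
zone = gcp.zone(GCP_ZONE_NAME, GCP_ZONE_DNS_NAME)
changes = zone.changes()

# Copy plain record sets; skip the apex NS/SOA and Route 53 alias records,
# which have no literal resource records to copy.
resp = r53.list_resource_record_sets(HostedZoneId=ROUTE53_ZONE_ID)
for rrset in resp["ResourceRecordSets"]:
    if rrset["Type"] in ("NS", "SOA") or "AliasTarget" in rrset:
        continue
    values = [rr["Value"] for rr in rrset["ResourceRecords"]]
    changes.add_record_set(
        zone.resource_record_set(rrset["Name"], rrset["Type"], rrset["TTL"], values)
    )

changes.create()  # submit the change set to Cloud DNS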

Related

Does every Virtual Machine that Github spins up for a workflow run get a new IP address?

The way GitHub Actions works is that it spins up a VM for every workflow run, so every run takes place on a different VM. Virtual machines generally get a different IP address whenever they are spun up. I can, however, find no official documentation that clarifies whether this is the case with GitHub Actions runner VMs.
Update 2022:
As noted in Krzysztof Madej's answer, GitHub now (Sept. 2022) proposes:
GitHub Actions Larger runners – Are now in public beta
That includes (for Team and Enterprise GitHub Action users only):
Fixed IP ranges to provide access to runners via allow list services.
So that would not apply to the standard github.com-hosted runners.
2021:
This thread mentions (in 2019, so that might have changed since then):
Windows and Ubuntu hosted runners are hosted in Azure and have the same IP address ranges as Azure Data centers.
Currently, all hosted runners are in the East US 2 Azure region, but more regions may be added over time.
Microsoft updates the Azure IP address ranges weekly in a JSON file that you can download from the Azure IP Ranges and Service Tags - Public Cloud website. You can use this range of IP addresses if you require an allow-list to prevent unauthorized access to your internal resources.
So there should be a new address within a range of IPs.
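For example, once you have downloaded that weekly JSON file, a short Python sketch can pull out the East US 2 prefixes mentioned above. The file name below is a placeholder (it changes every week), and the schema may evolve, so treat this as a sketch only.

import json

# Assumes the weekly "Azure IP Ranges and Service Tags - Public Cloud" JSON
# has already been downloaded; the actual file name changes every week.
with open("ServiceTags_Public.json") as f:
    tags = json.load(f)

# Collect the address prefixes published for the East US 2 region.
prefixes = [
    prefix
    for entry in tags["values"]
    if entry["name"] == "AzureCloud.eastus2"
    for prefix in entry["properties"]["addressPrefixes"]
]
print(len(prefixes), "prefixes, e.g.", prefixes[:3])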
It references: "Specifications for GitHub-hosted runners", which mentions:
Note: If you use an IP address allow list for your GitHub organization or enterprise account, you cannot use GitHub-hosted runners and must instead use self-hosted runners.
For more information, see "About self-hosted runners."
(Specifically, the IP address section)
You can assign fixed IP address for your runners:
Fixed IP ranges
Setup a fixed IP range for your machines by simply ticking a check box, this provides an IP range that can be allow listed in internal systems and in GitHub’s allow list to keep using Actions while making your GitHub environment more secure.
This is in beta, but it helps to whitelist it on your firewall.

Can I create a lambda function in CloudFormation that runs an aws cli command that updates nameservers for the registered domain?

I registered a domain, example.com, in Route 53. I then created a CloudFormation stack that creates a hosted zone called example53, an A record for example.com that routes traffic to my ALB, and an ACM resource that should validate the example.com domain.
The problem is that the domain will never be validated if the name servers are wrong, so before the ACM resource is created I need to update the name servers of the domain registered in Route 53 to the name servers listed in the NS record of my hosted zone.
There is no CloudFormation resource for manipulating the registered domain, but there is an AWS CLI command that can change the name servers for the domain. Is there a way I can run that AWS CLI command from a Lambda resource created in CloudFormation?
I run the stack with a Makefile; can a Makefile run the AWS CLI command and handle conditions such as only running when the hosted zone is first created?
You can create a custom resource in CloudFormation. The resource would be backed by a Lambda function, and the function would use an AWS SDK, e.g. boto3, to perform the actions on Route 53 resources that you require.
Since the function is developed by you, not provided by AWS, the custom resource can do whatever you want it to do. It's not limited by regular CloudFormation shortcomings.
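A minimal sketch of what that Lambda-backed custom resource could look like in Python with boto3 follows. The property names are hypothetical, and cfnresponse is the helper module CloudFormation provides to inline (ZipFile) Lambda functions for reporting success or failure back to the stack.

import boto3
import cfnresponse  # provided by CloudFormation for inline (ZipFile) Lambda code

# The Route 53 Domains API is only available in us-east-1.
domains = boto3.client("route53domains", region_name="us-east-1")

def handler(event, context):
    try:
        if event["RequestType"] in ("Create", "Update"):
            props = event["ResourceProperties"]
            domains.update_domain_nameservers(
                DomainName=props["DomainName"],  # e.g. example.com
                Nameservers=[{"Name": ns} for ns in props["NameServers"]],
            )
        # Nothing to undo on Delete; report success so stack deletion isn't blocked.
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception as exc:
        cfnresponse.send(event, context, cfnresponse.FAILED, {"Error": str(exc)})

In the template you would pass the hosted zone's NameServers attribute (via GetAtt) into the custom resource's NameServers property and make the ACM certificate depend on the custom resource, so the registrar is updated before validation starts.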

Accessing Amazon RDS Postgresql from Azure DevOps Hosted Agent

How can I allow an Azure DevOps hosted agent to access my Amazon RDS PostgreSQL instance without setting the security group to Anywhere? I was looking for an IP range or something similar to whitelist for the Azure DevOps agents, but I can't find it.
In Azure, I can check a box to grant all "Azure DevOps Services" access to my Azure SQL Database, but of course that option is not present in AWS.
I don't think we can access Amazon RDS PostgreSQL directly from the Azure DevOps hosted agent, that is, using the hosted service account.
However, Amazon RDS for PostgreSQL supports user authentication with Kerberos and Microsoft Active Directory, so we can try writing a script that connects with specific credentials, then run that script in the pipeline by adding the corresponding tasks (e.g. AWS CLI or AWS PowerShell).
Also check How do I allow users to connect to Amazon RDS with IAM credentials?
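If you go down the IAM-credentials route from that last link, a minimal Python sketch could look like this. It assumes IAM database authentication is enabled on the instance and that the host, user, and database names (placeholders here) exist.

import boto3
import psycopg2

HOST = "mydb.abc123xyz.us-east-1.rds.amazonaws.com"  # placeholder endpoint
USER = "pipeline_user"                               # placeholder DB user mapped to IAM

# Generate a short-lived authentication token instead of storing a password.
rds = boto3.client("rds", region_name="us-east-1")
token = rds.generate_db_auth_token(DBHostname=HOST, Port=5432, DBUsername=USER)

# Connect with the token as the password; SSL is required for IAM auth.
conn = psycopg2.connect(
    host=HOST,
    port=5432,
    user=USER,
    password=token,
    dbname="mydatabase",
    sslmode="require",
)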
For the IP ranges, please refer to Allowed address lists and network connections and Microsoft-hosted Agents for details.
The IP ranges used by the hosted agents are linked through here. I have not had much success using them for hosted agents: the list is big, and the documentation is not really clear about which types of services you need to whitelist.
I would go with whitelisting the hosted agent IP just in time during the pipeline run, then removing it as a final step. First, grab the IP of the hosted agent:
$hostedIPAddress = Invoke-RestMethod http://ipinfo.io/json | Select -exp ip
Then you could use the AWS CLI or AWS PowerShell module to add the specific IP. Azure DevOps AWS tools task includes the CLI.
Do the needed work against the DB, then make sure you clean up the temporary rule / security group at the end; a boto3 sketch of that whitelist-then-revoke flow follows.
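Here is a minimal Python/boto3 sketch of that just-in-time whitelisting. The security group ID is a hypothetical placeholder, port 5432 is assumed for PostgreSQL, and a real pipeline should run the revoke step in an always-run cleanup task.

import boto3
import requests

SG_ID = "sg-0123456789abcdef0"  # hypothetical RDS security group ID
agent_ip = requests.get("https://api.ipify.org").text

rule = {
    "IpProtocol": "tcp",
    "FromPort": 5432,
    "ToPort": 5432,
    "IpRanges": [{"CidrIp": f"{agent_ip}/32", "Description": "temp Azure DevOps agent"}],
}

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(GroupId=SG_ID, IpPermissions=[rule])
try:
    pass  # ... the actual database work runs here ...
finally:
    # Always remove the temporary rule, even if the DB work fails.
    ec2.revoke_security_group_ingress(GroupId=SG_ID, IpPermissions=[rule])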

How can I deploy content to a static website in Azure Storage that has IP restrictions enabled?

I'm getting an error in my release pipeline (Azure DevOps) when I deploy content to a static website in Azure Storage with IP restrictions enabled.
Error parsing destination location "https://MYSITE.blob.core.windows.net/$web": Failed to validate destination. The remote server returned an error: (403) Forbidden.
The release was working fine until I added IP restrictions to the storage account to keep the content private. Today we use IP restrictions to control access; soon we will remove them in favor of a VPN and VNets. However, my expectation is that I will have the same problem either way.
My assumption is that Azure DevOps cannot access the storage account because it is not whitelisted in the IP Address list. My release pipeline uses the AzureBlob File Copy task.
steps:
- task: AzureFileCopy@2
  displayName: 'AzureBlob File Copy'
  inputs:
    SourcePath: '$(System.DefaultWorkingDirectory)/_XXXXX/_site'
    azureSubscription: 'XXXX'
    Destination: AzureBlob
    storage: XXXX
    ContainerName: '$web'
I have already enabled "trusted Microsoft services" but that doesn't help.
Whitelisting the IP Addresses for Azure DevOps is not a good option because there are TONS of them and they change regularly.
I've seen suggestions to remove the IP restrictions and re-enable them after the publish step. This is risky because if something were to fail after the IP restrictions are removed, my site would be publicly accessible.
I'm hoping someone has other ideas! Thanks.
You can add a step to whitelist the agent IP address, then remove it from the whitelist at the end of the deployment. You can get the IP address by making a REST call to something like ipify.
I have done that for similar scenarios and it works well.
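If you go the temporary-whitelist route, a rough Python sketch with the azure-mgmt-storage SDK might look like the following. The subscription, resource group, and account names are placeholders, and the exact model/field names can vary between SDK versions, so treat it as a sketch rather than a drop-in script.

import requests
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import IPRule, StorageAccountUpdateParameters

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RESOURCE_GROUP = "my-rg"                                  # placeholder
ACCOUNT = "mysite"                                        # placeholder

agent_ip = requests.get("https://api.ipify.org").text
client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Add the agent IP to the storage account's firewall rules.
account = client.storage_accounts.get_properties(RESOURCE_GROUP, ACCOUNT)
rules = account.network_rule_set
rules.ip_rules.append(IPRule(ip_address_or_range=agent_ip, action="Allow"))
client.storage_accounts.update(
    RESOURCE_GROUP, ACCOUNT, StorageAccountUpdateParameters(network_rule_set=rules)
)

# ... run the AzureBlob File Copy step here ...

# Remove the temporary rule again (ideally in an always-run cleanup step).
rules.ip_rules = [r for r in rules.ip_rules if r.ip_address_or_range != agent_ip]
client.storage_accounts.update(
    RESOURCE_GROUP, ACCOUNT, StorageAccountUpdateParameters(network_rule_set=rules)
)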
I would recommend a different approach: running an Azure DevOps agent with a static IP and/or inside the private VNet.
Why I consider this a better choice:
audit logs will be filled with rule additions and removals, making analysis harder in case of an attack
the Azure service connection must be more powerful than needed, specifically able to change rules in security groups, firewalls, Application Gateways, and so on, when it only needs deploy permissions
it opens traffic from the outside, even if only temporarily, while a private agent always initiates connections from the inside
No solution is perfect, so it is important to choose the best one for your specific scenario.

Google Storage access based on IP Address

Is there a way to give access to a Google Cloud Storage bucket based on the IP address a request is coming from?
On Amazon S3, you can just set this in the access policy, like this:
"Condition" : {
"IpAddress" : {
"aws:SourceIp" : ["192.168.176.0/24","192.168.143.0/24"]
}
}
I do not want to use a signed url.
The updated answers on this page are only partially correct and should not be recommended for the use case of access control to Cloud Storage Objects.
Access Context Manager (ACM) defines rules to allow access (e.g. an IP address).
VPC Service Controls create an "island" around a project and ACM rules can be attached. These rules are "ingress" rules and not "egress" rules meaning "anyone at that IP can get into all resources in the project with the correct IAM permissions".
The ACM rule specifying an IP address will allow that IP address to access all Cloud Storage Objects and all other protected resources owned by that project. This is usually not the intended result. You cannot apply an IP address rule to an object, only to all objects in a project. VPC Service Controls are designed to prevent data from getting out of a project and are NOT designed to allow untrusted anonymous users access to a project's resources.
UPDATE: This is now possible using VPC Service Controls
No, this is not currently possible.
There's currently a feature request to restrict Google Cloud Storage buckets by IP address.
The VPC Service Controls [1] allow users to define a security perimeter around Google Cloud Platform resources such as Cloud Storage buckets (and some others) to constrain data within a VPC and help mitigate data exfiltration risks.
[1] https://cloud.google.com/vpc-service-controls/
I used VPC Service Controls on behalf of a client recently to attempt to accomplish this. You cannot use VPC Service Controls to whitelist an IP address on a single bucket; jterrace is right that there is no such solution for that today. However, using VPC Service Controls you can draw a service perimeter around the Google Cloud Storage (GCS) service as a whole within a given project, then apply an access level to that service perimeter to allow an IP address or IP address range access to the service (and the resources within it). The implication is that any new buckets created within the project will be created within the service perimeter and thus be regulated by the access levels applied to the perimeter, so you'll likely want this to be the sole bucket in the project.
Note that the service perimeter only affects services you specify. It does not protect the project as a whole.
Developer Permissions:
Access Context Manager
VPC Service Controls
Steps to accomplish this:
Use VPC Service Controls to create a service perimeter around the entire Google Cloud Storage service in the project of your choosing
Use Access Context Manager to create access levels for the IP addresses you want to whitelist and the users/groups who will have access to the service
Apply these access levels to the service perimeter created in the previous step (it will take 30 minutes for this change to take effect)
Note: Best practice would be to provide access to the bucket using a service account or users/groups ACL, if that is possible. I know it isn't always so.