We have an Azure PostgreSQL Flexible Server on a VNET subnet which we're trying to lock down as much as possible via NSG rules.
As per the Microsoft documentation we've added rules to cover the guidance given:
High availability features of Azure Database for PostgreSQL - Flexible Server require the ability to send/receive traffic to destination ports 5432 and 6432 within the Azure virtual network subnet where Azure Database for PostgreSQL - Flexible Server is deployed, as well as to Azure storage for log archival. If you create Network Security Groups (NSG) to deny traffic flow to or from your Azure Database for PostgreSQL - Flexible Server within the subnet where it's deployed, please make sure to allow traffic to destination ports 5432 and 6432 within the subnet, and also to Azure storage by using the service tag Azure Storage as a destination.
We have also added a rule to deny all other outbound traffic to lock things down further, but in the Network Watcher flow logs we're seeing blocked outbound traffic to port 443 from the PostgreSQL server's IP address.
When investigated, the destination IP addresses are associated with Akamai and Microsoft, but we're puzzled as to what they may be doing and how to add the relevant rules to cover this seemingly undocumented behaviour.
A sample of the blocked outbound destination IP addresses:
104.74.50.201
23.0.237.118
52.239.130.228
What are the best practices to lock things down but allow PostgreSQL to call out to what it needs to? Is there some more comprehensive documentation somewhere?
The outbound NSG rules:
We understand that there are default rules in place, but we're trying to restrict traffic further to very specific resources.
In my experience, the recommended steps are:
Create a low-priority rule to deny all inbound and outbound traffic, then create higher-priority rules on top of it to allow only the traffic you need.
If applications are deployed on other subnets within the virtual network, allow only those subnet ranges in the NSG inbound rules.
Example:
PostgreSQL deployed within a VNET
Address space: 10.1.0.0/16, subnet range: 10.1.0.0/24
For inbound, always allow only specific ports and destination IP addresses.
If the application consumes any load balancer / cluster IPs, allow only those IPs as destinations in the outbound rules.
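As a rough sketch of that approach with the Azure CLI (the resource group "my-rg", NSG name "pg-flex-nsg", and the 10.1.0.0/24 subnet range are placeholder values; adjust ports, priorities and prefixes to your environment):

# Allow the intra-subnet traffic the documentation calls out: destination ports 5432 and 6432
az network nsg rule create \
  --resource-group my-rg --nsg-name pg-flex-nsg \
  --name AllowIntraSubnetPostgres --priority 100 --direction Outbound \
  --access Allow --protocol Tcp \
  --source-address-prefixes 10.1.0.0/24 \
  --destination-address-prefixes 10.1.0.0/24 \
  --destination-port-ranges 5432 6432

# Allow log archival to Azure Storage via its service tag (port 443 is an assumption here)
az network nsg rule create \
  --resource-group my-rg --nsg-name pg-flex-nsg \
  --name AllowStorageOutbound --priority 110 --direction Outbound \
  --access Allow --protocol Tcp \
  --source-address-prefixes 10.1.0.0/24 \
  --destination-address-prefixes Storage \
  --destination-port-ranges 443

# Deny everything else outbound at a lower priority (higher number)
az network nsg rule create \
  --resource-group my-rg --nsg-name pg-flex-nsg \
  --name DenyAllOutbound --priority 4000 --direction Outbound \
  --access Deny --protocol '*' \
  --source-address-prefixes '*' \
  --destination-address-prefixes '*' \
  --destination-port-ranges '*'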
Related
I'm trying to configure network access of a MongoDB cluster to allow connections from an Azure App Service. I found the outbound IP addresses of my App Service in the Azure portal (see the Azure docs) and entered them in the IP access list according to the MongoDB Atlas docs, appending "/32" to each address to allow only a single host (CIDR notation).
However, when the App Service starts and tries to connect, I get an error telling me to check the IP whitelist of the MongoDB cluster.
This does seem to be the cause, because adding 0.0.0.0/0 (allow access from anywhere) makes the error go away.
What could be the problem here?
I double checked the outbound IP addresses of the Azure App Service and the IP access list from the MongoDB Cluster.
What I did was indeed the answer to another question, so I think I'm missing something...
Actually, /32 is not a valid CIDR range in Azure; the minimum size of a subnet is /29.
This will restrict your range to only 3 IPs (not 8 as you would expect), as Azure will reserve the first four IPs and the last one for internal routing.
Please also consider that if you are running the MongoDB cluster inside a private network and it is not exposed externally via a network appliance (such as Application Gateway, Load Balancer, Front Door or Traffic Manager), you will need to enable VNET Integration on the Azure Web App side.
If this is your case, navigate to your App in the portal and go into the "Networking" blade.
Here you can add VNET Integration, but keep in mind that in this case the minimum size of your subnet is /28 (you cannot use a smaller subnet).
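For reference, the same VNET Integration can be added from the Azure CLI; a minimal sketch, assuming hypothetical names for the app, resource group, VNET and subnet:

# Attach the Web App to a delegated subnet so its outbound traffic can reach the private network
az webapp vnet-integration add \
  --resource-group my-rg \
  --name my-app \
  --vnet my-vnet \
  --subnet integration-subnet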
I had only added the IP addresses listed in the "Outbound IP Addresses" property of my Azure App Service. After also adding the IP addresses listed in the "Additional Outbound IP Addresses" property, the App Service connects to the MongoDB cluster successfully.
This is somewhat surprising to me because the documentation on when outbound IPs change says that the "...set of outbound IP addresses for your app changes when you perform one of the following actions:
Delete an app and recreate it in a different resource group (deployment unit may change).
Delete the last app in a resource group and region combination and recreate it (deployment unit may change).
Scale your app between the lower tiers (Basic, Standard, and Premium), the PremiumV2, and the PremiumV3 tier (IP addresses may be added to or subtracted from the set).
..."
None of the above actions happened. 🙄
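For what it's worth, both sets of addresses can be read with the Azure CLI; a quick sketch, with placeholder resource group and app names:

# List the outbound IPs currently in use and the full set the app could ever use
az webapp show \
  --resource-group my-rg \
  --name my-app \
  --query "{outbound: outboundIpAddresses, possible: possibleOutboundIpAddresses}" \
  --output json

Whitelisting everything in possibleOutboundIpAddresses should cover the addresses the app may switch to later.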
I'm trying to set up my EC2 server on AWS, and I want to make it so that only requests from the same IP address are allowed (for my backend port). What security group allows this? The reason I want to restrict which IPs can make requests to the backend is to stop abusive IPs from making a ton of random requests.
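A rough sketch of this kind of restriction with the AWS CLI, assuming the backend listens on port 3000 and the allowed caller is a single known address (the security group ID, the port, and 203.0.113.10 are hypothetical values):

# Allow the backend port only from one specific source address (/32)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 3000 \
  --cidr 203.0.113.10/32

With no broader rule for that port, requests from any other address are dropped by default.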
I have a VM instance (e2-micro) on GCP running Postgres. I added my own external IP address to pg_hba.conf so I can connect to the database from my local machine. Next to that I have a Node.js application which I want to connect to that database. Locally that works: the application can connect to the database on the VM instance. But when I deploy the app to GCP, I get a 500 Server Error when I try to visit the page in the browser.
These are the things I already did/tried:
Created a firewall rule to allow connections from my own external IP address
Created a VPC connector and added that connector to my app.yaml
Made sure everything is in the same project and region (europe-west1)
If I allow all IP addresses on my VM instance with 0.0.0.0/0 then App Engine can connect, so my guess is that I'm doing something wrong with the connector? I use 10.8.0.0/28 as the IP range, while the internal IP address of the VM instance is 10.132.0.2; is that an issue? I tried an IP range with 10.0.0.0 but that also didn't work.
First check if your app uses a /28 IP address range (see the documentation):
When you create a connector, you also assign it an IP range. Traffic sent through the connector into your VPC network will originate from an address in this range. The IP range must be a CIDR /28 range that is not already reserved in your VPC network.
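For example, a connector with an unused /28 range could be created like this (the connector name is a placeholder; the 10.8.0.0/28 range is taken from the question):

# Create a Serverless VPC Access connector with an unused /28 range
gcloud compute networks vpc-access connectors create my-connector \
  --region europe-west1 \
  --network default \
  --range 10.8.0.0/28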
When you create a VPC connector, a proper firewall rule is also created to allow traffic:
An implicit firewall rule with priority 1000 is created on your VPC network to allow ingress from the connector's IP range to all destinations in the network.
As you wrote yourself, when you create a rule that allows traffic from any IP it works (your app can connect). So look for the rule that allows traffic from the IP range your app is in; if it's not there, create it.
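A minimal sketch of such a rule, assuming the connector uses 10.8.0.0/28, PostgreSQL listens on 5432, and the default network is used (the rule name is a placeholder):

# Allow traffic from the connector's range to the database port on the VPC network
gcloud compute firewall-rules create allow-connector-to-postgres \
  --network default \
  --direction INGRESS \
  --allow tcp:5432 \
  --source-ranges 10.8.0.0/28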
Or you can connect your app to your DB over public IPs; in that case you also have to create a proper rule that allows the traffic from the app to the DB.
Second, check the IP of the DB that the app uses.
My guess is that you didn't change the IP of the DB that the app uses, so it tries to connect not via the VPC connector but via the external IP, and that's why it cannot connect (and only works when you open the firewall to all IPs).
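To double-check which address the app should point at, the VM's internal IP can be read like this (the instance name and zone are placeholders):

# Print the internal (VPC) IP of the database VM; this is the address the app
# should use when it connects through the VPC connector
gcloud compute instances describe my-db-vm \
  --zone europe-west1-b \
  --format='get(networkInterfaces[0].networkIP)'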
This answer pointed me in the right direction: https://stackoverflow.com/a/64161504/3323605.
I needed to deploy my app with
gcloud beta app deploy
since the VPC connector method was in beta. Also, I tried to connect to the external IP in my app.yaml, but that needed to be the internal IP of course.
I want to connect externally to my Redshift cluster (VPC, NOT classic) using Aginity Workbench, so I added my public IP address to the EC2 inbound rules, but I get a connection timeout.
When I allow all traffic in the inbound rules (0.0.0.0/0) it is possible to connect. Of course this is not a preferred solution because of security.
Does anybody have an idea why/where it is failing with my public IP (grabbed from whatismyipaddress.com)?
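For comparison, a rule scoped to a single address on the default Redshift port would look roughly like this (the security group ID and the 198.51.100.25 address are placeholders; the group must be the one actually attached to the cluster, and the cluster itself must be marked publicly accessible for external connections):

# Allow only one public address to reach the Redshift port (default 5439)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0abc1234def567890 \
  --protocol tcp \
  --port 5439 \
  --cidr 198.51.100.25/32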
I have to run a program on an EC2 that reads the host's public IP address from config (which I don't appear to be able to easily change), and then connects to it, i.e. it's looping back to the instance via the public IP address.
I can't find out how to create a security group that can loop back to the EC2 instance. My rules are:
outbound: 0.0.0.0/0 all tcp
inbound: [private IP/32, 127.0.0.1/32, public IP/32] all tcp 4440 (the port I need)
None of the inbound IPs work. I'm testing this by telnetting on the host to the public IP (telnet x.x.x.x 4440, where x.x.x.x is my public IP), and I'm never able to connect. I can do it by specifying 127.0.0.1, though, so the server I'm connecting to is online and bound correctly. I can also access the server through my browser. I just can't loop back. The connection hangs, which is why I think it's a security group issue.
How can I allow this program - which tries to connect to the public IP from the instance - to connect to the same instance by its public IP address?
I just did a test (using an ICMP rule); you have to add a rule in the security group as you said. You should add it normally and set the source to your public IP, e.g. 1.2.3.4/32 following your example. Please note that I am using an Elastic IP in my tests.
According to the docs, it should also be possible to list that security group as its own source. This would permit the loopback even if the IP address changes due to a stop/start.
Another security group. This allows instances associated with the specified security group to access instances associated with this security group. This does not add rules from the source security group to this security group. You can specify one of the following security groups:
The current security group
A different security group for the same VPC
A different security group for a peer VPC in a VPC peering connection
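A rough sketch of that self-referencing rule with the AWS CLI, using port 4440 from the question (the group ID is a hypothetical value, passed both as the group being modified and as the allowed source):

# Allow members of the security group to reach port 4440 on other members of the same group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 4440 \
  --source-group sg-0123456789abcdef0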