Suppose I have a network with user A and node B, which is a subnet router advertising some routes.
Does an ACL that restricts user A from node B also restrict the user's access to the routes B advertises, given that they can't access the node itself? Or does this work differently, so the user can still access the subnet routes?
ACLs specify what may be accessed, by IP, and don't limit the discovery of routes. You can restrict a node's own 100.x.y.z IP separately from the IPs it routes. Of course, a connection still depends on an advertised path existing.
So, you could create an ACL to allow access to a subnet without allowing access to the node advertising it.
You could also create an ACL to allow access to only certain subnets, for example:
{
  "Action": "accept",
  "Users": ["group:admins"],
  "Ports": ["10.0.48.0/24:22"],
}
This should give access to the 10.0.48.0/24 subnet on port 22.
We have an Azure PostgreSQL Flexible Server on a VNET subnet which we're trying to lock down as much as possible via NSG rules.
As per the Microsoft documentation we've added rules to cover the guidance given:
"High availability features of Azure Database for PostgreSQL - Flexible Server require the ability to send/receive traffic to destination ports 5432 and 6432 within the Azure virtual network subnet where Azure Database for PostgreSQL - Flexible Server is deployed, as well as to Azure storage for log archival. If you create Network Security Groups (NSG) to deny traffic flow to or from your Azure Database for PostgreSQL - Flexible Server within the subnet where it's deployed, please make sure to allow traffic to destination ports 5432 and 6432 within the subnet, and also to Azure storage by using service tag Azure Storage as a destination."
And we have added another rule to deny all other outbound traffic to lock things down further, but in the Network Watcher Flow Logs we're seeing blocked outbound traffic to port 443 from the PostgreSQL IP address.
When investigated, the IP addresses being called are associated with Akamai and Microsoft, but we're a little puzzled about what they may be doing and how to add relevant rules to cover this seemingly undocumented behaviour.
A sample of the outbound IP address calls being blocked:
104.74.50.201
23.0.237.118
52.239.130.228
What are the best practices to lock things down but allow PostgreSQL to call out to what it needs to? Is there some more comprehensive documentation somewhere?
Regarding our outbound NSG rules: we understand that there are default rules in place, but we're trying to restrict traffic further to very specific resources.
To my knowledge, the recommended steps are:
Create a low-priority rule that denies all inbound and outbound traffic, then create higher-priority rules on top of it to allow only the traffic you need (see the Azure CLI sketch after this list).
If applications are deployed on subnets within the virtual network, allow only those subnet ranges in the NSG inbound rules.
Example:
PostgreSQL deployed with a VNet
Address space: 10.1.0.0/16 and subnet range: 10.1.0.0/24
For inbound traffic, always allow only specific ports and destination IP addresses.
If the application consumes any load balancer / cluster IPs, allow only those IPs as destinations in the outbound rules.
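To make that concrete, here is a minimal sketch with the Azure CLI. The resource group (rg-demo), NSG name (nsg-pg), rule names, priorities, and the assumption that storage traffic uses HTTPS on 443 are all placeholders/assumptions; the subnet range is taken from the example above:

# Deny everything outbound at the lowest custom priority (4096),
# so the allow rules below take precedence.
az network nsg rule create --resource-group rg-demo --nsg-name nsg-pg \
  --name DenyAllOutbound --priority 4096 --direction Outbound --access Deny \
  --protocol '*' --destination-address-prefixes '*' --destination-port-ranges '*'

# Allow the intra-subnet traffic the Flexible Server needs (ports 5432 and 6432).
az network nsg rule create --resource-group rg-demo --nsg-name nsg-pg \
  --name AllowPgWithinSubnet --priority 100 --direction Outbound --access Allow \
  --protocol Tcp --destination-address-prefixes 10.1.0.0/24 \
  --destination-port-ranges 5432 6432

# Allow log archival to Azure Storage via its service tag, per the quoted guidance.
az network nsg rule create --resource-group rg-demo --nsg-name nsg-pg \
  --name AllowAzureStorage --priority 110 --direction Outbound --access Allow \
  --protocol Tcp --destination-address-prefixes Storage --destination-port-ranges 443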
I'm trying to configure network access for a MongoDB cluster to allow connections from an Azure App Service. I found the outbound IP addresses of my App Service in the Azure portal (see the Azure docs) and entered them in the IP access list according to the MongoDB Atlas docs, appending "/32" to each address to allow only a single host (CIDR notation).
However, when trying to connect on App Service start I get an error indicating to check the IP whitelist of the MongoDB cluster.
The access list actually does seem to be the problem, because adding 0.0.0.0/0 (allow access from anywhere) resolves the error.
What could be the problem here?
I double checked the outbound IP addresses of the Azure App Service and the IP access list from the MongoDB Cluster.
What I did was indeed the answer to another question, so I think I'm missing something...
Actually, /32 is not a valid CIDR for an Azure VNet; the minimum size of a subnet is /29.
A /29 restricts your range to only 3 usable IPs (not 8, as you might expect), because Azure reserves the first four addresses and the last one for internal routing.
Please also consider that if you are running the MongoDB cluster inside a private network and it is not exposed externally via a network appliance (such as Application Gateway, Load Balancer, Front Door, or Traffic Manager), you will need to enable VNet Integration on the Azure Web App side.
If this is your case, navigate to your app in the portal and open the "Networking" blade.
Here you can add VNet Integration, but bear in mind that in this case the minimum size of your subnet can only be /28 (you cannot use a smaller subnet).
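If you go that route, the same thing can be done from the Azure CLI; a sketch, where the resource group, app, VNet, and subnet names are hypothetical:

# Attach the web app to an existing VNet subnet (names are placeholders).
az webapp vnet-integration add --resource-group rg-demo --name my-app \
  --vnet my-vnet --subnet integration-subnet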
I had only added the IP addresses listed in the "Outbound IP Addresses" property of my Azure App Service. After also adding the addresses listed in the "Additional Outbound IP Addresses" property, the App Service connects to the MongoDB cluster successfully.
This is somewhat surprising to me because the documentation on when outbound IPs change says that the "...set of outbound IP addresses for your app changes when you perform one of the following actions:
Delete an app and recreate it in a different resource group (deployment unit may change).
Delete the last app in a resource group and region combination and recreate it (deployment unit may change).
Scale your app between the lower tiers (Basic, Standard, and Premium), the PremiumV2, and the PremiumV3 tier (IP addresses may be added to or subtracted from the set).
..."
None of the above actions happened. 🙄
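For reference, both sets can be listed with the Azure CLI (the resource group and app name below are placeholders); the larger "possible" set is what belongs in the access list:

# Addresses the app is currently using for outbound calls.
az webapp show --resource-group rg-demo --name my-app \
  --query outboundIpAddresses --output tsv

# The full set the app could ever use in its current deployment unit.
az webapp show --resource-group rg-demo --name my-app \
  --query possibleOutboundIpAddresses --output tsv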
I'm trying to set up my EC2 server on AWS, and I want to make it so only requests from the same IP address are allowed (for my backend port). What security group allows this? I want to restrict which IPs can make requests to the backend to stop abusive IPs from making a ton of random requests.
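For illustration, a security-group ingress rule locked to a single source address might look like this with the AWS CLI; the group ID, port, and address are placeholders:

# Allow TCP to the backend port (assumed 3000 here) from one /32 source only.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 3000 \
  --cidr 203.0.113.10/32

With no other inbound rules on that port, the security group's default deny drops everything else.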
A newbie question on using subnet ACLs with IBM gen2 VPC.
I have an internet-facing application that accepts inbound requests as well as makes outbound requests to peer hosts. To enable this, I practically have to open all (>1024) inbound and outbound ports on my subnets.
I'm using IBM's security groups to firewall my VMs, but I'm curious why the ACLs are stateless, forcing the user to open all ports. I certainly see the use of subnet-to-subnet ACLs, but I'm asking about my particular use case.
Am I missing something here? Would you please recommend best practice?
Stateless network ACLs are common among cloud providers offering VPCs. If the remote IP ranges are not fixed, it will not be possible to limit them further with ACLs. The same goes for ports: in your example most port numbers must stay open, except the non-ephemeral ports you are not using (as you mention).
You can imagine more constrained use cases where ACLs would add another layer of security and the associated reasoning about connectivity. Say you had a Direct Link from on-premises to the cloud and the IP range could be constrained, etc.
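To see what statelessness means in rule terms, here is a sketch with the IBM Cloud CLI. The ACL name and CIDRs are placeholders, and the exact argument layout is an assumption from memory, so verify against ibmcloud is network-acl-rule-add --help before use:

# Inbound: let clients anywhere reach the app on 443.
ibmcloud is network-acl-rule-add my-acl allow inbound tcp 0.0.0.0/0 10.240.0.0/24 \
  --destination-port-min 443 --destination-port-max 443

# Outbound: because the ACL is stateless, replies must be allowed explicitly,
# from source port 443 back to the clients' ephemeral ports.
ibmcloud is network-acl-rule-add my-acl allow outbound tcp 10.240.0.0/24 0.0.0.0/0 \
  --source-port-min 443 --source-port-max 443 \
  --destination-port-min 1024 --destination-port-max 65535

A security group, being stateful, tracks the connection and allows the reply automatically, which is why it doesn't need the second rule.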
I have multiple domain names and I want all of them to point to the same web server I have on a Google Compute Engine instance. How can I do that?
You don't need a separate static IP address per website: you can serve an arbitrary number of sites from a single VM by using a feature such as Apache virtual hosts, which lets you serve a different site depending on the hostname requested by the user.
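As a sketch, assuming a Debian-style Apache layout and placeholder domains (example.com, example.org):

# Write two name-based virtual hosts into one site file (paths/domains are placeholders).
cat <<'EOF' | sudo tee /etc/apache2/sites-available/multi-domain.conf
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example.com
</VirtualHost>
<VirtualHost *:80>
    ServerName example.org
    DocumentRoot /var/www/example.org
</VirtualHost>
EOF

# Enable the site and reload Apache.
sudo a2ensite multi-domain
sudo systemctl reload apache2

Point each domain's DNS A record at the instance's external IP, and Apache picks the right site from the Host header.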
As per the Google Compute Engine docs on static IP addresses:
"An instance can have only one external IP address. If it already has an external IP address, you must first remove that address by deleting the old access configuration, then adding a new access configuration with the new external IP address"
But using Protocol Forwarding you can achieve multiple external IPs for one VM instance, with some configuration:
1) By default, a VM is assigned an ephemeral external IP; you can promote it to a static external IP, which remains unchanged across stop and restart.
2) Extra external IPs have to be attached to ForwardingRules that target the VM. You can use (or promote to) static IPs there as well.
The commands you may want to use:
1) Create a TargetInstance for your VM instance:
gcloud compute target-instances create <target-instance-name> --instance <instance-name> --zone=<zone>
2) Create a ForwardingRule pointing to the TargetInstance:
gcloud compute forwarding-rules create <forwarding-rule-name> --target-instance=<target-instance-name> --ip-protocol=TCP --ports=<ports>
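If you want the extra IP to be static, you can reserve one and bind the forwarding rule to it; a sketch where the address name, rule name, region, zone, and port are placeholders:

# Reserve a regional static external IP.
gcloud compute addresses create extra-ip --region us-central1

# Create the forwarding rule bound to that address.
gcloud compute forwarding-rules create fr-extra --region us-central1 \
  --address extra-ip --ip-protocol TCP --ports 80 \
  --target-instance <target-instance-name> --target-instance-zone us-central1-a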