A newbie question on using subnet ACLs with IBM gen2 VPC.
I have an internet-facing application that accepts inbound requests and also makes outbound requests to peer hosts. To enable this, I practically have to open all (>1024) inbound and outbound ports on my subnets.
I'm using IBM's security groups to firewall my VMs, but I'm curious why the ACLs are stateless, forcing the user to open all ports. I certainly see the use of subnet-to-subnet ACLs, but I'm asking about my particular use case.
Am I missing something here? Could you recommend a best practice?
Stateless network ACLs are common among cloud providers offering VPCs. If the remote IP ranges are not fixed, ACLs cannot restrict them further. The same goes for ports: in your example, most port numbers have to be allowed, except the non-ephemeral ports you are not using (as you mention).
You can imagine more constrained use cases where ACLs would add another layer of security and associated reasoning about connectivity, say a Direct Link from on-premises to the cloud where the IP range could be constrained, etc.
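To see why statelessness forces those wide-open ranges, here is a minimal Python sketch (purely illustrative, it calls no IBM API) of the rule pairs each flow needs when reply packets are evaluated independently:

# Illustrative only: models the rule pairs a stateless ACL needs; no IBM API calls.
EPHEMERAL = "1024-65535"  # typical TCP client source-port range

def rules_for_listener(port: int) -> list:
    """Inbound request plus the outbound reply rule for a service this subnet hosts."""
    return [
        {"direction": "inbound",  "src_port": EPHEMERAL, "dst_port": str(port), "action": "allow"},
        {"direction": "outbound", "src_port": str(port), "dst_port": EPHEMERAL, "action": "allow"},
    ]

def rules_for_client(port: int) -> list:
    """Outbound request plus the inbound reply rule for a peer this subnet dials."""
    return [
        {"direction": "outbound", "src_port": EPHEMERAL, "dst_port": str(port), "action": "allow"},
        {"direction": "inbound",  "src_port": str(port), "dst_port": EPHEMERAL, "action": "allow"},
    ]

# A subnet that serves port 443 and also dials peers on 443 already needs the
# whole ephemeral range open in both directions -- exactly the ">1024 everywhere"
# situation from the question. A stateful security group tracks replies instead.
for rule in rules_for_listener(443) + rules_for_client(443):
    print(rule)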
We have an Azure PostgreSQL Flexible Server on a VNET subnet which we're trying to lock down as much as possible via NSG rules.
As per the Microsoft documentation, we've added rules to cover the guidance given:
High availability features of Azure Database for PostgreSQL - Flexible Server require the ability to send/receive traffic to destination ports 5432 and 6432 within the Azure virtual network subnet where Azure Database for PostgreSQL - Flexible Server is deployed, as well as to Azure Storage for log archival. If you create Network Security Groups (NSG) to deny traffic flow to or from your Azure Database for PostgreSQL - Flexible Server within the subnet where it's deployed, please make sure to allow traffic to destination ports 5432 and 6432 within the subnet, and also to Azure Storage by using the service tag Azure Storage as a destination.
We have also added another rule to deny all other outbound traffic to lock things down further, but in the Network Watcher flow logs we're seeing blocked outbound traffic to port 443 from the PostgreSQL IP address.
When investigated, the IP addresses being called are associated with Akamai and Microsoft, but we're a little puzzled about what they may be doing and how to add relevant rules to cover this seemingly undocumented behaviour.
A sample of the outbound IP address calls being blocked:
104.74.50.201
23.0.237.118
52.239.130.228
What are the best practices to lock things down but allow PostgreSQL to call out to what it needs to? Is there some more comprehensive documentation somewhere?
The outbound NSG rules:
We understand that there are default rules in place, but we're trying to restrict traffic further to very specific resources.
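One way to identify the blocked addresses above is to match them against Microsoft's downloadable "Azure IP Ranges and Service Tags" JSON; Akamai edges won't match, but the Microsoft ones should resolve to a service tag you can then allow. A sketch, assuming the weekly JSON has been saved locally as tags.json:

# Sketch: match blocked IPs against Microsoft's published service-tag ranges.
# Assumes the "Azure IP Ranges and Service Tags - Public Cloud" JSON was
# downloaded and saved as tags.json.
import ipaddress
import json

blocked = ["104.74.50.201", "23.0.237.118", "52.239.130.228"]  # sample from the flow logs

with open("tags.json") as f:
    tags = json.load(f)["values"]

for ip in map(ipaddress.ip_address, blocked):
    hits = [
        tag["name"]
        for tag in tags
        if any(ip in ipaddress.ip_network(p) for p in tag["properties"]["addressPrefixes"])
    ]
    print(ip, "->", hits or "no Azure service tag (likely a CDN edge such as Akamai)")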
To my knowledge, the recommended steps are:
Create a deny-all rule for both inbound and outbound traffic at the lowest custom priority, then create higher-priority rules on top of it to allow only the traffic you need.
If applications are deployed on subnets within the virtual network, allow only those subnet ranges in the NSG inbound rules.
Example:
PostgreSQL deployed within a VNET
Address space: 10.1.0.0/16 and subnet range: 10.1.0.0/24
For inbound, always allow only specific ports and destination IP addresses.
If the application consumes any load balancer or cluster IPs, allow only those IPs as destinations in the outbound rules.
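As a hedged sketch of those steps with the azure-mgmt-network Python SDK (the subscription, resource group, and NSG names below are hypothetical; the subnet range is taken from the example above, and matching inbound allows would follow the same pattern):

# pip install azure-identity azure-mgmt-network
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION = "<subscription-id>"   # placeholder
RG, NSG = "my-rg", "pg-subnet-nsg"   # hypothetical names
SUBNET = "10.1.0.0/24"               # subnet range from the example above

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION)

rules = [
    # Allow intra-subnet PostgreSQL traffic on the documented ports.
    dict(name="AllowPgWithinSubnet", priority=100, direction="Outbound", access="Allow",
         protocol="Tcp", source_address_prefix=SUBNET, source_port_range="*",
         destination_address_prefix=SUBNET, destination_port_ranges=["5432", "6432"]),
    # Allow log archival to Azure Storage via its service tag.
    dict(name="AllowStorageOut", priority=110, direction="Outbound", access="Allow",
         protocol="Tcp", source_address_prefix=SUBNET, source_port_range="*",
         destination_address_prefix="Storage", destination_port_range="443"),
    # Deny everything else outbound, at the lowest custom priority.
    dict(name="DenyAllOut", priority=4096, direction="Outbound", access="Deny",
         protocol="*", source_address_prefix="*", source_port_range="*",
         destination_address_prefix="*", destination_port_range="*"),
]

for r in rules:
    name = r.pop("name")
    client.security_rules.begin_create_or_update(RG, NSG, name, r).result()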
I'm trying to configure network access for a MongoDB cluster to allow connections from an Azure App Service. I found the outbound IP addresses of my App Service in the Azure portal (see the Azure docs) and entered them in the IP access list according to the MongoDB Atlas docs. I appended "/32" to the IP addresses to allow only a single host (CIDR notation).
However, when the App Service tries to connect on startup, I get an error indicating that I should check the IP whitelist of the MongoDB cluster.
This actually seems to be the problem, because adding 0.0.0.0/0 (allow access from anywhere) solves the problem.
What could be the problem here?
I double checked the outbound IP addresses of the Azure App Service and the IP access list from the MongoDB Cluster.
What I did was indeed the answer to another question, so I think I'm missing something...
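For diagnosis, a minimal pymongo check run from a machine whose IP is (or isn't) in the access list makes the failure mode visible; the connection string below is a hypothetical placeholder:

# pip install pymongo
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

# Hypothetical Atlas SRV connection string; substitute your own.
URI = "mongodb+srv://user:password@cluster0.example.mongodb.net/?retryWrites=true"

client = MongoClient(URI, serverSelectionTimeoutMS=5000)
try:
    client.admin.command("ping")  # forces server selection
    print("connected: caller IP is in the Atlas access list")
except ServerSelectionTimeoutError as err:
    # Typically what an IP-access-list rejection looks like from the client side.
    print("cannot reach cluster:", err)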
Actually, /32 is not a valid CIDR in Azure; the minimum size of a subnet is /29.
A /29 restricts your range to only 3 usable IPs (not the 8 you would expect), as Azure reserves the first four IPs and the last one for internal use.
Please also consider that if you are running the MongoDB cluster inside a private network and it is not exposed externally via a network appliance (such as Application Gateway, Load Balancer, Front Door, or Traffic Manager), you will need to enable VNET Integration on the Azure Web App side.
If this is your case, navigate to your app in the portal and open the "Networking" blade.
Here you can add VNET Integration, but bear in mind that in this case the minimum size of your subnet is /28 (you cannot add a smaller subnet).
Initially I had only added the IP addresses listed in the "Outbound IP Addresses" property of my Azure App Service. After also adding the IP addresses listed in the "Additional Outbound IP Addresses" property, the App Service connects to the MongoDB cluster successfully.
This is somewhat surprising to me because the documentation on when outbound IPs change says that the "...set of outbound IP addresses for your app changes when you perform one of the following actions:
Delete an app and recreate it in a different resource group (deployment unit may change).
Delete the last app in a resource group and region combination and recreate it (deployment unit may change).
Scale your app between the lower tiers (Basic, Standard, and Premium), the PremiumV2, and the PremiumV3 tier (IP addresses may be added to or subtracted from the set).
..."
None of the above actions happened. 🙄
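A sketch that would have caught this, using the azure-mgmt-web Python SDK (resource names are hypothetical): the Site model exposes both outbound_ip_addresses and possible_outbound_ip_addresses, and it's the union of the two that belongs in the Atlas access list.

# pip install azure-identity azure-mgmt-web
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

SUBSCRIPTION = "<subscription-id>"   # placeholder
RG, APP = "my-rg", "my-app-service"  # hypothetical names

client = WebSiteManagementClient(DefaultAzureCredential(), SUBSCRIPTION)
site = client.web_apps.get(RG, APP)

current = set(site.outbound_ip_addresses.split(","))
possible = set(site.possible_outbound_ip_addresses.split(","))

# Whitelist every address the app could ever egress from, as /32 entries.
for ip in sorted(current | possible):
    print(f"{ip}/32")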
So I have GCP and Kubernetes set up, with a web app (Apache OFBiz) running on pods in a GKE cluster. We have a domain that points to the web app, so essentially it's accessible from anywhere on the internet. Our issue: since this is a school project, we want to limit access to the web app to the internal network on GCP; we want to simulate a VPN connection. I have a VPN gateway set up, but I have no idea what to do on an arbitrary computer to simulate a connection to the internal network on GCP. Do I need something else to make this work? What are the steps on the host to connect to GCP? And finally, how do I go about limiting access to the web app so only people on the internal network can reach it?
When I want to test a VPN, I simply create a new VPC in my project and connect the two with Cloud VPN. Then, in the new VPC, you can create VMs that simulate computers on the other side of the VPN, and thus simulate what you want.
To set up a VPN on GCP you can use Cloud VPN with static or dynamic routing. You will need to configure a remote peer at the location from which you want to access your GCP resources, to establish the connection towards the Cloud VPN gateway on the GCP end.
This means you may need a router that supports creating VPN tunnels on-premises, or a host that acts as a router and establishes the connection to Cloud VPN using VPN software (strongSwan, for example).
You can block external access to the resources on your VPC network by using GCP firewall rules, allowing only specific ports or source IP ranges as you wish.
Another option, even though it is not a VPN and the traffic is not encrypted, is to allow ingress traffic only from the public IP from which you would like to connect to your internal VPC; but this is less secure and only works if you have a static public IP on-premises.
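To make the firewall part concrete, here is a hedged sketch with the google-cloud-compute client (the project, network, rule name, and CIDR are placeholders; GCP's implied deny-ingress rule blocks everything this rule doesn't admit):

# pip install google-cloud-compute
from google.cloud import compute_v1

def allow_ingress_only_from(project: str, network: str, cidr: str) -> None:
    """Admit the web app port only from the given range (VPN or on-prem CIDR)."""
    firewall = compute_v1.Firewall(
        name="allow-ofbiz-from-vpn",  # hypothetical rule name
        network=f"projects/{project}/global/networks/{network}",
        direction="INGRESS",
        source_ranges=[cidr],         # e.g. the CIDR behind the tunnel
        allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["443"])],
        priority=1000,
    )
    op = compute_v1.FirewallsClient().insert(project=project, firewall_resource=firewall)
    op.result()  # wait for the rule to be created

allow_ingress_only_from("my-project", "default", "192.168.1.0/24")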
Since you said this is a school project, I would recommend asking your teacher for more direct advice. That said, you can't "simulate" a VPN, but you can set up an IPsec client on your laptop (or whatever) and actually connect to it. Unfortunately, Google doesn't appear to have any documentation on this, so I'm guessing they presume you already know IPsec well enough to write a connection config yourself.
Using kubectl port-forward might be an easier solution.
We need to set up an ejtserver instance inside an OpenShift cloud and expose it to an external network.
I have been told that a binary protocol is a big no-no in that situation, as it requires an extra, manually set up egress route (lots of extra work for the external team) and consumes a limited resource: port numbers, since the ports for binary egress routes need to be unique.
No such limitation exists for HTTP(S) traffic, because the routers understand enough of the protocol to differentiate connections by host name, which is an unlimited resource.
So I hope I can make the connection from install4j-maven-plugin to the ejtserver instance through HTTP(S); is this possible?
As of 1.13.1, this is not possible; please contact support@ej-technologies.com for alternative arrangements with build-only license keys.
I am looking for a GCP networking best practice that allows connections from auto-scaled instances to a PostgreSQL server installed on a separate instance.
So far I have tried whitelisting the load-balancer IP in the firewall and in the postgresql config file, but failed.
Any help or pointers are highly appreciated.
The load balancer doesn't process information by itself; it just exposes the frontend address(es) and hands the requests to an instance group.
That instance group handles the HTTP requests and connects to the database instance.
The load balancer dynamically distributes requests (and can even trigger additional instances) behind the same frontend address.
--
So first you should make it work with a regular instance, configure it, and save an instance template. Then you can proceed with creating an instance group that can be managed by a load balancer.
EDIT - Extended the answer from my comment
"I don't think your problem is related to Google cloud platform now. If you have a known IP address for the PostgreSQL server (connect using an internal network IP address so it doesn't change), then make sure your auto-balanced instances are in the same internal network, use db's internal IP and connect to it."