I am trying to enable ZoneRedundant High Availability on our Azure PostgreSQL Flexible server.
The Azure documentation mentions the following important step:
High availability features of Azure Database for PostgreSQL - Flexible Server require the ability to send/receive traffic to destination ports 5432 and 6432 within the Azure virtual network subnet where Azure Database for PostgreSQL - Flexible Server is deployed, as well as to Azure Storage for log archival. If you create Network Security Groups (NSG) to deny traffic flow to or from your Azure Database for PostgreSQL - Flexible Server within the subnet where it's deployed, please make sure to allow traffic to destination ports 5432 and 6432 within the subnet, and also to Azure Storage by using the service tag Azure Storage as a destination.
I'm finding it hard to get my head around this from the way it is written and can't find many details about it elsewhere.
From what I understand, the first requirement is to add an inbound NSG rule as follows:
Source IP: [CIDR of the database subnet]
Source Port Range: *
Destination IP addresses: [CIDR of the database subnet]
Destination Port Ranges: 5432,6432
Action: Allow
Priority: [Any value lower than the default inbound Deny rules, so it is evaluated before them]
Have I read this correctly? We are effectively allowing traffic between the instance and the replication instance within the subnet, so the source IP and the destination IP should both be the database subnet CIDR.
I know that NSGs are stateful. So am I right in saying I don't need any Outbound rule for this case?
The second requirement to allow traffic 'to Azure storage by using service tag Azure storage as a destination' is confusing me. From the way this is written I don't know if it should be an inbound or an outbound rule. My first guess was outbound as logs would be moving FROM the database to Azure storage. Is that correct?
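In case it is useful, this is roughly how I would express that inbound rule with the Azure Python SDK (azure-mgmt-network); the subscription, resource group, NSG name and subnet CIDR below are just placeholders, not real values:

    # Sketch only: assumes azure-identity and azure-mgmt-network are installed,
    # and that "10.0.2.0/24" stands in for the database subnet CIDR.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient
    from azure.mgmt.network.models import SecurityRule

    client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

    inbound_ha_rule = SecurityRule(
        protocol="Tcp",
        direction="Inbound",
        access="Allow",
        priority=200,                              # any value below the default 65500 deny rules
        source_address_prefix="10.0.2.0/24",       # database subnet CIDR (placeholder)
        source_port_range="*",
        destination_address_prefix="10.0.2.0/24",  # same subnet CIDR (placeholder)
        destination_port_ranges=["5432", "6432"],
    )

    client.security_rules.begin_create_or_update(
        "my-resource-group", "db-subnet-nsg", "Allow-PG-HA-Within-Subnet", inbound_ha_rule
    ).result()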
• As the Microsoft documentation implies, the resources related to the Azure Database for PostgreSQL - Flexible Server are deployed within the same virtual network, and both incoming and outgoing traffic on ports 5432 and 6432 must be allowed within that subnet.
This is because the primary and standby servers communicate over these ports (5432 for PostgreSQL itself and 6432 for the built-in PgBouncer connection pooler), and since these ports are not open by default, the documentation calls them out explicitly.
Based on that, the NSG rule you have described is sufficient and correct to allow traffic on these ports for the PostgreSQL server. And yes, a rule does need to be created for traffic from the database subnet (and any other trusted ranges/subnets) to Azure Storage, with the Azure Storage service tag as the destination; that covers the log-archival traffic. If the storage account is used for anything beyond log archival, you will have to create additional rules accordingly.
To answer your last question: make it an outbound NSG rule from the Azure PostgreSQL server's subnet to Azure Storage; that satisfies the requirement.
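For completeness, that outbound rule to the Storage service tag could look roughly like this with the same SDK (the CIDR, names and HTTPS port 443 are assumptions for illustration, not values taken from your environment):

    # Sketch only: outbound from the database subnet to the Azure Storage service tag.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient
    from azure.mgmt.network.models import SecurityRule

    client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

    outbound_storage_rule = SecurityRule(
        protocol="Tcp",
        direction="Outbound",
        access="Allow",
        priority=210,
        source_address_prefix="10.0.2.0/24",   # database subnet CIDR (placeholder)
        source_port_range="*",
        destination_address_prefix="Storage",  # the Azure Storage service tag
        destination_port_range="443",          # HTTPS to storage (assumption)
    )

    client.security_rules.begin_create_or_update(
        "my-resource-group", "db-subnet-nsg", "Allow-Storage-Outbound", outbound_storage_rule
    ).result()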
Related
I need to allow inbound connections from a remote platform to do some administrative tasks on one of my databases (in my case, allow a reverse-ETL service to feed one of my postgresql databases in a pod in my k8s cluster)
The remote platform lets me configure a PostgreSQL destination through SSH tunnels or reverse SSH tunnels, or direct connections. Of course, I would like traffic to be encrypted, so I’m opting for the SSH or reverse SSH Tunnel.
Any idea if/how I can set up this access on my k8s cluster?
I would like to give the remote service access ONLY to one of my pg databases (and not the whole cluster/namespace, for security reasons).
The scenario I was thinking about:
1. Traefik listens for SSH on a specific port (like 2222).
2. Route this port to an SSH bastion pod capable of managing incoming SSH connections and logging in as a specific Linux user. Only allow connections from the remote service's IPs via an IP whitelist middleware.
3. Allow connections from this bastion host pod (or ideally, this Linux user) ONLY to my postgresql instance on the default pg port.
If I open a bastion host (step 2), by default all my users will have access to all services on the cluster... right? How can I isolate my bastion host instance so it can only connect to PG? I haven't used network policies yet, but I believe they may be the answer... however, would it be possible to activate network policies for a single pod only (my bastion host) and leave the rest as it is?
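For what it's worth, this is the kind of per-pod policy I had in mind, sketched with the official Python kubernetes client (the labels, namespace and port are made up, and the same object could just as well be written as plain YAML):

    # Sketch only: an egress policy that selects ONLY the bastion pod, leaving
    # every other pod in the namespace untouched.
    from kubernetes import client, config

    config.load_kube_config()

    policy = client.V1NetworkPolicy(
        api_version="networking.k8s.io/v1",
        kind="NetworkPolicy",
        metadata=client.V1ObjectMeta(name="bastion-egress-to-postgres"),
        spec=client.V1NetworkPolicySpec(
            # Only pods labelled app=ssh-bastion are selected by this policy.
            pod_selector=client.V1LabelSelector(match_labels={"app": "ssh-bastion"}),
            policy_types=["Egress"],
            egress=[
                client.V1NetworkPolicyEgressRule(
                    to=[client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(match_labels={"app": "postgresql"})
                    )],
                    ports=[client.V1NetworkPolicyPort(protocol="TCP", port=5432)],
                )
            ],
        ),
    )

    # Note: if the bastion resolves the database by service name, an extra egress
    # rule allowing DNS (UDP 53 to kube-dns) would be needed as well.
    client.NetworkingV1Api().create_namespaced_network_policy(namespace="default", body=policy)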
I have a kubernetes cluster with several nodes, and it is connecting to a SQL server outside of the cluster. How can I whitelist these (potentially changing) nodes on the SQL server firewall, without having to whitelist each Node's external IP independently?
Is there a clean solution for this? Perhaps some intra-cluster tooling to route all requests through a single node?
You would have to use a NAT. It is possible, but fiddly (we do this weekly in order to connect to a hosted service to make backups, and the hosted service only whitelists a specific IP.)
We used Terraform for spinning up a cluster, then deploying our backup job to it so it could connect to the hosted service, and since it was going via the NAT IP, the remote host would allow the connection.
We used Cloud NAT via Terraform (as we were on GKE): https://registry.terraform.io/modules/terraform-google-modules/cloud-nat/google/latest
Though there are surely similar options for whichever Kubernetes provider you are using. If you are running bare-metal, you'll need to do the routing yourself.
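If you want a quick sanity check that traffic really leaves via the NAT, something like this run from inside a pod prints the source IP the outside world sees (the echo endpoint is just one public example):

    # Sketch only: run inside a pod/Job; the printed address should match the
    # Cloud NAT IP you whitelisted on the SQL server firewall.
    import urllib.request

    egress_ip = urllib.request.urlopen(
        "https://checkip.amazonaws.com", timeout=10
    ).read().decode().strip()
    print("Egress IP seen by external services:", egress_ip)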
I am trying to set up Firehose to send data from a Kinesis stream to a Redshift cluster. Firehose successfully inserts the data into my S3 bucket, but I am receiving the following error when Firehose attempts to execute the S3 -> Redshift COPY command:
The connection to the specified Amazon Redshift cluster failed. Ensure that security settings allow Firehose connections, that the cluster or database specified in the Amazon Redshift destination configuration JDBC URL is correct, and that the cluster is available.
I have performed every setup step according to this except for one: I did not make my Redshift cluster publicly accessible. I am unable to do this because the cluster is in a private VPC that does not have an internet gateway attached.
After researching the issue, I found this article, which provides insight into how to set up AWS PrivateLink with Firehose. However, I have heard that some AWS services support PrivateLink and others do not. Would PrivateLink work for this case?
I am also concerned with how this would affect the security of my VPC. Could anyone provide insight into the possible risks of using PrivateLink?
I was able to solve this issue. Add an Internet gateway to your VPC route table.
Go to Redshift VPC.
On the Routes tab (you must have 3 private routes), choose Edit, Add another route, and add the following routes as necessary. Choose Save when you're done.
For IPv4 traffic, specify 0.0.0.0/0 in the Destination box, and select the internet gateway ID in the Target list.
If you add the internet gateway ID to all 3 private route tables, you might see failures in other applications that use the same routes/VPC. To avoid that, update only one route table with the internet gateway ID and leave the NAT gateway as the target for 0.0.0.0/0 in the other two.
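Roughly, the same route changes with boto3 look like this (all of the route table, internet gateway and NAT gateway IDs are placeholders for your own):

    # Sketch only: give ONE private route table a default route via the internet
    # gateway, and keep the NAT gateway as the default route in the other two.
    import boto3

    ec2 = boto3.client("ec2")

    # Route table used by the subnets Firehose needs to reach:
    ec2.create_route(
        RouteTableId="rtb-11111111",
        DestinationCidrBlock="0.0.0.0/0",
        GatewayId="igw-22222222",
    )

    # The other two private route tables keep 0.0.0.0/0 pointing at the NAT gateway:
    for rtb_id in ("rtb-33333333", "rtb-44444444"):
        ec2.replace_route(
            RouteTableId=rtb_id,
            DestinationCidrBlock="0.0.0.0/0",
            NatGatewayId="nat-55555555",
        )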
We are running a SaaS service that we are looking to migrate to Kubernetes, preferably at one of the hyperscalers. One specific issue I have not yet found a clean solution for is the need for egress IP address selection from within the application.
We deal with a large number of upstream providers that apply access control and rate limiting based on source IP address. Also, a portion of our customers use their own accounts with some of the upstream providers. To access an upstream provider in the context of their account, we need to control the source IP used for the connection from within the application.
We are currently running our services in a DMZ behind a load balancer, so direct network interface selection is already impossible. We use some iptables rules on our load balancers/gateways to do address selection based on mapped port numbers (e.g. egress connections to port 1081 are mapped to source address B and target port 80, port 1082 to source address C port 80).
This, however, is quite a fragile setup that also does not map nicely when trying to migrate to more standardized *aaS offerings.
Looking for suggestions for a better setup.
One of the things that could help you solve this is the Istio Egress Gateway, so I suggest you look into it.
Otherwise, it still depends on the particular platform and the way your cluster is deployed. For example, on AWS you can make sure your egress traffic always leaves from a predefined, known set of IPs by forwarding it through instances with Elastic IPs assigned (be it regular EC2 instances or AWS NAT gateways). Even with the Egress Gateway above, you need some way to pin a fixed IP, so an AWS Elastic IP (or equivalent) is a must.
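As a concrete illustration of the Elastic IP approach on AWS, the core of it with boto3 is something like this (the subnet ID is a placeholder, and you would still point the private route tables' 0.0.0.0/0 at the NAT gateway afterwards):

    # Sketch only: allocate an Elastic IP and attach it to a NAT gateway in a
    # public subnet so all egress from the private subnets uses that one known IP.
    import boto3

    ec2 = boto3.client("ec2")

    eip = ec2.allocate_address(Domain="vpc")
    nat = ec2.create_nat_gateway(
        SubnetId="subnet-0123456789abcdef0",   # a public subnet (placeholder)
        AllocationId=eip["AllocationId"],
    )
    print("NAT gateway:", nat["NatGateway"]["NatGatewayId"],
          "egress IP:", eip["PublicIp"])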
I am using Google Container Engine to launch a cluster that connects to remote services (in a different data center / provider). The containers that are connecting may not have a kubernetes service associated with them and don't need external inbound IP addresses. However, I want to set up firewall rules on the remote machines and have a known subnet that the nodes will be within when I expand/reduce the cluster or if a node goes down and is re-built.
Looking at Google networks, they appear to be for internal ranges (e.g. 10.128.0.0, etc.). External IPs let me set up single static IP addresses but not a range, and I don't see how to apply one to a node; applying it to a load balancer won't change the outbound IP address.
Is there a way I can reserve a block of IP addresses for my cluster to use in my firewall rules on my remote servers? Or is there some other solution I'm missing for this kind of thing?
The proper solution for this is to use a VPN to connect the two networks. Google Cloud VPN allows you to create this on the Google side.