Subnet routing to AWS VPC doesn't appear to work - amazon-vpc

I'm trying to set up a Tailscale node as a relay to my AWS VPC. I've followed the instructions here to the letter, multiple times. Unfortunately, I just cannot seem to ssh to the second (non-Tailscale) instance. My process, briefly:
1. Set up an AWS VPC with the VPC wizard.
2. Create an instance, tailscale-relay, on the VPC, in the public subnet, with SSH enabled and my private key. Assign it a new security group called sg-tailscale-relay.
3. SSH to tailscale-relay and install Tailscale.
4. Enable IP forwarding (per the docs here; see the sketch after these steps).
5. Run sudo tailscale up --advertise-routes=10.0.0.0/24, where 10.0.0.0/24 is the range specified in the private subnet (and equivalently in the public subnet; see the photo at the bottom).
6. Disable key expiry and authorize subnet routes for this node in the Tailscale console.
7. Close off SSH access to tailscale-relay in its security group, then verify that I can SSH to it via its Tailscale IP (annoyingly, still requiring my .pem key).
8. Create another instance, test-tailscale, and assign it to the same VPC but to the private subnet. Do NOT give it a public IP. Allow all inbound traffic from the sg-tailscale-relay security group, but not from anywhere else.
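Enabling IP forwarding in step 4 is typically two sysctl settings. A sketch following Tailscale's Linux docs (the file name is just a convention):

echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf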
Then, from my local machine, SSHing to the private IP of test-tailscale times out.
I can ping test-tailscale from tailscale-relay (but not tailscale ping, obviously)
What gives? I don't understand what I'm doing wrong.
Bonus: Can I ssh without the private key?
[Photo: private subnet route table]

One possibility lies with the non-AWS Tailscale node you're using to send the ping, if it is a Linux system. Linux was the first client developed, and the one most often used as a subnet router itself.
All of the other clients accept subnet routes by default, but Linux by default does not, and needs tailscale up --accept-routes=true to be specified.
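A minimal sketch, assuming your local machine is that Linux client (on Linux, Tailscale installs its routes in routing table 52):

sudo tailscale up --accept-routes=true
# verify the advertised 10.0.0.0/24 route is now installed:
ip route show table 52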

Related

Access MongoDB via external IP

I have a replica set which is configured using private IPs, and we are able to access it inside the VPC. All of that works fine.
But when I try to access it via the public IPs of the replica set, like node1_ip,node2_ip,node3_ip/?replicaSet=dev-mongo-cluster, it does not work. There are no network-level issues (the port is opened to our IP address).
But if I try to access a single node using its public IP, without mentioning the replica set, then it works.
bindIp is set to 0.0.0.0.
Any idea how to resolve this?
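A sketch of the two connection forms with placeholder public IPs: with ?replicaSet= the driver rediscovers the members from the replica set configuration, which in this setup advertises the private IPs, so the rediscovered addresses are unreachable from outside the VPC:

# works: one node, no topology discovery
mongosh "mongodb://203.0.113.10:27017/?directConnection=true"
# fails externally: discovery returns the members' private addresses
mongosh "mongodb://203.0.113.10:27017,203.0.113.11:27017,203.0.113.12:27017/?replicaSet=dev-mongo-cluster"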

Cannot ping PostgreSQL in private subnet from a VM in public subnet

I have a private subnet in VNet 1 with a Network Security Group that only allows inbound traffic from a specific private IP CIDR. This subnet hosts an Azure Database for PostgreSQL instance with a server name.
I also have a public subnet in a different VNet (VNet2) that hosts a standard VM.
I have set up VNet peering to connect the two, and their address spaces don't overlap. I have also whitelisted the private IP of the VM (in VNet2) in the NSG of the private subnet in VNet 1, but I cannot ping the PostgreSQL DB from my VM. It says:
ping: mydb-dev.postgres.database.azure.com: Name or service not known
Both VNets are in the same subscription and the same region.
Things to note:
You can't ping directly with the server name (ping mydb-dev.postgres.database.azure.com) because this domain/server name is not registered in any DNS zone the VM can see. If you want to reach it by server name, the name must be resolvable from the VM's VNet, e.g. via a DNS record in a private DNS zone linked to that VNet.
Another, more important thing: you need to open the default PostgreSQL port (5432) in the NSGs of both VNets, at both the inbound and outbound level.
You are able to ping when you deploy both resources in one VNet because, by default, there is no security boundary between subnets of a VNet, so VMs in each of these subnets can talk to one another.
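As a quick sketch from the VM in VNet2, test name resolution and TCP reachability on port 5432 directly; that tells you more than ICMP here (hostname taken from the question):

nslookup mydb-dev.postgres.database.azure.com
nc -zv mydb-dev.postgres.database.azure.com 5432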

Why can't App Engine connect to Compute Engine VM instance?

I have a VM instance (e2-micro) on GCP running Postgres. I added my own external IP address to pg_hba.conf so I can connect to the database from my local machine. Alongside that, I have a Node.js application which I want to connect to that database. Locally that works: the application can connect to the database on the VM instance. But when I deploy the app to GCP, I get a 500 Server Error when I try to visit the page in the browser.
These are the things I already did/tried:
Created a firewall rule to allow connections from my own external IP address
Created a VPC connector and added that connector to my app.yaml
Made sure everything is in the same project and region (europe-west1)
If I allow all IP addresses on my VM instance with 0.0.0.0/0, then App Engine can connect, so my guess is that I'm doing something wrong with the connector? I use 10.8.0.0/28 as the IP range, while the internal IP address of the VM instance is 10.132.0.2; is that an issue? I also tried an IP range starting with 10.0.0.0, but that didn't work either.
First, check that your connector uses a /28 IP address range (see the documentation):
When you create a connector, you also assign it an IP range. Traffic
sent through the connector into your VPC network will originate from
an address in this range. The IP range must be a CIDR /28 range that
is not already reserved in your VPC network.
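As a sketch (the connector name and network are placeholders), creating a connector with an unreserved /28 range looks like:

gcloud compute networks vpc-access connectors create my-connector \
    --region=europe-west1 \
    --network=default \
    --range=10.8.0.0/28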
When you create a VPC connector, a proper firewall rule is also created to allow traffic:
An implicit firewall rule with priority 1000 is created on your VPC
network to allow ingress from the connector's IP range to all
destinations in the network.
As you wrote yourself, when you create a rule that allows traffic from any IP, it works (your app can connect). So look for the rule that allows traffic from the IP range your app is in; if it's not there, create it.
Alternatively, you can connect your app to your DB over public IPs; in that case you also have to create a proper rule that allows the traffic from the app to the DB.
Second, check the IP of the DB that the app uses.
My guess is that you didn't change the DB IP that the app uses, so it tries to connect not via the VPC connector but via the external IP, and that's why it cannot (and works only when you create a firewall rule).
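A sketch of such an ingress rule using the ranges from the question (the rule name is a placeholder):

gcloud compute firewall-rules create allow-connector-to-db \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:5432 \
    --source-ranges=10.8.0.0/28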
This answer pointed me in the right direction: https://stackoverflow.com/a/64161504/3323605.
I needed to deploy my app with
gcloud beta app deploy
since the VPC connector method was still in beta. Also, I had tried to connect to the external IP in my app.yaml, but that needed to be the internal IP, of course.
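A sketch of the relevant app.yaml pieces (PROJECT_ID and the connector name are placeholders; 10.132.0.2 is the VM's internal IP from the question):

vpc_access_connector:
  name: projects/PROJECT_ID/locations/europe-west1/connectors/my-connector
env_variables:
  DB_HOST: "10.132.0.2"  # the VM's internal IP, not its external IP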

AWS VPC: private subnet access from public subnet

I created a public subnet and a private subnet, and associated an Internet Gateway with the public subnet. Now, the webserver was provisioned in the private subnet, but how do we access any content from the private subnet, and why was the server installed there in the lecture? Also, say for example I install a MySQL DB in the private subnet: how do I access the DB from outside? In other words, how do I access the webserver/DB running in the private subnet from the public subnet or over HTTP?
If you want to access your private subnet from outside of the VPC, you need to add a bastion host in the public subnet. The bastion host should have a security group which only allows connections from the IP of your personal machine (if this is where you're accessing from), and the security group of the instance in the private subnet should allow traffic from the bastion host's security group. (The private subnet NACL allows all by default.)
If you're trying to access the private subnet from within the VPC, then you don't need to configure anything by default, as the private subnet NACL by default allows all local traffic. (Security groups by default deny all inbound traffic, so ensure the DB instance's security group allows traffic from the public subnet, ideally limited to the specific protocol.)
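A sketch of connecting through the bastion with SSH's ProxyJump option (host names, key, and the private IP are placeholders):

ssh -i KeyName.pem -J ec2-user@bastion-public-dns.compute-1.amazonaws.com ec2-user@10.0.1.25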
You can access your private subnet over SSH or RDP by using a bastion host, which you have to install in your public subnet. But you have to configure your security groups and your NACLs properly.
For internet access from your private subnet (for example, for your DB), you have to install a NAT Gateway in your public subnet.
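A sketch with the AWS CLI (all IDs are placeholders): create the NAT Gateway in the public subnet, then point the private route table's default route at it:

aws ec2 create-nat-gateway --subnet-id subnet-public1 --allocation-id eipalloc-12345
aws ec2 create-route --route-table-id rtb-private1 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0abc123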
For more information, this is an interesting link for you:
https://cloudacademy.com/blog/aws-bastion-host-nat-instances-vpc-peering-security/
Hope it helps.

Access Private RDS DB From Another VPC

I'm trying to access a private RDS Instance from a different VPC using a Peering Connection. I have two VPCs:
VPC-K8S (172.20.0.0/16) with one public subnet
VPC-RDS (172.17.0.0/16) with one public subnet (172.17.0.0/24) and 3 private subnets (172.17.{1,2,3}.0/24)
VPC-RDS has 2 security groups (not actual names):
default, which accepts SSH from my IP
db, which accepts TCP over port 5432 from the default security group.
I deploy my DB instances in VPC-RDS after creating a DB Subnet Group in the private subnets, and configure it to not be publicly accessible. To access it from my workstation, I create a small instance in the public subnet of VPC-RDS with the default security group, and create an SSH tunnel:
ssh -L 5432:rds-host-name.us-east-1.rds.amazonaws.com:5432 -i "KeyName.pem" ec2-user@ec2-host-name.compute-1.amazonaws.com
I can access the RDS from my workstation via localhost.
I want to be able to access my RDS instance from my Kubernetes cluster (VPC-K8S). I set up a peering connection between the two and configured the route tables appropriately (in VPC-K8S: 172.17.0.0/16 -> pcx-112233; in VPC-RDS: 172.20.0.0/16 -> pcx-112233).
I cannot connect to RDS from one of my K8S nodes, or from any instance in the K8S VPC. I suspected that it had something to do with the db security group, but even when I opened port 5432 to all IPs (0.0.0.0/0), it didn't help.
Any ideas how to do this, or is this only possible via a publicly accessible RDS instance or a Bastion host that is in VPC-RDS and the default SG?
Stupid oversight, but I'll leave this up if it helps anyone.
My private subnets in VPC-RDS use a different route table than the public subnet. This is done so that internet-bound traffic (the catch-all route 0.0.0.0/0) goes to the NAT gateway, as opposed to the internet gateway used by the public subnet.
I added a rule to the private subnets' route table for the peering connection (172.20.0.0/16 -> pcx-112233), and then configured the db security group to accept TCP traffic on port 5432 from 172.20.0.0/16.
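A sketch of that fix with the AWS CLI (route table and security group IDs are placeholders; the CIDR, port, and peering ID come from the question):

aws ec2 create-route --route-table-id rtb-private --destination-cidr-block 172.20.0.0/16 --vpc-peering-connection-id pcx-112233
aws ec2 authorize-security-group-ingress --group-id sg-db --protocol tcp --port 5432 --cidr 172.20.0.0/16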