gcloud compute: configure firewall for external traffic - sockets

I am attempting to configure my Google Cloud instance to allow external traffic so I can set up a WebSocket; however, despite adding a rule for all external TCP and UDP traffic, I can't access it. My rules are:
gcloud compute firewall-rules list
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
default-allow-ssh default 0.0.0.0/0 tcp:22
external-traffic default 0.0.0.0/0 tcp,udp
gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
pi-server us-central1-a n1-standard-1 **.***.*.* **.***.***.*** RUNNING
I have configured this as a static IP (this is displayed in my cloud dashboard):
Name External Address Region Type In use by
crypto-iris-****** **.***.***.*** us-central1 Static VM instance my_instance_name (Zone a)
I also have some Go client/server web socket code that works perfectly on my computer using localhost:8080 as address. So, my question is: can I simply replace localhost with the external static IP of my instance under these rules?
My client makes use of "github.com/gorilla/websocket" on port 8080. Output of client locally is:
connecting to ws://23.251.148.133:8080/echo
dial:dial tcp 23.251.148.133:8080: getsockopt: operation timed out
exit status 1
I can post the code on request, if anyone wants to see it.

Problem: my Go/gorilla server was listening on localhost:8080. I changed it to 0.0.0.0:8080. Smooth sailing after that.
See the following post about this; basically, the server was listening on the local loopback address (available only to the local machine) instead of on all interfaces, so it was unreachable from the outside world.
https://serverfault.com/questions/78048/whats-the-difference-between-ip-address-0-0-0-0-and-127-0-0-1
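Since the code was never posted, here is a minimal sketch of the change, based on the standard gorilla/websocket echo example; only the ListenAndServe address is the actual fix described above, the handler itself is illustrative.

package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{} // default options

// echo upgrades the HTTP connection to a WebSocket and echoes messages back.
func echo(w http.ResponseWriter, r *http.Request) {
	c, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Print("upgrade:", err)
		return
	}
	defer c.Close()
	for {
		mt, message, err := c.ReadMessage()
		if err != nil {
			log.Println("read:", err)
			break
		}
		if err := c.WriteMessage(mt, message); err != nil {
			log.Println("write:", err)
			break
		}
	}
}

func main() {
	http.HandleFunc("/echo", echo)
	// log.Fatal(http.ListenAndServe("localhost:8080", nil)) // loopback only: unreachable from outside
	log.Fatal(http.ListenAndServe("0.0.0.0:8080", nil)) // all interfaces: reachable via the external IP
}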

Related

Unable to ping or view Nginx IP on IBM Cloud Virtual Server, but can view locally

I have an IBM Cloud virtual server instance with a bound IP address, let's say 150.2.3.4
I can get the page using curl 127.0.0.1:80 on the server itself
I can SSH into the server with ssh root@150.2.3.4
I cannot ping 150.2.3.4 from a public location
I cannot reach http://150.2.3.4:80 either
The range of internal IPs (10.240.x.x/18 in the VPC, a subrange in the subnet, same region etc.) all seem to be correct.
I do have the IP address bound to that server under "Floating IPs for VPC" on eth0. Does anyone know what else is necessary to make the IP address publicly available?
FWIW I get this for the firewall:
root@my-pipe-01:~# ufw app list
Available applications:
Nginx Full
Nginx HTTP
Nginx HTTPS
OpenSSH
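That output only lists the application profiles ufw knows about, not whether they are actually allowed. A quick check would look something like the following (a hedged diagnostic sketch, not taken from the original post); note that ufw only covers rules on the host itself:

ufw status verbose           # is the firewall active, and which rules are in effect?
ufw allow 'Nginx Full'       # allow ports 80 and 443 if the firewall is active and they are not yet open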

Why can't App Engine connect to Compute Engine VM instance?

I have a VM instance (e2-micro) on GCP running Postgres. I added my own external IP address to pg_hba.conf so I can connect to the database from my local machine. Besides that, I have a Node.js application which I want to connect to that database. Locally that works: the application can connect to the database on the VM instance. But when I deploy the app to GCP, I get a 500 Server Error when I try to visit the page in the browser.
These are the things I already did/tried:
Created a firewall rule to allow connections from my own external IP address
Created a VPC connector and added that connector to my app.yaml
Made sure everything is in the same project and region (europe-west1)
If I allow all IP addresses on my VM instance with 0.0.0.0/0, then App Engine can connect, so my guess is that I'm doing something wrong with the connector? I use 10.8.0.0/28 as the IP range, while the internal IP address of the VM instance is 10.132.0.2; is that an issue? I tried an IP range with 10.0.0.0 but that also didn't work.
First, check whether your connector uses a /28 IP address range (see the documentation):
When you create a connector, you also assign it an IP range. Traffic sent through the connector into your VPC network will originate from an address in this range. The IP range must be a CIDR /28 range that is not already reserved in your VPC network.
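For reference, creating a connector with such a range looks roughly like this (a sketch; the connector name is a placeholder, the region and range are the ones mentioned in the question):

gcloud compute networks vpc-access connectors create my-connector \
    --region europe-west1 \
    --network default \
    --range 10.8.0.0/28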
When you create a VPC connector, a proper firewall rule is also created to allow traffic:
An implicit firewall rule with priority 1000 is created on your VPC network to allow ingress from the connector's IP range to all destinations in the network.
As you wrote yourself, when you create a rule that allows traffic from any IP, it works (your app can connect). So look for the rule that allows traffic from the IP range your connector uses; if it's not there, create it.
Alternatively, you can connect your app to your DB over public IPs; in that case you also have to create a proper rule that allows the traffic from the app to the DB.
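A sketch of such an ingress rule for the connector range used in this question (the rule name is a placeholder):

gcloud compute firewall-rules create allow-connector-to-postgres \
    --network default \
    --direction INGRESS \
    --allow tcp:5432 \
    --source-ranges 10.8.0.0/28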
Second, check the IP of the DB that the app uses.
My guess is that you didn't change the DB IP that the app uses, so it tries to connect not via the VPC connector but via the external IP, and that's why it cannot connect (and works only when you create a permissive firewall rule).
This answer pointed me in the right direction: https://stackoverflow.com/a/64161504/3323605.
I needed to deploy my app with
gcloud beta app deploy
since the VPC connector method was in beta. Also, I tried to connect to the external IP in my app.yaml, but that needed to be the internal IP, of course.

Connecting Google Cloud Platform's compute engine and app engine via VPC connector

I'd like to know in detail how to connect a Google Compute Engine virtual machine instance and App Engine.
I've set up a virtual machine instance on Google compute engine, and my Postgres server is running there, following this tutorial: https://cloud.google.com/community/tutorials/setting-up-postgres
I've deployed my flask app under the same project on Google Cloud Platform, creating an app engine instance.
I searched for how to connect Compute Engine and App Engine together, and it seems it should be possible through a VPC connector: connect Google App Engine and Google Compute Engine
This is what my VPC connector looks like:
Serverless VPC access
Name Network Region IP address range Min. throughput (Mbps) Max. throughput (Mbps)
connector-name default europe-west2 10.8.0.0/28 200 300
On my compute engine, I have my VM instance like so:
Name Zone Internal IP External IP
some-name europe-west2-c 10.154.0.2 (nic0) 34.89.113.193
On my flask app, I'm trying to connect to my remote DB like so:
from playhouse.postgres_ext import PostgresqlExtDatabase  # peewee's Postgres extension

db = PostgresqlExtDatabase(
    "some-name",  # database name
    user="postgres",
    password="some-password",
    host="10.154.0.2",  # remote host's internal IP
    port=5432,
)
db.connect()
This is my app.yaml for the vpc access part, I've followed this reference: https://cloud.google.com/appengine/docs/standard/python/connecting-vpc#configuring
vpc_access_connector:
  name: projects/some-name/locations/europe-west2/connectors/connector-name
If I understood correctly, with the VPC connector present I should just be able to connect using the internal IP address of my VM instance (in this case, 10.154.0.2)?
The problem is that when the app is deployed to production, it is still complaining that it cannot connect:
2020-09-26 12:54:51 default[20200926t134815] Is the server running on host "10.154.0.2" and accepting
2020-09-26 12:54:51 default[20200926t134815] TCP/IP connections on port 5432?
If it's connecting internally, I assume I don't have to add that internal IP to the firewall rules, although I did try that as well. As for firewall rules, I have allowed my local machine's IP address so I can connect to the remote Postgres server via pgAdmin.
I've actually tried the external IP (34.89.113.193) as well, although that doesn't make sense to me.
I'm a bit of a noob on networks and backend stuff in general, any help would be much appreciated.
UPDATED 1
These are my firewall rules:
Direction: Ingress, Egress
Action on match: Allow
Source filters: IP ranges 92.40.176.9/32, 78.146.103.141/32, 10.154.0.2
Protocols and ports: tcp:5432
Image for reference: Screenshot for the list of firewall rules
It turns out the firewall/Postgres configurations were all OK, but because this VPC connector method was in beta, I needed to run:
gcloud beta app deploy
instead of the usual
gcloud app deploy.
This command then updated the gcloud beta commands component and prompted me to enable the API:
API [appengine.googleapis.com] not enabled on project [742932836941]. Would you like to enable and retry (this will take a few minutes)? (y/N)?
After enabling this everything worked fine.
Per the information provided, it seems like both the VPC firewall rules and the connector are well configured.
However, based on the messages
2020-09-26 12:54:51 default[20200926t134815] Is the server running on host "10.154.0.2" and accepting
2020-09-26 12:54:51 default[20200926t134815] TCP/IP connections on port 5432?
it seems like the VM or server using 10.154.0.2 is not accepting requests on port 5432, or the port has not been opened; you can use this site to do a port scan.
Based on the guide you followed to set up PostgreSQL, you are using Ubuntu as the OS, so I suggest you open the port in Ubuntu and see if the issue persists.
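For reference, a rough sketch of what opening the port on Ubuntu typically involves (assuming ufw and a default PostgreSQL install; this is not taken from the poster's setup):

sudo ufw allow 5432/tcp                        # open the port in Ubuntu's firewall
# in /etc/postgresql/<version>/main/postgresql.conf:
listen_addresses = '*'                         # listen on all interfaces, not only localhost
# in pg_hba.conf, allow the connector's range, for example:
host    all    all    10.8.0.0/28    md5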

Is the external-ip of coturn only used for AWS?

https://github.com/coturn/coturn/blob/c4477bfddd2cd51de1ad37032ca88330f3c44ed6/docker/coturn/turnserver.conf#L100
In turnserver.conf, I see the words "For Amazon EC2 users". Is external-ip only used for AWS?
I run the STUN server in a k8s cluster and expose it to the public network with a NodePort service, but the srflx candidate returned by STUN is a gateway address, not the external-ip which I set. My k8s cluster runs on Alibaba Cloud.
I hope someone can help me solve this problem, thank you!!!
AWS EC2 instances, for the most part, run behind a NAT. Even if you've assigned a public IP address (e.g. 1.2.3.4) via the AWS Console, the instance only knows about the private network it's on and is unaware of the public IP address assigned to it. That is, the instance thinks its IP address is 172.31.5.6 because that's what the operating system discovered at boot time. Port forwarding enables certain TCP and UDP ports to be forwarded from the public IP address to the private IP address that the EC2 instance is running on.
This typically isn't a problem for most services run on an AWS EC2 instance. With STUN running in full "2 IP address and 2 port" mode, the server needs to advertise its alternate IP address back to the client, should the client want to conduct NAT behavior and filtering tests. But it would be incorrect for the STUN server to send back 172.31.5.7 as its alternate IP; the client has no way of reaching that IP since it's private.
Similarly for TURN, when port allocations occur, the server needs to send back the public IP address of the EC2 instance to the client who allocated it. It would be bad if the client requested a TURN port to share with another peer, only for the TURN server to send back 172.31.5.6.
Hence, for a STUN or TURN server to be hosted behind a NAT, a set of command line parameters or configuration parameters are needed to tell the server what its "real" IP addresses are. The STUN/TURN software will use these IP addresses for sending responses back to clients.
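In coturn, that is what the external-ip option is for; a sketch using the example addresses from above (turnserver.conf syntax, the addresses are placeholders):

external-ip=1.2.3.4
# or, if the private address should be stated explicitly as well:
external-ip=1.2.3.4/172.31.5.6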

Google Cloud Networking Issue

I have some services stood up on Google Container Engine and they are hooked up to external IPs.
When I try to query one of these external IPs from within one of my services, I get an error like
dial tcp xx.xx.xx.xx:5429: getsockopt: connection refused
Using the exact same service, but running on my local machine, it can connect fine to the same IP and port.
Is there some sort of port opening that I need to do in the Google Networking dashboard or in my Kubernetes pod configuration to allow my pod to connect to this host?
It is a firewall issue. The service is trying to set up a connection on port 5429, which is surely being blocked by the firewall rules.
You can find the Firewall console in the dashboard > Networking > VPC network > Firewall rules.
You only need to allow the connection on the needed port in the network where your instance is, and it will work properly.
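A sketch of such a rule (the rule name is a placeholder, and the wide-open source range is only for illustration; narrow it to the ranges you actually need):

gcloud compute firewall-rules create allow-tcp-5429 \
    --network default \
    --allow tcp:5429 \
    --source-ranges 0.0.0.0/0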