Does google cloud SQL instance use static IP Address by default? - google-cloud-sql

When I created a SQL instance in Google Cloud SQL, it gave me an IP address. Will that IP address ever change, and if so, how do I make it static so that it never changes?

The Google Cloud SQL docs indicate that the IP address will remain static until the instance is deleted:
When you enable public IP for your instance, it is configured with a public, static IPv4 address.
The same is true for private IPs.
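If you want to double-check the address currently assigned, one way to do so (a sketch assuming the gcloud CLI is installed; my-instance is a placeholder name) is:
gcloud sql instances describe my-instance --format="value(ipAddresses)"
This prints the IP address entries attached to the instance; the public address shown there stays the same until you disable public IP or delete the instance.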

Related

How do I configure my Aurora SQL database to be accessible using pgAdmin and fix a timeout expired error?

Following this tutorial, I set up an Aurora PostgreSQL database. I then tried to access the database from my computer using pgAdmin. However, pgAdmin gives the error: "Unable to connect to server: timeout expired"
I have tried the following things:
Ensured that the database is set to be publicly accessible
Verified that the database has an IP address (I ran nslookup on my local machine, and it returned a public IP address).
Verified that the database is in a public subnet (it is launched in two subnets, one of which is a public subnet with an Elastic IP address and one of which is a private subnet which directs traffic to a NAT gateway)
Ensured that my database is configured to use port 5432
Modified the security group to add inbound rules allowing TCP traffic on port 5432 from any IPv4 or IPv6 address
Ensured that I can send outbound traffic on port 5432 from my computer using this site
It looks like you have deployed the Aurora DB cluster into two subnets of a VPC, and the problem is that one subnet is public while the other is private. I suspect the DB will be accessible publicly as long as the public DNS resolves to the DB instance in the public subnet, but unreachable when it resolves to the instance in the private subnet (though I have not verified this).
To correct this and make the cluster publicly accessible, deploy the DB into public subnets only; a sketch of how to set that up is below.
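For illustration only, one way to do that with the AWS CLI is to define a DB subnet group that contains just your public subnets and then create (or restore) the cluster into it. The group name and subnet IDs below are placeholders:
# All names and IDs are placeholders; adjust them to your VPC.
aws rds create-db-subnet-group \
    --db-subnet-group-name aurora-public-only \
    --db-subnet-group-description "Public subnets only" \
    --subnet-ids subnet-0aaa1111 subnet-0bbb2222
# Then create or restore the Aurora cluster into this subnet group, and make
# sure its instances are launched with --publicly-accessible.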

Why can't App Engine connect to Compute Engine VM instance?

I have a VM instance (e2-micro) on GCP running PostgreSQL. I added my own external IP address to pg_hba.conf so I can connect to the database from my local machine. Next to that I have a Node.js application which I want to connect to that database. Locally that works: the application can connect to the database on the VM instance. But when I deploy the app to GCP, I get a 500 Server Error when I try to visit the page in the browser.
These are the things I already did/tried:
Created a Firewall rule to allow connections on my own external ip address
Created a VPC connector and added that connector to my app.yaml
Made sure everything is in the same project and region (europe-west1)
If I allow all IP addresses on my VM instance with 0.0.0.0/0 then App Engine can connect, so my guess is that I'm doing something wrong with the connector? I use 10.8.0.0/28 as the IP range, while the internal IP address of the VM instance is 10.132.0.2; is that an issue? I tried an IP range with 10.0.0.0 but that also didn't work.
First check if your app uses a /28 IP address range (see the documentation):
When you create a connector, you also assign it an IP range. Traffic sent through the connector into your VPC network will originate from an address in this range. The IP range must be a CIDR /28 range that is not already reserved in your VPC network.
When you create a VPC connector, a proper firewall rule is also created to allow traffic:
An implicit firewall rule with priority 1000 is created on your VPC network to allow ingress from the connector's IP range to all destinations in the network.
As you wrote yourself, when you create a rule that allows traffic from any IP it works (your app can connect). So look for a rule that allows traffic from the IP range your connector uses; if it's not there, create it.
Alternatively, you can connect your app to your DB over public IPs; in that case you also have to create a proper rule that allows the traffic from the app to the DB.
Second, check the IP of the DB that the app uses.
My guess is that you didn't change the DB IP the app uses, so it tries to connect via the external IP rather than through the VPC connector, which is why it fails (and works only when you create a permissive firewall rule).
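For illustration, a rule along these lines (a sketch assuming the default network and the 10.8.0.0/28 connector range from the question; the rule name is made up) would allow the connector's traffic to reach PostgreSQL on the VM:
gcloud compute firewall-rules create allow-vpc-connector-postgres \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:5432 \
    --source-ranges=10.8.0.0/28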
This answer pointed me in the right direction: https://stackoverflow.com/a/64161504/3323605.
I needed to deploy my app with
gcloud beta app deploy
since the VPC connector method was in beta. Also, I tried to connect to the external IP in my app.yaml, but that needed to be the internal IP, of course.
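For reference, this is roughly what the relevant part of app.yaml could look like. The project, region, connector name, and environment variable names are placeholders; the 10.132.0.2 internal IP comes from the question:
# app.yaml (sketch)
vpc_access_connector:
  name: projects/my-project/locations/europe-west1/connectors/my-connector

env_variables:
  DB_HOST: "10.132.0.2"  # the VM's internal IP, reached through the connector
  DB_PORT: "5432"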

Google Cloud SQL External IP not static?

I've got a micro instance of a PostgreSQL database on Google Cloud SQL, but the external IP seems to change occasionally. I've seen no documentation that says this is going to happen. It's only inconvenient while developing, but I need to understand how to make sure it won't happen when I want to go live with a larger instance. Any info appreciated.
I'm inferring that when you say "External IP" you are referring to the Public IP. The Public IP assigned to your instance will not change unless you disable the Public IP and enable it again. You can read the documentation to learn more; it says:
When you disable public IP for an instance, you release its IPv4 address. If you later reenable public IP for this instance, it will get a different IPv4 address, and all applications that use the public IP address to connect to this instance must be modified.
Keep an eye on your instance and double-check that you are not unintentionally disabling the public IP.
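One way to keep an eye on it from the command line (a sketch with a placeholder instance name) is to check whether public IP is still enabled and which address is attached:
gcloud sql instances describe my-instance \
    --format="value(settings.ipConfiguration.ipv4Enabled, ipAddresses)"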

How to access REST APIs hosted locally on Alexa

I am developing a custom Alexa skill and have a requirement where I want Alexa to access REST APIs that are hosted locally on http://localhost:8080. Any idea how to do this?
Thanks!
If you really want to do this, and I’m assuming you are hosting the skill on AWS Lambda, it would involve quite a bit of work.
Your local endpoints need to be accessible from outside of your network, which requires configuring port forwarding on your router to the machine where the endpoints are hosted.
An easier way is to deploy your project containing the API to something like Heroku, which can be done easily. They give you a domain and make the endpoints accessible to Lambda. This should be possible within their free tier.
Here's a link to a pretty good article about how IP addresses work.
Allowing a device sitting on your local network (e.g. a laptop computer or Raspberry Pi connected to your Wi-Fi) to be accessed from outside your local network (e.g. from a service running on AWS) will involve mapping two separate IP addresses:
The IP address assigned to your router (your public IP)
The private IP addresses assigned by your router to your devices (laptop, iPhone, RPi, etc).
You have a couple of options for allowing your router's IP (#1) to be accessible from outside your local network:
a. Pay your internet provider for a static IP address
b. Use a dynamic DNS service such as DuckDNS or No-IP.
Once you have a fixed public IP that can be used to access your router, you will then need to map a port on your router (#1) to the device IP on your local network (#2). This is usually referred to as "port forwarding". Most routers support configuring this. In effect, you tell your router "when you get a message to <public IP>:<port>, pass it to my laptop at <private IP>:<port>".
Your local private IP address will typically have a value like 192.168.0.23 (where the 23 can be anything from 1 to 254).
An outside (public) IP will fall outside private ranges such as 192.168.x.x; refer to the link above regarding IP ranges.
You can google "port forwarding" and "public IP" for more info on how IP addresses and port forwarding work, but hopefully this will help get you started. It may seem a bit complicated at first, but if I can understand it, then anyone can :-)
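As a quick sanity check once port forwarding (or a dynamic DNS name) is in place, try hitting the endpoint from outside your local network; the hostname, port, and path below are placeholders:
curl http://my-home.duckdns.org:8080/api/status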

Google Cloud SQL "Idling IP Address"

I have been looking at the console's billing as far as Cloud SQL and the VM instance are concerned. I see that a lot of the cost comes from the idling hours of the Cloud SQL IP address. I am unsure as to where the settings for this are, as I have tried to "unassign" the IP address that is associated with my Cloud SQL instance. Are these charges static, as in am I always, no matter what, going to be charged for the IP address, or is there a way to turn this off when I am not using the Cloud SQL instance? If so, how can I?
You will be charged $0.01 for every hour the instance is not active and has an IPv4 address assigned.
You can unassign the IP address using the Google Developers Console: go to your Cloud SQL instance, click Edit, and uncheck the 'Assign an IPv4 address to my Cloud SQL instance' box.
Click your SQL instance to go to Instance details > Connections > Public IP - uncheck it.
https://cloud.google.com/sql/docs/mysql/configure-ip
Note: When you disable public IP for an instance, you release its IPv4 address. If you later reenable public IP for this instance, it will get a different IPv4 address, and all applications that use the public IP address to connect to this instance must be modified.
Basically, if you remove the public connection to your instance, the IP is released too.
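If you prefer the command line over the console, a sketch of the equivalent with gcloud (placeholder instance name) is:
gcloud sql instances patch my-instance --no-assign-ip
The --no-assign-ip flag disables the instance's public IPv4 address.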
There are a lot of answers here, most of them partial (or outdated?), it seems. On top of that, Google's settings aren't very transparent either. I checked in early 2020 and it doesn't seem possible to switch off the IP address or avoid the charges on a stopped instance.
In more detail:
I was billed €6.55 for 600 hours of the SKU "IP address idling in seconds" for a DB
The Cloud SQL instance in question was turned off for (most or all of) the month of December until right now
Both Private IP and Public IP were deselected under [Google Cloud Project] > SQL > Connections
I then started the instance; no IP selected still. I let it run for some minutes and stopped it.
In the instances overview (only visible when switching from some other section like Logging back to [Google Cloud Project] > SQL), there was an IP listed under Public IP Address now
In [Google Cloud Project] > SQL > Connections, Public IP was selected (I didn't select anything there!)
Starting the instance now no longer lets me deselect both IP address options at the same time like I had before.
I do have a number of Authorized Networks configured under the Public IP option and used these in some earlier months. I cannot test whether removing all of these will let me disable the option right now, as I need them again real soon. So, that's an open question.
In summary, besides a glitch in the system where no IP address option is selected yet one is set up anyway, the charge seems to be unavoidable for a non-running instance. It's not possible to switch the IP off as @Tony Tseng suggested.
Why is that again, Google?
https://cloud.google.com/sql/docs/mysql/configure-ip
Click the instance name to open its Instance details page.
Select the Connections tab.
Deselect the Public IP checkbox.
Click Save to update the instance. ==> However, Save is disabled when the Public IP checkbox is unchecked. It looks like either the Private IP or the Public IP checkbox must be selected to enable the Save button.