Google Cloud SQL always on

I have a Google Cloud SQL instance with the following settings:
pricingPlan: PER_USE
activationPolicy: ON_DEMAND
I have added an IPv4 address.
I have verified via the Google Cloud SQL API that the settings were saved correctly.
Problem: I do not have any active queries, but the instance never stops, so I am charged 24 hours a day.
I'm sure the connections don't come from me. I've deleted all authorized networks and rebooted the instance, but I still always have 1 active connection.
Has anyone else had the same problem?
Many thanks,
Loic

As Juan Munoz says, there is always 1 active connection while your instance is running, but that won't make your instance keep running.
If you are being charged continuously even though you have set activationPolicy=ON_DEMAND and have no authorized networks, you might want to check whether you have an authorized App Engine app and whether it is connecting to your instance.
Also, your instance will be constantly active, regardless of activationPolicy, if it is a replication master. Because the slave keeps a connection open, the master is never able to shut down. I doubt this is occurring here, as I imagine the slave's connection would have appeared on your active connections graph.
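If you want to see what is actually holding that last connection open, one quick check (a sketch, assuming a MySQL instance, which is what these First Generation settings apply to) is to ask the server itself:

-- Lists every open connection with its User and Host, so you can tell whether
-- the lingering connection comes from App Engine, a replica, or somewhere else.
SHOW PROCESSLIST;

-- On a replication master, lists the replicas currently connected
-- (replicas only show up here if they were started with report_host set).
SHOW SLAVE HOSTS;

The Host column of SHOW PROCESSLIST is usually enough to distinguish an App Engine connection from one arriving over an authorized-network IP.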

Related

Suddenly my Phoenix project can't connect to postgres if my VPN is on — how to fix?

I've never had this problem before, but suddenly, as of this morning, if I try to fire up my Phoenix app while my VPN is on, I get a bunch of eaddrnotavail errors from Postgres. If I try to start my app with the VPN off, it works fine, and it continues to work fine even if I then turn the VPN on, but if I try to start it with the VPN already running, I get eaddrnotavail errors every time.
Anyone have any idea why this is happening or how to fix it?
I got a response from ProtonVPN on this. Apparently they're working on a technical solution, but this is the main issue:
outgoing connections to some database-related ports are currently being blocked on most of our servers for anti-abuse reasons
Normally, any user connected to the same ProtonVPN server would have the same authorization to access the database you are willing to connect to unless there are additional security measures in place, so this is not recommended and insecure. Even if you whitelist some ProtonVPN IP addresses with your firewall, that is still not enough because any user would still be able to reach your database through the very same ProtonVPN IP address.
we are working on a solution to provide dedicated IPs

How to find connection leaks on PostgreSQL cloud sql

I'm using Postgres provisioned by Google Cloud SQL.
Recently we have seen the number of connections increase by a lot. I had to raise the limit from 200 to 500, then to 1000. In the Google Cloud console, Postgres reports 800 current connections.
However, I have no idea where these connections come from. We have one App Engine service with not a lot of traffic at the moment, another application hosted on Kubernetes, and a dozen or so batch jobs that connect to it. Clearly there must be a connection leak somewhere.
Is there any way I can see where these connections originate?
All applications connecting to it are Java-based at the moment.
They use the HikariCP connection pool. I'm considering changing the "test query" run on each connection to insert a record into a log table, so I could perhaps find out where the connections originate.
But are there better ways available?
Thanks,
Consider monitoring connection activity with pg_stat_activity, e.g.: SELECT * FROM pg_stat_activity;
As per the documentation:
Connections that show an IP address, such as 1.2.3.4, are connecting using IP. Connections with cloudsqlproxy~1.2.3.4 are using the Cloud SQL Proxy, or else they originated from App Engine. Connections from localhost are usually to a First Generation instance from App Engine, although that path is also used by some internal Cloud SQL processes.
Also, take a look at the best practices for managing database connections, which cover opening and closing connections, connection count, and how to set a connection duration in the Java programming language.
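For finding where those 800 connections come from, a grouping query along these lines can help (a sketch; the columns are standard pg_stat_activity columns):

-- Count open connections per source address, user, application and state.
SELECT client_addr,               -- NULL for local / Unix-socket connections
       usename,
       application_name,
       state,                     -- 'active', 'idle', 'idle in transaction', ...
       count(*) AS connections
FROM pg_stat_activity
GROUP BY client_addr, usename, application_name, state
ORDER BY connections DESC;

If each application sets a distinct application_name in its JDBC URL (the PostgreSQL JDBC driver accepts an ApplicationName connection parameter), the pool that is leaking usually stands out immediately, with no need for a log-table workaround.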

500 Error: Failed to establish a backside connection on bluemix java liberty app

I deployed my Java web application in a Bluemix Dedicated environment and use it with a Cloudant Dedicated NoSQL DB. From this DB I tried to return 60k documents, and the server returned
500 Error: Failed to establish a backside connection
to me. So I'm wondering about the connection timeout in Bluemix; there are posts where people claim that Bluemix resets a network connection after 120 seconds if no response is received. Is it possible to change this setting, or does someone know how to solve this kind of problem?
P.S. When I deploy it on my computer it works fine, but of course it takes some time. This particular case could be solved using Cloudant pagination, but I am developing a service for scheduling REST calls, and if Bluemix resets all connections after 2 minutes I will have big problems with it.
Not sure which Bluemix Dedicated environment you are using, but the timeout is typically global. Paging would work, and I think a websocket-based approach would work as well.
-r

How do you know if PGBouncer is working?

I've set up PGBouncer and configured it to connect to my Postgres DB, and it all connects just fine; however, I'm not sure if it's actually working.
I have a PHP script that runs as a daemon and picks up beanstalk jobs. The problem is that for every distinct user/action on the system it opens a new connection to Postgres and then leaves that connection idling; because the daemon doesn't actually stop running, the connection is never terminated (a quick fix for this was to reset the connection at the end of the script loop, but that is going to be inefficient with many connects).
Anyway, this eventually caused Postgres to run out of connections and lock up...
So PGBouncer seems like the answer.
But now when I run it, I see the same DB connection multiple times when I do ps ax | grep postgres.
Isn't PGBouncer supposed to only ever have 1 connection open to the DB and route all traffic through that connection? Then open a new connection if that one is full?
At present I have 3 for one database (my access control system) and 2 for the other database (my client-specific data).
To me, it feels like if I roll out these changes then I will just be faced with the same problem that the connections will just get eaten up again because they're not being released.
I hope that explains enough for someone to offer any advice.
It sounds like an important step here is to fix the scripts so they release the connections when they're done. Once you do that, PgBouncer will help reduce the connection setup/teardown overhead, but in session pooling mode it won't give you the ability to maintain more connections to Pg than you otherwise could.
However, you can also use PgBouncer in transaction-pooling mode. When used for transaction pooling, PgBouncer keeps a pool of idle backend connections and only assigns one when a client does a BEGIN. Connections are returned to the pool after the client does a COMMIT or ROLLBACK. That means that in transaction pooling mode you can have large numbers of open connections to PgBouncer; they don't each need a corresponding connection to a PostgreSQL backend, so long as most of them are idle at any point in time and don't have a transaction open.
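To see whether that is actually happening, you can query PgBouncer's built-in admin console (a sketch; the port and admin user depend on your pgbouncer.ini, 6432 being the usual default port):

-- Connect to the virtual "pgbouncer" database, e.g.  psql -p 6432 -U <admin_user> pgbouncer
SHOW POOLS;    -- cl_active / cl_waiting = client connections; sv_active / sv_idle = connections to Postgres
SHOW CLIENTS;  -- every client connection PgBouncer has accepted
SHOW SERVERS;  -- every connection PgBouncer is holding open to PostgreSQL

If cl_active is much larger than sv_active plus sv_idle, pooling is working; if the two track each other one to one, every client still owns its own backend, which matches the extra processes you see in ps.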
The only real downside of transaction pooling mode is that it breaks applications that expect to SET session-level variables, keep prepared statements across transactions, etc. Applications may need modification, e.g. to use SET LOCAL.
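For example, a setting that should only apply to one unit of work can be scoped with SET LOCAL, which is safe under transaction pooling (a sketch; the table is made up):

BEGIN;
-- SET LOCAL lasts only until COMMIT/ROLLBACK, so it cannot leak to whichever
-- client gets this backend connection for its next transaction.
SET LOCAL statement_timeout = '5s';
UPDATE accounts SET balance = balance - 100 WHERE id = 42;   -- hypothetical table
COMMIT;

A plain SET in the same spot would stick to the backend connection and surprise the next client that is handed it, which is exactly the kind of breakage transaction pooling introduces.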

Session getting disconnected in the middle of working

Sessions are getting disconnected automatically in the middle of work.
Disconnections happen while users are working over a telnet connection to the Linux server via the PuTTY telnet application.
At the time of the disconnection, network bandwidth utilization is high, and there is no limit on the total number of users on the network.
Error "Hangup signal received (562)"
Any idea about this?
The network connection was interrupted or a hangup signal was sent via "kill".
You mention network utilization being "high" when disconnects happen. How do you know that? What measurement are you looking at that tells you it is "high"? That might be a symptom of a networking issue that is at the root of the problem.
There are a few directions:
OpenEdge has published this article with links to implementing keep-alive packets:
https://knowledgebase.progress.com/articles/Article/Telnet-connection-times-out-after-15-minutes
Increase the number of "instances" in xinetd.conf, and then restart the service.
Make sure that the database watchdog is up and running: https://documentation.progress.com/output/ua/OpenEdge_latest/index.html#page/dmadm/prowdog-command.html
Check the database log file, to find out what happened just before the hangup (https://documentation.progress.com/output/ua/OpenEdge_latest/index.html#page/gsins/openedge-database-log-file.html)