I'm running FileMaker Server 13; it's running extremely slow because connections are not timing out.
I have [Guest] set up as read-only access (locally) using fmxml. How do I force the connection to time out?
You should create a new privilege set with fmxml access, then create a new user with that privilege set. XML users don't use concurrent connections, and there is no timeout for them.
I'm using Postgres provisioned by Google Cloud SQL.
Recently we have seen the number of connections increase by a lot. We had to raise the limit from 200 to 500, then to 1000. In the Google Cloud console, Postgres reports 800 current connections.
However, I have no idea where these connections come from. We have one App Engine service with not a lot of traffic accessing it at the moment, another application hosted on Kubernetes, and a dozen or so batch jobs that connect to it. Clearly there must be a connection leak somewhere.
Is there any way I can see where these connections originate?
All applications connecting to it are Java-based at the moment.
They use the HikariCP connection pool. I'm considering changing the "test query" run on each new connection to insert a record into a log table, so that I could perhaps find out where the connections originate.
But are there better ways available?
Thanks,
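To illustrate the idea, a minimal sketch, assuming HikariCP's connectionInitSql and a hypothetical connection_log table (the PostgreSQL driver's ApplicationName URL parameter would be an alternative that avoids the table entirely; PoolFactory, host, and credentials are placeholders):

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public class PoolFactory {
        public static HikariDataSource create(String serviceName) {
            HikariConfig config = new HikariConfig();
            // ApplicationName makes the caller visible in pg_stat_activity.application_name.
            config.setJdbcUrl("jdbc:postgresql://10.0.0.1:5432/mydb?ApplicationName=" + serviceName);
            config.setUsername("myuser");   // placeholder credentials
            config.setPassword("secret");
            // Runs once per new physical connection: record who opened it.
            // connection_log is a hypothetical table.
            config.setConnectionInitSql(
                "insert into connection_log(service, opened_at) values ('" + serviceName + "', now())");
            // Warn with a stack trace when a connection is held for over 60 s (leak hunting).
            config.setLeakDetectionThreshold(60_000);
            return new HikariDataSource(config);
        }
    }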
Consider monitoring connection activity with pg_stat_activity, e.g.: SELECT * FROM pg_stat_activity;.
As per the documentation:
Connections that show an IP address, such as 1.2.3.4, are connecting using IP. Connections with cloudsqlproxy~1.2.3.4 are using the Cloud SQL Proxy, or else they originated from App Engine. Connections from localhost are usually to a First Generation instance from App Engine, although that path is also used by some internal Cloud SQL processes.
Also, take a look at the best practices for managing database connections, which contain information on opening and closing connections, connection count, and how to set the connection duration in the Java programming language.
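For example, grouping pg_stat_activity rows by client address and application name shows where each connection comes from; a minimal JDBC sketch (host, database, and credentials are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class WhoIsConnected {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://10.0.0.1:5432/postgres", "myuser", "secret");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT client_addr, application_name, state, count(*) " +
                     "FROM pg_stat_activity " +
                     "GROUP BY client_addr, application_name, state " +
                     "ORDER BY count(*) DESC")) {
                while (rs.next()) {
                    // One line per source host / application / connection state.
                    System.out.printf("%s / %s / %s: %d%n",
                        rs.getString(1), rs.getString(2), rs.getString(3), rs.getLong(4));
                }
            }
        }
    }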
I connect to PostgreSQL via JDBC through pgbouncer with transaction pooling mode enabled. As far as I know, in this mode pgbouncer can share the same connection between several clients without breaking the session. So several clients may work within a single session sequentially, one after another. The question is: does pgbouncer take care of resetting session parameters when it detaches one client from a connection and attaches another client to it?
In particular, my app gets a connection and then issues something like this:
try (Statement st = connection.createStatement()) {
    st.execute("select set_config('myapp.user','fonar101',false)");
}
/* ...other actions... */
connection.commit();
After the commit, pgbouncer can detach my app from the connection and return it to the pool, right? So:
- if I issue more statements after the commit, they will probably be executed within another session, with incorrect values of the session parameters;
- pgbouncer can attach another client to that connection, and that client will proceed with, again, incorrect session settings.
How does pgbouncer take care of these things?
I'd say the opposite:
https://pgbouncer.github.io/config.html
transaction
Server is released back to pool after transaction finishes.
This means that when you use SET SESSION (the default for SET) instead of SET LOCAL, you change the setting for every later transaction that shares that pooled session...
According to the pgbouncer docs, transaction pooling mode does not support SET/RESET, or ON COMMIT DROP for temp tables.
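If you stay on transaction pooling, the transaction-scoped variants are safe: SET LOCAL, or set_config(..., true), is reset at COMMIT/ROLLBACK before pgbouncer hands the server connection to the next client. A minimal JDBC sketch (connection details are placeholders; the myapp.user setting follows the question):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class TxnLocalSettings {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://127.0.0.1:6432/mydb", "myuser", "secret")) {
                conn.setAutoCommit(false);
                try (Statement st = conn.createStatement()) {
                    // is_local = true scopes the setting to this transaction only,
                    // so it cannot leak to the next client of this pooled connection.
                    st.execute("select set_config('myapp.user', 'fonar101', true)");
                    // equivalently: st.execute("SET LOCAL myapp.user = 'fonar101'");
                    /* ...other actions that read current_setting('myapp.user')... */
                }
                conn.commit();   // the setting vanishes here, before the server
                                 // connection is released back to the pool
            }
        }
    }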
When I run a query that takes a long time (maybe 30 minutes) on my Postgres server, the connection breaks with an error. I've verified that the query is running with active status on the server using pgAdmin. I've also verified the correctness of the query, as it runs successfully on a smaller dataset. The server configuration is at its defaults; I haven't changed anything. Please help!
Look into the PostgreSQL server log.
Either you'll find a crash report there, which would explain the broken connection, or there is something in your network that cuts connections with no activity after a while.
Investigate your firewalls!
Maybe it is a solution to set the configuration parameter tcp_keepalives_idle to a value shorter than the time when the connection is cut. That will cause the server operating system to send keepalive messages on idle connections, which may be enough to prevent the overzealous connection reaper in your environment from disrupting your work.
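For example, tcp_keepalives_idle can be set per session, so an application that expects long-running queries can request earlier keepalives itself; a minimal JDBC sketch (host and credentials are placeholders, and the 120-second value is an assumption about the firewall's idle cutoff):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import java.util.Properties;

    public class KeepaliveDemo {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "myuser");       // placeholder credentials
            props.setProperty("password", "secret");
            // The PostgreSQL JDBC driver can also enable keepalive on the client socket:
            props.setProperty("tcpKeepAlive", "true");

            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://10.0.0.1:5432/mydb", props);
                 Statement st = conn.createStatement()) {
                // Ask the server OS to probe this connection after 120 s of idleness,
                // well below the assumed firewall cutoff (takes effect only where the
                // OS supports per-socket keepalive settings, e.g. Linux).
                st.execute("SET tcp_keepalives_idle = 120");
                st.execute("SET tcp_keepalives_interval = 30");
                /* ...run the long query here... */
            }
        }
    }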
Why do the MongoDB logs show too many open connections? They show more than the maximum connection limit and more than the number of current operations in the db.
Also, my primary refused to create more connections after reaching the limit of 819, even though the number of current operations in the db was less than 819 at the time. Raising the ulimit has solved my problem temporarily, but why were the idle connections not utilized to serve the requests?
I was having this exact same problem. My connection number just kept growing and growing until it hit 819 and then no more connections were allowed.
I'm using mongo-java-driver version 2.11.3. What has seemed to fix the problem is explicitly setting the connectionsPerHost and threadsAllowedToBlockForConnectionMultiplier properties of the MongoClient. Before I was not setting these values myself and accepting the defaults.
import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;

MongoClientOptions mco = new MongoClientOptions.Builder()
        .connectionsPerHost(100)
        .threadsAllowedToBlockForConnectionMultiplier(10)
        .build();
MongoClient client = new MongoClient(addresses, mco); // addresses is a pre-populated List of ServerAddress objects
In my application the MongoClient is defined as a static singleton.
I was watching the mongodb logs, and once the application hit 100 open connections I didn't see any more connections established from the client application. I am running a replica set, so you still see the internal connections being made which properly close.
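The static singleton can be as simple as the sketch below (class name and hosts are illustrative; in the 2.x driver the ServerAddress constructor declares UnknownHostException):

    import com.mongodb.MongoClient;
    import com.mongodb.MongoClientOptions;
    import com.mongodb.ServerAddress;
    import java.net.UnknownHostException;
    import java.util.Arrays;
    import java.util.List;

    public final class Mongo {
        // One client for the whole application; the driver pools connections internally.
        private static final MongoClient CLIENT = create();

        private Mongo() {}

        public static MongoClient get() {
            return CLIENT;
        }

        private static MongoClient create() {
            try {
                List<ServerAddress> addresses = Arrays.asList(
                        new ServerAddress("mongo1", 27017),   // placeholder replica-set hosts
                        new ServerAddress("mongo2", 27017));
                MongoClientOptions mco = new MongoClientOptions.Builder()
                        .connectionsPerHost(100)
                        .threadsAllowedToBlockForConnectionMultiplier(10)
                        .build();
                return new MongoClient(addresses, mco);
            } catch (UnknownHostException e) {
                throw new ExceptionInInitializerError(e);
            }
        }
    }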
From the MongoDB documentation:
"If you see a very large number connection and re-connection messages in your MongoDB log,then clients are frequently connecting and disconnecting to the MongoDB server. This is normal behavior for applications that do not use request pooling, such as CGI. Consider using FastCGI, an Apache Module, or some other kind of persistent application server to decrease the connection overhead.
If these connections do not impact your performance you can use the run-time quiet option or the command-line option --quiet to suppress these messages from the log."
http://docs.mongodb.org/manual/faq/developers/#why-does-mongodb-log-so-many-connection-accepted-events
Make sure you are using the latest version of the MongoDB driver.
I've set up PGBouncer and configured it to connect to my Postgres DB, and it all connects just fine; however, I'm not sure if it's actually working.
I have a PHP script that runs as a daemon and picks up beanstalk jobs. The problem is that for every distinct user/action on the system it opens a new connection to Postgres and then leaves that connection idling, because the daemon never stops running, so the connection is never terminated. (A quick fix was to reset the connection at the end of the script loop, but that is going to be inefficient with many connects.)
Anyway, this caused Postgres to eventually run out of connections and lock up...
So PGBouncer seems like the answer.
But now when I run it, I see the same db connection multiple times when i do ps ax | grep postgres.
Isn't PGBouncer supposed to only ever have 1 connection open to the DB and route all traffic through that connection? Then open a new connection if that one is full?
At present I have 3 connections for one database (my access control system) and 2 for the other (my client-specific data).
To me, it feels like if I roll out these changes then I will just be faced with the same problem that the connections will just get eaten up again because they're not being released.
I hope that explains enough for someone to offer any advice.
It sounds like an important step here is to fix the scripts so they release the connections when they're done. Once you do that, PgBouncer will help reduce the connection setup/teardown overhead, but in connection pooling mode it won't give you the ability to maintain more connections to Pg than you otherwise could.
However, you can also use PgBouncer in transaction-pooling mode. When used for transaction pooling, PgBouncer keeps a pool of idle backend connections and only assigns one when a client starts a transaction; the connection is returned to the pool after the client does a COMMIT or ROLLBACK. That means that in transaction pooling mode you can have a large number of open connections to PgBouncer; they don't each need a corresponding connection to a PostgreSQL backend, so long as most of them are idle at any point in time and don't have a transaction open.
The only real downside of transaction pooling mode is that it breaks applications that expect to SET session-level variables, keep prepared statements across transactions, etc. Applications may need modification, e.g. to use SET LOCAL.
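For reference, a minimal pgbouncer.ini sketch for transaction pooling (all values are illustrative, not recommendations):

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 1000   ; many idle PHP clients may stay connected
    default_pool_size = 20   ; but only this many real backend connections per db/user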