I know HikariCP's minimum maxLifetime is 30s, but is there any way to set it to something less than 30s?
When I try to set maxLifetime = 5000, it doesn't work:
maxLifetime is less than 30000ms, using default 1800000ms.
I'm not sure why you would want a pool at all with a lifetime so short. What is the use case? You can set idleTimeout as low as 10s. If minimumIdle is less than maximumPoolSize, then idleTimeout will be honored.
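If the goal is simply to recycle connections aggressively, here is a minimal sketch (assuming a recent HikariCP version; the JDBC URL is a placeholder) that leans on idleTimeout instead of maxLifetime:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class ShortLivedPool {
    public static HikariDataSource build() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder URL
        config.setMaximumPoolSize(10);
        config.setMinimumIdle(2);      // must be below maximumPoolSize for idleTimeout to be honored
        config.setIdleTimeout(10_000); // 10s, the documented lower bound for idleTimeout
        // maxLifetime below 30s is rejected and replaced with the 30min default (as the
        // log message above shows), so 30_000 is the shortest effective value.
        config.setMaxLifetime(30_000);
        return new HikariDataSource(config);
    }
}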
On Windows and Linux, what are the smallest SO_SNDBUF and SO_RCVBUF sizes possible? Is it 1 byte? Does setting these values to 1 achieve the smallest allocation? Does the OS delay allocating the RAM until the space is needed?
I realize that this will cause terrible performance for transferring data. I am not trying to transfer data. I am trying to check if a server is listening to a port and if not flag a problem.
$ man 7 socket
SO_SNDBUF
    Sets or gets the maximum socket send buffer in bytes. The kernel doubles this value (to allow space for bookkeeping overhead) when it is set using setsockopt(2), and this doubled value is returned by getsockopt(2). The default value is set by the /proc/sys/net/core/wmem_default file and the maximum allowed value is set by the /proc/sys/net/core/wmem_max file. The minimum (doubled) value for this option is 2048.

SO_RCVBUF
    Sets or gets the maximum socket receive buffer in bytes. The kernel doubles this value (to allow space for bookkeeping overhead) when it is set using setsockopt(2), and this doubled value is returned by getsockopt(2). The default value is set by the /proc/sys/net/core/rmem_default file, and the maximum allowed value is set by the /proc/sys/net/core/rmem_max file. The minimum (doubled) value for this option is 256.
If it's just in the "thousands of ports per hour" as you mentioned in your comment, chances are high your server is already getting an order of magnitude more connections per hour than what your test runner would impose. Just do a "connect", then a "close". Anything else is a micro-optimization.
And if there's any sort of proxy, port mapper, or load balancer involved, then testing the TCP connection itself may not be sufficient. You would want to actually test the application protocol being hosted on that socket. For example, if there is a web server running on port 8000, you should not only make a TCP connection, but actually make an HTTP request and see if you get any kind of response back.
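As a rough illustration of both levels of checking, here is a minimal Java sketch; the host, port, URL, and timeout values are placeholder assumptions:

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.URL;

public class PortProbe {
    // Plain TCP check: connect, then close. Buffer sizes are irrelevant here
    // because no application data is ever transferred.
    static boolean tcpListening(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    // Application-level check: issue a real HTTP request and require a response.
    static boolean httpResponding(String url) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000);
            return conn.getResponseCode() > 0; // any HTTP status counts as "alive"
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(tcpListening("localhost", 8000, 2000));
        System.out.println(httpResponding("http://localhost:8000/"));
    }
}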
I am using the Play Framework with Slick in a system that is heavily database-I/O-bound. In my application.conf file I have this setting:
play {
  akka {
    loggers = ["akka.event.slf4j.Slf4jLogger"]
    loglevel = WARNING
    actor {
      default-dispatcher = {
        fork-join-executor {
          parallelism-factor = 20.0
        }
      }
    }
  }
}
This obviously gives me 20 threads per core for the Play application, and as I understand it Slick creates its own thread pool. Does the numThreads setting in Slick mean that it is the total number of threads, or is it (numThreads x CPUs)? And is there any best practice for best performance? I currently have my settings configured as:
database {
  dataSourceClass = "org.postgresql.ds.PGSimpleDataSource"
  properties = {
    databaseName = "dbname"
    user = "postgres"
    password = "password"
  }
  numThreads = 10
}
numThreads is simply the number of threads in the thread pool. Slick uses this thread pool for executing queries.
The following config keys are supported for all connection pools, both built-in and third-party:
numThreads (Int, optional, default: 20): The number of concurrent threads in the thread pool for asynchronous execution of database actions. See the HikariCP wiki for more information about sizing the thread pool correctly. Note that for asynchronous execution in Slick you should tune the thread pool size (this parameter) accordingly instead of the maximum connection pool size.
queueSize (Int, optional, default: 1000): The size of the queue for database actions which cannot be executed immediately when all threads are busy. Beyond this limit new actions fail immediately. Set to 0 for no queue (direct hand-off) or to -1 for an unlimited queue size (not recommended).
The pool is tuned for asynchronous execution by default. Apart from the connection parameters you should only have to set numThreads and queueSize in most cases. In this scenario there is contention over the thread pool (via its queue), not over the connections, so you can have a rather large limit on the maximum number of connections (based on what the database server can still handle, not what is most efficient). Slick will use more connections than there are threads in the pool when sequencing non-database actions inside a transaction.
The following config keys are supported for HikariCP:
url (String, required): JDBC URL
driver or driverClassName (String, optional): JDBC driver class to load
user (String, optional): User name
password (String, optional): Password
isolation (String, optional): Transaction isolation level for new connections. Allowed values are: NONE, READ_COMMITTED, READ_UNCOMMITTED, REPEATABLE_READ, SERIALIZABLE.
catalog (String, optional): Default catalog for new connections.
readOnly (Boolean, optional): Read Only flag for new connections.
properties (Map, optional): Properties to pass to the driver or DataSource.
dataSourceClass (String, optional): The name of the DataSource class provided by the JDBC driver. This is preferred over using driver. Note that url is ignored when this key is set (you have to use properties to configure the database connection instead).
maxConnections (Int, optional, default: numThreads * 5): The maximum number of connections in the pool.
minConnections (Int, optional, default: same as numThreads): The minimum number of connections to keep in the pool.
connectionTimeout (Duration, optional, default: 1s): The maximum time to wait before a call to getConnection is timed out. If this time is exceeded without a connection becoming available, a SQLException will be thrown. 1000ms is the minimum value.
validationTimeout (Duration, optional, default: 1s): The maximum amount of time that a connection will be tested for aliveness. 1000ms is the minimum value.
idleTimeout (Duration, optional, default: 10min): The maximum amount of time that a connection is allowed to sit idle in the pool. A value of 0 means that idle connections are never removed from the pool.
maxLifetime (Duration, optional, default: 30min): The maximum lifetime of a connection in the pool. When an idle connection reaches this timeout, even if recently used, it will be retired from the pool. A value of 0 indicates no maximum lifetime.
connectionInitSql (String, optional): A SQL statement that will be executed after every new connection creation before adding it to the pool. If this SQL is not valid or throws an exception, it will be treated as a connection failure and the standard retry logic will be followed.
initializationFailFast (Boolean, optional, default: false): Controls whether the pool will "fail fast" if the pool cannot be seeded with initial connections successfully. If connections cannot be created at pool startup time, a RuntimeException will be thrown. This property has no effect if minConnections is 0.
leakDetectionThreshold (Duration, optional, default: 0): The amount of time that a connection can be out of the pool before a message is logged indicating a possible connection leak. A value of 0 means leak detection is disabled. Lowest acceptable value for enabling leak detection is 10s.
connectionTestQuery (String, optional): A statement that will be executed just before a connection is obtained from the pool to validate that the connection to the database is still alive. It is database dependent and should be a query that takes very little processing by the database (e.g. "VALUES 1"). When not set, the JDBC4 Connection.isValid() method is used instead (which is usually preferable).
registerMbeans (Boolean, optional, default: false): Whether or not JMX Management Beans ("MBeans") are registered.
Slick has very transparent configuration settings. As for best practice for good performance, there is no rule of thumb; it depends on your database (how many parallel connections it can serve) and your application. It is all about tuning between the database and the application.
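Following the guidance above, a plausible starting point for your application.conf could look like the sketch below (the numbers are illustrative assumptions, not recommendations; tune them against your own database and load):

database {
  dataSourceClass = "org.postgresql.ds.PGSimpleDataSource"
  properties = {
    databaseName = "dbname"
    user = "postgres"
    password = "password"
  }
  numThreads = 10      # contention happens here, via the action queue
  queueSize = 1000     # actions beyond this limit fail immediately
  maxConnections = 50  # defaults to numThreads * 5; your database must be able to handle it
}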
I know there is a WLM timeout which cancels a query when it executes for more than that time. But can I set a timeout for the amount of time a query waits in the queue?
You can control the amount of time a query spends waiting in the queue indirectly, by specifying the statement_timeout configuration parameter at the session or whole-cluster level in addition to the max_execution_time parameter at the WLM level. If both the WLM timeout (max_execution_time) and statement_timeout are specified, the shorter timeout is used. In this case the maximum time that a query will be able to wait in the queue is statement_timeout minus max_execution_time. For example, with statement_timeout = 300000 (5 min) and max_execution_time = 240000 (4 min), a query can spend at most about one minute in the queue before the shorter statement_timeout cancels it.
You can also modify your WLM configuration to create separate queues for queries on the basis of the time they require to run, and at runtime you can route queries to those queues according to user groups or query groups. Hope that is what you want.
Just looking for an explanation of the rationale for this bit of code (PoolUtilities:293 in version 2.2.4):
dataSource.setLoginTimeout((int) TimeUnit.MILLISECONDS.toSeconds(Math.min(1000L, connectionTimeout)));
This code and the setConnectionTimeout method means that I get this behaviour:
connectionTimeout == 0, then loginTimeout = Integer.MAX_VALUE
connectionTimeout > 0 && < 100, then HikariConfig throws IllegalArgumentException
connectionTimeout >= 100 && <= 1000, then loginTimeout = connectionTimeout
connectionTimeout > 1000, then loginTimeout = 1000
That looks really weird to me!
It's almost like the Math.min should be Math.max ???
In my current project I'd like to fail connections after 30s, which is impossible in the current setup.
I'm using the 4.1 postgres jdbc driver, but I think this is not relevant to the issue above.
Many thanks - and cool pooling library!!!
Ok, there are a couple of moving parts here. First, Math.min() is a bug; it should be Math.max(). In light of that (it will be fixed), consider the following:
It is important to note that connections are created asynchronously in the pool. The setConnectionTimeout() sets the maximum time (in milliseconds) that a call to getConnection() will wait for a connection before timing out.
The DataSource loginTimeout is the maximum time that physical connection initiation to the database can take before timing out. Because HikariCP obtains connections asynchronously, if the connection attempt fails, HikariCP will continue to retry, but your calls to getConnection() will timeout appropriately. We are using the connectionTimeout in kind of a double duty for loginTimeout.
For example, let's say the pool is completely empty, and you have configured a connectionTimeout of 30 seconds. When you call getConnection(), HikariCP, realizing that there are no idle connections available, starts trying to obtain a new one. There is little point in having a loginTimeout exceeding 30 seconds in this case.
The intent of the Math.max() call is to ensure that we never set loginTimeout to 0 if the user has configured a connectionTimeout of, say, 250ms: TimeUnit.MILLISECONDS.toSeconds() would return 0 without the Math.max(). If the user has configured a connectionTimeout of 0, meaning they never want to timeout, the time conversion of Integer.MAX_VALUE results in a timeout of several thousand years (virtually never).
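To make the arithmetic concrete, here is a small standalone sketch; the helper method is hypothetical, mirroring the corrected logic rather than HikariCP's actual source:

import java.util.concurrent.TimeUnit;

public class LoginTimeoutDemo {
    // Hypothetical helper reproducing the fixed PoolUtilities line (Math.max, not Math.min).
    static int loginTimeoutSeconds(long connectionTimeoutMs) {
        return (int) TimeUnit.MILLISECONDS.toSeconds(Math.max(1000L, connectionTimeoutMs));
    }

    public static void main(String[] args) {
        System.out.println(loginTimeoutSeconds(250L));    // 1  -- clamped up so it never becomes 0
        System.out.println(loginTimeoutSeconds(30000L));  // 30 -- a 30s connectionTimeout now carries through
    }
}

With the original Math.min() the second case would print 1, which is exactly why a 30-second connection timeout could not take effect at the driver level.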
Having said that, and in light of how HikariCP connections to the database are obtained asynchronously, even without the Math.max() fix you should be able to achieve application-level connection timeouts of 30s. Unless physical connection establishment to your database takes more than 1000ms, you would be unaffected by the Math.min().
We are putting out a 2.2.5-rc3 release candidate in the next few hours. I will slot this fix in.
I can't find any documentation for the node-postgres driver on setting the maximum connection pool size, or even finding out what it is if it's not configurable. Does anyone know how I can set the maximum number of connections, or what it is by default?
Defaults are defined in node-postgres/lib/defaults: https://github.com/brianc/node-postgres/blob/master/lib/defaults.js
poolSize is set to 10 by default; 0 will disable any pooling.
var pg = require('pg');
pg.defaults.poolSize = 20;
Note that the pool is only used when using the connect method, and not when initiating an instance of Client directly.
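For illustration, the difference looks roughly like this (the connection string is a placeholder, and this assumes the older callback-style pg.connect API that matches the defaults module above):

var pg = require('pg');

// Pooled: pg.connect checks a connection out of the shared pool (up to poolSize).
pg.connect('postgres://user:pass@localhost/dbname', function (err, client, done) {
  if (err) { return console.error(err); }
  client.query('SELECT NOW()', function (err, result) {
    done(); // return the connection to the pool
    if (!err) { console.log(result.rows[0]); }
  });
});

// Not pooled: a Client instance owns a single dedicated connection.
var directClient = new pg.Client('postgres://user:pass@localhost/dbname');
directClient.connect();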
Node.js is single-threaded, so why would you want to have more than one connection to the DB per process? Even when you cluster Node.js processes you should have at most one connection per process. Otherwise you are doing something wrong.