ODatabasePool has no relation to OPartitionedDatabasePool - orientdb

The OrientDB 3 manual kicks off talking about OrientDB and ODatabasePool.
However, anyone using OPartitionedDatabasePool cannot move freely to ODatabasePool. Though their names imply some resemblance, they have none.
I'm not able to understand the design intent, since ideally switching between the two should be seamless.

The two do resemble each other, but you need to take a few extra steps if you want to use the latter.
Here's some info from the OPartitionedDatabasePool documentation:
To acquire a connection from the pool, call the acquire() method; to release the connection, you just need to call the ODatabase.close() method.
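As a minimal sketch (assuming the OrientDB 3.x API; database name and credentials are placeholders), the extra step with ODatabasePool is that the pool is built from an OrientDB context object rather than directly from a URL:

```java
import com.orientechnologies.orient.core.db.ODatabasePool;
import com.orientechnologies.orient.core.db.ODatabaseSession;
import com.orientechnologies.orient.core.db.OrientDB;
import com.orientechnologies.orient.core.db.OrientDBConfig;

public class PoolExample {
    public static void main(String[] args) {
        // The OrientDB context object is the extra step compared to OPartitionedDatabasePool
        OrientDB orient = new OrientDB("remote:localhost", OrientDBConfig.defaultConfig());
        ODatabasePool pool = new ODatabasePool(orient, "mydb", "admin", "admin");
        try (ODatabaseSession session = pool.acquire()) {
            // work with the session; close() returns it to the pool
        }
        pool.close();
        orient.close();
    }
}
```

By contrast, OPartitionedDatabasePool was constructed directly from a URL (e.g. new OPartitionedDatabasePool("remote:localhost/mydb", "admin", "admin")), which is why the two are not drop-in replacements for each other.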
OrientDB Javadoc | ODatabasePool
OrientDB Javadoc | OPartitionedDatabasePool


Stateful session memory management in Drools

Just have a general question regarding memory management when using "Stateful sessions" in Drools. For context, I'm specifically looking to use a ksession in "Stream" mode, together with fireUntilHalt() to process an infinite stream of events. Each event is timestamped, however I'm mainly writing rules using length-based windows notation (i.e. window:length()) and the from accumulate syntax for decision making.
The docs are a little vague, though, about how memory management works in this case. They suggest that with temporal operators the engine can automatically remove any facts/events that can no longer match. However, would this also apply to rules that only use window:length()? Or would my system need to manually delete events that are no longer applicable, in order to prevent running out of memory?
window:time() calculates an expiration time, so it supports automatic removal. window:length(), however, doesn't calculate expiration, so events would be retained.
You can confirm the behaviour with my example:
https://github.com/tkobayas/kiegroup-examples/tree/master/Ex-cep-window-length-8.32
FYI:
https://github.com/kiegroup/drools/blob/8.32.0.Final/drools-core/src/main/java/org/drools/core/rule/SlidingLengthWindow.java#L145
You would need to explicitly delete them, or set @expires (if it's possible for your application to specify expiration by time), to avoid an OOME.
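The retention behavior can be illustrated with a plain-Java sketch (this is not Drools API, just a model of the distinction): a length-based window only bounds which events are *visible to the window*; the events themselves stay in working memory unless something deletes them.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative sketch (not Drools API): with a length-based window, an event
// falls out of the *window* once N newer events arrive, but it is never
// expired from working memory, so memory grows without bound unless the
// application deletes old events (or an @expires policy can be set).
class LengthWindowMemory {
    final List<String> workingMemory = new ArrayList<>(); // retained facts
    final Deque<String> window = new ArrayDeque<>();      // last N events only
    final int length;

    LengthWindowMemory(int length) {
        this.length = length;
    }

    void insert(String event) {
        workingMemory.add(event);   // no automatic expiration happens here
        window.addLast(event);
        if (window.size() > length) {
            window.removeFirst();   // leaves the window, but stays in memory
        }
    }
}
```

After inserting ten events into a window of length three, the window holds three events while the "working memory" still holds all ten, which is the growth the answer above warns about.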
Thank you for pointing out that the documentation is not clear about this. I have filed a doc JIRA to explain it.
https://issues.redhat.com/browse/DROOLS-7282

How to use "Try" Postgres Advisory Locks

I am experiencing some unexpected (to me) behavior using pg_try_advisory_lock. I believe this may be connection pooling / timeout related.
pg_advisory_lock is working as expected. When I call the function and the desired lock is already in use, my application waits until the specified command timeout on the function call.
However, when I replace it with pg_try_advisory_lock and instead check the result of the function (true/false) to determine whether the lock was acquired, some scenario is allowing multiple processes (single-threaded .NET Core deployed to ECS) to acquire "true" on the same lock key at the same time.
In the C# code, I have implemented this within an IDisposable, and on disposal I make my call to release the lock and dispose of the underlying connection. This is the case both for my calls to pg_advisory_lock and pg_try_advisory_lock. All of the work that needs to be synchronized happens inside a using block.
My operating theory is that the settings around connection pooling / timeouts are at play here. Since the try call doesn't block, the session context for the lock "disposes" on the Postgres side, perhaps as a result of the connection being idle(?).
If that is the cause, the simplest solution seems to be to disable any kind of pooling for the connections used in try locking. But since pooling is just a theory at this point, it seems a bit early to start targeting a specific solution.
Any ideas what may be the cause?
Example of the C#:
using (Api.Instance.Locking.TryAcquire(someKey1, someKey2, out var acquired))
{
    if (acquired)
    {
        // do some locked work
    }
}
Under the hood, TryAcquire is calling:
select pg_try_advisory_lock as acquired from pg_try_advisory_lock(@key1, @key2)
This turned out to be kind of dumb. No changes to pooling were required.
I am using the Dapper and Npgsql libraries. An NpgsqlConnection returns to a closed state after being used by .Query() unless the connection is explicitly opened first.
This was impacting my calls to both the try and blocking versions of the advisory lock functions, albeit less adversely in the blocking case.
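The underlying lesson is that advisory locks are session-scoped: the lock lives exactly as long as the connection that took it stays open. A minimal plain-JDBC sketch of the safe pattern (connection string and key values are placeholders; the same idea applies in C# with an explicitly opened NpgsqlConnection):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class TryAdvisoryLock {
    public static void main(String[] args) throws Exception {
        // The connection must remain open for the entire critical section:
        // if it is closed (or silently returned to a pool), the lock is gone.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "pass")) {
            boolean acquired;
            try (PreparedStatement ps =
                    conn.prepareStatement("select pg_try_advisory_lock(?, ?)")) {
                ps.setInt(1, 42);
                ps.setInt(2, 7);
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    acquired = rs.getBoolean(1); // true only if we got the lock
                }
            }
            if (acquired) {
                try {
                    // do the locked work while conn stays open
                } finally {
                    // unlock on the SAME session that acquired the lock
                    try (PreparedStatement ps =
                            conn.prepareStatement("select pg_advisory_unlock(?, ?)")) {
                        ps.setInt(1, 42);
                        ps.setInt(2, 7);
                        ps.execute();
                    }
                }
            }
        }
    }
}
```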

How do I disable legacy application from using XA datasources?

I have this legacy application that often fails importing data, probably because some transactions span too many SQL statements. These long transactions are really not needed, so I'm trying to get rid of them and just use normal lookups and commits.
I'm not very familiar with XA datasources and don't really understand what controls whether an XA or non-XA datasource is used. I have found places in the code that choose between XA and non-XA, but after setting this to always use non-XA, I'm still getting the errors.
I have also unchecked "Support two phase commit protocol" in "Queue connection factories" on my server, also without luck.
My server has datasources registered for both XA and non-XA.
Any help on how and where to disable the use of XA datasources would be appreciated.
LocalTransact E J2CA0030E: Method enlist caught com.ibm.ws.Transaction.IllegalResourceIn2PCTransactionException: Illegal attempt to enlist multiple 1PC XAResources
at com.ibm.ws.tx.jta.RegisteredResources.enlistResource(RegisteredResources.java:871)
at com.ibm.ws.tx.jta.TransactionImpl.enlistResource(TransactionImpl.java:1835)
at com.ibm.tx.jta.embeddable.impl.EmbeddableTranManagerSet.enlistOnePhase(EmbeddableTranManagerSet.java:202)
at com.ibm.ejs.j2c.LocalTransactionWrapper.enlist(LocalTransactionWrapper.java:624)
at com.ibm.ejs.j2c.ConnectionManager.lazyEnlist(ConnectionManager.java:2697)
at com.ibm.ws.rsadapter.spi.WSRdbManagedConnectionImpl.lazyEnlist(WSRdbManagedConnectionImpl.java:2605)
at com.ibm.ws.rsadapter.jdbc.WSJdbcConnection.beginTransactionIfNecessary(WSJdbcConnection.java:743)
at com.ibm.ws.rsadapter.jdbc.WSJdbcConnection.prepareStatement(WSJdbcConnection.java:2792)
at com.ibm.ws.rsadapter.jdbc.WSJdbcConnection.prepareStatement(WSJdbcConnection.java:2745)
Before answering this, I want to point out that changing transactional logic without full awareness of what you are doing can put your application at risk of data integrity issues, so proceed with caution.
If you look at the part of the stack that follows what you posted, it should show which application code is using the java.sql.Connection object. Follow the code back to the point where it obtains the Connection from a DataSource, and identify the JNDI name of the DataSource that it is using. Switch your code to instead use the JNDI name of a ConnectionPoolDataSource (non-XA) rather than an XADataSource.
Once you do this, you might see errors about enlisting multiple one-phase resources in a transaction. If so, your application was relying on two-phase commit, which is only possible with XA, and you will need to completely refactor it (if that is even possible at all) to avoid the use of two-phase commit.
Alternately, if it was truly the intent that this data source should not enlist in JTA transactions, then you can mark it as transactional=false (if using Liberty) or nonTransactionalDataSource=true (WAS traditional), in which case it will avoid enlisting in JTA transactions and thus will not participate as a two-phase (XA) resource.
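For the Liberty case, the non-transactional option is a single attribute on the dataSource element in server.xml. A hedged sketch (the JNDI name, library reference, and driver properties are placeholders for your setup):

```xml
<!-- transactional="false" keeps this data source out of JTA transactions -->
<dataSource jndiName="jdbc/legacyDS" transactional="false">
    <jdbcDriver libraryRef="jdbcLib"/>
    <!-- driver-specific connection properties go here -->
</dataSource>
```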
Before you make changes that you do not understand, you might be better advised to assess whether simply fixing or avoiding the (unspecified) errors may be less risky and less work than changing from XA to non-XA behaviour.
At the very least, for such a change from XA to non-XA, you should engage a subject matter expert who can advise on the technical and business impacts of such a change specifically for the application involved.
You should edit your question to specify the exact errors (for example, SQLCODEs or SQLSTATEs) that the application receives in response to which kinds of SQL actions. Sometimes simple, low-risk configuration changes can resolve those errors.

"PSQLException: FATAL: sorry, too many clients already" error in integration tests with jOOQ & Spring Boot

There are already similar questions about this error and suggested solutions; e.g. increasing max_connections in postgresql.conf and / or adapting the max number of connections your app requests. However, my question is more specific to using jOOQ in a Spring Boot application.
I integrated jOOQ into my application as in the example on GitHub. Namely, I am using DataSourceConnectionProvider with TransactionAwareDataSourceProxy to handle database connections, and I inject the DSLContext in the classes that need it.
My application provides various web services to front-ends, and I've never encountered that PSQLException on dev or test environments so far. I only started getting the error when running all integration tests (around 1000) locally. I don't suspect a leak in connection handling, since Spring and jOOQ manage the resources; nevertheless, the error got me worried that it might also happen in production.
Long story short: is there a better alternative to using DataSourceConnectionProvider to manage connections? Note that I already tried using DefaultConnectionProvider as well, and tried to make spring.datasource.max-active less than the max_connections allowed by Postgres. Neither fixed my problem so far.
Since your question seems not to be about the generally best way to work with PostgreSQL connections / data sources, I'll answer the part about jOOQ and using its DataSourceConnectionProvider:
Using DataSourceConnectionProvider
There is no better alternative in general. In order to understand DataSourceConnectionProvider (the implementation), you have to understand ConnectionProvider (its specification). It is an SPI that jOOQ uses for two things:
to acquire() a connection prior to running a statement or a transaction
to release() a connection after running a statement (and possibly, fetching results) or a transaction
The DataSourceConnectionProvider does so by acquiring a connection from your DataSource through DataSource.getConnection() and by releasing it through Connection.close(). This is the most common way to interact with data sources, in order to let the DataSource implementation handle transaction and/or pooling semantics.
Whether this is a good idea in your case may depend on individual configurations that you have made. It generally is a good idea because you usually don't want to manually manage connection lifecycles.
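To make the SPI contract concrete, a hand-rolled equivalent might look like the following sketch (assuming jOOQ's org.jooq.ConnectionProvider interface; error handling reduced to the essentials):

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.jooq.ConnectionProvider;
import org.jooq.exception.DataAccessException;

// Sketch of the same acquire-from-DataSource / release-via-close behavior
// that DataSourceConnectionProvider implements.
public class SimpleDataSourceConnectionProvider implements ConnectionProvider {
    private final DataSource ds;

    public SimpleDataSourceConnectionProvider(DataSource ds) {
        this.ds = ds;
    }

    @Override
    public Connection acquire() {
        try {
            // Delegate pooling and transaction semantics to the DataSource
            return ds.getConnection();
        } catch (SQLException e) {
            throw new DataAccessException("Error acquiring connection", e);
        }
    }

    @Override
    public void release(Connection connection) {
        try {
            // For a pooled DataSource, close() returns the connection to the pool
            connection.close();
        } catch (SQLException e) {
            throw new DataAccessException("Error releasing connection", e);
        }
    }
}
```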
Using DefaultConnectionProvider
This can certainly be done instead, in which case jOOQ does not close() your connection for you; you'll do that yourself. I'm expecting this to have no effect in your particular case, as you'll implement the DataSourceConnectionProvider semantics manually using e.g.
try (Connection c = ds.getConnection()) {
    // Implicitly using a DefaultConnectionProvider
    DSL.using(c).select(...).fetch();
    // Implicit call to c.close()
}
In other words: this is likely not a problem related to jOOQ, but to your data source.
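On the data source side, one thing worth checking is that the pool cap, multiplied by however many application contexts your integration tests spin up concurrently, stays below Postgres's max_connections. A hedged sketch (the property name depends on which pool your Spring Boot version uses; the values are illustrative):

```properties
# HikariCP (the default pool in Spring Boot 2.x)
spring.datasource.hikari.maximum-pool-size=10
# Tomcat pool (Spring Boot 1.x), the pool behind spring.datasource.max-active:
# spring.datasource.tomcat.max-active=10
```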

Erlang store pool of mongodb connections

How can I store a pool of MongoDB connections in Erlang?
In one function I create the pool of DB connections:
Replset = {<<"rs1">>, [{localhost, 27017}]},
Pool = resource_pool:new (mongo:rs_connect_factory (Replset), Count),
In a second function I need to get a connection from the pool:
{ok, Conn} = resource_pool:get (Pool).
But I cannot do this, because I created the pool in another function.
I tried to use records, but without success. :(
What do I need to do to make the pool somewhat global across modules?
I think the best solution is to use a gen_server and store the data in its state.
Another way is to use an ets table.
Some points to guide you in the correct direction:
Erlang has no concept of a global variable. Bindings can only exist inside a process, and a binding is local to that process. Furthermore,
inside a process there are no process-local bindings, only bindings local to the current scope.
Note that this is highly consistent with most functional programming styles.
To solve your problem, you need a process that keeps track of your resource pool for you. Clients then call this process and ask for a resource. The resource manager can then handle, via monitors, what should happen if a client dies while it holds a checked-out resource.
The easiest way to get started is to grab devinus/poolboy from Github and look into that piece of code.