How to automatically dispatch read-only transactions to a slave - PostgreSQL

I would like all queries from my Spring-Hibernate application executed in a read-only transaction to be dispatched to a PostgreSQL slave and all read-write transaction queries to a master.
While using annotation-driven transactions in Spring, if the transaction is defined as read-only, the PostgreSQL driver allows only select queries to be executed, which is obvious; however, there is no mention of how the driver would behave in a master-slave configuration. For example, the MySQL driver has a replication connection class which automatically dispatches read-only transaction queries to the slave.
One solution would be to use multiple Hibernate session factories and use the one pointing to the slave for selects and the other for updates, but that would be too much manual handling. How should I be designing this?

This is a surprisingly complex question, and the answer is not simple. Keep in mind that whatever layer does the dispatching has to know whether a transaction is likely to be read-only or not.
The cleanest solution is probably to implement the dispatching in your middleware. This has the advantage of being a functional dispatch: we know what we are trying to do, so we dispatch accordingly. Of course, the application layer can have gaps in its knowledge of what is read-only and what writes.
The second option is to dispatch with something like PgPool-II or a similar proxy. In that case you will probably want to avoid server-side prepared queries, because the more knowledge you give the intermediate layer, the fewer problems you will have.
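Going back to the middleware option: in a Spring application, one common pattern is a routing DataSource keyed on the current transaction's read-only flag. The sketch below is one possible wiring, not the only way to do it; the "master"/"slave" lookup keys, class name, and connection details are placeholders of my choosing.

```java
import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import org.springframework.transaction.support.TransactionSynchronizationManager;

public class ReadOnlyRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        // Route @Transactional(readOnly = true) work to the slave, everything else to the master.
        return TransactionSynchronizationManager.isCurrentTransactionReadOnly() ? "slave" : "master";
    }

    // Hypothetical wiring helper: "master" and "slave" are placeholder lookup keys.
    public static DataSource routed(DataSource master, DataSource slave) {
        ReadOnlyRoutingDataSource routing = new ReadOnlyRoutingDataSource();
        Map<Object, Object> targets = new HashMap<>();
        targets.put("master", master);
        targets.put("slave", slave);
        routing.setTargetDataSources(targets);
        routing.setDefaultTargetDataSource(master);
        routing.afterPropertiesSet();
        // Defer the physical connection until first use, so the read-only flag
        // set by Spring's transaction manager is visible when the key is resolved.
        return new LazyConnectionDataSourceProxy(routing);
    }
}
```

Point the single Hibernate session factory at the returned DataSource; that way you avoid maintaining two session factories by hand.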

Related

How to query Axon aggregates

Is there a way to see the current state of the aggregates stored in Axon?
Our application uses an Oracle-backed Axon event store.
I tried querying the domainevententry and snapshotevententry tables, but they are empty.
Is there a way to see the current state of the aggregates stored in Axon?
In short, yes, although it is not recommended, at least if you are planning to employ CQRS. CQRS, or Command Query Responsibility Segregation, dictates that the Command Model and the Query Model are kept separate.
The aggregate support Axon delivers supplies an easy means to construct a Command Model. As the name suggests, it is intended for commands. On the flip side, you have Query Models, which are designed for queries. AxonIQ's documentation on CQRS may clarify this further.
I tried querying the domainevententry and snapshotevententry tables, but they are empty.
That's interesting in its own right! When you publish events in Axon, either through the AggregateLifecycle#apply(Object...) or the EventGateway#publish(Object...) method, the published event should end up in your domain_event_entry table. If that's not the case, then either your JPA/JDBC configuration is off or some other exception is occurring in your application.
Would you be able to update your question with samples of your configuration and/or the stack traces you are seeing?
Replaying production issues locally
What I've done in the past to replay behavior occurring in a production environment is to load the Aggregate's event stream from that environment into a local dev/test event store. To query it, you only need the aggregate identifier. As the aggregate identifier is indexed, retrieving all events for a specific aggregate (in other words, the aggregate stream) is straightforward.
By doing so, I could run the application locally and step through the aggregate's behavior. This gave me the benefit of knowing exactly which event caused which state change on the way to the problematic scenario.
However, why your events are not present in your domainevententry table is unclear to me. If you're still facing issues with that, I again recommend updating the question with more specifics about your project.
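To make the "load an aggregate's stream" step concrete, here is a minimal sketch using Axon's EventStore API. The class name and the way the stream is printed are illustrative; it assumes an EventStore bean is already configured in your application.

```java
import org.axonframework.eventsourcing.eventstore.EventStore;

public class AggregateStreamInspector {

    private final EventStore eventStore;

    public AggregateStreamInspector(EventStore eventStore) {
        this.eventStore = eventStore;
    }

    // Read the full event stream of a single aggregate and print each event in order.
    public void printStream(String aggregateIdentifier) {
        eventStore.readEvents(aggregateIdentifier)
                  .asStream()
                  .forEach(event -> System.out.println(
                          event.getSequenceNumber() + " -> "
                                  + event.getPayloadType().getSimpleName()
                                  + " : " + event.getPayload()));
    }
}
```

Replaying the events one by one like this makes it easy to see which event produced which state change.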

How can you set a MongoDB write concern at database level that cannot be overridden?

I'd like to prevent clients of a database from using any write concern other than majority for a replica set. Is it possible to do this at the database level?
It is not possible to prevent the use of write concerns other than the default. Clients can specify a write concern for each CRUD operation, and MongoDB's permission model is not granular enough to prevent this. If you really want to prevent the write concern from being changed by your clients, you could create a REST API that wraps the most common MongoDB operations but prevents users from setting the write concern.
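What you can do is make majority the default at the client or database level, keeping in mind that individual operations can still override it. A sketch with the MongoDB Java driver, using placeholder connection details, database, and collection names:

```java
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.WriteConcern;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class MajorityDefaultExample {
    public static void main(String[] args) {
        // Default the whole client to w: majority; individual operations may still override it.
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString("mongodb://localhost:27017/?replicaSet=rs0"))
                .writeConcern(WriteConcern.MAJORITY)
                .build();

        try (MongoClient client = MongoClients.create(settings)) {
            MongoDatabase db = client.getDatabase("app")
                    .withWriteConcern(WriteConcern.MAJORITY); // per-database default, also overridable
            db.getCollection("orders").insertOne(new Document("status", "new"));
        }
    }
}
```

This only changes the default; enforcing it would still require wrapping access behind your own API.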

How to process distributed transactions within PostgreSQL?

Can anyone kindly tell me how to process distributed transactions within PostgreSQL, also called "XA"? Are there any resources about it? Many thanks for any answers.
It looks like you are a bit confused. Database systems generally support two kinds of distributed transactions:
Native distributed transactions and
XA transactions.
Native distributed transactions are generally between different servers of the same RDBMS. PostgreSQL supports this with the dblink_exec function; generally the connection to the other server is established through a so-called database link. PostgreSQL is a bit clumsier to use here than some commercial-grade RDBMSs, because you first need to install the dblink extension to be able to use database links. In this model, the PostgreSQL server itself manages the transaction.
XA transactions, on the other hand, are managed by an external transaction manager (TM), and each of the participating databases takes the role of an XA resource that enlists with the transaction manager. The RDBMS can no longer decide by itself when to commit a transaction; that is the task of the XA transaction manager, which uses a two-phase commit (2PC) protocol to make sure the changes are applied or rolled back consistently across the databases.
On some operating systems, such as Windows, a transaction manager is part of the OS; on others it is not. Java application servers generally ship with a transaction manager, and the corresponding data source needs to be configured to use XA.
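To make the 2PC flow concrete, here is a minimal sketch that drives PostgreSQL's XA resource by hand through the prepare/commit phases; in practice a JTA transaction manager does this across several resources for you. The connection details and the audit_log table are placeholders, and PostgreSQL must be configured with max_prepared_transactions greater than zero for prepare to succeed.

```java
import java.sql.Connection;
import java.sql.Statement;
import javax.sql.XAConnection;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;
import org.postgresql.xa.PGXADataSource;

public class XaSketch {

    // Minimal Xid implementation; a real transaction manager generates and tracks these.
    static final class SimpleXid implements Xid {
        private final byte[] gtrid;
        private final byte[] bqual;

        SimpleXid(String globalId, String branchId) {
            this.gtrid = globalId.getBytes();
            this.bqual = branchId.getBytes();
        }

        public int getFormatId() { return 0x4242; }
        public byte[] getGlobalTransactionId() { return gtrid; }
        public byte[] getBranchQualifier() { return bqual; }
    }

    public static void main(String[] args) throws Exception {
        PGXADataSource ds = new PGXADataSource();
        ds.setServerName("localhost");     // placeholder connection details
        ds.setDatabaseName("demo");
        ds.setUser("demo");
        ds.setPassword("secret");

        XAConnection xaConnection = ds.getXAConnection();
        XAResource xaResource = xaConnection.getXAResource();
        Connection connection = xaConnection.getConnection();
        Xid xid = new SimpleXid("global-tx-1", "branch-1");

        // Associate the work with the global transaction branch.
        xaResource.start(xid, XAResource.TMNOFLAGS);
        try (Statement statement = connection.createStatement()) {
            statement.executeUpdate("INSERT INTO audit_log(msg) VALUES ('written under XA')");
        }
        xaResource.end(xid, XAResource.TMSUCCESS);

        // Phase 1: each resource votes; PostgreSQL records a prepared transaction here.
        int vote = xaResource.prepare(xid);

        // Phase 2: the TM commits (or rolls back) every branch based on the collected votes.
        if (vote == XAResource.XA_OK) {
            xaResource.commit(xid, false); // false = second phase, not a one-phase commit
        } // XA_RDONLY means there is nothing to commit for this branch.

        connection.close();
        xaConnection.close();
    }
}
```

With a real transaction manager, your code only sees the data source and the TM's transaction API; the start/end/prepare/commit calls shown above happen behind the scenes.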

How to get a connection and hold it using DAAB?

I have a task ahead of me that requires the use of local temporary tables. For performance reasons I can't use transactions.
Temporary tables, much like transactions, require that all queries come from one connection, which must not be closed or reset. How can I accomplish this using the Enterprise Library Data Access Application Block?
Enterprise Library will use a single database connection if a transaction is active. However, there is no way to force a single connection for all Database methods in the absence of a transaction.
You can definitely use the Database.CreateConnection method to get a database connection. You could then use that connection along with the DbCommand objects to perform the appropriate logic.
Other approaches would be to modify Enterprise Library source code to do exactly what you want or create a new Database implementation that does not perform connection management.
I can't see a way of doing that with DAAB. I think you are going to have to drop back to using ADO.NET connections and manage them yourself, but even then, manipulating temporary tables on the server from a client-side app doesn't strike me as an optimal solution to the problem.

MarkLogic - Interrupting a long-running query with a database restart

In MarkLogic, if I interrupt a long-running query with a database restart, will that query then no longer be fully applied when the database comes online again?
Yes, in general canceling an update query will roll back any changes it tried to make. You can think of this like a stack: every update in your query goes onto a stack, taking any necessary locks as it goes. After all the expressions have been evaluated, the update enters its commit phase and applies that stack atomically to the database. If the query is interrupted before that atomic commit, none of the changes are durable. This behavior covers the atomicity (A) and durability (D) aspects of the ACID properties common to transactional DBMS implementations.
There are some exceptions. It is possible to structure an update so that work is applied in granular sub-transactions. One way to do that is with a multi-statement transaction.
See http://docs.marklogic.com/guide/app-dev/transactions for more.