PostgreSQL: How to access data of a transaction with another connection

Good morning,
I use PostgreSQL as my database engine, and some operations use transactions to make sure everything goes smoothly.
Sometimes I need to test some specific data at "that point" of my application, but these operations often make a lot of changes in the database, and it's not easy to reproduce "all the changes made inside the transaction" with another connection (for example, using the pgAdmin query tool) outside the transaction in order to test the single aspect that I need.
One way to test the specific data is to load it into a variable and then debug it, but I was looking for a more general solution.
So here's the question: is there a way to access the data of a specific connection (which is inside a transaction) from another connection/query tool?
Thanks,
Attilio

In PostgreSQL there is (currently) no way to do it, full stop: uncommitted changes are never visible to other connections.
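One way to convince yourself of this: PostgreSQL accepts the READ UNCOMMITTED isolation level syntactically but runs it as READ COMMITTED, so even asking for dirty reads explicitly changes nothing. A minimal sketch (the table name is a placeholder):

    -- Session B, trying to peek at session A's open transaction:
    BEGIN TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;  -- accepted, but...
    SELECT * FROM some_table;  -- ...behaves as READ COMMITTED: only data
    COMMIT;                    -- committed by other sessions is visible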

Related

Is there a way to give write permissions only in a transaction in Postgres?

I work with software that is used by a lot of different clients in several countries, with different needs, rules, and constraints on their data.
When I make a change to the database's structure, I have a tool to test it on every client's database, obviously with read-only rights. This means that the best way to test a query like UPDATE table SET x = y WHERE condition
is to call its "read-only part": SELECT x FROM table WHERE condition.
It works, but it's not ideal, as sometimes it is the writing of data itself that causes problems (mostly deadlocks or timeouts), meaning I can't see the problem until a client suffers from it.
I'm wondering if there is a way to grant write permissions in Postgres, but only when inside a transaction, and force a rollback on every transaction. This way, changes could be tested accurately on real data and still prevent any dev from editing it.
Any ideas?
Edit: the volumes are too large to consider cloning data for every dev who needs to run a query
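For reference, the pattern the question is trying to enable is the usual test-inside-a-transaction idiom, which today requires a role that already has write rights. A minimal sketch (table and column names are made up):

    BEGIN;
    UPDATE accounts SET x = y WHERE region = 'EU';  -- hypothetical table/columns
    SELECT x FROM accounts WHERE region = 'EU';     -- verify the effect on real data
    ROLLBACK;                                       -- nothing persists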
This sounds similar to creating an audit table to record information about transactions. I would consider using a trigger to write a copy of the data to a "rollback" table/row, and then copying the "rollback" table/row back on completion of the update; a sketch of the first half of that idea is shown below.
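A minimal sketch of the before-image trigger, assuming a table named t (both names are placeholders); every UPDATE saves the old row so it can be restored later:

    -- Shadow table with the same columns as t.
    CREATE TABLE t_rollback (LIKE t);

    -- Save the pre-update image of each modified row.
    CREATE OR REPLACE FUNCTION save_old_row() RETURNS trigger AS $$
    BEGIN
        INSERT INTO t_rollback SELECT OLD.*;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER t_save_old
    BEFORE UPDATE ON t
    FOR EACH ROW EXECUTE PROCEDURE save_old_row();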

Is there a way to show everything that was changed in a PostgreSQL database during a transaction?

I often have to execute complex SQL scripts in a single transaction on a large PostgreSQL database, and I would like to verify everything that was changed during the transaction.
Verifying each single entry in each table "by hand" would take ages.
Dumping the database to plain SQL before and after the script and running diff on the dumps isn't really an option, since each dump would be about 50 GB of data.
Is there a way to show all the data that was added, deleted or modified during a single transaction?
What you are looking for is one of the most searched-for topics on the internet when it comes to capturing database changes; you could call it a kind of version control for data.
As far as I know, there is sadly no built-in approach available in PostgreSQL or MySQL. But you can work around it by adding triggers for your most-used operations.
You can create backup schemas and tables to capture the rows that are updated, created, or deleted; a sketch of such a trigger follows below.
This way you can achieve what you want. I know this process is fully manual, but it is really effective.
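A minimal sketch of such an audit trigger, assuming a hypothetical table named orders and using jsonb row images (to_jsonb requires PostgreSQL 9.5+); attach one trigger per table you want to watch:

    CREATE TABLE audit_log (
        changed_at  timestamptz NOT NULL DEFAULT now(),
        table_name  text        NOT NULL,
        operation   text        NOT NULL,  -- INSERT / UPDATE / DELETE
        old_row     jsonb,                 -- NULL for INSERT
        new_row     jsonb                  -- NULL for DELETE
    );

    CREATE OR REPLACE FUNCTION audit_row_change() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            INSERT INTO audit_log (table_name, operation, old_row, new_row)
            VALUES (TG_TABLE_NAME, TG_OP, NULL, to_jsonb(NEW));
        ELSIF TG_OP = 'UPDATE' THEN
            INSERT INTO audit_log (table_name, operation, old_row, new_row)
            VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD), to_jsonb(NEW));
        ELSE  -- DELETE
            INSERT INTO audit_log (table_name, operation, old_row, new_row)
            VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD), NULL);
        END IF;
        RETURN NULL;  -- return value is ignored for AFTER row triggers
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER orders_audit
    AFTER INSERT OR UPDATE OR DELETE ON orders
    FOR EACH ROW EXECUTE PROCEDURE audit_row_change();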
If you need to analyze the script's behaviour only sporadically, the easiest approach would be to change the server configuration parameter log_min_duration_statement to 0, and then back to whatever value it had, once the analysis is done. All of the script's activity will then be written to the instance log (a sketch of the toggle follows below).
This approach is not suitable if your storage is not prepared to accommodate this amount of data, or for systems in which you don't want sensitive client data to be written to a plain-text log file.
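A sketch of that toggle, assuming PostgreSQL 9.4+ (where ALTER SYSTEM exists), superuser rights, and that the parameter was previously at its default of -1, i.e. disabled; substitute whatever value your instance actually had:

    -- Log every statement (0 = log all, regardless of duration).
    ALTER SYSTEM SET log_min_duration_statement = 0;
    SELECT pg_reload_conf();

    -- ... run the script, then inspect the server log ...

    -- Restore the previous value (-1 disables duration-based logging).
    ALTER SYSTEM SET log_min_duration_statement = -1;
    SELECT pg_reload_conf();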

How to get a connection and hold it using DAAB?

I have a task ahead of me that requires the use of local temporary tables. For performance reasons I can't use transactions.
Temporary tables, much like transactions, require that all queries come from a single connection that is neither closed nor reset. How can I accomplish this using the Enterprise Library Data Access Application Block?
Enterprise Library will use a single database connection if a transaction is active. However, there is no way to force a single connection for all Database methods in the absence of a transaction.
You can definitely use the Database.CreateConnection method to get a database connection. You could then use that connection along with the DbCommand objects to perform the appropriate logic.
Other approaches would be to modify the Enterprise Library source code to do exactly what you want, or to create a new Database implementation that does not perform connection management.
I can't see a way of doing that with the DAAB. I think you are going to have to drop back to ADO.NET connections and manage them yourself; but even then, manipulating temporary tables on the server from a client-side app doesn't strike me as an optimal solution to the problem.

What happens to my DataSet in case of unexpected failure?

I know this has been asked here before, but my question is slightly different. Given that the DataSet was designed with the disconnected principle in mind, what feature was provided to handle unexpected termination of the application, say a power failure, a Windows hang, or a system exception leading to a restart? Say the user has entered some 100 rows and they have been modified in the DataSet alone. Usually the DataSet is flushed to the database when the application closes or at regular intervals.
In the old days, when programming with VB 6.0, all interaction took place directly with the database, so each successful transaction committed itself automatically. How can that be done using DataSets?
DataSets are never meant for direct access to the database; they are a disconnected model only. There is no intent that they be able to recover from machine failures.
If you want to work live against the database you need to use DataReaders and issue DbCommands against the database live for changes. This of course will increase your load on the database server though.
You have to balance the two for most applications. If you know a user just entered vital data as a new row, execute an insert command to the database, and put a copy in your local cached DataSet. Then your local queries can run against the disconnected data, and inserts are stored immediately.
A DataSet can be serialized very easily, so you could implement your own regular backup to disk by using serialization of the DataSet to the filesystem. This will give you some protection, but you will have to write your own code to check for any data that your application may have saved to disk previously and so on...
You could also ignore DataSets and use SqlDataReaders and SqlCommands for the same sort of 'direct access to the database' you are describing.

Database last updated?

I'm working with SQL Server 2000 and I need to determine which of these databases are actually being used.
Is there a SQL script I can use to tell me the last time a database was updated? Read? Etc.?
I Googled it, but came up empty.
Edit: the following targets the issue of finding, after the fact, the last access date. With regard to figuring out who is using which databases, this can be definitively monitored with the right filters in SQL Profiler. Beware, however, that Profiler traces can get quite big (and hence slow/hard to analyze) when the filters are not adequate.
Changes to the database schema, i.e. the addition of tables, columns, triggers, and other such objects, typically leave "dated" tracks in the system tables/views (I can provide more detail about that if need be).
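For instance, on SQL Server 2000 the sysobjects system table records each object's creation date (there is no modification date in that release; that column arrived later as modify_date in sys.objects), so a minimal sketch might be:

    -- Most recently created user tables in the current database.
    SELECT name, crdate
    FROM sysobjects
    WHERE xtype = 'U'   -- 'U' = user table
    ORDER BY crdate DESC;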
However, unless the data itself includes timestamps of sorts, there are typically very few sure-fire ways of knowing when data was changed, unless the recovery model involves keeping all such changes in the log. In that case you need some tools to "decompile" the log data...
With regard to detecting "read" activity... that's a tough one. There may be some computer-forensics-like tricks, but again, no easy solution I'm afraid (beyond the ability to see in the server activity the very last query for each still-active connection; obviously a very transient thing ;-) )
I typically run the Profiler if I suspect the database is actually being used. If there is no activity, I simply set it to read-only or take it offline.
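Those two moves, assuming a database name of SuspectedIdleDb (a placeholder) and that nobody is connected at the time:

    -- Make the database read-only; existing data stays queryable.
    ALTER DATABASE SuspectedIdleDb SET READ_ONLY;

    -- Or take it out of circulation entirely; complaints will tell you who used it.
    ALTER DATABASE SuspectedIdleDb SET OFFLINE;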
You can use a transaction log reader to check when data in a database was last modified.
With SQL 2000, I do not know of a way to tell when the data was read.
What you can do is put a trigger on login to the database and track when logins are successful, along with associated variables, to find out who / which application is using the DB.
If your database is fully logged, create a new transaction log backup and check its size. The log backup will have a fixed, small size when no changes have been made to the database since the previous transaction log backup was taken, and it will be larger when there were changes.
This is not a very exact method, but it is easy to check, and it might work for you.
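A sketch of that check, assuming a database named MyDatabase and a local backup path (both placeholders); the msdb.dbo.backupset table records the size of every backup taken:

    -- Take a fresh transaction log backup.
    BACKUP LOG MyDatabase TO DISK = 'C:\backups\MyDatabase_log.bak';

    -- Compare the sizes of recent log backups: a run of identical, minimal
    -- sizes suggests nothing has changed between backups.
    SELECT backup_start_date, backup_size
    FROM msdb.dbo.backupset
    WHERE database_name = 'MyDatabase'
      AND type = 'L'              -- 'L' = transaction log backup
    ORDER BY backup_start_date DESC;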