PostgreSQL external transaction feature

I have a big web application and tests that make requests to the app running in a sandbox. After each test I used to roll back the database using db migrate rollback && db migrate && db seed, but now that the number of tests has grown, this takes too long. So I am looking for a feature that can wrap a series of database commands in a transaction and, once the test finishes, cancel that transaction without modifying the app's source code (or achieve the same effect another way). Maybe there are some Postgres database parameters or extensions for this?
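For context, the behaviour being asked for is ordinary transaction semantics applied around a whole test; a minimal illustration in plain SQL (the test's statements are hypothetical):

BEGIN;
-- ... the test's INSERTs / UPDATEs / DELETEs run here ...
ROLLBACK;  -- PostgreSQL discards everything, DDL included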

I found another way: I can make a dump once, and then drop and restore from that dump each time, which is much faster. See this topic:
Truncating all tables in a Postgres database
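A minimal sketch of that dump/drop/restore cycle, assuming a database named testdb and a snapshot file of your choosing (both names hypothetical):

# one-time snapshot in pg_dump's custom format
pg_dump -Fc testdb > snapshot.dump
# before each test run: drop, recreate, restore
dropdb testdb
createdb testdb
pg_restore -d testdb snapshot.dump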

Related

Capture PostgreSQL traffic to replay it on another database

In order to check whether a new version of the database (in staging) reacts the same way as (or better than) the production database, I would like to capture all requests executed on the production server, so as to replay them on the staging database.
Is there a tool that does this job?
What would be interesting is the ability to compare execution times during the replay and highlight the queries that ran slower.
Otherwise, I thought I would capture queries by setting log_min_duration_statement to 0 (so that all queries get logged in the PostgreSQL logfile), and then parse the file to grab and replay the requests on the other server. Is there a better way to do it?
(The current database version is PostgreSQL 9.6, but I'm interested even if it only works on a higher version, for next time.)
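The logging approach described above can be switched on without a restart; a small sketch, run as a superuser:

ALTER SYSTEM SET log_min_duration_statement = 0;  -- log every statement with its duration
SELECT pg_reload_conf();                          -- apply the change without restarting

Setting the parameter back to -1 disables per-statement logging again once the capture is done.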

Limiting downtime while importing new tables on Heroku Postgres

We have a few tables with a pretty large number of entries that sometimes need to be re-imported. Only some tables are concerned, so we don't use restore but a command similar to this:
heroku pg:psql --app ourapp HEROKU_POSTGRESQL_WHITE < data.sql
This takes roughly 30 minutes, mainly due to the data upload (about 1 GB).
Until now we've put the app in maintenance mode to import the new data, but we'd like to avoid the long downtime in the future.
What would be the best way to achieve this in Heroku?
Our first thought to reduce downtime was to find a way to run the command from a server that will have much better upload speed, but it's still not perfect.
We've thought of using followers but some other tables need to be written to when users are interacting with the app, and we're not sure if the app can be told to fall back on followers even if the master db doesn't have issues.
We've also thought of entirely caching all relevant tables while we're uploading new data, and then clearing that cache, but Heroku doesn't seem to give enough control on the cache to achieve that.
Import into a temporary second table, then drop the first table and rename the second one inside a transaction, e.g.:
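A minimal sketch of that swap, assuming a hypothetical table named events; DDL is transactional in PostgreSQL, so readers never see a moment without the table:

CREATE TABLE events_new (LIKE events INCLUDING ALL);
-- ... run the long import into events_new here, outside the transaction ...
BEGIN;
DROP TABLE events;
ALTER TABLE events_new RENAME TO events;
COMMIT;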

Is there any way in sqlmap (the SQL-injection testing tool) to fetch database tables without running the complete test?

When I test a URL, it takes a long time to complete the whole test and retrieve the database tables. Is there a quicker way to fetch the database or its tables?
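One common way to shorten a run is to narrow what sqlmap tests; a hedged sketch, assuming the vulnerable parameter (id), the URL and the database name are already known (all hypothetical here):

# restrict to one parameter and one technique, then only enumerate
sqlmap -u "http://target.example/page.php?id=1" -p id --technique=B --batch --dbs
sqlmap -u "http://target.example/page.php?id=1" -p id --technique=B --batch -D somedb --tables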

Best way to backup and restore data in PostgreSQL for testing

I'm trying to migrate our database engine from MsSql to PostgreSQL. In our automated tests, we restore the database to a "clean" state at the start of every test. We do this by computing the "diff" between the working copy of the database and the clean copy (table by table), then copying over any records that have changed and deleting any records that have been added. So far this strategy has seemed the best fit for us, because per test not a lot of data changes and the database is not very big.
Now I'm looking for a way to do essentially the same thing with PostgreSQL, and I'm considering exactly the same approach. But before doing so, I was wondering whether anyone else has done something similar, and what method you used to restore data in your automated tests.
On a side note: I considered using MsSql's snapshot or backup/restore strategy. The main problem with these methods is that I would have to re-establish the db connection from the app after every test, which is not possible at the moment.
If you're okay with using some extra storage, and if (like me) you're not interested in re-inventing the wheel by checking for diffs in your own code, you should try creating a new DB per run via the template feature of the createdb command (or the CREATE DATABASE statement) in PostgreSQL.
For example:
(from bash) createdb todayDB -T snapshotDB
or
(from psql) CREATE DATABASE todayDB TEMPLATE snapshotDB;
Pros:
In theory, you always get exactly the same DB by design (no custom logic involved)
Copying is a file-level transfer, not a SQL restore, so it takes far less time (no SQL is re-run, no indexes are rebuilt, no tables are restored one by one)
Cons:
Takes twice the disk space (although the template could sit on low-performance storage such as NFS)
The template database must have no other active connections while the copy is made, or CREATE DATABASE fails
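A hedged sketch of the per-run cycle under those constraints, reusing the hypothetical todayDB / snapshotDB names from above:

# close any idle sessions still attached to the template
psql -d postgres -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'snapshotDB';"
# then rebuild the working copy from the template
dropdb --if-exists todayDB
createdb todayDB -T snapshotDB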
For my specific situation, I decided to go back to the original solution, which is to compare the "working" copy of the database with the "clean" copy.
There are three types of changes (INSERT, UPDATE and DELETE), handled in two steps (see the sketch below):
For INSERTed records: find max(id) in the clean table and delete any record in the working table with a higher id.
For UPDATEd or DELETEd records: find all records in the clean table EXCEPT the records found in the working table, then UPSERT those records into the working table.
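A minimal sketch of those two steps, assuming a hypothetical table users(id, name) with id as primary key, kept in a clean schema and a working schema:

-- step 1: drop rows the test INSERTed (ids above the clean maximum)
DELETE FROM working.users
WHERE id > (SELECT max(id) FROM clean.users);
-- step 2: upsert back every clean row that no longer appears verbatim
INSERT INTO working.users (id, name)
SELECT id, name FROM clean.users
EXCEPT
SELECT id, name FROM working.users
ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name;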

How can I obtain the creation date of a DB2 database without connecting to it?

How can I obtain the creation date or time of an IBM DB2 database without connecting to the specified database first? Solutions like:
select min(create_time) from syscat.tables
and:
db2 list tables for schema SYSIBM
require me to connect to the database first, like:
db2 connect to dbname user userName using password
Is there another way of doing this through a DB2 command instead, so that I don't need to connect to the database?
Can the db2look command be used for that?
Edit 01: Background story
Since more than one person asked why I need to do this and for what reasons, here is the background story.
I have a server with a DB2 DBMS that many people and automated scripts use to create databases for temporary tasks and tests. It's never meant to keep the data for long. However, for one reason or another (e.g. a developer not cleaning up after themselves, or tests being killed before they can clean up), some databases never get dropped, and they accumulate until the hard disk eventually fills up. So the idea of the app is to look up the age of each database and drop it if it's older than, say, 6 months.
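If connecting turns out to be unavoidable after all, a hedged sketch of the cleanup loop, built from the same commands quoted above (the credentials and the age cutoff are placeholders):

# iterate over the locally cataloged databases
for db in $(db2 list database directory | awk '/Database name/ {print $4}'); do
  db2 connect to "$db" user userName using password >/dev/null
  created=$(db2 -x "select min(create_time) from syscat.tables")
  db2 connect reset >/dev/null
  # ... if $created is older than the cutoff: db2 drop database "$db"
done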