PostgreSQL how to DROP TABLE with extreme prejudice [duplicate]

This question already has answers here:
How to drop a PostgreSQL database if there are active connections to it?
(12 answers)
Closed 7 years ago.
We use postgres as a real-time data cache for observations, and we need to drop our tables on a daily basis. Frequently there are clients that still have the database open for reading; in fact they have it open read/write and don't realize it. We have specifically noted that Python opens it read/write and keeps a permanent transaction lock on the DB, which prevents us from dropping the tables.
The data table can have a different number of columns from day to day, so 'delete from table' does not appear to be an option.
We have tried creating a read-only user, but that did not help; the session was still left "IDLE in transaction".
Is there any kind of 'kill -9' for dropping tables?
We are currently on PostgreSQL 8.4 on RHEL 6, but will be migrating to RHEL 7 soon.

If you have administrative access then you can kill all the current sessions. I think your question is similar to this.
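For example, here is a minimal sketch to be run as a superuser; it assumes the 8.4 column name procpid (renamed to pid in 9.2+) and terminates every other session connected to the current database:
-- terminate all other backends connected to this database
SELECT pg_terminate_backend(procpid)
FROM pg_stat_activity
WHERE datname = current_database()
  AND procpid <> pg_backend_pid();
Once those sessions are gone, the DROP TABLE should go through immediately.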

Related

Creating an in-memory table in PostgreSQL?

My understanding of an in-memory table is a table that will be created in memory and would resort to disk as little as possible, if at all. I am assuming that I have enough RAM to fit the table there, or at least most of it. I do not want to use an explicit function to load tables (like pg_prewarm) in memory, I just want the table to be there by default as soon as I issue a CREATE TABLE or CREATE TABLE AS select statement, unless memory is full or unless I indicate otherwise. I do not particularly care about logging to disk.
7 years ago, a similar question was asked here PostgreSQL equivalent of MySQL memory tables?. It has received 2 answers and one of them was a bit late (4 years later).
One answer says to create a RAM disk and add a tablespace for it, or to use an UNLOGGED table, or to wait for global temporary tables. However, I do not have special hardware, only regular RAM, so I am not sure how to go about that. I can use the UNLOGGED feature, but as I understand it there is still quite a bit of disk interaction involved (which is what I am trying to reduce), and I am not sure whether tables will be loaded into memory by default. Furthermore, I do not see how global temporary tables are related; my understanding is that they are just tables in spaces that can be shared.
Another answer recommends an in-memory column store engine, and then using a function to load everything into memory. The issue I have with this approach is that the engine being referred to looks old and unmaintained, and I cannot find any other. Also, I was hoping I wouldn't have to explicitly resort to a 'load into memory' function, but instead that everything would happen by default.
I was just wondering how to get in-memory tables now in Postgres 12, 7 years later.
Postgres does not have in-memory tables, and I am not aware of any serious work on this topic at the moment. If you need this capability then you can use one of the specialized in-memory databases like Redis, Memcached or MonetDB. There are FDW drivers for these databases, so you can create in-memory tables in a specialized database and work with those tables from Postgres via foreign tables.
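A rough sketch of the foreign-table route, assuming the redis_fdw extension is installed (option names can vary between FDW releases, so treat this as illustrative only):
CREATE EXTENSION redis_fdw;
CREATE SERVER redis_server
    FOREIGN DATA WRAPPER redis_fdw
    OPTIONS (address '127.0.0.1', port '6379');
CREATE USER MAPPING FOR PUBLIC SERVER redis_server;
-- a key/value foreign table backed by Redis database 0
CREATE FOREIGN TABLE redis_cache (key text, value text)
    SERVER redis_server
    OPTIONS (database '0');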
MySQL in-memory tables were necessary when there was only the MyISAM engine, because that engine had very primitive I/O capabilities and MySQL did not have its own buffers. Now MySQL has the InnoDB engine (with modern join implementations like other databases), and a lot of the arguments for using MySQL in-memory tables are obsolete. In comparison to the old MySQL, Postgres has its own buffers and does not bypass the file system cache, so all of your RAM is available for your data and you have to do nothing. Ten years ago we had to use the MySQL in-memory engine to get good enough performance, but after migrating to Postgres we have had better performance without in-memory tables.
If you have a lot of memory then Postgres can use it by default - via file system cache.
Since this question is specific to Postgres: there is no in-memory table, but there is the materialized view, which can also be refreshed. See if that fits your requirements.
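A minimal sketch, with made-up names for the view and its source table:
CREATE MATERIALIZED VIEW obs_cache AS
    SELECT * FROM observations;   -- hypothetical source table
-- re-run the defining query whenever the cached data should be rebuilt
REFRESH MATERIALIZED VIEW obs_cache;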

FireDAC: Shared Lock on Table with Firebird

I'm using Delphi 10.1 with FireDAC to connect to Firebird.
I would like to open a table in exclusive mode in Firebird with FireDAC.
How can I do that?
Firebird does not handle table or row locks. So there's no way you're going to get this to work with FireDAC... no connection parameters can do this magic.
What you can do with Firebird is to use the entire database in single user mode. To do this, you must shut it down, run GFIX to flag it as a single user database, and then reconnect to the database. You can find more details on the Firebird How-To FAQ. But I doubt this is what you are looking for.
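For reference, the GFIX step looks roughly like this; just a sketch, assuming a local database file /data/mydb.fdb and SYSDBA credentials:
# force off other attachments after 60 seconds and switch to single-user (exclusive) mode
gfix -shut single -force 60 /data/mydb.fdb -user SYSDBA -password masterkey
# ... do the exclusive work, then bring the database back online
gfix -online /data/mydb.fdb -user SYSDBA -password masterkey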
You should explain better what you are trying to do. With real SQL servers you should not feel the need to lock tables or rows. Transactions and transaction isolation should be enough to handle most situations. If not, then you should probably start thinking about application level locks, that is, if you have just one application that uses the database.

How can I obtain the creation date of a DB2 database without connecting to it?

How can I obtain the creation date or time of an IBM DB2 database without connecting to the specified database first? Solutions like:
select min(create_time) from syscat.tables
and:
db2 list tables for schema SYSIBM
require me to connect to the database first, like:
db2 connect to dbname user userName using password
Is there another way of doing this through a DB2 command instead, so I wouldn't need to connect to the database?
Can the db2look command be used for that?
Edit 01: Background Story
Since more than one person asked why I need to do this and for what reasons, here is the background story.
I have a server with a DB2 DBMS where many people and automated scripts create databases for temporary tasks and tests. The data is never meant to be kept for long. However, for one reason or another (e.g., a developer not cleaning up after himself, or tests being stopped forcefully before they can do the cleanup) some databases never get dropped and they accumulate until the hard disk eventually fills up. So the idea of the app is to look up the age of each database and drop it if it's older than 6 months (for example).

Retrieve a progress of index creation process in PostgreSQL

Consider a long-running index-creation query in PostgreSQL, something like this:
CREATE INDEX some_idx
ON some_table USING btree
(some_varchar_column COLLATE pg_catalog."default");
The question is: how can I retrieve the progress of this query? Is it possible at all?
It would be interesting to know how to do this in any of these ways:
using pgAdmin
using SQL
using some internal PostgreSQL tools.
Maybe this additional info could influence the answer: PostgreSQL 9.3 on Windows 7, 64-bit.
There's a wiki page on this very topic, which links to several resources, though their accuracy was already in question a few years ago. There's also a thread on the pgsql-hackers list from 2006 or 2007 about adding progress indicators, in which EnterpriseDB's Greg Stark makes the same point.
In Postgres v12+ the view pg_stat_progress_create_index should give you this information.
https://www.postgresql.org/docs/12/progress-reporting.html
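For example, from another session while the CREATE INDEX is running:
-- one row per backend that is currently building an index
SELECT pid, phase, blocks_done, blocks_total,
       round(100.0 * blocks_done / nullif(blocks_total, 0), 1) AS pct_done
FROM pg_stat_progress_create_index;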

Communicating with Informix from PostgreSQL? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Please help me to set up connectivity from PostgreSQL to Informix (latest versions of both). I would like to be able to perform a query on Informix from PostgreSQL. I am looking for a solution that will not require data exports (from Informix) and imports (to PostgreSQL) for every query.
I am very new to PostgreSQL and need detailed instructions.
As Chris Travers said, what you're seeking to do is not easy.
In theory, if you were working with Informix and needed to access PostgreSQL, you could (buy and) use the Enterprise Gateway Manager (EGM) and use the ODBC driver for PostgreSQL to allow Informix to connect to PostgreSQL. The EGM would do its utmost to appear to be another Informix database while actually accessing PostgreSQL. (I've not validated that PostgreSQL is supported, but EGM basically needs an ODBC driver to work, so there shouldn't be any problem — 'famous last words', probably.) This will include an emulation of 2PC (two-phase commit); not perfect, but moderately close.
For the converse connection (working with PostgreSQL and connecting to Informix), you will need to look to the PostgreSQL tool suite — or other sources.
You haven't said which version you are using. There are some limitations to be aware of but there are a wide range of choices.
Since you say this is import/export, I will assume that read-only options are not sufficient. That rules out PostgreSQL 9.1's foreign data wrapper system.
Depending on your version, David Fetter's DBI-Link may suit your needs, since it can execute queries on remote tables (see https://github.com/davidfetter/DBI-Link). It hasn't been updated in a while, but the implementation should be pretty stable and usable across versions. If that fails, you can write stored procedures in an untrusted language (PL/PythonU, PL/PerlU, etc.) to connect to Informix and run the queries there. Note that there are limits regarding transaction handling you will run into in this case, so you may want to run any queries on the other tables using deferred constraint triggers so that everything gets run at commit time.
Edit: A cleaner way occurred to me: use foreign data wrappers for import and a separate client app for export.
In this approach you are going to have four basic components, but they will be loosely coupled and subject to proper transactional controls. You can even use two-phase commit if you want. The four components are (not a complete working example here, but at least a roadmap to one):
Foreign data wrappers for data import, allowing you to see data from Informix.
Views of data to be exported.
External application which manages the export aspect, written in a language of your choice. This listens on a channel like LISTEN export_informix;
Triggers on the underlying tables that make up the view of data to be exported, which raise a NOTIFY export_informix (see the sketch after this list).
The notifications are raised on commit, so you basically have two stages to your transaction:
Write data in PostgreSQL, flag data to be exported. Commit.
Read data from PostgreSQL, export to Informix. Commit on both sides (TPC?).
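As a rough sketch of the trigger side (the table, function, and column names here are illustrative; only the channel name matches the LISTEN above):
-- raise a notification for each row that lands in the (hypothetical) export queue table
CREATE OR REPLACE FUNCTION notify_export() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('export_informix', NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER export_queue_notify
    AFTER INSERT ON export_queue
    FOR EACH ROW EXECUTE PROCEDURE notify_export();
The external application then sees the notification once the writing transaction commits and performs the Informix side of the export.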