I have about 50 users that need to be removed. I can drop them one after another by running a DROP command for each, but I was wondering whether there is a way to significantly reduce the number of queries. I'm fairly new to psql and just want to know whether there is an efficient way, or a function, that would help me achieve this.
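One option (a sketch only; the user names and the LIKE pattern below are hypothetical) relies on the fact that DROP USER accepts a comma-separated list of names, so a handful of statements can cover all 50:

-- several users in one statement
DROP USER IF EXISTS alice, bob, carol;

-- or generate the statements from pg_roles, review the output, then run it
SELECT format('DROP USER IF EXISTS %I;', rolname)
FROM pg_roles
WHERE rolname LIKE 'temp_user_%';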
I have a Postgres table with duplicate indexes (called someName and someName1) applied to the same columns. I would like to know which user executed the DDL that created these indexes, and when it happened. Is this possible in Postgres?
If you hadn't already set up some kind of auditing or aggressive logging before this happened, then your options are pretty limited.
If you retain WAL files, you could go exploring through those (with pg_waldump and other tools, or by doing PITR) to pinpoint the time. This will probably not be a quick and painless exercise. By looking at surrounding changes, or at log files from the same time, you might be able to figure out who was logged on at the time and also had permissions to create the index.
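As a starting point for that digging (a sketch; the index names come from the question and may be stored lower-cased if they were created without double quotes), the relfilenode of each suspect index is what appears in WAL records, so look it up first:

-- map the suspect indexes to the file nodes referenced by WAL records
SELECT relname, relfilenode
FROM pg_class
WHERE relname IN ('someName', 'someName1');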
So I want to build something like an email queue, say:
insert into messages_to_send (email, firstname, lastname, message_text)
select email, firstname, lastname, 'hello' from subscribers where list = 99
So something like this, but say I want to do 100k rows, or maybe some day a million.
It seems like the COPY command would perform better. I don't want to lock the messages_to_send table or slow down the rest of the database. Speed isn't a big issue; I just want the rows to get in there eventually, and another process will pick them up. I'm not that familiar with Postgres, and I couldn't tell from reading whether COPY is suited to that.
So what I came up with is to insert 1000 rows at a time (I saw someone post that this was safer), make a table especially for the queue and for searching, and move rows to another table after the send, making sure the queue table is cleaned out regularly. I suppose if I really want to scale I'd put that table in a different type of database than Postgres, but this should be good enough for me for now.
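For what it's worth, here is a sketch of that 1000-at-a-time idea (it assumes subscribers.email is unique; last_email is a placeholder the sending process advances after each batch):

INSERT INTO messages_to_send (email, firstname, lastname, message_text)
SELECT email, firstname, lastname, 'hello'
FROM subscribers
WHERE list = 99
  AND email > :last_email    -- keyset paging: continue after the last row inserted
ORDER BY email
LIMIT 1000;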
I'd like to analyze our db and create better indices for it.
Because our app is very complex and we don't know which parts of it are used the most, I'd like to somehow see which read queries hit our db most often.
That would make it much easier for me to analyze them and create the right indices.
Any ideas on how to do that?
You can enable database profiling for this.
Get the details here: https://docs.mongodb.com/v3.2/tutorial/manage-the-database-profiler/
Alternatively, a simpler way would be to use mongostat (details here: https://docs.mongodb.com/v3.2/administration/monitoring/), which captures and returns the counts of database operations by type (e.g. insert, query, update, delete, etc.).
We have an app that uses a Postgres database with about 50 tables. Each table contains about 3 million records (on average). The tables get updated with new data every now and then. Now we want to implement a search feature in our app. The search needs to be performed on one table at a time (no joins needed).
I've read about Postgres full-text support and it looks promising. But it seems that Solr is super fast in comparison. Can I use my existing Postgres database with Solr? If tables get updated would I need to re-index everything again?
It is definitely worth giving Solr a try. We moved many MySQL queries involving JOINs on multiple tables with sorting on different fields to Solr. We are very happy with Solr's search speed, sort speed, faceting capabilities and highly configurable text analysis/tokenization options.
If tables get updated would I need to re-index everything again?
No, you can run delta imports to only re-index your new and updated documents. See https://wiki.apache.org/solr/DataImportHandler.
Get started with https://lucene.apache.org/solr/4_1_0/tutorial.html and all the links in there.
Since nobody has leapt in, I'll answer.
I'm afraid it all depends. It depends on (at least)
how big the text is in each "document"
how flexible you want your searching to be
how much integration you need between database and text-search
how fast is fast enough
how much experience you have with both
When I've had a database that needs some text searching, I've just used PG's built-in options. If I didn't have superuser access to the db, or was already running a big Java setup then Solr might well have appealed.
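For reference, a minimal sketch of what PG's built-in option looks like (the documents table and body column are illustrative, not taken from the question): an expression GIN index plus a query that uses the same expression.

CREATE INDEX documents_body_fts_idx
  ON documents
  USING gin (to_tsvector('english', body));

SELECT id, body
FROM documents
WHERE to_tsvector('english', body) @@ plainto_tsquery('english', 'search terms');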
I'm a newbie to pgsql. I have a few questions about it:
1) I know it is possible to access columns as <schema>.<table_name>, but when I try to access columns as <db_name>.<schema>.<table_name> it throws an error like
Cross-database references are not implemented
How do I implement it?
2) We have 10+ tables and 6 of them have 2000+ rows. Is it fine to maintain all of them in one database, or should I create separate databases for them?
3) Of the tables mentioned above that have 2000+ rows, a particular process needs only a few rows of data, and I have created views to get those rows.
For example: a table contains details of employees, who are divided into 3 types: manager, architect, and engineer. Obviously not every process touches this whole table; a process typically just reads some of its data.
I think there are two ways to get the data: SELECT * FROM emp WHERE type='manager', or I can create views for manager, architect, and engineer and run SELECT * FROM view_manager.
Can you suggest a better way to do this? (Both options are sketched below, after the questions.)
4) Do views also require storage space, like tables do?
Thanks in advance.
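For question 3, both options mentioned there can be written out concretely. A sketch using the emp table and type column from the example (the set-returning function is just one alternative to creating a separate view per type):

-- one view per employee type
CREATE VIEW view_manager AS
  SELECT * FROM emp WHERE type = 'manager';

-- or a single function parameterized by type
CREATE FUNCTION emp_by_type(p_type text)
RETURNS SETOF emp
LANGUAGE sql STABLE
AS $$ SELECT * FROM emp WHERE type = p_type $$;

SELECT * FROM view_manager;
SELECT * FROM emp_by_type('architect');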
Cross-database references are not supported inside a single PostgreSQL query, which is exactly what the error message in your first question says. If the tables are in different schemas of the same database, you only need to qualify them with the schema name (and, of course, have the right to query them). You'll come up with something like this:
SELECT alias_1.col1, alias_2.col3 FROM schema_a.table_1 AS alias_1, schema_b.table_2 AS alias_2 WHERE ...
If the table really is in another database, whether on the same instance or a different one, you'll need the dblink contrib module or a foreign data wrapper such as postgres_fdw.
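A sketch of the dblink route (it assumes the dblink extension can be installed and that other_db is the name of the other database; the column list in the alias has to match what the remote query returns):

CREATE EXTENSION IF NOT EXISTS dblink;

SELECT *
FROM dblink('dbname=other_db',
            'SELECT id, name FROM some_table')
     AS t(id integer, name text);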
This question does not make sense. Please refine it.
Generally, views are used to simplify the writing of other queries that reuse them. In your case, as you describe it, maybe a stored procedure would better fit your needs.
No, except the view definition.
1: A workaround is to open a connection to the other database, and (if using psql(1)) set that as your current connection. However, this will work only if you don't try to join tables in both databases.
1) That means it's not a feature Postgres supports. I do not know any way to create a query that runs on more than one database.
2) That's fine for one database. A single database can contain billions of rows.
3) Don't bother creating views, the queries are simple enough anyway.
4) Views don't require space in the database except their query definition.