Is there a way to drop an index by field names - postgresql

In some of my migrations, indexes were created without an explicit name, so PostgreSQL generated the index names automatically.
I'm looking for a way to drop those indexes by the specific field names they cover instead of by index name.
There seems to be a way to get the list of indexes:
select *
from pg_indexes
where tablename = 'myTable'
Any recommendations for an elegant way to "programmatically" DROP INDEX given only a specific field name or a combination of field names?
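There is no DROP INDEX ... ON (column) syntax, but as a rough sketch (the table and column names mytable and col1 below are placeholders, and this assumes the indexes live in the current search_path), you can look the index names up in the system catalogs via pg_index and pg_attribute and drop whatever matches:
-- Sketch: drop every index on "mytable" that includes the column "col1".
-- Verify the selected index names before running this against real data.
DO $$
DECLARE
    idx text;
BEGIN
    FOR idx IN
        SELECT i.relname
        FROM pg_index x
        JOIN pg_class t ON t.oid = x.indrelid
        JOIN pg_class i ON i.oid = x.indexrelid
        JOIN pg_attribute a ON a.attrelid = t.oid AND a.attnum = ANY (x.indkey)
        WHERE t.relname = 'mytable'
          AND a.attname = 'col1'
          AND NOT x.indisprimary   -- keep the primary key index
    LOOP
        EXECUTE format('DROP INDEX %I', idx);
    END LOOP;
END $$;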

Related

APACHE AGE index creation understanding

Where are indexes created in the AGE extension on Postgres? Normally indexes are created on the columns we specify, but AGE is graph-based, with nodes and edges, so how do indexes work in this scenario?
Since nodes are different from columns, how do indexes work on them?
I'm trying to understand how indexes work in Apache AGE.
With postgres running, you can see a list of relations of your current database by typing the command \d.
ag_catalog.ag_graph and ag_catalog.ag_label are both tables which contain the data of your graphs and labels. With the commands \d ag_graph and \d ag_label, you can see the column names, their types, whether they are nullable, and their indexes. Note that the indexes are based on the column names.
Choose one of the tables that appear with the command SELECT * FROM ag_catalog.ag_label;. After this, run the \d command followed by the label's table name (e.g. \d schema_name."label_name"). You'll see that there are only two columns: id and properties.
You could create an index on one of these columns, but I don't think you can do this with one of the properties of the node.
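As a small hedged sketch (the graph schema my_graph and label "Person" below are hypothetical), creating an index on the id column works the same as on any other table:
-- Index on the id column of a label table; my_graph and "Person" are placeholders.
CREATE INDEX person_id_idx ON my_graph."Person" (id);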
In AGE the initial schema is 'ag_catalog'. For every new graph a new schema is formed. To see the general indexes we can do:
SELECT schemaname, indexname, tablename
FROM pg_indexes
WHERE schemaname = 'ag_catalog'
ORDER BY indexname;
For a particular graph you can also see the indexes by just changing
WHERE schemaname = '<graph_name>'

fuzzy finding through database - prisma

I am trying to build a storage manager where users can store their lab samples/data. Unfortunately, this means that the tables will end up being quite dynamic, as each sample might have different data associated with it. I will still require users to define a schema, so I can display the data properly, however, I think this schema will have to be represented as a JSON field in the underlying database.
I was wondering, in Prisma, is there a way to fuzzy search through collections? Could I type something like help and then return all rows that match this expression ANYWHERE in their columns (including the JSON fields)? Could I do something like this at all with PostgreSQL? Or with MongoDB?
Thank you.
You can easily do that with jsonb in PostgreSQL.
If you have a table defined like
CREATE TABLE userdata (
id bigint PRIMARY KEY,
important_col1 text,
important_col2 integer,
other_cols jsonb
);
You can create an index like this
CREATE INDEX ON userdata USING gin (other_cols);
and search efficiently with
SELECT id FROM userdata WHERE other_cols @> '{"attribute": "value"}';
Here, @> is the jsonb containment operator in PostgreSQL, and queries using it can be supported by the GIN index created above.
Yes, in PostgreSQL you surely can do this. It's quite straightforward. Here is an example.
Let your table be called the_table, aliased as tht. Cast an entire table row to text with tht::text and use the case-insensitive regular expression match operator ~* to find rows that contain help anywhere in this text. You can use a more elaborate and powerful regular expression for searching too.
Please note that the ~* match on a whole-row text cast will not use any index, so this query will result in a sequential scan.
select * -- or whatever list of expressions you need
from the_table as tht
where tht::text ~* 'help';
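If the sequential scan becomes a problem, one possible alternative (a sketch, not part of the original answer) is the pg_trgm extension, which lets a GIN trigram index support ILIKE and ~* matches on an individual text column, though not on the whole-row cast above. Using the userdata table defined in the first answer:
CREATE EXTENSION IF NOT EXISTS pg_trgm;
-- Trigram index on one text column of the userdata table shown earlier.
CREATE INDEX userdata_col1_trgm_idx ON userdata USING gin (important_col1 gin_trgm_ops);
-- ILIKE (and ~*) on that column can now use the index.
SELECT id FROM userdata WHERE important_col1 ILIKE '%help%';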

Will PostgreSQL generate indexes automatically?

Are there automatic indexes in PostgreSQL, or do users need to create indexes explicitly? If there are automatic indexes, how can I view them? Thanks.
Indexes for primary keys and unique constraints are created automatically. Use CREATE INDEX to create additional indexes. To view the existing structure of a table, including its indexes, use \d table_name in psql.
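A quick way to see this behaviour for yourself (the demo table is just an example):
CREATE TABLE demo (
    id    bigint PRIMARY KEY,  -- automatically creates the index demo_pkey
    email text UNIQUE          -- automatically creates the index demo_email_key
);
SELECT indexname FROM pg_indexes WHERE tablename = 'demo';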
A quick example of creating an index would be:
CREATE INDEX my_index_name ON my_table (my_column);
You can also create an index on multiple columns:
CREATE INDEX my_multicolumn_idx ON my_table (column1, column2, column3);
Or a partial index, which only indexes the rows that meet its condition:
CREATE INDEX my_partial_idx ON my_table (my_column) WHERE my_column > 0;
There is a lot more you can do with them, but that is for the CREATE INDEX documentation to tell you. Also, if you create an index on a production database, use CREATE INDEX CONCURRENTLY (it will take longer, but it will not lock out new writes to the table). Let me know if you have any other questions.
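For example, a concurrent build only differs in the keyword (the names here are placeholders); note that it cannot run inside a transaction block:
-- Builds the index without blocking concurrent writes to my_table.
CREATE INDEX CONCURRENTLY my_concurrent_idx ON my_table (my_column);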
Update:
If you want to view indexes with pure SQL, look at the pg_catalog.pg_indexes view:
SELECT *
FROM pg_catalog.pg_indexes
WHERE schemaname='public'
AND tablename='table';

How multiple indexes in postgres work on the same column

I'm not really sure how multiple indexes would work on the same column.
Let's say I have an id column and a country column, with one index on id and another index on id and country. When I look at the query plan, it seems to be using both of those indexes. I was just wondering how that works. Can I force it to use just the id-and-country index?
Also, is it bad practice to do that? When is it a good idea to index the same column multiple times?
It is common to have indexes on both (id) and (country,id), or alternatively (country) and (country,id), if you have queries that benefit from each of them. You might also have (id) and (id, country) if you want the "covering" index on (id,country) to support index-only scans, but still need the stand-alone index on (id) to enforce a unique constraint.
In theory you could just have (id,country) and still use it to enforce uniqueness of id, but PostgreSQL does not support that at this time.
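As a side note that goes beyond the original answer: on PostgreSQL 11 and later, a covering unique index with INCLUDE gets close to this, enforcing uniqueness on id while carrying country for index-only scans (the table name below is illustrative):
-- PostgreSQL 11+: unique on id, with country stored as a non-key column.
CREATE UNIQUE INDEX mytable_id_covering_idx ON mytable (id) INCLUDE (country);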
You could also sensibly have different indexes on the same column if you need to support different collations or operator classes.
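For example (a hedged sketch with an illustrative text column), two indexes on the same column can serve different kinds of queries:
-- Default opclass: equality, ordering and range queries.
CREATE INDEX mytable_name_idx ON mytable (name);
-- text_pattern_ops: supports LIKE 'abc%' prefix searches in non-C locales.
CREATE INDEX mytable_name_pattern_idx ON mytable (name text_pattern_ops);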
If you want to force PostgreSQL not to use a particular index, to see what happens with it gone, you can drop it in a transaction and then roll the transaction back when done:
BEGIN;
DROP INDEX table_id_country_idx;
EXPLAIN ANALYZE SELECT * FROM ....;
ROLLBACK;

Indexing strategy for full text search in a multi-tenant PostgreSQL database

I have a PostgreSQL database that stores a contact information table (first, last names) for multiple user accounts. Each contact row has a user id column. What would be the most performant way to set up indexes so that users could search for the first few letters of the first or last name of their contacts?
I'm aware of conventional b-tree indexing and PG-specific GIN and GiST, but I'm just not sure how they could (or could not) work together such that a user with just a few contacts doesn't have to search all of the contacts before filtering by user_id.
You should add the account identifier as the first column of any index you create. This will in effect first narrow down the search to rows belonging to that account. For GiST or GIN full text indexes you will need to install the btree_gist or btree_gin extension, respectively, so that the integer account column can be part of the index.
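A hedged sketch of what that could look like with btree_gin (the tsvector expression is illustrative; the table and column names follow the rest of this answer):
CREATE EXTENSION IF NOT EXISTS btree_gin;
-- Composite GIN index: account id first, then the full text search vector.
CREATE INDEX contacts_fts_idx ON contacts USING gin (
    aid,
    to_tsvector('simple', coalesce(firstname, '') || ' ' || coalesce(lastname, ''))
);
-- Prefix full text search scoped to one account; the expression must match the index.
SELECT * FROM contacts
WHERE aid = 123
  AND to_tsvector('simple', coalesce(firstname, '') || ' ' || coalesce(lastname, ''))
      @@ to_tsquery('simple', 'an:*');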
If you only need to search for the first letters, the simplest and probably fastest would be to use a regular btree that supports text operations for both columns and do 2 lookups. You'll need to use the text_pattern_ops opclass to support text prefix queries and lower() the fields to ensure case insensitivity:
CREATE INDEX contacts_firstname_idx ON contacts(aid, lower(firstname) text_pattern_ops);
CREATE INDEX contacts_lastname_idx ON contacts(aid, lower(lastname) text_pattern_ops);
The query will then look something like this:
SELECT * FROM contacts
WHERE aid = 123
  AND (lower(firstname) LIKE 'an%' OR lower(lastname) LIKE 'an%');