I have a table with a column that contains an array of numbers, e.g. {1,3,45}.
I was looking for a way to select the column value from the second index onwards, so that I get {3,45}.
Is there any way to do this in PostgreSQL?
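This can be done with PostgreSQL's array slice syntax. A minimal sketch, assuming a `text[]` column named `arr` (the table name is hypothetical); since 9.6 the upper bound of a slice may be omitted:

```sql
-- Hypothetical table with a text[] array column:
CREATE TABLE tbl (arr text[]);
INSERT INTO tbl VALUES ('{1,3,45}');

-- Slice from the second element to the end (omitted upper bound, 9.6+):
SELECT arr[2:] FROM tbl;        -- {3,45}

-- On older versions, spell the upper bound out explicitly:
SELECT arr[2:array_length(arr, 1)] FROM tbl;
```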
My table (table) has a JSONB field (data) that contains a field with an array where I store tags (tags).
I query that table with an expression like:
SELECT * FROM table WHERE data->'tags' @? '$[*] ? (@ like_regex ".*(foo|bar).*" flag "i")';
With such use-case is there a way for me to index the data->'tags' array to speed up the query? Or should I rather work on moving the tags array out of the JSONB field and into a TEXT[] field and index that?
I've already tried:
CREATE INDEX foo ON tbl USING GIN ((data->'tags') jsonb_path_ops);
but it doesn't work: https://gist.github.com/vkaracic/a62ac917d34eb6e975c4daeefbd316e8
The index you built can be used (if you set enable_seqscan=off, you will see that it does get used), but it is generally not chosen because it is pretty useless for this query. The only rows it can rule out through the index are the ones that don't have the 'tags' key at all, and even that is poorly estimated, so the index probably won't be used without drastic measures.
You could try converting to text[] and then using parray_gin, but it would probably be better to convert to a child table with a text column and then use pg_trgm.
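A hedged sketch of the suggested child-table approach (the table name, key column, and child-table layout are assumptions, not the asker's schema):

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- One row per tag, referencing the parent row:
CREATE TABLE tags (
    tbl_id bigint NOT NULL,
    tag    text   NOT NULL
);

-- Trigram index that supports regex / ILIKE searches:
CREATE INDEX tags_tag_trgm_idx ON tags USING gin (tag gin_trgm_ops);

-- The original jsonpath query then becomes a join:
SELECT DISTINCT t.*
FROM   tbl t
JOIN   tags ON tags.tbl_id = t.id
WHERE  tags.tag ~* '(foo|bar)';
```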
In my database I have a JSONB array column, and I want to count the elements inside that column during my SELECT query from TypeORM.
I am using Postgres.
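A hedged sketch of the counting part in plain SQL with jsonb_array_length (the table and column names are assumptions); in TypeORM the same expression could be passed as a raw select, e.g. via the query builder's addSelect:

```sql
-- jsonb_array_length counts the elements of a top-level JSONB array.
-- Assumed table "tbl" with a JSONB column "data":
SELECT id,
       jsonb_array_length(data) AS element_count
FROM   tbl;
```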
So I have a table where I'm trying to get rid of some rows.
All these rows contain a letter where they should only contain a numeric value.
Example columns:
So I pretty much want to copy the value of grade_percent from row 1 into class_rank of row 2, and then delete row 1.
The thing is, I have about 9k rows, and there are different marking_period_ids.
I was thinking of doing something like
UPDATE table SET class_rank=(SELECT exam from table WHERE marking_period_id)
but that's where I get lost, as I have no idea how to make this repetitive straight from a PostgreSQL query.
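One common pattern for this is a self-joined UPDATE followed by a DELETE. A hedged sketch, where the table name, the matching key (student_id), and the flag identifying the unwanted rows (is_exam) are all assumptions about the schema:

```sql
-- Copy grade_percent from the unwanted source rows onto the matching
-- target rows, pairing rows by student and marking period:
UPDATE tbl AS dst
SET    class_rank = src.grade_percent
FROM   tbl AS src
WHERE  src.student_id        = dst.student_id
AND    src.marking_period_id = dst.marking_period_id
AND    src.is_exam                 -- hypothetical marker for source rows
AND    NOT dst.is_exam;

-- Then remove the source rows in one statement:
DELETE FROM tbl WHERE is_exam;
```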
Like the title says, how can I index a JSONB array?
The contents look like...
["some_value", "another_value"]
I can easily access the elements like...
SELECT * FROM table WHERE data->>0 = 'some_value';
I created an index like so...
CREATE INDEX table_data_idx ON table USING gin ((data) jsonb_path_ops);
When I run EXPLAIN, I still see it sequentially scanning...
What am I missing on indexing an array of text elements?
If you want to support that exact query with an index, the index would have to look like this:
CREATE INDEX ON "table" ((data->>0));
If you want to use the index you have, you cannot limit the search to just a specific array element (in your case, the first). You can speed up a search for some_value anywhere in the array:
SELECT * FROM "table"
WHERE data #> '["some_value"]'::jsonb;
I ended up taking a different approach. I was still having problems getting the search to work with the JSONB type, so I switched my column to a varchar ARRAY:
CREATE TABLE table (
data varchar ARRAY NOT NULL
);
CREATE INDEX table_data_idx ON table USING GIN (data);
SELECT * FROM table WHERE data #> '{some_value}';
This works and is using the index.
I think the problem with my JSONB approach is that the element is actually nested much further down and is being treated as text,
i.e. data->'some_key'->>'array_key'->>0,
and every time I try to search I get all sorts of invalid token errors and other such things.
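Those "invalid token" errors are consistent with using ->> in the middle of the path: ->> returns text, so nothing can be chained after it, while -> returns jsonb and can be chained further. A sketch of the distinction:

```sql
-- ->  returns jsonb and can be chained further;
-- ->> returns text and must come last in the path.
-- Broken:  data->'some_key'->>'array_key'->>0
-- Working:
SELECT data->'some_key'->'array_key'->>0 FROM "table";
```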
You may want to create a materialized view that has the primary key (or other unique index of your table) and expands the array field into a text column with the jsonb_array_elements_text function:
CREATE MATERIALIZED VIEW table_mv
AS
SELECT DISTINCT table.id, jsonb_array_elements_text(data) AS array_elem FROM table;
You can then create a unique index on this materialized view (primary keys are not supported on materialized views):
CREATE UNIQUE INDEX table_array_idx ON table_mv(id, array_elem);
Then query with a join to the original table on its primary key:
SELECT * FROM table INNER JOIN table_mv ON table.id = table_mv.id WHERE table_mv.array_elem = 'some_value';
This query should use the unique index and then look up the primary key of the original table, both very fast.
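Note that a materialized view is a snapshot and must be refreshed to pick up changes in the base table; the unique index created above is exactly what a concurrent refresh requires:

```sql
-- Rebuild the view without locking out readers
-- (CONCURRENTLY needs a unique index on the view):
REFRESH MATERIALIZED VIEW CONCURRENTLY table_mv;
```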
I tried the code below, but it raised a "syntax error at or near array". I googled around and found nothing. Is it possible to do this? Thanks!
alter table "tablename" alter column "columnname" TYPE ARRAY(VARCHAR(200));
It's unclear to me if you want to increase the length of each entry, or the length of the array.
An array declaration follows the form datatype[] - the [] makes the column an array and the data type specification is the one for the base type.
So, if you want to increase the length of each array element, just declare an array with a longer varchar length: varchar(200)[]:
alter table "tablename"
alter column "columnname" TYPE varchar(200)[];
If you want to use the ARRAY keyword, that needs to be put after the data type:
alter table "tablename"
alter column "columnname" TYPE varchar(200) array;
If you want to increase the length of the array (= allow more array elements), you don't need to do anything: even if you specify an array dimension, it is not enforced by Postgres.
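This can be verified directly; a minimal sketch with a hypothetical table:

```sql
-- The declared dimension [3] is documentation only and is not enforced:
CREATE TABLE demo (vals varchar(200)[3]);
INSERT INTO demo VALUES ('{a,b,c,d,e}');   -- succeeds despite the [3]
SELECT vals FROM demo;                     -- {a,b,c,d,e}
```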