# things :string is an Array
scope :things, ->(q) { where('ARRAY[?]::varchar[] IN things', Array.wrap(q)) }
scope :things, ->(q) { where('things && ARRAY[?]::varchar[]', Array.wrap(q)) }
scope :things, ->(q) { where('ARRAY[?]::varchar[] <# things', Array.wrap(q)) }
I've tried a few versions, but I can't seem to find the proper incantation. I'm looking to find any row that has any of the things in the array... is there any overlap?
[1, 2, 3] & [1, 8] = t
[1, 2, 3] & [8, 9] = f
I'm trying to mimic ActiveRecord's default where behavior. If I give it an array, it'll get all the matching rows. Is this possible with postgres arrays? Is it even efficient?
One way of doing this is by converting the arrays to sets of rows. Once you have the arrays as sets of rows, you can take the intersection between them and check whether the resulting set is non-empty.
For example:
CREATE TABLE my_test_table(id BIGINT, test_array BIGINT[]);
INSERT INTO my_test_table(id, test_array)
VALUES
(1, array[1,2,3]),
(2, ARRAY[1,5,8]);
SELECT * FROM my_test_table
WHERE array_length(ARRAY(
    SELECT UNNEST(test_array)
    INTERSECT
    SELECT UNNEST(ARRAY[3,15,2])
), 1) > 0;
The result of the SELECT statement above is:
1 | {1,2,3}
This allows for more complex matching of elements of 2 arrays. For example, if you would like to select the arrays that have at least 2 common elements, you could just change the WHERE part to
WHERE array_length(ARRAY(
    SELECT UNNEST(test_array)
    INTERSECT
    SELECT UNNEST(ARRAY[3,15,2])
), 1) > 1;
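For the plain "is there any overlap" test from the question, PostgreSQL also has a dedicated overlap operator, &&, which makes the unnest/intersect detour unnecessary. A minimal sketch against the same test table:
SELECT * FROM my_test_table
WHERE test_array && ARRAY[3,15,2];
In ActiveRecord terms that is essentially the second scope already shown in the question (things && ARRAY[?]::varchar[]).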
Related
In PostgreSQL, I have a simple data store with one JSONB column:
data
----------------------------
{"foo": [1,2,3,4]}
{"foo": [10,20,30,40,50,60]}
...
I need to convert consecutive pairs of values into data points, essentially calling the array variant of ST_MakeLine like this: ST_MakeLine(ARRAY[ST_MakePoint(10,20), ST_MakePoint(30,40), ST_MakePoint(50,60)]) for each row of the source data.
Needed result (note that the x,y order of each point might need to be reversed):
data geometry (after decoding)
---------------------------- --------------------------
{"foo": [1,2,3,4]} LINE (1 2, 3 4)
{"foo": [10,20,30,40,50,60]} LINE (10 20, 30 40, 50 60)
...
Partial solution
I can already iterate over individual array values, but it is the pairing that is giving me trouble. Also, I am not certain if I need to introduce any ordering into the query to preserve the original ordering of the array elements.
SELECT ARRAY(
SELECT elem::int
FROM jsonb_array_elements(data -> 'foo') elem
) arr FROM mytable;
You can achieve this by using window functions lead or lag, then picking only every second row:
SELECT (
    SELECT array_agg((a, b) ORDER BY o)
    FROM (
        SELECT elem::int AS a, lead(elem::int) OVER (ORDER BY o) AS b, o
        FROM jsonb_array_elements(data -> 'foo') WITH ORDINALITY els(elem, o)
    ) AS pairs
    WHERE o % 2 = 1
) AS arr
FROM mytable;
And yes, I would recommend specifying the ordering explicitly, making use of WITH ORDINALITY.
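To get from the pairs to the requested geometry, the same pairing can feed ST_MakePoint and the array variant of ST_MakeLine. A minimal sketch, assuming PostGIS is available, the mytable layout from the question, and even-length arrays as shown (swap a and b if the x,y order needs to be reversed):
SELECT ST_MakeLine((
    SELECT array_agg(ST_MakePoint(a, b) ORDER BY o)
    FROM (
        SELECT elem::int AS a, lead(elem::int) OVER (ORDER BY o) AS b, o
        FROM jsonb_array_elements(data -> 'foo') WITH ORDINALITY els(elem, o)
    ) AS pairs
    WHERE o % 2 = 1
)) AS geom
FROM mytable;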
I have an issue where the SUM function is not actually calculating/adding values for duplicate keys in the array: if the same key is in the array twice, it only adds it to the sum once. It feels like it is skipping keys that appear more than once.
SELECT
w.id,
wc.name,
wc.description,
data_ids,
(SELECT SUM(d.length) FROM data d WHERE d.id = ANY(w.data_ids)) as length
FROM words w
LEFT JOIN "words_content" wc ON wc.word_id = w.id
WHERE w.id = 15
  AND wc.language_id = 2
So if the lengths for data ids 1, 2, 3 are 1, 1, 4, and the data_ids column contains 1, 2, 3, 1, 2, 3, it will display a length of 6 and not 12.
Is there a way to keep adding to the SUM even if the value for a key is already included in it? Note that data_ids is an array of integers.
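The duplicates vanish because d.id = ANY(w.data_ids) matches each data row at most once, no matter how many times its id appears in the array. One way around that is to unnest the array and join against it, so every occurrence contributes to the sum. A minimal sketch, assuming the words/data schema from the query above:
SELECT w.id,
       (SELECT SUM(d.length)
        FROM unnest(w.data_ids) AS ids(id)
        JOIN data d ON d.id = ids.id) AS length
FROM words w
WHERE w.id = 15;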
I'm trying to turn a 1d jsonb array
[1, 2]
into a 2d array where each element is repeated 3 times (the result can be in jsonb)
[[1, 1, 1],
[2, 2, 2]]
My attempt doesn't work
select array(select array_fill(a::text::integer, array[3]))
from jsonb_array_elements('[1,2]'::jsonb) as a;
ERROR: could not find array type for data type integer[]
Maybe it would work in a later PG version, but I'm restricted to PG 9.4.8.
What are the other ways?
First of all, you need to replace array() with array_agg(); then you'll have what you expect, starting with Postgres 9.5.
That being said, your issue is that array_agg() is not able to aggregate arrays prior to 9.5.
There are multiple existing answers for you, but basically you'll need to create a new aggregate function, array_agg_mult:
CREATE AGGREGATE array_agg_mult (anyarray) (
SFUNC = array_cat,
STYPE = anyarray,
INITCOND = '{}'
);
then running the following query should work:
SELECT
array_agg_mult(array[array_fill(a::text::integer, array[3])])
FROM jsonb_array_elements('[1,2]'::jsonb) as a;
Then you should get:
array_agg_mult
-------------------
{{1,1,1},{2,2,2}}
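For completeness: on Postgres 9.5 or later the custom aggregate is not needed, since array_agg() can aggregate arrays directly, as noted above. A sketch of the same query in that form:
SELECT array_agg(array_fill(a::text::integer, array[3]))
FROM jsonb_array_elements('[1,2]'::jsonb) AS a;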
Without using a subquery, I'd like to find out whether all the elements in an array are contained in a given set of numbers. So instead of 1 = ALL(ARRAY[1,1,1]) I want to do something like ALL(ARRAY[1,1,1]) IN (1, 5). Is this possible without using a select statement?
You want to use the @> operator.
-- does the column contain all of
select * from test_arrays where values @> array[6, 9];
select * from test_arrays where values @> '{6, 9}'::int[];
If you want to find rows where any one value of the array is in the other array, use the && operator:
-- does the column contain at-least one of
select * from test_arrays where values && array[6, 9];
select * from test_arrays where values && '{6, 9}'::int[];
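For the question as literally asked (every element of the column being inside a given set of values), the direction of the test is reversed: the "is contained by" operator <@ covers that case. A sketch against the same test_arrays table:
select * from test_arrays where values <@ array[1, 5];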
I happened to write about this a couple of months ago.
http://www.philliphaydon.com/2016/05/07/postgresql-and-its-array-datatype/
I'm writing a query for searching for an element in an array. Searching with a "for" loop is not efficient because my array has a lot of elements, and because of this the query takes a lot of time to execute. So can anyone say how to search for an element in an array without a "for" loop, so that it is faster? I also need the index of the element once it is found.
Thanks,
Karthika
Use the ANY operator:
where 1 = ANY (array_column)
That will return all rows where array_column contains the value 1 at least once. If you want to check for multiple values, see Clodoaldo's answer.
If you create an index on that column, this should be very fast. Something like this:
create index on the_table using gin (the_array_column);
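For reference, the operators a GIN array index supports are <@, @>, =, and &&, so the containment spelling of the same check is guaranteed to be indexable. A sketch, reusing the hypothetical table and column names from above:
select * from the_table where the_array_column @> array[1];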
The following is inspired by the solution shown here: Finding the position of a value in PostgreSQL arrays
with sample_data (pk_column, array_data) as (
values
(1, array[1,2,3,4,5]),
(2, array[7,8,9,11]),
(3, array[5,4,3,2,1]),
(4, array[10,9,8,1,4,6]),
(5, array[7,8,9])
)
select *
from (
select pk_column,
unnest(array_data) as value,
generate_subscripts(array_data, 1) as array_index
from sample_data
where 1 = any(array_data)
) t
where value = 1
The inner WHERE clause will reduce the total work that needs to be done to only those rows that actually contain the value. The outer query will then "explode" the array to get the value's index. But using the function shown in the linked question might actually be what you are after.
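On PostgreSQL 9.5 and later there is also a built-in shortcut: array_position() returns the subscript of the first match directly. A sketch reusing the sample_data CTE from above:
select pk_column, array_position(array_data, 1) as array_index
from sample_data
where 1 = any(array_data);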
Check the contains operator @>
select array[1,2] @> array[1];
?column?
----------
t
http://www.postgresql.org/docs/current/static/functions-array.html