(Sorry for my bad English)
I have an array of users ids as below:
[5, 9, 3, 22, 16]
Obviously the values are dynamic.
Now I need to SELECT all users, but the users with the above ids should come first.
What I've tried so far:
This query gives me the exact answer:
SELECT * FROM users WHERE id IN (5, 9, 3, 22, 16)
UNION ALL
SELECT * FROM users WHERE id NOT IN (5, 9, 3, 22, 16);
But is there any better way?
P.S:
I'm using PostgreSQL 10.
Try this:
select *
from users
order by id not in (5, 9, 3, 22, 16), id
As stated in the documentation, an expression used in the ORDER BY clause
can be an arbitrary expression formed from input-column values.
In particular, you can use a Boolean expression, as values of this type are sortable. Note that false < true.
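Since the ids are dynamic, the same boolean trick can presumably also be written against an array, with the literal below standing in for whatever parameter your driver binds:
-- ids contained in the array evaluate to false and therefore sort first
SELECT *
FROM users
ORDER BY id <> ALL (ARRAY[5, 9, 3, 22, 16]), id;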
You can use a CASE expression in your ORDER BY; I'll give you an example:
select *
from users
order by
CASE WHEN id=5 THEN 1
WHEN id=9 THEN 2
WHEN id=3 THEN 3
WHEN id=22 THEN 4
WHEN id=16 THEN 5
END, id
With CASE you can tell Postgres the "priority" or value you want for each id if you know them beforehand. After that I added "id" so the rest of the rows get ordered properly.
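If the ids are only known at runtime, a possible alternative (a sketch, not part of the original answers) is array_position, available since PostgreSQL 9.5; it also keeps the prioritized rows in the same order as the array:
-- ids outside the array yield NULL, which sorts last in ascending order
SELECT *
FROM users
ORDER BY array_position(ARRAY[5, 9, 3, 22, 16], id) NULLS LAST, id;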
Related
Suppose there's a ranked comment list, and I want to implement relay-style pagination on it.
For example I have this query:
SELECT
    comment.id,
    (comment.like_count * 0.5 + comment.reply_count * 3) AS score
FROM
    comment
WHERE
    comment.is_deleted = false
ORDER BY score DESC
LIMIT 10;
And the ranked comment IDs are in this order: [20, 1, 4, 12, 9, 5]
Now a user passes a cursor 12, and wants to query ranked comments after ID 12, in this case [9, 5]
How can I implement this in PostgreSQL?
Edit: I figured out another way to achieve this by adding a cache layer. So the implementation becomes:
Get the IDs from the cache (for example Redis)
Find the index of the cursor in the array
If the index > -1, then slice the array
Query the database using WHERE IN (a sketch of this last step is below)
That solves my problem, but if anyone could help with the original question, I'd appreciate it.
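A rough sketch of that last step, assuming the sliced ids after the cursor are [9, 5] as in the example above; array_position keeps the rows in the same order as the cached ranking:
SELECT comment.id
FROM comment
WHERE comment.id IN (9, 5)
  AND comment.is_deleted = false
ORDER BY array_position(ARRAY[9, 5], comment.id);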
I have a question about modifying a jsonb column in Postgres.
Basic setup:
array=> ["1", "2", "3"]
Now I have a PostgreSQL table with an id column and a jsonb column named, let's just say, cards:
 id | cards
----+------------------
  1 | {"1": 3, "4": 2}
That's the data in the table named test.
Question:
How do I convert the cards of id->1 FROM {"1": 3, "4": 2} TO {"1": 4, "4":2, "2": 1, "3": 1}
How I expect the changes to occur:
From the array, increment by 1 every element that already exists as a key in the cards jsonb, thus changing {"1": 3} to {"1": 4}, and insert the elements that don't exist as a key into the cards jsonb with a value of 1, thus changing {"1": 4, "4": 2} to {"1": 4, "4": 2, "2": 1, "3": 1}, purely through Postgres.
Partial Solution
I asked a senior for support regarding my question and was told this:
Roughly (names may differ): object keys to explode cards, array_elements to explode the array, left join them, do the calculation, re-aggregate the object. There may be a more direct way to do this but the above brute-force approach will work.
So I tried to follow through with it using the two functions json_each_text() and json_array_elements_text(), but I got stuck halfway, as I was unable to understand what they meant by left joining the two columns:
SELECT jsonb_each_text(tester_cards) AS each_text, jsonb_array_elements_text('[["1", 1], ["2", 1], ["3", 1]]') AS array_elements FROM tester WHERE id=1;
TLDR;
An UPDATE statement that checks, for each key in the array, whether it already exists in the jsonb data, and either increments its value by 1 or inserts the key with a value of 1.
Now it might look like I'm asking to be spoonfed, but I really haven't managed to find any way to solve this, so any assistance would be highly appreciated 🙇
The key insight is that with jsonb_each and jsonb_object_agg you can round-trip a JSON object in a subquery:
SELECT id, (
  SELECT jsonb_object_agg(key, value)
  FROM jsonb_each(cards)
) AS result
FROM test;
Now you can JOIN these key-value pairs against the jsonb_array_elements of your array input. Your colleague was close, but not quite right: it requires a full outer join, not just a left (or right) join to get all the desired object keys for your output, unless one of your inputs is a subset of the other.
SELECT id, (
  SELECT jsonb_object_agg(COALESCE(obj_key, arr_value), …)
  FROM jsonb_array_elements_text('["1", "2", "3"]') AS arr(arr_value)
  FULL OUTER JOIN jsonb_each(cards) AS obj(obj_key, obj_value) ON obj_key = arr_value
) AS result
FROM test;
Now what's left is only the actual calculation and the conversion to an UPDATE statement:
UPDATE test
SET cards = (
  SELECT jsonb_object_agg(
    COALESCE(key, arr_value),
    COALESCE(obj_value::int, 0) + (arr_value IS NOT NULL)::int
  )
  FROM jsonb_array_elements_text('["1", "2", "3"]') AS arr(arr_value)
  FULL OUTER JOIN jsonb_each_text(cards) AS obj(key, obj_value) ON key = arr_value
);
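As a quick sanity check (not part of the original answer): note the UPDATE above has no WHERE clause, so it touches every row; add WHERE id = 1 to restrict it to the sample row. Selecting that row back should then match the expected output from the question:
SELECT cards FROM test WHERE id = 1;
-- expected content: {"1": 4, "2": 1, "3": 1, "4": 2} (jsonb does not preserve key order)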
I'm working on a migration project (MongoDB to Snowflake) and trying to convert one of the Mongo queries to Snowflake. We have a use case to fetch records only if an array column contains all of the given parameter values.
MongoDB function: $all
The $all operator selects the documents where the value of a field is an array that contains all the specified elements.
Mongo Query:
db.collection('collection_name').find({
'code': { '$in': [ 'C0001' ] },
'months': { '$all': [ 6, 7, 8, 9 ] } --> 6,7,8,9 given parameters
});
Table structure in Snowflake:
column name    datatype
-----------    -----------
code           varchar(50)
id             int
months         ARRAY
weeks          ARRAY
Could you provide some suggestions on how to write this query in Snowflake?
Any recommendations would be helpful.
You can use ARRAY_SIZE and ARRAY_INTERSECTION to test it:
Here is a sample table:
create or replace table test ( code varchar, months array );
insert into test select 'C0002', array_construct(1, 2, 5, 8, 6, 3)
union all select 'C0001', array_construct(1, 2, 3)
union all select 'C0002', array_construct(2, 12, 3, 7, 9)
union all select 'C0001', array_construct(7, 8, 9, 3, 2);
Here is a query to test:
select * from test where code in ('C0001','C0002')
and ARRAY_SIZE( ARRAY_INTERSECTION( months, array_construct( 3, 2 ))) = 2;
So I find the intersection of the two arrays and check the number of items. If you want to look for 3 items, you should compare against 3, like this:
select * from test where code in ('C0001','C0002')
and ARRAY_SIZE( ARRAY_INTERSECTION( months, array_construct( 3, 2, 7 ))) = 3;
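Applied to the parameters from the original question (months 6, 7, 8, 9), the same pattern would presumably look like this, with = 4 matching the four searched values:
select * from test where code in ('C0001')
and ARRAY_SIZE( ARRAY_INTERSECTION( months, array_construct( 6, 7, 8, 9 ))) = 4;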
The following is the output of my query:
key                                ; value
"2BxtRdkRvwc-2hPjF8LBmHD-finapril" ; 4
"3QXORSfsIY0-2sDizCyvY6m-finapril" ; 12
"4QXORSfsIY0-2sDizCyvY6m-curr"     ; 12
"5QXORSfsIY0-29Xcom4SHVh-finapril" ; 12
What I want is simply to bring the rows into columns, so that only one row remains, with each key as a column name.
I have seen examples with crosstab catering to much more complex use cases, but I want to know if there is a simpler way to achieve this in my particular case.
Any help is appreciated.
Thanks
Postgres Version : 9.5.10
It is impossible to execute a query that returns an unknown number of columns with unknown names. The simplest way to get a similar effect is to generate a json object, which can easily be interpreted by a client app as a pivot table. Example:
with the_data(key, value) as (
    values
    ('2BxtRdkRvwc-2hPjF8LBmHD-finapril', 4),
    ('3QXORSfsIY0-2sDizCyvY6m-finapril', 12),
    ('4QXORSfsIY0-2sDizCyvY6m-curr', 12),
    ('5QXORSfsIY0-29Xcom4SHVh-finapril', 12)
)
select jsonb_object_agg(key, value)
from the_data;
The query returns this json object:
{
"4QXORSfsIY0-2sDizCyvY6m-curr": 12,
"2BxtRdkRvwc-2hPjF8LBmHD-finapril": 4,
"3QXORSfsIY0-2sDizCyvY6m-finapril": 12,
"5QXORSfsIY0-29Xcom4SHVh-finapril": 12
}
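As a small follow-up sketch (not part of the original answer), single values can then be pulled back out of that object by key, either client-side or directly in SQL with the ->> operator:
with the_data(key, value) as (
    values
    ('2BxtRdkRvwc-2hPjF8LBmHD-finapril', 4),
    ('3QXORSfsIY0-2sDizCyvY6m-finapril', 12)
)
select jsonb_object_agg(key, value) ->> '2BxtRdkRvwc-2hPjF8LBmHD-finapril' as finapril
from the_data;
-- returns the text value '4'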
I have the following table
---------------------------------------
id, type, keyword, default
---------------------------------------
1, 1, mcdonalds, 1
2, 1, food, 0
3, 1, drinks, 0
4, 2, vending machine, 1
5, 2, drinks, 0
6, 3, station, 1
7, 3, travel, 0
8, 3, train, 0
The idea behind this is that I want a search query returning 'a unique' type for keywords (the default row), so when I search mcdonalds, I get the first row, but when I search food or drinks I also get the first row.
I have done this using a subselect.
SELECT type, keyword FROM keywords WHERE type IN (SELECT DISTINCT type FROM keywords WHERE keyword LIKE '%?%') AND `default`=1;
This works like a charm. Now, however, I want to be able to give multiple keywords, for example "drinks & food". I have tried:
SELECT type, keyword FROM keywords WHERE type IN (SELECT DISTINCT type FROM keywords WHERE keyword LIKE '%?%' OR keyword LIKE '%?%') AND `default`=1;
But then when I search for "food & drinks", I get both the vending machine and the mcdonalds rows. However, the vending machine only has the associated keyword "drinks" (it serves no food), so I don't want that one in my results.
When I use AND instead of OR between the LIKEs, I get no results at all (since one row can't match both values at the same time).
Does anyone have any suggestions on how I could solve this?
Use two independent subqueries, then combine the lookups with AND:
SELECT type,
       keyword
FROM keywords
WHERE type IN (SELECT DISTINCT type FROM keywords WHERE keyword LIKE '%drinks%')
  AND type IN (SELECT DISTINCT type FROM keywords WHERE keyword LIKE '%food%')
  AND "default" = 1
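With the sample data, only type 1 has keywords matching both '%drinks%' and '%food%', so this should return just the mcdonalds row (the default row for type 1).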
/*
t = keywords
*/
SELECT *
FROM t
WHERE type IN (
    SELECT type
    FROM (
        SELECT COUNT(id) AS count_matches,
               type
        FROM t
        WHERE keyword LIKE '%drinks%'
           OR keyword LIKE '%food%'
        GROUP BY type
    ) q
    ORDER BY count_matches DESC
    LIMIT 1
)
AND "default" = 1
Well, thanks for all your help. It has been figured out! :)
SELECT type, keyword FROM keywords WHERE type IN
(SELECT type FROM keywords WHERE (`keyword` LIKE '%drinks%' OR keyword like '%food%') AND `default`!= 1 GROUP BY type HAVING COUNT(id) >= 2)
AND `default`=1
The query can be built dynamically; what it does is use COUNT. The number of rows counted in the subquery must be at least the number of keywords the user entered.
Edit: this does not seem to work; the OR and LIKE sometimes give a higher count than expected (presumably because one type can have several keyword rows matching the same pattern, so the count reaches the threshold without every search term matching).