Currently I'm trying, in a Rails 3 application, to select all objects where two arrays have some overlap.
I've tried the following:
Contact.where("possible_unique_keys && ?", c.possible_unique_keys)
Which gives:
array value must start with "{" or dimension information
So I tried to convert the bound value into a PostgreSQL array, like so:
Contact.where("possible_unique_keys && string_to_array(?)", c.possible_unique_keys)
Which gives:
operator does not exist: character varying[] && text[]
How do I get both arrays into a format that will correctly evaluate with &&?
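For reference, both failed attempts can be reproduced in plain SQL: string_to_array needs a delimiter argument and returns text[], and && is only defined between arrays of the same element type, so the result has to be cast to match the varchar[] column. A sketch:

-- Fails: text[] has no && operator with character varying[]
SELECT * FROM contacts WHERE possible_unique_keys && string_to_array('bill,joe', ',');
-- Works: cast so both sides share the same element type
SELECT * FROM contacts WHERE possible_unique_keys && string_to_array('bill,joe', ',')::varchar[];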
I think I've solved it for now, and come up with a way to build a query from a Ruby array to a PostgreSQL array (thanks in part to this bit of code written for Rails/PostgreSQL interoperability).
Given a list of strings ["bill", "joe", "stu", "katie", "laura"], we have to do a bit of acrobatics to get PostgreSQL to recognize them. This is one way to construct the request:
request = "SELECT * from Contacts where possible_unique_keys && \'{"
list.each do |key|
key.gsub("'", "\\'")
if key == c.possible_unique_keys.last
request << "\"#{key}\"}\'"
else
request << "\"#{key}\", "
end
end
dupes = Contacts.find_by_sql(request)
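For the example list, the snippet builds a request like the one below; the '{...}' literal is untyped, so Postgres coerces it to the column's varchar[] type, which is why this form sidesteps the operator error:

SELECT * FROM contacts
WHERE possible_unique_keys && '{"bill", "joe", "stu", "katie", "laura"}';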
In an Ecto query I'm running against Postgres (13.6), I need to interpolate a list of ids into a fragment - this is generally not something I have a problem with, but in this case the list of ids is being received as a list of strings that need to be cast to integers (or, more specifically, BIGINT). The query I think I need is as follows, with the troublesome bit being ANY(ARRAY?::BIGINT[]):
ModelA
|> where(
  [ma],
  fragment(
    "EXISTS (SELECT * FROM model_b mb WHERE mb.a_id = ? AND mb.c_id = ANY(ARRAY?::BIGINT[]))",
    ma.id,
    ^c_ids
  )
)
where c_ids would be a list like ["1449441579", "2345556834"]
However, when I run this, I get the error
(Postgrex.Error) ERROR 42703 (undefined_column) column "array$4" does not exist
referring to the generated SQL
ANY(ARRAY$4::BIGINT[])
Of course, I could convert the array of c_ids to integers beforehand in my app code, but I'd like to see if I can get it to cast in the query itself.
Writing the fragment in straight SQL works out just fine:
SELECT * FROM model_b mb WHERE mb.a_id = 1 AND mb.c_id = ANY(ARRAY['1449441579', '2345556834']::BIGINT[]);
What is the idiomatic way to get this kind of array casting to work in an Ecto fragment? Many thanks.
Just to codify my comment, I would do the integer conversion before the query. You can use the dynamic macro to support IN queries:
import Ecto.Query

alias YourApp.Repo
alias YourApp.SomeSchema, as: ModelA

strs = ["1", "2", "3"]
ids = Enum.map(strs, fn x -> String.to_integer(x) end)
conditions = dynamic([tbl], tbl.id in ^ids)

Repo.all(
  from ma in ModelA,
  where: ^conditions
)
|> IO.inspect()
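Under the hood, Postgrex sends the Elixir list as a genuine array parameter, so tbl.id in ^ids compiles to roughly the following SQL (a sketch; the table and alias names here are hypothetical):

SELECT s0.* FROM some_schemas AS s0 WHERE s0.id = ANY($1)
-- with $1 bound to the integer array [1, 2, 3]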
I did find a post that led me to a potential solution here - https://elixirforum.com/t/interpolating-lists-into-ecto-query-fragment-1/16690/7 - however, I'm not convinced that this is optimal for me.
By using Enum.join to convert my initial array of strings into a single string, and then letting Postgres cast that string to an array of bigints, it worked out:
ModelA
|> where(
  [ma],
  fragment(
    "EXISTS (SELECT * FROM model_b mb WHERE mb.a_id = ? AND mb.c_id = ANY(ARRAY?::BIGINT[]))",
    ma.id,
    ^Enum.join(c_ids, ",")
  )
)
So I'm posting it here for reference and because it does "work", but I feel sort of silly doing this, because I was specifically trying to avoid processing the list before passing it to PG... so yeah, I'd still really like to hear about the "right" way to do this.
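An alternative that still avoids pre-casting in app code is to let Postgres both split and cast, using string_to_array instead of the ARRAY constructor; in plain SQL the idea looks like this:

SELECT * FROM model_b mb
WHERE mb.c_id = ANY(string_to_array('1449441579,2345556834', ',')::BIGINT[]);

In the fragment, that would be ANY(string_to_array(?, ',')::BIGINT[]) with the joined string bound to the placeholder. Because the parameter sits in an ordinary argument position rather than being glued onto the ARRAY keyword, the generated $n placeholder parses cleanly, avoiding the "array$4" column error.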
How can I concatenate a string inside of a concatenated jsonb object in PostgreSQL? In other words, I am using the jsonb concatenation operator as well as the text concatenation operator in the same query and running into trouble.
Or... if there is a totally different query I should be executing, I'd appreciate hearing suggestions. The goal is to update a row containing a jsonb column. We don't want to overwrite existing key value pairs in the jsonb column that are not provided in the query and we also want to update multiple rows at once.
My query:
update contacts as c
set data = data || '{"geomatch": "MATCH","latitude":'||v.latitude||'}'
from (values (16247746,40.814140),
             (16247747,20.900840),
             (16247748,20.890570)) as v(contact_id,latitude)
where c.contact_id = v.contact_id
The Error:
ERROR: invalid input syntax for type json
LINE 85: update contacts as c set data = data || '{"geomatch": "MATCH...
^
DETAIL: The input string ended unexpectedly.
CONTEXT: JSON data, line 1: {"geomatch": "MATCH","latitude":
SQL state: 22P02
Character: 4573
You might be looking for
SET data = data || ('{"geomatch": "MATCH","latitude":'||v.latitude||'}')::jsonb
-- ^^ jsonb ^^ text ^^ text
but that's not how one should build JSON objects: that v.latitude might not be a valid JSON literal, or might even contain an injection like ", "otherKey": "oops". (Admittedly, in your example you control the values, and they're numbers, so it might be fine, but it's still bad practice.) Instead, use jsonb_build_object:
SET data = data || jsonb_build_object('geomatch', 'MATCH', 'latitude', v.latitude)
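Combined with the VALUES list from the question, the full statement would look something like this:

UPDATE contacts AS c
SET data = data || jsonb_build_object('geomatch', 'MATCH', 'latitude', v.latitude)
FROM (VALUES (16247746, 40.814140),
             (16247747, 20.900840),
             (16247748, 20.890570)) AS v(contact_id, latitude)
WHERE c.contact_id = v.contact_id;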
There are two problems. The first is operator precedence, which makes the jsonb value concatenate directly with what is meant to be read as a text string. The second is that the concatenated text pieces require a cast to jsonb.
This should work:
update contacts as c
set data = data || ('{"geomatch": "MATCH","latitude":'||v.latitude||'}')::jsonb
from (values (16247746,40.814140),
             (16247747,20.900840),
             (16247748,20.890570)) as v(contact_id,latitude)
where c.contact_id = v.contact_id
;
In Postgres, I have a jsonb column foo which stores an array of strings
["a","b","c"]
I need a query which appends another string to whatever is currently there, at a specified index
e.g. Append "!" at index 1
run query: ["a","b","c"] -> ["a","b!","c"]
run again: ["a","b","c"] -> ["a","b!!","c"]
run again: ["a","b","c"] -> ["a","b!!!","c"]
I've implemented this in Postgres v11.2 as follows:
UPDATE my_table
SET foo = jsonb_set(foo, '{1}', CONCAT('"', foo->>1, '!', '"')::jsonb)
WHERE id = '12345';
Note the index 1 and the string '!' are just hardcoded here for simplicity - but they'd be variables.
It works, but I find it quite inelegant. As you can see, I'm selecting the string at the given index as text using the ->> operator, feeding that to CONCAT to append the '!', and then rebuilding the quoting needed to convert it back to a jsonb string. That is a lot more work than seems necessary simply to append to a string at a given path.
Is there a simpler way to do this? A built-in function or operator perhaps, or a simpler way of appending than using CONCAT? (I tried using the || operator in various ways but couldn't seem to make anything work with the syntax & types)
I don't think there is a better way than jsonb_set().
The concat can be replaced by || as follows:
jsonb_set(foo, '{1}', ('"' || (foo->>1) || '!"')::jsonb)
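Either form can be sanity-checked standalone (a quick sketch with the index and suffix hardcoded):

SELECT jsonb_set(foo, '{1}', ('"' || (foo->>1) || '!"')::jsonb)
FROM (VALUES ('["a","b","c"]'::jsonb)) AS t(foo);
-- Result: ["a", "b!", "c"]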
I want a query which will return a combination of characters and a number.
Example:
Table name: emp
Columns required: fname, lname, code
If fname = abc and lname = pqr and the row is the very first row of the table, then the result should be code = ap001.
For the next row, with fname = efg and lname = rst, it should be code = er002, and so on.
I know that we can use substr to retrieve the first letter of a column, but I don't know how to apply it to two columns and concatenate the results.
OK. You know you can use the substr function. Now, to concatenate you will need the concatenation operator ||. To get the number of the row retrieved by your query, you need the rownum pseudocolumn. Perhaps you will also need the to_char function to format the number. You can read about all of these functions and operators in the SQL reference. Anyway, I think you need something like this (I didn't check it):
select substr(fname, 1, 1) || substr(lname, 1, 1) || to_char(rownum, 'fm009') code
from emp
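With the sample rows from the question, this would return something like the following; note that rownum reflects retrieval order, so without an ORDER BY the numbering is not guaranteed to be stable:

CODE
-----
ap001
er002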
I'm trying to write a SphinxQL query that would replicate the following MySQL in a Sphinx RT index:
SELECT id FROM table WHERE colA LIKE 'valA' AND (colB = valB OR colC = valC OR ... colX = valX ... OR colY LIKE 'valY' .. OR colZ LIKE 'valZ')
As you can see, I'm trying to get all the rows where one string column matches a certain value AND any one of a list of values matches, mixing string and integer columns/values.
This is what I've gotten so far in SphinxQL:
SELECT id, (intColA = intValA OR intColB = intValB ...) as intCheck FROM rt_index WHERE MATCH('#requiredMatch = requiredValue');
The problem I'm running into is in matching all of the potential optional string values. The best possible query (if multiple MATCH statements were allowed and they were allowed as expressions) would be something like
SELECT id, (intColA = intValA OR MATCH('#checkColA valA|valB') OR ...) as optionalMatches FROM rt_index WHERE optionalMatches = 1 AND MATCH('#requireCol requiredVal')
I can see a potential way to do this with CRC32 string conversions and MVA attributes, but these aren't supported with RT indexes and I REALLY would prefer not to switch from them.
One way would be to simply convert all your columns to normal fields. Then you can put all this logic inside the MATCH(...), i.e. not using attributes.
Yes, you can only have one MATCH per query.
Otherwise, yes, you could use the CRC trick to turn string attributes into integer ones so they can be used for filtering.
Not sure why you would need MVAs, but they are now supported in RT indexes as of 2.0.2.
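For instance, if colB and colC were indexed as full-text fields rather than attributes, the whole condition could live in the single allowed MATCH() using the extended query syntax (a sketch with hypothetical field names):

SELECT id FROM rt_index
WHERE MATCH('@requiredCol requiredVal @(colB,colC) (valB | valC)');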