How to use JSON_CONTAINS in Spring Data JPA + QueryDSL?

For example,
row1 ["cat","dog","bird"]
row2 ["horse","dog","bird"]
row3 ["honeybee","cat"]
...
I want to query whether "cat" exists in the categories field of the animal table. What should I do?
I tried the Expressions.numberTemplate("JSON_CONTAINS...") method,
but I don't know how to use Expressions.xxxTemplate.
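A minimal sketch of one way this could look, assuming MySQL's JSON_CONTAINS, QueryDSL 4 with a generated QAnimal metamodel, Hibernate 5+, and an injected entityManager (the class and field names below are illustrative, not taken from the original code):

import com.querydsl.core.types.dsl.BooleanExpression;
import com.querydsl.core.types.dsl.Expressions;
import com.querydsl.jpa.impl.JPAQueryFactory;
import java.util.List;

QAnimal animal = QAnimal.animal;

// JSON_CONTAINS returns 0 or 1, so build it as a number template and compare with 1.
// The second argument must be valid JSON, hence the extra quotes around cat.
BooleanExpression hasCat = Expressions
        .numberTemplate(Integer.class,
                "function('JSON_CONTAINS', {0}, {1})",
                animal.categories,
                Expressions.constant("\"cat\""))
        .eq(1);

List<Animal> withCats = new JPAQueryFactory(entityManager)
        .selectFrom(animal)
        .where(hasCat)
        .fetch();

Depending on the Hibernate version, JSON_CONTAINS may also need to be registered as a SQL function (for example via a MetadataBuilderContributor) before the function('JSON_CONTAINS', ...) call will render in JPQL.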

Related

How can I query the same column in a kdb table multiple times in a single statement?

I have the following table in kdb...
p:([]r:("(A|A(A|B|C|D).*)";"A(E|F|G|H|I).*";"A(J|K|L|M).*";"A(N|O|P|Q|R|S).*";"A(T|U|V|W|X|Y|Z).*";"B.*";"(C|C(A|B|C|D|E).*)";"C(F|G|H|I|J|K).*";"C(L|M|N|O|P|Q|R).*";"C(S|T|U|V|W|X|Y|Z).*";"D.*"))
r
----------------------
"(A|A(A|B|C|D).*)"
"A(E|F|G|H|I).*"
"A(J|K|L|M).*"
"A(N|O|P|Q|R|S).*"
"A(T|U|V|W|X|Y|Z).*"
"B.*"
"(C|C(A|B|C|D|E).*)"
"C(F|G|H|I|J|K).*"
"C(L|M|N|O|P|Q|R).*"
"C(S|T|U|V|W|X|Y|Z).*"
"D.*"
and the function below that parses each row of the table...
getRange:{$[x like "*(*";
[if[x like "(*"; x2:1#1_x; l:enlist x2; x:-1_(3_x)];
l,:enlist {(3#x),"-",(-3#x)} ssr[ssr[ssr[x;".";""];")";"]"];"(";"["];
if[((count l)>1)&(l[1] like "*A-*"); l[1]:ssr[l[1]; "A-";"0-9/A-"]];
:l];
:enlist ssr[x;".";""]
];
}
Which gives an output like this...
r1:raze getRange'[exec r from p]
q)r1
,"A"
"A[0-9/A-D]*"
"A[E-I]*"
"A[J-M]*"
"A[N-S]*"
"A[T-Z]*"
"B*"
,"C"
"C[0-9/A-E]*"
"C[F-K]*"
"C[L-R]*"
"C[S-Z]*"
"D*"
I'm parsing the rows so they can be inserted into a query similar to something like select from t where sym like raze getRange'[exec r from p][0]
What I'd like to be able to do is combine the first "single A" with the first "group of A" and the same with the C's (so it looks like below). But the problem I'm having is that those results can't be easily inserted into a query...
(,"A";"A[0-9/A-D]*")
,"A[E-I]*"
,"A[J-M]*"
,"A[N-S]*"
,"A[T-Z]*"
,"B*"
(,"C";"C[0-9/A-E]*")
,"C[F-K]*"
,"C[L-R]*"
,"C[S-Z]*"
,"D*"
Is there a way in q that I can do this? Essentially, select from t where sym like (enlist "A";"A[0-9/A-D]*")
Please let me know if you need any additional info. Thank you in advance.
For matching against multiple regexps we can do the following:
select from t where any sym like/:("A";"A[0-9/A-D]*")

Athena - Querying a table with JSON in one column

I have a parquet file with the following format
id = 2a1ed0848022
raw_value:
[{"state":"MO","city":"O Fallon","location_name":"Jackson Hewitt Tax Service","top_category":"Accounting, Tax Preparation, Bookkeeping, and Payroll Services"},
{"state":"IL","city":"Collinsville","location_name":"L E Smith Jewelry","top_category":"Jewelry, Luggage, and Leather Goods Stores"},
{"state":"MO","city":"O Fallon","location_name":"Bagwasi Family Eyecare","top_category":"Health and Personal Care Stores"},
{"state":"MO","city":"O Fallon","location_name":"Rally's Drive-In Restaurants","top_category":"Restaurants and Other Eating Places"},
{"state":"IL","city":"Collinsville","location_name":"BP","top_category":"Gasoline Stations"}
I would like to create a table in Athena on this parquet file and run a query like this
select maid from test12 where state="MD" and city="Baltimore".
How can I search state and city in the second column, which contains nested JSON?
The key to this is to use UNNEST. I'm assuming that raw_value is typed as array<struct<state:string,city:string,location_name:string,top_category:string>>.
SELECT id
FROM the_table CROSS JOIN UNNEST(raw_value) rv (location)
WHERE location.state = 'MD' AND location.city = 'Baltimore'
Using UNNEST like this expands each row in the table to one row per element in the array.
If raw_value is a string column, you need to parse it first. You can find an example of this in this answer: https://stackoverflow.com/a/56176204/1109

KDB: How to assign string datatype to all columns

When I created the table Tab, I specified the columns as string,
Tab: ([Key1:string()] Col1:string();Col2:string();Col3:string())
But the column datatype (t) is empty. I suppose specifying the column as string has no effect.
meta Tab
c t f a
--------------------
Key1
Col1
Col2
Col3
After I do a bulk upsert in Java...
c.Dict dict = new c.Dict((Object[]) columns.toArray(new String[columns.size()]), data);
c.Flip flip = new c.Flip(dict);
conn.c.ks("upsert", table, flip);
The datatypes are all symbols:
meta Tab
c t f a
--------------------
Key1 s
Col1 s
Col2 s
Col3 s
How can I specify the datatype of the columns as string and have it remain as string?
You can't define a column of an empty table as string, because strings are merely lists of characters (so a string column is a list of lists).
You can just set the columns as empty lists, which is what your code is doing.
But the column will then take on the type of whatever data is inserted into it.
The real question is why your Java process is sending symbols when it should be sending strings. You need to make the change there before publishing to KDB.
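For example, a minimal client-side sketch using kx's c.java, where a Java String lands in kdb+ as a symbol and a char[] lands as a character vector (i.e. a kdb+ string); the column names and values below are illustrative:

String[] colNames = {"Key1", "Col1", "Col2", "Col3"};

// Each column is sent as char[][] (a list of char vectors), so kdb+ receives
// string (type C) columns; sending String[] columns is what produced symbols.
Object[] colData = new Object[] {
    new char[][] {"k1".toCharArray()},
    new char[][] {"v1".toCharArray()},
    new char[][] {"v2".toCharArray()},
    new char[][] {"v3".toCharArray()}
};

c.Dict dict = new c.Dict(colNames, colData);
c.Flip flip = new c.Flip(dict);
conn.c.ks("upsert", table, flip);  // same async upsert as before, now with string columns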
Note that if you define the columns as chars you still won't be able to upsert strings:
q)Tab: ([Key1:`char$()] Col1:`char$();Col2:`char$();Col3:`char$())
q)Tab upsert ([Key1:enlist"test"] Col1:enlist"test";Col2:enlist"test";Col3:enlist "test")
'rank
[0] Tab upsert ([Key1:enlist"test"] Col1:enlist"test";Col2:enlist"test";Col3:enlist "test")
^
q)Tab: ([Key1:()] Col1:();Col2:();Col3:())
q)Tab upsert ([Key1:enlist"test"] Col1:enlist"test";Col2:enlist"test";Col3:enlist "test")
Key1 | Col1 Col2 Col3
------| --------------------
"test"| "test" "test" "test"
KDB does not allow you to define a column's type as a list when creating a table. That means you cannot define your column type as string, because a string is itself a list (of characters).
The only way is to define the column as an empty general list, like:
q) t:([]id:`int$();val:())
Then when you insert data to this table the column will automatically take type of that data.
q)`t insert (4;"row1")
q) meta t
c | t f a
---| -----
id | i
val| C
In your case, one option is to send string data from your Java process, as mentioned by user 'emc211'; the other option is to convert your data to strings in the KDB process before insertion.

How to transform constant value to Query/Rep for unionAll

There is a table Users with fields id, name, etc.
I need to select some ids from Users and concatenate them with a constant value A. For example, I want to get the following result:
id
--------------------------------------
someId-1
someId-2
someId-3
A
I can do it with plain sql in the following way:
SELECT id FROM users UNION ALL SELECT 'A';
How can I do it with slick?
For example:
val q: Query[UsersTable, Users, Seq] = ...
q.map(_.id).unionAll( "A" ) //TODO how to transform "A" to query or Rep
The answer to how you create a Slick Query from a constant is rather trivial: just use slick.lifted.Query.apply from the companion object, such as:
q.map(_.id).unionAll( Query("A") )

How to pass dictionary into query constraint?

If this is the dictionary of constraints:
dictName:`region`Code;
dictValue:(`NJ`NY;`EEE213);
dict:dictName!dictValue;
I would like to pass the dict to a function and, depending on how many keys there are, let the query react accordingly. If there is one key, region, then I would like to run it as:
select from table where region in dict`region;
The same goes for Code. But if I pass two keys, I would like the query to know and run it as:
select from table where region in dict`region,Code in dict`Code;
Is there any way to do this?
I came up with this code:
funcForOne:{[constraint]?[`bce;enlist(in;constraint;(`dict;enlist constraint));0b;()]};
funcForAll[]
{[dict]$[(null dict)~1;select from bce;($[(count key dict)=1;($[`region in (key dict);funcForOne[`region];funcForOne[`Code]]);select from bce where region in dict`region,rxmCode in dict`Code])]};
It works for one and two constraints, but when I call funcForAll[] it gives a type error. How should I change it? I think it comes from null dict~1.
I tried count too, but that doesn't work well either.
Update
So I did this, but I get an error:
tab:([]code:`B90056`B90057`B90058`B90059;region:`CA`NY`NJ`CA);
dictKey:`region`Code;dictValue:(`NJ`NY;`B90057);
dict:dictKey!dictValue;
?[tab;f dict;0b;()];
and I got a 'NY error. Do you know why? Also, if I pass a null dictionary it doesn't seem to work.
As I said, functional form would be the better approach, but if your requirement is as limited as you say then you can consider another solution, as below.
Note: this assumes all dictionary keys will be in the table's column list.
q) f:{[dict] if[0=count dict;:select from t];
select from t where (#[key dict;t]) in {$[any 0<=type each value x;flip ;enlist ]x}[dict] }
Explanation:
1. Convert dict to a table, depending on the value types: flip if any value is a general list, else enlist.
$[any 0<=type each value dict;flip ;enlist ]dict
2. Get the subset of table t which consists only of the dictionary keys as columns.
#[key dict;t]
3. Get the rows where (2) is in (1).
Basically we are using the below form of querying and matching:
q)t1:([]id:1 2;s:`a`b);
q)t2:([]id:1 3 ;s:`a`b);
q)select from t1 where ([]id;s) in t2
If you're just using in, you can do something like:
f:{{[x;y](in),'key[y],'(),x}[;x]enlist each value[x]}
So that:
q)d
a| 10 1
b| ,`a
q)f d
in `a 10 1
in `b ,`a
q)t
a b c
------
1 a 10
2 b 20
3 c 30
q)?[t;f d;0b;()]
a b c
------
1 a 10
Note that because of the enlist each the resulting list is enlisted so that singletons work too:
q)d:enlist[`a]!enlist 1
q)d
a| 1
q)?[t;f d;0b;()]
a b c
------
1 a 10
Update to secondary question
This still works with an empty dict, i.e. ()!(). I'm passing in the dictionary variable.
In your 2nd question your dictionary is not constructed correctly (also remember q is case-sensitive). Your values also need to be enlisted. Look up functional select in the reference pages on the kx site; you'll see that you need to enlist the symbol lists to differentiate them from column-name declarations:
`region`code!(enlist `NY`NJ;enlist `B90057)