A simple example: I have a SELECT query:
select
t.id,
t.name,
null as grade,
t.class,
t.no
from table t
When I execute this SQL with sequelize.query(), the result it returns is wrong:
all the fields after grade, such as class, are null.
But if I move these fields above grade, sequelize.query() returns their real values.
Check your relation definition, e.g.:
Model.hasOne(db.RelatedModel, {
as: 'namedAlias',
foreignKey: 'id' // <- Make sure you add the relation id as the foreign key
})
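If the relation definition is not the culprit, one more thing worth trying (an assumption on my part, not a confirmed fix) is giving the NULL column an explicit type, so the driver cannot mis-derive the metadata of the columns that follow it:
select
t.id,
t.name,
cast(null as varchar) as grade, -- hypothetical workaround: a typed NULL instead of a bare NULL
t.class,
t.no
from table t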
There is a simple database entity:
case class Foo(id: Option[UUID], keywords: Seq[String])
I want to implement a search function which returns all entities of type Foo which have at least one keyword that contains the search string.
I'm using Slick and tried this:
def searchKeywords(txt: String): Future[Seq[Foo]] = {
val action = Foos.filter(p => p.keywords.any like s"%$txt%").result
db.run(action)
}
This piece of code compiles, but when executing, I get this SQL error:
PSQLException: ERROR: syntax error at or near "any"
The generated sql statement looks like:
select "id", "title", "tagline", "logo", "short_desc", "keywords", "initial_condition", "work_process", "end_result", "ts", "lm", "v" from "projects" where any("keywords") like '%foo%'
And it does not work with PostgreSQL (I'm using v12).
Schema for the table looks like this:
CREATE TABLE foos
(
id UUID NOT NULL PRIMARY KEY,
keywords varchar[] NOT NULL
);
How can I achieve to search in a list of strings using the like operator?
From a pure SQL point of view, you need a derived table to achieve that. I hope some expert corrects me if I'm wrong, but you can't use the SQL LIKE operator on an array.
Supposing your table definition is:
CREATE TABLE foos
(
id UUID NOT NULL PRIMARY KEY,
keywords varchar[] NOT NULL
);
Then an SQL way of retrieving the results would be:
select * from (
select id, unnest(keywords) as keyw from foos
) myTable where keyw like '%foo%'
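If you would rather get one row per matching foo than one row per matching keyword, a variant of the same idea (a sketch against the foos table above) pushes the unnest into an EXISTS subquery:
select f.*
from foos f
where exists (
select 1
from unnest(f.keywords) as k -- one row per keyword, scoped to this foo
where k like '%foo%'
)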
Otherwise, the syntax you're using for the like operator seems correct.
myProperty like s"%$myVariable"
I'm building a multitenant app and running into an error after adding multiple relations that point to the same table:
Uncaught Error: More than one relationship was found for teams and users
When performing this query:
const data = await supabaseClient.from('organizations')
.select(`
*,
teams(
id,
org_id,
name,
members:users(
id,
full_name,
avatar_url
)
)
`);
I have the following table structures (leaving off some fields for brevity):
table users (
id uuid PK
full_name text
email text
)
table organizations (
id uuid PK
....
)
table organization_memberships (
id uuid PK
organization_id uuid FK
user_id uuid FK
role ENUM
)
table teams (
id uuid PK
name text PK
)
table team_memberships (
id uuid PK
team_id uuid FK
user_id uuid FK
role ENUM
)
table team_boards (
id uuid PK
team_id uuid FK
owner_id uuid FK
)
Under the hood, Supabase uses PostgREST for queries, and I have gathered from the error message that the query is ambiguous: more than one relationship could satisfy it. I'm not sure how to tell Supabase which relation to use in this particular query to avoid this error.
Here's the more verbose console error from PostgREST:
{
hint: "By following the 'details' key, disambiguate the request by changing the url to /origin?select=relationship(*) or /origin?select=target!relationship(*)",
message: 'More than one relationship was found for teams and users',
details: [
{
origin: 'public.teams',
relationship: 'public.team_memberships[team_memberships_team_id_fkey][team_memberships_user_id_fkey]',
cardinality: 'm2m',
target: 'public.users'
},
{
origin: 'public.teams',
relationship: 'public.team_boards[team_boards_team_id_fkey][team_boards_owner_id_fkey]',
cardinality: 'm2m',
target: 'public.users'
}
]
}
Digging a bit deeper into the PostgREST docs, it turns out what I was looking for is the disambiguation operator, !.
The working query looks like this (note that we are disambiguating which relation to use to satisfy the members query):
const data = await supabaseClient.from('organizations')
.select(`
*,
teams(
id,
org_id,
name,
members:users!team_memberships(
id,
full_name,
avatar_url
)
)
`);
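For intuition, the disambiguated members embedding resolves to roughly this join through the team_memberships junction table (a sketch; :team_id stands for the id of the embedding team row):
select u.id, u.full_name, u.avatar_url
from users u
join team_memberships tm on tm.user_id = u.id -- the m2m path named in the query
where tm.team_id = :team_id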
I have a numeric(10,2) column named VALUE in a PAYMENT table in a PostgreSQL database.
CREATE TABLE IF NOT EXISTS PAYMENT(
PAYMENT_ID BIGINT NOT NULL DEFAULT nextval('payment_seq') PRIMARY KEY,
DATE TIMESTAMP,
PLACE VARCHAR(255),
VALUE NUMERIC(10,2) NOT NULL,
UTILISATEUR_ID BIGINT REFERENCES UTILISATEUR
);
I want to retrieve that numeric value as a BigDecimal in Java, via a DTO:
import java.math.BigDecimal;
public interface UserSumByGroup {
public String getFullName();
public BigDecimal getSumOfValues();
}
For some reason the value returned for the DTO is always null when I execute this in the controller:
...
List<UserSumByGroup> usersGroupSumPaymnt = userRepo.userGroupSumPaymt();
model.addAttribute("userGroupListSumPaymt", usersGroupSumPaymnt);
System.out.println("usersGroupSumPaymnt=> "+usersGroupSumPaymnt.get(0).getSumOfValues());
...
SQL Query:
SELECT usr.FULL_NAME as fullName, SUM(VALUE) as paymentCount
FROM PAYMENT pym left join UTILISATEUR usr ON usr.utilisateur_id = pym.utilisateur_id
WHERE MONTH(pym.date) = MONTH(CURRENT_DATE)
AND YEAR(pym.date) = YEAR(CURRENT_DATE)
AND pym.utilisateur_id IN (1,5)
GROUP BY pym.utilisateur_id, usr.FULL_NAME;
Console Log:
usersGroupSumPaymnt=> null
Do you have any idea why I always get null?
Thanks.
For Spring Data JPA to map query results onto a DTO projection, the alias of each field in the query result must match the corresponding getter on the DTO interface.
Here the result alias "paymentCount" does not match the DTO getter getSumOfValues(), which expects the alias "sumOfValues".
So, change:
SUM(VALUE) as paymentCount
to
SUM(VALUE) as sumOfValues
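The corrected query then reads as below. One caveat: MONTH() and YEAR() are not native PostgreSQL functions, so if this runs as a native query against PostgreSQL, EXTRACT is the safe spelling (a sketch):
SELECT usr.FULL_NAME as fullName, SUM(pym.VALUE) as sumOfValues
FROM PAYMENT pym LEFT JOIN UTILISATEUR usr ON usr.utilisateur_id = pym.utilisateur_id
WHERE EXTRACT(MONTH FROM pym.date) = EXTRACT(MONTH FROM CURRENT_DATE)
AND EXTRACT(YEAR FROM pym.date) = EXTRACT(YEAR FROM CURRENT_DATE)
AND pym.utilisateur_id IN (1, 5)
GROUP BY pym.utilisateur_id, usr.FULL_NAME;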
How do I parse a jsonb object in PostgreSQL? The problem is that the structure of the object is different every time, just like below.
{
"1":{
"1":{
"level":2,
"nodeType":2,
"id":2,
"parentNode":1,
"attribute_id":363698007,
"attribute_text":"Finding site",
"concept_id":386108004,
"description_text":"Heart tissue",
"hierarchy_id":0,
"description_id":-1,
"deeperCnt":0,
"default":false
},
"level":1,
"nodeType":1,
"id":1,
"parentNode":0,
"concept_id":22253000,
"description_id":37361011,
"description_text":"Pain",
"hierarchy_id":404684003,
"deeperCnt":1,
"default":false
},
"2":{
"1":{
"attribute_id":"363698007",
"attribute_text":"Finding site (attribute)",
"value_id":"321667001",
"value_text":"Respiratory tract structure (body structure)",
"default":true
},
"level":1,
"nodeType":1,
"id":3,
"parentNode":0,
"concept_id":11833005,
"description_id":20419011,
"description_text":"Dry cough",
"hierarchy_id":404684003,
"deeperCnt":1,
"default":false
},
"level":0,
"recAddedLevel":1,
"recAddedId":3,
"nodeType":0,
"multiple":false,
"currNodeId":3,
"id":0,
"lookForAttributes":false,
"deeperCnt":2,
}
So how should I parse the whole object and, for example, check whether any object inside has "attribute_id" = 363698007?
In that case the row should be treated as a match when selecting data rows in PostgreSQL with a WHERE clause.
Second question: what index should I use on the jsonb column to get the wanted results?
I have already tried creating btree and gin indexes, but even a simple select returns null with SQL like this:
SELECT object::jsonb -> 'id' AS id
FROM table;
if I use this:
SELECT object
FROM table;
it returns the object described at the beginning.
The quick and dirty way (extended upon Collect Recursive JSON Keys In Postgres):
WITH RECURSIVE doc_key_and_value_recursive(id, key, value) AS (
SELECT
my_json.id,
t.key,
t.value
FROM my_json, jsonb_each(my_json.data) AS t
UNION ALL
SELECT
doc_key_and_value_recursive.id,
t.key,
t.value
FROM doc_key_and_value_recursive,
jsonb_each(CASE
WHEN jsonb_typeof(doc_key_and_value_recursive.value) <> 'object' THEN '{}'::jsonb
ELSE doc_key_and_value_recursive.value
END) AS t
)
SELECT t.id, t.data->'id' AS id
FROM doc_key_and_value_recursive AS c
INNER JOIN my_json AS t ON (t.id = c.id)
WHERE
jsonb_typeof(c.value) <> 'object'
AND c.key = 'attribute_id'
AND c.value = '363698007'::jsonb;
Online example: https://dbfiddle.uk/?rdbms=postgres_11&fiddle=57b7c4e817b2dd6580bbf28cbac10981
This may be improved a lot, e.g. by stopping the recursion as soon as the relevant key and value are found, reverse sorting and limiting to 1, and so on. But it does the basic thing generically.
It also shows that jsonb->'id' does work as expected.
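On PostgreSQL 12 or later, an alternative worth considering is a jsonpath query: it searches at any depth without a recursive CTE, and the @? operator can be served by a plain GIN index on the column, which also speaks to the index question (a sketch, assuming the same my_json table as above):
CREATE INDEX ON my_json USING gin (data); -- default jsonb_ops opclass supports @?
SELECT id
FROM my_json
WHERE data @? '$.** ? (@.attribute_id == 363698007)';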
I have this query;
knex('metrics').insert(function() {
this.select('metric as name')
.from('stage.metrics as s')
.whereNotExists(function() {
this.select('*')
.from('metrics')
.where('metrics.name', knex.raw('s.metric'))
})
})
The table metrics has two columns: id, which is auto-incrementing, and name. I expected this to insert into the name column, because the subquery has one column, labeled name, and id would take its default value. However, it instead complains that I am providing a value of type character varying for my integer column id. How do I make it explicit that I want the id to take the default value?
This can do the trick:
knex('metrics').insert(function() {
this
.select([
knex.raw('null::bigint as id'), // or any other type you need (to force using default value you need to pass explicitly null value to insert query)
'metric as name'
])
.from('stage.metrics as s')
.whereNotExists(function() {
this.select('*')
.from('metrics')
.where('metrics.name', knex.raw('s.metric'))
})
})
I know, it looks a bit hacky. It would be great to see something in the knex API like the following (the example below is a proposal, not a working example):
knex('table_name')
.insert(
['name', 'surname'],
function () {
this.select(['name', 'surname']).from('other_table')
}
)
Which produces
insert into table_name (name, surname) select name, surname from other_table;
I'm not sure about this interface, but you get the point: explicitly list the fields you want to insert.
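For comparison, the plain SQL this is aiming at makes the default explicit simply by not mentioning id at all, listing only the columns being provided:
insert into metrics (name)
select s.metric
from stage.metrics s
where not exists (
select 1 from metrics m where m.name = s.metric
);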