I created a "favorite" functionality, which is similar to the common "Like" functionality in many websites.
There are 3 tables:
"user" with a UUID primary key
"photo" with a UUID primary key
"favorite" with a composite primary key of the user UUID and the photo UUID
The corresponding SQL is:
CREATE TABLE public."user" (
id uuid DEFAULT public.gen_random_uuid() NOT NULL
);
CREATE TABLE public."photo" (
id uuid DEFAULT public.gen_random_uuid() NOT NULL
);
CREATE TABLE public."favorite" (
userId uuid NOT NULL
photoId uuid NOT NULL
);
Now, I would like to query photos with a computed boolean field isFavorite that is true when the current user has favorited the photo.
So, I created this custom SQL function:
CREATE OR REPLACE FUNCTION public.isfavorite(photo photo, hasura_session json)
RETURNS boolean
LANGUAGE sql
STABLE
AS $function$
SELECT EXISTS (
SELECT *
FROM public.favorite
WHERE "userId" = (VALUES (hasura_session ->> 'x-hasura-role'))::uuid AND "photoId" = photo.uuid
)
$function$
I can create this function with SQL in Hasura, but when I set this function as a computed field on the photo table, Hasura displays this error:
in table "photo": in computed field "isFavorite": function "isfavorite" is overloaded. Overloaded functions are not supported
Where did I make a mistake? Can we build a custom function that returns a boolean? How do you build a favorite (or like) feature?
Solved: there were two isFavorite functions in the database, which caused the overloading.
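For anyone hitting the same error: a quick way to spot the duplicates is to list every overload of the function, then drop the unwanted one by its full signature. A sketch in SQL:

SELECT p.oid::regprocedure AS signature
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname = 'public' AND p.proname = 'isfavorite';

-- then drop the stale overload by its exact signature, for example:
-- DROP FUNCTION public.isfavorite(photo, json);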
So now there is an isFavorite field in the photo schema, but I need to provide $args with hasura_session as an argument.
How can I provide hasura_session without having to fill in the argument manually?
You will need to track your computed field, passing the session variable:
https://hasura.io/docs/1.0/graphql/manual/api-reference/schema-metadata-api/computed-field.html
{
  "type": "add_computed_field",
  "args": {
    "table": {
      "name": "photo",
      "schema": "public"
    },
    "name": "isfavorite",
    "definition": {
      "function": {
        "name": "isfavorite",
        "schema": "public"
      },
      "table_argument": "photo_row",
      "session_argument": "hasura_session"
    }
  }
}
Session arguments for computed fields were added fairly recently, so make sure you are on version v1.3 or later. I would also change the function to accept photo_row as the argument name instead of photo photo, since an argument that shares its type's name might cause issues with PostgreSQL.
CREATE OR REPLACE FUNCTION public.isfavorite(photo_row photo, hasura_session json)
RETURNS boolean
LANGUAGE sql
STABLE
AS $function$
SELECT EXISTS (
SELECT *
FROM public.favorite
WHERE "userId" = (VALUES (hasura_session ->> 'x-hasura-role'))::uuid AND "photoId" = photo.uuid
)
$function$
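Once the computed field is tracked with the session_argument mapping above, Hasura injects the session object automatically, so the field can be queried without any arguments. A sketch (assuming the computed field is named isFavorite, as in the question):

query {
  photo {
    id
    isFavorite
  }
}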
I am using DatabaseClient to build a custom repository. After I insert or update an Item, I need that row's data to return the saved/updated Item. I just can't wrap my head around why .all(), .first(), and .one() are not returning the result map, although I can see that the data is inserted/updated in the database. They just signal onComplete. But .rowsUpdated() returns 1 row updated.
I observed this behaviour with H2 and MS SQL Server.
I'm new to R2DBC. What am I missing? Any ideas?
@Transactional
public Mono<Item> insertItem(Item entity){
return dbClient
.sql("insert into items (creationdate, name, price, traceid, referenceid) VALUES (:creationDate, :name, :price, :traceId, :referenceId)")
.bind("creationDate", entity.getCreationDate())
.bind("name", entity.getName())
.bind("price", entity.getPrice())
.bind("traceId", entity.getTraceId())
.bind("referenceId", entity.getReferenceId())
.fetch()
.first() //.all() //.one()
.map(Item::new)
.doOnNext(item -> LOGGER.info(String.format("Item: %s", item)));
}
The table looks like this:
CREATE TABLE [dbo].[items](
[creationdate] [bigint] NOT NULL,
[name] [nvarchar](32) NOT NULL,
[price] [int] NOT NULL,
[traceid] [nvarchar](64) NOT NULL,
[referenceid] [int] NOT NULL,
PRIMARY KEY (name, referenceid)
)
Thanks!
This is the behavior of an insert/update statement in the database: it does not return the inserted/updated rows, only the number of rows affected.
It may also return values generated by the database (such as an auto-increment ID or a generated UUID) if you add the following line:
.filter(statement -> statement.returnGeneratedValues())
where you may pass the specific generated columns you want as parameters. However, this has limitations depending on the database (for example, MySQL can only return the last generated ID of an auto-increment column, even if you insert multiple rows).
If you want to get the inserted/updated values from database, you need to do a select.
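For example, a minimal sketch of that select-after-insert pattern, reusing the question's dbClient, Item mapping, and the name + referenceid primary key (the second query and its bindings are assumptions based on the code above):

@Transactional
public Mono<Item> insertItem(Item entity) {
    return dbClient
        .sql("insert into items (creationdate, name, price, traceid, referenceid) VALUES (:creationDate, :name, :price, :traceId, :referenceId)")
        .bind("creationDate", entity.getCreationDate())
        .bind("name", entity.getName())
        .bind("price", entity.getPrice())
        .bind("traceId", entity.getTraceId())
        .bind("referenceId", entity.getReferenceId())
        .fetch()
        .rowsUpdated() // completes with the affected row count, not the row data
        .then(dbClient
            .sql("select * from items where name = :name and referenceid = :referenceId")
            .bind("name", entity.getName())
            .bind("referenceId", entity.getReferenceId())
            .fetch()
            .one()      // now there is an actual result row to emit
            .map(Item::new));
}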
This is the simplified structure of my tables - 2 main tables, one relation table.
What's the best way to handle an insert API for this?
If I just have a Client and Supabase:
- First API call to insert book and get ID
- Second API call to insert genre and get ID
- Third API call to insert book-genre relation
This is what I can think of, but 3 API calls seem wrong.
Is there a way where I can do insert into these 3 tables with a single API call from my client, like a single postgres function that I can call?
Please share a general example with the API, thanks!
Is there any reason you need to do this with a single call? I'm assuming from your structure that you're not going to create a new genre for every book you create, so most of the time you're just inserting a book record and a book_genre_rel record. In the real world, you're probably going to have books that fall into multiple genres, so eventually you'll be changing your function to handle the insert of a single book along with multiple genres in a single call.
That being said, there are two ways to approach this. You can make multiple API calls from the client (and there's really no problem doing this -- it's quite common). Alternatively, you can do it all in a single call if you create a PostgreSQL function and call it with .rpc().
Example using just client calls to insert a record in each table:
const { data: genre_data, error: genre_error } = await supabase
.from('genre')
.insert([
{ name: 'Technology' }
]);
const genre_id = genre_data[0].id;
const { data: book_data, error: book_error } = await supabase
.from('book')
.insert([
{ name: 'The Joys of PostgreSQL' }
]);
const book_id = book_data[0].id;
const { data: book_genre_rel_data, error: book_genre_rel_error } = await supabase
.from('book_genre_rel')
.insert([
{ book_id, genre_id }
]);
Here's a single SQL statement to insert into the 3 tables at once:
WITH genre AS (
insert into genre (name) values ('horror') returning id
),
book AS (
insert into book (name) values ('my scary book') returning id
)
insert into book_genre_rel (genre_id, book_id)
select genre.id, book.id from genre, book
Now here's a PostgreSQL function to do everything in a single function call:
CREATE OR REPLACE FUNCTION public.insert_book_and_genre(book_name text, genre_name text)
RETURNS void language SQL AS
$$
WITH genre AS (
insert into genre (name) values (genre_name) returning id
),
book AS (
insert into book (name) values (book_name) returning id
)
insert into book_genre_rel (genre_id, book_id)
select genre.id, book.id from genre, book
$$
Here's an example to test it:
select insert_book_and_genre('how to win friends by writing good sql', 'self-help')
Now, if you've created that function (inside the Supabase Query Editor), then you can call it from the client like this:
const { data, error } = await supabase
.rpc('insert_book_and_genre', {book_name: 'how I became a millionaire at age 3', genre_name: 'lifestyle'})
Again, I don't recommend this approach, at least not for the genre part. You should insert your genres first (they probably won't change) and simplify this to just insert a book and a book_genre_rel record.
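For completeness, a sketch of that simplified flow, assuming the genre row already exists and you have its id in genre_id (error handling omitted):

const { data: book_data, error: book_error } = await supabase
  .from('book')
  .insert([
    { name: 'The Joys of PostgreSQL' }
  ]);

const { error: rel_error } = await supabase
  .from('book_genre_rel')
  .insert([
    { book_id: book_data[0].id, genre_id }
  ]);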
I'm trying to pull back data from PostgreSQL using a data reader. Each time I run my code, the only value returned is the name of the refcursor.
I created the following to illustrate my problem. I'm using Npgsql on .NET Core 3.1 against a PostgreSQL 12.4 database. Can anyone point out what I'm doing wrong?
Here is a simple table of cities with a function that is supposed to return the list of cities stored in the tblcities table.
CREATE TABLE public.tblcities
(
cityname character varying(100) COLLATE pg_catalog."default" NOT NULL,
state character varying(2) COLLATE pg_catalog."default",
CONSTRAINT tblcities_pkey PRIMARY KEY (cityname)
);
INSERT INTO public.tblcities(cityname, state) VALUES ('San Francisco','CA');
INSERT INTO public.tblcities(cityname, state) VALUES ('San Diego','CA');
INSERT INTO public.tblcities(cityname, state) VALUES ('Los Angeles','CA');
CREATE OR REPLACE Function getcities() RETURNS REFCURSOR
LANGUAGE 'plpgsql'
AS $BODY$
DECLARE
ref refcursor := 'city_cursor';
BEGIN
OPEN ref FOR
select *
from tblcities;
Return ref;
END;
$BODY$;
The following is the .net code.
public static void GetCities()
{
using (var cn = new NpgsqlConnection(dbconn_string))
{
if (cn.State != ConnectionState.Open)
cn.Open();
using (NpgsqlCommand cmd = cn.CreateCommand())
{
cmd.CommandText = "getcities";
cmd.Connection = cn;
cmd.CommandType = CommandType.StoredProcedure;
NpgsqlDataReader dr = cmd.ExecuteReader();
while (dr.Read())
{
//There is only one row returned when there should be 3.
//The single value returned is the name of the refcursor - 'city_cursor'
//Where are the city rows I'm expecting?
var value1 = dr[0];
}
}
}
}
In PostgreSQL, rather than returning a refcursor, you generally return the data itself - change the function to have RETURNS TABLE instead of RETURNS REFCURSOR (see the docs for more details). If you return a cursor to the client, the client must then perform another roundtrip to fetch the results for that cursor, whereas when returning a table directly no additional round-trip is needed.
This is one of the main reasons Npgsql doesn't automatically "dereference" cursors - lots of people coming from other databases write functions returning cursors, when in reality doing that is very rarely necessary.
For some discussions around this, see https://github.com/npgsql/npgsql/issues/1785 and https://github.com/npgsql/npgsql/issues/438.
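For example, the getcities function above could be rewritten like this (a sketch based on the tblcities table from the question):

CREATE OR REPLACE FUNCTION getcities()
RETURNS TABLE (cityname character varying, state character varying)
LANGUAGE sql
STABLE
AS $$
    SELECT t.cityname, t.state
    FROM tblcities t;
$$;

With that in place, you can drop CommandType.StoredProcedure and simply execute cmd.CommandText = "SELECT * FROM getcities()" as a text command; the data reader then iterates over the three city rows directly, with no cursor to dereference.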
I have a many:many relationship between 2 tables: note and tag, and want to be able to search all notes by their tagId. Because of the many:many I have a junction table note_tag.
My goal is to expose a computed field on my Postgraphile-generated Graphql schema that I can query against, along with the other properties of the note table.
I'm playing around with postgraphile-plugin-connection-filter. This plugin makes it possible to filter by things like authorId (which would be 1:many), but I'm unable to figure out how to filter by a many:many. I have a computed column on my note table called tags, which is JSON. Is there a way to "look into" this json and pick out where id = 1?
Here is my computed column tags:
create or replace function note_tags(note note, tagid text)
returns jsonb as $$
select
json_strip_nulls(
json_agg(
json_build_object(
'title', tag.title,
'id', tag.id
)
)
)::jsonb
from note
inner join note_tag on note_tag.tag_id = tagid and note_tag.note_id = note.id
left join note_tag nt on note.id = nt.note_id
left join tag on nt.tag_id = tag.id
where note.account_id = '1'
group by note.id, note.title;
$$ language sql stable;
As I understand the function above, I am returning jsonb based on the tagid that was given to the function: inner join note_tag on note_tag.tag_id = tagid. So why is the JSON not being filtered by id when the column gets computed?
I am trying to make a query like this:
query notesByTagId {
notes {
edges {
node {
title
id
tags(tagid: "1")
}
}
}
}
But right now when I execute this query, I get back stringified JSON in the tags field, and all tags are included in that JSON whether or not the note actually belongs to them.
For instance, the note with id = 1 should only have tags with id = 1 and id = 2, but right now it returns every tag in the database:
{
"data": {
"notes": {
"edges": [
{
"node": {
"id": "1",
"tags": "[{\"id\":\"1\",\"title\":\"Psychology\"},{\"id\":\"2\",\"title\":\"Logic\"},{\"id\":\"3\",\"title\":\"Charisma\"}]",
...
The key factor with this computed column is that the JSON must include all tags that the note belongs to, even though we are searching for notes on a single tagid
Here are my simplified tables...
note:
create table note(
id text primary key,
title text
)
tag:
create table tag(
id text primary key,
title text
)
note_tag:
create table note_tag(
note_id text references note (id),
tag_id text references tag (id)
)
Update
I am changing up the approach a bit, and am toying with the following function:
create or replace function note_tags(n note)
returns setof tag as $$
select tag.*
from tag
inner join note_tag on (note_tag.tag_id = tag.id)
where note_tag.note_id = n.id;
$$ language sql stable;
I am able to retrieve all notes with the tags field populated, but now I need to be able to filter out the notes that don't belong to a particular tag, while still retaining all of the tags that belong to a given note.
So the question remains the same as above: how do we filter a table based on a related table's PK?
After a while of digging, I think I've come across a good approach. Based on this response, I have made a function that returns all notes by a given tagid.
Here it is:
create or replace function all_notes_with_tag_id(tagid text)
returns setof note as $$
select distinct note.*
from tag
inner join note_tag on (note_tag.tag_id = tag.id)
inner join note on (note_tag.note_id = note.id)
where tag.id = tagid;
$$ language sql stable;
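You can sanity-check the function directly in SQL before wiring it into GraphQL:

select * from all_notes_with_tag_id('1');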
The error in my approach was expecting the computed column to do all of the work, whereas its only job should be to fetch the related data. The function all_notes_with_tag_id can now be called directly from GraphQL like so:
query MyQuery($tagid: String!) {
allNotesWithTagId(tagid: $tagid) {
edges {
node {
id
title
tags {
edges {
node {
id
title
}
}
}
}
}
}
}
I have this query:
knex('metrics').insert(function() {
this.select('metric as name')
.from('stage.metrics as s')
.whereNotExists(function() {
this.select('*')
.from('metrics')
.where('metrics.name', knex.raw('s.metric'))
})
})
The table metrics has two columns: an auto-incrementing id and name. I expected this to insert into the name column, since the subquery has one column labeled name, with id taking its default. However, it instead complains that I am providing a column of type character varying for my integer column id. How do I make it explicit that I want id to take the default value?
This can do the trick:
knex('metrics').insert(function() {
this
.select([
knex.raw('null::bigint as id'), // or any other type you need (to force using default value you need to pass explicitly null value to insert query)
'metric as name'
])
.from('stage.metrics as s')
.whereNotExists(function() {
this.select('*')
.from('metrics')
.where('metrics.name', knex.raw('s.metric'))
})
})
I know, it looks a bit hacky. It would be great to see something like this in the knex API (the example below is a proposal, not a working example):
knex('table_name')
.insert(
['name', 'surname'],
function () {
this.select(['name', 'surname']).from('other_table')
}
)
This would produce:
insert into table_name (name, surname) select name, surname from other_table;
I'm not sure about this exact interface, but you get the point: explicitly list the fields you want to insert.
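One caveat with the null trick above: plain PostgreSQL inserts an explicit NULL as-is rather than falling back to the column default (only omitting the column, or the DEFAULT keyword, does that), so whether it works depends on your schema. If you want the column list to be explicit today, one option is to drop down to knex.raw (same tables as above):

knex.raw(`
  insert into metrics (name)
  select s.metric
  from stage.metrics as s
  where not exists (
    select * from metrics where metrics.name = s.metric
  )
`)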