Go: How to get the last insert ID on PostgreSQL with NamedExec()

I use the jmoiron/sqlx library to communicate with my PostgreSQL server in my Go apps. Somewhere in my apps I have the following code:
sqlQuery := `
    INSERT INTO table_to_insert (
        code,
        status,
        create_time,
        create_by
    ) VALUES (
        '',
        0,
        CURRENT_TIMESTAMP,
        0
    ) RETURNING id
`
datas, err := tx.NamedExec(sqlQuery, structToInsert)
Question: how can I get the last insert ID using the value returned from tx.NamedExec()? I've tried datas.LastInsertId(), but it always returns 0.
Note: I'm sure the insert into Postgres succeeds.

The reason for this is that PostgreSQL does not give you the last inserted ID. A last inserted ID is only available when you create a new row in a table that uses a sequence.
If you insert a row into a table where a sequence is assigned, you have to use the RETURNING clause, something like: INSERT INTO table (name) VALUES ('val') RETURNING id.
I am not sure about your driver, but with pq you would do it in the following way:
lastInsertId := 0
err = db.QueryRow("INSERT INTO brands (name) VALUES($1) RETURNING id", name).Scan(&lastInsertId)

resp.LastInsertId() only (typically) works with MySQL, and only works for integer IDs: https://golang.org/pkg/database/sql/#Result
Note that since you're using sqlx (as indicated by NamedExec), you'll want to use tx.Get instead to execute the query and capture the returned value:
// id should match the type of your ID column,
// e.g. int64 for a bigserial column, or string for a uuid
var id string
err := tx.Get(&id, query, v1, v2, v3)
See this relevant discussion on the sqlx GitHub repo: https://github.com/jmoiron/sqlx/issues/154#issuecomment-148216948
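Note that tx.Get takes positional arguments, so if you want to keep the named bindings from your struct, one option is sqlx's PrepareNamed, whose prepared statement also has a Get method. Here is a minimal sketch, assuming a query ending in RETURNING id and a struct whose db tags match the named parameters (insertReturningID is a hypothetical helper, not part of sqlx):
package example

import "github.com/jmoiron/sqlx"

// insertReturningID executes a named INSERT ... RETURNING id query and
// scans the returned id, keeping the :named parameter bindings intact.
func insertReturningID(tx *sqlx.Tx, query string, arg interface{}) (int64, error) {
	stmt, err := tx.PrepareNamed(query)
	if err != nil {
		return 0, err
	}
	defer stmt.Close()

	var id int64 // match this to the type of your id column
	err = stmt.Get(&id, arg)
	return id, err
}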

Related

With PostgREST, convert a column to and from an external encoding in the API

We are using PostgREST to automatically generate a REST API for a Postgres database. Our primary keys have an external representation that's different from how we store them internally. For simplicity's sake, let's pretend the ids are stored as integers but we represent them outwardly as hexadecimal strings.
It's simple enough to get PostgREST to convert to the external representation for read operations:
CREATE DOMAIN hexid AS bigint;
CREATE TABLE fruits (
    fruit_id hexid PRIMARY KEY,
    name text
);
CREATE OR REPLACE VIEW api_fruits AS
SELECT to_hex(fruit_id) as fruit_id, name FROM fruits;
INSERT INTO fruits(fruit_id, name) VALUES('51955', 'avocado');
PostgREST generates the expected representation when we GET api_fruits:
[
    {
        "fruit_id": "caf3",
        "name": "avocado"
    }
]
But that's about as far as we get with this solution. It's a one-way transformation, so we won't be able to POST/PATCH records this way. The way PostgREST works is to transform such requests into equivalent INSERT and UPDATE statements, but this view with its custom formatting is not updatable. This is what would happen if we tried:
ERROR: cannot insert into column "fruit_id" of view "api_fruits"
DETAIL: View columns that are not columns of their base relation are not updatable.
STATEMENT: WITH pgrst_source AS (WITH pgrst_payload AS (SELECT $1::json AS json_data), pgrst_body AS ( SELECT CASE WHEN json_typeof(json_data) = 'array' THEN json_data ELSE json_build_array(json_data) END AS val FROM pgrst_payload) INSERT INTO "api_x"."api_fruits"("fruit_id", "name") SELECT "fruit_id", "name" FROM json_populate_recordset (null::"api_x"."api_fruits", (SELECT val FROM pgrst_body)) _ RETURNING "api_x"."api_fruits".*) SELECT '' AS total_result_set, pg_catalog.count(_postgrest_t) AS page_total, CASE WHEN pg_catalog.count(_postgrest_t) = 1 THEN coalesce((
WITH data AS (SELECT row_to_json(_) AS row FROM pgrst_source AS _ LIMIT 1)
SELECT array_agg(json_data.key || '=eq.' || json_data.value)
FROM data CROSS JOIN json_each_text(data.row) AS json_data
WHERE json_data.key IN ('')
), array[]::text[]) ELSE array[]::text[] END AS header, '' AS body, nullif(current_setting('response.headers', true), '') AS response_headers, nullif(current_setting('response.status', true), '') AS response_status FROM (SELECT * FROM pgrst_source) _postgrest_t
We can't INSERT into "View columns that are not columns of their base relation".
The obvious workaround is to serve fruit_id as a straight column, just an integer, and hex encode it (and hex decode incoming ids) with some pre- and post-processing at the nginx level. I'm wondering if we can do better than that, though. For large API operations, re-encoding the JSON will use a lot of memory and CPU time, and it seems so unnecessary.
It would have been great to be able to use a custom CREATE CAST to take the incoming hexadecimal strings and turn them back into integers, something like this:
CREATE CAST (json AS hexid) WITH FUNCTION json_to_hexid AS ASSIGNMENT;
But alas custom casts are ignored on CREATE DOMAIN types. And we can't make a true custom column type because our cloud Postgres host (Google Cloud SQL) doesn't allow custom extensions.
It feels like some combination of INSTEAD OF triggers or rules could work. But when using query parameters to filter results (e.g. selecting a fruit by id), I don't think there's an appropriate trigger to use. INSTEAD OF doesn't work for a straight SELECT, does it?
For example I've tested doing something like this to take care of INSERT and allow POST with PostgREST. It works:
CREATE OR REPLACE FUNCTION api_fruits_insert()
RETURNS trigger AS
$$
BEGIN
    INSERT INTO fruits(fruit_id, name)
    VALUES (('x' || lpad(NEW.fruit_id, 16, '0'))::bit(64)::bigint::hexid, NEW.name);
    RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER api_fruits_insert
    INSTEAD OF INSERT ON api_fruits
    FOR EACH ROW
    EXECUTE PROCEDURE api_fruits_insert();
The trouble is in the WHERE clause. Let's PATCH api_fruits?fruit_id=in.(7b,caf3) with {"name": "pear"}. This works out of the box since the name column is updatable but look at the query:
WITH pgrst_source AS (WITH pgrst_payload AS (SELECT $1::json AS json_data), pgrst_body AS ( SELECT CASE WHEN json_typeof(json_data) = 'array' THEN json_data ELSE json_build_array(json_data) END AS val FROM pgrst_payload) UPDATE "api_x"."api_fruits" SET "name" = _."name" FROM (SELECT * FROM json_populate_recordset (null::"api_x"."api_fruits" , (SELECT val FROM pgrst_body) )) _ WHERE "api_x"."api_fruits"."fruit_id" = ANY ($2) RETURNING 1) SELECT '' AS total_result_set, pg_catalog.count(_postgrest_t) AS page_total, array[]::text[] AS header, '' AS body, nullif(current_setting('response.headers', true), '') AS response_headers, nullif(current_setting('response.status', true), '') AS response_status FROM (SELECT * FROM pgrst_source) _postgrest_t
DETAIL: parameters: $1 = '{
"name": "pear"
}', $2 = '{7b,caf3}'
So we have essentially UPDATE api_fruits SET name='pear' WHERE fruit_id IN ('7b', 'caf3');. Surprisingly this works, but it's a full table scan, because Postgres has to evaluate to_hex(fruit_id) for each row while looking for matches. The same happens if we try to GET a record by fruit_id. How would we rewrite the WHERE clauses?
It really feels like some combination of just the right Postgres and PostgREST features should be able to get us to a point where it's all happening in Postgres without nginx's help and without excessive complexity. Any ideas?

Can PostgreSQL ON CONFLICT be combined with JSON objects?

I wanted to perform a conditional insert in PostgreSQL. Something like:
INSERT INTO {TABLE_NAME} (user_id, data) values ('{user_id}', '{data}')
WHERE not exists(select 1 from files where user_id='{user_id}' and data->'userType'='Type1')
Unfortunately, INSERT and WHERE do not cooperate like that in PostgreSQL. What would be a suitable syntax for my query? I was considering ON CONFLICT, but couldn't find the syntax for using it with a JSON object (data in the example).
Is it possible?
Rewrite the VALUES part to a SELECT, then you can use a WHERE condition:
INSERT INTO {TABLE_NAME} (user_id, data)
SELECT user_id,
       data
FROM (VALUES ('{user_id}', '{data}')) sub (user_id, data)
WHERE NOT EXISTS (
    SELECT 1
    FROM files
    WHERE user_id = '{user_id}'
      AND data ->> 'userType' = 'Type1'
);
But there is NO guarantee that the WHERE condition works under concurrent writes! A row inserted by another transaction that has not yet been committed is invisible to this query. This could lead to data quality issues.
You can use INSERT ... SELECT ... WHERE ....
INSERT INTO elbat
            (user_id,
             data)
SELECT 'abc',
       'xyz'
WHERE NOT EXISTS (SELECT *
                  FROM files
                  WHERE user_id = 'abc'
                        AND data->>'userType' = 'Type1');
And it looks like you're creating the query in a host language. Don't use string concatenation or interpolation to get the values into it. That's error prone and makes your application vulnerable to SQL injection attacks. Look up how to use parameterized queries in your host language. Very likely the table name cannot be passed as a parameter, so you need some other method there, either whitelisting the names or properly quoting them.
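For example, here is a minimal sketch in Go with database/sql and the pq driver, assuming the files table and the column names from the question (conditionalInsert is a hypothetical helper for illustration):
package example

import "database/sql"

// conditionalInsert runs the INSERT ... SELECT ... WHERE NOT EXISTS
// query with the values passed as $1/$2 parameters instead of being
// interpolated into the SQL string.
func conditionalInsert(db *sql.DB, userID, data string) error {
	_, err := db.Exec(`
        INSERT INTO files (user_id, data)
        SELECT $1, $2::jsonb
        WHERE NOT EXISTS (
            SELECT 1
            FROM files
            WHERE user_id = $1
              AND data->>'userType' = 'Type1'
        )`, userID, data)
	return err
}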

Extract all the values in jsonb into a row

I'm using PostgreSQL 11. I have a jsonb which represents a row of a table; it looks like
{"userid":"test","rolename":"Root","loginerror":0,"email":"superadmin@ae.com",...,"thirdpartyauthenticationkey":{}}
Is there any method by which I could gather all the "values" of the jsonb into a string, separated by ',' and without the keys?
The string I want to obtain from the jsonb above would look like
(test, Root, 0, superadmin@ae.com, ..., {})
I need to keep the ORDER of the values the same as the order of their keys in the jsonb. Could I do that with PostgreSQL?
You can use the jsonb_populate_record function (assuming your JSON data matches the users table). This will force the text value to match the column order of your users table:
Schema (PostgreSQL v13)
CREATE TABLE users (
    userid text,
    rolename text,
    loginerror int,
    email text,
    thirdpartyauthenticationkey json
)
Query #1
WITH d(js) AS (
    VALUES
        ('{"userid":"test", "rolename":"Root", "loginerror":0, "email":"superadmin@ae.com", "thirdpartyauthenticationkey":{}}'::jsonb),
        ('{"userid":"other", "rolename":"User", "loginerror":324, "email":"nope@ae.com", "thirdpartyauthenticationkey":{}}'::jsonb)
)
SELECT jsonb_populate_record(null::users, js),
       jsonb_populate_record(null::users, js)::text AS record_as_text,
       pg_typeof(jsonb_populate_record(null::users, js)::text)
FROM d;
jsonb_populate_record              | record_as_text                     | pg_typeof
-----------------------------------+------------------------------------+----------
(test,Root,0,superadmin@ae.com,{}) | (test,Root,0,superadmin@ae.com,{}) | text
(other,User,324,nope@ae.com,{})    | (other,User,324,nope@ae.com,{})    | text
Note that if you're building this string in order to insert it back into PostgreSQL, you don't need to do that: the result of jsonb_populate_record already matches your table:
Query #2
WITH d(js) AS (
    VALUES
        ('{"userid":"test", "rolename":"Root", "loginerror":0, "email":"superadmin@ae.com", "thirdpartyauthenticationkey":{}}'::jsonb),
        ('{"userid":"other", "rolename":"User", "loginerror":324, "email":"nope@ae.com", "thirdpartyauthenticationkey":{}}'::jsonb)
)
INSERT INTO users
SELECT (jsonb_populate_record(null::users, js)).*
FROM d;
There are no results to be displayed.
Query #3
SELECT * FROM users;
userid | rolename | loginerror | email             | thirdpartyauthenticationkey
-------+----------+------------+-------------------+----------------------------
test   | Root     | 0          | superadmin@ae.com | {}
other  | User     | 324        | nope@ae.com       | {}
View on DB Fiddle
You can use jsonb_each_text() to get a set of text representations of the elements, string_agg() to aggregate them into a comma-separated string, and concat() to wrap that in parentheses.
SELECT concat('(', string_agg(value, ', '), ')')
FROM jsonb_each_text('{"userid":"test","rolename":"Root","loginerror":0,"email":"superadmin@ae.com","thirdpartyauthenticationkey":{}}'::jsonb) jet (key, value);
db<>fiddle
You didn't provide the DDL and DML of the table the JSON may reside in (if it does; that isn't clear from your question). The demonstration above therefore only uses the JSON you showed as a scalar. If you do have a table, you need to CROSS JOIN LATERAL and GROUP BY some key.
Edit:
If you need to be sure the order is retained and you don't have it defined in a table's structure, as @Marth's answer assumes, then you can of course extract every value manually in the order you need:
SELECT concat('(',
              concat_ws(', ',
                        j->>'userid',
                        j->>'rolename',
                        j->>'loginerror',
                        j->>'email',
                        j->>'thirdpartyauthenticationkey'),
              ')')
FROM (VALUES ('{"userid":"test","rolename":"Root","loginerror":0,"email":"superadmin@ae.com","thirdpartyauthenticationkey":{}}'::jsonb)) v (j);
db<>fiddle

MyBatis: insert with manually assigned PK

I am trying to insert a single row into a table with a manually assigned PK.
XML file
<insert id = "insertStd" parameterType = "com.org.springboot.dao.StudentEntity" useGeneratedKeys = "false" keyProperty = "insertStd.id", keyColumn = "id">
INSERT INTO STUDENT (ID, NAME, BRANCH, PERCENTAGE, PHONE, EMAIL )
VALUES (ID=#{insertStd.id}, NAME=#{insertStd.name}, BRANCH=#{insertStd.branch}, PERCENTAGE=#{insertStd.percentage}, PHONE=#{insertStd.phone}, EMAIL =#{insertStd.email});
</insert>
Service call method
public boolean saveStudent(Student student) {
    LOGGER.info("Student object save");
    int savedId = studentMapper.insertStd(student);
}
Log file
org.springframework.jdbc.BadSqlGrammarException
### Error updating database. Cause: org.postgresql.util.PSQLException: ERROR: column "id" does not exist
HINT: There is a column named "id" in the table "student", but it cannot be referenced from this part of the query.
Position: 200
### The error may exist in file [c:\.....\StudentMapper.xml]
### The error may involve com.org.springboot.dao.StudentMapper.insertStd-Inline
### The error occurred while setting parameters
### SQL: INSERT INTO STUDENT (ID, NAME, BRANCH, PERCENTAGE, PHONE, EMAIL)
VALUES (ID=?, NAME=?, BRANCH=?, PERCENTAGE=?, PHONE=?, EMAIL=?);
### Cause: org.postgresql.util.PSQLException: ERROR: column "id" does not exist. (It did work with JPA with the id assigned manually.)
### There is a column named "ID" in the table "STUDENT", but it cannot be referenced from this part of the query.
The INSERT statement is malformed: the VALUES clause should not include the column names.
Also, since there's no primary key auto-generation, you can remove all the other attributes; just leave the mapper id.
Note: if you want to manually assign the PK value, you need to make sure the table does not have a GENERATED ALWAYS clause for the column. If this is the case, the table will ignore the value you are providing and will use its own rules to generate the PK.
Use:
<insert id="insertStd">
INSERT INTO STUDENT (ID, NAME, BRANCH, PERCENTAGE, PHONE, EMAIL)
VALUES (
#{insertStd.id}, #{insertStd.name}, #{insertStd.branch},
#{insertStd.percentage}, #{insertStd.phone}, #{insertStd.email}
);
</insert>
Your error is easily reproducible:
create table t (a int, b varchar(10));
insert into t (a, b) values (123, 'ABC'); -- succeeds
insert into t (a, b) values (a=123, b='ABC'); -- fails!
error: column "a" does not exist
See the Fiddle.

Postgres SELECT in Go returns all columns as string (using pq and database/sql)

I'm doing a select in Go using the database/sql package and the pq Postgres driver:
rows, err := db.Query("SELECT (name, age) FROM people WHERE id = 1")
I tried retrieving the values in the normal way:
rows.Next()
name := ""
age := 0
err = rows.Scan(&name, &age)
but I got the error:
sql: expected 1 destination arguments in Scan, not 2
The documentation for sql.(*Rows).Scan says that you can pass a byte slice pointer and it will be filled with the raw results. So I did this:
b := make([]byte, 1024*1024)
rows.Scan(&b)
fmt.Println(string(b))
which succeeded, printing:
(John,18)
So I looked at the source code for sql.(*Rows).Scan, and it turns out that the error is returned if the number of arguments doesn't match the number of results returned by the database driver (which makes sense). So, for some reason, the pq driver seems to be returning the result set as a single value. Why would this be?
Thanks to a_horse_with_no_name for the right answer! I'm posting it here for ... posterity?
In Postgres, doing SELECT (a, b) with parentheses returns a single record, which is an anonymous composite type. Removing the parentheses makes the query return the columns individually: SELECT a, b.
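Here is a minimal sketch of the corrected query against the people table from the question, scanning both columns individually (fetchPerson is a hypothetical helper for illustration):
package example

import "database/sql"

// fetchPerson queries name and age without the extra parentheses, so
// the driver returns two separate columns that Scan can fill.
func fetchPerson(db *sql.DB) (string, int, error) {
	var (
		name string
		age  int
	)
	err := db.QueryRow("SELECT name, age FROM people WHERE id = 1").Scan(&name, &age)
	return name, age, err
}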
When using a function with OUT parameters that only ever returns one row, I ran into the same issue. The following resolved it for me:
var foo, bar string
err := db.QueryRow("select * from my_function()").Scan(&foo, &bar)
The function was of this form:
create or replace function my_function(
out first_out varchar,
out second_out json
) as $$
-- etc.
$$ language plpgsql;