I'm trying to do an insert or update in a PostgreSQL database using Node.js with the pg module (version 0.5.4).
So far I have this code:
(...)
client.query({
text: "update users set is_active = 0, ip = $1 where id=$2",
values: [ip,id]
}, function(u_err, u_result){
debug(socket_id,"update query result: ",u_result);
debug(socket_id,"update query error: ",u_err);
var date_now = new Date();
var month = date_now.getMonth() + 1;
if(!u_err){
client.query({
text: 'insert into users (id,first_name,last_name,is_active,ip,date_joined) values' +
'($1,$2,$3,$4,$5,$6)',
values: [
result.id,
result.first_name,
result.last_name,
1,
ip,
date_now.getFullYear() + "-" + month + "-" + date_now.getDate() + " " + date_now.getHours() + ":" + date_now.getMinutes() + ":" + date_now.getSeconds()
]
}, function(i_err, i_result){
debug(socket_id,"insert query result: ",i_result);
debug(socket_id,"insert query error: ",i_err);
});
}
});
The problem is that, although both queries work, both always run instead of the insert only running when the update fails.
The debug functions in the code output something like:
UPDATE
Object { type="update query result: ", debug_value={...}}
home (linha 56)
Object { type="update query error: ", debug_value=null}
home (linha 56)
Object { type="insert query result: "}
home (linha 56)
Object { type="insert query error: ", debug_value={...}}
Insert
Object { type="update query result: ", debug_value={...}}
home (linha 56)
Object { type="update query error: ", debug_value=null}
home (linha 56)
Object { type="insert query result: ", debug_value={...}}
home (linha 56)
Object { type="insert query error: ", debug_value=null}
** EDIT **
ANSWER FROM node-postgres developer:
It's possible to retrieve number of rows affected by an insert and
update. It's not fully implemented in the native bindings, but does
work in the pure javascript version. I'll work on this within the
next week or two. In the mean time use pure javascript version and
have a look here:
https://github.com/brianc/node-postgres/blob/master/test/integration/client/result-metadata-tests.js
** END EDIT **
Can anyone help?
The immediate answer to your question is to use a stored procedure to do an upsert.
http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-UPSERT-EXAMPLE
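In case the link goes stale, here's a minimal sketch of what such an upsert function could look like for the users table from the question (column names and types are assumed from the insert above; it follows the loop-and-retry pattern from the linked documentation example):
CREATE FUNCTION upsert(u_id integer, u_first_name text, u_last_name text,
                       u_is_active integer, u_ip text, u_date_joined timestamp)
RETURNS void AS $$
BEGIN
    LOOP
        -- first try to update an existing row
        UPDATE users
           SET first_name = u_first_name, last_name = u_last_name,
               is_active = u_is_active, ip = u_ip, date_joined = u_date_joined
         WHERE id = u_id;
        IF found THEN
            RETURN;
        END IF;
        -- not there, so try to insert; if another session inserts the same id
        -- concurrently we get a unique_violation and loop back to the UPDATE
        BEGIN
            INSERT INTO users (id, first_name, last_name, is_active, ip, date_joined)
            VALUES (u_id, u_first_name, u_last_name, u_is_active, u_ip, u_date_joined);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing, and loop to try the UPDATE again
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;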
Something like this works fine with the pg module.
client.query({
text: "SELECT upsert($1, $2, $3, $4, $5, $6)"
values: [ obj.id,
obj.first_name,
obj.last_name,
1,
ip,
date_now.getFullYear() + "-" + month + "-" + date_now.getDate() + " " + date_now.getHours() + ":" + date_now.getMinutes() + ":" + date_now.getSeconds()
]
}, function(u_err, u_result){
if(u_err) // this is a real error, handle it
// otherwise your data is updated or inserted properly
});
Of course this assumes that you're using some kind of model object that has all the values you need, even if they aren't changing; you have to pass them all into the upsert. If you're stuck doing it the way you've shown here, you should probably check the actual error object after the update to determine whether it failed because the row is already there, or for some other reason (which is a real db error that needs to be handled).
Then you've gotta deal with the potential race condition between the time your update failed and the time your insert goes through. If some other function tries to insert with the same id, you've got a problem. Transactions are good for that. That's all I got right now. Hope it helps.
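Following up on the transaction remark: one heavy-handed way to close that race window is an explicit table lock inside the transaction, since SHARE ROW EXCLUSIVE conflicts with itself, so two sessions can't run the update/insert pair at the same time (a rough sketch with hypothetical values):
BEGIN;
LOCK TABLE users IN SHARE ROW EXCLUSIVE MODE;
UPDATE users SET is_active = 0, ip = '10.0.0.1' WHERE id = 123;
-- if the UPDATE touched no rows, run the INSERT here, then:
COMMIT;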
I had this issue when connecting to a PG instance using JDBC. The solution I ended up using was:
UPDATE table SET field='C', field2='Z' WHERE id=3;
INSERT INTO table (id, field, field2)
SELECT 3, 'C', 'Z'
WHERE NOT EXISTS (SELECT 1 FROM table WHERE id=3);
The update does nothing if the record doesn't exist, and the insert does nothing if the record does exist. It works pretty well and is a plain-SQL solution, as opposed to a stored procedure.
Here's the initial question:
Insert, on duplicate update in PostgreSQL?
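For what it's worth, on PostgreSQL 9.5 and later the same insert-or-update can be done in one statement with INSERT ... ON CONFLICT, assuming id has a primary key or unique constraint (using the same placeholder table name as above):
INSERT INTO table (id, field, field2)
VALUES (3, 'C', 'Z')
ON CONFLICT (id) DO UPDATE
    SET field = EXCLUDED.field, field2 = EXCLUDED.field2;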
I have an electronic component database to which I add components that I either salvage from e-waste or buy new, and this is how I did it:
const upsertData = (request, response) => {
const {
category, type, value, unit, qty,
} = request.body;
pool.query(`DO $$
BEGIN
IF EXISTS
( SELECT 1
FROM elab
WHERE category='${category}'
AND type='${type}'
AND value='${value}'
AND unit='${unit}'
)
THEN
UPDATE elab
SET qty = qty + ${qty}
WHERE category='${category}'
AND type='${type}'
AND value='${value}'
AND unit='${unit}';
ELSE
INSERT INTO elab
(category, type, value, unit, qty)
values ('${category}', '${type}', '${value}', '${unit}', ${qty});
END IF ;
END
$$ ;`, (error, results) => {
if (error) {
throw error;
}
response.status(201).send('Task completed lol');
});
};
The reason for this is that the only unique column any entry has is the ID, which is assigned automatically; none of the other columns is unique on its own, only the whole entry is. For example, you can have a 100 kOhm resistor that is a potentiometer or a "normal" one, and you can have a potentiometer with a value other than 100 kOhm, so only the combination of columns is unique.
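As a side note, interpolating the request values straight into the SQL string like this is open to SQL injection. A rough sketch of the same insert-or-increment logic as a single parameterized statement (same elab columns assumed; it still has the same small race window under concurrent requests as the DO block):
WITH upd AS (
    UPDATE elab
       SET qty = qty + $5
     WHERE category = $1 AND type = $2 AND value = $3 AND unit = $4
 RETURNING 1
)
INSERT INTO elab (category, type, value, unit, qty)
SELECT $1, $2, $3, $4, $5
WHERE NOT EXISTS (SELECT 1 FROM upd);
This can then be passed to pool.query together with [category, type, value, unit, qty] as the values array.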
Related
I have a JPA/Spring Boot application backed by a Postgres database. I need to get records where a column value equals a substring passed back to the server.
For example:
Select * from dp1_attachments where TRIM(RIGHT(dp1_submit_date_dp1_number::text, 5)) ='00007'
This query works in PgAdmin, but not in the JPA #Query statement.
@Query("SELECT a.attachmentsFolder as attachmentsFolder, a.attachmentNumber as attachmentNumber, a.attachmentName as attachmentName, a.dp1SubmitDateDp1Number as dp1SubmitDateDp1Number, a.attachmentType as attachmentType, a.attachmentDate as attachmentDate, a.attachmentBy as attachmentBy "
+ "FROM DP1Attachments a WHERE TRIM(SUBSTRING(a.dp1SubmitDateDp1Number::text, 5 )) = :dp1Number")
I've also tried CASTing the parameter like this:
@Query("SELECT a.attachmentsFolder as attachmentsFolder, a.attachmentNumber as attachmentNumber, a.attachmentName as attachmentName, a.dp1SubmitDateDp1Number as dp1SubmitDateDp1Number, a.attachmentType as attachmentType, a.attachmentDate as attachmentDate, a.attachmentBy as attachmentBy "
+ "FROM DP1Attachments a WHERE TRIM(SUBSTRING(CAST(a.dp1SubmitDateDp1Number as string, 5 ))) = :dp1Number")
but the application won't even run, and returns an error that the query isn't valid.
If I make no attempt to cast it, I get an error that function pg_catalog.substring(numeric, integer) does not exist
UPDATE
I've also tried creating a native query instead but that also doesn't seem to work.
List<DP1AttachmentsProjection> results = em.createNativeQuery("Select * FROM dp1_attachments WHERE TRIM(RIGHT(CAST(dp1_submit_date_dp1_number as varchar),5)) =" + dp1Number).getResultList();
In place of varchar I have also tried string and text.
Errors come back similar to ERROR: operator does not exist: text = integer. It's like the CAST is being ignored and I'm not sure why.
I also tried the following as a native query:
em.createNativeQuery("Select * FROM dp1_attachments WHERE TRIM(RIGHT(dp1_submit_date_dp1_number::varchar),5)) =" + dp1Number).getResultList();
and get ERROR: syntax error at or near ":"
FINAL SOLUTION
Thanks to @Nenad J I altered the query to get the final working solution:
@Query(value = "SELECT a.attachments_Folder as attachmentsFolder, a.attachment_Number as attachmentNumber, a.attachment_Name as attachmentName, a.dp1_Submit_Date_Dp1_Number as dp1SubmitDateDp1Number, a.attachment_Type as attachmentType, a.attachment_Date as attachmentDate, a.attachment_By as attachmentBy FROM DP1_Attachments a WHERE TRIM(RIGHT(CAST(a.dp1_Submit_Date_Dp1_Number as varchar ), 5 )) = :dp1Number", nativeQuery = true)
By default, substring returns a string, so substring(integer_column, 5) returns a string; thus there is no need for the cast.
@Query("SELECT * FROM DP1Attachments a WHERE TRIM(SUBSTRING(a.dp1SubmitDateDp1Number, 5)) = :dp1Number")
But in this case I recommend using a native query, like this:
Put this code in your attachment repository.
@Query(value="SELECT * FROM DP1Attachments AS a WHERE TRIM(SUBSTRING(a.dp1SubmitDateDp1Number, 5 )) = :dp1Number", nativeQuery=true)
Be careful with the column names.
I'm changing queries from an Oracle Database to PostgreSQL, and in this query I am getting this error:
ERROR: syntax error at or near "SET"
the query is:
{call UPDATE alarm_instance SET last_update_time=default, wait_expire_time=null, core_number=nextval(SEQ_ALRM_NUMBR)
where wait_time <= current_date RETURNING alarm_instance_id bulk collect INTO ?}
I am using JDBC to connect to the database and here is the call code
try (CallableStatement cs = super.prepareCall_(query)) {
cs.registerOutParameter(1, Types.ARRAY);
cs.execute();
...
I have taken a long look at the Postgres documentation and cannot find what is wrong, and I didn't find any answer for this specific situation.
An UPDATE statement can't be executed with a CallableStatement. A CallableStatement is essentially only intended to call stored procedures; in the case of Oracle that includes anonymous PL/SQL blocks.
And bulk collect is invalid in Postgres to begin with.
It seems you want something like this:
String sql =
    "UPDATE alarm_instance " +
    "   SET last_update_time = default, " +
    "       wait_expire_time = null, " +
    "       core_number = nextval('SEQ_ALRM_NUMBR') " +
    " WHERE wait_time <= current_date RETURNING alarm_instance_id";
Statement stmt = connection.createStatement();
stmt.execute(sql);
int rowsUpdated = stmt.getUpdateCount();
ResultSet rs = stmt.getResultSet();
while (rs.next()) {
    // do something with the returned IDs
}
In Python 3+, I want to insert values from a dictionary (or pandas DataFrame) into a database. I have opted for psycopg2 with a Postgres database.
The problem is that I cannot figure out the proper way to do this. I can easily concatenate a SQL string to execute, but the psycopg2 documentation explicitly warns against this. Ideally I wanted to do something like this:
cur.execute("INSERT INTO table VALUES (%s);", dict_data)
and hoped that the execute could figure out that the keys of the dict match the columns in the table. This did not work. From the examples in the psycopg2 documentation I got to this approach:
cur.execute("INSERT INTO table (" + ", ".join(dict_data.keys()) + ") VALUES (" + ", ".join(["%s" for pair in dict_data]) + ");", dict_data)
from which I get a
TypeError: 'dict' object does not support indexing
What is the most pythonic way of inserting a dictionary into a table with matching column names?
Two solutions:
from psycopg2.extensions import AsIs

d = {'k1': 'v1', 'k2': 'v2'}

# Solution 1: pass the column list with AsIs and the values as a tuple
insert = 'insert into table (%s) values %s'
l = [(c, v) for c, v in d.items()]
columns = ','.join([t[0] for t in l])
values = tuple([t[1] for t in l])
cursor = conn.cursor()
print(cursor.mogrify(insert, ([AsIs(columns)] + [values])))

# Solution 2: named placeholders, letting psycopg2 pull the values from the dict
keys = d.keys()
columns = ','.join(keys)
values = ','.join(['%({})s'.format(k) for k in keys])
insert = 'insert into table ({0}) values ({1})'.format(columns, values)
print(cursor.mogrify(insert, d))
Output:
insert into table (k2,k1) values ('v2', 'v1')
insert into table (k2,k1) values ('v2','v1')
I sometimes run into this issue, especially with respect to JSON data, which I naturally want to deal with as a dict. Very similar... but maybe a little more readable?
def do_insert(rec: dict):
cols = rec.keys()
cols_str = ','.join(cols)
vals = [ rec[k] for k in cols ]
vals_str = ','.join( ['%s' for i in range(len(vals))] )
sql_str = """INSERT INTO some_table ({}) VALUES ({})""".format(cols_str, vals_str)
cur.execute(sql_str, vals)
I typically call this type of thing from inside an iterator, and usually wrapped in a try/except. Either the cursor (cur) is already defined in an outer scope, or one can amend the function signature and pass a cursor instance in. I rarely insert just a single row. And like the other solutions, this allows for missing cols/values, provided the underlying schema allows for it too. As long as the dict underlying the keys view is not modified while the insert is taking place, there's no need to specify keys by name, as the values will be ordered as they are in the keys view.
[Suggested answer/workaround - better answers are appreciated!]
After some trial/error I got the following to work:
sql = "INSERT INTO table (" + ", ".join(dict_data.keys()) + ") VALUES (" + ", ".join(["%("+k+")s" for k in dict_data]) + ");"
This gives the sql string
"INSERT INTO table (k1, k2, ... , kn) VALUES (%(k1)s, %(k2)s, ... , %(kn)s);"
which may be executed by
with psycopg2.connect(database='deepenergy') as con:
with con.cursor() as cur:
cur.execute(sql, dict_data)
Pros/cons?
Using %(name)s placeholders may solve the problem:
dict_data = {'key1':val1, 'key2':val2}
cur.execute("""INSERT INTO table (field1, field2)
VALUES (%(key1)s, %(key2)s);""",
dict_data)
You can find the usage in the psycopg2 docs under "Passing parameters to SQL queries".
Here is another solution that inserts a dictionary directly.
Product Model (has the following database columns)
name
description
price
image
digital - (defaults to False)
quantity
created_at - (defaults to current date)
Solution:
data = {
"name": "product_name",
"description": "product_description",
"price": 1,
"image": "https",
"quantity": 2,
}
cur = conn.cursor()
cur.execute(
"INSERT INTO products (name,description,price,image,quantity) "
"VALUES(%(name)s, %(description)s, %(price)s, %(image)s, %(quantity)s)", data
)
conn.commit()
conn.close()
Note: the columns to be inserted are specified in the execute statement: .. INTO products (column names to be filled) VALUES ..., data <- the dictionary. Because the placeholders are named (%(name)s), the dictionary keys do not need to be in any particular order; the dictionary just needs a key for every placeholder.
I am running Npgsql v3.7 with .NET Core on Ubuntu.
When I execute a select query and a cell in any row in the results is null, an exception is thrown with the error message "Column is null".
I am having to work around this by putting every column in the select clause inside a case statement which tests for NULL
"CASE WHEN " + fieldName + " IS NULL THEN '' ELSE " + fieldName + " END "
This seems a bit extreme and should not be necessary. Has anyone else come across this?
Thanks.
You are probably trying to read the column like this:
using (var reader = cmd.ExecuteReader()) {
    reader.Read();
    var o = reader.GetString(0); // or any other of the Get methods on the reader
    ...
}
This code will fail if the column contains a null, and that is the expected behavior. In ADO.NET, you need to check for a null value with reader.IsDBNull(0) before actually getting the value. That's just how the database API works.
I don't know why NULL values are giving you errors, but you can do away with the ugly CASE statement in favor of using COALESCE:
"COALESCE(" + fieldName + ", '')"
Ideally, though, you should change the schema (or your queries) so that NULL values do not cause this problem in the first place.
I'm trying to update my data with Talend. I'm using tOracleOutput with the 'update or insert' option, but my problem is that when there is an update I want to sum the old and the new value of a row, because my app is executing in real time. Is there any way to do this? (sorry for my bad English)
You can use a tOracleRow component containing a SQL update statement. Take the new value (from the context, a row, or globalMap) and use it in the update statement within tOracleRow, adding it to the existing value of the desired record.
Ex (note the extra + inside the SQL so the new value is added to the existing one):
"UPDATE target_table " +
"   SET target_field = target_field + " + globalMap.get("newVal") + " " +
" WHERE target_id = " + globalMap.get("id")