I need to insert/update a point column in a Postgres database.
I'm using node-postgres
The script generated by the Postgres admin panel shows the update query as:
UPDATE public.places SET id=?, user_id=?, business_name=?, alternate_name=?, primary_category=?, categories=?, description=?, address=?, city=?, state=?, country=?, zip=?, point WHERE <condition>;
How do I build the point value from latitude and longitude?
I have seen a couple of answers using PostGIS, but could not get them working.
The Postgres documentation (https://www.postgresql.org/docs/9.2/static/xfunc-sql.html) mentions that we can use point '(2,1)', but this does not work with a pg query.
What I have now:
var config = {
  user: 'postgres',
  database: 'PGDATABASE',
  password: 'PGPASSWORD!',
  host: 'localhost',
  port: 5432,
  max: 10,
  idleTimeoutMillis: 30000
};
And the update part:
app.post('/updatePlaces', function(req, res, next) {
  console.log("Update");
  console.log(req.body.places);
  pool.query(
    'UPDATE places SET address = $1, alternate_name = $2, business_name = $3, categories = $4, city = $5, country = $6, description = $7, point = $8, primary_category = $9, state = $10, zip = $11',
    [req.body.places.address, req.body.places.alternate_name, req.body.places.business_name,
     req.body.places.categories, req.body.places.city, req.body.places.country,
     req.body.places.description, (req.body.places.point.x, req.body.places.point.y),
     req.body.places.primary_category, req.body.places.state, req.body.places.zip],
    function(err, result) {
      if (err) {
        console.log(err);
        return err;
      }
      res.send(result.rows[0]);
    }
  );
});
I tried many different ways of passing the point:
(req.body.places.point.x, req.body.places.point.y)
point(req.body.places.point.x, req.body.places.point.y)
point '(2,1)'
All of the above throw errors. Do I need to use PostGIS?
This works if you write SQL "directly":
CREATE TEMP TABLE x(p point) ;
INSERT INTO x VALUES ('(1,2)');
INSERT INTO x VALUES (point(3, 4));
SELECT * FROM x ;
Results
(1,2)
(3,4)
After trying a couple of combinations, I found out that this works:
( '(' + req.body.places.point.x + ',' + req.body.places.point.y +')' )
Posting this as an answer in case someone is trying to do this using just node-postgres.
So you can use single-quoted points: insert into x values ( '(1,2)' );
But using insert into x values (point(1,2)); in a pg query does not work.
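Putting that together, a minimal sketch of the working approach (assuming pool is a pg Pool and the request body has the same shape as in the question) is to format the point as the string '(x,y)' and pass it as an ordinary parameter, letting Postgres cast the text to point:
var p = req.body.places.point; // assumed to be an object like { x: ..., y: ... }
pool.query(
  'UPDATE places SET point = $1 WHERE id = $2',
  ['(' + p.x + ',' + p.y + ')', req.body.places.id],
  function(err, result) {
    if (err) { return console.log(err); }
    res.send(result.rows[0]);
  }
);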
I recently came across a similar problem using node-postgres when inserting into a PostGIS database with a geography(point) column. My solution was to use:
pool.query("INSERT INTO table (name, geography) VALUES ($1, ST_SetSRID(ST_POINT($2, $3), 4326))",
[req.body.name, req.body.lat, req.body.lng ]);
With current versions (Postgres 12, pg 8) you can simply use Postgres's POINT function to set a point column value.
Example:
export async function setPoint(client, x, y, id) {
  const sql = `UPDATE table_name SET my_point = POINT($1,$2) WHERE id = $3 RETURNING my_point`;
  const result = await client.query(sql, [x, y, id]);
  return result.rows[0];
}
await setPoint(client, 10, 20, 5);
Result:
{x: 10.0, y: 20.0}
If you are using pg-promise, then custom types can be formatted automatically, see Custom Type Formatting.
You can introduce your own type like this:
function Point(x, y) {
  this.x = x;
  this.y = y;
  // Custom Type Formatting:
  this._rawDBType = true; // to make the type return the string without escaping it
  this.formatDBType = function () {
    return 'ST_MakePoint(' + this.x + ',' + this.y + ')';
  };
}
At some point you would create your objects:
var p = new Point(11, 22);
And then you can use such variables as regular types:
db.query('INSERT INTO places(place) VALUES(ST_SetSRID($1, 4326))', [p]);
See also: Geometry Constructors.
So, I was hoping to do this:
let statement = "update players set walk_count = unnest($1), x = unnest($2), y = unnest($3) where player_id = unnest($4)";
But the error I get is "set-returning functions are not allowed in WHERE".
The only other way I can solve this is by doing individual updates, but the loop takes a lot of time.
Assuming that each parameter ($1, $2, ...) is an array containing one item for each row you want to update, you should use a single unnest() call for all 4 arrays:
update players
set walk_count = v.wc,
x = v.x,
y = v.y
from (
select *
from unnest($1, $2, $3, $4)
) as v (wc, x, y, id)
where v.id = players.player_id
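From node-postgres, the four parameters can then be passed as plain JavaScript arrays, which pg serializes to Postgres arrays. A minimal sketch (the array names and the ::int[] casts are assumptions; adjust them to your column types):
const sql =
  'UPDATE players ' +
  'SET walk_count = v.wc, x = v.x, y = v.y ' +
  'FROM (SELECT * FROM unnest($1::int[], $2::int[], $3::int[], $4::int[])) AS v (wc, x, y, id) ' +
  'WHERE v.id = players.player_id';
// walkCounts, xs, ys and playerIds are assumed to be equal-length arrays
await pool.query(sql, [walkCounts, xs, ys, playerIds]);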
I am currently working on weather monitoring.
For example, a temperature record has a date and a location (coordinates).
All of the coordinates are already in the database; what I need to add is the time and the temperature value. Values and metadata are in a CSV file.
Basically what I'm doing is:
Get the time from the file's name
Insert the time into the DB and keep the primary key
Read the file, getting the value and coordinates
Run a select query to get the id of the coordinates
Insert the weather value with the foreign keys (time and coordinates)
The issue is that the
"SELECT id FROM location WHERE latitude = ... AND longitude = ..."
is too slow. I have 230k files and currently one file takes more than 2 minutes to process... Edit: after changing the index, it now takes 25 seconds, which is still too slow. Moreover, the PreparedStatement is also still slower and I cannot figure out why.
private static void putFileIntoDB(String variableName, ArrayList<String[]> matrix, File file, PreparedStatement prepWeather, PreparedStatement prepLoc, PreparedStatement prepTime, Connection conn){
    try {
        int col = matrix.size();
        int row = matrix.get(0).length;
        String ts = getTimestamp(file);
        Time time = getTime(ts);
        // INSERT INTO takes 14ms
        prepTime.setInt(1, time.year);
        prepTime.setInt(2, time.month);
        prepTime.setInt(3, time.day);
        prepTime.setInt(4, time.hour);
        ResultSet rs = prepTime.executeQuery();
        rs.next();
        int id_time = rs.getInt(1);
        // for each column (longitude)
        for(int i = 1 ; i < col ; ++i){
            // for each row (latitude)
            for(int j = 1 ; j < row ; ++j){
                try {
                    String lon = matrix.get(i)[0];
                    String lat = matrix.get(0)[j];
                    String var = matrix.get(i)[j];
                    lat = lat.substring(1, lat.length()-1);
                    lon = lon.substring(1, lon.length()-1);
                    double latitude = Double.parseDouble(lat);
                    double longitude = Double.parseDouble(lon);
                    double value = Double.parseDouble(var);
                    // With this prepared statement, the instruction needs 16ms to execute
                    prepLoc.setDouble(1, latitude);
                    prepLoc.setDouble(2, longitude);
                    ResultSet rsLoc = prepLoc.executeQuery();
                    rsLoc.next();
                    int id_loc = rsLoc.getInt(1);
                    // Whereas this block takes 1ms
                    Statement stm = conn.createStatement();
                    rsLoc = stm.executeQuery("SELECT id from location WHERE latitude = " + latitude + " AND longitude = " + longitude + ";");
                    rsLoc.next();
                    id_loc = rsLoc.getInt(1);
                    // INSERT INTO takes 1ms
                    prepWeather.setObject(1, id_time);
                    prepWeather.setObject(2, id_loc);
                    prepWeather.setObject(3, value);
                    prepWeather.execute();
                } catch (SQLException ex) {
                    Logger.getLogger(ECMWFHelper.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }
    } catch (SQLException ex) {
        Logger.getLogger(ECMWFHelper.class.getName()).log(Level.SEVERE, null, ex);
    }
}
What I already did:
Set two B-tree indexes on the location table, on the latitude and longitude columns
Dropped the foreign key constraints
The PreparedStatements passed in as parameters are:
// Prepare selection for weather_radar foreign key
PreparedStatement prepLoc = conn.prepareStatement("SELECT id from location WHERE latitude = ? AND longitude = ?;");
PreparedStatement prepTime = conn.prepareStatement("INSERT INTO time(dataSetID, year, month, day, hour) " +
"VALUES(" + dataSetID +", ?, ? , ?, ?)" +
" RETURNING id;");
// PrepareStatement for weather_radar table
PreparedStatement prepWeather = conn.prepareStatement("INSERT INTO weather_radar(dataSetID, id_1, id_2, " + variableName + ")"
+ "VALUES(" + dataSetID + ", ?, ?, ?)");
Any idea how to make things go quicker?
Ubuntu 16.04 LTS 64-bits
15.5 GiB RAM
Intel® Core™ i7-6500U CPU @ 2.50GHz × 4
PostgreSQL 9.5.11 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit
Netbeans IDE 8.2
JDK 1.8
postgresql-42.2.0.jar
The key issue here is that you are missing ResultSet.close() and Statement.close() calls.
Once you resolve that (add the relevant close calls), you might find that a SINGLE conn.prepareStatement call (before both for loops) improves the performance even further (you will then not need to close the statement inside the loop, but you will still need to close the result sets inside the loop).
Then you might apply batch SQL.
Using EXPLAIN, you can figure out the point where the query becomes slow.
One of the situations where I have encountered a similar case:
Compound queries, e.g. parameterized over similar date ranges from different tables and then joined on some indexed value. Even though the date served as an index, the query produced by the PreparedStatement could not hit the indexes and ended up doing a scan over the joined data.
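For example, a quick way to check whether the latitude/longitude lookup actually uses the indexes (the values here are just placeholders):
EXPLAIN ANALYZE
SELECT id FROM location WHERE latitude = 48.85 AND longitude = 2.35;
If the plan shows a sequential scan instead of an index scan, the index definition (or the data types being compared) is the thing to revisit.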
In python 3+, I want to insert values from a dictionary (or pandas dataframe) into a database. I have opted for psycopg2 with a postgres database.
The problem is that I cannot figure out the proper way to do this. I can easily concatenate a SQL string to execute, but the psycopg2 documentation explicitly warns against this. Ideally I wanted to do something like this:
cur.execute("INSERT INTO table VALUES (%s);", dict_data)
and hoped that execute could figure out that the keys of the dict match the columns in the table. This did not work. From the examples in the psycopg2 documentation I got to this approach:
cur.execute("INSERT INTO table (" + ", ".join(dict_data.keys()) + ") VALUES (" + ", ".join(["%s" for pair in dict_data]) + ");", dict_data)
from which I get a
TypeError: 'dict' object does not support indexing
What is the most pythonic way of inserting a dictionary into a table with matching column names?
Two solutions:
from psycopg2.extensions import AsIs

d = {'k1': 'v1', 'k2': 'v2'}

insert = 'insert into table (%s) values %s'
l = [(c, v) for c, v in d.items()]
columns = ','.join([t[0] for t in l])
values = tuple([t[1] for t in l])
cursor = conn.cursor()
print(cursor.mogrify(insert, ([AsIs(columns)] + [values])))

keys = d.keys()
columns = ','.join(keys)
values = ','.join(['%({})s'.format(k) for k in keys])
insert = 'insert into table ({0}) values ({1})'.format(columns, values)
print(cursor.mogrify(insert, d))
Output:
insert into table (k2,k1) values ('v2', 'v1')
insert into table (k2,k1) values ('v2','v1')
I sometimes run into this issue, especially with respect to JSON data, which I naturally want to deal with as a dict. Very similar... but maybe a little more readable?
def do_insert(rec: dict):
    cols = rec.keys()
    cols_str = ','.join(cols)
    vals = [rec[k] for k in cols]
    vals_str = ','.join(['%s' for i in range(len(vals))])
    sql_str = """INSERT INTO some_table ({}) VALUES ({})""".format(cols_str, vals_str)
    cur.execute(sql_str, vals)
I typically call this type of thing from inside an iterator, usually wrapped in a try/except. Either the cursor (cur) is already defined in an outer scope, or you can amend the function signature and pass a cursor instance in. I rarely insert just a single row. And like the other solutions, this allows for missing cols/values, provided the underlying schema allows for it too. As long as the dict underlying the keys view is not modified while the insert is taking place, there is no need to specify keys by name, as the values will be ordered as they are in the keys view.
[Suggested answer/workaround - better answers are appreciated!]
After some trial and error I got the following to work:
sql = "INSERT INTO table (" + ", ".join(dict_data.keys()) + ") VALUES (" + ", ".join(["%("+k+")s" for k in dict_data]) + ");"
This gives the sql string
"INSERT INTO table (k1, k2, ... , kn) VALUES (%(k1)s, %(k2)s, ... , %(kn)s);"
which may be executed by
with psycopg2.connect(database='deepenergy') as con:
    with con.cursor() as cur:
        cur.execute(sql, dict_data)
Pros/cons?
Using %(name)s placeholders may solve the problem:
dict_data = {'key1':val1, 'key2':val2}
cur.execute("""INSERT INTO table (field1, field2)
VALUES (%(key1)s, %(key2)s);""",
dict_data)
You can find the usage in the psycopg2 docs: Passing parameters to SQL queries.
Here is another solution, inserting a dictionary directly.
Product model (it has the following database columns):
name
description
price
image
digital - (defaults to False)
quantity
created_at - (defaults to current date)
Solution:
data = {
    "name": "product_name",
    "description": "product_description",
    "price": 1,
    "image": "https",
    "quantity": 2,
}

cur = conn.cursor()
cur.execute(
    "INSERT INTO products (name, description, price, image, quantity) "
    "VALUES (%(name)s, %(description)s, %(price)s, %(image)s, %(quantity)s)",
    data,
)
conn.commit()
conn.close()
Note: the columns to be inserted are specified in the execute statement (... INTO products (column names to be filled) VALUES ..., data <- the dictionary). Because named %(key)s placeholders are used, the values are matched by key name, so the order of the keys in the dictionary does not matter.
I have a table that I am trying to update multiple values at once. Here is the table schema:
Column | Type | Modifiers
---------------+---------+-----------
user_id | integer |
subservice_id | integer |
I have the user_id and want to insert multiple subservice_ids at once. Is there a syntax in Postgres that will let me do something like this:
insert into user_subservices(user_id, subservice_id) values(1, [1, 2, 3]);
How would I do this?
Multi-value insert syntax is:
insert into table values (1,1), (1,2), (1,3), (2,1);
But krokodilko's answer is much slicker.
Try:
INSERT INTO user_subservices(user_id, subservice_id)
SELECT 1 id, x
FROM unnest(ARRAY[1,2,3,4,5,6,7,8,22,33]) x
Demo: http://www.sqlfiddle.com/#!15/9a006/1
A shorter version of krokodilko's answer:
insert into user_subservices(user_id, subservice_id)
values(1, unnest(array[1, 2, 3]));
A slightly related answer because I keep finding this question every time I try to remember this solution. Insert multiple rows with multiple columns:
insert into user_subservices (user_id, subservice_id)
select *
from unnest(array[1, 2], array[3, 4]);
More robust example, for when you need to insert multiple rows into some table for every row in another table:
INSERT INTO user_subservices (user_id, subservice_id)
SELECT users.id AS user_id, subservice_id
FROM users
CROSS JOIN unnest(ARRAY[1,2,3]) subservice_id;
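If you are calling this from node-postgres, the ids can also be passed as a single array parameter; a hedged sketch (assuming pool is a pg Pool and subservice_id is an integer column):
await pool.query(
  'INSERT INTO user_subservices (user_id, subservice_id) SELECT $1::int, unnest($2::int[])',
  [1, [1, 2, 3]]
);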
For multiple values, this function might be helpful. It generates the VALUES part of the query from an array of row objects:
const _multiInsert = arrOfValues => {
  // removes the last character
  const _remLastChar = str => str.slice(0, str.length - 1);

  let formattedQuery = '';
  arrOfValues.forEach(row => {
    let newRow = '';
    for (const val of Object.values(row)) {
      // note: values are interpolated directly into the string, not parameterized
      let newValue = '';
      if (typeof val === 'string') newValue = `'${val}',`;
      else newValue = `${val},`;
      newRow = newRow.concat(newValue);
    }
    formattedQuery = formattedQuery.concat(`(${_remLastChar(newRow)}),`);
  });
  return _remLastChar(formattedQuery);
};
const arr_Of_Values = [
  {
    id: 1,
    name: "SAMPLE_NAME_1",
  },
  {
    id: 2,
    name: "SAMPLE_NAME2",
  }
];

const query_template = `INSERT INTO TABLE_NAME VALUES ${_multiInsert(arr_Of_Values)}`;
console.log(query_template);
I'm trying to do an insert or update in a postgres database using node.js with pg extension (version 0.5.4).
So far I have this code:
(...)
client.query({
text: "update users set is_active = 0, ip = $1 where id=$2",
values: [ip,id]
}, function(u_err, u_result){
debug(socket_id,"update query result: ",u_result);
debug(socket_id,"update query error: ",u_err);
date_now = new Date();
var month = date_now.getMonth() + 1;
if(!u_err){
client.query({
text: 'insert into users (id,first_name,last_name,is_active,ip,date_joined) values' +
'($1,$2,$3,$4,$5,$6)',
values: [
result.id,
result.first_name,
result.last_name,
1,
ip,
date_now.getFullYear() + "-" + month + "-" + date_now.getDate() + " " + date_now.getHours() + ":" + date_now.getMinutes() + ":" + date_now.getSeconds()
]
}, function(i_err, i_result){
debug(socket_id,"insert query result: ",i_result);
debug(socket_id,"insert query error: ",i_err);
});
}
});
The problem is that, although both queries work, both always run, instead of the insert only running when the update fails.
The debug functions in code output something like:
UPDATE
Object { type="update query result: ", debug_value={...}}
home (linha 56)
Object { type="update query error: ", debug_value=null}
home (linha 56)
Object { type="insert query result: "}
home (linha 56)
Object { type="insert query error: ", debug_value={...}}
Insert
Object { type="update query result: ", debug_value={...}}
home (linha 56)
Object { type="update query error: ", debug_value=null}
home (linha 56)
Object { type="insert query result: ", debug_value={...}}
home (linha 56)
Object { type="insert query error: ", debug_value=null}
** EDIT **
ANSWER FROM node-postgres developer:
It's possible to retrieve number of rows affected by an insert and
update. It's not fully implemented in the native bindings, but does
work in the pure javascript version. I'll work on this within the
next week or two. In the mean time use pure javascript version and
have a look here:
https://github.com/brianc/node-postgres/blob/master/test/integration/client/result-metadata-tests.js
** END EDIT **
Can anyone help?
The immediate answer to your question is to use a stored procedure to do an upsert.
http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-UPSERT-EXAMPLE
Something like this works fine with the pg module.
client.query({
  text: "SELECT upsert($1, $2, $3, $4, $5, $6)",
  values: [
    obj.id,
    obj.first_name,
    obj.last_name,
    1,
    ip,
    date_now.getFullYear() + "-" + month + "-" + date_now.getDate() + " " + date_now.getHours() + ":" + date_now.getMinutes() + ":" + date_now.getSeconds()
  ]
}, function(u_err, u_result){
  if (u_err) {
    // this is a real error, handle it
  }
  // otherwise your data is updated or inserted properly
});
Of course this assumes that you're using some kind of model object that has all the values you need, even if they aren't changing. You have to pass them all into the upsert. If you're stuck doing it the way you've shown here, you should probably check the actual error object after the update to determine whether it failed because the row is already there or for some other reason (which would be a real db error that needs to be handled).
Then you've gotta deal with the potential race condition between the time your update failed and the time your insert goes through. If some other function tries to insert with the same id, you've got a problem. Transactions are good for that. That's all I got right now. Hope it helps.
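As a rough sketch of the transaction idea with a current version of pg (the pool and values variables here are assumptions, not from the code above):
const client = await pool.connect();
try {
  await client.query('BEGIN');
  await client.query('SELECT upsert($1, $2, $3, $4, $5, $6)', values);
  await client.query('COMMIT');
} catch (err) {
  await client.query('ROLLBACK');
  throw err;
} finally {
  client.release();
}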
I had this issue when connecting to a PG instance using JDBC. The solution I ended up using was:
UPDATE table SET field='C', field2='Z' WHERE id=3;
INSERT INTO table (id, field, field2)
SELECT 3, 'C', 'Z'
WHERE NOT EXISTS (SELECT 1 FROM table WHERE id=3);
The update does nothing if the record doesn't exist, and the insert does nothing if the record does exist. It works pretty well and is a SQL-based solution vs a stored procedure.
Here's the initial question:
Insert, on duplicate update in PostgreSQL?
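For completeness, the same update-then-insert pair can be issued from node-postgres with parameters instead of literals; a sketch where table, field, field2 and id are just the placeholder names from the SQL above:
await client.query('UPDATE table SET field = $1, field2 = $2 WHERE id = $3', ['C', 'Z', 3]);
await client.query(
  'INSERT INTO table (id, field, field2) ' +
  'SELECT $1, $2, $3 ' +
  'WHERE NOT EXISTS (SELECT 1 FROM table WHERE id = $1)',
  [3, 'C', 'Z']
);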
I have an electronic component database to which I add components that I either salvage from e-waste or buy as new, and the way I did it was:
const upsertData = (request, response) => {
const {
category, type, value, unit, qty,
} = request.body;
pool.query(`DO $$
BEGIN
IF EXISTS
( SELECT 1
FROM elab
WHERE category='${category}'
AND type='${type}'
AND value='${value}'
AND unit='${unit}'
)
THEN
UPDATE elab
SET qty = qty + ${qty}
WHERE category='${category}'
AND type='${type}'
AND value='${value}'
AND unit='${unit}';
ELSE
INSERT INTO elab
(category, type, value, unit, qty)
values ('${category}', '${type}', '${value}', '${unit}', ${qty});
END IF ;
END
$$ ;`, (error, results) => {
if (error) {
throw error;
}
response.status(201).send('Task completed lol');
});
};
The reason for this was that the only unique column any entry has is the ID, which is generated automatically; none of the other columns are unique, only the whole entry is. For example, you can have a 100 kOhm resistor that is a potentiometer or a "normal" one, and you can have a potentiometer with a value other than 100 kOhm, so only the whole entry is unique.