Change JSON text to JSON array - PostgreSQL

Currently the data in my table looks like this:
Field name: author
Field data: JSON text
When I run the select query
SELECT bs.author FROM books bs;
it returns data like this:
"[{\"author_id\": 1, \"author_name\": \"It happend once again\", \"author_slug\": \"fiction-books\"}]"
But I need the selected data to look like this:
[
{
"author_id": 1,
"author_name": "It happend once again",
"author_slug": "fiction-books"
}
]
Database: PostgreSQL
Note: please avoid PHP code or iterating over the result in PHP.

The answer depends on the version of PostgreSQL you are using, and also on what client you are using, but PostgreSQL has lots of built-in JSON processing functions:
https://www.postgresql.org/docs/10/functions-json.html
Your goal is also not clearly defined. If all you want to do is pretty-print the JSON, that is included:
# select jsonb_pretty('[{"author_id": 1,"author_name":"It happend once again","author_slug":"fiction-books"}]') as json;
                      json
-------------------------------------------------
 [                                              +
     {                                          +
         "author_id": 1,                        +
         "author_name": "It happend once again",+
         "author_slug": "fiction-books"         +
     }                                          +
 ]
If instead you're looking for how to populate a Postgres record set from JSON, this is also included:
# select * from json_to_recordset('[{"author_id": 1,"author_name":"It happend once again","author_slug":"fiction-books"}]')
as x(author_id text, author_name text, author_slug text);
 author_id |      author_name      |  author_slug
-----------+-----------------------+---------------
         1 | It happend once again | fiction-books
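If the author column itself is text holding JSON, you can apply jsonb_pretty directly to the column by casting first. A minimal sketch, assuming books.author contains valid JSON text:
-- Assumption: books.author is a text (or varchar) column containing valid JSON
SELECT jsonb_pretty(bs.author::jsonb) AS author
FROM books bs;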

Related

How to push data into a "JSON" data type column in PostgreSQL

I have the following PostgreSQL table:
id | name | email        | weightsovertime | joined
20 | Le   | le#gmail.com | []              | 2018-06-09 03:17:56.718
I would like to know how to push data (a JSON object) into the weightsovertime array.
And since I am building a back-end server, I would like to know the Knex.js query that does this.
I tried the following syntax, but it does not work:
update tableName set weightsovertime = array_append(weightsovertime,{"weight":55,"date":"2/3/96"}) where id = 20;
Thank you
For anyone who happens to land on this question, the solution using Knex.js is:
knex('table')
  .where('id', id)
  .update({
    arrayColumn: knex.raw(`arrayColumn || ?::jsonb`, JSON.stringify(arrayToAppend))
  })
This will produce a query like:
update tableName
set weightsovertime = weightsovertime || $1::jsonb
where id = 20;
Where $1 will be replaced by the value of JSON.stringify(arrayToAppend). Note that this conversion is required because of a limitation of the PostgreSQL driver.
array_append is for native arrays - a JSON array inside a jsonb column is something different.
Assuming your weightsovertime is a jsonb (or json) column, you have to use the concatenation operator ||, e.g.:
update the_table
set weightsovertime = weightsovertime || '[{"weight": 55, "date": "1996-03-02"}]'
where id = 20;
Online example: http://rextester.com/XBA24609
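One extra caveat: if weightsovertime can be NULL, the || concatenation returns NULL and the row's data is silently lost. A guarded variant of the same update, assuming the column is jsonb:
-- Assumption: weightsovertime is jsonb and may be NULL
update the_table
set weightsovertime = coalesce(weightsovertime, '[]'::jsonb)
                   || '[{"weight": 55, "date": "1996-03-02"}]'::jsonb
where id = 20;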

Multiple updates using a select statement and joins in PostgreSQL

I am very new to Postgres.
I am trying to achieve multiple updates by joining tables, using a JSON object, in Postgres.
I have a table "timestamp_snapshot" with the data below:
channel_id (varchar)    date_in (timestamp)
"sam_channel"           "2016-06-27 19:36:40.706018"
"vam_channel"           "2016-06-27 19:36:40.706018"
I have a JSON object as below:
'{ "sam_channel" : "2016-06-27T19:36:40.706018", "vam_channel" : "2016-06-29T19:42:34.812616" }'
I'd like to update the date_in column of table "timestamp_snapshot" using the data from the JSON object.
The output I'd like to have after the update is given below:
channel_id (varchar)    date_in (timestamp)
"sam_channel"           "2016-06-27 19:36:40.706018"
"vam_channel"           "2016-06-29 19:42:34.812616"
I tried the approach below, but the data in the table "timestamp_snapshot" remains the same even after the update.
My approach:
update timestamp_snapshot
set date_in = latest_snap.value
from
  timestamp_snapshot as tab_time_snap,
  (
    select j.key, j.value::timestamp
    from json_each_text('{ "sam_channel" : "2016-06-27T19:36:40.706018",
                           "vam_channel" : "2016-06-29T19:42:34.812616" }') j
  ) latest_snap
where latest_snap.key = tab_time_snap.channel_id
I see there is some problem with the query; since I am new to Postgres, I'd really appreciate any help.
It's possibly a result of you referencing the table to be updated twice: listing timestamp_snapshot again in the FROM clause creates a self-join, so the WHERE clause restricts the second copy rather than the rows being updated. If you change your statement to this, do you get your intended result?
UPDATE timestamp_snapshot
SET date_in = latest_snap.value
FROM (
  SELECT j.key, j.value::timestamp
  FROM json_each_text('{ "sam_channel" : "2016-06-27T19:36:40.706018",
                         "vam_channel" : "2016-06-29T19:42:34.812616" }') j
) latest_snap
WHERE timestamp_snapshot.channel_id = latest_snap.key;
Reference to the docs: https://www.postgresql.org/docs/9.5/static/sql-update.html
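To sanity-check the join before updating, you can preview which rows would change with a select along these lines (same JSON object; column names as in the question):
-- Preview which rows the update will touch and their new values
SELECT ts.channel_id, ts.date_in AS old_date, j.value::timestamp AS new_date
FROM timestamp_snapshot ts
JOIN json_each_text('{ "sam_channel" : "2016-06-27T19:36:40.706018",
                       "vam_channel" : "2016-06-29T19:42:34.812616" }') j
  ON j.key = ts.channel_id;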

Edit text value in PostgreSQL

I have a text column which holds JSON with very long values - actually HTML code:
{"first_key" : "a very long html", "second_key" : "another very long html"}
How can I easily delete the second_key and its value?
You could use regexp_replace, and hope that the HTML doesn't contain anything that confuses the pattern:
update MY_TABLE
set text_column = regexp_replace(text_column, ',\s*"second_key"\s*:\s*".*"', '')
where my_column_id = '<whatever>';
You will have to tweak the regex to fit your real data. You can test it by running a select with the same expression first to see what you get.
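If the column reliably holds valid JSON, a sturdier option than regex is to cast to jsonb and delete the key with the - operator (PostgreSQL 9.5+). A sketch under that assumption:
-- Assumption: text_column always contains valid JSON
update MY_TABLE
set text_column = (text_column::jsonb - 'second_key')::text
where my_column_id = '<whatever>';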

Inserting values into multiple columns by splitting a string in PostgreSQL

I have the following heap of text:
"BundleSize,155648,DynamicSize,204800,Identifier,com.URLConnectionSample,Name,
URLConnectionSample,ShortVersion,1.0,Version,1.0,BundleSize,155648,DynamicSize,
16384,Identifier,com.IdentifierForVendor3,Name,IdentifierForVendor3,ShortVersion,
1.0,Version,1.0,".
What I'd like to do is extract data from this in the following manner:
BundleSize:155648
DynamicSize:204800
Identifier:com.URLConnectionSample
Name:URLConnectionSample
ShortVersion:1.0
Version:1.0
BundleSize:155648
DynamicSize:16384
Identifier:com.IdentifierForVendor3
Name:IdentifierForVendor3
ShortVersion:1.0
Version:1.0
All tips and suggestions are welcome.
It isn't quite clear what you need to do with this data. If you really need to process it entirely in the database (it looks like a task for your favorite scripting language instead), one option is to use hstore.
Converting records one by one is easy:
Assuming
%s =
BundleSize,155648,DynamicSize,204800,Identifier,com.URLConnectionSample,Name,URLConnectionSample,ShortVersion,1.0,Version,1.0
SELECT * FROM each(hstore(string_to_array(%s, ',')));
Output:
     key      |          value
--------------+-------------------------
 Name         | URLConnectionSample
 Version      | 1.0
 BundleSize   | 155648
 Identifier   | com.URLConnectionSample
 DynamicSize  | 204800
 ShortVersion | 1.0
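Note that each() and hstore(text[]) come from the hstore extension, so a fully runnable version of the query above (placeholder filled in) might look like:
-- hstore is an extension and must be enabled once per database
CREATE EXTENSION IF NOT EXISTS hstore;

SELECT * FROM each(hstore(string_to_array(
  'BundleSize,155648,DynamicSize,204800,Identifier,com.URLConnectionSample,Name,URLConnectionSample,ShortVersion,1.0,Version,1.0',
  ',')));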
If you have a table with columns exactly matching the field names (note the quotes; populate_record is case-sensitive about key names):
CREATE TABLE data (
"BundleSize" integer, "DynamicSize" integer, "Identifier" text,
"Name" text, "ShortVersion" text, "Version" text);
You can insert hstore records into it like this:
INSERT INTO data SELECT * FROM
populate_record(NULL::data, hstore(string_to_array(%s, ',')));
Things get more complicated if you have comma-separated values for more than one record.
%s = BundleSize,155648,DynamicSize,204800,Identifier,com.URLConnectionSample,Name,URLConnectionSample,ShortVersion,1.0,Version,1.0,BundleSize,155648,DynamicSize,16384,Identifier,com.IdentifierForVendor3,Name,IdentifierForVendor3,ShortVersion,1.0,Version,1.0,
You need to break the array into chunks of number_of_fields * 2 = 12 elements first.
SELECT hstore(row) FROM (
  -- ORDER BY i keeps key/value pairs in their original order within each chunk
  SELECT array_agg(str ORDER BY i) AS row FROM (
    SELECT str, row_number() OVER () AS i
    FROM unnest(string_to_array(%s, ',')) AS str
  ) AS str_sub
  GROUP BY (i - 1) / 12
) AS row_sub
WHERE array_length(row, 1) = 12;
Output:
"Name"=>"URLConnectionSample", "Version"=>"1.0", "BundleSize"=>"155648", "Identifier"=>"com.URLConnectionSample", "DynamicSize"=>"204800", "ShortVersion"=>"1.0"
"Name"=>"IdentifierForVendor3", "Version"=>"1.0", "BundleSize"=>"155648", "Identifier"=>"com.IdentifierForVendor3", "DynamicSize"=>"16384", "ShortVersion"=>"1.0"
And inserting this into the aforementioned table:
INSERT INTO data SELECT (populate_record(NULL::data, hstore(row))).* FROM ...
the rest of the query is the same.
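Spelled out in full (a direct combination of the two queries above, with %s still standing for your comma-separated string):
INSERT INTO data
SELECT (populate_record(NULL::data, hstore(row))).* FROM (
  SELECT array_agg(str ORDER BY i) AS row FROM (
    SELECT str, row_number() OVER () AS i
    FROM unnest(string_to_array(%s, ',')) AS str
  ) AS str_sub
  GROUP BY (i - 1) / 12
) AS row_sub
WHERE array_length(row, 1) = 12;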

Upsert in Postgres using node.js

I'm trying to do an insert-or-update in a Postgres database using node.js with the pg module (version 0.5.4).
So far I have this code:
(...)
client.query({
  text: "update users set is_active = 0, ip = $1 where id=$2",
  values: [ip, id]
}, function(u_err, u_result){
  debug(socket_id, "update query result: ", u_result);
  debug(socket_id, "update query error: ", u_err);
  date_now = new Date();
  var month = date_now.getMonth() + 1;
  if(!u_err){
    client.query({
      text: 'insert into users (id,first_name,last_name,is_active,ip,date_joined) values' +
            '($1,$2,$3,$4,$5,$6)',
      values: [
        result.id,
        result.first_name,
        result.last_name,
        1,
        ip,
        date_now.getFullYear() + "-" + month + "-" + date_now.getDate() + " " + date_now.getHours() + ":" + date_now.getMinutes() + ":" + date_now.getSeconds()
      ]
    }, function(i_err, i_result){
      debug(socket_id, "insert query result: ", i_result);
      debug(socket_id, "insert query error: ", i_err);
    });
  }
});
The problem is that, although both queries work on their own, both always run instead of the insert running only when the update fails. (An UPDATE that matches no rows still reports success, so u_err is never set and the insert fires every time.)
The debug functions in the code output something like:
UPDATE
Object { type="update query result: ", debug_value={...}}
home (line 56)
Object { type="update query error: ", debug_value=null}
home (line 56)
Object { type="insert query result: "}
home (line 56)
Object { type="insert query error: ", debug_value={...}}
Insert
Object { type="update query result: ", debug_value={...}}
home (line 56)
Object { type="update query error: ", debug_value=null}
home (line 56)
Object { type="insert query result: ", debug_value={...}}
home (line 56)
Object { type="insert query error: ", debug_value=null}
** EDIT **
ANSWER FROM node-postgres developer:
It's possible to retrieve number of rows affected by an insert and
update. It's not fully implemented in the native bindings, but does
work in the pure javascript version. I'll work on this within the
next week or two. In the mean time use pure javascript version and
have a look here:
https://github.com/brianc/node-postgres/blob/master/test/integration/client/result-metadata-tests.js
** END EDIT **
Can anyone help?
The immediate answer to your question is to use a stored procedure to do an upsert.
http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-UPSERT-EXAMPLE
Something like this works fine with the pg module.
client.query({
  text: "SELECT upsert($1, $2, $3, $4, $5, $6)",
  values: [
    obj.id,
    obj.first_name,
    obj.last_name,
    1,
    ip,
    date_now.getFullYear() + "-" + month + "-" + date_now.getDate() + " " + date_now.getHours() + ":" + date_now.getMinutes() + ":" + date_now.getSeconds()
  ]
}, function(u_err, u_result){
  if (u_err) // this is a real error, handle it
  // otherwise your data is updated or inserted properly
});
Of course this assumes that you're using some kind of model object that has all the values you need, even if they aren't changing. You have to pass them all into the upsert. If you're stuck doing it the way you've shown here, you should probably check the actual error object after the update to determine if it failed because the row is already there, or for some other reason (which would be a real db error that needs to be handled).
Then you've gotta deal with the potential race condition between the time your update failed and the time your insert goes through. If some other function tries to insert with the same id, you've got a problem. Transactions are good for that. That's all I got right now. Hope it helps.
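For what it's worth, on PostgreSQL 9.5 and later you no longer need a stored procedure for this: INSERT ... ON CONFLICT does the upsert natively. A minimal sketch mirroring the two queries from the question, assuming id carries a unique constraint:
-- Native upsert (PostgreSQL 9.5+); assumes a unique constraint on id
INSERT INTO users (id, first_name, last_name, is_active, ip, date_joined)
VALUES ($1, $2, $3, $4, $5, $6)
ON CONFLICT (id)
DO UPDATE SET is_active = 0, ip = EXCLUDED.ip;
This also sidesteps the race condition, since it is a single atomic statement.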
I had this issue when connecting to a PG instance using JDBC. The solution I ended up using was:
UPDATE table SET field='C', field2='Z' WHERE id=3;
INSERT INTO table (id, field, field2)
SELECT 3, 'C', 'Z'
WHERE NOT EXISTS (SELECT 1 FROM table WHERE id=3);
The update does nothing if the record doesn't exist, and the insert does nothing if the record does exist. It works pretty well and is a plain-SQL solution as opposed to a stored procedure.
Here's the initial question:
Insert, on duplicate update in PostgreSQL?
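Run as two separate statements this still leaves the race window mentioned in the other answer; one way to narrow it to a single statement is a writable CTE (same placeholder table name as above). It is still not bulletproof under heavy concurrency, but it closes most of the gap:
-- Single-statement variant of the update-then-insert pattern
WITH upd AS (
  UPDATE table SET field='C', field2='Z' WHERE id=3
  RETURNING id
)
INSERT INTO table (id, field, field2)
SELECT 3, 'C', 'Z'
WHERE NOT EXISTS (SELECT 1 FROM upd);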
I have an electronic component database to which I add components that I either salvage from e-waste or buy new. The way I did it was:
const upsertData = (request, response) => {
  const {
    category, type, value, unit, qty,
  } = request.body;
  // Warning: request.body values are interpolated straight into the SQL
  // string, so this is vulnerable to SQL injection; validate or escape
  // the inputs (or use bind parameters) in real code.
  pool.query(`DO $$
    BEGIN
    IF EXISTS
      ( SELECT 1
        FROM elab
        WHERE category='${category}'
          AND type='${type}'
          AND value='${value}'
          AND unit='${unit}'
      )
    THEN
      UPDATE elab
      SET qty = qty + ${qty}
      WHERE category='${category}'
        AND type='${type}'
        AND value='${value}'
        AND unit='${unit}';
    ELSE
      INSERT INTO elab
        (category, type, value, unit, qty)
      values ('${category}', '${type}', '${value}', '${unit}', ${qty});
    END IF;
    END
  $$ ;`, (error, results) => {
    if (error) {
      throw error;
    }
    response.status(201).send('Task completed lol');
  });
};
The reason for this was that the only unique column any entry had was the ID, which is assigned automatically; none of the other columns is unique on its own, only the whole entry is. For example, you can have a 100 kOhm resistor that is a potentiometer or a "normal" one, and you can have a potentiometer with a value other than 100 kOhm, so only the combination of columns is unique.
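An alternative worth considering: if you add a unique index over the four columns that together identify a component, the whole DO block collapses into one parameterized upsert, which also removes the SQL-injection risk of the template-string interpolation above. A sketch under that assumption (the index name is hypothetical):
-- Hypothetical unique index over the identifying columns
CREATE UNIQUE INDEX IF NOT EXISTS elab_component_key
  ON elab (category, type, value, unit);

-- Values arrive as bind parameters instead of being spliced into the SQL
INSERT INTO elab (category, type, value, unit, qty)
VALUES ($1, $2, $3, $4, $5)
ON CONFLICT (category, type, value, unit)
DO UPDATE SET qty = elab.qty + EXCLUDED.qty;
From node-postgres this would be called as pool.query(text, [category, type, value, unit, qty], callback).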