Multiple updates using a SELECT statement and joins in PostgreSQL

I am very new to Postgres.
I am trying to perform multiple updates by joining a table against a JSON object in Postgres.
I have a table "timestamp_snapshot" with the data below:
channel_id (varchar) | date_in (timestamp)
---------------------+-----------------------------
"sam_channel"        | "2016-06-27 19:36:40.706018"
"vam_channel"        | "2016-06-27 19:36:40.706018"
I have a JSON object as below:
'{ "sam_channel" : "2016-06-27T19:36:40.706018", "vam_channel" : "2016-06-29T19:42:34.812616" }'
I'd like to update the date_in column of the "timestamp_snapshot" table using the data from the JSON object.
The output I'd like to have after the update is:
channel_id (varchar) | date_in (timestamp)
---------------------+-----------------------------
"sam_channel"        | "2016-06-27 19:36:40.706018"
"vam_channel"        | "2016-06-29 19:42:34.812616"
I tried the approach below, but the data in "timestamp_snapshot" remains the same even after the update.
My approach:
update timestamp_snapshot
set date_in = latest_snap.value
from
    timestamp_snapshot as tab_time_snap,
    (
        select j.key, j.value::timestamp
        from json_each_text('{ "sam_channel" : "2016-06-27T19:36:40.706018",
                               "vam_channel" : "2016-06-29T19:42:34.812616" }') j
    ) latest_snap
where latest_snap.key = tab_time_snap.channel_id
I can see there is some problem with the query. Since I am new to Postgres, I'd really appreciate any help.

It's possible that this is a result of referencing the table to be updated twice. If you change your statement to the following, do you get your intended result?
UPDATE timestamp_snapshot
SET date_in = latest_snap.value
FROM (
    SELECT j.key, j.value::timestamp
    FROM json_each_text('{ "sam_channel" : "2016-06-27T19:36:40.706018",
                           "vam_channel" : "2016-06-29T19:42:34.812616" }') j
) latest_snap
WHERE timestamp_snapshot.channel_id = latest_snap.key;
Reference to the docs: https://www.postgresql.org/docs/9.5/static/sql-update.html
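For reference, here is a minimal setup that reproduces the example from the question (same table layout and sample rows) and verifies the corrected statement:
CREATE TABLE timestamp_snapshot (
    channel_id varchar PRIMARY KEY,
    date_in    timestamp
);
INSERT INTO timestamp_snapshot VALUES
    ('sam_channel', '2016-06-27 19:36:40.706018'),
    ('vam_channel', '2016-06-27 19:36:40.706018');

-- After running the corrected UPDATE above:
SELECT * FROM timestamp_snapshot ORDER BY channel_id;
-- sam_channel | 2016-06-27 19:36:40.706018
-- vam_channel | 2016-06-29 19:42:34.812616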

Related

How to use a recursive query in a subquery in PostgreSQL

I created a recursive query that returns a string describing the product-category history (a typical parent-child relation):
with recursive productCategoryHierarchy as (
    -- start with the "anchor" row
    select
        1 as "level",
        pg1.id,
        pg1.title,
        pg1.parentproductgroup_id
    from product_group pg1
    where
        pg1.id = '17e949b6-85b3-4c87-8f76-ad1e61ea01e1' -- parameterize me
    union all
    -- walk up to the parent nodes
    select
        pch.level + 1 as "level",
        pg2.id,
        pg2.title,
        pg2.parentproductgroup_id
    from product_group pg2
    join productCategoryHierarchy pch on pch.parentproductgroup_id = pg2.id
)
-- get the hierarchy as a string
select
    CONCAT('', string_agg(productCategoryHierarchy.title, ' > '), '')
from productCategoryHierarchy;
Now I want to use this result in another query as a subquery, so that I can use the created string as an attribute in the parent query. Is that possible in Postgres, or is there another solution for getting a hierarchical tree as a string in an attribute?
Are you looking for something like this?
with recursive productcategoryhierarchy as (
    ...
), aggregated_values as (
    select string_agg(productCategoryHierarchy.title, ' > ') as all_titles
    from productCategoryHierarchy
)
select ..., (select all_titles from aggregated_values) as all_titles
from ... your main query goes here ...
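Spelled out in full with the recursive CTE from the question, that pattern would look like this (the final SELECT is a hypothetical stand-in for the main query; here it simply re-reads product_group):
with recursive productCategoryHierarchy as (
    select 1 as "level", pg1.id, pg1.title, pg1.parentproductgroup_id
    from product_group pg1
    where pg1.id = '17e949b6-85b3-4c87-8f76-ad1e61ea01e1' -- parameterize me
    union all
    select pch.level + 1, pg2.id, pg2.title, pg2.parentproductgroup_id
    from product_group pg2
    join productCategoryHierarchy pch on pch.parentproductgroup_id = pg2.id
), aggregated_values as (
    select string_agg(productCategoryHierarchy.title, ' > ') as all_titles
    from productCategoryHierarchy
)
-- hypothetical main query: attach the hierarchy string as an attribute
select p.id, p.title,
       (select all_titles from aggregated_values) as category_path
from product_group p
where p.id = '17e949b6-85b3-4c87-8f76-ad1e61ea01e1';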

How to push data into a "JSON" data type column in PostgreSQL

I have the following POSTGRESQL table
id | name | email | weightsovertime | joined
20 | Le | le@gmail.com | [] | 2018-06-09 03:17:56.718
I would like to know how to push data (a JSON object) into the weightsovertime array.
Since I am building a back-end server, I would also like to know the Knex.js query that does this.
I tried the following syntax, but it does not work:
update tableName set weightsovertime = array_append(weightsovertime,{"weight":55,"date":"2/3/96"}) where id = 20;
Thank you
For anyone who happens to land on this question, the solution using Knex.js is:
knex('table')
    .where('id', id)
    .update({
        arrayColumn: knex.raw(`arrayColumn || ?::jsonb`, JSON.stringify(arrayToAppend))
    })
This will produce a query like:
update tableName
set arrayColumn = arrayColumn || $1::jsonb
where id = 20;
where $1 will be replaced by the value of JSON.stringify(arrayToAppend). Note that this conversion is necessary because of a limitation of the Postgres driver.
array_append is for native arrays - a JSON array inside a jsonb column is something different.
Assuming your weightsovertime is jsonb, you have to use the concatenation operator ||, e.g.:
update the_table
set weightsovertime = weightsovertime || '[{"weight": 55, "date": "1996-03-02"}]'
where id = 20;
Online example: http://rextester.com/XBA24609
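For completeness, a minimal end-to-end sketch (the users table name is an assumption; the columns are taken from the question's listing, with weightsovertime as jsonb):
-- Hypothetical table, matching the question's listing
CREATE TABLE users (
    id              integer PRIMARY KEY,
    name            text,
    email           text,
    weightsovertime jsonb DEFAULT '[]',
    joined          timestamp
);
INSERT INTO users (id, name, email, joined)
VALUES (20, 'Le', 'le@gmail.com', '2018-06-09 03:17:56.718');

-- Append one object to the JSON array:
UPDATE users
SET weightsovertime = weightsovertime || '[{"weight": 55, "date": "1996-03-02"}]'::jsonb
WHERE id = 20;

SELECT weightsovertime FROM users WHERE id = 20;
-- [{"date": "1996-03-02", "weight": 55}]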

PostgreSQL Update & Inner Join

I am trying to update data in the table local.import_payments from the table local.payments using an UPDATE with an INNER JOIN. The query I used:
Update local.import_payments
Set local.import_payments.client_id = local.payments.payment_for_client__record_id,
local.import_payments.client_name = local.payments.payment_for_client__company_name,
local.import_payments.customer_id = local.payments.customer__record_id,
local.import_payments.customer_name = local.payment_from_customer,
local.import_payments.payment_id = local.payments.payment_id
From local.import_payments
Inner Join local.payments
Where local.payments.copy_to_imported_payments = 'true'
The client_id, client_name, customer_id, and customer_name columns in local.import_payments need to be updated with the values from local.payments whenever the copy_to_imported_payments field is checked.
I am getting a syntax error while executing the query. I tried a couple of things, but they did not work. Can anyone look over the query and let me know where the issue is?
Try the following:
UPDATE local.import_payments
SET client_id     = lpay.payment_for_client__record_id,
    client_name   = lpay.payment_for_client__company_name,
    customer_id   = lpay.customer__record_id,
    customer_name = lpay.payment_from_customer,
    payment_id    = lpay.payment_id
FROM local.payments AS lpay
WHERE lpay.<<field>> = local.import_payments.<<field>>
    AND lpay.copy_to_imported_payments = 'true'
You shouldn't specify the schema/table for the updated columns, only the column names:
"Do not include the table's name in the specification of a target column — for example, UPDATE table_name SET table_name.col = 1 is invalid."
(from the docs)
You also shouldn't repeat the table being updated in the FROM clause, except in the case of a self-join.
You can make your query shorter using the "column-list syntax":
update local.import_payments as target
set (client_id,
     client_name,
     customer_id,
     customer_name,
     payment_id) = (source.payment_for_client__record_id,
                    source.payment_for_client__company_name,
                    source.customer__record_id,
                    source.payment_from_customer,
                    source.payment_id)
from local.payments as source
where
    <join condition> and
    source.copy_to_imported_payments = 'true'
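With a concrete join condition filled in, that might read as follows (import_ref and payment_ref are hypothetical join columns; substitute whatever keys actually relate the two tables):
update local.import_payments as target
set (client_id,
     client_name,
     customer_id,
     customer_name,
     payment_id) = (source.payment_for_client__record_id,
                    source.payment_for_client__company_name,
                    source.customer__record_id,
                    source.payment_from_customer,
                    source.payment_id)
from local.payments as source
where
    target.import_ref = source.payment_ref  -- hypothetical join key
    and source.copy_to_imported_payments = 'true';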

Inserting values into multiple columns by splitting a string in PostgreSQL

I have the following heap of text:
"BundleSize,155648,DynamicSize,204800,Identifier,com.URLConnectionSample,Name,
URLConnectionSample,ShortVersion,1.0,Version,1.0,BundleSize,155648,DynamicSize,
16384,Identifier,com.IdentifierForVendor3,Name,IdentifierForVendor3,ShortVersion,
1.0,Version,1.0,".
What I'd like to do is extract data from this in the following manner:
BundleSize:155648
DynamicSize:204800
Identifier:com.URLConnectionSample
Name:URLConnectionSample
ShortVersion:1.0
Version:1.0
BundleSize:155648
DynamicSize:16384
Identifier:com.IdentifierForVendor3
Name:IdentifierForVendor3
ShortVersion:1.0
Version:1.0
All tips and suggestions are welcome.
It isn't quite clear what you need to do with this data. If you really need to process it entirely in the database (it looks more like a task for your favorite scripting language), one option is to use hstore.
Converting records one by one is easy:
Assuming
%s =
BundleSize,155648,DynamicSize,204800,Identifier,com.URLConnectionSample,Name,URLConnectionSample,ShortVersion,1.0,Version,1.0
SELECT * FROM each(hstore(string_to_array(%s, ',')));
Output:
key | value
--------------+-------------------------
Name | URLConnectionSample
Version | 1.0
BundleSize | 155648
Identifier | com.URLConnectionSample
DynamicSize | 204800
ShortVersion | 1.0
If you have a table with columns exactly matching the field names (note the quotes; populate_record is case-sensitive about key names):
CREATE TABLE data (
"BundleSize" integer, "DynamicSize" integer, "Identifier" text,
"Name" text, "ShortVersion" text, "Version" text);
You can insert hstore records into it like this:
INSERT INTO data SELECT * FROM
populate_record(NULL::data, hstore(string_to_array(%s, ',')));
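A quick check of the result (values as implied by the first sample record):
SELECT * FROM data;
-- BundleSize | DynamicSize | Identifier              | Name                | ShortVersion | Version
-- 155648     | 204800      | com.URLConnectionSample | URLConnectionSample | 1.0          | 1.0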
Things get more complicated if you have comma-separated values for more than one record.
%s = BundleSize,155648,DynamicSize,204800,Identifier,com.URLConnectionSample,Name,URLConnectionSample,ShortVersion,1.0,Version,1.0,BundleSize,155648,DynamicSize,16384,Identifier,com.IdentifierForVendor3,Name,IdentifierForVendor3,ShortVersion,1.0,Version,1.0,
You need to break up an array into chunks of number_of_fields * 2 = 12 elements first.
SELECT hstore(row) FROM (
    SELECT array_agg(str) AS row FROM (
        SELECT str, row_number() OVER () AS i
        FROM unnest(string_to_array(%s, ',')) AS str
    ) AS str_sub
    GROUP BY (i - 1) / 12
) AS row_sub
WHERE array_length(row, 1) = 12;
Output:
"Name"=>"URLConnectionSample", "Version"=>"1.0", "BundleSize"=>"155648", "Identifier"=>"com.URLConnectionSample", "DynamicSize"=>"204800", "ShortVersion"=>"1.0"
"Name"=>"IdentifierForVendor3", "Version"=>"1.0", "BundleSize"=>"155648", "Identifier"=>"com.IdentifierForVendor3", "DynamicSize"=>"16384", "ShortVersion"=>"1.0"
And inserting this into the aforementioned table:
INSERT INTO data SELECT (populate_record(NULL::data, hstore(row))).* FROM ...
the rest of the query is the same.
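Assembled in full, the multi-record insert reads (with %s again standing for the long comma-separated literal):
INSERT INTO data
SELECT (populate_record(NULL::data, hstore(row))).*
FROM (
    SELECT array_agg(str) AS row FROM (
        SELECT str, row_number() OVER () AS i
        FROM unnest(string_to_array(%s, ',')) AS str
    ) AS str_sub
    GROUP BY (i - 1) / 12
) AS row_sub
WHERE array_length(row, 1) = 12;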

Metadata about a column in SQL Server 2008 R2?

I'm trying to figure out a way to store metadata about a column without repeating myself.
I'm currently working on a generic dimension-loading SSIS package that will handle all my dimensions. It currently does the following:
Create a temporary table identical to the given table name in the parameters (this is a generic stored procedure that receives the table name as a parameter and then does: select top 0 * into ##[INSERT ORIGINAL TABLE NAME HERE] from [INSERT ORIGINAL TABLE NAME HERE]).
==> Here we insert custom code for this particular dimension that first queries the data from a data source to get my delta, then transforms the data, and finally loads it into my temporary table.
Merge the temporary table into my original table with a T-SQL MERGE, taking care of type1 and type2 fields accordingly.
My problem right now is that I have to maintain a table with all the fields in it, storing metadata that tells my scripts whether a particular field is type 1 or type 2... this is nonsense; I can get the same data (minus the type 1/type 2 flag) from sys.columns/sys.types.
I was ultimately thinking about renaming my fields to include their type, such as:
FirstName_T2, LastName_T2, Sex_T1 (well, I know this one could be type 2; let's not fall into that debate here).
What would you guys do with that? My solution (a table with that metadata) is currently in place and working, but it's obvious that repeating information from the system tables in a custom table is nonsense, just for a simple type 1/type 2 flag.
UPDATE: I also thought about creating user-defined types like varchar => t1_varchar, t2_varchar, etc. This sounds a bit clunky too...
Everything you need should already be in INFORMATION_SCHEMA.COLUMNS.
I can't follow your thinking of not using the provided tables/views...
Edit: As scarpacci mentioned, this is somewhat portable if needed.
I know this is bad, but I will post an answer to my own question... Thanks to GBN for the help, though!
I am now storing "flags" in the "description" field of my columns. For example, I can store a flag this way: "TYPE_2_DATA".
Then I use this query to get the flag back for each and every column:
select columns.name as [column_name]
,types.name as [type_name]
,extended_properties.value as [column_flags]
from sys.columns
inner join sys.types
on columns.system_type_id = types.system_type_id
left join sys.extended_properties
on extended_properties.major_id = columns.object_id
and extended_properties.minor_id = columns.column_id
and extended_properties.name = 'MS_Description'
where object_id = ( select id from sys.sysobjects where name = 'DimDivision' )
and is_identity = 0
order by column_id
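To put the flag into the description in the first place, sp_addextendedproperty can be used like this (a minimal sketch; SomeColumn is a hypothetical column name, DimDivision is the table from the query above):
exec sys.sp_addextendedproperty
    @name = N'MS_Description',
    @value = N'TYPE_2_DATA',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'DimDivision',
    @level2type = N'COLUMN', @level2name = N'SomeColumn';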
Now I can store metadata about columns without having to create a separate table. I use what's already in place, and I don't repeat myself. I'm not sure this is the best possible solution yet, but it works and is far better than duplicating information.
In the future, I will be able to use this field to store more metadata, such as: "TYPE_2_DATA|ANOTHER_FLAG|ETC|OH BOY!".
UPDATE:
I now store the information in separate extended properties. You can manage extended properties using the sp_addextendedproperty and sp_updateextendedproperty stored procedures. I created a simple stored procedure that helps me update those values regardless of whether they already exist:
create procedure [dbo].[UpdateSCDType]
    @tablename nvarchar(50),
    @fieldname nvarchar(50),
    @scdtype char(1),
    @dbschema nvarchar(25) = 'dbo'
as
begin
    declare @already_exists int;
    if ( @scdtype = '1' or @scdtype = '2' )
    begin
        -- check whether the property is already set on this column
        select @already_exists = count(1)
        from sys.columns
        inner join sys.extended_properties
            on extended_properties.major_id = columns.object_id
            and extended_properties.minor_id = columns.column_id
            and extended_properties.name = 'Scd_Type'
        where object_id = (select sysobjects.id from sys.sysobjects where sysobjects.name = @tablename)
            and columns.name = @fieldname

        if ( @already_exists = 0 )
        begin
            exec sys.sp_addextendedproperty
                @name = N'Scd_Type',
                @value = @scdtype,
                @level0type = N'SCHEMA',
                @level0name = @dbschema,
                @level1type = N'TABLE',
                @level1name = @tablename,
                @level2type = N'COLUMN',
                @level2name = @fieldname
        end
        else
        begin
            exec sys.sp_updateextendedproperty
                @name = N'Scd_Type',
                @value = @scdtype,
                @level0type = N'SCHEMA',
                @level0name = @dbschema,
                @level1type = N'TABLE',
                @level1name = @tablename,
                @level2type = N'COLUMN',
                @level2name = @fieldname
        end
    end
end
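Usage is then a one-liner per column (DimDivision and FirstName here are just example names from earlier in the thread):
exec dbo.UpdateSCDType
    @tablename = 'DimDivision',
    @fieldname = 'FirstName',
    @scdtype   = '2';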
Thanks again