Update table using the result of an inner query - PostgreSQL

I'm trying to update a field in the portfolio table using an inner query. This is the actual query I'm running:
UPDATE portfolio SET "productTargetCheck"=1
WHERE "id" in (SELECT jsonb_agg("portfolioId") FROM uac WHERE "campaignId" IN (92738140924095,18631183844302));
The inner SELECT query returns an array as its value:
SELECT jsonb_agg("portfolioId") FROM uac WHERE "campaignId" IN (92738140924095,18631183844302);
This is the result I get:
jsonb_agg
----------------------------------------------------------------------------------
["cf69fe67-4409-4f5e-b48f-c051ad68d641", "eaf64075-25a4-424c-82fd-3d5238f19ed4"]
The result is an array of UUIDs. Taking those UUIDs, I'm trying to update the field in the portfolio table as shown below:
UPDATE portfolio SET "productTargetCheck"=1
WHERE "id"=(SELECT jsonb_agg("portfolioId") FROM uac WHERE "campaignId" IN (92738140924095,18631183844302));
I'm getting this error:
pq: unsupported comparison operator: <uuid> = <jsonb>
Is there any way I can update the field by querying the second table?
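One likely fix, sketched under the assumption that uac."portfolioId" already stores uuid values: drop the jsonb_agg aggregation so the subquery returns individual uuid rows, which IN can compare against "id" directly.

```sql
-- Let the subquery return one uuid per row instead of a single jsonb array,
-- so the comparison is uuid IN (uuid, ...) rather than uuid = jsonb.
UPDATE portfolio
SET "productTargetCheck" = 1
WHERE "id" IN (
    SELECT "portfolioId"
    FROM uac
    WHERE "campaignId" IN (92738140924095, 18631183844302)
);
```

If "portfolioId" happens to be stored as text rather than uuid, a cast such as "portfolioId"::uuid may be needed.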

Related

An SQLite query is not working in PostgreSQL

I have a query that works correctly in SQLite, but it gives an error in PostgreSQL.
SELECT decks.id, decks.name, count(cards.id)
from decks
JOIN cards ON decks.id = cards.did
GROUP BY cards.did
The above query gives this error in PostgreSQL:
ERROR: column "decks.id" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: SELECT decks.id, decks.name, count(cards.id) FROM decks JOIN...
You can't have columns in the SELECT list that are not used in an aggregate function or part of the GROUP BY. The fact that SQLite accepts this is a bug in SQLite. The fact that Postgres rejects this is correct.
You need to rewrite your query to:
SELECT decks.id, decks.name, count(cards.id)
from decks
JOIN cards ON decks.id = cards.did
GROUP BY decks.id, decks.name;
If decks.id is the primary key, you can shorten the grouping to GROUP BY decks.id.
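Since PostgreSQL 9.1, columns that are functionally dependent on a grouped primary key may be left out of the GROUP BY clause. Assuming decks.id is the primary key of decks, the shortened form looks like this:

```sql
SELECT decks.id, decks.name, count(cards.id)
FROM decks
JOIN cards ON decks.id = cards.did
GROUP BY decks.id;  -- decks.name is functionally dependent on the primary key
```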

Azure Data Factory LOOKUP Activity issues

I have a pipeline with a range of activities, and I keep getting the following error from my Lookup activity:
Failure happened on 'Source' side.
ErrorCode=SqlOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=A database operation failed with the following error: 'Invalid column name 'updated_at'.',Source=,''Type=System.Data.SqlClient.SqlException,Message=Invalid column name 'updated_at'.,Source=.Net SqlClient Data Provider,SqlErrorNumber=207,Class=16,ErrorCode=-2146232060,State=1,Errors=[{Class=16,Number=207,State=1,Message=Invalid column name 'updated_at'.,},],'
I roughly know what the problem is: the Lookup isn't looping through the individual tables to find the column name 'updated_at'. But I don't understand why.
The Lookup 'Lookup New Watermark' activity has the following query
SELECT MAX(updated_at) as NewWatermarkvalue FROM #{item().Table_Name}
The ForEach activity 'For Each Table' has the following for Items:
#activity('Find SalesDB Tables').output.value
The Lookup activity 'Find SalesDB Tables' has the following query
SELECT QUOTENAME(table_schema)+'.'+QUOTENAME(table_name) AS Table_Name FROM information_Schema.tables WHERE table_name not in ('watermarktable', 'database_firewall_rules')
The only thing I can see wrong with the 'Lookup New Watermark' activity is that it's not looping through the tables. Can someone let me know what is needed?
Just to show that the column exists, I adjusted the connection (screenshots omitted), and the Lookup was able to find the updated_at column on dbo.Products, but couldn't locate it on the other 4 tables.
Therefore, I'm suggesting the problem is that the Lookup activity isn't iterating over the tables automatically.
The error occurs when the following query runs against a table that does not have an updated_at column:
SELECT MAX(updated_at) as NewWatermarkvalue FROM #{item().Table_Name}
The Items field of the ForEach activity was given the value #activity('Find SalesDB Tables').output.value, which returns a list of table names. Inside the ForEach, the above query is executed as follows:
-- first iteration
SELECT MAX(updated_at) as NewWatermarkvalue FROM <table_1>
-- second iteration
SELECT MAX(updated_at) as NewWatermarkvalue FROM <table_2>
...
During this process, when the query runs against a table that does not have an updated_at column, it gives the same error. The following is a demonstration.
I created 2 tables (for demonstration) called t1 and t2.
create table t1(id int, updated_at int)
create table t2(id int, up int)
I used a Lookup activity to get the list of table names using the following query:
SELECT QUOTENAME(table_schema)+'.'+QUOTENAME(table_name) AS Table_Name FROM information_Schema.tables WHERE table_name not in ('watermarktable', 'database_firewall_rules','ipv6_database_firewall_rules')
Inside the ForEach activity (looping through #activity('lookup1').output.value), I tried the same query as above:
SELECT MAX(updated_at) as NewWatermarkvalue FROM #{item().Table_Name}
After debugging the pipeline, we can observe the same error: the iteration over t1 (which has the updated_at column) succeeds, while the iteration over t2 (which does not) fails. If you publish and run this pipeline, it will fail with the same error.
Therefore, check whether the updated_at column exists in the particular table (the current ForEach item), and only query it if it does.
Inside the ForEach, use a Lookup activity with the following query. COL_LENGTH returns the length of the column in bytes if the column exists in the table, and NULL otherwise. Use this result together with an If Condition activity.
select COL_LENGTH('#{item().Table_Name}','updated_at') as column_exists
Use the following expression in the If Condition activity. When it evaluates to false, the particular table contains the updated_at column and we can work with it:
#equals(activity('check for column in table').output.firstRow['column_exists'],null)
The debug output for the t1 and t2 tables confirms this behavior (screenshots omitted). You can continue with the other required activities inside the False branch of the If Condition activity.
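As a quick sanity check of the COL_LENGTH test itself (T-SQL, using the demo tables t1 and t2 created above):

```sql
-- COL_LENGTH returns the column size in bytes, or NULL when the column is absent.
SELECT COL_LENGTH('t1', 'updated_at') AS t1_check;  -- 4: updated_at is an int, so it exists
SELECT COL_LENGTH('t2', 'updated_at') AS t2_check;  -- NULL: t2 has no updated_at column
```

The If Condition expression above then routes tables without the column into the True branch, and tables with it into the False branch.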

Getting the error "more than one row returned by a subquery used as an expression" when trying to insert more than one row into a table

I am trying to insert multiple rows into a table from a different table in PostgreSQL, and I'm encountering the error [21000]: ERROR: more than one row returned by a subquery used as an expression.
INSERT INTO coupon (id,entityid)
values
(select nextval('seq_coupon')),(select entityid from card where country in ('China')));
The query select entityid from card where country in ('China') returns multiple rows.
Any help is much appreciated.
If you want to insert rows that come from a SELECT query, don't use the VALUES clause. The SELECT query used for the second column's value returns more than one row, which is not permitted in places where a single value is required.
To include a constant value for all newly inserted rows, just add it to the SELECT list of the source query.
INSERT INTO coupon (id, entityid, coupon_code)
select nextval('seq_coupon'), entityid, 'A-51'
from card
where country in ('China');
As a side note: when using nextval(), there is no need to wrap it in a SELECT, even in the VALUES clause, e.g.:
insert into coupon (id, entityid)
values (nextval('some_seq'), ...);

ERROR: more than one row returned by subquery

I have the following SQL query (using PostgreSQL, pgAdmin3):
update "Customers"
set "PID"=(select PID
from person
left join "Customers"
on "Customers"."Email"=person.Email
where "Customers"."Email"!='' and "Customers"."Email" is not null)
The subquery runs just fine on its own and returns the list of Customer emails matched to the PIDs found in the person table.
I need to use this list of PIDs to update the PID field in the Customers table.
Any advice?
It's not completely clear from your query, but I think you're looking for something like this:
update "Customers" as c
set "PID"=p.PID
from person AS p
where c."Email"=p.Email
and c."Email"!='' and c."Email" is not null

group by date aggregate function in postgresql

I'm getting an error running this query:
SELECT date(updated_at), count(updated_at) as total_count
FROM "persons"
WHERE ("persons"."updated_at" BETWEEN '2012-10-17 00:00:00.000000' AND '2012-11-07 12:25:04.082224')
GROUP BY date(updated_at)
ORDER BY persons.updated_at DESC
I get the error:
ERROR: column "persons.updated_at" must appear in the GROUP BY clause or be used in an aggregate function
LINE 5: ORDER BY persons.updated_at DESC
This works if I remove the date() function from the GROUP BY clause; however, I'm using the date() function because I want to group by date, not datetime. Any ideas?
At the moment it is unclear what you want Postgres to return. You say it should order by persons.updated_at, but you do not retrieve that field from the database.
I think, what you want to do is:
SELECT date(updated_at), count(updated_at) as total_count
FROM "persons"
WHERE ("persons"."updated_at" BETWEEN '2012-10-17 00:00:00.000000' AND '2012-11-07 12:25:04.082224')
GROUP BY date(updated_at)
ORDER BY count(updated_at) DESC -- this line changed!
Now you are explicitly telling the DB to sort by the resulting value of the COUNT aggregate. You could also use ORDER BY 2 DESC, effectively telling the database to sort by the second column in the result set. However, I much prefer explicitly stating the column for clarity.
Note that I'm currently unable to test this query, but I do think this should work.
The problem is that, because you are grouping by date(updated_at), the value of updated_at may not be unique: different values of updated_at can map to the same date(updated_at). You need to tell the database which of the possible values it should use, or else order by a value derived from the grouping, probably one of:
SELECT date(updated_at) FROM persons GROUP BY date(updated_at)
ORDER BY date(updated_at)
or
SELECT date(updated_at) FROM persons GROUP BY date(updated_at)
ORDER BY min(updated_at)