Inserting records into table1 depending on row value in table2 - PostgreSQL

For each row in table exam where exam.examRegulation IS NULL, I want to insert one corresponding row into table examRegulation and copy the column values from exam to examRegulation. Apparently the following query is too naive and must be improved:
insert into examRegulation (graduation, course, examnumber, examversion)
values (exam.graduation, exam.course, exam.examnumber, exam.examversion)
where ?? (select graduation, course, examnumber, examversion
from exam
where exam.examRegulation isnull)
Is there a way to do this in postgresql?

You may rephrase this as an INSERT INTO ... SELECT statement:
INSERT INTO examRegulation (graduation, course, examnumber, examversion)
SELECT graduation, course, examnumber, examversion
FROM exam
WHERE examRegulation IS NULL;
The VALUES clause, as the name implies, supplies fixed rows of values; it cannot draw rows from another table. If you need to populate an insert using query logic, then you need to use a SELECT clause instead.
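As a minimal sketch of the difference, using simplified hypothetical versions of the two tables:
-- simplified, hypothetical tables for illustration only
CREATE TABLE exam (course text, examRegulation integer);
CREATE TABLE examRegulation (course text);
-- VALUES supplies fixed rows:
INSERT INTO examRegulation (course) VALUES ('Algebra');
-- SELECT supplies whatever the query returns, one inserted row per result row:
INSERT INTO examRegulation (course)
SELECT course FROM exam WHERE examRegulation IS NULL;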

Related

How can I restrict a result to only include rows where one specific field is unique with UNION Select statement in BigQuery?

I have the following code. I am trying to stitch the two tables together, but restrict it so that a duplicate Opportunity_ID is added only once, taken from the second table (OpportunityUpdates).
SELECT
Opportunity.Account_Name,
Opportunity.Opportunity_Name,
Opportunity.Opportunity_Owner,
Opportunity.Opportunity_ID
FROM
Opportunity
UNION DISTINCT
SELECT
OpportunityUpdates.Account_Name,
OpportunityUpdates.Opportunity_Name,
OpportunityUpdates.Opportunity_Owner,
OpportunityUpdates.Opportunity_ID
FROM
OpportunityUpdates
WHERE OpportunityUpdates.Opportunity_ID <> Opportunity.Opportunity_ID
This code consolidates all records from both tables (by Opportunity_ID) and gives priority to the OpportunityUpdates table based on Opportunity_ID.
It assumes that the same Opportunity_ID could be in either table ("duplicates"), but that within each table an Opportunity_ID is unique. It also assumes that Opportunity_ID is not nullable (never null).
SELECT DISTINCT
IF(ou.Opportunity_ID IS NOT NULL, ou.Account_Name, o.Account_Name) Account_Name,
IF(ou.Opportunity_ID IS NOT NULL, ou.Opportunity_Name, o.Opportunity_Name) Opportunity_Name,
IF(ou.Opportunity_ID IS NOT NULL, ou.Opportunity_Owner, o.Opportunity_Owner) Opportunity_Owner,
COALESCE(ou.Opportunity_ID, o.Opportunity_ID) Opportunity_ID
FROM OpportunityUpdates ou
FULL OUTER JOIN
Opportunity o
ON o.Opportunity_ID = ou.Opportunity_ID
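If you can additionally assume that no column in OpportunityUpdates is NULL whenever the row itself exists, the IF() calls collapse into COALESCE. A sketch under that extra assumption:
SELECT DISTINCT
COALESCE(ou.Account_Name, o.Account_Name) Account_Name,
COALESCE(ou.Opportunity_Name, o.Opportunity_Name) Opportunity_Name,
COALESCE(ou.Opportunity_Owner, o.Opportunity_Owner) Opportunity_Owner,
COALESCE(ou.Opportunity_ID, o.Opportunity_ID) Opportunity_ID
FROM OpportunityUpdates ou
FULL OUTER JOIN
Opportunity o
ON o.Opportunity_ID = ou.Opportunity_ID
The IF(ou.Opportunity_ID IS NOT NULL, ...) form above is the safer of the two, because it preserves a NULL that is deliberately stored in the updates table.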

Array Insertion Postgres

I have a table item with attributes no (integer) and price (integer), and another table cart with attributes no (integer) and items (array of item).
I have some records in item.
When I tried:
INSERT INTO myschema.cart VALUES(1,'{SELECT item from myschema.item}')
I'm getting the error "malformed record literal".
I expected this to insert all items from myschema.item into the cart record.
It's hard to give you an exact statement without the table structures and such, but you can select into an array:
INSERT INTO myschema.cart (id, item_ids)
SELECT 1, array(SELECT id from myschema.item)
This will select the ids from the item table into an array.
You can test it out by writing:
select array(SELECT id from myschema.item)
You can't write a subquery inside a string literal like that.
What you need to do is aggregate the items into an array with array_agg:
INSERT INTO myschema.cart
VALUES (1, (SELECT array_agg(item) FROM myschema.item));
Or
INSERT INTO myschema.cart
SELECT 1, array_agg(item) FROM myschema.item;
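For reference, here is a minimal assumed schema on which both statements run; the question did not show the exact DDL, so this is only a sketch. The key point is that the items column is an array of the item row type, which is what array_agg(item) produces:
CREATE SCHEMA myschema;
CREATE TABLE myschema.item (no integer, price integer);
CREATE TABLE myschema.cart (no integer, items myschema.item[]);
INSERT INTO myschema.item VALUES (1, 100), (2, 250);
-- aggregate whole item rows into one array cell of the cart row
INSERT INTO myschema.cart
SELECT 1, array_agg(item) FROM myschema.item;
-- items now holds the composite values: {"(1,100)","(2,250)"}
SELECT no, items FROM myschema.cart;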

How to include the values of a select statement in an insert? (PostgreSQL)

I have this select statement:
select it.item_id
from item as it
where it.owning_collection in (select col.collection_id
from collection as col
where col.name like '%Restricted%')
Which returns around 3k results. Now I would like to make an insert in another table for each one of those results with that item_id as one of the parameters like this:
insert into metadatavalue (metadata_value_id, **item_id**, metadata_field_id, text_value, text_lang, place, confidence)
But since I'm not very experienced with databases, I'm not sure how to make these multiple inserts.
All the other information needed in the insert statement consists of fixed values.
Table structures:
Table Item
*item_id
*submitter_id
*in_archive
*withdrawn
*last_modified
*owning_collection
*discoverable
Table metadata
*metadata_value_id
*item_id
*metadata_field_id
*text_value
*text_lang
*place
*authority
*confidence
insert into metadatavalue (metadata_value_id, item_id, metadata_field_id, text_value, text_lang, place, confidence)
select 'metadata_value_id',it.item_id,'metadata_field_id','text_value', 'text_lang', 'place', 'confidence'
from item it
where it.owning_collection in (select col.collection_id
from collection as col
where col.name like '%Restricted%')
Replace the quoted placeholders with your fixed values.
INSERT INTO metadatavalue (item_id, metadata_field_id, text_value, text_lang, place, confidence)
SELECT it.item_id, <c1>, <c2>, <c3>, <c4>, <c5>
FROM item AS it
JOIN collection AS col ON col.collection_id = it.owning_collection
WHERE col.name LIKE '%Restricted%'
Where you replace <c1> etc. with your constant values. Note also that I have rewritten your SELECT query as a more efficient JOIN.
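For concreteness, a hypothetical filled-in version might look as follows; the literals (64, 'restricted', 'en', 1, -1) are purely illustrative, and metadata_value_id is assumed to be populated by a sequence default, which is why it is left out of the column list:
INSERT INTO metadatavalue (item_id, metadata_field_id, text_value, text_lang, place, confidence)
SELECT it.item_id, 64, 'restricted', 'en', 1, -1
FROM item AS it
JOIN collection AS col ON col.collection_id = it.owning_collection
WHERE col.name LIKE '%Restricted%';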

Another way of returning rows if any of the columns has a different value for the same id

Is there another way of returning rows for the same id by joining two tables, returning the row if any of the column values for that id differ?
Select Table1.No,Table2.No,Table1.Name,Table2.Name,Table1.ID,Table2.ID,Table1.ID_N,Table2.ID_N
From MyFirstTable Table1
JOIN MySecondTable Table2
ON Table1.No=Table2.No where Table1.ID!=Table2.ID or Table1.ID_N != Table2.ID_N
In the example above, I have only two columns I need to check, but in my real case there are at least 20.
Is there any other statement I can use instead of enumerating each column in the WHERE condition?
...WHERE BINARY_CHECKSUM(Table1.*) <> BINARY_CHECKSUM(Table2.*)
or
...WHERE BINARY_CHECKSUM(Table1.Field1, Table1.Field2, ...) <> BINARY_CHECKSUM(Table2.Field1, Table2.Field2, ...)
*this assumes you have no blob fields in your tables
http://technet.microsoft.com/en-us/library/ms173784.aspx
If No is a PK
Select Table1.No,Table1.Name,Table1.ID,Table1.ID_N
From MyFirstTable Table1
except
Select Table2.No,Table2.Name,Table2.ID,Table2.ID_N
From MySecondTable Table2
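Note that this EXCEPT only reports rows of MyFirstTable that have no exact match in MySecondTable; rows that exist only in MySecondTable are missed. A sketch that catches differences in both directions by unioning the two set differences:
(Select No,Name,ID,ID_N From MyFirstTable
except
Select No,Name,ID,ID_N From MySecondTable)
union all
(Select No,Name,ID,ID_N From MySecondTable
except
Select No,Name,ID,ID_N From MyFirstTable)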

T-SQL select into existing table new column

Hi, I have a temp table (#temptable1) and I want to add a column from another temp table (#temptable2) into it. My query is as follows:
select
Customer
,CustName
,KeyAccountGroups
,sum(Weeksales) as Weeksales
into #temptable1
from SalesSum
group by Customer
,CustName
,KeyAccountGroups
select
SUM(QtyInvoiced) as MonthTot
,Customer
into #temptable2
from SalesSum
where InvoiceDate between @dtMonthStart and @dtMonthEnd
group by Customer
INSERT INTO #temptable1
SELECT MonthTot FROM #temptable2
where #temptable1.Customer = #temptable2.Customer
I get the following: Column name or number of supplied values does not match table definition.
In an INSERT statement you cannot reference the table you are inserting into. An insert works under the assumption that a new row is to be created. That means there is no existing row that could be referenced.
The functionality you are looking for is provided by the UPDATE statement:
UPDATE t1
SET MonthTot = t2.MonthTot
FROM #temptable1 t1
JOIN #temptable2 t2
ON t1.Customer = t2.Customer;
Be aware however, that this logic requires the Customer column in t2 to be unique. If you have duplicate values in that table the query will seem to run fine, however you will end up with randomly changing results.
For more details on how to combine two tables in an UPDATE or DELETE check out my A Join A Day - UPDATE & DELETE post.
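A quick way to verify that uniqueness assumption before running the UPDATE is to look for Customer values that occur more than once in #temptable2:
SELECT Customer, COUNT(*) AS cnt
FROM #temptable2
GROUP BY Customer
HAVING COUNT(*) > 1;
If this returns no rows, each customer maps to exactly one MonthTot and the update is unambiguous.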
If I understand it correctly, you want to do two things:
1: Alter #temptable1 and add a new column.
2: Fill that column with the values from #temptable2.
ALTER TABLE #temptable1 ADD MonthTot INT
UPDATE #temptable1 SET MonthTot = (
SELECT #temptable2.MonthTot
FROM #temptable2
WHERE #temptable2.Customer = #temptable1.Customer)
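Note that this correlated-subquery form behaves slightly differently from the UPDATE ... FROM above: customers without a match in #temptable2 get MonthTot set to NULL, and if a customer had several rows in #temptable2 the subquery would fail with an error instead of silently picking one of them.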