Update a value from one table to another using a Proc - tsql

I have two tables with identical structure: a temp table and the main table, named PurchaseOrders_TEMP and PurchaseOrders respectively. Each day I refresh the temp table with new data and run a proc to update the main table with the changes and/or additions. I realized I have an issue when a PO, or an item on a PO, is completely deleted: my proc does not update rows that are no longer in the temp table. Here is the table definition for PurchaseOrders. I need either a new proc, or an update to this proc, that sets PASTAT to 'X' for a PBPO/PBITEM when it does not exist in the TEMP table.
[dbo].[PurchaseOrders](
[ID] [int] IDENTITY(1,1) NOT NULL,
[PBPO] [int] NOT NULL,
[PBSEQ] [int] NOT NULL,
[PBITEM] [varchar](28) NOT NULL,
[PBDEL] [varchar](1) NULL,
[PBVEND] [varchar](9) NULL,
[PBTYPE] [varchar](1) NULL,
[PBLOC] [varchar](4) NULL,
[PBDSC1] [varchar](51) NULL,
[PBDSC2] [varchar](45) NULL,
[PBPDTE] [datetime] NULL,
[PBDUE] [datetime] NULL,
[XDPRVDT] [datetime] NULL,
[XDCURDT] [datetime] NULL,
[PBRDTE] [datetime] NULL,
[PBOQTY] [int] NULL,
[PBTQTY] [int] NULL,
[PBRQTY] [int] NULL,
[PBDQTY] [int] NULL,
[PBLQTY] [int] NULL,
[PBBQTY] [int] NULL,
[PBCOST] [float] NULL,
[EXTCOST] [float] NULL,
[PBCCTD] [datetime] NULL,
[PBCCTT] [int] NULL,
[PBCCTU] [varchar](15) NULL,
[PBLCGD] [datetime] NULL,
[PBLCGT] [int] NULL,
[PBUSER] [varchar](12) NULL,
[PASTAT] [varchar](1) NULL,
[PABUYR] [varchar](3) NULL,
[PAPAD3] [varchar](45) NULL,
[PAPPHN] [varchar](30) NULL,
[PACONF] [varchar](39) NULL,
[Comment] [varchar](max) NULL
)
Here is the current Proc that I run to update the data.
ALTER PROCEDURE [dbo].[UpdateExistingPurchaseOrders]
AS
BEGIN
SET NOCOUNT ON;
UPDATE
p
SET
p.PBDEL = pt.PBDEL,
p.PBVEND = pt.PBVEND,
p.PBTYPE = pt.PBTYPE,
p.PBLOC = pt.PBLOC,
p.PBDSC1 = pt.PBDSC1,
p.PBDSC2 = pt.PBDSC2,
p.PBPDTE = pt.PBPDTE,
p.PBDUE = pt.PBDUE,
p.PBRDTE = pt.PBRDTE,
p.PBOQTY = pt.PBOQTY,
p.PBTQTY = pt.PBTQTY,
p.PBRQTY = pt.PBRQTY,
p.PBDQTY = pt.PBDQTY,
p.PBLQTY = pt.PBLQTY,
p.PBBQTY = pt.PBBQTY,
p.PBCOST = pt.PBCOST,
p.EXTCOST = pt.EXTCOST,
p.PBCCTD = pt.PBCCTD,
p.PBCCTT = pt.PBCCTT,
p.PBCCTU = pt.PBCCTU,
p.PBLCGD = pt.PBLCGD,
p.PBLCGT = pt.PBLCGT,
p.PBUSER = pt.PBUSER,
p.PASTAT = pt.PASTAT,
p.PABUYR = pt.PABUYR,
p.PAPAD3 = pt.PAPAD3,
p.PAPPHN = pt.PAPPHN,
p.PACONF = pt.PACONF
FROM
dbo.PurchaseOrders_TEMP pt
LEFT OUTER JOIN dbo.PurchaseOrders p
ON p.PBPO = pt.PBPO
AND p.PBSEQ = pt.PBSEQ
WHERE
p.PBPO IS NOT NULL
AND p.PBSEQ IS NOT NULL
END

You can use the MERGE statement to modify data in a target table from data in a source query. The first two examples in the docs show how to insert/update or update/delete depending on whether a source row is found.
In your case you would have to write something like this:
MERGE dbo.PurchaseOrders AS target
USING (SELECT ... FROM PurchaseOrders_TEMP) AS source (...)
ON (target.PBPO = source.PBPO
AND target.PBSEQ = source.PBSEQ)
WHEN NOT MATCHED BY SOURCE
THEN DELETE
WHEN NOT MATCHED BY TARGET
THEN INSERT (....)
VALUES (....)
WHEN MATCHED
THEN UPDATE
SET ...

Here is the answer I came up with...
UPDATE
    p
SET
    p.PASTAT = 'X'
FROM
    dbo.PurchaseOrders p
    LEFT JOIN dbo.PurchaseOrders_TEMP pt ON
        pt.PBPO = p.PBPO
        AND pt.PBSEQ = p.PBSEQ
WHERE
    pt.PBPO IS NULL
    AND ISNULL(p.PASTAT, '') <> 'X'
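The same anti-join can also be expressed with NOT EXISTS, which is easy to sanity-check outside SQL Server. A minimal sketch on SQLite, with the tables trimmed to the key columns plus PASTAT (sample data is illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE PurchaseOrders      (PBPO INT, PBSEQ INT, PASTAT TEXT);
    CREATE TABLE PurchaseOrders_TEMP (PBPO INT, PBSEQ INT);
    INSERT INTO PurchaseOrders VALUES (1, 1, 'A'), (1, 2, 'A'), (2, 1, 'A');
    INSERT INTO PurchaseOrders_TEMP VALUES (1, 1), (1, 2);  -- PO 2 was deleted
""")

# Flag every main-table row whose (PBPO, PBSEQ) key is gone from staging.
con.execute("""
    UPDATE PurchaseOrders
    SET PASTAT = 'X'
    WHERE NOT EXISTS (
        SELECT 1 FROM PurchaseOrders_TEMP pt
        WHERE pt.PBPO = PurchaseOrders.PBPO
          AND pt.PBSEQ = PurchaseOrders.PBSEQ
    )
""")
rows = con.execute(
    "SELECT PBPO, PBSEQ, PASTAT FROM PurchaseOrders ORDER BY PBPO, PBSEQ"
).fetchall()
print(rows)  # [(1, 1, 'A'), (1, 2, 'A'), (2, 1, 'X')]
```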

Related

sqlalchemy seems to have no support for INSERT in a CTE

Given the following table creation statement and query, I need to get the old values before the update:
CREATE TABLE IF NOT EXISTS products(
id INT GENERATED BY DEFAULT AS IDENTITY NOT NULL PRIMARY KEY,
product_id INT UNIQUE,
image_link CHARACTER VARYING NOT NULL,
additional_image_links CHARACTER VARYING[] NOT NULL
);
WITH temp AS (
INSERT INTO products(product_id, image_link, additional_image_links)
VALUES(1, 'http://www.e1xazm1ple1k113.com',ARRAY['http://www.examkple1113.com','http://www.example2.com'])
ON CONFLICT (product_id) DO UPDATE SET image_link = EXCLUDED.image_link, additional_image_links = EXCLUDED.additional_image_links
WHERE products.image_link != EXCLUDED.image_link OR products.additional_image_links != EXCLUDED.additional_image_links
RETURNING id, image_link, additional_image_links
)
SELECT image_link, additional_image_links FROM products WHERE id IN (SELECT id FROM temp);
If a conflict happens and the new values meet the criteria, a result is generated; however, I need to use the SQLAlchemy machinery for it. An approximate but non-working example:
def upsert(table, rows, constraint, update_cols):
    query = insert(table).values(rows)
    return query.on_conflict_do_update(
        constraint=constraint,
        set_={c: getattr(query.excluded, c) for c in update_cols},
        where=getattr(table.c, "additional_image_links") != getattr(query.excluded, "additional_image_links"),
    ).cte("upsert")
Calling it produces this exception:
sesh = session(autocommit=False, autoflush=False, engine=DEFAULT)
sesh.execute(upsert(*args))
sqlalchemy.exc.ArgumentError: Executable SQL or text() construct expected, got <sqlalchemy.sql.selectable.CTE at 0x1042c3f10; upsert>.
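As the error says, a CTE construct is not executable on its own in SQLAlchemy; it has to be embedded in a statement (e.g. a select) before being passed to execute(). Separately, the conditional-upsert logic itself can be exercised without PostgreSQL. A minimal SQLite sketch of the ON CONFLICT ... DO UPDATE ... WHERE pattern (the data-modifying CTE and RETURNING wrapper are PostgreSQL-specific and omitted; the table and values are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE products (
        id         INTEGER PRIMARY KEY,
        product_id INTEGER UNIQUE,
        image_link TEXT NOT NULL
    )
""")
con.execute("INSERT INTO products (product_id, image_link) "
            "VALUES (1, 'http://old.example')")

# The update only fires when the incoming value actually differs.
# (SQLite wants the target column unqualified here; PostgreSQL qualifies
# it with the table name, as in the statement above.)
con.execute("""
    INSERT INTO products (product_id, image_link)
    VALUES (1, 'http://new.example')
    ON CONFLICT (product_id) DO UPDATE
        SET image_link = excluded.image_link
        WHERE image_link != excluded.image_link
""")
row = con.execute("SELECT image_link FROM products WHERE product_id = 1").fetchone()
print(row[0])  # http://new.example
```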

TSQL - Select values with same ID

I have a view like this:
Table
The record "NDocumento" is populated only in the first row of a transaction by design. These rows are grouped by the column "NMov" which is the ID.
Since this is a view, I would like to populate each empty "NDocumento" record with the corresponding value contained in the first transaction through a SELECT statement.
As you can see from the picture, this is MS SQL Server 2008, so the lack of LAG() makes the game harder.
I would immensely appreciate any help,
thanks
Try this:
SELECT
    T1.NDocumento
    , T2.NMov
    , T2.NRiga
    -- , T2.<rest of the fields>
FROM NDocumentoTable T1
JOIN NDocumentoTable T2 ON T2.NMov = T1.NMov
WHERE T1.NRiga = 1
I used LAG() over the partition of NMov, Causale based on your data. You can change the partition to fit your requirement. The logic is that you take the previous row's value when NDocument is empty within the given partition.
CREATE TABLE myTable_1
(
NMov int
,NRiga int
,CodiceAngrafica varchar(100)
,Causale varchar(100)
,DateRegistration date
,DateDocumented date
,NDocument varchar(100)
)
INSERT INTO myTable_1 VALUES (5133, 1, '', 'V05', '01/14/2021', '01/14/2021', 'VI-2100001')
,(5133, 2, '', 'V05', null, null, '')
,(5134, 1, '', 'V05', '01/14/2021', '01/14/2021', 'VI-2100002')
,(5134, 2, '', 'V05', null, null, '')
SELECT
    NMov
    , NRiga
    , CASE WHEN ISNULL(NDocument, '') = ''
           THEN LAG(NDocument) OVER (PARTITION BY NMov, Causale ORDER BY NMov)
           ELSE NDocument END AS [NDocument]
FROM myTable_1
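Since SQL Server 2008 lacks LAG(), the self-join in the first answer is the portable route: pull the NDocument value from each group's first row (NRiga = 1). A minimal sketch of that approach on SQLite, using the sample rows above (IFNULL stands in for T-SQL's ISNULL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE myTable_1 (NMov INT, NRiga INT, NDocument TEXT)")
con.executemany(
    "INSERT INTO myTable_1 VALUES (?, ?, ?)",
    [(5133, 1, 'VI-2100001'), (5133, 2, ''),
     (5134, 1, 'VI-2100002'), (5134, 2, '')],
)

# For each row, fall back to the first row of the same NMov group when
# NDocument is empty.
rows = con.execute("""
    SELECT t.NMov, t.NRiga,
           CASE WHEN IFNULL(t.NDocument, '') = '' THEN f.NDocument
                ELSE t.NDocument END AS NDocument
    FROM myTable_1 t
    JOIN myTable_1 f
      ON f.NMov = t.NMov AND f.NRiga = 1
    ORDER BY t.NMov, t.NRiga
""").fetchall()
for r in rows:
    print(r)
# (5133, 1, 'VI-2100001')
# (5133, 2, 'VI-2100001')
# (5134, 1, 'VI-2100002')
# (5134, 2, 'VI-2100002')
```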

Possible to use pandas/sqlalchemy to insert arrays into sql database? (postgres)

With the following:
engine = sqlalchemy.create_engine(url)
df = pd.DataFrame({
    "eid": [1, 2],
    "f_i": [123, 1231],
    "f_i_arr": [[123], [0]],
    "f_53": ["2013/12/1", "2013/12/1"],
    "f_53a": [["2013/12/1"], ["2013/12/1"]],
})
with engine.connect() as con:
    con.execute("""
        DROP TABLE IF EXISTS public.test;
        CREATE TABLE public.test
        (
            eid integer NOT NULL,
            f_i INTEGER NULL,
            f_i_arr INTEGER NULL,
            f_53 DATE NULL,
            f_53a DATE[] NULL,
            PRIMARY KEY(eid)
        );
    """)
    df.to_sql("test", con, if_exists='append')
If I try to insert only column "f_53" (a date) it succeeds.
If I try to add column "f_53a" (a date[]) it fails with:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) column "f_53a" is of type date[] but expression is of type text[]
LINE 1: ..._53, f_53a, f_i, f_i_arr) VALUES (1, '2013/12/1', ARRAY['201...
^
HINT: You will need to rewrite or cast the expression.
[SQL: 'INSERT INTO test (eid, f_53, f_53a, f_i, f_i_arr) VALUES (%(eid)s, %(f_53)s, %(f_53a)s, %(f_i)s, %(f_i_arr)s)'] [parameters: ({'f_53': '2013/12/1', 'f_53a': ['2013/12/1', '2013/12/1'], 'f_i_arr': [123], 'eid': 1, 'f_i': 123}, {'f_53': '2013/12/1', 'f_53a': ['2013/12/1', '2013/12/1'], 'f_i_arr': [0], 'eid': 2, 'f_i': 1231})]
I specified the dtypes explicitly and it worked for me on Postgres.
# sample code
import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy.dialects import postgresql

df.to_sql('mytable', pgConn, if_exists='append', index=False,
          dtype={'datetime': sqlalchemy.TIMESTAMP(),
                 'cur_c': postgresql.ARRAY(sqlalchemy.types.REAL),
                 'volt_c': postgresql.ARRAY(sqlalchemy.types.REAL)})
Yes -- it is possible to insert [] and [][] types from a dataframe into Postgres. Unlike flat DATE values, which may be parsed correctly by SQL, DATE[] and DATE[][] values need to be converted to datetime objects first, like so:
with engine.connect() as con:
    con.execute("""
        DROP TABLE IF EXISTS public.test;
        CREATE TABLE public.test
        (
            eid integer NOT NULL,
            f_i INTEGER NULL,
            f_ia INTEGER[] NULL,
            f_iaa INTEGER[][] NULL,
            f_d DATE NULL,
            f_da DATE[] NULL,
            f_daa DATE[][] NULL,
            PRIMARY KEY(eid)
        );
    """)

    d = pd.to_datetime("2013/12/1")
    i = 99
    df = pd.DataFrame({
        "eid": [1, 2],
        "f_i": [i, i],
        "f_ia": [None, [i, i]],
        "f_iaa": [[[i, i], [i, i]], None],
        "f_d": [d, d],
        "f_da": [[d, d], None],
        "f_daa": [[[d, d], [d, d]], None],
    })
    df.to_sql("test", con, if_exists='append', index=None)

HSQLDB merge WHEN MATCHED AND fails

I have the following tables:
create table WorkPendingSummary
(
WorkPendingID int not null,
WorkPendingDate date not null,
Status varchar(20) not null,
EndDate date null
)
create table WorkPendingSummaryStage
(
WorkPendingID int not null,
WorkPendingDate date not null,
Status varchar(20) not null
)
I then have the following merge statement:
MERGE INTO WorkPendingSummary w USING WorkPendingSummaryStage
AS vals(WorkPendingID, WorkPendingDate, Status)
ON w.WorkPendingID = vals.WorkPendingID
WHEN MATCHED AND vals.status = 'CLOSED'
THEN UPDATE SET w.workpendingdate = vals.workpendingdate, w.status = vals.status, w.enddate = current_time
The documentation at http://hsqldb.org/doc/guide/dataaccess-chapt.html#dac_merge_statement states that the "WHEN MATCHED" clause can take an additional "AND" condition, as I have above; however, that fails with:
unexpected token: AND required: THEN : line: 4 [SQL State=42581, DB Errorcode=-5581]
Does this feature work or am I just missing something?
Using HSQLDB 2.3.1.
Thanks!
The documentation is for version 2.3.3 and the forthcoming 2.3.4. The AND clause is supported in these latest versions.

TYPO3 Extbase Repository Query: How to find records in M:N relation where several values for N are given?

We have a simple model Company. Each company can have one or more departments (Dept). Each department is of a certain Type.
Now we need a query that returns all companies which have at least one department of type X and one of type Y (i.e. each returned company has two or more departments, at least one X and one Y).
How can that be done with a query?
This query gives no results if getTypes returns more than one type.
if (count($types = $demand->getTypes()) > 0) {
    foreach ($types as $type)
        $constraints[] = $query->contains('dept.type', $type);
}
$result = $query->matching($query->logicalAnd($query->logicalAnd($constraints)))->execute();
This query returns results for type X or Y
if (count($types = $demand->getTypes()) > 0) {
    $constraints[] = $query->in('dept.type', $types);
}
The tables look like this (simplified):
CREATE TABLE IF NOT EXISTS `company` (
`uid` int(10) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(128) NOT NULL,
PRIMARY KEY (`uid`)
);
CREATE TABLE IF NOT EXISTS `dept` (
`uid` int(10) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(128) NOT NULL,
`company` int(10) unsigned NOT NULL,
`type` int(10) unsigned NOT NULL,
PRIMARY KEY (`uid`)
);
CREATE TABLE IF NOT EXISTS `type` (
`uid` int(10) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(128) NOT NULL,
PRIMARY KEY (`uid`)
);
if (count($types = $demand->getTypes()) > 0) {
    foreach ($types as $type)
        $constraints[] = $query->contains('dept.type', $type);
}
You do not show the further processing.
If you need an AND operation, use this:
$result = $query->matching($query->logicalAnd($query->logicalAnd($constraints)))->execute();
If you need an OR operation, use this:
$result = $query->matching($query->logicalAnd($query->logicalOr($constraints)))->execute();
HTH
I found out that $query->contains() only works properly with plain _mm tables.
So this is what I did: I just added a view to the DB which has the required fields for a _mm table:
CREATE VIEW `company_type_mm` AS
SELECT
`company` AS `uid_local`,
`type` AS `uid_foreign`,
0 AS `sorting`,
0 AS `sorting_foreign`
FROM `dept`;
Then I added a new field type to the TCA of the company table:
'type' => array(
    ...
    'config' => array(
        'foreign_table' => 'type',
        'MM' => 'company_type_mm',
        ...
    )
)
And now I get the right results for companies which have departments of type A and type B like this:
if (count($types = $demand->getTypes()) > 0) {
    foreach ($types as $type)
        $constraints[] = $query->contains('type', $type);
}
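The underlying SQL that the _mm view enables is relational division: keep only companies that cover every requested type. A minimal sketch of that query on SQLite, with illustrative tables and data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE company (uid INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dept (uid INTEGER PRIMARY KEY, company INT, type INT);
    INSERT INTO company VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO dept VALUES (1, 1, 10), (2, 1, 20), (3, 2, 10);
""")

wanted = (10, 20)  # type X and type Y
# A company qualifies only when it has as many DISTINCT matching dept
# types as there are requested types.
rows = con.execute("""
    SELECT c.name
    FROM company c
    JOIN dept d ON d.company = c.uid
    WHERE d.type IN (?, ?)
    GROUP BY c.uid
    HAVING COUNT(DISTINCT d.type) = ?
""", (*wanted, len(wanted))).fetchall()
print(rows)  # [('Acme',)] -- only Acme has both types
```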