I'm trying to update a table (in pgsql) with a complex expression that needs to occur several times in the UPDATE statement. WITH seems perfect for this:
WITH newtz AS (SELECT timezone FROM timezonebyzipcode WHERE zip=(SELECT zip_code FROM company WHERE id=company_id))
UPDATE cross_rental
SET return_timezone=newtz
return_time=(return_time AT TIME ZONE return_timezone) AT TIME ZONE newtz
WHERE return_to='Vendor' AND return_timezone<>newtz
Unfortunately, it doesn't work:
ERROR: syntax error at or near "UPDATE"
LINE 2: UPDATE cross_rental
^
I've searched and couldn't find any examples of using WITH with UPDATE in this way, but I also don't see anything indicating it shouldn't work. Is this just unsupported, or am I making some silly mistake?
And, if it's unsupported, should I just copy that nasty long expression into each of the three places where I'm using "newtz" in the UPDATE clause? Or is there some better way to accomplish this update?
ERROR: syntax error at or near "UPDATE"
LINE 2: UPDATE cross_rental
This specific error message reveals you're running PostgreSQL version 9.0 or older. The two major versions before 9.1 (8.4 and 9.0) supported CTEs and WITH, but not in the context of data-modifying queries.
That appeared in 9.1. See 7.8.2. Data-Modifying Statements in WITH in the PostgreSQL 9.1 documentation.
Assuming a newer version, a CTE must be used as a table with rows and columns (not as a scalar variable), so the query should be fixed as mentioned in Richard Huxton's answer.
It's a table (or a from-recordset-source anyway).
WITH calculated AS (
SELECT ....
)
UPDATE foo
SET bar = calculated.something
FROM calculated
WHERE ...
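Applied to the statement from the question, the fix might look roughly like the sketch below. It assumes cross_rental has an id primary key and a company_id column (the question only implies the latter), so the join keys may need adjusting:
WITH newtz AS (
    SELECT cr.id AS rental_id, tz.timezone
    FROM cross_rental cr
    JOIN company c ON c.id = cr.company_id
    JOIN timezonebyzipcode tz ON tz.zip = c.zip_code
)
UPDATE cross_rental
SET return_timezone = newtz.timezone,
    return_time = (return_time AT TIME ZONE return_timezone) AT TIME ZONE newtz.timezone
FROM newtz
WHERE newtz.rental_id = cross_rental.id
  AND return_to = 'Vendor'
  AND return_timezone <> newtz.timezone;
The long expression is then written once, inside the CTE, and the UPDATE simply references newtz.timezone wherever the value is needed.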
WITH (SELECT timezone FROM timezonebyzipcode WHERE zip=(SELECT zip_code FROM company WHERE id=company_id)) AS newtz
You have the alias and referent backwards...
Related
I was creating a function, following an example from a database class, which included declaring a variable (base_salary) and using SELECT INTO to calculate its value later.
However, I did not realize I had used a different order for the syntax (SELECT ... FROM ... INTO base_salary), and the function could be used later without any visible issues (values worked as expected).
Is there any difference in using the "SELECT ... FROM ... INTO" syntax order? I tried looking for it in the PostgreSQL documentation but found nothing about it. A Google search did not provide any meaningful information either. The only related thing I found was in the MySQL documentation, which only mentions support for the different order in an older version.
There is no difference. From the docs of pl/pgsql:
The INTO clause can appear almost anywhere in the SQL command. Customarily it is written either just before or just after the list of select_expressions in a SELECT command, or at the end of the command for other command types. It is recommended that you follow this convention in case the PL/pgSQL parser becomes stricter in future versions.
Notice that in plain (non-procedural) SQL there is also a SELECT INTO command, which works like CREATE TABLE AS; in that form the INTO must come right after the select list.
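A minimal sketch of the difference, using made-up employees/emp_id/base_salary names (the first two statements belong inside a PL/pgSQL function body, the last is plain SQL):
-- PL/pgSQL: conventional placement of INTO
SELECT salary INTO base_salary FROM employees WHERE id = emp_id;
-- PL/pgSQL: the ordering the question describes, accepted just the same
SELECT salary FROM employees INTO base_salary WHERE id = emp_id;
-- Plain SQL outside PL/pgSQL: SELECT INTO creates a new table, like CREATE TABLE AS
SELECT salary INTO salaries_copy FROM employees;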
I always use SELECT ... INTO ... FROM; I believe that is the standard supported notation:
https://www.w3schools.com/sql/sql_select_into.asp
I would recommend sticking with this order, in case future updates make the parser stricter and the other ordering stops being supported, as mentioned above...
For about the past year I have been having an issue with my DELETE queries not accepting a WHERE clause. I am not talking about big, complex queries; even little simple ones fail.
Usually I am in a hurry, so rather than analyze the issue I simply use a join on a subquery to specify which rows to delete. But now I want to know what's going on. I've looked at other questions on Stack Overflow, but those errors were always due to something obvious, like an extra comma, or trying to use a WHERE clause with an INSERT ... VALUES statement.
The culprit today is the following:
DELETE FROM prc_ContractChanges chgs
WHERE chgs.ChangeTypeID = 1
Similar constructs have worked fine for the last 19 years. So why all of a sudden does this throw the obscene error
Incorrect syntax near 'WHERE'.
I tried to add a comment in response to one of the answers below, but each attempt ended with "an error occurred when adding this comment", so I am editing the original question.
The suggestion below worked. It then occurred to me to ask how you would identify which of multiple tables to delete from in a DELETE with a join, so I put the alias in the DELETE clause. That worked. It also occurred to me that putting the alias in the DELETE clause of a single-table delete would work as well. It did.
You can use one of the following syntaxes:
DELETE prc_ContractChanges
FROM prc_ContractChanges chgs
WHERE chgs.ChangeTypeID = 1;
or:
DELETE prc_ContractChanges
WHERE ChangeTypeID = 1;
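For completeness, the variant the asker describes in the edit, with the alias itself as the DELETE target, looks like this (a sketch against the same ChangeTypeID filter, not tested on the asker's schema):
DELETE chgs
FROM prc_ContractChanges chgs
WHERE chgs.ChangeTypeID = 1;
T-SQL does not allow an alias directly on the delete target in DELETE FROM table alias, which is why the original statement fails; the alias only becomes usable through a separate FROM clause as above.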
I have a DB2 data source and an Oracle 12c target.
The Oracle side has a DB link to the DB2 defined, which is working in general.
Now I have a huge table in the DB2 which has a timestamp column (let's call it ROW_CHANGED) for row changes. I want to retrieve rows which have changed after a particular time.
Running
SELECT * FROM lib.tbl WHERE ROW_CHANGED >'2016-08-01 10:00:00'
on the DB2 returns exactly 1 row after ca. 90 seconds, which is fine.
Now I try the same query from the Oracle side via the db link:
SELECT * FROM lib.tbl@dblink_name WHERE ROW_CHANGED > TO_TIMESTAMP('2016-08-01 10:00:00')
This runs for hours and ends in a timeout.
I read some Oracle docs and found distributed query optimization tips, but most of them refer to joining a local table to a remote table, which is not my case.
In my desperation, I have tried the DRIVING_SITE hint, without effect.
Now I wonder when the WHERE part of the query gets evaluated. Since I have to use Oracle syntax and not DB2 syntax for the query, is it possible that Oracle first copies the full table and applies the WHERE clause afterwards? I did some research but did not find anything that would help me in this direction.
The ROW_CHANGED is a hidden column in the DB2, if that matters.
Thx for any hint in advance.
Update
Thanks @all for the help. I'll share what did the trick for me.
First of all, I used TO_TIMESTAMP since the DB2 column is also a timestamp (not a date), and I expected to circumvent implicit conversions that way.
Without the explicit conversion I ran into ORA-28534: Heterogeneous Services preprocessing error, and I have no hope of touching the DB config within reasonable time.
The explain plan, by the way, did not reveal much. It showed a FULL hint and no conversion on the predicates. It did, however, show the ROW_CHANGED column as DATE; I wonder why.
I tried Justin's suggestion to use a bind variable, but I got ORA-28534 again. The next thing I did was to wrap it in a PL/SQL block (it will run in a stored procedure later anyway).
declare
    v_tmstmp TIMESTAMP := '01.08.16 10:00:00';  -- relies on the session default DD.MM.YY format
begin
    INSERT INTO ORAUSER.TMP_TBL (SRC_PK, ROW_CHANGED)
    SELECT SRC_PK, ROW_CHANGED
    FROM lib.tbl@dblink_name
    WHERE ROW_CHANGED > v_tmstmp;
end;
This executed in the same time as in DB2 itself. The date format is DD.MM.YY here since that is the session default, unfortunately.
When changing the variable assignment to
v_tmstmp TIMESTAMP := TO_TIMESTAMP('01.08.16 10:00:00','DD.MM.YY HH24:MI:SS');
I got the same problem as before.
Meanwhile the DB2 operators have created an index on the ROW_CHANGED column, which I had requested earlier that day. This seems to have solved the problem in general: even my original query finishes in no time now.
If you are actually using an Oracle-specific conversion function like to_timestamp, that forces the predicate to be evaluated on the Oracle side. Oracle isn't going to know how to convert a built-in function like to_timestamp into an exactly equivalent function call in DB2.
If you used a bind variable, that would be more likely to get evaluated on the DB2 side. But that may be complicated by the data type mapping between different databases-- there may not be a perfect mapping between one engine's date and another engine's timestamp data type. If this was a numeric column, a bind variable would be almost certain to get pushed. In this case, it probably involves playing around a bit to figure out exactly what data type to use for your variable that works for your framework, Oracle, and DB2.
If using a bind variable doesn't work, you can force the predicate to be evaluated on the remote server using the dbms_hs_passthrough package. That lets you send a query verbatim to the remote server which allows you to do things like use functions defined in your DB2 database. That's a bit of overkill in this situation, hopefully, but it's nice to have the hammer as your backup if the simpler solution doesn't work quickly enough.
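For reference, the dbms_hs_passthrough pattern could look roughly like the sketch below, reusing dblink_name and the column names from the question (an untested outline; the statement string is sent to DB2 verbatim, so DB2 syntax applies inside it, and the variable types are assumptions):
declare
    c         integer;
    nr        integer;
    v_pk      number;  -- assuming SRC_PK is numeric
    v_changed date;    -- the plan showed ROW_CHANGED as DATE on the Oracle side
begin
    c := dbms_hs_passthrough.open_cursor@dblink_name;
    dbms_hs_passthrough.parse@dblink_name(c,
        'SELECT SRC_PK, ROW_CHANGED FROM lib.tbl WHERE ROW_CHANGED > ''2016-08-01 10:00:00''');
    loop
        nr := dbms_hs_passthrough.fetch_row@dblink_name(c);
        exit when nr = 0;
        dbms_hs_passthrough.get_value@dblink_name(c, 1, v_pk);
        dbms_hs_passthrough.get_value@dblink_name(c, 2, v_changed);
        insert into ORAUSER.TMP_TBL (SRC_PK, ROW_CHANGED) values (v_pk, v_changed);
    end loop;
    dbms_hs_passthrough.close_cursor@dblink_name(c);
end;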
I was planning to use the WITH clause with PostgreSQL, but it doesn't seem to support the command. Is there a substitute command?
What I want to do is, in one query, select several sub-resultsets and use parts of them to build my final SELECT.
That would have been easy using the WITH clause.
UPDATE:
Oops! I discovered that I had misunderstood the error message I got; pgSQL does support WITH.
PostgreSQL supports common-table expressions (WITH queries) in version 8.4 and above. See common table expressions in the manual.
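For example, something along these lines (made-up table names) pulls two sub-resultsets into named CTEs and combines them in the final SELECT:
WITH recent_orders AS (
    SELECT customer_id, count(*) AS n_orders
    FROM orders
    WHERE ordered_at > now() - interval '30 days'
    GROUP BY customer_id
),
big_customers AS (
    SELECT id, name FROM customers WHERE tier = 'enterprise'
)
SELECT b.name, r.n_orders
FROM big_customers b
JOIN recent_orders r ON r.customer_id = b.id;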
You should really include your PostgreSQL version, the exact text of the error message, and the exact text of any query you ran in your question. Where practical/relevant also include table definitions, sample data, and expected results.
I butter-fingered a query in SQL Server 2000 and added a period in the middle of the table name:
SELECT t.est.* FROM test
Instead of:
SELECT test.* FROM test
And the query still executed perfectly. Even SELECT t.e.st.* FROM test executes without issue.
I've tried the same query in SQL Server 2008 where the query fails (error: the column prefix does not match with a table name or alias used in the query). For reasons of pure curiosity I have been trying to figure out how SQL Server 2000 handles the table names in a way that would allow the butter-fingered query to run, but I haven't had much luck so far.
Do any SQL gurus know why SQL Server 2000 ran the query without issue?
Update: The query appears to work regardless of the interface used (e.g. Enterprise Manager, SSMS, OSQL), and as Jhonny pointed out below, it bizarrely even works when you try:
SELECT TOP 1000 dbota.ble.* FROM dbo.table
Maybe table names are constructed from a naive concatenation of prefix and base name.
't' + 'est' == 'test'
And maybe in the later versions of SQL Server, the distinction was made more semantic/more rigorously.
{ owner = t, table = est } != { table = test }
SQL Server 2005 and up has a "proper" implementation of schemas. SQL 2000 and earlier did not. The details escape me (it's been years since I used SQL 2000); all I recall clearly is that you'd be nuts to create anything that wasn't owned by "dbo". It all ties into users and object ownership, but the 2000-and-earlier model was pretty confusticated. Hopefully someone will read up on BOL, do some experimentation, and post their results here.
S-SQL reference manual:
"[dot] Can be used to combine multiple names into a name of the form A.B to refer to a column in a table, or a table in a schema. Note that you calso just use a symbol with a dot in it."
So I think if you referenced tblTest as tblT.est it would work OK as long as there isn't a column called 'est' in tblTest.
If it can't find a column name referenced with the dot I imagine it checks the parent of the object.
I found a reference to it being a bug
Note: as a result of a comparison algorithm bug in SQL Server 2000, dot symbols themselves have no effect on matching, so "dbo.t" will successfully match with tables "dbot", "d.b.o.t", etc.
from Link
It's been fixed in SQL Server 2005. Same link > Changes introduced in SQL Server 2005
Dot-related comparison bug has been fixed.
Is it in the "Open table" view of SSMS or via Enterprise Manager or via an SSMS Query Window?
There is/was a SQL Server 2005 issue with SSMS so how you run the query affects how it behaves.
This is a bug.
It has to do with the internal representation of column names in SQL Server 2000 that leaked out.
You will also not be able to create a column whose name collides with the table+column concatenation of another column: if you have tables User and UserDetail, you won't be able to have the columns DetailAge and Age in those tables, respectively, because both concatenate to the same string (UserDetailAge).
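As an illustration of that claim (hypothetical tables, and only relevant to the SQL Server 2000 behaviour described above):
-- Both name pairs concatenate to 'UserDetailAge', so under the 2000-era
-- matching described above they would collide:
CREATE TABLE [User]     (Id int, DetailAge int);  -- 'User' + 'DetailAge'
CREATE TABLE UserDetail (Id int, Age int);        -- 'UserDetail' + 'Age'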