Why is my query submission marked as wrong? - tsql

For an assignment I have to do the following.
Write a script that safely checks whether a certain region exists:
Declare a custom region @region called Space, of type NVARCHAR(25).
Use IF NOT EXISTS, ELSE, and BEGIN..END to:
throw an error with THROW 50001, 'Error!', 0 if no record whose RegionDescription matches @region exists.
select all columns for that region from the Region table if the record does exist.
Notes:
Specify the Region table as Region, not dbo.Region.
Use SELECT * FROM Region <fill in> everywhere.
The query that I wrote is somehow not correct, but I do not know what is wrong:
DECLARE @region NVARCHAR(25)='Space'
IF NOT EXISTS (SELECT * FROM Region WHERE RegionDescription = @region)
BEGIN
THROW 50001, 'Error!', 0;
END
ELSE
BEGIN
(SELECT * FROM Region WHERE RegionDescription = @region)
END

This is the query I wrote that was accepted on submission. I am not sure what is different between the two...
DECLARE @region NVARCHAR(25)='Space'
IF NOT EXISTS (Select * From Region WHERE RegionDescription=@region)
BEGIN
THROW 50001,'error!',0;
END
ELSE
BEGIN
(SELECT * from Region Where RegionDescription=@region)
END

I think your query is correct. It outputs "Error!" because that is what it is supposed to do when no region called "Space" exists in the table. Try inserting a row with RegionDescription "Space"; then your query should return that row.
Maybe the problem is simply that you do not have a table called Region. If so, you should start by creating that table; it needs at least a column called RegionDescription.
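For example, a minimal test setup might look like this (the RegionID column and its definition are my assumptions; the exercise only mentions RegionDescription):
CREATE TABLE Region (
    RegionID INT IDENTITY(1,1) PRIMARY KEY,  -- assumed surrogate key, not specified in the exercise
    RegionDescription NVARCHAR(25) NOT NULL
);
INSERT INTO Region (RegionDescription) VALUES (N'Space');
With that row in place, the ELSE branch should return it instead of raising error 50001.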
If this does not help, please post the exact error message that you are getting.

Related

Trying to FOR LOOP a column with ST_CONTAINS in PgAdmin4

I am working with PgAdmin4 to create a View that consists of a large set of geometric data. Part of this data is polylines that exist within polygons. I am attempting to write code that loops through all of my polyline data in a given column, checks whether each polyline is in a given polygon, and returns true/false. So far this is what I have:
DO
$$
BEGIN
FOR i IN (SELECT "geom" FROM "street map"."segment_id")
LOOP
SELECT ST_CONTAINS(
(SELECT "geom" FROM "street map"."cc_districts" WHERE "district number" = 1),
(i)
)
RETURN NEXT i
END LOOP;
END
$$
The error I receive when running this code is as follows:
ERROR: loop variable of loop over rows must be a record or row variable or list of scalar variables
LINE 4: FOR i IN (SELECT "geom" FROM "street map"."segment_id")
^
SQL state: 42601
Character: 18
From what I understand, "i" must refer to a "row variable", and I tried to define that variable with this piece of code:
(SELECT "geom" FROM "street map"."segment_id")
Any ideas to get this going would be very helpful.
A simple join would be much more efficient here
SELECT line.*, polygon.id IS NOT NULL AS is_in_polygon
FROM line
LEFT JOIN polygon
ON ST_Contains(polygon.geometry, line.geometry)
AND polygon.id = 1
Which can be translated as:
Get every field of a line record, and true if the polygon.id exists (is not null), false otherwise (more below). Name this boolean field is_in_polygon.
Do this on every line.
Join (link) each line to the polygon layer. If there is no match, keep the line information and put NULL for every polygon field (this is a left join). If there is a match, keep both line and polygon information.
A match is found if the polygon.geometry contains the line.geometry and if the polygon.id = 1
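Applied to the tables from the question (assuming both geometry columns are named geom, as in the original code), the join might look like this:
SELECT seg.*,
       d."district number" IS NOT NULL AS is_in_district_1
FROM "street map"."segment_id" seg
LEFT JOIN "street map"."cc_districts" d
  ON ST_Contains(d.geom, seg.geom)
 AND d."district number" = 1;
If the two layers carry different SRIDs, ST_Contains will complain about mixed SRIDs, so an ST_Transform or ST_SetSRID call (as in the workaround below) would still be needed inside the ON condition.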
I have found a way to make this work without doing a JOIN or needing a FOR LOOP. The following code works.
SELECT * FROM "street map"."segment_id"
WHERE
ST_WITHIN(
ST_CENTROID((ST_SetSRID(geom, 4326))),
ST_SetSRID((SELECT geom FROM "street map"."cc_districts" WHERE "district number" = 1),4326))
This lets me do what I was intending by running the process on all rows in a given column. I swapped from ST_CONTAINS to ST_WITHIN, and I am now also checking whether the centroid of the polyline is within the given polygon by using ST_CENTROID. I found that the error goes away when I set the SRID of the geometry to 4326 using ST_SetSRID. I'm not sure why that works, as my geoms already have an SRID of 4326.
Thanks to all of those who answered.

Teradata MERGE with DELETE and INSERT - syntax?

I have been trying to find the correct syntax for the following case (if it is possible?):
MERGE INTO TAB_A tgt
USING TAB_B src ON (src.F1 = tgt.F1 AND src.F2 = tgt.F2)
WHEN MATCHED THEN DELETE
ELSE INSERT (tgt.*) VALUES (src.*)
Background: the temp table contains a fix for the target table, as in it contains two types of rows:
the incorrect rows that are to be removed (they match with rows in the target table), and the 'corrected' row that should be inserted (it replaces all the 'delete' rows).
So essentially: remove anything that matches;
insert anything that does not match.
The current error I am getting is:
"Syntax error: expected something between the 'DELETE' keyword and the 'ELSE' keyword"
Any help appreciated, thanks!
You can use a multi-statement request with a DELETE followed by an INSERT to apply the corrections from the temp table to the target table:
DELETE FROM TAB_A WHERE EXISTS (SELECT 1 FROM TAB_B WHERE TAB_A.F1 = TAB_B.F1 AND TAB_A.F2 = TAB_B.F2)
;INSERT INTO TAB_A SELECT * FROM TAB_B;
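If you cannot send both statements as a single multi-statement request, a sketch of the same fix wrapped in an explicit transaction (assuming a Teradata-mode session, where BT and ET delimit the transaction) would be:
BT;
DELETE FROM TAB_A
WHERE EXISTS (SELECT 1 FROM TAB_B WHERE TAB_A.F1 = TAB_B.F1 AND TAB_A.F2 = TAB_B.F2);
INSERT INTO TAB_A SELECT * FROM TAB_B;
ET;
Either way, the delete and the insert commit together, so the target table is never left half-corrected.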

SQL update statement updates the wrong rows

I have the following code in Postgres
select op.url from identity.legal_entity le
join identity.profile op on le.legal_entity_id =op.legal_entity_id
where op.global_id = '8wyvr9wkd7kpg1n0q4klhkc4g'
which returns 1 row.
Then I try to update the url field with the following:
update identity.profile
set url = 'htpp:sam'
where identity.profile.url in (
select op.url from identity.legal_entity le
join identity.profile op on le.legal_entity_id =op.legal_entity_id
where global_id = '8wyvr9wkd7kpg1n0q4klhkc4g'
);
But the above ends up updating more than one row; in fact it updates all of the rows of identity.profile.
I would assume that since the first Postgres statement returns one row, at most one row could be updated, but instead all of the rows are being updated. Why? Please help a newbie fix the above update statement.
Instead of using profile.url to identify the row you want to update, use the primary key. That is what it is there for.
So if the primary key column is called id, the statement could be modified to:
UPDATE identity.profile
SET ...
WHERE identity.profile.id IN (SELECT op.id FROM ...);
But you can do this much more simply in PostgreSQL with:
UPDATE identity.profile op
SET url = 'htpp:sam'
FROM identity.legal_entity le
WHERE le.legal_entity_id = op.legal_entity_id
AND le.global_id = '8wyvr9wkd7kpg1n0q4klhkc4g';
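To preview which rows that UPDATE ... FROM would touch, you can run the same join as a plain SELECT first (note that the question qualified global_id as op.global_id while the UPDATE above filters on le.global_id, so qualify the column according to where it actually lives in your schema):
SELECT op.*
FROM identity.profile op
JOIN identity.legal_entity le
  ON le.legal_entity_id = op.legal_entity_id
WHERE le.global_id = '8wyvr9wkd7kpg1n0q4klhkc4g';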

PostgreSQL Recursive Query Performance

I am a noob when it comes to PostgreSQL, but I was able to get it to produce what I needed: take a hierarchy that is up to 30 levels deep and create a flattened, 'jagged' list view with the topmost level and every intervening level down to each end node. The recursive function just pushes every parent found into an array and then returns the final flattened list for each node (using LIMIT 1).
The following bit of SQL generates the table I need. My question is whether the function that returns the array of values I use to populate the row's columns is called once per row, or once for each of the 30 columns in each row.
Can someone guide me to how I would determine that? And/or if it is blatantly obvious that my SQL is inefficient, what might be a better way of putting together the statements.
Thanks in advance for having a look.
DROP FUNCTION IF EXISTS fnctreepath(nodeid NUMERIC(10,0));
CREATE FUNCTION fnctreepath(nodeid NUMERIC(10,0))
RETURNS TABLE (endnode NUMERIC, depth INTEGER, path NUMERIC[]) AS
$$
WITH RECURSIVE ttbltreepath(endnode, nodeid, parentid, depth, path) AS (
SELECT src.nodeid AS endnode, src.nodeid, src.parentid, 1::INT AS depth,
ARRAY[src.nodeid::NUMERIC(10,0)]::NUMERIC(10,0)[] AS path
FROM tree AS src WHERE nodeid = $1
UNION
SELECT ttbl.endnode, src.nodeid, src.parentid, ttbl.depth + 1 AS depth,
ARRAY_PREPEND(src.nodeid::NUMERIC(10,0), ttbl.path::NUMERIC(10,0)[])::NUMERIC(10,0)[] AS path
FROM tree AS src, ttbltreepath AS ttbl WHERE ttbl.parentid = src.nodeid
)
SELECT endnode, depth, path FROM ttbltreepath GROUP BY endnode, depth, path ORDER BY endnode, depth DESC LIMIT 1;
$$ LANGUAGE SQL;
DROP TABLE IF EXISTS treepath;
SELECT parentid, nodeid, name,
(fnctreepath(tree.nodeid)).depth,
(fnctreepath(tree.nodeid)).path[1] as nodeid01,
(fnctreepath(tree.nodeid)).path[2] as nodeid02,
(fnctreepath(tree.nodeid)).path[3] as nodeid03,
(fnctreepath(tree.nodeid)).path[4] as nodeid04,
(fnctreepath(tree.nodeid)).path[5] as nodeid05,
(fnctreepath(tree.nodeid)).path[6] as nodeid06,
(fnctreepath(tree.nodeid)).path[7] as nodeid07,
(fnctreepath(tree.nodeid)).path[8] as nodeid08,
(fnctreepath(tree.nodeid)).path[9] as nodeid09,
(fnctreepath(tree.nodeid)).path[10] as nodeid10,
(fnctreepath(tree.nodeid)).path[11] as nodeid11,
(fnctreepath(tree.nodeid)).path[12] as nodeid12,
(fnctreepath(tree.nodeid)).path[13] as nodeid13,
(fnctreepath(tree.nodeid)).path[14] as nodeid14,
(fnctreepath(tree.nodeid)).path[15] as nodeid15,
(fnctreepath(tree.nodeid)).path[16] as nodeid16,
(fnctreepath(tree.nodeid)).path[17] as nodeid17,
(fnctreepath(tree.nodeid)).path[18] as nodeid18,
(fnctreepath(tree.nodeid)).path[19] as nodeid19,
(fnctreepath(tree.nodeid)).path[20] as nodeid20,
(fnctreepath(tree.nodeid)).path[21] as nodeid21,
(fnctreepath(tree.nodeid)).path[22] as nodeid22,
(fnctreepath(tree.nodeid)).path[23] as nodeid23,
(fnctreepath(tree.nodeid)).path[24] as nodeid24,
(fnctreepath(tree.nodeid)).path[25] as nodeid25,
(fnctreepath(tree.nodeid)).path[26] as nodeid26,
(fnctreepath(tree.nodeid)).path[27] as nodeid27,
(fnctreepath(tree.nodeid)).path[28] as nodeid28,
(fnctreepath(tree.nodeid)).path[29] as nodeid29,
(fnctreepath(tree.nodeid)).path[30] as nodeid30
INTO treepath
FROM tree;
You should check the volatility attribute of your function.
By default a function is VOLATILE, meaning any call to the function may alter the database, so the query optimiser cannot reuse the result when you use the function several times in the same statement.
Your function is not IMMUTABLE (2+2=4 is immutable), but you should declare it with the STABLE volatility keyword. That way the optimiser can treat your call of fnctreepath(tree.nodeid), used several times in the same statement, as a stable result and share it (run it only once).
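As a sketch, the keyword can either be added at the end of the existing definition or applied to the already-created function without touching its body:
-- Option 1: end the CREATE FUNCTION definition from the question with
--   $$ LANGUAGE SQL STABLE;
-- Option 2: change the volatility of the existing function in place
ALTER FUNCTION fnctreepath(NUMERIC) STABLE;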

How to refactor this SQL query

I have a lengthy query here and am wondering whether it could be refactored.
Declare @A1 as int
Declare @A2 as int
...
Declare @A50 as int
SET @A1 =(Select id from table where code='ABC1')
SET @A2 =(Select id from table where code='ABC2')
...
SET @A50 =(Select id from table where code='ABC50')
Insert into tableB
Select
Case when @A1='somevalue' Then 'x' else 'y' End,
Case when @A2='somevalue' Then 'x' else 'y' End,
...
Case when @A50='somevalue' Then 'x' else 'y' End
From tableC inner join ......
So as you can see from the above, there is quite a lot of redundant code, but I cannot think of a way to make it simpler.
Any help is appreciated.
If you need the variables assigned, you could pivot your table...
SELECT *
FROM
(
SELECT Code, Id
FROM Table
) t
PIVOT
(MAX(Id) FOR Code IN ([ABC1],[ABC2],[ABC3],[ABC50])) p /* List them all here */
;
...and then assign them accordingly.
SELECT @A1 = [ABC1], @A2 = [ABC2]
FROM
(
SELECT Code, Id
FROM Table
) t
PIVOT
(MAX(Id) FOR Code IN ([ABC1],[ABC2],[ABC3],[ABC50])) p /* List them all here */
;
But I doubt you actually need to assign them at all. I just can't really picture what you're trying to achieve.
Pivoting may help you, as you can still use the CASE statements.
Rob
Without taking the time to develop a full answer, I would start by trying:
select id from table where code in ('ABC1', ... ,'ABC50')
then pivot that to get a one-row result set with columns ABC1 through ABC50 holding the ID values.
Join that single row in the FROM clause, as sketched below.
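A rough sketch of that approach, abbreviated to the first two codes and leaving out the original inner joins on tableC (the remaining 48 columns follow the same pattern; table and column names are the placeholders from the question):
INSERT INTO tableB
SELECT
    CASE WHEN codes.ABC1 = 'somevalue' THEN 'x' ELSE 'y' END,
    CASE WHEN codes.ABC2 = 'somevalue' THEN 'x' ELSE 'y' END
    -- ... repeat for ABC3 through ABC50
FROM tableC
CROSS JOIN
(
    SELECT [ABC1], [ABC2]   -- list all fifty codes here
    FROM (SELECT code, id FROM [table] WHERE code IN ('ABC1', 'ABC2')) t
    PIVOT (MAX(id) FOR code IN ([ABC1], [ABC2])) p
) AS codes;
The single pivoted row replaces all fifty variables, so the fifty separate SET statements disappear.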
If 'somevalue', 'x' and 'y' are constant for all fifty expressions, then start from:
select case id when 'somevalue' then 'x' else 'y' end as XY
from table
where code in ('ABC1', ... ,'ABC50')
I am not entirely sure from your example, but it looks like you should be able to do one of a few things.
Create a nice lookup table that tells you, for a given value of the select statement, what should be placed there. This would be much shorter and should be insanely fast.
Create a simple for loop in your code and generate a list of 50 small queries.
Use sub-selects, or generate a list of selects with one round trip to retrieve your @A1-@A50 values, and then generate the query with them already in place.
Jacob