I'm building rather lengthy joins in ad-hoc queries lately and I find it very tedious to type out all of the field names in the select statement once my joins work. Is there a quick way to just list out all of the field names in a combined query? Is there some query to run against a select statement, or key command to do this?
For example, there are probably about 30 fields in the join below. If I could get a quick list of them to expand from the '*' then I could take away what I don't need.
SELECT *
FROM [DB].[THINGS].[QLINKS] Q
JOIN [DB].[THINGS].[POINTS] RQ
ON Q.ID = RQ.POINT_ID
JOIN [DB].[THINGS].[REFERENCES] R
ON RQ.POINT_ID = R.ID
JOIN SA_MEMBERSHIP.DBO.ASPNET_USERS U
ON U.USERID = R.PERSON_ID
JOIN SA_MEMBERSHIP.DBO.ASPNET_MEMBERSHIP M
ON M.USERID = U.USERID
WHERE NOT Q.ID IN (
SELECT RQ.QLINK_ID
FROM [DB].[DATA].[ENTRIES] E
JOIN [DB].[THINGS].[REFERENCES] R
ON E.PERSON_ID = R.PERSON_ID
JOIN [DB].[THINGS].[POINTS] RQ
ON R.ID = RQ.POINT_ID
WHERE (
ITEMKEY LIKE '102_0%'
OR ITEMKEY LIKE '104_0%'
)
AND E.POINT_ID IS NULL
GROUP BY E.PERSON_ID, LEFT(ITEMKEY, 5), R.ID, RQ.QLINK_ID
)
Something like the following should work (T-SQL version; the idea is to use the metadata of the result set), with cursor c being your query:
declare c cursor for select * from sys.databases
go
open c
DECLARE @Report CURSOR;
declare @cn sysname
declare @op int
declare @ccf int
declare @cs int
declare @dts smallint
declare @cp tinyint
declare @colsc tinyint
declare @orp int
declare @od varchar(1)
declare @hc smallint
declare @cid int
declare @oid int
declare @dbid int
declare @dbn sysname
-- get a cursor over the metadata of cursor c's result set
exec sp_describe_cursor_columns @cursor_return = @Report out, @cursor_source = N'global', @cursor_identity = N'c';
declare @res nvarchar(max)
set @res = '';
FETCH NEXT from @Report into @cn, @op, @ccf, @cs, @dts, @cp, @colsc, @orp, @od, @hc, @cid, @oid, @dbid, @dbn;
WHILE (@@FETCH_STATUS <> -1)
BEGIN
    -- append each column name to a comma-separated list
    set @res = @res + ',' + @cn
    FETCH NEXT from @Report into @cn, @op, @ccf, @cs, @dts, @cp, @colsc, @orp, @od, @hc, @cid, @oid, @dbid, @dbn;
END
-- strip the leading comma and print the column list
print stuff(@res, 1, 1, '')
CLOSE @Report;
DEALLOCATE @Report;
GO
close c
deallocate c
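As a side note, on SQL Server 2012 and later the same result-set metadata is available without a cursor through sys.dm_exec_describe_first_result_set. A minimal sketch (substitute your own SELECT text; the query shown is just the first part of the join above):
SELECT name
FROM sys.dm_exec_describe_first_result_set(
         N'SELECT * FROM [DB].[THINGS].[QLINKS] Q JOIN [DB].[THINGS].[POINTS] RQ ON Q.ID = RQ.POINT_ID',
         NULL, 0)
ORDER BY column_ordinal;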
It turns out that there is a plugin for that! ApexSQL Refactor has this feature for free.
Direct link here: http://www.apexsql.com/sql_tools_refactor.aspx
Described by the venerable Pinal Dave here: http://blog.sqlauthority.com/2014/07/24/sql-server-how-to-format-and-refactor-your-sql-code-directly-in-ssms-and-visual-studio/
I have some data which I bulk import into this table structure:
CREATE TABLE #Temp
(
WellKnownText NVARCHAR(MAX)
)
Some of the entries are not valid. So something like this:
SELECT geometry::STPolyFromText(WellKnownText,4326) FROM #Temp
does not work for all rows and thus falls over.
What is the best way to detect which WellKnownText are not valid? I have used MakeValid in the past - so ideally I would like to fix entries as much as possible.
PS:
This does not work:
SELECT * FROM #Temp
WHERE geometry::STPolyFromText(WellKnownText,4326).STIsValid() = 0
PPS:
I chose a loop-based approach in the end, along these lines:
IF OBJECT_ID('tempdb..#Temp') IS NOT NULL DROP TABLE #Temp;
IF OBJECT_ID('tempdb..#Temp1') IS NOT NULL DROP TABLE #Temp1;
DECLARE @LoopCounter INT = 1;
DECLARE @MaxCounter INT;
DECLARE @Valid BIT;
DECLARE @ValidCounter INT;
DECLARE @WellKnownText NVARCHAR(MAX);
CREATE TABLE #Temp
(
    Guid UNIQUEIDENTIFIER,
    PostcodeFraction NVARCHAR(50),
    WellKnownText NVARCHAR(MAX),
    GeoJson NVARCHAR(MAX)
);
CREATE TABLE #Temp1
(
    Guid UNIQUEIDENTIFIER,
    PostcodeFraction NVARCHAR(50),
    WellKnownText NVARCHAR(MAX),
    GeoJson NVARCHAR(MAX)
);
BULK INSERT #Temp FROM 'D:\PolygonData.txt' WITH (FIELDTERMINATOR = '\t', FIRSTROW = 2, ROWTERMINATOR = '\n');
ALTER TABLE #Temp ADD Id INT IDENTITY(1,1);
SELECT @MaxCounter = MAX(Id) FROM #Temp;
SET @ValidCounter = 0;
WHILE (@LoopCounter <= @MaxCounter)
BEGIN
    BEGIN TRY
        SELECT @WellKnownText = WellKnownText FROM #Temp WHERE Id = @LoopCounter;
        -- STGeomFromText throws on unparseable WKT; the CATCH block below flags those rows
        SET @Valid = GEOMETRY::STGeomFromText(@WellKnownText, 4326).STIsValid();
        SET @ValidCounter = @ValidCounter + 1;
    END TRY
    BEGIN CATCH
        SET @Valid = 0;
    END CATCH
    IF (@Valid = 1)
    BEGIN
        INSERT INTO #Temp1
        SELECT Guid, PostcodeFraction, WellKnownText, GeoJson FROM #Temp WHERE Id = @LoopCounter;
    END
    SET @LoopCounter = @LoopCounter + 1;
END
PRINT @ValidCounter;
SELECT * FROM #Temp1;
As requested in the comments, here are some possible solutions.
I guess you're really looking for a function that can be CROSS APPLYed, something like
SELECT * FROM #Temp T
CROSS APPLY IsWKTValidFunc(T.WellKnownText, 4326) F
WHERE F.IsValid = <somecondition>
(Or even added as a computed column to give you a flag that's set when inserting your WKT.)
Stored Proc
https://gis.stackexchange.com/questions/66642/detecting-invalid-wkt-in-text-column-in-sql-server has a simple SP that wraps GEOMETRY::STGeomFromText in a TRY...CATCH block.
However, stored procs cannot be CROSS APPLYed (or called from a UDF that can be), so this would result in a cursor-based solution.
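For reference, a minimal sketch of that kind of wrapper (the procedure and parameter names here are made up, not copied from that link):
CREATE PROCEDURE dbo.CheckWKT
    @WellKnownText NVARCHAR(MAX),
    @Srid INT = 4326,
    @IsValid BIT OUTPUT
AS
BEGIN
    BEGIN TRY
        -- STGeomFromText throws on unparseable WKT; STIsValid() flags geometrically invalid shapes
        SET @IsValid = GEOMETRY::STGeomFromText(@WellKnownText, @Srid).STIsValid();
    END TRY
    BEGIN CATCH
        SET @IsValid = 0;
    END CATCH
END
GO
DECLARE @ok BIT;
EXEC dbo.CheckWKT N'POLYGON((0 0, 1 0, 1 1, 0 1, 0 0))', 4326, @ok OUTPUT;
SELECT @ok;  -- 1 for parseable, valid WKT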
UDF
A UDF can be cross applied, but can't have a TRY-CATCH block. You also can't call the above SP from a UDF. So not much use there.
CLR UDF
Wrap the GEOMETRY::STGeomFromText call in a CLR UDF that can be CROSS APPLYed, can have try/catch and other error checking, rules etc., and return a flag indicating valid text. I haven't tried this one out, but it sounds like the best option if CLR is enabled in your environment.
Hope this gives you some ideas. Feedback in the comments to these suggestions appreciated.
Let's say I have data:
heloo
cuube
triniity
How do I write a script that will replace those "doubled" characters with only one? So the result from the above data set would be:
helo
cube
trinity
Usually I post a script showing what I've tried, but this time I can't think of anything.
This should work:
CREATE PROCEDURE remove_duplicate_characters(@string VARCHAR(100))
AS
DECLARE @result VARCHAR(100)
SET @result = ''
-- walk the string positions via the spt_values numbers table, group by character,
-- and keep the first occurrence of each character in order of appearance
SELECT @result = @result + MIN(SUBSTRING(@string, number, 1))
FROM
(
    SELECT number FROM master..spt_values
    WHERE type = 'p' AND number BETWEEN 1 AND LEN(@string)
) AS t
GROUP BY SUBSTRING(@string, number, 1)
ORDER BY MIN(number)
SELECT @result
GO
You then call it like this:
EXEC remove_duplicate_characters 'heloo'
This script does not depend on having access to the master database, and just relies on T-SQL string functions.
declare @word varchar(100) = 'aaaacuuuuuubeeeee', @result varchar(100) = ''
declare @letter char, @idx int = 0, @lastletter char = ''
while (@idx <= len(@word))
begin
    select @letter = substring(@word, @idx, 1)
    -- only append the character if it differs from the previous one,
    -- so runs of repeated characters collapse to a single character
    if (@letter != @lastletter)
    begin
        select @result = concat(@result, @letter)
    end
    select @lastletter = @letter, @idx = @idx + 1
end
select @result
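If you need this as a reusable scalar function rather than a one-off script, the same loop can be wrapped up like this (a sketch; the function name dbo.RemoveRepeatedChars is made up):
CREATE FUNCTION dbo.RemoveRepeatedChars(@word VARCHAR(100))
RETURNS VARCHAR(100)
AS
BEGIN
    DECLARE @result VARCHAR(100) = '', @letter CHAR, @idx INT = 1, @lastletter CHAR = ''
    WHILE (@idx <= LEN(@word))
    BEGIN
        SET @letter = SUBSTRING(@word, @idx, 1)
        -- append only when the character differs from the previous one
        IF (@letter != @lastletter)
            SET @result = CONCAT(@result, @letter)
        SELECT @lastletter = @letter, @idx = @idx + 1
    END
    RETURN @result
END
GO
SELECT dbo.RemoveRepeatedChars('triniity');  -- returns 'trinity'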
I have a scalar-valued function that returns a VARCHAR(MAX). In my stored procedure I do this:
declare @p_emailAddr varchar(MAX) = (select db.dbo.GetEmails(10))
If I do print @p_emailAddr it shows me it was populated with the correct information, but the rest of the code doesn't work correctly using it. (I have no clue why; it doesn't make sense!)
Now if I change it like this
declare @p_emailAddr varchar(MAX) = 'test@email.com;'
The rest of my code works perfectly, as it should!
What is the difference between the two methods of setting @p_emailAddr that is breaking it?
This is the GetEmails code:
ALTER FUNCTION [dbo].[GetEmails](@p_SubID int)
RETURNS varchar(max)
AS
BEGIN
    DECLARE @p_Emails varchar(max)
    SELECT @p_Emails = COALESCE(@p_Emails + ';', '') + E.EmailAddress
    FROM db.dbo.UserEmailAddr E
    JOIN db.dbo.EmailSubscriptionUsers S
        ON E.ClockNumber = S.Clock AND S.SubID = @p_SubID
    SET @p_Emails = @p_Emails + ';'
    RETURN @p_Emails
END
What's coming back from GetEmails(10)? varchar(max) is a string type and expects a single value. You could use a table variable, or if dbo.GetEmails(10) were a table you could just join it where you're expecting to use @p_emailAddr.
Best:
select *
from table1 t1
join dbo.GetEmails(10) e
on e.email = t1.email
Alternative:
create table #GetEmails (emails varchar(max))
insert into #GetEmails values ('email@test.com'), ('test@email.com')
declare @p_emailAddr table (emails varchar(max))
insert into @p_emailAddr(emails)
select *
from #GetEmails
select *
from @p_emailAddr
I'm attempting to write part of a function in PL/pgSQL that loops through an hstore and sets a record's column (the key of the hstore) to a specific value (the value of the hstore). I'm using Postgres 9.1.
The hstore will look like: ' "column1"=>"value1","column2"=>"value2" '
Generally, here is what I want from a function that takes in an hstore and has a record with values to modify:
FOR my_key, my_value IN
SELECT key,
value
FROM EACH( in_hstore )
LOOP
EXECUTE 'SELECT $1'
INTO my_row.my_key
USING my_value;
END LOOP;
The error which I am getting with this code:
"myrow" has no field "my_key". I've been searching for quite a while now for a solution, but everything else I've tried to achieve the same result hasn't worked.
Simpler alternative to your posted answer. Should perform much better.
This function retrieves a row from a given table (in_table_name) and primary key value (in_row_pk), and inserts it as new row into the same table, with some values replaced (in_override_values). The new primary key value as per default is returned (pk_new).
CREATE OR REPLACE FUNCTION f_clone_row(in_table_name regclass
, in_row_pk int
, in_override_values hstore
, OUT pk_new int)
LANGUAGE plpgsql AS
$func$
DECLARE
_pk text; -- name of PK column
_cols text; -- list of names of other columns
BEGIN
-- Get name of PK column
SELECT INTO _pk a.attname
FROM pg_catalog.pg_index i
JOIN pg_catalog.pg_attribute a ON a.attrelid = i.indrelid
AND a.attnum = i.indkey[0] -- single PK col!
WHERE i.indrelid = in_table_name
AND i.indisprimary;
-- Get list of columns excluding PK column
SELECT INTO _cols string_agg(quote_ident(attname), ',')
FROM pg_catalog.pg_attribute
WHERE attrelid = in_table_name -- regclass used as OID
AND attnum > 0 -- exclude system columns
AND attisdropped = FALSE -- exclude dropped columns
AND attname <> _pk; -- exclude PK column
-- INSERT cloned row with override values, returning new PK
EXECUTE format('
INSERT INTO %1$I (%2$s)
SELECT %2$s
FROM (SELECT (t #= $1).* FROM %1$I t WHERE %3$I = $2) x
RETURNING %3$I'
, in_table_name, _cols, _pk)
USING in_override_values, in_row_pk -- use override values directly
INTO pk_new; -- return new pk directly
END
$func$;
Call:
SELECT f_clone_row('tbl', 1, '"col1"=>"foo_new","col2"=>"bar_new"');
Use regclass as input parameter type, so only valid table names can be used to begin with and SQL injection is ruled out. The function also fails earlier and more gracefully if you should provide an illegal table name.
Use an OUT parameter (pk_new) to simplify the syntax.
No need to figure out the next value for the primary key manually. It is inserted automatically and returned after the fact. That's not only simpler and faster, you also avoid wasted or out-of-order sequence numbers.
Use format() to simplify the assembly of the dynamic query string and make it less error-prone. Note how I use positional parameters for identifiers and unquoted strings respectively (there is a small demo after this list).
I build on your implicit assumption that allowed tables have a single primary key column of type integer with a column default. Typically serial columns.
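A quick demo of those positional parameters in format(), with made-up values (%I quotes identifiers, %s inserts the string as-is):
SELECT format('INSERT INTO %1$I (%2$s) SELECT %2$s FROM %1$I', 'my table', 'col1,col2');
-- returns: INSERT INTO "my table" (col1,col2) SELECT col1,col2 FROM "my table"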
Key element of the function is the final INSERT:
Merge override values with the existing row using the #= operator in a subselect and decompose the resulting row immediately (see the standalone sketch after this list).
Then you can select only relevant columns in the main SELECT.
Let Postgres assign the default value for the PK and get it back with the RETURNING clause.
Write the returned value into the OUT parameter directly.
All done in a single SQL command, which is generally fastest.
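As a standalone illustration of that #= merge and decomposition, using the tbl and col1 names from the call above (the primary key column name tbl_id is just a placeholder; the hstore extension must be installed):
SELECT (t #= '"col1"=>"foo_new"'::hstore).*  -- full row of tbl with col1 replaced
FROM   tbl t
WHERE  tbl_id = 1;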
Since I didn't want to have to use any external functions for speed purposes, I created a solution using hstores to insert a record into a table:
CREATE OR REPLACE FUNCTION fn_clone_row(in_table_name character varying, in_row_pk integer, in_override_values hstore)
RETURNS integer
LANGUAGE plpgsql
AS $function$
DECLARE
my_table_pk_col_name varchar;
my_key text;
my_value text;
my_row record;
my_pk_default text;
my_pk_new integer;
my_pk_new_text text;
my_row_hstore hstore;
my_row_keys text[];
my_row_keys_list text;
my_row_values text[];
my_row_values_list text;
BEGIN
-- Get the next value of the pk column for the table.
SELECT ad.adsrc,
at.attname
INTO my_pk_default,
my_table_pk_col_name
FROM pg_attrdef ad
JOIN pg_attribute at
ON at.attnum = ad.adnum
AND at.attrelid = ad.adrelid
JOIN pg_class c
ON c.oid = at.attrelid
JOIN pg_constraint cn
ON cn.conrelid = c.oid
AND cn.contype = 'p'
AND cn.conkey[1] = at.attnum
JOIN pg_namespace n
ON n.oid = c.relnamespace
WHERE c.relname = in_table_name
AND n.nspname = 'public';
-- Get the next value of the pk in a local variable
EXECUTE ' SELECT ' || my_pk_default
INTO my_pk_new;
-- Set the integer value back to text for the hstore
my_pk_new_text := my_pk_new::text;
-- Add the next value statement to the hstore of changes to make.
in_override_values := in_override_values || hstore( my_table_pk_col_name, my_pk_new_text );
-- Copy over only the given row to the record.
EXECUTE ' SELECT * '
' FROM ' || quote_ident( in_table_name ) ||
' WHERE ' || quote_ident( my_table_pk_col_name ) ||
' = ' || quote_nullable( in_row_pk )
INTO my_row;
-- Replace the values that need to be changed in the column name array
my_row := my_row #= in_override_values;
-- Create an hstore of my record
my_row_hstore := hstore( my_row );
-- Create a string of comma-delimited, quote-enclosed column names
my_row_keys := akeys( my_row_hstore );
SELECT array_to_string( array_agg( quote_ident( x.colname ) ), ',' )
INTO my_row_keys_list
FROM ( SELECT unnest( my_row_keys ) AS colname ) x;
-- Create a string of comma-delimited, quote-enclosed column values
my_row_values := avals( my_row_hstore );
SELECT array_to_string( array_agg( quote_nullable( x.value ) ), ',' )
INTO my_row_values_list
FROM ( SELECT unnest( my_row_values ) AS value ) x;
-- Insert the values into the columns of a new row
EXECUTE 'INSERT INTO ' || in_table_name || '(' || my_row_keys_list || ')'
' VALUES (' || my_row_values_list || ')';
RETURN my_pk_new;
END
$function$;
It's quite a bit longer than what I had envisioned, but it works and is actually quite speedy.
When developing in SQL PL, what is the difference between 'set' and 'select into'?
set var = (select count(1) from emp);
select count(1) into var from emp;
Are they completely equivalent? Where can I find documentation about them?
When issuing a select that does not return any value:
select into throws an exception
set gets a null value
You can check the difference with these two stored procedures:
Using set:
create or replace procedure test1 (
in name varchar(128)
)
begin
declare val varchar(128);
set val = (select schemaname
from syscat.schemata where schemaname = name);
end #
Using select into
create or replace procedure test2 (
in name varchar(128)
)
begin
declare val varchar(128);
select schemaname into val
from syscat.schemata where schemaname = name;
end #
Call set
$ db2 "call test1('nada')"
Return Status = 0
Call select into
$ db2 "call test2('nada')"
Return Status = 0
SQL0100W No row was found for FETCH, UPDATE or DELETE; or the result of a
query is an empty table. SQLSTATE=02000
This is one difference between them: when using select into, you have to deal with handlers.
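For example, a CONTINUE HANDLER can absorb that condition so select into behaves like set when no row is found (a sketch in the style of test2 above; the procedure name test3 is made up):
create or replace procedure test3 (
in name varchar(128)
)
begin
declare val varchar(128);
-- fall back to null when the select into finds no row
declare continue handler for not found
set val = null;
select schemaname into val
from syscat.schemata where schemaname = name;
end #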
They are equivalent, to the best of my knowledge.
In some cases, you would choose one technique over the other.
e.g. you cannot use WITH UR in SET:
SET var1 = (select .... from t with ur)
but can do
select a into var1 from t with ur
Another case is when the result of the query is part of a test condition.
For example, when detaching partitions and waiting for the asynchronous process, the following works:
WHILE (STATUS_PART <> '') DO
CALL DBMS_LOCK.SLEEP(1);
SET STATUS_PART = (SELECT STATUS
FROM SYSCAT.DATAPARTITIONS
WHERE TABSCHEMA = TABLE_SCHEMA
AND TABNAME = TABLE_NAME
AND DATAPARTITIONNAME LIKE 'SQL%' WITH UR);
END WHILE;
But the following does not:
WHILE (STATUS_PART <> '') DO
CALL DBMS_LOCK.SLEEP(1);
SELECT STATUS INTO STATUS_PART
FROM SYSCAT.DATAPARTITIONS
WHERE TABSCHEMA = TABLE_SCHEMA
AND TABNAME = TABLE_NAME
AND DATAPARTITIONNAME LIKE 'SQL%' WITH UR;
END WHILE;
SELECT INTO works for SELECT statements.
With SET you can directly assign the outcome of a function, do calculations, or assign a different variable, e.g.:
SET var = var + 1;
SET var1 = var;