How do I rename a tSQLt test class?

I'm developing a database using the Red Gate SQL Developer tools. SQL Test, the SSMS add-in that runs tSQLt tests, lacks a way to rename test classes.
I have a test called [BackendLayerCustomerAdministrationTests].[test uspMaintainCustomerPermissions throws error when PermissionValue is missing or empty].
The name is so long it breaks Deployment Manager.
2013-12-05 18:48:40 +00:00 ERROR The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.
There are other unwieldy test names in this class, so I want to start by shortening the class name.
A more succinct class name would be CustomerTests.
sp_rename is no help here.
EXECUTE sys.sp_rename
    @objname = N'BackendLayerCustomerAdministrationTests',
    @newname = N'CustomerTests';
Msg 15225, Level 11, State 1, Procedure sp_rename, Line 374
No item by the name of 'BackendLayerCustomerAdministrationTests' could be found in the current database 'ApiServices', given that @itemtype was input as '(null)'.
How do I change it?

tSQLt test classes are schemas with a special extended property.
Cade Roux's great solution for renaming schemas is to create a new schema, transfer all the objects, then drop the old schema.
If we did that here we'd lose the extended property.
Let's adapt it for the tSQLt framework.
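As a quick check, you can inspect that marker directly. The sketch below (assuming a database with tSQLt installed) lists the schemas carrying the tSQLt.TestClass extended property, which is what the tSQLt.TestClasses view is built on:
-- List schemas flagged as tSQLt test classes via their extended property.
SELECT SCHEMA_NAME(ep.major_id) AS test_class_name,
       ep.name                  AS property_name,
       ep.value                 AS property_value
FROM sys.extended_properties AS ep
WHERE ep.class_desc = 'SCHEMA'
  AND ep.name = 'tSQLt.TestClass';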
How to rename a tSQLt test class
Create a new test class.
EXECUTE tSQLt.NewTestClass
    @ClassName = 'CustomerTests';
You should see the old class and the new class together in the tSQLt.TestClasses view.
SELECT *
FROM tSQLt.TestClasses;
Name SchemaId
----------------------------------------- ----------
SQLCop 7
BackendLayerCustomerAdministrationTests 10
CustomerTests 14
Cade used Chris Shaffer's select variable concatenation trick to build a list of transfer statements, and print the result.
DECLARE @sql NVARCHAR(MAX) = N'';
SELECT @sql = @sql +
N'ALTER SCHEMA CustomerTests
TRANSFER BackendLayerCustomerAdministrationTests.' + QUOTENAME(name) + N';' +
CHAR(13) + CHAR(10)
FROM sys.objects
WHERE SCHEMA_NAME([schema_id]) = N'BackendLayerCustomerAdministrationTests';
PRINT @sql;
Ugly, but effective.
Copy the output and execute as a new query.
ALTER SCHEMA CustomerTests
TRANSFER BackendLayerCustomerAdministrationTests.[test uspMaintainCustomer validate merged data];
ALTER SCHEMA CustomerTests
TRANSFER BackendLayerCustomerAdministrationTests.[test uspMaintainCustomerPermissions throws error when PermissionValue is missing or empty];
I've shown only two tests here, but it should work for all of them.
Now drop the old test class.
EXECUTE tSQLt.DropClass
    @ClassName = N'BackendLayerCustomerAdministrationTests';
The old class should be gone from view.
SELECT *
FROM tSQLt.TestClasses;
Name SchemaId
----------------------------------------- ----------
SQLCop 7
CustomerTests 14
Run all your tests again to check that it worked.
EXECUTE tSQLt.RunAll;
+----------------------+
|Test Execution Summary|
+----------------------+
|No|Test Case Name                                                              |Result |
+--+----------------------------------------------------------------------------+-------+
|1 |[CustomerTests].[test uspMaintainCustomer throws error on missing APIKey]   |Success|
|2 |[CustomerTests].[test uspMaintainCustomerPermissions validate merged data]  |Success|
|3 |[SQLCop].[test Decimal Size Problem]                                        |Success|
|4 |[SQLCop].[test Procedures Named SP_]                                        |Success|
|5 |[SQLCop].[test Procedures using dynamic SQL without sp_executesql]          |Success|
|6 |[SQLCop].[test Procedures with @@Identity]                                  |Success|
|7 |[SQLCop].[test Procedures With SET ROWCOUNT]                                |Success|
-------------------------------------------------------------------------------
Test Case Summary: 7 test case(s) executed, 7 succeeded, 0 failed, 0 errored.
-------------------------------------------------------------------------------
Success!

Sorry to come into this so late! I'm a developer who's working on SQL Test.
We've just added the ability to rename test classes to the latest version of SQL Test.
http://www.red-gate.com/products/sql-development/sql-test/
It's now as simple as right-clicking a test class and using the context menu, or pressing F2.
Please bear in mind that this option will not appear for old versions of tSQLt. To upgrade, right-click the database to uninstall the framework, then use Add database... to re-add it (the right-most button in the window).
Alternatively, you could just call a new procedure in tSQLt called tSQLt.RenameClass, which is what SQL Test calls behind the scenes.
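For example, something like this should do it (parameter names taken from the tSQLt documentation):
EXECUTE tSQLt.RenameClass
    @SchemaName = N'BackendLayerCustomerAdministrationTests',
    @NewSchemaName = N'CustomerTests';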
Please let us know if you have any issues with this!
David

What is your workflow like? If you have all the tests for that test class in one script that begins with exec tSQLt.NewTestClass 'BackendLayerCustomerAdministrationTests', then you can just find-and-replace the test class name and you are done.
e.g.
EXEC tSQLt.DropClass 'BackendLayerCustomerAdministrationTests'
GO
EXEC tSQLt.NewTestClass 'CustomerTests'
GO
CREATE PROC [CustomerTests].[test_Insert_AddsACustomer]
AS
etc, etc
This works because EXEC tSQLt.NewTestClass 'CustomerTests' will drop all objects in the test class (if it already exists) and they will be recreated as the rest of the script runs.

Simplest is probably:
EXEC tSQLt.RenameClass 'old test class name', 'new test class name';
See the tSQLt docs for RenameClass
It seems Red Gate has added that ability to SQL Test since this question was posted, but the raw SQL is arguably leaner and cleaner (whether or not you use the excellent SQL Test).

Related

TRY..CATCH Error_Line()....line

I'm looking at this example provided by MS as I'm trying to learn Try...Catch. I understand the syntax and Output (for the most part) but I have one question:
The Output will show the Error_Line as '4'. This is fine but if I remove the line break between GO and BEGIN TRY it'll show the Error_Line as '3'. I just want to understand the logic here.
What I imagine is happening is that SQL Server counts lines from the start of the batch, which begins immediately after GO, even if that first line is blank, but I don't know this for certain. Can anyone clarify? If that theory is correct, wouldn't that make finding errors difficult when scripts are written with line breaks like this?
-- Verify that the stored procedure does not already exist.
IF OBJECT_ID ( 'usp_GetErrorInfo', 'P' ) IS NOT NULL
DROP PROCEDURE usp_GetErrorInfo;
GO
-- Create procedure to retrieve error information.
CREATE PROCEDURE usp_GetErrorInfo
AS
SELECT
ERROR_NUMBER() AS ErrorNumber
,ERROR_SEVERITY() AS ErrorSeverity
,ERROR_STATE() AS ErrorState
,ERROR_PROCEDURE() AS ErrorProcedure
,ERROR_LINE() AS ErrorLine
,ERROR_MESSAGE() AS ErrorMessage;
GO
--Line 1
BEGIN TRY --Line 2
-- Generate divide-by-zero error. --Line 3
SELECT 1/0; --Line 4
END TRY
BEGIN CATCH
-- Execute error retrieval routine.
EXECUTE usp_GetErrorInfo;
END CATCH;
You can't really rely on ERROR_LINE(), especially when the error is thrown in an internal stored procedure, or when a dynamic T-SQL statement is executed.
But do you really need the exact error line?
In real production code, the fix for the line causing the error may not be as obvious as in your example;
it will be better to debug the stored procedure or function with the corresponding input parameters in order to reproduce the error.
That way it will be easier to fix the issue. In order to debug a SQL routine:
just script it
remove the drop and create stuff
add DECLARE in front of the input parameters and initialize them with the values causing the error
Basically, instead of the exact error line (which can easily be found once you have the correct input parameters and execute the routine), you may find two things more useful:
which routine is causing the error (for example, you can add an extra parameter to the usp_GetErrorInfo SP which yields the SP name as well)
the input parameters which are causing the error (this can be done using a separate table for logging errors in the CATCH clause - you simply insert the input parameters and information about the error into the table; a sketch follows below)
Having this information, it will be easy to reproduce and then fix the issue (in many cases).
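For illustration, here is a minimal sketch of that logging idea; the dbo.ErrorLog table and dbo.usp_Demo procedure are hypothetical names:
-- Hypothetical log table: routine name, its inputs, and the error details.
CREATE TABLE dbo.ErrorLog
(
    LogId        INT IDENTITY(1, 1) PRIMARY KEY,
    RoutineName  NVARCHAR(128),
    InputParams  NVARCHAR(MAX),
    ErrorNumber  INT,
    ErrorMessage NVARCHAR(4000),
    LoggedAt     DATETIME2 DEFAULT SYSDATETIME()
);
GO
CREATE PROCEDURE dbo.usp_Demo @Divisor INT
AS
BEGIN TRY
    SELECT 1 / @Divisor;  -- fails when @Divisor = 0
END TRY
BEGIN CATCH
    -- Record which routine failed and with which inputs,
    -- so the error can be reproduced later under a debugger.
    INSERT INTO dbo.ErrorLog (RoutineName, InputParams, ErrorNumber, ErrorMessage)
    VALUES (ERROR_PROCEDURE(),
            N'@Divisor=' + CONVERT(NVARCHAR(11), @Divisor),
            ERROR_NUMBER(),
            ERROR_MESSAGE());
END CATCH;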

I can create a stored procedure with invalid user-defined function names in it

I just noticed that I could alter my stored procedure code to include a misspelled user-defined function.
I only noticed it the first time I executed the SP.
Is there any way to get a compile error when an SP includes an invalid user-defined function name?
At compile time? No.
You can, however, use some of SQL Server's dependency objects (if using MS SQL) to find problems just after deployment, or as part of your beta testing. Aaron Bertrand has a pretty nice article rounding up the options, depending upon the version of SQL Server.
Here is an example using the SQL Server 2008 catalog view sys.sql_expression_dependencies:
CREATE FUNCTION dbo.scalarTest
(
    @input1 INT,
    @input2 INT
)
RETURNS INT
AS
BEGIN
    -- Declare the return variable here
    DECLARE @ResultVar INT
    -- Add the T-SQL statements to compute the return value here
    SELECT @ResultVar = @input1 * @input2
    -- Return the result of the function
    RETURN @ResultVar
END
GO
--Fn Works!
SELECT dbo.ScalarTest(2,2)
GO
CREATE PROCEDURE dbo.procTest
AS
BEGIN
SELECT TOP 1 dbo.scalarTest(3, 3) as procResult
FROM sys.objects
END
GO
--Sproc Works!
EXEC dbo.procTest
GO
--Remove a dependency needed by our sproc
DROP FUNCTION dbo.scalarTest
GO
--Does anything have a broken dependency? YES
SELECT OBJECT_NAME(referencing_id) AS referencing_entity_name,
referenced_entity_name, *
FROM sys.sql_expression_dependencies
WHERE referenced_id IS NULL --dependency is missing
GO
--Does it work? No
EXEC dbo.procTest
GO

Is there a way to use User Activity Variables to store SQL in Datastage

I am considering using RCP to run a generic DataStage job, but the initial SQL changes each time it's called. Is there a way I can use a User Activity Variable to inject SQL from a text file or something, so I can reuse the same DataStage job?
I know this Routine can read a file to look up parameters:
Routine = 'ReadFile'
vFileName = Arg1
vArray = ''
vCounter = 0
OPENSEQ vFileName to vFileHandle
Else Call DSLogFatal("Error opening file list: ":vFileName, Routine)
Loop
While READSEQ vLine FROM vFileHandle
vCounter = vCounter + 1
vArray = Fields(vLine, ',', 1)
vArray = Fields(vLine, ',', 2)
vArray = Fields(vLine, ',', 3)
Repeat
CLOSESEQ vFileHandle
Ans = vArray
Return Ans
But does that mean I just store the SQL on a single line, even if it's long?
Thanks.
Why not just have the SQL within the routine itself and propagate parameters?
I have multiple queries within a single routine that does just that (one for the source and one for the AfterSQL statement).
This is an example and apologies I'm answering this on my mobile!
InputCol=Trim(pTableName)
If InputCol='Table1' then column='Day'
If InputCol='Table2' then column='Quarter, Day'
SQLCode = ' Select Year, Month, '
SQLCode := column:", Time, "
SQLCode := " to_date(current_timestamp, 'YYYY-MM-DD HH24:MI:SS'), "
SQLCode := \ "This is example text as output" \
SQLCode := "From DATE_TABLE"
crt SQLCode
I've used multiple encapsulations in the example above; when passing the string out to a parameter, make sure the ' and " characters have either been escaped or display correctly.
Again, apologies for the quality but I hope it gives you some ideas!
You can give this a try.
As you mentioned, maintain the SQL in a file (again, if the SQL keeps changing, you need to build some logic to automate populating the new SQL).
In the DataStage Sequencer, use an Execute Command activity to read the SQL file,
e.g.: cat /home/bk/query.sql
In the Job Activity which calls your generic job, map the command output of your EC activity to a job parameter,
so if the EC activity name is exec_query, then the job parameter will be
exec_query.$CommandOutput
When you run the sequence, your query will flow from
SQL file --> EC activity --> parameter in Job Activity --> DB stage (query parameterised)
Have you thought about invoking a shell script that connects to the database and executes the SQL script, from the sequence job? You could use sqlplus to connect in the shell script, read the file with the SQL, and use it. To run the shell script from the sequence job, use an ExecCommand stage (sh, ./, ...); it depends on the interpreter.
Another way to solve this, depending on how much your SQL changes, is to invoke a BASIC routine which handles the parameters and invokes your parallel job.
The main problem I think you could have is the length limit of the variable where you store the parameter.
Tell me which option you choose and I can help you further.

Multiple Search in SSRS when using only a part of the field

I have created a stored procedure:
@DeviceID nvarchar(20) = ''
WITH EXECUTE AS CALLER
AS
SELECT
    amd.BRANDID,
    amd.DEVICEID
FROM AMDEVICETABLE amd
WHERE
    left(amd.Deviceid, len(@DeviceID)) in (@DeviceID)
The length of amd.Deviceid is about 15 characters
In Visual Studio I created a parameter @DeviceID, and when I enter e.g. ABCDE (the first 5 characters of a DeviceID) everything works perfectly.
The problem is that I want to pass multiple values like
jhmcl*, jhmgd*.
So I created my own little version of your report, and I believe the problem is your LEN() function. I'm surprised it doesn't return an error, because it errors out in Report Builder for SQL Server 2014 (a simple version of SSRS). I would test what LEN(@DeviceID) is returning; I would bet it's not the value you expect. Instead, you might try LEFT() to cover every possible prefix length. I don't know how it will perform.
SELECT DeviceID
FROM YourTable
WHERE LEFT(DeviceID, 1) IN (@DeviceID)
   OR LEFT(DeviceID, 2) IN (@DeviceID)
   OR LEFT(DeviceID, 3) IN (@DeviceID)
   ...
   OR LEFT(DeviceID, 15) IN (@DeviceID)
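As an alternative that avoids enumerating every prefix length: if you are on SQL Server 2016 or later, you can split the parameter and match each entry as a prefix with LIKE. This is only a sketch; it assumes @DeviceID arrives as a comma-separated list (e.g. via a JOIN expression on the SSRS multi-value parameter):
SELECT amd.BRANDID,
       amd.DEVICEID
FROM AMDEVICETABLE amd
WHERE EXISTS
(
    SELECT 1
    FROM STRING_SPLIT(@DeviceID, ',') AS s
    -- treat each list entry as a prefix of DEVICEID
    WHERE amd.DEVICEID LIKE LTRIM(s.value) + '%'
);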

SQLAlchemy, Psycopg2 and Postgresql COPY

It looks like Psycopg has a custom command for executing a COPY:
psycopg2 COPY using cursor.copy_from() freezes with large inputs
Is there a way to access this functionality from within SQLAlchemy?
The accepted answer is correct, but if you want more than just EoghanM's comment to go on, the following worked for me for COPYing a table out to CSV...
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

eng = create_engine("postgresql://user:pwd@host:5432/db")
ses = sessionmaker(bind=eng)

dbcopy_f = open('/tmp/some_table_copy.csv', 'wb')
copy_sql = 'COPY some_table TO STDOUT WITH CSV HEADER'
fake_conn = eng.raw_connection()
fake_cur = fake_conn.cursor()
fake_cur.copy_expert(copy_sql, dbcopy_f)
The sessionmaker isn't necessary, but if you're in the habit of creating the engine and the session at the same time, you'll need to separate them to use raw_connection (unless there is some way to access the engine through the session object that I don't know about). The SQL string provided to copy_expert is also not the only way to do it; there is a basic copy_to function that you can use with a subset of the parameters that you could pass to a normal COPY TO query. Overall performance of the command seems fast to me, copying out a table of ~20,000 rows.
http://initd.org/psycopg/docs/cursor.html#cursor.copy_to
http://docs.sqlalchemy.org/en/latest/core/connections.html#sqlalchemy.engine.Engine.raw_connection
If your engine is configured with a psycopg2 connection string (which is the default, so either "postgresql://..." or "postgresql+psycopg2://..."), you can create a psycopg2 cursor from an SQL Alchemy session using
cursor = session.connection().connection.cursor()
which you can use to execute
cursor.copy_from(...)
The cursor will be active in the same transaction as your session currently is. If a commit or rollback happens, any further use of the cursor will throw a psycopg2.InterfaceError; you would have to create a new one.
You can use:
import cStringIO  # Python 2; on Python 3 use io.StringIO instead

def to_sql(engine, df, table, if_exists='fail', sep='\t', encoding='utf8'):
    # Create the table from the DataFrame's schema (zero rows)
    df[:0].to_sql(table, engine, if_exists=if_exists)
    # Prepare data: dump the DataFrame to an in-memory CSV buffer
    output = cStringIO.StringIO()
    df.to_csv(output, sep=sep, header=False, encoding=encoding)
    output.seek(0)
    # Insert data in bulk with COPY
    connection = engine.raw_connection()
    cursor = connection.cursor()
    cursor.copy_from(output, table, sep=sep, null='')
    connection.commit()
    cursor.close()
This inserts 200,000 rows in 5 seconds instead of 4 minutes.
It doesn't look like it.
You may have to just use psycopg2 to expose this functionality and forego the ORM capabilities. I guess I don't really see the benefit of ORM in such an operation anyway since it's a straight bulk insert and dealing with individual objects a la an ORM would not really make a whole lot of sense.
If you're starting from SQLAlchemy, you need to first get to the connection engine (also known by the property name bind on some SQLAlchemy objects):
engine = create_engine('postgresql+psycopg2://myuser:password@localhost/mydb')
# or
engine = session.get_bind()
# or any other way you know to get to the engine
From the engine you can isolate a psycopg2 connection:
# get a psycopg2 connection
connection = engine.connect().connection
# get a cursor on that connection
cursor = connection.cursor()
Here are some templates for the COPY statement to use with cursor.copy_expert(), a more complete and flexible option than copy_from() or copy_to(), as indicated here: https://www.psycopg.org/docs/cursor.html#cursor.copy_expert.
# to dump to a file
dump_to = """
COPY mytable
TO STDOUT
WITH (
FORMAT CSV,
DELIMITER ',',
HEADER
);
"""
# to copy from a file:
copy_from = """
COPY mytable
FROM STDIN
WITH (
FORMAT CSV,
DELIMITER ',',
HEADER
);
"""
Check out what the options above mean, and others that may be of interest to your specific situation, at https://www.postgresql.org/docs/current/static/sql-copy.html.
IMPORTANT NOTE: The documentation of cursor.copy_expert() linked above indicates using STDOUT to write out to a file and STDIN to copy from a file. But if you look at the syntax in the PostgreSQL manual, you'll notice that you can also specify the file to write to or read from directly in the COPY statement. You don't want to do that: the file is then read or written by the server process, so you're likely just wasting your time if you're not running as root (and who runs Python as root during development?). Just do what's indicated in psycopg2's docs and specify STDIN or STDOUT in your statement with cursor.copy_expert(), and it should be fine.
# running the copy statement
with open('/path/to/your/data/file.csv') as f:
    cursor.copy_expert(copy_from, file=f)

# don't forget to commit the changes.
connection.commit()
You don't need to drop down to psycopg2, use raw_connection, or a cursor.
Just execute the SQL as usual; you can even use bind parameters with text():
engine.execute(
    text("COPY some_table FROM :csv DELIMITER ',' CSV").execution_options(autocommit=True),
    csv='/tmp/a.csv'
)
You can drop the execution_options(autocommit=True) once this PR is accepted.