Getting a different file size when reading a PDF from disk into a varbinary(max) column and subsequently bcp-ing it back to disk - T-SQL

While setting up the test environment for a SQL Agent job I'm working on, I populated a varbinary(max) field from a PDF file on disk using OPENROWSET, like so:
DECLARE @File varbinary(max)
SELECT @File = BulkColumn
FROM OPENROWSET
(BULK 'v:\DIMA.pdf', SINGLE_BLOB) pdf
UPDATE Invoice_FileList SET Fajl = @File WHERE ID = 4
Afterwards I write the file back to disk like so:
DECLARE @bcpCommand nvarchar(1000), @ID bigint, @UID bigint
DECLARE @FileName nvarchar(256), @FileDir nvarchar(128) = 'v:\'
SELECT @ID = ID, @UID = UID, @FileName = @FileDir + ImeFajla FROM ABImport_ImmoF.dbo.Invoice_FileList WHERE ID = 4
SET @bcpCommand = 'bcp "SELECT Fajl FROM ABImport_ImmoF.dbo.Invoice_FileList WHERE ID = ' + CAST(@ID AS VARCHAR(20)) + ' AND UID = ' + CAST(@UID AS VARCHAR(20)) + '" queryout "' + @FileName + '" -T -N -S ' + @@SERVERNAME
print @bcpCommand
EXEC master..xp_cmdshell @bcpCommand
Everything seemingly works fine, but the original PDF file and the file bcp writes back to disk differ in size by a few bytes, and while they look identical when opened in a PDF reader, comparing their respective hex dumps shows that they are very much different.
Can someone explain why that is (and, since they look the same when opened in a PDF reader, do I need to worry about it at all)?
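One way to narrow down where the difference creeps in (a minimal sketch, assuming the table and file from the question, and SQL Server 2016 or later so HASHBYTES accepts inputs over 8,000 bytes) is to compare the stored column against a fresh read of the original file before bcp is involved:
-- If the sizes and hashes match here, the OPENROWSET import was byte-exact and the
-- extra bytes come from the bcp export step (for example, prefix bytes written by
-- native format with -N); if they differ, the row itself was not stored as expected.
SELECT DATALENGTH(f.Fajl)                    AS StoredBytes,
       DATALENGTH(src.BulkColumn)            AS FileBytes,
       HASHBYTES('SHA2_256', f.Fajl)         AS StoredHash,
       HASHBYTES('SHA2_256', src.BulkColumn) AS FileHash
FROM Invoice_FileList AS f
CROSS JOIN OPENROWSET(BULK 'v:\DIMA.pdf', SINGLE_BLOB) AS src
WHERE f.ID = 4;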

Related

How to import multiple CSV files into SQL Server tables?

I am using SQL Server 2017, and I want to import multiple .csv files into multiple tables in SQL Server.
I found the following script on the net:
--BULK INSERT MULTIPLE FILES From a Folder
--a table to loop thru filenames
--drop table ALLFILENAMES
CREATE TABLE ALLFILENAMES(WHICHPATH VARCHAR(255),WHICHFILE varchar(255))
--some variables
declare @filename varchar(255),
@path varchar(255),
@sql varchar(8000),
@cmd varchar(1000)
--get the list of files to process:
SET @path = 'C:\Dump\'
SET @cmd = 'dir ' + @path + '*.csv /b'
INSERT INTO ALLFILENAMES(WHICHFILE)
EXEC Master..xp_cmdShell @cmd
UPDATE ALLFILENAMES SET WHICHPATH = @path where WHICHPATH is null
--cursor loop
declare c1 cursor for SELECT WHICHPATH,WHICHFILE FROM ALLFILENAMES where WHICHFILE like '%.csv%'
open c1
fetch next from c1 into @path,@filename
While @@fetch_status <> -1
begin
--bulk insert won't take a variable name, so make a sql and execute it instead:
set @sql = 'BULK INSERT Temp FROM ''' + @path + @filename + ''' '
+ ' WITH (
FIELDTERMINATOR = '','',
ROWTERMINATOR = ''\n'',
FIRSTROW = 2
) '
print @sql
exec (@sql)
fetch next from c1 into @path,@filename
end
close c1
deallocate c1
But the problem is that I cannot use 'EXEC Master..xp_cmdShell' because it was disabled by the DBAs for security reasons, and they are not permitting me to use it. Is there any alternative command that I can use instead of 'xp_cmdshell' in the same script?
Also, near the BULK INSERT command in this script (set @sql = 'BULK INSERT Temp FROM ''' + @path + @filename + ''' ' + ...) I see only one target table name, 'Temp'; how can I specify multiple table names in the BULK INSERT command?
Any help please.
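For illustration only, here is a sketch of how the script's dynamic-SQL line could be extended so the target table is derived from each file name. This assumes a hypothetical convention where every CSV file is named exactly after the table it loads (e.g. Customers.csv goes into dbo.Customers):
--inside the cursor loop: build the table name from the file name
--(hypothetical naming convention; adjust to your own)
declare @table varchar(255)
set @table = QUOTENAME(REPLACE(@filename, '.csv', ''))
set @sql = 'BULK INSERT dbo.' + @table + ' FROM ''' + @path + @filename + ''''
+ ' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'', FIRSTROW = 2)'
print @sql
exec (@sql)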
It's been a long time since I have had to do this, but this is how I used to do these kinds of things.
DECLARE @intFlag INT
SET @intFlag = 1
WHILE (@intFlag <= 100)
BEGIN
PRINT @intFlag
declare @fullpath1 varchar(1000)
select @fullpath1 = '''\\FTP\' + convert(varchar, getdate() - @intFlag, 112) + '_your_file.csv'''
declare @cmd1 nvarchar(1000)
select @cmd1 = 'bulk insert [dbo].[your_table] from ' + @fullpath1 + ' with (FIELDTERMINATOR = ''\t'', FIRSTROW = 5, ROWTERMINATOR=''0x0a'')'
exec (@cmd1)
SET @intFlag = @intFlag + 1
END
GO
As you can tell, this loops through a bunch of files with dates in their names. The first part of each file name was in this date format: convert(varchar, getdate() - @intFlag, 112)
I'm guessing your files have names that match some specific pattern.
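For reference, style 112 in CONVERT is yyyymmdd, so each generated command looks roughly like this (the UNC path and table name are the answer's placeholders, and the date shown is purely hypothetical):
-- e.g. with @intFlag = 1 and a run date of 2017-06-16, @cmd1 becomes:
bulk insert [dbo].[your_table] from '\\FTP\20170615_your_file.csv'
    with (FIELDTERMINATOR = '\t', FIRSTROW = 5, ROWTERMINATOR='0x0a')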
SQL Server has a tool that does this for you. Go to your SQL Server folder and open the SQL Server Import and Export Wizard.
Choose Microsoft Excel as the data source.
Select the Excel file and follow the remaining steps of the wizard.

Make copies of database

I have a database with all the tables needed, which is perfectly usable. But for test purposes, I need to make copies of the database, let's say 100 times (my application will loop over each database to execute some scripts).
The generated databases should of course bear different names. Using Backup/Restore or even Detach/Copy/Attach 100 times is not doable, so I would like to know if there is a script that can loop and copy/restore a database several times under different names.
Thanks
OK, I found something that works for me, using a simple WHILE loop:
DECLARE @index int
DECLARE @dbName varchar(25)
declare @MDF varchar(200)
declare @LDF varchar(200)
declare @sql varchar(2000)
SET @index = 5
WHILE (@index < 200)
BEGIN
-- Construct db name and corresponding files name
SET @dbName = 'BDName' + Right('0000' + CONVERT(NVARCHAR, @index), 4)
set @MDF = '''C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQL2008\MSSQL\DATA\' + @dbName + '.mdf'''
SET @LDF = '''C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQL2008\MSSQL\DATA\' + @dbName + '_1.ldf'''
-- Restore db from backup bak file
SELECT @sql = 'RESTORE DATABASE ' + @dbName + '
FROM DISK = ''C:\DB Backup\DBName1919.bak''
WITH FILE = 1,
MOVE ''WEEKLY_UK_CO_E_REPORTING_Data'' TO ' + @MDF + ',
MOVE ''WEEKLY_UK_CO_E_REPORTING_Log'' TO ' + @LDF +
', NOUNLOAD, STATS = 10'
exec(@sql)
SET @index = @index + 1
END
GO
To retrieve the file names inside the backup (the logical MDF and LDF names used in the MOVE clauses), just run the following:
RESTORE FILELISTONLY
FROM DISK = N'C:\DB Backup\DBName1919.bak'
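For concreteness (a sketch derived purely from the script above), the LogicalName values returned by that command are what the MOVE clauses refer to, and with @index = 5 the loop builds and executes roughly:
RESTORE DATABASE BDName0005
FROM DISK = 'C:\DB Backup\DBName1919.bak'
WITH FILE = 1,
MOVE 'WEEKLY_UK_CO_E_REPORTING_Data' TO 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQL2008\MSSQL\DATA\BDName0005.mdf',
MOVE 'WEEKLY_UK_CO_E_REPORTING_Log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQL2008\MSSQL\DATA\BDName0005_1.ldf',
NOUNLOAD, STATS = 10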

Create View in T-SQL Script

We are running SQL Server 2008 R2 and creating an archiving function that will create a new database (that can later be taken offline and stored elsewhere), then take data out of our primary database and put it into the new DB, and finally create a view in the primary DB to look at the archived data in the new table.
I have the script to create the DB, create the archive table in the new DB, copy the records from the primary DB into the archive DB, and delete the records from the primary DB. Now I am trying to script the creation of a view:
declare @sql varchar(8000)
set @sql = 'create view [' + @srcdb + '].[dbo].[vw_artrans] as
select * from [' + @srcdb + '].[dbo].artrans
union
select * from [' + @archdb + '].[dbo].artrans'
exec (@sql)
But CREATE VIEW does not allow a database name to be specified as part of the view name.
So I tried this instead:
declare @sql varchar(8000)
set @sql = 'use ' + @srcdb + '
go
create view [vw_artrans] as
select * from [' + @srcdb + '].[dbo].artrans
union
select * from [' + @archdb + '].[dbo].artrans'
exec (@sql)
But it now complains about the GO statement (Incorrect syntax).
The name of the database being created for the archived data is determined dynamically in the script (@archdb contains the name), so I can't hard-code the DB name and I can't run a second script.
Based on @Sebastien's answer, here is the solution:
declare @sql varchar(8000)
set @sql = 'EXEC ' + @srcdb + '.sys.sp_executesql N''create view [vw_artrans] as
select * from [' + @srcdb + '].[dbo].artrans
union
select * from [' + @archdb + '].[dbo].artrans'';'
exec (@sql)
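To see why this works, here is roughly what @sql resolves to with hypothetical database names (say @srcdb = 'PrimaryDB' and @archdb = 'ArchiveDB'): the outer EXEC switches context to PrimaryDB, and the inner batch contains nothing but the CREATE VIEW statement, so no GO separator is needed:
EXEC PrimaryDB.sys.sp_executesql N'create view [vw_artrans] as
select * from [PrimaryDB].[dbo].artrans
union
select * from [ArchiveDB].[dbo].artrans';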
To execute a dynamic SQL statement in a database other than the one you are in, you can use sp_executesql like this:
USE db1;
EXEC db2.sys.sp_executesql N'SELECT DB_NAME();';
This will result in db2 being returned.
GO is not a T-SQL statement. It is interpreted by SSMS to break the query text into batches. It never gets sent to SQL Server itself.

Subtleties of SQL Server Variables

I have the following SQL Server stored procedure:
CREATE PROCEDURE ispsDcOhcAgg @TmpTableName NVARCHAR(50), @ListItem NVARCHAR(50)
AS
IF EXISTS (SELECT name
FROM sys.tables
WHERE name = @TmpTableName)
DROP TABLE @TmpTableName; -- This will not work.
GO
This will clearly not work (see the comment in the above snippet). The only (and very ugly) way I have found to get around this problem is to do the following:
CREATE PROCEDURE ispsDcOhcAgg @TmpTableName NVARCHAR(50), @ListItem NVARCHAR(50)
AS
DECLARE @SQL NVARCHAR(4000)
SET @SQL = N'IF EXISTS (SELECT name ' +
N'FROM sys.tables ' +
N'WHERE name = N' + N'''' + @TmpTableName + N''') ' +
N'DROP TABLE ' + @TmpTableName + N';'
EXEC sp_executesql @SQL;
GO
which truly stinks and for large stored procedures, it's horrendous!
Is there another way of doing this that I don't know about?
Thanks for your time.
No, if you want to use a table name dynamically like this, you need to use dynamic SQL.
So you should make sure you don't open yourself up to nasty SQL injection risks!
Try something like this:
SET @SQL = 'IF EXISTS (SELECT name ' +
N'FROM sys.tables ' +
N'WHERE name = @TableName) ' +
N'DROP TABLE ' + QUOTENAME(@TmpTableName) + ';'
EXEC sp_executesql @SQL, N'@TableName sysname', @TmpTableName;
No, if you want to determine the table to be dropped at runtime, there is no alternative to dynamic SQL.
There is a slightly less ugly way: you only use dynamic SQL for the command that needs to be dynamic (the DROP command):
DECLARE @SQL NVARCHAR(100)
IF EXISTS (SELECT name
FROM sys.tables
WHERE name = @TmpTableName)
BEGIN
SET @SQL = N'DROP TABLE ' + @TmpTableName + N';'
EXEC sp_executesql @SQL;
END
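In either version, the stored procedure from the question is then called as usual; a purely hypothetical invocation (the table and item names here are made up for illustration) would be:
-- Drops dbo.DcOhcStaging if it exists (illustrative names only)
EXEC dbo.ispsDcOhcAgg @TmpTableName = N'DcOhcStaging', @ListItem = N'SomeItem';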

In SSMS copied string has different behaviour to original string

I am attempting to semi-automate the creation of my databases.
As part of this I want to add column descriptions as extended properties.
When I try to run sp_sqlexec in my script (or even just Exec(@mystring)) I get an error. However, if, while debugging, I copy the dynamic SQL string from the watch window and then run sp_sqlexec on the copied string in a separate window, I get no errors and the extended properties are added correctly.
The following script demonstrates the problem:
--Create a table to apply column descriptions to
Create table dbo.table1 (id int, name nvarchar(20));
--Create the table that contains our column descriptions
Create table dbo.column_descs_table (schemaname nvarchar(20), tablename nvarchar(20), columnname nvarchar(20), column_description nvarchar(20))
Insert into column_descs_table (schemaname, tablename, columnname, column_description)
values ('dbo', 'table1', 'id', 'the id column'), ('dbo' , 'table1', 'name', 'the name column');
--Dynamic sql string variable to hold the commands
Declare @dyn_sql nvarchar(max);
Set @dyn_sql = 'N'''; --Set to opening quote
--now create the string containing commands to add column descriptions
SELECT @dyn_sql = @dyn_sql + N' EXEC sp_addextendedproperty ''''Col Desc'''', ''''' + column_description + N''''', ''''SCHEMA'''', ' + schemaname + N', ''''TABLE'''', ' + tablename + N', ''''COLUMN'''', ' + columnname + N' ;'
FROM dbo.column_descs_table
Set @dyn_sql = @dyn_sql + ''''; --add the closing quote
Print @dyn_sql --If I copy the contents of @dyn_sql here and run it separately it works OK
Exec sp_sqlexec @dyn_sql -- this line causes the error
The error I get is
Msg 102, Level 15, State 1, Line 1
Incorrect syntax near ' EXEC sp_addextendedproperty 'Col Desc', 'the id column', 'SCHEMA', dbo, 'TABLE', table1, 'COLUMN', id ; EXEC sp_addextendedprope'.
Yet if I step through the code, copy the contents of @dyn_sql, and paste it as follows:
Exec sp_sqlexec N' EXEC sp_addextendedproperty ''Col Desc'', ''the id column'', ''SCHEMA'', dbo, ''TABLE'', table1, ''COLUMN'', id ; EXEC sp_addextendedproperty ''Col Desc'', ''the name column'', ''SCHEMA'', dbo, ''TABLE'', table1, ''COLUMN'', name ;'
Then the above works fine and the column descriptions are added as expected.
Any help on this specific copying problem is much appreciated. I do understand the security issues with dynamic SQL (this script will be removed from the database once my setup is complete).
Thanks in advance
Jude
It looks like it's because your leading N is included within the string to execute; you don't need it at all. In other words, you are ending up with something like this:
exec sp_sqlexec 'N'' exec sp_addextendedproperty /* etc. */ '''
But it should be like this:
exec sp_sqlexec N'exec sp_addextendedproperty /* etc. */ '
But why are you even using dynamic SQL here? All values passed to sp_addextendedproperty can be passed as parameters so there is no obvious reason to use dynamic SQL, unless you've simplified something for the question.
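For example, a minimal parameter-based sketch (reusing the dbo.column_descs_table from the question) that adds the descriptions with no dynamic SQL at all might look like this:
DECLARE @schema sysname, @table sysname, @column sysname, @descr sql_variant;
DECLARE desc_cur CURSOR FOR
    SELECT schemaname, tablename, columnname, column_description
    FROM dbo.column_descs_table;
OPEN desc_cur;
FETCH NEXT FROM desc_cur INTO @schema, @table, @column, @descr;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Every argument is passed as a parameter, so no quote escaping is needed
    EXEC sys.sp_addextendedproperty
        @name = N'Col Desc', @value = @descr,
        @level0type = N'SCHEMA', @level0name = @schema,
        @level1type = N'TABLE',  @level1name = @table,
        @level2type = N'COLUMN', @level2name = @column;
    FETCH NEXT FROM desc_cur INTO @schema, @table, @column, @descr;
END
CLOSE desc_cur;
DEALLOCATE desc_cur;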
Finally, you should be using sp_executesql; it's the preferred way to execute dynamic SQL.
I believe that I have resolved my string copying problem. SQL was detecting the double quote marks in my concatenated string as empty strings and removing them. A simple example showing the problem and my solution is below:
--Example to Select 'simple string' and then 'concat string' into results sets
DECLARE
@Simplestring nvarchar( max ) = '' ,
@Concatstring nvarchar( max ) = '' ,
@Stringvar nvarchar( 10 ) = 'string';
--The double quotes in the next line are the quotemark we want plus a quotemark acting
--as an escape character
--@simplestring will be set to 'Select 'simple string' '
SET @Simplestring = 'Select ''simple string'' ';
--Similarly we need @concatstring to be set to 'Select 'Concat string' '
SET @Concatstring = 'Select '' concat' + @Stringvar + ''; -- this won't work: the last
--double quote will be removed
--Add a character that cannot appear in any other part of the concatenation - I've used *
SET @Concatstring = 'Select '' Concat ' + @Stringvar + '*';
--Now replace the * with a quote mark
SET @Concatstring = REPLACE( @Concatstring , '*' , '''' ); -- This will work
EXEC sp_executesql @Simplestring;
EXEC sp_executesql @Concatstring;
There may be a simpler solution than mine.
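For what it's worth, a possibly simpler variant of the same example (a small sketch, not tested against the original script) is to close the concatenation with four consecutive quote marks, which is a string literal containing a single quote character, so no placeholder and REPLACE are needed:
-- '''' is a literal containing one quote mark
SET @Concatstring = 'Select '' Concat ' + @Stringvar + '''';
EXEC sp_executesql @Concatstring;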
Many thanks for the advice on using sp_executesql. I am working on changing my code to use it (with variables passed in as parameters).
Jude