I'm having difficulty invoking the following PowerShell command, from the command line as well as in a script:
Invoke-Sqlcmd -HostName **HOST** -Database **DATABASE** `
    -Username **USER** -Password **PWD** -ServerInstance "tcp:**HOST**" `
    -Query "TRUNCATE TABLE **THE_TABLE**;
        BULK INSERT **THE_TABLE** FROM 'D:\Folder\csvfiles\import.csv'
        WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);"
I've copied the CSV file to the parent folder, and then to the root. Each time, the command fails with the following error:
Invoke-Sqlcmd : Cannot bulk load. The file "D:\Folder\csvfiles\import.csv" does not exist.
At line:1 char:1
This is all part of a script that Task Scheduler runs on an hourly basis. It often runs just fine, but today it has been failing more frequently.
I'm stumped. Why can't it find a file that is obviously there? Am I overlooking something?
I found out that a competing scheduled task was deleting all of the CSV files in the folder at almost the same time my script ran. This turned out to be a case of bad timing.
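A defensive tweak for scripts like this (a hedged sketch; the placeholders are the ones from the question, and note that BULK INSERT resolves the path on the machine running SQL Server, so this check only helps when the script runs on that same server):
$csv = 'D:\Folder\csvfiles\import.csv'
if (Test-Path $csv) {
    Invoke-Sqlcmd -ServerInstance "tcp:**HOST**" -Database **DATABASE** -Username **USER** -Password **PWD** `
        -Query "BULK INSERT **THE_TABLE** FROM '$csv' WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK)"
}
else {
    # Skip the import when a competing task has already removed the file
    Write-Warning "CSV not found at $csv; skipping this run."
}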
I am trying to deploy SQL files to Azure Synapse Analytics dedicated SQL pools using a PowerShell script in an Azure DevOps pipeline.
I have a folder of SQL files. After defining an array of files, I run a foreach loop over the array and call Invoke-Sqlcmd to deploy each file, but only the first SQL file gets deployed (the object is created) and then I get this error:
Msg 104309, Level 16, State 1, Line 1 There are no batches in the input script.
Below is my piece of code:
$files = '$(Build.SourcesDirectory)\folder1\'
foreach ($s in $files)
{
    Invoke-Sqlcmd -ServerInstance $(server) -Database $(db) -InputFile $s -Credential $(cred)
}
Azure Synapse Analytics dedicated SQL pool scripts are sensitive to empty batches; e.g., this script would generate the same error:
Msg 104309, Level 16, State 1, Line 1 There are no batches in the input script.
-- xx
GO
However, the truth is you get the same error in Synapse Studio and SQL Server Management Studio (SSMS), so I suggest you run through your scripts and use the Parse button (Ctrl+F5) in SSMS, which parses the script but does not execute it. This will help you track down the error.
In summary don't have batches that don't do anything.
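If you want to catch such scripts before deployment, here is a minimal pre-flight sketch (assumptions: plain .sql files in the folder shown, and only -- line comments; block comments are not handled):
# Flag scripts whose batches contain only comments, whitespace, or GO separators
$files = Get-ChildItem "D:\temp\Powershell\*.sql"
foreach ($s in $files) {
    $batches = (Get-Content $s.FullName -Raw) -split '(?im)^\s*GO\s*$' |
        ForEach-Object { ($_ -replace '(?m)--.*$', '').Trim() } |
        Where-Object { $_ }
    if (-not $batches) { Write-Warning "No executable batches in $($s.Name)" }
}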
I was able to get a simple local example working by using dir (Get-ChildItem) and the FullName property to get the full path of each file:
$files = dir "D:\temp\Powershell\*.sql"
foreach ($s in $files) {
#$s.Name
Invoke-Sqlcmd -ServerInstance '.\sql2019x' -Database 'tempdb' -InputFile $s.FullName
}
I'm hitting an issue that you hopefully can help me with:
I want to import a CSV file into Postgres with PowerShell. For this I have written the following code:
$ConnectionString = "Driver={PostgreSQL UNICODE(x64)};Server=$SQLServer;Port=$SQLPort;Database=$SQLDB;Uid=$SQLUser;Pwd=$SQLPass;"
$connection = New-Object System.Data.ODBC.ODBCConnection
$connection.ConnectionString = $ConnectionString
$connection.Open()
$query = "copy sensorhistory from 'C:\Powershell_Scripts\Postgres\Eurogate\data\export.csv' Delimiter ',' csv header;"
$DBCmd = $connection.CreateCommand()
$DBCmd.CommandText = $query
# COPY returns no result set, so a single ExecuteNonQuery is enough
$DBCmd.ExecuteNonQuery() | Out-Null
But I am getting the error "No such file or directory". The path is correct and I have already changed permissions to give everybody access, but it is still not working.
Any help is welcome.
You're pointing to the wrong file system!
From the $SQLServer argument in the connection string, it appears that Postgres might be hosted on a remote machine.
When encountering a file path like C:\[...]\export.csv in a query, the database engine asks its operating system to open the corresponding file, and the operating system goes "C:\? That's a local drive, let me find that file for you" - and fails to locate the file, because you actually meant the C:\ drive on your computer, not the server's.
Two ways to fix this:
1. Pass a network path in the query:
Update the query so the file path explicitly points to a file share on a different machine:
$query = "copy sensorhistory from '\\ClientComputer01\ShareName\path\to\data\export.csv' Delimiter ',' csv header;"
2. Copy the file to the server that hosts Postgres
Of course, opening the file from the local filesystem wouldn't fail if the file also existed there. Assuming WinRM is configured in the environment and you have admin access on the DB server, copy the file ahead of time:
$remoteSession = New-PSSession -ComputerName $SQLServer
Copy-Item C:\Powershell_Scripts\Postgres\Eurogate\data\export.csv -Destination C:\Powershell_Scripts\Postgres\Eurogate\data\ -ToSession $remoteSession
Alternatively, copy the file over SMB if you have write access to a file share:
Copy-Item C:\Powershell_Scripts\Postgres\Eurogate\data\export.csv \\$SQLServer\ShareName\path\to\data\
After which the DB engine will be able to open the new local copy of the file and you can keep the local file path in the query.
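A third option, if you can shell out to psql instead of going through ODBC: psql's \copy meta-command reads the file on the client and streams it to the server, so no server-side path is needed. A hedged sketch (assumes psql.exe is installed and on PATH):
$env:PGPASSWORD = $SQLPass    # psql reads the password from this environment variable
& psql -h $SQLServer -p $SQLPort -d $SQLDB -U $SQLUser `
    -c "\copy sensorhistory from 'C:\Powershell_Scripts\Postgres\Eurogate\data\export.csv' with csv header"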
I have a script that pulls some information from AD, inserts rows into a temp table, then calls a SQL script that transforms and upserts rows into a crosswalk table. The script runs fine in ISE, but fails when running in TaskScheduler, whether manually run or scheduled.
On the 'Action' page, the program is 'powershell.exe' and the arguments are '-ExecutionPolicy Bypass C:\scripts\SysManagement\Populate_AD_Xwalk.ps1'. The Last Run Result is (0x1).
Any idea what is wrong?
Thanks
# Invoke-Sqlcmd connection string parameters
$params = @{'server'='xxx';'UserName'='xxx';'Password'='xxx'; 'Database'='xxx'}
######################
# Function to manipulate the data
Function writeDiskInfo
{
    param($UPN,$EMAIL,$SAM,$ACTIVE)
    $InsertResults = @"
INSERT INTO [xxx].[dbo].[WORK_UPN_Email](UPN, EMAIL, SAM, ACTIVE)
VALUES ('$UPN','$EMAIL','$SAM', '$ACTIVE')
"@
    # Call the Invoke-Sqlcmd cmdlet to execute the query
    Invoke-Sqlcmd @params -Query $InsertResults
}
#####################
# Query AD objects and store in an array
$dp = Get-ADUser -Property 'emailaddress' -Filter *
# Loop through the array and insert into the WORK table
foreach ($item in $dp)
{
    # Call the function to transform the data and prepare it for insertion
    writeDiskInfo $item.UserPrincipalName $item.EmailAddress $item.SamAccountName $item.Enabled
}
# Call the SQL procedure to delete rows with blank UPNs and upsert the crosswalk table
Invoke-Sqlcmd @params -Query "EXEC ZZproc_Upsert_AD_Email"
A Last Run Result of (0x1) could mean it's a privilege issue.
Check which user the scheduler is running as. Can the task run even when that user is not logged in?
I believe you are using an AD user for the SQL operations. Is the scheduler running as that user? Or does the user running the task have sufficient DB privileges?
Say the task is being run as the AD user; then check whether that user has sufficient privileges to the folder where the PowerShell script resides.
Under the System32 folder there is a folder named "Tasks". Does this user have read and execute privileges on the Tasks folder?
Most importantly, the user running the scheduled task should have the "Log on as batch job" privilege.
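To see the actual failure instead of just (0x1), one option is a transcript at the top of the script, so errors land in a log file (a hedged sketch; the log path is an assumption):
# First line of Populate_AD_Xwalk.ps1
Start-Transcript -Path 'C:\scripts\SysManagement\Populate_AD_Xwalk.log' -Append
# ... existing script body ...
# Last line of the script
Stop-Transcript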
I am trying to run queries stored in a text file from PowerShell. I use the following to do that:
Invoke-Expression 'sqlcmd -d TestDB -U $user -P $pw -i "E:\SQLQuery1.sql"'
If an error or exception occurs when executing the queries from the .sql file, how can I capture that in my PowerShell script? How can I get the script output?
NOTE: I cannot use Invoke-Sqlcmd.
To answer the question:
"If an error or exception occurs when executing the .sql file, how can I get that into my PowerShell script? How can I get the script output?"
Invoke-Expression returns the output of the expression executed. However, it may only capture STDOUT, not STDERR (I haven't tested, as I don't use this method), so you might not get the actual error message.
From the Help:
The Invoke-Expression cmdlet evaluates or runs a specified string as a command and returns the results of the expression or command
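If you must stick with plain sqlcmd, you can drop Invoke-Expression and call the executable directly; its -b switch makes sqlcmd exit with a non-zero code on SQL errors, which you can test via $LASTEXITCODE. A minimal sketch:
# 2>&1 merges error output into $output so nothing is lost
$output = & sqlcmd -b -d TestDB -U $user -P $pw -i "E:\SQLQuery1.sql" 2>&1
if ($LASTEXITCODE -ne 0) {
    Write-Error "sqlcmd failed: $($output -join [Environment]::NewLine)"
}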
The better route is to use the PowerShell method you already have available - Invoke-SQLCmd is installed if you have installed any of the SQL Server 2008 (or newer) components/tools (like SSMS). If you've got SQL Server 2012, it's very easy: import-module sqlps. For 2008, you need to add a Snap-In, add-pssnapin SqlServerCmdletSnapin. And since you have sqlcmd.exe, the PowerShell components should be there already.
If all else fails, go the System.Data.SQLClient route:
$Conn=New-Object System.Data.SQLClient.SQLConnection "Server=YOURSERVER;Database=TestDB;User Id=$user;password=$pw";
$Conn.Open();
$DataCmd = New-Object System.Data.SqlClient.SqlCommand;
$MyQuery = Get-Content "e:\SQLQuery1.sql" -Raw;  # -Raw returns a single string rather than an array of lines
$DataCmd.CommandText = $MyQuery;
$DataCmd.Connection = $Conn;
$DAadapter = New-Object System.Data.SqlClient.SqlDataAdapter;
$DAadapter.SelectCommand = $DataCmd;
$DTable = New-Object System.Data.DataTable;
$DAadapter.Fill($DTable)|Out-Null;
$Conn.Close();
$Conn.Dispose();
$DTable;
With both this and Invoke-SQLCmd, you'll be able to use try/catch to pick up and handle any error that occurs.
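For example, a minimal sketch (-ErrorAction Stop turns the cmdlet's errors into terminating errors so the catch block fires):
try {
    Invoke-Sqlcmd -ServerInstance "YOURSERVER" -Database "TestDB" -InputFile "E:\SQLQuery1.sql" -ErrorAction Stop
}
catch {
    Write-Error "Script failed: $($_.Exception.Message)"
}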
As seen in this question's answers, there is a cmdlet built into PowerShell to invoke SQLCMD called, unsurprisingly, Invoke-Sqlcmd.
It's very easy to use for individual files:
Invoke-sqlcmd -ServerInstance $server -Database $db -InputFile $filename
Or groups:
$listOfFiles | % { Invoke-sqlcmd -ServerInstance $server -Database $db -InputFile $_ }
Use the invoke-sqlquery module, available at this website.
I have a PowerShell script that uses Invoke-Sqlcmd to apply scripts to a series of development databases. I loop through a list of scripts, compare it to the current release level of the database, and then apply the required scripts to get the DB to the release level it needs to be at.
Certain databases are reference databases and are in a READ_ONLY state. I connect to those databases, run an ALTER DATABASE script setting them to READ_WRITE, apply the script, then change them back to READ_ONLY.
Overall the script works well. The issue is that when PowerShell first opens a connection to the database, applies the first script, and then goes to alter the DB back to READ_ONLY, the database has objects locked. I've traced it back to the previous connection and a SHARED_TRANSACTION_WORKSPACE lock (sys.dm_tran_locks) for what looks to be the previous PowerShell connection.
Why is this connection still open after the Invoke-Sqlcmd has completed, and is there anything I can do about it? Can I force Invoke-Sqlcmd to use a new connection for each invocation of the cmdlet?
I have tried a messy fix that kills the offending connection and then retries, but I think there must be something better.
I've always done this and it seems to work:
[System.Data.SqlClient.SqlConnection]::ClearAllPools()
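In the release-script scenario above, that means clearing the pool after the last Invoke-Sqlcmd against the database and before flipping it back to READ_ONLY. A hedged sketch (server, database, and file names are placeholders):
Invoke-Sqlcmd -ServerInstance $server -Database $db -InputFile $script
# Invoke-Sqlcmd connections are pooled; clear the pool so no idle session holds locks
[System.Data.SqlClient.SqlConnection]::ClearAllPools()
Invoke-Sqlcmd -ServerInstance $server -Database master -Query "ALTER DATABASE [$db] SET READ_ONLY WITH ROLLBACK IMMEDIATE"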
Well, I know that this is a very old post and the people from Microsoft say they fixed this issue (per the article mentioned by David Brabant), but maybe I'm not the luckiest guy, and I had to make a workaround to get it working.
Even running Microsoft SQL Server 2012 (SP1) - 11.0.3128.0 (X64) I had the same issue. After some research, I found a way to get output from Invoke-Sqlcmd, so I can capture the session ID of the current user process with SQL Server's built-in @@SPID global variable and then make a connection with ADO.NET to execute a KILL statement that closes the open connection.
So here is the workaround applied in my case:
# Invoke Invoke-Sqlcmd to execute a script from a file
Invoke-Sqlcmd -ServerInstance "[SERVER_NAME]" -Database [DATABASE_NAME] -Username [USER] -Password [PASSWORD] -InputFile [DOT_SQL_FILE_PATH]
# Invoke Invoke-Sqlcmd to execute an inline SQL statement and capture the session ID in a PowerShell variable
$SQLSession = Invoke-Sqlcmd -ServerInstance "[SERVER_NAME]" -Database [DATABASE_NAME] -Username [USER] -Password [PASSWORD] -Query "SELECT @@SPID AS SessionID"
# Build the query that executes the KILL statement
$DbQuery = "KILL " + $SQLSession.SessionID;
# Create SQL connection with ADO.NET
$DbConnection = New-Object System.Data.SqlClient.SqlConnection
$DbConnectionString = "Server = [SERVER_NAME]; Database = [DATABASE_NAME]; User ID=[USER]; Password=[PASSWORD];"
$DbConnection.ConnectionString = $DbConnectionString
$DbConnection.Open()
# Create SQL command for KILL clause
$DbCommand = New-Object System.Data.SqlClient.SqlCommand
$DbCommand.Connection = $DbConnection
$DbCommand.CommandText = $DbQuery
# Execute KILL clause
$DbCommand.ExecuteNonQuery()
# Close connection
$DbConnection.Close()
I hope it helps.
Even though I am using the newest version of SSMS (Version 16.5.3 - Build 13.0.16106.4), I still get this issue. I haven't figured out what the "right" way of forcing the connection closed is, but I have a work-around that is simple and resolves the issue for me. If you just need to get the connection off the database, you can do the following:
Run normal command(s)
Invoke-Sqlcmd -ServerInstance "SOME_SERVER" -Database "SOME_DB" ...
When you are ready to eliminate the connection from the database:
Invoke-Sqlcmd -ServerInstance "SOME_SERVER" -Database "SOME_DB" -Query "use [master];"
This will switch the connection to master, thus removing it from the database of interest. If you absolutely need the connection closed, I think you need to resort to SqlClient or such.
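A hedged sketch of that SqlClient route, in case you need the connection closed for real (server and database names are placeholders):
$conn = New-Object System.Data.SqlClient.SqlConnection "Server=SOME_SERVER;Database=SOME_DB;Integrated Security=True"
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT 1"
$cmd.ExecuteNonQuery() | Out-Null
$conn.Close()    # returns the connection to the pool
[System.Data.SqlClient.SqlConnection]::ClearAllPools()    # severs the physical connection
$conn.Dispose()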