I have a release pipeline which applies database changes with 'SqlCmd.exe'.
I am trying to execute a stored procedure using this command-line utility:
/opt/mssql-tools/bin/sqlcmd -S tcp:$(Server) -d $(Database) -U $(UserName) -P '$(Password)' -b -i "$(ScriptFile)"
If something goes wrong in the script file, I want SqlCmd.exe to automatically roll back all the changes.
I should mention that there is no transaction management inside the script file.
Please help me learn how to resolve this.
You probably have to add transaction handling (with rollback) in your script file. There are no settings in the Azure release pipeline to control the rollback behavior. See the example here for adding transactions to scripts.
If you do not want to add transactions to the script file, you can try adding a PowerShell task to the release pipeline that runs the script below to wrap your query contents in BEGIN TRANSACTION and COMMIT TRANSACTION.
# Build the batch: wrap the original script contents in a transaction
$fullbatch = @()
$fullbatch += "BEGIN TRANSACTION;"
$fullbatch += Get-Content $(ScriptFile)
$fullbatch += "COMMIT TRANSACTION;"
# When the array is interpolated below, its lines are joined with spaces.
# With -b, sqlcmd stops at the first error, so the COMMIT is never reached
# and the open transaction is rolled back when the connection closes.
sqlcmd -S tcp:$(Server) -d $(Database) -U $(UserName) -P '$(Password)' -b -Q "$fullbatch"
See the example in this thread.
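If the agent runs Linux (the /opt/mssql-tools path in the question suggests it does), a Bash task could do roughly the same wrapping; a minimal sketch, assuming the same pipeline variables, which Azure Pipelines substitutes before the script runs:
# Build a wrapped copy of the script: the original contents inside a transaction.
{
  echo "BEGIN TRANSACTION;"
  cat "$(ScriptFile)"
  echo "COMMIT TRANSACTION;"
} > wrapped.sql
# -b makes sqlcmd stop at the first error; the COMMIT is then never reached,
# so the open transaction is rolled back when the connection closes.
/opt/mssql-tools/bin/sqlcmd -S tcp:$(Server) -d $(Database) -U $(UserName) -P '$(Password)' -b -i wrapped.sql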
I want to know if we can pass a list of SQL script files to the psql command.
I have test1.sql, test2.sql, and test3.sql files, and right now I am executing these files individually in a loop:
psql -f D:\\test1.sql postgresql://postgres:secret@localhost:5432/testdb
I want to know if there is any way to pass all three files to the psql command so that psql executes them sequentially.
You can use the -f parameter multiple times:
psql -f D:\\test1.sql -f D:\\test2.sql -f D:\\test3.sql postgresql://postgres:secret@localhost:5432/testdb
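If the three files should also succeed or fail as a unit, the repeated -f options can be combined with ON_ERROR_STOP and --single-transaction; a minimal sketch:
# Stop at the first error; wrap all three files in one BEGIN/COMMIT,
# so either everything is applied or nothing is.
psql -v ON_ERROR_STOP=1 --single-transaction -f D:\\test1.sql -f D:\\test2.sql -f D:\\test3.sql postgresql://postgres:secret@localhost:5432/testdb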
I'd like to copy the contents of my local database to my remote one (inside a Docker container).
For some reason, it is more complicated than I expected:
When I try to copy the data to the remote one, I get this "ERROR: CREATE DATABASE cannot run inside a transaction block".
OK... So I got into my Docker container and added the \set AUTOCOMMIT setting inside it. But I still get this error.
This is the command I ran:
// backup
pg_dump -C -h localhost -U postgres woof | xz >backup.xz
and then on my remote computer:
xz -dc backup.xz | docker exec -i -u postgres waf-postgres psql --set ON_ERROR_STOP=on --single-transaction
But each time I get this "CREATE DATABASE cannot run inside a transaction block" error, no matter what I try, even if I set autocommit to "on".
Here is my problem: I don't know what a transaction block is, and I don't understand why copying one DB to another needs to be such a pain. My remote DB is empty, so why is there so much fuss, and why can't psql just force what I want?
My aim is just to copy my local db to the remote one.
What happens here is: you add a CREATE DATABASE statement with the -C flag and then run psql with --single-transaction, so the contents of the script are wrapped in BEGIN; ... END;, inside which you can't use CREATE DATABASE.
So either remove -C and run psql against an existing database, or remove --single-transaction from psql. Make the decision based on what you really need...
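For example, the two variants might look roughly like this (a sketch, reusing the woof database and waf-postgres container names from the question; the createdb step is only needed if the target database does not exist yet):
# Option 1: drop -C on the dump side, create the target database yourself,
# and keep --single-transaction on the restore:
pg_dump -h localhost -U postgres woof | xz > backup.xz
docker exec -u postgres waf-postgres createdb woof
xz -dc backup.xz | docker exec -i -u postgres waf-postgres psql --set ON_ERROR_STOP=on --single-transaction -d woof

# Option 2: keep -C (the dump itself creates and reconnects to the database)
# and drop --single-transaction on the restore:
pg_dump -C -h localhost -U postgres woof | xz > backup.xz
xz -dc backup.xz | docker exec -i -u postgres waf-postgres psql --set ON_ERROR_STOP=on -d postgres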
from man pg_dump:
-C
--create
Begin the output with a command to create the database itself and reconnect to the created database. (With a script of this form, it doesn't matter which database in the destination installation you connect to before running the script.) If --clean is also specified, the script drops and recreates the target database before reconnecting to it.
from man psql:
--single-transaction
This option can only be used in combination with one or more -c and/or -f options. It causes psql to issue a BEGIN command before the first such option and a COMMIT command after the last one, thereby wrapping all the commands into a single transaction. This ensures that either all the commands complete successfully, or no changes are applied.
I have my database dump (tables, functions, triggers, etc.) in *.sql files.
At the moment I am deploying them via Jenkins by running an 'Execute shell' command:
sudo -u postgres psql -d my_db < /[path_to_my_file].sql
The problem is that if something is wrong in my SQL file, the build finishes as SUCCESS. I would like to be informed immediately if something fails, without looking into the log and checking whether every command executed successfully.
Is it possible (and how, if the answer is 'yes') to deploy a Postgres database via Jenkins some other way?
I changed my execution command to:
sudo -u postgres psql -v ON_ERROR_STOP=1 -d my_db < [path_to_file].sql
Make sure you have set
set -e
before running the command.
If that does not work, I'd look at the return code from the command above. That can be done by running
echo $?
right after the command.
If that gives you a zero when it fails, it's Postgres's fault (since it should return something other than 0 on failure).
Perhaps there is a Postgres flag to fail on wrong input.
EDIT:
-v ON_ERROR_STOP=1
Passing this flag to psql should make it fail on errors.
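Putting the pieces together, the Jenkins 'Execute shell' step could look roughly like this (a sketch; --single-transaction is optional, but it keeps a failed run from leaving half-applied changes, and it requires passing the file with -f rather than stdin redirection):
#!/bin/bash
set -e   # fail the build step if any command exits with a non-zero status

# ON_ERROR_STOP makes psql exit non-zero on the first SQL error,
# which set -e then turns into a failed Jenkins build.
sudo -u postgres psql -v ON_ERROR_STOP=1 --single-transaction -d my_db -f /[path_to_my_file].sql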
I am writing a simple Perl script that attempts to automate checking files in via SVN. I am not using any SVN client modules within Perl; for the sake of simplicity I am just calling the command-line client.
Whenever I run the svn checkout command seen below, I am getting this error:
sh[2]: svn+ssh://my/repo/url/project/trunk: not found.
Here is the command and the variable declarations.
$svn_root = "svn+ssh://my/repo/url/project/trunk";
$user = `whoami`;
`svn checkout -q --username $user $svn_root workingCopyName`;
I should note that I am connecting to the repository via SSH and have edited my config file (and included -q just in case). I will also note that running this command outside of the script works perfectly fine, for the same exact URL, and without the -q argument.
Thanks for your help. Please let me know if I need to clear anything up.
Note that the error is coming from the shell, not svn.
You are executing
svn checkout -q --username user
svn+ssh://my/repo/url/project/trunk workingCopyName
when you mean to execute
svn checkout -q --username user svn+ssh://my/repo/url/project/trunk workingCopyName
because $user contains a newline. Replace
my $user = `whoami`;
with
chomp( my $user = `whoami` );
When I run a data-only script in SQL Server 2008 R2, it shows this error:
Cannot execute script
Additional information:
Exception of type 'System.OutOfMemoryException' was thrown. (mscorlib)
The size of the script file is 115 MB, and it's only data.
When I open this script file, it shows:
Document contains one or more extremely long lines of text.
These lines cause the editor to respond slowly when you open the file.
Do you still want to open the file?
I run the schema-only script first and then the data-only script.
Is there any way to fix this error?
I solved it by using the sqlcmd utility.
sqlcmd -S "Server\InstanceName" -U "userName" -P "password" -i FilePathForScriptFile
For example:
sqlcmd -S .\SQLEXPRESS -U sa -P 123 -i D:\myScript.sql
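For a large script like this, it may also help to add -b so sqlcmd stops with a non-zero exit code on the first error, and -o so the (potentially huge) output goes to a log file rather than the console; for example:
sqlcmd -S .\SQLEXPRESS -U sa -P 123 -b -i D:\myScript.sql -o D:\myScript.log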
Zey's answer was helpful for me, but for completeness:
If you want to use Windows Authentication, just omit the user and password.
And don't forget the quotes around the path if it contains spaces.
sqlcmd -S .\SQLEXPRESS -i "C:\Users\Stack Overflow\Desktop\script.sql"
If you're logged into the domain with the correct privileges and there's only one instance running, you also do not have to provide the above user/pw/instance command args. I was able to just execute:
sqlcmd -i myfile.sql
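For reference, that short form just relies on sqlcmd's defaults; written out explicitly it would be roughly:
# -E = Windows Authentication (the default when no -U is given)
# -S . = the default instance on the local machine (the default when no -S is given)
sqlcmd -E -S . -i myfile.sql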