Capistrano: conditionally run commands after deploy on remote

I want to remove some folders on the remote once the deploy has completed. I am currently using:
task :set_permissions do
  parallel do |session|
    session.when "in?(:xb_test)", "cat #{deploy_to}test.htaccess >> #{current_path}/.htaccess"
  end
end
Two questions, really: is this the best way to do this, and how can I run this kind of statement on multiple functions without having to write repeat code?
session.when "in?(:xb_test)" ...
session.when "in?(:xb_dev)" ...
session.when "in?(:xb_live)" ...
Any help would be appreciated, as I'm pretty new to Capistrano.

About your first question, "is this the best way to do this?":
I don't think this is the best approach.
"test", "dev" and "live"... it looks like you are deploying to different stages, so I would use the Multistage Extension instead: https://github.com/capistrano/capistrano/wiki/2.x-Multistage-Extension
About your second question, "how can I run this kind of statement on multiple functions without having to write repeat code?":
Capistrano's deploy.rb is just a Ruby file, so you can use a method:
def htaccess_stuff
  "cat #{deploy_to}test.htaccess >> #{current_path}/.htaccess"
end
and then:
task :set_permissions do
  parallel do |session|
    session.when "in?(:xb_test)", htaccess_stuff
  end
end
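Taking that one step further, the three near-identical session.when lines can be generated from a plain Ruby array. A minimal sketch (the stage list is illustrative, and the helper name is made up):

```ruby
# Build the shared shell command once; deploy_to and current_path are the
# values Capistrano already provides in deploy.rb.
def htaccess_command(deploy_to, current_path)
  "cat #{deploy_to}test.htaccess >> #{current_path}/.htaccess"
end

# Inside the task, loop instead of repeating session.when per stage:
#   [:xb_test, :xb_dev, :xb_live].each do |stage|
#     session.when "in?(:#{stage})", htaccess_command(deploy_to, current_path)
#   end
```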

Related

Q/KDB: How do you call system commands in a lambda/function

How do you call system commands in a lambda/function?
E.g. I would like to load a q script through .z.ts
Trying .z.ts:{\l myScript.q} does not work. Thanks in advance!
You can use the system function to run the load command, in the format:
system "l [x]"
For example:
.z.ts:{system "l myscript.q"}
You can find more information about using system commands like this here:
https://kx.com/blog/kdb-q-insights-scripting-with-q/

Phoenix / Elixir testing when setting isolation level of transaction

I have a chunk of code that looks something like this:
Repo.transaction(fn ->
  Repo.query!("set transaction isolation level serializable;")
  # do some queries
end)
In my test suite, I continually run into the error:
(Postgrex.Error) ERROR 25001 (active_sql_transaction): SET TRANSACTION ISOLATION LEVEL must be called before any query
I'm wondering if I'm doing something fundamentally wrong, or if there's something about the test environment that I'm missing.
Thanks!
Not sure if you are still looking for the answer to this, but I found a nice solution. For this case I have a setup block like so:
setup tags do
  :ok =
    if tags[:isolation] do
      Sandbox.checkout(Repo, isolation: tags[:isolation])
    else
      Sandbox.checkout(Repo)
    end

  unless tags[:async] do
    Sandbox.mode(Repo, {:shared, self()})
  end

  :ok
end
then, on the test that is in the path of the serializable transaction, you have to tag it with @tag like so:
@tag isolation: "serializable"
test "my test" do
  ...
end
This will let you run the tests that hit a serializable transaction and still use the sandbox.
The problem is that, for testing purposes, all of the tests are wrapped in a transaction so they can be rolled back and you don't pollute your database with tons of old test data. Depending on how you've written your tests, this can result in failures that should have passed and passes that should have failed.
You can work around it, but it will, again, pollute your test database and you'll have to clean it up yourself:
setup do
  # [Other set up stuff]
  Ecto.Adapters.SQL.Sandbox.checkin(MyApp.Repo)  # Effectively closes any open transaction.
  Ecto.Adapters.SQL.Sandbox.checkout(MyApp.Repo, sandbox: false)  # Checks out a connection without wrapping it in a transaction.
end
This setup block goes in the test file with your failing tests, if you don't already have a setup. If you don't make the checkin call, you'll (most likely) get an error about other queries running before the one that sets the transaction level, because you are inserting something before the test.
See here for someone essentially calling out the same issue.

Create Task group is disabled for Powershell tasks

I'm unable to create a task group for PowerShell tasks in VSTS. It works for all of the other tasks except PowerShell.
Please advise!
Thanks,
I was able to achieve it. Please ignore.
I had to make sure that the inline script has no errors and that my release definition is saved.

How do we examine a particular job in GTM?

Just as we have D ^JOBEXAM in InterSystems Caché to examine the jobs running in the background or scheduled,
how can we do the same in GT.M?
Do we have any command for this? Please advise.
The answer is $zinterrupt, and what triggers it: mupip intrpt. By default it dumps a file in your GT.M start-up directory containing the process state via ZSHOW "*"; however, you can make $zinterrupt do anything you want.
$ZINT documentation:
http://tinco.pair.com/bhaskar/gtm/doc/books/pg/UNIX_manual/ch08s35.html
A complex example of using $ZINT:
https://github.com/shabiel/random-vista-utilities/blob/master/ZSY.m
--Sam
Late answer here. In addition to what Sam has said, there is a code set, ^ZJOB, that is used in the VistA world. I could get you copies of this if you wanted.

Is there an equivalent in Net::SSH::Expect 1.09 for "collect_exit_code"

I came across this link: Exit status code for Expect script called from Bash, but it did not help me. I'm looking to get the exit status code from a command run remotely. The CPAN documentation for Net::SSH::Expect 0.08 has "collect_exit_code" and "last_exit_code" methods, which is exactly what I'd like to use today; however, I'm unable to find a suitable replacement when running 1.09.
I'd like to keep it simple, such as:
$ssh_devel_exp->collect_exit_code(1);
$ssh_devel_exp->send("sudo make");
if ($ssh_devel_exp->last_exit_code()) { etc. and so forth... };
But I cannot think of a simple way to get the exit status when running a command through Net::SSH::Expect without methods similar to these.
I do not believe switching to Fabric is the answer for this issue; this is a Perl application and I need to stick with Perl.
Thanks in advance.
Did you try calling the underlying Expect object?
$ssh_devel_exp->{expect}->collect_exit_code(1);
$ssh_devel_exp->send("sudo make");
if ($ssh_devel_exp->{expect}->last_exit_code()) { etc. and so forth... };
If nothing else helps, you could create a small shell script that executes your commands and reports the exit status back on stderr.
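As a sketch of that fallback (none of this is Net::SSH::Expect API; the function name and sentinel are made up), the usual trick is to echo $? right after the command in the same shell, then parse it out of the captured output:

```shell
# Run a command in a shell and recover its exit status by printing a
# sentinel after it. The same text-parsing trick works over an SSH
# session driven by Expect, where only the output stream is visible.
# Caveat: the sentinel must not appear in the command's own output.
run_with_status() {
  out=$(sh -c "$1; echo EXIT_STATUS:\$?")
  # Strip everything up to and including the sentinel, leaving the code.
  echo "${out##*EXIT_STATUS:}"
}
```

In the Perl session the equivalent would be to send "some_command; echo EXIT_STATUS:$?" and match the sentinel in the returned output.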