Does BaseX support running a BaseX command script (.bxs) with the Jobs Module, or a combination of a query with the Proc Module and the Jobs Module?

This has been a thorn in my side and I'm wondering if I'm missing something simple. I need to run .bxs scripts from the jobs scheduler.
I tried to start a service with a .bxs script file from the Jobs Module, but it does not run: it registers as a service, but the script is never executed.
let $home := Q{org.basex.util.Prop}HOMEDIR()
let $job := $home || 'webapp/sync/update_jira.bxs'
let $job2 := $home || 'webapp/sync/update_commit_data.bxs'
return (
  jobs:eval(xs:anyURI($job), (), map {
    'id': 'update_jira_job', 'start': '14:54:02', 'interval': 'P1D',
    'service': true(), 'log': 'update_jira_job' }),
  jobs:eval(xs:anyURI($job2), (), map {
    'id': 'update_commit_data', 'start': '15:03:02', 'interval': 'P1D',
    'service': true(), 'log': 'update_commit_data' })
)
I also tried running a query that executes the command line to run the scripts. For example, within update_jira.xq there is a line proc:execute('basex update_jira.bxs'), started from an initial query that looks something like this:
let $home := Q{org.basex.util.Prop}HOMEDIR()
let $job := $home || '/srv/webapp/sync/update_jira.xq'
let $job2 := $home || '/src/webapp/sync/update_commit_data.xq'
return (
  jobs:eval(xs:anyURI($job), (), map {
    'id': 'update_jira_job', 'start': '14:54:02', 'interval': 'P1D',
    'service': true(), 'log': 'update_jira_job' }),
  jobs:eval(xs:anyURI($job2), (), map {
    'id': 'update_commit_data', 'start': '15:03:02', 'interval': 'P1D',
    'service': true(), 'log': 'update_commit_data' })
)
When this ran as a service, the database did not update as expected and I got this output in the log:
22:47:02.001 JOB:update_commit_data admin OK 0.30 update_commit_data
22:41:00.000 JOB:update_jira_job admin ERROR 0.00 update_jira_job; Unexpected end of query: '0'.
That is strange, because the query itself -- the one that starts the service with jobs:eval -- actually ran fine the first time I executed it:
16:42:52.257 10.244.144.142:57444 admin 200 221563.70 [GET] /rest?run=sync/update_jira.bxs
16:49:39.862 10.244.144.142:57591 admin 200 101413.21 [GET] /rest?run=sync/update_commit_data.bxs
This is my latest attempt: the query runs initially, but then doesn't seem to execute at the service interval. I added base-uri as the path to the query, and I hope that's the right way to do that.
let $home := Q{org.basex.util.Prop}HOMEDIR()
return jobs:eval(proc:execute('/usr/local/bin/basex', '/srv/basex/webapp/sync/update_jira.bxs'), (),
map { 'id':'update_jira_job', 'interval':'PT5M', 'base-uri': '/srv/basex/webapp/sync/',
'service': true(), 'log': 'update_jira_job'})
When I run this through the database admin tool query window, it runs right away
14:02:54.494 10.244.144.142:54402 admin 200 296095.32 [POST] /dba/query-update
And then after the 5-minute interval, a 0.05 ms log entry shows up when the service kicks off:
14:57:50.564 JOB:update_jira_job admin OK 0.05 update_jira_job

Please note that BaseX command scripts contain plain database commands, whereas the functions of the Jobs Module are tailored to execute XQuery code. If you want to use jobs:eval, the best solution is to rewrite the contents of your command scripts in XQuery.
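For example, a command script along these lines (hypothetical contents, as I don't know what your scripts do):

OPEN jira
ADD TO issues.xml /srv/data/issues.xml
OPTIMIZE

could be rewritten as an XQuery file and scheduled exactly as in your first attempt:

(: hypothetical update_jira.xq, equivalent to the commands above :)
db:add('jira', '/srv/data/issues.xml', 'issues.xml'),
db:optimize('jira')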
If you want to stick with the command scripts, you could indeed try to invoke BaseX via proc:execute, but you should be aware that the two BaseX instances will run independently of each other, which could lead to corrupt databases (see https://docs.basex.org/wiki/Startup#Concurrent_Operations).
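Note also that in your latest attempt, proc:execute is evaluated immediately, when the job is registered, rather than at each interval. For the command script to run repeatedly, the proc:execute call itself must be part of the query that is scheduled, along these lines (a sketch, with the paths from your example):

jobs:eval(
  "proc:execute('/usr/local/bin/basex', '/srv/basex/webapp/sync/update_jira.bxs')",
  (),
  map { 'id': 'update_jira_job', 'interval': 'PT5M', 'service': true(), 'log': 'update_jira_job' }
)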
If the invocation fails…
Cannot run program "basex": CreateProcess error=2, ...
…you may need to address BaseX with the full path:
(: Windows installation :)
proc:execute('c:\Program Files (x86)\BaseX\bin\basex.bat', 'commands.bxs')
(: Linux :)
proc:execute('/path/to/basex', 'commands.bxs')

Related

SCP command not working in karate project - it throws command error: cannot run program scp.exe: CreateProcess error=2 [duplicate]

I'm trying to execute a bash script using Karate. I'm able to execute the script from karate-config.js and also from a .feature file. I'm also able to pass arguments to the script.
The problem is that if the script fails (exits with something other than 0), the test execution continues and finishes as successful.
I found out that when the script echoes something, I can access it as a result of the script, so I could possibly echo the exit value and do an assertion on it (in some re-usable feature), but this seems like a workaround rather than a valid clean solution. Is there some clean way of accessing the exit code without echoing it? Am I missing something?
script
#!/bin/bash
#possible solution
#echo 3
exit 3;
karate-config.js
var result = karate.exec('script.sh arg1')
feature file
* def result = karate.exec('script.sh arg1')
Great timing. We very recently did some work for CLI testing which I am sure you can use effectively. Here is a thread on Twitter: https://twitter.com/maxandersen/status/1276431309276151814
And we have just released version 0.9.6.RC4, and now we have a new karate.fork() option that returns an instance of Command on which you can call exitCode.
Here's an example:
* def proc = karate.fork('script.sh arg1')
* proc.waitSync()
* match proc.exitCode == 0
You can get more ideas here: https://github.com/intuit/karate/issues/1191#issuecomment-650087023
Note that the argument to karate.fork() can take multiple forms. If you are using karate.exec() (which will block until the process completes) the same arguments work.
string - full command line as seen above
string array - e.g. ['script.sh', 'arg1']
json where the keys can be
  line - string (OR)
  args - string array
  env - optional environment properties (as JSON)
  redirectErrorStream - boolean, true by default, which means Sys.err appears in Sys.out
  workingDir - working directory
  useShell - default false, auto-prepend cmd /c or sh -c depending on OS
And since karate.fork() is async, you need to call waitSync() if needed as in the example above.
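For instance, a sketch of the json form, combining some of the keys listed above (the script name and values are placeholders):

* def proc = karate.fork({ args: ['script.sh', 'arg1'], env: { MY_VAR: 'value' }, useShell: true })
* proc.waitSync()
* match proc.exitCode == 3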
Do provide feedback and we can tweak further if needed.
EDIT: here's a very advanced example that shows how to listen to the process output / log, collect the log, and conditionally exit: fork-listener.feature
Another answer which can be a useful reference: Conditional match based on OS
And here's how to use cURL for advanced HTTP tests ! https://stackoverflow.com/a/73230200/143475
In case you need to do a lot of local file manipulation, you can use the karate.toJavaFile() utility so you can convert a relative path or a "prefixed" path to an absolute path.
* def file = karate.toJavaFile('classpath:some/file.txt')
* def path = file.getPath()

How to run one or more Topshelf services from a console application in separate threads

I have a console application that is written to take command line arguments, which are used to determine the number of Windows services needed. The command line for the console application is like this:
consoleapp.exe -server:11 -azure:7
where -server specifies a Windows service and -azure specifies an Azure WebJob. [NOTE: This question only pertains to the Windows service but I wanted to show that the console application can potentially have many arguments.]
In the console application I parse the command line and, if the command matches "-server" then I want to create a Windows service using TopShelf. I can potentially have multiple -server commands on the console app command line, or single -server commands with multiple values, as in:
-server:11,7 or -server:11 -server:7
For each distinct -server/value I am creating a Task that in turn creates and starts a Topshelf service, like so:
TopshelfExitCode retCode = HostFactory.Run(x =>
{
    x.Service&lt;TopshelfWindowsService&gt;(sc =>
    {
        sc.ConstructUsing(name => new TopshelfWindowsService(companyConfig, runnerProgress));
        sc.WhenStarted((s, hostControl) => s.Start(hostControl));
        sc.WhenShutdown(s => s.Shutdown());
        sc.WhenStopped((s, hostControl) => s.Stop(hostControl));
    });
    //
    x.SetServiceName($"CommRunner {companyConfig.CompanyName + companyConfig.CompanyId}");
    x.SetDescription($"Runner for CompanyID ({companyConfig.CompanyId})");
    x.SetDisplayName($"Runner {companyConfig.CompanyId}");
    //
    x.StartAutomaticallyDelayed();
});
My problem is that Topshelf apparently uses the console application's command line arguments during the service configuration and I end up getting an error:
"[Failure] Command Line An unknown command-line option was found: DEFINE: server = 11".
Is it possible to do what I am attempting and still use Topshelf? Is there any way to disable the use of the command line when configuring a service in Topshelf?
I could be wrong, but it sounds like your issue isn't really how to run multiple instances in separate threads, but more how to parse command line arguments of your own with TopShelf in use.
Have a look at the AddCommandLineSwitch functionality to allow you to create and use your own arguments.
x.AddCommandLineSwitch("server", v => server = v);
x.AddCommandLineSwitch("azure", v => azure= v);
x.ApplyCommandLine();
From this the syntax is:
-server:11 -azure:7
See How can I use CommandLine Arguments that is not recognized by TopShelf? for more information.
Remember, these only work during the install phase. To use these parameters when the service starts, have a look at: How to specify command line options for service in TopShelf

Modify datasource IP addresses in WebSphere Application Server

I have nearly a hundred data sources in a WebSphere Application Server (WAS), and due to an office relocation the IP addresses of the database servers have changed, so I need to update the datasource IP addresses in my WAS too.
Considering it error-prone to update a hundred IPs through the admin console, is there any way I can make the change by updating config files or running a script? My version of WAS is 7.0.
You should be able to use the WAS Admin Console's built-in "command assistance" to capture simple code snippets for listing datasources and changing them by just completing those operations in the UI once.
Take those snippets and create a new jython script to list and update all of them.
More info on command assistance:
http://www.ibm.com/developerworks/websphere/library/techarticles/0812_rhodes/0812_rhodes.html
wsadmin scripting library:
https://github.com/wsadminlib/wsadminlib
You can achieve this using wsadmin scripting. Covener has the right idea with using the admin console's command assistance to do the update once manually (to get the code) and then dump that into a script that you can automate.
The basic idea is that a datasource has a set of properties nested under it, one of which is the IP address. So you want to write a script that will query for the datasource, find its nested property set, and iterate over the property set looking for the 'ipAddress' property to update.
Here is a function that will update the value of the "ipAddress" property.
import sys

def updateDataSourceIP(dsName, newIP):
    ds = AdminConfig.getid('/Server:myServer/JDBCProvider:myProvider/DataSource:' + dsName + '/')
    propertySet = AdminConfig.showAttribute(ds, 'propertySet')
    propertyList = AdminConfig.list('J2EEResourceProperty', propertySet).splitlines()
    for prop in propertyList:
        print AdminConfig.showAttribute(prop, 'name')
        if (AdminConfig.showAttribute(prop, 'name') == 'ipAddress'):
            AdminConfig.modify(prop, '[[value ' + newIP + ']]')
    AdminConfig.save()

# Call the function using command line args
updateDataSourceIP(sys.argv[0], sys.argv[1])
To run this script you would invoke the following from the command line:
$WAS_HOME/bin/wsadmin.sh -lang jython -f /path/to/script.py myDataSource 127.0.0.1
Disclaimer: untested script. I don't know the name of the "ipAddress" property off the top of my head, but if you run this once, it will print out all the properties on your datasource, so you can get the right name there.
Some useful links:
Basics about jython scripting
Modifying config objects using wsadmin
As an improvement to aguibert's script, to avoid having to provide all 100 datasource names, and to correct the containment path of the configuration id, consider this script, which will update all datasources regardless of the scope at which they're defined. As always, back up your configuration before beginning, and once you're satisfied the script is working as expected, replace the AdminConfig.reset() with save(). Note that these scripts will likely not work properly if you're using connection URLs in your configuration.
import sys

def updateDataSourceIP(newIP):
    datasources = AdminConfig.getid('/DataSource:/').splitlines()
    for datasource in datasources:
        propertySet = AdminConfig.showAttribute(datasource, 'propertySet')
        propertyList = AdminConfig.list('J2EEResourceProperty', propertySet).splitlines()
        for prop in propertyList:
            if (AdminConfig.showAttribute(prop, 'name') == 'serverName'):
                oldip = AdminConfig.showAttribute(prop, 'value')
                print "Updating serverName attribute of datasource '" + datasource + "' from " + oldip + " to " + newIP
                AdminConfig.modify(prop, '[[value ' + newIP + ']]')
    AdminConfig.reset()

# Call the function using the command line arg
updateDataSourceIP(sys.argv[0])
The script should be invoked similarly to the above, but without the datasource parameter; the only parameter is the new hostname or IP address:
$WAS_HOME/bin/wsadmin.sh -lang jython -f /path/to/script.py 127.0.0.1

Using __END__ and DATA in Chef recipes (to run legacy shell scripts)

I'm migrating some shell scripts to Chef recipes. Some of these scripts are fairly involved, so just to make life easier in the short term and to avoid introducing bugs in rewriting everything in Chef/Ruby, I'd like to just run some of them as-is. They're all well-written and idempotent, so honestly there's no rush, but of course, the eventual goal is to rewrite them.
One cool feature of Ruby is its __END__ keyword/method: Lines below __END__ will not be executed. Those lines will be available via the special filehandle DATA.
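In plain Ruby that looks like this:

puts DATA.read   # prints everything below __END__
__END__
echo "Hello, world"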
It would be cool to ship the shell scripts as-is inside the recipe after __END__, maybe something like the following, which I placed in chef-repo/cookbooks/ruby-data-test/recipes/default.rb:
file = Tempfile.new(File.basename(__FILE__))
file << DATA.read
bash file.path
file.unlink
__END__
echo "Hello, world"
However, when I run this (with chef-solo -c solo.rb --override-runlist 'recipe[ruby-data-test]'), I get the following error:
[2014-10-03T17:14:56+00:00] ERROR: uninitialized constant Chef::Recipe::DATA
I'm pretty new to Chef, but I'm guessing the above is something about Chef wrapping my recipe in a class, and there's something simple preventing me from accessing DATA. Since it's "global" (?), I tried putting a dollar sign ($DATA) in front of it, but that failed with:
NoMethodError
-------------
undefined method `read' for nil:NilClass
So the question is: How do I access DATA in my Chef recipe? Thanks!
It appears you don't have access to DATA, but you can fake it by reading in the current file yourself and splitting on __END__, like Sinatra does.
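A minimal sketch of that idea, reading the recipe file itself:

io = ::File.read(__FILE__)
code, data = io.split(/^__END__$/, 2)
# 'data' now holds everything below __END__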
I ended up making a Chef LWRP for reuse. I don't know if I'll actually end up using this, but I wanted to figure it out. Like I said, I'm a Chef/Ruby noob, so any better ideas or suggestions are welcome!
ruby_data_test/recipes/default.rb:
ruby_data_test_execute_ruby_data __FILE__
__END__
#!/bin/bash
set -o errexit
date
echo "Hello, world"
ruby_data_test/resources/execute_ruby_data.rb:
actions :execute_ruby_data
default_action :execute_ruby_data
attribute :source, :name_attribute => true, :required => true
attribute :args, :kind_of => Array
attribute :ignore_errors, :kind_of => [TrueClass, FalseClass], :default => false
ruby_data_test/providers/execute_ruby_data.rb:
require 'tempfile'
require 'open3'

def whyrun_supported?
  true
end

use_inline_resources

action :execute_ruby_data do
  converge_by("Executing #{@new_resource}") do
    Chef::Log.info("Executing #{@new_resource}")
    file_who_called_me = @new_resource.source
    io = ::IO.respond_to?(:binread) ? ::IO.binread(file_who_called_me) : ::IO.read(file_who_called_me)
    app, data = io.gsub("\r\n", "\n").split(/^__END__$/, 2)
    data.lstrip!
    file = Tempfile.new('execute_ruby_data')
    file << data
    file.chmod(0755)
    file.close
    exit_status = ::Open3.popen2e(file.path, *@new_resource.args) do |stdin, stdout_and_stderr, wait_thr|
      stdout_and_stderr.each { |line| puts line }
      wait_thr.value.exitstatus # exit status
    end
    if exit_status != 0 && !@new_resource.ignore_errors
      raise RuntimeError
    end
  end
end
Here's the output:
$ chef-solo -c solo.rb --override-runlist 'recipe[ruby_data_test]'
Starting Chef Client, version 11.12.4
[2014-10-03T21:50:29+00:00] WARN: Run List override has been provided.
[2014-10-03T21:50:29+00:00] WARN: Original Run List: []
[2014-10-03T21:50:29+00:00] WARN: Overridden Run List: [recipe[ruby_data_test]]
Compiling Cookbooks...
Converging 1 resources
Recipe: ruby_data_test::default
* ruby_data_test_execute_ruby_data[/root/chef/chef-repo/cookbooks/ruby_data_test/recipes/default.rb] action execute_ruby_dataFri Oct 3 21:50:29 UTC 2014
Hello, world
- Executing ruby_data_test_execute_ruby_data[/root/chef/chef-repo/cookbooks/ruby_data_test/recipes/default.rb]
Running handlers:
Running handlers complete
Chef Client finished, 1/1 resources updated in 1.387608 seconds

Capistrano how to access a serverDefinition option in the code

I am defining my server setup like this:
task :test do
  role(:frontend) {[server1, server2, server3, {:user => "frontend-user", :options => {:log_location => "HOW DO I READ THIS??"}}]}
  role(:backend) {...}
  role(:db) {...}
  role(:mq) {...}
end

task :staging do
  role(:frontend) {[server1, server2, server3, {:user => "frontend-user", :options => {:log_location => "HOW DO I READ THIS??"}}]}
  role(:backend) {...}
  role(:db) {...}
  role(:mq) {...}
end

task :prod do
  role(:frontend) {[server1, server2, server3, {:user => "frontend-user", :options => {:log_location => "HOW DO I READ THIS??"}}]}
  role(:backend) {...}
  role(:db) {...}
  role(:mq) {...}
end
This is to embrace all the complexity of a legacy enterprisey system.
Now, from a task, I want to read the log_location.
Task example:
namespace :log do
  desc "list all log files"
  task :list do
    run %(ls -1 #{log_location}/*/*.log)
  end
end
The problem is that the variable log_location is undefined.
/.rvm/gems/ruby-2.0.0-p0/gems/capistrano-2.14.2/lib/capistrano/configuration/namespaces.rb:193:in
`method_missing': undefined local variable or method `log_location' for # (NameError)
How do I access that variable?
Is there a smarter/simpler way of setting this custom variable?
I'm sorry to say you can't read that. The blocks passed to task() aren't executed in a server context, thus the block in effect doesn't know what server it's operating on.
The classical workaround for this over the years has been to upload a config file which looks something like this:
---
hostname1:
  log_file_location: "/var/log/hostname1/foo/bar"
hostname2:
  log_file_location: "/var/log/hostname2/foo/bar"
(or similar) and use the machine's hostname when loading the configuration.
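Reading it back on each machine might look like this (the file location is illustrative):

require 'yaml'
# each machine looks up its own entry by hostname
config = YAML.load_file('/etc/deploy/log_locations.yml')
log_file_location = config[`hostname`.strip]['log_file_location']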
I know this isn't a great workaround, thus in the forthcoming (see the v3 branch at Github) version of Capistrano there's a feature which looks like this:
host1 = SSHKit::Host.new 'user@example.com'
host2 = SSHKit::Host.new 'user@example.org'

host1.properties = {log_file_location: "/foo/bar"}
host2.properties.log_file_location = "/bar/baz"

on hosts do |host|
  target = "/var/www/sites/"
  if host.hostname =~ /org/
    target += "dotorg"
  else
    target += "dotcom"
  end
  execute! :head, '-n 20', host.properties.log_file_location
  execute! :git, :clone, "git@git.#{host.hostname}", target
end
(SSHKit Examples) - SSHKit is the new backend driver for Capistrano.
The v3 branch probably isn't ready for prime time yet; we're having a lot of success internally, but the documentation is pretty, ahem, non-existent. However, the code is quite literally an order of magnitude less imposing, and I think you'll find it quite readable.
You need this: https://github.com/capistrano/capistrano/wiki/2.x-Multistage-Extension
It means that you can isolate stage-specific code in separate files named after the stage. If you want to test for the stage name in the shared deploy.rb, you can do that too, like this:
Put this in your deploy.rb
task :show_stage do
  puts(stage)
end
Test from command line
$ cap staging show_stage
staging
Actually, I was able to pull out the log_location variable, but ended up with a solution that has one restriction: I use the log location for one environment only. This is no problem in my current project, since I run the Capistrano task against one role at a time.
For testing this setup, I made this task:
namespace :support do
  desc "Test if the log location variable is correctly fetched from configuration"
  task :test_log_location do
    find_servers_for_task(current_task).each do |server|
      # puts server.host
      # puts server.port
      # puts server.user
      # puts server.options
      result = "LOG LOCATION: #{server.options[:log_location]}"
      # puts result
      logger.info result
    end
  end
end
Then, for my tasks in the :log namespace, I defined the variable with set :log_location and also defined the :current_role variable:
namespace :log do
  def set_log_location
    # set_log_location
    # puts fetch(:log_location)
    log_location = nil
    options = nil
    find_servers_for_task(current_task).each do |server|
      # puts server.host
      # puts server.port
      # puts server.user
      # puts server.options
      options = server.options
      log_location = server.options[:log_location]
      # log_location = server.options[:current_role]
    end
    msg1 = "FATAL: you need to specify 'ROLES=frontend,backend,mq' (or one of them) from command line"
    msg2 = "FATAL: Could not get log_location from environment/server options. I can only see these options: #{options}"
    raise msg1 if ENV['ROLES'].nil?
    raise msg2 if log_location.nil?
    set :log_location, log_location
    set :current_role, ENV['ROLES'].split(',').first
    logger.info %(CURRENT_ROLE #{fetch(:current_role)})
    logger.info %(THE LOG LOCATION IS: #{fetch(:log_location)})
  end
end
Finally, I used a separate method to fully qualify the log path (needed for my setup -- also in the :log namespace):
def log_location
  log_names = {
    :frontend => "*/play.log",
    :backend => "*Weblogic*/*.{log,out}"
  }
  loc = "#{fetch(:log_location)}/#{log_names[fetch(:current_role).to_sym]}"
  logger.info "using the log location of '#{loc}'"
  loc
end
Now, each task can use the specific log location like this:
desc "list all log files"
task :list do
set_log_location
run %(ls -l #{log_location})
end
I am sure this can be done more elegantly, but it works for me.