Rundeck - Get the username who executed the job in the script

I would like to know if there is a way to get the username of the user who executed a job and use it inside the job that was executed.
If my question isn't clear, here is an example:
A user 'bobby' connects to Rundeck and executes the job 'lets_go_to_the_moon'.
In the job 'lets_go_to_the_moon' I want to get 'bobby' in a variable and use it, for example, to store this information in a database.
PS: I know that this information can be retrieved from the database dedicated to Rundeck, but is there a native variable dedicated to getting this kind of information? And if yes, how?
Thank you guys,
Happy day !

You can use the job.username context variable.
Command step format: ${job.username}
Inline-script format: @job.username@
"External" script format: $RD_JOB_USERNAME
If you are dispatching an "external" script to a remote node, take a look at this.
Here you can see all context variables available.
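For example, a minimal script step could record the executing user in an audit table. This is only a sketch: the jobs_audit table and the psql client on the node are assumptions, not something Rundeck provides.

#!/bin/bash
# Hypothetical audit step: RD_JOB_USERNAME and RD_JOB_NAME are the
# environment-variable forms of the context variables mentioned above.
echo "Job started by: $RD_JOB_USERNAME"
psql -c "INSERT INTO jobs_audit (job_name, run_by) VALUES ('$RD_JOB_NAME', '$RD_JOB_USERNAME');"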

Related

Overriding a JMeter property in a Run Taurus task in an Azure pipeline is not working

I am running JMeter from Taurus and I need an output kpi.jtl file with a URL listing.
I have tried passing the parameters -o modules.jmeter.properties.save.saveservice.url='true' and
-o modules.jmeter.properties="{'jmeter.save.saveservice.url':'true'}". The pipeline runs successfully but the kpi.jtl doesn't have the URL. Please help.
I have tried a few more options, like editing jmeter.properties via the pipeline (which broke the pipeline by expecting input from the user) and
user.properties, which was ineffective.
I am expecting a kpi.jtl file with all the possible logs, especially the URL.
I believe you're using the wrong property; you should pass this one instead:
modules.jmeter.csv-jtl-flags.url=true
More information: CSV file content configuration
However, be aware that having a .jtl file "with all possible logs" is something of a performance anti-pattern, as it creates massive disk IO and may ruin your test. More information: 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure
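As a sketch, the flag can be passed either as a command-line override or in the Taurus YAML config (the test file name here is just a placeholder):

bzt my_test.yml -o modules.jmeter.csv-jtl-flags.url=true

or, equivalently, in the config file:

modules:
  jmeter:
    csv-jtl-flags:
      url: true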

Making a determination whether a DB/server is down before I kick off a pipeline

I want to check whether the database/server is online before I kick off a pipeline. If the database is down I want to cancel the pipeline processing. I would also like to log the results in a table.
Format (columns): DBName Status Date
If the DB/server is down then I want to send an email to the concerned team with a formatted table showing which DB/servers are down.
Approach:
Run a query on each of the servers. If there is a result, then format the output as shown above. I am using an ADF pipeline to achieve this. My issue is how to combine the various outputs from the different servers.
For example:
Server1:
DBName: A Status: ONLINE runDate:xx/xx/xxxx
Server2:
DBName: B Status: ONLINE runDate:xx/xx/xxxx
I would like to combine them as follows:
Server  DBName  Status  runDate
1       A       ONLINE  xx/xx/xxxx
2       B       ONLINE  xx/xx/xxxx
I would use this to update the logging table as well as in the email, if I were to send one out.
Is this possible using the Pipeline activities or do I have to use mapping dataflows?
I did similar work a few weeks ago. We made an API where we put all the server-related settings or URL endpoints that we need to ping.
You don't need to store the username/password (of SQL Server) at all. When you ping the SQL Server, the connection will time out if it isn't online. If it is online, it will give you a password-related error instead. This way you can easily figure out whether it's up and running.
AFAIK, if you are using Azure DevOps you can use your service account to log into the SQL Server. If you have set up AD to log into DevOps, this can be done in the build script.
Either way you will be able to tell whether SQL Server is up and running or not.
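To illustrate the timeout-versus-login-error idea, here is a rough sketch in Python assuming pyodbc and a placeholder probe login; it is not tied to ADF:

import pyodbc

def server_is_up(server: str) -> bool:
    # Deliberate connection attempt with dummy credentials:
    # - login failure (SQLSTATE 28000) => the server answered, so it is up
    # - timeout / network error        => treat the server as down
    conn_str = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        f"SERVER={server};UID=probe_user;PWD=probe_pwd"
    )
    try:
        pyodbc.connect(conn_str, timeout=5).close()
        return True
    except pyodbc.Error as exc:
        sqlstate = exc.args[0] if exc.args else ""
        return sqlstate == "28000"  # login failed -> reachable; anything else -> down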
You can have all the actions as tasks in a YAML pipeline.
You need something like the below:
steps:
  - task: Check database status
    register: result
  - task: Add results to a file
    shell: "echo text >> filename"
  - task: Send e-mail
    when: some condition is met
There are several modules to achieve what you need; you just have to find the right ones. You can play around with the flow of tasks by registering results and using the when clause.
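If the pipeline in question is an Azure DevOps YAML pipeline, the same flow could be sketched roughly as below; the probe and mail scripts are placeholders, not existing tasks:

steps:
  - script: |
      # hypothetical probe; publishes its result as a pipeline variable
      status=$(./check_db.sh)
      echo "##vso[task.setvariable variable=dbStatus]$status"
    displayName: Check database status
  - script: echo "db status is $(dbStatus)" >> results.txt
    displayName: Add results to a file
  - script: ./send_alert_mail.sh
    displayName: Send e-mail
    condition: eq(variables['dbStatus'], 'down')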

How to pass binary data to Mojolicious Minion?

I am using Minion (docs), a great tool for long-running tasks.
For a queued task I can provide a path to a file.
This works fine if the minions are working on the same host machine.
But how do I create a task and pass binary data if the minions are running on different hosts?
The best approach for this should be:
store the file in a special table in the database
fetch the id of this record
pass this id as a parameter to the Minion task instead of the file path
In the example above it would look like: --allowed=12345
Then the task sub can connect to the database and fetch the content of your file by the provided id (see the sketch below).
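A rough sketch of that flow; the uploads table, its content column, and the Mojo::Pg helper are assumptions, not part of the question:

# Store the binary content in a hypothetical "uploads" table and enqueue only the id.
my $id = $app->pg->db->insert(
  'uploads', {content => $binary_data}, {returning => 'id'}
)->hash->{id};
$app->minion->enqueue(process_upload => [$id]);

# The task fetches the content back by id, so it works from any worker host.
$app->minion->add_task(process_upload => sub {
  my ($job, $upload_id) = @_;
  my $content = $job->app->pg->db
    ->select('uploads', ['content'], {id => $upload_id})->hash->{content};
  # ... process $content here ...
});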

For NetSuite Map/Reduce script - Why is the map stage failing when called from a Restlet?

In NetSuite, I have a Restlet script that calls a deployed map/reduce script, but the map stage shows as Failed when looking at the details of the status page (the getInputData stage does run and shows as Complete).
However, if I do a "Save and Execute" from the deployment of the map/reduce script, it works fine (the Map stage does run).
Note that:
There is no error in the execution log of either the restlet or the map/reduce scripts.
I have 'N/task' in the define section of the restlet as well as task in the function parameters.
The map/reduce script has the Map checkbox checked. The map/reduce script deployment is not scheduled and has default values for the other fields.
I am using the NetSuite help "See the quick brown fox jump" sample map/reduce script.
I am using a Sandbox account.
I am using a custom role to post to the restlet.
Below is the task.create code snippet from my Restlet. I don't know what is wrong; any help is appreciated.
// Create a task that points at the deployed map/reduce script
var mrTask = task.create({
    taskType: task.TaskType.MAP_REDUCE,
    scriptId: 'customscript_test',
    deploymentId: 'customdeploy_test'
});
// Submit the task, then check and log its status
var mrTaskId = mrTask.submit();
var taskStatus = task.checkStatus(mrTaskId);
log.debug('taskStatus', taskStatus);
You also need the Documents and Files - View permission, along with the SuiteScript - View and SuiteScript Scheduling - Full permissions, to access the Map/Reduce script.
The help for MapReduceScriptTask.submit() does not mention this, but the help for ScheduledScriptTask.submit() does:
Only administrators can run scheduled scripts. If a user event script calls ScheduledScriptTask.submit(), the user event script has to be deployed with admin permissions.
I did a test of posting to my restlet using Administrator role credentials and it works fine, as opposed to using my other custom role. Maybe, just like ScheduledScriptTask, MapReduceScriptTask can only be called by the Administrator role? My custom role does have the SuiteScript - View and SuiteScript Scheduling - Full permissions. I thought that would do the trick, but apparently not.
Does anyone have any further insight on this?

Configuring spring-xd to use Oracle as the job repository

I want to run Spring XD with Oracle (11g), which I already have in my environment. Currently my first concern is the jobs UI (my database has existing data of job executions that were performed by Spring Batch, and I simply want to display the details of those executions).
I'm using spring-xd-1.0.0.M5. I followed the instructions in the reference guide and changed application.yml to have the following:
spring:
  datasource:
    url: jdbc:oracle:oci:MY_USERNAME/MYPWD#//orarmydomain.com:1521/myservice
    username: MY_USERNAME
    password: MYPWD
    driverClassName: oracle.jdbc.OracleDriver
  profiles:
    active: default,oracle
I also modified batch-jdbc.properties to have a database configuration similar to the above.
Yet, when I start xd-singlenode.bat (or xd-admin.bat), it seems to ignore my Oracle configuration and still uses the default hsqldb.
What am I doing wrong?
Thanks
The likely reason is that we did not upgrade the Windows .bat scripts to take advantage of the property overriding via xd-config.yml. If you go into the Unix script for xd-singlenode you will see that, when java is invoked, there is an option
-Dspring.config.location=$XD_CONFIG
For now you can hardcode the location of that file, using file: as the prefix.
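For example (the install path below is only a placeholder for wherever your xd-config.yml actually lives):

-Dspring.config.location=file:/opt/spring-xd/xd/config/xd-config.yml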
Also, the UI right now is very primitive; you will not be able to see many details about the job execution. There are, however, many job-related commands you can execute in the shell, and there is only one gap regarding step execution information as compared to what is available via spring-batch-admin.
The issue to watch for this is https://jira.springsource.org/browse/XD-1209 and it is scheduled for the next milestone release.
Let me know how it goes, thanks!
Cheers,
Mark