How to create a task under a batch job via ADF pipeline

I've created one batch account (demo-batch-account) and one batch job (TEST_JOB), and created the relevant application pool and package; all are mapped.
Consider,
Application pool name: DemoTestPool
Application package name: TestTool.zip
The command to be run is: cmd /c %AZ_BATCH_APP_PACKAGE_TestTool#1%\TestTool.exe
Now I need to run this command by adding a new task under TEST_JOB via an ADF pipeline.
Kindly suggest how to achieve this.

Can you please specify which pool you are talking about? Is it a SQL pool?
I'm not sure I understand your question. Do you want to create a dynamic task and run the package in this task, or create a task and add a command to run the package? (The second one makes more sense to me.)
Unfortunately, there is no built-in activity to run zip files yet, but you can do it using a Custom Activity.
If you want to run your package using an ADF pipeline, you need to do three things:
1) Create a pipeline: in ADF, on the left side open the Author hub -> Pipelines -> click on New pipeline -> in the search tab type "custom activity" and drag it onto the board.
2) Click on the Custom activity; under the Settings tab, write the command that you need, such as: "cmd /c %AZ_BATCH_APP_PACKAGE_TestTool#1%\TestTool.exe"
3) Click on Validate + Debug and make sure you have configured the custom activity correctly.
Please check this: https://learn.microsoft.com/en-us/azure/data-factory/transform-data-using-custom-activity
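In JSON form, the pipeline would look roughly like this minimal sketch (assuming an Azure Batch linked service named AzureBatchLinkedService that points at your account and DemoTestPool; the pipeline and activity names are illustrative):
{
    "name": "RunTestToolPipeline",
    "properties": {
        "activities": [
            {
                "name": "RunTestTool",
                "type": "Custom",
                "linkedServiceName": {
                    "referenceName": "AzureBatchLinkedService",
                    "type": "LinkedServiceReference"
                },
                "typeProperties": {
                    "command": "cmd /c %AZ_BATCH_APP_PACKAGE_TestTool#1%\\TestTool.exe"
                }
            }
        ]
    }
}
Note that, as far as I know, ADF runs custom activities through Batch jobs it creates itself on the pool, so the task may not appear directly under TEST_JOB.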

Related

uDeploy Create Windows Service using Service Control Manager plugin

I am trying to create a new service using the Create Service step in the Service Control Manager plugin in uDeploy, but my step is failing while executing. This is what I see in the output window:
sc.exe create 'MyServiceName' '/binPath=MyServicePath\n/start=auto\n/'
The link https://developer.ibm.com/urbancode/plugindoc/ibmucd/microsoft-windows-services/1-2/steps/#create_service states that I have to pass the arguments as a newline-separated list, but as you can see my arguments are passed as one big single-quoted string, and I am not sure how to address this. Any help is appreciated.
I was able to resolve this by passing the arguments like below in the argument box for the Create Service step in the uDeploy UI:
binPath= [Press Enter]
MyServicePath [Press Enter]
start= [Press Enter]
auto [Press Enter]
Make sure there are no extra spaces at the end of each line.
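With the arguments split onto separate lines, the step should end up executing something like the following (a sketch; the plugin handles the actual quoting):
sc.exe create MyServiceName binPath= MyServicePath start= auto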

Not Able to Publish ADF Incremental Package

As posted earlier in a thread about syncing data from on-premises MySQL to Azure SQL (referring to this article), I found that the lookup component for watermark detection is only available for SQL Server.
So I tried a workaround: while using the "Copy" data flow task, pick the data greater than the last stored watermark from MySQL.
Issue:
I am able to validate the package successfully but not able to publish it.
Question:
In the Copy data flow task I'm using the query below to get data from MySQL greater than the available watermark.
Can't we use a query like the one below on other relational sources like MySQL?
select * from @{item().TABLE_NAME} where @{item().WaterMark_Column} > '@{activity('LookupOldWaterMark').output.firstRow.WatermarkValue}'
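For illustration, with TABLE_NAME = orders, WaterMark_Column = updated_at and a stored watermark of '2018-01-01 00:00:00' (hypothetical values), the expression would resolve to:
select * from orders where updated_at > '2018-01-01 00:00:00'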
(Screenshots: CopyTask SQL query preview; validation succeeds; publish fails with an error with no details; debug succeeds.)
Errors after following the steps mentioned by Franky:
Azure SQL linked service error (resolved by reconfiguring the connection / editing the credentials in the connection tab)
Source query went blank (resolved by re-selecting the source type and rewriting the query)
Could you verify whether you have access to create a template deployment in the Azure portal?
1) Export the ARM template: in the top-right of the ADFv2 portal, click on ARM Template -> Export ARM Template, extract the zip file and copy the content of the "arm_template.json" file.
2) Create the ARM template deployment: go to https://portal.azure.com/#create/Microsoft.Template and log in with the same credentials you use in the ADFv2 portal (you can also get to this page in the Azure portal by clicking "Create a resource" and searching for "Template deployment"). Now click on "Build your own template in editor", paste the ARM template from the previous step into the editor and Save.
3) Deploy the template: click on existing resource group and select the same resource group as the one where your Data Factory is. Fill out the missing parameters (for this test it doesn't really matter if the values are valid); the factory name should already be there. Agree to the terms and click Purchase.
4) Verify that the deployment succeeded. If not, let me know the error; it might be an access issue, which would explain why your publish fails. (The ADF team is working on providing a better error for this issue.)
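If you prefer the command line, the same deployment test can be done with the Azure CLI (a sketch; the resource group name is a placeholder):
az deployment group create --resource-group <your-resource-group> --template-file arm_template.json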
Did any of the objects publish into your Data Factory?

How to use variables in the Application Lab interface to Workload Scheduler on Bluemix

I'm trying to use the Application Lab application on Bluemix's Workload Scheduler service but can't find any documentation on how to use it. Specifically, I need to run a RESTful URL once a day, with a timestamp of the previous run embedded in the URL.
Creating variables in Application Lab seems pretty straightforward, but I can't figure out how to set or use them. Can someone point me in the right direction?
Updating the Application Lab variables via APIs is not yet available (it will be soon).
There is another way of doing it that I can share with you.
Follow these steps:
Enable the cloud agent to run scripts
You should open a ticket for that. By default, the cloud agent is not enabled to run scripts, but you can open a ticket to change this.
Get the user credentials
Open the service and take note of the user credentials. Click "Add credentials" if they are not present.
Open the Workload Designer
The Workload Designer is a more powerful UI than the Application Lab and enables complex scenarios.
To open the Workload Designer, open the Application Lab, right-click on a process and select "Launch Workload Designer".
Define the restful job
Create a new restful job:
Then set the name and the workstations and check the flag "Variable resolution at runtime".
Then click on the "Action" pane and set the URI of the service. Add the ${TIMESTAMP} variable in the URI. For testing purposes I used this URI:
http://echo.jsontest.com/title/ipsum/content/${TIMESTAMP}
Save this definition (use the floppy disk icon).
Define the job that updates the variable
Create an executable job:
Set the name, the workstation and the "Variable resolution at runtime" flag
Then open the "Task pane" and add the following in the script field:
#!/bin/sh -x
#Set the following 3 variables from your credentials
export USERNAME="xxxxxx@bluemix.net"
export PASSWORD='xxxxxx'
export HOSTNAME=xxxxx.wa.ibmserviceengage.com
#Replace "CC" with the letters of your tenant
export VT=CCTIMESTAMP
#MAIN STARTS HERE
export TIMESTAMP=`date +%s`
. /home/wauser/TWA/TWS/tws_env.sh
echo "VARTABLE $VT MEMBERS TIMESTAMP \"$TIMESTAMP\" END" > /home/wauser/vt.txt
composer -host $HOSTNAME -protocol https -username "$USERNAME" -password "$PASSWORD" replace /home/wauser/vt.txt
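For illustration, with VT=CCTIMESTAMP the script writes a single line like this to /home/wauser/vt.txt (the timestamp value is just an example):
VARTABLE CCTIMESTAMP MEMBERS TIMESTAMP "1466172345" END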
Save the job.
Submit the job by clicking "Select Action" -> "Submit Job into current plan".
Define a job stream
Click "New" -> "Job Stream"
Define the name, the workstation and the variable table (replace CC with your two-letter tenant id).
Then right-click on the job stream and select "Add Jobs":
Add the "TEST" and "UPDATETIMESTAMP" jobs (or the names you used).
Right click on "UPDATETIMESTAMP" and select "Add dependencies" -> "Job in the same job stream" and then select "TEST".
Right click on "Run Cycles" and select "Add Run Cycle"
Open the "Rule" pane and select "Daily"
Open the "Time Restrictions" and set the time when the job stream has to start and check the flag "use as time dependency".
Save the job stream and you're done!

Laravel 4 + Iron: How to register a queue?

I have set up a free account and created a first project: queue_test.
I have followed this tutorial: http://vimeo.com/64703617 by Taylor Otwell and created a simple app that uses queues.
I put that app on the server.
The point is: how can I register the push queue?
How can I run this command:
php artisan queue:subscribe queue_test http://mydomain.com/queue/push
if normally I use PowerShell on localhost?
How can I run that command on the server?
Can someone clarify this point for me?
You can do it manually in https://hud.iron.io/dashboard:
Open IronMQ (the MQ button) for your project.
Click the Queues tab.
Click on your queue name.
In the Subscribers widget, add your URL: http://mydomain.com/queue/push
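Alternatively, if you have SSH access to the box, you could run the artisan command on the server directly, e.g. (a sketch; the app path is hypothetical):
ssh user@mydomain.com
cd /var/www/queue_test    # adjust to wherever your Laravel app lives
php artisan queue:subscribe queue_test http://mydomain.com/queue/push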

Trigger another configuration and send the current build status with Jenkins

In a certain Jenkins config, I wish to trigger another configuration as a post-build action.
I want to pass, as one of the parameters, the current build status.
I.e., a string/int that represents the status (SUCCESS/FAIL/UNSTABLE).
I have 2 options to create post-build triggers:
Using the Join plugin
Using the Trigger Parameterized Build in post-build actions
I wish there were some kind of accessible env variable at the end of the run...
Any ideas?
Thanks!
Here is a simple solution that will cover most cases:
Use 'Trigger Parameterized Build' plugin, and set two triggers -
'Stable or unstable but not fail'
'Fail'
Each of those triggers should run the same job - let's call it 'JOB_B'.
For each of the triggers, pass whatever parameters you like, and also pass a user-defined value:
for trigger '1' use: JOB_A_STATUS=SUCCESS
for trigger '2' use: JOB_A_STATUS=FAIL
Now, all you need to do is test the value of ${JOB_A_STATUS} from JOB_B, to see if it is set to 'SUCCESS' or 'FAIL'.
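For example, a shell build step in JOB_B could branch on it like this (a minimal sketch):
if [ "$JOB_A_STATUS" = "SUCCESS" ]; then
    echo "Upstream JOB_A succeeded"
else
    echo "Upstream JOB_A failed"
fi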
Note this solution does not distinguish between 'stable' and 'unstable', but only knows the difference between 'fail' and 'success'.
Good luck!
You can check the status using a Groovy script in a post-build step via the Groovy Postbuild plugin, which can access Jenkins internals via the Jenkins Java API. The plugin provides the script with the manager variable, which can be used to access important parts of the API (see the Usage section in the plugin documentation).
For example, this is how you can output build result to build console:
def result = manager.build.result
manager.listener.logger.println "And the result is: ${result}"
Now, you can instead use that value to create a properties file and pass that file to Parameterized Trigger post-build step (it has such an option).
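A rough Groovy sketch of that idea (the properties file name is hypothetical; workspace.child and write are, as far as I know, standard Jenkins FilePath API calls):
// Groovy Postbuild script: write the build result into a properties
// file in the workspace for the Parameterized Trigger step to consume.
def result = manager.build.result
manager.build.workspace.child("jobA_status.properties").write("JOB_A_STATUS=${result}\n", "UTF-8")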
One caveat: I am not sure if it is possible to arrange post-build steps to execute in a particular order to ensure that Groovy post-build step runs before the Parameterized Trigger step.