Talend TAC export of list of tasks

I would like to take an export of the list of jobs that are created as tasks under Job Conductor in TAC, along with their configurations. Is this possible? If yes, please advise. I am using the Enterprise edition of Talend v5.6.

There is an API to query the TAC, which can be invoked through MetaServletCaller. Using this API, you can send a command to get the list of tasks deployed in Job Conductor.
The API can be called from a URL in a browser, or by calling the MetaServletCaller.sh (or .bat) script from your Talend installation.
The command to get this list is listTasks.
Here is a tutorial on how to do that: http://edwardost.github.io/talend/di/2015/05/28/Using-the-TAC-API/
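As a rough sketch, the same listTasks call can be issued from PowerShell by base64-encoding the JSON command and appending it to the metaServlet URL (the TAC host, web application name, and credentials below are placeholders; check the tutorial above or the MetaServletCaller help for the exact parameters):

# Hypothetical TAC URL and credentials - adjust to your installation
$command = @{ actionName = "listTasks"; authUser = "admin@company.com"; authPass = "admin" } | ConvertTo-Json -Compress
$encoded = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($command))
# The MetaServlet accepts the base64-encoded JSON command directly as the query string
Invoke-RestMethod -Uri "http://tac-host:8080/org.talend.administrator/metaServlet?$encoded"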

All TAC configuration is stored in database tables. Just connect to this database (you can get its name through the TAC, in the Configuration > Database menu) and have a look at the "executiontask" table; it contains all jobs deployed in Job Conductor, along with their context, job version, etc.
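For example, if the TAC repository happens to be on SQL Server, a quick look at that table from PowerShell could be (server and database names are placeholders; use your usual client if TAC runs on MySQL, H2, etc.):

Invoke-Sqlcmd -ServerInstance "tac-db-server" -Database "talend_administrator" -Query "SELECT * FROM executiontask"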

Related

Can I re-run a Power Automate flow instance from history?

Is there any way to find and re-run an earlier instance of a Power Automate workflow programmatically?
I can do this manually: download the .csv file containing the instances, search the Trigger output column for the one I want, get the id, copy-paste the run URL, and click resubmit.
I tried with Power Automate itself:
The built-in Flow Management connector only supports finding a specific flow by name, and does not expose the run history at all.
PowerShell:
After installing the PowerApps module, I can list the instances with
Get-FlowRun -FlowName {flow name}
But I don't see the same properties as in the exported .csv file, and there's also no Run-Flow command that would let me run it.
So, I am a little stuck here; could someone please help me out?
We cannot programmatically resubmit a Flow run from the history with PowerShell or any other API method yet.
But we can avoid some of the manual work by using the workflow() function in a Compose step to automate the composition of the Flow run history URL, which looks like this:
https://xxx.flow.microsoft.com/manage/environments/07aa1562-fea6-4583-8d76-9a8e67cbf298/flows/141e89fb-af2d-47ac-be25-f9176e64e9a0/runs/08586722084717816659969428791CU12?backUrl=%2Fflows%2F141e89fb-af2d-47ac-be25-f9176e64e9a0%2Fdetails&runStatus=Failed
There are 3 GUIDs that I need to find so that I can build up the flow history URL.
The first GUID is my environmentName (07aa1562-fea6-4583-8d76-9a8e67cbf298), then I've got the flow name (141e89fb-af2d-47ac-be25-f9176e64e9a0), and finally the run (08586722084717816659969428791CU12).
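Inside the flow itself, the same three values can be read in a Compose action from the workflow() function (property names as exposed by workflow(); the region prefix of the URL may differ per tenant), for example:

concat('https://flow.microsoft.com/manage/environments/', workflow()?['tags']?['environmentName'], '/flows/', workflow()?['name'], '/runs/', workflow()?['run']?['name'])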
There is a command in the Microsoft 365 CLI to resubmit a flow run:
m365 flow run resubmit --environment flowEnvironmentID --flow flowGUID --name flowRunID --confirm
You can also resubmit a flow run using the Power Automate REST API:
https://api.flow.microsoft.com/providers/Microsoft.ProcessSimple/environments/{FlowEnvironment}/flows/{FlowGUID}/triggers/manual/histories/{FlowRunID}/resubmit?api-version=2016-11-01
For the Power Automate REST API, you will have to pass an authorization token.
For more information, go through the following post
https://ashiqf.com/2021/05/09/resubmit-your-failed-power-automate-flow-runs-automatically-using-m365-cli-and-rest-api/
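A minimal PowerShell sketch of that REST call, using the identifiers from the example above (obtaining the bearer token is not shown and depends on your setup):

$environment = "07aa1562-fea6-4583-8d76-9a8e67cbf298"
$flow        = "141e89fb-af2d-47ac-be25-f9176e64e9a0"
$run         = "08586722084717816659969428791CU12"
$token       = "<bearer token for the Flow service>"   # placeholder

$uri = "https://api.flow.microsoft.com/providers/Microsoft.ProcessSimple/environments/$environment/flows/$flow/triggers/manual/histories/$run/resubmit?api-version=2016-11-01"
Invoke-RestMethod -Method Post -Uri $uri -Headers @{ Authorization = "Bearer $token" }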

Run Databricks notebook jobs via API in a shared context

According to the REST documentation for Databricks, you can submit a notebook task as a job to a cluster using the 2.0 API, or you can submit a command or Python script using the 1.2 API.
The 1.2 API allows you to create a context, and all subsequent commands or scripts can then be submitted against this context. This allows you to maintain state (dataframes, variables, etc.), which is much more akin to running notebooks interactively in the browser.
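For reference, this is roughly what the 1.2 API flow looks like over REST, sketched in PowerShell (the workspace URL, token, and cluster ID are placeholders):

$headers = @{ Authorization = "Bearer <personal-access-token>" }
$base    = "https://<databricks-instance>/api/1.2"

# 1. Create an execution context on the cluster
$ctx = Invoke-RestMethod -Method Post -Uri "$base/contexts/create" -Headers $headers -ContentType "application/json" `
       -Body (@{ language = "python"; clusterId = "<cluster-id>" } | ConvertTo-Json)

# 2. Run a command in that context; state such as a dataframe survives between calls
$cmd = Invoke-RestMethod -Method Post -Uri "$base/commands/execute" -Headers $headers -ContentType "application/json" `
       -Body (@{ language = "python"; clusterId = "<cluster-id>"; contextId = $ctx.id; command = "df = spark.range(10)" } | ConvertTo-Json)

# 3. Poll the command status / result
Invoke-RestMethod -Method Get -Headers $headers `
    -Uri "$base/commands/status?clusterId=<cluster-id>&contextId=$($ctx.id)&commandId=$($cmd.id)"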
What I want is to be able to submit my notebooks into the same context and get the same behaviour as with the 1.2 API, but this does not seem possible. Is there a reason for that, or am I missing something if it can be done?
My use case is that I want to be able to re-run a notebook from the API and have it remember its last state (in the most basic example, just knowing it has already loaded a dataframe), but more generally to have subsequent jobs only run what has changed since the last run.
As far as I can tell, failing the ability to do this via the 2.0 API, I have 2 options:
Convert my notebook to Python script and have a bootstrap script on client side that invokes an entry point using the 1.2 API within the same context
Create temp tables at checkpoints in my notebook and possibly maintain a special variables dataframe of state variables
Both of these seem unnecessarily complex; any other ideas?

Run Powershell script every hour on Azure

I have found this great script which backs up a SQL Azure database to blob storage.
I want to run many different variations of this script - e.g. DB1 goes to Customer1Blob, DB2 goes to Customer2Blob.
I have looked at Scheduler Job Collections. However, I can only see options (Action settings) for HTTP(S), Storage Queue, or Service Bus.
Is it possible to run a specific .ps1 script (with commands) scheduled?
You can definitely run a PowerShell script as a WebJob. If you want to run the script on a schedule, you can add a settings.job file containing a cron expression alongside your WebJob. The docs for doing so are here.
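For example, to fire the WebJob at the top of every hour, a settings.job file placed next to the script could contain the following (six-field CRON expression, seconds first; adjust to your needs):

{ "schedule": "0 0 * * * *" }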
For this type of automation task, I prefer to use the Azure Automation service. You can create runbooks using PowerShell and then schedule them with the scheduler built into Azure Automation. You can have them run on Azure so that you do not need to provision compute of your own (you pay by the minute the job runs), or you can configure them to run on a hybrid worker.
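As a rough sketch of the scheduling part with the current Az.Automation cmdlets (resource group, account, and runbook names are placeholders; the runbook itself would wrap the backup script):

New-AzAutomationSchedule -ResourceGroupName "rg-backups" -AutomationAccountName "my-automation" `
    -Name "Hourly" -StartTime (Get-Date).AddMinutes(10) -HourInterval 1
Register-AzAutomationScheduledRunbook -ResourceGroupName "rg-backups" -AutomationAccountName "my-automation" `
    -RunbookName "Backup-SqlAzureToBlob" -ScheduleName "Hourly"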
For more information, please see the documentation
When exporting from SQL DB or from SQL Server, make sure you are exporting from a quiescent database. Exporting from a database with active transactions can result in data integrity issues - data being added to various tables while they are also being exported.

Create QlikView task through command line

I want to be able to create a task for qvw files with a command (cmd, powershell, etc), just as you would through the QlikView Management Console. We would like to be able to automate some of the task creation remotely which would require this functionality.
I know that there are arguments that can be passed through the qv.exe to reload the document, but I want to actually create a task through a command line. Is this possible?
I don't think this is possible out of the box. You can control the QlikView Server through the QlikView Management Service (QMS) API, but for that you need to build a .NET command-line app that does whatever you want.
If you are interested, follow this link for more info.

Is it possible to remotely process an SSAS cube through script?

I have a SQL Server Analysis Services (SSAS) cube (developed with BIDS 2012), and I would like to give the users (who access the cube through PowerPivot) the opportunity to process the cube from their local machines.
I found some material on how to set up a scheduled job on the server through PowerShell, SQL Agent, or SSIS, but no material on processing the cube remotely. Any advice?
There are several possibilities to trigger a cube processing. The low-level method is issuing an XMLA statement to the database containing the cube. To see what this looks like, open SQL Server Management Studio, connect to the AS instance, right-click an AS database, and select "Process". Configure the processing settings, but instead of hitting OK, select "Script" from the top toolbar to have the XMLA Process command generated for you. Leave the dialog with Cancel.
All methods that process a cube end, in one way or another, in sending a command like this to the AS database.
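For illustration, a minimal sketch of sending such a command from PowerShell with Invoke-ASCmd (from the SQLPS/SqlServer module); the server and database IDs are placeholders, and in practice you would paste the XMLA scripted from Management Studio as described above:

$xmla = @"
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Process>
    <Object>
      <DatabaseID>MyAsDatabase</DatabaseID>
    </Object>
    <Type>ProcessFull</Type>
  </Process>
</Batch>
"@
Invoke-ASCmd -Server "MyAsServer" -Query $xmla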
There are several options to trigger a cube processing:
In Management Studio, by clicking OK in the above-mentioned dialog.
In PowerShell (see http://technet.microsoft.com/en-us/library/hh510171.aspx); a short sketch follows this list.
In Integration Services, there is an Analysis Services processing task (http://msdn.microsoft.com/en-us/library/ms141779.aspx).
You can set up a SQL Server Agent job; job steps could either be a direct XMLA step or an Integration Services step containing the processing task (among possibly other tasks).
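As a sketch of the PowerShell option mentioned above, using the Invoke-ProcessCube cmdlet from the SQLASCMDLETS module (server, database, and cube names are placeholders):

Invoke-ProcessCube -Server "MyAsServer" -Database "MyAsDatabase" -Name "Sales" -ProcessType ProcessFull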
The question, however, is how the setups described above can be made accessible to end users. An important issue here is of course that the user executing the process task needs to have permission to process the cube. As you might not want to grant this permission directly, it might make sense to use some impersonation when calling it. With Management Studio - and, as far as I am aware, with PowerShell - this cannot easily be achieved.
Integration Services and Agent jobs offer the possibility of impersonation. Integration Services packages are executed by the dtexec command-line tool (part of the SQL Server client tools). There is also a tool called dtexecui (available as "Execute Package Utility" in a standard SQL Server client tool installation), which lets you configure all settings in a dialog and then execute the package; it can also display the dtexec command line corresponding to your settings.
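For example, running such a package from the command line might look like this (the package path is a placeholder):

dtexec /File "C:\SSIS\ProcessCube.dtsx"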
And to call a SQL Server Agent job, an easy interface is the set of stored procedures (http://msdn.microsoft.com/en-us/library/ms187763.aspx), especially sp_start_job (note that this is asynchronous: it starts the job and returns without waiting for the job to complete), sp_help_jobactivity to ask for job status, and sp_help_jobhistory for details of jobs that have run.
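For example, assuming an Agent job named "Process SSAS cube" already exists (server and job names are placeholders):

Invoke-Sqlcmd -ServerInstance "MySqlServer" -Database msdb -Query "EXEC dbo.sp_start_job @job_name = N'Process SSAS cube';"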
All in all I think there is no final solution available, but I mentioned some building blocks that you could use to code your own solution, depending on the preferences in your environment.