uDeploy step to read JSON file - udeploy

We are using uDeploy for the deployment process. At one of the shell steps, a JSON file is received as input; we want to read the JSON file and process the output. I couldn't find any options within the documentation. Could someone help me find the option?

If your uDeploy agent machine is Linux, you can try jq.
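A minimal sketch of such a shell step, assuming the incoming file is called input.json and carries a top-level field named status (both names are hypothetical):

#!/bin/sh
# Pull one field out of the incoming JSON file; -r prints the raw value without quotes.
STATUS=$(jq -r '.status' input.json)
echo "Deployment status: ${STATUS}"
# jq prints "null" when the field is missing; fail the step in that case.
[ "${STATUS}" != "null" ] || exit 1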

Related

Download Ansible Tower log output directly to a output file on managed remote server

Could someone please help with "How to download Ansible Tower execution results to a log file on a particular remote server?".
I couldn't find many results on "How to automatically download the result of an execution?".
I understand your question to be similar to Ansible Tower REST API: Is there any way to get the logs/output of a job?
According to the Tower API Reference Guide: Jobs, the following call might work from or on your remote server.
curl --silent -u ${TOWER_USER}:${TOWER_PASSWORD} -JL "https://${TOWER_URL}/api/v2/jobs/${JobID}/stdout?format=txt_download" -o job_${JobID}.log
resulting in a file called job_${JobID}.log.
You may transfer this into an Ansible task, ideally by not using the shell module.
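A minimal usage sketch, run from or on the remote server, with placeholder values for the Tower connection details and the job ID (all values below are hypothetical):

#!/bin/sh
# Tower connection details (placeholders).
TOWER_USER=admin
TOWER_PASSWORD=secret
TOWER_URL=tower.example.com
JobID=42
# Download the job's stdout as plain text into job_42.log.
curl --silent -u ${TOWER_USER}:${TOWER_PASSWORD} -JL "https://${TOWER_URL}/api/v2/jobs/${JobID}/stdout?format=txt_download" -o job_${JobID}.log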

Executing Batch service in Azure Data factory using python script

Hi, I've been trying to execute a custom activity in ADF which receives a CSV file from a container (A); after further transformation on the data set, the transformed DF is stored into another CSV file in the same container (A).
I've written the transformation logic in Python and have it stored in the same container (A).
The error arises here: when I execute the pipeline it returns the error *can't find the specified file*.
Nothing is wrong with the connections. Is anything wrong in the Batch account or pools?
Can anyone tell me where to place the Python script?
Install Azure Batch Explorer and make sure to choose the proper configuration for the virtual machine (dsvm-windows), which will ensure Python is already in place on the virtual machine where your code is run.
This video explains the steps
https://youtu.be/_3_eiHX3RKE

Has anyone tried to do a PS script to upload CSV files to BigQuery?

I was able to create a Python script to upload files to BigQuery, but has anyone tried it with PowerShell?
I tried to find an API call for PS but I cannot find anything.
Yeah, there are a few ways...
Use the Google Cloud Tools for PowerShell (this is in beta)
Load data using BigQuery's web API
Load data using the .NET client library
Option-1 is probably your best bet. Check out Add-BqTableRow:
Add-BqTableRow takes CSV, JSON, and AVRO files to import into BigQuery.
Option-3: You'll find the .NET examples will mostly be in C#. Convert what you see to PowerShell.
A quick and easy CSV file loader to BigQuery for the community:
You will need a service (SVC) account linked to your BigQuery project.
Authentication
gcloud auth activate-service-account SERVICE_ACCOUNT@EMAIL.COM --key-file=JSON_FILE_WITH_THE_SVC_ACCOUNT
BigQuery CSV files loader
bq load --source_format=CSV --skip_leading_rows=1 DATASET.TABLE_NAME CSVFILE.CSV
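If the destination table does not exist yet, one variant worth trying is letting bq infer the schema from the file; a sketch, assuming schema autodetection is acceptable for your data:

# --autodetect asks BigQuery to infer column names and types from the CSV itself.
bq load --autodetect --source_format=CSV --skip_leading_rows=1 DATASET.TABLE_NAME CSVFILE.CSV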
Thanks guys!
Have a nice day!

Talend TAC export of list of tasks

I would like to take an export of the list of jobs that are created as tasks under Job Conductor in TAC, along with their configurations. Is this possible? If yes, please advise. I am using the Enterprise edition of Talend v5.6.
There is an API to query the TAC called MetaServletCaller. Using this API, you can send a command to get a list of the tasks deployed in Job Conductor.
The API can be called from a url in a browser, or by calling MetaServletCaller.sh (or .bat) script from your Talend installation.
The command to get this list is listTasks (a minimal call is sketched below).
Here is a tutorial on how to do that: http://edwardost.github.io/talend/di/2015/05/28/Using-the-TAC-API/
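Such a call might look like the following, run from the directory of your Talend installation that holds the script, assuming a TAC reachable at tac-host:8080 and admin credentials (URL, user, and password are placeholders):

# Ask the TAC MetaServlet for all tasks deployed in Job Conductor.
./MetaServletCaller.sh --tac-url=http://tac-host:8080/org.talend.administrator \
  --json-params='{"actionName":"listTasks","authUser":"admin@company.com","authPass":"admin"}'

The response comes back as JSON describing each deployed task.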
All TAC configuration is stored in DB tables. Just connect to this database (you can get its name through TAC, in the Configuration > Database menu) and have a look at the "executiontask" table; it contains all jobs deployed in Job Conductor, with context/job version, etc.
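For instance, a sketch assuming the TAC database is MySQL and named talend_administrator (database name and credentials are hypothetical; the table name comes from the answer above):

# Dump every Job Conductor task row from the TAC database.
mysql -u tac_user -p -D talend_administrator -e 'SELECT * FROM executiontask;'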

Talend, MongoDB connection

I am facing a problem with the MongoDB connection.
I have successfully imported the tMongo components into my Talend Open Studio 5.1.1 by copying the mongo-1.3.jar file to the lib/java folder, and my MongoDB jobs run successfully. But the problem is that even if I provide a fake server path (IP) and a fake port for MongoDB, my job runs without an error and gives me 1 row with no data, and the same goes for the right IP and port.
How do I resolve this?
I think the connection is not working. As you may know, MongoDB only checks that the connection is actually working when you perform a query on it.
(Yeah, it doesn't check for a successful connection when you just connect to it.)
I would suggest instead adding the MongoDB components provided in Talend for Big Data by following the steps below:
The components provided for MongoDB are:
tMongoDBInput, tMongoDBOutput, tMongoDBConnection, etc.
Or you can download the components from http://www.talendforge.org/exchange/ and search for Mongo instead of using Talend Big Data, but I would suggest using Talend for Big Data.
The components will be in zipped format; unzip them. In Talend Big Data you will find the components in the Components folder.
Copy these unzipped components to the installation path of TOS:
C:\Talend\TOS_DI-Win32-r84309-V5.1.1\plugins\org.talend.designer.components.localprovider_5.1.1.r84309\components
Copy the mongo-1.3.jar file in the component folder into C:\Talend\TOS_DI-Win32-r84309-V5.1.1\lib\java
On many systems you might not be able to see this file; in that case, proceed with ADMINISTRATOR privileges.
Optional for a few systems: add the required entry inside index.xml, then save index.xml.
Restart TOS
Then you will be able to use them as normal components.
Cheers!
The reason for the job running without any error could be the connection/metadata you have used for the Mongo connector. It should not be possible for the job to run without any error even after giving a fake path.
I guess you might have configured (re-modified) the repository connection, but the component is still using built-in metadata.