Talend - Insert data into DB using Django REST API

I am trying to utilise Django REST APIs to insert data into the database, instead of writing to it directly. I've been able to read JSON data using the tRESTClient component, but I am not too sure about the insertion/POST. Could someone point me to the components (and relations) that I should use?
The current job that I have is mostly:
Read data from raw file -> tMap -> DB
and I wish to do something like:
Read data from raw file -> tMap -> (pass on data to REST endpoint via POST)
I used the tRESTClient component after my tMap and I could see records getting inserted into the DB, but all of them are without any data. Strangely, nowhere was I asked to specify the JSON tree. The number of records getting inserted is equal to the number of rows being read from the raw file, so at least something is right. But I couldn't locate the menu/options to specify which data element read from the raw file should map to which JSON element.
How do I specify the data to JSON mapping?
PS: I realise that this might not be the most efficient way to ingest data but that's what the business wants since it brings in an additional layer of control.
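(For reference, one way to get actual data into the POST body is to assemble the JSON yourself before the tRESTClient. A minimal tJavaRow sketch, assuming hypothetical input columns name and qty; tWriteJSONField is the graphical alternative for this element-to-JSON mapping, and note that plain concatenation does not escape quotes inside the data:

//hypothetical tJavaRow: build the JSON body by hand from the incoming row;
//tRESTClient should then pick up the "body" column as the HTTP body of the POST
output_row.body = "{\"name\":\"" + input_row.name + "\","
                + "\"qty\":" + input_row.qty + "}";
)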

Related

How to use REST API as source for Lookup activity in Azure Data Factory

I am trying to incrementally load data from a ServiceNow data source into an Azure SQL table, as per the guide from Microsoft https://learn.microsoft.com/en-us/azure/data-factory/tutorial-incremental-copy-portal. This uses 2 Lookup activities to find 2 dates; I then filter the source data in the copy activity to only return and insert rows where the sys_updated_on date is between the 2 lookup values.
I would now like to look up a value from a REST API dataset. However, I do not get the option to choose my REST dataset in the Lookup activity; it just does not appear as an option. The REST URL is set up to return one date value, which I need to pass into the WHERE clause of my source in the copy data. If I cannot retrieve this value in the lookup, how else can I pass it to my WHERE clause?
Currently I use activity('LookupOldWaterMarkActivity').output.firstRow.watermarkvalue and
convertTimeZone(activity('LookupNewWaterMarkActivity').output.firstRow.watermarkvalue
Thanks
As per the official Microsoft documentation, the REST dataset is supported in the Lookup activity.
You can post feedback from Azure Data Factory or raise a support request to get the issue fixed.
As a workaround, you can create an HTTP dataset with JSON format and use the output value in later activities.
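For example, assuming the HTTP dataset is read by a Lookup named LookupApiWatermark (a hypothetical name), the returned date can be spliced into the copy activity's source query with a dynamic expression along these lines (the doubled single quotes are the expression language's escaping):

@concat('SELECT * FROM src WHERE sys_updated_on > ''', activity('LookupApiWatermark').output.firstRow.watermarkvalue, '''')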

Talend tREST modify body

I have just started at a new company and they would like me to use Talend jobs to update the stock of web sites. I am leaning towards the PrestaShop web services, except that I don't know web service exchanges well, and Talend not at all.
I need to modify the body of the tREST component for each iteration of a content file, with my ID and quantities.
Here is the body structure and my job (which works for a given ID and quantity).
I don't know the tREST component very well, but if you use a tRESTClient with a tXMLMap before it, you should be able to create your XML schema in the tXMLMap, then send the document it produces to the tRESTClient.
tXMLMap lets you use metadata from the repository, so you can create a metadata entry associated with your XML example.
The type of the output in your tXMLMap should be Document (this is the Java type associated with XML), and the name of the output flow should be "payload" (I think this is mandatory, but I am not sure).
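If you want to prototype without the tXMLMap first, a tJavaRow can also assemble the stock body as a plain string per row. A rough sketch, assuming input columns id and quantity and the usual shape of PrestaShop's stock_available resource (verify the exact field list against the blank schema your web service returns):

//hypothetical tJavaRow: build the PrestaShop stock_available body from the current row
output_row.body = "<prestashop xmlns:xlink=\"http://www.w3.org/1999/xlink\">"
                + "<stock_available>"
                + "<id>" + input_row.id + "</id>"
                + "<quantity>" + input_row.quantity + "</quantity>"
                + "</stock_available>"
                + "</prestashop>";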

Automatically map contents of REST JSON body as flat table in Data Flow

With the Copy Data activity it is possible to retrieve data from a REST call (an array of flat JSON objects, similar to OData) and copy the contents to a flat table, keeping the data types from the source but without the necessity to set the schema for that specific data.
When I try to recreate this with a Data Flow, I can't get it to work. When I check the Data Preview of my source, I get a hierarchy with a body (with my OData-like data) and a header. And if I send that to my sink (Avro), it will be saved in this same hierarchical structure (including the header). I know I can fix this manually by using a Select operation (body.column1, body.column2, etc.), but I want to make my Data Flow dynamic so I'm able to use it with multiple tables/endpoints.
So my REST source hands me the data in that hierarchical body/header structure, and I want it flattened into plain columns at my sink without hardcoding my schema.
The only workaround I can come up with is retrieving the data using Copy Data, putting it somewhere temporarily, and then using my data flow to further transform the data. Is there an easier way to do this? I cannot imagine that I'm the only one who has this issue.
Hopefully it's clear and somebody is able to help. Thank you very much in advance.
The data flow projection will get its schema from the API, including body and header. Hence, when you use auto-mapping, everything is going to be saved.
Below are workarounds you can think of:
As you mentioned, use Copy Data first and then a data flow to further transform.
Use Select or Derived Column transformations to transform your data and get all the column names, and then finally use the sink. For this you can opt for the column pattern matching syntax, so that one condition can be met by multiple columns to transform (see the sketch after the link below).
Check the link below to learn about column pattern mappings.
https://learn.microsoft.com/en-us/azure/data-factory/concepts-data-flow-column-pattern
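As an illustration of the second workaround, a Select transformation with a rule-based mapping roughly like the following should promote every column under body to the top level without hardcoding names (this is my recollection of the syntax from the page above, so double-check it there):

Hierarchy level    : body
Matching condition : true()
Name as            : $$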

Fetch only some parts of a large JSON in Dart

I wonder how to fetch only a part of a large JSON file.
In my example it's not that large, but in my project the file is sometimes around 7000 lines.
Example JSON: https://statsapi.web.nhl.com/api/v1/schedule?expand=schedule.linescore
How do I fetch only the team name, for example?
Normally, from a network request you only get what the server serves you; you can't fetch only a portion of it. You can, however, process the data after the response comes back from the server, referring to the portion you need by its keys. Like this:
response['totalItems']
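For the schedule endpoint above, the team names sit deeper in the hierarchy; assuming that payload's usual dates -> games -> teams -> team nesting, the first game's home team name would be reached with:

response['dates'][0]['games'][0]['teams']['home']['team']['name']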

Read SQL Server database using Mirth Connect and convert it into XML format and vice versa

I have a requirement where I have to read data from a local SQL Server database and first map it to an XML file provided by another third-party org, who have their own database. Then, once I have a proper mapping of fields, I have to transform the data from the SQL Server database to XML format and vice versa.
So far I am able to connect to the SQL Server database in Mirth Connect; however, I don't know what steps are required in the channels and transformers to carry out the task of reading data and mapping the corresponding fields to the XML format provided by the third party, and finally writing to the XML file provided, and vice versa.
In short, if I can get details of creating such a channel in Mirth Connect, where I can read the SQL Server database and map the fields to the corresponding XML file, I guess I can write to it. The same applies if I go from XML format to the SQL Server database. Can someone tell me how to accomplish this?
For database field mapping, what's the best way to map fields entirely across two different databases? Is there any tool which can help?
Also, once the task of transforming the data from one end to the other is accomplished, is there any way of validating in Mirth Connect that the data was correctly moved from one to the other?
If you want to process one row at a time, the normal Database Reader will work fine; just set the data type under Summary to XML for all steps. Set a destination of Channel Writer to nowhere and run it once to see what it does in the Dashboard. You can copy and paste that as an example into your message template so you can map variables.
If you want to work with an entire result set at once in the Transformer steps, I find it easier to create a custom reader and put "FOR XML RAW, ELEMENTS" at the end of my Microsoft SQL query.
Something like:
//build the connection; this uses the MS JDBC driver and integrated auth dll
var dbConn = DatabaseConnectionFactory.createDatabaseConnection('com.microsoft.sqlserver.jdbc.SQLServerDriver','jdbc:sqlserver://servername:1433;databaseName=dbname;integratedSecurity=true;','','');
//query results as XML output from the server ('FOR XML' statement at the end)
var result = dbConn.executeCachedQuery("SELECT col1 AS FirstColumn, col2 AS SecondColumn FROM [dbname].[dbo].[table1] WHERE [processed] = 'False' FOR XML RAW, ELEMENTS");
//make sure we are at the top of the results
result.beforeFirst();
//wrap the XML; namespace etc. not required
var XMLresult = '<message>';
//the XML comes back broken up across several rows in one column; re-combine it
while (result.next()) {
    XMLresult += result.getString(1);
}
XMLresult += '</message>';
dbConn.close();
return XMLresult;
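For reference, with FOR XML RAW, ELEMENTS each result row comes back as a <row> element with one child element per selected column, so the wrapped string returned above would look roughly like this (values are placeholders):

<message>
  <row>
    <FirstColumn>value1</FirstColumn>
    <SecondColumn>value2</SecondColumn>
  </row>
  <row>
    <FirstColumn>value3</FirstColumn>
    <SecondColumn>value4</SecondColumn>
  </row>
</message>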