Can Talend (Open Studio) be used to automate a data load from a folder to Vertica? - database-schema

I have been looking for a way to automate my data loads into Vertica instead of manually exporting flat files each time, and stumbled upon the ETL tool Talend.
I have been working with a test folder containing multiple CSV files, and am attempting to build a job so the files can be loaded into Vertica.
However, I see that in the Open Studio (free) version, if your files do not have the same schema, this becomes next to impossible without the dynamic schema option, which is only in the Enterprise version.
I start with tFileList and attempt to iterate through tFileInputDelimited, but the schemas are not uniform, so of course the processing stops.
So, long story short, am I correct in assuming that there is no way to automate data loads in the free version of Talend if you have a folder consisting of files with different schemas?
If anyone has any suggestions for other open-source ETL tools to look at, or a solution, that would be great.

You can access the CURRENT_FILE variable from a tFileList component and then send a file down a different route depending on the file name. You'd then create a tFileInputDelimited for each file. For example, if you had two files named file1.csv and file2.csv, right-click the tFileList and choose Trigger > Run If. In the Run If condition, type ((String)globalMap.get("tFileList_1_CURRENT_FILE")).toLowerCase().matches("file1.csv") and drag it to the tFileInputDelimited set up to handle file1.csv. Do the same for file2.csv, changing the file name in the Run If condition.
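As a rough sketch, these would be the two Run If conditions (assuming the component is named tFileList_1, as in the expression above; note that matches() treats its argument as a regular expression, so escaping the dot is slightly stricter):

    // Run If trigger from tFileList_1 to the tFileInputDelimited handling file1.csv
    ((String)globalMap.get("tFileList_1_CURRENT_FILE")).toLowerCase().matches("file1\\.csv")

    // Run If trigger from tFileList_1 to the tFileInputDelimited handling file2.csv
    ((String)globalMap.get("tFileList_1_CURRENT_FILE")).toLowerCase().matches("file2\\.csv")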

Related

Can I use an SQL query or script to create format description files for multiple tables in an IBM DB2 for System i database?

I have an AS400 with an IBM DB2 database, and I need to create a Format Description File (FDF) for each table in the DB. I can create an FDF file using the IBM Export tool, but it will only create one file at a time, which will take several days to complete. I have not found a way to create the files systematically using a tool or query. Is this possible, or should this be done using scripting?
First of all, to correct a misunderstanding...
A Format Description File has nothing at all to do with the format of a Db2 table. It actually describes the format of the data in a stream file that you are uploading into the Db2 table. Sure you can turn on an option during the download from Db2 to create the FDF file, but it's still actually describing the data in the stream file you've just downloaded the data into. You can use the resulting FDF file to upload a modified version of the downloaded data or as the starting point for creating an FDF file that matches the actual data you want to upload.
Which explains why there's no built-in way to create an appropriate FDF file for every table on the system.
I question why you think you actually need to generate an FDF file for every table.
As I recall, the format of the FDF (or its newer variant, FDFX) is pretty simple; it shouldn't be all that difficult to generate if you really wanted to. But I don't have one handy at the moment, and my Google-fu has failed me.

Loop through .csv files using Talend

Complete noob here to Talend/data integration in general. I have done simple things like loading a CSV into an Oracle table using Talend. Below is the requirement now, and I'm looking for some ideas on how to get started, please.
Request:
I have a folder in a Unix environment where the source application is pushing out .csv files daily at 5 AM. They are named as below:
Filename_20200301.csv
Filename_20200302.csv
Filename_20200303.csv
.
.
and so on till current day.
I have to create a Talend job to parse through these CSV files every morning and load them into an Oracle table where my BI/reporting team can consume the data. This table will be used as a lookup table, and the source is making sure not to send duplicate records in the CSVs.
The files would usually have about 250-300 rows per day. The plan is to keep an eye on it, and if the volume of rows increases in the future, then maybe think of limiting the time frame to a rolling 12 months.
Currently I have files from March 1st, 2020 through today.
The destination Oracle schema/table is always the same.
Tools: Talend Data Fabric 7.1
I can think of the below steps but have no idea how to get started on steps 1) and 2):
1) Connect to a Unix server/shared location. I have the server details/username/password, but what component do I use in Metadata?
2) Parse through the files in the above location. Should I use tFileList? Where does tFileInputDelimited come in?
3) Maybe use tMap for some cleanup/changing data types before using tDBOutput to push into Oracle. I have used these components in the past; I just have to figure out how to insert into the Oracle table instead of truncate/load.
Any thoughts/other cool ways of doing it, please. Am I going down the right path?
For Step 1, you can use the tFTPGet component, which will save your files from the Unix server/shared location to your local machine or job server.
Then for Step 2, as you mentioned, you can use a combination of tFileList and tFileInputDelimited:
Set tFileList to the directory where your files are now saved (based on Step 1).
tFileList will iterate through the files found in the directory.
Next, tFileInputDelimited will parse each CSV one by one (see the sketch below).
After that you can flow it through a tMap to do whatever transformation you need and write into your Oracle DB. An additional, optional step is a tUnite so you write into your DB all in one go.
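As a rough sketch (assuming the tFileList component ends up named tFileList_1; adjust the name to match your job), the File name/Stream field of the tFileInputDelimited connected via the Iterate link would point at the file currently being iterated:

    // File name/Stream field of tFileInputDelimited:
    // full path of the file tFileList_1 is currently iterating over
    ((String)globalMap.get("tFileList_1_CURRENT_FILEPATH"))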
Hope this helps.
Please use the below flow:
tFTPFileList --> tFileInputDelimited --> tMap --> tOracleOutput
If you are picking the files from the local server rather than a remote one, please use tFileList instead of tFTPFileList.

Talend Open Studio Big Data - Iterate and load multiple files in DB

I am new to Talend and need guidance on the below scenario:
We have a set of 10 JSON files with different structures/schemas that need to be loaded into 10 different tables in a Redshift DB.
Is there a way we can write generic script/job which can iterate through each file and load it into database?
For e.g.:
File Name: abc_<date>.json
Table Name: t_abc
File Name: xyz<date>.json
Table Name: t_xyz
and so on..
Thanks in advance
With the Talend Enterprise version, one can benefit from the dynamic schema feature. However, based on my experience with JSON, the files are usually somewhat nested structures, so you'd have to figure out how to flatten them; once that's done it becomes a 1:1 load. With Open Studio, though, this will not work due to the missing dynamic schema.
Basically, what you could do is write some Java code that transforms your JSON into CSV (a rough sketch follows below). Then use either psql from the command line or, if your Talend contains a new enough PostgreSQL JDBC driver, invoke the client-side \COPY from it to load the data. If your file and the database table column order match, it should work without needing to specify how many columns you have, so it's dynamic, but the data never "flows" through Talend.
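As a minimal, illustrative sketch of that "flatten JSON to CSV" idea (assuming Jackson is on the classpath, that each file is a JSON array of possibly nested objects, and ignoring CSV quoting and nested arrays), something along these lines could work:

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    import java.io.File;
    import java.util.*;

    // Sketch: flatten a JSON array of (possibly nested) objects into CSV rows on stdout.
    public class JsonToCsv {
        public static void main(String[] args) throws Exception {
            JsonNode root = new ObjectMapper().readTree(new File(args[0]));
            List<Map<String, String>> rows = new ArrayList<>();
            LinkedHashSet<String> columns = new LinkedHashSet<>();
            for (JsonNode record : root) {              // iterate over array elements
                Map<String, String> row = new LinkedHashMap<>();
                flatten("", record, row);
                columns.addAll(row.keySet());
                rows.add(row);
            }
            System.out.println(String.join(",", columns));
            for (Map<String, String> row : rows) {
                List<String> cells = new ArrayList<>();
                for (String col : columns) {
                    cells.add(row.getOrDefault(col, ""));
                }
                System.out.println(String.join(",", cells)); // naive: no quoting/escaping
            }
        }

        // Recursively turn {"a":{"b":1}} into a single "a.b" -> "1" entry.
        private static void flatten(String prefix, JsonNode node, Map<String, String> out) {
            if (node.isObject()) {
                Iterator<Map.Entry<String, JsonNode>> fields = node.fields();
                while (fields.hasNext()) {
                    Map.Entry<String, JsonNode> f = fields.next();
                    String key = prefix.isEmpty() ? f.getKey() : prefix + "." + f.getKey();
                    flatten(key, f.getValue(), out);
                }
            } else {
                out.put(prefix, node.asText());
            }
        }
    }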
A really-not-cool but theoretically possible solution: if Redshift supports JSON (Postgres does), then one can create a staging table with 2 columns, filename and content. Once the whole content is in this staging table, an INSERT-SELECT SQL statement could be created that transforms the JSON into a tabular format that can be inserted into the final table.
However, with your toolset you probably have no other choice than to load these files with one job per file, and I'd suggest one dedicated job for each file. They would each look for their own files and be triggered/scheduled individually, or be part of a bigger job where you scan the folder and trigger the right job for the right file (see the sketch below).
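For the "bigger job" variant, a rough sketch of the routing conditions (assuming a tFileList_1 scanning the folder and one tRunJob per dedicated child job; the component names are illustrative):

    // Run If condition on the link leading to the tRunJob for the t_abc job
    ((String)globalMap.get("tFileList_1_CURRENT_FILE")).startsWith("abc_")

    // Run If condition on the link leading to the tRunJob for the t_xyz job
    ((String)globalMap.get("tFileList_1_CURRENT_FILE")).startsWith("xyz")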

Storing data in array vs text file

My database migration automation script used to require the user to copy the database names into a text file, then the script would read in that text file and know which databases to migrate.
I now have a form where the user selects which databases to migrate, then my script automatically inserts those database names into the text file, then reads in that text file later in the script.
Would it be better practice to move away from the text file altogether and just store the data in an array or some other structure?
I'm also using PowerShell.
I'm no expert on this, but I would suggest keeping the text file even if you choose the array- or form-only approach. You can keep the text file as a sort of log file: you don't have to read from it, but you can write to it so you can quickly determine which databases were being migrated if an error happens.
In a production environment you probably have more sophisticated logging tools, but I say keep the file in case of an emergency where you have to debug.
When you finish migrating and determine in the script that everything is as it should be, you can clear the text file, or keep it, append the date and time, and store it as a quick reference should another task come up and you need quick access to the databases that were migrated on a certain date.

PostgreSQL/pgAdmin 3 - Export to multiple tabs in an XLSX file

Is it possible to run multiple PostgreSQL queries and, using pgAdmin 3, have them each export to a separate tab in an XLSX file?
Along those same lines, is it possible to run one PostgreSQL query that exports to multiple tabs based on some criteria?
You'll want to use an external tool for this. PostgreSQL knows nothing about the XLSX format, nor about OpenDocument or any of that.
I suggest writing a script that exports a bunch of individual CSV files with COPY, then using an external tool to convert them to XLSX and assemble them into sheets in the document (a rough sketch follows below).
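As a minimal, illustrative sketch of that assembly step (assuming Apache POI is on the classpath, that the CSVs are plain file names in the current directory with no quoted or embedded commas, and that the output name report.xlsx is just a placeholder):

    import org.apache.poi.ss.usermodel.Row;
    import org.apache.poi.ss.usermodel.Sheet;
    import org.apache.poi.xssf.usermodel.XSSFWorkbook;

    import java.io.BufferedReader;
    import java.io.FileOutputStream;
    import java.io.FileReader;

    // Sketch: combine query1.csv, query2.csv, ... into one XLSX, one sheet per CSV.
    public class CsvToXlsx {
        public static void main(String[] args) throws Exception {
            try (XSSFWorkbook workbook = new XSSFWorkbook()) {
                for (String csvPath : args) {                     // e.g. query1.csv query2.csv
                    Sheet sheet = workbook.createSheet(csvPath.replace(".csv", ""));
                    try (BufferedReader reader = new BufferedReader(new FileReader(csvPath))) {
                        String line;
                        int rowIndex = 0;
                        while ((line = reader.readLine()) != null) {
                            Row row = sheet.createRow(rowIndex++);
                            String[] cells = line.split(",", -1); // naive split, no quoting
                            for (int col = 0; col < cells.length; col++) {
                                row.createCell(col).setCellValue(cells[col]);
                            }
                        }
                    }
                }
                try (FileOutputStream out = new FileOutputStream("report.xlsx")) {
                    workbook.write(out);
                }
            }
        }
    }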
It's possible that ETL tools like CloverETL, Pentaho Kettle, or Talend Studio may do what you want too. I haven't checked this specific functionality.