How to execute a SQL file with multiple queries in Talend and store the result in a log or output file

I want to execute a SQL file containing multiple queries using Talend.
As far as I understand Talend, there is no straightforward way of doing this.
My SQL file contains multiple SELECT statements against various tables.
Ex: Select age,name from Table_AB; Select count(Salary), emp_id from Emp_table;
I have referred to http://bekwam.blogspot.in/2011/02/running-file-of-sql-statements-with.html. The queries get executed, but I want to write the results of the tOracleRow to an output or log file and send that file to an email recipient.
What I have done till now: a Talend job to execute a SQL file with multiple queries.
Can anyone please help me output the results of a tOracleRow to an output file and mail the results to an email recipient? I am able to do it for a single table with a schema, but I am confused about how to do it for various tables and output the results.
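Talend aside, the underlying pattern (read a file of SQL statements, run each one, and append every result set to a log file) can be sketched in plain Python. This is a minimal illustration, not the Talend job itself; sqlite3 stands in for Oracle, and the naive split on ";" assumes no semicolons inside string literals.

```python
import sqlite3

# Stand-in for the contents of the SQL file (file names and data are made up).
sql_text = """
CREATE TABLE Emp_table (emp_id INTEGER, Salary INTEGER);
INSERT INTO Emp_table VALUES (1, 100), (2, 200);
SELECT count(Salary) AS cnt FROM Emp_table;
"""

conn = sqlite3.connect(":memory:")
log_lines = []
# Naive split on ';' -- enough for simple files without semicolons in literals.
for stmt in (s.strip() for s in sql_text.split(";")):
    if not stmt:
        continue
    cur = conn.execute(stmt)
    if cur.description:  # only SELECTs produce rows worth logging
        log_lines.append(",".join(col[0] for col in cur.description))
        log_lines.extend(",".join(str(v) for v in row) for row in cur.fetchall())

log = "\n".join(log_lines)  # in a real job, write this to the output file
print(log)
```

In Talend terms, the loop body is what each tOracleRow iteration would feed into a tFileOutputDelimited or tLogRow, with a tSendMail at the end attaching the file.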


Remove or ignore last column in CSV file in Azure

I have a CSV file on an SFTP server which has 13 columns, but annoyingly the last one has no data or header, so there is an extra comma at the end of every record:
PGR,Concession ,Branch ,Branch Name ,Date ,Receipt ,Ref/EAN,Till No ,Qty , Price , Discount , Net Price ,
I want to import this file into a SQL table in Azure using a Copy activity in Data Factory, but I'm getting this error:
I know that if I manually open the file and remove column M (which is completely blank), it works fine. But this needs to be an automated process; can someone assist, please? I'm not too familiar with Data Flows in ADF, so that could be an option, or I can use a Logic App to access the file if ADF is not the right approach.
One workaround is to parse the CSV file in a Logic App and send only the required data to Azure SQL using the SQL Server connector. Here is a screenshot of my Logic App.
Result:
Alternatively, you can remove the last column in ADF by using a Select transformation with a rule-based mapping:
name != 'null' && left(name,2) != '_c'
Because the header of that last column is blank in the ADF dataset, data flows name it "_c(<some column number>)", which is why the rule checks left(name,2) != '_c'.
REFERENCES:
Remove specific columns using Azure Data Factory
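For comparison outside ADF, the fix amounts to dropping the trailing empty field that the extra comma produces on every record. A minimal Python sketch (the sample data is made up):

```python
import csv, io

# Records where every line ends with an extra comma, producing an empty last field.
raw = "PGR,Branch,Qty,\n1,North,5,\n2,South,7,\n"

# Drop the last field of each row only when it is empty.
rows = [row[:-1] if row and row[-1] == "" else row
        for row in csv.reader(io.StringIO(raw))]

cleaned = "\n".join(",".join(r) for r in rows)
print(cleaned)
```

The same logic could run in an Azure Function or a Logic App inline script before the Copy activity picks the file up.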

How to export from sql server table to multiple csv files in Azure Data Factory

I have a simple clients table in SQL Server that contains two columns: client name and city. In Azure Data Factory, how can I export this table to multiple CSV files such that each file contains only the clients from one city, which is also the name of the file?
I already tried, and succeeded, in splitting it into different files using Lookup and ForEach, but the data remains unfiltered by city.
Any ideas, anyone?
You can use Data Flow to achieve that easily.
I made an example for you: I created a table as the source and exported it to multiple CSV files, each containing only the clients from one city, which is also the file name.
Data Flow Source:
Data Flow sink settings: set the "File name option" to "As data in column" and use auto mapping.
Check the output files and the data in them:
HTH.
You would need to follow the flow below:
Lookup activity, Query: Select distinct city from table
ForEach activity
Input: @activity('LookUp').output.value
a) Copy activity
i) Source: dynamic query Select * from t1 where city = '@{item().City}'
This should generate a separate file for each city, as needed.
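To make the Lookup + ForEach logic concrete, here is the same computation sketched in plain Python (the client data and file naming are invented): one distinct-city pass, then one filtered output per city.

```python
import csv, io
from collections import defaultdict

# Made-up stand-in for the clients table (client name, city).
clients = [("Alice", "Paris"), ("Bob", "Rome"), ("Carol", "Paris")]

# "Lookup": group rows by distinct city; "ForEach": write one file per city.
by_city = defaultdict(list)
for name, city in clients:
    by_city[city].append((name, city))

files = {}
for city, rows in by_city.items():
    buf = io.StringIO()  # stands in for the per-city CSV file
    w = csv.writer(buf)
    w.writerow(["client_name", "city"])
    w.writerows(rows)
    files[f"{city}.csv"] = buf.getvalue()

print(sorted(files))
```

Each output contains only that city's clients, which is exactly what the dynamic-query filter inside the ForEach achieves in ADF.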
Steps (screenshots omitted): use a batch job with any number of parallel executions, and create a parameterised dataset for the sink.
Result: with 2 different entities, 2 files are generated.

Remove header (column names) from query result

I am using a Java-based program, and I am writing a simple SELECT query inside that program to retrieve data from a PostgreSQL database. The data comes with a header, which breaks the rest of my code.
How do I get rid of all column headings in an SQL query? I just want to print the raw data without any headings.
I am using the Building Controls Virtual Test Bed (BCVTB) to connect my database to EnergyPlus. BCVTB has a database actor in which you can write a query, receive data, and send it to your other simulation program. I decided to use PostgreSQL. However, when I write Select * From mydb, it returns the data with the column names (header). I just want the raw data without a header. What should I do?
PostgreSQL does not send table headings the way a CSV file does. The protocol (as used via JDBC) sends only the rows. The driver does request a description of the rows that includes the column names, but that is not part of the result-set rows, unlike the header-first convention of CSV.
Whatever is happening must be a consequence of the BCVTB tools you are using, and I suggest pursuing it on that side of things.
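The rows-versus-metadata distinction holds for database APIs generally; a small Python sketch with sqlite3 (standing in for PostgreSQL/JDBC) shows that fetched rows never contain the header, which lives separately in the cursor's description:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mydb (name TEXT, value INTEGER)")
conn.execute("INSERT INTO mydb VALUES ('a', 1), ('b', 2)")

cur = conn.execute("SELECT * FROM mydb")
rows = cur.fetchall()                         # raw data only, no header row
header = [col[0] for col in cur.description]  # column names live in metadata

print(header)
print(rows)
```

If a header line appears in the output, it is being added by the consuming tool (here, BCVTB), not by the database.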

Dump subset of records in an OpenEdge database table in the ".d" file format

I am looking for the easiest way to manually dump a subset of records in an OpenEdge database table in the Progress ".d" file format.
The best way I can imagine is creating an extra test database with a schema identical to the source database, then copying the subset of records over to the test database using FOR EACH and BUFFER-COPY statements, and finally exporting the data from the test database using the Dump Data and Definitions > Table Contents (.d file)... menu option.
That seems like a lot of trouble. If you can identify the subset of records in order to do the BUFFER-COPY, then you should also be able to:
OUTPUT TO VALUE( "table.d" ).
FOR EACH table NO-LOCK WHERE someCondition:
    EXPORT table.
END.
OUTPUT CLOSE.
Which is, essentially, what the dictionary "dump data" .d file is, less a few lines of administrivia at the bottom that can safely be omitted for most purposes.
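For readers unfamiliar with the format: EXPORT writes one space-delimited line per record, with character fields quoted. A rough Python approximation of that .d-style output (the record layout here is invented, and this ignores EXPORT's handling of dates, unknowns, and other Progress types):

```python
# Rough sketch of Progress ABL EXPORT-style output: space-delimited fields,
# character values quoted with embedded quotes doubled, one line per record.
records = [("Smith", 42), ("O'Hara", 7)]  # made-up (name, value) records

def export_line(rec):
    parts = []
    for v in rec:
        if isinstance(v, str):
            parts.append('"' + v.replace('"', '""') + '"')
        else:
            parts.append(str(v))
    return " ".join(parts)

lines = [export_line(r) for r in records]
print("\n".join(lines))
```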

TSQL - export query to xls /xslx / csv

I have a complicated dynamic query in T-SQL whose results I want to export to Excel.
[The result table contains fields with text longer than 255 characters, if that matters.]
I know I can export the result using the Management Studio menus, but I want to do it automatically, by code. Do you know how?
Thanks in advance.
You could have a look at sp_send_dbmail. This allows you to send an email after your query runs, with the result set attached as a CSV. Obviously the viability of this method depends on how big your result set is.
Example from the linked document:
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'AdventureWorks2008R2 Administrator',
    @recipients = 'danw@Adventure-Works.com',
    @query = 'SELECT COUNT(*) FROM AdventureWorks2008R2.Production.WorkOrder
              WHERE DueDate > ''2006-04-30''
              AND DATEDIFF(dd, ''2006-04-30'', DueDate) < 2',
    @subject = 'Work Order Count',
    @attach_query_result_as_file = 1;
One way is to use bcp, which you can call from the command line; check out the examples in that reference, and in particular the -t argument, which you can use to set the field terminator (for CSV). There's a linked reference on Specifying Field and Row Terminators.
Or, directly in T-SQL, you could use OPENROWSET, as explained here by Pinal Dave.
Update:
Re: 2008 64-bit & OPENROWSET: I wasn't aware of that; a quick dig throws up this on the MSDN forums, with a link given. Any help?
Aside from that, other options include writing an SSIS package, or using SQLCLR to write an export procedure in .NET and call it directly from SQL. Or you could call bcp from T-SQL via xp_cmdshell; you have to enable it first, though, which widens the possible "attack surface" of SQL Server. I suggest checking out this discussion.
Some approaches here: SQL Server Excel Workbench
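If calling out from application code is an option, exporting a query's result set to CSV is straightforward in most languages. A minimal Python sketch, using sqlite3 as a stand-in for SQL Server (table, data, and file name are made up):

```python
import csv, io, sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE WorkOrder (id INTEGER, DueDate TEXT)")
conn.execute("INSERT INTO WorkOrder VALUES (1, '2006-05-01'), (2, '2006-05-02')")

cur = conn.execute("SELECT * FROM WorkOrder WHERE DueDate > '2006-04-30'")
buf = io.StringIO()  # swap for open('export.csv', 'w', newline='') to write a file
writer = csv.writer(buf)
writer.writerow(col[0] for col in cur.description)  # header row
writer.writerows(cur)                               # data rows

print(buf.getvalue())
```

Excel opens CSV directly, which sidesteps the 255-character cell-text issues older xls export paths can hit.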
I needed to accept a dynamic query and save the results to disk so I could download them through the web application.
INSERT INTO a data source didn't work out for me; getting it to work took continued effort.
Eventually I went with sending the query to PowerShell from SSMS.
Read my post here:
How do I create a document on the server by running an existing stored procedure or the SQL statement of that procedure on a 2008 R2 SQL Server
Single quotes were a problem, however, and at first I didn't trim my query onto a single line, so it contained line breaks from SQL Studio, which actually matters.