Import SQL result to txt file in one single line - amazon-redshift

I have a PostgreSQL query; the query is below.
select concat('abcdefghijklmnopqrstuvwyz',concat('abcdefghijklmnopqrstuvwyz','abcdefghijklmnopqrstuvwyz')) as ret3
union all
select concat('abcdefghijklmnopqrstuvwyz',concat('abcdefghijklmnopqrstuvwyz','abcdefghijklmnopqrstuvwyz')) as ret3
union all
select concat('abcdefghijklmnopqrstuvwyz',concat('abcdefghijklmnopqrstuvwyz','abcdefghijklmnopqrstuvwyz')) as ret3
The result of the query, when exported to a txt file, comes out on three lines. But I need the data to be exported to the text file on a single line.
The requirement is: how can we export the data warehouse data to a text file, but on a single line?
I tried a normal txt file export, but the result is 3 lines. Can somebody help? Is there any way we can export the data warehouse result to a text file with a single-line output?
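One possible approach, sketched here on the assumption that collapsing the rows in SQL before the export is acceptable, is Redshift's LISTAGG aggregate (the space delimiter is only an example):

-- Collapse the three result rows into a single value, then export that one row
select listagg(ret3, ' ') as single_line
from (
    select concat('abcdefghijklmnopqrstuvwyz',concat('abcdefghijklmnopqrstuvwyz','abcdefghijklmnopqrstuvwyz')) as ret3
    union all
    select concat('abcdefghijklmnopqrstuvwyz',concat('abcdefghijklmnopqrstuvwyz','abcdefghijklmnopqrstuvwyz')) as ret3
    union all
    select concat('abcdefghijklmnopqrstuvwyz',concat('abcdefghijklmnopqrstuvwyz','abcdefghijklmnopqrstuvwyz')) as ret3
) t;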

Related

Handling delimited files in Azure Data Factory

I have a very large table with around 28 columns and 900k records.
I converted it to a CSV file (pipe-separated) and then tried to use that file for feeding another table using ADF itself.
When I tried to use that file, it kept triggering an error saying some column datatype mismatched.
Digging further into the data, I found a few rows having a pipe (|) symbol in their text itself. So at the time of converting it back, the text after the pipe was considered part of the next column, and thus the error.
So how do I handle the conversion into CSV efficiently when there are texts with delimiters in their columns?
Option 1: If there is a possibility, I would suggest changing the delimiter to something other than pipe (|), as the column values also contain pipes in their text.
Option 2: In the CSV dataset, select a quote character to identify the columns, as shown in the sample below.
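For illustration, with made-up values: if a double quote is chosen as the quote character, a row whose text contains a pipe is written with that text enclosed in quotes, so the embedded pipe is no longer treated as a column separator:

id|comment|amount
1|"text with a | inside"|100
2|plain text|200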
Step 1: Copy data from table1 to a CSV (screenshots of the source, the sink CSV dataset, and the output omitted here).
Step 2: Load the same CSV data into table2 with a copy activity, using the CSV output file of Step 1 as the source (screenshots of the source CSV dataset, the sink dataset, and the output omitted here).

Talend Open Studio DI: Replace content of one column of .xlsx file with another column of .csv file

I have two input files:
an .xlsx file that looks like this:
a .csv file that looks like this:
I already have a talend job that transforms the .xlsx file into an .xml file.
One node in the .xml file contains the
<stockLocationCode>SL213</stockLocationCode>
The output .xml file looks like this:
Now I need to replace every occurrence of the stockLocationCode with the second column of the .csv file. In this case the result would be:
My talend job looks like this:
I use a tMap component to put the columns of the .xlsx file into the right node of the output xml file.
But I do not know how I can replace the stockLocationCode with the actual full stock location using the .csv file. I tried to also map the .csv file with the tMap component.
I would need to build in a method that looks at the current value of the node <stockLocationCode>, loops over the whole .csv file until it finds that value in the first column of the .csv file, and then replaces the <stockLocationCode> content with the content of the second column of the .csv file.
Performance is not important ;)
First, you'll need a lookup in e.g. a tMap or tXMLMap component, where you map your keys and add a new column with the second column of the csv file.
The resulting columns would look like this:
Product; Stock Location Code; CSV 2nd column data
Now in a second map you could just remove the stock location code and do the rest of your job.
Voila, you exchanged the columns.
You can use a tXMLMap with a lookup.
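For illustration only, the lookup described above behaves like a join between the two inputs on the code column; expressed as SQL with made-up table and column names, it would be:

-- Conceptual equivalent of the tMap/tXMLMap lookup (all names here are hypothetical)
select x.product,
       c.full_stock_location   -- second column of the .csv file
from   xlsx_rows x
join   csv_lookup c
  on   c.stock_location_code = x.stock_location_code;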

Talend Open Studio to extract different CSVs to MongoDB

I have a couple of CSV files. All of my CSV files are nearly identical, but some columns in the CSV files differ from one another. As an example:
csv 1,2,3 have these columns:
id name post title cdate mdate path
but in csv 4,5 have these columns:
id name post title ddate mdate fpath
My output should be like this:
id name post title cdate mdate ddate path fpath
How to achieve this? Currently I am following this:
But with this procedure I can extract data from the CSVs, but not in the preferred output.
You need to put each file type in a different folder; let's say files 1, 2, 3 in folder1 and files 4, 5 in folder2.
Now, insert the files from the first folder into your MongoDB using this job:
tFileList --(iterate)--> tFileInputDelimited --(file_schema)--> tMap ---(DB_schema)--> tMongoDBOutput
Here, we use tMap to map the file schema to the DB schema; extra columns will remain blank.
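For example (values are made up), a record loaded from files 1, 2, 3 would look like this after the first job, with ddate and fpath left blank:

id: 1, name: alice, post: hello, title: intro, cdate: 2020-01-01, mdate: 2020-01-02, ddate: (blank), path: /a/b, fpath: (blank)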
Finally, use a second job, which is the same as the first job except that tFileList points to the second folder and tMap has a join between the already written data and the new set of files based on the id; the file schema is also different.
tMongoDBInput
|
|
tFileList --(iterate)--> tFileInputDelimited --(file_schema)--> tMap ---(DB_schema)--> tMongoDBOutput
You can use OnSubJobOK to link the first and the second job.

Prevent thousand separator in TSQL export to CSV

When exporting a TSQL select result to CSV, the values show a strange thousand separator. Partial code is:
CONVERT(DECIMAL(10,2),i.UserNumber_04) as CAP
The query results show perfect values, for example 1470.00, but the CSV or txt file shows strange values like 1,470,00. How can I prevent the first comma?
At first I thought it was just the formatting style in excel, but it does the same in txt files.
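One workaround to try, assuming the comma is being added by the export or regional settings rather than by the query itself, is to cast the value to text inside the query so the exporter receives a plain string (this mirrors the partial code above):

-- Emit the value as plain text so no locale formatting is applied on export
CAST(CONVERT(DECIMAL(10,2), i.UserNumber_04) AS VARCHAR(20)) as CAP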

Postgresql: Execute query write results to csv file - datatype money gets broken into two columns because of comma

After running Execute query write results to file, the columns in my output file for datatype money get broken into two columns. E.g. if my revenue is $500, it is displayed correctly. But if my revenue is $1,500.00, there is an issue: it gets broken into two columns, $1 and $500.00.
Can you please help me getting my results in a csv file in a single column for datatype money?
What is this command "execute query write results to file"? Do you mean COPY? If so, have a look at the FORCE QUOTE option http://www.postgresql.org/docs/current/static/sql-copy.html
Eg.
COPY yourtable to '/some/path/and/file.csv' CSV HEADER FORCE QUOTE *;
Note: if the application that is consuming the CSV files still fails because of the comma, you can change the delimiter from "," to whatever works for you (e.g. "|").
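For example, with the newer option syntax the same export (same placeholder path and table as above) with a pipe delimiter would be:

-- Pipe-delimited CSV export with every non-NULL value quoted
COPY yourtable TO '/some/path/and/file.csv'
  WITH (FORMAT csv, HEADER, DELIMITER '|', FORCE_QUOTE *);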
Additionally, if you do not want CSV, but you do want TSV, you can omit the CSV HEADER keywords and the results will output in tab-separated format.
The comma is the list separator on computers in some regions; in other regions, the semicolon is the list separator. So I think you need to replace the comma when you write it to CSV.