PostgreSQL: Execute query, write results to CSV file - money datatype gets split into two columns because of the comma

After running "Execute query write results to file", the money columns in my output file get broken into two columns. E.g. if my revenue is $500 it is displayed correctly, but if my revenue is $1,500.00 there is an issue: it gets split into two columns, $1 and $500.00.
Can you please help me get my results into a single column in the CSV file for the money datatype?

What is this command "execute query write results to file"? Do you mean COPY? If so, have a look at the FORCE QUOTE option http://www.postgresql.org/docs/current/static/sql-copy.html
E.g.
COPY yourtable TO '/some/path/and/file.csv' CSV HEADER FORCE QUOTE *;
Note: if the application that is consuming the csv files still fails because of the comma, you can change the delimiter from "," to whatever works for you (eg. "|").
Additionally, if you want TSV rather than CSV, you can omit the CSV HEADER keywords and COPY will produce its default tab-separated text format.
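The effect of FORCE QUOTE * - every field quoted, so a comma inside a money value stays in one column - can be sketched outside the database with Python's stdlib csv module (the rows here are hypothetical query results):

```python
import csv
import io

# Rows as they might come back from the query; one money value contains a comma.
rows = [["id", "revenue"], ["1", "$1,500.00"], ["2", "$500.00"]]

buf = io.StringIO()
# QUOTE_ALL wraps every field in quotes, analogous to COPY's FORCE QUOTE *.
writer = csv.writer(buf, quoting=csv.QUOTE_ALL)
writer.writerows(rows)

print(buf.getvalue())
```

Any CSV parser reading this output sees the quoted "$1,500.00" as a single field again, because the quotes tell it the embedded comma is data, not a delimiter.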

The comma is the list separator on computers in some regions; in other regions the semicolon is the list separator. So I think you need to replace (or quote) the comma when you write it to the CSV.

Related

PySpark read in multiple files CSV or TSV

I'm trying to load all the files in a folder. They have the same schema, but sometimes a different delimiter (i.e. usually comma-separated, but occasionally tab-separated).
Is there a way to pass in two delimiters?
To be specific, I don't want a two-character delimiter like "||", but to be able to treat multiple single-character delimiters the same way.
I'm letting it infer the schema. Commas work, but tab-separated rows just end up in the first column.
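A Spark CSV read takes a single delimiter, so one workaround is to detect each file's delimiter first and read each group of files separately. A minimal sketch of the detection step using Python's stdlib csv.Sniffer (the sample strings stand in for the first lines of real files):

```python
import csv

def detect_delimiter(sample: str) -> str:
    """Guess whether a file sample is comma- or tab-separated."""
    return csv.Sniffer().sniff(sample, delimiters=",\t").delimiter

# Example samples standing in for the first lines of two files.
comma_file = "a,b,c\n1,2,3\n"
tab_file = "a\tb\tc\n1\t2\t3\n"
```

With the delimiter known per file, each group can then be read with `spark.read.option("sep", delim).csv(paths)` and the resulting DataFrames unioned, since the schema is the same.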

Handling delimited files in Azure Data factory

I have a very large table with around 28 columns and 900k records.
I converted it to a CSV file (pipe-separated) and then tried to use that file to feed another table using ADF itself.
When I tried to use that file, it kept triggering an error saying some column datatype mismatched.
Digging further into the data, I found a few rows with a pipe (|) symbol in their text itself. So when converting it back, the text after the pipe was treated as the next column, hence the error.
So how can the conversion to CSV be handled efficiently when the column text contains the delimiter?
Option 1: If there is a possibility, I would suggest changing the delimiter to something other than pipe (|), since the column values also contain pipes in their text.
Option 2: In the CSV dataset, select a quote character to identify the columns.
Step 1: Copy data from table1 to CSV.
(Screenshots: source, sink CSV dataset, output.)
Step 2: Load the same CSV data into table2 with a copy activity, using the CSV output file of Step 1.
(Screenshots: source CSV dataset, sink dataset, output.)

Custom Row Delimiter in Azure Data Factory (ADF)

I have a CSV file that terminates a row with Comma and CRLF.
I set my dataset row delimiter to ",\r\n", but when I ran the pipeline it wouldn't accept this, treating it as multiple values in the delimiter. If I leave the comma out of the dataset row delimiter, the pipeline thinks there is an unnamed header. Is it possible in ADF to use this combination (comma + CRLF), i.e. ",\r\n", as a delimiter?
FirstName,LastName,Occupation,<CRLF Char>
Michael,Jordan,Doctor,<CRLF Char>
Update:
When running the copy activity, I encountered the same problem as you.
Then I selected Line feed (\n) as the Row delimiter at the Source.
I added the column mapping as follows:
When I ran the debug, the CSV file was successfully copied into the Azure SQL table.
I created a simple test. Do you just want ADF to read 3 columns?
This is the original CSV file.
In ADF, if we use the default Row delimiter and Column delimiter settings, select First row as header.
We can also select Edit and enter \r\n in the Row delimiter field.
You can import schema here.
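The workaround above - keep \n as the row delimiter and let the trailing comma produce an empty extra column, then map only the real columns - can be sketched outside ADF too. A minimal Python sketch, assuming the sample file layout from the question:

```python
import csv
import io

# Each row ends with a comma followed by CRLF, as in the question.
raw = "FirstName,LastName,Occupation,\r\nMichael,Jordan,Doctor,\r\n"

cleaned = []
for row in csv.reader(io.StringIO(raw)):
    # The comma before CRLF yields a trailing empty field; drop it.
    if row and row[-1] == "":
        row = row[:-1]
    cleaned.append(row)
```

This leaves exactly the three real columns, which is what the column mapping in the copy activity achieves.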

COPY ignore blank columns

Unfortunately I've got a huge number of CSV files with a missing separator, as below. Notice the second row has only one separator and two values. Currently I'm getting a "delimiter not found" error.
If only I could insert NULL into the 3rd column when there are only two values.
1,avc,99
2,xyz
3,timmy,6
Is there any way I can COPY these files into Redshift without modifying the CSV files?
Use the FILLRECORD parameter to load NULLs for blank columns
You can check the docs for more details
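If the files ever did have to be pre-processed instead of relying on FILLRECORD, padding short rows out to the expected column count would be the equivalent fix. A minimal Python sketch (not part of the Redshift answer), using the sample rows from the question:

```python
import csv
import io

raw = "1,avc,99\n2,xyz\n3,timmy,6\n"
NUM_COLS = 3

padded = []
for row in csv.reader(io.StringIO(raw)):
    # Pad missing trailing columns with empty strings, which a loader
    # configured to treat empty strings as NULL will load as NULLs -
    # the same outcome FILLRECORD gives.
    padded.append(row + [""] * (NUM_COLS - len(row)))
```

FILLRECORD does this inside the COPY itself, so no file modification is needed.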

Prevent thousand separator in TSQL export to CSV

When exporting a T-SQL SELECT result to CSV, the values show a strange thousands separator. Partial code:
CONVERT(DECIMAL(10,2),i.UserNumber_04) as CAP
The query results show perfect values, for example 1470.00, but the CSV or txt file shows strange values like 1,470,00. How can I prevent the first comma?
At first I thought it was just Excel's formatting, but it does the same in txt files.
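Since the query itself returns 1470.00, the comma is most likely added when the export tool renders the value as text using locale-aware (grouped) formatting. The difference is just whether a grouping separator is applied at render time; in Python terms, with a hypothetical value:

```python
value = 1470.00

plain = f"{value:.2f}"     # no grouping separator
grouped = f"{value:,.2f}"  # comma as the thousands separator
```

If the export path allows it, choosing an invariant/non-grouped number format (or converting the decimal to a plain string in the query) keeps the comma out of the file.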