Prevent thousand separator in TSQL export to CSV

When exporting a TSQL SELECT result to CSV, the values show a strange thousand separator. Partial code is:
CONVERT(DECIMAL(10,2),i.UserNumber_04) as CAP
The query results show perfect values, for example 1470.00, but the CSV or txt file shows strange values like 1,470,00. How can I prevent the first comma?
At first I thought it was just the formatting style in Excel, but it does the same in txt files.
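The grouping separator is typically applied by the client or export tool's regional settings rather than by CONVERT itself, so one workaround (a minimal sketch, reusing the column from the question) is to hand the export an explicit string, which T-SQL renders with an invariant period as the decimal separator:

-- force an invariant-format string so the export step cannot reformat the number
SELECT CONVERT(VARCHAR(20), CONVERT(DECIMAL(10,2), i.UserNumber_04)) AS CAP
FROM ... -- rest of the original query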

Related

Handling delimited files in Azure Data Factory

I have a very large table with around 28 columns and 900k records.
I converted it to a CSV file (pipe separated) and then tried to use that file to feed another table using ADF itself.
When I tried to use that file, it kept triggering an error about a column datatype mismatch.
Digging further into the data, I found a few rows with a pipe (|) symbol in their text itself. So when converting the file back, the text after the pipe was treated as the next column, hence the error.
How can the conversion to CSV be handled efficiently when some columns contain the delimiter in their text?
Option 1: If possible, I would suggest changing the delimiter to something other than pipe (|), since the column values also contain pipes in their text.
Option 2: In the CSV dataset, set a quote character so that values containing the delimiter are enclosed and read as a single column; an example follows the steps below.
Step 1: Copy data from table1 to CSV. (Source: table1; sink: the CSV dataset with the quote character set; output: the quoted CSV file.)
Step 2: Load the same CSV data into table2 with a copy activity. (Source: the CSV dataset produced by Step 1; sink: table2.)
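To illustrate Option 2 (a hypothetical row, not from the original post): with | as the delimiter and " as the quote character, a value containing a pipe is written as

123|"some text | with a pipe"|456

and the reader treats the quoted value as one column instead of splitting it at the embedded pipe.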

Crystal Reports Export to .csv comma delimited string inserting blank columns

I have an issue and need some expert help! I'm trying to export a .csv file directly from Crystal Reports and I keep getting blank columns in between my datasets. The report is one formula only in the details section, containing a string separated by commas. Any help or suggestions are greatly appreciated!
*Side note: the requirement is an export directly to .csv, so exporting to data only and then saving as .csv will not work.
Your approach implicitly converts numbers to text. This can result in extra commas due to thousand separators.
Instead, use an explicit conversion such as
ToText({your number}, 2, "")
to avoid the extra commas.
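For example, a details formula that builds the whole CSV line explicitly might look like this (the field names are hypothetical; the second argument of ToText is the number of decimal places and the third suppresses the thousands separator):

ToText({Orders.ID}, 0, "") & "," & ToText({Orders.Amount}, 2, "") & "," & {Customer.Name}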

Multiple tables with different columns on a single BIRT report

I have a BIRT report with multiple tables that have different datasets and numbers of columns. I generate output in .xls and convert it to .csv using the ssconvert utility on Unix. But in the .csv file I see extra delimiters for the tables with fewer columns. For example, here is the .csv output with extra "," characters:
table1 -- this has only 10 columns
5912,,,0,,,0,,0,,,0,,,0,,,0,,,
table2 -- this has 20 columns
'12619493',28/03/2018 17:27:40,sdfsdfasd,'61901492478'1.08,,,1.08,sdfs,,dsf,,sdfadfs,'738331',,434,,,,,,,333,
I tried putting the tables in a grid, but I still see the extra ",". I opened the .xls file and it has the same issue: the cells in Excel are merged.

Excel 2010 - Pivot using external csv file - how to make dates work?

I have a set of pivot tables that use external csv files as their data sources. The csv files originally contained dates in the format dd/mm/yy (e.g. 31/01/13). The pivot tables did not recognise these as dates. I converted the dates in the csv files to dd/mm/yyyy (e.g. 31/01/2013) but these were still not recognised as dates by the pivot tables.
I tried setting up a calculated field =DATEVALUE(date_from_csv) but when used in the pivot table (I'm using the Max option to select the most recent date) I get #VALUE! errors.
I have tried converting the csv file to xlsx and also importing the data into the workbook that contains the pivot table - but I can't change from the external connection to use the internal data. I don't want to rebuild the pivots as there are a lot of variables and formatting that would take ages to redo.
Any ideas?
The problem was caused by the date column being blank for some rows. I found that if I moved a row that had all its fields filled in to the top (just after the header line), Excel got the formats correct and the pivot tables now work!
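To illustrate the fix (hypothetical data, not from the original files): Excel infers the column formats from the first rows it reads, so a blank date in the first data row breaks detection:

date,amount
,50.00
31/01/2013,100.00

After moving a fully populated row directly under the header, the date column is recognised:

date,amount
31/01/2013,100.00
,50.00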

Postgresql: Execute query write results to csv file - datatype money gets broken into two columns because of comma

After running Execute query, write results to file, the columns in my output file for the money datatype get broken into two columns. E.g. if my revenue is $500 it is displayed correctly, but if my revenue is $1,500.00 there is an issue: it gets broken into two columns, $1 and $500.00.
Can you please help me get my results into a single column in a csv file for the money datatype?
What is this command, "execute query write results to file"? Do you mean COPY? If so, have a look at the FORCE QUOTE option: http://www.postgresql.org/docs/current/static/sql-copy.html
E.g.
COPY yourtable TO '/some/path/and/file.csv' CSV HEADER FORCE QUOTE *;
Note: if the application consuming the csv files still fails because of the comma, you can change the delimiter from "," to whatever works for you (e.g. "|").
Additionally, if you want TSV rather than CSV, you can omit the CSV HEADER keywords and the results will be output in tab-separated format.
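For instance, both variations can also be expressed with the newer parenthesized option syntax (a sketch; yourtable and the paths are placeholders):

-- quoted CSV with a pipe delimiter, so embedded commas cannot split a column
COPY yourtable TO '/some/path/and/file.csv' WITH (FORMAT csv, HEADER, DELIMITER '|', FORCE_QUOTE *);
-- default text format: tab-separated output
COPY yourtable TO '/some/path/and/file.tsv';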
In some regions the comma is the computer's list separator, while in others it is the semicolon. So I think you need to replace the comma (or change the delimiter) when you write the data to csv.