How to export large numbers from HeidiSQL to CSV

I use HeidiSQL to manage my database.
When I export grid rows to a CSV file, the large number 89610185002145111111 becomes 8.96102E+19.
How can I keep the number without the scientific notation conversion?

HeidiSQL does not do such a conversion. I tried to reproduce the problem but I get the unformatted number:
id;name
89610185002145111111;hey
That is viewed in a text editor, by the way. If you open the CSV in Excel, you may have to format the cell as Text, since Excel displays large numbers in scientific notation by default.

Related

How to use numbers as attributes in PostgreSQL?

I have a .csv file that has numbers as column names. I want to import that file into a table in PostgreSQL, but it gives an error.
I have 1024 columns, so I can't change them manually in the file. Is there a way around that?
If you want a table with 1024 columns, you are doing something wrong.
You should choose a different data model.
But it is possible to use numbers as column names, as long as you surround them with double quotes.
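For illustration, a minimal sketch (the table name here is hypothetical); PostgreSQL accepts numeric identifiers only when they are double-quoted:

-- Hypothetical table: "1", "2" and "1024" are quoted column names.
CREATE TABLE csv_import (
    "1" text,
    "2" text,
    "1024" text
);

-- The double quotes are also required whenever the columns are referenced.
SELECT "1", "1024" FROM csv_import;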

Keep leading zeros when joining data sources in Tableau

I am trying to create a data source in Tableau (10.0) where I am joining a table from SQL with an Excel file. The join happens on a site ID, but when reading the ID from the Excel source, Tableau strips the leading zeros (while SQL keeps them). I have seen this example
of adding the leading zeros back as a new calculated field, but the join still drops rows because the ID is not properly formatted when the join is made.
How do I get the Excel data source to read the column with the leading zeros so I can do the join?
1. Launch Excel and choose to open a new blank workbook.
2. Click the Data tab and select From Text.
3. Browse to the saved CSV file and select Import.
4. Ensure that Delimited is selected and click Next.
5. Select the appropriate delimiter (Comma for a CSV file) and click Next.
6. Select the column containing the data with leading zeros and set its column data format to Text.
7. Repeat for each column which contains leading zeros.
8. Click Finish.
9. Click OK.
I've never heard of or used Tableau, but it sounds as though something (the Jet/ACE database driver used to read the Excel file?) is determining the column to be numeric and parsing the data as numbers, losing the leading zeroes.
If your attempts at putting them back are giving you grief, I'd recommend trying the other direction instead: get SQL Server to convert its strings to numbers. Number matching should be more reliable than string matching, as long as the two systems don't handle rounding differently :)
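As a rough sketch of that direction (the table and column names are assumptions, not from the question): cast the zero-padded varchar on the SQL Server side to a number, so it matches the numeric value Excel produces after dropping the zeros.

-- Hypothetical sites table with site_id stored as varchar, e.g. '00123'.
SELECT CAST(site_id AS bigint) AS site_id_num
FROM sites;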
If your Excel file was read in from a CSV and the Site ID is showing "Number Stored as Text", I think you can solve your problem by telling Tableau on the data source entry that the field is actually a string. On the data source preview, change the "#" (designating a number) to string, so that both the SQL source and the Excel source are strings before doing the join.
This typically has to do with the way Excel stores values, as mentioned above. I would play around with the number formatting for the Site ID column in Excel itself, not Tableau, and change it to "Text" in Excel. You can verify whether Tableau will read it properly with the leading 0s by exporting your Excel file to CSV and looking in the CSV file to see if the leading 0s are still there.

Excel 2010 - Pivot using external CSV file - how to make dates work?

I have a set of pivot tables that use external csv files as their data sources. The csv files originally contained dates in the format dd/mm/yy (e.g. 31/01/13). The pivot tables did not recognise these as dates. I converted the dates in the csv files to dd/mm/yyyy (e.g. 31/01/2013) but these were still not recognised as dates by the pivot tables.
I tried setting up a calculated field =DATEVALUE(date_from_csv), but when it is used in the pivot table (I'm using the Max option to select the most recent date) I get #VALUE! errors.
I have tried converting the CSV file to XLSX and also importing the data into the workbook that contains the pivot table, but I can't change the external connection to use the internal data. I don't want to rebuild the pivots, as there are a lot of variables and formatting that would take ages to redo.
Any ideas?
The problem was caused by the date column being blank for some rows. I found that if I moved a row that had all the fields filled in to the top (just after the header line), Excel got the formats correct and the pivot tables now work!
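If reordering the source rows is not practical, one sketch of an alternative is to guard the calculated field against the blank cells that trigger the error (using the asker's date_from_csv field):

=IF(date_from_csv="", "", DATEVALUE(date_from_csv))

DATEVALUE raises #VALUE! on an empty string, while Max simply ignores the empty text this guard returns.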

pgsql2shp.exe cuts off text at a maximum of 254 characters (varchar(254))

I'm using the pgsql2shp tool to generate *.shp files from geometries in Postgres. The thing is that I have a description column with a lot of text; in the Postgres DB it is of type text. But when I use pgsql2shp, these columns are cut off at a maximum of 254 characters, because it makes the column a varchar(254).
Any ideas to make this work?
After some more googling and asking around, I found out that the .dbf file accompanying the *.shp is based on the dBase IV format, which has a maximum text field length of 254 characters. Therefore it cuts the text off.
So I need to find some other solution.
As you have discovered, this is a limitation of Shapefiles. To get more characters in the output, you need to export to a different format.
You can use ogr2ogr to convert the spatial data into several different formats, such as Spatialite, GeoJSON, etc.
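For example, a sketch of an ogr2ogr invocation exporting from Postgres to GeoJSON; the connection string, table and column names are placeholders:

ogr2ogr -f "GeoJSON" output.geojson \
  PG:"host=localhost dbname=mydb user=postgres" \
  -sql "SELECT geom, description FROM my_table"

GeoJSON has no fixed field-width limit, so the full text of the description column survives the export.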

SQL Developer - export table to XLS

How do I export entire query results to XLS format without the values being truncated (and with the header intact)?
E.g. the value is rounded to 276408428673510000 when the actual value is supposed to be 276408428673508271.
Use this method:
After selecting the data with Ctrl+A in the grid, use Ctrl+Shift+C to copy the data with the header. After pasting the data into MS Excel, try changing the format to Number or Text; 'Format Cells' is available as a right-click option in MS Excel.
My solution was to use the LPAD or RPAD functions: you give the exact length that you want, and when you export the data, the values are not rounded.
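A minimal sketch of that idea, with a hypothetical table and column name; LPAD returns a string, so Excel no longer reinterprets the value as a 15-digit floating-point number:

-- Hypothetical names: pad the ID out to 20 characters so it is
-- exported as text rather than as a number.
SELECT LPAD(TO_CHAR(id), 20, '0') AS id_padded
FROM orders;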