Wrongly formatted numbers in CSV export from Grafana graphs - export-to-csv

One of our customers has a strange problem with CSV export from Grafana (3.1.1 - we still run this "ancient" version due to some other dependencies).
When they export numbers from a graph showing rates as percentages, they repeatedly get strangely formatted results:
2018-09-11T00:00:00.000Z;44.773.054;39.500.635;37.322.795
2018-09-12T00:00:00.000Z;51.743.917;4.409.222;37.691.824
2018-09-13T00:00:00.000Z;1.421.662;4.341.522;3.631.485
Proper results should look like this:
2018-09-11T00:00:00.000Z;4.4773054;3.9500635;3.7322795
2018-09-12T00:00:00.000Z;5.1743917;4.409222;3.7691824
2018-09-13T00:00:00.000Z;1.421662;4.341522;3.631485
As you can see, the digits are generally OK, but the decimal point is gone and the number is formatted as a huge integer with separators for thousands, millions, etc.
The client uses Windows 7 Enterprise with the latest Chrome, and the OS language is set to German. Our best guess is that it could be caused by a locale setting, because the German number format differs from the UK/US format, but we are unable to reproduce it on any of our computers.
Has anyone already encountered something like this? I tried to google it but so far have not found anything close enough. Thank you very much.

The CSV is generated in the browser, and the numeric values are formatted by the toLocaleString function, which uses the browser's locale setting. You need to change the browser's locale configuration.
const x = 123456789;
console.log('Original: ' + x);                       // 123456789
console.log('en-US: ' + x.toLocaleString('en-US'));  // 123,456,789
console.log('de-DE: ' + x.toLocaleString('de-DE'));  // 123.456.789

Related

Dataprep import dataset does not detect headers in first row automatically

I am importing a dataset from Google Cloud Storage (parameterized) into Dataprep. So far this has worked perfectly fine, and one of the features I liked is that it auto-detects that the first row in my (application/octet-stream) .csv file contains my headers.
However, today I tried to import a new dataset and it did not detect the headers; instead it auto-assigned column1, column2...
What has changed, and why is this the case? I have checked the auto-detect box and I use UTF-8.
While the auto-detect option is usually pretty good, there are times it fails, for numerous reasons. I've specifically noticed this when the field names contain certain characters (e.g. commas, invisible characters like zero-width non-joiners, or null bytes), or when multiple different styles of newline delimiter are used within the same file.
Another case where I've seen this is when there were more columns of data than there were headers.
As you already hit on, you can use the following snippet to do mostly the same thing:
rename type: header method: filter sanitize: true
. . . or make separate recipe steps to convert the first row to header and then bulk-rename to your own liking.
More often than not, however, I've found that when auto-detect fails on a previously working file, it tends to be a sign of some sort of issue with the source file. I would look for mismatched data and misplaced commas in the output, and compare the header and some data rows against the original source in a plaintext editor.
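If you want to rule out source-file issues like these before digging deeper, here is a rough sketch of the kind of check you can run outside Dataprep (Python, with a placeholder file name):
import csv

# Sanity-check a CSV whose headers Dataprep suddenly stops detecting.
# "data.csv" is a placeholder for the file being imported.
with open("data.csv", newline="", encoding="utf-8") as f:
    reader = csv.reader(f)
    header = next(reader)

    # Headers containing invisible characters (zero-width non-joiners,
    # null bytes, ...) are one of the cases mentioned above.
    for name in header:
        if not name.isprintable():
            print(f"Suspicious header field: {name!r}")

    # Records with more (or fewer) fields than the header are another.
    for rec_no, row in enumerate(reader, start=2):
        if len(row) != len(header):
            print(f"Record {rec_no}: header has {len(header)} fields, this row has {len(row)}")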
When all else fails, you can try a CSV validator . . . but in my experience they tend to be incredibly opinionated when it comes to the formatting options of the file, so depending on the system generating the CSV, they could either miss errors or give false positives. I have had two experiences where auto-detect failed for no apparent reason on perfectly clean files, so it is possible that the process was simply skipped for some reason.
It should also be noted that if you have a structured file that was correctly detected but want to revert it, you can go to the dataset details, select the "..." (More) button, and choose "Remove structure..." (I'm hoping that one day they'll let you do the opposite when you want to add structure to a raw dataset or work around bugs like this!)
Best of luck!
Can be resolved as a transformation within a Flow:
rename type: header method: filter sanitize: true

tSendMail - New Line Trouble

I am trying to create an email with some job status information, which I wish to spread across multiple lines. However, whatever I do, I get the output on one line. I have changed the MIME type to HTML and used "\n", "\r", "\r\n", and the String object's newline. Nothing seems to work.
I did notice that these characters get processed, even though the outcome isn't as expected: I don't see them in the email body, which suggests that the text processor accepts them, it just doesn't process them the way it should. Am I looking at a bug in the component?
I am on Talend Open Studio 7.0.1, on an Ubuntu 16.04.4 VM, on a Windows 10 system (if that helps).
HTML <br> works.
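The same idea outside Talend, as a minimal Python sketch (the addresses and SMTP host are made up): once the body is sent as text/html, the mail client renders <br> tags as line breaks, while "\n" is simply collapsed.
import smtplib
from email.mime.text import MIMEText

# In an HTML body, <br> produces the line break; "\n" is ignored by the renderer.
body = "Job finished.<br>Rows read: 120<br>Rows rejected: 3"

msg = MIMEText(body, "html")        # MIME type text/html, like setting tSendMail to HTML
msg["Subject"] = "Job status"
msg["From"] = "etl@example.com"     # hypothetical addresses
msg["To"] = "ops@example.com"

with smtplib.SMTP("mail.example.com") as smtp:   # hypothetical SMTP host
    smtp.send_message(msg)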
I tried it earlier, but it looks like I didn't structure my HTML tags well, so it failed. I did it again from the start and got it right.
Guess what - The more you try, the more you learn. :)

Progress 4GL: Creating a .xlsx file without Excel

Version: 10.2b
I want to create a .xlsx file with Progress, but the machine this will run on doesn't have Excel.
Can someone point me in the right direction on how to do this?
Is there a library already written that can do something like this?
Thanks for any help!
The project was moved to the Free DocxFactory Project.
It was rewritten in C++ with Progress 4GL/ABL wrappers and a tutorial.
It is 300 times faster, and a lot of new features were added, including barcodes, paging features, etc.,
and it's completely free for private and commercial use without any time or feature limits.
HTH
You might find this to be useful: http://www.oehive.org/project/libooxml although it appears that there is nothing there right now. There might also be an older version of that code here: http://www.oehive.org/project/lib
Also, in many cases the need to provide data to Excel can be satisfied with a tab- or comma-delimited file.
Another trick is to create an HTML table fragment. Excel imports those quite nicely.
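To show the shape of such a fragment, here is a minimal sketch (in Python rather than ABL, with made-up data and path); the generated file is just a plain HTML table that Excel will lay out as cells:
# Write a minimal HTML table fragment; Excel imports files like this quite happily.
rows = [("ABC", 123), ("DEF", 456)]

with open("test.html", "w", encoding="utf-8") as f:
    f.write("<table>\n")
    f.write("<tr><th>col1</th><th>col2</th></tr>\n")
    for col1, col2 in rows:
        f.write(f"<tr><td>{col1}</td><td>{col2}</td></tr>\n")
    f.write("</table>\n")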
A super simple example of how to export a semicolon-delimited file from a temp-table. In 90% of cases this is enough Excel support - at least it has been for me.
DEFINE STREAM strCsv.

DEFINE TEMP-TABLE ttExample NO-UNDO
    FIELD col1 AS CHARACTER
    FIELD col2 AS INTEGER.

CREATE ttExample.
ASSIGN ttExample.col1 = "ABC"
       ttExample.col2 = 123.

CREATE ttExample.
ASSIGN ttExample.col1 = "DEF"
       ttExample.col2 = 456.

OUTPUT STREAM strCsv TO VALUE("c:\test\test.csv").
FOR EACH ttExample NO-LOCK:
    /* Write to the named stream, otherwise EXPORT goes to the default output. */
    EXPORT STREAM strCsv DELIMITER ";" ttExample.
END.
OUTPUT STREAM strCsv CLOSE.

MATLAB: How to import multiple CSV files with mixed data types

I have just started learning MATLAB and am having difficulty importing CSV files into a 2-D array.
Here is a sample CSV for my needs (all the CSV files are in the same format, with fixed columns):
Date, Code, Number....
2012/1/1, 00020.x1, 10
2012/1/2, 00203.x1, 0300
...
As csvread() only works with numeric data, should I import the numeric and text data separately, or is there any quick way to import multiple CSV files with mixed data types?
Thanks a lot!!
What you're looking for is probably the function xlsread.
It opens any file recognized by Excel, and automatically separates text data from numerical data.
The problem is that the default list separator, at least on my computer, is ; and not , (at least for my locale here in Brazil). So xlsread will try to separate the fields in the file with a ; and not a comma as you'd like.
To change that, you have to change your system locale to use the comma as the list separator. To do this in Windows Vista, click Start, Control Panel, Regional and Language Options, Customize this format, and change the List Separator from ';' to ','. On other Windows versions the process should be almost the same.
After doing that, typing:
[num, txt, all] = xlsread('your_file.csv');
will return something like:
num =
10
300
txt =
'01/01/2012' ' 00020.x1'
'02/01/2012' ' 00203.x1'
all =
'01/01/2012' ' 00020.x1' [ 10]
'02/01/2012' ' 00203.x1' [300]
Notice that if your locale already has the list separator set to ',', you won't have to change anything on your system to make that work.
If you don't want to change your system just to use the xlsread function, then you could use the textscan function described here: http://www.mathworks.com/help/techdoc/ref/textscan.html
The problem is that it is not as simple as calling it: you will have to open the file, iterate over the lines, and tell MATLAB explicitly the format of your file.
Best regards
I recently wrote a function that solves exactly this problem. See delimread.
It's worth noting that xlsread on CSV files only works on Windows. On Linux or Mac, xlsread works in 'basic' mode, which cannot read CSV files. It might not be a great idea in the long run to use xlsread in case you need to migrate across platforms or automate code runs on Linux servers.
xlsread is also much slower than other text parsing functions since it opens an Excel session to read the file.

Create Numbers file and open it with Numbers on iPad

I would like to do a task that is quite simple on other OSes but not so trivial on iOS: namely, I want to create a file and open it in Numbers.
I can preview the file with UIDocumentInteractionController and then offer the user the option to open it.
This seems to me quite a reasonable solution. However, I need to offer a proper file format. I suppose CSV and XLS would be reasonable to implement and would most probably work, but I would still like to use the native Numbers format if possible. However, I can't find any info about this file format.
Basically, this task is about exporting data to another app and then continuing to work with it there.
I don't know of a library that can create native Numbers files. There are however some libraries that allow creating XLS files. Since Numbers fully supports XLS, this is probably the way to go.
There is a commercial library available that might work on the iPhone (costs $200): http://www.libxl.com/
As for free XLS libraries, I only know xlwt, a Python module. You could set up a webservice that creates an XLS file for your app, using xlwt on the server side.
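A rough sketch of what that server-side piece could look like with xlwt (sheet name, headers and rows are placeholder data):
import xlwt

# Build a small .xls workbook on the server; the app can then download it
# and hand it over to Numbers, which reads XLS.
wb = xlwt.Workbook()
ws = wb.add_sheet("Export")

ws.write(0, 0, "ID")
ws.write(0, 1, "First Name")
ws.write(0, 2, "Salary")

data = [(1, "John", 3400.20), (2, "Fred", 2000.60)]
for row, (pid, name, salary) in enumerate(data, start=1):
    ws.write(row, 0, pid)
    ws.write(row, 1, name)
    ws.write(row, 2, salary)

wb.save("export.xls")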
If you want to pass information to Numbers, you can probably also use CSV files. If you use CSV files, you must be aware of some things. There are two kinds of CSV files: the comma-separated kind (used in English-speaking countries) and the semicolon-separated kind (used in continental Europe).
The comma separated CSV files look for example like this:
"ID","First Name","Last Name","Salary"
1,"John","Malkovich",3400.20
2,"Fred","Astaire",2000.60
The second kind of CSV file is semicolon-separated and uses a comma as the decimal mark. It looks like this:
"ID";"First Name";"Last Name";"Salary"
1;"John";"Malkovich";3400,20
2;"Fred";"Astaire";2000,60
On the Macintosh, Numbers expects a different format depending on the Region setting. If you have your Region set to the US, it will expect the first kind. If you choose Germany, it will expect the second kind.
I don't know what kind of files Numbers on the iPad expects.
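If you end up generating the CSV yourself (for example on the same server as the xlwt suggestion above), here is a minimal Python sketch of producing either kind; the rows reuse the sample data from above and the file names are made up:
import csv

header = ["ID", "First Name", "Last Name", "Salary"]
rows = [(1, "John", "Malkovich", 3400.20),
        (2, "Fred", "Astaire", 2000.60)]

# First kind: comma-separated, dot as the decimal mark.
with open("people_comma.csv", "w", newline="", encoding="utf-8") as f:
    w = csv.writer(f)                      # default delimiter is ","
    w.writerow(header)
    for pid, first, last, salary in rows:
        w.writerow([pid, first, last, f"{salary:.2f}"])

# Second kind: semicolon-separated, comma as the decimal mark.
with open("people_semicolon.csv", "w", newline="", encoding="utf-8") as f:
    w = csv.writer(f, delimiter=";")
    w.writerow(header)
    for pid, first, last, salary in rows:
        w.writerow([pid, first, last, f"{salary:.2f}".replace(".", ",")])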
Another alternative would be copy and paste: try copying tab-separated text into the clipboard.
I hope this helps. I've contacted the libxl team and they responded with a link to the demo version of their iPhone library: http://www.libxl.com/download/libxl-iphone.zip