Tableau rounds my numbers to 16 decimals on import. How can I stop this? - tableau-api

When connecting to a data source and importing data into Tableau, decimals are limited to 16 places. I have large decimals that are being multiplied by millions of products, so these rounding errors are affecting my end calculations.
I made sure the data type is coming in as Number (decimal).
I would like the data to be imported into Tableau un-rounded.

Tableau, like Excel, uses floating point precision for its numbering system.
This means that Tableau will only provide 15 digits of precision for numbers.
If this were for display purposes, I would suggest bringing the numbers in as strings. See this similar answer. However, because mathematical operations are being performed, the loss of precision is unavoidable; a short sketch of the limit follows the suggestions below.
If possible:
Perform mathematical operations elsewhere. For example, a custom SQL query that will bring the table to Tableau with operations already complete.
Try to operate at a smaller scale: instead of multiplying by millions, multiply by hundreds or thousands and change the name of the field to represent the scale (e.g. 'thousands of millions' instead of billions). Something like this, perhaps.
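To make the limit concrete, here is a minimal sketch in Python rather than Tableau (the example value is made up); it shows what happens to extra significant digits when a value is stored in IEEE 754 double precision, which is the representation behind Tableau's Number (decimal) type:
# Minimal sketch of the ~15-16 significant digit limit of double precision.
from decimal import Decimal

exact = Decimal("1234567.8901234567890123")   # 23 significant digits
as_double = float(exact)                      # what a double-backed field keeps

print(as_double)                   # only ~16 significant digits survive
print(Decimal(as_double) - exact)  # the per-value rounding error, which then
                                   # compounds when multiplied by large counts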

Related

T-SQL Data type for fixed precision and variable scale

I have a set of data with a precision of 16 digits; however, this can range from very large numbers with all 16 digits to the left of the decimal point to very small numbers with all digits to the right of the decimal point (e.g. 1234567890123456.0 and 0.1234567890123456). I am trying to figure out the correct ("best") data type to store this data in. I need to store the exact values and not approximations, so float and real are not viable options. Numeric or decimal seem appropriate, however I am getting hung up on the most efficient precision and scale to set; it seems I must go with (32,16) to account for both extremes, but that seems inefficient, as I am requesting twice the storage that I will ever use. Is there a better option?
Thank You for your assistance.
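For illustration only, here is the shape of the two extremes written out with Python's decimal module (this is just a restatement of the question's examples, not an answer): the two values carry the same 16 significant digits and differ only in where the decimal point sits, which is why a single fixed precision/scale type has to reserve room for both layouts at once.
# Same 16 significant digits, decimal point shifted by 16 places.
from decimal import Decimal

big = Decimal("1234567890123456.0")
small = Decimal("0.1234567890123456")

# A fixed precision/scale type must cover both layouts simultaneously:
# 16 digits before the point plus 16 after, i.e. (32, 16).
print(big, small)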

Simulink data types

I'm reading an IMU on an Arduino board with an S-function block in Simulink using double or single data types, though I only need 2 decimal places of precision (i.e. "xyz.ab"). I want to improve performance by changing data types, and I wonder:
is there a way to decrease the precision to 2 decimals in the S-function block, or by adding/using any other conversion blocks/code in Simulink, aside from using the fixed-point tool?
For true fixed-point transfer, the Fixed-Point Toolbox is the most general answer, as stated in Phil's comment.
However, to avoid toolbox use, you could also devise your own fixed-point integer format and add a block that takes a floating point input and converts it into an integer format (and vice versa on the output).
E.g. if you know the range is -327.68 < var < 327.67, you could just define your value as an int16 divided by 100. In a MATLAB Function block you would then just say
y=int16(u*100.0);
to convert the input to the S-function.
On the output it would be a reversal
y=double(u)/100.0;
(Eml/matlab function code can be avoided by using multiply, divide and convert blocks.)
However, be mindful of the bits available and of the fact that the scaling (*, /) operations are done on the floating point side rather than on the integer.
2^(nrOfBits-1)-1 shows you what range you can represent for signed types; for the unsigned types uint8/16/32 the range is 2^(nrOfBits)-1. Then you use the scaling to fit the representable integer range into the floating point range you actually use. The scaled range divided by 2^nrOfBits tells you what the resolution will be (how large the steps are).
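A small sketch of that range/resolution arithmetic, written in plain Python for clarity (the scale factor of 100 is an assumption taken from the int16 example above):
# Range and resolution of a scaled signed 16-bit integer.
n_bits = 16
scale = 100.0          # assumed scale factor, matching y = int16(u*100.0)

max_int = 2 ** (n_bits - 1) - 1   # 32767, largest positive signed value
min_int = -2 ** (n_bits - 1)      # -32768

max_val = max_int / scale         # 327.67
min_val = min_int / scale         # -327.68
resolution = (max_val - min_val) / 2 ** n_bits   # ~0.01, the step size

print(min_val, max_val, resolution)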
You will need to scale the variables correspondingly on the Arduino side as well when you go to an integer interface of this type. (I'm assuming you have access to that code; if not, it'd be hard to use any other interface than what is already provided.)
Note that, depending on the conversion used, intXX(doubleVar*scale) may truncate the values to integer rather than round them. If you need well-defined round-to-nearest behaviour you should include the round function explicitly, e.g.:
int16(round(doubleVar*scale));
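The difference between truncating and rounding, sketched in plain Python (int() truncates toward zero, round() rounds to nearest; this is only an analogy for the MATLAB casts above):
value = 3.146
scale = 100.0

truncated = int(value * scale)        # 314: the fractional part is dropped
rounded = int(round(value * scale))   # 315: rounded to the nearest step

print(truncated, rounded)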
You don't need to use a base-10 scale; any scaling and offset can be used, but it's easier to read the numbers manually if you keep to base 10 (e.g. 0.1, 10.0, 100.0, 1000.0, etc.).
As a final note, if the Arduino code interface is floating point (single/double) and can't be changed to an integer type, you will not get any speedup from rounding decimals, since the full floating point value is what will be transferred anyway. Even if you do manage to reduce the data a bit using integers, I suspect this might not give a huge speedup unless you transfer large amounts of data; the interface code will have a comparatively large overhead anyway.
Good luck with your project!

Efficiently Store Decimal Numbers with Many Leading Zeros in Postgresql

A number like:
0.000000000000000000000000000000000000000123456
is difficult to store without a large performance penalty with the numeric types available in Postgres. This question addresses a similar problem, but I don't feel it came to an acceptable resolution. Currently one of my colleagues has landed on rounding numbers like this to 15 decimal places and just storing them as:
0.000000000000001
So that the double precision numeric type can be used, which avoids the penalty associated with moving to a decimal numeric type. Numbers this small are, for my purposes, more or less functionally equivalent, because they are all very small (and mean more or less the same thing). However, we are graphing these results, and when a large portion of the data set is rounded like this it looks exceptionally stupid (a flat line on the graph).
Because we are storing tens of thousands of these numbers and operating on them, the decimal numeric type is not a good option for us as the performance penalty is too large.
I am a scientist, and my natural inclination would be to store these kinds of numbers in scientific notation, but it doesn't appear that Postgres has this kind of functionality. I don't actually need all of the precision in the number; I just want to preserve 4 digits or so, so I don't even need the 15 digits that the float numeric type offers. What are the advantages and disadvantages of storing these numbers in two fields like this:
1.234 (real)
-40 (smallint)
where this is equivalent to 1.234*10^-40? This would allow for roughly 32000 leading decimal places, with only 2 bytes used to store the exponent and 4 bytes to store the real value, for a total of at most 6 bytes per number (it gives me exactly the number I want to store and takes less space than the existing solution, which consumes 8 bytes). It also seems like sorting these numbers would be much easier, as you'd only need to sort on the smallint field first and the real field second.
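A sketch of the proposed split, written in Python rather than SQL just to make the encode/decode step concrete (the 4-significant-digit choice is taken from the question; the function names are made up):
# Keep ~4 significant digits in a float mantissa and the base-10 exponent in
# a small integer, then reconstruct the value.
import math

def encode(x, sig_digits=4):
    # Return (mantissa, exponent) such that x ~= mantissa * 10**exponent.
    if x == 0:
        return 0.0, 0
    exponent = math.floor(math.log10(abs(x)))
    mantissa = round(x / 10 ** exponent, sig_digits - 1)
    return mantissa, exponent

def decode(mantissa, exponent):
    return mantissa * 10 ** exponent

m, e = encode(1.23456e-40)
print(m, e)          # ~1.235 and -40
print(decode(m, e))  # ~1.235e-40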
You and/or your colleague seem to be confused about what numbers can be represented using the floating point formats.
A double precision (aka float) number can store at least 15 significant digits, in a range from about 1e-307 to 1e+308. You have to think of it as scientific notation: remove all the zeroes and move them into the exponent. If whatever you have, once in scientific notation, has 15 or fewer significant digits and an exponent between -307 and +308, it can be stored as is.
That means that 0.000000000000000000000000000000000000000123456 can definitely be stored as a double precision, and you'll keep all the significant digits (123456). No need to round that to 0.000000000000001 or anything like that.
Floating point numbers have the well-known issue of not representing most decimal fractions exactly (decimal numbers in base 10 do not necessarily map to finite fractions in base 2), but that's probably not an issue for you (it matters if you need to do exact comparisons on such numbers).
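A quick check of that claim in Python, which uses the same IEEE 754 double precision as PostgreSQL's double precision type:
x = 0.000000000000000000000000000000000000000123456

print(x)                          # 1.23456e-40: the significant digits survive
print(x == float("1.23456e-40"))  # True: same stored value, just written in
                                  # scientific notation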
What are the advantages and disadvantages of storing these numbers in two fields like this
You'll have to manage 2 columns instead of one.
Roughly, what you'll be doing is saving space by storing lower-precision floats. If you only need 4 digits of precision, you can go further and save 2 more bytes by using smallint + smallint (a 1000-9999 mantissa plus an exponent). Using that format, you could cram the two smallints into one 32-bit int (exponent*2^16 + mantissa); that should work too.
That's assuming that you need to save storage space and/or need to go beyond the +/-308 exponent limit of the double precision float. If that's not the case, the standard format is fine.
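A sketch of the exponent*2^16 + mantissa packing mentioned above, in plain Python bit arithmetic (the +32768 offset, used to keep the exponent field non-negative, is an illustrative choice and not part of the original answer):
def pack(mantissa, exponent):
    return (exponent + 32768) * 2 ** 16 + mantissa

def unpack(packed):
    mantissa = packed % 2 ** 16
    exponent = packed // 2 ** 16 - 32768
    return mantissa, exponent

p = pack(1234, -40)   # mantissa 1234 (4 significant digits), exponent -40
print(unpack(p))      # (1234, -40)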

Is it possible to change decimation precision of Scope in Matlab?

I'm saving the data of the scope to the workspace while simulating. I've seen that the data has four decimal places, e.g. 1.4617. In my case I need higher precision. Is it possible to adjust the data? Since I need a script to build my model it would be the easiest way. Alternatively I would use the "To Workspace" block.
As was already mentioned in the comments:
MATLAB and Simulink do not usually calculate with 4 digits of precision, but with many more. (They use double precision, which in practice means roughly 15-16 significant decimal digits.)
However, the 4 digits are what is shown by default.
To show more digits, try format long; just don't worry that the digits not being displayed somehow reduce the accuracy of your calculations, because the display setting does not affect the stored values.
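The same display-versus-storage distinction, sketched in Python for comparison (format long is the MATLAB way of asking for the full value):
x = 1.461688874301132

print(f"{x:.4f}")   # 1.4617: what a 4-digit display shows
print(repr(x))      # 1.461688874301132: what is actually stored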

Problem with very small numbers?

I tried to assign a very small number to a double value, like so:
double verySmall = 0.000000001;
That's 9 fractional digits. For some reason, when I multiply this value by 10, I get something like 0.000000007. I vaguely remember there were problems with writing numbers like this as plain-text literals in source code. Do I have to wrap it in some function or a directive in order to feed it correctly to the compiler? Or is it fine to type such small numbers directly?
The problem is with floating point arithmetic, not with writing literals in source code. Floating point is not designed to be exact. The best way around it is to not use the built-in double: use integers only (if possible) with power-of-10 coefficients, sum everything up, and display the final useful figure after rounding.
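A sketch of that integers-with-a-power-of-10-coefficient idea in Python (the choice of 10^-9 as the base unit is an assumption matching the 9 fractional digits in the question):
# Work in integer "nano-units" so the arithmetic is exact; convert to a
# decimal string only for display.
NANO = 10 ** 9

very_small = 1            # 0.000000001 expressed in nano-units
result = very_small * 10  # exact integer arithmetic

print(f"{result / NANO:.9f}")   # 0.000000010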
Standard floating point numbers are not stored in a perfect format; they're stored in a format that's fairly compact and fairly easy to perform math on. They become imprecise at surprisingly modest precision levels, but they are fast. More here.
If you're dealing with very small numbers, you'll want to see if Objective-C or Cocoa provides something analogous to the java.math.BigDecimal class in Java. That class exists precisely for dealing with numbers where precision is more important than speed. If there isn't one, you may need to port it (the source for BigDecimal is available and fairly straightforward).
EDIT: iKenndac points out the NSDecimalNumber class, which is the analogue of java.math.BigDecimal. No port required.
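For comparison, the same idea in Python's decimal module, which plays the role BigDecimal/NSDecimalNumber play in Java/Cocoa:
from decimal import Decimal

very_small = Decimal("0.000000001")
print(very_small * 10)             # 1.0E-8, exactly
print(float("0.000000001") * 10)   # the plain-double result for contrast;
                                   # it may show a tiny representation error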
As usual, you need to read material like this in order to learn more about how floating-point numbers work on computers. You cannot expect to be able to store any arbitrary fraction with perfect results, just as you can't expect to store any arbitrary integer. There are bits underneath, and their number is limited.