I have a spreadsheet containing both real and complex numbers. Some of them are like
0.48686
while others are like
4.85609+j3.24184
I am trying to round them, in order to have only two decimal places.
While Format > Cells works on the real numbers, it doesn't work on the complex ones, because LibreOffice interprets them as strings.
I have looked it up on Google, but couldn't find anyone with the same problem.
I wanted to know whether anyone has already developed a macro for this before I try to write one myself.
You could round the complex number by combining the ROUND() and the COMPLEX() functions:
A3 has the formula
=COMPLEX(A1;A2)
(with the real part in A1 and the imaginary part in A2), while A4 has
=COMPLEX(ROUND(A1;2);ROUND(A2;2))
(adapting a solution from a German MS Office forum to OOo Calc / LO Calc)
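If the complex values live in a single cell as text (as in your example), a macro would have to parse and rebuild the string. Below is a minimal standalone Python sketch of that idea, not a ready-made Calc macro (LibreOffice does support Basic and Python macros, but hooking this into a sheet via UNO is not shown); the helper name round_complex_text and the "a+jb" layout are taken from the example above.

# Illustrative sketch only: round both parts of a complex number written
# as text, e.g. "4.85609+j3.24184", assuming the "a+jb" layout above.
import re

def round_complex_text(value, digits=2):
    match = re.fullmatch(r"\s*([+-]?[\d.]+)\s*([+-])\s*j([\d.]+)\s*", value)
    if match is None:
        # Plain real number: round it directly.
        return str(round(float(value), digits))
    real = round(float(match.group(1)), digits)
    imag = round(float(match.group(2) + match.group(3)), digits)
    sign = "+" if imag >= 0 else "-"
    return f"{real}{sign}j{abs(imag)}"

print(round_complex_text("4.85609+j3.24184"))  # 4.86+j3.24
print(round_complex_text("0.48686"))           # 0.49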
Related
I am reading covariance data from flat files. Because the floating-point numbers are not read in full precision, the covariance matrix ends up not satisfying the positive semi-definite (PSD) requirement.
For instance, this is one of the inputs from the raw text:
"-0.581050672" (correction: the raw text is actually -5.801050672E-01)
When I read this into kdb and cast with F, it results in -0.50810507. When I do this for all the entries and check the covariance, it unfortunately does not satisfy the PSD constraints. The other hack I have been using is to add a small amount of noise on the identity matrix…
Apart from this hack, is there a way to read the above data into a proper floating-point number up to the 9th digit? I tried \P and .Q.f, but these only seem to affect the display.
Thank you
Sorry, this does not seem to be a kdb issue. I was exporting the data into different software, and the floating-point precision was lost during that process. Thanks for the pointer.
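To illustrate the display-versus-storage distinction (a hedged sketch in Python rather than q): a 64-bit double comfortably holds the 9 significant digits in the raw text, and precision is only lost when the value is displayed or re-exported with too few digits, which is the same point \P and .Q.f are about.

# Illustration: the raw value fits in a double; loss only happens when
# it is written back out with too few significant digits.
raw = "-5.801050672E-01"
x = float(raw)

print(f"{x:.7g}")                  # -0.5801051 (a 7-digit display)
print(repr(x))                     # -0.5801050672 (round-trips exactly)

assert float(f"{x:.17g}") == x     # 17 significant digits always round-trip
assert float(f"{x:.7g}") != x      # 7 digits loses the tail of this value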
I could sit down and write this, but in the interest of not reinventing the wheel, I wanted to check whether someone has already done it.
I have to migrate a little legacy tool, written many years back in DOS QBasic, which generates a text file containing a table of numeric values.
The only problem with the task is that QBasic had quite a few peculiarities in its decimal-to-text conversion. Lots of small exceptions.
The resulting file is imported into an old machine that works perfectly with the QBasic-generated file, but when I pass it decimals with 6 or 7 digits of precision, the results are not correct. When writing decimals, QBasic's output varies from 7 down to 3 decimal digits depending on the whole-number part, and it also produces decimals in the 0.0000E+1 format if there is no whole-number part and there are zeros after the decimal point.
Has anyone seen a collection of functions which behave the same way as QBasic? Language doesn't matter. Googling hasn't turned up anything so far.
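I haven't seen a ready-made library either. As a rough starting point only, and going purely by the behaviour described above (at most 7 significant digits, with an E-notation fallback), here is a hedged Python sketch; it does not attempt to reproduce QBasic's exceptions, which are exactly the hard part.

# Very rough approximation of QBasic-style single-precision output:
# at most 7 significant digits, with an uppercase E-notation fallback.
# NOT exception-for-exception compatible with real QBasic output.
def qbasic_ish(value: float) -> str:
    text = f"{value:.7g}"              # at most 7 significant digits
    if "e" in text:                    # e.g. 1.23456e-05 -> 1.23456E-05
        mantissa, exponent = text.split("e")
        text = f"{mantissa}E{int(exponent):+03d}"
    return text

for v in (123456.7, 12.34567, 0.0000123456):
    print(qbasic_ish(v))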
I need to extract the dates from a set of data, s.
I use the command s(x).comm.date, where x can be changed for each person; however, it is returning the serial date number as 7.3244e+005, which only gives me the day. I need it to show much more detail, something like 732162.65994213.
I don't know whether the data I have is already saved in the shorthand format, but it is a data set from MIT and the help documentation shows it in the longhand format, so I sincerely doubt this.
Yours,
MATLAB Newbie
Try typing help format or format long (for starters).
By default, MATLAB displays 5 significant digits (calculations are done in full double precision, no matter how the variables are displayed). Refer to the documentation for the different display options.
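The same display-versus-storage point, sketched in Python rather than MATLAB purely for illustration (the value is the longhand example from the question):

# The serial date number is stored in full double precision;
# only the default 5-significant-digit display truncates it.
d = 732162.65994213

print(f"{d:.5g}")   # 7.3216e+05 (roughly what the short display shows)
print(repr(d))      # 732162.65994213 (the full stored value)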
I have a series of very small positive and negative values stored in Oracle which I am attempting to display in Crystal Reports. Crystal displays all the values as 0. However, if I turn the maximum decimal places on, I can see some values as 0.00...00XY.
I would like these values simply displayed in scientific notation, as something like 1.2E-12.
It seems Crystal automatically converts very large values to scientific notation, but not very small values?
This is not possible, so I went and created a view for each table that needs it instead. Kind of a dumb solution, but it seems to be the only way.
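For illustration only (Python, not Crystal or Oracle syntax): the point of converting the value to a scientific-notation string, whether in a view or elsewhere, is that a fixed number of decimal places hides a tiny value while scientific notation keeps it readable.

# A fixed 2-decimal display hides the value; scientific notation keeps it.
value = 1.2e-12

print(f"{value:.2f}")   # 0.00
print(f"{value:.1E}")   # 1.2E-12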
I tried to assign a very small number to a double value, like so:
double verySmall = 0.000000001;
That's 9 fractional digits. For some reason, when I multiply this value by 10, I get something like 0.000000007. I vaguely remember there being problems with writing numbers like this as plain-text literals in source code. Do I have to wrap it in some function or directive in order to feed it correctly to the compiler? Or is it fine to type such small numbers in as text?
The problem is with floating-point arithmetic, not with writing literals in source code: it is not designed to be exact. The best way around it is to not use the built-in double at all. Use integers only (if possible) with power-of-10 scale factors, sum everything up, and display the final figure after rounding.
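A minimal sketch of that integers-plus-power-of-10 idea (in Python for brevity; the question is Objective-C, but the same pattern applies there with integer types):

# Keep amounts as integer billionths (a power-of-10 scale factor),
# sum exactly, and only convert to a decimal string for display.
SCALE = 10**9                  # 1 unit = 0.000000001

very_small = 1                 # 0.000000001, stored as 1 billionth
total = sum(very_small for _ in range(10))

print(total)                   # 10 (exact integer arithmetic)
print(f"{total / SCALE:.9f}")  # 0.000000010 (for display only)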
Standard floating-point numbers are not stored in a perfect format; they're stored in a format that's fairly compact and fairly easy to do math on. They are imprecise at surprisingly low levels of precision. But fast. More here.
If you're dealing with very small numbers, you'll want to see if Objective-C or Cocoa provides something analogous to Java's java.math.BigDecimal class, which exists precisely for dealing with numbers where precision is more important than speed. If there isn't one, you may need to port it (the source to BigDecimal is available and fairly straightforward).
EDIT: iKenndac points out the NSDecimalNumber class, which is the analogue for java.math.BigDecimal. No port required.
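As a hedged illustration of what such a decimal type buys you (Python's decimal.Decimal here, playing the same role as java.math.BigDecimal / NSDecimalNumber):

# A decimal type represents 0.000000001 exactly; a binary double only
# stores the nearest representable value.
from decimal import Decimal

print(Decimal(0.000000001))         # the exact value the double holds: not 1E-9
print(Decimal("0.000000001") * 10)  # 1.0E-8, exact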
As usual, you need to read stuff like this in order to learn more about how floating-point numbers work on computers. You cannot expect to store any random fraction with perfect results, just as you can't expect to store any random integer. There are bits at the bottom, and their number is limited.