What is the algorithm to convert a float in one base to a float in another base? - radix

I would like to ask if any of you knows how to achieve this:
Let's say I have a float like 0.56, but in base N, and I want to convert it into a float in base T. How can I achieve it? Is there a formula or something?
For example, if I have to convert 0.56 from base 8 to base 16, I know it is 0.B8 (I do it by hand, converting 0.56 (base 8) to base 2 -> 0.101110, and then grouping the bits by 4 starting from the radix point, so 1011 and 10 (padded to 1000) are B and 8, thus B8).
But what if I want to convert, e.g., from base 8 to base 6 in a programmatic way? I don't need code, I just need to understand how this is achieved in an automatic way.
Thanks for your attention!

There is a way to do this for all integer bases, including negative ones.
Separate the number into its integer and fractional parts.
x = number
b = base
n_i = the i-th digit
Loop over i = 0, 1, 2, ...:
n_i = floor(x / b^i) mod b
... n_4 n_3 n_2 n_1 n_0
The digits after the radix point are evaluated the same way, but using the multiplicative inverse of the base (i.e. negative powers b^-1, b^-2, ...).
If you want fractional or irrational bases then I refer you to this, though I have yet to figure out how to get it to work for negative bases:
https://math.stackexchange.com/questions/1938993/converting-bases/1939925#1939925
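To make the fractional case concrete, here is a minimal Python sketch of the digit-extraction idea (function names are mine, not from the answer above): it reads the source-base digits into an exact rational, then emits target-base digits by repeatedly multiplying the fractional part by the target base and peeling off the integer part.

from fractions import Fraction

def frac_digits_to_value(digits, base):
    # exact value of 0.d1d2d3... in the source base
    return sum(Fraction(d, base ** i) for i, d in enumerate(digits, start=1))

def value_to_frac_digits(value, base, max_digits=16):
    # emit digits after the radix point in the target base:
    # repeatedly multiply by the base and peel off the integer part
    digits = []
    for _ in range(max_digits):
        value *= base
        d = int(value)      # integer part is the next digit
        digits.append(d)
        value -= d
        if value == 0:      # expansion terminated exactly
            break
    return digits

v = frac_digits_to_value([5, 6], 8)   # 0.56 in base 8
print(value_to_frac_digits(v, 16))    # [11, 8]        i.e. 0.B8
print(value_to_frac_digits(v, 6))     # [4, 1, 5, 1, 3] i.e. 0.41513 in base 6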

How do I round numbers in KDB?

It looks like I can hack floor and ceiling into a meta function for an arbitrary round, but is there a standard way to do:
round[3.1415;2] = 3.14
from the core functions?
Edit: This is the problem I'm trying to solve:
I have unadjusted daily stock data, and I'm trying to identify splits and filter them out. It's further complicated by decimalisation artefacts (a split might not be perfectly 2:1 because prices have minimum tick sizes).
This is possible via the use of .Q.f; this does, however, return the output as a string.
Using this we can round 3.14159 to 2 decimal places like so:
q).Q.f[2;]3.14159
"3.14"
q)round:{"F"$.Q.f[y]x}
q)round[3.1415;2]
3.14
There's no standard way because in kdb you shouldn't ever really need to round floats. What's the reason to round floats? If it's just for pretty-printing displays then you can use a method like Matthew suggests, but underlying kdb logic shouldn't need it.
If you must, you can use floor/ceiling as you've already figured out, or you can use the built-in nearest-rounding that casting to integer/long achieves. E.g.
q)d:{("j"$x*y)%x}
q)d2:d[100]
q)d3:d[1000]
q)
q)a:5?10.0
q)
q)d2 a
3.92 0.81 9.37 2.78 2.39
q)d3 a
3.915 0.812 9.368 2.782 2.392
But even here you should be aware that this is completely fictional rounding because you're still using floats and still subject to floating point fuzziness. If I increase the precision value those same results above are:
q)\P 18
q)
q)d2 a
3.9199999999999999 0.81000000000000005 9.3699999999999992 2.7799999999999998 ..
q)d3 a
3.915 0.81200000000000006 9.3680000000000003 2.782 2.3919999999999999
The only way to have a truly concrete number of decimal places is to store the whole numbers as ints/longs and separately store the decimals as ints/longs.
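To illustrate that last point, here is a minimal Python sketch (Python rather than q, purely for illustration; the helper name is mine): parsing a price string straight into integer hundredths means the rounding happens once, exactly, and no float ever enters the pipeline.

from decimal import Decimal, ROUND_HALF_UP

def to_hundredths(text):
    # parse a decimal string into an integer count of hundredths,
    # rounding half up, so no float is ever involved
    return int(Decimal(text).quantize(Decimal('0.01'),
                                      rounding=ROUND_HALF_UP) * 100)

print(to_hundredths('3.1415'))   # 314
print(to_hundredths('3.145'))    # 315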

Calculating floating points as binary

The question is:
x and y are two floating point numbers in 32-bit IEEE floating-point format
(8-bit exponent with bias 127) whose binary representation is as follows:
x: 1 10000001 00010100000000000000000
y: 0 10000010 00100001000000000000000
Compute their product z = x * y and give the result in binary IEEE floating-point format.
So I've found out that x = -4.3125 and y = 9.03125. I can multiply them and get -38.947265625. I don't know how to show it in IEEE format. Thanks in advance for the help.
I agree with the comment that it should be done in binary, rather than by conversion to decimal and decimal multiplication. I used Exploring Binary to do the arithmetic.
The first step is to find the actual binary significands. Neither input is subnormal, so they are 1.000101 and 1.00100001.
Multiply them, getting 1.00110111100101.
Similarly, subtract the bias, binary 1111111, from the exponents, getting 10 and 11. Add those, getting 101, then add back the bias, 10000100.
The sign bit for multiplying two numbers with different sign bits will be 1.
Now pack it all back together. The significand came out in the [1,2) range so there is no need to normalize and adjust the exponent. We are still in the normal range, so drop the 1 before the binary point in the significand. The significand is narrow enough to fit without rounding - just add enough trailing zeros.
1 10000100 00110111100101000000000
You've made it harder by converting to decimal, since that way you'd have to convert it back. It's not that it can't be done that way, but it's harder by hand.
Without converting, the algorithm to multiply two floats is (roughly) this:
put the implicit 1 back (if applicable)
multiply, to full size (don't truncate) (you can get away with using just Guard and Sticky, if you know how they work)
add the exponents
xor the signs
normalize/round/handle special cases (under-/overflow)
So here, multiply (look up how binary multiplication works if you forgot):
1.00010100000000000000000 *
1.00100001000000000000000 =
1.00100001000000000000000 +
0.000100100001000000000000000 +
0.00000100100001000000000000000 =
1.00110111100101000000000000000
Add exponents (mind the bias), 2+3 = 5 in this case, so 132 = 10000100.
Xor the signs, get 1.
No rounding is necessary because the dropped bits are all zero anyway.
Result: 1 10000100 00110111100101000000000
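As a sanity check, here is a short Python sketch (the helper names are mine) that packs the given bit patterns into IEEE-754 singles, multiplies them, and unpacks the product back into sign / exponent / fraction fields:

import struct

def bits_to_float(bits):
    # interpret a 32-bit pattern (spaces allowed) as an IEEE-754 single
    return struct.unpack('>f', int(bits.replace(' ', ''), 2).to_bytes(4, 'big'))[0]

def float_to_bits(value):
    # render an IEEE-754 single as "sign exponent fraction"
    b = format(struct.unpack('>I', struct.pack('>f', value))[0], '032b')
    return b[0] + ' ' + b[1:9] + ' ' + b[9:]

x = bits_to_float('1 10000001 00010100000000000000000')
y = bits_to_float('0 10000010 00100001000000000000000')
print(x, y)                  # -4.3125 9.03125
# the product fits in a single exactly, so the double multiply loses nothing
print(float_to_bits(x * y))  # 1 10000100 00110111100101000000000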

How to get the reverse (not complement or inverse) of a binary number

I am implementing the Cooley-Tukey FFT (radix-2 DIF/DIT) algorithm in MATLAB. For the bit reversal in that, I want to get the reverse of a binary number, so can anyone suggest how I can get the reverse of a binary number (like 100111 -> 111001)? Anyone who has worked on an FFT implementation can also help me with the algorithm.
Topic: How to do bit reversal in Matlab?
If you're using double precision floating point ('double') numbers
which are integers, you can do this:
dr = bin2dec(fliplr(dec2bin(d,n))); % Bits in dr are in reverse order
where n is the number of bits to be reversed and where 0 <= d < 2^n.
You will experience no precision problems at all as long as the
integers are no more than 52 bits long.
And
Re: How to do bit reversal in Matlab?
How large will the numbers be that you need to reverse? May I ask what
is the purpose of it? Maybe there is a more efficient way to solve the
whole problem. If the numbers are large you can just store the bits as
a string. To reverse it just read the string backwards! Or use
fliplr().
(There may be better places to ask).
If it were VHDL I'd suggest an alias with 'REVERSE_RANGE.
Taken from the help section:
Y = swapbytes(X) reverses the byte ordering of each element in array X, converting little-endian values to big-endian (and vice versa). The input array must contain all full, noncomplex, numeric elements.
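The same string-reversal trick carries over to other languages; here is a minimal Python sketch (the function name is mine), equivalent to the dec2bin/fliplr/bin2dec chain above:

def bit_reverse(d, n):
    # reverse the n-bit binary representation of d, e.g. 100111 -> 111001
    return int(format(d, '0%db' % n)[::-1], 2)

print(format(bit_reverse(0b100111, 6), 'b'))   # 111001
# bit-reversed index order for an 8-point radix-2 FFT (n = 3 bits)
print([bit_reverse(i, 3) for i in range(8)])   # [0, 4, 2, 6, 1, 5, 3, 7]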

iPhone/Obj C: Why does converting float to int with (int)(float * 100) not work?

In my code, I am using float to do currency calculation but the rounding has yielded undesired results so I am trying to convert it all to int. With as little change to the infrastructure as possible, in my init functions, I did this:
- (id)initWithPrice:(float)p
{
    if ((self = [super init])) {
        [self setPrice:(int)(p * 100)];
    }
    return self;
}
I multiply by 100 because in the main section the values are given as .xx, to 2 decimals. An abnormality I notice is that for the float 1.18, the int conversion yields 117. Does anyone know why it does that? The float displays as 1.18; I expect the int equivalent in cents to be 118.
Thanks.
Floating point is always a little imprecise. With IEEE floating point encoding, powers of two can be represented exactly (like 4, 2, 1, 0.5, 0.25, 0.125, 0.0625, ...), but numbers like 0.1 are always an approximation (just try representing it as a sum of powers of 2).
Your (int) cast will truncate whatever comes in, so if p*100 resolves to 117.9999995 due to this imprecision, that will become 117 instead of 118.
A better solution is to use something like roundf on p*100. Even better would be to go upstream and fully convert to fixed-point math using integers in the entire program.
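The effect is easy to reproduce; here is a short Python sketch (the helper name is mine) that mimics Obj-C's single-precision float by round-tripping through struct:

import struct

def as_float32(x):
    # round to the nearest IEEE-754 single, mimicking Obj-C's float
    return struct.unpack('f', struct.pack('f', x))[0]

p = as_float32(1.18)   # the closest single to 1.18 is slightly below it
print(p * 100)         # 117.99999475479126
print(int(p * 100))    # 117 -- the (int) cast truncates toward zero
print(round(p * 100))  # 118 -- round to nearest first, like roundf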

Problem with arithmetic using logarithms to avoid numerical underflow (take 2)

I have two lists of fractions;
say A = [ 1/212, 5/212, 3/212, ... ]
and B = [ 4/143, 7/143, 2/143, ... ].
If we define A' = a[0] * a[1] * a[2] * ... and B' = b[0] * b[1] * b[2] * ...
I want to calculate a normalised value of A' and B',
i.e. specifically the values of A' / (A'+B') and B' / (A'+B').
My trouble is A and B are both quite long and each value is small, so calculating the products causes numerical underflow very quickly...
I understand turning the product into a sum through logarithms can help me determine which of A' or B' is greater,
i.e. max( log(a[0])+log(a[1])+..., log(b[0])+log(b[1])+... ),
and that using logs I can calculate the value of A' / B', but how do I do A' / (A'+B')?
My best bet to date is to keep the number representations as fractions, i.e. A = [ [1,212], [5,212], [3,212], ... ], and implement my own arithmetic, but it's getting clumsy and I have a feeling there is a (simple) way with logarithms I'm just missing...
The numerators for A and B don't come from a sequence. They might as well be random for the purpose of this question. If it helps the denominators for all values in A are the same, as are all the denominators for B.
Any ideas most welcome!
(P.S. I asked a similar question 24 hours ago regarding the ratio A'/B', but it was actually the wrong question to ask. I'm actually after A'/(A'+B'). Sorry, my mistake.)
I see a few ways here.
First of all you can notice that
A' / (A'+B') = 1 / (1 + B'/A')
and you know how to calculate B'/A' with logarithms.
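In code, that identity needs nothing beyond summing logs; a minimal Python sketch (the input lists are made up to mirror the question):

import math

A = [1/212, 5/212, 3/212]
B = [4/143, 7/143, 2/143]

# log(B'/A') = sum(log b) - sum(log a); neither product is ever formed
log_ratio = sum(math.log(b) for b in B) - sum(math.log(a) for a in A)

# A'/(A'+B') = 1 / (1 + B'/A'); guard exp() against overflow for extreme ratios
share_A = 0.0 if log_ratio > 700 else 1.0 / (1.0 + math.exp(log_ratio))
print(share_A, 1.0 - share_A)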
Another way is to implement your own rational arithmetic, but you don't need to go far with it. Since you know that the denominators are the same for the whole array, it immediately gives you
numerator(A') = numerator(a[0]) * numerator(a[1]) * ...
denominator(A') = denominator(a[0]) ^ A.length
All you now need to do is to sum A' and B', which is easy, and then multiply A' and 1/(A'+B'), which is also easy. The hardest part here is to normalize the resulting value, which is done with a modulo operation and is trivial.
Alternatively, since you are most likely using some popular scripting language, most of them have classes for rational arithmetic built in; Python and Ruby have them for sure.
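For instance, with Python's built-in fractions module the whole computation stays exact and underflow never arises (the lists below are made-up stand-ins for the question's data):

from fractions import Fraction

A = [Fraction(1, 212), Fraction(5, 212), Fraction(3, 212)]
B = [Fraction(4, 143), Fraction(7, 143), Fraction(2, 143)]

prod_A, prod_B = Fraction(1), Fraction(1)
for a in A:
    prod_A *= a          # exact rational product, underflow impossible
for b in B:
    prod_B *= b

share_A = prod_A / (prod_A + prod_B)   # exact A'/(A'+B')
print(float(share_A), float(1 - share_A))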
My best bet to date is to keep the number representations as fractions and implement my own arithmetic but it's getting clumsy
What language are you using? If you can overload operators, it should be really easy to make up a Fraction class that you can treat as a number pretty much everywhere.
As an example, determining whether a fraction A / B is larger than C / D is basically comparing whether A * D is larger than B * C.
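As a sketch of that idea (a toy Python class, not production code), cross-multiplication gives you comparison without ever dividing:

class Frac:
    def __init__(self, num, den):
        self.num, self.den = num, den   # den assumed positive

    def __lt__(self, other):
        # a/b < c/d  <=>  a*d < c*b, valid when both denominators are positive
        return self.num * other.den < other.num * self.den

print(Frac(1, 3) < Frac(2, 5))   # True, since 1*5 < 2*3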
Both A and B have the same denominator in each fraction you mention. Is that true for every term in the list? If so, why don't you factor that out when you calculate the product? The denominator will simply be X^n, where X is the denominator value and n is the number of terms in the list.
If you do that, you'll have the opposite problem: overflow in the numerator. You know that it can't be larger than max(X)^n, where max(X) is the maximum value in the numerators and n is the number of terms in the list. If you can calculate that, you can see if your computer will have a problem. You can't put 10 pounds of anything in a 5 pound bag.
Unfortunately, the properties of logarithms limit you to the following simplifications:
log(x*y) = log(x) + log(y)
and
log(x/y) = log(x) - log(y)
So you're stuck with:
log(A'/(A'+B')) = log(A') - log(A'+B')
If you're using a language that supports arbitrary-precision numbers (e.g., Java BigDecimal) it might make your life a little easier. But there's still a good argument for doing some thinking before you compute. Why use brute force when you can be elegant?
Well... if you know A'/(A'+B'), then B'/(A'+B') should be one minus that. I personally would not use logarithms. I would use the actual fractions. I would also use some sort of BigInt class to represent the numerator and denominator. Which language are you using? Python can be a good fit.