I'm trying to compare numbers in KDB.
If I do:
exp 0.6931472
I get 2f as a response.
If I do (exp 0.6931472) = 2, I get 0b when I expect 1b.
What am I doing wrong?
The 2f that is initially returned does not mean the result is exactly 2; increasing the display precision shows this:
q)\P 10
q)exp 0.6931472
2.000000039
You can use the \P system command (or the -P command-line argument at startup) to set the display precision, which should make things clearer.
https://code.kx.com/q/basics/cmdline/#-p-display-precision
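For illustration, here is the same behaviour sketched in C (kdb+ floats are ordinary IEEE 754 doubles, so the mechanics are identical); the printed values are approximate:

#include <math.h>
#include <stdio.h>

int main(void) {
    double x = exp(0.6931472);            /* displays as 2 but is not exactly 2 */
    printf("%.9f\n", x);                  /* ~2.000000039, like the q output above */
    printf("%d\n", x == 2.0);             /* 0: exact equality fails */
    printf("%d\n", fabs(x - 2.0) < 1e-6); /* 1: comparing within a tolerance succeeds */
    return 0;
}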
Is there a way to force MATLAB to use single precision as default precision?
I have MATLAB code whose output I need to compare to C code output, and the C code is written exclusively using floats, no doubles allowed.
Short answer: You can't.
Longer answer: In most cases, you can get around this by setting your initial variables to single. Once that's done, that type will (almost always) propagate down through your code. (cf. this and this thread on MathWorks).
So, for instance, if you do something like:
>> x = single(magic(4));
>> y = double(6);
>> x * y
ans =
4×4 single matrix
96 12 18 78
30 66 60 48
54 42 36 72
24 84 90 6
MATLAB keeps the answer in the lower precision. I have occasionally encountered functions, both built-in and from the File Exchange, that recast the output to a double, so you will want to sprinkle in the occasional assert statement to keep things honest during your initial debugging (or, better yet, put the assertion in the first lines of any sub-functions you write to check the critical inputs), but this should get you 99% of the way there.
You can convert any object A to single precision using A=single(A);
The MathWorks forums show that, in your case, system_dependent('precision','8'); should do it. Try this in the console or add it at the top of your script.
I know a similar question has been asked before with C#
Difference between 2 numbers
But I need to know if Objective-C provides some function to find the difference between two numbers (two NSIntegers, to be specific).
For example the difference between:
100 and 25 is 75
-25 and 100 is 125
100 and -25 is also 125
-100 and -115 is 15
// I know I'm using the same example as the previous question
Any help is very much appreciated
I think basic math is something that every programming language does: abs(num1 - num2) will work.
Remember that abs() is declared for integers, so a non-integer argument is implicitly converted (truncated) before the absolute value is taken. My programs tend to be full of fractional data, so I end up using fabs() a lot.
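A minimal sketch in plain C (which Objective-C compiles directly); labs() is the long version of abs(), which matches NSInteger on 64-bit platforms:

#include <stdio.h>
#include <stdlib.h>   /* abs, labs */
#include <math.h>     /* fabs */

int main(void) {
    long a = -25, b = 100;                   /* NSInteger is a long on 64-bit */
    printf("%ld\n", labs(a - b));            /* 125 */
    printf("%ld\n", labs(-100L - (-115L)));  /* 15 */

    double x = 2.5, y = -1.25;
    printf("%f\n", fabs(x - y));             /* 3.750000 */
    return 0;
}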
If the following code is executed, MATLAB makes a mistake. Can someone verify this?
floor([0.1:0.1:2]/0.01)
ans = 10 20 30 40 50 60 70 80 90 100 110 120 129 140 150 160 170 180 190 200
So what is the 129 doing here??
It is a floating point rounding error because of the colon-generated vector.
Like Rasman said, if you do:
floor((0.1:0.1:2 + eps) / 0.01)
There will be no rounding errors.
However, based on how the colon operator works, I suggest that you do the same calculation like this:
floor([(1:20)/10] / 0.01)
[Edit: following Rasman's comment, I will add that the latter approach works for negative values as well, while adding eps sometimes fails]
The bottom line is that it is better to use the colon operator with integer values, to minimize rounding errors.
It is probably doing a floating-point calculation that results in an inexact value just below 130 (something like 129.99999999999997) instead of exactly 130, and then the floor takes it down to 129.
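A small C illustration of that effect (the exact element MATLAB's colon operator produces may differ, but the mechanism is the same):

#include <math.h>
#include <stdio.h>

int main(void) {
    /* a value one floating-point step below 1.3, similar to what a
       colon-generated element can end up being */
    double x = 1.2999999999999998;
    printf("%.17g\n", x / 0.01);      /* just under 130: 129.99999999999997 */
    printf("%g\n", floor(x / 0.01));  /* 129 */
    return 0;
}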
It's a rounding error brought on by the array construction. The solution would be to add eps:
floor([0.1:0.1:2]/0.01+ eps([0.1:0.1:2]/0.01))
I am trying to represent the maximum 64-bit unsigned value in different bases.
For base 2 (binary) it would be 64 1's:
1111111111111111111111111111111111111111111111111111111111111111
For base 16 (hex) it would be 16 F's
FFFFFFFFFFFFFFFF
For base 10 (decimal) it would be:
18446744073709551615
I'm trying to get the representation of this value in base 36 (it uses 0-9 and A-Z). There are many online base converters, but they all fail to produce the correct representation because they are limited by 64-bit math.
Does anyone know how to use dc (an extremely hard-to-use string math processor that can handle numbers of unlimited magnitude) to do this conversion? Either that, or can anyone tell me how to perform this conversion with a calculator that won't fail due to integer rollover?
I made a quick test with Ruby:
i = 'FFFFFFFFFFFFFFFF'.to_i(16)
puts i #18446744073709551615
puts i.to_s(36) #3w5e11264sgsf
You may also use larger numbers:
i = 'FFFFFFFFFFFFFFFF'.to_i(16) ** 16
puts i
puts i.to_s(36)
result:
179769313486231590617005494896502488139538923424507473845653439431848569886227202866765261632299351819569917639009010788373365912036255753178371299382143631760131695224907130882552454362167933328609537509415576609030163673758148226168953269623548572115351901405836315903312675793605327103910016259918212890625
1a1e4vngailcqaj6ud31s2kk9s94o3tyofvllrg4rx6mxa0pt2sc06ngjzleciz7lzgdt55aedc9x92w0w2gclhijdmj7le6osfi1w9gvybbfq04b6fm705brjo535po1axacun6f7013c4944wa7j0yyg93uzeknjphiegfat0ojki1g5pt5se1ylx93knpzbedn29
A short explanation of what happens with big numbers:
Normal integers are Fixnums. If a number gets larger than that, it becomes a Bignum:
small = 'FFFFFFF'.to_i(16)
big = 'FFFFFFFFFFFFFFFF'.to_i(16) ** 16
puts "%i is a %s" % [ small, small.class ]
puts "%i\n is a %s" % [ big, big.class ]
puts "%i^2 is a %s" % [ small, (small ** 2).class ]
Result:
268435455 is a Fixnum
179769313486231590617005494896502488139538923424507473845653439431848569886227202866765261632299351819569917639009010788373365912036255753178371299382143631760131695224907130882552454362167933328609537509415576609030163673758148226168953269623548572115351901405836315903312675793605327103910016259918212890625
is a Bignum
268435455^2 is a Bignum
From the documentation of Bignum:
Bignum objects hold integers outside the range of Fixnum. Bignum objects are created automatically when integer calculations would otherwise overflow a Fixnum. When a calculation involving Bignum objects returns a result that will fit in a Fixnum, the result is automatically converted.
It can be done with dc, but the output is not extremely useful.
$ dc
36
o
16
i
FFFFFFFFFFFFFFFF
p
03 32 05 14 01 01 02 06 04 28 16 28 15
Here's the explanation:
Entering a number by itself pushes that number onto the stack.
o pops the stack and sets the output radix.
i pops the stack and sets the input radix.
p prints the top number on the stack in the current output radix. However, for output radices above 16, dc prints each digit of the result as a space-separated decimal value (as above) rather than as a single character.
In dc, the commands may all be put on the same line, like so:
$ dc
36o16iFFFFFFFFFFFFFFFFp
03 32 05 14 01 01 02 06 04 28 16 28 15
Get any language that can handle arbitrarily large integers. Ruby, Python, Haskell, you name it.
Implement the basic step: modulo 36 gives you the next digit; division by 36 gives you the number with the last digit cut off (see the sketch after this list).
Map the digits to characters the way you like. For instance, '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'[digit] is fine by me. Append digits to the result as you produce them.
???
Return the concatenated string of digits. Profit!
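A minimal C sketch of these steps. The maximum 64-bit value fits in uint64_t, so no arbitrary-precision library is needed for this particular case; the digit map is the one suggested above:

#include <stdint.h>
#include <stdio.h>

/* Convert a 64-bit value to base 36 using the modulo/divide loop described
   above. Digits come out least-significant first, so the buffer is reversed
   into the output at the end. */
static void to_base36(uint64_t n, char *out) {
    const char *digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    char buf[64];
    int len = 0;
    do {
        buf[len++] = digits[n % 36];  /* next digit */
        n /= 36;                      /* cut the last digit off */
    } while (n > 0);
    for (int i = 0; i < len; i++)     /* reverse */
        out[i] = buf[len - 1 - i];
    out[len] = '\0';
}

int main(void) {
    char s[64];
    to_base36(UINT64_MAX, s);
    printf("%s\n", s);  /* 3W5E11264SGSF, matching the other answers apart from case */
    return 0;
}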
I'm working with a binary protocol that uses LLV to encode some variables.
I was given the example below, which is used to specify a set of 5 chars to display.
F1 F0 F5 4C 69 6E 65 31
The F1 is specific to my device; it indicates display text on line one. The F0 and F5 I'm not sure about; the rest looks like ASCII text.
Anyone know how this encoding works exactly?
LLV is referenced in the protocol spec linked below, but doesn't seem to be defined there.
http://www.terminalhersteller.de/Download/PA00P016_03_en.pdf
Since the F1 is device-specific, this leaves the rest as F0 F5 ..., and this looks like an LLVAR sequence, in which the first two bytes specify the length of the rest (decimal 05 here). My guess would be that the whole data represents F1 "Line1", which looks quite reasonable.
By the way, LLVAR stands for "VARiable length with two decimal digits specifying the length". With three decimal digits for the length, it's LLLVAR.
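A minimal C sketch of that interpretation, assuming (as the answer above guesses) that each of the two length bytes carries one decimal digit in its low nibble:

#include <stdio.h>

int main(void) {
    /* Example from the question: F1 (device command), then an LLV field. */
    unsigned char msg[] = {0xF1, 0xF0, 0xF5, 0x4C, 0x69, 0x6E, 0x65, 0x31};

    /* Assumed layout: msg[1] and msg[2] each hold one decimal length digit
       in the low nibble, so F0 F5 -> length 05. */
    int len = (msg[1] & 0x0F) * 10 + (msg[2] & 0x0F);

    printf("length: %d, text: \"%.*s\"\n", len, len, (char *)&msg[3]);
    /* prints: length: 5, text: "Line1" */
    return 0;
}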