How to get Exponent of Scientific Notation in Matlab

When the numbers are really small, Matlab automatically shows them formatted in Scientific Notation.
Example:
A = rand(3) / 10000000000000000;
A =
1.0e-016 *
0.6340 0.1077 0.6477
0.3012 0.7984 0.0551
0.5830 0.8751 0.9386
Is there some in-built function which returns the exponent? Something like: getExponent(A) = -16?
I know this is sort of a stupid question, but I need to check hundreds of matrices and I can't seem to figure it out.
Thank you for your help.

Basic math can tell you that:
floor(log10(N))
The log base 10 of a number tells you approximately how many digits before the decimal are in that number.
For instance, 99987123459823754 is 9.998E+016
log10(99987123459823754) is 16.9999441, the floor of which is 16 - which can basically tell you "the exponent in scientific notation is 16, very close to being 17".
Floor always rounds down (toward negative infinity), so negative exponents come out right as well:
0.000000000003754 = 3.754E-012
log10(0.000000000003754) = -11.425
floor(log10(0.000000000003754)) = -12
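The same one-liner ports to any language with floor and log10. A minimal Python sketch (the name getExponent is just the question's hypothetical):
import math

def getExponent(x):
    # Exponent of x when written in scientific notation, e.g. 3.754e-12 -> -12.
    return math.floor(math.log10(abs(x)))

print(getExponent(0.000000000003754))  # -12
print(getExponent(99987123459823754))  # 16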

You can use log10(A). The exponent used to print out will be the largest magnitude exponent in A. If you only care about small numbers (< 1), you can use
min(floor(log10(A)))
but if it is possible for them to be large too, you'd want something like:
a = log10(A);
[v i] = max(ceil(abs(a)));
exponent = v * sign(a(i));
this finds the maximum absolute exponent, and returns that. So if A = [1e-6 1e20], it will return 20.
I'm actually not sure quite how Matlab decides what exponent to use when printing out. Obviously, if A is close to 1 (e.g. A = [100, 203]) then it won't use an exponent at all but this solution will return 2. You'd have to play around with it a bit to work out exactly what the rules for printing matrices are.
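As a cross-check, here is a simplified sketch of that max-magnitude selection in Python (it uses floor(log10) as in the first answer, so it can differ by one from the ceil-based Matlab fragment for values just under a power of ten):
import math

def max_magnitude_exponent(values):
    # Per-element exponents, floor(log10(|x|)).
    exps = [math.floor(math.log10(abs(v))) for v in values]
    # Keep the exponent with the largest absolute value.
    return max(exps, key=abs)

print(max_magnitude_exponent([1e-6, 1e20]))  # 20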

Related

SCALA: Function for Square root of BigInt

I searched the internet for a function to find the exact square root of a BigInt using the Scala programming language. I didn't find one, but I saw a Java program and converted that function into a Scala version. It works, but I am not sure whether it can handle very large BigInts. It returns only a BigInt as the square root, not a BigDecimal. There is some bit manipulation done in the code, with hard-coded numbers like shiftRight(5), BigInt("8") and shiftRight(1). I can understand the logic clearly, but not the hard-coding of these bit-shift amounts and the number 8. Maybe these bit-shift functions are not available in Scala, and that's why conversion to Java BigInteger is needed in a few places. These hard-coded numbers may impact the precision of the result. I just translated the Java code into Scala, copying the exact algorithm. Here is the code I have written in Scala:
def sqt(n: BigInt): BigInt = {
  var a = BigInt(1)
  var b = (n >> 5) + BigInt(8)
  while ((b - a) >= 0) {
    val mid: BigInt = (a + b) >> 1
    if (mid * mid - n > 0) b = mid - 1
    else a = mid + 1
  }
  a - 1
}
My Points are:
Can't we return a BigDecimal instead of BigInt? How can we do that?
How are these hardcoded numbers shiftRight(5), shiftRight(1) and 8 related to the precision of the result?
I tested one number in the Scala REPL: the function sqt gives the exact square root of a squared number, but not of the original number, as below:
scala> sqt(BigInt("19928937494873929279191794189"))
res9: BigInt = 141169888768369
scala> res9*res9
res10: scala.math.BigInt = 19928937494873675935734920161
scala> sqt(res10)
res11: BigInt = 141169888768369
scala>
I understand that shiftRight(5) means divide by 2^5, i.e. by 32, and so on. But why is 8 added after the shift operation? Why exactly 5 shifts, as a first guess?
Your question 1 and question 3 are actually the same question.
How [do] these bitshifts impact [the] precision of the result?
They don't.
How [are] these hardcoded numbers ... related to precision of the result?
They aren't.
There are many different methods/algorithms for estimating/calculating the square root of a number (as can be seen here). The algorithm you've posted appears to be a pretty straightforward binary search:
1. Pick a number a guaranteed to be smaller than the target (the square root of n).
2. Pick a number b guaranteed to be larger than the target (the square root of n).
3. Calculate mid, the whole-number mid-point between a and b.
4. If mid is larger than (or equal to) the target then move b to mid (-1 because we know it's too large).
5. If mid is smaller than the target then move a to mid (+1 because we know it's too small).
6. Repeat 3, 4, 5 until a is no longer less than b.
7. Return a-1 as the square root of n rounded down to a whole number.
The bitshifts and hardcoded numbers are used in selecting the initial value of b. But b only has to be greater than the target. We could have just done var b = n. Why all the bother?
It's all about efficiency. The closer b is to the target, the fewer iterations are needed to find the result. Why add 8 after the shift? Because 31>>5 is zero, which is not greater than the target. The author chose (n>>5)+8, but could just as well have chosen, say, (n>>7)+32. There are trade-offs.
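For reference, here is the same binary search as a runnable Python sketch (Python's built-in int is arbitrary-precision, standing in for BigInt):
def sqt(n):
    # Integer square root by binary search, mirroring the Scala code.
    a = 1
    b = (n >> 5) + 8            # initial upper bound, always >= sqrt(n)
    while b - a >= 0:
        mid = (a + b) >> 1
        if mid * mid - n > 0:
            b = mid - 1         # mid is too large
        else:
            a = mid + 1         # mid is small enough
    return a - 1

print(sqt(19928937494873929279191794189))  # 141169888768369, as in the REPL session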
Can't we return a BigDecimal instead of BigInt? How can we do that?
Here's one way to do that.
def sqt(n: BigInt): BigDecimal = {
  val d = BigDecimal(n)
  var a = BigDecimal(1.0)
  var b = d
  while (b - a >= 0) {
    val mid = (a + b) / 2
    if (mid * mid - d > 0) b = mid - 0.0001 // adjust down
    else a = mid + 0.0001                   // adjust up
  }
  b
}
There are better algorithms for calculating floating-point square root values. In this case you get better precision by using smaller adjustment values but the efficiency gets much worse.
Can't we return a BigDecimal instead of BigInt? How can we do that?
This makes no sense if you want exact roots: if a BigInt's square root can be represented exactly by a BigDecimal, it can be represented by a BigInt. If you don't want exact roots, you'll need to specify precision and modify the algorithm (and for most cases, Double will be good enough and much much much faster than BigDecimal).
I understand shiftRight(5) means divide by 2^5 ie.by 32 in decimal and so on..but why 8 is added here after shift operation? why exactly 5 shifts? as a first guess?
These aren't the only options. The point is that for every positive n, n/32 + 8 >= sqrt(n) (where sqrt is the mathematical square root). This is easiest to show by a bit of calculus (or just by building a graph of the difference). So at the start we know a <= sqrt(n) <= b (unless n == 0 which can be checked separately), and you can verify this remains true on each step.
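If you want to convince yourself of that inequality numerically, here is a quick Python spot-check (math.isqrt is the exact integer square root; the test range is arbitrary):
import math

# Check that (n >> 5) + 8 >= sqrt(n) for every positive n in the range.
for n in range(1, 1_000_000):
    assert (n >> 5) + 8 >= math.isqrt(n)
print("bound holds for all n in [1, 1e6)")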

How do I round up a float whose mantissa is ending in 5?

I want to round up a floating-point number. For example, sprintf '%.4f', 0.12345 returns 0.1235 and sprintf '%.4f', 0.12325 returns 0.1232. For the second example, I want it to print out 0.1233, not 0.1232.
sprintf in Perl is not good enough. Math::BigFloat can do it, but it's a little overkill.
Does anyone know another effective way to round up, or another module in Perl that can do it?
The exact way sprintf will round a number is dependent on how the system libraries round numbers. If you need to control exactly how a number is rounded you will need to use a library that implements your desired rounding explicitly, or write a function to round as desired.
For example, to round a positive number to an arbitrary precision, where 5 rounds up, this would work:
sub round {
    my $numer     = shift;
    my $precision = shift;
    return int($numer * 10 ** $precision + 0.5) * 10 ** -$precision;
}
This however doesn't round correctly for negative numbers, -0.12324 incorrectly rounds to -0.1231. A solution where 5 should round up (that is towards positive infinity) would be to use floor instead of int.
use POSIX qw(floor);
sub round {
my $numer = shift;
my $precision = shift;
return floor($numer * 10 ** $precision + 0.5) * 10 ** -$precision;
}
If instead 5 should round to the largest absolute value (that is round away from 0) then you can add a simple check for negative numbers to round them in the correct direction.
sub round {
    my $numer     = shift;
    my $precision = shift;
    my $direction = $numer >= 0 ? 0.5 : -0.5;
    return int($numer * 10 ** $precision + $direction) * 10 ** -$precision;
}
There are more complex rounding rules (such as round-half-to-even) used in circumstances where the bias from always rounding 5 in the same direction is unacceptable. Also, any decimal rounding of floating-point numbers is subject to possible errors due to imprecision in their binary format.
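Decimal arithmetic avoids the binary-representation issue entirely; that is what Math::BigFloat buys you. The same idea as a Python sketch, using the standard decimal module:
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

# Build the value from its decimal string so no binary approximation sneaks in.
x = Decimal("0.12325")
print(x.quantize(Decimal("0.0001"), rounding=ROUND_HALF_UP))    # 0.1233
print(x.quantize(Decimal("0.0001"), rounding=ROUND_HALF_EVEN))  # 0.1232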
Perl's sprintf can only be as good as the data it's fed.
There is no floating point number exactly equal to either 0.12345 or 0.12325:
The floating point number closest to 0.12345 is exactly 2223877495995551/2**54 ≈ 0.123450000000000004, a value slightly greater than the input.
The floating point number closest to 0.12325 is exactly 4440549232587309/2**55 ≈ 0.123249999999999998, a value slightly less than the input.
Note that the 5th decimal digit of the latter one is 4 not 5.
When rounded to 4 decimal places under the usual rules, they must come out as 0.1235 and 0.1232 respectively.
You say that using Math::BigFloat is a little overkill, but it seems that what you really want is decimal arithmetic so the overkill is inescapable.
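You can see those exact stored values for yourself; a quick Python check (Decimal(float) prints a float's exact binary value in decimal form):
from decimal import Decimal

print(Decimal(0.12345))  # slightly greater than 0.12345
print(Decimal(0.12325))  # slightly less than 0.12325, so its 5th digit rounds down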
This did work for me:
print sprintf '%.*f', 4, 0.12325
Your code does work for me too. Did you test that code on its own?
Could you tell us your perl version too (perl -v)?

Algorithm to convert integer (represented as an array) with base n to integer with base m

I have a very long integer, represented as an array of unsigned chars.
Example: the integer 1234 is represented in the array as [4,3,2,1] (base 10), [2,2,3,2] (base 8) and [2,13,4] (base 16).
Now I want to convert my integer with base n to another integer with base m. In my pursuit of an answer I came across Wallar's algorithm, originally from here.
from math import *

def baseExpansion(n, c, b):
    j = 0
    base10 = sum([pow(c, len(n)-k-1) * n[k] for k in range(0, len(n))])
    while floor(base10 / pow(b, j)) != 0:
        j = j + 1
    return [floor(base10 / pow(b, j-p)) % b for p in range(1, j+1)]
At first I thought this was my answer, but unfortunately it is not. The problem I have is that the algorithm computes the full sum. In my case this is a problem because my variable base10 is a 32-bit unsigned integer. Therefore, when my integer, represented as an array, has more than 10 digits, it cannot convert the number anymore. Anyone have a solution?
Here's the school-book algorithm for doing what you're trying. You start with a representation for zero and call it a running total. Then, for each digit of the number to be converted, starting with the most significant and going to the least significant:
1. Multiply the running total by the base of the source number.
2. Add the digit to the running total.
Now all you need is algorithms to do the multiplication and addition on the digit array, and you can actually do both at once. Here's how: set the current source digit to a variable, call it "carry". Then, for each digit in your new number, starting with the least significant and going to the most significant:
1. Set carry to the current digit of the new number times the source base, plus carry.
2. Set the current digit to carry mod the output base.
3. Set carry to carry divided by the output base.
If carry is still non-zero after the most significant digit, append new digits of carry mod the output base. And that should do it (see the sketch below). There is an implementation of what you are trying to do at http://www.cis.ksu.edu/~howell/calculator/comparison.html
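Here is a runnable Python sketch of that school-book method, using the question's little-endian digit arrays (least significant digit first); convert_base is a hypothetical name:
def convert_base(digits, base_from, base_to):
    # digits: least significant first, as in the question (1234 -> [4, 3, 2, 1]).
    result = [0]  # running total, as little-endian digits in base_to
    for d in reversed(digits):  # most significant source digit first
        carry = d
        # result = result * base_from + d, carried out digit by digit.
        for i in range(len(result)):
            carry += result[i] * base_from
            result[i] = carry % base_to
            carry //= base_to
        while carry:  # append any leftover carry as new digits
            result.append(carry % base_to)
            carry //= base_to
    return result

print(convert_base([4, 3, 2, 1], 10, 16))  # [2, 13, 4]
print(convert_base([4, 3, 2, 1], 10, 8))   # [2, 2, 3, 2]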

How big can the argument to Perl's rand be?

rand(n) returns a number between 0 and n. Will rand work as expected, with regard to "randomness", for all arguments up to the integer limit on my platform?
This is going to depend on your randbits value:
rand calls your system's random number generator (or whichever one was compiled into your copy of Perl). For this discussion, I'll call that generator RAND to distinguish it from rand, Perl's function. RAND produces an integer from 0 to 2**randbits - 1, inclusive, where randbits is a small integer. To see what it is in your perl, use the command 'perl -V:randbits'. Common values are 15, 16, or 31.
When you call rand with an argument arg, perl takes that value as an integer and calculates this value:
rand(arg) = (arg * RAND) / 2**randbits
This value will always fall in the range required:
0 <= rand(arg) < arg
But as arg becomes large in comparison to 2**randbits, things become problematic. Let's imagine a machine where randbits = 15, so RAND ranges from 0..32767. That is, whenever we call RAND, we get one of 32768 possible values. Therefore, when we call rand(arg), we get one of 32768 possible values.
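A small Python sketch makes the granularity visible (it enumerates every possible RAND for randbits = 15; the specific argument is arbitrary):
RANDBITS = 15
ARG = 10**9  # "give me a random number below a billion"

# rand(arg) = arg * RAND / 2**randbits; enumerate all possible RAND values.
values = {ARG * r // 2**RANDBITS for r in range(2**RANDBITS)}
print(len(values))  # 32768 -- only 2**15 distinct results, spaced about 30518 apart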
It depends on the number of bits used by your system's (pseudo)random number generator. You can find this value via
perl -V:randbits
or within a program via
use Config;
my $randbits = $Config{randbits};
rand can generate 2^randbits distinct random numbers. While you can generate numbers larger than 2^randbits, you can't generate all of the integer values in the range [0, N) when N > 2^randbits.
Values of N which aren't a power of two can also be problematic, as the distribution of (integer truncated) random values won't quite be flat. Some values will be slightly over-represented, others slightly under-represented.
It's worth noting that randbits is a paltry 15 on Windows. This means you can only get 32768 (2**15) distinct values. You can improve the situation by making multiple calls to rand and combining the values:
use Config;
use constant RANDBITS => $Config{randbits};
use constant RAND_MAX => 2**RANDBITS;

sub double_rand {
    my $max = shift || 1;
    my $iv = int rand(RAND_MAX) << RANDBITS
           | int rand(RAND_MAX);
    return $max * ($iv / 2**(2*RANDBITS));
}
Assuming randbits = 15, double_rand mimics randbits = 30, providing 1073741824 (2**30) possible distinct values. This alleviates (but can never eliminate) both of the problems mentioned above.
We are talking about big random integers and whether it is possible to get them. It should be noted that the concatenation of two random integers is also a random integer. So if your system, for any reason, cannot go beyond 999999999999, then just write
$bigrand = int(rand(999999999999)).int(rand(999999999999));
and you'll get a random integer of (maximally) twice the length.
(Actually this is not a numeric answer to the question “how big a rand number can be” but rather the answer “you can get as big as you want, just concatenate small numbers”.)

hash function providing unique uint from an integer coordinate pair

The problem in general:
I have a big 2d point space, sparsely populated with dots.
Think of it as a big white canvas sprinkled with black dots.
I have to iterate over and search through these dots a lot.
The Canvas (point space) can be huge, bordering on the limits of int, and its size is unknown before setting points in there.
That brought me to the idea of hashing:
Ideal:
I need a hash function taking a 2D point and returning a unique uint32, so that no collisions can occur. You can assume that the number of dots on the Canvas is easily countable by uint32.
IMPORTANT: It is impossible to know the size of the canvas beforehand (it may even change),
so things like
canvaswidth * y + x
are sadly out of the question.
I also tried a very naive
abs(x) + abs(y)
but that produces too many collisions.
Compromise:
A hash function that provides keys with a very low probability of collision.
Cantor's enumeration of pairs
n = ((x + y)*(x + y + 1)/2) + y
might be interesting, as it's closest to your original canvaswidth * y + x but will work for any x or y. But for a real-world int32 hash, rather than a mapping of pairs of integers to integers, you're probably better off with a bit manipulation such as Bob Jenkins' mix, calling it with x, y and a salt.
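A minimal Python sketch of Cantor's pairing for non-negative coordinates (collision-free as a mapping into unbounded integers, though the result overflows 32 bits for large inputs):
def cantor_pair(x, y):
    # Unique for every pair of non-negative integers.
    return (x + y) * (x + y + 1) // 2 + y

print(cantor_pair(3, 4))  # 32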
a hash function that is GUARANTEED collision-free is not a hash function :)
Instead of using a hash function, you could consider using binary space partition trees (BSPs) or XY-trees (closely related).
If you want to hash two uint32's into one uint32, do not use things like Y & 0xFFFF because that discards half of the bits. Do something like
(x * 0x1f1f1f1f) ^ y
(you need to transform one of the variables first to make sure the hash function is not commutative)
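In a language with unbounded integers, the same hash needs an explicit 32-bit mask; a quick Python sketch:
def hash_xy(x, y):
    # Multiplying one coordinate first keeps the hash non-commutative.
    return ((x * 0x1F1F1F1F) ^ y) & 0xFFFFFFFF

print(hex(hash_xy(12, 34)))
print(hex(hash_xy(34, 12)))  # different result: hash_xy(x, y) != hash_xy(y, x)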
Like Emil, but handles 16-bit overflows in x in a way that produces fewer collisions, and takes fewer instructions to compute:
hash = ( y << 16 ) ^ x;
You can recursively divide your XY plane into cells, then divide these cells into sub-cells, etc.
In 2008, Gustavo Niemeyer invented his Geohash geocoding system.
Amazon's open source Geo Library computes the hash for any longitude-latitude coordinate. The resulting Geohash value is a 63-bit number. The probability of collision depends on the hash's resolution: if two objects are closer than the intrinsic resolution, the calculated hash will be identical.
Read more:
https://en.wikipedia.org/wiki/Geohash
https://aws.amazon.com/fr/blogs/mobile/geo-library-for-amazon-dynamodb-part-1-table-structure/
https://github.com/awslabs/dynamodb-geo
Your "ideal" is impossible.
You want a mapping (x, y) -> i where x, y, and i are all 32-bit quantities, which is guaranteed not to generate duplicate values of i.
Here's why: suppose there were a function hash() so that hash(x, y) gives a different integer value for every input. There are 2^32 (about 4 billion) values for x, and 2^32 values of y. So hash(x, y) has 2^64 (about 16 million trillion) possible inputs, each needing its own result. But there are only 2^32 possible values in a 32-bit int, so the results of hash() can't all be distinct.
See also http://en.wikipedia.org/wiki/Counting_argument
Generally, you should always design your data structures to deal with collisions. (Unless your hashes are very long (at least 128 bit), very good (use cryptographic hash functions), and you're feeling lucky).
Perhaps?
hash = ((y & 0xFFFF) << 16) | (x & 0xFFFF);
Works as long as x and y can be stored as 16-bit integers. No idea about how many collisions this causes for larger integers, though. One idea might be to still use this scheme but combine it with a compression step, such as first reducing x and y modulo 2^16.
If you can do a = ((y & 0xffff) << 16) | (x & 0xffff) then you could afterward apply a reversible 32-bit mix to a, such as Thomas Wang's
uint32_t hash(uint32_t a) {
    a = (a ^ 61) ^ (a >> 16);
    a = a + (a << 3);
    a = a ^ (a >> 4);
    a = a * 0x27d4eb2d;
    a = a ^ (a >> 15);
    return a;
}
That way you get a random-looking result rather than high bits from one dimension and low bits from the other.
You can do
a >= b ? a * a + a + b : a + b * b
taken from here.
That works for points in the positive quadrant. If your coordinates can be negative too, then you will have to do:
A = a >= 0 ? 2 * a : -2 * a - 1;
B = b >= 0 ? 2 * b : -2 * b - 1;
A >= B ? A * A + A + B : A + B * B;
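A Python sketch of that signed variant (the fold maps 0, -1, 1, -2, 2, ... onto 0, 1, 2, 3, 4, ..., then applies Szudzik's elegant pairing):
def szudzik_signed(a, b):
    # Fold signed integers onto the non-negative integers.
    A = 2 * a if a >= 0 else -2 * a - 1
    B = 2 * b if b >= 0 else -2 * b - 1
    # Szudzik's elegant pairing on the folded values.
    return A * A + A + B if A >= B else A + B * B

print(szudzik_signed(-1, 3))  # 37
print(szudzik_signed(3, -1))  # 43 -- distinct for every ordered signed pair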
But to restrict the output to uint you will have to keep an upper bound on your inputs, and if you do, it turns out that you know the bounds after all. In other words, in programming it is impractical to write such a function without some idea of the integer types of your inputs and output, and every integer type comes with a lower and an upper bound.
public uint GetHashCode(whatever a, whatever b)
{
    if (a > ushort.MaxValue || b > ushort.MaxValue ||
        a < ushort.MinValue || b < ushort.MinValue)
    {
        throw new ArgumentOutOfRangeException();
    }
    return (uint)(a * short.MaxValue + b); // very good space/speed efficiency
    // or whatever your function is.
}
If you want the output to be strictly uint for an unknown range of inputs, then there will be a reasonable number of collisions depending on that range. What I would suggest is a function that can overflow, but unchecked. Emil's solution is great, in C#:
return unchecked((uint)((a & 0xffff) << 16 | (b & 0xffff)));
See Mapping two integers to one, in a unique and deterministic way for a plethora of options.
Depending on your use case, it might be possible to use a Quadtree and replace points with the string of branch names. It is actually a sparse representation for points and will need a custom Quadtree structure that extends the canvas by adding branches when you add points off the canvas. But it avoids collisions, and you get benefits like quick nearest-neighbour searches.
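A minimal sketch of that "string of branch names" idea in Python (a quadkey; it assumes a fixed bounding box and depth rather than the self-extending tree described above, and quadkey is a hypothetical name):
def quadkey(x, y, x0, y0, x1, y1, depth=16):
    # Encode a point as the path of quadrant choices from the root cell.
    key = []
    for _ in range(depth):
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        q = (2 if y >= my else 0) + (1 if x >= mx else 0)
        key.append(str(q))
        x0, x1 = (mx, x1) if x >= mx else (x0, mx)
        y0, y1 = (my, y1) if y >= my else (y0, my)
    return "".join(key)

print(quadkey(3, 7, 0, 0, 16, 16, depth=4))  # "0233": quadrant indices, root to leaf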
If you're using a language or platform where all objects (even primitive ones like integers) have built-in hash functions (JVM languages like Java, .NET languages like C#, and others like Python, Ruby, etc.), you may use the built-in hash values as a building block and add your own "hashing flavor" into the mix. Like:
// C# code snippet
public class SomeVerySimplePoint {
    public int X;
    public int Y;

    public override int GetHashCode() {
        return (Y.GetHashCode() << 16) ^ X.GetHashCode();
    }
}
It may also be handy to have test cases (like a predefined million-point set) run against each candidate hash algorithm, comparing aspects such as computation time, memory required, key collision count, and edge cases (very big or very small values).
The Fibonacci hash works very well for integer pairs. The multiplier is 0x9E3779B9; for other word sizes w, use 1/phi = (sqrt(5)-1)/2, scaled by 2^w and rounded to odd. The hash is
a1 + a2 * multiplier
This will give very different values for close-together pairs. I do not know about the result over all pairs.
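A 32-bit Python sketch of that pair hash (the mask stands in for the natural wrap-around of a uint32):
PHI32 = 0x9E3779B9  # 2**32 / phi, rounded to odd

def fib_pair_hash(a1, a2):
    return (a1 + a2 * PHI32) & 0xFFFFFFFF

# Close-together pairs land far apart:
print(hex(fib_pair_hash(1, 1)))
print(hex(fib_pair_hash(1, 2)))
print(hex(fib_pair_hash(2, 1)))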