What is the probability that two random-floats will be the same? - netlogo

How is the NetLogo float defined, and how reliable is random-float as a method for producing unique numbers?

The algorithm used by random-float is known as the Mersenne Twister. (That fact is documented at http://ccl.northwestern.edu/netlogo/docs/programming.html#random.)

From the docs:
All numbers in NetLogo are stored internally as double precision floating point numbers, as defined in the IEEE 754 standard. They are 64 bit numbers consisting of one sign bit, an 11-bit exponent, and a 52-bit mantissa. See the IEEE 754 standard for details.
How reliable "random float" is to generate unique numbers depends greatly on the algorithm used, however a sequence of truly random numbers are in no way guaranteed to be unique, just likely to be.

Related

What is the optimum precision to use in an arithmetic encoder?

I've implemented an arithmetic coder here - https://github.com/danieleades/arithmetic-coding
I'm struggling to understand a general way to choose an optimal number of bits for representing integers within the encoder. I'm using a model where probabilities are represented as rationals.
I know that to prevent underflows/overflows, the number of bits used to represent integers within the encoder/decoder must be at least 2 bits greater than the maximum number of bits used to represent the denominator of the probabilities.
For example, if I use a maximum of 10 bits to represent the denominator of the probabilities, then to ensure the encoding/decoding works, I need to use at least MAX_DENOMINATOR_BITS + 2 = 12 bits to represent the integers.
If I were to use 32-bit integers to store these values, I would have another 10 bits up my sleeve (right?).
I've seen a couple of examples that use 12 bits for integers and 8 bits for probabilities, with a 32-bit integer type. Is this somehow optimal, or is this just a fairly generic choice?
I've found that increasing the precision above the minimum improves the compression ratio slightly (but it saturates quickly). Given that increasing the precision improves compression, what is the optimum choice? Should I simply aim to maximise the number of bits I use to represent the integers for a given denominator? Performance is a non-goal for my application, in case that's a consideration.
Is it possible to quantify the benefit of moving to, say, a 64-bit internal representation to provide a greater number of precision bits?
I've based my implementation on this (excellent) article - https://marknelson.us/posts/2014/10/19/data-compression-with-arithmetic-coding.html
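The bit-budget constraint described above can be sketched as a quick sanity check. This is a minimal illustration in Python, assuming the lower bound stated in the question (MAX_DENOMINATOR_BITS + 2) and assuming, as in the linked article, that the product of the working range and a frequency count must still fit in the underlying word; the function name is made up for illustration and is not part of the linked crate:

def precision_bounds(max_denominator_bits: int, word_bits: int = 32) -> tuple[int, int]:
    """Return the (min, max) usable bit widths for the coder's internal integers."""
    min_bits = max_denominator_bits + 2           # lower bound from the question
    max_bits = word_bits - max_denominator_bits   # so range * denominator fits in a word
    if min_bits > max_bits:
        raise ValueError("denominator is too wide for this word size")
    return min_bits, max_bits

print(precision_bounds(10, 32))   # (12, 22): 10 spare bits above the minimum of 12
print(precision_bounds(10, 64))   # (12, 54): a 64-bit word leaves far more headroom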

Matlab: How to decrease the precision of calculations in Matlab to, let's say, 4 digits?

I am wondering what would be the way to tell Matlab that all computations need to be done and carried on with, let's say, 4 digits of precision.
format long and other format settings are, I think, for showing the results with a specific precision, not for the precision of the values themselves.
Are you hoping to increase performance or save memory by not using double precision? If so, then variable precision arithmetic and vpa (part of the Symbolic Math toolbox) are probably not the right choice. Instead, you might consider fixed point arithmetic. This type of math is mostly used on microcontrollers and architectures with memory address widths of less than 32 bits or that lack dedicated floating point hardware. In Matlab you'll need the Fixed Point Designer toolbox. You can read more about the capabilities here.
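As a language-agnostic illustration of what fixed-point arithmetic means (a toy sketch in Python, not the Fixed Point Designer toolbox): values are stored as integers scaled by a fixed factor, here 10^4, so every quantity carries exactly four fractional decimal digits:

SCALE = 10 ** 4   # four fractional decimal digits

def to_fixed(x: float) -> int:
    """Convert a float to a scaled integer."""
    return round(x * SCALE)

def fixed_mul(a: int, b: int) -> int:
    """Multiply two fixed-point values, rescaling to keep one factor of SCALE."""
    return (a * b) // SCALE

a, b = to_fixed(1.2345), to_fixed(2.0001)
print(fixed_mul(a, b) / SCALE)    # 2.4691 (the exact product 2.46912345, truncated)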
Use digits: http://www.mathworks.com/help/symbolic/digits.html
digits(d) sets the current VPA accuracy to d significant decimal digits. The value d must be a positive integer greater than 1 and less than 2^29 + 1.
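For comparison with digits/vpa, here is a rough analogue in Python (an illustration only, not MATLAB code): the decimal module's context precision limits the significant digits carried through the arithmetic itself, not just how results are printed:

from decimal import Decimal, getcontext

getcontext().prec = 4             # carry 4 significant digits through arithmetic

x = Decimal(2).sqrt()
print(x)                          # 1.414
print(x * x)                      # 1.999 (the rounding propagates into later results)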

Are there architectures which are not using two's complement for representation of negative values?

The benefits of using two's complement for storing negative values in memory are well known and have been discussed at length on this board.
Hence, I'm wondering:
Do or did some architectures exist, which have chosen a different way for representing negative values in memory than using two's complement?
If so: What were the reasons?
Signed-magnitude existed as the most obvious, naive implementation of signed numbers.
One's complement has also been used on real machines.
On both of those representations, there's a benefit that the positive and negative ranges span equal intervals. A downside is that they both contain a negative zero representation that doesn't naturally occur in the sort of integer arithmetic commonly used in computation. And of course, the hardware for two's complement turns out to be much simpler to build.
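To make the three integer representations concrete, here is a small sketch (Python, with an 8-bit width assumed for illustration) showing how the same negative value is stored under each scheme:

def encodings(n: int, bits: int = 8) -> dict:
    """Bit patterns for n under sign-magnitude, one's complement and two's complement."""
    mask = (1 << bits) - 1
    mag = abs(n)
    sign_mag = ((1 << (bits - 1)) | mag) if n < 0 else mag
    ones_comp = (~mag & mask) if n < 0 else mag
    twos_comp = n & mask
    fmt = f"0{bits}b"
    return {
        "sign-magnitude": format(sign_mag, fmt),
        "one's complement": format(ones_comp, fmt),
        "two's complement": format(twos_comp, fmt),
    }

print(encodings(-5))
# {'sign-magnitude': '10000101', "one's complement": '11111010', "two's complement": '11111011'}
# Note that 10000000 (sign-magnitude) and 11111111 (one's complement) both encode a "negative zero".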
Note that the above applies to integers. Common IEEE-style floating point representations are effectively sign-magnitude, with some more details layered into the magnitude representation.

double or decimal for temperature spread formula

I am looking at writing an application which calculates the temperature spread in a material under an exposure (such as fire) over time.
I then need to draw the result as a heat map on the geometry of the material.
Now I wonder if I should in general go with Decimal or Double for all calculations and drawing. I have looked into both but am still unsure which to use.
I will need to compare values, including values interpolated over time. And double has, as far as I understand it, comparison problems due to its inexact representation.
But decimal is heavy to work with.
I am leaning towards double only, but at the same time a more exact representation and comparison would be worth a lot too.
Any finite representation of decimal numbers is bound to have the same “inexact representation” issues as a finite binary representation. Not all real numbers can be represented in finite space and going to base 10 is not going to help there.
Decimal just uses the same bits less efficiently, at least when it comes to representing a physical simulation. On the other hand, a particular implementation of decimal numbers may allow you to use more bits, but multi-precision binary floating-point implementations are available too.
The superstition that decimal floating-point libraries would somehow not be “inexact” or have “comparison problems” is widespread because these libraries can represent exactly decimal numbers, starting with 0.1. If your simulation involves coefficients that are powers of ten, then decimal would indeed be a good fit. Otherwise, decimal will not solve any of the problems inherent to the finite representation of a continuous range of numbers.
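The point above is easy to demonstrate. A small Python sketch (using the standard decimal module as a stand-in for any decimal type): 0.1 is exact in decimal but not in binary, while a value like 1/3 is inexact in both:

from decimal import Decimal, getcontext

getcontext().prec = 28

print(Decimal("0.1"))            # 0.1 exactly
print(f"{0.1:.20f}")             # 0.10000000000000000555 (nearest binary double)

print(Decimal(1) / Decimal(3))   # 0.3333333333333333333333333333 (rounded)
print(f"{1 / 3:.20f}")           # 0.33333333333333331483 (rounded)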

exponent difference of floating points

For example, we have two floating-point numbers.
The task is to calculate the exponent difference, e_x - e_y. What does "exponent difference" mean?
You'll want to look at the specification for IEEE-754 floating point numbers. Wikipedia has a description here. Your professor probably wants you to determine the exponential components of the floats you're working with and compute the integer difference between them.
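A concrete way to see this (a Python sketch, assuming IEEE-754 doubles as described above, ignoring zeros and subnormals) is to pull out the 11-bit exponent field directly and subtract the bias of 1023:

import struct

def unbiased_exponent(x: float) -> int:
    """Extract the IEEE-754 double exponent field and remove the bias of 1023."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    return ((bits >> 52) & 0x7FF) - 1023

x, y = 12.0, 0.375                 # 12.0 = 1.5 * 2^3, 0.375 = 1.5 * 2^-2
e_x, e_y = unbiased_exponent(x), unbiased_exponent(y)
print(e_x, e_y, e_x - e_y)         # 3 -2 5  (the "exponent difference" e_x - e_y)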