Exponent difference of floating-point numbers

For example, we have two floating-point numbers, x and y. The task is to calculate the exponent difference, e_x - e_y. What does "exponent difference" mean?

You'll want to look at the specification for IEEE-754 floating-point numbers; Wikipedia has a description here. Your professor probably wants you to extract the exponent field of each float you're working with and compute the integer difference between the two.
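As a minimal sketch (assuming normal, non-zero doubles; the variable names are illustrative), you can pull the biased exponent field of each number out of its bit pattern and subtract:

% Extract the biased exponent (bits 52-62 of an IEEE-754 binary64)
% and remove the bias of 1023. Assumes normal, non-zero inputs.
x = 6.25;      % 1.5625 * 2^2,  so e_x =  2
y = 0.1875;    % 1.5    * 2^-3, so e_y = -3
ex = double(bitand(bitshift(typecast(x, 'uint64'), -52), 2047)) - 1023;
ey = double(bitand(bitshift(typecast(y, 'uint64'), -52), 2047)) - 1023;
exponent_difference = ex - ey   % 2 - (-3) = 5

Alternatively, [~, e] = log2(x) returns an exponent under a slightly different normalization (MATLAB's log2 keeps the fraction in [0.5, 1), so e is one greater than the IEEE exponent); either convention gives the same difference e_x - e_y as long as you use it for both numbers.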

Related

MATLAB precision causing problems when dealing with floating point numbers

I am performing matrix multiplications involving floating-point numbers. Due to the precision of MATLAB I am getting incorrect output. For example, in the snippet below
a = 1+1e-18
a = 1
a is rounded to 1, but I want all of the decimal places to be kept for my calculation so that it does not round to one. How can I get MATLAB to keep all of the decimal places when performing my calculations?
MATLAB does not natively support rational data types or extended floating point beyond double. Functions such as rat and rats look promising at first, but don't offer the functionality to work with the result out of the box.
You could get some mileage by retaining the numerator and denominator of your numbers separately. If you then implement the operators that you need for fractions, you'd have much higher precision in your final result.
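As a hedged sketch of that idea (fracAdd is an illustrative name, not a MATLAB built-in; int64 arithmetic will saturate for large numerators and denominators):

function [n, d] = fracAdd(n1, d1, n2, d2)
% Exact fraction addition: a/b + c/d = (a*d + c*b)/(b*d),
% reduced to lowest terms. All inputs are int64 scalars.
n = n1*d2 + n2*d1;
d = d1*d2;
g = gcd(n, d);
n = n/g;    % exact: g divides both n and d
d = d/g;
end

With this, 1 + 1e-18 is kept exactly as the pair (10^18 + 1) / 10^18:

[n, d] = fracAdd(int64(1), int64(1), int64(1), int64(1e18))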

Matlab: How to decrease the precision of calculations to, let's say, 4 digits?

I am wondering how to tell Matlab that all computations need to be done, and carried forward, with, let's say, 4 digits.
format long and the other format options, as I understand it, only control the precision with which results are displayed, not the precision of the values themselves.
Are you hoping to increase performance or save memory by not using double precision? If so, then variable precision arithmetic and vpa (part of the Symbolic Math toolbox) are probably not the right choice. Instead, you might consider fixed point arithmetic. This type of math is mostly used on microcontrollers and architectures with memory address widths of less than 32 bits or that lack dedicated floating point hardware. In Matlab you'll need the Fixed Point Designer toolbox. You can read more about the capabilities here.
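As a rough sketch (assuming the Fixed-Point Designer toolbox is installed; the word and fraction lengths below are illustrative, and fixed point counts binary rather than decimal digits):

% A signed 16-bit fi object with 8 fraction bits quantizes every
% value to a multiple of 2^-8; arithmetic then stays in fixed point.
a = fi(pi, 1, 16, 8)    % stored as 3.140625, the nearest multiple of 2^-8
b = a * a               % the product is also a fi object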
Use digits: http://www.mathworks.com/help/symbolic/digits.html
digits(d) sets the current VPA accuracy to d significant decimal digits. The value d must be a positive integer greater than 1 and less than 2^29 + 1.
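For example (assuming the Symbolic Math Toolbox is installed), every vpa result is then carried with 4 significant decimal digits:

digits(4);
a = vpa(1/3)     % 0.3333
b = a * vpa(pi)  % intermediate results are rounded to 4 significant digits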

Does unary minus just change sign?

Consider for example the following double-precision numbers:
x = 1232.2454545e-89;
y = -1232.2454545e-89;
Can I be sure that y is always exactly equal to -x (or Matlab's uminus(x))? Or should I expect small numerical differences of the order of eps, as often happens with numerical computations? Try for example sqrt(3)^2-3: the result is not exactly zero. Can that happen with unary minus as well? Is it lossy like square root is?
Another way to put the question would be: is a negative numerical literal always equal to negating its positive counterpart?
My question refers to Matlab, but probably has more to do with the IEEE 754 standard than with Matlab specifically.
I have done some tests in Matlab with a few randomly selected numbers. I have found that, in those cases:
- y and -x turn out to be equal indeed.
- typecast(x, 'uint8') and typecast(-x, 'uint8') differ only in the sign bit as defined by the IEEE 754 double-precision format.
This suggests that the answer may be affirmative. If applying unary minus only changes the sign bit, and not the significand, no precision is lost.
But of course I have only tested a few cases. I'd like to be sure this happens in all cases.
This question is computer architecture dependent. However, the sign of floating point numbers on modern architectures (including x64 and ARM cores) is represented by a single sign bit, and they have instructions to flip this bit (e.g. FCHS). That being the case, we can draw two conclusions:
A change of sign can be achieved (and indeed is by modern compilers and architectures) by a single bit flip/instruction. This means that the process is completely invertible, and there is no loss of numerical accuracy.
It would make no sense for MATLAB to do anything other than the fastest, most accurate thing, which is just to flip that bit.
That said, the only way to be sure would be to inspect the assembly code for uminus in your MATLAB installation. I don't know how to do this.
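Short of reading the assembly, a hedged empirical check in the spirit of the typecast test above is to confirm that the bit patterns of x and -x differ in exactly the sign bit:

x = 1232.2454545e-89;
bx  = typecast( x, 'uint64');
bnx = typecast(-x, 'uint64');
signbit = bitshift(uint64(1), 63);
assert(bitxor(bx, bnx) == signbit)   % only bit 63, the sign, differs

This demonstrates the behavior for the numbers you feed it; it still does not prove what uminus compiles to in every case.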

What is the probability that two random-floats will be the same?

How is the NetLogo float defined, and how reliable is random-float as a method for producing unique numbers?
The algorithm used by random-float is known as the Mersenne Twister. (That fact is documented at http://ccl.northwestern.edu/netlogo/docs/programming.html#random.)
From the docs:
All numbers in NetLogo are stored internally as double precision floating point numbers, as defined in the IEEE 754 standard. They are 64 bit numbers consisting of one sign bit, an 11-bit exponent, and a 52-bit mantissa. See the IEEE 754 standard for details.
How reliable "random float" is to generate unique numbers depends greatly on the algorithm used, however a sequence of truly random numbers are in no way guaranteed to be unique, just likely to be.

double or decimal for temperature spread formula

I am looking at writing an application which calculates temperatures spread in a material with an exposure (such as fire) over time.
I then need to draw the result as a heat map on the geometry of the material.
Now I wonder if I should in general go with Decimal or Double for all calculations and drawing. I have looked into both but am still unsure which to use.
I will need to compare values, including values interpolated over time. And double has, as far as I understand it, comparison problems due to its inexact representation.
But decimal is heavy to work with.
I am leaning towards double only, but at the same time a more exact representation and comparison would be worth a lot too.
Any finite representation of decimal numbers is bound to have the same “inexact representation” issues as a finite binary representation. Not all real numbers can be represented in finite space and going to base 10 is not going to help there.
Decimal just uses the same bits less efficiently when it comes to representing a physical simulation. On the other hand, a particular implementation of decimal numbers may allow you to use more bits, but multi-precision binary floating-point implementations are available too.
The superstition that decimal floating-point libraries would somehow not be "inexact" or have "comparison problems" is widespread because these libraries can represent decimal numbers such as 0.1 exactly. If your simulation involves coefficients that are powers of ten, then decimal would indeed be a good fit. Otherwise, decimal will not solve any of the problems inherent in the finite representation of a continuous range of numbers.
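A minimal illustration of the comparison problem, written in MATLAB syntax since that is what the rest of this page uses (the same behavior holds for IEEE doubles in any language): compare with a tolerance, not with ==.

a = 0.1 + 0.2;
a == 0.3                % false: both sides carry binary rounding error
abs(a - 0.3) < 1e-12    % robust comparison within a tolerance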