Fixed point arithmetic

I'm currently using Microchip's Fixed Point Library, but I think this applies to most fixed point libraries. It supports Q15 and Q15.16 types, which are 16-bit and 32-bit respectively.
One thing I noticed is that it does not include add, subtract, multiply or divide functions.
How am I supposed to do these? Is it as simple as just adding/subtracting/multiplying/dividing them together using integer math? I can see addition and subtraction working, but multiplying or dividing wouldn't take care of the fractional part...?

The Microchip library includes functions for adding and subtracting that deal with underflow/overflow (_Q15add and _Q15sub).
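Assuming the add/subtract functions take and return _Q15 values (I haven't checked the exact prototypes in libq.h, so treat this as a sketch rather than the documented API), usage would look something like:
#include <libq.h>
void example(void)
{
    _Q15 a = 16384;              /* 0.5 in Q15 */
    _Q15 b = 24576;              /* 0.75 in Q15 */
    _Q15 sum  = _Q15add(a, b);   /* handles the overflow: 0.5 + 0.75 > 1 */
    _Q15 diff = _Q15sub(a, b);   /* 0.5 - 0.75 = -0.25 */
    (void)sum; (void)diff;       /* silence unused-variable warnings */
}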
Multiplication can be implemented as an assembly function (I think the code is good - this is from memory).
C calling prototype is:
extern _Q15 Q15mpy(_Q15 a, _Q15 b);
The routine (placed in a .s source file in your project) is:
.global _Q15mpy
_Q15mpy:
mul.ss w0, w1, w2 ; signed multiply of w0 by w1, 32-bit product in w3:w2
sl w2, w2 ; shift w2 left by one; its most significant bit goes into the carry
rlc w3, w0 ; rotate w3 left through the carry; Q15 result ends up in w0
return ; return value in w0
.end
Remember to include libq.h
This routine does a left-shift of one bit rather than a right-shift of 15 bits on the result. There are no overflow concerns because Q15 numbers always have a magnitude <= 1.
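For reference, here is a portable C sketch of what this routine computes (the name Q15mpy_c and the int16_t typedef are just for illustration and are not part of the Microchip library):
#include <stdint.h>
typedef int16_t _Q15;                            /* assumption: _Q15 is a signed 16-bit value */
static _Q15 Q15mpy_c(_Q15 a, _Q15 b)
{
    int32_t product = (int32_t)a * (int32_t)b;   /* Q1.30 intermediate product */
    return (_Q15)(product >> 15);                /* drop 15 fractional bits, back to Q15 */
}
Dropping 15 fractional bits computes the same thing as the "shift the double word left by one and keep the high word" trick the assembly uses; the corner case (-1) * (-1) wraps to -1 in both versions.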

It turns out that, because of how the numbers are represented, the basic arithmetic operations can be done with the native operators: divide uses the / operator and multiply uses the * operator, and these compile down to plain 32-bit divides and multiplies.

Related

RSA Prime Generation using Provable vs Probable Prime Construction

I am trying to implement RSA prime generation for P and Q based on the FIPS 186-4 specification. The specification describes two different implementations: Section 3.2 Provable Prime Construction vs. Section 3.3 Probable Prime Construction. Initially, I tried implementing the probable prime approach because it is easier to understand and implement, but I discovered it is very slow because of the number of iterations needed to find the P and Q primes (in the worst case it takes 15 minutes). Next, I decided to try the provable prime approach, but I found that the algorithm is much more complex and might be slow as well. Below are my two issues:
In Section C.10, Step 12, how do I eliminate the sqrt(2) from the expression x = floor(sqrt(2)*(2^(L−1))) + (x mod (2^L − floor(sqrt(2)*(2^(L−1))))) so that I can represent it with whole numbers using a BigNum representation?
In Section C.10, Step 14, is there a fast way to compute y in the interval [1, p2] such that 0 = (y*p0*p1 − 1) mod p2? The specification doesn't specify a method to implement this. My initial thought was to perform a linear search starting from the integer 1 and going up, but that can be very slow because p2 can be a very large number.
I tried searching online for help on this issue, but I discovered that a lot of examples don't even comply with FIPS 186-4. I assume it is because these two methods are too slow.

What does [int.,int] mean in Maple?

I have code that works as a nonlinear equation system solver.
I'm having trouble with a command that goes like this:
newt[0]:=[-2.,20]:
I don't know what that dot does there!
I thought it might be there to show that it is -2.0, but there would be no reason for that if -2 = -2.0 by default.
Can anyone help me with this?
The dot forces float calculations
It is not correct that by default -2 = -2.0. There is a very big difference for Maple in how it calculates: if you use -2 it calculates exactly (arithmetic expressions), while -2.0 tells Maple to calculate with floats (numerical expressions).
The two expressions -2.*sqrt(5) and -2*sqrt(5.) are quite different in how Maple handles them, if you notice the float position! For the first example, the square root is calculated arithmetically, while in the second example it is calculated numerically!
This can be a very big deal for some calculations, both with regard to speed and precision, and should be considered carefully when one wants to do complicated computations.
Speed example: Calculate exp(x) for x = 1,2,...,50000. (Arithmetic > numerical)
CodeTools:-Usage(seq(exp(x),x=1..50000)): # Arithmetic
memory used=19.84MiB, alloc change=0 bytes, cpu time=875.00ms,
real time=812.00ms, gc time=265.62ms
CodeTools:-Usage(seq(exp(1.*x),x=1..50000)): # Numerical
memory used=292.62MiB, alloc change=0 bytes, cpu time=9.67s,
real time=9.45s, gc time=1.09s
Notice especially the huge difference in memory used.
This is an example of when using floats gives worse performance. Conversely, if we are just approximating anyway, numerical evaluation is much faster.
Approximate exp(1) (numerical > arithmetic)
CodeTools:-Usage(seq((1+1/x)^x,x=1..20000)): # Arithmetic
memory used=0.64GiB, alloc change=0 bytes, cpu time=39.05s,
real time=40.92s, gc time=593.75ms
CodeTools:-Usage(seq((1+1./x)^x,x=1..20000)): # Numerical
memory used=56.17MiB, alloc change=0 bytes, cpu time=1.06s,
real time=1.13s, gc time=0ns
Precision example: For precision, things can go very wrong if one is not careful.
f:=x->(Pi-x)/sin(x);
limit(f(x),x=Pi); # Arithmetic returns 1 (true value)
limit(f(x),x=Pi*1.); # Numerical returns 0 (wrong!!!)
After working with it a little, I finally found what it does!
Short answer: it calculates the result of the expression with those two numbers as inputs.
Extended answer (example):
Given two functions, we want to calculate the Jacobian matrix for this equation system:
with(linalg);
with(plots);
f := (x, y) -> (1/64)*(x-11)^2-(1/100)*(y-7)^2-1;
g := (x, y) -> (x-3)^2+(y-1)^2-400;
then we put the functions in a vector:
F:=(x, y) -> vector([f(x,y),g(x,y)]);
F(-2 ,20)
F(-2.,20)
The results will be:
[-79/1600 -14]
[-0.049375000 -14]

Variable precision arithmetic for symbolic integral in Matlab

I am trying to calculate some integrals that use very high power exponents. An example equation is:
(-exp(-(x+sqrt(p)).^2)+exp(-(x-sqrt(p)).^2)).^2 ...
./( exp(-(x+sqrt(p)).^2)+exp(-(x-sqrt(p)).^2)) ...
/ (2*sqrt(pi))
where p is constant (1000 being a typical value), and I need the integral for x=[-inf,inf]. If I use the integral function for numeric integration I get NaN as a result. I can avoid that if I set the limits of the integration to something like [-20,20] and a low p (<100), but ideally I need the full range.
I have also tried setting syms x and using int and vpa, but in this case vpa returns:
1.0 - 1.0*numeric::int((1125899906842624*(exp(-(x - 10*10^(1/2))^2) - exp(-(x + 10*10^(1/2))^2))^2)/(3991211251234741*(exp(-(x - 10*10^(1/2))^2) + exp(-(x + 10*10^(1/2))^2)))
without calculating a value. Again, if I set the limits of the integration to lower values I do get a result (also for low p), but I know that the result that I get is wrong – e.g., if x=[-100,100] and p=1000, the result is >1, which should be wrong as the equation should be asymptotic to 1 (or alternatively the codomain should be [0,1) ).
Am I doing something wrong with vpa or is there another way to calculate high precision values for my integrals?
First, you're doing something that makes solving symbolic problems more difficult and less accurate. The variable pi is a floating-point value, not an exact symbolic representation of the fundamental constant. In Matlab symbolic math code, you should always use sym('pi'). You should do the same for any other special numeric values you use, e.g., sqrt(sym('2')) and exp(sym('1')), or they will get converted to an approximate rational fraction by default (the source of the strange large numbers you see in the code in your question). For further details, I recommend that you read through the documentation for the sym function.
Applying the above, here's a runnable example:
syms x;
p = 1000;
f = (-exp(-(x+sqrt(p)).^2)+exp(-(x-sqrt(p)).^2)).^2./(exp(-(x+sqrt(p)).^2)...
+exp(-(x-sqrt(p)).^2))/(2*sqrt(sym('pi')));
Now vpa(int(f,x,-100,100)) and vpa(int(f,x,-1e3,1e3)) return exactly 1.0 (to 32 digits of precision, see below).
Unfortunately, vpa(int(f,x,-Inf,Inf)) does not return an answer, but a call to the underlying MuPAD function numeric::int. As I explain in this answer, this is what can happen when int cannot obtain a result. Normally, it should try to evaluate the integral numerically, but your function appears to be ill-defined at ±∞, resulting in divide-by-zero issues that the variable precision quadrature methods can't handle well. You can evaluate the integral at wider bounds by increasing the variable precision using the digits function (just remember to set digits back to the default of 32 when done). Setting digits(128) allowed me to evaluate vpa(int(f,x,-1e4,1e4)). You can also more efficiently evaluate your integral over a wider range via 2*vpa(int(f,x,0,1e4)) at lower effective digits settings.
If your goal is to see exactly how much less than one p = 1000 corresponds to, you can use something like vpa(1-2*int(f,x,0,1e4)). At digits(128), this returns
0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000086457415971094118490438229708839420392402555445545519907545198837816908450303280444030703989603548138797600750757834260181259102
Applying double to this shows that it is approximately 8.6e-89.

Real numbers (constants) in genetic programming

I can't figure out how a genetically programmed A.I. can determine when there should be a constant in the final equation. If I take the formula F(m) = m*a, say F(m) = m*9.8, how can the A.I. know what the real number 9.8 actually is? I understand that instead of putting the final number in the binary tree, you can put a symbol that represents a constant and then later calculate or guess its value in some way.
Thank you
Given a predefined set of constants (part of the terminal set) they'll be combined to form new constants (using a tree-representation, any sub-tree with only numeric constants as leaves can itself be thought of as a new numeric constant).
Even with a single constant (c) the system will create:
the 1.0 constant (constant divided by itself: c / c);
the 2.0 constant (1.0 + 1.0 i.e. c / c + c / c);
the 0.5 constant (1.0 / 2.0 i.e. c / c / (c / c + c / c));
many constants will be created this way (if you are lucky... 9.8).
Sometimes special terminals named "ephemeral random constant" (Koza) are used. For each ephemeral in the initial population, a random number in a specified range is generated. Then these random constants are moved around and combined.
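As a concrete illustration of an ephemeral-random-constant terminal, here is a minimal C sketch (the name make_erc and the [-10, 10] range are my own assumptions, not taken from any particular GP framework):
#include <stdio.h>
#include <stdlib.h>

/* A terminal node holding an ephemeral random constant (ERC): the value is
   drawn once, when the node is created, and stays fixed; evolution can only
   move, copy and combine such nodes afterwards. */
typedef struct { double value; } erc_terminal;

static erc_terminal make_erc(void)
{
    erc_terminal t;
    t.value = -10.0 + 20.0 * ((double)rand() / RAND_MAX);  /* uniform in [-10, 10] */
    return t;
}

int main(void)
{
    srand(42);  /* fixed seed, just to make the demo reproducible */
    for (int i = 0; i < 5; ++i)
        printf("ERC %d: %f\n", i, make_erc().value);
    return 0;
}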
Anyway, even with the use of the ephemeral random constant, GP can be hard put to generate the right constants (Koza said "the finding of numeric constants is a skeleton in the GP closet").
So other techniques can be used during/after the evolution, e.g. numeric mutation, hill climbing...
These hybrid systems often have significant improvements in the success ratios (at least for regression problems).

Why does the inverse equality not hold in MATLAB?

MATLAB does not seem to satisfy the matrix identity for inverses, that is:
(A*B*C)^(-1) = C^(-1) * B^(-1) * A^(-1)
in MATLAB,
if inv(A*B*C) == inv(C)*inv(B)*inv(A)
disp('satisfied')
end
The condition is not satisfied. When I switched to format long, I saw that there is a difference in the later decimal places, but the equality does not hold even when I use format rat.
Why is that so?
Very likely a floating point error. Note that the format function affects only how numbers display, not how MATLAB computes or saves them. So setting it to rat won't help the inaccuracy.
I haven't tested it, but you may try the Fractions Toolbox for exact rational number arithmetic, which should give equality in the above.
Consider this (MATLAB R2011a):
>> a = 1e10;
>> b = inv(a)*inv(a)
b =
1.0000e-020
>> c = inv(a*a)
c =
1.0000e-020
>> b==c
ans =
0
>> format hex
>> b
b =
3bc79ca10c924224
>> c
c =
3bc79ca10c924223
When MATLAB calculates the intermediate quantities inv(a), or a*a (whether a is a scalar or a matrix), it by default stores them as the closest double precision floating point number - which is not exact. So when these slightly inaccurate intermediate results are used in subsequent calculations, there will be round off error.
Instead of comparing floating point numbers for direct equality, such as inv(A*B*C) == inv(C)*inv(B)*inv(A), it's often better to compare the absolute difference to a threshold, such as abs(inv(A*B*C) - inv(C)*inv(B)*inv(A)) < thresh. Here thresh can be an arbitrary small number, or some expression involving eps, which gives you the smallest difference between two numbers at the precision at which you're working.
The format command only controls the display of results at the command line, not the way in which results are internally stored. In particular, format rat does not make MATLAB do calculations symbolically. For this, you might take a look at the Symbolic Math Toolbox. format hex is often even more useful than format long for diagnosing floating point precision issues such as the one you've come across.