Julia - significance of f0 after a number

What is the significance of f0 in the following:
julia> 1.25f3
1250.0f0
What is the difference from 1.25e3, which means 1.25 * 10^3?
I looked in the documentation, but I did not find it.

That is a very old manual you are reading; take a look at this section instead: https://docs.julialang.org/en/v1/manual/integers-and-floating-point-numbers/. The TL;DR is that with f you get Float32 (single precision) and with e you get Float64 (double precision):
julia> typeof(1.25f3)
Float32
julia> typeof(1.25e3)
Float64
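The type also determines how much precision you keep: Float32 carries about 7 significant decimal digits, Float64 about 16. A quick illustration at the REPL:
julia> eps(Float32)   # spacing between adjacent Float32 values near 1.0
1.1920929f-7
julia> eps(Float64)   # spacing between adjacent Float64 values near 1.0
2.220446049250313e-16
julia> Float32(1/3)   # note the f0 suffix marking the result as Float32
0.33333334f0
julia> 1/3
0.3333333333333333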


Equivalent to MATLAB's freqz in Julia

MATLAB has a nice library of digital signal processing functions that give you a lot of control over linear time-invariant systems. My goal is to view the frequency response of a system governed by the two vectors a=[1, -1.1, 0.46] and b=[1, 0.04, 0.76].
To do this in MATLAB, I'd just use freqz(b,a) and get back the frequency response h and the frequencies w at which it was evaluated. I am working in Julia and am using the DSP package, so I'm attempting to use the DSP.Filters.freqresp function. To set this up, I define the vectors and import the relevant package:
a=[1, -1.1, 0.46]
b=[1, 0.04, 0.76]
using DSP.Filters: freqresp
h, w = freqresp([b,a])
When I run this I get the error
MethodError: no method matching freqresp(::Vector{Int64})
How should I be implementing this function to obtain a valid result?
@Shayan pointed me in the right direction by mentioning PolynomialRatio: plugging in the a and b vectors gives a representation that freqresp will accept. As a result, I get the same answer as in MATLAB:
using DSP, Plots   # Plots provides plot, xaxis!, yaxis!

a = [1, -1.1, 0.46]
b = [1, 0.04, 0.76]
z = PolynomialRatio(b, a)
h, w = freqresp(z)
mag = abs.(h)
phase = atan.(imag.(h) ./ real.(h)) * 180 / pi   # principal-value phase in degrees
p1 = plot(w, mag)
xaxis!("")
yaxis!("Magnitude")
p2 = plot(w, phase)
xaxis!("Normalized Frequency")
yaxis!("Wrapped Phase [deg]")
plot(p1, p2, layout = (2, 1))
Which gives magnitude and phase plots matching MATLAB's freqz output.
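A side note: atan.(imag.(h) ./ real.(h)) only recovers phase in (-90°, 90°); for the full four-quadrant wrapped phase you could use Julia's built-in angle instead:
phase = rad2deg.(angle.(h))   # four-quadrant wrapped phase in (-180°, 180°]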
You can also make use of the ControlSystems.jl package for this:
julia> using ControlSystemsBase, Plots
julia> a=[1, -1.1, 0.46]
3-element Vector{Float64}:
1.0
-1.1
0.46
julia> b=[1, 0.04, 0.76]
3-element Vector{Float64}:
1.0
0.04
0.76
julia> Z = tf(b,a,1)
TransferFunction{Discrete{Int64}, ControlSystemsBase.SisoRational{Float64}}
1.0z^2 + 0.04z + 0.76
---------------------
1.0z^2 - 1.1z + 0.46
Sample Time: 1 (seconds)
Discrete-time transfer function model
julia> bodeplot(Z)
The bodeplot plot recipe handles phase unwrapping etc. for you.
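If you want the raw numbers rather than a plot, ControlSystemsBase also has a bode function (a sketch from memory: mag and phase come back as arrays with input/output and frequency axes, phase in degrees, so a SISO system can be flattened with vec):
mag, phase, w = bode(Z)
magv, phasev = vec(mag), vec(phase)   # SISO: reduce to plain vectors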

Inf*0 in Matlab

I have the following line in my code:
1 - sqrt(pi/2)*sig*sqrt(Eb)*theta_l*exp(theta_l^2*sig^2*Eb/2).*(1 + erf(-theta_l*sig*sqrt(Eb)/sqrt(2)));
When I evaluate this expression with the following parameters:
Eb = 6324.6;
sig = 1/sqrt(2);
theta_l = 0.7;
I get NaN. I know that this comes from the product of infinity and zero. However, when I tested the same line in Mathematica, the result was a finite value. How can I solve this issue? Thanks.
The problematic part of your expression is the exponential term exp(theta_l^2*sig^2*Eb/2). With your parameters its argument is about 775, but exp overflows double precision for arguments above roughly 709 (the numerical precision in Mathematica is higher, or dynamic, probably at the cost of performance), so you get Inf.
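You can check the overflow threshold directly; for example, in Julia:
log(floatmax(Float64))     # ≈ 709.78, the largest argument exp can take in Float64
exp(709.0)                 # ≈ 8.2e307, still finite
exp(710.0)                 # Inf
0.7^2 * 0.5 * 6324.6 / 2   # ≈ 774.8, well past the threshold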
However, you can just change the input units to your function to stop this happening. For example, if we define your function as an anonymous function ...
funky = @(Eb, sig, theta_l) ...
    1 - sqrt(pi/2)*sig*sqrt(Eb)*theta_l*exp(theta_l^2*sig^2*Eb/2) .* ...
    (1 + erf(-theta_l*sig*sqrt(Eb)/sqrt(2)));
Then
funky(6324.6 / 1000, (1/sqrt(2))/1000, 0.7 / 1000) == ...
funky(6324.6 / 1e6, (1/sqrt(2))/1e6, 0.7 / 1e6) == ...
funky(6324.6 / 1e10, (1/sqrt(2))/1e10, 0.7 / 1e10) % etc
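Alternatively, you can sidestep the Inf*0 analytically: with u = theta_l*sig*sqrt(Eb)/sqrt(2), the troublesome factor satisfies exp(u^2)*(1 + erf(-u)) = exp(u^2)*erfc(u) = erfcx(u), the scaled complementary error function, which stays finite. A sketch in Julia using SpecialFunctions.jl (MATLAB has a built-in erfcx too):
using SpecialFunctions   # provides erf and erfcx

Eb, sig, th = 6324.6, 1/sqrt(2), 0.7
u = th * sig * sqrt(Eb) / sqrt(2)

# naive form: exp(u^2) == Inf and (1 + erf(-u)) == 0.0, so the product is NaN
naive = 1 - sqrt(pi/2) * sig * sqrt(Eb) * th * exp(u^2) * (1 + erf(-u))

# stable form: exp(u^2) * erfc(u) == erfcx(u), so nothing overflows
stable = 1 - sqrt(pi/2) * sig * sqrt(Eb) * th * erfcx(u)   # ≈ 6.45e-4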

Why is there a significant double precision difference between Matlab and Mathematica?

I created a random double precision value in Matlab by
x = rand(1,1);
then display all possible digits of x by
vpa(x,100)
and obtain:
0.22381193949113697971853298440692014992237091064453125
I save x to a .mat file, and import it into Mathematica, and then convert it:
y = N[FromDigits[RealDigits[x]],100]
and obtain:
0.22381193949113690000
Then go back to Matlab and use (copy and paste all the Mathematica digits to Matlab):
vpa(0.22381193949113690000,100)
and obtain:
0.2238119394911368964518061375201796181499958038330078125
Why is there a significant difference between the same double precision variable?
How can I bridge the gap when exchanging data between Mathematica and Matlab?
You can fix this problem by using ReadList instead of Import. I have added some demo steps below to explore displayed rounding and equality. Note that the final test d == e is False in Mathematica 7 but True in Mathematica 9 (with all the expected digits), so it looks like some precision has been added to Import by version 9.
Contents of demo.dat:
0.22381193949113697971853298440692014992237091064453125
"0.22381193949113697971853298440692014992237091064453125"
Exploring:
a = Import["demo.dat"]
b = ReadList["demo.dat"]
a[[1, 1]] == a[[2, 1]]
b[[1]] == b[[2]]
a[[1, 1]] == b[[1]]
a[[1, 1]] == ToExpression@b[[2]]
b[[1]] // FullForm
c = First@StringSplit[ToString@FullForm@b[[1]], "`"]
b[[2]]
ToExpression /@ {c, b[[2]]}
d = N[FromDigits[RealDigits[a[[1, 1]]]], 100]
e = N[FromDigits[RealDigits[b[[1]]]], 100]
d == e
The precision is as expected for double values. A double has a 53-bit fraction, thus the precision is about 53*log(2)/log(10) ≈ 16 significant digits. You have 16 significant digits; it works as expected.
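The trailing digits beyond those ~16 are simply the exact decimal expansion of the underlying binary value, which any extended-precision environment can reproduce; for instance, Julia's BigFloat prints it directly:
x = 0.22381193949113697971853298440692014992237091064453125   # parses to the nearest Float64
big(x)   # prints the exact decimal value of that Float64, matching MATLAB's vpa output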

(0.3)^3 == (0.3)*(0.3)*(0.3) returns false in matlab?

I am trying to understand roundoff error for basic arithmetic operations in MATLAB and I came across the following curious example.
(0.3)^3 == (0.3)*(0.3)*(0.3)
ans = 0
I'd like to know exactly how the left-hand side is computed. MATLAB documentation suggests that for integer powers an 'exponentiation by squaring' algorithm is used.
"Matrix power. X^p is X to the power p, if p is a scalar. If p is an integer, the power is computed by repeated squaring."
So I assumed (0.3)^3 and (0.3)*(0.3)^2 would return the same value. But this is not the case. How do I explain the difference in roundoff error?
I don't know anything about MATLAB, but I tried it in Ruby:
irb> 0.3 ** 3
=> 0.026999999999999996
irb> 0.3 * 0.3 * 0.3
=> 0.027
According to the Ruby source code, the exponentiation operator casts the right-hand operand to a float if the left-hand operand is a float, and then calls the standard C function pow(). The float variant of the pow() function must implement a more complex algorithm for handling non-integer exponents, which would use operations that result in roundoff error. Maybe MATLAB works similarly.
Interestingly, scalar ^ seems to be implemented using pow while matrix ^ is implemented using square-and-multiply. To wit:
octave:13> format hex
octave:14> 0.3^3
ans = 3f9ba5e353f7ced8
octave:15> 0.3*0.3*0.3
ans = 3f9ba5e353f7ced9
octave:20> [0.3 0;0 0.3]^3
ans =
3f9ba5e353f7ced9 0000000000000000
0000000000000000 3f9ba5e353f7ced9
octave:21> [0.3 0;0 0.3] * [0.3 0;0 0.3] * [0.3 0;0 0.3]
ans =
3f9ba5e353f7ced9 0000000000000000
0000000000000000 3f9ba5e353f7ced9
This is confirmed by running octave under gdb and setting a breakpoint in pow.
The same is likely true in MATLAB, but I can't really verify.
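As a point of contrast (assuming current Julia semantics): Julia lowers small literal integer powers through Base.literal_pow to plain repeated multiplication, so there the two spellings agree bit-for-bit:
julia> 0.3^3 == 0.3 * 0.3 * 0.3   # a literal ^3 lowers to x*x*x via Base.literal_pow
true
julia> bitstring(0.3^3) == bitstring(0.3 * 0.3 * 0.3)
true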
Thanks to @Dougal I found this:
#include <stdio.h>
int main() {
    double x = 0.3;
    printf("%.40f\n", x * x * x);            // plain double arithmetic
    long double y = 0.3;                     // extended-precision intermediates
    printf("%.40f\n", (double)(y * y * y));
    return 0;
}
which gives:
0.0269999999999999996946886682280819513835
0.0269999999999999962252417162744677625597
The case is strange because the computation with more digits gives a worse result. This is because the initial number 0.3 is itself only an approximation (it is not exactly representable in binary), so we start with a relatively "large" error. In this particular case, the computation with few digits happens to make another "large" error of opposite sign, compensating the initial one; the computation with more digits makes a smaller second error, but the first one remains.
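You can quantify that compensation with extended precision; here is a quick Julia check (the long constant is the pow-path result quoted above):
x = 0.3                   # really the nearest double, ≈ 0.2999999999999999888...
exact = big(x)^3          # effectively exact cube of that double
mul = x * x * x           # prints as 0.027
pw  = 0.0269999999999999962252417162744677625597   # the pow-path value
abs(big(mul) - exact)     # ≈ 2.7e-18: the multiply chain lands farther from the exact cube
abs(big(pw) - exact)      # ≈ 7.8e-19: pow lands closer to it, yet mul is closer to the
                          # true 0.3^3 = 0.027 because its rounding errors cancel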
Here's a little test program that follows what the system pow() from Source/Intel/xmm_power.c, in Apple's Libm-2026, does in this case:
#include <stdio.h>
int main() {
    // basically lines 1130-1157 of xmm_power.c, modified a bit to remove
    // irrelevant things
    double x = .3;
    int i = 3;

    // calculate ix = x**i by binary exponentiation
    long double ix = 1.0, lx = (long double) x;
    int mask = 1;

    // for each of the bits set in i, multiply ix by x**(2**bit_position)
    while (i != 0)
    {
        if (i & mask)
        {
            ix *= lx;
            i -= mask;
        }
        mask += mask;
        lx *= lx; // in double this might overflow spuriously, but not in long double
    }
    printf("%.40f\n", (double) ix);
    return 0;
}
This prints out 0.0269999999999999962252417162744677625597, which agrees with the results I get for .3 ^ 3 in Matlab and .3 ** 3 in Python (and we know the latter just calls this code). By contrast, .3 * .3 * .3 gives me 0.0269999999999999996946886682280819513835, which is the same thing you get if you ask to print 0.027 to that many decimal places, and so is presumably the closest double.
So there's the algorithm. We could trace exactly what value is produced at each step, but it's not too surprising that a different algorithm rounds to a very slightly smaller number.
Read Goldberg's "What Every Computer Scientist Should Know About Floating-Point Arithmetic" (this is a reprint by Oracle). Do understand it. Floating point numbers are not the real numbers of calculus. Sorry, no TL;DR version available.

Math library in iPhone SDK?

I'm trying to calculate this equation for a small iPhone app:
x = 120 * num^(-1.123) * tr^(-0.206) * te * fe
That is, x equals 120 times num raised to the power of -1.123, times tr raised to the power of -0.206, times te, times fe, where num, tr, te and fe are known numbers entered by the user.
How do I do that?
I'm stuck on the negative decimal powers of num and tr.
Any help appreciated...
Foundation includes the standard math functions, so all you have to do is use one of the pow() functions: pow() for doubles, powf() for floats, or powl() for long doubles. Here's an example:
double num = 2.0;
double tr = 3.0;
double te = 4.0;
double fe = 5.0;
double x = 120.0 * pow(num, -1.123) * pow(tr, -0.206) * te * fe;
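As a quick sanity check of the formula with the sample values above (num=2, tr=3, te=4, fe=5 are just placeholders), the same expression in Julia gives:
x = 120 * 2.0^-1.123 * 3.0^-0.206 * 4.0 * 5.0   # ≈ 879.1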
The iPhone SDK contains the standard C library math functions (math.h) that you can use for this kind of math. Searching for math.h in your Xcode documentation will show all the related functions (like pow, for raising x to the power of y).