I'm perplexed by scipy.stats.hypergeom. Why doesn't the first expression work?
>>> from scipy.stats import hypergeom
>>> hypergeom.pmf(14,6,3,24)
nan
>>> hypergeom.pmf(10,10,10,10)
1.0
The docs say:
pmf(k, M, n, N) = choose(n, k) * choose(M - n, N - k) / choose(M, N),
Substituting (k, M, n, N) = (14, 6, 3, 24) gives choose(3, 14) * choose(3, 10) / choose(6, 24).
That value at the bottom should compute, unless I'm looking at choose wrong.
>>> hypergeom.pmf(14,6,3,24)
nan
You can't draw N = 24 objects from a collection whose total size is only M = 6. The PMF is undefined for those parameters, so the function returns nan.
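For a quick contrast, here is a minimal sketch with a consistent parameter set (output values approximate):
from scipy.stats import hypergeom

# M = 24 objects in total, n = 3 of them are "successes", N = 6 are drawn:
print(hypergeom.pmf(2, 24, 3, 6))    # probability of k = 2 successes, roughly 0.13

# N = 24 draws from a population of only M = 6 objects is impossible, hence nan:
print(hypergeom.pmf(14, 6, 3, 24))   # nan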
I'm trying to replicate a calculation from Matlab in Julia, but I'm having trouble converting a single-column complex array into a sparse diagonal matrix for matrix multiplication.
Here is the Matlab code I am trying to replicate in Julia:
x*diag(sparse(y))
where x is of size 60×600000 and of type double, and y is of size 600000×1 and of type complex double.
You can use Diagonal for that:
using LinearAlgebra
x=rand(6,60)
y=rand(Complex{Float64},60,1)
x*Diagonal(view(y,:))
I have used view(y, :) to convert y to a Vector (here it acts as a dimension-dropping operator); you can also use the shorter form vec(y) instead. Depending on what you want to do, you might say explicitly that you want the first column with view(y, :, 1).
Note that Diagonal is a compact, effectively sparse representation that stores only the diagonal:
julia> Diagonal(1:4)
4×4 Diagonal{Int64,UnitRange{Int64}}:
1 ⋅ ⋅ ⋅
⋅ 2 ⋅ ⋅
⋅ ⋅ 3 ⋅
⋅ ⋅ ⋅ 4
Another option that might cover more use cases is BandedMatrices:
using BandedMatrices
x*BandedMatrix(0=>view(y,:))
Note that BandedMatrix takes a set of band => values pairs, where band 0 is the diagonal.
I guess you don't mean it this way, but one can also read the question as: y is a sparse vector in the Julia sense, and you want to construct a sparse diagonal matrix out of it. In that case you can do the following:
julia> using SparseArrays
julia> y = sprand(10, 0.2)
10-element SparseVector{Float64,Int64} with 2 stored entries:
[4 ] = 0.389682
[5 ] = 0.232429
julia> I, V = findnz(y)
([4, 5], [0.3896822408908356, 0.2324294021548845])
julia> sparse(I, I, V, length(y), length(y))  # pass the dims so the result stays 10×10
10×10 SparseMatrixCSC{Float64,Int64} with 2 stored entries:
[4, 4] = 0.389682
[5, 5] = 0.232429
Unfortunately, spdiagm does not preserve structural zeros for a sparse input:
julia> spdiagm(0 => y)
10×10 SparseMatrixCSC{Float64,Int64} with 10 stored entries:
[1 , 1] = 0.0
[2 , 2] = 0.0
[3 , 3] = 0.0
[4 , 4] = 0.389682
[5 , 5] = 0.232429
[6 , 6] = 0.0
[7 , 7] = 0.0
[8 , 8] = 0.0
[9 , 9] = 0.0
[10, 10] = 0.0
I don't know whether this is intentional, but I filed an issue about this behaviour.
I need some help: I have a matrix representing points on a grid, and given an element I would like to find the indices of its nearest neighbours, keeping in mind that I have periodic boundary conditions. So if I have the element A(1,1), its nearest neighbours are
A(1,N)
A(2,1)
A(1,2)
A(N, 1)
where A is my matrix and N is its dimension, and I need code which will find the indices of the nearest neighbours of a given element.
Thanks in advance.
Here's my interpretation of your problem:
Given some periodic matrix A:
>> A = magic(4)
A =
16 2 3 13
5 11 10 8
9 7 6 12
4 14 15 1
and some element x (for example, 1), then find the (i, j) indices of the 4 neighbours of x. In this case, the indices (3, 4), (4, 3), (4, 1), (1, 4) correspond to the values 12, 15, 4, 13.
Since I don't know your use case, I don't know in what format the indices are most convenient for you. But as an example, we can write a function neighbors which returns a struct with the 4 indices of the element x.
function out = neighbors(A, x)
    [m, n] = size(A);
    [i, j] = find(A == x);
    % wrap indices periodically into the ranges 1..m (rows) and 1..n (columns)
    mod2 = @(x) mod(x-1, [m, n])+1;
    out.down  = mod2([i+1, j  ]);
    out.up    = mod2([i-1, j  ]);
    out.right = mod2([i  , j+1]);
    out.left  = mod2([i  , j-1]);
end
We can then run the function as follows.
A = magic(4);
out = neighbors(A, 1);
A(out.left(1), out.left(2)); % this returns 15
A(out.right(1), out.right(2)); % this returns 4
A(out.up(1), out.up(2)); % this returns 12
A(out.down(1), out.down(2)); % this returns 13
let num = 32.0
Double(num).remainder(dividingBy: 12.0)
I'm getting -4.0 instead of 8.0; it looks like it subtracted 12.0 from 8.0. How do I fix this?
Please read the documentation carefully:
For two finite values x and y, the remainder r of dividing x by y satisfies x == y * q + r, where q is the integer nearest to x / y. If x / y is exactly halfway between two integers, q is chosen to be even. Note that q is not x / y computed in floating-point arithmetic, and that q may not be representable in any available integer type.
(emphasis mine)
You want to use truncatingRemainder(dividingBy:) instead:
let num = 32.0
let value = Double(num)
.truncatingRemainder(dividingBy: 12)
print(value) // 8
remainder(dividingBy:) is not the modulus function.
In real division, 32.0 / 12.0 = 2.666..., and remainder(dividingBy:) defines q as the integer nearest to that result: in this case 3. So we write:
32.0 = q * 12.0 + r
with q being an integer and r a Double:
32.0 = 3 * 12.0 + r ⇒ r = -4.0
The remainder r, as defined by this function, is -4.0.
I tried the following:
from scipy import interpolate
spline = interpolate.InterpolatedUnivariateSpline(X, Y, k=3)
coefs = spline.get_coeffs()
With five values in each of X and Y, I ended up with coefs also having five values. Given that five data points implies four spline sections, and that a cubic polynomial has four coefficients, I would have expected to get four times four = 16 coefficients. Does anyone know how to interpret the values that are returned by the get_coeffs method? Is there any place where this is documented?
These are not the coefficients of x, x**2, and so forth: such monomials are ill-suited to representing splines. Rather, they are coefficients of B-splines which are computed for the specific grid on which interpolation is done. The number of B-splines involved is equal to the number of data points, and so is the number of coefficients. As an example, suppose we want to interpolate these data:
xv = [0, 1, 2, 3, 4, 5]
yv = [3, 6, 5, 7, 9, 1]
Begin with the simpler case of degree k=1 (a piecewise linear spline). Then the B-splines are "triangular hat" functions, one per grid point, so there are 6 of them. Each is equal to 1 at "its" grid point and 0 at all other grid points. This makes it really easy to write the interpolating spline: it is y[0]*b[0] + ... + y[5]*b[5]. And indeed, get_coeffs shows the coefficients are the y-values themselves.
from scipy.interpolate import InterpolatedUnivariateSpline
InterpolatedUnivariateSpline(xv, yv, k=1).get_coeffs()  # [ 3., 6., 5., 7., 9., 1.]
Cubic splines
Now it gets hairy, because we need "hats" that are smooth rather than pointy like those above. The smoothness requirement forces them to be wider, so each B-spline has nonzero values on several grid points. (Technicality: a cubic B-spline has nonzero values at up to 3 knots, but here 1 and 4, despite being grid points, are not knots because of the so-called "not a knot" condition. Never mind this.)
To get the B-splines for our grid, I used the older functions splrep and splev of scipy.interpolate, which call the same fitpack routines under the hood. The coefficient vector is the second entry of the tuple tck; I modify it to have a single 1 and the rest 0, thus creating one basis spline (B-spline) at a time:
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import splrep, splev

k = 3
tck = splrep(xv, yv, s=0, k=k)
xx = np.linspace(min(xv), max(xv), 500)
bsplines = []
for j in range(len(xv)):
    tck_mod = (tck[0], np.arange(len(xv)+2*k-2) == j, k)
    bsplines.append(splev(xx, tck_mod))
    plt.plot(xx, bsplines[-1])
Now that we have a list bsplines, we can use the coefficients returned by get_coeffs to put them together ourselves into an interpolating spline:
coeffs = InterpolatedUnivariateSpline(xv, yv, k=3).get_coeffs()
interp_spline = sum([coeff*bspline for coeff, bspline in zip(coeffs, bsplines)])
plt.plot(xx, interp_spline)
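As a quick sanity check (a sketch reusing the variables defined above), the reconstruction should agree with evaluating the fitted spline directly, up to floating-point error:
spline = InterpolatedUnivariateSpline(xv, yv, k=3)
print(np.allclose(interp_spline, spline(xx)))  # expected: True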
If you want a formula for the pieces of these B-splines, the Cox-de Boor recursion can help, but the pieces are a chore to compute by hand.
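If you only need to evaluate the basis numerically rather than derive the closed-form pieces, a minimal sketch of the recursion is below; bspline_basis_eval is a hypothetical helper (not part of scipy), and it uses the same padded knot vector that SymPy needs (see below):
def bspline_basis_eval(i, k, t, x):
    # Cox-de Boor recursion: the degree-0 spline is 1 on [t[i], t[i+1]) and 0 elsewhere;
    # each higher degree blends two splines of the next lower degree.
    # Terms whose knot span has zero length are treated as 0.
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0
    if t[i + k] > t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis_eval(i, k - 1, t, x)
    right = 0.0
    if t[i + k + 1] > t[i + 1]:
        right = (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * bspline_basis_eval(i + 1, k - 1, t, x)
    return left + right

# The 6 cubic basis functions of our grid, evaluated at x = 2.5
# (the right endpoint x = 5 would need special-casing because of the half-open intervals):
knots = [0, 0, 0, 0, 2, 3, 5, 5, 5, 5]
print([bspline_basis_eval(j, 3, knots, 2.5) for j in range(6)])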
SymPy can give formulas for B-splines, but there is a little twist. One should pass in a padded set of knots, by repeating the end values like
[0, 0, 0, 0, 2, 3, 5, 5, 5, 5]
This is because at 0 and 5 all four coefficients change values, while at 1 and 4 none of them do, so they are omitted ("not a knot"). (Also, the current version of SymPy (1.1.1) has an issue with repeated knots, but this will be fixed in the next version.)
from sympy import symbols, bspline_basis_set, plot
x = symbols('x')
xv_padded = [0, 0, 0, 0, 2, 3, 5, 5, 5, 5]
bs = bspline_basis_set(3, xv_padded, x)
Now bs is an array of scary-looking piecewise formulas:
[Piecewise((-x**3/8 + 3*x**2/4 - 3*x/2 + 1, (x >= 0) & (x <= 2)), (0, True)),
Piecewise((19*x**3/72 - 5*x**2/4 + 3*x/2, (x >= 0) & (x <= 2)), (-x**3/9 + x**2 - 3*x + 3, (x >= 2) & (x <= 3)), (0, True)),
Piecewise((-31*x**3/180 + x**2/2, (x >= 0) & (x <= 2)), (11*x**3/45 - 2*x**2 + 5*x - 10/3, (x >= 2) & (x <= 3)), (-x**3/30 + x**2/2 - 5*x/2 + 25/6, (x >= 3) & (x <= 5)), (0, True)),
Piecewise((x**3/30, (x >= 0) & (x <= 2)), (-11*x**3/45 + 5*x**2/3 - 10*x/3 + 20/9, (x >= 2) & (x <= 3)), (31*x**3/180 - 25*x**2/12 + 95*x/12 - 325/36, (x >= 3) & (x <= 5)), (0, True)),
Piecewise((x**3/9 - 2*x**2/3 + 4*x/3 - 8/9, (x >= 2) & (x <= 3)), (-19*x**3/72 + 65*x**2/24 - 211*x/24 + 665/72, (x >= 3) & (x <= 5)), (0, True)),
Piecewise((x**3/8 - 9*x**2/8 + 27*x/8 - 27/8, (x >= 3) & (x <= 5)), (0, True))]
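If you want to evaluate or plot these symbolic pieces numerically, one option (a sketch reusing bs and x from the snippet above) is to lambdify them:
import numpy as np
from sympy import lambdify

xs = np.linspace(0, 5, 500)
basis_funcs = [lambdify(x, b, "numpy") for b in bs]  # one numeric callable per B-spline
samples = [f(xs) for f in basis_funcs]               # each entry: one basis sampled on xs
These should coincide, up to floating-point error, with the fitpack-derived basis functions computed earlier.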
Suppose I have a quantization function which quantize a 8bit gray scale image :
function mse = uni_quan(I, b)
    Q = I / 2 ^ (8 - b);
    Q = uint8(Q);
    Q = Q * 2 ^ (8 - b);
    mse = sum(sum((I - Q) .^ 2, 1), 2) / numel(I);
end
This function performs uniform quantization of the image I, converting it into a b-bit image and then scaling it back to the 0-255 range. Now I want to calculate the MSE (mean squared error) of this process.
But the result for
mse = sum(sum((I - Q) .^ 2, 1), 2) / numel(I);
and
mse = sum(sum((Q - I) .^ 2, 1), 2) / numel(I);
is different. Can anyone please point out what the problem is?
Thanks
The problem is the type of the matrices: you are subtracting two unsigned (uint8) matrices. Unsigned subtraction saturates at 0, so wherever Q - I < 0 the result is 0, which is why Q - I differs from I - Q.
In order to stay in uint8, you can compute the MSE in two steps:
%Compute the absolute difference, according to the sign
difference = Q-I;
neg_idx = find(I>Q);
difference(neg_idx) = I(neg_idx)-Q(neg_idx);
%Compute the MSE
mse = sum(sum((difference) .^ 2, 1), 2) / numel(I);