DCT: Julia equivalent of scipy.fftpack.dct

I wonder how I could compute the following in Julia:
import scipy.fftpack
scipy.fftpack.dct([1,2,3], axis=0)
array([ 1.20000000e+01, -3.46410162e+00, -4.44089210e-16])
I have seen that FFTW.jl seems to have the equivalent of
import scipy.fftpack
scipy.fftpack.dct([1,2,3], norm='ortho')
array([ 3.46410162, -1.41421356, 0. ])
which in Julia's FFTW would be
using FFTW
dct([1,2,3])
3-element Vector{Float64}:
3.4641016151377544
-1.414213562373095
9.064933036736789e-17

I don't think there is a built-in equivalent for that, but you can certainly build your own normalization:
using FFTW

# Wrapper mimicking scipy.fftpack.dct's conventions for 1-D input:
#   norm = nothing  -> scipy's default, unnormalized DCT-II
#   norm = "ortho"  -> orthonormal DCT-II, which is what FFTW's dct already returns
function scipy_dct(x, dims = 1; norm = nothing)
    res = dct(x, dims)
    if norm != "ortho"
        # undo FFTW's orthonormal scaling (N = size(x, dims)):
        # the first coefficient by 2*sqrt(N), the rest by sqrt(2N)
        res[1] *= 2 * sqrt(size(x, dims))
        res[2:end] *= sqrt(2 * size(x, dims))
    end
    res
end
julia> scipy_dct([1, 2, 3])
3-element Vector{Float64}:
 11.999999999999998
 -3.464101615137754
  2.2204460492503128e-16
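As a quick sanity check (assuming the scipy_dct wrapper above), the two normalizations are related by exactly those scale factors:
julia> x = [1, 2, 3];

julia> scipy_dct(x, norm = "ortho") ≈ dct(x)
true

julia> scipy_dct(x) ≈ dct(x) .* [2 * sqrt(3), sqrt(6), sqrt(6)]
true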

Related

inv(A)*B vs A\B - Why this weird behavior in MATLAB?

Let's create two random matrices:
A = randn(2)
B = randn(2)
Both inv(A)*B and A\B give the same result
inv(A)*B
A\B
ans =
0.6175 -2.1988
-0.7522 5.0343
ans =
0.6175 -2.1988
-0.7522 5.0343
unless I multiply by some factor. Why is this?
.5*A\B
.5*inv(A)*B
ans =
1.2349 -4.3977
-1.5045 10.0685
ans =
0.3087 -1.0994
-0.3761 2.5171
This is very annoying, since MATLAB always nudges me to use A\B instead of inv(A)*B, and it took me years to figure out why my code was not working.
When A is a non-singular matrix, inv(A)*B and A\B give the same result.
Your calculation parses as follows: .5*A\B is (0.5*A)\B, whereas .5*inv(A)*B is 0.5*(inv(A)*B). Since (0.5*A)\B equals 2*(A\B) while 0.5*(inv(A)*B) equals 0.5*(A\B), the two results differ by a factor of 4.
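With explicit parentheses the two forms agree again (a quick check, reusing A and B from above):
% * and \ share the same precedence level and group left to right,
% so .5*A\B parses as (.5*A)\B
(.5*A)\B      % what .5*A\B actually computes: 2*(A\B)
.5*(A\B)      % what was probably intended
.5*(inv(A)*B) % same value as the previous line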

Writing My Own fft2() Function in MATLAB

I want to write my own 2-dimensional DFT function with fewer loops.
What I am trying to implement is the discrete Fourier transform
F(u, v) = sum_{x=0}^{M-1} sum_{y=0}^{N-1} f(x, y) * exp(-2*pi*1i*(u*x/M + v*y/N)).
Using the separability of the transform (really of the exponential kernel), we can write this as a product of two 1-dimensional DFTs. We can precompute the exponential terms for the rows of the transform (the matrix wM below) and for its columns (the matrix wN below); the double summation then becomes the matrix product F = wM * original_matrix * wN.
Here is the code I wrote:
f = imread('cameraman.tif');
[M, N, ~] = size(f);
wM = zeros(M, M);
wN = zeros(N, N);
for u = 0 : (M - 1)
for x = 0 : (M - 1)
wM(u+1, x+1) = exp(-2 * pi * 1i / M * x * u);
end
end
for v = 0 : (N - 1)
for y = 0 : (N - 1)
wN(y+1, v+1) = exp(-2 * pi * 1i / N * y * v);
end
end
F = wM * im2double(f) * wN;
The first thing is that I don't want to use two loops running MxM and NxN times; with a huge matrix (or image) that would be a problem. Is there any way to make this code faster (for example, by eliminating the loops)?
The second thing is displaying the Fourier transform result. I use the code below to display the transform:
% // "log" method
fl = log(1 + abs(F));
fm = max(fl(:));
imshow(im2uint8(fl / fm))
and
% // "abs" method
fa = abs(F);
fm = max(fa(:));
imshow(fa / fm)
When I use the "abs" method, I see only a black figure, nothing else. What do you think is wrong with the "abs" method?
And the last thing: when I compare the transform result of my own function with MATLAB's fft2(), mine displays a darker figure than MATLAB's. What am I missing here? An implementation mistake?
The transform result of my own function: [image]
The transform result of MATLAB's fft2() function: [image]
I am happy you solved your problem, but unfortunately your answer is not completely right. It does the job, but as I commented, im2double normalizes everything to 1, which is why you see the scaled result. If you are looking for performance, what you want is not im2double followed by a multiplication by 255, but a direct cast with double().
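A minimal sketch of that change, reusing f, wM, and wN from the question:
% cast to double without rescaling to [0, 1]; the original 0..255
% intensities are kept, so no multiplication by 255 is needed
F = wM * double(f) * wN;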
You can eliminate loops by using meshgrid.
For example:
M = 1024;
tic
[ mX, mY ] = meshgrid( 0 : M - 1, 0 : M - 1 );
wM1 = exp( -2 * pi * 1i / M .* mX .* mY );
toc
tic
wM2 = zeros( M, M ); % preallocate; growing wM2 inside the loop is slower still
for u = 0 : (M - 1)
for x = 0 : (M - 1)
wM2( u + 1, x + 1 ) = exp( -2 * pi * 1i / M * x * u );
end
end
toc
all( wM1( : ) == wM2( : ) )
The timing on my system was:
Elapsed time is 0.130923 seconds.
Elapsed time is 0.493163 seconds.
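Both kernels can be built the same way, after which the whole transform reduces to two matrix products (a sketch, with f, M, and N as in the question and the double() cast suggested in the other answer):
[ mX, mU ] = meshgrid( 0 : M - 1, 0 : M - 1 );
wM = exp( -2 * pi * 1i / M .* mX .* mU ); % row kernel, M x M
[ mV, mY ] = meshgrid( 0 : N - 1, 0 : N - 1 );
wN = exp( -2 * pi * 1i / N .* mY .* mV ); % column kernel, N x N
F = wM * double( f ) * wN; % separable 2-D DFT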

Is there a cleaner way to generate a sum of sinusoids?

I have written a simple MATLAB/Octave function to create a sum of sinusoids with independent amplitude, frequency, and phase for each component. Is there a cleaner way to write this?
## Create a sum of cosines with independent amplitude, frequency and
## phase for each component:
## samples(t) = SUM(A[i] * cos(2 * pi * F[i] * t + Phi[i]))
## Return samples as a column vector.
##
function signal = sum_of_cosines(A = [1.0],
F = [440],
Phi = [0.0],
duration = 1.0,
sampling_rate = 44100)
t = (0:1/sampling_rate:(duration-1/sampling_rate));
n = length(t);
signal = sum(repmat(A, n, 1) .* cos(2*pi*t' * F + repmat(Phi, n, 1)), 2);
endfunction
In particular, the calls to repmat() seem a bit clunky -- is there some nifty vectorization technique waiting for me to learn?
Is this the same?
signal = cos(2*pi*t' * F + repmat(Phi, n, 1)) * A';
And then maybe (note the plain transpose .' rather than ', so the phases are not conjugated away):
signal = real(exp(j*2*pi*t'*F) * (A .* exp(j*Phi)).');
If you are memory constrained, this should work nicely (here n = duration * sampling_rate is the number of samples, as in the question, and .' again keeps the phasor's phases from being conjugated):
e_jtheta = exp(j * 2 * pi * F / sampling_rate);
phasor = A .* exp(j * Phi);
samples = zeros(n, 1);
for k = 1:n
samples(k) = real((e_jtheta .^ (k - 1)) * phasor.');
end
For row vectors A, F, and Phi, you can use bsxfun to get rid of the repmat, but it is arguably uglier looking:
signal = cos(bsxfun(@plus, 2*pi*t' * F, Phi)) * A';
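In recent MATLAB (R2016b and newer) and in Octave, implicit broadcasting does the same expansion without bsxfun:
% Phi (1 x m) expands against the n x m matrix 2*pi*t'*F
signal = cos(2*pi*t' * F + Phi) * A';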
Heh. When I call any of the above vectorized versions with length(A) = 10000, Octave fills up VM and grinds to a halt (or at least a slow crawl -- I haven't had the patience to wait for it to complete); presumably the n-by-length(A) intermediate matrix is what eats the memory.
As a result, I've fallen back to the straightforward iterative version:
function signal = sum_of_cosines(A = [1.0],
F = [440],
Phi = [0.0],
duration = 1.0,
sampling_rate = 44100)
t = (0:1/sampling_rate:(duration-1/sampling_rate));
n = length(t);
signal = zeros(n, 1);
for i = 1:length(A)
signal += A(i) * cos(2*pi*t'*F(i) + Phi(i));
endfor
endfunction
This version works plenty fast and teaches me a lesson about trying to be 'elegant'.
P.S.: This doesn't diminish my appreciation for the answers given by @BenVoigt and @chappjc -- I've learned something useful from both!

Octave backpropagation implementation issues

I wrote code to implement steepest-descent backpropagation, and I am having issues with it. I am using the Machine CPU dataset and have scaled the inputs and outputs into the range [0, 1].
The MATLAB/Octave code is as follows:
steepest descent backpropagation
%SGD = Steepest Gradient Descent
function weights = nnSGDTrain (X, y, nhid_units, gamma, max_epoch, X_test, y_test)
iput_units = columns (X);
oput_units = columns (y);
n = rows (X);
W2 = rand (nhid_units + 1, oput_units);
W1 = rand (iput_units + 1, nhid_units);
train_rmse = zeros (1, max_epoch);
test_rmse = zeros (1, max_epoch);
for (epoch = 1:max_epoch)
delW2 = zeros (nhid_units + 1, oput_units)';
delW1 = zeros (iput_units + 1, nhid_units)';
for (i = 1:rows(X))
o1 = sigmoid ([X(i,:), 1] * W1); %1xn+1 * n+1xk = 1xk
o2 = sigmoid ([o1, 1] * W2); %1xk+1 * k+1xm = 1xm
D2 = o2 .* (1 - o2);
D1 = o1 .* (1 - o1);
e = (y_test(i,:) - o2)';
delta2 = diag (D2) * e; %mxm * mx1 = mx1
delta1 = diag (D1) * W2(1:(end-1),:) * delta2; %kxm * mx1 = kx1
delW2 = delW2 + (delta2 * [o1 1]); %mx1 * 1xk+1 = mxk+1 %already transposed
delW1 = delW1 + (delta1 * [X(i, :) 1]); %kx1 * 1xn+1 = k*n+1 %already transposed
end
delW2 = gamma .* delW2 ./ n;
delW1 = gamma .* delW1 ./ n;
W2 = W2 + delW2';
W1 = W1 + delW1';
[dummy train_rmse(epoch)] = nnPredict (X, y, nhid_units, [W1(:);W2(:)]);
[dummy test_rmse(epoch)] = nnPredict (X_test, y_test, nhid_units, [W1(:);W2(:)]);
printf ('Epoch: %d\tTrain Error: %f\tTest Error: %f\n', epoch, train_rmse(epoch), test_rmse(epoch));
fflush (stdout);
end
weights = [W1(:);W2(:)];
% plot (1:max_epoch, test_rmse, 1);
% hold on;
plot (1:max_epoch, train_rmse(1:end), 2);
% hold off;
end
predict
%Now SFNN Only
function [o1 rmse] = nnPredict (X, y, nhid_units, weights)
iput_units = columns (X);
oput_units = columns (y);
n = rows (X);
W1 = reshape (weights(1:((iput_units + 1) * nhid_units),1), iput_units + 1, nhid_units);
W2 = reshape (weights((((iput_units + 1) * nhid_units) + 1):end,1), nhid_units + 1, oput_units);
o1 = sigmoid ([X ones(n,1)] * W1); %nxiput_units+1 * iput_units+1xnhid_units = nxnhid_units
o2 = sigmoid ([o1 ones(n,1)] * W2); %nxnhid_units+1 * nhid_units+1xoput_units = nxoput_units
rmse = RMSE (y, o2);
end
RMSE function
function rmse = RMSE (a1, a2)
rmse = sqrt (sum (sum ((a1 - a2).^2))/rows(a1));
end
I have also trained the same dataset with mlp from the R RSNNS package, and the RMSE for the train set (first 100 examples) is around 0.03. But in my implementation I cannot achieve an RMSE lower than 0.14. Sometimes the error grows for higher learning rates, and no learning rate gets me below 0.14. A paper I referred to also reports a train-set RMSE of around 0.03.
I wanted to know where the problem in the code is. I have followed Raul Rojas' book and confirmed that things are okay.
In the backpropagation code, the line
e = (y_test(i,:) - o2)';
is not correct, because o2 is the output for an example from the train set while the difference is taken against an example from the test set, y_test. The line should have been:
e = (y(i,:) - o2)';
which correctly computes the difference between the output predicted by the current model and the target output of the corresponding training example.
It took me 3 days to find this freaking bug, which had stopped me from making further modifications.

In Scipy LeastSq - How to add the penalty term

If the objective function is a sum-of-squares error with an added quadratic penalty term on the coefficients (ridge-style regularization), how do I code it in Python?
I've already coded the normal one:
import numpy as np
import scipy as sp
from scipy.optimize import leastsq
import pylab as pl
m = 9 # number of polynomial coefficients (the fitted polynomial has degree m - 1)
def real_func(x):
return np.sin(2*np.pi*x) #sin(2 pi x)
def fake_func(p, x):
f = np.poly1d(p) #polynomial
return f(x)
def residuals(p, y, x):
return y - fake_func(p, x)
# choose 9 evenly spaced points in [0, 1] as x
x = np.linspace(0, 1, 9)
x_show = np.linspace(0, 1, 1000)
y0 = real_func(x)
# add Gaussian noise
y1 = [np.random.normal(0, 0.1) + y for y in y0]
p0 = np.random.randn(m)
plsq = leastsq(residuals, p0, args=(y1, x))
print('Fitting Parameters:', plsq[0])
pl.plot(x_show, real_func(x_show), label='real')
pl.plot(x_show, fake_func(plsq[0], x_show), label='fitted curve')
pl.plot(x, y1, 'bo', label='with noise')
pl.legend()
pl.show()
Since the penalty term is also just quadratic, you can stack it together with the squares of the error: keep the data residuals as they are and add one row per coefficient, weighted by sqrt(lambda), so that the squared rows contribute lambda times the squared coefficients.
scipy.optimize.curve_fit does weighted least squares, if you don't want to code it yourself.
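A minimal sketch of that stacking approach, reusing fake_func, p0, y1, and x from the question (lam is a hypothetical regularization weight): leastsq minimizes the sum of the squared entries of the returned vector, so appending sqrt(lam) * p to the residuals adds lam * sum(p**2) to the objective.
def residuals_penalized(p, y, x, lam):
    # data residuals, then sqrt(lam) * p: the summed squares become
    # sum((y - f(x))**2) + lam * sum(p**2)
    return np.concatenate([y - fake_func(p, x), np.sqrt(lam) * p])

plsq_reg = leastsq(residuals_penalized, p0, args=(y1, x, 1e-4))
print('Regularized parameters:', plsq_reg[0])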