Alternative method for calculating the mean of a 4D matrix - MATLAB

I need some advice on some AAM (Active Appearance Model) example code that I am trying to understand. The run could not complete because MATLAB reported an out-of-memory error:
Error using zeros
Out of memory. Type HELP MEMORY for your options.
The code that causes the error is:
Error in AAM_MakeSearchModel2D (line 6)
drdp=zeros(size(ShapeAppearanceData.Evectors,2)+4,6,length(TrainingData),length(AppearanceData.g_mean));
With the actual sizes filled in, the allocation of drdp is:
drdp=zeros(13,6,10,468249);
Since the 4th dimension is large, it is understandable that the 32-bit MATLAB I am using runs out of memory. The output the code eventually produces is 2D. Here is the code that later uses drdp:
drdpt=squeeze(mean(mean(drdp,3),2));
R=pinv(drdpt)';
The question I want to ask is: is it possible to split the 4D matrix into smaller ones (e.g. 2D or 3D) and do ordinary addition and division to get the mean? If yes, how would one do it?
Edited 17/12/2013:
I cannot use sparse, since the 4D drdp is an initialization for a further calculation that stores all weighted errors of the model versus the real intensities into drdp. I have copied the part of the AAM function that calculates drdp:
function R=AAM_MakeSearchModel2D(ShapeAppearanceData,ShapeData,AppearanceData,TrainingData,options)
% Structure which will contain all weighted errors of model versus real
% intensities, by several offsets of the parameters
drdp=zeros(size(ShapeAppearanceData.Evectors,2)+4,6,length(TrainingData),length(AppearanceData.g_mean));
% We use the training-data images to train the model, because we want
% the background information to be included
% Loop through all training images
for i=1:length(TrainingData);
% Loop through all model parameters, both the PCA parameters and the pose
% parameters
for j = 1:size(ShapeAppearanceData.Evectors,2)+4
if(j<=size(ShapeAppearanceData.Evectors,2))
% Model parameters, offsets
de = [-0.5 -0.3 -0.1 0.1 0.3 0.5];
% First we calculate the real ShapeAppearance parameters of the
% training data set
c = ShapeAppearanceData.Evectors'*(ShapeAppearanceData.b(:,i) -ShapeAppearanceData.b_mean);
% Standard deviation from the eigenvalue
c_std = sqrt(ShapeAppearanceData.Evalues(j));
for k=1:length(de)
% Offset the ShapeAppearance parameters with a certain
% value times the std of the eigenvector
c_offset=c;
c_offset(j)=c_offset(j)+c_std *de(k);
% Transform back from ShapeAppearance parameters to Shape parameters
b_offset = ShapeAppearanceData.b_mean + ShapeAppearanceData.Evectors*c_offset;
b1_offset = b_offset(1:(length(ShapeAppearanceData.Ws)));
b1_offset= inv(ShapeAppearanceData.Ws)*b1_offset;
x = ShapeData.x_mean + ShapeData.Evectors*b1_offset;
pos(:,1)=x(1:end/2);
pos(:,2)=x(end/2+1:end);
% Transform the Shape back to real image coordinates
pos=AAM_align_data_inverse2D(pos,TrainingData(i).tform);
% Get the intensities in the real image. Use those
% intensities to get ShapeAppearance parameters, which
% are then used to get model intensities
[g, g_offset]=RealAndModel(TrainingData,i,pos, AppearanceData,ShapeAppearanceData,options,ShapeData);
% A weighted sum of differences between model and real
% intensities gives the "intensity / offset" ratio
w = exp ((-de(k)^2) / (2*c_std^2))/de(k);
drdp(j,k,i,:)=(g-g_offset)*w;
end
else
% Pose parameters offsets
j2=j-size(ShapeAppearanceData.Evectors,2);
switch(j2)
case 1 % Translation x
de = [-2 -1.2 -0.4 0.4 1.2 2]/2;
case 2 % Translation y
de = [-2 -1.2 -0.4 0.4 1.2 2]/2;
case 3 % Scaling & Rotation Sx
de = [-0.2 -.12 -0.04 0.04 0.12 0.2]/2;
case 4 % Scaling & Rotation Sy
de = [-0.2 -.12 -0.04 0.04 0.12 0.2]/2;
end
for k=1:length(de)
tform=TrainingData(i).tform;
switch(j2)
case 1 % Translation x
tform.offsetv(1)=tform.offsetv(1)+de(k);
case 2 % Translation y
tform.offsetv(2)=tform.offsetv(2)+de(k);
case 3 % Scaling & Rotation Sx
tform.offsetsx=tform.offsetsx+de(k);
case 4 % Scaling & Rotation Sy
tform.offsetsy=tform.offsetsy+de(k);
end
% From Shape to real image coordinates, with a certain
% pose offset
pos=AAM_align_data_inverse2D(TrainingData(i).CVertices, tform);
% Get the intensities in the real image. Use those
% intensities to get ShapeAppearance parameters, which
% are then used to get model intensities
[g, g_offset]=RealAndModel(TrainingData,i,pos, AppearanceData,ShapeAppearanceData,options,ShapeData);
% A weighted sum of differences between model and real
% intensities gives the "intensity / offset" ratio
w =exp ((-de(k)^2) / (2*2^2))/de(k);
drdp(j,k,i,:)=(g-g_offset)*w;
end
end
end
end
% Combine the data to the intensity/parameter matrix,
% using a pseudo inverse
% for i=1:length(TrainingData);
% drdpt=squeeze(mean(drdp(:,:,i,:),2));
% R(:,:,i) = (drdpt * drdpt')\drdpt;
% end
% % Combine the data intensity/parameter matrix of all training datasets.
% %
% % In case of only a few images, it will be better to use a weighted mean
% % instead of the normal mean, depending on the probability of the trainingset
% R=mean(R,3);
drdpt=squeeze(mean(mean(drdp,3),2));
R=pinv(drdpt)';
%R = (drdpt * drdpt')\drdpt;
As you can see in the final lines of the function, the 4D drdp is squeezed and processed again to become a 2D matrix stored in R. Because of the 'Out of Memory' problem, the function cannot even initialize drdp, since it uses too much space (drdp=zeros(13,6,10,468249)). Can I store the data in a 2D or 3D form (splitting the drdp part) and then do simple addition and division to get the mean and finally obtain R?
Thank you, and sorry for the long question.

I guess you want to use some sparse representation if many elements of drdp remain zero.
MATLAB's sparse command only creates 2D matrices though, so something like this might work?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/167669
Once you get that to work you can worry about computing means --
apart from requiring a little bookkeeping that should be doable.
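Since drdp here is dense (every slice stores a weighted intensity difference), the bookkeeping can instead be exactly the splitting the question asks about: keep only a running 2D sum and never allocate the 4D array. A minimal sketch, assuming g and g_offset are column vectors of length length(AppearanceData.g_mean) and that the overall mean over offsets and images (as in the last two lines of the function) is what is wanted:
% Sketch, not the original AAM code: accumulate the sum over the k and i loops
% in a 2D array and divide once at the end, so the 13x6x10x468249 array is
% never allocated (~49 MB in doubles instead of ~2.9 GB).
nParam  = size(ShapeAppearanceData.Evectors,2)+4;   % 13
nOffset = 6;                                        % length(de)
nImages = length(TrainingData);                     % 10
nPixels = length(AppearanceData.g_mean);            % 468249
drdp_sum = zeros(nParam,nPixels);
% Inside the loops, replace
%   drdp(j,k,i,:)=(g-g_offset)*w;
% with (assumes g and g_offset are column vectors)
%   drdp_sum(j,:) = drdp_sum(j,:) + ((g-g_offset)*w).';
% After the loops, the mean over offsets and images is
drdpt = drdp_sum/(nOffset*nImages);   % same result as squeeze(mean(mean(drdp,3),2))
R = pinv(drdpt)';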

Related

How to plot a cosine wave in MATLAB given fitted parameters?

In my project, I used a generic cosine function to fit my data:
cos_fun = @(p, theta) p(1) + p(2) * cos(theta - p(3))
p = nlinfit(x,y,cos_fun,[1 1 0])
As a result, p has three values, which are y-offset, amplitude and phase.
Can I draw a smooth cosine curve using these three parameters?
TL;DR: It is possible to both fit and plot the curve, with and without requiring toolboxes. All cases presented below.
Plotting
Plotting the function follows directly from the function used to obtain your parameters using plot(). Notice that you control the smoothness of the plotted function based on the step size for the domain (see step below).
In the figure, the results obtained from the nlinfit() (Toolbox required) are the same as "SSE" obtained without a toolbox using fminsearch().
% Plot (No toolbox Required)
step = 0.01; % smaller is smoother
Xrng = 0:step:12;
figure, hold on, box on
plot(Xdata,Ydata,'b.','DisplayName','Data')
plot(Xrng,cos_fun(p_SSE,Xrng),'r--','DisplayName','SSE')
plot(Xrng,cos_fun(p_SAE,Xrng),'k--','DisplayName','SAE')
legend('show')
As pointed out in the comment by @Daniel, you can also make the plot with nlintool() but this requires the Statistics Toolbox.
nlintool(Xdata,Ydata,cos_fun,[1 1 0]) % toolbox required
Fitting
Using nlinfit(): (Statistics Toolbox Required)
pNL = nlinfit(Xdata,Ydata,cos_fun,[1 1 0]) % same as SSE approach below
A Toolbox Free Approach:
You can construct a convex error function to minimize and return the global optimum using fminsearch() as a down and dirty approach. For example, the sum of squared error or the sum of absolute error will be convex.
% MATLAB R2019a
% Generate Example Data
sigma = 0.5; % increase this for more variable data (more noise)
Xdata = [repmat(1:10,1,4)].';
Ydata = cos(Xdata)+sigma*randn(length(Xdata),1);
% Function Evaluation
cos_fun=@(p,x) p(1) + p(2).*cos(x-p(3));
% Error Functions
SSEh =@(p) sum((cos_fun(p,Xdata)-Ydata).^2); % sum of squared error
SAEh =@(p) sum(abs(cos_fun(p,Xdata)-Ydata)); % sum of absolute error
Of course, these will give you different errors for the same parameters.
% Test
SSEh([1 1 0])
SAEh([1 1 0])
But you then call fminsearch() given an initial guess for the parameters, p0, to obtain the parameters that minimize your chosen error function. Since SSEh and SAEh are both convex with respect to p, there's no need to do this multiple times and save the best one since for every p0, you'll get the same answer.
p0 = [1 1 0.25]; % Initial starting point
[p_SSE, SSE] = fminsearch(SSEh,p0)
[p_SAE, SAE] = fminsearch(SAEh,p0)
You fit slightly different curves depending on the error function.
Notice that SSEh(pNL) and SSEh(p_SSE) are the same, because pNL equals p_SSE: nlinfit() estimates the coefficients "using iterative least squares estimation."
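A quick sanity check (a sketch, assuming pNL, p_SSE and SSEh exist as defined above) makes that equivalence concrete:
% Sketch: the nlinfit and SSE/fminsearch fits should agree (up to solver tolerance)
fprintf('SSE(nlinfit)    = %.6f\n', SSEh(pNL));
fprintf('SSE(fminsearch) = %.6f\n', SSEh(p_SSE));
max(abs(pNL(:)-p_SSE(:)))   % should be small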

Linear regression -- Stuck in model comparison in Matlab after estimation?

I want to determine how well the estimated model fits future new data. To do this, a prediction-error plot is often used. Basically, I want to compare the measured output and the model output. I am using the Least Mean Square (LMS) algorithm as the equalization technique. Can somebody please tell me the proper way to plot the comparison between the model and the measured data? If the estimates are close to the truth, the curves should be very close to each other. Below is the code: u is the input to the equalizer, x is the noisy received signal, y is the output of the equalizer, and w is the equalizer weights. Should the graph be plotted using x and y*w? But x is noisy. I am confused, since the measured output x is noisy and the model output y*w is noise-free.
%% Channel and noise level
h = [0.9 0.3 -0.1]; % Channel
SNRr = 10; % Noise Level
%% Input/Output data
N = 1000; % Number of samples
Bits = 2; % Number of bits for modulation (2-bit for Binary modulation)
data = randi([0 1],1,N); % Random signal
d = real(pskmod(data,Bits)); % BPSK Modulated signal (desired/output)
r = filter(h,1,d); % Signal after passing through channel
x = awgn(r, SNRr); % Noisy Signal after channel (given/input)
%% LMS parameters
epoch = 10; % Number of epochs (training repetitions)
eta = 1e-3; % Learning rate / step size
order=10; % Order of the equalizer
U = zeros(1,order); % Input frame
W = zeros(1,order); % Initial Weights
%% Algorithm
for k = 1 : epoch
for n = 1 : N
U(1,2:end) = U(1,1:end-1); % Sliding window
U(1,1) = x(n); % Present Input
y = (W)*U'; % Calculating output of LMS
e = d(n) - y; % Instantaneous error
W = W + eta * e * U ; % Weight update rule of LMS
J(k,n) = e * e'; % Instantaneous square error
end
end
Let's start step by step:
First of all, when using a fitting method it is good practice to look at the RMS error. To get it, we have to find the error between the input and the output. As I understand it, x is the input to the model and y is its output. You already calculate the error between them, but you use it inside the loop without saving it. Let's modify your code:
%% Algorithm
for k = 1 : epoch
for n = 1 : N
U(1,2:end) = U(1,1:end-1); % Sliding window
U(1,1) = x(n); % Present Input
y(n) = (W)*U'; % Calculating output of LMS
e(n) = x(n) - y(n); % Instantaneous error
W = W + eta * e(n) * U ; % Weight update rule of LMS
J(k,n) = e(n) * (e(n))'; % Instantaneous square error
end
end
Now e contains the errors from the last epoch, so we can use something like this:
rms(e)
I would also like to compare results using the mean error and the standard deviation:
mean(e)
std(e)
And some visualization:
histogram(e)
Second point: we cannot use the compare function on plain vectors; it works on dynamic system models, so you would have to wrap your data as such a model to use it. However, you can use functions such as goodnessOfFit instead. If you want an error at each step that takes all previous data points into account, you need a small mathematical workaround: calculate it at each point over the indices 1:currentNumber, as sketched below.
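For example, a minimal sketch of such a running error (assuming e holds the per-sample errors saved in the modified loop above):
% Sketch: cumulative RMS error using all samples up to the current index n
runningRMS = sqrt(cumsum(abs(e).^2) ./ (1:numel(e)));
figure, plot(runningRMS)
xlabel('n'), ylabel('RMS error over samples 1:n')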
Regarding the LMS method: there are built-in functions that implement LMS. Let's try one on your data sets:
alg = lms(0.001);
eqobj = lineareq(10,alg);
y1 = equalize(eqobj,x);
And let's look at the result:
plot(x)
hold on
plot(y1)
There are a lot of examples of such implementations of this function: look here for an example.
I hope this was helpful for you!
The difference between the model output and the observed data is known as the residual.
The difference between the observed value of the dependent variable
(y) and the predicted value (ŷ) is called the residual (e). Each data
point has one residual.
Residual = Observed value - Predicted value
e = y - ŷ
Both the sum and the mean of the residuals are equal to zero. That is,
Σe = 0 and ē = 0.
A residual plot is a graph that shows the residuals on the vertical
axis and the independent variable on the horizontal axis. If the
points in a residual plot are randomly dispersed around the horizontal
axis, a linear regression model is appropriate for the data;
otherwise, a non-linear model is more appropriate.
Here is an example of residual plots from a model of mine. On the vertical axis is the difference between the output of the model and the measured value. On the horizontal axis is one of the independent variables used in the model.
We can see that most of the residuals are within 0.2 units, which happens to be my tolerance for this model. I can therefore draw a conclusion as to the worth of the model.
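If you want to reproduce such a plot, a minimal sketch (with placeholder names x, y and yhat for the independent variable, measured output and model output) is:
% Residual plot sketch: residuals vertically, independent variable horizontally
res = y - yhat;                     % observed minus predicted
figure, plot(x, res, 'b.'), hold on
plot(xlim, [0 0], 'k--')            % zero reference line
xlabel('Independent variable'), ylabel('Residual')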
See here for a similar question.
Regarding your question about the lack of noise in your model's output: we are creating a linear model. There's the clue.

Registration using particle filter in matlab

I wrote the following code as an implementation of the registration algorithm presented in the article "E. Arce-Santana, D. Campos-Delgado and A. Alba, Affine image registration guided by particle filter, IET Image Process. 6 (2012)".
Question 1: When I run the code, no matter how many particles or iterations I choose, the output is still inaccurate. I do not know what the problem is.
Question 2: I commented out a formula for updating the particle weights. Did I implement that equation correctly, or is the one above it (which relies on entropy) the right one, so that I should delete the commented line and keep the entropy-based equation?
The code is as follows:
%% clear everything
clear all
close all
clc
%% Read the image data
% I1: Reference Image
I1=imread('cameraman.tif');
I1=double(imresize(I1,[256 256]));
figure,imshow(I1)
[I1r I1c I1d]=size(I1);
%I2: Target image
I2=randomtransform(I1); %user-defined function to generate random transformation
% I2=double(imresize(I2,[I1r I1c]));
figure,imshow(I2)
%% Particle Filter Steps:
%%%%% Input:
n=4; % Related to the initialization
m=256; % Related to the initialization
N=20; %(no. of iteration)
M=100; %(no. of particles)
phai=[]; %(state vector of the affine transformation parameters)
w=[]; %(the vector of the phai weights)
%k: iteration no.
%i: particle no.
Vk=1; % noise variance in the system (used in the prediction step)
w_v=1; % weights update equation related variance
beta=0.99; % annealing factor
%%%%% Output
phai_est=[]; %(Final estimated state vector of affine parameters)
%%%%% Steps:
%%% Step 1: Random generation of the particles
% The ranges are obtained from the paper
rotationAngle=double(int8(unifrnd(-pi/4,pi/4,1,1)));
scalingCoefX = double(unifrnd(0.5,2,1,1));
scalingCoefY = double(unifrnd(0.5,2,1,1));
shearingCoefX = double(unifrnd(0.5,2,1,1));
shearingCoefY = double(unifrnd(0.5,2,1,1));
translationCoefX = double(int8(unifrnd(-I1r/2,I1r/2,1,1)));
translationCoefY = double(int8(unifrnd(-I1c/2,I1c/2,1,1)));
%The initialization of the first phai
phai(1,:)=[round(rotationAngle*10^n)/10^n, round(scalingCoefX*10^n)/10^n, round(scalingCoefY*10^n)/10^n, round(shearingCoefX*10^n)/10^n, round(shearingCoefY*10^n)/10^n, round(translationCoefX*10^n)/10^n, round(translationCoefY*10^n)/10^n]';
%Make the randomly generated particles from the initial prior gaussian distribution
for i = 1:M
phai(i,:) = phai(1,:) + sqrt(2) * randn; %2: the variance of the initial estimate
w(i,:)=1/M;
end
% Normalize the weights:
w = w./sum(w);
%%% Step2: Resample process
for k=1:N
for i=1:M
% rand: u (uniform random value between 0 and 1)
j=find((cumsum(w) >= max(w)),1,'first');
phai_select(i,:)=phai(j,:);
phai(i,:)=phai_select(i,:)+(sqrt(Vk^2)*randn);
I2_new=targettransform(I2,phai(i,:)); %user-defined function to apply the generated transformation to the target image
E_I1=entropy(I1);
I=E_I1+entropy(I2_new)-joint_entropy(I1,I2_new); %joint_entropy: user defined function to calculate joint entropy of the two images
w(i)=(1/sqrt(2*pi*w_v))*exp(-((E_I1-I)^2)/(2*w_v));
% w(i)=prod(prod(((1/sqrt(2*pi*w_v))*exp(-((I2_new-I1)^2)/(2*w_v)))));
end
% Normalize the weights
w = w./sum(w);
% Reduce the noise standard deviation
Vk=beta*Vk;
phai_est=mean(phai);
end

Matlab: Error when working with higher order QAM signal - Matrix dimension must agree

This problem seems trivial, but I am left scratching my head trying to resolve it. I am trying to apply a fractionally spaced equalizer with the constant modulus technique to a 64-QAM constellation. The program works for QPSK (4-QAM), but when I apply it to 64-QAM it throws this error:
Error using /
Matrix dimensions must agree.
Error in Working_FSE_CMA_64QAM (line 68)
sb1=sb/(fh(temp)); % scale the output
I do not have the Communications Toolbox, so I have generated the 64-QAM symbols using the answer given to my previous question, Generate 16 QAM signal.
Can somebody please help in making the code work? Thank you.
% Blind channel estimation/equalization
% adpative CMA method in Fractional space
T=1000; % total number of data
dB=25; % SNR in dB value
%%%%%%%%% Simulate the Received noisy Signal %%%%%%%%%%%
N=5; % smoothing length N+1
Lh=5; % channel length = Lh+1
Ap=4; % number of subchannels or receive antennas
h=randn(Ap,Lh+1)+sqrt(-1)*randn(Ap,Lh+1); % channel (complex)
for i=1:Ap, h(i,:)=h(i,:)/norm(h(i,:)); end % normalize
s = (randi(8,1,T)*2-5)+j*(randi(8,1,T)*2-5); %64 QAM
%s=round(rand(1,T))*2-1; % QPSK or 4 QAM symbol sequence
%s=s+sqrt(-1)*(round(rand(1,T))*2-1);
% generate received noisy signal
x=zeros(Ap,T); % matrix to store samples from Ap antennas
SNR=zeros(1,Ap);
for i=1:Ap
x(i,:)=filter(h(i,:),1,s);
vn=randn(1,T)+sqrt(-1)*randn(1,T); % AWGN noise (complex)
vn=vn/norm(vn)*10^(-dB/20)*norm(x(i,:)); % adjust noise power
SNR(i)=20*log10(norm(x(i,:))/norm(vn)); % Check SNR of the received samples
x(i,:)=x(i,:)+vn; % received signal
end
SNR=SNR % display and check SNR
%%%%%%%%%%%%% adaptive equalizer estimation via CMA
Lp=T-N; %% remove several first samples to avoid 0 or negative subscript
X=zeros((N+1)*Ap,Lp); % sample vectors (each column is a sample vector)
for i=1:Lp
for j=1:Ap
X((j-1)*(N+1)+1:j*(N+1),i)=x(j, i+N:-1:i).';
end
end
e=zeros(1,Lp); % used to save instant error
f=zeros((N+1)*Ap,1); f(N*Ap/2)=1; % initial condition
%R2=2; % constant modulas of QPSK symbols
R2 = 1.380953; %For 64 QAM http://www.google.com/patents/US7433400
mu=0.001; % parameter to adjust convergence and steady error
for i=1:Lp
e(i)=abs(f'*X(:,i))^2-R2; % instant error
f=f-mu*2*e(i)*X(:,i)*X(:,i)'*f; % update equalizer
f(N*Ap/2)=1;
% i_e=[i/10000 abs(e(i))] % output information
end
%sb=f'*X; % estimate symbols (perform equalization)
sb = filter(f, 1, X);
% calculate SER
H=zeros((N+1)*Ap,N+Lh+1); temp=0;
for j=1:Ap
for i=1:N+1, temp=temp+1; H(temp,i:i+Lh)=h(j,:); end % channel matrix
end
fh=f'*H; % composite channel+equalizer response should be delta-like
temp=find(abs(fh)==max(abs(fh))); % find the max of the composite response
sb1=sb/(fh(temp)); % scale the output
sb1=sign(real(sb1))+sqrt(-1)*sign(imag(sb1)); % perform symbol detection
start=N+1-temp; % general expression for the beginning matching point
sb2=sb1(10:length(sb1))-s(start+10:start+length(sb1)); % find error symbols
SER=length(find(sb2~=0))/length(sb2) % calculate SER
if 1
subplot(221),
plot(s,'o'); % show the pattern of transmitted symbols
grid,title('Transmitted symbols'); xlabel('Real'),ylabel('Image')
axis([-2 2 -2 2])
subplot(222),
plot(x,'o'); % show the pattern of received samples
grid, title('Received samples'); xlabel('Real'), ylabel('Image')
subplot(223),
plot(sb,'o'); % show the pattern of the equalized symbols
grid, title('Equalized symbols'), xlabel('Real'), ylabel('Image')
subplot(224),
plot(abs(e)); % show the convergence
grid, title('Convergence'), xlabel('n'), ylabel('Error e(n)')
end
The main problem is that the iterative algorithm does not converge (more specifically, it diverges). As a result, the values in f contain NaN. The result of
temp=find(abs(fh)==max(abs(fh)))
is then an empty vector, which does not match the expected scalar size on the line
sb1=sb/(fh(temp)); % scale the output
To fix the problem, note that the values of mu and R2 you use are based on a signal constellation with unit variance. You can generate such a 64-QAM constellation with:
s = ((randi(8,1,T)*2-9)+j*(randi(8,1,T)*2-9))/sqrt(42); %64 QAM
Also, a few lines redefine the complex constant j = sqrt(-1) as an index variable; you should avoid using j as an index. Finally, clearing all variables at the beginning of your script (with clear all) can help you start execution from a consistent, clean state.
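As a quick sketch of why the normalization matters, you can check that the rescaled constellation has approximately unit average power, which is what the chosen R2 and mu assume:
% Sketch: the normalized 64-QAM symbols should have average power close to 1
T = 1e5;
s = ((randi(8,1,T)*2-9) + sqrt(-1)*(randi(8,1,T)*2-9))/sqrt(42);
avgPower = mean(abs(s).^2)   % expect a value near 1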

compute S transform and its square value in matlab

Let us consider the following code, taken from
http://www.mathworks.com/matlabcentral/fileexchange/45848-stockwell-transform--s-transform-/content/stran.m
to compute the S-transform. Here is my code:
function ST=stran(h)
% Compute S-Transform without for loops
%%% Coded by Kalyan S. Dash %%%
%%% IIT Bhubaneswar, India %%%
[~,N]=size(h); % h is a 1xN one-dimensional series
nhaf=fix(N/2);
odvn=1;
if nhaf*2==N;
odvn=0;
end
f=[0:nhaf -nhaf+1-odvn:-1]/N;
Hft=fft(h);
%Compute all frequency domain Gaussians as one matrix
invfk=[1./f(2:nhaf+1)]';
W=2*pi*repmat(f,nhaf,1).*repmat(invfk,1,N);
G=exp((-W.^2)/2); %Gaussian in freq domain
% End of frequency domain Gaussian computation
% Compute Toeplitz matrix with the shifted fft(h)
HW=toeplitz(Hft(1:nhaf+1)',Hft);
% Exclude the first row, corresponding to zero frequency
HW=[HW(2:nhaf+1,:)];
% Compute Stockwell Transform
ST=ifft(HW.*G,[],2); %Compute voice
%Add the zero freq row
st0=mean(h)*ones(1,N);
ST=[st0;ST];
end
and consider the following chirp signal:
>> t = 0:0.001:2;
x = chirp(t,100,1,200,'quadratic');
I need confirmation that I am doing the following correctly:
>> ST=stran(x);
>> plot(abs(ST))
?
picture is here
Posting my comment as answer:
I don't have much idea of the S-transform, but AFAIK its result is a time-frequency matrix (a 2D array, as you can see from the size of ST), so you may want to do imagesc(abs(ST)) or surf(abs(ST),'linestyle','none') instead of plot.
In your figure you have plotted over a thousand lines; that's why it's so chaotic.
Using
imagesc(abs(ST))
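For a labelled time-frequency view, a sketch along these lines should work (the frequency axis assumes the 1 kHz sampling implied by t = 0:0.001:2, so the rows run from 0 up to about 500 Hz):
% Sketch: display |S-transform| with physical time/frequency axes
fs = 1/(t(2)-t(1));                      % 1000 Hz for this t vector
fvec = (0:size(ST,1)-1)/length(t)*fs;    % row frequencies, 0 up to ~fs/2
imagesc(t, fvec, abs(ST)), axis xy
xlabel('Time (s)'), ylabel('Frequency (Hz)'), colorbar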