Matlab - FFT of Gaussian - Equivalency

simple problem:
I plot out a 2D Gaussian function with a certain resolution in Matlab. I test with variance or sigma = 1.0. I want to compare it to the result of FFT(Gaussian), which should result in another Gaussian with a variance of (1./sigma). Since I am testing with sigma = 1.0, I would think that I should get two equivalent, 2D kernels.
i.e.
g1FFT = buildKernel(rows, cols, mu, sigma) % uses normpdf over arbitrary resolution (rows, cols, 3) with the peak in the center
buildKernel:
function result = buildKernel(rows, cols, mu, sigma)
    result = zeros(rows, cols, 3);
    center_w = floor(cols / 2);
    center_h = floor(rows / 2);
    for i = 1:rows
        for j = 1:cols
            distance = sqrt((center_w - j).^2 + (center_h - i).^2);
            g_val = normpdf(distance, mu, sigma);
            result(i, j, :) = g_val;
        end
    end
    % normalize so that kernel sums to 1
    sumKernel = sum(result(:));
    result = result ./ sumKernel;
end
I am testing with mu = 0.0 (always), and sigma = 1.0.
i.e.
g1FFT = circshift(g1FFT, [rows/2, cols/2, 0]); % fft2 expects center to be in corners
freq_G1 = fft2(g1FFT);
freq_G1 = circshift(freq_G1, [-rows/2, -cols/2, 0]); % shift back to center, for comparison's sake
Since I am testing with sigma = 1.0, I would think that I should get two equivalent, 2D kernels, because if sigma = 1.0, then 1.0/sigma = 1.0. So, g1FFT would equal freq_G1.
However, I do not. They have different magnitudes, even after normalization. Is there something I am missing?

To keep things simple, I will first cover the case for one-dimensional signals. Similar observations can be made for multi-dimensional cases.
The Fourier Transform of a continuous time Gaussian signal is itself a Gaussian function, as indicated in this table. One can note that the wider the Gaussian in the time domain, the narrower the transformed Gaussian in the frequency domain, and that for mu=0 and sigma=1/sqrt(2π) (which corresponds to a=1/(2*sigma^2)=π in the above transform table), the Fourier Transform of the continuous time function
g(t) = exp(-π*t^2)
would be the similar function (where only a change of variables occurred):
G(f) = exp(-π*f^2)
That's all good, but this is for a continuous time signal and we are really interested in discrete time signals.
Unfortunately, and as also indicated on wikipedia, the Discrete Fourier Transform of a kernel obtained by sampling the continuous time Gaussian function is not itself a sampled Gaussian function.
Fortunately, this relationship is still often approximately true (without going into too much detail, it requires the time-domain kernel to be wide enough, but not too wide, such that the frequency-domain approximation is also wide enough for the relationship to be approximately true for the inverse transform as well). In this case, the Discrete Fourier Transform of the periodic extension (with period N) of the discrete time signal
x[n] = exp(-n^2/(2*sigma^2))
where mu=0 and sigma=sqrt(N/2π), could be approximated by the similar function (up to a scaling factor and a change of variables):
X[k] ≈ sqrt(N)*exp(-k^2/(2*sigma^2))
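As a quick one-dimensional illustration of this approximate relationship, here is a minimal sketch (N = 64 is an arbitrary choice):
N = 64;
sigma = sqrt(N/(2*pi));
n = (0:N-1)' - N/2;              % centered sample indices
g = exp(-n.^2/(2*sigma^2));      % sampled Gaussian, peaked at the center
G = fftshift(fft(ifftshift(g))); % centered DFT (real up to round-off)
max(abs(G - sqrt(N)*g))          % small compared to the peak value sqrt(N)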
You could then modify buildKernel to support different standard deviations sqrt(rows/2π) and sqrt(cols/2π) along the rows and columns respectively:
function result = buildKernel(rows, cols, mu, sigma)
    if (length(mu) > 1)
        mu_h = mu(1);
        mu_w = mu(2);
    else
        mu_h = mu;
        mu_w = mu;
    end
    if (length(sigma) > 1)
        sigma_h = sigma(1);
        sigma_w = sigma(2);
    else
        sigma_h = sigma;
        sigma_w = sigma;
    end
    center_w = mu_w + floor(cols / 2);
    center_h = mu_h + floor(rows / 2);
    r = transpose(normpdf(0:rows-1, center_h, sigma_h));
    c = normpdf(0:cols-1, center_w, sigma_w);
    result = repmat(r * c, [1 1 3]);
    % normalize so that kernel sums to 1
    sumKernel = sum(result(:));
    result = result ./ sumKernel;
end
which you could use to get a kernel whose FFT is a scaled version of itself. In other words a kernel obtained using
g1FFTin = buildKernel(rows, cols, mu, [sqrt(rows/2/pi) sqrt(cols/2/pi)]);
would be such that freq_G1 (as computed in your posted code) is nearly equal to g1FFTin * sqrt(rows*cols).
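A quick numeric check of this claim, as a sketch (even dimensions assumed, e.g. rows = cols = 64, using the modified buildKernel above):
rows = 64; cols = 64; mu = 0;
g1FFTin = buildKernel(rows, cols, mu, [sqrt(rows/2/pi) sqrt(cols/2/pi)]);
g1FFT = circshift(g1FFTin, [rows/2, cols/2, 0]);
freq_G1 = circshift(fft2(g1FFT), [-rows/2, -cols/2, 0]);
max(abs(freq_G1(:) - sqrt(rows*cols)*g1FFTin(:))) % small approximation error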
Finally, given that your intention is really only to test that the kernel's FFT is also (approximately) Gaussian, you may wish to compare the FFT of a more arbitrary kernel with standard deviation sigma against another appropriately scaled Gaussian kernel computed directly in the frequency domain. In other words, assuming a spatial domain kernel obtained with:
g1FFTin = buildKernel(rows, cols, mu, sigma);
with corresponding frequency-domain representation obtained with:
g1FFT = circshift(g1FFTin, [rows/2, cols/2, 0]);
freq_G1 = fft2(g1FFT);
freq_G1 = circshift(freq_G1, [-rows/2, -cols/2, 0]);
Then freq_G1 can be compared against another appropriately scaled Gaussian kernel computed directly in the frequency domain:
freq_G1_approx = (rows*cols/(2*pi*sigma^2))*buildKernel(rows, cols, mu, [rows cols]/(2*pi*sigma));
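As a sketch of the comparison (reusing the code above with, say, rows = cols = 64, mu = 0 and sigma = 2), the relative error should be small:
err = max(abs(freq_G1(:) - freq_G1_approx(:))) / max(abs(freq_G1(:)))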

Related

Plot normalized uniform mixture

I need to reproduce the normalized density p(x) below, but the code given does not generate a normalized PDF.
clc, clear
% Create three distribution objects with different parameters
pd1 = makedist('Uniform','lower',2,'upper',6);
pd2 = makedist('Uniform','lower',2,'upper',4);
pd3 = makedist('Uniform','lower',5,'upper',6);
% Compute the pdfs
x = -1:.01:9;
pdf1 = pdf(pd1,x);
pdf2 = pdf(pd2,x);
pdf3 = pdf(pd3,x);
% Sum of uniforms
pdf = (pdf1 + pdf2 + pdf3);
% Plot the pdfs
figure;
stairs(x,pdf,'r','LineWidth',2);
If I calculate the normalized mixture PDF by simply scaling them by their total sum, I get a different normalized probability compared with the original figure above.
pdf = pdf/sum(pdf);
Mixture
A mixture of two random variables means: with probability p, use Distribution 1; with probability 1-p, use Distribution 2.
Based on your graph, it appears you are mixing the distributions rather than adding (convolving) them. The precise result depends very much on the mixing probabilities. As an example, I've chosen a = 0.25, b = 0.35, and c = 1-a-b.
For a mixture, the probability density function (PDF) is analytically available:
pdfMix = @(x) a.*pdf(pd1,x) + b.*pdf(pd2,x) + c.*pdf(pd3,x)
% MATLAB R2018b
pd1 = makedist('Uniform','lower',2,'upper',6);
pd2 = makedist('Uniform','lower',2,'upper',4);
pd3 = makedist('Uniform','lower',5,'upper',6);
a = 0.25;
b = 0.35;
c = 1 - a - b; % a + b + c = 1
pdfMix = @(x) a.*pdf(pd1,x) + b.*pdf(pd2,x) + c.*pdf(pd3,x);
Xrng = 0:.01:8;
plot(Xrng,pdfMix(Xrng))
xlabel('X')
ylabel('Probability Density Function')
Since the distributions being mixed are uniform you could also use the stairs() command: stairs(Xrng,pdfMix(Xrng)).
We can verify this is a valid PDF by ensuring the total area is 1.
integral(pdfMix,0,9)
ans = 1.0000
Convolution: Adding Random Variables
Adding the random variables together yields a different result. Again, this can easily be done empirically. It is also possible to do this analytically; for example, convolving two Uniform(0,1) distributions yields a Triangular(0,1,2) distribution. The convolution of random variables is just a fancy way of saying we add them up, and there is a way to obtain the resulting PDF using integration if you're interested in analytical results.
N = 80000; % Number of samples
X1 = random(pd1,N,1); % Generate samples
X2 = random(pd2,N,1);
X3 = random(pd3,N,1);
X = X1 + X2 + X3; % Convolution
Notice the change of scale for the x-axis (Xrng = 0:.01:16;).
To obtain this, I generated 80k samples from each distribution with random() then added them up to obtain 80k samples of the desired convolution. Notice when I used histogram() I used the 'Normalization', 'pdf' option.
Xrng = 0:.01:16;
figure, hold on, box on
p(1) = plot(Xrng,pdf(pd1,Xrng),'DisplayName','X1 \sim U(2,6)')
p(2) = plot(Xrng,pdf(pd2,Xrng),'DisplayName','X2 \sim U(2,4)')
p(3) = plot(Xrng,pdf(pd3,Xrng),'DisplayName','X3 \sim U(5,6)')
h = histogram(X,'Normalization','pdf','DisplayName','X = X1 + X2 + X3')
% Cosmetics
legend('show','Location','northeast')
for k = 1:3
    p(k).LineWidth = 2.0;
end
title('X = X1 + X2 + X3 (80k samples)')
xlabel('X')
ylabel('Probability Density Function (PDF)')
You can obtain an estimate of the PDF by using fitdist() with the Kernel distribution, then calling the pdf() command on the resulting Kernel distribution object.
pd_kernel = fitdist(X,'Kernel')
figure, hold on, box on
h = histogram(X,'Normalization','pdf','DisplayName','X = X1 + X2 + X3')
pk = plot(Xrng,pdf(pd_kernel,Xrng),'b-') % Notice use of pdf command
legend('Empirical','Kernel Distribution','Location','northwest')
If you do this, you'll notice the resulting kernel is unbounded, but you can correct this, since you know the bounds, using truncate(). You could also use the ksdensity() function, though the probability distribution object approach is probably more user friendly due to all the functions you have direct access to. Be aware that the kernel is an approximation (you can see that clearly in the kernel plot). In this case, the integration to convolve 3 uniform distributions isn't too bad, so finding the PDF analytically is probably the preferred choice if the PDF is desired. Otherwise, empirical approaches (especially for generation) are probably sufficient, though this depends on your application.
pdt_kernel = truncate(pd_kernel,9,16)
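For comparison, a minimal ksdensity() sketch (assuming X from above):
[fk, xk] = ksdensity(X); % kernel density estimate on an automatic grid
figure, plot(xk, fk)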
Generating samples from mixtures and convolutions is a different issue (but manageable).
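For the mixture case, one manageable approach is a sketch like the following (reusing pd1, pd2, pd3 and the weights a, b, c from above): pick a component for each draw according to the weights, then sample from that component.
N = 1e5;
u = rand(N,1); % uniform draws select the component
Xmix = zeros(N,1);
i1 = u < a; i2 = (u >= a) & (u < a+b); i3 = u >= a+b;
Xmix(i1) = random(pd1, nnz(i1), 1);
Xmix(i2) = random(pd2, nnz(i2), 1);
Xmix(i3) = random(pd3, nnz(i3), 1);
histogram(Xmix,'Normalization','pdf') % should match pdfMix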

Creating a high pass filter in matlab

I'm trying to create a high pass filter in Matlab. I generate the Gaussian Kernel using
function kernel = compute_kernel(sigma,size)
[x,y] = meshgrid(-size/2:size/2,-size/2:size/2);
constant = 1/(2*pi*sigma*sigma);
kernel = constant*exp( -(y.^2 + x.^2 )/(2 * sigma * sigma));
kernel = (kernel - min(kernel(:)))./(max(kernel(:)) - min(kernel(:)));
end
Then after creating the Kernel I use it to create a low pass filter for the image(variable im2 ):
g = compute_kernel(9,101);
im2_low = conv2(im2,g,'same');
As I understand it, I can then subtract the filtered image from the original image (in the frequency domain) to extract the high frequencies, making it the equivalent of a high pass filter.
F = fft2(im2_low);
IM2 = fft2(im2);
IM2_high = IM2 - F;
figure; fftshow(IM2_high);
im2_high = ifft2(IM2_high);
figure; imshow(im2_high,[]);
There seems to be something wrong with this, though. When I view the high pass filtered image, it seems to be a color-inverted, blurred image, not one with the edges defined as I've seen online. I'm not sure if my process is wrong or whether I'm just using the wrong values for my Gaussian Kernel.
Any kernel that you want to use for maintaining image features (i.e. you don't want the magnitude of something, but the image to look like a human-recognizable image) needs one thing done to it: normalization.
You seem to have tried that, but you misinterpreted the meaning of normalizing for kernels. They don't need to be in [0-1]; their sum needs to be 1.
So, taking your code:
im2=imread('https://upload.wikimedia.org/wikipedia/en/2/24/Lenna.png');
im2=double(rgb2gray(im2));
sigma=9;
sizei=101;
[x,y] = meshgrid(-sizei/2:sizei/2,-sizei/2:sizei/2);
constant = 1/(2*pi*sigma*sigma);
kernel = constant*exp( -(y.^2 + x.^2 )/(2 * sigma * sigma));
%%%%%% NORMALIZATION
kernel=kernel/sum(kernel(:));
%%%%%%
im2_low = conv2(im2,kernel,'same');
F = fft2(im2_low);
IM2 = fft2(im2);
IM2_high = IM2 - F;
im2_high = ifft2(IM2_high);
figure; imshow(im2_high,[]);
But, as CrisLuengo mentions, subtraction is an operation that doesn't change in the Fourier domain, thus the answer is
im2_high=im2-im2_low
This is a long answer to a short question. Read it if you want to learn something.
A low-pass filter and a high-pass filter are both linear filters.
A linear filter can be applied in the spatial domain through a convolution or in the frequency domain (a.k.a. Fourier domain) as a multiplication.
It is true that, in the Fourier domain, the difference between a low-pass filter kernel and an identity filter (all-pass filter) is a high-pass filter:
high_pass_filter = identity_filter - low_pass_filter
The identity filter would be a kernel where every element is 1. The filter is applied by multiplication, so
IM2 * high_pass_filter = IM2 * ( identity_filter - low_pass_filter )
which is the same as
IM2 * high_pass_filter = IM2 - IM2 * low_pass_filter
(here, as in the question, IM2 is the Fourier-domain representation of the image im2; the indented lines are meant to be equations but are written in pseudo-code, with the * symbol used for multiplication).
Thus, the OP wants to apply a low-pass filter and subtract the input image in the Fourier domain to obtain a high-pass--filtered image.
However, one of the properties of the Fourier transform is that it is a linear transform. This means that
F(a*f + b*g) == a * F(f) + b * F(g)
(with F(.) the Fourier transform, a and b constants, and f and g functions). Setting a=1 and b=-1, and g the low-pass--filtered image and f the input image, we get
F(im2 - im2_low) == F(im2) - F(im2_low)
That is, subtraction in the spatial domain and in the Fourier domain are equivalent. Thus, if one computes im2_low in the spatial domain, there is no need to go to the Fourier domain for the subtraction. These two bits of code produce an identical result (up to numerical precision):
F = fft2(im2_low);
IM2 = fft2(im2);
IM2_high = IM2 - F;
im2_high = ifft2(IM2_high);
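% or, equivalently, computed directly in the spatial domain: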
im2_high = im2 - im2_low;
Furthermore, the convolution is linear also. This means that, if you think of F(.) in the equations above as a convolution, those equations still hold. You can do manipulations like this:
conv(f, h) - f == conv(f, h) - conv(f, 1) == conv(f, h-1)
This directly leads to a definition of a high-pass filter in the spatial domain (the 1 above stands for the identity kernel, a unit impulse):
g = - compute_kernel(9,101);
g(51,51) = g(51,51) + 1;
im2_high2 = conv2(im2,g,'same');
You will see that max(max(abs(im2_high-im2_high2))) yields a value very close to 0.
A note regarding computing the Gaussian filter:
The compute_kernel function posted in the question computes a 2D filter kernel by directly evaluating a 2D Gaussian. The resulting filter kernel is 101x101 pixels, meaning that computing the convolution requires 101 * 101 * N multiplications and additions (MADs), with N the number of pixels in the filtered image. However, the Gaussian filter is separable, meaning that the same result can be obtained in only 101 * 2 * N MADs (50x fewer!). Additionally, for sigma = 9 one can get away with a smaller kernel too.
Gaussian kernel size:
The Gaussian function never reaches zero, but it reaches very close to zero quite quickly. When cutting it off at 3*sigma, very little of it is lost. I find 3 sigma to be a good balance. In the case of sigma = 9, the 3 sigma cutoff leads to a kernel with 55 pixels (3*sigma * 2 + 1).
Gaussian separability:
The multi-dimensional Gaussian can be obtained by multiplying 1D Gaussians together:
exp(-(y.^2+x.^2)/(2*sigma*sigma)) == exp(-(x.^2)/(2*sigma*sigma)) * exp(-(y.^2)/(2*sigma*sigma))
This leads to a much more efficient implementation of the convolution:
conv(f,h1*h2) == conv( conv(f,h1), h2 )
That is, convolving an image with a column filter h1 and then convolving the result with a row filter h2 is the same as convolving the image with a 2D filter h1*h2. In code:
sigma = 9;
sizei = ceil(3*sigma); % 3 sigma cutoff
g = exp(-(-sizei:sizei).^2/(2*sigma.^2)); % 1D Gaussian kernel
g = g/sum(g(:)); % normalize kernel
im2_low = conv2(g,g,im2,'same');
g2d = g' * g;
im2_low2 = conv2(im2,g2d,'same');
The difference is numerical imprecision:
max(max(abs(im2_low-im2_low2)))
ans =
1.3927e-12
You'll find a more detailed description about Gaussian filtering on my blog, as well as some issues you can run into when using MATLAB's Image Processing Toolbox.

Determining time-dependent frequency using a sliding-window FFT

I have an instrument which produces roughly sinusoidal data, but with frequency varying slightly in time. I am using MATLAB to prototype some code to characterize the time dependence, but I'm running into some issues.
I am generating an idealized approximation of my data, I(t) = sin(2 pi f(t) t), with f(t) variable but currently tested as linear or quadratic. I then implement a sliding Hamming window (of width w) to generate a set of Fourier transforms F[I(t), t'] corresponding to the data points in I(t), and each F[I(t), t'] is fit with a Gaussian to more precisely determine the peak location.
My current MATLAB code is:
fs = 1000; %Sample frequency (Hz)
tlim = [0,1];
t = (tlim(1)/fs:1/fs:tlim(2)-1/fs)'; %Sample domain (t)
N = numel(t);
f = @(t) 100-30*(t-0.5).^2; %Frequency function (Hz)
I = sin(2*pi*f(t).*t); %Sample function
w = 201; %window width
ww=floor(w/2); %window half-width
for i=0:2:N-w
    %Take the FFT of a portion of I, convolved with a Hamming window
    II = 1/(fs*N)*abs(fft(I((1:w)+i).*hamming(w))).^2;
    II = II(1:floor(numel(II)/2));
    p = (0:fs/w:(fs/2-fs/w))';
    %Find approximate FFT maximum
    [~,maxIx] = max(II);
    maxLoc = p(maxIx);
    %Fit the resulting FFT with a Gaussian function
    gauss = @(c,x) c(1)*exp(-(x-c(2)).^2/(2*c(3)^2));
    op = optimset('Display','off');
    mdl = lsqcurvefit(gauss,[max(II),maxLoc,10],p,II,[],[],op);
    %Generate diagnostic plots
    subplot(3,1,1);plot(p,II,p,gauss(mdl,p))
    line(f(t(i+ww))*[1,1],ylim,'color','r');
    subplot(3,1,2);plot(t,I);
    line(t(1+i)*[1,1],ylim,'color','r');line(t(w+i)*[1,1],ylim,'color','r')
    subplot(3,1,3);plot(t(i+ww),f(t(i+ww)),'b.',t(i+ww),mdl(2),'r.');
    hold on
    xlim([0,max(t)])
    drawnow
end
hold off
My thought process is that the peak location in each F[I(t), t'] should be a close approximation of the frequency at the center of the window which was used to produce it. However, this does not seem to be the case, experimentally.
I have had some success using discrete Fourier analysis for engineering problems in the past, but I've only done coursework on continuous Fourier transforms--so there may be something obvious that I'm missing. Also, this is my first question on StackExchange, so constructive criticism is welcome.
So it turns out that my problem was a poor understanding of the mathematics of the sine function. I had assumed that the frequency of the wave was equal to whatever was multiplied by the time variable (e.g. the f in sin(ft)). However, it turns out that the frequency is actually defined by the derivative of the entire argument of the sine function--the rate of change of the phase.
For constant f the two definitions are equal, since d(ft)/dt = f. But for, say, f(t) = sin(t):
d(f(t)t)/dt = d(sin(t) t)/dt = t cos(t) + sin(t)
The frequency varies as a function very different from f(t). Changing the function definition to the following fixed my problem:
f = @(t) 100-30*(t-0.5).^2; %Frequency function (Hz)
G = cumsum(f(t))/fs; %Phase function (cycles)
I = sin(2*pi*G); %Sampling function
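As a sanity check, a minimal sketch (reusing fs, t, f and G from above): the instantaneous frequency, i.e. the derivative of the phase, now tracks f(t).
f_inst = [diff(G); NaN]*fs; % numerical d(phase)/dt, in cycles per second
plot(t, f(t), t, f_inst, '--');
legend('target f(t)', 'instantaneous frequency');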

Gaussian random function

By using normrnd, I would like to create a normal distribution function with mean and sigma values expressed as vectors of size 1x45, varying from 1:45, and plot this simulated PDF against the ideal values.
Whenever I create a normrnd like the one expressed below,
Gaussian = normrnd([1 45],[1 45],[1 500],length(c_t));
I am obtaining the following error,
Size information is inconsistent.
The reason for creating this PDF is to compute the chemical kinetics of a tracer with a variable Gaussian noise model. Basically, I have the ideal characteristics of a tracer; now I would like to add Gaussian noise and understand how the chemical kinetics of the tracer vary with changing noise.
There are different computational models for understanding the chemical kinetics of a tracer, one of which is the three-compartmental model; others are shape analysis and constrained shape analysis models.
I currently have ideal curves for all respective models; now I would like to add noise to these models and understand how each particular model behaves with varying noise.
This is why I would like to create a variable noise model with normrnd, add this model to the ideal characteristics, and compute Noise (Sigma) vs. Error. This analysis will give me an approximate estimation of how different models behave with varying noise, and which model is suitable for estimating the chemical kinetics of the tracer.
function [c_t,c_t_noise] = Noise_ConstrainedK2(t,a1,a2,a3,b1,b2,b3,td,tmax,k1,k2,k3)
    K_1 = (k1*k2)/(k2+k3);
    K_2 = (k1*k3)/(k2+k3);
    %DV_free = k1/(k2+k3);
    c_t = zeros(size(t));
    ind = (t > td) & (t < tmax);
    c_t(ind) = conv(((t(ind) - td) ./ (tmax - td) * (a1 + a2 + a3)),(K_1*exp(-(k2+k3)*t(ind)+K_2)),'same');
    ind = (t >= tmax);
    c_t(ind) = conv((a1 * exp(-b1 * (t(ind) - tmax))+ a2 * exp(-b2 * (t(ind) - tmax))) + a3 * exp(-b3 * (t(ind) - tmax)),(K_1*exp(-(k2+k3)*t(ind)+K_2)),'same');
    meanAndVar = (rand(45,2)-0.5)*2;
    numPoints = 500;
    randSamples = zeros(1,numPoints);
    for ii = 1:numPoints
        idx = mod(ii,size(meanAndVar,1))+1;
        randSamples(ii) = normrnd(meanAndVar(idx,1),meanAndVar(idx,2));
        c_t_noise = c_t + randSamples(ii);
    end
    scatter(1:numPoints,randSamples)
    dg = [0 0.5 0];
    plot(t,c_t,'r');
    hold on;
    plot(t,c_t_noise,'Color',dg);
    hold off;
    axis([0 50 0 1900]);
    xlabel('Time[mins]');
    ylabel('concentration [Mbq]');
    title('My signal');
    %plot(t,c_tnp);
end
The output characteristics from the above function are as follows. Here I could not visualize any noise.
The only thing remotely close to what you want can be done as follows, but it will involve looping, because you cannot request 500 data points from only 45 different means and variances without the assumption that multiple sets can be revisited.
This is my interpretation of what you want, though I am still not entirely sure.
Random Gaussian Function Selection
meanAndVar = rand(45,2);
numPoints = 500;
randSamples = zeros(1,numPoints);
for ii = 1:numPoints
    randMeanVarIdx = randi([1,size(meanAndVar,1)]);
    randSamples(ii) = normrnd(meanAndVar(randMeanVarIdx,1),meanAndVar(randMeanVarIdx,2));
end
scatter(1:numPoints,randSamples)
The above code generates a random 2-D matrix of means and variances (1st col = mean, 2nd col = variance). We then preallocate some space.
Inside the loop we choose a random set of mean and variance to use (uniformly), plug it into a random Gaussian value function, and store the result.
The matrix randSamples will then contain a list of random values generated by a random set of Gaussian functions chosen in a uniformly random manner.
Sequential Function Selection
If you do not want to randomly select which function to use, and just want to go sequentially, you loop using modulus to get the index of which set of values to use.
meanAndVar = (rand(45,2)-0.5)*2; % zero shift and make bounds [-1,1]
numPoints = 500;
randSamples = zeros(1,numPoints);
for ii = 1:numPoints
    idx = mod(ii,size(meanAndVar,1))+1;
    randSamples(ii) = normrnd(meanAndVar(idx,1),meanAndVar(idx,2));
end
scatter(1:numPoints,randSamples)
The problem with this statement
Gaussian = normrnd([1 45],[1 45],[1 500],length(c_t));
is that you supply two mu values and two sigma values, and ask for a matrix of size [1 500] x length(c_t). You need to pass the size in a uniform way, so either
Gaussian = normrnd(mu, sigma,[500 length(c_t)]);
or
Gaussian = normrnd(mu, sigma, 500, length(c_t));
Then you should make sure that the size of the mu/sigma vectors matches the size of the matrix you ask for. So if you want a 500 x length(c_t) matrix as output, you need to pass 500 x length(c_t) (mu,sigma) pairs. If you only want to vary one of mu or sigma, you can pass a single value for the other parameter.
To get N values from a normal distribution with fixed mean and steadily increasing sigma you can do
noise = @(mu, s0, s1, n) normrnd(mu, s0:(s1-s0)/(n-1):s1, 1, n)
where s0 is the lowest sigma value and s1 is the largest sigma value. To get 10 values drawn from distributions with mu=0 and sigma increasing from 1 to 5 you can do
noise(0,1,5,10)
If you want to introduce some randomness in the increase of sigma you can do
noise_rand = @(mu, s0, s1, n) normrnd(mu, (s0:(s1-s0)/(n-1):s1) .* rand(1,n), 1, n)
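Hypothetical usage with the tracer curve, as a sketch (this assumes c_t from your function exists; the noise bounds are arbitrary):
c_t_noise = c_t(:) + noise(0, 0.1, 2, numel(c_t))'; % sigma ramps from 0.1 to 2
plot(c_t, 'r'); hold on; plot(c_t_noise, 'g'); hold off;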

How can we produce kappa and delta in the following model using Matlab?

I have the following stochastic model describing the evolution of a process (Y) in space and time. Ds and Dt are domains in space (2D, with x and y axes) and time (1D, with t axis). This model is usually known as a mixed-effects model or a components-of-variation model.
I am currently developing Y as follows:
%# Time parameters
T=1:1:20; % input
nT=numel(T);
%# Grid and model parameters
nRow=100;
nCol=100;
[Grid.Nx,Grid.Ny,Grid.Nt] = meshgrid(1:1:nCol,1:1:nRow,T);
xPower=0.1;
tPower=1;
noisePower=1;
detConstant=1;
deterministic_mu = detConstant.*(((Grid.Nt).^tPower)./((Grid.Nx).^xPower));
beta_s = randn(nRow,nCol); % mean-zero random effect representing location specific variability common to all times
gammaTemp = randn(nT,1);
for t = 1:nT
    gamma_t(:,:,t) = repmat(gammaTemp(t),nRow,nCol); % mean-zero random effect representing time specific variability common to all locations
end
var = 0.1; % noise has variance = 0.1
for t = 1:nT
    kappa_st(:,:,t) = sqrt(var)*randn(nRow,nCol);
end
for t = 1:nT
    Y(:,:,t) = deterministic_mu(:,:,t) + beta_s + gamma_t(:,:,t) + kappa_st(:,:,t);
end
My questions are:
How do I produce delta in the expression for Y, and what is the difference between kappa and delta?
Could you explain, through some illustration using Matlab, whether I am correctly producing Y?
Please let me know if you need some more information/explanation. Thanks.
First, I rewrote your code to make it a bit more efficient. I see you generate linearly-spaced grids for x,y and t and carry out the computation for all points in this grid. This approach has severe limitations on the maximum attainable grid resolution, since the 3D grid (and all variables defined with it) can consume an awfully large amount of memory if the resolution goes up. If the model you're implementing will grow in complexity and size (it often does), I'd suggest you throw this all into a function accepting matrix/vector inputs for s and t, which will be a bit more flexible in this regard -- processing "blocks" of data that will otherwise not fit in memory will be a lot easier that way.
Then, I generated the delta_st term with rand instead of randn, since the noise should be "white". Now I'm very unsure about that last one, and I didn't have time to read through the paper you linked to -- can you tell me on what pages I can find the relevant sections for the delta_st?
Now, the code:
%# Time parameters
T = 1:1:20; % input
nT = numel(T);
%# Grid and model parameters
nRow = 100;
nCol = 100;
% noise has variance = 0.1
var = 0.1;
xPower = 0.1;
tPower = 1;
noisePower = 1;
detConstant = 1;
[Grid.Nx,Grid.Ny,Grid.Nt] = meshgrid(1:nCol,1:nRow,T);
% deterministic mean
deterministic_mu = detConstant .* Grid.Nt.^tPower ./ Grid.Nx.^xPower;
% mean-zero random effect representing location specific
% variability common to all times
beta_s = repmat(randn(nRow,nCol), [1 1 nT]);
% mean-zero random effect representing time specific
% variability common to all locations
gamma_t = bsxfun(@times, ones(nRow,nCol,nT), randn(1, 1, nT));
% mean zero random effect capturing the spatio-temporal
% interaction not found in the larger-scale deterministic mu
kappa_st = sqrt(var)*randn(nRow,nCol,nT);
% mean zero random effect representing the micro-scale
% spatio-temporal variability that is modelled by white
% noise (i.i.d. at different time steps) in Ds·Dt
delta_st = noisePower * (rand(nRow,nCol,nT)-0.5);
% Final result:
Y = deterministic_mu + beta_s + gamma_t + kappa_st + delta_st;
Your implementation samples beta, gamma and kappa as if they are white (i.e. their values at each (x,y,t) are independent). The descriptions of the terms suggest that this is not meant to be the case. It looks like delta is supposed to capture the white noise, while the other terms capture the correlations over their respective domains; e.g. there is a non-zero correlation between gamma(t_1) and gamma(t_1+1).
If you wish to model gamma as a stationary Gaussian Markov process with variance var_g and one-step covariance cor_g between gamma(t) and gamma(t+1), you can use something like
gamma_t = nan( nT, 1 );
gamma_t(1) = sqrt(var_g)*randn();
K_g = cor_g/var_g;
K_w = sqrt( (1-K_g^2)*var_g );
for t = 2:nT
    gamma_t(t) = K_g*gamma_t(t-1) + K_w*randn();
end
gamma_t = reshape( gamma_t, [ 1 1 nT ] );
The formulas I've used for the gains K_g and K_w in the above code (and the initialization of gamma_t(1)) produce the desired stationary variance and one-step covariance:
var(gamma_t) = K_w^2/(1 - K_g^2) = var_g
cov(gamma_t, gamma_(t-1)) = K_g*var_g = cor_g
Note that the implementation above assumes that later you will sum the terms using bsxfun to do the "repmat" for you:
Y = bsxfun( @plus, deterministic_mu + kappa_st + delta_st, beta_s );
Y = bsxfun( @plus, Y, gamma_t );
Note that I haven't tested the above code, so you should confirm by sampling that it does actually produce a zero-mean process with the specified variance and covariance between adjacent samples. To sample beta, the same procedure can be extended into two dimensions, but the principles are essentially the same. I suspect kappa should be similarly modeled as a Markov Gaussian process, but in all three dimensions and with a lower variance, to represent higher-order effects not captured in mu, beta and gamma.
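A quick Monte-Carlo check along those lines could look like this sketch (the var_g and cor_g values are arbitrary test values):
var_g = 1; cor_g = 0.8; nT = 1e5; % arbitrary test values
g = nan(nT,1);
g(1) = sqrt(var_g)*randn();
K_g = cor_g/var_g;
K_w = sqrt((1-K_g^2)*var_g);
for t = 2:nT
    g(t) = K_g*g(t-1) + K_w*randn();
end
gc = g - mean(g);
est_var = mean(gc.^2) % should approach var_g
est_cov = mean(gc(1:end-1).*gc(2:end)) % should approach cor_g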
Delta is supposed to be zero mean stationary white noise. Assuming it to be Gaussian with variance noisePower one would sample it using
delta_st = sqrt(noisePower)*randn( [ nRows nCols nT ] );