Plot normalized uniform mixture - MATLAB

I need to reproduce the normalized density p(x) below, but the code given does not generate a normalized PDF.
clc, clear
% Create three distribution objects with different parameters
pd1 = makedist('Uniform','lower',2,'upper',6);
pd2 = makedist('Uniform','lower',2,'upper',4);
pd3 = makedist('Uniform','lower',5,'upper',6);
% Compute the pdfs
x = -1:.01:9;
pdf1 = pdf(pd1,x);
pdf2 = pdf(pd2,x);
pdf3 = pdf(pd3,x);
% Sum of uniforms
pdf = (pdf1 + pdf2 + pdf3);
% Plot the pdfs
figure;
stairs(x,pdf,'r','LineWidth',2);
If I compute the normalized mixture PDF by simply scaling the sum by its total, I get a different normalized density from the one in the original figure above.
pdf = pdf/sum(pdf);
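(Note: dividing by sum(pdf) ignores the grid spacing, so it normalizes the discrete sum rather than the area under the curve. A minimal sketch of area-based normalization, assuming the 0.01 spacing above:)
% hedged sketch: normalize the summed pdf by its area, not its raw sum
dx = 0.01;                           % spacing of the x grid above
pdfSum = pdf1 + pdf2 + pdf3;         % un-normalized sum of the three pdfs
pdfNorm = pdfSum/trapz(x, pdfSum);   % equivalently pdfSum/(sum(pdfSum)*dx)
% note: using "pdf" as a variable name shadows the built-in pdf() function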

Mixture
A mixture of two random variables means with probability p use Distribution 1, and with probability 1-p use Distribution 2.
Based on your graph, it appears you are mixing the distributions rather than adding (convolving) them. The precise result depends very much on the mixing probabilities. As an example, I've chosen a = 0.25, b = 0.35, and c = 1 - a - b.
For a mixture, the probability density function (PDF) is analytically available:
pdfMix = @(x) a.*pdf(pd1,x) + b.*pdf(pd2,x) + c.*pdf(pd3,x).
% MATLAB R2018b
pd1 = makedist('Uniform',2,6);
pd2 = makedist('Uniform',2,4);
pd3 = makedist('Uniform',5,6);
a = 0.25;
b = 0.35;
c = 1 - a - b; % a + b + c = 1
pdfMix = @(x) a.*pdf(pd1,x) + b.*pdf(pd2,x) + c.*pdf(pd3,x);
Xrng = 0:.01:8;
plot(Xrng,pdfMix(Xrng))
xlabel('X')
ylabel('Probability Density Function')
Since the distributions being mixed are uniform you could also use the stairs() command: stairs(Xrng,pdfMix(Xrng)).
We can verify this is a valid PDF by ensuring the total area is 1.
integral(pdfMix,0,9)
ans = 1.0000
Convolution: Adding Random Variables
Adding the random variables together yields a different result. Again, this can be done empirically quite easily, and it is possible to do it analytically. For example, convolving two Uniform(0,1) distributions yields a Triangular(0,1,2) distribution. The convolution of random variables is just a fancy way of saying we add them up, and there is a way to obtain the resulting PDF using integration if you're interested in analytical results.
N = 80000; % Number of samples
X1 = random(pd1,N,1); % Generate samples
X2 = random(pd2,N,1);
X3 = random(pd3,N,1);
X = X1 + X2 + X3; % Convolution
Notice the change of scale for the x-axis (Xrng = 0:.01:16;).
To obtain this, I generated 80k samples from each distribution with random() then added them up to obtain 80k samples of the desired convolution. Notice when I used histogram() I used the 'Normalization', 'pdf' option.
Xrng = 0:.01:16;
figure, hold on, box on
p(1) = plot(Xrng,pdf(pd1,Xrng),'DisplayName','X1 \sim U(2,6)')
p(2) = plot(Xrng,pdf(pd2,Xrng),'DisplayName','X2 \sim U(2,4)')
p(3) = plot(Xrng,pdf(pd3,Xrng),'DisplayName','X3 \sim U(5,6)')
h = histogram(X,'Normalization','pdf','DisplayName','X = X1 + X2 + X3')
% Cosmetics
legend('show','Location','northeast')
for k = 1:3
p(k).LineWidth = 2.0;
end
title('X = X1 + X2 + X3 (80k samples)')
xlabel('X')
ylabel('Probability Density Function (PDF)')
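As a quick numeric check of the analytic claim above (a sketch, assuming a 0.001 grid): convolving two U(0,1) PDFs with conv(), scaled by the grid spacing, reproduces the Triangular(0,1,2) shape.
% numeric check: U(0,1) + U(0,1) -> Triangular(0,1,2)
dx = 0.001;
u = ones(1, numel(0:dx:1));       % U(0,1) pdf sampled on [0,1]
tri = conv(u, u) * dx;            % numeric convolution of the two pdfs
xt = linspace(0, 2, numel(tri));  % support of the sum
figure, plot(xt, tri)             % triangular, peak ~1 at x = 1
trapz(xt, tri)                    % ~1, still a valid pdf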
You can obtain an estimate of the PDF using fitdist() with the Kernel distribution option and then calling the pdf() command on the resulting Kernel distribution object.
pd_kernel = fitdist(X,'Kernel')
figure, hold on, box on
h = histogram(X,'Normalization','pdf','DisplayName','X = X1 + X2 + X3')
pk = plot(Xrng,pdf(pd_kernel,Xrng),'b-') % Notice use of pdf command
legend('Empirical','Kernel Distribution','Location','northwest')
If you do this, you'll notice the resulting kernel is unbounded but you can correct this since you know the bounds using truncate(). You could also use the ksdensity() function, though the probability distribution object approach is probably more user friendly due to all the functions you have direct access to. You should be aware that the kernel is an approximation (you can see that clearly in the kernel plot). In this case, the integration to convolve 3 uniform distributions isn't too bad, so finding the PDF analytically is probably the preferred choice if the PDF is desired. Otherwise, empirical approaches (especially for generation), are probably sufficient though this depends on your application.
pdt_kernel = truncate(pd_kernel,9,16)
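For reference, a minimal ksdensity() sketch using the known bounds (the 'Support' option keeps the estimate inside [9,16]):
[f, xi] = ksdensity(X, 'Support', [9 16]); % bounded kernel density estimate
figure, plot(xi, f, 'LineWidth', 2)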
Generating samples from mixtures and convolutions is a different issue (but manageable).
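For instance, a hedged sketch of sampling the mixture itself: draw a component label with probabilities [a b c], then sample from that component (uses pd1, pd2, pd3, a, b, c from above).
N = 80000;
comp = randsample(1:3, N, true, [a b c]).'; % component label per sample
Xmix = zeros(N,1);
Xmix(comp==1) = random(pd1, nnz(comp==1), 1);
Xmix(comp==2) = random(pd2, nnz(comp==2), 1);
Xmix(comp==3) = random(pd3, nnz(comp==3), 1);
figure, histogram(Xmix, 'Normalization', 'pdf') % should match pdfMix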


Gaussian iterative curve fitting

I have a set of frequency data with peaks to which I need to fit a Gaussian curve and then get the full width half maximum from. The FWHM part I can do, I already have a code for that but I'm having trouble writing code to fit the Gaussian.
Does anyone know of any functions that'll do this for me or would be able to point me in the right direction? (I can do least squares fitting for lines and polynomials but I can't get it to work for gaussians)
Also it would be helpful if it was compatible with both Octave and Matlab as I have Octave at the moment but don't get access to Matlab until next week.
Any help would be greatly appreciated!
Fitting a single 1D Gaussian directly is a non-linear fitting problem. You'll find ready-made implementations here, or here, or here for 2D, or here (if you have the statistics toolbox) (have you heard of Google? :)
Anyway, there might be a simpler solution. If you know for sure your data y will be well-described by a Gaussian, and is reasonably well-distributed over your entire x-range, you can linearize the problem (these are equations, not statements):
y = 1/(σ·√(2π)) · exp( -½ ( (x-μ)/σ )² )
ln y = ln( 1/(σ·√(2π)) ) - ½ ( (x-μ)/σ )²
= Px² + Qx + R
where the substitutions
P = -1/(2σ²)
Q = +2μ/(2σ²)
R = ln( 1/(σ·√(2π)) ) - ½(μ/σ)²
have been made. Now, solve for the linear system Ax=b with (these are Matlab statements):
% design matrix for least squares fit
xdata = xdata(:);
A = [xdata.^2, xdata, ones(size(xdata))];
% log of your data
b = log(y(:));
% least-squares solution for x
x = A\b;
The vector x you found this way will equal
x == [P Q R]
which you then have to reverse-engineer to find the mean μ and the standard-deviation σ:
mu = -x(2)/x(1)/2;
sigma = sqrt( -1/2/x(1) );
Which you can cross-check with x(3) == R (there should only be small differences).
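A minimal end-to-end sketch of this linearization on synthetic, noise-free data (the values 3 and 0.7 are just assumptions for illustration); it runs in both MATLAB and Octave:
% synthetic Gaussian data (noise-free, so the recovery is essentially exact)
mu_true = 3; sigma_true = 0.7;
xdata = linspace(0, 6, 100).';
y = 1/(sigma_true*sqrt(2*pi)) * exp(-0.5*((xdata - mu_true)/sigma_true).^2);
% linearized least-squares fit, as above
A = [xdata.^2, xdata, ones(size(xdata))];
b = log(y);
x = A\b;                      % x == [P Q R]
mu = -x(2)/x(1)/2             % ~3
sigma = sqrt(-1/2/x(1))       % ~0.7
With noisy data the log transform distorts the error weighting (small y values dominate), which is the usual caveat with this trick.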
Perhaps this has the thing you are looking for? Not sure about compatibility:
http://www.mathworks.com/matlabcentral/fileexchange/11733-gaussian-curve-fit
From its documentation:
[sigma,mu,A]=mygaussfit(x,y)
[sigma,mu,A]=mygaussfit(x,y,h)
this function fits the function
y = A * exp( -(x-mu)^2 / (2*sigma^2) )
the fitting is done by a polyfit
on the ln of the data.
h is the threshold, i.e. the fraction
of the maximum y height above which the data
is taken for the fit.
h should be a number between 0 and 1.
if h is not given it is set to 0.2
by default.
I had a similar problem.
This was the first result on Google, and some of the scripts linked here made my MATLAB crash.
Finally I found that MATLAB has a built-in fit function that can fit Gaussians too.
It looks like this:
>> v=-30:30;
>> fit(v', exp(-v.^2)', 'gauss1')
ans =
General model Gauss1:
ans(x) = a1*exp(-((x-b1)/c1)^2)
Coefficients (with 95% confidence bounds):
a1 = 1 (1, 1)
b1 = -8.489e-17 (-3.638e-12, 3.638e-12)
c1 = 1 (1, 1)
I found that the MATLAB "fit" function was slow, and used "lsqcurvefit" with an inline Gaussian function. This is for fitting a Gaussian FUNCTION, if you just want to fit data to a Normal distribution, use "normfit."
Check it
% % Generate synthetic data (for example) % % %
nPoints = 200; binSize = 1/nPoints ;
fauxMean = 47 ;fauxStd = 8;
faux = fauxStd.*randn(1,nPoints) + fauxMean; % REPLACE WITH YOUR ACTUAL DATA
xaxis = 1:length(faux) ;fauxData = histc(faux,xaxis);
yourData = fauxData; % replace with your actual distribution
xAxis = 1:length(yourData) ;
gausFun = @(hms,x) hms(1) .* exp (-(x-hms(2)).^2 ./ (2*hms(3)^2)) ; % Gaussian FUNCTION
% % Provide estimates for initial conditions (for lsqcurvefit) % %
height_est = max(fauxData)*rand ; mean_est = fauxMean*rand; std_est=fauxStd*rand;
x0 = [height_est;mean_est; std_est]; % parameters need to be in a single variable
options=optimset('Display','off'); % avoid pesky messages from lsqcurvefit (optional)
[params]=lsqcurvefit(gausFun,x0,xAxis,yourData,[],[],options); % meat and potatoes
lsq_mean = params(2); lsq_std = params(3) ; % what you want
% % % Plot data with fit % % %
myFit = gausFun(params,xAxis);
figure;hold on;plot(xAxis,yourData./sum(yourData),'k');
plot(xAxis,myFit./sum(myFit),'r','linewidth',3) % normalization optional
xlabel('Value');ylabel('Probability');legend('Data','Fit')

Matlab - FFT of Gaussian - Equivalency

simple problem:
I plot out a 2D Gaussian function with a certain resolution in Matlab. I test with variance or sigma = 1.0. I want to compare it to the result of FFT(Gaussian), which should result in another Gaussian with a variance of (1./sigma). Since I am testing with sigma = 1.0, I would think that I should get two equivalent, 2D kernels.
i.e.
g1FFT = buildKernel(rows, cols, mu, sigma) % uses normpdf over arbitrary resolution (rows, cols, 3) with the peak in the center
buildKernel:
function result = buildKernel(rows, cols, mu, sigma)
result = zeros(rows, cols, 3);
center_w = floor(cols / 2);
center_h = floor(rows / 2);
for i = 1:rows
for j = 1:cols
distance = sqrt((center_w - j).^2 + (center_h - i).^2);
g_val = normpdf(distance, mu, sigma);
result(i, j, :) = g_val;
end
end
% normalize so that kernel sums to 1
sumKernel = sum(result(:));
result = result ./ sumKernel;
end
I am testing with mu = 0.0 (always), and variance or sigma = 1.0. I want to compare it to the result of FFT(Gaussian), which should result in another Gaussian with a variance of (1./sigma).
i.e.
g1FFT = circshift(g1FFT, [rows/2, cols/2, 0]); % fft2 expects center to be in corners
freq_G1 = fft2(g1FFT);
freq_G1 = circshift(freq_G1, [-rows/2, -cols/2, 0]); % shift back to center, for comparison's sake
Since I am testing with sigma = 1.0, I would think that I should get two equivalent, 2D kernels, because if sigma = 1.0, then 1.0/sigma = 1.0. So, g1FFT would equal freq_G1.
However, I do not. They have different magnitudes, even after normalization. Is there something I am missing?
To keep things simple, I will first cover the case for one-dimensional signals. Similar observations can be made for multi-dimensional cases.
The Fourier Transform of a continuous time Gaussian signal is itself a Gaussian function, as indicated in this table. One can note that the wider the Gaussian in the time domain, the narrower the transformed Gaussian in the frequency domain, and that for mu=0 and sigma=1/sqrt(2π) (which corresponds to a=1/(2*sigma^2)=π in the above transform table), the Fourier Transform of the continuous time function g(t) = exp(-π*t^2) would be the similar function G(f) = exp(-π*f^2), where only a change of variables occurred.
That's all good, but this is for a continuous time signal and we are really interested in discrete time signals.
Unfortunately, and as also indicated on Wikipedia, the Discrete Fourier Transform of a kernel obtained by sampling the continuous time Gaussian function is not itself a sampled Gaussian function.
Fortunately, this relationship is still often approximately true (without going into too much detail, it requires the time-domain kernel to be wide enough but not too wide, such that the frequency-domain approximation is also wide enough for the relationship to also be approximately true for the inverse transform). In this case, the Discrete Fourier Transform of the periodic extension (with period N) of the discrete time signal g[n] = exp(-π*n^2/N), where mu=0 and sigma=sqrt(N/(2π)), can be approximated by the similar function (up to a scaling factor and a change of variables) G[k] ≈ sqrt(N)*exp(-π*k^2/N).
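A minimal 1D sketch of that approximate self-similarity (N = 256 is an arbitrary choice):
N = 256;
n = (0:N-1) - N/2;                     % centered sample indices
g = exp(-pi*n.^2/N);                   % Gaussian with sigma = sqrt(N/(2*pi))
G = real(fftshift(fft(ifftshift(g)))); % centered DFT; imaginary part ~0
max(abs(G - sqrt(N)*g))                % tiny: the DFT is ~sqrt(N) times g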
You could then modify buildKernel to support different standard deviations sqrt(rows/(2π)) and sqrt(cols/(2π)) along the rows and columns respectively:
function result = buildKernel(rows, cols, mu, sigma)
if (length(mu)>1)
mu_h = mu(1);
mu_w = mu(2);
else
mu_h = mu;
mu_w = mu;
end
if (length(sigma)>1)
sigma_h = sigma(1);
sigma_w = sigma(2);
else
sigma_h = sigma;
sigma_w = sigma;
end
center_w = mu_w + floor(cols / 2);
center_h = mu_h + floor(rows / 2);
r = transpose(normpdf([0:rows-1],center_h,sigma_h));
c = normpdf([0:cols-1],center_w,sigma_w);
result = repmat(r * c, [1 1 3]);
% normalize so that kernel sums to 1
sumKernel = sum(result(:));
result = result ./ sumKernel;
end
which you could use to get a kernel whose FFT is a scaled version of itself. In other words a kernel obtained using
g1FFTin = buildKernel(rows, cols, mu, [sqrt(rows/2/pi) sqrt(cols/2/pi)]);
would be such that freq_G1 (as computed in your posted code) is nearly equal to g1FFTin * sqrt(rows*cols).
Finally given that your intention is really only to test that the kernel's FFT is also (approximately) Gaussian, you may wish to compare the FFT of a more arbitrary kernel with standard deviation sigma against another appropriately scaled Gaussian kernel computed directly in the frequency domain. In other words, assuming a spatial domain kernel obtained with:
g1FFTin = buildKernel(rows, cols, mu, sigma);
with corresponding frequency-domain representation obtained with:
g1FFT = circshift(g1FFTin, [rows/2, cols/2, 0]);
freq_G1 = fft2(g1FFT);
freq_G1 = circshift(freq_G1, [-rows/2, -cols/2, 0]);
Then freq_G1 can be compared against another appropriately scaled Gaussian kernel computed directly in the frequency domain:
freq_G1_approx = (rows*cols/(2*pi*sigma^2))*buildKernel(rows, cols, mu, [rows cols]/(2*pi*sigma));
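A quick way to sanity-check the two (assuming rows, cols, mu, and sigma are defined as above):
% the maximum pointwise discrepancy should be small for reasonable sigma
max(abs(freq_G1(:) - freq_G1_approx(:)))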

Random samples from Lognormal distribution

I have a parameter X that is lognormally distributed with mean 15 and standard deviation 0.48. For Monte Carlo simulation in MATLAB, I want to generate 40,000 samples from this distribution. How can this be done in MATLAB?
To generate an MxN matrix of lognormally distributed random numbers with parameters mu and sigma, use lognrnd (Statistics Toolbox):
result = lognrnd(mu,sigma,M,N);
If you don't have the Statistics Toolbox, you can equivalently use randn and then take the exponential. This exploits the fact that, by definition, the logarithm of a lognormal random variable is a normal random variable:
result = exp(mu+sigma*randn(M,N));
The parameters mu and sigma of the lognormal distribution are the mean and standard deviation of the associated normal distribution. To see how the mean and standard deviation of the lognormal distribution are related to the parameters mu and sigma, see the lognrnd documentation.
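For the numbers in the question, a small sketch of those conversions (assuming the 0.48 is the standard deviation, so v = 0.48^2):
m = 15; v = 0.48^2;                      % target lognormal mean and variance
mu = log(m^2/sqrt(v + m^2));             % parameters of the underlying normal
sigma = sqrt(log(1 + v/m^2));
result = exp(mu + sigma*randn(40000,1)); % the 40,000 requested samples
% check: exp(mu + sigma^2/2) recovers m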
To generate random samples, you need the inverse CDF. Once you have it, generating samples is nothing more than my_icdf(rand(n, m)).
First get the CDF (by integrating the PDF) and then invert that function to get the inverse CDF.
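For the lognormal specifically, the inverse CDF is available in closed form via erfinv, so a minimal base-MATLAB sketch of this recipe (mu and sigma being the parameters of the underlying normal) would be:
U = rand(40000, 1);              % U ~ Uniform(0,1)
Z = sqrt(2)*erfinv(2*U - 1);     % standard normal inverse CDF
X = exp(mu + sigma*Z);           % lognormal samples via inverse transform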
You can convert between the mean and variance of the Lognormal distribution and its parameters (mu,sigma), which correspond to the associated Normal (Gaussian) distribution, using standard formulas (implemented below as getLmuh and getLvarh).
The approach below uses the Probability Distribution Objects introduced in MATLAB 2013a. More specifically, it uses the makedist, random, and pdf functions.
% Notation
% if X~Lognormal(mu,sigma) then E[X] = m & Var(X) = v
m = 15; % Target mean for the Lognormal distribution
v = 0.48; % Target variance of the Lognormal distribution
getLmuh = @(m,v) log(m/sqrt(1+(v/(m^2))));
getLvarh = @(m,v) log(1 + (v/(m^2)));
mu = getLmuh(m,v);
sigma = sqrt(getLvarh(m,v));
% Generate Random Samples
pd = makedist('Lognormal',mu,sigma);
X = random(pd,1000,1); % Generates a 1000 x 1 vector of samples
You can verify the correctness via the mean and var functions and the distribution object:
>> mean(pd)
ans =
15
>> var(pd)
ans =
0.4800
Generating samples via the inverse transform is also made easy using the icdf (inverse CDF) function.
% Alternate way to generate X~Lognormal(mu,sigma)
U = rand(1000,1); % U ~ Uniform(0,1)
X = icdf(pd,U); % Inverse Transform
The graphic below was generated by the following code (MATLAB R2018a).
Xrng = [0:.01:20]';
figure, hold on, box on
h(1) = histogram(X,'DisplayName','Random Sample (N = 1000)');
h(2) = plot(Xrng,pdf(pd,Xrng),'b-','DisplayName','Theoretical PDF');
legend('show','Location','northwest')
title('Lognormal')
xlabel('X')
ylabel('Probability Density Function')
% Options
h(1).Normalization = 'pdf';
h(1).FaceColor = 'k';
h(1).FaceAlpha = 0.35;
h(2).LineWidth = 2;

Gaussian random function

By using normrnd, I would like to create a normal distribution function with mean and sigma values expressed as vectors of size 1x45 varying from 1:45 and plot this simulated PDF with ideal values.
Whenever I create a normrnd like the one expressed below,
Gaussian = normrnd([1 45],[1 45],[1 500],length(c_t));
I am obtaining the following error,
Size information is inconsistent.
The reason for creating this PDF is to compute the chemical kinetics of a tracer with a variable Gaussian noise model. Basically, I have the ideal characteristics of a tracer, and I would like to add Gaussian noise and understand how the chemical kinetics of the tracer vary with changing noise.
There are different computational models for understanding the chemical kinetics of a tracer, one of which is the three-compartmental model; others are shape analysis and constrained shape analysis models.
I currently have ideal curves for all respective models; now I would like to add noise to these models and understand how each particular model behaves with varying noise.
This is why I would like to create a variable noise model with normrnd, add this model to the ideal characteristics, and compute Noise (Sigma) vs. Error. This analysis will give me an approximate estimate of how different models behave with varying noise and which model is suitable for estimating the chemical kinetics of a tracer.
function [c_t,c_t_noise] =Noise_ConstrainedK2(t,a1,a2,a3,b1,b2,b3,td,tmax,k1,k2,k3)
K_1 = (k1*k2)/(k2+k3);
K_2 = (k1*k3)/(k2+k3);
%DV_free= k1/(k2+k3);
c_t = zeros(size(t));
ind = (t > td) & (t < tmax);
c_t(ind)= conv(((t(ind) - td) ./ (tmax - td) * (a1 + a2 + a3)),(K_1*exp(-(k2+k3)*t(ind)+K_2)),'same');
ind = (t >= tmax);
c_t(ind)=conv((a1 * exp(-b1 * (t(ind) - tmax))+ a2 * exp(-b2 * (t(ind) - tmax))) + a3 * exp(-b3 * (t(ind) - tmax)),(K_1*exp(-(k2+k3)*t(ind)+K_2)),'same');
meanAndVar = (rand(45,2)-0.5)*2;
numPoints = 500;
randSamples = zeros(1,numPoints);
for ii = 1:numPoints
idx = mod(ii,size(meanAndVar,1))+1;
randSamples(ii) = normrnd(meanAndVar(idx,1),meanAndVar(idx,2));
c_t_noise = c_t + randSamples(ii);
end
scatter(1:numPoints,randSamples)
dg = [0 0.5 0];
plot(t,c_t,'r');
hold on;
plot(t,c_t_noise,'Color',dg);
hold off;
axis([0 50 0 1900]);
xlabel('Time[mins]');
ylabel('concentration [Mbq]');
title('My signal');
%plot(t,c_tnp);
end
The output characteristics from the above function are as follows; here I could not visualize any noise.
The closest thing to what you want can be done as follows, but it involves looping, because you cannot request 500 data points from only 45 different means and variances without allowing the same (mean, variance) set to be revisited.
This is my interpretation of what you want, though I am still not entirely sure.
Random Gaussian Function Selection
meanAndVar = rand(45,2);
numPoints = 500;
randSamples = zeros(1,numPoints);
for ii = 1:numPoints
randMeanVarIdx = randi([1,size(meanAndVar,1)]);
randSamples(ii) = normrnd(meanAndVar(randMeanVarIdx,1),meanAndVar(randMeanVarIdx,2));
end
scatter(1:numPoints,randSamples)
The above code generates a random 2-D matrix of mean and variance (1st col = mean, 2nd col = variance). We then preallocate some space.
Inside the loop we choose a random set of mean and variance to use (uniformly) and then take that mean and variance, plug it into a random Gaussian value function, and store the result.
The vector randSamples will contain a list of random values generated by a random set of Gaussian functions chosen in a uniformly random manner.
Sequential Function Selection
If you do not want to randomly select which function to use, and just want to go sequentially, you can loop using the modulus to get the index of which set of values to use.
meanAndVar = (rand(45,2)-0.5)*2; % zero shift and make bounds [-1,1]
numPoints = 500;
randSamples = zeros(1,numPoints);
for ii = 1:numPoints
idx = mod(ii,size(meanAndVar,1))+1;
randSamples(ii) = normrnd(meanAndVar(idx,1),meanAndVar(idx,2));
end
scatter(1:numPoints,randSamples)
The problem with this statement
Gaussian = normrnd([1 45],[1 45],[1 500],length(c_t));
is that you supply two mu values and two sigma values, and ask for a matrix of size [1 500] x length(c_t). You need to pass the size in a uniform way, so either
Gaussian = normrnd(mu, sigma,[500 length(c_t)]);
or
Gaussian = normrnd(mu, sigma, 500, length(c_t));
Then you should make sure that the size of the mu/sigma vectors matches the size of the matrix you ask for. So if you want a 500 x length(c_t) matrix as output, you need to pass 500 x length(c_t) (mu,sigma) pairs. If you only want to vary one of mu or sigma, you can pass a single value for the other parameter.
To get N values from a normal distribution with fixed mean and steadily increasing sigma you can do
noise = @(mu, s0, s1, n) normrnd(mu, s0:(s1-s0)/(n-1):s1, 1,n)
where s0 is the lowest sigma value and s1 is the largest sigma value. To get 10 values drawn from distributions with mu=0 and sigma increasing from 1 to 5 you can do
noise(0,1,5,10)
If you want to introduce some randomness in the increase of sigma you can do
noise_rand = @(mu, s0, s1, n) normrnd(mu, (s0:(s1-s0)/(n-1):s1) .* rand(1,n), 1,n)

How can we produce kappa and delta in the following model using Matlab?

I have the following stochastic model describing the evolution of a process (Y) in space and time. Ds and Dt are the domains in space (2D, with x and y axes) and time (1D, with t axis). This model is usually known as a mixed-effects model or a components-of-variation model.
I am currently developing Y as follow:
%# Time parameters
T=1:1:20; % input
nT=numel(T);
%# Grid and model parameters
nRow=100;
nCol=100;
[Grid.Nx,Grid.Ny,Grid.Nt] = meshgrid(1:1:nCol,1:1:nRow,T);
xPower=0.1;
tPower=1;
noisePower=1;
detConstant=1;
deterministic_mu = detConstant.*(((Grid.Nt).^tPower)./((Grid.Nx).^xPower));
beta_s = randn(nRow,nCol); % mean-zero random effect representing location specific variability common to all times
gammaTemp = randn(nT,1);
for t = 1:nT
gamma_t(:,:,t) = repmat(gammaTemp(t),nRow,nCol); % mean-zero random effect representing time specific variability common to all locations
end
var=0.1;% noise has variance = 0.1
for t=1:nT
kappa_st(:,:,t) = sqrt(var)*randn(nRow,nCol);
end
for t=1:nT
Y(:,:,t) = deterministic_mu(:,:,t) + beta_s + gamma_t(:,:,t) + kappa_st(:,:,t);
end
My questions are:
How do I produce delta in the expression for Y, and what is the difference between kappa and delta?
Can you explain, with some illustration in MATLAB, whether I am producing Y correctly?
Please let me know if you need some more information/explanation. Thanks.
First, I rewrote your code to make it a bit more efficient. I see you generate linearly-spaced grids for x,y and t and carry out the computation for all points in this grid. This approach has severe limitations on the maximum attainable grid resolution, since the 3D grid (and all variables defined with it) can consume an awfully large amount of memory if the resolution goes up. If the model you're implementing will grow in complexity and size (it often does), I'd suggest you throw this all into a function accepting matrix/vector inputs for s and t, which will be a bit more flexible in this regard -- processing "blocks" of data that will otherwise not fit in memory will be a lot easier that way.
Then, I generated the delta_st term with rand instead of randn, since the noise should be "white". Now I'm very unsure about that last one, and I didn't have time to read through the paper you linked to -- can you tell me on what pages I can find the relevant sections for delta_st?
Now, the code:
%# Time parameters
T = 1:1:20; % input
nT = numel(T);
%# Grid and model parameters
nRow = 100;
nCol = 100;
% noise has variance = 0.1
var = 0.1;
xPower = 0.1;
tPower = 1;
noisePower = 1;
detConstant = 1;
[Grid.Nx,Grid.Ny,Grid.Nt] = meshgrid(1:nCol,1:nRow,T);
% deterministic mean
deterministic_mu = detConstant .* Grid.Nt.^tPower ./ Grid.Nx.^xPower;
% mean-zero random effect representing location specific
% variability common to all times
beta_s = repmat(randn(nRow,nCol), [1 1 nT]);
% mean-zero random effect representing time specific
% variability common to all locations
gamma_t = bsxfun(@times, ones(nRow,nCol,nT), randn(1, 1, nT));
% mean zero random effect capturing the spatio-temporal
% interaction not found in the larger-scale deterministic mu
kappa_st = sqrt(var)*randn(nRow,nCol,nT);
% mean zero random effect representing the micro-scale
% spatio-temporal variability that is modelled by white
% noise (i.i.d. at different time steps) in Ds·Dt
delta_st = noisePower * (rand(nRow,nCol,nT)-0.5);
% Final result:
Y = deterministic_mu + beta_s + gamma_t + kappa_st + delta_st;
Your implementation samples beta, gamma and kappa as if they are white (e.g. their values at each (x,y,t) are independent). The descriptions of the terms suggest that this is not meant to be the case. It looks like delta is supposed to capture the white noise, while the other terms capture the correlations over their respective domains. e.g. there is a non-zero correlation between gamma(t_1) and gamma(t_1+1).
If you wish to model gamma as a stationary Gaussian Markov process with variance var_g and correlation cor_g between gamma(t) and gamma(t+1), you can use something like
gamma_t = nan( nT, 1 );
gamma_t(1) = sqrt(var_g)*randn();
K_g = cor_g/var_g;
K_w = sqrt( (1-K_g^2)*var_g );
for t = 2:nT,
gamma_t(t) = K_g*gamma_t(t-1) + K_w*randn();
end
gamma_t = reshape( gamma_t, [ 1 1 nT ] );
The formulas I've used for the gains K_g and K_w in the above code (and the initialization of gamma_t(1)) produce the desired stationary variance \sigma^2_0 and one-step covariance \sigma^2_1: \sigma^2_0 = K_w^2/(1-K_g^2) = var_g and \sigma^2_1 = K_g \sigma^2_0 = cor_g.
Note that the implementation above assumes that later you will sum the terms using bsxfun to do the "repmat" for you:
Y = bsxfun( @plus, deterministic_mu + kappa_st + delta_st, beta_s );
Y = bsxfun( @plus, Y, gamma_t );
Note that I haven't tested the above code, so you should confirm with sampling that it does actually produce a zero-mean noise process of the specified variance and covariance between adjacent samples. To sample beta, the same procedure can be extended into two dimensions; the principles are essentially the same. I suspect kappa should be similarly modeled as a Markov Gaussian process, but in all three dimensions and with a lower variance to represent higher-order effects not captured in mu, beta and gamma.
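As a rough sketch of that two-dimensional extension (var_b and cor_b are assumed parameters, not from the paper, and the field is only approximately stationary near the edges):
nRow = 100; nCol = 100;
var_b = 0.5; cor_b = 0.8;            % assumed target variance / lag-1 correlation
W = randn(nRow, nCol);               % i.i.d. driving noise
B = filter(1, [1 -cor_b], W, [], 1); % AR(1) filter down each column
B = filter(1, [1 -cor_b], B, [], 2); % AR(1) filter along each row
B = B - mean(B(:));                  % re-center
beta_s = sqrt(var_b)*B/std(B(:));    % rescale to the target variance
% as with gamma_t, verify empirically that the sampled field has the
% intended variance and neighbour correlation before relying on it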
Delta is supposed to be zero mean stationary white noise. Assuming it to be Gaussian with variance noisePower one would sample it using
delta_st = sqrt(noisePower)*randn( [ nRows nCols nT ] );