I'm new to Simulink and I'm using an Interpreted MATLAB Function block to create a Gaussian pulse generator.
This is the function:
function y=mono_gauss(t)
fs=20E9; %sample rate - 10 times the highest frequency
ts=1/fs; %sample period
t1=.5E-9; %pulse width (0.5 nanoseconds)
x=(t/t1).*(t/t1); %x = (t/t1)^2, i.e. t^2/t1^2
A=1;
y=(A*(t/t1)-ts).*exp(-x); %first derivative of the Gaussian pulse function
end
The problem is that the block's output generates only a single pulse, while my objective is to generate a train of pulses, just like the Pulse Generator block does.
Any solutions?
You're most likely better off designing your pulse in MATLAB, then using a Repeating Sequence block to play it back in Simulink.
For instance, in MATLAB
t = 0:0.01:1;
y = normpdf(t,0.5,0.05);
plot(t,y)
Then, within Simulink, enter t as the Time values and y as the Output values of the Repeating Sequence block.
I have also changed the step size of the model's solver to 0.01.
You'll need to play around with these parameters to get the exact curve you desire; a quick way to preview the resulting pulse train is sketched below.
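For instance, a quick way to preview (outside Simulink) the pulse train that a Repeating Sequence block with Time values t and Output values y would produce is to tile one period of the designed pulse in MATLAB; this is just a sanity-check sketch, not part of the Simulink model:
t = 0:0.01:1;                        % one period of the pulse
y = normpdf(t, 0.5, 0.05);           % the designed Gaussian pulse
nPeriods = 5;                        % number of repetitions to preview
yTrain = repmat(y, 1, nPeriods);     % tile the single pulse nPeriods times
tTrain = (0:numel(yTrain)-1) * 0.01; % matching time vector
plot(tTrain, yTrain)
xlabel('time (s)'), ylabel('amplitude')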
How can I make Gaussian noise with mean = 18 and variance = 0.1 in Simulink? I can't use the AWGN block since I'm not able to specify a mean value in it.
I want to generate the signal below, which is Gaussian noise with mean = 18 and variance = 0.1:
Construct the model as follows. From the Library Browser select the DSP System Toolbox, then choose the Random Source block.
Now, adjust the block parameters Source type, Mean and Variance:
Construct the model by adding a Scope block as follows:
After running the model for 10 seconds, you can see the generated Gaussian noise waveform by double-clicking the Scope block. You will notice the plot does not look like the image you posted; that is just how the default Scope plot renders in Simulink.
Now, to make sure it's the correct signal, send it to MATLAB using the To Workspace block and plot it there. Plotted in MATLAB, the noise waveform looks the same as your image.
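For reference, a minimal MATLAB-side sketch, assuming the To Workspace block keeps its default variable name simout and its Save format is set to Timeseries (depending on your MATLAB release, the variable lands directly in the base workspace or inside the single simulation output object, e.g. out.simout):
plot(simout.Time, simout.Data)   % timeseries logged by the To Workspace block
xlabel('time (s)'), ylabel('amplitude')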
The mean just says how much the noise is shifted, so if you take a constant signal of value 18 and add Gaussian noise with variance 0.1, you will get what you want.
This is assuming your noise is one-dimensional:
variance = 0.1;
std_deviation = sqrt(variance);
mu = 18; % desired mean (named mu to avoid shadowing the built-in mean function)
n = 1000; % number of samples
noise = std_deviation .* randn(n, 1) + mu;
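A quick numerical check (my addition) that the samples roughly match the requested statistics; with n = 1000 the estimates will fluctuate slightly around the targets:
fprintf('sample mean: %.3f (target 18)\n', mean(noise));
fprintf('sample variance: %.3f (target 0.1)\n', var(noise));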
I am currently working on a project for my Speech Processing course and have just finished making a time waveform plot as well as both wide/narrow band spectrograms for a spoken word in Spanish (aire).
The next part of the project is as follows:
Make a 3-D plot of each word signal, as a function of time, frequency and power spectral density. The analysis time step should be 20ms, and power density should be computed using a 75%-overlapped Hamming window and the FFT. Choose a viewing angle that best highlights the signal features as they change in time and frequency.
I was hoping that someone could offer me some guidance on how to begin this part. I have started by looking here under the Spectrogram and Instantaneous Frequency heading, but was unsure how to add the PSD to the script.
Thanks
I am going to give you an example. First, I generate a linear chirp signal as a test input:
Fs = 1000;
t = 0:1/Fs:2;
y = chirp(t,100,2,300,'linear');
Then I define the FFT length and the Hamming window.
nfft=128;
win=hamming(nfft);
Then I define the overlap length as 75% of nfft.
nOvl=nfft*0.75;
Then I perform the STFT using the spectrogram function.
[s,f,t,pxx] = spectrogram(y,win,nOvl,nfft,Fs,'psd');
Here 'y' is the time signal, 'win' is the Hamming window defined above, 'nOvl' is the number of overlapped samples, 'nfft' is the FFT length, 'Fs' is the sampling frequency, and the 'psd' option makes the result, pxx, a power spectral density.
Finally, I plot pxx as a waterfall graph.
waterfall(f,t,pxx')
xlabel('frequency(Hz)')
ylabel('time(sec)')
zlabel('PSD')
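The assignment also asks you to choose a viewing angle that best highlights the signal features; you can set it with view, where the azimuth/elevation values below are just a starting point to tweak:
view(-40, 60) % azimuth and elevation in degrees; or rotate the figure interactively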
The FFT length corresponding to 20 ms depends on the sampling frequency of your signal; see the sketch after the edit note below.
EDIT: When plotting the waterfall graph, I transposed pxx to swap the t and f axes.
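To meet the 20 ms analysis step required in your assignment, derive the window length from your own sampling rate. A hedged sketch, assuming your recorded word is sampled at Fs = 16 kHz (the placeholder x stands in for your actual word signal):
Fs   = 16000;             % sampling frequency of the speech recording (assumed)
x    = randn(1, Fs);      % placeholder for your word signal 'aire'
nfft = round(0.02 * Fs);  % 20 ms analysis window -> 320 samples at 16 kHz
win  = hamming(nfft);     % Hamming window
nOvl = round(0.75 * nfft);% 75% overlap
[s, f, t, pxx] = spectrogram(x, win, nOvl, nfft, Fs, 'psd');
waterfall(f, t, pxx')
xlabel('frequency (Hz)'), ylabel('time (sec)'), zlabel('PSD')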
I would like to perform conditional simulations for Gaussian process (GP) models in Matlab. I have found a tutorial by Martin Kolář (http://mrmartin.net/?p=223).
sigma_f = 1.1251; %parameter of the squared exponential kernel
l = 0.90441; %parameter of the squared exponential kernel
kernel_function = @(x,x2) sigma_f^2*exp((x-x2)^2/(-2*l^2));
%This is one of many popular kernel functions, the squared exponential
%kernel. It favors smooth functions. (Here, it is defined as an anonymous
%function handle.)
% we can also define an error function, which models the observation noise
sigma_n = 0.1; %known noise on observed data
error_function = @(x,x2) sigma_n^2*(x==x2);
%this is just iid Gaussian noise with mean 0 and variance sigma_n^2
%kernel functions can be added together. Here, we add the error kernel to
%the squared exponential kernel
k = @(x,x2) kernel_function(x,x2)+error_function(x,x2);
X_o = [-1.5 -1 -0.75 -0.4 -0.3 0]';
Y_o = [-1.6 -1.3 -0.5 0 0.3 0.6]';
prediction_x=-2:0.01:1;
K = zeros(length(X_o));
for i=1:length(X_o)
for j=1:length(X_o)
K(i,j)=k(X_o(i),X_o(j));
end
end
%% Demo #5.2 Sample from the Gaussian Process posterior
clearvars -except k prediction_x K X_o Y_o
%We can also sample from this posterior, the same way as we sampled before:
K_ss=zeros(length(prediction_x),length(prediction_x));
for i=1:length(prediction_x)
for j=i:length(prediction_x) %We only calculate the top half of the matrix. This is an optional speedup trick
K_ss(i,j)=k(prediction_x(i),prediction_x(j));
end
end
K_ss=K_ss+triu(K_ss,1)'; % We use the upper half of the matrix and copy it to the lower half
K_s=zeros(length(prediction_x),length(X_o));
for i=1:length(prediction_x)
for j=1:length(X_o)
K_s(i,j)=k(prediction_x(i),X_o(j));
end
end
[V,D]=eig(K_ss-K_s/K*K_s');
A=real(V*(D.^(1/2)));
for i=1:7
standard_random_vector = randn(length(A),1);
gaussian_process_sample(:,i) = A * standard_random_vector+K_s/K*Y_o;
end
hold on
plot(prediction_x,real(gaussian_process_sample))
set(plot(X_o,Y_o,'r.'),'MarkerSize',20)
The tutorial generates the conditional simulations using a direct simulation method based on covariance matrix decomposition. It is my understanding that there are several methods of generating conditional simulations that may be better when the number of simulation points is large such as conditioning by Kriging using a local neighborhood. I have found information regarding several methods in J.-P. Chilès and P. Delfiner, “Chapter 7 - Conditional Simulations,” in Geostatistics: Modeling Spatial Uncertainty, Second Edition, John Wiley & Sons, Inc., 2012, pp. 478–628.
Is there an existing Matlab toolbox that can be used for conditional simulations? I am aware of DACE, GPML, and mGstat (http://mgstat.sourceforge.net/). I believe only mGstat offers the capability to perform conditional simulations. However, mGstat also seems to be limited to only 3D models and I am interested in higher dimensional models.
Can anybody offer any advice on getting started performing conditional simulations with an existing toolbox such as GPML?
===================================================================
EDIT
I have found a few more Matlab toolboxes: STK, ScalaGauss, ooDACE
It appears STK is capable of conditional simulations using covariance matrix decomposition. However, it is limited to a moderate number (maybe a few thousand?) of simulation points due to the Cholesky factorization.
I used the STK toolbox and I recommend it for others:
http://kriging.sourceforge.net/htmldoc/
I found that if you need conditional simulations at a large number of points, you might consider generating a conditional simulation at the points of a large design of experiments (DoE) and then simply relying on the mean prediction conditional on that DoE; a plain MATLAB sketch of this idea follows.
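To make that idea concrete, here is a self-contained sketch in plain MATLAB (not using STK), reusing the squared exponential kernel and the observations from the question; X_doe, x_dense and the jitter value are my own illustrative choices:
% Hedged sketch of conditioning via a moderate DoE, then extending by kriging.
sigma_f = 1.1251; l = 0.90441; sigma_n = 0.1;
k = @(x, x2) sigma_f^2 * exp(-(x - x2).^2 / (2 * l^2)) + sigma_n^2 * (x == x2);
cov_mat = @(A, B) k(repmat(A, 1, numel(B)), repmat(B', numel(A), 1));
X_o = [-1.5 -1 -0.75 -0.4 -0.3 0]';   % observations (from the question)
Y_o = [-1.6 -1.3 -0.5 0 0.3 0.6]';
X_doe = linspace(-2, 1, 30)';         % moderate design of experiments
x_dense = (-2:0.001:1)';              % large number of prediction points
% 1) One conditional simulation at the DoE points (covariance decomposition)
K    = cov_mat(X_o, X_o);
K_s  = cov_mat(X_doe, X_o);
K_ss = cov_mat(X_doe, X_doe);
mu   = K_s / K * Y_o;                 % conditional mean at the DoE
C    = K_ss - K_s / K * K_s';         % conditional covariance at the DoE
A    = chol(C + 1e-10 * eye(numel(X_doe)), 'lower'); % small jitter for stability
Y_doe_sim = mu + A * randn(numel(X_doe), 1);
% 2) Extend to the dense grid: kriging mean conditional on data + simulated DoE
X_all = [X_o; X_doe];  Y_all = [Y_o; Y_doe_sim];
y_dense = cov_mat(x_dense, X_all) / cov_mat(X_all, X_all) * Y_all;
plot(x_dense, y_dense); hold on
plot(X_o, Y_o, 'r.', 'MarkerSize', 20)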
I am trying to simulate a distribution for the parameter theta, f = theta^(z_f+n+alpha-1)*(1-theta)^(n+1-z_f-k+beta-1), where every parameter except theta is known. I am using the Metropolis-Hastings algorithm to do the MCMC simulation. My proposal density is a beta distribution with parameters alpha and beta. My code for the simulation is as follows; I am using the built-in MATLAB function mhsample() for this purpose. How do I know if my code is working properly?
clear
clc
alpha=2;
beta=2;
z_f=1;
n=6;
k=5;
nsamples = 3000;
pdf = @(x) x^(z_f+n+alpha-1)*(1-x)^(n+1-z_f-k+beta-1); % here x acts as theta
proppdf = @(x,y) betapdf(x, alpha, beta);
proprnd = @(x) betarnd(alpha,beta,1);
smpl = mhsample(0.1,nsamples,'pdf',pdf,'proprnd',proprnd,'proppdf',proppdf);
I'm unsure of what you're asking when you say "how do I know if my code is working properly" -- I'm assuming it executes? But for a visual comparison of your function vs. the simulation, you can plot both the PDF and the data you got from mhsample as follows:
% I'm assuming you ran the code above so that smpl and pdf are both defined...
fplot(pdf,[0 1]); % fplot takes your function handle and plots it over the x-limits [0,1]
figure % new figure
hist(smpl,30); % 30 here is the number of bins; change it to your preference
Figure below:
the histogram of smpl's output on the left, i.e., your simulation;
the function pdf bounded in [0,1] on the right, for comparison with your simulation.
This was just a wild guess because those two figures resemble each other and are also beta-distribution-esque.
If you want a more complex analysis than that, I'm afraid I'm not yet proficient in MCMC :)
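If you want something slightly more quantitative (my suggestion, assuming the variables from your script are still in the workspace): your target density is, up to a normalizing constant, a Beta(z_f+n+alpha, n+1-z_f-k+beta) = Beta(9, 3) density, so the normalized histogram of the draws should sit on top of betapdf:
a = z_f + n + alpha;            % = 9
b = n + 1 - z_f - k + beta;     % = 3
figure
histogram(smpl, 30, 'Normalization', 'pdf') % normalized histogram of the MCMC draws
hold on
xx = linspace(0, 1, 200);
plot(xx, betapdf(xx, a, b), 'LineWidth', 2) % exact normalized target density
legend('mhsample draws', 'Beta(9,3) pdf')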
The equation has the following form:
x'' + w^2 * x = n
w = 1
and n is Gaussian noise with mean = 0 and standard deviation = 1.
Without the Gaussian noise I can solve the equation using ode45 in MATLAB. The problem is: how can I deal with this equation when the Gaussian noise is taken into consideration?
It really depends on how the noise is added to the system. If you want to add the noise arbitrarily, so that a fresh random value is added to the equation every time the derivative function is called:
function dydt = solve(t,y)
dydt = [y(2); -y(1)+randn(1)]; % x1' = x2, x2' = -w^2*x1 + noise, with w = 1
then call
[t,y] = ode45(@solve, [0 10], [1 -1]);
The problem here is that if the noise is large compared to the signal, more solver iterations will be needed, and thus more time.
On the other hand, if the noise is predetermined, you can either sample and hold it, or incorporate a first-order hold, and then add that to the system; a minimal sketch of this approach follows.
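A minimal sketch of that predetermined-noise idea (the grid spacing and hold type are my own illustrative choices): generate the noise once on a fixed time grid, then hold or interpolate it inside the derivative function, so ode45 sees a deterministic, reproducible forcing term:
tGrid = 0:0.01:10;              % fixed time grid for the noise realization
nGrid = randn(size(tGrid));     % Gaussian noise, mean 0, std 1
noiseFun = @(t) interp1(tGrid, nGrid, t, 'previous'); % zero-order (sample-and-)hold; use 'linear' for a first-order hold
odeFun = @(t, y) [y(2); -y(1) + noiseFun(t)];          % x'' = -w^2*x + n(t), w = 1
[t, y] = ode45(odeFun, [0 10], [1 -1]);
plot(t, y(:,1)), xlabel('t'), ylabel('x')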