Zmap7 on MATLAB R2018a

I cannot get the Modified Omori law graph in Zmap7. I started from the cumulative number graph. From the Timeplot window, I clicked on "p-value estimation" and then on "Estimate p", which is highlighted. The latter gives another cumulative number graph instead of the expected Modified Omori law graph.


Are MATLAB's lsim outputs derivatives or the state vector?

I'm trying to do a simulation of a 2-body mass-spring-damper. I've set up a state-space model that I'm pretty confident in and set an input of a displacement and velocity at the base in just one degree of freedom. Upon getting my outputs, I expected that the output vector would just be the state vector at each time step. However, when plotting the output vector corresponding to displacement for each mass in the vertical direction (the input direction), it looked much more like a velocity (0 at the extrema of the input). The plots are shown below:
When I integrated the top 2 plots, I got the following:
Now, I obviously can just accept the outputs as they are and assume I am right in my understanding. But, I want to be sure. From the documentation page:
lsim(___) also returns the time vector t used for simulation and the
state trajectories x (for state-space models only)
I'm just hoping to find out whether or not I am correct that the output matrix columns correspond to the history of the state derivatives, before I base an analysis on a bad assumption.
I figured it out. My B matrix expected the inputs ordered as [derivative, state, ...], but I had supplied them in the opposite order.
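For reference, when you ask lsim for its extra outputs, y is C*x + D*u evaluated at each time step and x is the state trajectory itself, not its derivative. A minimal sketch with an illustrative single-mass system (the values and the input signal are made up, not taken from the question):

% single mass-spring-damper, states: [displacement; velocity] (illustrative values)
m = 1; c = 0.5; k = 10;
A = [0 1; -k/m -c/m];
B = [0; 1/m];                      % force input
C = eye(2); D = zeros(2,1);        % output both states directly
sys = ss(A, B, C, D);

t = 0:0.01:10;
u = sin(2*pi*0.5*t)';              % arbitrary input history
[y, tOut, x] = lsim(sys, u, t);    % x is the state trajectory, y = C*x + D*u

max(abs(y(:) - x(:)))              % 0 here, since C = eye(2) and D = 0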

The result doesn't match what I expect when I use the log-normal distribution in MATLAB

I'm reading a paper. The paper presents a figure showing the CDF of building heights, and it also gives details about this figure:
Building height statistics: The present model uses the statistics of
building heights in typical built-up areas as input data. A suitable
form was sought by comparing with geographical data for the city of
Guildford, United Kingdom. The probability density function that was
selected to fit the data was the log-normal distribution with unknown
parameters: mean value p and standard deviation t. As can be noted
from Fig. 3, it was found to be a good fit to the geographical data
values with parameters p = 7.3m, t= 0.26.
This says the mean value is 7.3 and the standard deviation is 0.26, right? However, when I try them in MATLAB with the following code:
x=0:0.01:20;
meanValue = 7.3;
standardDeviation = 0.26;
y1 = logncdf(x,meanValue,standardDeviation);
plot(x,y1);
the result is different from Figure 3. I re-read the paper to make sure the parameters are correct, and checked the MATLAB documentation on how to use this function. Everything seems all right except the simulation result. Please help me fix it. Thanks!
As mentioned in the comments, the parameters mu and sigma are the mean and standard deviation of the associated normal distribution, not of the log-normal distribution itself. The details, especially the connection between the two, are explained in the Wikipedia article.
To calculate mu and sigma from the mean and variance, the formulas are given in the Wikipedia article, or here in MATLAB syntax:
m = 7.3;
t = 0.26;
v = t.^2;
%A lognormal distribution with mean m and variance v has parameters
mu = log((m^2)/sqrt(v+m^2));
sigma = sqrt(log(v/(m^2)+1));
%finally your code:
x=0:0.01:20;
y1 = logncdf(x,mu,sigma);
plot(x,y1);
This is much closer to the graph in your question, but the graph in your question seems to be the CDF for a much higher standard deviation. Visually guessing the parameters from your plot, I would say it is roughly t = 5.
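As a quick sanity check on the conversion (a sketch assuming the Statistics and Machine Learning Toolbox is available), lognstat maps mu and sigma back to the mean and variance of the log-normal distribution:

[M, V] = lognstat(mu, sigma)   % should give M close to 7.3 and V close to 0.26^2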

My Fourier series doesn't fit the graph

I'm trying to plot a Fourier series that should fit the original graph (which is correct), but I don't know what's wrong. I have also double-checked the Fourier approximation.
The original graph is generated with:
t=-pi:0.01:0;
x=ones(size(t));
plot(t,x)
axis([-3*pi 3*pi -1 4])
hold on
t=0:0.01:pi;
y=cos(t);
plot(t,y)
whereas the Fourier series is generated with:
t=-pi:0.01:pi;
f=1/2;
for n=1:5
    costerm=0;
    if n/2 == round(n/2)
        sinterm=((-2*n)/(pi*(1-n^2)))*sin(2*n*t);
    else
        sinterm=(-2/(pi*n))*sin(2*n*t);
    end
    f=f+sinterm+costerm;
end
plot(t,f)
The graph looks like this:
Can someone tell me why this isn't working?
The first thing that can be noticed is that the generated series in your plot runs for two periods over the support interval [-pi, pi]. This points to an incorrect constant in your sin(2*n*t) argument, which should instead be sin(n*t).
Also, as a general rule:
odd functions have only sin terms
even functions have only cos terms
otherwise, the Fourier series contains a mixture of sin and cos terms.
In your case the function is neither even nor odd, so you should expect both sin and cos terms to be present. However, you are only computing the sinterm and leaving costerm=0. More specifically, while the cosine series coefficients evaluate to 0 for all n>1, you are in fact missing the term for n=1, which is 0.5*cos(t).
With these corrections you should get
f=1/2 + 0.5*cos(t);
for n=1:5
    if 0==mod(n,2)
        sinterm=((-2*n)/(pi*(1-n^2)))*sin(n*t);
    else
        sinterm=(-2/(pi*n))*sin(n*t);
    end
    f=f+sinterm;
end
which should give you the following plot (blue line being the original function, and the red line being the Fourier series expansion):
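For completeness, a self-contained sketch that overlays the original piecewise function and the corrected five-term series (the colours are only chosen to match the description above):

% original function: 1 on [-pi, 0), cos(t) on [0, pi]
t = -pi:0.01:pi;
x = [ones(1, nnz(t < 0)), cos(t(t >= 0))];

% corrected Fourier series, truncated at n = 5
f = 1/2 + 0.5*cos(t);
for n = 1:5
    if mod(n, 2) == 0
        sinterm = ((-2*n)/(pi*(1-n^2)))*sin(n*t);
    else
        sinterm = (-2/(pi*n))*sin(n*t);
    end
    f = f + sinterm;
end

plot(t, x, 'b', t, f, 'r')
axis([-pi pi -1 4])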

Finding Probability of Gaussian Distribution Using Matlab

The original question was to model lightbulbs, which are used 24/7, where one bulb usually lasts 25 days. A box contains 12 bulbs. What is the probability that the box will last longer than a year?
I had to use MATLAB to model a Gaussian curve based on an exponential variable.
The code below generates a Gaussian model with mean = 300 and std= sqrt(12)*25.
The reason I had to use so many different variables and add them up is that I was supposed to be demonstrating the central limit theorem. The Gaussian curve represents the probability of a box of bulbs lasting for a given number of days, where 300 is the average number of days a box will last.
I am having trouble using the Gaussian I generated to find the probability for days > 365. The statement 1-normcdf(365,300,sqrt(12)*25) was an attempt to figure out the expected value for that probability, which came out as 0.2265. Any tips on how to find the probability for days > 365 based on the Gaussian I generated would be greatly appreciated.
Thank you!!!
clear all
samp_num=10000000;
param=1/25;                                % exponential rate: mean bulb lifetime of 25 days
% 12 independent exponential lifetimes, drawn by inverse-transform sampling
a=-log(rand(1,samp_num))/param;
b=-log(rand(1,samp_num))/param;
c=-log(rand(1,samp_num))/param;
d=-log(rand(1,samp_num))/param;
e=-log(rand(1,samp_num))/param;
f=-log(rand(1,samp_num))/param;
g=-log(rand(1,samp_num))/param;
h=-log(rand(1,samp_num))/param;
i=-log(rand(1,samp_num))/param;
j=-log(rand(1,samp_num))/param;
k=-log(rand(1,samp_num))/param;
l=-log(rand(1,samp_num))/param;
x=a+b+c+d+e+f+g+h+i+j+k+l;                 % lifetime of a whole box (sum of 12 bulbs)
mean_x=mean(x);
std_x=std(x);
bin_sizex=.01*10/param;
binsx=[0:bin_sizex:800];
u=hist(x,binsx);
u1=u/samp_num;                             % normalise counts to relative frequencies
1-normcdf(365,300, sqrt(12)*25)            % normal-approximation estimate of P(box > 365 days)
bar(binsx,u1)
legend(['mean=',num2str(mean_x),'std=',num2str(std_x)]);
[f, y] = ecdf(x) will create an empirical CDF for the data in x. You can then read off the value where it first crosses 365 to get your answer.
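A minimal sketch of that suggestion, reusing the x vector simulated above (ecdf is in the Statistics and Machine Learning Toolbox):

[f, y] = ecdf(x);              % empirical CDF of the simulated box lifetimes
idx = find(y >= 365, 1);       % first evaluation point at or beyond 365 days
p_over_year = 1 - f(idx)       % estimate of P(box lasts longer than a year)
% equivalently, without ecdf: p_over_year = mean(x > 365)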
Generate N replicates of x, where N should be several thousand or tens of thousands. Then p-hat = count(x > 365) / N, which has a standard error of sqrt[p-hat * (1 - p-hat) / N]. The larger the number of replications, the smaller the margin of error of the estimate.
When I did this in JMP with N=10,000 I ended up with [0.2039, 0.2199] as a 95% CI for the true proportion of the time that a box of bulbs lasts more than a year. The discrepancy with your value of 0.2265, along with a histogram of the 10,000 outcomes, indicates that the actual distribution is still somewhat skewed. In other words, using a CLT approximation for the sum of 12 exponentials is going to give answers that are slightly off.
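A sketch of that estimate in MATLAB, reusing the x already simulated above. Since a box lifetime is the sum of 12 independent exponential lifetimes with mean 25, it follows a Gamma distribution with shape 12 and scale 25, so gamcdf gives a closed-form value to compare against:

N = numel(x);
p_hat = mean(x > 365);                 % Monte Carlo estimate of P(box > 365 days)
se = sqrt(p_hat*(1 - p_hat)/N);        % standard error of the estimated proportion
ci95 = p_hat + [-1, 1]*1.96*se         % approximate 95% confidence interval

p_exact = 1 - gamcdf(365, 12, 25)      % exact probability, no normal approximation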

Understanding the Pearson Correlation Coefficient

As part of the calculations to generate a Pearson Correlation Coefficient, the following computation is performed:
In the second formula, p_a,i is the predicted rating that user a would give item i, n is the number of similar users being compared to, and r_u,i is the rating of item i by user u.
What value will be used if user u has not rated this item? Did I misunderstand anything here?
According to the link, earlier calculations in step 1 of the algorithm are over a set of items, indexed 1 to m, where m is the total number of items in common.
Step 3 of the algorithm specifies: "To find a rating prediction for a particular user for a particular item, first select a number of users with the highest, weighted similarity scores with respect to the current user that have rated on the item in question."
These calculations are performed only on the intersection of the two users' sets of rated items. No calculation is performed when a user has not rated an item.
It only makes sense to calculate results if both users have rated a movie. Linear regression can be visualised as a method of finding a straight line through a two-dimensional graph where one variable is plotted on the X axis and the other on the Y axis. Each combination of ratings is represented as a point [u1_rating, u2_rating] on a Euclidean plane. Since you cannot plot points which only have one dimension to them, you will have to discard those cases.
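To illustrate the intersection-only point, a minimal MATLAB sketch with two hypothetical users' rating vectors (NaN marks an unrated item; corr is in the Statistics and Machine Learning Toolbox):

rA = [5 3 NaN 4 2 NaN];                 % user a's ratings, NaN = not rated (made-up data)
rU = [4 NaN 2 5 1 3];                   % user u's ratings
common = ~isnan(rA) & ~isnan(rU);       % items both users have rated
w_au = corr(rA(common)', rU(common)')   % Pearson similarity computed on the intersection only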