Invert Markov Chain - queue

A Markov chain's sequence of states is fully characterized by its birth rate l(t) (for lambda), its death rate m(t) (for mu), and an initial probability distribution P0 over the states. There is also a maximum number of states c (for capacity).
Suppose the sequence of states y(t) produced by the birth-death system described above is available as measurements, and that m(t), P0 and c are known.
How can I invert the system and calculate the l(t) that creates y(t), given m(t), P0 and c?
What if m(t) or P0 is not available anymore? Is it still possible?
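The question gives no code, so as a point of reference, here is a minimal sketch of the forward model it describes (explicit Euler integration of the master equation; the rate functions, P0, c and the output below are placeholder assumptions). Inverting the system means undoing this map from l(t) to the measured output.
c = 5;                        % capacity: states 0..c
dt = 1e-3; T = 10;
P = [1; zeros(c,1)];          % P0: all mass in state 0 (assumption)
l = @(t) 1.0 + 0.5*sin(t);    % hypothetical birth rate l(t)
m = @(t) 1.2 + 0*t;           % hypothetical death rate m(t)
for k = 1:round(T/dt)
    t = k*dt;
    % tridiagonal generator: births move state j -> j+1, deaths j -> j-1
    Q = diag(l(t)*ones(c,1),-1) + diag(m(t)*ones(c,1),1);
    Q = Q - diag(sum(Q,1));   % columns sum to zero
    P = P + dt*(Q*P);         % dP/dt = Q*P
end
y = (0:c)*P                   % e.g. measured output: mean state at time T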

Related

MATLAB: How to compute the similarity of two signals and get the correct consistency or coherence metric

I was wondering about the consistency metric. Generally, it lets us gauge the parity or similarity between two signals, right? If so, if the score is high (from 0.5 to 1), does it mean there is strong similarity between the signals? If it is lower (0.1-0.43), does that indicate poor coherence between the signals (poor similarity, i.e. the signals are probably different)? And if the metric is < 0, does that confirm the signals are totally different? I'm getting negative numbers, so is this hypothesis possible?
Can someone give me a clear understanding of the consistency metric of a signal? Here is my small code and figure. Thanks in advance.
s1 = signal3;
s2 = signal4;
if s1 ~= s2
    C1 = xcorr(s1);                  % auto-correlation of s1
    C2 = xcorr(s2);                  % auto-correlation of s2
    signal_mix = C1.*C2;             % mixing vector
    signal_mix1 = signal_mix;
else
    s3 = s2;                         % signals are identical
    signal_mix = s2;
end
for i = 1:length(signal_mix1)
    signal_mix1(i) = min(C1(i),C2(i))/max(C1(i),C2(i)); % consistency score
end
signal_mix2 = sum(signal_mix1)       % overall score
Depending on your use case you might want to consider a dynamic time warping (DTW) distance as a similarity metric (Matlab has a built-in function for that). One problem with using correlation as a metric is that it always compares the same time step of the two signals. So two identical signals, where one is time-delayed, could lead to low correlation. The DTW distance addresses this by also comparing values at adjacent time steps.
The downside of the DTW distance is that the distance itself can't be interpreted on its own, only relative to other distances. So you can tell that two signals A & B with a distance of 150 are more similar than A & C with a distance of 250, but the distance of 150 on its own doesn't tell you a lot.
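As a rough illustration of both points (made-up signals; dtw is in the Signal Processing Toolbox, corr in the Statistics and Machine Learning Toolbox): a pure time shift destroys sample-wise correlation but barely affects the DTW distance, and DTW distances are only meaningful relative to each other.
t = linspace(0, 2*pi, 200);
A = sin(t);
B = sin(t - 1.5);    % same shape as A, just time-delayed
C = square(t);       % a genuinely different shape
corr(A', B')         % low (~0.07) although the shapes are identical
dtw(A, B)            % small distance: the shapes align after warping
dtw(A, C)            % larger distance: A is more similar to B than to C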
First of all, you could use the xcorr function to calculate the cross-correlation between two signals.
From the Matlab help:
r = xcorr(x,y) returns the cross-correlation of two discrete-time
sequences. Cross-correlation measures the similarity between a vector
x and shifted (lagged) copies of a vector y as a function of the lag.
If x and y have different lengths, the function appends zeros to the
end of the shorter vector so it has the same length as the other.
Additionally you could use xcov:
xcov computes the mean of its inputs, subtracts the mean, and then
calls xcorr.
The result of xcov can be interpreted as an estimate of the covariance
between two random sequences or as the deterministic covariance
between two deterministic signals.
In the case of your example, you are using xcorr with one signal, so it computes the auto-correlation between the signal and its own lagged copies.
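For instance (a made-up pulse), when you call xcorr with two signals, the location of the peak tells you the lag that best aligns them:
x = [zeros(1,20) 1 2 3 2 1 zeros(1,20)];
y = circshift(x, 5);   % the same pulse, delayed by 5 samples
[r, lags] = xcorr(x, y);
[~, k] = max(r);
lags(k)                % -5: y lags x by 5 samples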
Update:
Based on the comment, it seems you need linear correlation, which can be calculated with the corr function:
p = corr(x,y)
The value of p is 1 when x and y behave exactly like each other, and -1 when x and y behave exactly opposite to each other.
When p is 0, there is no linear correlation between the two signals.
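A quick sanity check of these three cases (made-up vectors; corr is in the Statistics and Machine Learning Toolbox):
x = randn(1000,1);
corr(x, 2*x + 1)         % exactly +1: behaves exactly like x
corr(x, -x)              % exactly -1: behaves exactly opposite
corr(x, randn(1000,1))   % close to 0: no linear correlation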

Equation system with derivatives

I have to solve the following system:
x'(t) = -D(t) x(t) + mu(s(t), p(t)) x(t)
s'(t) = D(t) (s_in - s(t)) - Y_xs mu(s(t), p(t)) x(t)
p'(t) = -D(t) p(t) + (a mu(s(t), p(t)) + b) x(t)
where
mu(s, p) = mu_max (1 - p/p_m) s / (k_m + s + s^2/k_i)
where Y_xs, a, b, mu_max, p_m, k_m, k_i are constants. Then I have to linearize the system and find the balance points of this system. Any suggestion how to do it with Matlab or Mathematica?
Matlab can help with some steps, but there might be a few where you do have to write down some equations yourself.
To start with a simple side note: Matlab's ode45 function allows you to simulate any system of the form dx/dt = f(x,u), regardless of how non-linear or time-variant it might be.
To linearize such a system, you need to derive the Jacobian matrix and substitute the linearization point into it. The linearization point can be any operating point; it does not strictly need to be a balance point (a point where all state derivatives equal 0). However, a balance point is desirable, because then the linearization describes the behaviour around an equilibrium.
So in MATLAB:
create symbolic variables for the states and inputs (the states x(t), s(t), p(t) and the input D(t))
create symbolic state equations dx/dt = f(x,u) and an output equation y = g(x,u)
derive symbolic state-space matrices A, B, C, D using the "jacobian" function
substitute the linearization point into these symbolic state-space matrices using "subs"
use double(symbolic matrix) to retrieve a numeric matrix
Depending on the non-linear complexity and the chosen linearization point, the linearized system might only stay within acceptable bounds of the actual system in a very tight region, so be aware of that.
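A minimal sketch of these steps with the Symbolic Math Toolbox (all numeric values below, including the linearization point, are placeholders, not taken from the question):
syms x s p D s_in Yxs a b mu_max p_m k_m k_i real
mu = mu_max*(1 - p/p_m)*s/(k_m + s + s^2/k_i);
f = [-D*x + mu*x;              % x'(t)
      D*(s_in - s) - Yxs*mu*x; % s'(t)
     -D*p + (a*mu + b)*x];     % p'(t)
A = jacobian(f, [x; s; p]);    % symbolic state matrix
B = jacobian(f, D);            % symbolic input matrix, with D(t) as input
% substitute a hypothetical balance point and parameter values
vars = {x, s, p, D, s_in, Yxs, a, b, mu_max, p_m, k_m, k_i};
vals = {1.0, 0.5, 0.2, 0.3, 2.0, 0.5, 0.1, 0.05, 1.0, 1.0, 0.1, 10};
A_num = double(subs(A, vars, vals)) % numeric state matrix at that point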

How does covariance matrix (P) in Kalman filter get updated in relation to measurements and state estimate?

I am in the midst of implementing a Kalman filter based AHRS in C++. There's something rather strange to me in the equations of the filter.
I can't find the part where the P (covariance) matrix is actually updated to represent uncertainty of predictions.
During the "predict" step P estimate is calculated from its previous value, A and Q. From what I understand A (system matrix) and Q (covariance of noise) are constant. Then during "Correct" P is calculated from K, H and predicted P. H (observation matrix) is constant, so the only variable that affects P is K (Kalman gain). But K is calculated from predicted P, H and R (observation noise) that are either constants or the P itself. So where is the part of the equations that makes P relate to x? To me it seems like P is recursively looping here depending only on the constants and initial value of P. This doesn't make any sense. What am I missing?
You are not missing anything.
It can come as a surprise to realise that, indeed, the state error covariance matrix (P) in a linear Kalman filter does not depend on the data (z). One way to lessen the surprise is to note what the covariance is saying: it is how uncertain you should be in the estimated state, given that the models you are using (effectively A, Q and H, R) are accurate. It is not saying: this is the uncertainty. By judicious tweaking of Q and R you could change P arbitrarily. In particular you should not interpret P as a 'quality' figure, but rather look at the observation residuals. You could, for example, make P smaller by reducing R, but then the residuals would be larger compared with their computed standard deviations.
When the observations come in at a constant rate, and are always the same set of observations, P will tend to a steady state that could, in principle, be computed ahead of time.
However, there is no difficulty in applying the Kalman filter when you have varying times between observations and varying sets of observations at each time, for example if you have various sensor systems with different sampling periods. In this case you will see more variation in P, though again, in principle, this could be computed ahead of time.
Further, the Kalman filter can be extended (in various ways, e.g. the extended Kalman filter and the unscented Kalman filter) to handle non-linear dynamics and non-linear observations. In this case, because the transition matrix (A) and the observation model matrix (H) have a state dependency, so too will P.
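A scalar sketch of this point (made-up A, Q, H, R): the covariance recursion below never touches a measurement, and P settles to a steady state on its own.
A = 1; Q = 0.01; H = 1; R = 0.1;  % hypothetical constant models
P = 1;                            % initial state error covariance
for k = 1:50
    P_pred = A*P*A' + Q;                 % predict
    K = P_pred*H'/(H*P_pred*H' + R);     % Kalman gain
    P = (1 - K*H)*P_pred;                % correct
end
P   % steady-state covariance, computed without any data z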

Is there a Matlab function for calculating std of a binomial distribution?

I have a binary vector V, in which each entry describes success (1) or failure (0) in the relevant trial out of a whole session.
(the length of the vector denotes the number of trials in the session).
I can easily calculate the success rate of the session (by taking the mean of the vector i.e. (sum(V)/length(V))).
However I also need to know the variance or std of each session.
In order to calculate that, is it OK to use the Matlab std function (i.e. to take std(V)/length(V))?
Or, should I use something which is specifically suited for the binomial distribution?
Is there a Matlab std (or variance) function which is specific for a "success/failure" distribution?
Thanks
If you satisfy the assumptions of the Binomial distribution,
a fixed number of n independent Bernoulli trials,
each with constant success probability p,
then I'm not sure that is necessary, since the parameters n and p are available from your data.
Note that we model number of successes (in n trials) as a random variable distributed with the Binomial(n,p) distribution.
n = length(V);
p = mean(V); % equivalently, sum(V)/length(V)
% the mean is the maximum likelihood estimator (MLE) for p
% note: need large n or replication to get true p
Then the standard deviation of the number of successes in n independent Bernoulli trials with constant success probability p is just sqrt(n*p*(1-p)).
Of course you can assess this from your data if you have multiple samples. Note this is different from std(V). In your data format, it would require having multiple vectors V1, V2, V3, etc. (replication); the sample standard deviation of the number of successes would then be obtained from the following.
% Given V1, V2, V3 sets of Bernoulli trials
std([sum(V1) sum(V2) sum(V3)])
If you already know your parameters: n, p
You can obtain it easily enough.
n = 10;
p = 0.65;
pd = makedist('Binomial','N',n,'p',p)
std(pd) % 1.5083
or
sqrt(n*p*(1-p)) % 1.5083
as discussed earlier.
Does the standard deviation increase with n?
The OP has asked:
Something is bothering me... if std = sqrt(n*p*(1-p)), then it increases with n. Shouldn't the std decrease when n increases?
Confirmation & Derivation:
Definitions:
Then we know that
Then just from definitions of expectation and variance we can show the variance (similarly for standard deviation if you add the square root) increases with n.
Since the square root is a non-decreasing function, we know the same relationship holds for the standard deviation.
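A quick empirical confirmation (simulated sessions, made-up p) that the standard deviation of the success count indeed grows like sqrt(n*p*(1-p)):
rng default
p = 0.65;
for n = [10 100 1000]
    counts = sum(rand(n, 1e5) < p, 1);  % success counts of 1e5 sessions
    fprintf('n = %4d: empirical std %.3f, sqrt(n*p*(1-p)) = %.3f\n', ...
        n, std(counts), sqrt(n*p*(1-p)));
end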

Approximate continuous probability distribution in Matlab

Suppose I have a continuous probability distribution, e.g., Normal, on a support A. Suppose that there is a Matlab code that allows me to draw random numbers from such a distribution, e.g., this.
I want to build a Matlab code to "approximate" this continuous probability distribution with a probability mass function spanning over r points.
This means that I want to write a Matlab code to:
(1) Select r points from A. Let us call these points a1,a2,...,ar. These points will constitute the new discretised support.
(2) Construct a probability mass function over a1,a2,...,ar. This probability mass function should "well" approximate the original continuous probability distribution.
Could you also provide an example? This is a similar question asked for Julia.
Here some of my thoughts. Suppose that the continuous probability distribution of interest is one-dimensional. One way to go could be:
(1) Draw 10^6 random numbers from the continuous probability distribution of interest and store them in a column vector D.
(2) Suppose that r=10. Compute the 10-th, 20-th,..., 90-th quantiles of D. Find the median point falling in each of the 10 bins obtained. Call these median points a1,...,ar.
How can I construct the probability mass function from here?
Also, how can I generalise this procedure to more than one dimension?
Update using histcounts: I thought about using histcounts. Do you think it is a valid option? For many dimensions I can use this.
clear
rng default
%(1) Draw P random numbers for standard normal distribution
P=10^6;
X = randn(P,1);
%(2) Apply histcounts
[N,edges] = histcounts(X);
%(3) Construct the new discrete random variable
%(3.1) The support of the discrete random variable is the collection of the mean values of each bin
supp=zeros(size(N,2),1);
for j=2:size(N,2)+1
supp(j-1)=(edges(j)-edges(j-1))/2+edges(j-1);
end
%(3.2) The probability mass function of the discrete random variable is the
%number of X within each bin divided by P
pmass=N/P;
%(4) Check if the approximation is OK
%(4.1) Find the CDF of the discrete random variable
CDF_discrete=zeros(size(N,2),1);
for h=2:size(N,2)+1
CDF_discrete(h-1)=sum(X<=edges(h))/P;
end
%(4.2) Plot empirical CDF of the original random variable and CDF_discrete
ecdf(X)
hold on
scatter(supp, CDF_discrete)
I don't know if this is what you're after, but maybe it can help you. Note that P(X = x) = 0 for any single point x of a continuous probability distribution; the pointwise probability of X mapping to x is infinitesimally small, and thus regarded as 0.
What you could do instead, in order to approximate it with a discrete probability space, is to define some points (x_1, x_2, ..., x_n) and let their discrete probabilities be the integral of the PDF (of your continuous probability distribution) over a corresponding range, that is
P(x_1) = P(X \in (-infty, x_1_end)), P(x_2) = P(X \in (x_1_end, x_2_end)), ..., P(x_n) = P(X \in (x_(n-1)_end, +infty))
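For a standard normal this recipe takes a few lines (a sketch assuming the Statistics Toolbox's norminv/normcdf; the choice of r and of the bin representatives is arbitrary):
r = 10;
edges = norminv((0:r)/r);       % cut points: -Inf, ..., +Inf
a = norminv(((1:r) - 0.5)/r);   % one representative point per bin
pmass = diff(normcdf(edges));   % P(a_k) = integral of the PDF over bin k
sum(pmass)                      % = 1, so it is a valid mass function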
:-)