I have channel measurements with values > 20,000, which have to be divided into discrete levels (K=8 in my case) so that each measurement is mapped to a state. I then have to find the state-transition probability matrix for this in MATLAB.
My question is: how do I divide these values into 8 states and find the state-transition probability matrix for those 8 states in MATLAB?
Here is a made-up example:
%# some random vector (load your data here instead)
x = randn(1000,1);
%# discretization/quantization into 8 levels
edges = linspace(min(x),max(x),8+1);
[counts,bins] = histc(x, edges);
%# fix last level of histc output
last = numel(counts);
bins(bins==last) = last - 1;
counts(last-1) = counts(last-1) + counts(last);
counts(last) = [];
%# show histogram
bar(edges(1:end-1), counts, 'histc')
%# transition matrix
trans = full(sparse(bins(1:end-1), bins(2:end), 1, 8, 8));  %# count co-occurrences (force 8x8 even if a state is unused)
trans = bsxfun(@rdivide, trans, sum(trans,2));              %# normalize rows into probabilities
A few things to note:
Discretization is performed simply by dividing the whole range of the data into 8 bins, using histc. Note that, because of the way that function works, we had to combine the last two counts and fix the bins accordingly.
The transition matrix is computed by first counting the co-occurrences using a less-known calling form of the sparse function (accumarray could have been used as well). The count matrix is then normalized so that each row sums to one.
You mentioned that your MC model should only allow transitions between adjacent states (1 to 2 or 8 to 7, but not between 2 and 5). I did not enforce this, since it should be a property of the data itself, which is not the case in this example with random data; a sketch of how you could enforce it explicitly follows below.
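If you did want to enforce that constraint explicitly, a minimal sketch (my own addition, operating on the trans matrix computed above) would zero out the disallowed entries and renormalize the rows:

%# keep only transitions between the same or adjacent states (hypothetical post-processing)
K = size(trans,1);
mask = abs(bsxfun(@minus, (1:K)', 1:K)) <= 1;    %# tridiagonal mask of allowed moves
trans(~mask) = 0;
trans = bsxfun(@rdivide, trans, sum(trans,2));   %# renormalize rows (all-zero rows become NaN)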
I have two matrices A and B. The size of A is 200*1000 double (here: 1000 represents 1000 different features). Matrix A belongs to group 1, where I use ones(200,1) as the label vector. The size of B is also 200*1000 double (here: 1000 also represents 1000 different features). Matrix B belongs to group 2, where I use -1*ones(200,1) as the label vector.
My question is how do I visualize matrices A and B so that I can clearly distinguish them based on the given groups?
I'm assuming each row of your matrices A and B is one sample. If I understand you correctly, you want to draw a series of 1000-dimensional vectors, which is impossible: we can't physically visualize anything beyond three dimensions.
As such, what I suggest you do is perform a dimensionality reduction to reduce your data so that each input is reduced to either 2 or 3 dimensions. Once you reduce your data, you can plot them normally and assign a different marker to each point, depending on what group they belonged to.
If you want to achieve this in MATLAB, use Principal Component Analysis, specifically the pca function, which gives you the principal component coefficients and the samples reprojected onto the principal component space, so you can keep only the leading dimensions. I'm assuming you have the Statistics Toolbox; if you don't, this won't work.
Specifically, given your matrices A and B, you would do this:
[coeffA, scoreA] = pca(A);
[coeffB, scoreB] = pca(B);
numDimensions = 2;
scoreAred = scoreA(:,1:numDimensions);
scoreBred = scoreB(:,1:numDimensions);
The second output of pca gives you the data projected onto the principal components, so you simply extract the first N columns, where N is the desired number of dimensions.
I chose 2 for now; we'll see what it looks like in 3 dimensions afterwards. Once we have the 2-dimensional scores, it's just a matter of plotting:
plot(scoreAred(:,1), scoreAred(:,2), 'rx', scoreBred(:,1), scoreBred(:,2), 'bo');
This will produce a plot where the samples from matrix A are shown as red crosses and the samples from matrix B as blue circles.
Here's a sample run given completely random data:
rng(123); %// Set seed for reproducibility
A = rand(200,1000); B = rand(200,1000); %// Generate random data
%// Code as before
[coeffA, scoreA] = pca(A);
[coeffB, scoreB] = pca(B);
numDimensions = 2;
scoreAred = scoreA(:,1:numDimensions);
scoreBred = scoreB(:,1:numDimensions);
%// Plot the data
plot(scoreAred(:,1), scoreAred(:,2), 'rx', scoreBred(:,1), scoreBred(:,2), 'bo');
We get a 2D scatter plot of the two reduced data sets (figure not shown).
If you want three dimensions, simply change numDimensions = 3, then change the plot code to use plot3:
plot3(scoreAred(:,1), scoreAred(:,2), scoreAred(:,3), 'rx', scoreBred(:,1), scoreBred(:,2), scoreBred(:,3), 'bo');
grid;
With those changes, we get the corresponding 3D scatter plot (figure not shown).
I need to compute a moving average over a data series, within a for loop. I have to get the moving average over N=9 days. The array I'm computing on contains 4 series of 365 values (M), which are themselves mean values of another set of data. I want to plot the mean values of my data together with the moving average in one plot.
I googled a bit about moving averages and the "conv" command and found something which I tried implementing in my code:
hold on
for ii = 1:4
    M = mean(C{ii},2);
    wts = [1/24; repmat(1/12,11,1); 1/24];
    Ms = conv(M, wts, 'valid');
    plot(M)
    plot(Ms,'r')
end
hold off
So basically, I compute my mean and plot it with a (wrong) moving average. I picked the "wts" value straight off the MathWorks site, so it is probably incorrect for my case (source: http://www.mathworks.nl/help/econ/moving-average-trend-estimation.html). My problem, though, is that I do not understand what this "wts" is. Could anyone explain? If it has something to do with the weights of the values, that does not apply here: all values should be weighted the same.
And if I am doing this entirely wrong, could I get some help with it?
My sincerest thanks.
There are two more alternatives:
1) filter
From the doc:
You can use filter to find a running average without using a for loop.
This example finds the running average of a 16-element vector, using a
window size of 5.
data = (1:0.2:4)';
windowSize = 5;
filter(ones(1,windowSize)/windowSize,1,data)
2) smooth, part of the Curve Fitting Toolbox (which is available in most installations)
From the doc:
yy = smooth(y) smooths the data in the column vector y using a moving
average filter. Results are returned in the column vector yy. The
default span for the moving average is 5.
%// Create noisy data with outliers:
x = 15*rand(150,1);
y = sin(x) + 0.5*(rand(size(x))-0.5);
y(ceil(length(x)*rand(2,1))) = 3;
%// Smooth the data using the loess and rloess methods with a span of 10%:
yy1 = smooth(x,y,0.1,'loess');
yy2 = smooth(x,y,0.1,'rloess');
In R2016a, MATLAB added the movmean function, which calculates a moving average:
N = 9;
M_moving_average = movmean(M,N)
Using conv is an excellent way to implement a moving average. In the code you are using, wts is how much you are weighting each value (as you guessed). The sum of that vector should always be equal to one. If you wish to weight each value evenly and do a size-N moving filter, then you would want to do:
N = 7;
wts = ones(N,1)/N;
sum(wts) % result = 1
Using the 'valid' argument in conv will result in fewer values in Ms than you have in M. Use 'same' if you don't mind the effects of zero padding. If you have the Signal Processing Toolbox, you can use cconv if you want to try a circular moving average. Something like
N = 7;
wts = ones(N,1)/N;
Ms = cconv(M, wts, numel(M));   %# circular moving average, wraps around at the ends
should work.
You should read the conv and cconv documentation for more information if you haven't already.
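For instance, a quick sketch (assuming M is one of your 365-point series and N = 9, as in your case) showing how the shape argument affects the output length:

M   = randn(365,1);                %# stand-in for mean(C{ii},2)
N   = 9;
wts = ones(N,1)/N;
Ms_valid = conv(M, wts, 'valid');  %# length 365 - 9 + 1 = 357
Ms_same  = conv(M, wts, 'same');   %# length 365; edges are affected by zero padding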
I would use this:
function y = movingAverage(x, w)
% Moving average of signal x with window size w
    k = ones(1, w) / w;         % uniform kernel whose weights sum to 1
    y = conv(x, k, 'same');     % same-length output; edges are zero-padded
end
ripped straight from here.
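A quick usage sketch (with made-up data standing in for your daily means):

M  = cumsum(randn(365,1));        % made-up daily series standing in for mean(C{ii},2)
Ms = movingAverage(M, 9);         % 9-day moving average, same length as M
plot(1:365, M, 'b', 1:365, Ms, 'r');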
To comment on your current implementation: wts is the weighting vector which, in the MathWorks example, implements a 13-point average where the first and last points carry half the weight of the others.
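For reference, a quick check of that vector (the one used in your loop):

wts = [1/24; repmat(1/12,11,1); 1/24];
numel(wts)   % 13 points
sum(wts)     % 1, so it is a proper (weighted) average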
I have a set of 3D points, let's say A, as shown below:
A=[
-0.240265581092000 0.0500598627544876 1.20715641293013
-0.344503191645519 0.390376667574812 1.15887540716612
-0.0931248606994074 0.267137193112796 1.24244644549763
-0.183530493218807 0.384249186312578 1.14512014134276
-0.0201358671977785 0.404732019283683 1.21816745283019
-0.242108038906952 0.229873488902244 1.24229940627651
-0.391349107031230 0.262170158259873 1.23856838565023
]
What I want to do is connect the 3D points with lines, but only those pairs whose distance is less than a specific threshold T. I want to get a list of the pairs of points that need to be connected, such as:
[
( -0.240265581092000 0.0500598627544876 1.20715641293013), (-0.344503191645519 0.390376667574812 1.15887540716612);
(-0.0931248606994074 0.267137193112796 1.24244644549763),(-0.183530493218807 0.384249186312578 1.14512014134276),.....
]
So, as shown, I'll have a list of pairs of points that need to be connected. Could anyone please advise how this can be done in MATLAB?
The following example demonstrates how to accomplish this.
%# Build an example matrix
A = [1 2 3; 0 0 0; 3 1 3; 2 0 2; 0 1 0];
Threshold = 3;
%# Calculate distance between all points
D = pdist2(A, A);
%# Discard any points with distance greater than threshold
D(D > Threshold) = nan;
If you wish to extract an index of all observation pairs that are linked by a distance less than (or equal to) Threshold, as well as the corresponding distance (your question didn't specify what form you wanted the output to take, so I am essentially guessing here), then instead use the following:
%# Obtain a list of linear indices of observations less than or equal to TH
I1 = find(D <= Threshold);
%# Extract the actual distances, as well as the corresponding observation indices from A
[Obs1Index, Obs2Index] = ind2sub(size(D), I1);
DList = [Obs1Index, Obs2Index, D(I1)];
Note, pdist2 uses Euclidean distance by default, but there are other options - see the documentation here.
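For example (just an illustration; pick whichever metric suits your data), you could use the city block metric instead:

D = pdist2(A, A, 'cityblock');   %# Manhattan distance instead of the default Euclidean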
UPDATE: Based on the OP's comments, the following code will express the output as a K*6 matrix, where K is the number of pairs whose distance is below the threshold; the first three columns of each row hold the first data point (3 dimensions) and the last three columns hold the connected data point.
DList2 = [A(Obs1Index, :), A(Obs2Index, :)];
SECOND UPDATE: I have not made any assumptions on the distance measure in this answer. That is, I'm deliberately using pdist2 in case your distance measure is not symmetric. However, if you are using a symmetric distance measure, then you could probably speed up the run-time by using pdist instead, although my indexing code would need to be adjusted accordingly.
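For what it's worth, a rough sketch of the pdist version (assuming Euclidean distance and the same A and Threshold as above):

Dsq = squareform(pdist(A));                    %# symmetric distance matrix, Euclidean by default
%# indices of pairs within the threshold; triu(...,1) lists each pair once and skips the diagonal
[Obs1Index, Obs2Index] = find(triu(Dsq <= Threshold, 1));
DList2 = [A(Obs1Index,:), A(Obs2Index,:)];     %# same K*6 layout as above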
The plot3 and pdist2 functions can be used to achieve what you want.
D = pdist2(A, A);
T = 0.2;
n = size(A,1);
for i = 1:n
    for j = i+1:n
        if D(i,j) < T && D(i,j) ~= 0
            i        %# display the indices of the connected pair
            j
            plot3(A([i j],1), A([i j],2), A([i j],3));
            hold on;
            fprintf('line is plotted\n');
            pause;
        end
    end
end
I have a series of n=400 sequences of varying length containing the letters ACGTE.
For example, the probability of having C after A is p(C|A) = P(X(t+1) = C | X(t) = A), which can be estimated from the set of empirical sequences as
Phat(A,C) = n(A,C) / sum_k n(A,k),
where n(A,C) is the number of observed A-to-C transitions. Assuming a first-order Markov chain and collecting these estimates for all letter pairs, I get a 5x5 transition matrix Phat.
But I'm interested in calculating confidence intervals for Phat; any thoughts on how I could go about it?
You could use bootstrapping to estimate confidence intervals. MATLAB provides the bootci function in the Statistics Toolbox. Here is an example:
%# generate a random cell array of 400 sequences of varying length
%# each containing indices from 1 to 5 corresponding to ACGTE
sequences = arrayfun(@(~) randi([1 5], [1 randi([500 1000])]), 1:400, ...
    'UniformOutput',false)';
%# compute transition matrix from all sequences
trans = countFcn(sequences);
%# number of bootstrap samples to draw
Nboot = 1000;
%# estimate 95% confidence interval using bootstrapping
ci = bootci(Nboot, {@countFcn, sequences}, 'alpha',0.05);
ci = permute(ci, [2 3 1]);
We get:
>> trans %# 5x5 transition matrix: P_hat
trans =
0.19747 0.2019 0.19849 0.2049 0.19724
0.20068 0.19959 0.19811 0.20233 0.19928
0.19841 0.19798 0.2021 0.2012 0.20031
0.20077 0.19926 0.20084 0.19988 0.19926
0.19895 0.19915 0.19963 0.20139 0.20088
and two other similar matrices containing the lower and upper bounds of confidence intervals:
>> ci(:,:,1) %# CI lower bound
>> ci(:,:,2) %# CI upper bound
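As a quick sanity check, you could verify element-wise that the point estimate lies inside the interval (a small sketch using the trans and ci computed above):

inside = trans >= ci(:,:,1) & trans <= ci(:,:,2);   %# logical 5x5 matrix
all(inside(:))   %# typically 1, i.e. every entry of trans lies within its interval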
I am using the following function to compute the transition matrix from a set of sequences:
function trans = countFcn(seqs)
%# accumulate transition matrix from all sequences
trans = zeros(5,5);
for i=1:numel(seqs)
trans = trans + sparse(seqs{i}(1:end-1), seqs{i}(2:end), 1, 5,5);
end
%# normalize into proper probabilities
trans = bsxfun(@rdivide, trans, sum(trans,2));
end
As a bonus, we can use the bootstrp function to get the statistic computed from each bootstrap sample, and use it to show a histogram for each entry of the transition matrix:
%# compute multiple transition matrices using bootstrapping
stat = bootstrp(Nboot, @countFcn, sequences);
%# display histogram for each entry in the transition matrix
sub = reshape(1:5*5,5,5);
figure
for i=1:size(stat,2)
subplot(5,5,sub(i))
hist(stat(:,i))
end
Not sure whether it is statistically sound, but an easy way to get an indicative upper and lower bound:
Cut your sample in n equal pieces (for example 1:40,41:80,...,361:400) and calculate the probability matrix for each of these subsamples.
By looking at the distribution of probabilities amongst subsamples you should get a pretty good idea of what the variance is.
The disadvantage of this method is that it may not actually be possible to calculate an interval with a desired coverage probability. The advantage is that it should give you a good feeling for how the series behaves, and that it may capture information that could be lost in other methods due to the assumptions those methods (for example bootstrapping) are based on.
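A rough sketch of that idea (reusing the sequences cell array and the countFcn function from the answer above, and splitting into 10 subsamples of 40 sequences):

nGroups   = 10;
groupSize = numel(sequences) / nGroups;       %# 400 / 10 = 40 sequences per subsample
subTrans  = zeros(5, 5, nGroups);
for g = 1:nGroups
    idx = (g-1)*groupSize + (1:groupSize);
    subTrans(:,:,g) = countFcn(sequences(idx));
end
%# spread of each transition probability across the subsamples
probStd = std(subTrans, 0, 3);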
I have a vector of solar radiation measurements for a water body, I would like to calculate the radiation that reaches certain depths in the water column. This can be calculated from Beer's law, which I have applied for the second depth of my measurements:
rad = 1+(30-1).*rand(365,1);
depth = 1:10;
kz = 0.4;
rad(:,2) = rad(:,1).*exp(-kz.*depth(2));
How would I apply this to all of the depths specified in the vector 'depth'? I.e., how would I generate a matrix with 365 rows and 10 columns, where each column holds the radiation that reaches that particular depth?
Since the decay of radiation due to scattering and absorption is a simple %-loss per depth, you can calculate the result very easily from the initial radiation:
initialRad = 1+(30-1).*rand(365,1);
depth = 0:10; %# start with zero so that the first column is your initial radiation
kz = 0.4;
rad = bsxfun(@times, initialRad, exp(-kz*depth));
Note that, as @Rasman points out, you can use vector multiplication instead of bsxfun, since multiplying an m-by-1 array with a 1-by-n array results in an m-by-n array. The bsxfun solution can be more robust, since it also works when the arrays have additional dimensions (e.g. m-by-1-by-k and 1-by-n-by-k if you run multiple tests), or if the vectors are transposed (e.g. 1-by-m and n-by-1). The solution below is a nice demonstration of good linear algebra skills, though you may want to add a note explaining why you don't use element-wise (dot) multiplication between initialRad and the exp term.
rad = initialRad * exp(-kz * depth);
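As a small illustration of that robustness (a hypothetical extension, stacking several runs along a third dimension):

initialRad3 = 1 + (30-1).*rand(365,1,5);              %# 5 hypothetical simulation runs
rad3 = bsxfun(@times, initialRad3, exp(-kz*depth));   %# 365-by-11-by-5 result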
You should use loops. Here you can read a tutorial about them and how to use them:
http://www.mathworks.com/help/distcomp/for.html
Basically, what you need is a for loop with i as the main parameter, which should run for i = 1..9, and your main assignment would become
rad(:,i+1) = rad(:,i).*exp(-kz.*depth(2));
To be more precise:
for i = 1:9
rad(:,i+1) = rad(:,i).*exp(-kz.*depth(2));
end
I do not know the subject, but this loop will sweep your matrix column by column, starting by assigning column 2 from column 1 and going on until column 10.
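Note that the loop above applies the same factor exp(-kz*depth(2)) at every step. If you want each column i to correspond to depth(i), a variant (a sketch, assuming rad(:,1) holds the radiation at depth(1)) could attenuate by the depth increment between consecutive levels:

rad = zeros(365, numel(depth));
rad(:,1) = 1 + (30-1).*rand(365,1);     % radiation at the first depth
for i = 1:numel(depth)-1
    % attenuate by the extra depth between level i and level i+1
    rad(:,i+1) = rad(:,i) .* exp(-kz .* (depth(i+1) - depth(i)));
end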