Bring two vectors to the same length [duplicate] - matlab

This question already has an answer here:
MATLAB: Comparing 2 arrays with different lengths
(1 answer)
Closed 6 years ago.
I have the following problem:
I have two data vectors v1 (Length N1=13812) and v2 (Length N2=60002021). I have to bring both vectors in the same length N3 using interpolation bzw. downsampling, with the requirement: 2xN1.
Can somebody help me? My idea was to use interp, interp1, and downsample to solve the problem. Is that the right approach?

Depending on your signal and sampling rate, using interp1 might not be the right thing to do.
There is a resample function that you could use like this:
v1_resampled = resample(v1, 2, 1);
v2_resampled = resample(v2, p, q);
where the parameters p and q depend on the sampling rate of your vector v2.
Always check the beginning and end of the resampled vectors, check for NaNs, and be careful if your sampling is not equidistant.
Another possible alternative would be to use a moving average / moving median filter on the higher resolution signal. The best resampling approach really depends on the signal type.
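For readers outside MATLAB, the same polyphase resampling is available in SciPy as resample_poly — an assumed stand-in for MATLAB's resample; the random test vector below is purely illustrative:

```python
import numpy as np
from scipy.signal import resample_poly

# Stand-in for the first data vector (length N1 = 13812, dummy values)
v1 = np.random.default_rng(0).standard_normal(13812)

# Upsample by the rational factor p/q = 2/1, as in the answer above
v1_resampled = resample_poly(v1, 2, 1)

# The output has exactly 2*N1 samples, meeting the N3 = 2*N1 requirement
print(len(v1_resampled))  # 27624
```

As with MATLAB's resample, it is worth inspecting the first and last few samples for filter edge effects.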

Related

How to speed up column wise operation in MATLAB using bsxfun? [duplicate]

This question already has answers here:
Fast Algorithms for Finding Pairwise Euclidean Distance (Distance Matrix)
(3 answers)
Closed 5 years ago.
I am trying to calculate the squared Euclidean distance between each pair of columns from two matrices and store the results in a matrix D.
im_patches is 81*60840 double
codebook is 81*456 double
SquareEuclidean = @(x, y) x'*x + y'*y - 2*x'*y;
% Get N*K distance matrix D between the N patches extracted
% from the image (im patches) and the K prototypes in the codebook
D = zeros(size(im_patches,2), size(codebook,2));
for i = 1:size(im_patches,2)
    for j = 1:size(codebook,2)
        D(i,j) = SquareEuclidean(im_patches(:,i), codebook(:,j));
    end
end
However, this is very inefficient: it takes more than 10 minutes on my laptop. I am wondering whether there is a better way using bsxfun, so I tried:
D2 = bsxfun(@(x,y) x'.*x + y'.*y - 2.*x'.*y, im_patches, codebook);
which gives an error:
Error using bsxfun: Non-singleton dimensions of the two input arrays must match each other.
I think bsxfun or arrayfun would be a nice way of dealing with such a problem, but I don't know the correct way of doing this.
Thank you in advance.
Your loop can be reduced to:
bsxfun(@plus, sum(im_patches.'.^2,2), sum(codebook.^2) - 2*im_patches.'*codebook)
In MATLAB R2016b and later, implicit expansion makes bsxfun unnecessary:
sum(im_patches.'.^2,2)+sum(codebook.^2)-2*im_patches.'*codebook
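The same expansion carries over to NumPy, where broadcasting plays the role of bsxfun / implicit expansion. The small random matrices below are stand-ins for the 81x60840 and 81x456 inputs from the question:

```python
import numpy as np

rng = np.random.default_rng(0)
im_patches = rng.standard_normal((81, 100))  # stand-in for the 81x60840 matrix
codebook = rng.standard_normal((81, 7))      # stand-in for the 81x456 matrix

# ||a - b||^2 = ||a||^2 + ||b||^2 - 2*a.b, expanded over all column pairs
D = (np.sum(im_patches**2, axis=0)[:, None]
     + np.sum(codebook**2, axis=0)[None, :]
     - 2 * im_patches.T @ codebook)

# Reference result: the double loop from the question
D_loop = np.array([[np.sum((im_patches[:, i] - codebook[:, j])**2)
                    for j in range(codebook.shape[1])]
                   for i in range(im_patches.shape[1])])
```

The vectorized form replaces the N*K scalar evaluations with one matrix multiplication, which is where the speedup comes from.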

How do you calculate N random numbers that are weighted without repeats in Matlab? [duplicate]

This question already has answers here:
Weighted sampling without replacement in Matlab
(5 answers)
Weighted random numbers in MATLAB
(4 answers)
Closed 7 years ago.
I'm trying to pick 5 numbers at random. The numbers 1-35 have set probability weights assigned to each number. I'm wondering how, in Matlab, to compute 5 weighted random numbers WITHOUT repeats, and also how to compute 5 sets of those.
Although I would suspect MATLAB has a built in function for this, the documentation for randsample suggests otherwise:
Y = randsample(N,K,true,W) or randsample(POPULATION,K,true,W) returns a
weighted sample, using positive weights W, taken with replacement. W is
often a vector of probabilities. This function does not support weighted
sampling without replacement.
So, instead, since you only are looking for a few numbers, looping isn't a terrible idea:
POP = 1:35;
W = rand(1,35); W = W/sum(W);
k = 5;
mynumbers = zeros(1,k);
for i = 1:k
    mynumbers(i) = randsample(POP,1,true,W);
    idx2remove = find(POP==mynumbers(i));
    POP(idx2remove) = [];
    W(idx2remove) = [];
end
The entries in W are your weights, the vector POP holds your numbers 1 through 35, and k is how many numbers you'd like to choose.
The loop randomly samples one number (with weights) at a time using MATLAB's randsample, then the selected number and corresponding weight are removed from POP and W.
For larger k I hope there's a better solution...
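For comparison, NumPy's Generator.choice supports exactly this case (weighted, without replacement), so no manual loop is needed there. The weights below are random placeholders, as in the answer above:

```python
import numpy as np

rng = np.random.default_rng(42)
POP = np.arange(1, 36)            # the numbers 1 through 35
W = rng.random(35)
W /= W.sum()                      # normalize to a probability vector

# 5 sets of k = 5 weighted draws, with no repeats within each set
sets = [rng.choice(POP, size=5, replace=False, p=W) for _ in range(5)]
```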

Gaussian derivative - Matlab

I have an RGB image and I am trying to calculate its Gaussian derivative. The image is converted to greyscale; the Gaussian window is 5x5 and st is the standard deviation. This is the code I am using to find the 2D Gaussian derivative in Matlab:
N = 2;
[x,y] = meshgrid(-N:N,-N:N);
G = exp(-(x.^2+y.^2)/(2*st^2))/(2*pi*st^2);
G_x = -x.*G/(st^2);
G_x_s = G_x/sum(G_x(:));
G_y = -y.*G/(st^2);
G_y_s = G_y/sum(G_y(:));
where st is the standard deviation I am using. Before I proceed to convolve the image with G_x_s and G_y_s, I have the following problem: when I use a standard deviation that is an even number (2, 4, 6, 8), the program works and gives the expected results. But when I use an odd number (3 or 5), the value of G_y_s becomes Inf because sum(G_y(:)) = 0. I do not understand that behavior, and I was wondering whether there is a problem with the code or whether in the above formula the standard deviation can only be an even number. Any help will be greatly appreciated.
Thank you.
Your program doesn't work at all; the results you get for even standard deviations are just an artifact of numerical error.
Your G is a matrix symmetric about the center, while x and y are both point-antisymmetric about the center. So the product (G times x or y) is a matrix whose entries sum to zero, and dividing by that sum is a division by zero. Everything else you observe comes from roundoff errors: here, I see a sum of G_x of about 1.3e-17.
I think your error is in the multiplications x.*G and y.*G; I cannot figure out why you would do that.
I assume you want to do edge detection, right? You can use fspecial to create several edge filters, Laplacian of Gaussian for instance. You could also create two Gaussian filters with different standard deviations and subtract one from the other to get an edge filter.
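A quick numerical check of this point in NumPy (the grid matches the 5x5 window from the question; variable names mirror the original code): the sum of -x.*G is zero by symmetry for any standard deviation, so the difference between even and odd values of st is purely roundoff.

```python
import numpy as np

N = 2
X, Y = np.meshgrid(np.arange(-N, N + 1), np.arange(-N, N + 1))

sums = []
for st in (2.0, 3.0, 4.0, 5.0):  # even and odd standard deviations
    G = np.exp(-(X**2 + Y**2) / (2 * st**2)) / (2 * np.pi * st**2)
    G_x = -X * G / st**2
    sums.append(G_x.sum())       # antisymmetric in X: entries cancel pairwise

print(sums)  # all within roundoff of zero, regardless of st
```

Dividing G_x by such a sum is therefore a division by (numerically) zero for every st, which is exactly why normalizing by sum(G_x(:)) is ill-defined.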

Detect steps in a Piecewise constant signal

I have a piecewise constant signal shown below. I want to detect the location of step transition (Marked in red).
My current approach:
1. Smooth the signal using a moving average filter (http://www.mathworks.com/help/signal/examples/signal-smoothing.html)
2. Perform a discrete wavelet transform to find discontinuities
3. Locate the discontinuities to get the positions of the step transitions
I am currently implementing the last step, detecting the discontinuities. However, I cannot get the precise locations and end up with many false detections.
My questions:
1. Is this the correct approach?
2. If yes, can someone share some pointers / an algorithm for the last step?
3. Please suggest an alternate/better approach.
Thanks
Convolve your signal with a 1st derivative of a Gaussian to find the step positions, similar to a Canny edge detection in 1-D. You can do that in a multi-scale approach, starting from a "large" sigma (say ~10 pixels) detect local maxima, then to a smaller sigma (~2 pixels) to converge on the right pixels where the steps are.
You can see an implementation of this approach here.
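A one-dimensional sketch of that idea in SciPy (the test signal, sigma, and threshold below are illustrative choices of mine, not part of the original answer):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Piecewise constant test signal with steps at indices 50 and 100, plus noise
y = np.concatenate([np.ones(50), 3 * np.ones(50), 2 * np.ones(50)])
y += 0.05 * np.random.default_rng(1).standard_normal(y.size)

# Convolve with the 1st derivative of a Gaussian (order=1), as in 1-D Canny
response = gaussian_filter1d(y, sigma=4, order=1)

# Keep locations where the response magnitude is large
candidates = np.where(np.abs(response) > 0.4 * np.abs(response).max())[0]
# candidates cluster around the true step positions, 50 and 100
```

In the multi-scale variant described above, a large sigma would first localize these clusters coarsely, and a smaller sigma would then refine them to the exact pixels.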
If your function is really piecewise constant, why not use just abs of diff compared to a threshold?
th = 0.1;
x_steps = x(abs(diff(y)) > th)
where x is a vector with your x-axis values, y is your y-axis data, and th is a threshold.
Example:
>> x = [2 3 4 5 6 7 8 9];
>> y = [1 1 1 2 2 2 3 3];
>> th = 0.1;
>> x_steps = x(abs(diff(y)) > th)
x_steps =
4 7
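The same check in NumPy, for comparison (note that diff shortens the vector by one, so the mask indexes all but the last x value):

```python
import numpy as np

x = np.array([2, 3, 4, 5, 6, 7, 8, 9])
y = np.array([1, 1, 1, 2, 2, 2, 3, 3])
th = 0.1

# x value immediately before each jump in y
x_steps = x[:-1][np.abs(np.diff(y)) > th]
print(x_steps)  # [4 7]
```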
Regarding your point 3: (Please suggest an alternate/ better approach)
I suggest to use a Potts "filter". This is a variational approach to get an accurate estimation of your piecewise constant signal (similar to the total variation minimization). It can be interpreted as adaptive median filtering. Given the Potts estimate u, the jump points are the points of non-zero gradient of u, that is, diff(u) ~= 0. (There are free Matlab implementations of the Potts filters on the web)
See also http://en.wikipedia.org/wiki/Step_detection
Total Variation Denoising can produce a piecewise constant signal. Then, as pointed out above, "abs of diff compared to a threshold" returns the position of the transitions.
There exist very efficient algorithms for TVDN that process millions of data points within milliseconds:
http://www.gipsa-lab.grenoble-inp.fr/~laurent.condat/download/condat_fast_tv.c
Here's an implementation of a variational approach with python and matlab interface that also uses TVDN:
https://github.com/qubit-ulm/ebs
I think smoothing with a sharper lowpass filter should work better.
Try medfilt1() (a median filter) instead, since you have very distinct levels. If you know how long your plateaus are, you can take half or a quarter of the plateau length as the window, for example; then you get very sharp edges. The sharp edges should be detectable using a Haar wavelet or even simple differentiation.
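A small SciPy illustration of the median-filter suggestion (the toy signal and kernel size are my own choices): an impulse outlier is removed while the step edge stays perfectly sharp.

```python
import numpy as np
from scipy.signal import medfilt

# Two plateaus with one impulse outlier
y_true = np.concatenate([np.ones(10), 3 * np.ones(10)])
y_noisy = y_true.copy()
y_noisy[5] = 10.0                      # outlier on the first plateau

# Kernel of 5 samples: half the plateau length, as suggested above
y_filtered = medfilt(y_noisy, kernel_size=5)

# The outlier is gone and the edge is sharp; simple differentiation finds it
steps = np.where(np.abs(np.diff(y_filtered)) > 0.1)[0]
print(steps)  # [9] -> the jump lies between samples 9 and 10
```

Unlike a moving average, the median filter does not smear the transition across neighboring samples, which is why the subsequent diff stays clean.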

How to use uniform distribution to see central limit theorem in action? [duplicate]

This question already has an answer here:
PDF and CDF plot for central limit theorem using Matlab
(1 answer)
Closed 3 years ago.
I would like to use MATLAB to visualize the Central Limit Theorem in action. I would like to use rand() to produce 10 samples of uniform distribution U[0,1] and compute their average, then save it to a matrix 'Mat'.
I would then use a histogram to visualize the convergence in distribution. How would you do this and normalize that histogram so it is a valid probability density (instead of just counting the frequency of occurrence)?
To generate the samples I am doing something like:
Mat = rand(N,sizeOfVector) > rand(1);
But I guess I am going about it the wrong way.
To generate N samples of length sizeOfVector you start out with rand as you suggested, and then continue as follows (calling the array average instead of Mat for readability):
samples = rand(N,sizeOfVector);
average = mean(samples,1);
binWidth = 3.49*std(average)*numel(average)^(-1/3); %# Scott's rule for a good bin width for normal data
nBins = ceil((max(average)-min(average))/binWidth);
[counts,x] = hist(average,nBins);
normalizedCounts = counts/(sum(counts)*(x(2)-x(1))); %# divide by total area so the histogram is a valid density
bar(x,normalizedCounts,1)
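The same experiment in NumPy, where histogram's density=True performs the normalization (dividing counts by total count times bin width) so the bars integrate to 1. N = 10 and the number of repetitions are illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
N, reps = 10, 100_000

# Each entry of `average` is the mean of N uniform(0,1) samples
average = rng.random((N, reps)).mean(axis=0)

# density=True divides counts by (total count * bin width): a valid density
counts, edges = np.histogram(average, bins=50, density=True)

# CLT sanity checks: area under the histogram is 1, mean ~ 0.5,
# and the spread shrinks like sqrt(1/(12*N)) ~ 0.0913 for N = 10
area = np.sum(counts * np.diff(edges))
```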