I have eye tracking data sampled at 2000Hz with 45000 samples of x-y pixel coordinates on a 1920x1080 plane.
The eye velocity (the saccade signal) is shown in the plot below and contains high-frequency noise. The x-axis is time and the y-axis is velocity (I forgot the axis labels).
I want to filter out the noise in such a way that the values between the peaks are 0 and the peaks do not contain noise nor do they lose amplitude.
The latter I could probably do by locating each peak and simply interpolating between its start and end positions, since I just need the peaks and their widths. However, this is not a very elegant option.
I was curious whether there is a smarter or more elegant way of doing this. I tried a Butterworth filter, but that reduces the peak amplitude.
It will be impossible to leave the peak amplitudes completely unchanged because they are also corrupted by high-frequency noise. I think you have two options to filter out the noise:
Using a low pass filter
Using the smooth function
You would have to play around with both methods to determine which one better suits your needs and leaves the saccade velocity amplitudes mostly unchanged.
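For example, a rough sketch of both options in MATLAB, assuming the velocity trace is stored in a vector v sampled at 2 kHz; the cutoff frequency and window length are placeholders you would need to tune, and both approaches will still attenuate the peaks somewhat:
fs = 2000;                          % sampling rate in Hz (from the question)
t  = (0:numel(v)-1) / fs;           % time axis
fc = 100;                           % assumed low-pass cutoff in Hz (tune this)
% Option 1: low-pass Butterworth filter, applied with filtfilt so the
% zero-phase filtering does not shift the peaks in time.
[b, a] = butter(4, fc / (fs/2));
v_lp = filtfilt(b, a, v);
% Option 2: moving-average smoothing (smooth from the Curve Fitting Toolbox,
% or movmean as an alternative); 21 samples is an arbitrary starting window.
v_sm = smooth(v, 21);
plot(t, v, t, v_lp, t, v_sm)
legend('raw', 'low-pass', 'smoothed')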
Suppose some objects move along complex spirals in 3D, and we have their trajectories projected onto a plane.
How can I find a median trajectory of such movements and estimate the amplitudes of the spirals?
I assume that this requires averaging the coordinates of the trajectories, then somehow finding the distances from the extreme points of the trajectories to the midline. But I don't know a concrete algorithm for this. Can someone suggest such an algorithm?
By median trajectory I mean a line that goes between the path's waves, something like the line in the picture below.
This is a case for a Kalman filter, but this method is a little complicated.
A simpler one is a moving average, with a window that covers as close to a full period as possible (which you can estimate visually).
Regarding the distances, you can compute the shortest Euclidean distance from every point to the midline (using a point-to-segment distance function). This will yield an alternating plot, which you can smooth with a moving maximum (rather than an average) over one period.
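A rough sketch of that idea in MATLAB, assuming the projected trajectory is given as column vectors x and y and that one period is roughly 50 samples; as a simplification, the distance to the midline is approximated by the distance to the nearest midline sample rather than a true point-to-segment computation:
win = 50;                         % assumed number of samples per period (tune)
% Midline: moving average over roughly one period.
xm = movmean(x, win);
ym = movmean(y, win);
% Distance of every trajectory point to the nearest midline sample
% (an approximation of the shortest point-to-segment distance).
d = zeros(size(x));
for k = 1:numel(x)
    d(k) = min(hypot(x(k) - xm, y(k) - ym));
end
% Envelope of the alternating distance plot: moving maximum over one period.
amp = movmax(d, win);
plot(d), hold on, plot(amp)
legend('distance to midline', 'amplitude envelope')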
I am using a 3D cross-correlation technique to track a particle in 3D. It is very robust, but my z dimension has 4x lower resolution than my x and y. The cross-correlation produces a 3D image with a single maximum. I would like to localise this point with sub-pixel accuracy, using interpolation of some sort, I expect.
Any help welcome!
Craig
You could use bicubic (tricubic in 3D?) or similar interpolation around the peak, as used for image scaling, to better localize the peak. This is commonly done in image processing, for example when localizing peaks in difference-of-Gaussian stacks for blob detection, by performing a cubic approximation in each dimension using the respective neighbouring pixels.
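As an illustration (not necessarily the best interpolant), here is a sketch that refines the integer-valued peak of a 3D correlation volume C with a separate quadratic (parabolic) fit per dimension; a tricubic fit would use more neighbours but follows the same pattern:
% C is the 3D cross-correlation volume; find the integer-valued peak first.
[~, idx] = max(C(:));
[i, j, k] = ind2sub(size(C), idx);
% Parabolic sub-pixel refinement in each dimension separately, using the two
% neighbouring samples (assumes the peak does not lie on a border of C).
offset = @(fm, f0, fp) (fm - fp) / (2*(fm - 2*f0 + fp));
di = offset(C(i-1,j,k), C(i,j,k), C(i+1,j,k));
dj = offset(C(i,j-1,k), C(i,j,k), C(i,j+1,k));
dk = offset(C(i,j,k-1), C(i,j,k), C(i,j,k+1));
peak_subpixel = [i + di, j + dj, k + dk];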
I am a beginner in the digital image processing field, and I am currently working on a project where I have to decompose an image into two frequency components, namely low and high, using the DCT. I searched a lot on the web and found that MATLAB has a built-in function for the Discrete Cosine Transform, which is used like this:
dct_img = dct2(img);
where img is input image and dct_img is resultant DCT of img.
Question
My question is: "How can I decompose dct_img into its low- and high-frequency components?"
As you've mentioned, dct2 and idct2 will do most of the job for you. The question that remains is: what is high-frequency and what is low-frequency content? The coefficients of the two-dimensional transform each represent two frequencies (one in the x- and one in the y-direction). The following figure shows the bases for each coefficient in an 8x8 discrete cosine transform:
Therefore, the question of low vs. high can be answered in different ways. A common way, which is also used in JPEG encoding, splits the spectrum diagonally, proceeding from zero frequency up to the maximum as shown above. As we can see in the following example, this is mostly motivated by the fact that the energy of natural images is largely concentrated in the "top left" corner of "low" frequencies. It is certainly worth looking at the result of dct2 and playing around with the actual choice of your regions for high and low.
In the following I'm dividing the spectrum diagonally and also plotting the DCT coefficients, on a logarithmic scale, because otherwise we would just see one big peak around (1,1). In the example I'm cutting far above half of the coefficients (adjustable with cutoff), and we can see that the high-frequency part ("HF") still contains some relevant image information. If you set cutoff to 0 or below, only low-amplitude noise will be left.
%// Load an image
Orig = double(imread('rice.png'));
%// Transform
Orig_T = dct2(Orig);
%// Split between high- and low-frequency in the spectrum (*)
cutoff = round(0.5 * 256);
High_T = fliplr(tril(fliplr(Orig_T), cutoff));
Low_T = Orig_T - High_T;
%// Transform back
High = idct2(High_T);
Low = idct2(Low_T);
%// Plot results
figure, colormap gray
subplot(3,2,1), imagesc(Orig), title('Original'), axis square, colorbar
subplot(3,2,2), imagesc(log(abs(Orig_T))), title('log(DCT(Original))'), axis square, colorbar
subplot(3,2,3), imagesc(log(abs(Low_T))), title('log(DCT(LF))'), axis square, colorbar
subplot(3,2,4), imagesc(log(abs(High_T))), title('log(DCT(HF))'), axis square, colorbar
subplot(3,2,5), imagesc(Low), title('LF'), axis square, colorbar
subplot(3,2,6), imagesc(High), title('HF'), axis square, colorbar
(*) Note on tril: the lower-triangle function operates with respect to the main diagonal from top-left to bottom-right; since I want the other diagonal, I flip the matrix left-right before and afterwards.
Also note that these kinds of operations are not usually applied to entire images, but rather to blocks of e.g. 8x8. Have a look at blockproc and this article.
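For example, a block-wise transform could look roughly like this (a sketch, assuming the Image Processing Toolbox's blockproc is available):
% Block-wise 8x8 DCT and inverse, as used e.g. in JPEG.
Orig    = double(imread('rice.png'));
dct8    = @(block) dct2(block.data);
idct8   = @(block) idct2(block.data);
Orig_T8 = blockproc(Orig, [8 8], dct8);      % 8x8 DCT coefficients per block
Rec     = blockproc(Orig_T8, [8 8], idct8);  % reconstruct to verify the round trip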
An easy example:
I2 = dct_img;
I2(9:end, :) = 0;    % keep only the first 8 rows of DCT coefficients
I2(:, 9:end) = 0;    % ... and the first 8 columns, i.e. the low-frequency block
I3 = idct2(I2);
imagesc(I3)
I3 can be seen as the image after a low-pass filter (the low-frequency components), and idct2(dct_img - I2) can be viewed as the high-frequency part.
I want to get a metric of the straightness of the contours in my binary image (and compute it relatively fast). The image looks as follows:
Now, the contours in the red box are the ones which I would preferably like to remove, since they are not straight. These are the things I have tried (I am currently implementing this in MATLAB):
1. Collect the row and column coordinates of each contour and then take the derivative. For straight objects (such as a rectangle), the derivative will be mostly low with a few spikes (at the corners of the rectangle).
Problem: the collected coordinates are not in order, i.e. not in the order in which the contour would be traversed if we imagine it as a path. Therefore, the derivative sometimes gives absurdly high values. Also, the contour is not absolutely straight; it is the output of an edge detection algorithm, so you can imagine that there might be some discontinuities (see the rectangle at the bottom: the human eye can tell that it is a rectangle even though it is not absolutely straight).
2. I considered polyfit, but the same contour-ordering issue comes up. Since it's a rectangle, I don't know how to apply polyfit to that point set.
Also, I would like to remove contours which are distributed vertically/horizontally. Basically, this is a lane detection algorithm, so lanes cannot be absolutely vertical/horizontal.
Any ideas?
You should look into the features of regionprops more. To be fair I stole the script from this answer, but here it is:
BW = imread('lanes.png');
BW = im2bw(BW);
figure(1),
subplot(1,2,1);
imshow(BW);
cc = bwconncomp(BW);                 % find connected components
l = labelmatrix(cc);                 % label matrix, used with ismember below
a_rp = regionprops(cc,'Area','MajorAxisLength','MinorAxisLength','Orientation','PixelList','Eccentricity');
idx = ([a_rp.Eccentricity] > 0.99 & [a_rp.Area] > 100 & [a_rp.Orientation] < 70 & [a_rp.Orientation] > -90);
BW2 = ismember(l,find(idx));         % keep only the components that pass the test
subplot(1,2,2);
imshow(BW2);
You can mess around with the properties. 'Orientation', 'Eccentricity', and 'Area' are probably the parameters you want to mess with. I also messed with the ratios of the major/minor axis lengths but eccentricity basically does this (eccentricity is a measure of how "circular" an ellipse is). Here's the output:
I actually saw a good video from MATLAB (MathWorks) specifically about lane detection using regionprops. I'll see if I can find it and link it.
You can segment your image using bwlabel, then work separately on each labelled connected object, using find. This should help solve your ordering problem.
Regarding a metric, the only thing that comes to mind at the moment is to fit an ellipse to each object and use the a/b (major axis / minor axis) ratio (basically the eccentricity) as the parameter. For example, a straight line (even if not perfect) will be fitted by an ellipse with a very large major axis and a very small minor axis, so you could set a ratio threshold of, say, >10. Fitting an ellipse can be done using this FEX submission, for example.
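A minimal sketch of that idea, using the ellipse that regionprops fits to each labelled object instead of the FEX submission; the threshold of 10 and the file name 'lanes.png' are just example values taken over from the other answer:
BW = im2bw(imread('lanes.png'));
L  = bwlabel(BW);                            % label each connected object
rp = regionprops(L, 'MajorAxisLength', 'MinorAxisLength');
ratio = [rp.MajorAxisLength] ./ [rp.MinorAxisLength];
keep  = ratio > 10;                          % example straightness threshold
BW2   = ismember(L, find(keep));             % keep only elongated objects
imshow(BW2)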
I am trying to convert an array of velocity values to acceleration values. I understand that acceleration is the integral of velocity, but I don't know how to achieve this. I am using MATLAB, so if anyone can offer a solution in this language, I would be very grateful! See the graph below:
The yellow line plots the velocity and the vertical dotted lines show the peaks and troughs of that waveform (peaks and troughs found using peakdet). The green horizontal stuff in the middle is unrelated to this question.
What I am trying to isolate is the steepest part of the large downward slopes on the curve above. Can anyone offer any advice on how to calculate this?
P.S. I am aware that quad() is the function used to integrate in MATLAB but don't know how to implement it in this situation.
Acceleration is the derivative of velocity, not the integral.
If your velocity values are stored in v and your samples are dt seconds apart, you can get a quick numerical derivative of v with
a = diff(v) / dt
Be aware that if v is a real rather than synthetic signal, a is likely to be pretty noisy, so some smoothing may be in order, depending on how you're going to use it.
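To then isolate the steepest part of the downward slopes mentioned in the question, a rough sketch (the sample spacing dt and the smoothing window are assumptions to adjust):
dt = 0.01;                        % assumed sample spacing in seconds (adjust)
a  = diff(v) / dt;                % numerical derivative of velocity
a  = movmean(a, 11);              % light smoothing, since v is a real, noisy signal
[~, iSteep] = min(a);             % most negative slope = steepest downward part
fprintf('Steepest downward slope at sample %d\n', iSteep);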