I have written code to create an animation (a satellite moving around the Earth). When I run it on its own, it works fine. However, when it is embedded in a much more complex MATLAB GUI, the results change (mainly because of the larger number of points to plot). I have also noticed that with the OpenGL renderer the satellite moves faster than with the other renderers (Painters and Zbuffer). I don't know whether there are further ways to improve the rendering of the satellite's movement. I think the key is, perhaps, to change the code that creates the satellite's current position (handles.psat) and its trajectory over time (handles.tray):
handles.tray = zeros(1,Fin);
handles.psat = line('parent',ah4,'XData',Y(1,1), 'YData',Y(1,2),...
'ZData',Y(1,3),'Marker','o', 'MarkerSize',10,'MarkerFaceColor','b');
...
while (k < Fin)
    az = az + 0.01745329252;                          % advance by one degree (in radians)
    set(hgrot,'Matrix',makehgtform('zrotate',az));    % rotate the Earth
    handles.tray(k) = line([Y(k-1,1) Y(k,1)],[Y(k-1,2) Y(k,2)],...
        [Y(k-1,3) Y(k,3)],...
        'Color','red','LineWidth',3);                 % append a trajectory segment
    set(handles.psat,'XData',Y(k,1),'YData',Y(k,2),'ZData',Y(k,3)); % move the satellite marker
    pause(0.02);
    k = k + 1;
    if (state == 1)                                   % stop flag set from the GUI
        state = 0;
        break;
    end
end
...
Did you consider applying a rotation transform matrix to your data instead of to the axes? I think (though I haven't checked it) that this could speed up your code.
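A minimal sketch of that idea follows; hEarth and the layout of P (one plotted point per row) are assumptions about your plotting code, not names from it.

R = makehgtform('zrotate', az);          % 4x4 homogeneous rotation about z
Prot = (R(1:3,1:3) * P.').';             % rotate all points in one multiply
set(hEarth, 'XData',Prot(:,1), 'YData',Prot(:,2), 'ZData',Prot(:,3));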
You've used the typical tricks that I use to speed things up, like precomputing the frames, setting XData and YData rather than replotting, and selecting a renderer. Here are a couple more tips though:
1) One thing I noticed in your description is that different renderers and different scene complexities changed how fast your animation appeared to run. This is usually undesirable. Consider using the actual interval between frames (i.e. use tic and toc) to calculate how far to advance the animation, rather than relying on pause(0.02) to produce a steady frame rate; see the sketch after this list.
2) If the complexity is such that your frame rate is undesirably low, consider replacing pause(0.02) with drawnow, or at least calculate how long to pause on each frame.
3) Try to narrow down the source of your bottleneck a bit further by measuring how long the various steps take. That will let you optimize the right stage of the operation.
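Here is a minimal sketch of tip 1), assuming Y holds the precomputed trajectory points and speed (points per second) is a parameter you would choose:

% Frame-rate independent animation: the index into the trajectory is
% driven by elapsed wall-clock time instead of a fixed-length pause.
speed = 50;                                 % trajectory points per second (assumed)
t0 = tic;
k = 2;
while k < Fin
    k = max(2, 1 + round(toc(t0) * speed)); % how far to advance this frame
    if k >= Fin, break; end
    set(handles.psat,'XData',Y(k,1),'YData',Y(k,2),'ZData',Y(k,3));
    drawnow;                                % flush graphics without pausing
end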
Two cameras take two images of a wooden plank. The images overlap, and I need to stitch them together so the result looks natural, and preferably seamless, to the human eye for inspection purposes. The images are cropped to the same size and masked to remove the background and most of the non-overlapping areas, but the plank can be slightly tilted on the conveyor belt.
Currently I'm using the normxcorr2 function on the general overlap area, following the MATLAB tutorial for normxcorr2, to try to locate one image inside the other and work out an overlay offset. However, this fails quite often because normxcorr2 returns a zero offset, resulting in a bad stitch:
c = normxcorr2(plank_part1, plank_part2);

% Find peak in cross-correlation:
[ypeak, xpeak] = find(c == max(c(:)));

% Account for the padding that normxcorr2 adds:
yoffSet = ypeak - size(plank_part1,1);
xoffSet = xpeak - size(plank_part1,2);

[xoffSet, yoffSet]

ans =

     0     0
It would seem normxcorr2 cannot always find the correct overlay of the images, or sometimes any overlay at all, even though I try to make it easier by increasing the grayscale contrast with histeq. My guess is that the amount of gray-ish area from the sapwood overwhelms the distinct knots, which are the important parts to stitch properly.
Does anyone know a way to make this stitching process more reliable, maybe with some more preprocessing, or with other MATLAB functions that would make this work better?
P.S. I cannot use anything but freely accessible scripts, as anything else would probably cause license/copyright issues for my project.
Thank you for your time in trying to help!
You should look at the following link. The term you should be looking for is image registration. There are more advanced methods than normxcorr2.
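For example, here is a hedged sketch of a feature-based registration pipeline. The functions are from the Computer Vision System Toolbox (so check your licensing constraint), and whether this pipeline suits your planks is an assumption:

% Detect and match SURF features, then estimate the transform between images.
ptsA = detectSURFFeatures(plank_part1);
ptsB = detectSURFFeatures(plank_part2);
[fA, vptsA] = extractFeatures(plank_part1, ptsA);
[fB, vptsB] = extractFeatures(plank_part2, ptsB);
pairs = matchFeatures(fA, fB);
tform = estimateGeometricTransform(vptsA(pairs(:,1)), vptsB(pairs(:,2)), 'similarity');
% Warp one plank image into the coordinate frame of the other for stitching.
registered = imwarp(plank_part1, tform, 'OutputView', imref2d(size(plank_part2)));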
I'm new to Kalman filtering, but is it possible to apply a Kalman filter for prediction and tracking of objects in video frames using MATLAB?
Further info: I have a sequential set of 20 images of a bullet coming out of a gun (a burst shot of images). I did some image processing on the frames, and now I'm able to represent the bullet as a point.
Can I predict the bullet's position in the 21st frame?
NOTE: I was told that I need to loop over the image frames, turn them into a video, and then feed that to the Kalman filter for prediction. But is it possible to do the prediction without making the frames into a video?
Thank you.
If you have applied a Kalman filter to the first 20 frames, then you will understand the following answer.
If you don't have the 21st frame, then
X(t+1) = A*X(t) + B*u + noise
This is the predict step, and with it you can predict the value at frame 21. A = state matrix, B = control matrix, X(t) = position of the bullet in the 20th frame as estimated by the Kalman filter.
If you don't have the 21st frame, you can use the above value to show the bullet in the 21st frame. This is one of the main features of the Kalman filter: even when you have no observation, you can still predict the value in the next frame. That is a very common practical case, because sensors often fail to pick up the object, leaving you without an observation value.
And if you do get the 21st frame, take the observation value and update your prediction.
I would recommend Student Dave's tutorial on YouTube for Kalman filtering and tracking, and the related course on Udacity.
A Kalman filter function for your problem would look like this (C++ with OpenCV):
// A, H, P, Ex, Ez, I, Prior and Predict are assumed to be cv::Mat
// state shared with the caller (e.g. globals), as in the original.
void Kalman_Filter(float *Zx, float *Zy)
{
    Mat Zt = (Mat_<float>(2, 1) << *Zx, *Zy);
    // Prediction
    Predict = A*Prior; // + B*a if you model a control input
    // Predicted covariance
    P = A*P*A.t() + Ex;
    // Measurement update: Kalman gain
    Mat Kt = P*H.t()*(H*P*H.t() + Ez).inv();
    // Correct the prediction with the measurement
    Prior = Predict + Kt*(Zt - H*Predict);
    // Update the covariance
    P = (I - Kt*H)*P;
    // Return the filtered position
    *Zx = Prior.at<float>(0, 0);
    *Zy = Prior.at<float>(1, 0);
}
Here *Zx and *Zy are the observation values; they contain the x and y position of the point-sized bullet that you have found.
The "Computer Vision System Tooblox" has an implementation of Kalman filters: vision.KalmanFilter
There is a demo in the documentation that shows this in action:
Using Kalman Filter for Object Tracking
Note that it doesn't matter where the frames are coming from (they could be a sequence of images or an actual video).
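A hedged sketch of that workflow, assuming positions is an N-by-2 array of the bullet locations you extracted from your 20 frames (the filter parameters are the ones from the documentation example, not values tuned for your data):

% Configure a constant-velocity Kalman filter from the first detection.
kf = configureKalmanFilter('ConstantVelocity', positions(1,:), ...
    [1e5 1e5], [25 10], 25);
for i = 2:size(positions,1)
    predict(kf);                  % predicted location for frame i
    correct(kf, positions(i,:));  % correct it with the measured location
end
predictedNext = predict(kf);      % estimated bullet position in frame 21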
Predicting the position is not very hard: you average the changes in position over time and use that to predict the new position.
For example, let's say your bullet is at positions 50, 46, 41, 37 in subsequent frames with a known (and constant) frame rate. The position differences are -4, -5, -4.
Use an average of your liking (a Kalman filter uses exponential averaging). The mean speed is -4.33.
The predicted position in the next frame is therefore 37 + (-4.33) = 32.67 pixels.
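In MATLAB the same arithmetic is a two-liner:

pos = [50 46 41 37];                  % observed positions from the example
nextPos = pos(end) + mean(diff(pos)); % 37 + (-4.33) = 32.67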
From what I see, you do not need a Kalman filter. If all you do is observe a position, predict, and update, you are not really using its strengths. A Kalman filter shines when you have multiple sensors measuring related quantities, or complicated system behavior. (In high-speed imaging you can often ignore gravity, and since all your motion is linear, you have an easy system.)
To answer the question: yes, you can use a Kalman filter for tracking a bullet. However, for such a simple use case it seems like overkill.
I am currently trying to record footage from a camera and display it in a MATLAB figure window using the image command. The problem I'm facing is the slow redraw of the image, and this of course affects my whole script. Here's some quick pseudo code to explain my program:
figure
while(true)
    Frame = AcquireImageFromCamera(); % Mex, returns current frame
    image(Frame);
end
AcquireImageFromCamera() is a MEX function coming from the camera's API.
Without displaying the acquired images, the script easily grabs all frames coming from the camera (it records with a limited frame rate). But as soon as I display every image for a real-time video stream, it slows down terribly, and frames are lost because they are never captured.
Does anyone have an idea how I could split the acquisition and the display into separate processes, in order to use multiple CPU cores for example? Parallel computing is the first thing that pops into my mind, but the parallel toolbox works entirely differently from what I want here...
edit: I'm a student, and my faculty's MATLAB version includes all toolboxes :)
Running two threads or workers is going to be a bit tricky. Instead of that, can you simply update the screen less often? Something like this:
figure
count = 0;
while(true)
    Frame = AcquireImageFromCamera(); % Mex, returns current frame
    count = count + 1;
    if count == 5
        count = 0;
        image(Frame);                 % only redraw every fifth frame
    end
end
Another thing to try is to call image() just once to set up the plot, then update pixels directly. This should be much faster than calling image() every frame. You do this by getting the image handle and changing the CData property.
h = image(I); % first frame only
set(h, 'CData', newPixels); % other frames update pixels like this
Note that updating pixels like this may then require a call to drawnow to show the change on screen.
I'm not sure how precise your pseudo code is, but creating the image object takes quite a bit of overhead. It is much faster to create it once and then just set the image data.
figure
Frame = AcquireImageFromCamera(); % grab one frame to create the image object
himg = image(Frame);
while(true)
    Frame = AcquireImageFromCamera(); % Mex, returns current frame
    set(himg,'CData',Frame);
    drawnow; % also make sure the screen is actually updated
end
MATLAB has a video player in the Computer Vision Toolbox, vision.VideoPlayer, which would be faster than using image().
player = vision.VideoPlayer;
while(true)
    Frame = AcquireImageFromCamera(); % Mex, returns current frame
    step(player, Frame);
end
I'm trying to write code that helps me in my biology work.
The idea is to analyze a video file of contracting cells in a tissue.
Example 1
Example 2: youtube.com/watch?v=uG_WOdGw6Rk
And plot out the following:
Count of beats per min.
Strength of beat
Regularity of beating
And so I wrote MATLAB code that loops through a video, compares each frame with the one that follows it, checks whether anything changed between the frames, and plots these changes on a curve.
Example of my code's results
Core of the current code I wrote:
% (Assumes vidObj, framerate, totalframes and an initial binarized
% reference frame `vid` have been set up earlier in the script.)
amp = [];
time = [];
for i = 2:totalframes
    compared = read(vidObj, i);
    ref = rgb2gray(compared);                      % convert to gray
    level = graythresh(ref);                       % calculate threshold
    compared = im2bw(compared, level);             % convert to binary
    differ = sum(sum(imabsdiff(vid, compared)));   % total difference between the 2 frames
    if (differ ~= 0) && ~any(amp == differ)        % 0 = no change, so don't record that
        amp(end+1) = differ;                       % save the difference
        time(end+1) = i/framerate;                 % matching time stamp in seconds
        vid = compared;                            % current frame becomes the reference
    end
end
figure, plot(time, amp);                           % plot difference against time
=====================
So that's my code, but is there a way I can improve it to get better results?
I get the feeling that imabsdiff is not exactly what I should use, because my video contains a lot of noise that strongly affects my results, and I suspect my amp data is largely an artifact of that noise!
Also, I can currently only extract the beating rate from this, by counting peaks. How can I improve my code so it can extract all the required data?
This is a small portion of the code; if you need more info, please let me know. Thanks, I really appreciate your help.
You say you are trying to write a "simple code", but this is not really a simple problem. If you want to measure the motion accurately, you should use an optical flow algorithm or look at the deformation field from a registration algorithm.
EDIT: As Matt is saying, and as we see from your curve, your method is suitable for extracting the number of beats and the regularity. To accurately find the strength of the beats, however, you need to calculate the movement of the cells (more movement = stronger beat). Unfortunately, this is not straightforward, and that is why I gave you links to two algorithms that can calculate the movement for you.
A few fairly simple things to try that might help:
I would look in detail at what your thresholding is doing, and whether that's really what you want. I don't know exactly what graythresh does, but it may be lumping features that you would want to distinguish into the same pixel values. Have you tried plotting the differences between images without thresholding? Or you could threshold into multiple classes rather than just black and white.
If noise is the main problem, you could try smoothing the images before taking the difference, so that differences in noise are evened out but differences in large features, caused by motion, remain (see the sketch after this answer).
You could try edge-detecting your images before taking the difference.
As a previous answerer mentioned, you could also look into motion-tracking and registration algorithms, which estimate the actual motion between images rather than just telling you whether the images differ. I think this is a decent summary on Wikipedia: http://en.wikipedia.org/wiki/Video_tracking. But these methods can be rather complicated.
I think if all you need is the time and period of contractions, though, then you don't necessarily need detailed motion tracking or deformable registration between images. All you need to know is when they change significantly. (The "strength" of a contraction is another matter; to define that rigorously you probably would need to know the actual motion going on.)
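A hedged sketch of the smoothing-before-differencing suggestion above; frameA and frameB stand in for two consecutive RGB frames, and the kernel size is a guess you would tune to your noise level:

g = fspecial('gaussian', [7 7], 2);        % Gaussian smoothing kernel
refS = imfilter(rgb2gray(frameA), g);      % smooth both frames
curS = imfilter(rgb2gray(frameB), g);
differ = sum(sum(imabsdiff(refS, curS)));  % noise is evened out, motion remains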
What are the structures we see in the video? For example, what is the big dark object in the lower part of the image? That object would be relatively easy to track, but would data from it be relevant to cell contraction?
Is this image from a light microscope? At what magnification? What is the scale?
From the video it looks like there are several motions and regions of motion. So should you focus on a smaller or a larger area for your measurements? Per-cell contraction or region contraction? From experience I know that changing what you do at the microscope can be much better than complex image processing ;)
I had success with Gunn and Nixon's dual snake for a similar problem:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.64.6831
I placed the first approximation in the first frame by hand and used the segmentation result as the starting curve for the next frame, and so on. My implementation is from 2000 and I only have it on paper, but if you find Gunn and Nixon's paper interesting I can probably find my code and scan it.
@Matt suggested smoothing and edge detection to improve your results. This is good advice. You can combine smoothing, thresholding and edge detection in one function call: the Canny edge detector. Then you can dilate the edges to get greater overlap between frames; little overlap probably means a big movement between frames. You can use this the same way as before to find the beat. You can then make a second pass and add up all the dilated edge images belonging to one beat. This should give you an idea of the area traced out by the cells as they move through a contraction, which may be a useful measure of contraction for a large cluster of cells.
I don't have access to MATLAB and the Image Processing Toolbox right now, so I can't give you tested code. Here are some pointers: http://www.mathworks.se/help/toolbox/images/ref/edge.html , http://www.mathworks.se/help/toolbox/images/ref/imdilate.html and http://www.mathworks.se/help/toolbox/images/ref/imadd.html .
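In that spirit, an untested, hedged sketch of the Canny + dilate + accumulate idea; frames is an assumed cell array of grayscale frames belonging to one beat, and the dilation radius is a parameter to tune:

se = strel('disk', 3);                      % dilation radius to tune
accum = zeros(size(frames{1}));
for i = 1:numel(frames)
    e = edge(frames{i}, 'canny');           % smoothing + thresholding + edges in one call
    accum = imadd(accum, double(imdilate(e, se)));
end
imagesc(accum);                             % area traced out by the moving cells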
Users can sketch in my app using a very simple tool (move the mouse while holding the LMB). This results in a series of mousemove events, and I record the cursor location at each event. The resulting polyline tends to be rather dense, with recorded points almost every other pixel. I'd like to smooth this pixelated polyline, but I don't want to smooth away intended kinks. So how do I figure out where the kinks are?
The image shows the recorded trail (red pixels) and the 'implied' shape as a human would understand it. People tend to slow down near corners, so there is usually even more noise here than on the straight bits.
Polyline tracker http://www.freeimagehosting.net/uploads/c83c6b462a.png
What you're describing may be related to gesture recognition techniques, so you could search on them for ideas.
The obvious approach is to apply a curve fit, but that will smooth away all the interesting details and kinks. Another suggested approach is to look at speeds and accelerations, but that can get hairy (direction changes can be very fast, or very slow and deliberate).
A fairly basic but effective approach is to simplify the samples directly into a polyline.
For example, work your way through the samples, say from sample 1 to sample 4, and check whether all 4 samples lie within a reasonable error of the straight line between samples 1 and 4. If they do, extend this to samples 1..5 and repeat, until the straight line from the start point to the end point no longer provides a reasonable approximation of the curve defined by those samples. Then create a line segment up to the previous sample point and start accumulating a new segment.
You have to be careful about your thresholds when the samples are very close to each other, so you might want to adjust the sensitivity for samples fewer than 4-5 pixels apart.
This will give you a set of straight lines that follows the original path fairly accurately; a sketch is below.
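A hedged MATLAB sketch of this incremental simplification; pts is an assumed N-by-2 array of recorded cursor positions in draw order (duplicate consecutive points removed), and tol is the allowed error in pixels:

function poly = simplify_polyline(pts, tol)
    poly = pts(1,:);
    s = 1;                                   % start of the current segment
    for e = 3:size(pts,1)
        % distance of the intermediate samples from the chord s..e
        a = pts(s,:); b = pts(e,:);
        d = abs((b(1)-a(1))*(a(2)-pts(s+1:e-1,2)) - ...
                (a(1)-pts(s+1:e-1,1))*(b(2)-a(2))) / norm(b - a);
        if max(d) > tol                      % chord no longer fits: cut at e-1
            poly(end+1,:) = pts(e-1,:);
            s = e - 1;
        end
    end
    poly(end+1,:) = pts(end,:);
end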
If you require additional smoothing, or want to create a scalable vector graphic, you can then curve-fit from the polyline. First identify the kinks (the places where the angle between one segment and the next is sharp; e.g. anything over 140 degrees is considered a smooth curve, anything less a kink) and break the polyline at those discontinuities. Then curve-fit each of these sub-sections of the original gesture to smooth them. This has the effect of smoothing the smooth stuff and sharpening the kinks. (You could go further and insert small smooth corner fillets instead of sharp joints to soften the joins.)
Brute force, but it may just achieve what you want.
Rather than trying to do this from the resulting data, have you considered looking at the timing of the data as it comes in? If the mouse stops or slows noticeably, you use the trend since the last 'kink' (the last time the mouse slowed) to establish the direction of travel. If the user then goes off in a new direction, you call it a kink; otherwise you ignore the current slowing trend and wait for the next one.
Well, one way would be to use a true curve-fitting algorithm. Generate a Bezier curve (with exact endpoints, using Catmull-Rom or something similar), then optimize and recursively subdivide (using the distance from the actual line points as a cost metric). This may be too complicated for your use case, though.
Record the order the pixels are drawn in. Then compute the slope between pixels that are "near" but not "close". I'm guessing a graph of the slope between pixel(i) and pixel(i+7) would exhibit easily identifiable "jumps" around the kinks in the curve.
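A quick sketch of that idea; pts is an assumed N-by-2 array of recorded cursor positions in draw order:

k = 7;                                   % "near but not close" offset
d = pts(1+k:end,:) - pts(1:end-k,:);     % vector from pixel(i) to pixel(i+k)
ang = atan2(d(:,2), d(:,1));             % slope angle for each i
plot(abs(diff(unwrap(ang))));            % spikes mark kinks in the curve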