I was trying to animate different moves in MATLAB. The post "Matlab for loop animations" helped me a lot, but I would like to change moves after some time. Thus, after defining the trajectories, I animated them. Could you please have a look?
I wanted to keep the speed of the dot fixed, so I solved it with differential equations, which are defined in separate files. I have also defined the times tf, tf1, and so on. I used exactly the same approach as suggested at the link above, with hPoint.
tf = 4*pi/15;    % time at which 4*pi is completed (speed = 15)
tf1 = 2 + tf;
tf2 = pi/15 + tf1;
[t, X]  = ode45(@dif,  [0 tf],    [0 -15 -15]);
p1  = [X(:,2)  X(:,3)];
[t, X2] = ode45(@dif2, [tf tf1],  [-15 -15]);
p1a = [X2(:,1) X2(:,2)];
[t, X3] = ode45(@dif,  [tf1 tf2], [0 -15 15]);
p1b = [X3(:,2) X3(:,3)];
D = [p1(:,1)  p1(:,2)
     p1a(:,1) p1a(:,2)
     p1b(:,1) p1b(:,2)];
hPoint = line('XData',D(1,1), 'YData',D(1,2), 'EraseMode',ERASEMODE, ...
    'Color','r', 'Marker','o', 'MarkerSize',50, 'LineWidth',1);
However, when I animate it, the dot pauses briefly and then continues. This happens especially along p1b, the third part (the upper circle). Any ideas about this behavior? Is there a way to make it smooth and animate at a constant speed? Thank you in advance!
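One plausible cause (an assumption, not something stated in the question) is that ode45 returns non-uniformly spaced time steps, so consecutive rows of D correspond to different time intervals even though they are played back one per frame. A minimal sketch of resampling one segment onto a uniform time grid with interp1 before animating; nFrames and the pause value are illustrative:

```matlab
% Resample an ODE segment onto a uniform time grid so that
% one animation frame always corresponds to the same time step.
nFrames = 200;                             % frames for this segment (illustrative)
tuni = linspace(t(1), t(end), nFrames);    % uniform time grid
Duni = interp1(t, [X(:,2) X(:,3)], tuni);  % positions at uniform times

for k = 1:nFrames
    set(hPoint, 'XData', Duni(k,1), 'YData', Duni(k,2));
    drawnow;
    pause(0.02);                           % fixed delay per frame
end
```

Doing the same for the other two segments before concatenating them into D should keep the apparent speed constant across all three parts.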
Please, I'm trying to detect finger movement in a video. First, I'd like to apply skin-color detection to separate the hand from the background; then I'll find the hand contour and calculate the convex hull points to detect the fingers. What I want now from this video is a new video showing only the movement of the two tapping fingers (or their contour), as shown in this figure.
I used this code to detect the skin color:
function OutImg = Skin_Detect(I)
% I = imread('D:\New Project\Movie Frames from RLC_L_FTM_IP60\Frame 0002.png');
I = double(I);
HSV = rgb2hsv(I/255);   % rgb2hsv returns a single m-by-n-by-3 array for images
hue = HSV(:,:,1);
cb = 0.148*I(:,:,1) - 0.291*I(:,:,2) + 0.439*I(:,:,3) + 128;
cr = 0.439*I(:,:,1) - 0.368*I(:,:,2) - 0.071*I(:,:,3) + 128;
[w, h] = size(I(:,:,1));
segment = zeros(w, h);  % preallocate the mask
for i = 1:w
    for j = 1:h
        if 138 <= cr(i,j) && cr(i,j) <= 169 && ...
           136 <= cb(i,j) && cb(i,j) <= 200 && ...
           0.01 <= hue(i,j) && hue(i,j) <= 0.2
            segment(i,j) = 1;
        else
            segment(i,j) = 0;
        end
    end
end
% imshow(segment);
OutImg(:,:,1) = I(:,:,1).*segment;
OutImg(:,:,2) = I(:,:,2).*segment;
OutImg(:,:,3) = I(:,:,3).*segment;
% figure, imshow(uint8(OutImg));
end
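As a side note, the double loop in Skin_Detect can be replaced with logical indexing, which is both faster and less error-prone. A sketch of the equivalent vectorized mask, using the hue, cb, and cr arrays already computed in the function:

```matlab
% Equivalent vectorized mask: all thresholds applied element-wise at once.
segment = (cr >= 138 & cr <= 169) & ...
          (cb >= 136 & cb <= 200) & ...
          (hue >= 0.01 & hue <= 0.2);
% Zero out non-skin pixels in all three color channels.
OutImg = I .* repmat(double(segment), [1 1 3]);
```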
The same code works perfectly when I apply it to an image, but it detects nothing when I apply it to a video, as follows:
videoFReader = vision.VideoFileReader('RLC_L_FT_IP60.m4v');
% Create a video player object for displaying video frames.
videoPlayer = vision.DeployableVideoPlayer;
% Display the original video
while ~isDone(videoFReader)
videoFrame = step(videoFReader);
% Track using the Hue channel data
Out=Skin_Detect(videoFrame);
step(videoPlayer,Out);
end
Please, any suggestions or ideas to solve this problem?
I'd be so grateful if anyone could help with this, even with different code.
Thank you in advance.
I had a similar problem. I think a proper way is to use a classifier, even if it is a "simple" classifier... Here are the steps I followed in my solution:
1) I used the RGB color space and a Mahalanobis distance for the skin-color model. It is fast and works pretty well.
2) Connected components: A simple morphological close operation with a small structuring element can be used to join areas that might be disconnected during imperfect thresholding, like the fingers of the hand.
3) Feature extraction: area, perimeter, and ratio of area over perimeter, for example.
4) Classification: use an SVM classifier to perform the final classification. I hope you have labeled training data for the process.
I am not solving exactly your concrete problem, but maybe it could give you some ideas... :)
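Step 1 above can be sketched as follows. This is an illustration, not the answerer's actual code: skinPixels is an assumed N-by-3 matrix of RGB samples taken from labeled skin regions, and the distance threshold is illustrative:

```matlab
% Fit a skin-color model from example skin pixels (N-by-3 RGB samples),
% then threshold each pixel of a new image by Mahalanobis distance.
mu = mean(skinPixels);                  % 1x3 mean skin color
Sigma = cov(skinPixels);                % 3x3 covariance of skin colors
invSigma = inv(Sigma);

P = reshape(double(img), [], 3);        % image pixels as rows
d = P - mu;                             % implicit expansion (R2016b+)
dist2 = sum((d * invSigma) .* d, 2);    % squared Mahalanobis distance
mask = reshape(dist2 < 9, size(img,1), size(img,2));  % ~3 std devs (assumed)
```

The morphological close of step 2 can then be applied to mask with imclose and a small strel before feature extraction.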
If you don't insist on writing it yourself, you can use Google's MediaPipe for hand and finger tracking.
Info:
https://ai.googleblog.com/2019/08/on-device-real-time-hand-tracking-with.html
Examples for Desktop and Android:
https://github.com/google/mediapipe/blob/master/mediapipe/docs/hand_tracking_mobile_gpu.md
I have recently started learning to code in MATLAB, i.e. programming simple experiments for cognitive-psychology investigations. I wanted to ask whether someone knows both how to define where to draw a dot on the screen and how to define the fixation time before stimulus onset. I know that the code for defining a dot position is the following:
dotXpos = [?] * screenXpixels;
dotYpos = [?] * screenYpixels;
However, I don't know which coordinates define the exact middle of the screen.
Thank you in advance!
In Psychtoolbox, most of the fundamental drawing routines are provided through the Screen function. To draw a dot, you can use the DrawDots subcommand:
Screen('DrawDots', windowPtr, xy [,size] [,color] [,center] [,dot_type]);
Here, xy should contain the positions of the centers of all the dots; for a single dot it is the column vector [dotXpos; dotYpos].
The center position of the screen is:
dotXpos = 0.5 * screenXpixels;
dotYpos = 0.5 * screenYpixels;
To implement a timed delay before the stimulus appears, you can use WaitSecs.
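Putting both pieces together, a minimal sketch following the standard Psychtoolbox pattern; fixationSecs and the dot size/color are assumed values:

```matlab
% Open a window, draw a fixation dot at screen center, wait, then continue.
[windowPtr, windowRect] = Screen('OpenWindow', max(Screen('Screens')));
[screenXpixels, screenYpixels] = Screen('WindowSize', windowPtr);

dotXpos = 0.5 * screenXpixels;          % exact horizontal center
dotYpos = 0.5 * screenYpixels;          % exact vertical center
fixationSecs = 0.5;                     % fixation time before stimulus (assumed)

Screen('DrawDots', windowPtr, [dotXpos; dotYpos], 10, [255 255 255], [], 2);
Screen('Flip', windowPtr);
WaitSecs(fixationSecs);                 % hold fixation before stimulus onset
% ... draw the stimulus here, then Screen('Flip', windowPtr) again ...
```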
Please check out:
https://web.archive.org/web/20160515043421/http://docs.psychtoolbox.org/DrawDots
https://web.archive.org/web/20160419072932/http://docs.psychtoolbox.org/WaitSecs
I am currently trying to record footage from a camera and display it in a MATLAB graphics window using the image command. The problem I'm facing is the slow redraw of the image, which of course affects my whole script. Here's some quick pseudocode to explain my program:
figure
while true
    Frame = AcquireImageFromCamera();  % MEX function, returns current frame
    image(Frame);
end
AcquireImageFromCamera() is a MEX function coming from an API for the camera.
Now, without displaying the acquired image, the script easily grabs all frames coming from the camera (it records at a limited frame rate). But as soon as I display every image for a real-time video stream, it slows down terribly, and frames are lost because they are not captured.
Does anyone have an idea how I could split the process of acquiring images from displaying them, in order to use multiple CPU cores for example? Parallel computing is the first thing that pops into my mind, but the Parallel Computing Toolbox works entirely differently from what I want here...
edit: I'm a student, and my faculty's MATLAB version includes all toolboxes :)
Running two threads or workers is going to be a bit tricky. Instead of that, can you simply update the screen less often? Something like this:
figure
count = 0;
while true
    Frame = AcquireImageFromCamera();  % MEX function, returns current frame
    count = count + 1;
    if count == 5
        count = 0;
        image(Frame);
    end
end
Another thing to try is to call image() just once to set up the plot, then update pixels directly. This should be much faster than calling image() every frame. You do this by getting the image handle and changing the CData property.
h = image(I); % first frame only
set(h, 'CData', newPixels); % other frames update pixels like this
Note that updating pixels like this may then require a call to drawnow to show the change on screen.
I'm not sure how precise your pseudocode is, but creating the image object takes quite a bit of overhead. It is much faster to create it once and then just set the image data.
figure
Frame = AcquireImageFromCamera();      % first frame
himg = image(Frame);
while true
    Frame = AcquireImageFromCamera();  % MEX function, returns current frame
    set(himg, 'CData', Frame);
    drawnow;  % also make sure the screen is actually updated
end
MATLAB has a video player in the Computer Vision Toolbox, which would be faster than using image():
player = vision.VideoPlayer;
while true
    Frame = AcquireImageFromCamera();  % MEX function, returns current frame
    step(player, Frame);
end
I have written code to create an animation (a satellite moving around the Earth). When I run it on its own, it works fine. However, when it is incorporated into a much more complex MATLAB GUI, the results change (mainly because of the larger number of points to plot). I have also noticed that with the OpenGL renderer the satellite moves more quickly than with the other renderers (Painters and Zbuffer). I do not know whether there are further ways to improve the rendering of the satellite's movement. I think the key is perhaps to change the code that creates the satellite's current position (handles.psat) and its trajectory over time (handles.tray):
handles.tray = zeros(1,Fin);
handles.psat = line('parent',ah4, 'XData',Y(1,1), 'YData',Y(1,2), ...
    'ZData',Y(1,3), 'Marker','o', 'MarkerSize',10, 'MarkerFaceColor','b');
...
while (k < Fin)
    az = az + 0.01745329252;
    set(hgrot, 'Matrix', makehgtform('zrotate',az));
    handles.tray(k) = line([Y(k-1,1) Y(k,1)], [Y(k-1,2) Y(k,2)], ...
        [Y(k-1,3) Y(k,3)], 'Color','red', 'LineWidth',3);
    set(handles.psat, 'XData',Y(k,1), 'YData',Y(k,2), 'ZData',Y(k,3));
    pause(0.02);
    k = k + 1;
    if (state == 1)
        state = 0;
        break;
    end
end
...
Did you consider applying a rotation transform matrix to your data instead of to the axes?
I think (though I haven't checked it) that it could speed up your code.
You've used the typical tricks that I use to speed things up, like precomputing the frames, setting XData and YData rather than replotting, and selecting a renderer. Here are a couple more tips though:
1) One thing I noticed in your description is that different renderers and different complexities changed how fast your animation appeared to run. This is usually undesirable. Consider using the actual interval between frames (i.e. tic; dt = toc;) to calculate how far to advance the animation, rather than relying on pause(0.02) to produce a steady frame rate.
2) If the complexity is such that your frame rate is undesirably low, consider replacing pause(0.02) with drawnow, or at least calculate how long to pause on each frame.
3) Try to narrow down the source of your bottleneck a bit further by measuring how long the various steps take. That will let you optimize the right stage of the operation.
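Point 1 can be sketched like this, using the Y, Fin, and handles.psat names from the question. The position is computed from elapsed wall-clock time instead of a fixed per-frame step; the speed value and the use of mod to wrap the orbit are illustrative:

```matlab
% Advance the animation by real elapsed time, so rendering speed
% does not change how fast the satellite appears to move.
speed = 0.25;                 % orbit revolutions per second (assumed)
tStart = tic;
while ishandle(handles.psat)
    tNow = toc(tStart);                           % seconds since start
    k = 1 + mod(round(tNow * speed * Fin), Fin);  % frame index from time
    set(handles.psat, 'XData',Y(k,1), 'YData',Y(k,2), 'ZData',Y(k,3));
    drawnow;                                      % render without a fixed pause
end
```

With this structure, a slow renderer simply drops intermediate frames instead of slowing the satellite down.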
I am experimenting with some new ideas in Cocos2D/Box2D on iPhone.
I want to animate a small swarm of fireflies moving on circular (random?) paths... the idea is that the user can capture a firefly with a net..
I have considered using gravity simulations for this, but I believe that is overcomplicating things... my previous experience with Bezier curves tells me that this isn't the solution either...
Does anyone have any bright insights for me?
Thanks so much.
Do you need the fireflies to collide with each other?
I ask because, if this isn't a requirement, Box2D is probably overkill for your needs. Cocos2d sounds like an excellent choice for this, but I think you'd be better off looking into flocking algorithms like boids.
Even that may be overly complicated. Mixing a few sine and cosine terms together with some random scaling factors will likely be enough.
You could have one sin/cosine combination forming an ellipse nearly the size of the screen:
x = halfScreenWidth + cos (t) * halfScreenWidth * randomFactor;
y = halfScreenHeight + sin (t) * halfScreenHeight * randomFactor;
where randomFactor would be something in the range of 0.6 to 0.9.
This will give you broad elliptical motion around the screen, then you could add a smaller sin/cos factor to make them swirl around the point on that ellipse.
By multiplying your time parameter (t) by different values (negative and positive), the path of the curve will move in a less geometric way. For example, if you use
x = halfScreenWidth + cos (2*t) * halfScreenWidth * randomFactor;
the ellipse will turn into a figure eight. (I think!)
Hope this helps get you started. Good luck.
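The path math above is platform-independent, so here is a quick sketch of it in MATLAB to visualize one firefly's trajectory; the screen size, swirl amplitude, and frequencies are illustrative, and the same formulas port directly to a Cocos2D update loop:

```matlab
% Trace one firefly path: a large ellipse plus a smaller, faster swirl.
halfScreenWidth = 160; halfScreenHeight = 240;  % illustrative screen size
randomFactor = 0.6 + 0.3*rand;                  % per-firefly factor in [0.6, 0.9]
t = linspace(0, 2*pi, 500);

x = halfScreenWidth  + cos(t) .* halfScreenWidth  .* randomFactor ...
    + 20*cos(7*t);                              % small swirl on top
y = halfScreenHeight + sin(t) .* halfScreenHeight .* randomFactor ...
    + 20*sin(7*t);

plot(x, y);  % for the figure-eight variant, replace cos(t) with cos(2*t) in x
```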
One place to look for ideas is the domain of artificial life, which has been simulating swarms of entities for a long time. Here is a link to some simple swarm code written in Java that should give you some ideas.
http://www.aridolan.com/ofiles/Download.aspx