I'm building a standalone GUIDE application that uses impoly.
When I drag the vertices of a polygon created with impoly, or drag the polygon itself, there is a noticeable delay; this doesn't happen when I run the MATLAB script directly.
Even a simple script like the one below shows the delay after being compiled to an executable.
figure, imshow('peppers.png')
h = impoly(gca, []);
What is causing the delay here, and how can I solve this?
I know that writing my own drawing functions and WindowButton callbacks would be faster, but I don't want to give up the convenience of an impoly object, since the interaction is handled by internal code.
The compiler version is R2011a.
Updates:
It is not only impoly that becomes slow when deployed; all rendering of graphics objects slows down. The pan and zoom tools lag as well.
The workaround is to call uiwait to block execution before the app exits, but I don't know why that solves the problem.
Related
I started with the vol3d function from the MATLAB File Exchange for the 3D display part, but it is slowing down the entire application.
I am working on a MATLAB-based GUI that needs to display a volume of size 512x512x512, single precision. The display has four different views of the volume: three standard orthogonal views, and a fourth 3D-rendered isometric view.
With vol3d, the display looks fine, but it makes the GUI laggy and slow. If I remove the vol3d call, the GUI works fine and is faster.
I am new to the field of 3D volume rendering. What are the alternatives for volume rendering in MATLAB? Is there a way to call a C subroutine through MEX, do the calculation in C, and display the result in MATLAB? I have a good GPU (GeForce GTX Titan X, 12 GB), but I am afraid I am not utilizing it well for the volume rendering.
Any suggestions are welcome.
Thanks for reading :)
I am working on a drone-based video surveillance project and need to implement object tracking. I have tried conventional approaches, but they seem to fail because the environment is not static.
This is an example of what I would want to achieve, but it uses background subtraction, which is impossible with a non-static camera.
I have also tried feature-based tracking using SURF features, but it fails for smaller objects and is prone to false positives.
What would be the best way to achieve the objective in this scenario?
Edit: An object can be anything within a defined region of interest, usually a person or a vehicle. The idea is that the user draws a bounding box to define the region of interest, and the drone then has to track whatever is inside it.
Tracking local features (like SURF) won't work in your case, and neither will training a classifier (like boosting with Haar features). Let me explain why.
The object to track is contained in a bounding box. Inside that box there could be any object, not necessarily the person, car, or other class you used to train your classifier.
Also, the bounding box will contain background clutter near the object, and that clutter changes as soon as the target moves, even if the object's appearance does not.
Moreover, the appearance of the object itself changes (a person turns or drops a jacket, a vehicle catches a reflection of the sun, etc.), or the object gets partially or totally occluded for a while. So tracking local features is very likely to lose the target quickly.
So the first problem is that you must deal with potentially many different objects, possibly unknown a priori, and you cannot train a classifier for each of them.
The second problem is that you must follow an object whose appearance may change, so you need to update your model.
The third problem is that you need some logic that tells you that you lost the tracked object, and you need to detect it again in the scene.
So what to do? Well, you need a good long term tracker.
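Concretely, a long-term tracker interleaves three components matching the three problems above: a short-term tracker, a detector for re-acquisition, and an online model update. Here is a toy Python skeleton of that loop; every component is a stub supplied by the caller, and none of this is the actual TLD algorithm, just the shape of it:

```python
# Toy skeleton of a long-term track-learn-detect loop. Every component is a
# stub passed in by the caller; a real system (e.g. TLD) fills these in.
def long_term_track(frames, model, track, detect, confidence, update, thresh=0.5):
    state, trajectory = None, []
    for frame in frames:
        if state is not None:
            state = track(frame, state)              # short-term tracking
        if state is None or confidence(frame, state, model) < thresh:
            state = detect(frame, model)             # lost: re-detect globally
        if state is not None:
            model = update(model, frame, state)      # learn the new appearance
        trajectory.append(state)
    return trajectory

# Toy world: each "frame" is just the object's true 1-D position. The object
# moves at speed 1, then jumps (e.g. after an occlusion) from 4 to 9.
frames = [2, 3, 4, 9, 10]
trajectory = long_term_track(
    frames,
    model=None,
    track=lambda frame, state: state + 1,            # constant-velocity prediction
    detect=lambda frame, model: frame,               # idealized detector
    confidence=lambda frame, state, model: float(state == frame),
    update=lambda model, frame, state: model,
)
print(trajectory)  # [2, 3, 4, 9, 10] -- tracker recovers after the jump
```

The point of the sketch is the control flow: the detector only runs when confidence drops, which is what lets the tracker survive occlusions and appearance changes.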
One of the best (to my knowledge) is Tracking-Learning-Detection (TLD) by Kalal et al. On the dedicated page you can see a lot of example videos, and it works quite well with moving cameras, objects that change appearance, and so on.
Luckily for us, OpenCV 3.0.0 has an implementation of TLD, and you can find sample code here (there is also a MATLAB + C implementation on the aforementioned site).
The main drawback is that this method can be slow. Test whether that is an issue for you; if so, you can downsample the video stream, upgrade your hardware, or switch to a faster tracking method, depending on your requirements and needs.
Good luck!
The simplest thing to try is frame differencing instead of background subtraction: subtract the previous frame from the current one, threshold the difference image to make it binary, and then use some morphology to clean up the noise. With this approach you typically only get the edges of moving objects, but that is often enough for tracking.
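The difference-and-threshold steps above can be sketched in a few lines of NumPy (illustrative only; the morphology step is omitted, and a MATLAB version would use imabsdiff/imbinarize instead):

```python
import numpy as np

def motion_mask(prev, curr, thresh=25):
    """Binary mask of pixels that changed between two grayscale frames."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

# Toy example: a bright 4x4 "object" moves two pixels to the right.
prev = np.zeros((16, 16), dtype=np.uint8)
curr = np.zeros((16, 16), dtype=np.uint8)
prev[6:10, 4:8] = 255
curr[6:10, 6:10] = 255

mask = motion_mask(prev, curr)
# Only the leading and trailing edges change: 2 columns on each side, 4 rows.
print(mask.sum())  # 16
```

Note how only the edges of the moving object appear in the mask, exactly as described above; the overlapping interior cancels out in the difference.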
You can also try to augment this approach with vision.PointTracker, which implements the KLT (Kanade-Lucas-Tomasi) point-tracking algorithm.
Alternatively, you can try using dense optical flow. See opticalFlowLK, opticalFlowHS, and opticalFlowLKDoG.
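For intuition, the Lucas-Kanade step underlying these trackers solves a small least-squares system on image gradients. Here is a minimal NumPy sketch that estimates one global displacement between two frames; this is illustrative only, not the toolbox implementation, which works per point and per window:

```python
import numpy as np

def lk_global_flow(prev, curr):
    """Estimate a single (u, v) displacement between two frames by solving
    the Lucas-Kanade normal equations over the whole image."""
    prev = prev.astype(np.float64)
    curr = curr.astype(np.float64)
    Iy, Ix = np.gradient(prev)            # spatial gradients (rows, cols)
    It = curr - prev                      # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Smooth blob shifted one pixel to the right between frames.
y, x = np.mgrid[0:32, 0:32]
prev = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 50.0)
curr = np.roll(prev, 1, axis=1)
u, v = lk_global_flow(prev, curr)
print(round(u, 2), round(v, 2))  # u is close to 1, v is close to 0
```

The estimate is only accurate for small, smooth motion, which is why the real algorithms use pyramids and local windows.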
I use MATLAB to render a complex mesh (using trimesh, material, camlight, view, ...) and do not need to display it to the user; I just need the rendered image. This is discussed in another question.
All of the suggested solutions (saving as an image, saving into a video object, and using the undocumented hardcopy) are very slow (~1 s), especially compared to rendering the plot itself, which takes less than 0.5 s including painting to the screen.
I believe the cause is that the hardcopy method does not utilize the GPU, while rendering the original plot for display does; using the GPU-Z monitoring tool, I see the GPU working during plotting but not during hardcopy.
The figure uses 'opengl' as its renderer, but hardcopy, which is the underlying implementation of all the suggested methods, doesn't seem to respect this.
Any suggestion on how to configure it to use the GPU?
Edit: following this thread, I have moved to the following call, but GPU usage is still a flatliner.
cdata=hardcopy(f, '-Dopengl', '-r0')
Being used to MATLAB and its great vector-graphics capabilities, I am looking for something similar in OpenCV. OpenCV's drawing functions rasterize lines and points at the pixel level. Currently, I dump the data to a text file, copy-paste it into MATLAB, and do all the plots there. I also thought about using the MATLAB Engine to pass it the parameters and run the plots, but that seems like too much of a mess for a simple debug operation.
I want to be able to do the following:
Zoom in and out of the image
Draw a line/point that is re-rasterized each time I zoom, as in MATLAB.
So far I have found the Image Watch plugin to take care of the zooming, but it does not help with the second part.
Any idea?
OpenCV has a lot of capabilities for processing an image but only minimal ones for displaying the result, and nothing that can display vector graphics like MATLAB. When I need to see polygons on an image (or just polygons), I dump them to a file and use a third-party viewer (usually the Giv viewer).
I'm a bit of a newbie when it comes to threading, so any pointers in the right direction would be a great help. I've got a game with both a fairly heavy update function and a fairly heavy draw function. I assume the majority of the weight in the draw function lands on the GPU. Because of this, I'd like to start calculating the next frame's update while the drawing is happening. Right now, my game loop is quite simple:
Game->Update1();
Game->Update2();
Game->Draw();
Update1() updates variables that do not change game state, so it can run independently of Draw(); that is to say, there should be no fights over data between the two. It is also the bulk of the CPU processing.
Update2() updates variables that Draw() needs, and it is quite fast, so it seems right to run it serially with Draw(). Additionally, I believe the Draw() function is light on the CPU and heavy on the GPU.
What I would like to happen is that, while the GPU is busy processing all the Draw() work, the next frame's Update1() can use the CPU to get the next update ready. I don't seem to be getting this behavior automatically: the Draw() call seems to take a while and block everything until it is done, which is less than ideal.
What's the proper way to do this? Is this already happening, and I'm just not observing it properly?
That depends on what Draw() contains. You should get CPU-GPU parallelism automatically, unless some call inside Draw() synchronizes the CPU with the GPU; one simple example is glReadPixels.
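If the overlap does not happen automatically, you can also force it on the CPU side by running the next frame's Update1() on a worker thread while the current frame draws. A sketch of that pipelining pattern in Python (the function names mirror the question, the bodies are stubs, and in C++ you would use std::async or a worker thread the same way):

```python
from concurrent.futures import ThreadPoolExecutor

# Stubs standing in for the game's phases (names mirror the question).
def update1(frame):          # heavy CPU work, independent of Draw
    return f"state{frame}"

def update2(state):          # fast, produces exactly what Draw needs
    return f"drawdata({state})"

def draw(drawdata):          # in a real renderer this would block on the GPU
    return f"drew {drawdata}"

frames_drawn = []
with ThreadPoolExecutor(max_workers=1) as pool:
    state = update1(0)
    for frame in range(1, 4):
        # Start the next frame's Update1 while this frame is drawn.
        pending = pool.submit(update1, frame)
        frames_drawn.append(draw(update2(state)))
        state = pending.result()  # join before touching shared data
    frames_drawn.append(draw(update2(state)))  # draw the final frame

print(frames_drawn)
```

The join before the next iteration is what keeps the "no fights over data" property: Update2() and Draw() never run concurrently with the Update1() that feeds the following frame.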