I want to do hand gesture recognition with HMMs in MATLAB. I have studied the theoretical material on both the HMM concept and HMMs in the MathWorks documentation, but I need to see some real examples that use MATLAB instructions for dealing with HMMs. I searched the web but could not find a good one. Does anyone know a reference that uses MATLAB instructions in an HMM workflow?
I suggest you look at the toolbox by K. Murphy and its tutorial.
The built-in HMM functions in Matlab are pretty limited, I find (though I have not used the very latest release of Matlab). However, you can look at the Matlab tutorial too.
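That said, the built-in functions are enough to see the basic workflow. Here is a minimal sketch using the Statistics Toolbox HMM functions on a toy discrete-symbol sequence; for gesture recognition you would first have to quantize your feature vectors into such discrete symbols, and the model sizes and probability values below are arbitrary placeholders:

    % Toy 2-state, 3-symbol model, used only to generate sample data
    TRANS = [0.9 0.1; 0.2 0.8];
    EMIS  = [0.6 0.3 0.1; 0.1 0.3 0.6];
    [seq, states] = hmmgenerate(1000, TRANS, EMIS);

    % Baum-Welch training, starting from rough (asymmetric) guesses
    transGuess = [0.8 0.2; 0.3 0.7];
    emisGuess  = [0.5 0.3 0.2; 0.2 0.3 0.5];
    [transEst, emisEst] = hmmtrain(seq, transGuess, emisGuess);

    % Viterbi decoding of the most likely state path
    likelyStates = hmmviterbi(seq, transEst, emisEst);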
Finally, you can check this toolbox. It contains a demo file.
On a side note, be aware that your question is somewhat off-topic here. Adding some code to show that you at least tried something, and clearly identifying what is causing you trouble (training the model? formatting the data? applying the Viterbi algorithm?), would make this question much more interesting to the community.
Can someone direct me to a document or some other source about the "magic" behind trainCascadeObjectDetector? I have been looking around for quite a while but couldn't find anything. Thank you for your time.
Check the answer on MATLAB Answers:
Hello Dai,
Do you need to explain how the face detector itself works, or how the training of the face detector works?
vision.CascadeObjectDetector implements the detection algorithm by Viola and Jones. It is a "sliding-window" approach, which slides a window across an image and tries to determine whether or not there is a face at each location using a cascade of boosted classifiers. The original algorithm uses Haar-like features. vision.CascadeObjectDetector also gives you the option to use LBP or HOG features.
The Algorithms section of the vision.CascadeObjectDetector documentation gives a good high-level overview of what it is doing.
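To get a quick feel for the detection side, a minimal usage sketch (the image file name is just a placeholder):

    % Detect frontal faces with the default pre-trained cascade model
    detector = vision.CascadeObjectDetector();    % default: frontal face, Haar features
    img      = imread('somePhoto.jpg');           % placeholder image file
    bboxes   = step(detector, img);               % one [x y w h] row per detection
    out      = insertObjectAnnotation(img, 'rectangle', bboxes, 'face');
    imshow(out);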
As far as the training process goes, there is a tutorial in the documentation explaining some of the details.
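Very roughly, the training side looks like the sketch below; the file names, folder name, bounding boxes and parameter values are placeholders, and the positive samples are typically labeled beforehand with the Training Image Labeler app:

    % Positive samples: image files plus [x y w h] boxes around the object.
    % These are placeholder names; normally you export this struct from the
    % Training Image Labeler app.
    positiveInstances = struct( ...
        'imageFilename', {'pos1.jpg', 'pos2.jpg'}, ...
        'objectBoundingBoxes', {[10 10 50 50], [20 30 60 60]});
    negativeFolder = 'negativeImages';   % folder of images that do not contain the object

    trainCascadeObjectDetector('myDetector.xml', positiveInstances, negativeFolder, ...
        'FalseAlarmRate', 0.2, 'NumCascadeStages', 10, 'FeatureType', 'LBP');

    % The trained cascade can then be used like the built-in models
    detector = vision.CascadeObjectDetector('myDetector.xml');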
Are there any tools or algorithms in Matlab or OpenCV that will take multiple images of an object as input (taken from different locations around the object) and produce the 3D coordinates of the object in the world?
Like Naveh said, in OpenCV the building blocks are there, but putting them together is something you would have to do.
That being said, people have generated a number of SfM tools in both C++ and Matlab. Depending on your goals there are a number of prepackaged things you can look at:
-There is an SfM Matlab toolbox here; I have not personally used it, but I've seen it mentioned a number of times.
-If you are just looking for a black-box solution, check out Visual SfM; it is a GUI-fied version of a common SfM workflow.
-A while ago I put together a guide for installing the Visual SfM components individually on Fedora, if you want to dig into them. I'm not sure how relevant it is now, but it might help.
Regardless, you should certainly educate yourself on the processes involved in creating 3D structure from imagery. It is a complicated process with many details which need to be understood.
What you are asking for is a fully-fledged structure-from-motion (SfM) algorithm. I don't think such a thing exists in MATLAB or OpenCV right off the shelf. However, the building blocks required for such an algorithm are there.
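To give a concrete idea of what those building blocks look like in MATLAB (this assumes the Computer Vision System Toolbox; the file names are placeholders, and pose recovery, triangulation and bundle adjustment would still have to be added on top), a rough two-view sketch:

    % Detect, describe and match features between two views, then estimate
    % the epipolar geometry robustly with RANSAC.
    I1 = rgb2gray(imread('view1.jpg'));
    I2 = rgb2gray(imread('view2.jpg'));

    pts1 = detectSURFFeatures(I1);
    pts2 = detectSURFFeatures(I2);
    [f1, vpts1] = extractFeatures(I1, pts1);
    [f2, vpts2] = extractFeatures(I2, pts2);

    idx = matchFeatures(f1, f2);
    m1  = vpts1(idx(:, 1));
    m2  = vpts2(idx(:, 2));

    [F, inliers] = estimateFundamentalMatrix(m1, m2, 'Method', 'RANSAC');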
I suggest you do some background reading to better understand which specific algorithm will suit your needs. A good place to start is chapter 7 of Richard Szeliski's textbook; a free draft is available here. This book is recommended both as a good general computer vision textbook and specifically for your question, an area in which Szeliski himself is quite an expert.
I am using an Arduino to control a car, and I want to make it autonomous by using a webcam to see the object I want and make the car move to its location. I need several things:
MATLAB code
Interface between MATLAB and Arduino
How to connect them (software, not hardware)
I need a tutorial to learn from, or any instructions for building my project. I see that many people have done this before, but unfortunately they did not mention how to start this kind of project.
This question is fairly broad, so I apologize in advance for my somewhat general response.
The easiest way to interface a webcam with MATLAB is to make use of the Image Acquisition Toolbox. This link provides documentation detailing how to do this.
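As a starting point, grabbing frames from a webcam with that toolbox looks roughly like this (the 'winvideo' adaptor name is Windows-specific; use imaqhwinfo to see what is available on your machine):

    % List the available acquisition adaptors and open the first webcam
    info = imaqhwinfo;
    vid  = videoinput('winvideo', 1);    % adaptor name and device ID may differ
    preview(vid);                        % live preview window
    frame = getsnapshot(vid);            % grab a single frame as an image array
    imshow(frame);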
There is a good chance that you'll also want to make use of the Image Processing Toolbox in MATLAB to process the acquired images and determine where to go. See this doc. That said, once you've determined more specifically how you plan to process these images, there are probably numerous algorithms you could find online that would not explicitly require this toolbox.
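For example, if the target is a distinctly colored object, a very simple detection step could look like the sketch below; the color thresholds are placeholders you would tune, and it assumes the target is actually visible in the frame:

    % Threshold a red-ish object in HSV space and take the largest blob
    rgb = getsnapshot(vid);              % frame from the acquisition sketch above
    hsv = rgb2hsv(rgb);
    bw  = (hsv(:,:,1) < 0.05 | hsv(:,:,1) > 0.95) & hsv(:,:,2) > 0.5;
    bw  = bwareaopen(bw, 200);           % remove small noisy blobs
    stats = regionprops(bw, 'Centroid', 'Area');
    [~, k] = max([stats.Area]);          % assume the biggest blob is the target
    target = stats(k).Centroid;          % [x y] position in image coordinates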
As far as interfacing with the Arduino is concerned, there is a support package from MathWorks that allows you to interface MATLAB code and Simulink models with Arduino. See this link.
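With that support package installed, steering the car from MATLAB boils down to digital and PWM writes; the pin names below are placeholders that depend on how your motor driver is wired:

    % Connect to the board (auto-detects port and board type) and drive a motor
    a = arduino();
    writeDigitalPin(a, 'D4', 1);         % e.g. motor direction pin
    writePWMDutyCycle(a, 'D5', 0.6);     % e.g. motor speed, 60% duty cycle
    pause(2);
    writePWMDutyCycle(a, 'D5', 0);       % stop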
The only other general suggestion that I have is to consider using Simulink for this project rather than MATLAB. I feel that the model based approach of Simulink is a much better fit when designing control systems.
I hope that this helps you get things started.
For my computer vision class, I'm going to be doing a project where I extract information about a hallway based on an image of that hallway. In particular, the lines of the hallway which extend toward a vanishing point will be of interest. My question is whether I should use Matlab, OpenCV, or something else to implement this.
I don't have a ton of time for this project. This fact makes Matlab seem like a good option since it seems you can usually get things up and running quickly there. On the other hand, I hope to take what I do for this class project and extend it out further for research once the class is complete. This makes OpenCV seem better as (from what I've read) it's much more efficient. It's possible another choice would be to implement it in Matlab for the project, then port that code to an OpenCV form later. It should be noted that I have plenty of experience with C/C++, but only a little in both Matlab and OpenCV.
At the moment, I'm leaning toward just using OpenCV from the start. However, I would like the opinion of someone who's had a bit more experience here than myself. If you'd recommend something over both OpenCV and Matlab, please say so. Also, if you have any tips on what packages or toolkits might be useful for such a project, they would be greatly appreciated.
Any suggestions? Thanks for your time!
With which one is it easier for you to write a piece of code that reads an image file and displays it?
If you know C++ very well, then it should be easy to debug the code there. Since you say you have little experience with Matlab, a small mistake in the code can take a long time to debug.
So I suggest breaking the problem down into:
Read an image and display it; this is very easy in both.
Detect edges using a simple/classic method; this is super easy in both. Display the result and visually check that it's done correctly.
Use a robust line-fitting method; RANSAC and the Hough transform are probably what you're going to use. OpenCV makes using them easier than you might guess, and Matlab also has built-in functions to detect lines using the Hough transform, which give you the start/end points of each segment (see the sketch below). But if you're only finding a vanishing point, you shouldn't need those endpoints.
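In Matlab, the edge-plus-Hough part of that breakdown is only a handful of lines; the file name and parameters below are placeholders to tune for your hallway images:

    % Canny edges followed by Hough line detection
    I = rgb2gray(imread('hallway.jpg'));
    E = edge(I, 'canny');

    [H, theta, rho] = hough(E);
    peaks = houghpeaks(H, 10);           % keep the 10 strongest line candidates
    lines = houghlines(E, theta, rho, peaks);

    % Overlay the detected segments on the image
    imshow(I); hold on;
    for k = 1:numel(lines)
        xy = [lines(k).point1; lines(k).point2];
        plot(xy(:, 1), xy(:, 2), 'LineWidth', 2);
    end
    hold off;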
The decision is yours; this is not a very difficult problem, and you can find loads of help on the web. Good luck with the project, and please let us know how it goes.
How do you do reconstruction by threshold decomposition in Matlab? Is there a function for it?
Can you provide some more details of the steps in such a procedure? Some of us are not well versed in DSP theory. I did find a link to a book on Google Books here.
If this is exactly what you want, it does not seem difficult to code up. On the other hand, if you want something reliable and optimized for speed, maybe one of the DSP toolkit users knows a way.
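If what you mean is the standard threshold-decomposition idea (split a signal into binary slices, one per grey level, and sum the slices to reconstruct it), the decompose/reconstruct part really is only a few lines in Matlab; a rough sketch for an 8-bit signal, without any per-slice filtering:

    % Threshold decomposition of an 8-bit signal into 255 binary slices,
    % followed by reconstruction by summation.
    x = uint8([12 200 37 255 0 90]);     % toy example signal
    levels = 1:255;

    b = false(numel(levels), numel(x));
    for t = levels
        b(t, :) = x >= t;                % binary slice at threshold level t
    end

    xRec = uint8(sum(b, 1));             % summing the slices recovers x exactly
    isequal(xRec, x)                     % returns true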