Can MATLAB do real-time motion tracking?

I tried to track motion in MATLAB using this tutorial (http://www.mathworks.com/help/vision/examples/motion-based-multiple-object-tracking.html) and it works fine, but it uses a video file as its source.
I want to know whether it's possible to track motion with the same tutorial but in real time, using a camera as the source.

Everything is possible, but please try to find some material yourself before asking here.
I think you may find the information you need in this link:
http://www.matlabtips.com/realtime-processing/
Alternatively, you could of course just store the camera output as a (very short) video and continuously analyse that instead.
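If you go the short-video route, here is a rough sketch of the idea, assuming the Image Acquisition Toolbox (the file name and frame count are arbitrary):
obj = videoinput('winvideo', 1);       % camera as the source
vw = VideoWriter('buffer.avi');        % short clip to hold the latest frames
open(vw);
for k = 1:30                           % record a short burst of frames
    writeVideo(vw, getsnapshot(obj));
end
close(vw);
delete(obj);                           % release the device
% now analyse 'buffer.avi' with the tutorial's code, then repeat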

As of release R2014a, MATLAB includes support for USB webcams. If you have an older version, or if you want to use a high-end camera, you would need the Image Acquisition Toolbox.
Once you are able to get frames from the camera, you can reuse almost all of the code in the multiple object tracking example. You would only need to replace the readFrame function with code that gets a frame from the camera.
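For instance, a minimal sketch of that swap, assuming R2014a or later with the USB webcam support installed (variable names are illustrative):
cam = webcam;                    % connect to the first available webcam
while true
    frame = snapshot(cam);       % grab the current frame, in place of readFrame()
    % ... run the example's detection and tracking steps on 'frame' here ...
end
clear cam                        % release the camera when done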

Related

How to debug augmented reality offline in Unity?

I was wondering if there was a way to record the sensor and video data from my iPhone, save it in some way, and then feed it into Unity to test an AR app.
I'd like to see how different algorithms behave on identical input, and that's hard to do when the only way to test is to pick up my phone and wave it around.
What you can do is capture the image buffer. I've done something similar using ARCore. I'm not sure if ARKit has a similar implementation; I found this in a brief search: https://forum.unity.com/threads/how-to-access-arframe-image-in-unity-arkit.496372/
In ARCore, you can take this image buffer and use ImageConversion.EncodeToPNG to create PNG files named with the timestamp. You can pull your sensor data in parallel. Depending on what you want, you can write it to a file using a similar approach: https://support.unity3d.com/hc/en-us/articles/115000341143-How-do-I-read-and-write-data-from-a-text-file-
After that, you can use FFmpeg to convert these PNGs into a video. If you want to try different algorithms, there's a good chance the PNGs alone will be enough; otherwise you can use a command like the one described here: http://freesoftwaremagazine.com/articles/assembling_video_png_stream_ffmpeg/
You should be able to pass these images and the corresponding sensor data to your algorithm to check.

Object Recognition with Mixed Reality Capture (MRC)

We're using the HoloLens' locatable camera (in Unity) to perform a number of image recognition tasks. We'd like to utilize the mixed reality capture feature (MRC) available in the HoloLens developer portal so that we can demo our app, but MRC crashes because we're hogging the camera in Photo Mode.
Does anyone have a good workaround for this? We've had some ideas, but none of them are without large downsides.
Solution: Put your locatable camera in Video Mode so that you can share the video camera with MRC.
Downside: Video Mode only allows us to save the video to disk, but we need real-time access to the buffer in memory (the way Photo Mode gives us access) so that we can do our detection in real time.
Solution: Capture the video in a C++ plugin, and pass the frame bytes to Unity. This allows MRC to work as expected.
Downside: We lose the 'locatable' part of the 'locatable camera' as we no longer get access to the cameraSpaceToWorldSpace transformation matrix, which we are utilizing in our UI to locate our recognized objects in world space.
Sub-solution: recreate the locatable camera view's transformation matrix yourself.
Sub-downside: I don't have any insight into how Microsoft creates this transformation matrix. I imagine it involves some hardware complexities, such as accounting for lens distortions. If someone can guide me to how this matrix is created, that might be one solution.
Solution: Turn off object recognition while you create the MRC, then turn it back on when you're done recording.
Downside: Our recognition system runs in real time, n times per second. There would be no way to capture the recognitions on video.
We ended up creating a plugin for Unity that uses Microsoft's Media Foundation to get access to the video camera frames. We open sourced it in case anyone else runs into this problem.
The plugin mimics Unity's VideoCapture class so that developers can easily understand how to implement it.
Hopefully this is helpful to a few.

MATLAB live video extraction

How do you take a live video feed from a webcam and selectively extract frames of it? Do I need a special toolbox or can this be done through normal MATLAB functions?
I have the Image Acquisition Toolbox, but haven't seen a good way to use it for this yet...
The Image Acquisition Toolbox is all you need. Just look at the documentation and it should be pretty straightforward.
Make sure that the webcam is properly installed and connected, then type the following code:
obj = videoinput('winvideo', 1);   % create a video input object for the first 'winvideo' device
preview(obj)                       % open a live preview window
This will open a window showing the live video input.
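To selectively extract frames rather than just preview them, here is a minimal sketch along the same lines (which frames you keep, and how you trigger the grabs, is up to you):
obj = videoinput('winvideo', 1);
frame = getsnapshot(obj);          % grab a single frame on demand
imwrite(frame, 'frame001.png');    % save the frames you want to keep
delete(obj);                       % release the device when finished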

What Webcam is Compatible with MATLAB

I am working on a project of mine which requires a webcam and MATLAB. I have a Logitech webcam, and I don't know whether I can talk to it through MATLAB. I'm trying to work with images and image processing, so I just want to know if there is a way to find out whether the webcam is compatible with MATLAB, or whether I need to get some other type of webcam to get the job done. If I do need another one, it would be helpful if you could suggest a camera that is cheap and widely available.
Thank you.
I've used several Logitech webcams with the Image Acquisition Toolbox on Windows. You'll find a list of supported hardware here.
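As a quick sanity check, assuming the Image Acquisition Toolbox is installed, you can ask MATLAB what hardware it can see:
info = imaqhwinfo                  % list installed acquisition adaptors
devices = imaqhwinfo('winvideo')   % list cameras on the 'winvideo' adaptor (Windows)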

Real-time Pitch Shifting on the iPhone

I have a children's iPhone application that I am writing and I need to be able to shift the pitch of a sound sample using Core Audio. Does anyone have any example code I could look at where this is done. There are many music and game apps in the app store that do this so I know I am not the first one. However, I cannot find any examples of it being done.
You can use DIRAC2 from DSP Dimension for pitch shifting on the iPhone. To quote:
"DIRAC2 is available as both a commercial object library offering unlimited sample rates and phase locked multichannel support and as a free single channel, 44.1/48kHz LE version."
Use the SoundTouch open-source project to change pitch.
Here is the link: http://www.surina.net/soundtouch/
Once you add SoundTouch to your project, you have to give it the input sound file path, the output sound file path, and the pitch change as input.
Since processing a whole recorded file takes extra time, it's better to modify SoundTouch so that the data is fed in for processing directly as you record the voice. That will make your application better.
I know it's too late for the person who asked, but this is a really valuable link (as I found) for anyone else looking for a solution to the same problem.
Here we have the latest DIRAC3, with its own audio player classes that take care of run-time pitch and speed shifting (explore it to see what else it can do). Run the sample and give it a huge round of applause.
Try DIRAC - it's the best technology out there, and it's available on Windows, Linux, Mac OS X and iOS. We're using it in all our products (and a couple of others do as well; search for "Capo" on the App Store). They're at version 3 now, which has seen a huge increase in performance over previous versions. Hope this helps.
See: Related question
How much control over pitch do you need... could you precalculate all the different sounds?
If the answer is yes, then you can just pick the right sounds and play them.
You could also use Audio Converter Services in conjunction with AVAudioPlayer, which will allow you to resample the audio (which will effectively repitch it, though the duration will change).
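The resampling trick is easy to demonstrate outside iOS; for instance, a MATLAB sketch of the same idea (the file name is arbitrary):
[y, fs] = audioread('voice.wav');    % any short audio clip
shift = 2^(3/12);                    % playback-rate ratio for +3 semitones
sound(y, round(fs * shift));         % play back faster: higher pitch, shorter duration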
Alternatively, as the related question points out, you could use OpenAL and AL_PITCH.