MATLAB - Run webcam in parallel with processing

Hello and thank you in advance.
I am working on a MATLAB algorithm, using the Computer Vision Toolbox, to detect objects in a live camera feed and display the frames with bounding boxes in a deployable video player.
Due to hardware limitations, the detection will be slower than the maximum FPS delivered by the camera.
I would like to display the webcam feed at full speed, without waiting for the detection to finish, so that I get a fluent output video with detections inserted whenever they become available.
Is there a way?
My first approach was to use the parfeval function to run the detection in parallel, but I failed for lack of knowledge of how to hand the frame to the detector and insert the resulting bounding boxes into the feed whenever they are finished.
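A minimal sketch of that parfeval pattern, assuming the MATLAB Webcam Support Package, a running parallel pool, and a hypothetical detector function detectObjects that returns an M-by-4 matrix of boxes; the feed is displayed on every iteration, while the boxes are refreshed only when an asynchronous detection finishes:

```matlab
% Sketch: display frames at full camera speed; run detection
% asynchronously and overlay the most recent finished result.
% 'detectObjects' is a hypothetical function returning [M x 4] boxes;
% it must be on the workers' path.
cam    = webcam;                                  % Webcam Support Package
player = vision.DeployableVideoPlayer;
boxes  = zeros(0, 4);                             % latest finished detection
future = parfeval(@detectObjects, 1, snapshot(cam));

frame = snapshot(cam);
step(player, frame);                              % open the player window
while isOpen(player)
    frame = snapshot(cam);                        % grab at full camera rate
    if strcmp(future.State, 'finished')
        boxes  = fetchOutputs(future);            % boxes from an older frame
        future = parfeval(@detectObjects, 1, frame); % start the next run
    end
    if ~isempty(boxes)
        frame = insertShape(frame, 'Rectangle', boxes, 'LineWidth', 3);
    end
    step(player, frame);
end
```

In this pattern the displayed boxes always correspond to a slightly older frame; that lag is the price of not blocking the feed on the detector.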

Related

Model lost on uniform background surface with ARCamera (Vuforia, Unity)

I'm trying to use Vuforia in Unity to see a model in AR. It works properly when I'm in a room with lots of different colors, but if I go into a room with one single color (for example: white floor, white walls, no furniture), the model keeps disappearing. I'm using Extended Tracking with Prediction enabled.
Is there a way to keep the model on screen whatever the background seen by webcam?
I am afraid this is not possible. Since Vuforia uses markerless tracking, it requires high-contrast feature points.
Since most AR SDKs use only a monocular RGB camera (not RGB-depth), they rely on computer vision techniques to recover the missing depth information. That means extracting visually distinct feature points and locating the device by estimating the distance to these feature points over several frames as you move.
However, they also leverage sensor fusion, combining the camera data with data from the device's IMU (inertial sensors). Unfortunately, the IMU data is mainly used as a complement for when motion tracking fails, e.g. under excessive motion (when the camera image is blurred). The sensor data on its own is therefore not reliable, which is exactly the situation when you walk into a room with no distinctive points to extract.
The only way to solve this is by placing several image targets in that room. That will allow Vuforia to calculate the device's position in 3D space. Otherwise it is not possible.
You can also refer to SLAM for more information.

Only recording motion using gaussian mixture models

I am using this example on Gaussian mixture models.
I have a video displaying moving cars, but it's on a street that isn't very busy. A few cars go past every now and again, but the vast majority of the time there isn't any motion in the background. It gets pretty tedious watching nothing moving, so I would like to cut that time out. Is it possible to remove the still frames from the video, only leaving the motion frames? I guess it would essentially crop the video.
The example you cite uses a foreground detector. Still frames should not have any detected foreground pixels, so you can choose to skip them when building a demo video of your results.
You can build your new video with a rule of the form: if N frames in a row contain no foreground, do not write those frames to the output video (see the sketch below).
This is just an idea...
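A minimal sketch of that rule, assuming the Computer Vision Toolbox; the file names, N, and the pixel threshold are placeholders to tune for your own video:

```matlab
% Sketch: write only frames around detected motion to a new video.
% 'input.avi', N, and minPixels are assumed values -- tune them.
reader   = VideoReader('input.avi');
writer   = VideoWriter('motion_only.avi');
detector = vision.ForegroundDetector('NumTrainingFrames', 50);

open(writer);
N = 10;            % drop frames only after N still frames in a row
minPixels = 200;   % fewer foreground pixels than this counts as "still"
stillRun = 0;
while hasFrame(reader)
    frame = readFrame(reader);
    mask  = step(detector, frame);        % logical foreground mask
    if nnz(mask) >= minPixels
        stillRun = 0;
        writeVideo(writer, frame);        % motion: always keep
    else
        stillRun = stillRun + 1;
        if stillRun < N
            writeVideo(writer, frame);    % keep short pauses intact
        end
    end
end
close(writer);
```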

How to find contours of soccer player in dynamic background

My project is to design a system which analyzes soccer videos. In one part of this project I need to detect the contours of the players and everyone else on the playing field. For all players who do not overlap the advertisement billboards, I have used the color of the playing field (green) to detect contours and extract the players. But I have a problem when players or the referee overlap the advertisement billboards. Suppose the advertisements on the billboards are dynamic (LED billboards). As you can imagine, finding the contours is more difficult in this situation because there is no static background color or texture. You can see two examples of this condition in the following images.
NOTE: in order to find the position of an occlusion, I use the region between the field line and the advertisement billboards, because this region has the color of the field (green). This region is shown by a red rectangle in the following image.
I expect the result to be similar to the following image.
Could anyone suggest an algorithm to detect these contours?
You can try several things.
Use the vision.PeopleDetector object to detect people on the field. You can also track the detected people using vision.KalmanFilter, as in the Tracking Pedestrians from a Moving Car example.
Use the vision.OpticalFlow object in the Computer Vision System Toolbox to compute optical flow. You can then analyze the resulting flow field to separate camera motion from player motion.
Use frame differencing to detect moving objects. The good news is that it will give you the contours of the people; the bad news is that it will give you many spurious contours as well. A minimal sketch of this third option follows below.
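A sketch of the frame-differencing option, assuming a (locally) static camera; the file name and threshold are placeholders, and with a panning broadcast camera you would first have to compensate for camera motion:

```matlab
% Sketch: frame differencing to outline moving players.
% 'match.mp4' and the threshold of 25 gray levels are assumed values.
reader = VideoReader('match.mp4');
prev   = rgb2gray(readFrame(reader));
while hasFrame(reader)
    curr = rgb2gray(readFrame(reader));
    diff = imabsdiff(curr, prev);           % per-pixel change
    mask = diff > 25;                       % binarize (assumed threshold)
    mask = imclose(mask, strel('disk', 3)); % merge fragmented blobs
    contours = bwboundaries(mask);          % contours of moving regions
    % ... draw or analyze 'contours' here ...
    prev = curr;
end
```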
Optical flow would work for such problems, since it captures motion information. Foreground-extraction techniques using HMM, GMM, or non-parametric models may also solve the problem; I have used them for motion analysis in surveillance videos to detect anomalies (with a static background). The magnitude and orientation of the optical flow seem to be an effective feature, and there are papers on segmentation using optical flow. I hope this helps.
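A sketch of the flow-magnitude idea, using the newer opticalFlowFarneback interface rather than the vision.OpticalFlow System object named above; the threshold and the median-based camera-motion correction are assumptions, not part of the original answer:

```matlab
% Sketch: segment moving players from optical-flow magnitude.
% 'match.mp4' and the 1.0 px/frame threshold are assumed values.
reader  = VideoReader('match.mp4');
flowEst = opticalFlowFarneback;
while hasFrame(reader)
    gray = rgb2gray(readFrame(reader));
    flow = estimateFlow(flowEst, gray);   % Vx, Vy, Magnitude, Orientation
    % Crude camera-motion compensation: assume most pixels are
    % background, so the median magnitude approximates camera motion.
    residual = flow.Magnitude - median(flow.Magnitude(:));
    mask = residual > 1.0;                % pixels moving unlike the camera
    % ... extract contours from 'mask' as in the previous sketch ...
end
```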

operating 2 camera for motion detection in MATLAB

I am using two cameras for motion detection. These cameras face each other. I wrote MATLAB code for a single camera and it works. Now I need to add one more camera. I did so, but the processing is so slow that many moving objects go undetected. I am looking at block processing and the parallel computing tools in MATLAB but have not figured out how to use them. Also, can a GUI be used in image processing with parallel computation? If so, how? Your help is appreciated!
Thank You.
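One possible pattern, sketched here under assumptions: frames are grabbed on the client and only the detection runs on parallel workers via parfeval; detectMotion is a hypothetical function wrapping the existing single-camera detection code, and the webcam indices are placeholders:

```matlab
% Sketch: run the per-camera detection on two pool workers in parallel.
% 'detectMotion' is a hypothetical wrapper around your existing code.
cam1 = webcam(1);
cam2 = webcam(2);
f1 = parfeval(@detectMotion, 1, snapshot(cam1));
f2 = parfeval(@detectMotion, 1, snapshot(cam2));
while true                     % Ctrl+C (or a stop flag) to end the loop
    if strcmp(f1.State, 'finished')
        result1 = fetchOutputs(f1);                    % use/display result
        f1 = parfeval(@detectMotion, 1, snapshot(cam1));
    end
    if strcmp(f2.State, 'finished')
        result2 = fetchOutputs(f2);
        f2 = parfeval(@detectMotion, 1, snapshot(cam2));
    end
    pause(0.01);               % yield so the futures can update
end
```

Because the blocking work happens on the workers, the client stays responsive; a GUI could poll the same futures from a timer callback instead of a while loop.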

MediaCodec for Simultaneous Camera

I am working on simultaneous camera streaming and recording using the MediaCodec API. I want to merge the frames from both cameras and feed the result both to rendering and to MediaCodec's input surface for recording.
I do not want to create multiple EGLContexts; rather, the same one should be used throughout.
I am taking the Bigflake MediaCodec examples as a reference, but I am not clear whether this is possible. Also, how do I bind the multiple textures? We require two textures for the two cameras.
Your valuable input will help me progress further. Currently I am stuck and unclear what to do next.
regards
Nehal