Huge problems with reading in movie frames with MATLAB - matlab

I've been working on a project that reads in video frames, stores them in an array, and then performs operations on them. Each frame is split into 6 subsections that I have to analyze individually. I had previously been cropping the video beforehand and then loading it in. Now the program lets the user load in the whole movie, crop out each sixth themselves, and then runs consecutively on each sixth. The problem is that MATLAB just crashes when loading this now 6-times-more-pixel-dense video (it's about 120k frames). Assuming I can get the user to specify the 6 cropping areas beforehand, is there any way to load in only a specific area of the movie at a time? Rather than storing the whole frame, only store a sixth? (Instead of storing the whole frame and THEN cropping out a sixth, just store a sixth right off the bat.)

VideoReader does not let you load part of a frame into memory. However, it does let you load only certain frames from the video into MATLAB instead of loading the entire file. I agree with sam that loading 120k frames of video into MATLAB is a very bad idea. Consider using the read syntax that lets you specify start and stop frames, so you read the video in chunks, after which you can use array indexing to slice each frame into 6 portions.
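A minimal sketch of that chunked approach (the file name, chunk size, and crop rectangle here are assumptions; adjust them to your video and regions):

```matlab
% Read a long video in chunks and keep only one cropped sixth per frame.
v = VideoReader('myMovie.avi');
chunkSize = 500;                          % frames per chunk; tune to memory
rect = [1, 1, 100, 100];                  % assumed [row, col, height, width] of one sixth
numFrames = floor(v.Duration * v.FrameRate);
cropped = cell(1, ceil(numFrames / chunkSize));
for c = 1:numel(cropped)
    startF = (c-1)*chunkSize + 1;
    stopF  = min(c*chunkSize, numFrames);
    frames = read(v, [startF stopF]);     % H x W x 3 x N array for this chunk
    cropped{c} = frames(rect(1):rect(1)+rect(3)-1, ...
                        rect(2):rect(2)+rect(4)-1, :, :);
end
```

This way only `chunkSize` full frames are ever in memory at once, and only the cropped sixths are retained.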
Dinesh

Related

How to know the delay of frames between 2 videos, to sync an audio from video 1 to video 2?

I have many videos that I want to compare one-to-one to check whether they are the same, and to get the frame delay between them. What I do now is open both video files in VirtualDub and manually check, near the beginning of video 1, that a given frame is at position, say, 4325. Then I check video 2 to find the position of the same frame, say 5500. That gives a delay of +1175 frames. Then I check another frame near the end of video 1, at position, say, 183038. I check video 2 too (imagine the position is 184213) and calculate the difference, again +1175: eureka, same video!
The frames I choose to compare aren't exactly random: each must be one I know I can match exactly (for example, a scene change, an explosion that appears from one frame to the next, a dark frame after a bright one...). I always try to pick the first comparison frame within the first 10000 positions and the second one near the end.
What I do next is sync the audio from video 1 to video 2 by calculating the number of ms needed, but I don't need help with that. I'd love to automate the comparison so I only have to select video 1 and video 2, nothing else; that way I could forget VirtualDub forever and save a lot of time.
I'm tagging this post as powershell too because I'm writing a script where, at the moment, I have to enter the frame delay (after comparing manually) myself. It would be perfect if I could add this step at the beginning of the script.
Thanks!

Fast movie creation using MATLAB and ffmpeg

I have some time series data that I would like to turn into movies. The data could be 2D (about 500x10000) or 3D (500x500x10000). For 2D data, the movie frames are simply line plots using plot; for 3D data, we can use surf, imagesc, contour, etc. Then we create a video file from these frames in MATLAB, and compress it using ffmpeg.
To do this fast, one would avoid rendering all the images to the display, and avoid saving the data to disk only to read it back again during the process. Usually one would use getframe or VideoWriter to create a movie in MATLAB, but these seem to get tricky if one tries not to display the figures on screen. Some even suggest plotting in hidden figures, saving them to disk as .png images, then compressing them with ffmpeg (e.g. with the x265 encoder into .mp4). However, saving the output of imagesc on my iMac took 3.5 s the first time, then 0.5 s afterwards. I also find it too slow to save so many files to disk only for ffmpeg to read them again. One could hardcopy the data as this suggests, but I am not sure whether that works regardless of the plotting method (e.g. plot, surf, etc.), or how one would transfer data to ffmpeg with minimal disk access.
This is similar to this, but immovie is too slow. This post is similar, but advocates writing images to disk and then reading them (slow I/O).
Maybe what you're trying to do is convert your data into an image by doing the same kind of operation that surf, imagesc, or contour does, and then write it to a file directly; that would keep all the data in memory until writing is needed.
From the little experience I've had with real images, this could also work here:
I saw that calling imshow took a lot of time, but changing the CData of a preset image object created by imshow took around 5 ms. So maybe you could set up a figure using any function you like, and then update the underlying CData (or XData, YData, etc.) so that the figure updates in the same fashion?
Best of luck!
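A sketch of that update-in-place idea, combined with VideoWriter (the data here is synthetic and the file name and codec are assumptions):

```matlab
% Render frames by updating CData on a hidden figure instead of
% re-plotting each frame, then write each frame with VideoWriter.
data = rand(500, 500, 100);                 % stand-in for the 3D time series
fig = figure('Visible', 'off');             % never shown on screen
h = imagesc(data(:, :, 1));
axis off;
vw = VideoWriter('out.avi', 'Motion JPEG AVI');
open(vw);
for k = 1:size(data, 3)
    set(h, 'CData', data(:, :, k));         % cheap update, no full re-render
    writeVideo(vw, getframe(fig));
end
close(vw);
```

The resulting file can then be recompressed by ffmpeg in a single pass, avoiding thousands of intermediate .png files.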

How to transition from a prerecorded video to real time video?

I have come up with an algorithm in MATLAB that lets me recognize hand gestures in prerecorded videos. Now I would like to run the same code on real-time video, but I am not sure how to proceed after these 2 lines:
vid=videoinput('winvideo',1);
preview(vid);
(real-time video is on)
I am thinking about a loop: while the video is on, snap images repeatedly in order to analyze them.
for k = 1:numFrames
    % my code is applied here
end
So, I would like to know how to make this transition from prerecorded video to real-time video.
Your help is so much appreciated!
I would suggest first verifying whether you can perform acquisition + gesture recognition in real time with your algorithm. For that, read video frames in a loop, render or save them, and measure the reading-and-rendering overhead of a single frame, say t1. Also measure the time your algorithm takes to process one image, say t2. The throughput (number of frames processed per second) of your system will be
throughput = 1/(t1 + t2)
It is also important to know how many frames you need to recognize a gesture. First compute the minimum number of images needed to identify a gesture in a given time, and then verify whether you can process that many images in that time in real time.
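A rough way to measure t1 and t2 with the Image Acquisition Toolbox (processFrame is a hypothetical placeholder for your gesture-recognition step):

```matlab
% Estimate per-frame acquisition and processing times to check
% real-time feasibility before building the live loop.
vid = videoinput('winvideo', 1);

tic;
img = getsnapshot(vid);            % acquire one frame from the live device
t1 = toc;

tic;
processFrame(img);                 % hypothetical: your recognition code here
t2 = toc;

delete(vid);
fprintf('Estimated throughput: %.1f frames/s\n', 1/(t1 + t2));
```

For a stable estimate, average t1 and t2 over many frames rather than timing a single snapshot.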

faster imread with matlab for images with small portion of data

I'm trying to track objects in separate frames of a video. If I do background subtraction before storing the images, the file sizes are much smaller (about one fifth), so I was wondering if I can also read these images faster, since most of the pixels are zero. Still, plain imread didn't make any difference.
I also tried the PixelRegion option to load only the locations of the objects, but that didn't work either, since there are about ten objects in each frame.
It may be faster to store the frames as a video file, rather than individual images. Then you can read them using vision.VideoFileReader from the Computer Vision System Toolbox.
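A minimal sketch of that reader loop (the file name is an assumption, and the tracking step is left as a comment):

```matlab
% Read frames sequentially from one video file instead of calling
% imread on thousands of individual image files.
reader = vision.VideoFileReader('frames.avi');
while ~isDone(reader)
    frame = step(reader);          % next frame as an image array
    % ... run object tracking on frame here ...
end
release(reader);
```

Reading one container file sequentially avoids the per-file open/decode overhead that dominates when loading many small images.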

Shift image pixels in objective-c/objective-c++

I have an application that receives streaming data over Bluetooth and displays it in an image. The data is just RGB values (0-255 each). It works fine in C#, but I'm having trouble doing the same thing on the iPhone. In C# it's implemented as a queue: when a new row of data arrives, I dequeue one row's worth of pixels, then enqueue the new row. I then write the queue to the image's pixel array. This seemed faster than reading the entire image, shifting it, adding the new data, and then writing the pixel array back. C# has a method that converts a queue to an array, since a queue doesn't have contiguous memory. My question is: is there a faster way to do this than manually repopulating an array from the queue? Or a faster approach than using a queue at all? The image only needs to shift down X pixels and have X new rows of pixels written into the blank spot. Either way, I can't figure out how to convert the queue to an array, since the only way to access anything besides the first value is to pop values off the queue. Any suggestions?
The bottleneck on iOS devices won't be converting a queue to an array, but getting the bitmap array into the device's display (image or texture) memory. This path can be optimized by using a tiled layer and updating only the pixels in the tiles at the end, where the new data lands. Then you can scroll the tiled layer inside a scroll view to line up the latest streamed data.