Is it possible to impose a shape, let's say a rectangle, on video files in MATLAB? I know it's easy to do on image files using the shape inserter, but I couldn't find a way to do it on videos.
So far my best guess is to extract the frames, draw the rectangles on them, and somehow encode them back into the stream. However, I wonder if there's a more elegant way to achieve it.
That's exactly the way you have to do it:

1. Read a frame of video using vision.VideoFileReader.
2. Insert whatever annotations you need into the frame using insertShape, insertMarker, insertText, or insertObjectAnnotation.
3. Write the modified frame out to a new video file using vision.VideoFileWriter.
4. Repeat for all frames.
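The steps above can be sketched as follows (requires the Computer Vision System Toolbox; the file names and rectangle coordinates are placeholders):

```matlab
% Read, annotate, and re-write a video frame by frame.
reader = vision.VideoFileReader('in.avi');      % placeholder input file
fileInfo = info(reader);
writer = vision.VideoFileWriter('out.avi', ...  % placeholder output file
    'FrameRate', fileInfo.VideoFrameRate);

while ~isDone(reader)
    frame = step(reader);
    % Draw a rectangle: [x y width height] (placeholder values).
    frame = insertShape(frame, 'Rectangle', [50 50 100 80], 'Color', 'yellow');
    step(writer, frame);
end

release(reader);
release(writer);
```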
I did read that Unity supports wav loop-point metadata (e.g. https://stackoverflow.com/a/53934779/525873). However, we have not found any official docs or release notes that confirm this, and loop points (set with Wavosaur in my case) still appear to be ignored. We are on Unity 2018.2.17f1.
We know there are other options for making audio clips loop, but using wav loop points would be ideal. Has anyone been able to get wav loop points to work in Unity?
Many thanks!
I might be wrong, but I don't think looping anything other than the whole file is natively supported. You can, however, achieve it by filling the audio buffer manually (using MonoBehaviour.OnAudioFilterRead).

Please keep in mind that this callback runs on the managed side, so it might be a little expensive, especially if you also need to do resampling.
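A minimal sketch of that approach. The loop-point sample indices here are assumptions — you would have to read them out of the wav's smpl chunk yourself, since Unity doesn't expose them — and it assumes the clip's channel count and sample rate match the audio output, and that the clip's load type allows GetData (Decompress On Load):

```csharp
using UnityEngine;

// Attach next to an AudioSource; we generate the output samples ourselves
// and loop between custom sample positions. Error handling omitted.
public class LoopPointPlayer : MonoBehaviour
{
    public AudioClip clip;
    public int loopStartSample = 44100;   // assumed loop start (from smpl chunk)
    public int loopEndSample   = 132300;  // assumed loop end (from smpl chunk)

    float[] samples;   // whole clip, interleaved
    int channels;
    int position;      // current frame index into the clip

    void Start()
    {
        channels = clip.channels;
        samples = new float[clip.samples * channels];
        clip.GetData(samples, 0);  // requires Decompress On Load
    }

    // Runs on the audio thread: fill the buffer, jumping back to the
    // loop start whenever we pass the loop end.
    void OnAudioFilterRead(float[] data, int outChannels)
    {
        if (samples == null) return;
        for (int i = 0; i < data.Length; i += outChannels)
        {
            for (int c = 0; c < outChannels; c++)
                data[i + c] = samples[position * channels + c];
            if (++position >= loopEndSample)
                position = loopStartSample;
        }
    }
}
```

If the clip's sample rate differs from the output rate you would also have to resample in the callback, which is where it starts getting expensive.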
I have some time series data that I would like to turn into movies. The data could be 2D (about 500x10000) or 3D (500x500x10000). For 2D data, the movie frames are simply line plots using plot; for 3D data, we can use surf, imagesc, contour, etc. We then create a video file from these frames in MATLAB and compress it using ffmpeg.
To do this fast, one would avoid rendering all the images to the display, and avoid saving the data to disk only to read it back again during the process. Usually one would use getframe or VideoWriter to create a movie in MATLAB, but they seem to get tricky as soon as one tries not to display the figures on screen. Some even suggest plotting in hidden figures, saving them to disk as .png images, then compressing those with ffmpeg (e.g. with the x265 encoder into .mp4). However, saving the output of imagesc on my iMac took 3.5 s the first time, then 0.5 s afterwards, and I find it too slow to write so many files to disk only to have ffmpeg read them all again. One could hardcopy the data as this suggests, but I am not sure whether that works regardless of the plotting method (e.g. plot, surf, etc.), or how one would transfer the data over to ffmpeg with minimal disk access.
This is similar to this question, but immovie is too slow. This post is similar, but advocates writing images to disk and then reading them back (slow I/O).
Maybe what you're trying to do is convert your data into an image by doing the same kind of operation that surf, imagesc, or contour does, and then write it to a file directly; that would keep all the data in memory until writing is needed.
I have a little experience with real images that could also work here:
I saw that calling imshow took a lot of time, but changing the CData of an image previously created by imshow took around 5 ms. So maybe you could set up a figure using whichever of those functions you like, and then update the underlying CData, XData, YData, etc. so that the figure updates in the same fashion?
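That update-in-place idea, combined with VideoWriter so no intermediate image files hit the disk, could look like this (a sketch — the variable and file names are placeholders, and capturing an invisible figure with getframe may depend on your MATLAB version):

```matlab
% Assumes 'data' is an M-by-N-by-T array of frames.
fig = figure('Visible', 'off');        % render off-screen
h = imagesc(data(:, :, 1));            % create the image object once
axis off;

vw = VideoWriter('out.avi');           % placeholder output file
open(vw);
for t = 1:size(data, 3)
    set(h, 'CData', data(:, :, t));    % update in place, no re-plotting
    writeVideo(vw, getframe(fig));
end
close(vw);
```

The same pattern works with a plot or surf object: create it once, then update its XData/YData/ZData inside the loop instead of CData.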
best of luck!
I have a video which is represented as AVURLAsset. It comes from Camera Roll and is at full resolution.
Now if I only need it with 380 pixels width, what would be the fastest and most efficient way to get a downsampled copy of the video?
Is AVAssetExportSession the way to go? It looks like AVAssetExportSession only works with presets, but in this case I want to specify custom pixel dimensions:
"An AVAssetExportSession object transcodes the contents of an AVAsset
source object to create an output of the form described by a specified
export preset."
Or must I look at other classes in AVFoundation? Or other frameworks even?
AVMutableVideoComposition has a renderSize property, and AVAssetWriter/AVAssetWriterInput can resize frames however you want (you supply the output size/settings in a dictionary), along with other tweaks. The trade-off is that you have to set the whole pipeline up yourself (asset reader, feeding frames, etc.). If you find a better solution, let me know :)
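The renderSize route is the lighter of the two, since you can hand the composition to a plain AVAssetExportSession. A sketch, assuming a single video track and ignoring rotation (preferredTransform) and error handling; the 30 fps frame duration is an assumption:

```swift
import AVFoundation
import CoreGraphics

func export(asset: AVAsset, to outputURL: URL, width: CGFloat,
            completion: @escaping () -> Void) {
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    // Scale everything down so the output is `width` pixels wide.
    let scale = width / track.naturalSize.width
    let renderSize = CGSize(width: width,
                            height: track.naturalSize.height * scale)

    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
    layerInstruction.setTransform(CGAffineTransform(scaleX: scale, y: scale), at: .zero)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: asset.duration)
    instruction.layerInstructions = [layerInstruction]

    let composition = AVMutableVideoComposition()
    composition.renderSize = renderSize
    composition.frameDuration = CMTime(value: 1, timescale: 30) // assumed fps
    composition.instructions = [instruction]

    guard let session = AVAssetExportSession(asset: asset,
                                             presetName: AVAssetExportPresetHighestQuality)
    else { return }
    session.outputURL = outputURL
    session.outputFileType = .mp4
    session.videoComposition = composition
    session.exportAsynchronously(completionHandler: completion)
}
```

Without the matching scale transform, changing renderSize alone would crop the frames rather than downscale them.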
I'm trying to track objects in separate frames of a video. If I do background subtraction before storing the images, the file sizes are much smaller (about one fifth), so I was wondering if I could also read these images faster, since most of the pixels are zero. Still, plain imread didn't make any difference.
I also tried imread's 'PixelRegion' option to load only the locations of the objects, but that didn't work either, since there are around ten objects in each frame.
It may be faster to store the frames as a video file, rather than individual images. Then you can read them using vision.VideoFileReader from the Computer Vision System Toolbox.
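For example (placeholder file name; requires the Computer Vision System Toolbox):

```matlab
% Read frames sequentially from a video file instead of separate images.
reader = vision.VideoFileReader('frames.avi');  % placeholder file name
while ~isDone(reader)
    frame = step(reader);
    % ... track the objects in 'frame' here ...
end
release(reader);
```

Sequential reads from one container file avoid per-image open/decode overhead, which is usually what dominates when loading thousands of small files.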
I have a video to process frame by frame, and I want to extract the key frames from it. So the first task is to grab all the frames. We can read an AVI video using aviread, but I have no idea how to extract the RGB frames. Secondly, if someone can point me to a MATLAB implementation of key frame extraction (using any standard method), or post the code here, that would be great.
To extract the frames, use the function frame2im. Here is an example of how to do it.
For the key frame extraction, I suggest you use some kind of similarity measure (e.g. cross-correlation, histogram distance, optical flow), and look for large changes between neighbouring frames.
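A rough sketch of the histogram-distance variant, using VideoReader (the current way to read AVI files) — the file name is a placeholder and the threshold is an assumption you would tune for your footage:

```matlab
% Flag a frame as a key frame when its normalised intensity histogram
% differs a lot (L1 distance) from the previous frame's.
v = VideoReader('input.avi');   % placeholder file name
threshold = 0.3;                % assumed; tune per video
prevHist = [];
frameIdx = 0;
keyFrames = [];

while hasFrame(v)
    rgb = readFrame(v);                  % RGB frame as an M-by-N-by-3 array
    frameIdx = frameIdx + 1;
    g = rgb2gray(rgb);
    h = imhist(g) / numel(g);            % normalised 256-bin histogram
    if ~isempty(prevHist) && sum(abs(h - prevHist)) > threshold
        keyFrames(end+1) = frameIdx;     %#ok<AGROW> record key frame index
    end
    prevHist = h;
end
```

Swapping the histogram distance for cross-correlation or an optical-flow magnitude only changes the per-frame measure; the scan-and-threshold structure stays the same.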