I currently have a video that has been selected out with a mask and saved as a separate video. The surrounded regions of the mask are all black. I want to superimpose this mask video onto another video of the same dimensions, replacing the black pixels with the pixels in the underlying video. Is this sort of thing possible in Matlab?
Any help would be greatly appreciated!
One straightforward approach is to extract frames from both videos and replace all the black pixels in each frame of the first video with the corresponding pixel values from the second.
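The per-frame replacement can be sketched in Python with NumPy for clarity; the same logic maps directly onto MATLAB's logical indexing (with `VideoReader`/`VideoWriter` handling the frame I/O). The function name and the `threshold` parameter are illustrative, not part of any library:

```python
import numpy as np

def composite_masked_frame(masked, background, threshold=0):
    """Replace the black (masked-out) pixels of one frame with the
    corresponding pixels of a background frame of the same size.

    A pixel counts as "black" when every colour channel is <= threshold;
    a small positive threshold helps with compression noise around the mask.
    """
    masked = np.asarray(masked)
    background = np.asarray(background)
    assert masked.shape == background.shape, "frames must have equal dimensions"
    # Boolean mask of pixels where all channels are at or below the threshold.
    black = np.all(masked <= threshold, axis=-1)
    out = masked.copy()
    out[black] = background[black]
    return out
```

Looping this over paired frames and re-encoding gives the composited video; in MATLAB the core line is simply `frame1(repmat(allblack, [1 1 3])) = frame2(repmat(allblack, [1 1 3]))` with a comparable logical mask.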
Example Image
I have two images captured with my iPhone 8 Plus, each at a different zoom level of the camera (one image was captured at 1x zoom and the other at 2x zoom). I now want to extract the patch of the 1x image that corresponds to the zoomed-in 2x image. How would one go about doing that?
I understand that SIFT features may be helpful, but is there a way I could use the camera's extrinsic and intrinsic matrices to find the desired region of interest?
(I'm just looking for hints)
I'm trying to get snapshots from the screen. I don't necessarily need the full hi-res stills, just the image from the viewport without the 3D models I've overlaid already.
If you mean the image that is captured by the camera, it's available under
sceneView.session.currentFrame?.capturedImage
https://developer.apple.com/documentation/arkit/arframe/2867984-capturedimage
The pixel buffer is in the YCbCr format, so you might need to access the luma and chroma planes and convert them to RGB, depending on what kind of image you're trying to produce. For a CGImage/UIImage I think it's pretty straightforward: How to turn a CVPixelBuffer into a UIImage?
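As a rough sketch of the per-pixel math (full-range BT.601 coefficients; the actual `capturedImage` buffer is bi-planar, so in practice you'd read luma and chroma from their separate planes, or let Core Image/vImage do the conversion), here it is in Python. `ycbcr_to_rgb` is a made-up helper name:

```python
def ycbcr_to_rgb(y, cb, cr):
    """Convert one full-range BT.601 YCbCr sample (0-255) to 8-bit RGB."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    # Round and clamp each channel into the valid 8-bit range.
    clamp = lambda v: max(0, min(255, int(round(v))))
    return clamp(r), clamp(g), clamp(b)
```

Note that ARKit's buffers may use video-range or BT.709 coefficients depending on the device, so check the pixel format of the buffer before hard-coding a matrix.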
What is the method for cropping and combining video using AVFoundation? I need to crop away x number of pixels from the top of a video and fill that portion with a different video. Is this possible?
Yes, it's possible, but tricky. You need to:
create an AVMutableComposition with two video tracks containing the videos you want to combine (the layering itself is described by an AVMutableVideoComposition).
shift the video on top up by the amount you want. You do this by working out the appropriate affine transform and building the AVMutableVideoCompositionInstructions with that transform applied.
It's all very messy. These slides will help you through it:
http://www.slideshare.net/invalidname/advanced-av-foundation-cocoaconf-aug-11
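The affine transform in step 2 is just a translation. A minimal sketch of the geometry in Python (helper names are hypothetical; in AVFoundation you'd build the equivalent `CGAffineTransform` and hand it to the layer instruction's `setTransform:atTime:`):

```python
def translation_transform(tx, ty):
    """Row-major 3x3 affine matrix, analogous to a CGAffineTransform
    made with translationX:y: -- only the last column carries the shift."""
    return [[1, 0, tx],
            [0, 1, ty],
            [0, 0, 1]]

def apply_transform(m, x, y):
    """Apply the affine matrix to a point (x, y)."""
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])
```

Translating the top track by a negative y moves it up, so the cropped strip of the render area is left for the other track to show through.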
Unfortunately, no. There's no way to crop or mask video on the fly in AVFoundation. You could translate the video down the render area (effectively 'cropping' the bottom) using AVMutableVideoComposition and AVMutableVideoCompositionLayerInstructions setTransform:atTime:
I'm trying to overlay an image on a live video. I have used an alpha blending method to overlay the image on the video. On overlaying, the image appears five times rather than once as expected.
Both the frame data and the image data are handled as BYTE* for overlaying and display.
The image used is a bitmap image.
The data (BYTE*) of both the video and the image is blended, and the result is stored back in the video's buffer and then drawn on the picture control in VC++.
The video resolution is 640x480.
The image I'm overlaying is 128x128.
The IDE used is Visual Studio Professional. The code is written in C++.
How do I overlay the bitmap as a single image on the live video at a specific position?
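One plausible cause of the five-fold repetition, given that 640 / 128 = 5, is a row-stride mix-up: if the destination offset advances by the image width rather than the frame width per row, the overlay wraps around and tiles across the frame. This is a guess from the numbers, not a diagnosis. A sketch of stride-correct blending in Python over flat byte buffers (`blend_image_into_frame` is an illustrative name; the same index arithmetic applies to your BYTE* buffers in C++):

```python
def blend_image_into_frame(frame, fw, fh, image, iw, ih, x0, y0, alpha, bpp=3):
    """Alpha-blend a small image into a larger frame at position (x0, y0).

    Both buffers are flat, row-major, with `bpp` bytes per pixel. The key
    point: the destination index must use the FRAME row stride (fw * bpp),
    while the source index uses the IMAGE row stride (iw * bpp). Mixing
    them up makes the overlay repeat across the frame.
    """
    out = bytearray(frame)
    for row in range(ih):
        for col in range(iw):
            for c in range(bpp):
                src = (row * iw + col) * bpp + c
                dst = ((y0 + row) * fw + (x0 + col)) * bpp + c
                out[dst] = int(alpha * image[src] + (1 - alpha) * out[dst])
    return bytes(out)
```

In C++ the same fix means computing the destination pointer from the frame pitch (which for a bitmap may also be padded to a 4-byte boundary per row) rather than from the image width.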
I'm trying to play around with turning an image into mosaic bricks like the Lego Photo app.
How is it done, and where can I find more info?
You basically need to iterate through the pixels and calculate the average colour of, say, every 4x4 block of pixels. Once you have this average colour, you 'round' it to the nearest colour available in your mosaic palette. I don't know the specifics of it, but this sample code does exactly what you want.
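The two steps above (block averaging, then rounding to the nearest brick colour) can be sketched in Python; both helper names are made up for illustration:

```python
def average_block(pixels, width, x0, y0, block):
    """Average colour of a block x block patch of a row-major RGB pixel list."""
    total = [0, 0, 0]
    for y in range(y0, y0 + block):
        for x in range(x0, x0 + block):
            r, g, b = pixels[y * width + x]
            total[0] += r
            total[1] += g
            total[2] += b
    n = block * block
    return tuple(t // n for t in total)

def nearest_palette_colour(colour, palette):
    """'Round' a colour to the closest brick colour by squared RGB distance."""
    return min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, colour)))
```

Running `average_block` over every non-overlapping 4x4 tile and mapping each result through `nearest_palette_colour` gives one brick colour per tile, which you then draw as a filled square (or a brick sprite) of the tile size.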