Perhaps I'm misunderstanding the capabilities of AVComposition, but I have a task that I don't know how to approach.
I have a background video, which is a video of a tree;
I also have a foreground video, which is a video of a horse;
I have a transparent .png mask, which is a circle (shown with background for clarity).
My goal is to create an H.264 video, saved to the user's device, that shows the tree in the background with the horse video masked into a circle.
While I believe using init(asset:applyingCIFiltersWithHandler:) to apply a CIBlendWithMask filter could be feasible, I have no idea how I would render the "masked" video (since it would be transparent around the circle) over the background video.
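For what it's worth, the per-frame blend itself seems straightforward; here is a rough sketch of what I picture, assuming I could somehow obtain matching foreground and background frames as CIImages (which is exactly the part I don't know how to do, along with writing the result out as H.264):

import CoreImage

// Rough sketch: blend one horse frame over one tree frame through the circle mask.
// Pulling synchronized frames from the two videos and feeding them to an
// AVAssetWriter is the part I'm missing.
func compositeFrame(foreground: CIImage,  // frame from the horse video
                    background: CIImage,  // frame from the tree video
                    mask: CIImage,        // the circular PNG as a CIImage
                    context: CIContext) -> CGImage? {
    let blend = CIFilter(name: "CIBlendWithMask")!
    blend.setValue(foreground, forKey: kCIInputImageKey)
    blend.setValue(background, forKey: kCIInputBackgroundImageKey)
    blend.setValue(mask, forKey: kCIInputMaskImageKey)
    guard let output = blend.outputImage else { return nil }
    return context.createCGImage(output, from: background.extent)
}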
Related
Overstretched Photo Frame
Render texture settings
So, basically, my problem is with my Unity game: I want it to look like a PS1-era game, so I created a render texture and set its size to 225x240. I also added a Raw Image so the camera output can actually be displayed, and I stretched it across the screen so the player can see the map full screen. Now comes the problem: everything is stretched. The floor, the paintings, the table, etc. All the 3D objects on the map are stretched. I would like to know why this happens and what I have to do to prevent it. I think it has something to do with the Raw Image, as it is stretched too, but I do not know how to fix it directly.
Raw Image stretch.
I am using Unity 2019.3.5f1 and I am using the Universal Render Pipeline (URP).
I am trying to use the post-processing in URP for the foreground only (in my case, the players), and I want to leave the background (which in my case is just a quad with a texture) as is.
I have tried using the Camera Stack, but it won't work for me because the overlay camera can't have post-processing effects, according to the documentation.
The only solution I could come up with is to create some sort of custom render setup which would:
Render the background to buffer A.
Render the foreground to buffer B, and save the depth.
Combine the two using a shader that takes both textures and the depth texture and, based on the depth, picks buffer A or buffer B.
The problem with this, I believe, is that I can't use it with Unity's post-processing.
Any idea what I can do?
EDIT:
I tried another thing in Unity which seems not to be working (might be a bug):
I created 3 cameras: a Foreground Camera, a Depth Camera (which only renders the foreground), and a Background Camera.
I set up the Depth Camera to render to a render texture, and indeed I now have a render texture with the proper depth I need.
Now, from here everything went wrong; there seem to be odd things happening when using Unity's new post-processing (the built-in one):
The Foreground Camera is set to Tag=MainCamera, and when I enable Post Processing and add an effect, I can indeed see it (as expected).
The Background Camera is essentially a duplicate of the Foreground one, but with Tag=Untagged; I use the same options (Post Processing enabled).
Now, the expected result is that we see the Background Camera with effects like the Foreground one, but no:
When using a Volume Mask on my background layer, the post-processing just turns off; no effect at all, no matter what (and I have set my background to the Background layer).
When I disable the Foreground Camera (or remove its tag) and set the Background Camera to MainCamera, still nothing changes; the post-processing still won't work.
When I set the Volume Mask to Default (or Everything), the result is shown ONLY in the Scene view. I tried rendering the camera to a RenderTexture, but still, you can clearly see no effect is applied!
I am creating a Virtual Reality app using Google's GVRKit for Google Cardboard.
I need to implement transparency in GVRVideoRenderer.
The use case for this is playing a video over a 360° photo background to give a VR effect.
I have used chroma key blending with an input image and a mask image to implement the transparency in the AVPlayerItem:
let filter = AlphaFrameFilter()
filter.inputImage = request.sourceImage.cropped(to: sourceRect)
filter.maskImage = request.sourceImage.cropped(to: alphaRect).transformed(by: transform)
Unfortunately, when passing this into GVRVideoRenderer, the view still has a black background.
How can I fix this?
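For reference, this is roughly how I apply the filter, in case the problem is in this layer rather than in GVRVideoRenderer (a sketch: asset and playerItem stand in for my actual objects; sourceRect, alphaRect and transform are the same as above):

import AVFoundation
import CoreImage

// The snippet above runs inside the CI-filtering handler of an
// AVMutableVideoComposition attached to the player item.
let videoComposition = AVMutableVideoComposition(asset: asset) { request in
    let filter = AlphaFrameFilter()   // custom CIFilter with inputImage / maskImage
    filter.inputImage = request.sourceImage.cropped(to: sourceRect)
    filter.maskImage = request.sourceImage.cropped(to: alphaRect).transformed(by: transform)
    request.finish(with: filter.outputImage ?? request.sourceImage, context: nil)
}
playerItem.videoComposition = videoComposition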
Is it possible to composite green-screen images -- an animated actor against a green background -- with a backdrop photo and make a video of that using AVAssetWriter on the iPhone?
I have an application that creates a sequence of screenshots of an animated character against a green background. I'd like to composite those with a photograph from their library.
Is there some way to composite the two into a video on the iPhone?
Thanks,
Yes, there is. I just added a chroma key filter to my GPUImage framework, which should let you do realtime green screen effects from camera, image, or movie sources. You just need to use a GPUImageChromaKeyBlendFilter, set the color you want to replace in the first image or video source, set the sensitivity threshold, and optionally set the amount of smoothing to use on colors that are not quite matches of your target.
It acts like the other blend filters in the framework, where you supply the video source to filter as the first input to the filter, and the image or video to replace your target color with as the second input.
I haven't yet tuned this particular filter for performance, but you should easily be able to get 30 FPS processing for 640x480 frames on an older iPhone 4 (~15-20 FPS for 720p).
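As a rough sketch of the wiring (Swift shown; greenScreenMovieURL, backdropPhoto, outputURL, the output size, and the threshold values are placeholders you'd replace with your own):

import GPUImage
import UIKit

// Key the green out of the movie and blend it over a still backdrop photo,
// then write the composite out to disk.
let movie = GPUImageMovie(url: greenScreenMovieURL)   // footage of the actor on green
let backdrop = GPUImagePicture(image: backdropPhoto)  // photo from the user's library

let chromaKeyBlend = GPUImageChromaKeyBlendFilter()
chromaKeyBlend.setColorToReplaceRed(0.0, green: 1.0, blue: 0.0)  // color to key out
chromaKeyBlend.thresholdSensitivity = 0.4                        // how close a match must be
chromaKeyBlend.smoothing = 0.1                                   // soften near-matches

movie?.addTarget(chromaKeyBlend)     // first input: the source being keyed
backdrop?.addTarget(chromaKeyBlend)  // second input: what replaces the keyed color

let writer = GPUImageMovieWriter(movieURL: outputURL, size: CGSize(width: 640, height: 480))
chromaKeyBlend.addTarget(writer)

backdrop?.processImage()
writer?.startRecording()
movie?.startProcessing()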
I have a question about the UIImagePickerController class, in photo-camera mode.
I'm developing an OpenGL game. I need to add photo-camera features to the game: the game should open a window (say, for example, 200x200 pixels in the middle of the screen) that displays a real-time camera preview IN FRONT OF THE GL VIEWPORT. So the camera preview must be in a 200x200 window in front of the GL viewport that displays the game.
I've some questions:
- The main problem is that I'm having difficulty opening the UIImagePickerController window in front of the GL viewport (normally the UIImagePickerController window covers the whole iPhone screen);
- What is the best way to capture an image buffer periodically to perform some operations, like face detection (we have a library that does this on a bitmap image)?
- Could this approach be rejected from the App Store? Is it possible to do this with the camera (a camera preview window that partially overlaps an OpenGL viewport)?
- Finally, is it possible to avoid showing the camera shutter? I'd like to initialize the camera without the opening sound and the shutter animation.
This is a screenshot:
http://www.powerwolf.it/temp/UIImagePickerController.jpg
If you want to do something more custom than what Apple intended for the UIImagePickerController, you'll need to use the AV Foundation framework instead. The camera input can be rendered into a layer or view. Here is an example that will get you halfway there (it is intended for frame capture). You could modify it for face detection by taking sample images using a timer. As long as you use these public APIs, it'll be accepted in the App Store. There are a bunch of augmented reality applications that use similar techniques.
http://developer.apple.com/library/ios/#qa/qa2010/qa1702.html
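Roughly, the approach looks like this (a sketch in modern Swift for brevity; glView, the frame rectangle, and the queue label are placeholders, and you still need the usual camera usage permission):

import AVFoundation
import UIKit

// A small live camera preview placed in front of an existing GL view, plus a
// video data output whose frames can be sampled periodically for face detection,
// as in the linked QA1702 sample.
final class CameraOverlay: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()

    func attach(to glView: UIView) {
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.addInput(input)

        // 200x200 preview layer drawn in front of the GL viewport.
        let preview = AVCaptureVideoPreviewLayer(session: session)
        preview.frame = CGRect(x: 60, y: 140, width: 200, height: 200)
        preview.videoGravity = .resizeAspectFill
        glView.layer.addSublayer(preview)

        // Frames arrive on the delegate below; sample them on a timer if you
        // only need an occasional bitmap for detection.
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        if session.canAddOutput(output) { session.addOutput(output) }

        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Convert sampleBuffer to a bitmap here when a detection pass is due.
    }
}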