How can I play transparent videos in GVRKit? - swift

I am creating a Virtual Reality app using Google's GVRKit for Google Cardboard.
I need to implement transparency in GVRVideoRenderer.
The use case is playing a video over a 360° photo background to create a VR effect.
I have used chroma-key blending with an input image and a mask image to implement the transparency in the AVPlayerItem:
let filter = AlphaFrameFilter()
filter.inputImage = request.sourceImage.cropped(to: sourceRect)
filter.maskImage = request.sourceImage.cropped(to: alphaRect).transformed(by: transform)
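For context, a minimal sketch of how a handler like this is typically wired into the player item, assuming the filter runs inside AVMutableVideoComposition(asset:applyingCIFiltersWithHandler:) and that videoURL, sourceRect, alphaRect, and transform are placeholders defined elsewhere:
import AVFoundation
import CoreImage

let asset = AVAsset(url: videoURL)
let videoComposition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
    let filter = AlphaFrameFilter()
    filter.inputImage = request.sourceImage.cropped(to: sourceRect)
    filter.maskImage = request.sourceImage.cropped(to: alphaRect).transformed(by: transform)
    // Hand the blended frame (with alpha) back to AVFoundation; fall back to the source frame on failure.
    request.finish(with: filter.outputImage ?? request.sourceImage, context: nil)
})
videoComposition.renderSize = sourceRect.size   // render only the colour half of the stacked frame
let playerItem = AVPlayerItem(asset: asset)
playerItem.videoComposition = videoComposition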
Unfortunately, when passing this into GVRVideoRenderer, the view still has a black background.
How can I fix this?

Related

Mask Video Into Shape and Compose Over Another Video (AVComposition)

Perhaps I'm misunderstanding the capabilities of AVComposition, but I have a task that I am failing to know how to approach.
I have a background video, which is a video of a tree;
I also have a foreground video, which is a video of a horse;
I have a transparent .png mask, which is a circle (shown with background for clarity).
My goal is to create an H.264 video, saved to the user's device, that shows the tree in the background with the horse video masked into a circle.
While I believe using init(asset:applyingCIFiltersWithHandler:) to apply a CIBlendWithMask filter could be feasible, I have no idea how I would render the "masked" video (since it would be transparent around the circle) over the background video.
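Not a full answer, but to make the CIBlendWithMask idea above concrete, here is a minimal per-frame sketch. horseFrame and treeFrame stand for CIImages of one foreground/background frame pair, circleMaskURL points at the circle .png, and pulling matching frames from both videos and writing the result with AVAssetWriter is the part left open:
import CoreImage

let mask = CIImage(contentsOf: circleMaskURL)!                  // the circle alpha mask
let blend = CIFilter(name: "CIBlendWithMask")!
blend.setValue(horseFrame, forKey: kCIInputImageKey)            // foreground: the horse
blend.setValue(treeFrame, forKey: kCIInputBackgroundImageKey)   // background: the tree
blend.setValue(mask, forKey: kCIInputMaskImageKey)              // where the mask is opaque, the horse shows
let composited = blend.outputImage                              // horse masked into a circle over the tree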

Apply custom camera filters on live camera preview - Swift

I'm looking to make a native iPhone iOS application in Swift 3/4 which uses the live preview of the back-facing camera and allows users to apply filters like in the built-in Camera app. The idea is to create my own filters by adjusting hue/RGB/brightness levels, etc. Eventually I want to create a hue slider which allows users to filter for specific colours in the live preview.
All of the answers I came across for a similar problem were posted > 2 years ago and I'm not even sure if they provide me with the relevant, up-to-date solution I am looking for.
I'm not looking to take a photo and then apply a filter afterwards. I'm looking for the same functionality as the native Camera app: applying the filter live as you watch the camera preview.
How can I create this functionality? Can this be achieved using AVFoundation? AVKit? Can this functionality be achieved with ARKit perhaps?
Yes, you can apply image filters to the camera feed by capturing video with the AVFoundation Capture system and using your own renderer to process and display video frames.
Apple has a sample code project called AVCamPhotoFilter that does just this, and shows multiple approaches to the process, using Metal or Core Image. The key points are to:
Use AVCaptureVideoDataOutput to get live video frames.
Use CVMetalTextureCache or CVPixelBufferPool to get the video pixel buffers accessible to your favorite rendering technology.
Draw the textures using Metal (or OpenGL or whatever) with a Metal shader or Core Image filter to do pixel processing on the GPU during your render pass.
BTW, ARKit is overkill if all you want to do is apply image processing to the camera feed. ARKit is for when you want to know about the camera’s relationship to real-world space, primarily for purposes like drawing 3D content that appears to inhabit the real world.
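A minimal sketch of that capture-plus-filter route, using Core Image and a plain UIImageView for display instead of the Metal path the sample project takes (the class name and the CIHueAdjust filter are just illustrative choices):
import AVFoundation
import CoreImage
import UIKit

// Sketch: grab live frames with AVCaptureVideoDataOutput and run them through a Core Image filter.
final class FilteredPreviewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let ciContext = CIContext()
    private let previewImageView = UIImageView()
    private let hueFilter = CIFilter(name: "CIHueAdjust")!   // swap in whatever filter you like

    override func viewDidLoad() {
        super.viewDidLoad()
        previewImageView.frame = view.bounds
        view.addSubview(previewImageView)

        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "video.frames"))
        session.addOutput(output)
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        hueFilter.setValue(CIImage(cvPixelBuffer: pixelBuffer), forKey: kCIInputImageKey)
        hueFilter.setValue(1.5, forKey: kCIInputAngleKey)   // hue rotation in radians; drive this from a slider
        guard let filtered = hueFilter.outputImage,
              let cgImage = ciContext.createCGImage(filtered, from: filtered.extent) else { return }
        DispatchQueue.main.async {
            self.previewImageView.image = UIImage(cgImage: cgImage)
        }
    }
}
You'll also need an NSCameraUsageDescription entry in Info.plist before the capture session will deliver frames.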

Camera screen size on iPhone 5

I am designing a camera app in iOS. I see that video uses the whole screen on the iPhone 5 but the still camera does not, so I would like to know the width and height of the preview that the default Camera app uses, to design my app accordingly.
That's because, unlike the video feed, the still-camera feed has an aspect ratio that differs from the device's screen. To rectify this, you'll need to resize your preview layer to match the aspect ratio of the feed, or change the video gravity property of your video preview layer.
One of these should suffice:
[myPreviewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill]; // fill the layer, cropping the feed as needed
[myPreviewLayer setVideoGravity:AVLayerVideoGravityResize]; // stretch the feed to fill the layer, ignoring its aspect ratio

AVAssetWriter with green screen or chroma key

Is it possible to composite green-screen images (an animated actor against a green background) with a backdrop photo and make a video of that using AVAssetWriter on the iPhone?
I have an application that creates a sequence of screenshots of an animated character against a green background. I'd like to composite those with a photograph from their library.
Is there some way to composite the two into a video on the iPhone?
Thanks,
Yes, there is. I just added a chroma key filter to my GPUImage framework, which should let you do realtime green screen effects from camera, image, or movie sources. You just need to use a GPUImageChromaKeyBlendFilter, set the color you want to replace in the first image or video source, set the sensitivity threshold, and optionally set the amount of smoothing to use on colors that are not quite matches of your target.
It acts like the other blend filters in the framework: you supply the video source to filter as the first input to the filter, and the image or video to replace your target color with as the second input.
I haven't yet tuned this particular filter for performance, but you should easily be able to get 30 FPS processing for 640x480 frames on an older iPhone 4 (~15-20 FPS for 720p).
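In Swift terms, a rough sketch of that setup might look like the following (GPUImage 1 API names; greenScreenURL, backdropImage, and outputURL are placeholders, and the size and thresholds are only examples):
import GPUImage

let chromaKey = GPUImageChromaKeyBlendFilter()
chromaKey.setColorToReplaceRed(0.0, green: 1.0, blue: 0.0)  // key out pure green
chromaKey.thresholdSensitivity = 0.4                        // how close a pixel must be to the key colour
chromaKey.smoothing = 0.1                                   // feather colours that are near misses

// First input: the footage to filter; second input: what replaces the keyed colour.
let actorFootage = GPUImageMovie(url: greenScreenURL)
let backdrop = GPUImagePicture(image: backdropImage)
actorFootage.addTarget(chromaKey)
backdrop.addTarget(chromaKey)

// Record the composited result to a movie file.
let writer = GPUImageMovieWriter(movieURL: outputURL, size: CGSize(width: 640, height: 480))
chromaKey.addTarget(writer)
writer.startRecording()
backdrop.processImage()
actorFootage.startProcessing()
// Call writer.finishRecording() once the source movie has finished processing.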

iPhone UIImagePickerController: show camera preview on top of OpenGL viewport

I have a question about the UIImagePickerController class reference, in photo-camera mode.
I'm developing an OpenGL game and I need to add a photo-camera feature to it: the game should open a window (say, 200x200 pixels, in the middle of the screen) that displays a real-time camera preview IN FRONT OF the GL viewport. So the camera preview must sit in a 200x200 window in front of the GL viewport that displays the game.
I have some questions:
- The main problem is that I'm having difficulty opening the UIImagePickerController window in front of the GL viewport (normally the UIImagePickerController window covers the whole iPhone screen);
- What is the best way to capture an image buffer periodically to perform some operations, such as face detection (we have a library that does this on a bitmap image)?
- Could Apple reject such an approach? Is it possible to do this with the camera (a camera preview window that partially overlaps an OpenGL viewport)?
- Finally, is it possible to avoid showing the camera shutter? I'd like to initialize the camera without the shutter sound and the shutter animation.
This is a screenshot:
http://www.powerwolf.it/temp/UIImagePickerController.jpg
If you want to do something more custom than what Apple intended for the UIImagePickerController, you'll need to use the AV Foundation framework instead. The camera input can be ported to a layer or view. Here is an example that will get you half way there (it is intended for frame capture). You could modify it for face detection by taking sample images using a timer. As long as you use these public APIs, it'll be accepted in the app store. There are a bunch of augmented reality applications that use similar techniques.
http://developer.apple.com/library/ios/#qa/qa2010/qa1702.html
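In today's Swift terms, a rough sketch of that AV Foundation approach: a small preview layer placed over the existing GL view, plus a data output you could sample from a timer for face detection. glView and the 200x200 frame are assumptions.
import AVFoundation
import UIKit

// Sketch: a 200x200 camera preview layer on top of an existing (e.g. GL-backed) view.
// No shutter UI or shutter sound is involved with this API.
func addCameraPreview(over glView: UIView) -> AVCaptureSession? {
    let session = AVCaptureSession()
    guard let camera = AVCaptureDevice.default(for: .video),
          let input = try? AVCaptureDeviceInput(device: camera),
          session.canAddInput(input) else { return nil }
    session.addInput(input)

    // Small preview window centered over the GL viewport.
    let previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer.frame = CGRect(x: glView.bounds.midX - 100, y: glView.bounds.midY - 100,
                                width: 200, height: 200)
    previewLayer.videoGravity = .resizeAspectFill
    glView.layer.addSublayer(previewLayer)

    // Frames for periodic processing (e.g. face detection), as in Apple's QA1702:
    // set a sample buffer delegate on this output to receive pixel buffers.
    let output = AVCaptureVideoDataOutput()
    session.addOutput(output)

    session.startRunning()
    return session
}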