I believe native reading of the OpenEXR format is (unofficially) supported on both recent macOS and iOS versions: Is OpenEXR supported on iOS / macOS?
I'd like to load an OpenEXR image to a Metal texture, probably via MTKTextureLoader.newTexture().
My problem is that Xcode doesn't recognise OpenEXR files as texture assets, only as data assets.
This means I cannot use MTKTextureLoader.newTexture(name: textureName, ...).
What cross-platform (recent macOS / iOS) options are there for reading an image from a data asset?
Since .newTexture supports CGImage, I'd guess the natural way would be to load the data into a CGImage, but I don't quite understand how.
Or should I simply make a URL out of the data asset's file and try to load that one?
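In case it helps, this is the kind of thing I'm imagining for the CGImage route, sketched in Swift (the function and error names are placeholders of mine, and I'm assuming Image I/O will actually decode the EXR data here):

import MetalKit
import ImageIO
#if canImport(UIKit)
import UIKit      // NSDataAsset lives in UIKit on iOS
#else
import AppKit     // ...and in AppKit on macOS
#endif

enum EXRLoadError: Error { case decodeFailed }   // placeholder error type

// Load a data asset, decode it with Image I/O, and hand the CGImage to MTKTextureLoader.
func loadEXRTexture(named assetName: String, device: MTLDevice) throws -> MTLTexture {
    guard let asset = NSDataAsset(name: assetName),
          let source = CGImageSourceCreateWithData(asset.data as CFData, nil),
          let cgImage = CGImageSourceCreateImageAtIndex(source, 0, nil) else {
        throw EXRLoadError.decodeFailed
    }
    let loader = MTKTextureLoader(device: device)
    // .SRGB: false so the loader doesn't treat the image data as gamma-encoded.
    return try loader.newTexture(cgImage: cgImage, options: [.SRGB: false])
}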
I'm trying to set up a custom video source for a video stream in Agora for Unity, following the instructions from Agora's developer center here (and particularly, the example code at the bottom):
https://docs.agora.io/en/Video/custom_video_unity?platform=Unity
THIS CODE WORKS. I can successfully send a video stream and watch it on another device and it looks correct.
However, the Unity console is reporting an error on every single frame, saying:
D3D11 unsupported ReadPixels destination texture format (14)
Unity's documentation for Texture2D.ReadPixels says that it works on RGBA32, ARGB32 and RGB24 texture formats, but Agora's example is using a texture in BGRA32 format.
If I alter the example to set the texture to RGBA32 format instead, then the program still works, except the colors are wrong--red and blue are swapped (unsurprisingly).
I tried to adjust the expected texture on Agora's end by modifying this line of the example:
externalVideoFrame.format = ExternalVideoFrame.VIDEO_PIXEL_FORMAT.VIDEO_PIXEL_BGRA;
But... there is no corresponding define for VIDEO_PIXEL_RGBA. The available options are VIDEO_PIXEL_UNKNOWN, VIDEO_PIXEL_I420, VIDEO_PIXEL_BGRA, VIDEO_PIXEL_NV12 and VIDEO_PIXEL_I422.
So... my app is functioning correctly, but I'm drowning in error messages of dubious significance, which seems like it's going to cause headaches for development and debugging down the road.
What can I do?
For the inverted color issue, make sure you have the same encoding format on the receiver side. If you are using the SDK script VideoSurface.cs, change the line where it instantiates the Texture (around line 172) so that it reads:
nativeTexture = new Texture2D((int)defWidth, (int)defHeight, TextureFormat.BGRA32, false);
(It was RGBA32 in the stock SDK code).
Update: this format issue has been resolved in version 3.0.1. If that version hasn't been released on the Asset Store yet, you can grab the beta to try it out. Check with the Slack channel here: https://agoraiodev.slack.com/messages/unity-help-me
I am making an iOS video player using ffmpeg; the flow looks like this:
Video file --> [FFmpeg decoder] --> decoded frames --> [media director] --> iPhone screen (full and partial)
The media director handles rendering decoded video frames to the iOS UI (UIView, UIWindow, etc.), outputting audio samples to the iOS speaker, and thread management.
SDL is one such library, but SDL is mainly aimed at game development and doesn't seem very mature on iOS.
What can be the substitute for SDL?
On Mac OS X I used CoreImage/CoreVideo for this, decoding frames into CVImageBuffers and rendering them into a Core Image context. I'm not sure Core Image contexts are supported on iOS, though. Maybe this thread will help on this: How to turn a CVPixelBuffer into a UIImage?
A better way on iOS might be to draw your frames with OpenGLES.
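In today's terms, the idea would be something like this rough Swift sketch (the names are mine; it assumes you already have one CVPixelBuffer per decoded frame):

import CoreImage
import UIKit

let ciContext = CIContext()   // reuse this; creating a context per frame is expensive

// Wrap a decoded CVPixelBuffer in a CIImage, render it, and hand the result to UIKit.
func image(from pixelBuffer: CVPixelBuffer) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}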
SDL itself uses OpenGL and FFmpeg; you can come pretty close using ffmpeg together with Apple's native APIs. We've done it with several video players.
This certainly will get you started.
https://github.com/mooncatventures-group
I need to process the video frames from a remote video in real-time and present the processed frames on screen.
I have tried using AVAssetReader, but because the AVURLAsset points to a remote URL, calling AVAssetReader's initWithAsset: results in a crash.
AVCaptureSession seems good, but it works with the camera and not a video file (much less a remote one).
As such, I am now exploring this: Display the remote video in an AVPlayerLayer, and then use GL ES to access what is displayed.
Questions:
How do I convert AVPlayerLayer (or a CALayer in general) to a CAEAGLLayer and read in the pixels using CVOpenGLESTextureCacheCreateTextureFromImage()?
Or is there some other better way?
Note: Performance is an important consideration, otherwise a simple screen capture technique would suffice.
As far as I know, Apple does not provide direct access to the h.264 decoder, and there is no way around that. One API you can use is the asset interface, where you give it a URL and the file on disk is read back as CoreVideo pixel buffers. What you could try is downloading from your URL and writing a new asset (a file in the tmp dir) one video frame at a time. Then, once the download has completed and the new h.264 file is fully written, close the writing session and open the file with an asset reader. You would not be able to stream with this approach; the entire file would need to be downloaded first. Otherwise, you could try the AVPlayerLayer approach to see whether it supports streaming directly. Be aware that the texture cache logic is not easy to implement: you need an OpenGL view already configured properly, and you would be better off looking at an existing implementation that already does the rendering rather than starting from scratch.
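To make the asset-reader half of that concrete, here is a rough Swift sketch of reading CoreVideo pixel buffers back out of a fully written local file (the function and file names are placeholders, and most error handling is omitted):

import AVFoundation

// Read decoded BGRA pixel buffers from a local file, e.g. the h.264 file your
// download step wrote into the tmp dir.
func readFrames(from fileURL: URL) throws {
    let asset = AVURLAsset(url: fileURL)
    let reader = try AVAssetReader(asset: asset)
    guard let videoTrack = asset.tracks(withMediaType: .video).first else { return }

    let output = AVAssetReaderTrackOutput(
        track: videoTrack,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
    reader.add(output)
    guard reader.startReading() else { return }

    while let sampleBuffer = output.copyNextSampleBuffer() {
        if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
            // ...render or process the CVPixelBuffer here...
            _ = pixelBuffer
        }
    }
}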
This is now possible on modern iOS. If you're able to represent your real-time processing with Core Image—and you should be able to given Core Image's extensive support for custom filters nowadays—you can make use of AVAsynchronousCIImageFilteringRequest to pass into an AVPlayerItem per the documentation.
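A minimal sketch of that route, assuming your processing can be expressed as a Core Image filter (the URL and the sepia filter below are placeholders for your own source and processing):

import AVFoundation
import CoreImage

let videoURL = URL(string: "https://example.com/video.mp4")!   // placeholder

// AVFoundation hands each frame to the handler as a CIImage; filter it and hand it back.
let asset = AVURLAsset(url: videoURL)
let item = AVPlayerItem(asset: asset)
item.videoComposition = AVMutableVideoComposition(asset: asset) { request in
    let processed = request.sourceImage
        .applyingFilter("CISepiaTone", parameters: [kCIInputIntensityKey: 0.8])
    request.finish(with: processed, context: nil)
}
let player = AVPlayer(playerItem: item)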
If you'd rather process things totally manually, you can check out AVPlayerItemVideoOutput and CVMetalTextureCache. With these, you can read sample buffers directly from a video and convert them into Metal textures from a texture buffer pool. From there, you can do whatever you want with the textures. Note with this approach, you are responsible for displaying the resultant textures (inside your own Metal or SceneKit rendering pipeline).
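A rough sketch of that path (function names are mine; error handling and the actual render loop are left out):

import AVFoundation
import CoreVideo
import Metal

// An output that vends BGRA pixel buffers; attach it to your AVPlayerItem with add(_:).
func makeVideoOutput() -> AVPlayerItemVideoOutput {
    let attributes: [String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    return AVPlayerItemVideoOutput(pixelBufferAttributes: attributes)
}

func makeTextureCache(device: MTLDevice) -> CVMetalTextureCache? {
    var cache: CVMetalTextureCache?
    let status = CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &cache)
    return status == kCVReturnSuccess ? cache : nil
}

// Call from a CADisplayLink or your Metal draw loop with the item's current time.
func copyTexture(from output: AVPlayerItemVideoOutput,
                 cache: CVMetalTextureCache,
                 at time: CMTime) -> MTLTexture? {
    guard output.hasNewPixelBuffer(forItemTime: time),
          let pixelBuffer = output.copyPixelBuffer(forItemTime: time, itemTimeForDisplay: nil) else {
        return nil
    }
    var cvTexture: CVMetalTexture?
    CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache, pixelBuffer, nil,
                                              .bgra8Unorm,
                                              CVPixelBufferGetWidth(pixelBuffer),
                                              CVPixelBufferGetHeight(pixelBuffer),
                                              0, &cvTexture)
    return cvTexture.flatMap(CVMetalTextureGetTexture)
}

The item time would typically come from output.itemTime(forHostTime: CACurrentMediaTime()) inside your display-link callback.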
Here's a blog post demonstrating this technique.
Alternately, if you'd rather not manage your own render pipeline, you can still use AVPlayerItemVideoOutput to grab sample buffers, process them with something like vImage and Core Image (ideally using a basic Metal-backed CIContext for maximum performance!), and send them to AVSampleBufferDisplayLayer to display directly in a layer tree. That way you can process the frames to your liking and still let AVFoundation manage the display of the layer.
I am working on video processing in iOS (iPhone/iPod/iPad) using Objective-C. I am using the AVFoundation framework to capture video. I want to encode/decode those video frames using ffmpeg-libx264. I have compiled the ffmpeg-x264 lib for iOS. I get kCVPixelFormatType_32BGRA buffers from AVFoundation.
My problems are:
1. How do I convert kCVPixelFormatType_32BGRA to an AVFrame for encoding with avcodec_encode_video?
2. How do I convert an AVFrame back to kCVPixelFormatType_32BGRA on the decode side, from avcodec_decode_video2?
Please help me get started with the above process, or point me to a working tutorial. Thanks in advance.
If you're trying to use FFmpeg, you'll need to use kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange rather than kCVPixelFormatType_32BGRA, and then you can shove it into an AVFrame. You'll probably also want to convert what you're getting from the iOS camera (YUV NV12) to YUV420P so you can receive it on other devices that aren't iOS. If you are only using iOS devices and that's all you care about, you can skip this side of the color conversion and just pack it into the AVFrame.
Since you're already putting it into a YUV format, you can just use CVPixelBufferGetBaseAddressOfPlane(buf,0) and encode that address.
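As a rough sketch of the CoreVideo side, in Swift for brevity (the ffmpeg half, i.e. actually filling the AVFrame, is assumed to come in through a bridging header and isn't shown):

import CoreVideo

// Hand the caller the Y plane and the interleaved CbCr plane of an NV12
// (kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) buffer, plus their strides.
func withNV12Planes(_ pixelBuffer: CVPixelBuffer,
                    _ body: (UnsafeMutableRawPointer?, Int, UnsafeMutableRawPointer?, Int) -> Void) {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    let yBase      = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)   // luma plane
    let yStride    = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
    let cbcrBase   = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1)   // interleaved CbCr plane
    let cbcrStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1)

    body(yBase, yStride, cbcrBase, cbcrStride)
}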
Once you decode the image, you'll need to change the colors to BGRA from YUV420P. If you didn't swap the colors properly in the first place before you encoded it, you'll just change YUVNV12 to BGRA.
Hope this helps a bit. You can find the proper color conversion algorithms online.
Is there a way to access the metadata (such as the width and height of the video) via the iOS SDK for an m4v inside the resource bundle? I would like to use this info to center the video within the screen bounds.
On a Mac this would be easy, because you can use the Spotlight metadata, but on iOS it's slightly more complicated. A couple of suggestions:
The easiest way to do this is just to load your movie into an MPMoviePlayerController and check the naturalSize property. There are a number of other properties including playableDuration. The downside to this is you have to load the movie into an MPMoviePlayerController, which may be fine for you if you're going to play the file straight away.
The harder but more efficient way is to use the open source libmediainfo library, but it's obviously more complex to integrate than using an MPMoviePlayerController 'out of the box'.
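For what it's worth, here is a sketch of the first suggestion using AVURLAsset instead of the now-deprecated MPMoviePlayerController ("intro" is a placeholder for the m4v name in your bundle):

import AVFoundation

// Read the natural size of a bundled m4v without creating a player.
func bundledVideoSize(named name: String, withExtension ext: String = "m4v") -> CGSize? {
    guard let url = Bundle.main.url(forResource: name, withExtension: ext) else { return nil }
    let asset = AVURLAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first else { return nil }
    // Apply the preferred transform so rotated video reports its on-screen dimensions.
    let size = track.naturalSize.applying(track.preferredTransform)
    return CGSize(width: abs(size.width), height: abs(size.height))
}

// e.g. let size = bundledVideoSize(named: "intro")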