The project I am working on takes an h264-encoded bitstream and displays the video frames.
The decoder, which takes raw bytes and outputs a CMSampleBuffer containing a video frame, is working well. Its formatDescription contains the expected values, it correctly interprets NALU types, and so on. I may be wrong, though, as this is my first time working with any of this, so if anyone suspects the problem is in the decoder I would be happy to provide that code.
My problem arises when I pass a CMSampleBuffer to an AVSampleBufferDisplayLayer. The video frames aren't appearing on screen.
This is the function that is supposed to display the video frames:
func videoFrameReceived(_ frame: CMSampleBuffer) {
    videoLayer.enqueue(frame)
    DispatchQueue.main.async { [weak self] in
        self?.videoLayer.setNeedsDisplay()
    }
}
This is how I initialized my AVSampleBufferDisplayLayer:
if let layer = videoLayer {
    layer.frame = CGRect(x: 0, y: 150, width: 640, height: 480)
    layer.videoGravity = .resizeAspect

    let cmTimebasePointer = UnsafeMutablePointer<CMTimebase?>.allocate(capacity: 1)
    let status = CMTimebaseCreateWithMasterClock(kCFAllocatorDefault, CMClockGetHostTimeClock(), cmTimebasePointer)
    layer.controlTimebase = cmTimebasePointer.pointee

    if let controlTimeBase = layer.controlTimebase, status == noErr {
        CMTimebaseSetTime(controlTimeBase, kCMTimeZero)
        CMTimebaseSetRate(controlTimeBase, 1.0)
    }

    self.layer.addSublayer(layer)
}
Thanks!
The asker eventually figured this out and posted a link to the solution as a comment on the question. That link broke; here is the updated link to the solution.
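In case the link goes stale again: a common cause of an AVSampleBufferDisplayLayer silently showing nothing is that the enqueued buffers carry presentation timestamps the layer's timebase never reaches. One way to rule timing out (a hedged sketch, not necessarily the linked solution) is to mark each buffer for immediate display before enqueueing it:

```swift
import AVFoundation
import CoreMedia

func enqueueForImmediateDisplay(_ frame: CMSampleBuffer, on layer: AVSampleBufferDisplayLayer) {
    // Ask the layer to render this buffer as soon as it arrives, instead of
    // scheduling it against the control timebase's current time.
    if let attachments = CMSampleBufferGetSampleAttachmentsArray(frame, createIfNecessary: true),
       CFArrayGetCount(attachments) > 0 {
        let dict = unsafeBitCast(CFArrayGetValueAtIndex(attachments, 0), to: CFMutableDictionary.self)
        CFDictionarySetValue(dict,
                             Unmanaged.passUnretained(kCMSampleAttachmentKey_DisplayImmediately).toOpaque(),
                             Unmanaged.passUnretained(kCFBooleanTrue).toOpaque())
    }

    if layer.isReadyForMoreMediaData {
        layer.enqueue(frame)
    }
}
```

If frames appear with this change, the timestamp/timebase setup is the likely culprit; checking `layer.status` and `layer.error` after the first enqueue is also worthwhile.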
Related
I'm capturing the output of a playing video using AVPlayerItemVideoOutput.copyPixelBuffer
I'm able to convert the pixel buffer into a CIImage, then render it back into a pixel buffer again, and then an AVAssetWriter writes the buffer stream out to a new movie clip successfully.
The reason I'm converting to CIImage is I want to do some manipulation of each frame. (So far I don't understand how to manipulate pixel buffers directly).
In this case I want to overlay a "scribble" style drawing that the user does with their finger. While the video plays, they can draw over it. I'm capturing this drawing successfully into a CAShapeLayer.
The code below outputs just the overlay CAShapeLayer successfully. When I try to reincorporate the original frame by uncommenting the lines shown, the entire process bogs down drastically and drops from 60fps to an unstable 10fps or so on an iPhone 12. I get stable 60fps in all cases except when I uncomment that code.
What's the best way to incorporate the shape layer into this stream of pixel buffers in 60fps "real time"?
Note: some of this code is not finalized (setting bounds correctly, etc.). However, that is not related to my question, and I'm aware it has to be done. The rotation/translation are there to orient the shape layer; this all works for now.
func addShapesToBuffer(buffer: CVPixelBuffer, shapeLayer: CAShapeLayer) -> CVPixelBuffer? {
    let coreImage = CoreImage.CIImage.init(cvImageBuffer: buffer)
    let newBuffer = getBuffer(from: coreImage)
    CVPixelBufferLockBaseAddress(newBuffer!, [])

    let rect = CGRect(origin: CGPoint.zero, size: CGSize(width: 800, height: 390))

    shapeLayer.shouldRasterize = true
    shapeLayer.rasterizationScale = UIScreen.main.scale
    shapeLayer.backgroundColor = UIColor.clear.cgColor

    let renderer = UIGraphicsImageRenderer(size: rect.size)
    let uiImageDrawing = renderer.image { context in
        // let videoImage = UIImage(ciImage: coreImage)
        // videoImage.draw(in: rect)
        let cgContext = context.cgContext
        cgContext.rotate(by: deg2rad(-90))
        cgContext.translateBy(x: -390, y: 0)
        shapeLayer.render(in: cgContext)
    }

    let ciContext = CIContext()
    let newImage = CIImage(cgImage: uiImageDrawing.cgImage!)
    ciContext.render(newImage, to: newBuffer!)
    CVPixelBufferUnlockBaseAddress(newBuffer!, [])
    return newBuffer
}
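One direction that may avoid the per-frame UIGraphicsImageRenderer round trip (a sketch under the assumption that the drawing changes far less often than 60 times a second): rasterize the shape layer to a CIImage only when the path actually changes, and composite it over each frame's CIImage with Core Image, which keeps the work on the GPU. The `OverlayCompositor` type below is hypothetical, not part of the original code:

```swift
import CoreImage
import UIKit

final class OverlayCompositor {
    private var overlayImage: CIImage?

    /// Re-rasterize the shape layer only when the user's drawing changes.
    func updateOverlay(from shapeLayer: CAShapeLayer, size: CGSize) {
        let renderer = UIGraphicsImageRenderer(size: size)
        let uiImage = renderer.image { context in
            shapeLayer.render(in: context.cgContext)
        }
        if let cgImage = uiImage.cgImage {
            overlayImage = CIImage(cgImage: cgImage)
        }
    }

    /// Composite the cached overlay over a video frame without touching UIKit per frame.
    func composite(over frame: CIImage) -> CIImage {
        guard let overlay = overlayImage else { return frame }
        return overlay.composited(over: frame)
    }
}
```

Per frame, the composited CIImage would then be rendered into the destination pixel buffer with a single, long-lived CIContext; creating a new CIContext inside `addShapesToBuffer` on every frame is itself a significant cost.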
Hey, I have been struggling with this for a couple of days now and can't seem to find any documentation, outside of the standard grid views, on MSStickerView sizes.
I am working on an app that creates MSStickerViews dynamically. It does this by converting a UIView into a UIImage, saving it to disk, and passing the URL to MSSticker before creating the MSStickerView; the frame of this view is then set to the size of the original view.
The problem I have is that when I drag the MSStickerView into the messages window, it shrinks while being dragged and then, when dropped in the messages window, changes to a larger size. I have no idea how to control the size while it is dragged, or the final image size.
Here's my code to create an image from a view:
extension UIView {
    func imageFromView() -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.isOpaque, 0.0)
        defer { UIGraphicsEndImageContext() }
        if let context = UIGraphicsGetCurrentContext() {
            self.layer.render(in: context)
            let image = UIGraphicsGetImageFromCurrentImageContext()
            return image
        }
        return nil
    }
}
And here's the code to save this to disk
extension UIImage {
    func savedPath(name: String) -> URL {
        let paths = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
        let filePath = "\(paths[0])/\(name).png"   // interpolate the name parameter into the path
        let url = URL(fileURLWithPath: filePath)
        // Save image.
        if let data = self.pngData() {
            do {
                try data.write(to: url)
            } catch {
                print("Failed to write sticker image: \(error)")
            }
        }
        return url
    }
}
Finally, here is the code that converts the file path into a sticker:
if let stickerImage = backgroundBox.imageFromView() {
    let url = stickerImage.savedPath(name: textBox.text ?? "StickerMCSticker")
    if let msSticker = try? MSSticker(contentsOfFileURL: url, localizedDescription: "") {
        var newFrame = self.backgroundBox.frame
        newFrame.size.width = newFrame.size.width
        newFrame.size.height = newFrame.size.height
        let stickerView = MSStickerView(frame: newFrame, sticker: msSticker)
        self.view.addSubview(stickerView)
        print("** sticker frame \(stickerView.frame)")
        self.sticker = stickerView
    }
}
I wondered at first whether there was something I needed to do regarding retina sizes, but adding @2x to the filename just breaks the image, so I'm stuck on this. The WWDC sessions seem to show stickers being created from file paths without changing size in the transition between drag and drop. Any help would be appreciated!
I eventually fixed this issue by taking the frame of the view I was copying and then calling sizeToFit():
init(sticker: MSSticker, size: CGSize) {
    let stickerFrame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    self.sticker = MSStickerView(frame: stickerFrame, sticker: sticker)
    self.sticker.sizeToFit()
    super.init(nibName: nil, bundle: nil)
}
The sticker view was not setting the correct size on its own. Essentially, the sticker size in my view did not match the size of the MSSticker, so the moment the drag started, the real sticker size took effect (which was different from the frame size / Auto Layout I was applying in my view).
I've been trying to wrap my head around this problem with no luck. I have a very simple Swift command-line application which takes one argument, the path of the image to load. It crops the image and filters that image fragment with the SepiaTone filter.
It works just fine. It crops the image to 200x200 and filters it with SepiaTone. Now here's the problem that I'm facing: the whole process takes 600ms on my MacBook Air. When I RESIZE (instead of cropping) the input image to the same dimensions (200x200), it takes 150ms.
Why is that? In both cases I'm filtering an image which is 200x200 in size. I'm using this particular image for testing (5966x3978).
UPDATE:
It's this particular line of code that takes 4x longer when dealing with the cropped image:
var ciImage:CIImage = CIImage(cgImage: cgImage)
END OF UPDATE
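For anyone reproducing the measurement, a minimal way to time that single line (a sketch, not the original test harness) is to bracket it with wall-clock timestamps:

```swift
import CoreGraphics
import CoreImage
import Foundation

/// Returns how long wrapping a CGImage in a CIImage takes, in milliseconds.
func millisecondsToWrap(_ cgImage: CGImage) -> Double {
    let start = CFAbsoluteTimeGetCurrent()
    _ = CIImage(cgImage: cgImage)                 // the line under test
    return (CFAbsoluteTimeGetCurrent() - start) * 1000
}
```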
Code for cropping (200x200):
// parse args and get image path
let args = CommandLine.arguments
let inputFile = args[Int(CommandLine.argc) - 1]
let inputURL = URL(fileURLWithPath: inputFile)

// load the image from path into NSImage
// and convert NSImage into CGImage
guard
    let nsImage = NSImage(contentsOf: inputURL),
    var cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil)
else {
    exit(EXIT_FAILURE)
}

// CROP THE IMAGE TO 200x200
// THIS IS THE ONLY BLOCK OF CODE THAT IS DIFFERENT
// IN THOSE TWO EXAMPLES
let rect = CGRect(x: 0, y: 0, width: 200, height: 200)
if let croppedImage = cgImage.cropping(to: rect) {
    cgImage = croppedImage
} else {
    exit(EXIT_FAILURE)
}
// END CROPPING

// convert CGImage to CIImage
var ciImage:CIImage = CIImage(cgImage: cgImage)

// initiate SepiaTone
guard let sepiaFilter = CIFilter(name: "CISepiaTone") else {
    exit(EXIT_FAILURE)
}
sepiaFilter.setValue(ciImage, forKey: kCIInputImageKey)
sepiaFilter.setValue(0.5, forKey: kCIInputIntensityKey)

guard let result = sepiaFilter.outputImage else {
    exit(EXIT_FAILURE)
}

let context = CIContext()
// perform filtering in a GPU context
guard let output = context.createCGImage(result, from: ciImage.extent) else {
    exit(EXIT_FAILURE)
}
Code for resizing (200x200):
// parse args and get image path
let args = CommandLine.arguments
let inputFile = args[Int(CommandLine.argc) - 1]
let inputURL = URL(fileURLWithPath: inputFile)

// load the image from path into NSImage
// and convert NSImage into CGImage
guard
    let nsImage = NSImage(contentsOf: inputURL),
    var cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil)
else {
    exit(EXIT_FAILURE)
}

// RESIZE THE IMAGE TO 200x200
// THIS IS THE ONLY BLOCK OF CODE THAT IS DIFFERENT
// IN THOSE TWO EXAMPLES
guard let CGcontext = CGContext(data: nil,
                                width: 200,
                                height: 200,
                                bitsPerComponent: cgImage.bitsPerComponent,
                                bytesPerRow: cgImage.bytesPerRow,
                                space: cgImage.colorSpace ?? CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: cgImage.bitmapInfo.rawValue)
else {
    exit(EXIT_FAILURE)
}
CGcontext.draw(cgImage, in: CGRect(x: 0, y: 0, width: 200, height: 200))
if let resizeOutput = CGcontext.makeImage() {
    cgImage = resizeOutput
}
// END RESIZING

// convert CGImage to CIImage
var ciImage:CIImage = CIImage(cgImage: cgImage)

// initiate SepiaTone
guard let sepiaFilter = CIFilter(name: "CISepiaTone") else {
    exit(EXIT_FAILURE)
}
sepiaFilter.setValue(ciImage, forKey: kCIInputImageKey)
sepiaFilter.setValue(0.5, forKey: kCIInputIntensityKey)

guard let result = sepiaFilter.outputImage else {
    exit(EXIT_FAILURE)
}

let context = CIContext()
// perform filtering in a GPU context
guard let output = context.createCGImage(result, from: ciImage.extent) else {
    exit(EXIT_FAILURE)
}
It's very likely that the cgImage lives in video memory, and when you scale the image the hardware actually writes the result to a new area of memory. When you crop the cgImage, the documentation implies that the result just references the original image. The line
var ciImage:CIImage = CIImage(cgImage: cgImage)
must be triggering a read (maybe into main memory?), and in the case of your scaled image it can probably just read the whole buffer contiguously. In the case of the cropped image it may be reading it line by line, which could account for the difference, but that's just me guessing.
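If that guess is right, forcing a copy of the cropped pixels into a fresh bitmap before wrapping them in a CIImage should bring the slow case back in line with the fast one. A quick way to test the hypothesis (a sketch, using the same kind of bitmap context as the resize path):

```swift
import CoreGraphics

/// Redraws a (possibly cropped, still parent-referencing) CGImage into a
/// brand-new bitmap so the result owns a contiguous copy of its pixels.
func deepCopy(_ image: CGImage) -> CGImage? {
    guard let context = CGContext(data: nil,
                                  width: image.width,
                                  height: image.height,
                                  bitsPerComponent: image.bitsPerComponent,
                                  bytesPerRow: 0, // let Core Graphics pick the row stride
                                  space: image.colorSpace ?? CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: image.bitmapInfo.rawValue)
    else { return nil }
    context.draw(image, in: CGRect(x: 0, y: 0, width: image.width, height: image.height))
    return context.makeImage()
}
```

Timing `CIImage(cgImage:)` on `deepCopy(croppedImage)` versus the cropped image directly would show whether the deferred read is really where the 4x goes.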
It looks like you are doing two very different things. In the "slow" version you are cropping (as in taking a small CGRect of the original image) and in the "fast" version you are resizing (as in reducing the original down to a CGRect).
You can prove this by adding two UIImageViews and adding these lines after each declaration of ciImage:
slowImage.image = UIImage(ciImage: ciImage)
fastImage.image = UIImage(ciImage: ciImage)
Here are two simulator screenshots, with the "slow" image above the "fast" image. The first is with your code where the "slow" CGRect origin is (0,0) and the second is with it adjusted to (2000,2000):
Origin is (0,0)
Origin is (2000,2000)
Knowing this, I can think of a few things that would affect the timing.
I'm including a link to Apple's documentation on the cropping function. It explains that it is doing some CGRect calculations behind the scenes but it doesn't explain how it pulls the pixel bits out of the full-sized CG image - I think that's where the real slow down is.
In the end though, it looks like the timing is due to doing two entirely different things.
CGImage.cropping(to:)
The app I am using for testing purposes can take a picture and save it as a PNG file. The next time the app is launched, it checks whether the file is present; if it is, the image stored in the file is used as the background view of the app. Up to this point all is OK.
I decided to add a clipping mask to this app, and this is where things go wrong.
The clipping itself works, but for some mysterious reason the clipped image gets expanded. If someone could tell me what I am doing wrong that would be very helpful.
Here is the relevant code (I can provide more information if ever needed):
if let videoConnection = stillImageOutput.connectionWithMediaType(AVMediaTypeVideo) {
    stillImageOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
        (imageDataSampleBuffer, error) -> Void in
        if error == nil {
            var localImage = UIImage(fromSampleBuffer: imageDataSampleBuffer)
            var imageSize = CGSize(width: UIScreen.mainScreen().bounds.height * UIScreen.mainScreen().scale,
                                   height: UIScreen.mainScreen().bounds.width * UIScreen.mainScreen().scale)
            localImage = resizeImage(localImage!, toSize: imageSize)
            imageSize = CGSize(width: imageSize.height, height: imageSize.width)

            UIGraphicsBeginImageContext(imageSize)
            CGContextRotateCTM(UIGraphicsGetCurrentContext(), CGFloat(M_PI_2))
            localImage!.drawAtPoint(CGPoint(x: 0.0, y: -imageSize.width))
            localImage = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()

            // Clipping code:
            localImage = maskImage(localImage!,
                                   path: UIBezierPath(CGPath: CGPathCreateWithEllipseInRect(UIScreen.mainScreen().bounds, nil)))

            if let data = UIImagePNGRepresentation(localImage!) {
                data.writeToFile(self.bmpFilePath, atomically: true)
            }
        } else {
            print("Error on taking a picture:\n\(error)")
        }
    }
}
The maskImage function is this (taken from iOS UIImage clip to paths and translated to Swift):
func maskImage(originalImage: UIImage, path: UIBezierPath) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(originalImage.size, false, 0)
    path.addClip()
    originalImage.drawAtPoint(CGPointZero)
    let maskedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return maskedImage
}
When the lines:
localImage = maskImage(localImage!,
path: UIBezierPath(CGPath: CGPathCreateWithEllipseInRect(UIScreen.mainScreen().bounds, nil)))
are commented out, I see what I expect.
Taking the picture below, I get it as the background when relaunching the app.
But when they are present (not commented out), I get the background shown below when relaunching the app (taking, of course, the same picture at start):
If things worked as they should, the mouse would appear inside the elliptical clip at the same size as in the first picture (not magnified as it is now).
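One thing that may explain the magnification (an assumption on my part, not a confirmed diagnosis): the clip path is built from UIScreen.mainScreen().bounds, which is in points, while the rotated localImage is screen-scale times larger, so the ellipse covers only a portion of the picture and the clipped result looks blown up. A sketch of building the ellipse in the image's own coordinate space instead (the function name is hypothetical):

```swift
// Hypothetical variant of maskImage that builds the clip path from the
// image's own rect rather than from the screen bounds in points.
func maskImageToEllipse(originalImage: UIImage) -> UIImage {
    let imageRect = CGRect(origin: CGPointZero, size: originalImage.size)
    UIGraphicsBeginImageContextWithOptions(originalImage.size, false, originalImage.scale)
    UIBezierPath(ovalInRect: imageRect).addClip()
    originalImage.drawAtPoint(CGPointZero)
    let maskedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return maskedImage
}
```

If that is indeed the cause, passing a path built from the image rect (or scaling the screen-bounds ellipse by UIScreen.mainScreen().scale) into the existing maskImage function should give an ellipse of the expected size.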
I'm trying to autoplay a video in my app. The video needs to play without the controls.
I've set up the video and the settings, including MPMovieControlStyle.None, but the video controls appear for about 2 seconds before disappearing. I have no idea why.
I've used this exact code in another project and it works well, but here, for some reason, it does not.
override func viewDidLoad() {
    super.viewDidLoad()
    generateVideo()
}

func generateVideo() {
    let movieURL = NSBundle.mainBundle().pathForResource("video", ofType: "mp4")
    let videoFilePath = NSURL(fileURLWithPath: movieURL!)

    self.view.addSubview(MoviePlayerViewController.moviePlayer.view)
    self.view.sendSubviewToBack(MoviePlayerViewController.moviePlayer.view)

    MoviePlayerViewController.moviePlayer.contentURL = videoFilePath
    MoviePlayerViewController.moviePlayer.shouldAutoplay = true
    MoviePlayerViewController.moviePlayer.prepareToPlay()
    MoviePlayerViewController.moviePlayer.view.frame = CGRect(x: 0, y: 0, width: self.view.frame.size.width, height: self.view.frame.size.height)
    MoviePlayerViewController.moviePlayer.fullscreen = true
    MoviePlayerViewController.moviePlayer.controlStyle = MPMovieControlStyle.None
    MoviePlayerViewController.moviePlayer.play()
    MoviePlayerViewController.moviePlayer.repeatMode = MPMovieRepeatMode.One
    MoviePlayerViewController.moviePlayer.scalingMode = MPMovieScalingMode.AspectFit
}
Any idea why this is happening?
After checking how the view is loaded, I'm sure the problem is there.
I was using a different layout, with one storyboard calling another storyboard, and that was the source of the problem.