How to programmatically export any MTLTexture to KTX in Swift

Detail
We can export the raw texture data, including other layers or slices, in KTX format after capturing a frame with the Xcode frame capture.
After that, we can import it using MTKTextureLoader:
let texture = try! await loader.newTexture(URL: Bundle.main.url(forResource: "texture", withExtension: "ktx")!, options: nil)
Goal
But how can we programmatically export an MTLTexture to KTX format in Swift?

You can check whether ImageIO supports the format you want to export to:
Call CGImageDestinationCopyTypeIdentifiers() from the ImageIO module to get a list of all the UTIs that CGImageDestination supports.
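In Swift, that check might look like this (a minimal sketch that just prints the list):
import ImageIO
let supportedTypes = CGImageDestinationCopyTypeIdentifiers() as? [String] ?? []
print(supportedTypes)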
For example, these are the types ImageIO supports on my machine:
(
"public.jpeg",
"public.png",
"com.compuserve.gif",
"public.tiff",
"public.jpeg-2000",
"com.apple.atx",
"org.khronos.ktx",
"org.khronos.ktx2",
"org.khronos.astc",
"com.microsoft.dds",
"public.heic",
"public.heics",
"com.microsoft.ico",
"com.microsoft.bmp",
"com.apple.icns",
"com.adobe.photoshop-image",
"com.adobe.pdf",
"com.truevision.tga-image",
"com.ilm.openexr-image",
"public.pbm",
"public.pvr"
)
Then, render the texture to a CGImage and write it out with CGImageDestination:
let texture: MTLTexture = ... // input texture
let ciContext = CIContext(mtlDevice: texture.device)
let ciImage = CIImage(mtlTexture: texture, options: nil)! // you might need to add `.oriented(.downMirrored)` here
let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)
let cgImage = ciContext.createCGImage(ciImage,
                                      from: CGRect(origin: .zero, size: CGSize(width: texture.width, height: texture.height)),
                                      format: .RGBA8,
                                      colorSpace: colorSpace)!
let uniformType: UTType = ... // uniform type from CGImageDestinationCopyTypeIdentifiers
let outputURL: URL = ... // the url
try FileManager.default.createDirectory(at: outputURL.deletingLastPathComponent(), withIntermediateDirectories: true) // you might need this to create intermediate directories
let destination = CGImageDestinationCreateWithURL(outputURL as CFURL, uniformType.identifier as CFString, 1, nil)!
CGImageDestinationAddImage(destination, cgImage, nil)
let imageOutputSuccessfully = CGImageDestinationFinalize(destination)
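For KTX specifically, org.khronos.ktx appears in the list above; assuming the UniformTypeIdentifiers framework is available, you could resolve the type like this (force-unwrap for brevity):
import UniformTypeIdentifiers
let uniformType = UTType("org.khronos.ktx")! // or UTType("org.khronos.ktx2")!
Note that this path, as sketched, writes a single 2D image; exporting additional layers or slices would likely need a KTX-specific tool rather than a one-image CGImageDestination.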

Related

Swift 4: PHAsset to Image reduces quality

I'm using a framework called OpalImagePicker, which allows me to pick several images instead of one. It returns an array of PHAssets.
I want to get these PHAssets, turn them into images, then convert them to base64 strings so that I can send them to my database.
But there's a problem: the images have really low quality when I try to get them from the PHAsset array.
Here's my code:
let requestOptions = PHImageRequestOptions()
requestOptions.version = .current
requestOptions.deliveryMode = .opportunistic
requestOptions.resizeMode = .exact
requestOptions.isNetworkAccessAllowed = true
let imagePicker = OpalImagePickerController()
imagePicker.maximumSelectionsAllowed = 4
imagePicker.allowedMediaTypes = Set([PHAssetMediaType.image])
self.presentOpalImagePickerController(imagePicker, animated: true,
    select: { (assets) in
        for a in assets {
            // print(a)
            // self.img.append(a.image)
            self.img.append(a.imagehd(targetSize: CGSize(width: a.pixelWidth, height: a.pixelHeight),
                                      contentMode: PHImageContentMode.aspectFill,
                                      options: requestOptions))
and the function:
func imagehd(targetSize: CGSize, contentMode: PHImageContentMode, options: PHImageRequestOptions?) -> UIImage {
    var thumbnail = UIImage()
    let imageManager = PHCachingImageManager()
    imageManager.requestImage(for: self, targetSize: targetSize, contentMode: contentMode, options: options, resultHandler: { image, _ in
        thumbnail = image!
    })
    return thumbnail
}
I tried giving requestOptions.version the .original value, and even setting deliveryMode to high quality, but then it just gives me nothing (the image is nil).
I'm really lost. Can someone help?
Thanks a lot.
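The usual culprit in code like this (my guess; no accepted answer appears in this excerpt): requestImage is asynchronous, and with .opportunistic delivery the handler fires first with a degraded image, so imagehd returns before the high-quality result arrives. A minimal sketch of one common fix, assuming imagehd is an extension on PHAsset as the call site suggests:
extension PHAsset {
    func imagehd(targetSize: CGSize, contentMode: PHImageContentMode, options: PHImageRequestOptions?) -> UIImage {
        var result = UIImage()
        // Make the request synchronous so the handler runs exactly once,
        // with the final image, before this function returns.
        // Note: this blocks the calling thread, so avoid the main thread for large images.
        options?.isSynchronous = true
        options?.deliveryMode = .highQualityFormat
        PHImageManager.default().requestImage(for: self,
                                              targetSize: targetSize,
                                              contentMode: contentMode,
                                              options: options) { image, _ in
            if let image = image { result = image }
        }
        return result
    }
}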

CIImage says `Cannot render image (with an input Metal texture) using a metal context.`

I am attempting to convert an MTLTexture to a CGImage by adding an extension function to MTLTexture objects inside my macOS playground.
The current function looks like this:
extension MTLTexture {
    func toImage() -> CGImage? {
        let context = CIContext()
        let texture = self
        let cImg = CIImage(mtlTexture: texture, options: nil)!
        let cgImg = context.createCGImage(cImg, from: cImg.extent)!
        return cgImg
    }
}
However, the line that is supposed to initialize cImg causes the console to print Cannot render image (with an input Metal texture) using a metal context. and returns nil.
Why is this happening and how can I fix it?
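One workaround, not from the original thread: this error often indicates Core Image cannot sample the texture (for example, a framebuffer-only drawable, or a texture created without shaderRead usage). Here is a sketch that sidesteps Core Image entirely and reads the pixels back on the CPU, assuming a .bgra8Unorm texture that is not framebufferOnly:
import Metal
import CoreGraphics

extension MTLTexture {
    func toImageViaCPU() -> CGImage? {
        let bytesPerRow = width * 4
        var bytes = [UInt8](repeating: 0, count: bytesPerRow * height)
        // Copy the texture contents into CPU memory
        getBytes(&bytes, bytesPerRow: bytesPerRow,
                 from: MTLRegionMake2D(0, 0, width, height), mipmapLevel: 0)
        // BGRA8 corresponds to "alpha first + 32-bit little endian" in CGImage terms
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue)
        return bytes.withUnsafeMutableBytes { ptr in
            CGContext(data: ptr.baseAddress, width: width, height: height,
                      bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                      space: CGColorSpaceCreateDeviceRGB(),
                      bitmapInfo: bitmapInfo.rawValue)?.makeImage()
        }
    }
}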

CMSampleBuffer frame converted to vImage has wrong colors

I’m trying to convert a CMSampleBuffer from the camera output to vImage and later apply some processing. Unfortunately, even without any further editing, the frame I get from the buffer has wrong colors:
Implementation (memory management and error handling are not considered in this question):
Configuring the video output device:
videoDataOutput = AVCaptureVideoDataOutput()
videoDataOutput.videoSettings = [String(kCVPixelBufferPixelFormatTypeKey): kCVPixelFormatType_32BGRA]
videoDataOutput.alwaysDiscardsLateVideoFrames = true
videoDataOutput.setSampleBufferDelegate(self, queue: captureQueue)
videoConnection = videoDataOutput.connection(withMediaType: AVMediaTypeVideo)
captureSession.sessionPreset = AVCaptureSessionPreset1280x720
let videoDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
guard let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice) else {
    return
}
Creating vImage from the CMSampleBuffer received from the camera:
// Convert `CMSampleBuffer` to `CVImageBuffer`
guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
var buffer: vImage_Buffer = vImage_Buffer()
buffer.data = CVPixelBufferGetBaseAddress(pixelBuffer)
buffer.rowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer)
buffer.width = vImagePixelCount(CVPixelBufferGetWidth(pixelBuffer))
buffer.height = vImagePixelCount(CVPixelBufferGetHeight(pixelBuffer))
let vformat = vImageCVImageFormat_CreateWithCVPixelBuffer(pixelBuffer)
let bitmapInfo: CGBitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue | CGBitmapInfo.byteOrder32Little.rawValue)
var cgFormat = vImage_CGImageFormat(bitsPerComponent: 8,
                                    bitsPerPixel: 32,
                                    colorSpace: nil,
                                    bitmapInfo: bitmapInfo,
                                    version: 0,
                                    decode: nil,
                                    renderingIntent: .defaultIntent)
// Create vImage
vImageBuffer_InitWithCVPixelBuffer(&buffer, &cgFormat, pixelBuffer, vformat!.takeRetainedValue(), cgColor, vImage_Flags(kvImageNoFlags))
Converting the buffer to UIImage:
For the sake of testing, the CVPixelBuffer is exported to a UIImage, but adding it to the video buffer has the same result.
var dstPixelBuffer: CVPixelBuffer?
let status = CVPixelBufferCreateWithBytes(nil, Int(buffer.width), Int(buffer.height),
                                          kCVPixelFormatType_32BGRA, buffer.data,
                                          Int(buffer.rowBytes), releaseCallback,
                                          nil, nil, &dstPixelBuffer)
let destCGImage = vImageCreateCGImageFromBuffer(&buffer, &cgFormat, nil, nil, numericCast(kvImageNoFlags), nil)?.takeRetainedValue()
// create a UIImage
let exportedImage = destCGImage.flatMap { UIImage(cgImage: $0, scale: 0.0, orientation: UIImageOrientation.right) }
DispatchQueue.main.async {
    self.previewView.image = exportedImage
}
Try setting the color space on your CV image format:
let vformat = vImageCVImageFormat_CreateWithCVPixelBuffer(pixelBuffer).takeRetainedValue()
vImageCVImageFormat_SetColorSpace(vformat, CGColorSpaceCreateDeviceRGB())
...and update your call to vImageBuffer_InitWithCVPixelBuffer to reflect the fact that vformat is now a managed reference:
let error = vImageBuffer_InitWithCVPixelBuffer(&buffer, &cgFormat, pixelBuffer, vformat, nil, vImage_Flags(kvImageNoFlags))
Finally, you can remove the following lines; vImageBuffer_InitWithCVPixelBuffer is doing that work for you:
// buffer.data = CVPixelBufferGetBaseAddress(pixelBuffer)
// buffer.rowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer)
// buffer.width = vImagePixelCount(CVPixelBufferGetWidth(pixelBuffer))
// buffer.height = vImagePixelCount(CVPixelBufferGetHeight(pixelBuffer))
Note that you don't need to lock the Core Video pixel buffer; if you check the header doc, it says "It is not necessary to lock the CVPixelBuffer before calling this function".
The call to vImageBuffer_InitWithCVPixelBuffer is modifying your vImage_Buffer and the CVPixelBuffer's contents, which is a bit naughty because in your (linked) code you promise not to modify the pixels when you say
CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
The correct way to initialise the CGBitmapInfo for BGRA8888 is alpha-first, 32-bit little-endian, which is non-obvious but covered in the header file for vImage_CGImageFormat in vImage_Utilities.h:
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.first.rawValue | CGImageByteOrderInfo.order32Little.rawValue)
What I don't get is why vImageBuffer_InitWithCVPixelBuffer is modifying your buffer, as cgFormat (desiredFormat) should match vformat, although it is documented to modify the buffer, so maybe you should copy the data first.
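Putting those corrections together, here is a minimal consolidated sketch (my assembly of the fixes above, assuming a kCVPixelFormatType_32BGRA pixel buffer; note that vImageBuffer_InitWithCVPixelBuffer allocates buffer.data, so the caller must eventually free it):
import Accelerate
import CoreGraphics
import CoreVideo

func makeBGRABuffer(from pixelBuffer: CVPixelBuffer) -> vImage_Buffer? {
    // Describe the pixel buffer and pin its color space to device RGB
    let vformat = vImageCVImageFormat_CreateWithCVPixelBuffer(pixelBuffer).takeRetainedValue()
    vImageCVImageFormat_SetColorSpace(vformat, CGColorSpaceCreateDeviceRGB())
    // BGRA8888 is alpha-first, 32-bit little-endian in CGImage terms
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.first.rawValue | CGImageByteOrderInfo.order32Little.rawValue)
    var cgFormat = vImage_CGImageFormat(bitsPerComponent: 8,
                                        bitsPerPixel: 32,
                                        colorSpace: nil,
                                        bitmapInfo: bitmapInfo,
                                        version: 0,
                                        decode: nil,
                                        renderingIntent: .defaultIntent)
    // vImage fills in data/rowBytes/width/height itself
    var buffer = vImage_Buffer()
    let error = vImageBuffer_InitWithCVPixelBuffer(&buffer, &cgFormat, pixelBuffer,
                                                   vformat, nil, vImage_Flags(kvImageNoFlags))
    return error == kvImageNoError ? buffer : nil
}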

Why filtering a cropped image is 4x slower than filtering resized image (both have the same dimensions)

I've been trying to wrap my head around this problem with no luck. I have a very simple Swift command-line application which takes one argument - the path of an image to load. It crops the image and filters that image fragment with the SepiaTone filter.
It works just fine. It crops the image to 200x200 and filters it with SepiaTone. Now here's the problem that I'm facing - the whole process takes 600ms on my MacBook Air. When I RESIZE (instead of cropping) the input image to the same dimensions (200x200), it takes 150ms.
Why is that? In both cases I'm filtering an image which is 200x200 in size. I'm using this particular image for testing (5966x3978).
UPDATE:
It's this particular line of code that takes 4x longer when dealing with cropped image:
var ciImage:CIImage = CIImage(cgImage: cgImage)
END OF UPDATE
Code for cropping (200x200):
// parse args and get image path
let args: [String] = CommandLine.arguments
let inputFile: String = args[args.count - 1] // CommandLine.argc is Int32, so index with args.count instead
let inputURL: URL = URL(fileURLWithPath: inputFile)
// load the image from path into NSImage
// and convert NSImage into CGImage
guard
    let nsImage = NSImage(contentsOf: inputURL),
    var cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil)
else {
    exit(EXIT_FAILURE)
}
// CROP THE IMAGE TO 200x200
// THIS IS THE ONLY BLOCK OF CODE THAT IS DIFFERENT
// IN THOSE TWO EXAMPLES
let rect = CGRect(x: 0, y: 0, width: 200, height: 200)
if let croppedImage = cgImage.cropping(to: rect) {
    cgImage = croppedImage
} else {
    exit(EXIT_FAILURE)
}
// END CROPPING
// convert CGImage to CIImage
var ciImage: CIImage = CIImage(cgImage: cgImage)
// initiate SepiaTone
guard
    let sepiaFilter = CIFilter(name: "CISepiaTone")
else {
    exit(EXIT_FAILURE)
}
sepiaFilter.setValue(ciImage, forKey: kCIInputImageKey)
sepiaFilter.setValue(0.5, forKey: kCIInputIntensityKey)
guard
    let result = sepiaFilter.outputImage
else {
    exit(EXIT_FAILURE)
}
let context: CIContext = CIContext()
// perform filtering in a GPU context
guard
    let output = context.createCGImage(result, from: ciImage.extent)
else {
    exit(EXIT_FAILURE)
}
Code for resizing (200x200):
// parse args and get image path
let args: [String] = CommandLine.arguments
let inputFile: String = args[args.count - 1] // CommandLine.argc is Int32, so index with args.count instead
let inputURL: URL = URL(fileURLWithPath: inputFile)
// load the image from path into NSImage
// and convert NSImage into CGImage
guard
    let nsImage = NSImage(contentsOf: inputURL),
    var cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil)
else {
    exit(EXIT_FAILURE)
}
// RESIZE THE IMAGE TO 200x200
// THIS IS THE ONLY BLOCK OF CODE THAT IS DIFFERENT
// IN THOSE TWO EXAMPLES
guard let CGcontext = CGContext(data: nil,
                                width: 200,
                                height: 200,
                                bitsPerComponent: cgImage.bitsPerComponent,
                                bytesPerRow: cgImage.bytesPerRow,
                                space: cgImage.colorSpace ?? CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: cgImage.bitmapInfo.rawValue)
else {
    exit(EXIT_FAILURE)
}
CGcontext.draw(cgImage, in: CGRect(x: 0, y: 0, width: 200, height: 200))
if let resizeOutput = CGcontext.makeImage() {
    cgImage = resizeOutput
}
// END RESIZING
// convert CGImage to CIImage
var ciImage: CIImage = CIImage(cgImage: cgImage)
// initiate SepiaTone
guard
    let sepiaFilter = CIFilter(name: "CISepiaTone")
else {
    exit(EXIT_FAILURE)
}
sepiaFilter.setValue(ciImage, forKey: kCIInputImageKey)
sepiaFilter.setValue(0.5, forKey: kCIInputIntensityKey)
guard
    let result = sepiaFilter.outputImage
else {
    exit(EXIT_FAILURE)
}
let context: CIContext = CIContext()
// perform filtering in a GPU context
guard
    let output = context.createCGImage(result, from: ciImage.extent)
else {
    exit(EXIT_FAILURE)
}
It's very likely that the cgImage lives in video memory, and when you scale the image it actually uses the hardware to write the image to a new area of memory. When you crop the cgImage, the documentation implies that it is just referencing the original image. The line
var ciImage:CIImage = CIImage(cgImage: cgImage)
must be triggering a read (maybe into main memory?), and in the case of your scaled image it can probably just read the whole buffer contiguously. In the case of the cropped image, it may be reading it line by line, and this could account for the difference, but that's just me guessing.
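One way to test that guess (a sketch of mine, not from the original answers): redraw the cropped CGImage into a fresh bitmap context before wrapping it in a CIImage, so the copy no longer references the full-size parent's backing store.
func compacted(_ image: CGImage) -> CGImage? {
    // Draw into a new context so the result owns a tight, contiguous buffer
    guard let ctx = CGContext(data: nil,
                              width: image.width, height: image.height,
                              bitsPerComponent: image.bitsPerComponent,
                              bytesPerRow: 0, // let Core Graphics choose the stride
                              space: image.colorSpace ?? CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: image.bitmapInfo.rawValue)
    else { return nil }
    ctx.draw(image, in: CGRect(x: 0, y: 0, width: image.width, height: image.height))
    return ctx.makeImage()
}
If CIImage(cgImage:) is as fast on the compacted crop as on the resized image, the extra time is in reading pixels out of the shared full-size backing store.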
It looks like you are doing two very different things. In the "slow" version you are cropping (as in taking a small CGRect of the original image) and in the "fast" version you are resizing (as in reducing the original down to a CGRect).
You can prove this by adding two UIImageViews and adding these lines after each declaration of ciImage:
slowImage.image = UIImage(ciImage: ciImage)
fastImage.image = UIImage(ciImage: ciImage)
Here are two simulator screenshots, with the "slow" image above the "fast" image. The first is with your code where the "slow" CGRect origin is (0,0) and the second is with it adjusted to (2000,2000):
Origin is (0,0)
Origin is (2000,2000)
Knowing this, I can come up with a few things happening on the timing.
I'm including a link to Apple's documentation on the cropping function. It explains that it is doing some CGRect calculations behind the scenes, but it doesn't explain how it pulls the pixel bits out of the full-sized CG image - I think that's where the real slowdown is.
In the end though, it looks like the timing is due to doing two entirely different things.
CGImage.cropping(to:)

Making a GIF from images using Swift (macOS)

I was wondering if there is a way to convert an array of NSImages into an animated GIF using Swift on macOS?
I should be able to export it to a file afterwards, so an animation of images displayed in my app would not be enough.
Thanks!
Image I/O has the functionality you need. Try this:
var images: [NSImage] = ... // init your array of NSImage
let destinationURL = URL(fileURLWithPath: "/path/to/image.gif")
let destinationGIF = CGImageDestinationCreateWithURL(destinationURL as CFURL, kUTTypeGIF, images.count, nil)!
// The final size of your GIF. This is an optional parameter
var rect = NSMakeRect(0, 0, 350, 250)
// This dictionary controls the delay between frames
// If you don't specify this, CGImage will apply a default delay
let properties = [
    (kCGImagePropertyGIFDictionary as String): [(kCGImagePropertyGIFDelayTime as String): 1.0/16.0]
]
for img in images {
    // Convert an NSImage to CGImage, fitting within the specified rect
    // You can replace `&rect` with nil
    let cgImage = img.cgImage(forProposedRect: &rect, context: nil, hints: nil)!
    // Add the frame to the GIF image
    // You can replace `properties` with nil
    CGImageDestinationAddImage(destinationGIF, cgImage, properties as CFDictionary)
}
// Write the GIF file to disk
CGImageDestinationFinalize(destinationGIF)
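A side note of mine: kUTTypeGIF is deprecated on macOS 11 and later; assuming the UniformTypeIdentifiers framework is available, the same call can take UTType.gif instead:
import UniformTypeIdentifiers
let destinationGIF = CGImageDestinationCreateWithURL(destinationURL as CFURL, UTType.gif.identifier as CFString, images.count, nil)!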