NSImage to Base64 string loses quality - swift

I'm trying to convert an NSImage from an NSImageView to a Base64 string but end up losing half the quality when decoding the output.
The code to convert to Base64 seems straightforward enough; I've put it into an NSImage extension:
extension NSImage {
    func base64String() -> String? {
        guard
            let bits = self.representations.first as? NSBitmapImageRep,
            let data = bits.representation(using: .JPEG, properties: [:])
        else {
            return nil
        }
        return "data:image/jpeg;base64,\(data.base64EncodedString())"
    }
}
Trying that with a test JPG image that's 39KB, the decoded output comes back at 20KB. I've tried converting the same image using online tools and get a perfect encode -> decode.
Other code I've tried:
func base64String() -> String? {
    let cgImgRef = self.cgImage(forProposedRect: nil, context: nil, hints: nil)
    let bmpImgRef = NSBitmapImageRep(cgImage: cgImgRef!)
    let data = bmpImgRef.representation(using: NSBitmapImageFileType.JPEG, properties: [:])!
    return "data:image/jpeg;base64,\(data.base64EncodedString())"
}
Which results in a 17KB file.
Any help would be very much appreciated as I've racked my brain with this for hours.

You have not specified a compression factor, so the default compression is applied. For maximum quality (a compression factor of 1.0), use the code below:
let data = bits.representation(using: .JPEG, properties: [NSImageCompressionFactor:1.0])

According to the documentation, NSBitmapImageRep renders the image, and your code then re-encodes it as JPEG. As JPEG is a lossy algorithm, this will result in loss of quality. You can try to:
use PNG as the representation
use a high (or 1.0) compression factor for JPEG.
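Putting both suggestions together, here is a minimal sketch of such an extension, assuming Swift 4+ naming (NSBitmapImageRep.FileType and the .compressionFactor property key):

import AppKit

extension NSImage {
    // JPEG data URI with an explicit compression factor (1.0 = best quality).
    func jpegBase64String(compressionFactor: CGFloat = 1.0) -> String? {
        guard
            let bits = self.representations.first as? NSBitmapImageRep,
            let data = bits.representation(using: .jpeg, properties: [.compressionFactor: compressionFactor])
        else { return nil }
        return "data:image/jpeg;base64,\(data.base64EncodedString())"
    }

    // PNG data URI (lossless, so nothing is thrown away on re-encode).
    func pngBase64String() -> String? {
        guard
            let bits = self.representations.first as? NSBitmapImageRep,
            let data = bits.representation(using: .png, properties: [:])
        else { return nil }
        return "data:image/png;base64,\(data.base64EncodedString())"
    }
}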

Related

Base64 Svg to UIImage return nil

I'm trying to convert Base64 Svg to UIImage, but I get nil.
Everything is OK on this site https://codebeautify.org/base64-to-image-converter
I've tried different frameworks and native methods, but all to no avail.
I use:
let dataDecoded : Data = Data(base64Encoded: base64Str, options: .ignoreUnknownCharacters)!
polygonImage.image = UIImage(data: dataDecoded)!
2)
if let url = URL(string: base64StrUrl) {
    if let data = try? Data(contentsOf: url) {
        polygonImage.image = UIImage(data: data)
    }
}
let anSVGImage: SVGKImage = SVGKImage(data: data)
self.polygonImage.image = anSVGImage
Problematic base64:
data:image/svg+xml;base64,PHN2ZyB4bWxucz0naHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmcnIGhlaWdodD0nMTAwJyB3aWR0aD0nMTAwJyB2aWV3Qm94PSc5MC42NDM5NTE0MTYwMTU2MiAxNTIuMDAzOTg4NjQ3NDYwOTQgMC4wMTE2NDI0NTYwNTQ2ODc1IDAuMDExMzg2MTA4Mzk4NDM3NScgc3R5bGU9J3RyYW5zZm9ybTogc2NhbGVYKC0xKSByb3RhdGUoOTBkZWcpJz48ZyBmaWxsPSdub25lJz48cG9seWdvbiBwb2ludHM9JzkwLjY0Nzc1MDg1NDQ5MjE5LDE1Mi4wMDU2MTUyMzQzNzUgOTAuNjQ1NjE0NjI0MDIzNDQsMTUyLjAwODg5NTg3NDAyMzQ0IDkwLjY0NjYyMTcwNDEwMTU2LDE1Mi4wMTI5Njk5NzA3MDMxMiA5MC42NTI3ODYyNTQ4ODI4MSwxNTIuMDEzNzQ4MTY4OTQ1MyA5MC42NTM5MzA2NjQwNjI1LDE1Mi4wMDY2Mzc1NzMyNDIyICcgc3R5bGU9J2ZpbGw6I0ZGN0EwMDsgZmlsbC1vcGFjaXR5OjAuMjU7IHN0cm9rZTogI0ZGN0EwMDsnIHN0cm9rZS13aWR0aD0nMC4wMDAyMzI4NDkxMjEwOTM3NScgLz48L2c+PC9zdmc+
I'm pretty sure it's not a valid base64 string.
The site you linked uses some image processing techniques to render the image, but in Swift you have to provide a valid base64 string; if you download the image from that same website, you cannot open it as an image either.
You can validate your base64 string at https://base64.guru/tools/validator.
If you want to convert this base64 to an image:
paste your base64 string into that website and download the .svg file (because you can't get an image file from this base64 directly)
convert your .svg file to .png or .jpg with https://svgtopng.com
convert the .png or .jpg file to a base64 string with https://www.base64encoder.io/image-to-base64-converter/
then use that base64 string and you will definitely get the image.
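If you want to check what the string actually contains before going through those steps, here is a quick sketch (pure Foundation, using the question's base64Str) that strips the data-URI prefix and decodes the remainder:

import UIKit

// base64Str is the question's data-URI string ("data:image/svg+xml;base64,...").
let payload = base64Str.components(separatedBy: "base64,").last ?? base64Str
if let data = Data(base64Encoded: payload, options: .ignoreUnknownCharacters) {
    // The decoded bytes are SVG markup ("<svg xmlns=..."), not bitmap data,
    // which is why UIImage(data:) returns nil here.
    print(String(data: data, encoding: .utf8) ?? "not UTF-8 text")
    print(UIImage(data: data) as Any) // nil
}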

Swift: Get the correct file size for JPEG image

I have a UIImage object, say from the camera roll via PHAsset. The image is saved as a .jpg file:
asset.requestContentEditingInput(with: nil) { (input, _) in
    print(input?.fullSizeImageURL) // somefile.jpg
}
To get the file size, shouldn't data.count from this return the correct file size in bytes?
PHImageManager.default().requestImageData(for: asset, options: nil) { data, _, _, _ in
    if let _data = data {
        print(_data.count) // 6759240
    }
}
The output for a particular image is 6759240, while fileSize() returns 2978548.0 bytes (which is the right file size).
func fileSize(forURL url: Any) -> Double {
    var fileURL: URL?
    var fileSize: Double = 0.0
    if (url is URL) || (url is String) {
        if (url is URL) {
            fileURL = url as? URL
        } else {
            fileURL = URL(fileURLWithPath: url as! String)
        }
        var fileSizeValue = 0.0
        try? fileSizeValue = (fileURL?.resourceValues(forKeys: [URLResourceKey.fileSizeKey]).allValues.first?.value as! Double?)!
        if fileSizeValue > 0.0 {
            fileSize = (Double(fileSizeValue))
        }
    }
    return fileSize
}
Does it mean someUIImage?.jpegData(compressionQuality: 1)?.count does not return the correct size of the JPEG image file (if saved)?
One more thing: is there any way to determine the image file size before writing it to disk?
All of this is to compare the file size between the original and the compressed image.
This sounds like a misunderstanding of what the various terms and calls refer to.
You have no direct access to a file stored in the user's Photo library. There may in fact be no such file; you should make no assumptions about the storage format. When you ask PHImageManager for an image's data, you are given the bitmap data, ready for use. Thus you should expect this data to be big, in exact proportion to the dimensions of the image. 6759240 is more than 6MB, which sounds about right on an older iPhone; a newer iPhone takes 4032x3024 photos, which comes to more than 8MB.
Then, in a different part of your code, you call fileSize(forURL:). Now you're looking at an actual file, in the file system, in a place where you can access it. If this is an image file, it is compressed; just how much it is compressed depends on the format. 2978548 is about 3MB, which is pretty good for a JPEG compressed without too much lossiness.
Finally, you ask about UIImage jpegData(compressionQuality: 1)?.count. You do not show any code that actually calls that. But this is data ready for saving as a file directly with write(to:) and a URL, and I would expect it to be the same as fileSize(forURL:) if you were to check the very same file later.
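To illustrate that last point, here is a minimal sketch (the function name, file name, and quality value are placeholders, not from the question): the byte count of the encoded Data is the size "before writing", and it matches what ends up on disk because those same bytes are what gets written:

import UIKit

func estimateAndVerifyJPEGSize(of image: UIImage, quality: CGFloat = 0.8) throws {
    // Size "before writing" is just the encoded data's length in bytes.
    guard let jpegData = image.jpegData(compressionQuality: quality) else { return }
    print("encoded size: \(jpegData.count) bytes")

    // Write those same bytes to a temporary file and read the size back from disk.
    let url = FileManager.default.temporaryDirectory.appendingPathComponent("sample.jpg")
    try jpegData.write(to: url)
    let onDisk = try url.resourceValues(forKeys: [.fileSizeKey]).fileSize ?? 0
    print("on-disk size: \(onDisk) bytes") // matches jpegData.count
}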

Difference between tiffRepresentation and bitmapRepresentation in terms of base64 string encoding?

I am trying to convert an NSImage into a base64 string encoding.
extension NSImage {
    // Convert NSImage self to a string of base64 encoding
    func getBase64String() -> String {
        guard let tiffData = self.tiffRepresentation else {
            printError("Failed to get tiffRepresentation")
            exit(-1)
        }
        guard let bitmap: NSBitmapImageRep = NSBitmapImageRep(data: tiffData) else {
            printError("Failed to get Bitmap representation from tiffRepresentation")
            exit(-1)
        }
        guard let data = bitmap.representation(using: .png, properties: [:]) else {
            printError("Failed to make image data with PNG type")
            exit(-1)
        }
        let tiff_base64 = "data:image/png;base64," + tiffData.base64EncodedString()
        let bitmap_base64 = "data:image/png;base64," + data.base64EncodedString()
        return bitmap_base64
    }
}
I used the result to embed images in an HTML file, and I found both tiff_base64 and bitmap_base64 work. However, the strings look quite different for the same image.
Many examples of converting images to base64 on Stack Overflow call base64EncodedString() on bitmap data. I am wondering whether it really matters whether I use tiff_base64 or bitmap_base64.
TIFF and PNG are two different image formats. The use of base64 is really irrelevant to your question.
First decide which image format you need (which depends on what you are doing with the result). Once you've decided whether you need your images represented as PNG or TIFF (or JPEG or any other supported format), you apply the base64 encoding to that data.
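As a concrete illustration (a sketch, not from the answer), here is an extension that decides the format first and only then base64-encodes, keeping the data-URI MIME type in sync with the chosen encoding:

import AppKit

extension NSImage {
    // Base64 data URI in the requested format; the MIME type matches the encoding used.
    func base64DataURI(as fileType: NSBitmapImageRep.FileType) -> String? {
        guard
            let tiffData = self.tiffRepresentation,
            let bitmap = NSBitmapImageRep(data: tiffData),
            let data = bitmap.representation(using: fileType, properties: [:])
        else { return nil }

        let mime: String
        switch fileType {
        case .png:  mime = "image/png"
        case .jpeg: mime = "image/jpeg"
        case .tiff: mime = "image/tiff"
        default:    mime = "application/octet-stream"
        }
        return "data:\(mime);base64,\(data.base64EncodedString())"
    }
}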

Barcode string value when using the Vision Framework of iOS11

The following piece of Swift code uses the new iOS 11 Vision framework to analyze an image and find QR codes within it.
let barcodeRequest = VNDetectBarcodesRequest(completionHandler: { (request, error) in
    for result in request.results! {
        if let barcode = result as? VNBarcodeObservation {
            if let desc = barcode.barcodeDescriptor as? CIQRCodeDescriptor {
                let content = String(data: desc.errorCorrectedPayload, encoding: .isoLatin1)
                print(content) // Prints garbage
            }
        }
    }
})
let image = //some image with QR code...
let handler = VNImageRequestHandler(cgImage: image, options: [.properties : ""])
try handler.perform([barcodeRequest])
However, the problem is that the desc.errorCorrectedPayload returns the raw encoded data as it has been read from the QR code.
In order to get a printable content string from the descriptor one must decode this raw data (e.g. determine the mode from the first 4 bits).
It gets even more interesting because Apple already has code for decoding this raw data in AVFoundation. The AVMetadataMachineReadableCodeObject class already has the .stringValue field, which returns the decoded string.
Is it possible to access this decoding code and use it in Vision framework too?
It seems that you can now get a decoded string from a barcode using the new payloadStringValue property of VNBarcodeObservation, introduced in iOS 11 beta 5.
if let payload = barcodeObservation.payloadStringValue {
    print("payload is \(payload)")
}
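Put together with the question's setup, a minimal sketch looks like this (the function name and the cgImage parameter are placeholders):

import Vision

// cgImage is assumed to be a CGImage containing the QR code, as in the question.
func readQRCodes(in cgImage: CGImage) {
    let barcodeRequest = VNDetectBarcodesRequest { request, error in
        guard let observations = request.results as? [VNBarcodeObservation] else { return }
        for observation in observations {
            // payloadStringValue already contains the decoded text (iOS 11 beta 5 and later).
            if let payload = observation.payloadStringValue {
                print("payload is \(payload)")
            }
        }
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([barcodeRequest])
}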

How can we get H.264 encoded video stream from iPhone Camera?

I am using the following to get the video sample buffer:
- (void) writeSampleBufferStream:(CMSampleBufferRef)sampleBuffer ofType:(NSString *)mediaType
Now my question is: how can I get H.264-encoded NSData from the above sampleBuffer? Please suggest.
Update for 2017:
You can stream video and audio now by using the VideoToolbox API.
Read the documentation here: VTCompressionSession
Original answer (from 2013):
Short: You can't, the sample buffer you receive is uncompressed.
Methods to get hardware accelerated h264 compression:
AVAssetWriter
AVCaptureMovieFileOutput
As you can see, both write to a file. Writing to a pipe does not work, because the encoder updates header information after a frame or GOP has been fully written. So you'd better not touch the file while the encoder is writing to it, since it rewrites the header information at arbitrary times. Without this header information the video file will not be playable (it updates the size field, so the first header written says the file is 0 bytes). Directly writing to a memory area is not supported currently. But you can open the encoded video file and demux the stream to get at the H.264 data (after the encoder has closed the file, of course).
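For reference, here is a minimal sketch (not from the original answer; the function name, URL, and dimensions are placeholders) of setting up AVAssetWriter for H.264 output, assuming iOS 11+ for AVVideoCodecType.h264:

import AVFoundation

// Writes incoming sample buffers to an H.264-encoded .mp4.
func makeH264Writer(outputURL: URL, width: Int, height: Int) throws -> (AVAssetWriter, AVAssetWriterInput) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: width,
        AVVideoHeightKey: height
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    input.expectsMediaDataInRealTime = true // feed it straight from the capture callback
    writer.add(input)
    return (writer, input)
}

Frames appended through this input come out H.264-encoded inside the resulting .mp4, which you can then demux as described above.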
You can only get raw video images in either BGRA or YUV color formats from AVFoundation. However, when you write those frames to an mp4 via AVAssetWriter, they will be encoded using H264 encoding.
A good example with code on how to do that is RosyWriter
Note that after each AVAssetWriter write, you will know that one complete H264 NAL was written to an mp4. You could write code that reads a complete H264 NAL after each write by AVAssetWriter, which is going to give you access to an H264-encoded frame. It might take a bit to get it right with decent speed, but it is doable (I did it successfully).
By the way, in order to successfully decode these encoded video frames, you will need H264 SPS and PPS information, which is located in a different place in the mp4 file. In my case, I created a couple of test mp4 files and then manually extracted those out. Since those don't change unless you change the H264 encoding specs, you can use them in your code.
Check my post SPS values for H 264 stream in iPhone to see some of the SPS/PPS values I used in my code.
Just a final note: in my case I had to stream the h264-encoded frames to another endpoint for decoding/viewing, so my code had to do this fast. In my case it was relatively fast, but eventually I switched to VP8 for encoding/decoding simply because it was way faster, since everything was done in memory without file reading/writing.
Good luck, and hopefully this info helps.
Use the VideoToolbox API. Refer to: https://developer.apple.com/videos/play/wwdc2014/513/
import Foundation
import AVFoundation
import VideoToolbox

public class LiveStreamSession {
    let compressionSession: VTCompressionSession

    var index = -1
    var lastInputPTS = CMTime.zero

    public init?(width: Int32, height: Int32) {
        var compressionSessionOrNil: VTCompressionSession? = nil
        let status = VTCompressionSessionCreate(allocator: kCFAllocatorDefault,
                                                width: width,
                                                height: height,
                                                codecType: kCMVideoCodecType_H264,
                                                encoderSpecification: nil, // let the video toolbox choose an encoder
                                                imageBufferAttributes: nil,
                                                compressedDataAllocator: kCFAllocatorDefault,
                                                outputCallback: nil,
                                                refcon: nil,
                                                compressionSessionOut: &compressionSessionOrNil)
        guard status == noErr,
              let compressionSession = compressionSessionOrNil else {
            return nil
        }
        VTSessionSetProperty(compressionSession, key: kVTCompressionPropertyKey_RealTime, value: kCFBooleanTrue)
        VTCompressionSessionPrepareToEncodeFrames(compressionSession)
        self.compressionSession = compressionSession
    }

    public func pushVideoBuffer(buffer: CMSampleBuffer) {
        // image buffer
        guard let imageBuffer = CMSampleBufferGetImageBuffer(buffer) else {
            assertionFailure()
            return
        }

        // pts
        let pts = CMSampleBufferGetPresentationTimeStamp(buffer)
        guard CMTIME_IS_VALID(pts) else {
            assertionFailure()
            return
        }

        // duration
        var duration = CMSampleBufferGetDuration(buffer)
        if CMTIME_IS_INVALID(duration) && CMTIME_IS_VALID(self.lastInputPTS) {
            duration = CMTimeSubtract(pts, self.lastInputPTS)
        }

        index += 1
        self.lastInputPTS = pts
        print("[\(Date())]: pushVideoBuffer \(index)")

        let currentIndex = index
        VTCompressionSessionEncodeFrame(compressionSession, imageBuffer: imageBuffer, presentationTimeStamp: pts, duration: duration, frameProperties: nil, infoFlagsOut: nil) { [weak self] status, encodeInfoFlags, sampleBuffer in
            print("[\(Date())]: compressed \(currentIndex)")
            if let sampleBuffer = sampleBuffer {
                self?.didEncodeFrameBuffer(buffer: sampleBuffer, id: currentIndex)
            }
        }
    }

    deinit {
        VTCompressionSessionInvalidate(compressionSession)
    }

    private func didEncodeFrameBuffer(buffer: CMSampleBuffer, id: Int) {
        guard let attachments = CMSampleBufferGetSampleAttachmentsArray(buffer, createIfNecessary: true) else {
            return
        }
        let dic = Unmanaged<CFDictionary>.fromOpaque(CFArrayGetValueAtIndex(attachments, 0)).takeUnretainedValue()
        // kCMSampleAttachmentKey_NotSync is a constant we don't own, so pass it unretained
        let keyframe = !CFDictionaryContainsKey(dic, Unmanaged.passUnretained(kCMSampleAttachmentKey_NotSync).toOpaque())
        // print("[\(Date())]: didEncodeFrameBuffer \(id) is I frame: \(keyframe)")
        if keyframe,
           let formatDescription = CMSampleBufferGetFormatDescription(buffer) {
            // https://www.slideshare.net/instinctools_EE_Labs/videostream-compression-in-ios
            var number = 0
            CMVideoFormatDescriptionGetH264ParameterSetAtIndex(formatDescription, parameterSetIndex: 0, parameterSetPointerOut: nil, parameterSetSizeOut: nil, parameterSetCountOut: &number, nalUnitHeaderLengthOut: nil)
            // SPS and PPS and so on...
            let parameterSets = NSMutableData()
            for index in 0 ... number - 1 {
                var parameterSetPointer: UnsafePointer<UInt8>?
                var parameterSetLength = 0
                CMVideoFormatDescriptionGetH264ParameterSetAtIndex(formatDescription, parameterSetIndex: index, parameterSetPointerOut: &parameterSetPointer, parameterSetSizeOut: &parameterSetLength, parameterSetCountOut: nil, nalUnitHeaderLengthOut: nil)
                // parameterSets.append(startCode, length: startCodeLength)
                if let parameterSetPointer = parameterSetPointer {
                    parameterSets.append(parameterSetPointer, length: parameterSetLength)
                }
                //
                if index == 0 {
                    print("SPS is \(parameterSetPointer) with length \(parameterSetLength)")
                } else if index == 1 {
                    print("PPS is \(parameterSetPointer) with length \(parameterSetLength)")
                }
            }
            print("[\(Date())]: parameterSets \(parameterSets.length)")
        }
    }
}
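A possible usage sketch (the capture wiring and dimensions are assumptions, not part of the answer above): feed each camera frame from the AVCaptureVideoDataOutput delegate into the session:

import AVFoundation

// Hypothetical capture delegate feeding the encoder above.
final class CameraEncoder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = LiveStreamSession(width: 1280, height: 720)

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Each camera frame goes straight into the VTCompressionSession.
        session?.pushVideoBuffer(buffer: sampleBuffer)
    }
}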