I'm building an augmented reality app and would like to show preview images of the available DAE files.
Is it possible to use Swift to extract image data from a DAE file and load it into a UIImage?
Out of the box there is no function you can call to pull a screenshot from a .dae file. That's not to say there isn't a way to do this on device.
To understand what you are asking, let's consider the different components.
The .dae file is loaded into a SceneKit scene that an ARSCNView renders. The ARSCNView is traditionally added as a subview of your view controller. In this case you need the view to render the 3D file offscreen, so create an ARSCNView but do not add it as a subview of the view controller (I would add it during the initial setup to make sure everything loads, then comment that line out).
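A minimal sketch of that offscreen setup, assuming a .dae file bundled with the app (the file name and view size below are placeholders, not from the question):

import ARKit
import SceneKit

// Hypothetical setup: an ARSCNView that is never added to the view hierarchy,
// used only to render a preview of the model.
let previewView = ARSCNView(frame: CGRect(x: 0, y: 0, width: 300, height: 300))
if let scene = SCNScene(named: "model.dae") {
    previewView.scene = scene // load the .dae into the offscreen view
}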
Once you have your 3D model loaded into your ARSCNView, use the function below to capture the view's contents into a UIImage:
let viewSnapshot = takeSnapshotOfView(view: yourARSCNView)
Here is that function, along with a UIImage extension to resize the result:
import UIKit
import CoreGraphics

func takeSnapshotOfView(view: UIView) -> UIImage? {
    UIGraphicsBeginImageContext(CGSize(width: view.frame.size.width, height: view.frame.size.height))
    view.drawHierarchy(in: CGRect(x: 0.0, y: 0.0, width: view.frame.size.width, height: view.frame.size.height), afterScreenUpdates: true)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    // Resize to fit within 300x300 points before returning; the resize extension
    // below preserves the aspect ratio.
    return image?.resize(CGRect(x: 0, y: 0, width: 300, height: 300))
}
extension UIImage {
    /// Scales the image to fit within the given rect's width and height,
    /// preserving the aspect ratio.
    func resize(_ toSize: CGRect) -> UIImage {
        let size = self.size
        let widthRatio = toSize.width / size.width
        let heightRatio = toSize.height / size.height
        // Use the smaller ratio so the result fits inside the target on both axes.
        var newSize: CGSize
        if widthRatio > heightRatio {
            newSize = CGSize(width: size.width * heightRatio, height: size.height * heightRatio)
        } else {
            newSize = CGSize(width: size.width * widthRatio, height: size.height * widthRatio)
        }
        let rect = CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height)
        UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
        self.draw(in: rect)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
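As an aside, SCNView (which ARSCNView subclasses) also exposes a built-in snapshot() method that renders the current scene into a UIImage; depending on your setup that may be simpler than drawing the view hierarchy:

// Alternative sketch: use the view's own snapshot API instead of drawHierarchy.
let sceneImage: UIImage = yourARSCNView.snapshot()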
How do I resize an image from UIImagePickerController so that if the length, width, or both exceed 1500, both the length and width are reduced to 65%? For example, if the size of the photo is 2000x3000, after resizing both dimensions should be multiplied by 0.65 and become 1300x1950.
And if neither the length nor the width of the photo exceeds 1500, the photo should remain unchanged.
Here is code to resize based on a maximum size and a scale factor.
extension UIImage {
    func resizeImage(maxSize: CGFloat, scale: CGFloat) -> UIImage {
        let size = self.size
        if (size.width > maxSize || size.height > maxSize) {
            let newSize = CGSize(width: size.width * scale, height: size.height * scale)
            let rect = CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height)
            UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
            self.draw(in: rect)
            let newImage = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return newImage!
        }
        return self
    }
}
let resizedImage = yourImage.resizeImage(maxSize: 1500, scale: 0.65)
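Checking this against the example from the question: a 2000x3000 photo exceeds 1500 in both dimensions, so both are multiplied by 0.65, giving 1300x1950; a photo no larger than 1500 in either dimension is returned unchanged.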
I'm creating an eraser app where I have a picture and the user can erase the background with the following func:
import UIKit
import AVFoundation // for AVMakeRect

func eraseImage(image: UIImage, line: [CGPoint], brushSize: CGFloat) -> UIImage? {
    // `proxy` comes from the surrounding view code and supplies the drawing area's size.
    UIGraphicsBeginImageContextWithOptions(proxy.size, false, 0)
    defer { UIGraphicsEndImageContext() } // always close the context, even on early return
    let context = UIGraphicsGetCurrentContext()
    // Fit the image inside the drawing area, preserving its aspect ratio.
    let rect = AVMakeRect(aspectRatio: image.size, insideRect: CGRect(x: 0, y: 0, width: proxy.size.width, height: proxy.size.height))
    //lassoImageView.image?.draw(in: calculateRectOfImageInImageView(imageView: lassoImageView))
    image.draw(in: rect, blendMode: .normal, alpha: 1)
    // Build a path through the touch points and stroke it with the .clear blend mode,
    // which erases the stroked pixels.
    context?.move(to: CGPoint(x: line.first!.x - 50, y: line.first!.y - 50))
    for pointIndex in 1..<line.count {
        context?.addLine(to: CGPoint(x: line[pointIndex].x - 50, y: line[pointIndex].y - 50))
    }
    context?.setBlendMode(.clear)
    context?.setLineCap(.round)
    context?.setLineWidth(brushSize)
    context?.setShadow(offset: CGSize(width: 0, height: 0), blur: 8)
    context?.strokePath()
    return UIGraphicsGetImageFromCurrentImageContext()
}
I'm struggling to figure out how I can add a drawing function so the user can correct their eraser mistakes and draw some parts back.
I'm thinking of drawing the original image onto the edited image with a mask built from a path that tracks the CGPoints of the user's touches. Is that possible?
I would suggest setting up your image with a second CALayer installed as a mask. Install an image into the mask layer's contents, and draw with clear/opaque colors into the mask to hide/expose pixels from the image layer.
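A minimal sketch of that layer setup (all names below are illustrative, not from the question). Opaque pixels in the mask expose the photo and clear pixels hide it, so "erasing" and "restoring" both become drawing into the mask image:

import UIKit

// Hypothetical helper: show `image` in a layer whose visibility is controlled by `mask`.
// White (opaque) areas of the mask show the photo; clear areas hide ("erase") it.
func installMaskedImageLayer(in containerView: UIView, image: UIImage, mask: UIImage) -> CALayer {
    let imageLayer = CALayer()
    imageLayer.frame = containerView.bounds
    imageLayer.contents = image.cgImage

    let maskLayer = CALayer()
    maskLayer.frame = imageLayer.bounds
    maskLayer.contents = mask.cgImage
    imageLayer.mask = maskLayer

    containerView.layer.addSublayer(imageLayer)
    return imageLayer
}

Erasing would then redraw the mask image with clear strokes along the touch path, and restoring would redraw the same path with opaque strokes, setting the updated image back on the mask layer's contents.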
I have created an image from a UIView using UIGraphicsGetCurrentContext. It works fine, but when I resize that image to a larger size it becomes blurred, with bad quality. Is there any way to keep the quality when resizing? I have tried many ways but none worked.
Code that creates the image:
func image(with view: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.0)
    defer { UIGraphicsEndImageContext() }
    if let context = UIGraphicsGetCurrentContext() {
        view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
        context.setAllowsAntialiasing(true)
        context.setShouldAntialias(true)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        return image
    }
    return nil
}
Here is the code I used to resize, an extension on UIImage:
func resizedImage(newSize: CGSize) -> UIImage {
    // Guard newSize is different
    guard self.size != newSize else { return self }
    let aspectRatio = self.size.width / self.size.height
    let imageWidth = newSize.height * aspectRatio
    let finalSize = CGSize(width: imageWidth, height: newSize.height)
    UIGraphicsBeginImageContextWithOptions(finalSize, false, 0.0)
    self.draw(in: CGRect(x: 0, y: 0, width: finalSize.width, height: finalSize.height))
    let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return newImage
}
An example of the blurry result is attached in the original post.
Thanks
Try this:
extension UIImage {
    func resizeImage(targetSize: CGSize) -> UIImage {
        let size = self.size
        let widthRatio = targetSize.width / size.width
        let heightRatio = targetSize.height / size.height
        let newSize = widthRatio > heightRatio
            ? CGSize(width: size.width * heightRatio, height: size.height * heightRatio)
            : CGSize(width: size.width * widthRatio, height: size.height * widthRatio)
        let rect = CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height)
        UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
        self.draw(in: rect)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
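Usage would be along these lines (the target size and the originalImage name are placeholders for your own snapshot image):

let resized = originalImage.resizeImage(targetSize: CGSize(width: 1024, height: 1024))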
I have an image in a UIImageView:
imageView.contentMode = .scaleAspectFit
imageView.backgroundColor = .red
Because of .scaleAspectFit the image view has some red borders, and that's OK:
The user can add some UIViews, like labels or images, over the imageView.
In the final step I use the following code to save the edited image so the user can share it or save it to the photo library:
private func generateImage() -> UIImage? {
    var finalImage: UIImage?
    UIGraphicsBeginImageContextWithOptions(CGSize(width: imageView.frame.size.width, height: imageView.frame.size.height), true, 0)
    imageView.drawHierarchy(in: CGRect(x: 0, y: 0, width: imageView.frame.size.width, height: imageView.frame.size.height), afterScreenUpdates: true)
    finalImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return finalImage
}
The problem is that the finalImage still has the red borders from imageView.
You can get the CGRect of the UIImage displayed in the UIImageView in aspect-fit content mode. Create an extension of UIImageView like this:
extension UIImageView {
    var contentClippingRect: CGRect {
        guard let image = image else { return bounds }
        guard contentMode == .scaleAspectFit else { return bounds }
        guard image.size.width > 0 && image.size.height > 0 else { return bounds }

        let scale: CGFloat
        if image.size.width > image.size.height {
            scale = bounds.width / image.size.width
        } else {
            scale = bounds.height / image.size.height
        }

        let size = CGSize(width: image.size.width * scale, height: image.size.height * scale)
        let x = (bounds.width - size.width) / 2.0
        let y = (bounds.height - size.height) / 2.0
        return CGRect(x: x, y: y, width: size.width, height: size.height)
    }
}
You can now use imageView.contentClippingRect to read the position and size of the image inside the image view.
You have to make minor changes to your method: snapshot only the contentClippingRect portion instead of the full imageView bounds.
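A hedged sketch of what that change could look like, assuming the contentClippingRect extension above; the negative offsets shift the drawing so only the image area, not the red letterboxing, lands in the context:

private func generateImage() -> UIImage? {
    let clippingRect = imageView.contentClippingRect
    UIGraphicsBeginImageContextWithOptions(clippingRect.size, true, 0)
    defer { UIGraphicsEndImageContext() }
    // Draw the full image view, offset so the context only covers the image area.
    imageView.drawHierarchy(in: CGRect(x: -clippingRect.origin.x,
                                       y: -clippingRect.origin.y,
                                       width: imageView.frame.size.width,
                                       height: imageView.frame.size.height),
                            afterScreenUpdates: true)
    return UIGraphicsGetImageFromCurrentImageContext()
}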
Let me know in case of any queries.
UPDATE
Please try this UIImageView+Extension (linked in the original answer); it might help you. It is Objective-C code, so convert it to Swift.
You can try this as well:
import AVFoundation

let image = #imageLiteral(resourceName: "Cat03")
// AVMakeRect returns the largest rect with the image's aspect ratio that fits inside the frame.
let x: CGRect = AVMakeRect(aspectRatio: image.size, insideRect: imageView1.frame)
print(x)
The code above gives you the displayed size of the image.
I know I can fill a rect using NSRectFill(bounds). However, I wanted to preserve transparency for PDF output, and I discovered that I can do that only with NSBezierPath(rect: bounds).fill().
What is the difference (behind the scenes) between those two?
func drawBackground() {
    CGContextSaveGState(currentContext)
    if (NSGraphicsContext.currentContextDrawingToScreen()) {
        NSColor(patternImage: checkerboardImage).set()
        NSRectFillUsingOperation(bounds, NSCompositingOperation.CompositeSourceOver)
    }
    NSColor.clearColor().setFill()
    //NSRectFill(bounds) //option 1
    NSBezierPath(rect: bounds).fill() // option 2
    CGContextRestoreGState(currentContext)
}
extension NSImage {
    static func checkerboardImageWithSize(size: CGFloat) -> NSImage {
        let fullRect = NSRect(x: 0, y: 0, width: size, height: size)
        let halfSize: CGFloat = size * 0.5
        let upperSquareRect = NSRect(x: 0, y: 0, width: halfSize, height: halfSize)
        let bottomSquareRect = NSRect(x: halfSize, y: halfSize, width: halfSize, height: halfSize)

        let image = NSImage(size: NSSize(width: size, height: size))
        image.lockFocus()
        NSColor.whiteColor().set() // set the fill color before filling the full rect
        NSRectFill(fullRect)
        NSColor(deviceWhite: 0.0, alpha: 0.1).set()
        NSRectFill(upperSquareRect)
        NSRectFill(bottomSquareRect)
        image.unlockFocus()
        return image
    }
}
I'm mostly an iOS programmer and not very fluent these days over on the AppKit side of things, but my guess is that you're getting the wrong NSCompositingOperation. I see from the docs that NSRectFill uses NSCompositeCopy. Perhaps it would work better if you used NSRectFillUsingOperation, where you get to specify the compositing operation.
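For illustration, a hedged sketch of that suggestion, using the same Swift 2-era API names as the question; whether this actually matches NSBezierPath's behavior for your PDF output is something you would need to verify:

// Same clear fill, but with an explicit source-over compositing operation
// instead of NSRectFill's implicit copy.
NSColor.clearColor().setFill()
NSRectFillUsingOperation(bounds, NSCompositingOperation.CompositeSourceOver)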