I want to take a screenshot of an NSView, but when I do this, I get an image without the background. All the subviews are shown, but not the background. I use this code:
if let image = NSImage(data: background.dataWithPDF(inside: background.bounds)) {
    let imageView = NSImageView(image: image)
    imageView.imageScaling = .scaleProportionallyUpOrDown
    return imageView
}
I thought, OK, since I only get the subviews, I will take a screenshot of the superview instead, so I tried the following:
if let superview = background.superview, let image = NSImage(data: superview.dataWithPDF(inside: background.frame)) {
    let imageView = NSImageView(image: image)
    imageView.imageScaling = .scaleProportionallyUpOrDown
    return imageView
}
But I get the same result. Even if I set the background color of my background view, I still get an image with a transparent background.
How can I resolve this?
Thank you,
Artur
I got an answer:
background.lockFocus()
// defer guarantees unlockFocus() runs even on the early return below
defer { background.unlockFocus() }
if let rep = NSBitmapImageRep(focusedViewRect: background.bounds) {
    let img = NSImage(size: background.bounds.size)
    img.addRepresentation(rep)
    return NSImageView(image: img)
}
You could also convert the view's backing layer.backgroundColor to an image, assign that image to a backing image view that sits at index 0 of your view's subview hierarchy, and then use dataWithPDF; that will successfully capture a background.
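A minimal sketch of that idea, assuming the background view is layer-backed; the helper name is mine:
func insertBackingImageView(into background: NSView) {
    guard let cgColor = background.layer?.backgroundColor,
          let color = NSColor(cgColor: cgColor) else { return }

    // Render the layer's background color into a plain NSImage.
    let image = NSImage(size: background.bounds.size)
    image.lockFocus()
    color.setFill()
    background.bounds.fill()
    image.unlockFocus()

    // Park it behind every other subview so dataWithPDF(inside:) captures it.
    let backingImageView = NSImageView(image: image)
    backingImageView.frame = background.bounds
    background.addSubview(backingImageView, positioned: .below, relativeTo: nil)
}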
Using CIGaussianBlur causes UIImageView to apply the blur from the border in, making the image appear to shrink (right image). Using .blur on a SwiftUI view does the opposite; the blur is applied from the border outwards (left image). This is the effect I’m trying to achieve in UIKit. How can I go about this?
I've seen a few posts about using CIAffineClamp, but that causes the blur to stop at the image border, which is not what I want.
private let context = CIContext()
private let filter = CIFilter(name: "CIGaussianBlur")!

private func createBluredImage(using image: UIImage, value: CGFloat) -> UIImage? {
    let beginImage = CIImage(image: image)
    filter.setValue(beginImage, forKey: kCIInputImageKey)
    filter.setValue(value, forKey: kCIInputRadiusKey)

    guard
        let outputImage = filter.outputImage,
        let cgImage = context.createCGImage(outputImage, from: outputImage.extent)
    else {
        return nil
    }

    return UIImage(cgImage: cgImage)
}
When I used CIGaussianBlur I wanted my output image to be contained inside the image frame, so I used CIAffineClamp on the image before applying the blur, as you describe.
You might need to render your source image into a larger frame, clamp to that larger frame using CIAffineClamp, apply your blur filter, and then load the resulting blurred output image. Core Image is a bit of a pain to set up and figure out, so I don't have a full solution ready for you, but that's what I would suggest. A sketch of the idea follows.
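A minimal sketch of that direction (the function name and padding value are mine, not from the question). One simplification over the suggestion above: outside an image's extent Core Image already supplies transparent pixels, so rather than compositing into a larger frame it may be enough to blur and then crop to a padded extent, which keeps the outward fade:
import CoreImage
import UIKit

private let context = CIContext()

// Hypothetical helper: give the blur transparent margin to spill into,
// then crop to the padded extent so the outward fade is kept.
func blurSpillingOutward(_ image: UIImage, radius: CGFloat) -> UIImage? {
    guard let input = CIImage(image: image),
          let blur = CIFilter(name: "CIGaussianBlur") else { return nil }

    blur.setValue(input, forKey: kCIInputImageKey)
    blur.setValue(radius, forKey: kCIInputRadiusKey)

    // Expand the crop rect beyond the source extent; outside the source,
    // Core Image samples transparent pixels, so the blur fades outward.
    let padding = 2 * radius
    let paddedExtent = input.extent.insetBy(dx: -padding, dy: -padding)

    guard let output = blur.outputImage,
          let cgImage = context.createCGImage(output, from: paddedExtent) else { return nil }

    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}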
I just started working with Swift last week, and I need a suggestion on whether the following approach is the right way of laying a partial image on top of another image.
I have a UIView in which I am creating 3 images programmatically: a left arrow image, a middle mobile image, and a right arrow image, as shown below. Can I place the arrow images so that they overlap the mobile image by 50%?
I have tried:
func setupUI() {
    guard let mobileImage = UIImage(named: "mobile"),
          let arrowImage = UIImage(named: "arrow") else { return }

    // The middle image is inset by half an arrow width so the arrows can overlap it by 50%.
    middleView = UIImageView(frame: CGRect(x: arrowImage.size.width / 2, y: 0,
                                           width: mobileImage.size.width, height: mobileImage.size.height))
    middleView.image = mobileImage
    middleView.layer.borderWidth = 1.0
    middleView.layer.cornerRadius = 5.0
    self.addSubview(middleView)

    // Both arrows sit on the bottom edge of the mobile image.
    let yForArrow = mobileImage.size.height - arrowImage.size.height
    leftArrow = UIImageView(frame: CGRect(x: 0, y: yForArrow,
                                          width: arrowImage.size.width, height: arrowImage.size.height))
    leftArrow.image = arrowImage
    self.addSubview(leftArrow)

    let rightArrowX = mobileImage.size.width
    rightView = UIImageView(frame: CGRect(x: rightArrowX, y: yForArrow,
                                          width: arrowImage.size.width, height: arrowImage.size.height))
    rightView.image = arrowImage
    self.addSubview(rightView)
}
*At the start it was not working, as I forgot to call setupUI() in the init method, as shown in the answer below.
Is setting frames the correct way of doing this, or should I be using constraints? To me it looks like a bad approach, as I am hard-coding the numbers in the CGRects.
*This image was created in MS Paint to show what it should look like on an iPhone.
I found the problem: I had missed calling setupUI() in the init method.
setupUI() programmatically adds the images to the UIView; since the call was missing, no image was appearing in the iPhone simulator.
override init(frame: CGRect) {
    super.init(frame: frame)
    setupUI() // Code to add images in UIView
}
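Not from the answer above, but since the question also asked about constraints: the same 50% overlap can be expressed with Auto Layout instead of hard-coded frames. A minimal sketch under that assumption (it would replace the frame-setting code, and must run after the views are added as subviews; the layout choices are mine):
// Hypothetical alternative using Auto Layout anchors instead of frames.
func setupConstraints() {
    [middleView, leftArrow, rightView].forEach {
        $0.translatesAutoresizingMaskIntoConstraints = false
    }
    NSLayoutConstraint.activate([
        middleView.topAnchor.constraint(equalTo: topAnchor),
        middleView.centerXAnchor.constraint(equalTo: centerXAnchor),
        // Pinning an arrow's centerX to the image's edge makes it overlap by 50%.
        leftArrow.centerXAnchor.constraint(equalTo: middleView.leadingAnchor),
        leftArrow.bottomAnchor.constraint(equalTo: middleView.bottomAnchor),
        rightView.centerXAnchor.constraint(equalTo: middleView.trailingAnchor),
        rightView.bottomAnchor.constraint(equalTo: middleView.bottomAnchor),
    ])
}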
I have a problem with a UIImageView with a custom border.
How can I make an image like this?
I add the normal image with the bread, and I have this other image for the border.
Any idea?
Thanks
All you need to do is use a bucket-fill tool to fill the star's outer border.
Afterwards, just create a new UIImageView with the original image view's frame and put it on top of the original image using the bringSubviewToFront(_:) method:
let imgView_bread = UIImageView(image: UIImage(named: "bread"))
imgView_bread.contentMode = .ScaleAspectFill
self.view.addSubview(imgView_bread)

let imgView_starBorder = UIImageView(image: UIImage(named: "star_border"))
imgView_starBorder.contentMode = .ScaleAspectFill
imgView_starBorder.frame = imgView_bread.frame
self.view.addSubview(imgView_starBorder)
self.view.bringSubviewToFront(imgView_starBorder)
I am trying to show a view modally. The view is shown with a "Cross Dissolve" transition, over full screen. I am passing a screenshot to the controller. I am then trying to crop the screenshot and retain only the part that would be under the view; that part I am blurring and adding to the view.
The code works as far as blurring goes, but I have two problems.
1) The image is at double scale, which will be something to do with the Retina display, but I am never sure how to fix that.
2) Having tried everything I can think of, I cannot get the coordinates of the "canvas" in a coordinate system that helps me correctly crop the view.
I'd really appreciate help with this.
Thanks,
Karl
override func viewWillAppear(animated: Bool) {
    super.viewWillAppear(animated)

    let appDelegate = UIApplication.sharedApplication().delegate as! AppDelegate
    let window = appDelegate.window!
    let splitViewController = window.rootViewController as! UISplitViewController

    // Problem 2 is probably here: the rect should be converted from the
    // canvas's superview, not from a freshly created UIView.
    let cropFrame = UIView().convertRect(self.canvas.frame, toView: splitViewController.view)
    let croppedScreenShotCG = CGImageCreateWithImageInRect(screenShot?.CGImage, cropFrame)
    // Problem 1: this initializer loses the screen scale, so the image comes out at 2x.
    let croppedScreenShot = UIImage(CGImage: croppedScreenShotCG!)

    let blurEffect = UIBlurEffect(style: UIBlurEffectStyle.Light)
    let blurEffectView = UIVisualEffectView(effect: blurEffect)
    blurEffectView.frame = CGRectMake(0, 0, self.canvas.frame.width, self.canvas.frame.height)

    let blurImageView = UIImageView(image: croppedScreenShot)
    blurImageView.addSubview(blurEffectView)

    let blurColorCast = UIView(frame: CGRectMake(0, 0, self.canvas.frame.width, self.canvas.frame.height))
    blurColorCast.backgroundColor = UIColor.cloverColor10pc()
    blurColorCast.alpha = 0.2
    blurImageView.addSubview(blurColorCast)

    self.canvas.addSubview(blurImageView)
    self.canvas.sendSubviewToBack(blurImageView)

    UIImageWriteToSavedPhotosAlbum(croppedScreenShot, nil, nil, nil)
}
I am getting the screenshot like this:
let layer = window.layer
let scale = UIScreen.mainScreen().scale
UIGraphicsBeginImageContextWithOptions(layer.frame.size, false, scale)
layer.renderInContext(UIGraphicsGetCurrentContext()!)
let screenshot = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Instead of using UIImage's init(CGImage:), you can use imageWithCGImage:scale:orientation:, which takes a scale parameter to account for Retina screens.
See: https://developer.apple.com/library/ios/documentation/UIKit/Reference/UIImage_Class/#//apple_ref/occ/clm/UIImage/imageWithCGImage:scale:orientation:
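A minimal sketch of how that might look with the question's own variables (cropFrame, screenShot), using Swift 2-era APIs to match. Note that CGImageCreateWithImageInRect works in pixels, so the point-based crop rect has to be multiplied by the scale first:
let scale = UIScreen.mainScreen().scale
// Convert the crop rect from points to pixels before cropping.
let pixelRect = CGRectMake(cropFrame.origin.x * scale,
                           cropFrame.origin.y * scale,
                           cropFrame.size.width * scale,
                           cropFrame.size.height * scale)
if let cgImage = CGImageCreateWithImageInRect(screenShot?.CGImage, pixelRect) {
    // Passing the scale back in keeps the UIImage at the right point size.
    let croppedScreenShot = UIImage(CGImage: cgImage, scale: scale, orientation: .Up)
}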
But personally, I prefer to avoid doing anything in code if there's a reasonable way to do it with storyboards.
Here's an example project that has a blur effect for the back of a modal presentation that doesn't use any code at all.
The only thing worth noting is that for the view controller that's being presented, the root view background color is set to clear (otherwise the view controller that's presenting it would be obscured).
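For reference, the storyboard configuration in that example corresponds roughly to the following in code (the controller name is hypothetical; Swift 2-era APIs to match the question):
let modal = BlurModalViewController() // hypothetical presented controller
modal.modalPresentationStyle = .OverFullScreen // keeps the presenter visible underneath
modal.modalTransitionStyle = .CrossDissolve
presentViewController(modal, animated: true, completion: nil)

// In BlurModalViewController.viewDidLoad():
// view.backgroundColor = UIColor.clearColor() // so the presenter shows through
// let blurView = UIVisualEffectView(effect: UIBlurEffect(style: .Light))
// blurView.frame = view.bounds
// view.insertSubview(blurView, atIndex: 0)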
I have a zoomable image in a UIScrollView. Once the image is scrolled to the user's liking, the scroll view is locked. A UITextField can then be edited, and is (currently) added to the UIImageView. The context of the image is captured and saved.
Without scrolling this works fine; however, the context is based on the frame of the UIImageView. How can I capture the visible portion of the screen instead, leaving out the navigation bar and toolbar? The function used to save the image is below.
@IBAction func saveButtonPressed(sender: AnyObject) {
    let newImage = imageStore.createImage(textLabelBottom.text, imageName: "image1") { () -> UIImage in
        // Can't add the textLabel to imageView because its size changes
        self.imageView.addSubview(self.textLabelBottom)

        UIGraphicsBeginImageContextWithOptions(self.imageView.frame.size, false, 0.0)
        let ctx = UIGraphicsGetCurrentContext()!
        self.imageView.layer.renderInContext(ctx)
        self.imageToSave = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        return self.imageToSave
    }
    imageStore.saveImage(newImage)
    self.showActivityViewController()
}
I want to save the visible portion of the image based on its position in the scroll view, with the textLabel in a static spot. The code currently adds the textLabel to the same point in the image regardless of zoom, and the saved image comes out either very small or very large depending on the UIScrollView's zoom.
Update: I changed the code to the following:
@IBAction func saveButtonPressed(sender: AnyObject) {
    let newImage = imageStore.createImage(textLabelBottom.text, imageName: "image1") { () -> UIImage in
        // Can't add the textLabel to imageView because its size changes
        self.imageView.addSubview(self.textLabelBottom)

        UIGraphicsBeginImageContextWithOptions(self.imageScrollView.bounds.size, false, UIScreen.mainScreen().scale)
        let ctx = UIGraphicsGetCurrentContext()!
        // Shift the origin so the visible portion of the scroll view lands at (0, 0).
        let offset: CGPoint = self.imageScrollView.contentOffset
        CGContextTranslateCTM(ctx, -offset.x, -offset.y)
        self.imageScrollView.layer.renderInContext(ctx)
        self.imageToSave = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        return self.imageToSave
    }
    imageStore.saveImage(newImage)
    self.showActivityViewController()
}
This works properly, but the text field is obviously out of place. Do I add it to a specific CGPoint within the context?
Update 2:
It's my understanding that CGContextTranslateCTM shifts the coordinate system of the context you're working in; in this case, the new {0, 0} is at the scroll view's content offset. When I place my text field into the context, though, it doesn't show up in the right place (it lands offscreen). I am trying something like this:
@IBAction func saveButtonPressed(sender: AnyObject) {
    let newImage = imageStore.createImage(textLabelBottom.text, imageName: "image1") { () -> UIImage in
        // Can't add the textLabel to imageView because its size changes
        self.imageView.addSubview(self.textLabelBottom)

        UIGraphicsBeginImageContextWithOptions(self.imageScrollView.bounds.size, false, UIScreen.mainScreen().scale)
        let ctx = UIGraphicsGetCurrentContext()!
        let offset: CGPoint = self.imageScrollView.contentOffset
        CGContextTranslateCTM(ctx, -offset.x, -offset.y)
        self.imageScrollView.layer.renderInContext(ctx)

        // Move to the correct point and render
        CGContextMoveToPoint(ctx, /*point1*/, /*point2*/)
        self.textLabelBottom.layer.renderInContext(ctx)

        self.imageToSave = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        return self.imageToSave
    }
    imageStore.saveImage(newImage)
    self.showActivityViewController()
}
What else do I need to do to the context to do this properly?
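One observation, offered as my own assumption rather than something from the thread: CGContextMoveToPoint only moves the current point of the path being built, so it has no effect on where renderInContext draws. Positioning a layer render takes another CTM translation, wrapped in save/restore so it doesn't disturb the rest of the drawing. A sketch with a hypothetical target point:
// Hypothetical target position for the label, in the translated coordinate space.
let labelOrigin = self.textLabelBottom.frame.origin
CGContextSaveGState(ctx)
// renderInContext draws at the context origin, so translate there first.
CGContextTranslateCTM(ctx, labelOrigin.x, labelOrigin.y)
self.textLabelBottom.layer.renderInContext(ctx)
CGContextRestoreGState(ctx)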