How to screenshot just part of a screen - Swift

I need to crop an image to a rectangle with a height of 25 and a width the length of the screen.
I have the image ready and positioned in the center of the screen. I'd like to screenshot ONLY the rectangle.
I followed this answer, but it does not seem to be working for me for some reason.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(self.view.frame.width,25), false, 0)
self.view.drawViewHierarchyInRect(CGRectMake(self.view.frame.width,25,view.bounds.size.width,view.bounds.size.height), afterScreenUpdates: true)
let image:UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
statusImage = image

You are using the wrong coordinates. With your code, the drawing starts at the top-right corner of the view and draws empty space to the right of it. You should use 0 as the X value and a negative Y value. For example, if you want to screenshot a bar starting at Y=50, you should use:
self.view.drawViewHierarchyInRect(CGRectMake(0, -50, view.bounds.size.width, view.bounds.size.height), afterScreenUpdates: true)
If you want to draw from the top of the view, just pass 0 as the Y value as well.
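For completeness, here's the full corrected snippet in modern Swift syntax. This is a minimal sketch, assuming it runs inside a UIViewController subclass (so view and statusImage are available) and a bar starting at Y = 50 as in the example above:
let barHeight: CGFloat = 25
let barY: CGFloat = 50
// Create a context that is only (view width x 25) points in size;
// anything drawn outside of it is cropped away.
UIGraphicsBeginImageContextWithOptions(CGSize(width: view.bounds.width, height: barHeight), false, 0)
// Shift the whole hierarchy up by barY so the strip we want lands at the top.
view.drawHierarchy(in: CGRect(x: 0, y: -barY, width: view.bounds.width, height: view.bounds.height), afterScreenUpdates: true)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
statusImage = image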

Related

Rendering an SCNScene with transparent background makes the scene semi-transparent

My goal is to render an SCNScene off screen with a transparent background, as a PNG. Full reproducing project here.
It works, but when I enable jittering, the resulting render is semitransparent. In this example I pasted in the resulting PNG on top of an image with black squares, and you will notice that the black squares are in fact visible:
As you can see, the black boxes are visible through the 3D objects.
But, when I disable jittering, I get this:
As you can see, the black boxes are not visible.
I'm on Monterey 12.1 (21C52). I'm testing the images in Preview and in Figma.
I'm using standard SDK features only. Here's what I do:
scene.background.contents = NSColor.clear
let snapshotRenderer = SCNRenderer(device: MTLCreateSystemDefaultDevice(), options: nil)
snapshotRenderer.pointOfView = sceneView.pointOfView
snapshotRenderer.scene = scene
snapshotRenderer.scene!.background.contents = NSColor.clear
snapshotRenderer.autoenablesDefaultLighting = true
// setting this to false does not make the image semi-transparent
snapshotRenderer.isJitteringEnabled = true
let size = CGSize(width: 1000, height: 1000)
let image = snapshotRenderer.snapshot(atTime: .zero, with: size, antialiasingMode: .multisampling16X)
let imageRep = NSBitmapImageRep(data: image.tiffRepresentation!)
let pngData = imageRep?.representation(using: .png, properties: [:])
try! pngData!.write(to: destination)
The docs for jittering say:
Jittering is a process that SceneKit uses to improve the visual quality of a rendered scene. While the scene’s content is still, SceneKit moves the pointOfView location very slightly (by less than a pixel in projected screen space). It then composites images rendered after several such moves to create the final rendered scene, creating an antialiasing effect that smooths the edges of rendered geometry.
To me, that doesn't sound like something that is expected to produce semi-transparency?
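No explanation for the behavior here, but note the comment in the snippet above: with jittering disabled, the output is not semi-transparent. So a pragmatic workaround sketch, at the cost of relying on multisampling alone for antialiasing, would be:
// Workaround sketch: keep jittering off for the offscreen snapshot only;
// .multisampling16X still smooths geometry edges.
snapshotRenderer.isJitteringEnabled = false
let image = snapshotRenderer.snapshot(atTime: .zero,
                                      with: CGSize(width: 1000, height: 1000),
                                      antialiasingMode: .multisampling16X)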

How do you get the aspect fit size of a UIImage in a UIImageView?

When print(uiimage.size) is called, it only gives the width and the height of the original image before it was scaled up or down. Is there any way to get the dimensions of the aspect-fitted image?
Actually, there is a function in AVFoundation that can calculate this for you:
import AVFoundation
let fitRect = AVMakeRect(aspectRatio: image.size, insideRect: imageView.bounds)
Now fitRect.size is the size of the image fitted inside the imageView's bounds while maintaining the original aspect ratio.
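As a follow-up, the returned rect also carries the origin of the fitted image inside the view, which is handy if you need the gaps on either side. A small sketch, assuming an imageView whose content mode is .scaleAspectFit and whose image is already assigned:
import AVFoundation
import UIKit

let fitted = AVMakeRect(aspectRatio: imageView.image!.size,
                        insideRect: imageView.bounds)
print(fitted.size)    // size of the displayed image, in points
print(fitted.origin)  // offset of the image's top-left corner inside the view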
You're going to need to calculate the resulting image size in points yourself*.
*It turns out you don't. See Alladinian's answer. I'm going to leave this answer here to explain what the library function is doing.
Here's the math:
let imageAspectRatio = image.size.width / image.size.height
let viewAspectRatio = imageView.frame.width / imageView.frame.height

var fitWidth: CGFloat   // scaled width in points
var fitHeight: CGFloat  // scaled height in points
var offsetX: CGFloat    // horizontal gap between image and frame
var offsetY: CGFloat    // vertical gap between image and frame

if imageAspectRatio <= viewAspectRatio {
    // Image is narrower than the view, so with .scaleAspectFit it will touch
    // the top and bottom of the view, but not the sides
    fitHeight = imageView.frame.height
    fitWidth = fitHeight * imageAspectRatio
    offsetY = 0
    offsetX = (imageView.frame.width - fitWidth) / 2
} else {
    // Image is wider than the view, so with .scaleAspectFit it will touch
    // the sides of the view but not the top and bottom
    fitWidth = imageView.frame.width
    fitHeight = fitWidth / imageAspectRatio
    offsetX = 0
    offsetY = (imageView.frame.height - fitHeight) / 2
}
Explanation:
It helps to draw the pictures. Draw a rectangle that represents the imageView. Then draw a rectangle that is narrow but extends from the top of the image view to the bottom. That is the first case. Then draw one where the image is short but extends to the two sides of the image view. That is the second case. At that point, we know one of the dimensions. The other is just that value multiplied or divided by the image's aspect ratio, because we know that .scaleAspectFit keeps the image's original aspect ratio.
A note about frame vs. bounds: the frame is in the coordinate system of the view's superview, while the bounds are in the coordinate system of the view itself. I chose to use the frame in this example because the OP was interested in how far to move the imageView in its superview's coordinates. For a standard imageView that has not been rotated or scaled further, the width and height of the frame will match the width and height of the bounds. Things get interesting, though, when a rotation is applied to an imageView: the frame expands to contain the whole imageView, but the bounds remain the same.
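If you'd rather have the math above as a reusable helper, here is a minimal sketch (the function name and tuple shape are my own invention):
import UIKit

// Sketch: the same math as above, packaged as a helper. Returns the fitted
// size plus the gaps between the image and the view's frame.
func aspectFitLayout(imageSize: CGSize, frameSize: CGSize)
    -> (fit: CGSize, offset: CGPoint) {
    let imageAspect = imageSize.width / imageSize.height
    let viewAspect = frameSize.width / frameSize.height
    if imageAspect <= viewAspect {
        // Narrow image: touches top and bottom, gaps at the sides.
        let fit = CGSize(width: frameSize.height * imageAspect,
                         height: frameSize.height)
        return (fit, CGPoint(x: (frameSize.width - fit.width) / 2, y: 0))
    } else {
        // Wide image: touches the sides, gaps at top and bottom.
        let fit = CGSize(width: frameSize.width,
                         height: frameSize.width / imageAspect)
        return (fit, CGPoint(x: 0, y: (frameSize.height - fit.height) / 2))
    }
}
Call it as let (fit, offset) = aspectFitLayout(imageSize: image.size, frameSize: imageView.frame.size); fit should then match the size AVMakeRect returns for the same inputs.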

How can I use CGAffineTransform to skew a UIView in multiple directions, for example make one side larger than the other?

I know how to rotate a UIView and also scale it and translate x and y coordinates. But I don't know how to skew it as shown in the images below.
First image is a normal red rectangle with margins of size 20px from the sides. This is the unskewed view.
Here is the desired end result rectangle. I have drawn arrows to emphasize the direction of change. Its height is larger on the right side, and lower on the left side, and it is also moved slightly towards the right, so it looks like the rectangle is being pushed into the screen from the left side.
How can I achieve this transform?
You could create a rotation around the Y-axis and adjust m34 for perspective:
// Assuming that you have a CALayer. It can be your view's layer.
// In my example this was a 100x100 square layer
let boxLayer = CALayer()
var transform = CATransform3DIdentity
let angle = -CGFloat.pi / 8
transform.m34 = -1.0 / 500.0 // [500]: Smaller -> Closer to the 'camera', more distorted
transform = CATransform3DRotate(transform, angle, 0, 1, 0)
boxLayer.transform = transform
which results in this:
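To try this on an actual view rather than a bare CALayer, here's a minimal sketch, assuming a plain red view like the one in the question:
import UIKit

// Sketch: apply the same perspective transform to an ordinary UIView's layer.
let redView = UIView(frame: CGRect(x: 20, y: 100, width: 280, height: 100))
redView.backgroundColor = .red

var transform = CATransform3DIdentity
transform.m34 = -1.0 / 500.0  // smaller denominator = stronger perspective
transform = CATransform3DRotate(transform, -CGFloat.pi / 8, 0, 1, 0)
// With the negative angle, the right edge rotates toward the camera and
// appears taller, matching the effect described in the question.
redView.layer.transform = transform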

How to change an image's size? Image does not fill

I'm trying to make the image rounded. It is already 128x128 (width and height) and the corner radius is 64. I want to zoom in so the photo fills the full rounded frame before it is cropped.
How to zoom it?
My code:
profileImage.layer.cornerRadius = 64
profileImage.clipsToBounds = true
I want to make it like this:
You should set the UIImageView's content mode to .scaleAspectFill. That way, the image will fill up the whole container without distorting it.
profileImage.contentMode = .scaleAspectFill
First, ensure your image view's height and width are equal and its constraints are set properly. After that, change your code to the two lines below:
profileImage.layer.cornerRadius = profileImage.frame.size.width/2
profileImage.clipsToBounds = true
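Putting the two answers together, a minimal sketch (assuming profileImage is a 128x128 UIImageView, as in the question):
// Zoom the photo to fill the square frame, then crop to a circle.
profileImage.contentMode = .scaleAspectFill
profileImage.layer.cornerRadius = profileImage.frame.size.width / 2
profileImage.clipsToBounds = true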

Background image in GameScene.swift

What do I need to code in order to have an image (that is already in the assets.xcassets) displayed as the background of the GameScene.swift?
First of all, you could configure your scene with the scale mode .resizeFill, which modifies the SKScene's actual size to exactly match the SKView:
scene.scaleMode = .resizeFill
.resizeFill – The scene is not scaled. It is simply resized so that it fits the view. Because the scene is not scaled, the images will all remain at their original size and aspect ratio. The content will all remain relative to the scene origin (lower left).
By default, a scene’s origin is placed in the lower-left corner of the view. So, a scene initialized with a height of 1024 and a width of 768 has the origin (0,0) in the lower-left corner and the (1024,768) coordinate in the upper-right corner. The frame property holds (0,0)-(1024,768). The default value for the anchor point is CGPointZero (so you don't need to change it), which places it at the lower-left corner.
Finally, you can use the code below to add your background image (called, for example, bg.jpg):
// Set background
let txt = SKTexture(imageNamed: "bg.jpg")
let backgroundNode = SKSpriteNode(texture: txt, size: size)
backgroundNode.position = CGPoint(x: self.frame.midX, y: self.frame.midY)
self.addChild(backgroundNode)
This might not be the best way, but it's what I always do, and it works.
Assuming that you have an image that is exactly the same size as your scene, you can do this:
// Please declare bg as a class-level variable
bg = SKSpriteNode(imageNamed: "name of your texture")
// The two lines below are my preference only. I want the
// background's anchor point to be the bottom left of the screen
// because IMO it's easier to add other sprites as children of the background.
bg.anchorPoint = CGPoint.zero
bg.position = CGPoint(x: self.frame.width / -2, y: self.frame.height / -2)
self.addChild(bg)
Alternatively, just do this in an .sks file. It's much easier.
After that, add all your game sprites as children of bg instead of self because it is easier to manage.
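For reference, here's a minimal GameScene sketch putting the above together. The asset name "background" is a placeholder, and this assumes the scene's anchorPoint is the default lower-left (0, 0); if your .sks template uses a centered anchor point, offset the position by -frame.width / 2 and -frame.height / 2 as in the snippet above:
import SpriteKit

class GameScene: SKScene {
    // Class-level, so other sprites can later be added as children of it.
    var bg: SKSpriteNode!

    override func didMove(to view: SKView) {
        scaleMode = .resizeFill

        bg = SKSpriteNode(imageNamed: "background") // placeholder asset name
        bg.anchorPoint = .zero   // bottom-left of the sprite
        bg.position = .zero      // bottom-left of the scene
        bg.zPosition = -1        // keep the background behind everything else
        addChild(bg)
    }
}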