I have a custom .dae file and want to scale it down to fit in some areas. Unfortunately, I have found that if the scaling factor falls below 0.65, the node isn't rendered for some reason. I'm not sure what I am doing wrong. Here is the code I am currently using:
func logoPanel(height: CGFloat, width: CGFloat) -> SCNNode {
let nodeCollection = SCNNode()
var v1 = SCNVector3(x:0, y:0, z:0)
var v2 = SCNVector3(x:0, y:0, z:0)
let logoNode = collada2SCNNode(Double(height))
let padding = 0.3
logoNode.getBoundingBoxMin(&v1, max:&v2)
if Double(v2.y - v1.y) + padding > (Double(height) - (radius*2)) / 2 {
// scale logo node down
let scaleFactor = Float(((Double(height) - (radius*2)) / 2) / (Double(v2.y - v1.y) + padding))
logoNode.transform = SCNMatrix4MakeScale(scaleFactor, scaleFactor, scaleFactor)
logoNode.position = SCNVector3Make(0, Float((-height/2.0) + 0.1), 0)
}
nodeCollection.addChildNode(logoNode)
return nodeCollection
}
I am trying to figure out how to create a plane node in SceneKit that takes up exactly half of the screen.
So I found this routine to project values, which seems correct:
extension CGPoint {
func scnVector3Value(view: SCNView, depth: Float) -> SCNVector3 {
let projectedOrigin = view.projectPoint(SCNVector3(0, 0, depth))
return view.unprojectPoint(SCNVector3(Float(x), Float(y), projectedOrigin.z))
}
}
And I fed these values into it...
let native = UIScreen.main.bounds
let maxMax = CGPoint(x: native.width, y: native.height * 0.5)
let newPosition1 = maxMax.scnVector3Value(view: view, depth: Float(0))
print("newPosition \(newPosition1)")
let minMin = CGPoint(x: 0, y: 0)
let newPosition2 = minMin.scnVector3Value(view: view, depth: Float(0))
print("newPosition \(newPosition2)")
let minMax = CGPoint(x: 0, y: native.height * 0.5)
let newPosition3 = minMax.scnVector3Value(view: view, depth: Float(0))
print("newPosition \(newPosition3)")
let maxMin = CGPoint(x: native.width, y: 0)
let newPosition4 = maxMin.scnVector3Value(view: view, depth: Float(0))
print("newPosition \(newPosition4)")
// approximations that look almost correct, but they are not...
let width = (maxMax.x - minMin.x) / 100 * 2
let height = (maxMax.y - minMin.y) / 100 * 2
let plainGeo = SCNPlane(width: width, height: height)
let planeNode = SCNNode(geometry: plainGeo)
planeNode.geometry?.firstMaterial?.diffuse.contents = UIColor.red
view.scene?.rootNode.addChildNode(planeNode)
But it isn't right. What am I doing wrong here?
Edited with link to repository.
I am using SwiftUI, so I don't have access to the 'cropping view'. I am using gestures instead of a ScrollView to capture a zoom level and offset (x and y) of an image. I am unable to return an image which crops properly based on these factors.
It seems as if SwiftUI itself might be a factor; perhaps the offset of the image within the view needs to be accounted for when determining offsets and zoom levels?
I have the image and I have the following values from the gestures on the view to represent scale and x/y position:
@State var scale: CGFloat = 1.0
@State var currentPosition: CGSize = CGSize.zero
The current attempt, which gets closest, is the following function:
func prepareImage() {
let imageToManipulate = UIImage(named: "landscape")
let currentPositionWidth = self.currentPosition.width
let currentPositionHeight = self.currentPosition.height
let zoomScale = self.scale
let imsize = imageToManipulate!.size
var scale : CGFloat = self.frameSize.width / imsize.width
if imsize.height * scale < self.frameSize.height {
scale = self.frameSize.height / imsize.height
}
let croppedImsize = CGSize(width: (self.frameSize.width/scale) / zoomScale, height: (self.frameSize.height/scale) / zoomScale)
let xOffset = (( imsize.width - croppedImsize.width ) / 2.0) - (currentPositionWidth / zoomScale)
let yOffset = (( imsize.height - croppedImsize.height) / 2.0) - (currentPositionHeight / zoomScale)
let croppedImrect: CGRect = CGRect(x: xOffset, y: yOffset, width: croppedImsize.width, height: croppedImsize.height)
let r = UIGraphicsImageRenderer(size:croppedImsize)
let croppedIm = r.image { _ in
imageToManipulate!.draw(at: CGPoint(x:-croppedImrect.origin.x, y:-croppedImrect.origin.y))
}
self.croppedImage = croppedIm
self.photoIsFinished = true
}
However, as you will see in the repository, when combining both zoom/scale and x/y offsets, the result is always 'off' a bit.
Also, when you try to crop to a square image, the amount it is 'off' can be quite significant.
Thanks to Asperi's answer, I have implemented a lightweight SwiftUI library to crop images. Here is the library and demo.
The magic is below:
public var body: some View {
GeometryReader { proxy in
// ...
Button(action: {
// how to crop the image according to rectangle area
if self.tempResult == nil {
self.cropTheImageWithImageViewSize(proxy.size)
}
self.resultImage = self.tempResult
}) {
Text("Crop Image")
.padding(.all, 10)
.background(Color.blue)
.foregroundColor(.white)
.shadow(color: .gray, radius: 1)
.padding(.top, 50)
}
}
}
func cropTheImageWithImageViewSize(_ size: CGSize) {
let imsize = inputImage.size
let scale = max(inputImage.size.width / size.width,
inputImage.size.height / size.height)
let zoomScale = self.scale
let currentPositionWidth = self.dragAmount.width * scale
let currentPositionHeight = self.dragAmount.height * scale
let croppedImsize = CGSize(width: (self.cropSize.width * scale) / zoomScale, height: (self.cropSize.height * scale) / zoomScale)
let xOffset = (( imsize.width - croppedImsize.width) / 2.0) - (currentPositionWidth / zoomScale)
let yOffset = (( imsize.height - croppedImsize.height) / 2.0) - (currentPositionHeight / zoomScale)
let croppedImrect: CGRect = CGRect(x: xOffset, y: yOffset, width: croppedImsize.width, height: croppedImsize.height)
if let cropped = inputImage.cgImage?.cropping(to: croppedImrect) {
// the UIImage here can be written to Data as PNG or JPEG
let croppedIm = UIImage(cgImage: cropped)
tempResult = croppedIm
result = Image(uiImage: croppedIm)
}
}
The answer was provided by juanj via the GitHub repository:
let imageToManipulate = UIImage(named: "landscape")
let zoomScale = self.scale
let imsize = imageToManipulate!.size
var scale : CGFloat = self.frameSize.width / imsize.width
if imsize.height * scale < self.frameSize.height {
scale = self.frameSize.height / imsize.height
}
let currentPositionWidth = self.currentPosition.width / scale
let currentPositionHeight = self.currentPosition.height / scale
let croppedImsize = CGSize(width: (self.frameSize.width/scale) / zoomScale, height: (self.frameSize.height/scale) / zoomScale)
let xOffset = (( imsize.width - croppedImsize.width ) / 2.0) - (currentPositionWidth / zoomScale)
let yOffset = (( imsize.height - croppedImsize.height) / 2.0) - (currentPositionHeight / zoomScale)
let croppedImrect: CGRect = CGRect(x: xOffset, y: yOffset, width: croppedImsize.width, height: croppedImsize.height)
let r = UIGraphicsImageRenderer(size:croppedImsize)
let croppedIm = r.image { _ in
imageToManipulate!.draw(at: CGPoint(x:-croppedImrect.origin.x, y:-croppedImrect.origin.y))
}
self.croppedImage = croppedIm
self.photoIsFinished = true
The full code, demonstrating how to allow a user to zoom and pan an image within a frame in a SwiftUI view and then crop the result to a new image, can be viewed in the repository.
I am trying to display an image as the content of a CALayer, slightly zoomed in by changing its bounds to a bigger size (this is so that I can pan over it later).
For some reason, however, setting the bounds does not change them or trigger an animation to do so.
This is the code I use to change the bounds:
self.imageLayer.bounds = CGRect(x: 0, y: 0, width: 10, height: 10)
I have a function to compute the CGRect, but this dummy one leads to exactly the same result: the size does not change.
I have also determined that, while I can't see the size change, if I check the bounds of the layer right after setting them, they correctly have the value I set.
The following code is executed after setting the bounds. I couldn't find anything in it that changes them back.
self.imageLayer.add(self.generatePanAnimation(), forKey: "pan")
func generatePanAnimation() -> CAAnimation {
let positionA = self.generateZoomedPosition()
let positionB = self.generateZoomedPosition()
let panAnimation = CABasicAnimation(keyPath: "position")
if self.direction == .AtoB {
panAnimation.fromValue = positionA
panAnimation.toValue = positionB
} else {
panAnimation.fromValue = positionB
panAnimation.toValue = positionA
}
panAnimation.duration = self.panAndZoomDuration
self.panAnimation = panAnimation
return panAnimation
}
func generateZoomedPosition() -> CGPoint {
let maxRight = self.zoomedImageLayerBounds.width / 2
let maxLeft = self.bounds.width - (self.zoomedImageLayerBounds.width / 2)
let maxUp = self.zoomedImageLayerBounds.height / 2
let maxDown = self.bounds.height - (self.zoomedImageLayerBounds.height / 2)
let horizontalFactor = CGFloat(arc4random()) / CGFloat(UINT32_MAX)
let verticalFactor = CGFloat(arc4random()) / CGFloat(UINT32_MAX)
let randomX = maxLeft + horizontalFactor * (maxRight - maxLeft)
let randomY = maxDown + verticalFactor * (maxUp - maxDown)
return CGPoint(x: randomX, y: randomY)
}
I even tried setting the bounds as shown below, but it didn't help.
CATransaction.begin()
CATransaction.setValue(true, forKey: kCATransactionDisableActions)
self.imageLayer.bounds = CGRect(x: 0, y: 0, width: 10, height: 10)
CATransaction.commit()
I really hope someone has an idea. Thanks a lot!
The way to change the apparent drawing size of a layer is not to change its bounds but to change its transform. To make the layer look larger, including its drawing, apply a scale transform.
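A minimal sketch of that approach, assuming a hypothetical zoom factor of 1.2 applied to the imageLayer from the question:
let zoom: CGFloat = 1.2 // hypothetical zoom factor
CATransaction.begin()
CATransaction.setDisableActions(true) // suppress the implicit animation, as in the CATransaction attempt above
self.imageLayer.transform = CATransform3DMakeScale(zoom, zoom, 1.0)
CATransaction.commit()
The layer's bounds stay untouched; only its rendered size changes.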
To get an effect similar to Snapchat's HUD movement, I have created a movement of the HUD elements based on UIScrollView's contentOffset. Edit: Link to the GitHub project.
func scrollViewDidScroll(_ scrollView: UIScrollView) {
self.view.layoutIfNeeded()
let factor = scrollView.contentOffset.y / self.view.frame.height
self.transformElements(self.playButton,
0.45 + 0.55 * factor, // 0.45 = minimum desired scale; 0.45 + 0.55 = 1.0 = original scale
Roots.screenSize.height - 280, // 280 == original Y
Roots.screenSize.height - 84, // 84 == minimum desired Y
factor)
}
func transformElements(_ element: UIView?,
_ scale: CGFloat,
_ originY: CGFloat,
_ desiredY: CGFloat,
_ factor: CGFloat) {
if let e = element {
e.transform = CGAffineTransform(scaleX: scale, y: scale) // this line causes the lag
let resultY = desiredY + (originY - desiredY) * factor
var frame = e.frame
frame.origin.y = resultY
e.frame = frame
}
}
With this code implemented, the scroll as well as the transition appeared "laggy"/not smooth (on physical iPhone 6S+ and 7+ devices).
Deleting the following line: e.transform = CGAffineTransform(scaleX: scale, y: scale) erased the issue. The scroll as well as the Y-movement of the UIView object is smooth again.
What's the best approach to transform the scale of an object?
There are no layout constraints.
func setupPlayButton() {
let rect = CGRect(x: Roots.screenSize.width / 2 - 60,
y: Roots.screenSize.height - 280,
width: 120,
height: 120)
self.playButton = UIButton(frame: rect)
self.playButton.setImage(UIImage(named: "playBtn")?.withRenderingMode(.alwaysTemplate), for: .normal)
self.playButton.tintColor = #colorLiteral(red: 1, green: 1, blue: 1, alpha: 1)
self.view.addSubview(playButton)
}
This is happening because you are applying both a transform and a frame; UIKit documents that a view's frame is undefined when its transform is not the identity, so the two should not be mixed. It will be smoother if you apply only the transform. Update your transformElements function as below:
func transformElements(_ element: UIView?,
_ scale: CGFloat,
_ originY: CGFloat,
_ desiredY: CGFloat,
_ factor: CGFloat) {
if let e = element {
e.transform = CGAffineTransform(scaleX: scale, y: scale).translatedBy(x: 0, y: desiredY * (1 - factor))
}
}
You can make these kinds of animations smoother by creating an animation, setting the speed of the layer to 0, and then changing the timeOffset of the layer.
First, add the animation in the setupPlayButton method:
let animation = CABasicAnimation.init(keyPath: "transform.scale")
animation.fromValue = 1.0
animation.toValue = 0.45
animation.duration = 1.0
//Set the speed of the layer to 0 so it doesn't animate until we tell it to
self.playButton.layer.speed = 0.0;
self.playButton.layer.add(animation, forKey: "transform");
Next, in scrollViewDidScroll, change the timeOffset of the layer and move the center of the button:
if let btn = self.playButton{
var factor:CGFloat = 1.0
if isVertically {
factor = scrollView.contentOffset.y / self.view.frame.height
} else {
factor = scrollView.contentOffset.x / Roots.screenSize.width
var transformedFractionalPage: CGFloat = 0
if factor > 1 {
transformedFractionalPage = 2 - factor
} else {
transformedFractionalPage = factor
}
factor = transformedFractionalPage;
}
//This will change the size
let timeOffset = CFTimeInterval(1-factor)
btn.layer.timeOffset = timeOffset
//now change the position. Only use center, not frame, so you don't mess up the animation. These numbers aren't right; I don't know why
let desiredY = Roots.screenSize.height - (280-60);
let originY = Roots.screenSize.height - (84-60);
let resultY = desiredY + (originY - desiredY) * (1-factor)
btn.center = CGPoint.init(x: btn.center.x, y: resultY);
}
I couldn't quite get the position of the button correct, so something is wrong with my math there, but I trust you can fix it.
If you want more info about this technique, see here: http://ronnqvi.st/controlling-animation-timing/
Is there an easy way to rotate an NSImage in a macOS app? Or just to set the orientation from portrait to landscape using Swift?
I am playing around with CATransform3DMakeAffineTransform, but I can't get it to work:
CATransform3DMakeAffineTransform(CGAffineTransformMakeRotation(CGFloat(M_PI) * 90/180))
It's my first time working with transformations, so please be patient with me :) Maybe I'm working from the wrong approach altogether...
Can anybody help me, please?
Thanks!
public extension NSImage {
public func imageRotatedByDegreess(degrees:CGFloat) -> NSImage {
var imageBounds = NSZeroRect ; imageBounds.size = self.size
let pathBounds = NSBezierPath(rect: imageBounds)
var transform = NSAffineTransform()
transform.rotateByDegrees(degrees)
pathBounds.transformUsingAffineTransform(transform)
let rotatedBounds:NSRect = NSMakeRect(NSZeroPoint.x, NSZeroPoint.y, pathBounds.bounds.size.width, pathBounds.bounds.size.height )
let rotatedImage = NSImage(size: rotatedBounds.size)
//Center the image within the rotated bounds
imageBounds.origin.x = NSMidX(rotatedBounds) - (NSWidth(imageBounds) / 2)
imageBounds.origin.y = NSMidY(rotatedBounds) - (NSHeight(imageBounds) / 2)
// Start a new transform
transform = NSAffineTransform()
// Move coordinate system to the center (since we want to rotate around the center)
transform.translateXBy(+(NSWidth(rotatedBounds) / 2 ), yBy: +(NSHeight(rotatedBounds) / 2))
transform.rotateByDegrees(degrees)
// Move the coordinate system back to normal
transform.translateXBy(-(NSWidth(rotatedBounds) / 2 ), yBy: -(NSHeight(rotatedBounds) / 2))
// Draw the original image, rotated, into the new image
rotatedImage.lockFocus()
transform.concat()
self.drawInRect(imageBounds, fromRect: NSZeroRect, operation: NSCompositingOperation.CompositeCopy, fraction: 1.0)
rotatedImage.unlockFocus()
return rotatedImage
}
}
// Usage (use only the values 90, 180, or 270):
let image = NSImage(named:"test.png")!.imageRotatedByDegreess(CGFloat(90))
Updated for Swift 3:
public extension NSImage {
public func imageRotatedByDegreess(degrees:CGFloat) -> NSImage {
var imageBounds = NSZeroRect ; imageBounds.size = self.size
let pathBounds = NSBezierPath(rect: imageBounds)
var transform = NSAffineTransform()
transform.rotate(byDegrees: degrees)
pathBounds.transform(using: transform as AffineTransform)
let rotatedBounds:NSRect = NSMakeRect(NSZeroPoint.x, NSZeroPoint.y, pathBounds.bounds.size.width, pathBounds.bounds.size.height )
let rotatedImage = NSImage(size: rotatedBounds.size)
//Center the image within the rotated bounds
imageBounds.origin.x = NSMidX(rotatedBounds) - (NSWidth(imageBounds) / 2)
imageBounds.origin.y = NSMidY(rotatedBounds) - (NSHeight(imageBounds) / 2)
// Start a new transform
transform = NSAffineTransform()
// Move coordinate system to the center (since we want to rotate around the center)
transform.translateX(by: +(NSWidth(rotatedBounds) / 2 ), yBy: +(NSHeight(rotatedBounds) / 2))
transform.rotate(byDegrees: degrees)
// Move the coordinate system back to normal
transform.translateX(by: -(NSWidth(rotatedBounds) / 2 ), yBy: -(NSHeight(rotatedBounds) / 2))
// Draw the original image, rotated, into the new image
rotatedImage.lockFocus()
transform.concat()
self.draw(in: imageBounds, from: NSZeroRect, operation: NSCompositingOperation.copy, fraction: 1.0)
rotatedImage.unlockFocus()
return rotatedImage
}
}
class SomeClass: NSViewController {
var image = NSImage(named:"test.png")!.imageRotatedByDegreess(degrees: CGFloat(90)) // use only the values 90, 180, or 270
}
Thanks for this solution; however, it did not work perfectly for me.
As you may have noticed, pathBounds is not used anywhere. In my opinion it has to be used like so:
let rotatedBounds:NSRect = NSMakeRect(NSZeroPoint.x, NSZeroPoint.y , pathBounds.bounds.size.width, pathBounds.bounds.size.height )
Otherwise the image will be rotated but cropped to square bounds.
Letting IKImageView do the heavy lifting:
import Quartz
extension NSImage {
func imageRotated(by degrees: CGFloat) -> NSImage {
let imageRotator = IKImageView()
var imageRect = CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height)
let cgImage = self.cgImage(forProposedRect: &imageRect, context: nil, hints: nil)
imageRotator.setImage(cgImage, imageProperties: [:])
imageRotator.rotationAngle = CGFloat(-(degrees / 180) * CGFloat(M_PI))
let rotatedCGImage = imageRotator.image().takeUnretainedValue()
return NSImage(cgImage: rotatedCGImage, size: NSSize.zero)
}
}
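A quick usage sketch (assuming, as in the other answers, an image named "test.png" in the bundle):
let rotated = NSImage(named: "test.png")?.imageRotated(by: 90)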
Here's a simple Swift (4+) solution to drawing an image that is rotated around the center:
extension NSImage {
/// Rotates the image by the specified degrees around the center.
/// Note that if the angle is not a multiple of 90°, parts of the rotated image may be drawn outside the image bounds.
func rotated(by angle: CGFloat) -> NSImage {
let img = NSImage(size: self.size, flipped: false, drawingHandler: { (rect) -> Bool in
let (width, height) = (rect.size.width, rect.size.height)
let transform = NSAffineTransform()
transform.translateX(by: width / 2, yBy: height / 2)
transform.rotate(byDegrees: angle)
transform.translateX(by: -width / 2, yBy: -height / 2)
transform.concat()
self.draw(in: rect)
return true
})
img.isTemplate = self.isTemplate // preserve the underlying image's template setting
return img
}
}
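A hypothetical call, assuming a "photo" image in the bundle; as the doc comment warns, a 45° rotation will clip the corners because the canvas keeps the original size (the Swift 5 answer below enlarges the canvas to avoid this):
let tilted = NSImage(named: "photo")?.rotated(by: 45)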
This one also works for non-square images (Swift 5).
extension NSImage {
func rotated(by degrees : CGFloat) -> NSImage {
var imageBounds = NSRect(x: 0, y: 0, width: size.width, height: size.height)
let rotatedSize = AffineTransform(rotationByDegrees: degrees).transform(size)
let newSize = CGSize(width: abs(rotatedSize.width), height: abs(rotatedSize.height))
let rotatedImage = NSImage(size: newSize)
imageBounds.origin = CGPoint(x: newSize.width / 2 - imageBounds.width / 2, y: newSize.height / 2 - imageBounds.height / 2)
let otherTransform = NSAffineTransform()
otherTransform.translateX(by: newSize.width / 2, yBy: newSize.height / 2)
otherTransform.rotate(byDegrees: degrees)
otherTransform.translateX(by: -newSize.width / 2, yBy: -newSize.height / 2)
rotatedImage.lockFocus()
otherTransform.concat()
draw(in: imageBounds, from: CGRect.zero, operation: NSCompositingOperation.copy, fraction: 1.0)
rotatedImage.unlockFocus()
return rotatedImage
}
}
Building on @FrankByte.com's code, this version should extend correctly in both x and y for any image and any rotation.
extension NSImage {
func rotated(by degrees: CGFloat) -> NSImage {
let sinDegrees = abs(sin(degrees * CGFloat.pi / 180.0))
let cosDegrees = abs(cos(degrees * CGFloat.pi / 180.0))
let newSize = CGSize(width: size.height * sinDegrees + size.width * cosDegrees,
height: size.width * sinDegrees + size.height * cosDegrees)
let imageBounds = NSRect(x: (newSize.width - size.width) / 2,
y: (newSize.height - size.height) / 2,
width: size.width, height: size.height)
let otherTransform = NSAffineTransform()
otherTransform.translateX(by: newSize.width / 2, yBy: newSize.height / 2)
otherTransform.rotate(byDegrees: degrees)
otherTransform.translateX(by: -newSize.width / 2, yBy: -newSize.height / 2)
let rotatedImage = NSImage(size: newSize)
rotatedImage.lockFocus()
otherTransform.concat()
draw(in: imageBounds, from: CGRect.zero, operation: NSCompositingOperation.copy, fraction: 1.0)
rotatedImage.unlockFocus()
return rotatedImage
}
}