Programmatically place partial image over another in UIView using Swift 3

I just started working with Swift last week, and I need a suggestion on whether the following is the right way of laying a partial image on top of another image.
I have a UIView in which I create three images programmatically: a left arrow image, a middle mobile image, and a right arrow image, as shown below. Can I place the arrow images so they overlap the mobile image by 50%?
I have tried:
func setupUI() {
    guard let mobileImage = UIImage(named: "mobile"),
          let arrowImage = UIImage(named: "arrow") else { return }

    // Middle mobile image, shifted right by half an arrow width.
    middleView = UIImageView(frame: CGRect(x: arrowImage.size.width / 2, y: 0,
                                           width: mobileImage.size.width,
                                           height: mobileImage.size.height))
    middleView.image = mobileImage
    middleView.layer.borderWidth = 1.0
    middleView.layer.cornerRadius = 5.0
    self.addSubview(middleView)

    // Both arrows sit on the bottom edge of the mobile image.
    let yForArrow = mobileImage.size.height - arrowImage.size.height

    // Left arrow straddles the mobile image's left edge.
    leftArrow = UIImageView(frame: CGRect(x: 0, y: yForArrow,
                                          width: arrowImage.size.width,
                                          height: arrowImage.size.height))
    leftArrow.image = arrowImage
    self.addSubview(leftArrow)

    // Right arrow begins half an arrow width before the mobile image's right edge.
    let rightArrowX = mobileImage.size.width
    rightView = UIImageView(frame: CGRect(x: rightArrowX, y: yForArrow,
                                          width: arrowImage.size.width,
                                          height: arrowImage.size.height))
    rightView.image = arrowImage
    self.addSubview(rightView)
}
*At the start it was not working because I had forgotten to call setupUI() in the init method, as shown in the answer below.
Is setting the frame the correct way of doing this, or should I be using constraints?
To me it looks like a bad approach, since I am hard-coding the numbers in CGRect.
*This image was created in MS Paint to show what it should look like on an iPhone.

I found the problem: I had missed calling setupUI() in the init method.
setupUI() programmatically adds the images to the UIView; since it was never called, no image appeared in the iPhone simulator.
override init(frame: CGRect) {
    super.init(frame: frame)
    setupUI() // Adds the images to the UIView.
}

Related

I want to give a thumb image to NSSlider in Swift

I'm working on an editor app where I need to add an image to the slider thumb. Since I'm doing macOS development, I unfortunately can't do this directly; NSSlider has no equivalent of what UISlider offers on iOS.
This is what I have, and I want to insert the thumb image as shown below.
I found a solution; I hope this will help you all.
I made an NSSliderCell subclass and overrode the drawKnob function.
import Cocoa

class CustomSliderCell: NSSliderCell {

    // Image drawn in place of the default knob; settable from Interface Builder.
    @IBInspectable var knobImage: NSImage? {
        didSet {
            // Redraw the slider whenever the image changes.
            controlView?.needsDisplay = true
        }
    }

    override func drawKnob(_ knobRect: NSRect) {
        guard let knobImage = knobImage else {
            // No custom image set: fall back to the default knob.
            super.drawKnob(knobRect)
            return
        }
        // Draw the image slightly shorter than the default knob rect.
        var rect = knobRect
        rect.size = NSSize(width: rect.width, height: rect.height - 4)
        knobImage.draw(in: rect)
    }
}
Just assign this class to your NSSlider's cell in the storyboard and set the image there.
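If you'd rather wire it up in code than in the storyboard, a minimal sketch (the `slider` outlet and the "thumb" asset name are assumptions for illustration):
// Assign the custom cell programmatically; `slider` is an assumed
// @IBOutlet and "thumb" a placeholder image asset.
let customCell = CustomSliderCell()
customCell.knobImage = NSImage(named: "thumb")
slider.cell = customCell
slider.needsDisplay = true
Note that swapping the cell discards anything configured on the old cell (min/max values and so on), so do it early, before configuring the slider.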
The result is shown in the screenshot attached to the original post: the slider draws the custom image as its knob.

Image resize in Swift only works properly in some iPhone models [duplicate]

This question already has an answer here: images render larger when loaded programmatically on iPhone X, Xs? (1 answer). Closed 1 year ago.
I have an ImageView on my Storyboard layout, and I put some images over this ImageView.
To maintain the proportions of the image, I created code to calculate the image scale.
static func getImageScale(_ myImage: UIImageView) -> (width: CGFloat, height: CGFloat) {
    let imageViewHeight = myImage.bounds.height
    let imageViewWidth = myImage.bounds.width
    let imageSize = myImage.image!.size
    // Ratio of the on-screen view to the underlying image, per axis.
    let myScaledImageHeight = imageViewHeight / imageSize.height
    let myScaledImageWidth = imageViewWidth / imageSize.width
    return (width: myScaledImageWidth, height: myScaledImageHeight)
}
This is how I use the code above:
ImageMarcasHelper.addScaledImageToScreenWithoutMovement(imageStringName: nome_imagem, scaledImageWidth: percentImageScale.width, scaledImageHeigth: percentImageScale.height, view: view)
Finally, I call this:
static func addScaledImageToScreenWithoutMovement(imageStringName imageNameString: String,
                                                  scaledImageWidth: CGFloat,
                                                  scaledImageHeigth: CGFloat,
                                                  view: UIView) {
    var xFinal = 0
    var scaledImageWidthMultiplied: CGFloat = 0.0
    let vc: UIViewController = view.parentViewController!
    let vc_name = type(of: vc)
    let image = UIImage(named: imageNameString)
    print(image!)
    let imageView = UIImageView(image: image!)
    imageView.isAccessibilityElement = true
    imageView.restorationIdentifier = imageNameString
    if vc_name == ResenhaMarcasCabecaController.classForCoder() ||
       vc_name == ResenhaMarcasMembrosPosterioresController.classForCoder() {
        print("ResenhaMarcasCabecaController view used")
        xFinal = Int((image?.size.width)! / 1.9)
        scaledImageWidthMultiplied = (image?.size.width)! * 1
    } else {
        // Treats every other view (ResenhaMarcasFocinhoController, ResenhaMarcasPescocoController,
        // ResenhaMarcasMembrosAnterioresController) the same way.
        print("viewcontroller genenrica usada")
        xFinal = 0
        scaledImageWidthMultiplied = (image?.size.width)! * scaledImageWidth
    }
    imageView.frame = CGRect(
        x: xFinal,
        y: 0,
        width: Int(scaledImageWidthMultiplied),
        height: Int((image?.size.height)! * scaledImageHeigth)
    )
    view.addSubview(imageView)
}
On some iPhone models the image resize works perfectly, but on other models it is not calculated correctly.
Check the images below from an iPhone 8 and an iPhone 8 Plus.
The red image on the left side is centered, but on the right side the red image is NOT centered.
How can I fix that? Is there other code I can use, or do I need to adapt something in my code?
Or, as another possible solution, is there any way to detect the screen type, size, or dimensions? The same problem happens with the iPhone 11 Max and iPhone Max Pro.
The red image is centered on the iPhone 11 Max but is NOT centered on the iPhone Max Pro.
--- EDIT ---
@IBOutlet weak var imagemPrincipalCabeca: UIImageView!
I have an IBOutlet that contains the ImageView, created in the Storyboard with Auto Layout, and I use the image inside this ImageView to get the scale to apply to the other images.
This is the code that I use to get and apply the scale from the IBOutlet that is assigned to the ImageView
let percentImageScale = ImageMarcasHelper.getImageScale(imagemPrincipalCabeca)
ImageMarcasHelper.addScaledImageToScreenWithoutMovement(
    imageStringName: nome_imagem,
    scaledImageWidth: percentImageScale.width,
    scaledImageHeigth: percentImageScale.height,
    view: view)
I found out what the problem was.
All the calculations were done inside the viewDidLoad method. There the calculations come out wrong, because the view doesn't yet know the correct size of its subviews (the container view).
I changed all the calculations to be made inside the viewWillAppear method instead. This way I was able to get the correct width and height for the subview.
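A minimal sketch of that change, reusing the names from the question (viewDidLayoutSubviews is an alternative anchor point, since layout is guaranteed to have completed by then):
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    // By this point the container view has been laid out, so the
    // image view reports its real on-screen bounds and the scale
    // calculation comes out correct.
    let percentImageScale = ImageMarcasHelper.getImageScale(imagemPrincipalCabeca)
    ImageMarcasHelper.addScaledImageToScreenWithoutMovement(
        imageStringName: nome_imagem,
        scaledImageWidth: percentImageScale.width,
        scaledImageHeigth: percentImageScale.height,
        view: view)
}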

UIImageView: applying multiple UIViewContentMode properties to one UIImageView?

How can I add UIViewContentMode.center to this UIImageView while also keeping .scaleAspectFill?
func insertImage() {
    let theImageView = UIImageView(frame: CGRect(x: 0, y: 0, width: view.frame.width, height: 300))
    theImageView.image = #imageLiteral(resourceName: "coolimage")
    theImageView.contentMode = UIViewContentMode.scaleAspectFill
    view.addSubview(theImageView)
}
Furthermore, can somebody explain to me what the "view" exactly is in the last line "view.addSubview(theImageView)"? Is it the mysterious "view hierarchy" that I read about? Why can't I simply initialize the UIImageView? Why must it be bound to something called "view" that I haven't explicitly created? There is only a UIViewController and a UIImageView so far.
As far as I know, you can't set the content mode to both aspect fit and center. However, center will do what aspect fit does, provided the image is smaller than the imageView. If not, use aspect fit. The following code ought to let you differentiate between the two:
if let image = UIImage(named: "coolimage"),
   theImageView.bounds.size.width > image.size.width,
   theImageView.bounds.size.height > image.size.height {
    // The image fits inside the view at natural size, so .center shows it unscaled.
    theImageView.contentMode = .center
} else {
    // The image is larger than the view, so scale it down, preserving aspect ratio.
    theImageView.contentMode = .scaleAspectFit
}
As for the second part of your question, I'll refer you to this thread, which has a fairly comprehensive explanation of UIViewController vs UIView. Hope that helps.
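On the second part of the question: `view` is the view controller's root view, the top of that "view hierarchy" you read about; a UIImageView draws nothing until it is attached to it. A minimal sketch (ExampleViewController is a made-up name, reusing the question's "coolimage" asset):
import UIKit

class ExampleViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // `view` is the controller's root view, created automatically.
        // A UIImageView only appears on screen once it is attached to
        // this view hierarchy via addSubview(_:).
        let theImageView = UIImageView(frame: CGRect(x: 0, y: 0,
                                                     width: view.frame.width,
                                                     height: 300))
        theImageView.image = UIImage(named: "coolimage")
        theImageView.contentMode = .scaleAspectFill
        theImageView.clipsToBounds = true
        view.addSubview(theImageView)
    }
}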

I'm having some trouble using x and y coordinates from touchesBegan as the center key in a CI filter

I'm trying to set things up so that the user taps a location in an image view and the X,Y of the tap becomes the center point (kCIInputCenterKey) of the image filter currently in use.
These are my global variables:
var x: CGFloat = 0
var y: CGFloat = 0
var imgChecker = 0
This is my touchesBegan function, which checks whether the user is touching inside the image view; if not, it sets the filter center key to the center of the image view:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let position = touch.location(in: self.imageView)
        if touch.view == imageView {
            print("touchesBegan | This is an ImageView")
            x = position.x * 4
            y = position.y * 4
            imgChecker = 1
        } else {
            print("touchesBegan | This is not an ImageView")
            x = 0
            y = 0
            imgChecker = 0
        }
        print("x: \(x)")
        print("y: \(y)")
    }
}
As you can see, I have the checker there to make the filter center appear in the middle of the image if the tap was not inside the image view. I'm also printing the tapped coordinates to Xcode's console, and they appear without issue.
This is the part where I apply my filter:
currentFilter = CIFilter(name: "CIBumpDistortion")
currentFilter.setValue(200, forKey: kCIInputRadiusKey)
currentFilter.setValue(1, forKey: kCIInputScaleKey)
if imgChecker == 1 {
    self.currentFilter.setValue(CIVector(x: self.x, y: self.y), forKey: kCIInputCenterKey)
} else {
    self.currentFilter.setValue(CIVector(x: currentImage.size.width / 2,
                                         y: currentImage.size.height / 2),
                                forKey: kCIInputCenterKey)
}
x = 0
y = 0
let beginImage = CIImage(image: currentImage)
currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
let cgimg = context.createCGImage(currentFilter.outputImage!, from: currentFilter.outputImage!.extent)
currentImage = UIImage(cgImage: cgimg!)
self.imageView.image = currentImage
This is the CGRect I'm using; ignore the "frame" in there, it's just an image view in front of the first one that allows me to save a "frame" over the current filtered image:
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width,
                                                        height: imageView.bounds.size.height))
    img = renderer.image { ctx in
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width,
                                 height: imageView.bounds.size.height))
        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width,
                                height: imageView.bounds.size.height))
    }
}
When I do set the x,y by tapping inside the image view, the center of the filter keeps appearing in the lower left-hand side of the image view regardless of where I tapped. If I keep tapping around, the center does seem to move around a bit, but it's nowhere near where I'm actually tapping.
Any insight would be greatly appreciated. Thank you.
Keep two things in mind.
First (and I think you probably know this), the CI origin (0,0) is at the lower left, not the top left.
Second (and I think this is the issue), UIKit coordinates (meaning UIImage and potentially CGPoint coordinates) are not the same as CIVector coordinates. You need to take the UIKit touchesBegan coordinate and turn it into the CIImage.extent coordinate.
EDIT:
All coordinates that follow are X then Y, and Width then Height.
After posting my comment I thought I'd give an example of what I mean by scaling. Let's say you have a UIImageView sized at 250x250, using a content mode of AspectFit, displaying an image whose size is 1000x500.
Now, let's say the touchesBegan point is CGPoint(200,100). (NOTE: if your UIImageView is part of a larger superview, it could be something more like 250,400. I'm working with the point within the UIImageView.)
Scaling down the image size (remember, AspectFit) means the image is actually centered vertically (landscape appearing) within the UIImageView at CGRect(0, 62.5, 250, 125). So first off, good! The touch point not only began within the image view, it also began within the image. (You'll probably want to consider the not-so-edge case of touches beginning outside of the image.)
Dividing by 4 gives you the scaled-down image view coordinates, and as you'd expect, multiplying up will give you the needed vector coordinates. So a touchesBegan CGPoint(200,100) turns into a CIVector(800,400).
I have some code written, not much in the way of comments, done in Swift 2 (I think) and very poorly written, that is part of a subclass (probably should have been an extension) of UIImageView that computes all this. Using the UIImageView's bounds and its image's size is what you need. Keep in mind: images in AspectFit can also be scaled up!
One last note on CIImage: extent. Many times it's a UIImage's size. But many masks and generated outputs may have an infinite extent.
SECOND EDIT:
I made a stupid mistake in my scaling example. Remember, the CIImage origin is at the bottom left, not the upper left. So in my example, CGPoint(200,100), scaled up to CGPoint(800,400), would become CIVector(800,100) once the Y coordinate is flipped.
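Putting the two steps together (scale up, then flip Y), a minimal sketch using the example's numbers; a fuller version would also subtract the AspectFit letterbox offset before scaling:
// Convert a UIKit touch point into a Core Image center vector.
// `scale` and `imageHeight` follow the worked example above (4x, 1000x500 image);
// a production version would first subtract the AspectFit letterbox offset.
func ciCenter(for touchPoint: CGPoint, scale: CGFloat, imageHeight: CGFloat) -> CIVector {
    let scaledX = touchPoint.x * scale
    let scaledY = touchPoint.y * scale
    // Core Image's origin is bottom-left, so flip Y against the image height.
    return CIVector(x: scaledX, y: imageHeight - scaledY)
}

// ciCenter(for: CGPoint(x: 200, y: 100), scale: 4, imageHeight: 500)
// yields CIVector(800, 100), matching the corrected example.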
THIRD EDIT:
Apologies for the multiple/running edits, but it seems important. (Besides, only the last one was due to my stupidity! Worth noting, but still.)
Now we're talking "near real time" updating using a Core Image filter. I'm planning to eventually have some blog posts on this, but the real source you want is Simon Gladman (he's moved on, look back to his posts in 2015-16), and his eBook Core Image for Swift (uses Swift 2 but most is automatically upgraded to Swift 3). Just giving credit where it is due.
If you want "near real time" usage of Core Image, you need to use the GPU. UIView and all its subclasses (meaning UIKit) use the CPU. That's okay; using the GPU means using a GLKView, which is roughly the GPU-rendered equivalent of a UIImageView.
Here's my subclass of it:
open class GLKViewDFD: GLKView {

    var renderContext: CIContext
    var myClearColor: UIColor!
    var rgb: (Int?, Int?, Int?)!

    open var image: CIImage! {
        didSet {
            setNeedsDisplay()
        }
    }

    public var clearColor: UIColor! {
        didSet {
            myClearColor = clearColor
        }
    }

    public init() {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(frame: CGRect.zero)
        context = eaglContext!
    }

    override public init(frame: CGRect, context: EAGLContext) {
        renderContext = CIContext(eaglContext: context)
        super.init(frame: frame, context: context)
        enableSetNeedsDisplay = true
    }

    public required init?(coder aDecoder: NSCoder) {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(coder: aDecoder)
        context = eaglContext!
        enableSetNeedsDisplay = true
    }

    override open func draw(_ rect: CGRect) {
        if let image = image {
            let imageSize = image.extent.size
            var drawFrame = CGRect(x: 0, y: 0, width: CGFloat(drawableWidth), height: CGFloat(drawableHeight))
            let imageAR = imageSize.width / imageSize.height
            let viewAR = drawFrame.width / drawFrame.height
            // Letterbox the image (AspectFit) inside the drawable area.
            if imageAR > viewAR {
                drawFrame.origin.y += (drawFrame.height - drawFrame.width / imageAR) / 2.0
                drawFrame.size.height = drawFrame.width / imageAR
            } else {
                drawFrame.origin.x += (drawFrame.width - drawFrame.height * imageAR) / 2.0
                drawFrame.size.width = drawFrame.height * imageAR
            }
            rgb = (0, 0, 0)
            rgb = myClearColor.rgb()
            glClearColor(Float(rgb.0!) / 256.0, Float(rgb.1!) / 256.0, Float(rgb.2!) / 256.0, 0.0)
            glClear(0x00004000)
            // Set the blend mode to "source over" so that CI will use that.
            glEnable(0x0BE2)
            glBlendFunc(1, 0x0303)
            renderContext.draw(image, in: drawFrame, from: image.extent)
        }
    }
}
A few notes.
I absolutely need to credit Objc.io for much of this. This is also a great resource for Swift and UIKit coding.
I wanted AspectFit content mode with the potential to change the "background color" of the GLKView, which is why I subclassed it and added the clearColor property.
Between the two resources I linked to, you should have what you need for a well-performing, near real time use of Core Image on the GPU. One reason my aforementioned code for scaling after getting a filter's output was never updated? It didn't need it.
Lots here to process, I know. But I've found this side of things (Core Image effects) to be the most fun side (and pretty cool too) of iOS.
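For completeness, a minimal usage sketch of the subclass above from inside a view controller (the frame, the "photo" asset name, and the filter parameters are all placeholders):
let glkView = GLKViewDFD()
glkView.frame = CGRect(x: 0, y: 0, width: 300, height: 300)
glkView.clearColor = .black
view.addSubview(glkView)

if let uiImage = UIImage(named: "photo"),            // placeholder asset
   let ciImage = CIImage(image: uiImage),
   let filter = CIFilter(name: "CIBumpDistortion") {
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(CIVector(x: 150, y: 150), forKey: kCIInputCenterKey)
    filter.setValue(200, forKey: kCIInputRadiusKey)
    // Setting .image triggers setNeedsDisplay, so the GPU redraws the view.
    glkView.image = filter.outputImage
}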

Swift: Mask Alignment + Auto-Layout Constraints

I have this PNG file, which I'd like to use as a mask for a UIView.
The view must be:
20 pixels/points in from each side
A perfect square
Centered vertically
I set the following constraints to accomplish this:
However, it seems these constraints don't play well with masks. When these constraints and the mask property are set, I get the following:
but I'd like the view to look just like the mask above, except orange (The backgroundColor here is just for simplicity—I later add subviews that need to be masked.)
However, when no constraints are set, the mask seems to work properly and I get something like this (borderColor added for visual purposes only):
Here's my code (viewForLayer is a UIView I made in the storyboard):
viewForLayer.layer.borderColor = UIColor.redColor().CGColor
viewForLayer.layer.borderWidth = 10
var mask = CALayer()
mask.contents = UIImage(named: "TopBump")!.CGImage
mask.frame = CGRect(x: 0, y: 0, width: viewForLayer.bounds.width, height: viewForLayer.bounds.height)
mask.position = CGPoint(x: viewForLayer.bounds.width/2, y: viewForLayer.bounds.height/2)
viewForLayer.layer.mask = mask
viewForLayer.backgroundColor = UIColor.orangeColor()
The problem, though, is that now the view isn't the right size or in the right position; it doesn't follow the rules above under "The view must be:". How can I have the mask work properly and the Auto Layout constraints set at the same time?
I found a way around it. Not sure if this is the best way but here we go...
http://imgur.com/pUIZbNA
Just make sure you change the name of the UIView class in the storyboard inspector too. Apparently, the trick is to set the mask frame on every layoutSubviews call.
class MaskView: UIView {
    override func layoutSubviews() {
        super.layoutSubviews()
        // Re-apply the mask size on every layout pass so it tracks the view's bounds.
        if let mask = self.layer.mask {
            mask.frame = self.bounds
        }
    }
}
class ViewController: UIViewController {

    @IBOutlet weak var viewForLayer: MaskView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let image = UIImage(named: "TopBump")!.CGImage!
        let maskLayer = CALayer()
        maskLayer.contents = image
        maskLayer.frame = viewForLayer.bounds
        viewForLayer.layer.mask = maskLayer
        viewForLayer.backgroundColor = UIColor.orangeColor()
        // Do any additional setup after loading the view, typically from a nib.
        viewForLayer.layer.borderColor = UIColor.redColor().CGColor
        viewForLayer.layer.borderWidth = 10
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}
I tried it for myself. Aside from the nitpick on 'let mask = CALayer()' (it's an immutable reference to a mutable object), changing the Auto Layout constraints of the embedded view shows the mask is aligned correctly.
NSLog("\(viewForLayer.bounds.width), \(viewForLayer.bounds.height)")
returns 375.0, 667.0 on an iPhone 6 screen. What are you getting?