How to optimise Swift code for drawing large datasets in real time on graphs that can scroll and zoom

I am writing my own code in Swift to draw a graph that displays a large data set. (The reason for not using an existing framework is that I would like both logarithmic and linear views, as well as user interaction with the graph.) I need help speeding up my code to make the app feel responsive, as it is currently too slow.
I pass my dataset into my magnitudeGraph class (a UIView), and calculate the co-ordinates for the data line (divLine) and save them as a UIBezierPath using the function below:
func calcDataCoOrdinates(frequencies: [Float], Mag: [Float]) -> UIBezierPath {
    let divLine = UIBezierPath()
    let graphWidth: Int = width - dBLabelSpace
    let graphHeight: Int = height - freqLabelSpace
    let yRange = Float(yMax - yMin)

    // Limit data to the values being displayed
    let indexOfGreaterThanXMin = frequencies.firstIndex(where: { $0 > xMin })
    let indexLessThanXMax = frequencies.lastIndex(where: { $0 < xMax })
    let frequencyRange = frequencies[indexOfGreaterThanXMin!..<indexLessThanXMax!]
    let MagnitudeRange = Mag[indexOfGreaterThanXMin!..<indexLessThanXMax!]

    let graphRatio = (yMax - yMin) / Float(graphHeight)
    let logX = vForce.log10(frequencyRange)
    let logRange = log10(Double(xMax)) - log10(Double(xMin))
    let graphDivideByLog = Float(graphWidth) / Float(logRange)
    let xMinArray: [Float] = Array(repeating: Float(log10(xMin)), count: logX.count)
    let x5 = vDSP.multiply(subtraction: (a: logX, b: xMinArray), graphDivideByLog)
    let x6 = vDSP.add(Float(dBLabelSpace + 10), x5)
    let yMaxArray = Array(repeating: Float(yMax), count: MagnitudeRange.count)
    let y2 = vDSP.subtract(yMaxArray, MagnitudeRange)
    let y3 = vDSP.divide(y2, Float(graphRatio))

    var allPoints: [CGPoint] = []
    for i in 0..<x6.count {
        let value = CGPoint(x: Double(x6[i]), y: Double(y3[i]))
        allPoints.append(value)
    }
    divLine.removeAllPoints()
    divLine.move(to: allPoints[0])
    for i in 1..<allPoints.count {
        divLine.addLine(to: allPoints[i])
    }
    return divLine
}
Instead of calling setNeedsDisplay directly from the UI element that triggers this, I call redrawGraph(). This calls the calcDataCoOrdinates function in a background thread, and then once all coordinates are calculated, I call setNeedsDisplay on the main thread.
func redrawGraph() {
    width = Int(self.frame.width)
    height = Int(self.frame.height)
    magnitudeQueue.sync {
        if mag1.isEmpty == false {
            self.data1BezierPath = self.calcDataCoOrdinates(frequencies: freq1, Mag: mag1)
        }
        if mag2.isEmpty == false {
            self.data2BezierPath = self.calcDataCoOrdinates(frequencies: freq2, Mag: mag2)
        }
        if magSum.isEmpty == false {
            self.dataSumBezierPath = self.calcDataCoOrdinates(frequencies: freqSum, Mag: magSum)
        }
        DispatchQueue.main.async {
            self.setNeedsDisplay()
        }
    }
}
Finally, here is the actual drawing function:
override func draw(_ rect: CGRect) {
    self.layer.sublayers = nil
    drawData(Color: graphData1Color, layer: in1Layer, lineThickness: Float(graphDataLineWidth), line: data1BezierPath)
    drawData(Color: graphData2Color, layer: in2Layer, lineThickness: Float(graphDataLineWidth), line: data2BezierPath)
    drawData(Color: sumColor, layer: in3Layer, lineThickness: Float(sumLineThickness), line: dataSumBezierPath)
}

func drawData(Color: UIColor, layer: CAShapeLayer, lineThickness: Float, line: UIBezierPath) {
    layer.removeFromSuperlayer()
    layer.path = nil
    Color.set()
    line.lineWidth = CGFloat(lineThickness)
    layer.strokeColor = Color.cgColor
    layer.fillColor = UIColor.clear.cgColor
    layer.path = line.cgPath
    layer.lineWidth = CGFloat(lineThickness)
    layer.contentsScale = UIScreen.main.scale
    let mask = CAShapeLayer()
    mask.contentsScale = UIScreen.main.scale
    mask.path = bpath?.cgPath
    layer.mask = mask
    self.layer.addSublayer(layer)
}
This code works well to calculate the three data sets and view them on the graph. I have excluded the code to calculate the background layer (grid lines), but that is just to simplify the code.
The issue is that this is a lot to process for 'real-time' interactivity. I figure we want around 30 fps for this to feel responsive, which gives approximately 33 ms of processing and rendering time in total.
Now, if I have say 32k data points, each of the three passes of calcDataCoOrdinates can take between 22-36 ms, totalling around 90 ms. A tenth of a second is very laggy, so my question is how to speed this up. If I measure the actual drawing process, it is currently very quick (1-2 ms), which leads me to believe it is the calcDataCoOrdinates function that needs improving.
I have tried several approaches, including a Douglas-Peucker algorithm to decrease the number of data points, but that comes at a large time cost (over 100 ms).
I have tried skipping data points that overlap pixels.
I have tried to use the Accelerate framework as much as possible; the top half of this function runs in approximately 10 ms, but the bottom half, putting the coordinates into a CGPoint array and iterating to make a UIBezierPath, takes 20 ms. I have tried pre-initializing the allPoints array with CGPoints, figuring this would save some time, but it is still taking too long.
Can this be computed more efficiently, or have I hit the limit of my app?
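One possible direction (untested, so treat it as an assumption rather than a measured fix): since the rendered line cannot show more detail than a point or two per horizontal pixel, the projected points can be bucketed by pixel column, keeping only the minimum and maximum y per column, and the path can then be built with a single CGMutablePath.addLines(between:) call instead of the point-by-point move/addLine loop. An outline, reusing the x6/y3 arrays computed with vDSP above:
// Untested sketch: bucket the projected points by pixel column, keep only the
// min and max y per column, and build the path in a single call.
// xs/ys would be the x6/y3 arrays computed with vDSP above.
func decimatedPath(xs: [Float], ys: [Float]) -> UIBezierPath {
    var points: [CGPoint] = []
    points.reserveCapacity(2 * 1024)
    var column = Int.min
    var minY = Float.greatestFiniteMagnitude
    var maxY = -Float.greatestFiniteMagnitude
    func flush() {
        guard column != Int.min else { return }
        points.append(CGPoint(x: CGFloat(column), y: CGFloat(minY)))
        if maxY != minY {
            points.append(CGPoint(x: CGFloat(column), y: CGFloat(maxY)))
        }
    }
    for i in 0..<xs.count {
        let c = Int(xs[i])
        if c != column {
            flush()                      // emit the previous column
            column = c
            minY = ys[i]
            maxY = ys[i]
        } else {
            minY = min(minY, ys[i])
            maxY = max(maxY, ys[i])
        }
    }
    flush()                              // emit the last column
    let cgPath = CGMutablePath()
    cgPath.addLines(between: points)     // one call instead of a move/addLine loop
    return UIBezierPath(cgPath: cgPath)
}
For a 32k-point sweep drawn into a graph a few hundred points wide, this leaves at most a couple of thousand points, so both the CGPoint conversion and the path construction shrink accordingly; whether that gets under the 33 ms budget would still need measuring.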

Related

Swift. UIBezierPath shape detection

I'm working with UIBezierPath and shape detection. For painting I'm using UIPanGestureRecognizer.
Example of code:
My shape definder
var gesture = UIPanGestureRecognizer()
.
.
.
view.addGestureRecognizer(gesture.onChange { [weak self] gesture in
    let point = gesture.location(in: self.view)
    let shapeL = CAShapeLayer()
    shapeL.strokeColor = UIColor.black.cgColor
    shapeL.lineWidth = 2
    shapeL.fillColor = UIColor.clear.cgColor
    switch gesture.state {
    case .began:
        //some code
        currentBezierPath = UIBezierPath()
        break
    case .changed:
        //some code
        shapeLayer.path = self.currentBezierPath.cgPath
        break
    case .ended:
        //define what the user painted (circle, rectangle, etc.)
        shapeDefinder(path: currentBezierPath)
        break
    default:
        break
    }
})
shapeDefinder
func shapeDefinder(path: UIBezierPath) {
    if path.hasFourRightAngles() {
        // square
    }
}
extension hasFourRightAngles
extension UIBezierPath {
    func hasFourRightAngles() -> Bool {
        guard self.currentPoint != .zero else {
            // empty path cannot have angles
            return false
        }
        let bounds = self.bounds
        let points = [
            bounds.origin,
            CGPoint(x: bounds.minX, y: bounds.minY),
            CGPoint(x: bounds.maxX, y: bounds.maxY),
            CGPoint(x: bounds.minX, y: bounds.maxY)
        ]
        let angleTolerance = 5.0 // in degrees
        var rightAngleCount = 0
        for i in 0...3 {
            let p1 = points[i]
            let p2 = points[(i+1)%4]
            let p3 = points[(i+2)%4]
            let angle = p2.angle(between: p1, and: p3)
            if abs(angle - 90) <= angleTolerance {
                rightAngleCount += 1
            }
        }
        return rightAngleCount >= 4
    }
}
and
extension CGPoint {
    func angle(between p1: CGPoint, and p2: CGPoint) -> CGFloat {
        let dx1 = self.x - p1.x
        let dy1 = self.y - p1.y
        let dx2 = p2.x - self.x
        let dy2 = p2.y - self.y
        let dotProduct = dx1*dx2 + dy1*dy2
        let crossProduct = dx1*dy2 - dx2*dy1
        return atan2(crossProduct, dotProduct) * 180 / .pi
    }
}
but my method hasFourRightAngles() doesn't work; it always returns true.
I can't understand how I can detect a square (the user must draw exactly a square; if the user draws a circle, the check should not pass).
Maybe someone knows of a library that works with UIBezierPath to detect shapes?
The bounds of a path are always a rectangle, no matter the shape, so you should expect this function to always return true. From the docs:
The value in this property represents the smallest rectangle that completely encloses all points in the path, including any control points for Bézier and quadratic curves.
If you want to consider the components of the path itself, you'd need to iterate over its elements using its CGPath. See applyWithBlock for how to get the elements. That said, this probably won't work very well, since you likely don't care precisely how the shape was drawn. If you go down this road, you'll probably want to do some work to simplify the curve first, and perhaps put the strokes in a useful order.
If the drawing pattern itself is the important thing (i.e. the user's gesture is what matters), then I would probably keep track of whether this could be a rectangle at each point of the drawing. Either it needs to be roughly colinear to the previous line, or roughly normal. And then the final point must be close to the original point.
The better approach is possibly to consider the final image of the shape, regardless of how it was drawn, and then classify it. For various algorithms to do that, see How to identify different objects in an image?
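For reference, a rough sketch of that element walk (an illustration of applyWithBlock, not code from this answer; curves are skipped for brevity):
// Sketch: collect the line endpoints of a UIBezierPath by walking its CGPath elements.
func linePoints(of path: UIBezierPath) -> [CGPoint] {
    var points: [CGPoint] = []
    path.cgPath.applyWithBlock { element in
        switch element.pointee.type {
        case .moveToPoint, .addLineToPoint:
            points.append(element.pointee.points[0])
        default:
            break // curves and closeSubpath are ignored in this sketch
        }
    }
    return points
}
You could then simplify that point list and apply an angle test to consecutive simplified segments, rather than to the corners of the bounding box.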
Your code to get the bounds of your path will not let you tell if the lines inside the path make right angles. As Rob says in his answer, the bounding box of a path will always be a rectangle, so your current test will always return true.
The bounding box of a circle will be a square, as will the box of any shape whose horizontal and vertical maxima and minima are equal.
It is possible to interrogate the internal elements of the underlying CGPath and look for a series of lines that make a square. I suggest searching for code that parses the elements of a CGPath.
Note that if you are checking freehand drawing, you will likely need some "slop" in your calculations to allow for shapes that are close to, but not exactly squares, or you will likely never find a perfect square.
Also, what if the path contains a square plus other shape elements? You will need to decide how to handle situations like that.

Blur face in face detection in vision kit

I'm using Apple's tutorial about face detection with the Vision framework on a live camera feed, not an image.
https://developer.apple.com/documentation/vision/tracking_the_user_s_face_in_real_time
It detects the face and adds some lines using CAShapeLayer to draw lines between different parts of the face.
fileprivate func setupVisionDrawingLayers() {
    let captureDeviceResolution = self.captureDeviceResolution
    let captureDeviceBounds = CGRect(x: 0,
                                     y: 0,
                                     width: captureDeviceResolution.width,
                                     height: captureDeviceResolution.height)
    let captureDeviceBoundsCenterPoint = CGPoint(x: captureDeviceBounds.midX,
                                                 y: captureDeviceBounds.midY)
    let normalizedCenterPoint = CGPoint(x: 0.5, y: 0.5)
    guard let rootLayer = self.rootLayer else {
        self.presentErrorAlert(message: "view was not property initialized")
        return
    }
    let overlayLayer = CALayer()
    overlayLayer.name = "DetectionOverlay"
    overlayLayer.masksToBounds = true
    overlayLayer.anchorPoint = normalizedCenterPoint
    overlayLayer.bounds = captureDeviceBounds
    overlayLayer.position = CGPoint(x: rootLayer.bounds.midX, y: rootLayer.bounds.midY)
    let faceRectangleShapeLayer = CAShapeLayer()
    faceRectangleShapeLayer.name = "RectangleOutlineLayer"
    faceRectangleShapeLayer.bounds = captureDeviceBounds
    faceRectangleShapeLayer.anchorPoint = normalizedCenterPoint
    faceRectangleShapeLayer.position = captureDeviceBoundsCenterPoint
    faceRectangleShapeLayer.fillColor = nil
    faceRectangleShapeLayer.strokeColor = UIColor.green.withAlphaComponent(0.7).cgColor
    faceRectangleShapeLayer.lineWidth = 5
    faceRectangleShapeLayer.shadowOpacity = 0.7
    faceRectangleShapeLayer.shadowRadius = 5
    let faceLandmarksShapeLayer = CAShapeLayer()
    faceLandmarksShapeLayer.name = "FaceLandmarksLayer"
    faceLandmarksShapeLayer.bounds = captureDeviceBounds
    faceLandmarksShapeLayer.anchorPoint = normalizedCenterPoint
    faceLandmarksShapeLayer.position = captureDeviceBoundsCenterPoint
    faceLandmarksShapeLayer.fillColor = nil
    faceLandmarksShapeLayer.strokeColor = UIColor.yellow.withAlphaComponent(0.7).cgColor
    faceLandmarksShapeLayer.lineWidth = 3
    faceLandmarksShapeLayer.shadowOpacity = 0.7
    faceLandmarksShapeLayer.shadowRadius = 5
    overlayLayer.addSublayer(faceRectangleShapeLayer)
    faceRectangleShapeLayer.addSublayer(faceLandmarksShapeLayer)
    rootLayer.addSublayer(overlayLayer)
    self.detectionOverlayLayer = overlayLayer
    self.detectedFaceRectangleShapeLayer = faceRectangleShapeLayer
    self.detectedFaceLandmarksShapeLayer = faceLandmarksShapeLayer
    self.updateLayerGeometry()
}
How can I fill the area inside those lines (the different parts of the face) with a blurry view? I need to blur the face.
You could try placing a UIVisualEffectView on top of your video feed, and then adding a masking CAShapeLayer to that UIVisualEffectView. I don't know if that would work or not.
The docs on UIVisualEffectView say:
When using the UIVisualEffectView class, avoid alpha values that are less than 1. Creating views that are partially transparent causes the system to combine the view and all the associated subviews during an offscreen render pass. UIVisualEffectView objects need to be combined as part of the content they are layered on top of in order to look correct. Setting the alpha to less than 1 on the visual effect view or any of its superviews causes many effects to look incorrect or not show up at all.
I don't know if using a mask layer on a visual effect view would cause the same rendering problems or not. You'd have to try it. (And be sure to try it on a range of different hardware, since the rendering performance varies quite a bit between different versions of Apple's chipsets.)
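A minimal sketch of that masking idea (untested; faceRectPath is a hypothetical path you would build from the Vision observation converted to view coordinates):
// Sketch: blur only the detected face region by masking a UIVisualEffectView.
let blurView = UIVisualEffectView(effect: UIBlurEffect(style: .regular))
blurView.frame = view.bounds
view.addSubview(blurView)

let maskLayer = CAShapeLayer()
maskLayer.frame = blurView.bounds
maskLayer.path = faceRectPath.cgPath            // hypothetical path over the face
maskLayer.fillColor = UIColor.black.cgColor     // any opaque color; only alpha matters in a mask
blurView.layer.mask = maskLayer
You'd then update maskLayer.path on each new face observation, the same way the sample updates its shape layers.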
You could also try using a shape layer filled with visual hash or a "pixellated" pattern instead of blurring. That would be faster and probably render more reliably.
Note that face detection tends to be a little jumpy. It might drop out for a few frames, or lag on quick pans or change of scene. If you're trying to hide people's faces in a live feed for privacy, it might not be reliable. It would only take a few un-blurred frames for somebody's identity to be revealed.

NSBezierPath stroke not scaled correctly

I have what should be a simple subclass of an NSView that draws circular nodes at specified locations.
To render the nodes in my view, I translate the graphics context's origin to the center of the view's frame and scale it such that it spans from -1.25 to 1.25 in the limiting dimension (the node coordinates are all in the range -1...1). I then create for each node an NSBezierPath using the ovalIn: constructor. Finally, I fill the path with yellow and stroke it with black.
But... While the yellow fill looks ok, the black outline is not being scaled correctly!
What am I missing?
Here's the code:
override func draw(_ dirtyRect: NSRect)
{
    let nodeRadius = CGFloat(0.05)
    let unscaledSpan = CGFloat(2.5)

    super.draw(dirtyRect)

    NSColor.white.set()
    self.frame.fill()

    guard let graph = graph else { return }

    let scale = min(bounds.width/unscaledSpan, bounds.height/unscaledSpan)

    NSGraphicsContext.current?.saveGraphicsState()
    defer { NSGraphicsContext.current?.restoreGraphicsState() }

    let xform = NSAffineTransform()
    xform.translateX(by: 0.5*bounds.width, yBy: 0.5*bounds.height)
    xform.scale(by: scale)
    xform.concat()

    for v in graph.vertices
    {
        let r = NSRect(x: v.x-nodeRadius, y: v.y-nodeRadius, width: 2.0*nodeRadius, height: 2.0*nodeRadius)
        let p = NSBezierPath(ovalIn: r)
        NSColor.yellow.set()
        p.fill()
        NSColor.black.set()
        p.stroke()
    }
}
This is what I'm seeing (shown with two different window sizes)
Clearly, the translation is working fine for both fill and stroke.
But, the scaling is off for stroke.
Thanks for any/all hints/suggestions.
Doh... I wasn't considering the effect of scaling on the line width.
Made the following edit and all is well:
...
let p = NSBezierPath(ovalIn: r)
p.lineWidth = CGFloat(0.01)
NSColor.yellow.set()
p.fill()
NSColor.black.set()
p.stroke()
...
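(A related variant, not part of the original fix: if you want the stroke to stay a constant on-screen width no matter how the window is resized, divide the desired width by the same scale factor applied to the context.)
// Variant: keep roughly a 1-point stroke on screen, regardless of the zoom
// applied with xform.scale(by: scale)
p.lineWidth = 1.0 / scale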

Creating a progress indicator with a rounded rectangle

I am attempting to create a rounded-rectangle progress indicator in my app. I have previously implemented a circular indicator, but not like this shape. I would like it to look something like this (start point is at the top):
But I get this with 0 as the .strokeStart property of the layer:
My current code, placed in viewDidLoad():
let queueShapeLayer = CAShapeLayer()
let queuePath = UIBezierPath(roundedRect: addToQueue.frame, cornerRadius: addToQueue.layer.cornerRadius)
queueShapeLayer.path = queuePath.cgPath
queueShapeLayer.lineWidth = 5
queueShapeLayer.strokeColor = UIColor.white.cgColor
queueShapeLayer.fillColor = UIColor.clear.cgColor
queueShapeLayer.strokeStart = 0
queueShapeLayer.strokeEnd = 0.5
view.layer.addSublayer(queueShapeLayer)
addToQueue is the button which says 'Upvote'.
Unlike creating a circular progress indicator, I cannot set the start and end angle in the initialisation of a Bezier path.
How do I make the progress start from the top middle as seen in the first image?
Edit - added a picture without corner radius on:
It seems that the corner radius is creating the issue.
If you have any questions, please ask!
I found a solution so the loading indicator works for round corners:
let queueShapeLayer = CAShapeLayer()
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)

    // Queue timer
    let radius = addToQueue.layer.cornerRadius
    let diameter = radius * 2
    let totalLength = (addToQueue.frame.width - diameter) * 2 + (CGFloat.pi * diameter)
    let queuePath = UIBezierPath(roundedRect: addToQueue.frame, cornerRadius: radius)
    queueShapeLayer.path = queuePath.cgPath
    queueShapeLayer.lineWidth = 5
    queueShapeLayer.strokeColor = UIColor.white.cgColor
    queueShapeLayer.fillColor = UIColor.clear.cgColor
    queueShapeLayer.strokeStart = 0.25 - CGFloat.pi * diameter / 3 / totalLength // Change the '0.25' to 0.5, 0.75 etc. wherever you want the bar to start
    queueShapeLayer.strokeEnd = queueShapeLayer.strokeStart + 0.5 // Change this to the value you want it to go to (in this case 0.5, or 50% loaded)
    view.layer.addSublayer(queueShapeLayer)
}
After I had done this, though, I was having problems because I couldn't animate the whole way round. To get around this, I created a second animation (setting strokeStart to 0), and then I placed completion blocks so I could trigger the animations at the correct time.
Tip:
Add animation.fillMode = CAMediaTimingFillMode.forwards and animation.isRemovedOnCompletion = false when using a CABasicAnimation so that the animation holds its final state until you remove it.
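For example, a bare-bones version of one of those animations with the tip applied might look like this (the key path, duration and values are only illustrative):
// Illustrative example: animate the stroke and hold the final state.
let animation = CABasicAnimation(keyPath: "strokeEnd")
animation.fromValue = queueShapeLayer.strokeStart
animation.toValue = queueShapeLayer.strokeStart + 0.5
animation.duration = 2.0
animation.fillMode = CAMediaTimingFillMode.forwards
animation.isRemovedOnCompletion = false
queueShapeLayer.add(animation, forKey: "progress")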
I hope this formula helps anyone in the future!
If you need help, you can always message me and I am willing to help. :)

I'm having some trouble using x and y coordinates from touchesBegan as the center key in a CI filter

I'm trying to set things up so that the user taps a location in an image view and the X,Y of the tap becomes the center point (kCIInputCenterKey) of the current image filter in use.
These are my global variables:
var x: CGFloat = 0
var y: CGFloat = 0
var imgChecker = 0
This is my touchesBegan function that checks if the user is touching inside the image view or not, if not then sets the filter center key to the center of the image view:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let position = touch.location(in: self.imageView)
        if touch.view == imageView {
            print("touchesBegan | This is an ImageView")
            x = position.x * 4
            y = position.y * 4
            imgChecker = 1
        } else {
            print("touchesBegan | This is not an ImageView")
            x = 0
            y = 0
            imgChecker = 0
        }
        print("x: \(x)")
        print("y: \(y)")
    }
}
As you can see, I have the checker there to make the filter center appear in the middle of the image if the tap was not inside the image view. I'm also printing the tapped coordinates to Xcode's console and they appear without issue.
This is the part where I apply my filter:
currentFilter = CIFilter(name: "CIBumpDistortion")
currentFilter.setValue(200, forKey: kCIInputRadiusKey)
currentFilter.setValue(1, forKey: kCIInputScaleKey)
if imgChecker == 1 {
    self.currentFilter.setValue(CIVector(x: self.x, y: self.y), forKey: kCIInputCenterKey)
} else {
    self.currentFilter.setValue(CIVector(x: currentImage.size.width / 2, y: currentImage.size.height / 2), forKey: kCIInputCenterKey)
}
x = 0
y = 0
let beginImage = CIImage(image: currentImage)
currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
let cgimg = context.createCGImage(currentFilter.outputImage!, from: currentFilter.outputImage!.extent)
currentImage = UIImage(cgImage: cgimg!)
self.imageView.image = currentImage
This is the CGRect I'm using. Ignore the "frame" in there; it's just an image view in front of the first one that allows me to save a "frame" over the current filtered image:
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    img = renderer.image { ctx in
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
}
When I do set the x,y by tapping inside the image view, the center of the filter keeps appearing in the lower left-hand side of it, regardless of where I tapped. If I keep tapping around the image view, the center does seem to move around a bit, but it's nowhere near where I'm actually tapping.
Any insight would be greatly appreciated, thank you.
Keep two things in mind.
First (and I think you probably know this), the CI origin (0,0) is lower left, not top left.
Second (and I think this is the issue) UIKit (meaning UIImage and potentially CGPoint coordinates) are not the same as CIVector coordinates. You need to take the UIKit touchesBegan coordinate and turn it into the CIImage.extent coordinate.
EDIT:
All coordinates that follow are X then Y, and Width then Height.
After posting my comment I thought I'd give an example of what I mean by scaling. Let's say you have a UIImageView sized at 250x250, using a content mode of AspectFit, displaying an image whose size is 1000x500.
Now, let's say the touchesBegan is CGPoint(200,100). (NOTE: If your UIImageView is part of a larger superview, it could be something more like 250,400 - I'm working on the point within the UIImageView.)
Scaling down the image size (remember, AspectFit) means the image is actually centered vertically (landscape appearing) within the UIImageView at CGRect(0, 62.5, 250, 125). So first off, good! The touch point not only began within the image view, it also began within the image. (You'll probably want to consider the not-so-edge case of touches beginning outside of the image.)
Dividing by 4 gives you the scaled down image view coordinates, and as you'd expect, multiplying up will give you the needed vector coordinates. So a touchesBegan CGPoint(200,100) turns into a CIVector(800,400).
I have some code written - not much in the way of comments, done in Swift 2 (I think) and very poorly written - that is part of a subclass (probably should have been an extension) of UIImageView that computes all this. Using the UIImageView's bounds and its image's size is what you need. Keep in mind - images in AspectFit can also be scaled up!
One last note on CIImage - extent. Many times it's a UIImage's size. But many masks and generated outputs may have an infinite extent.
SECOND EDIT:
I made a stupid mistake in my scaling example. Remember, the CIImage origin is bottom left, not upper left. So in my example a CGPoint(200,100), scaled up to CGPoint(800,400), would become CIVector(800,100).
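To make that mapping concrete, here is a rough helper (my own sketch, assuming aspect-fit; it also subtracts the letterbox offset, which the simplified numbers above ignore, and it assumes image.scale is 1):
// Sketch: convert a touch point inside an aspect-fit UIImageView into
// CIImage/CIVector coordinates (origin at the bottom left).
func ciCenter(for touchPoint: CGPoint, in imageView: UIImageView, image: UIImage) -> CIVector {
    let viewSize = imageView.bounds.size
    let imageSize = image.size
    let scale = min(viewSize.width / imageSize.width, viewSize.height / imageSize.height)
    // The rect the image actually occupies inside the view (aspect fit).
    let fittedSize = CGSize(width: imageSize.width * scale, height: imageSize.height * scale)
    let origin = CGPoint(x: (viewSize.width - fittedSize.width) / 2,
                         y: (viewSize.height - fittedSize.height) / 2)
    // Touch point relative to the fitted image, scaled up to image coordinates.
    let xInImage = (touchPoint.x - origin.x) / scale
    let yInImage = (touchPoint.y - origin.y) / scale
    // Flip Y: UIKit's origin is top left, Core Image's is bottom left.
    return CIVector(x: xInImage, y: imageSize.height - yInImage)
}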
THIRD EDIT:
Apologies for the multiple/running edits, but it seems important. (Besides, only the last was due to my stupidity! Worthwhile to note, but still.)
Now we're talking "near real time" updating using a Core Image filter. I'm planning to eventually have some blog posts on this, but the real source you want is Simon Gladman (he's moved on, look back to his posts in 2015-16), and his eBook Core Image for Swift (uses Swift 2 but most is automatically upgraded to Swift 3). Just giving credit where it is due.
If you want "near real time" usage of Core Image, you need to use the GPU. UIView, and all it's subclasses (meaning UIKit) uses the CPU. That's okay, using the GPU means using a Core Graphics, and specifically using a GLKView. It's the CG equivalent of a UIImage.
Here's my subclass of it:
open class GLKViewDFD: GLKView {

    var renderContext: CIContext
    var myClearColor: UIColor!
    var rgb: (Int?, Int?, Int?)!

    open var image: CIImage! {
        didSet {
            setNeedsDisplay()
        }
    }

    public var clearColor: UIColor! {
        didSet {
            myClearColor = clearColor
        }
    }

    public init() {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(frame: CGRect.zero)
        context = eaglContext!
    }

    override public init(frame: CGRect, context: EAGLContext) {
        renderContext = CIContext(eaglContext: context)
        super.init(frame: frame, context: context)
        enableSetNeedsDisplay = true
    }

    public required init?(coder aDecoder: NSCoder) {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(coder: aDecoder)
        context = eaglContext!
        enableSetNeedsDisplay = true
    }

    override open func draw(_ rect: CGRect) {
        if let image = image {
            let imageSize = image.extent.size
            var drawFrame = CGRect(x: 0, y: 0, width: CGFloat(drawableWidth), height: CGFloat(drawableHeight))
            let imageAR = imageSize.width / imageSize.height
            let viewAR = drawFrame.width / drawFrame.height
            // Letterbox the image inside the drawable (aspect fit)
            if imageAR > viewAR {
                drawFrame.origin.y += (drawFrame.height - drawFrame.width / imageAR) / 2.0
                drawFrame.size.height = drawFrame.width / imageAR
            } else {
                drawFrame.origin.x += (drawFrame.width - drawFrame.height * imageAR) / 2.0
                drawFrame.size.width = drawFrame.height * imageAR
            }
            rgb = (0, 0, 0)
            rgb = myClearColor.rgb()
            glClearColor(Float(rgb.0!)/256.0, Float(rgb.1!)/256.0, Float(rgb.2!)/256.0, 0.0)
            glClear(0x00004000)            // GL_COLOR_BUFFER_BIT
            // set the blend mode to "source over" so that CI will use that
            glEnable(0x0BE2)               // GL_BLEND
            glBlendFunc(1, 0x0303)         // GL_ONE, GL_ONE_MINUS_SRC_ALPHA
            renderContext.draw(image, in: drawFrame, from: image.extent)
        }
    }
}
A few notes.
I absolutely need to credit Objc.io for much of this. This is also a great resource for Swift and UIKit coding.
I wanted AspectFit content mode with the potential to change the "backgroundColor" of the GLKView, which is why I subclassed it and added the clearColor property.
Between the two resources I linked to, you should have what you need to have a good performing, near real time use of Core Image, using the GPU. One reason my afore-mentioned code to use scaling after getting the output of a filter was never updated? It didn't need it.
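As a hypothetical example of wiring it up (not from the original answer; sourceImage is a placeholder, and it assumes the UIColor.rgb() helper the class relies on exists in your project):
// Hypothetical usage of the GLKViewDFD subclass above with a Core Image filter.
let glkView = GLKViewDFD(frame: view.bounds, context: EAGLContext(api: .openGLES2)!)
glkView.clearColor = .black
view.addSubview(glkView)

let filter = CIFilter(name: "CIBumpDistortion")!
filter.setValue(CIImage(image: sourceImage), forKey: kCIInputImageKey)
filter.setValue(200, forKey: kCIInputRadiusKey)
filter.setValue(CIVector(x: 800, y: 100), forKey: kCIInputCenterKey)

glkView.image = filter.outputImage   // the didSet triggers setNeedsDisplay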
Lots here to process, I know. But I've found this side of things (Core Image effects) to be the most fun side (and pretty cool too) of iOS.