SKScene Custom Size - sprite-kit

I would like to create a new scene so that I can embed a UI element (a picker view) in it, but I do not want the scene to take up the entire screen; it should be smaller. I used the following code:
- (void)switchToPickerScene
{
// Present the new picker scene:
self.paused = YES;
CGSize pickerSceneSize = CGSizeMake(300, 440);
BBPickerScene *pickerScene = [BBPickerScene sceneWithSize:pickerSceneSize];
pickerScene.scaleMode = SKSceneScaleModeFill;
SKTransition *trans = [SKTransition doorsOpenHorizontalWithDuration:0.5];
[self.scene.view presentScene:pickerScene transition:trans];
}
But no matter which SKSceneScaleMode I choose it always fills the entire phone screen. Is there any way to do this?

Related

Take a Video with ARKIT

Hello Community,
I am trying to build an app with Swift 4 and the great upcoming ARKit framework, but I am stuck. I need to record a video with the framework, or at least capture a UIImage sequence, but I don't know how.
This is what I've tried:
In ARKit you have a session which tracks your world. This session has a capturedImage property where you can get the current image. So I created a Timer which appends the capturedImage to a list every 0.1s. This would work for me, but if I start the Timer by tapping a "start" button, the camera starts to lag. I don't think it's the Timer itself, because if I invalidate the Timer by tapping a "stop" button the camera is fluent again.
Is there a way to fix the lag, or is there an even better approach?
Thanks
I was able to use ReplayKit to do exactly that.
To see what ReplayKit is like
On your iOS device, go to Settings -> Control Center -> Customize Controls. Move "Screen Recording" to the "Include" section, and swipe up to bring up Control Center. You should now see the round Screen Recording icon, and you'll notice that when you press it, iOS starts to record your screen. Tapping the blue bar will end recording and save the video to Photos.
Using ReplayKit, you can make your app invoke the screen recorder and capture your ARKit content.
How-to
To start recording:
RPScreenRecorder.shared().startRecording { error in
// Handle error, if any
}
To stop recording:
RPScreenRecorder.shared().stopRecording(handler: { (previewVc, error) in
// Do things
})
After you're done recording, .stopRecording gives you an optional RPPreviewViewController, which is
An object that displays a user interface where users preview and edit a screen recording created with ReplayKit.
So in our example, you can present previewVc if it isn't nil
RPScreenRecorder.shared().stopRecording(handler: { (previewVc, error) in
if let previewVc = previewVc {
previewVc.delegate = self
self.present(previewVc, animated: true, completion: nil)
}
})
You'll be able to edit and save the video right from the previewVc, but you might want to make self (or someone) the RPPreviewViewControllerDelegate, so you can easily dismiss the previewVc when you're finished.
extension MyViewController: RPPreviewViewControllerDelegate {
func previewControllerDidFinish(_ previewController: RPPreviewViewController) {
// Called when the preview vc is ready to be dismissed
previewController.dismiss(animated: true, completion: nil)
}
}
Caveats
You'll notice that startRecording will record "the app display", so any views you have (buttons, labels, etc.) will be recorded as well.
I found it useful to hide the controls while recording and let my users know that tapping the screen stops recording, but I've also read about others having success putting their essential controls on a separate UIWindow.
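For example, here is a rough sketch of that approach (controlsStackView is a hypothetical container for your on-screen controls, not something from the original answer):
func startCapture() {
    controlsStackView.isHidden = true   // hide your own UI before recording starts
    RPScreenRecorder.shared().startRecording { error in
        if let error = error {
            print("Could not start recording: \(error)")
            self.controlsStackView.isHidden = false
        }
    }
}

func stopCapture() {
    RPScreenRecorder.shared().stopRecording { previewVc, error in
        self.controlsStackView.isHidden = false   // bring the controls back
        // present previewVc as shown above
    }
}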
Excluding views from recording
The separate UIWindow trick works. I was able to make an overlay window where I had a record button and a timer, and these weren't recorded.
let overlayWindow = UIWindow(frame: view.frame)
let recordButton = UIButton( ... )
overlayWindow.backgroundColor = UIColor.clear
// Add your controls to the overlay window so they stay out of the recording
overlayWindow.addSubview(recordButton)
The UIWindow will be hidden by default. So when you want to show your controls, you must set isHidden to false.
Best of luck to you!
Use a custom renderer.
Render the scene using the custom renderer, then get the texture from the custom renderer, and finally convert that to a CVPixelBufferRef.
- (void)viewDidLoad {
[super viewDidLoad];
self.rgbColorSpace = CGColorSpaceCreateDeviceRGB();
self.bytesPerPixel = 4;
self.bitsPerComponent = 8;
self.bitsPerPixel = 32;
self.textureSizeX = 640;
self.textureSizeY = 960;
// Set the view's delegate
self.sceneView.delegate = self;
// Show statistics such as fps and timing information
self.sceneView.showsStatistics = YES;
// Create a new scene
SCNScene *scene = [SCNScene scene]; // [SCNScene sceneNamed:@"art.scnassets/ship.scn"];
// Set the scene to the view
self.sceneView.scene = scene;
self.sceneView.preferredFramesPerSecond = 30;
[self setupMetal];
[self setupTexture];
self.renderer.scene = self.sceneView.scene;
}
- (void)setupMetal
{
if (self.sceneView.renderingAPI == SCNRenderingAPIMetal) {
self.device = self.sceneView.device;
self.commandQueue = [self.device newCommandQueue];
self.renderer = [SCNRenderer rendererWithDevice:self.device options:nil];
}
else {
NSAssert(nil, @"Only Support Metal");
}
}
- (void)setupTexture
{
MTLTextureDescriptor *descriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm_sRGB width:self.textureSizeX height:self.textureSizeY mipmapped:NO];
descriptor.usage = MTLTextureUsageShaderRead | MTLTextureUsageRenderTarget;
id<MTLTexture> textureA = [self.device newTextureWithDescriptor:descriptor];
self.offscreenTexture = textureA;
}
- (void)renderer:(id <SCNSceneRenderer>)renderer willRenderScene:(SCNScene *)scene atTime:(NSTimeInterval)time
{
[self doRender];
}
- (void)doRender
{
if (self.rendering) {
return;
}
self.rendering = YES;
CGRect viewport = CGRectMake(0, 0, self.textureSizeX, self.textureSizeY);
id<MTLTexture> texture = self.offscreenTexture;
MTLRenderPassDescriptor *renderPassDescriptor = [MTLRenderPassDescriptor new];
renderPassDescriptor.colorAttachments[0].texture = texture;
renderPassDescriptor.colorAttachments[0].loadAction = MTLLoadActionClear;
renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0, 1, 0, 1.0);
renderPassDescriptor.colorAttachments[0].storeAction = MTLStoreActionStore;
id<MTLCommandBuffer> commandBuffer = [self.commandQueue commandBuffer];
self.renderer.pointOfView = self.sceneView.pointOfView;
[self.renderer renderAtTime:0 viewport:viewport commandBuffer:commandBuffer passDescriptor:renderPassDescriptor];
[commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> _Nonnull bf) {
[self.recorder writeFrameForTexture:texture];
self.rendering = NO;
}];
[commandBuffer commit];
}
Then, in the recorder, set up the AVAssetWriterInputPixelBufferAdaptor with an AVAssetWriter, and convert the texture to a CVPixelBufferRef:
- (void)writeFrameForTexture:(id<MTLTexture>)texture {
CVPixelBufferPoolRef pixelBufferPool = self.assetWriterPixelBufferInput.pixelBufferPool;
CVPixelBufferRef pixelBuffer;
CVReturn status = CVPixelBufferPoolCreatePixelBuffer(nil, pixelBufferPool, &pixelBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *pixelBufferBytes = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
MTLRegion region = MTLRegionMake2D(0, 0, texture.width, texture.height);
[texture getBytes:pixelBufferBytes bytesPerRow:bytesPerRow fromRegion:region mipmapLevel:0];
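// Note: presentationTime is assumed to be tracked elsewhere in the recorder, e.g. derived from the frame index and the target frame rate.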
[self.assetWriterPixelBufferInput appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
CVPixelBufferRelease(pixelBuffer);
}
Make sure the custom renderer and the adaptor share the same pixel encoding.
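The writer setup itself isn't shown above. As a rough Swift sketch (illustrative names, not from the original Objective-C answer), the adaptor can be given a 32BGRA pixel format so it matches the BGRA8Unorm Metal texture:
import AVFoundation

// Sketch only: build a writer whose pixel buffer adaptor matches the texture's BGRA layout.
func makeWriter(outputURL: URL, width: Int, height: Int) throws
    -> (AVAssetWriter, AVAssetWriterInput, AVAssetWriterInputPixelBufferAdaptor) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: width,
        AVVideoHeightKey: height
    ])
    input.expectsMediaDataInRealTime = true
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey as String: width,
            kCVPixelBufferHeightKey as String: height
        ])
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: CMTime.zero)
    return (writer, input, adaptor)
}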
I tested this with the default ship.scn and it only consumed about 30% CPU, compared to almost 90% when using the snapshot method for every frame. And it will not pop up a permission dialog.
I have released an open source framework taking care of this. https://github.com/svtek/SceneKitVideoRecorder
It works by getting the drawables from the scene view's Metal layer.
You can attach a display link to get your renderer called as the screen refreshes:
displayLink = CADisplayLink(target: self, selector: #selector(updateDisplayLink))
displayLink?.add(to: .main, forMode: .commonModes)
And then grab the drawable from metal layer by:
let metalLayer = sceneView.layer as! CAMetalLayer
let nextDrawable = metalLayer.nextDrawable()
Be aware that calling nextDrawable() consumes a drawable. You should call it as rarely as possible, and do so inside an autoreleasepool {} so the drawable gets released properly and replaced with a new one.
Then you should read the MTLTexture from the drawable to a pixel buffer which you can append to AVAssetWriter to create a video.
let destinationTexture = currentDrawable.texture
destinationTexture.getBytes(...)
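Putting those pieces together, the display link callback might look roughly like this (a sketch; the actual pixel buffer copy and append are left out since they follow the same pattern as the previous answer):
@objc func updateDisplayLink() {
    autoreleasepool {
        // sceneView is the SCNView/ARSCNView being recorded
        guard let metalLayer = sceneView.layer as? CAMetalLayer,
              let drawable = metalLayer.nextDrawable() else { return }
        let texture = drawable.texture
        // copy texture.getBytes(...) into a CVPixelBuffer and append it
        // through an AVAssetWriterInputPixelBufferAdaptor
    }
}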
With these in mind the rest is pretty straightforward video recording on iOS/Cocoa.
You can find all these implemented in the repo I've shared above.
I had a similar need and wanted to record the ARSceneView in the app internally, without ReplayKit, so that I can manipulate the video that is generated from the recording. I ended up using this project: https://github.com/lacyrhoades/SceneKit2Video . The project is made to render a SceneView to a video, but you can configure it to accept ARSceneViews. It works pretty well, and you can choose to get an image feed instead of the video using the delegate function if you like.

CornerRadius exactly UIView

I want to clip the bounds of my UIView so that it behaves perfectly as a circle. However I set the corner radius, mask, and clip to bounds, and although it displays correctly, it still moves as a square, as you can see in the image:
The code I have used is:
let bubble1 = UIView(frame: CGRectMake(location.x, location.y, 128, 128))
bubble1.backgroundColor = color2
bubble1.layer.cornerRadius = bubble1.frame.size.width/2
bubble1.clipsToBounds = true
bubble1.layer.masksToBounds = true
What is wrong that still keeps the edges of the view?
PS: All the views move dynamically, so when they move and hit each other they show this empty space, acting as squares instead of circles.
Finally I found what to implement: it was just using this class instead of UIView:
class SphereView: UIView {
// iOS 9 specific
override var collisionBoundsType: UIDynamicItemCollisionBoundsType {
return .Ellipse
}
}
Seen here: https://objectcoder.com/2016/02/29/variation-on-dropit-demo-from-lecture-12-dynamic-animation-cs193p-stanford-university/
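For context, here is a minimal sketch of how such a view might be used with UIKit Dynamics (the animator and behaviors here are assumptions for illustration, not part of the original question):
// With collisionBoundsType == .Ellipse, the collision behavior treats the
// views as circles instead of their square frames.
let bubble1 = SphereView(frame: CGRect(x: 40, y: 40, width: 128, height: 128))
let bubble2 = SphereView(frame: CGRect(x: 200, y: 40, width: 128, height: 128))
[bubble1, bubble2].forEach { bubble in
    bubble.backgroundColor = .blue
    bubble.layer.cornerRadius = bubble.frame.width / 2
    bubble.clipsToBounds = true
    view.addSubview(bubble)
}

// Keep the animator alive, e.g. as a property on the view controller.
let animator = UIDynamicAnimator(referenceView: view)
let collision = UICollisionBehavior(items: [bubble1, bubble2])
collision.translatesReferenceBoundsIntoBoundary = true
animator.addBehavior(collision)
animator.addBehavior(UIGravityBehavior(items: [bubble1, bubble2]))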

UIScrollView in tvOS

I have a view hierarchy similar to the one in the image below (blue is the visible part of the scene):
So I have a UIScrollView with a lot of elements, of which I am only showing the two buttons since they are relevant to the question. The first button is visible when the app is run, whereas the other one is positioned outside of the initially visible area. The first button is also the preferredFocusedView.
Now I am changing focus between the two buttons using a UIFocusGuide, and this works (checked it in didUpdateFocusInContext:). However, my scroll view does not scroll down when Button2 gets focused.
The scroll view is pinned to superview and I give it an appropriate content size in viewDidLoad of my view controller.
Any ideas how to get the scroll view to scroll?
Take a look at UIScrollView.panGestureRecognizer.allowedTouchTypes. It is an array of NSNumber with values based on UITouchType or UITouch.TouchType (depending on the language version). By default allowedTouchTypes contains two values, direct and stylus. It means that your UIScrollView instance will not respond to signals from the remote control. Add the following line to fix it:
Swift 4
self.scrollView.panGestureRecognizer.allowedTouchTypes = [NSNumber(value: UITouchType.indirect.rawValue)]
Swift 4.2 & 5
self.scrollView.panGestureRecognizer.allowedTouchTypes = [NSNumber(value: UITouch.TouchType.indirect.rawValue)]
Also, don't forget to set a correct contentSize for UIScrollView:
self.scrollView.contentSize = CGSize(width: 1920.0, height: 2000.0)
Finally I solved this by setting scrollView.contentSize to the appropriate size in viewDidLoad.
You need to add a pan gesture recognizer. I learned from here: http://www.theappguruz.com/blog/gesture-recognizer-using-swift. I added more code to keep it from scrolling strangely, e.g. in the horizontal direction.
var currentY : CGFloat = 0 //this saves current Y position
func initializeGestureRecognizer()
{
//For PanGesture Recoginzation
let panGesture: UIPanGestureRecognizer = UIPanGestureRecognizer(target: self, action: Selector("recognizePanGesture:"))
self.scrollView.addGestureRecognizer(panGesture)
}
func recognizePanGesture(sender: UIPanGestureRecognizer)
{
let translate = sender.translationInView(self.view)
var newY = sender.view!.center.y + translate.y
if(newY >= self.view.frame.height - 20) {
newY = sender.view!.center.y //make it not scrolling downwards at the very beginning
}
else if( newY <= 0){
newY = currentY //make it scrolling not too much upwards
}
sender.view!.center = CGPoint(x:sender.view!.center.x,
y:newY)
currentY = newY
sender.setTranslation(CGPointZero, inView: self.view)
}

Layering transparent images and text on top of a mapView (Swift)

I want to put a transparent box on top of a mapView and then some text on top of that.
I can do this by treating the image as a button and altering the alpha setting, but only the last UILabel I put in place remains black; the rest are muted.
Is this possible with tags or something, or does it have to be done programmatically?
Anyway, after a couple of hours of sleuthing, this is a working solution.
Declare your layer
@IBOutlet weak var layerView: UIView!
var myLayer: CALayer {
return layerView.layer
}
Put a UIView on your ViewController and link it to layerView.
Create a method/func to display the frame, e.g.
func displayMyFrame() {
myLayer.frame = CGRectMake(100, 100, 200, 40)
myLayer.backgroundColor = UIColor.whiteColor().CGColor
myLayer.borderWidth = 1.0
myLayer.borderColor = UIColor.redColor().CGColor
myLayer.shadowOpacity = 0.6
myLayer.cornerRadius = 5.0
myLayer.shadowRadius = 10.0
myLayer.opacity = 0.6
}
Put your UILabels on the view and change their tags to 1 (in this example).
Call the func in viewDidLoad.
displayMyFrame()
view.layer.insertSublayer(myLayer, atIndex: 1)
The atIndex: 1 means that this frame will be put behind the UILabels with a Tag of 1.
This works, hope that it helps.

Create PDF of dynamic size with typography using UIView template(s)

I'm new but have managed to learn a lot and create a pretty awesome (I hope) app that's near completion. One of my last tasks is to create a PDF of dynamically generated user data. It has been the most frustrating part of this whole process, as there is no real modern, clear-cut template or guide. Apple's documentation isn't very descriptive (and some parts I don't understand), and the Q&A here on Stack and the examples on Google all seem very case-specific. I thought I almost had it by using a UIScrollView, but the end result was messy and I couldn't get things to line up neatly enough, nor did I have enough control.
I believe my flaws with this task are logic related and not knowing enough about available APIs, help on either is greatly appreciated.
I have dynamically created user content filling an NSArray in a subclass of a UIViewController. That content consists of Strings and Images.
I would like to use a UIView (I'm presuming in a .xib file) to create a template with header information for the first page of the PDF (the header information is dynamic as well), any page after that can be done via code as it really is just a list.
I have a small understanding of UIGraphicsPDF... and would like to draw the text and images into the PDF and not just take a screen shot of the view.
My trouble with getting this going is:
(I'm being basic here on purpose because what I have done so far has led me nowhere)
How do I find out how many pages I'm going to need?
How do I find out if text is longer than a page and how do I split it?
How do I draw images in the PDF?
How do I draw text in the PDF?
How do I draw both text and images in the PDF but padded vertically so there's no overlap and account for Strings with a dynamic number of lines?
How do I keep track of the pages?
Thank you for reading, I hope the cringe factor wasn't too high.
So here we go. The following was made for OS X with NSView, but it's easily adaptable for UIView (I guess). You will need the following scaffold:
A) PSPrintView will handle a single page to print
class PSPrintView:NSView {
var pageNo:Int = 0 // the current page
var totalPages:Int = 0
struct PaperDimensions {
var size:NSSize // needs to be initialized from NSPrintInfo.sharedPrintInfo
var leftMargin, topMargin, rightMargin, bottomMargin : CGFloat
}
let paperDimensions = PaperDimensions(...)
class func clone() -> PSPrintView {
// returns a clone of self where most page parameters are copied
// to speed up printing
}
override func drawRect(dirtyRect: NSRect) {
super.drawRect(dirtyRect)
// Drawing code here.
// e.g. to draw a frame inside the view
let scale = convertSize(NSMakeSize(1, 1), fromView:nil)
var myContext = NSGraphicsContext.currentContext()!.CGContext
CGContextSetLineWidth(myContext, scale.height)
CGContextSetFillColorWithColor(myContext, NSColor.whiteColor().CGColor)
var rect = bounds // assumed: start from the full page view bounds, then inset it by the margins below
CGContextFillRect(myContext, rect)
rect.origin.x += paperDimensions.leftMargin
rect.origin.y += paperDimensions.bottomMargin
rect.size.width -= paperDimensions.leftMargin + paperDimensions.rightMargin
rect.size.height -= paperDimensions.topMargin + paperDimensions.bottomMargin
CGContextSetStrokeColorWithColor(myContext, NSColor(red: 1, green: 0.5, blue: 0, alpha: 0.5).CGColor)
CGContextStrokeRect(myContext, rect)
// here goes your layout with lots of String.drawInRect....
}
}
B) PSPrint: will hold the single PSPrintViews in an array and when done send them to the (PDF) printer
class PSPrint: NSView {
var printViews = [PSPrintView]()
override func knowsPageRange(range:NSRangePointer) -> Bool {
range.memory.location = 1
range.memory.length = printViews.count
return true
}
func printTheViews() {
let sharedPrintInfo = NSPrintInfo.sharedPrintInfo()
let numOfViews = printViews.count
var totalHeight:CGFloat = 0;//if not initialized to 0 weird problems occur after '3' clicks to print
var heightOfView:CGFloat = 0
// PSPrintView *tempView;
for tempView in printViews {
heightOfView = tempView.frame.size.height
totalHeight = totalHeight + heightOfView
}
//Change the frame size to reflect the amount of pages.
var newsize = NSSize()
newsize.width = sharedPrintInfo.paperSize.width-sharedPrintInfo.leftMargin-sharedPrintInfo.rightMargin
newsize.height = totalHeight
setFrameSize(newsize)
var incrementor = -1 //default the incrementor for the loop below. This controls what page a 'view' will appear on.
//Add the views in reverse, because the Y origin is at the bottom, not the top. So page 3 will have a y coordinate of 0. This is done so the order the views are placed in the array reflects what is printed.
for (var i = numOfViews-1; i >= 0; i--) {
incrementor++
let tempView = printViews[i] //starts with the last item added to the array, in this case rectangles, and then does circle and square.
heightOfView = tempView.frame.size.height
tempView.setFrameOrigin(NSMakePoint(0, heightOfView*CGFloat(incrementor))) //So for the rectangle it's placed at position '0', or the very last page.
addSubview(tempView)
}
NSPrintOperation(view: self, printInfo: sharedPrintInfo).runOperation()
}
}
C) a function to perform printing (from the menu)
func doPrinting (sender:AnyObject) {
//First get the shared print info object so we know page sizes. The shared print info object acts like a global variable.
let sharedPrintInfo = NSPrintInfo.sharedPrintInfo()
//initialize its base values.
sharedPrintInfo.leftMargin = 0
sharedPrintInfo.rightMargin = 0
sharedPrintInfo.topMargin = 0
sharedPrintInfo.bottomMargin = 0
var frame = NSRect(x: 0, y: 0, width: sharedPrintInfo.paperSize.width-sharedPrintInfo.leftMargin-sharedPrintInfo.rightMargin, height: sharedPrintInfo.paperSize.height-sharedPrintInfo.topMargin-sharedPrintInfo.bottomMargin)
//Initiate the printObject without a frame; its frame will be decided later.
let printObject = PSPrint ()
//Allocate a new instance of NSView into the variable printPageView
let basePrintPageView = PSPrintView(frame: frame)
// do additional init stuff for the single pages if needed
// ...
var printPageView:PSPrintView
for pageNo in 0..<basePrintPageView.totalPages {
printPageView = basePrintPageView.clone()
//Set the option for the printView for what it should draw.
printPageView.pageNo = pageNo
//Finally append the view to the PSPrint Object.
printObject.printViews.append(printPageView)
}
printObject.printTheViews() //print all the views, each view being a 'page'.
}
The PDF drawing code:
import UIKit
class CreatePDF {
// Create a PDF from an array of UIViews
// Return a URL of a temp dir / pdf file
func getScaledImageSize(imageView: UIImageView) -> CGSize {
var scaledWidth = CGFloat(0)
var scaledHeight = CGFloat(0)
let image = imageView.image!
if image.size.height >= image.size.width {
scaledHeight = imageView.frame.size.height
scaledWidth = (image.size.width / image.size.height) * scaledHeight
if scaledWidth > imageView.frame.size.width {
let diff : CGFloat = imageView.frame.size.width - scaledWidth
scaledHeight = scaledHeight + diff / scaledHeight * scaledHeight
scaledWidth = imageView.frame.size.width
}
} else {
scaledWidth = imageView.frame.size.width
scaledHeight = (image.size.height / image.size.width) * scaledWidth
if scaledHeight > imageView.frame.size.height {
let diff : CGFloat = imageView.frame.size.height - scaledHeight
scaledWidth = scaledWidth + diff / scaledWidth * scaledWidth
scaledHeight = imageView.frame.size.height
}
}
return CGSizeMake(scaledWidth, scaledHeight)
}
func drawImageFromUIImageView(imageView: UIImageView) {
let theImage = imageView.image!
// Get the image as it's scaled in the image view
let scaledImageSize = getScaledImageSize(imageView)
let imageFrame = CGRectMake(imageView.frame.origin.x, imageView.frame.origin.y, scaledImageSize.width, scaledImageSize.height)
theImage.drawInRect(imageFrame)
}
func drawTextFromLabel(aLabel: UILabel) {
if aLabel.text?.isEmpty == false {
let theFont = aLabel.font
let theAttributedFont = [NSFontAttributeName: theFont!]
let theText = aLabel.text!.stringByTrimmingCharactersInSet(NSCharacterSet.whitespaceAndNewlineCharacterSet()) as NSString
let theFrame = aLabel.frame
theText.drawInRect(theFrame, withAttributes: theAttributedFont)
}
}
func parseSubviews(aView: UIView) {
for aSubview in aView.subviews {
if aSubview.isKindOfClass(UILabel) {
// Draw label
drawTextFromLabel(aSubview as! UILabel)
}
if aSubview.isKindOfClass(UIImageView) {
// Draw image (scaled and at correct coordinates)
drawImageFromUIImageView(aSubview as! UIImageView)
}
}
}
func parseViewsToRender(viewsToRender: NSArray) {
for aView in viewsToRender as! [UIView] {
UIGraphicsBeginPDFPageWithInfo(CGRectMake(0, 0, 612, 792), nil)
parseSubviews(aView)
}
}
func createPdf(viewsToRender: NSArray, filename: String) -> NSURL {
// Create filename
let tempDir = NSTemporaryDirectory()
let pdfFilename = tempDir.stringByAppendingPathComponent(filename)
UIGraphicsBeginPDFContextToFile(pdfFilename, CGRectZero, nil)
// begin to render the views in this context
parseViewsToRender(viewsToRender)
UIGraphicsEndPDFContext()
return NSURL(string: pdfFilename)!
}
}
First, I made a xib file with a UIView that fits the dimensions of a single PDF page and holds my header information. This size is 612 points wide by 792 points tall.
Then I added UILabels for all of the page 1 header information I want to use (name, address, date, etc.)
I took note of y position and height of the lowest UILabel for my header information and subtracted it from the amount of vertical space in a page.
I also took note of the font and font size I wanted to use.
Then I created a class called CreatePDF
In that class I created several variables and constants, the font name, the font size, the size of a page, the remaining vertical space after header information.
In that class I created a method that takes two different arguments, one is a dictionary that I used for header information, the other is an array of UIImages and Strings.
That method calls a few other methods:
Determine the vertical height required for the items in the array
To do this I created another two methods, one to determine the height of a UILabel with any given string and one to determine the height of an image (vertical and horizontal images having different heights the way that I scale them). They each returned a CGFloat, which I added to a variable in the method that kept track of all the items in the array.
For each item that was “sized” I then added another 8 points to use as a vertical offset.
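The label-height helper mentioned above might look something like this (a sketch of the usual boundingRect approach; the exact code isn't shown here):
import UIKit

// Measure how tall a string will be when drawn at a fixed width with a given font.
func heightForText(_ text: String, font: UIFont, width: CGFloat) -> CGFloat {
    let bounds = (text as NSString).boundingRect(
        with: CGSize(width: width, height: .greatestFiniteMagnitude),
        options: [.usesLineFragmentOrigin, .usesFontLeading],
        attributes: [.font: font],
        context: nil)
    return ceil(bounds.height)
}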
Determine how many pages will be needed
The above method returned a CGFloat that I then used to figure out whether all the items will fit on one page below the header or whether more pages will be needed, and if so, how many.
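In rough terms, the arithmetic is just the following (the names here are illustrative):
// Sketch of the page-count arithmetic described above.
let firstPageSpace = pageHeight - headerHeight                   // space left under the header on page 1
let overflowHeight = max(0, totalContentHeight - firstPageSpace)
let pageCount = 1 + Int(ceil(overflowHeight / pageHeight))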
Draw a UIView
This method accepts the above mentioned dictionary, array and an estimated number of pages. It returns an array of UIViews.
In this method I create a UIView that matches the size of one PDF page. I run a loop for each page and add items to it. I check whether an item will fit by comparing its y position and height with the remaining vertical space on the page (the page height minus the current Y position). Then I add the item and keep track of its height and y position. If the remaining height won't work and I'm out of pages, I add another page.
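A condensed sketch of that loop (pageSize, headerHeight, items, itemHeight(_:) and makeItemView(_:) are illustrative helpers, not the actual names used):
var pages: [UIView] = []
var page = UIView(frame: CGRect(origin: .zero, size: pageSize))
var currentY = headerHeight                      // the first page starts below the header
for item in items {
    let height = itemHeight(item)
    if currentY + height > pageSize.height {     // the item won't fit: start a new page
        pages.append(page)
        page = UIView(frame: CGRect(origin: .zero, size: pageSize))
        currentY = 0
    }
    let itemView = makeItemView(item)
    itemView.frame.origin.y = currentY
    page.addSubview(itemView)
    currentY += height + 8                       // 8-point vertical offset between items
}
pages.append(page)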
Send the array to draw a PDF
I create the PDF context here
I take the array of UIViews as an argument. For each view I create a PDF page in the PDF context and then iterate through its subviews. If a subview is a UILabel, I send it off to a function that draws the UILabel at its frame position with its text property as the string; I create an attributed font using the variables defined in the class earlier. If it's an image, I send it to another function that also uses its frame; however, I have to send it to yet another function to determine the actual dimensions of the image drawn inside the UIImageView (they change based on scaling), and I return that for where to draw the image (this happens above too, to size it properly).
That's pretty much it. In my case I created the PDF context with a file, then ended up returning the file to whoever calls this function. The hardest part for me to wrap my head around was keeping track of the vertical positioning.
I’ll work on making the code more generic and post it up somewhere.