I am trying to build something for my own learning where I select an image from my photo library and then divide that image up into sections. I found info on how to split a single UIImage into sections, but to do that I need access to the cgImage property of the UIImage object. My problem is that cgImage is always nil/null when the image is selected from the UIImagePickerController. Below is a stripped-down version of my code; I'm hoping someone knows why the cgImage is always nil/null.
class ViewController: UIViewController, UINavigationControllerDelegate, UIImagePickerControllerDelegate {
    @IBOutlet weak var selectButton: UIButton!
    let picker = UIImagePickerController()
    var image: UIImage!
    var images: [UIImage]!

    override func viewDidLoad() {
        super.viewDidLoad()
        picker.delegate = self
        picker.sourceType = .photoLibrary
    }

    @objc func selectPressed(_ sender: UIButton) {
        self.present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        guard let image = info[UIImagePickerController.InfoKey.originalImage] as? UIImage else {
            self.picker.dismiss(animated: true, completion: nil)
            return
        }
        self.image = image
        self.picker.dismiss(animated: true, completion: nil)
        self.makePuzzle()
    }

    func makePuzzle() {
        let images = self.image.split(times: 5)
    }
}
extension UIImage {
    func split(times: Int) -> [UIImage] {
        let size = self.size
        var xpos = 0, ypos = 0
        var images: [UIImage] = []
        let width = Int(size.width) / times
        let height = Int(size.height) / times
        for x in 0..<times {
            xpos = 0
            for y in 0..<times {
                let rect = CGRect(x: xpos, y: ypos, width: width, height: height)
                let ciRef = self.cgImage?.cropping(to: rect) // this is always nil
                let img = UIImage(cgImage: ciRef!) // crash because nil
                xpos += width
                images.append(img)
            }
            ypos += height
        }
        return images
    }
}
I can't seem to get the cgImage to be anything but nil/null, and the app crashes every time. I know I can change the ! to ?? nil or something similar to avoid the crash, or add a guard, but that isn't really the problem; the problem is that cgImage is nil. I have looked around and the only thing I can find is how to get the cgImage with something like image.cgImage, but that doesn't work. I think it has something to do with the image being selected from the UIImagePickerController; maybe that doesn't create the cgImage properly? Honestly not sure and could use some help. Thank you.
This is not an answer, just a beefed up comment with code.
Your assumption that the problem may be due to the UIImagePickerController could be correct.
Here is my SwiftUI test code. It shows your split(..) code (with some minor mods) working.
extension UIImage {
    func split(times: Int) -> [UIImage] {
        let size = self.size
        var xpos = 0, ypos = 0
        var images: [UIImage] = []
        let width = Int(size.width) / times
        let height = Int(size.height) / times
        if let cgimg = self.cgImage { // <-- here
            for _ in 0..<times {
                xpos = 0
                for _ in 0..<times {
                    let rect = CGRect(x: xpos, y: ypos, width: width, height: height)
                    if let ciRef = cgimg.cropping(to: rect) { // <-- here
                        let img = UIImage(cgImage: ciRef)
                        xpos += width
                        images.append(img)
                    }
                }
                ypos += height
            }
        }
        return images
    }
}
struct ContentView: View {
    @State var imgSet = [UIImage]()

    var body: some View {
        ScrollView {
            ForEach(imgSet, id: \.self) { img in
                Image(uiImage: img).resizable().frame(width: 100, height: 100)
            }
        }
        .onAppear {
            if let img = UIImage(systemName: "globe") { // for testing
                imgSet = img.split(times: 2)
            }
        }
    }
}
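One more data point: if the picked UIImage is backed by a CIImage rather than a CGImage (which can happen with edited photos), cgImage will legitimately be nil. A minimal sketch, assuming you re-render the image before calling split(times:); this is not part of my test code above:
extension UIImage {
    // Returns a bitmap-backed copy whose cgImage is non-nil,
    // by re-drawing the image through UIGraphicsImageRenderer.
    func rasterized() -> UIImage {
        if cgImage != nil { return self }
        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { _ in
            draw(in: CGRect(origin: .zero, size: size))
        }
    }
}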
I'm taking snapshots from a PDFView in PDFKit for streaming (20 times per second), and I use this extension:
extension UIView {
    func asImageBackground(viewLayer: CALayer, viewBounds: CGRect) -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: viewBounds)
        return renderer.image { rendererContext in
            viewLayer.render(in: rendererContext.cgContext)
        }
    }
}
But the output UIImage from this extension has a high resolution, which makes it difficult to stream. I can reduce it with this extension:
extension UIImage {
    func resize(_ max_size: CGFloat) -> UIImage {
        // adjust for device pixel density
        let max_size_pixels = max_size / UIScreen.main.scale
        // work out aspect ratio
        let aspectRatio = size.width/size.height
        // variables for storing calculated data
        var width: CGFloat
        var height: CGFloat
        var newImage: UIImage
        if aspectRatio > 1 {
            // landscape
            width = max_size_pixels
            height = max_size_pixels / aspectRatio
        } else {
            // portrait
            height = max_size_pixels
            width = max_size_pixels * aspectRatio
        }
        // create an image renderer of the correct size
        let renderer = UIGraphicsImageRenderer(size: CGSize(width: width, height: height), format: UIGraphicsImageRendererFormat.default())
        // render the image
        newImage = renderer.image { (context) in
            self.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
        }
        // return the image
        return newImage
    }
}
but it adds additional workload, which makes the process even worse. Is there any better way?
Thanks
You can downsample it using ImageIO, which is recommended by Apple:
import ImageIO
import UIKit

extension UIImage {
    func downsample(to resolution: CGSize) -> UIImage? {
        let imageSourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
        guard let data = self.jpegData(compressionQuality: 0.75),
              let imageSource = CGImageSourceCreateWithData(data as CFData, imageSourceOptions) else {
            return nil
        }
        // 3x covers the densest current screens.
        let maxDimensionInPixels = Swift.max(resolution.width, resolution.height) * 3
        let downsampleOptions = [
            kCGImageSourceCreateThumbnailFromImageAlways: true,
            kCGImageSourceShouldCacheImmediately: true,
            kCGImageSourceCreateThumbnailWithTransform: true,
            kCGImageSourceThumbnailMaxPixelSize: maxDimensionInPixels
        ] as CFDictionary
        guard let downsampledImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, downsampleOptions) else {
            return nil
        }
        return UIImage(cgImage: downsampledImage)
    }
}
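A hedged usage sketch (targetSize, bigImage, and the streaming call are placeholders, not from the original answer):
// Downsample a full-resolution snapshot before streaming it.
let targetSize = CGSize(width: 320, height: 240)
if let smallImage = bigImage.downsample(to: targetSize) {
    // stream(smallImage) — placeholder for whatever sends the frame
}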
I'm trying to create a share button with SwiftUI that, when pressed, can share a generated image. I've found some tutorials that can screenshot the currently displayed view and convert it to a UIImage. But I want to create a view programmatically off-screen and then save that to a UIImage that users can share with a share sheet.
import SwiftUI
import SwiftyJSON
import MapKit

struct ShareRentalView : View {
    @State private var region = MKCoordinateRegion(center: CLLocationCoordinate2D(latitude: 32.786038, longitude: -117.237324), span: MKCoordinateSpan(latitudeDelta: 0.025, longitudeDelta: 0.025))
    @State var coordinates: [JSON] = []
    @State var origin: CGPoint? = nil
    @State var size: CGSize? = nil

    var body: some View {
        GeometryReader { geometry in
            VStack(spacing: 0) {
                ZStack {
                    HistoryMapView(region: region, pointsArray: $coordinates)
                        .frame(height: 300)
                }.frame(height: 300)
            }.onAppear {
                self.origin = geometry.frame(in: .global).origin
                self.size = geometry.size
            }
        }
    }

    func returnScreenShot() -> UIImage {
        return takeScreenshot(origin: self.origin.unsafelyUnwrapped, size: self.size.unsafelyUnwrapped)
    }
}
extension UIView {
    var renderedImage: UIImage {
        // rect of capture
        let rect = self.bounds
        // create the context of bitmap
        UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
        let context: CGContext = UIGraphicsGetCurrentContext()!
        self.layer.render(in: context)
        // get an image from the current context bitmap
        let capturedImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return capturedImage
    }
}

extension View {
    func takeScreenshot(origin: CGPoint, size: CGSize) -> UIImage {
        let window = UIWindow(frame: CGRect(origin: origin, size: size))
        let hosting = UIHostingController(rootView: self)
        hosting.view.frame = window.frame
        window.addSubview(hosting.view)
        window.makeKeyAndVisible()
        return hosting.view.renderedImage
    }
}
This is roughly my code idea at the moment. I have a view I've built that, onAppear, sets the CGPoint and CGSize of the screen capture, plus an attached method that can then take the screenshot of the view. The problem right now is that this view never renders, because I never add it to a parent view; I don't want this view to appear to the user. In the parent view I have
struct HistoryCell: View {
    ...
    private var shareRental : ShareRentalView? = nil
    private var uiimage: UIImage? = nil
    ...
    init() {
        ...
        self.shareRental = ShareRentalView()
    }
    var body: some View {
        ...
        Button(action: { self.uiimage = self.shareRental?.returnScreenShot() }) { ... }
        ...
    }
}
This doesn't work because the view I want to screenshot is never rendered. Is there a way to render it in memory or off-screen and then create an image from it? Or do I need to think of another way of doing this?
This ended up working to get a screenshot of a view that was not presented on the screen and save it as a UIImage:
extension UIView {
    func asImage() -> UIImage {
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1
        return UIGraphicsImageRenderer(size: self.layer.frame.size, format: format).image { context in
            self.drawHierarchy(in: self.layer.bounds, afterScreenUpdates: true)
        }
    }
}

extension View {
    func asImage() -> UIImage {
        let controller = UIHostingController(rootView: self)
        let size = controller.sizeThatFits(in: UIScreen.main.bounds.size)
        controller.view.bounds = CGRect(origin: .zero, size: size)
        let image = controller.view.asImage()
        return image
    }
}
And then in my parent view
var shareRental: ShareRentalView?

init() {
    ....
    self.shareRental = ShareRentalView()
}

var body: some View {
    Button(action: {
        let shareImage = self.shareRental?.asImage()
    }
This gets me almost there. The MKMapSnapshotter has a delay while loading, and the image creation happens too fast, so there is no map when the UIImage is created.
In order to get around the issue with the delay in the map loading, I created a class that builds all the UIImages and stores them in an array.
class MyUser: ObservableObject {
    ...
    public func buildHistoryRental() {
        self.historyRentals.removeAll()
        MapSnapshot().generateSnapshot(completion: self.snapShotRsp)
    }

    private func snapShotRsp(image: UIImage) {
        self.historyRentals.append(image)
    }
}
And then I made a class to create snapshot images like this:
func generateSnapshot(completion: @escaping (UIImage) -> Void) {
    let mapSnapshotOptions = MKMapSnapshotOptions()
    // Set the region of the map that is rendered (derived from the polyline).
    let polyLine = MKPolyline(coordinates: &yourCoordinates, count: yourCoordinates.count)
    let region = MKCoordinateRegionForMapRect(polyLine.boundingMapRect)
    mapSnapshotOptions.region = region
    // Set the scale of the image. We'll just use the scale of the current device, which is 2x scale on Retina screens.
    mapSnapshotOptions.scale = UIScreen.main.scale
    // Set the size of the image output.
    mapSnapshotOptions.size = CGSize(width: IMAGE_VIEW_WIDTH, height: IMAGE_VIEW_HEIGHT)
    // Show buildings and Points of Interest on the snapshot
    mapSnapshotOptions.showsBuildings = true
    mapSnapshotOptions.showsPointsOfInterest = true
    let snapshotter = MKMapSnapshotter(options: mapSnapshotOptions)
    snapshotter.start(completionHandler: { (snapshot, error) -> Void in
        if let error = error {
            print("\(error)")
        } else if let snapshot = snapshot {
            completion(self.drawLineOnImage(snapshot: snapshot))
        }
    })
}
func drawLineOnImage(snapshot: MKMapSnapshot) -> UIImage {
    let image = snapshot.image
    // for Retina screen
    UIGraphicsBeginImageContextWithOptions(self.imageView.frame.size, true, 0)
    // draw original image into the context
    image.draw(at: CGPoint.zero)
    // get the context for CoreGraphics
    let context = UIGraphicsGetCurrentContext()
    // set stroking width and color of the context
    context!.setLineWidth(2.0)
    context!.setStrokeColor(UIColor.orange.cgColor)
    // Here is the trick:
    // We use addLine() and move() to draw the line; this should be easy to understand.
    // The difficult part is that they both take CGPoint as parameters, and it would be way too complex for us to calculate by ourselves.
    // Thus we use snapshot.point() to save the pain.
    context!.move(to: snapshot.point(for: yourCoordinates[0]))
    for i in 0...yourCoordinates.count-1 {
        context!.addLine(to: snapshot.point(for: yourCoordinates[i]))
        context!.move(to: snapshot.point(for: yourCoordinates[i]))
    }
    // apply the stroke to the context
    context!.strokePath()
    // get the image from the graphics context
    let resultImage = UIGraphicsGetImageFromCurrentImageContext()
    // end the graphics context
    UIGraphicsEndImageContext()
    return resultImage!
}
It's important to return the image asynchronously via the callback. Trying to return the image directly from the function call yielded a blank map.
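For reference, a hedged sketch of consuming that callback (names match the code above; dispatching to the main queue is an assumption on my part):
MapSnapshot().generateSnapshot { image in
    DispatchQueue.main.async {
        // Update published state on the main queue.
        self.historyRentals.append(image)
    }
}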
I am trying to drag a UIImage using UIDragInteraction, implementing the procedure as documented in Apple's documentation. My code is below. Currently the image does not move even when I hold the left mouse button on the image and try to drag it (I have yet to implement the drop side, as I am trying to do this one step at a time). I am executing the code on a simulator.
My code:
class StartGameViewController: UIViewController {
    var dragInteraction: UIDragInteraction!
    var dragInteractionDelegate: UIDragInteractionDelegate!

    var dragSourceImgView: UIImageView = {
        let imgView = UIImageView()
        imgView.backgroundColor = UIColor.red.withAlphaComponent(0.5)
        imgView.contentMode = UIImageView.ContentMode.scaleToFill
        return imgView
    }()

    var dropImgSourceView: UIImageView = {
        let imgView = UIImageView()
        imgView.contentMode = UIImageView.ContentMode.scaleAspectFit
        imgView.backgroundColor = UIColor.blue.withAlphaComponent(0.5)
        return imgView
    }()

    override func viewDidLoad() {
        super.viewDidLoad()
        dragSourceImgView.image = UIImage.init(named: "rider")
        let frame = CGRect.init(x: 50, y: 50, width: 100, height: 100)
        dragSourceImgView.frame = frame
        container1.addSubview(dragSourceImgView)
        let frameDrop = CGRect.init(x: 200, y: 50, width: 100, height: 100)
        dropImgSourceView.frame = frameDrop
        container1.addSubview(dropImgSourceView)
        // Enable imageView as a drag source
        dragInteraction = UIDragInteraction.init(delegate: self)
        dragSourceImgView.addInteraction(dragInteraction)
    }
}

// DRAG
extension StartGameViewController: UIDragInteractionDelegate {
    // Create a Drag Item
    func dragInteraction(_ interaction: UIDragInteraction, itemsForBeginning session: UIDragSession) -> [UIDragItem] {
        guard let image = dragSourceImgView.image else { return [] }
        let itemProvider = NSItemProvider.init(object: image)
        let dragItem = UIDragItem.init(itemProvider: itemProvider)
        return [dragItem]
    }
}
You should set your UIImageView to allow user interaction:
dragSourceImgView.isUserInteractionEnabled = true
If you are running on an iPhone you must also set the UIDragInteraction to be enabled, since it defaults to enabled only on iPad:
dragInteraction.isEnabled = true
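Putting it together in the question's viewDidLoad (a sketch; only the two flag lines are additions):
override func viewDidLoad() {
    super.viewDidLoad()
    // ... existing setup from the question ...
    dragInteraction = UIDragInteraction(delegate: self)
    dragSourceImgView.addInteraction(dragInteraction)
    dragSourceImgView.isUserInteractionEnabled = true // UIImageView defaults to false
    dragInteraction.isEnabled = true                  // defaults to false on iPhone
}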
I would like to be able to save a UIImage array created on the Apple Watch with watchOS and play this series of images as an animation as a group background. I can build the image array and play it, but I cannot figure out how to store/save these images so I can retrieve/load them the next time I run the app, rather than building them every time the app runs.
Here is an example of how I am building the images with Core Graphics (Swift 3):
import WatchKit
import Foundation

class InterfaceController: WKInterfaceController
{
    @IBOutlet var colourGroup: WKInterfaceGroup!

    override func awake(withContext context: Any?)
    {
        super.awake(withContext: context)
    }

    override func willActivate()
    {
        var imageArray: [UIImage] = []
        for imageNumber in 1...250
        {
            let newImage: UIImage = drawImage(fade: CGFloat(imageNumber)/250.0)
            imageArray.append(newImage)
        }
        let animatedImages = UIImage.animatedImage(with: imageArray, duration: 10)
        colourGroup.setBackgroundImage(animatedImages)
        let imageRange: NSRange = NSRange(location: 0, length: 200)
        colourGroup.startAnimatingWithImages(in: imageRange, duration: 10, repeatCount: 0)
        super.willActivate()
    }

    func drawImage(fade: CGFloat) -> UIImage
    {
        let boxColour: UIColor = UIColor(red: 1.0, green: 1.0, blue: 1.0, alpha: fade)
        let opaque: Bool = false
        let scale: CGFloat = 0.0
        let bounds: CGRect = WKInterfaceDevice.current().screenBounds
        let imageSize: CGSize = CGSize(width: bounds.width, height: 20.0)
        UIGraphicsBeginImageContextWithOptions(imageSize, opaque, scale)
        let radius: CGFloat = imageSize.height/2.0
        let rect: CGRect = CGRect(x: 0.0, y: 0.0, width: imageSize.width, height: imageSize.height)
        let selectorBox: UIBezierPath = UIBezierPath(roundedRect: rect, cornerRadius: radius)
        let boxLineWidth: Double = 0.0
        selectorBox.lineWidth = CGFloat(boxLineWidth)
        boxColour.setFill()
        selectorBox.fill()
        // return the image
        let result: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return result
    }

    override func didDeactivate()
    {
        // This method is called when the watch view controller is no longer visible
        super.didDeactivate()
    }
}
Basically I am looking for a way to save and load a [UIImage] in a manner that lets me use UIImage.animatedImage(with:duration:) with the array.
Is there a way to save the image array so I can load it next time I run the app rather than rebuild the images?
Thanks
Greg
NSKeyedArchiver and NSKeyedUnarchiver did the trick. Here is Swift code for Xcode 8b4:
override func willActivate()
{
    var imageArray: [UIImage] = []
    let fileName: String = "TheRings"
    let fileManager = FileManager.default
    let url = fileManager.urls(for: .documentDirectory, in: .userDomainMask).first! as NSURL
    let theURL: URL = url.appendingPathComponent(fileName)!
    if let rings = NSKeyedUnarchiver.unarchiveObject(withFile: theURL.path) as? [UIImage]
    {
        print("retrieving rings - found rings")
        imageArray = rings
    }
    else
    {
        print("retrieving rings - can't find rings, building new ones")
        for imageNumber in 1...250
        {
            let newImage: UIImage = drawImage(fade: CGFloat(imageNumber)/250.0)
            imageArray.append(newImage)
        }
        NSKeyedArchiver.archiveRootObject(imageArray, toFile: theURL.path)
    }
    let animatedImages = UIImage.animatedImage(with: imageArray, duration: 10)
    colourGroup.setBackgroundImage(animatedImages)
    let imageRange: NSRange = NSRange(location: 0, length: 200)
    colourGroup.startAnimatingWithImages(in: imageRange, duration: 10, repeatCount: 0)
    super.willActivate()
}
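Note that on newer OS versions those convenience methods are deprecated. A hedged sketch of the secure-coding replacements (saveRings/loadRings are hypothetical helper names; theURL is built as above):
// Hypothetical helpers using the non-deprecated archiver API.
func saveRings(_ imageArray: [UIImage], to theURL: URL) throws {
    let data = try NSKeyedArchiver.archivedData(withRootObject: imageArray, requiringSecureCoding: false)
    try data.write(to: theURL, options: .atomic)
}

func loadRings(from theURL: URL) throws -> [UIImage]? {
    let data = try Data(contentsOf: theURL)
    return try NSKeyedUnarchiver.unarchivedObject(ofClasses: [NSArray.self, UIImage.self], from: data) as? [UIImage]
}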
The app I'm working on uses collection view cells to display data to the user. I want the user to be able to share the data contained in the cells, but there are usually too many cells to resize and fit onto a single iPhone-screen-sized window for a screenshot.
So the problem I'm having is getting an image of all the cells in a collection view, both on-screen and off-screen. I'm aware that off-screen cells don't actually exist, but I'd be interested in a way to fake an image and draw in the data (if that's possible in Swift).
In short, is there a way to programmatically create an image from a collection view and the cells it contains, both on and off screen, with Swift?
Update
If memory is not a concern :
mutating func screenshot(scale: CGFloat) -> UIImage {
    let currentSize = frame.size
    let currentOffset = contentOffset // temp store current offset
    frame.size = contentSize
    setContentOffset(CGPointZero, animated: false)
    // it might need a delay here to allow loading data.
    let rect = CGRect(x: 0, y: 0, width: self.bounds.size.width, height: self.bounds.size.height)
    UIGraphicsBeginImageContextWithOptions(rect.size, false, UIScreen.mainScreen().scale)
    self.drawViewHierarchyInRect(rect, afterScreenUpdates: true)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    frame.size = currentSize
    setContentOffset(currentOffset, animated: false)
    return resizeUIImage(image, scale: scale)
}
This works for me:
github link -> contains up to date code
getScreenshotRects creates the offsets to which to scroll and the frames to capture. (naming is not perfect)
takeScreenshotAtPoint scrolls to the point, sets a delay to allow a redraw, takes the screenshot and returns this via completion handler.
stitchImages creates a rect with the same size as the content and draws all images in them.
makeScreenshots uses the didSet on a nested array of UIImage and a counter to create all images while also waiting for completion. When this is done it fires its own completion handler.
Basic parts :
scroll collectionview -> works
take screenshot with delay for a redraw -> works
crop images that are overlapping -> apparently not needed
stitch all images -> works
basic math -> works
maybe freeze screen or hide when all this is happening (this is not in my answer)
Code :
protocol ScrollViewImager {

    var bounds : CGRect { get }

    var contentSize : CGSize { get }

    var contentOffset : CGPoint { get }

    func setContentOffset(contentOffset: CGPoint, animated: Bool)

    func drawViewHierarchyInRect(rect: CGRect, afterScreenUpdates: Bool) -> Bool
}

extension ScrollViewImager {

    func screenshot(completion: (screenshot: UIImage) -> Void) {
        let pointsAndFrames = getScreenshotRects()
        let points = pointsAndFrames.points
        let frames = pointsAndFrames.frames
        makeScreenshots(points, frames: frames) { (screenshots) -> Void in
            let stitched = self.stitchImages(images: screenshots, finalSize: self.contentSize)
            completion(screenshot: stitched!)
        }
    }

    private func makeScreenshots(points: [[CGPoint]], frames: [[CGRect]], completion: (screenshots: [[UIImage]]) -> Void) {
        var counter: Int = 0
        var images: [[UIImage]] = [] {
            didSet {
                if counter < points.count {
                    makeScreenshotRow(points[counter], frames: frames[counter]) { (screenshot) -> Void in
                        counter += 1
                        images.append(screenshot)
                    }
                } else {
                    completion(screenshots: images)
                }
            }
        }
        makeScreenshotRow(points[counter], frames: frames[counter]) { (screenshot) -> Void in
            counter += 1
            images.append(screenshot)
        }
    }

    private func makeScreenshotRow(points: [CGPoint], frames: [CGRect], completion: (screenshots: [UIImage]) -> Void) {
        var counter: Int = 0
        var images: [UIImage] = [] {
            didSet {
                if counter < points.count {
                    takeScreenshotAtPoint(point: points[counter]) { (screenshot) -> Void in
                        counter += 1
                        images.append(screenshot)
                    }
                } else {
                    completion(screenshots: images)
                }
            }
        }
        takeScreenshotAtPoint(point: points[counter]) { (screenshot) -> Void in
            counter += 1
            images.append(screenshot)
        }
    }

    private func getScreenshotRects() -> (points: [[CGPoint]], frames: [[CGRect]]) {
        let vanillaBounds = CGRect(x: 0, y: 0, width: self.bounds.size.width, height: self.bounds.size.height)
        let xPartial = contentSize.width % bounds.size.width
        let yPartial = contentSize.height % bounds.size.height
        let xSlices = Int((contentSize.width - xPartial) / bounds.size.width)
        let ySlices = Int((contentSize.height - yPartial) / bounds.size.height)
        var currentOffset = CGPoint(x: 0, y: 0)
        var offsets: [[CGPoint]] = []
        var rects: [[CGRect]] = []
        var xSlicesWithPartial: Int = xSlices
        if xPartial > 0 {
            xSlicesWithPartial += 1
        }
        var ySlicesWithPartial: Int = ySlices
        if yPartial > 0 {
            ySlicesWithPartial += 1
        }
        for y in 0..<ySlicesWithPartial {
            var offsetRow: [CGPoint] = []
            var rectRow: [CGRect] = []
            currentOffset.x = 0
            for x in 0..<xSlicesWithPartial {
                if y == ySlices && x == xSlices {
                    let rect = CGRect(x: bounds.width - xPartial, y: bounds.height - yPartial, width: xPartial, height: yPartial)
                    rectRow.append(rect)
                } else if y == ySlices {
                    let rect = CGRect(x: 0, y: bounds.height - yPartial, width: bounds.width, height: yPartial)
                    rectRow.append(rect)
                } else if x == xSlices {
                    let rect = CGRect(x: bounds.width - xPartial, y: 0, width: xPartial, height: bounds.height)
                    rectRow.append(rect)
                } else {
                    rectRow.append(vanillaBounds)
                }
                offsetRow.append(currentOffset)
                if x == xSlices {
                    currentOffset.x = contentSize.width - bounds.size.width
                } else {
                    currentOffset.x = currentOffset.x + bounds.size.width
                }
            }
            if y == ySlices {
                currentOffset.y = contentSize.height - bounds.size.height
            } else {
                currentOffset.y = currentOffset.y + bounds.size.height
            }
            offsets.append(offsetRow)
            rects.append(rectRow)
        }
        return (points: offsets, frames: rects)
    }

    private func takeScreenshotAtPoint(point point_I: CGPoint, completion: (screenshot: UIImage) -> Void) {
        let rect = CGRect(x: 0, y: 0, width: self.bounds.size.width, height: self.bounds.size.height)
        let currentOffset = contentOffset
        setContentOffset(point_I, animated: false)
        delay(0.001) {
            UIGraphicsBeginImageContextWithOptions(rect.size, false, UIScreen.mainScreen().scale)
            self.drawViewHierarchyInRect(rect, afterScreenUpdates: true)
            let image = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            self.setContentOffset(currentOffset, animated: false)
            completion(screenshot: image)
        }
    }

    private func delay(delay: Double, closure: () -> ()) {
        dispatch_after(
            dispatch_time(
                DISPATCH_TIME_NOW,
                Int64(delay * Double(NSEC_PER_SEC))
            ),
            dispatch_get_main_queue(), closure)
    }

    private func crop(image image_I: UIImage, toRect rect: CGRect) -> UIImage? {
        guard let imageRef: CGImageRef = CGImageCreateWithImageInRect(image_I.CGImage, rect) else {
            return nil
        }
        return UIImage(CGImage: imageRef)
    }

    private func stitchImages(images images_I: [[UIImage]], finalSize: CGSize) -> UIImage? {
        let finalRect = CGRect(x: 0, y: 0, width: finalSize.width, height: finalSize.height)
        guard images_I.count > 0 else {
            return nil
        }
        UIGraphicsBeginImageContext(finalRect.size)
        var offsetY: CGFloat = 0
        for imageRow in images_I {
            var offsetX: CGFloat = 0
            for image in imageRow {
                let width = image.size.width
                let height = image.size.height
                let rect = CGRect(x: offsetX, y: offsetY, width: width, height: height)
                image.drawInRect(rect)
                offsetX += width
            }
            offsetX = 0
            if let firstimage = imageRow.first {
                offsetY += firstimage.size.height
            } // maybe add error handling here
        }
        let stitchedImages = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return stitchedImages
    }
}

extension UIScrollView : ScrollViewImager {

}
Draw the bitmap data of your UICollectionView into a UIImage using UIKit graphics functions. Then you'll have a UIImage that you could save to disk or do whatever you need with it. Something like this should work:
// your collection view
@IBOutlet weak var myCollectionView: UICollectionView!
//...
let image: UIImage!
// draw your UICollectionView into a UIImage
UIGraphicsBeginImageContext(myCollectionView.frame.size)
myCollectionView.layer.renderInContext(UIGraphicsGetCurrentContext()!)
image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
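For reference, a hedged sketch of the same idea with the modern UIGraphicsImageRenderer API (not part of the original answer):
// Render the collection view's layer into an image;
// the renderer manages context setup and teardown.
let renderer = UIGraphicsImageRenderer(size: myCollectionView.frame.size)
let image = renderer.image { ctx in
    myCollectionView.layer.render(in: ctx.cgContext)
}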
For Swift 4, to take a screenshot of a UICollectionView:
func makeScreenShotToShare() -> UIImage {
    UIGraphicsBeginImageContextWithOptions(CGSize.init(width: self.colHistory.contentSize.width, height: self.colHistory.contentSize.height + 84.0), false, 0)
    colHistory.scrollToItem(at: IndexPath.init(row: 0, section: 0), at: .top, animated: false)
    colHistory.layer.render(in: UIGraphicsGetCurrentContext()!)
    let row = colHistory.numberOfItems(inSection: 0)
    let numberofRowthatShowinscreen = self.colHistory.bounds.size.height / (self.arrHistoryData.count == 1 ? 130 : 220)
    let scrollCount = row / Int(numberofRowthatShowinscreen)
    for i in 0..<scrollCount {
        colHistory.scrollToItem(at: IndexPath.init(row: (i+1)*Int(numberofRowthatShowinscreen), section: 0), at: .top, animated: false)
        colHistory.layer.render(in: UIGraphicsGetCurrentContext()!)
    }
    let image: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return image
}
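To actually share the result, a hedged usage sketch (the share-sheet part is an addition, not from the original answer):
// Present a standard share sheet with the stitched screenshot.
let screenshot = makeScreenShotToShare()
let activityVC = UIActivityViewController(activityItems: [screenshot], applicationActivities: nil)
present(activityVC, animated: true)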