What is the preferred approach for an in-app web browser? My app needs a toolbar at the bottom, and I need to be able to take a screenshot of the visited web page by tapping a button on that toolbar.
Here is what I'm running into: I want to tap a link in my app, open an in-app browser, take a screenshot, and save it to Core Data along with other information about the site.
I'm able to save a screenshot to the camera roll with WKWebView's takeSnapshot() method, but I've been unable to save it to Core Data. As of last night, I've found a few functions on SO that show how to take a screenshot of a UIView and return a UIImage, but I've been unable to convert that back to Data, which is what Core Data expects for binary attributes. Does anyone have a good resource on saving WKWebView snapshots to Core Data?
In one of the functions I attempted last night, I was able to return the UIImage object, but I was unable to convert it back to Data. I can save all the other data about the site to Core Data; it's only the image I can't save. In fact, when I attempted to save the result of WKWebView's takeSnapshot() method directly, it was nil.
I'm going to answer your question in two parts.
WebKit vs SFSafariViewController
If you want a custom toolbar with a button for taking screenshots, I would use a UIViewController with a custom UIToolbar, where you can add whatever buttons you want, and embed a WKWebView in the same view controller; SFSafariViewController doesn't let you add your own toolbar buttons. That should be pretty straightforward; a minimal sketch is below.
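A minimal sketch of that setup, assuming UIKit with programmatic layout (the class and button names are illustrative, not from the question):

import UIKit
import WebKit

// Illustrative sketch: a browser screen with a bottom toolbar and an embedded WKWebView.
class BrowserViewController: UIViewController {
    let webView = WKWebView()
    let toolbar = UIToolbar()

    override func viewDidLoad() {
        super.viewDidLoad()
        webView.translatesAutoresizingMaskIntoConstraints = false
        toolbar.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(webView)
        view.addSubview(toolbar)

        // Whatever buttons you want, e.g. a screenshot button.
        toolbar.items = [UIBarButtonItem(title: "Snapshot", style: .plain,
                                         target: self, action: #selector(takeScreenshot))]

        NSLayoutConstraint.activate([
            webView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor),
            webView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            webView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            webView.bottomAnchor.constraint(equalTo: toolbar.topAnchor),
            toolbar.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            toolbar.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            toolbar.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor)
        ])

        webView.load(URLRequest(url: URL(string: "https://example.com")!))
    }

    @objc func takeScreenshot() {
        // Capture and save; see the snapshot and Core Data code below.
    }
}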
Saving screenshot to Core Data
By the sounds of it, you may be taking the screenshot the wrong way. You want to render the layer into an image context, as follows:
let layer = UIApplication.shared.keyWindow!.layer
let scale = UIScreen.main.scale

// Render the window's layer into an image context at screen scale.
UIGraphicsBeginImageContextWithOptions(layer.frame.size, false, scale)
layer.render(in: UIGraphicsGetCurrentContext()!)
let screenshot = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()   // balance the begin call so the context isn't leaked
Now you have a screenshot saved as a UIImage? in your screenshot variable. Next up, you should convert it to data with:

let data = UIImagePNGRepresentation(screenshot!)   // screenshot!.pngData() in Swift 4.2+

Which gives you a Data representation of your image that you can then save to Core Data.
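From there, a hedged sketch of the save, assuming a Site entity with a Binary Data attribute named screenshot (the entity and attribute names are my assumptions, not from your model):

// Assumed entity/attribute names; adjust to your data model.
let site = Site(context: managedObjectContext)
site.url = webView.url?.absoluteString   // other info about the site
site.screenshot = data                   // the PNG data from above
do {
    try managedObjectContext.save()
} catch {
    print("Failed to save screenshot: \(error)")
}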
Related
I have an app that presents a bunch of thumbnail images and they are automatically cached using:
WebImage(url: URL(string: currentWallpaper.thumbnailURL), options: SDWebImageOptions.retryFailed)
This works great. When a user selects a thumbnail I present a full screen version:
WebImage(url: URL(string: currentWallpaper.hiresUrl), options: SDWebImageOptions.retryFailed)
When a user is viewing an image at full resolution, they can swipe to the next or previous hi-res image without going back to the thumbnail catalog, so I prefetch both the previous and next images to make the swipe feel immediate.
After a while, loading images gets sluggish. I'm thinking I'd like to set up a separate cache for the hi-res images, so that when the user goes back to the thumbnail view I can clear it while keeping the thumbnails cached. Or just limit the number of images in that cache, something like that.
I'm just not finding any examples of how to point WebImage to a custom cache. I see how to create one:
let hiresCache = SDImageCache(namespace: "hiresCache")
SDImageCachesManager.shared.addCache(hiresCache)
Just not exactly how to use it. Ideally WebImage would simply let you pass a cache to it, but I'm not seeing that as an option. Or maybe someone has a better solution?
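Not a definitive answer, but a sketch of one possibility: SDWebImage can route a single request through its own manager via the .customManager context option, and WebImage in SDWebImageSwiftUI accepts a context: parameter, so something like the following may do what you want (unverified against your SDWebImage version):

// Back hi-res loads with their own cache via a custom manager.
let hiresCache = SDImageCache(namespace: "hiresCache")
let hiresManager = SDWebImageManager(cache: hiresCache, loader: SDImageLoadersManager.shared)

WebImage(url: URL(string: currentWallpaper.hiresUrl),
         options: .retryFailed,
         context: [.customManager: hiresManager])

// When the user returns to the thumbnail grid, drop only the hi-res cache:
hiresCache.clearMemory()
hiresCache.clearDisk(onCompletion: nil)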
I am capturing the content of a WKWebView into a UIImage via its CALayer, using the following extension on UIImage:
extension UIImage {
    class func imageFromLayer(layer: CALayer) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(layer.bounds.size, layer.isOpaque, 0.0)
        layer.render(in: UIGraphicsGetCurrentContext()!)
        let img = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return img!
    }
}
When I render the captured image into a UIImageView, I see everything from the web view in the image view; everything except videos, whose content instead appears as a black box. I encounter the same problem if I use WKWebView's takeSnapshot function or UIView's drawHierarchy function to capture a UIImage of the web view. Does anyone know why videos are displayed as black boxes, and how to fix this?
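For reference, the snapshot API mentioned above looks like this (a minimal usage sketch; the black boxes appear in its output too):

let config = WKSnapshotConfiguration()
webView.takeSnapshot(with: config) { image, error in
    // image is a UIImage? of the web view's visible content;
    // video regions still come out black here as well.
}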
Update 1: Additionally, all GIFs in Google Images are frozen until clicked. A GIF can be unfrozen by clicking on it, at which point it gets resized to take up the whole screen and starts looping.
Update 2: I have read that the recording software OBS sometimes experiences the same problem with capturing videos when the browser that it is capturing uses hardware acceleration. I also read that WKWebView uses hardware acceleration. Is there a way to disable hardware acceleration in WKWebView to try this out? Alternatively, does UIWebView use hardware acceleration?
Update 3: There is another post that reports a similar issue. The poster reports that all GPU-rendered content appears black in their screenshots and believes that hardware acceleration is causing the issue.
Update 4: By using Xcode's "Quick Look" feature I have observed that the layer passed to imageFromLayer does not contain the videos (i.e. videos are already represented as black boxes in the passed-in layer). This makes me think that WKWebView does not render videos to the same layer as the rest of its content. It also makes me think that if I could access this hypothetical video layer, I would be able to combine the two snapshots to form a snapshot of the entire web view.
Update 5: I tried modifying my imageFromLayer function to take a UIView instead of a CALayer so that I could pass the WKWebView into it directly. The modified version is shown below, but it did not solve the issue of videos not being displayed. Nevertheless, through Xcode's "Quick Look" feature I was able to observe that the videos are already missing in the UIView of the passed-in WKWebView object.
class func createImage(view: UIView) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(view.frame.size, view.isOpaque, 0.0)
    view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
    let rasterizedView = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return rasterizedView
}
Update 6: Trying to capture the web view via the UIWindow that contains it also results in videos not being captured. I used the below code (note that keyWindow is deprecated) and Xcode's Quick Look feature to test this idea.
let window = UIApplication.shared.keyWindow
let windowLayer = window!.layer
I am using an image well to accept JPEG images being pasted or dragged into my OS X app. The problem is that I am struggling to get the original JPEG data, as OS X seems to require me to get the TIFF representation of the NSImage if I want to upload via Alamofire.
Does AlamofireImage have a fancy way of getting the original URL / original raw data without converting to TIFF first?
Actually, with an NSImageWell you don't have many options regarding the dropped image. It's a convenient class for showing dropped images, but as soon as you need to do more, it's not up to the task.
I suggest you use an NSImageView instead, and add drag and drop capabilities to it like in my example here: https://stackoverflow.com/a/29233824/2227743
This way you can easily get the dropped image's URL (and filename), and the image data itself of course.
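A minimal sketch of that idea (the class and callback names are mine, not from the linked answer):

import Cocoa

// A drop-aware NSImageView that hands back the original file URL and bytes.
class DroppableImageView: NSImageView {
    var onImageFileDropped: ((URL, Data) -> Void)?

    override func awakeFromNib() {
        super.awakeFromNib()
        registerForDraggedTypes([.fileURL])
    }

    override func draggingEntered(_ sender: NSDraggingInfo) -> NSDragOperation {
        return .copy
    }

    override func performDragOperation(_ sender: NSDraggingInfo) -> Bool {
        // Read the dropped file's URL and its untouched bytes (no TIFF round-trip).
        guard let url = NSURL(from: sender.draggingPasteboard) as URL?,
              let data = try? Data(contentsOf: url) else { return false }
        image = NSImage(data: data)
        onImageFileDropped?(url, data)   // hand the original JPEG data to the uploader
        return true
    }
}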
I'm looking for an open source photo gallery that can load photos from Core Data.
Here's what I want to do:
Give the gallery a core data object with binary image data
have the gallery inflate the image when it is time to display it
So far I've been unable to find an open source gallery that does this. I tried to modify FGallery, but it crashes very often when I use Core Data as the image source; I'm unable to find out what's causing this behavior and have had to abandon that approach.
I checked EGOPhotoViewer, but it only supports local and network images.
Is there an open source photo gallery built for images stored in Core Data?
Your help is much appreciated.
Core Data is just a way to store your data, like the documents directory or the app bundle, and it behaves like local data. If you're using EGOPhotoViewer, you can pass your image data straight to a UIImageView on a swipe or button tap; the image is simply stored in binary form, so use:

UIImage(data: yourImageDataFromCoreData)

As for fetching the image data saved in Core Data, use an ID or some other attribute that is unique for every image you save.
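A hedged sketch of that fetch, assuming a Photo entity with uniqueID and imageData attributes (the names are my assumptions, not from your model):

// Assumed entity/attribute names; adjust to your data model.
let request: NSFetchRequest<Photo> = Photo.fetchRequest()
request.predicate = NSPredicate(format: "uniqueID == %@", photoID)
if let photo = try? context.fetch(request).first,
   let data = photo.imageData,
   let image = UIImage(data: data) {
    imageView.image = image   // inflate the binary data only when displaying it
}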
I am wondering if anyone can offer any advice towards solving this problem.
I am building an app that uses a UIScrollView with paging enabled, with each page corresponding to downloaded and parsed XML data. Part of that XML data is a URL to an image.
Now, the app would take forever to load if it downloaded the image for every XML entry up front before pushing it, with the rest of the XML data, to the respective UIScrollView page created at runtime.
Is there a way to detect which UIScrollView page you are on and download its image on demand, while still letting the rest of the data download at launch?
Try reading the SDWebImage source or Apple's LazyTableImages sample code.
Just as reference, I solved it by adding all of the image views to an NSArray. Using the scroll view delegate, I was able to determine which page number I was on, and translated that page number into an index I used to access the appropriate UIImageView in the array. A rough sketch is below.
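A rough sketch of that approach, with illustrative names (imageViews and imageURLs are my assumptions):

// Lazy-load the image for the page that just became visible.
func scrollViewDidEndDecelerating(_ scrollView: UIScrollView) {
    // Page index = horizontal offset divided by the page width.
    let page = Int(scrollView.contentOffset.x / scrollView.bounds.width)
    let imageView = imageViews[page]
    guard imageView.image == nil else { return }   // already loaded

    URLSession.shared.dataTask(with: imageURLs[page]) { data, _, _ in
        guard let data = data, let image = UIImage(data: data) else { return }
        DispatchQueue.main.async { imageView.image = image }
    }.resume()
}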
It seems to work great!
Might you offer a better solution?