I am trying to use NSURLCache when loading a link in a UIWebView. Inside the UIWebView I have a bunch of image links, and I want to cache those images so I can reuse them later. How do I do this? I have already set the shared NSURLCache's size.
EDIT:
I am doing a loadHTMLString:htmlString baseURL:baseURL to load the content into the UIWebView; maybe that's why the cache never gets used?
If you are expecting disk cache, rather than memory cache, then you should take a look at SDURLCache:
https://github.com/steipete/SDURLCache
From the README:
On iPhone OS, Apple removed on-disk cache support, for reasons unknown. Some say it's to save flash-memory life, others argue it's to save disk capacity. As explained in NSURLCacheStoragePolicy, the NSURLCacheStorageAllowed constant is always treated as NSURLCacheStorageAllowedInMemoryOnly and there is no way to force it back; the code is simply gone on this platform. Whatever the reason Apple removed this feature, you may be interested in having on-disk HTTP request caching in your application. SDURLCache gives this feature back to iPhone OS for you.
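For reference, here's a minimal sketch of plugging SDURLCache in as the shared cache, typically in application:didFinishLaunchingWithOptions:. The capacities and cache path are illustrative values, and this assumes SDURLCache keeps NSURLCache's initWithMemoryCapacity:diskCapacity:diskPath: initializer:

#import "SDURLCache.h"

// Somewhere early in app startup, before any requests are made.
NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory,
                                                           NSUserDomainMask, YES) objectAtIndex:0];
SDURLCache *urlCache =
    [[SDURLCache alloc] initWithMemoryCapacity:1024 * 1024          // 1 MB in-memory cache (example value)
                                  diskCapacity:20 * 1024 * 1024     // 20 MB on-disk cache (example value)
                                      diskPath:[cachesDir stringByAppendingPathComponent:@"URLCache"]];
[NSURLCache setSharedURLCache:urlCache];
[urlCache release]; // omit under ARC

Regarding the edit in the question: loadHTMLString:baseURL: never makes a network request for the HTML itself, but the image subresources are still fetched through the URL loading system, so they should go through the shared cache as long as the server sends cacheable headers.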
I've been working with the video_player package in Flutter. I'm mostly testing videos streamed from a URL. My code is very similar to the examples.
Everywhere I read about it, I see that this library does not support caching of videos. But what exactly does that mean? What is happening behind the scenes, and how would the behaviour change if caching were actually implemented? How is this different from buffering? Are the video files simply downloaded to the device?
If yes, then where are those files kept?
One additional question: how can I check the network consumption caused by such a connection? I've tried using DevTools, but the network tab is always empty.
One last thing: is it possible to pre-initialize the next videos, so that when we switch between them they are already partially pre-loaded?
You can use a package that helps you manage caching: flutter_cache_manager
https://pub.dev/packages/flutter_cache_manager
You can use this in combination with video_player. However, you would have to download the whole file first to then be able to retrieve it for video_player to consume.
An idea would be to stream the video and also download a copy locally. This, however, would consume more data than just downloading and caching the video first, then playing it locally.
As for how to check network consumption, I am not sure.
Starting with the theory:
1. A cache is a high-speed storage area, while a buffer is a normal area of RAM used for temporary storage.
2. A cache is made from static RAM, which is faster than the slower dynamic RAM used for a buffer.
3. A buffer is mostly used for input/output processes, while a cache is used during reading and writing processes from the disk.
4. A cache can also be a section of the disk, while a buffer is only a section of RAM.
5. A buffer can be used in keyboards to edit typing mistakes, while a cache cannot.
When it comes to video buffering, just look at the YouTube app. You can see the buffer being built as the grey line grows ahead of the red line. Information stored this way is mostly not accessible at all, since Android uses a combination of RAM allocation for both caching and buffering as it sees fit for the currently active process.
Technically you could try pre-loading different videos by starting and pausing all of them at once, but I can't imagine how much tampering with the system's memory management that would take; even YouTube doesn't work like that.
I want to open a large text file and then search its contents.
I loaded the file into an NSString with stringWithContentsOfFile.
Everything works with a 30 MB file, but I am concerned about what happens if I load a 200 MB file, which I want to do.
Is the complete NSString held in memory? If so, it wouldn't work on the iPhone. Is there a solution for such large files on the iPhone?
A good way to read a large file would be to buffer small chunks of it at a time.
Not sure of the exact API methods you could use to do this, but it is fairly standard practice for audio, video, etc to read a small amount of the file into memory, process this, and remove it from memory as you continue through the file.
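Not part of the original answer, but as an illustration of the chunked approach, here is a sketch using NSFileHandle that scans the file in fixed-size chunks for a UTF-8 search term, keeping a small overlap so a match spanning a chunk boundary isn't missed. The path, search term, and chunk size are placeholders:

// Sketch: search a large file without loading it all into memory.
NSString *path = @"/path/to/large.txt";                       // hypothetical path
NSData *needle = [@"search term" dataUsingEncoding:NSUTF8StringEncoding];
NSFileHandle *handle = [NSFileHandle fileHandleForReadingAtPath:path];

NSUInteger chunkSize = 1024 * 1024;                           // read 1 MB at a time
NSMutableData *window = [NSMutableData data];                 // current search window
unsigned long long base = 0;                                  // file offset of the start of `window`
BOOL found = NO;

while (!found) {
    NSData *chunk = [handle readDataOfLength:chunkSize];
    if ([chunk length] == 0) break;                           // end of file
    [window appendData:chunk];

    NSRange r = [window rangeOfData:needle
                            options:0
                              range:NSMakeRange(0, [window length])];
    if (r.location != NSNotFound) {
        NSLog(@"Found match at file offset %llu", base + r.location);
        found = YES;
    } else {
        // Keep only the last (needle length - 1) bytes so a match that
        // straddles the chunk boundary is still detected next iteration.
        NSUInteger keep = [needle length] - 1;
        if ([window length] > keep) {
            NSUInteger drop = [window length] - keep;
            [window replaceBytesInRange:NSMakeRange(0, drop) withBytes:NULL length:0];
            base += drop;
        }
    }
}
[handle closeFile];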
Since the limit isn't documented, the way to check would be to actually profile it on the target device. People may be able to give their limits here, but those are totally relative. Also, a good memory-management scheme and design will help you avoid running out of memory in the first place.
I am using a custom UITableViewCell with 3 UIImages in a UITableView with 50-100 rows. It's similar to the UITableViewCell the Facebook iPhone app uses for its news feed view.
The application has 4 similar UITableViews which may be open at the same time via a UITabBarController.
The images are lazy-loaded; there is a cache on disk so that no image is downloaded twice from the server, and there is also an NSMutableDictionary of images allowing in-memory reuse of the same image, e.g. when a user's profile picture appears multiple times.
This setup is extremely fast but takes a lot of memory even after using the NSMutableDictionary for image reuse.
I tried a variation without the NSMutableDictionary where images are either loaded from the server or pulled from the disk cache every time cellForRowAtIndexPath is called. This setup is extremely memory efficient but causes a noticeable lag in the UITableView scrolling.
A mid-way approach is to free the NSMutableDictionary for images when a low memory warning is received.
I would really appreciate recommendations for optimizing memory usage and speed in this scenario, and/or insight into how the Facebook iPhone app or Three20 handles this conceptually.
I have an app that is very similar to yours in many respects (uses Three20, has several tabs across the bottom, each tab can have a table, each cell can have one or two images); and the approach I'm taking is the one you mentioned near the end of your post:
A mid-way approach is to free the NSMutableDictionary for images when a low memory warning is received.
Personally, I quite like iOS's approach to memory management, of warning me when memory is getting tight. The Mac/PC approach of "just use all the memory you want, we'll swap it out to disk if memory gets tight" has the disadvantage that even though the OS is the only one who really knows how much pressure there is on memory, it isn't telling you. I think what every polite app would really like to say (if apps could talk) is, "I'd be happy to use as much memory as you'll give me, but I don't want to be a bother, I don't want to slow down any other apps, so if you could please give me a hint as to how much memory I can use without causing problems, I would appreciate it."
Well that's what iOS's memory warnings give you, in my opinion. So, just keep as many images cached in memory as you want; and when you get a memory warning, empty the in-memory cache. To me it's really the best of both worlds.
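As a sketch of that pattern (the class and method names here are made up; the notification name and dictionary calls are standard Foundation/UIKit), keep the dictionary cache and clear it when the system posts the memory-warning notification:

// Hypothetical in-memory image cache that empties itself on memory warnings.
@interface ImageCache : NSObject {
    NSMutableDictionary *images;
}
- (UIImage *)imageForKey:(NSString *)key;
- (void)setImage:(UIImage *)image forKey:(NSString *)key;
@end

@implementation ImageCache
- (id)init {
    if ((self = [super init])) {
        images = [[NSMutableDictionary alloc] init];
        [[NSNotificationCenter defaultCenter]
            addObserver:self
               selector:@selector(handleMemoryWarning:)
                   name:UIApplicationDidReceiveMemoryWarningNotification
                 object:nil];
    }
    return self;
}

- (UIImage *)imageForKey:(NSString *)key { return [images objectForKey:key]; }
- (void)setImage:(UIImage *)image forKey:(NSString *)key { [images setObject:image forKey:key]; }

- (void)handleMemoryWarning:(NSNotification *)note {
    [images removeAllObjects];   // drop the in-memory copies; the disk cache still has them
}

- (void)dealloc {
    [[NSNotificationCenter defaultCenter] removeObserver:self];
    [images release];            // omit under ARC
    [super dealloc];             // omit under ARC
}
@end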
Also, you should definitely take a look at Three20's TTURLCache, although I can't tell you a lot about it because I haven't dug into it very much. What I do know is:
If you retrieve your images via TTURLImageResponse, it will automatically cache them in TTURLCache's image cache.
You can also store and load your own images (and other data) in the TTURLCache.
Three20 seems to take an approach similar to what I am talking about. Take a look at this code from Three20Network/Sources/TTURLCache.m (the NO argument means don't remove from disk, only remove from memory):
- (void)didReceiveMemoryWarning:(void*)object {
  // Empty the memory cache when memory is low
  [self removeAll:NO];
}
In addition, that class also allows you to set a maximum size for the in-memory cache, but by default there is no maximum size.
You would purge images when they go offscreen, then read the images from the local cache on demand from a secondary worker thread when needed. Since one can zip through tables, add support for read cancellation (especially for requests that come off the server). NSOperation is a good API for this; a sketch follows below.
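Here's a rough sketch of that idea as an NSOperation subclass; the class name, the target/action callback, and how you wire it to your cells are all placeholders for whatever your code looks like:

// Sketch: load an image from the local disk cache on a background queue,
// with cancellation support for cells that scroll away.
@interface ImageLoadOperation : NSOperation {
    NSString *path;
    id target;
    SEL action;   // called on the main thread with the loaded UIImage
}
- (id)initWithPath:(NSString *)aPath target:(id)aTarget action:(SEL)anAction;
@end

@implementation ImageLoadOperation
- (id)initWithPath:(NSString *)aPath target:(id)aTarget action:(SEL)anAction {
    if ((self = [super init])) {
        path = [aPath copy];
        target = aTarget;
        action = anAction;
    }
    return self;
}

- (void)main {
    if ([self isCancelled]) return;                          // cell already scrolled away
    UIImage *image = [UIImage imageWithContentsOfFile:path];
    if ([self isCancelled] || image == nil) return;
    [target performSelectorOnMainThread:action withObject:image waitUntilDone:NO];
}

- (void)dealloc {
    [path release];    // omit under ARC
    [super dealloc];   // omit under ARC
}
@end

// Usage: add instances to a shared NSOperationQueue from cellForRowAtIndexPath:,
// keep a reference per row, and call -cancel when the cell is reused or goes offscreen.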
If you know your table is small, you could opt to avoid purging in such cases.
Also, rescaling the image to the size you'll display it at is often a good idea (depending on how far you want to take the optimization). Assuming the source is larger than the displayed size, this will reduce memory requirements, drawing time, disk space, and disk read times.
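For the rescaling point, a common sketch is to draw the image into a context of the target size once and cache the result (this assumes UIGraphicsBeginImageContextWithOptions, available since iOS 4):

// Scale an image down to the size it will actually be displayed at.
UIImage *ScaledImage(UIImage *source, CGSize targetSize) {
    UIGraphicsBeginImageContextWithOptions(targetSize, YES /* opaque */, 0.0 /* use screen scale */);
    [source drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaled;
}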
You can also read Three20's sources to see what they have done.
Apple just released some sample code on lazy loading images in a UITableView a week ago. I checked it out and implemented it into my own UITableView (which is a drawRect one for fast scrolling), to see if there was a difference from what I was already doing.
After implementing I am not sure what is best; the new code or what I already had. I am not seeing much of a speed improvement on my 3GS.
"Sandbox" method: Load images lazily, then save to local tmp folder in the sandbox. Each time the cell is displayed it looks for whether an image with that filename is already located in the sandbox folder. If it is, it retrieves the image and displays it, if not it continues with the download, saves it locally and then displays it. The benefit with this is that the images won't be blank the second time you open the app. They will already be downloaded and ready for displaying.
Caching method: This also loads the images lazily; however, now I keep a UIImage on each object in the array that's displayed in the table view. Instead of saving the image locally, I now download the image and store it on the object in the array. Then, instead of checking for the filename every single time, it just checks whether the UIImage != nil and uses the cached image (or downloads it if nil).
A small difference is also that the caching code resizes the image to the exact size displayed in the cell before caching it, whereas the image used in the sandbox code is actually a bit larger than what needs to be displayed, which means it also has to be resized on the fly while scrolling. I read months ago that this could be somewhat expensive, but I am not sure whether it makes much of a difference compared to what is saved by using the cached, pre-sized image instead of the sandbox-stored one.
I guess my question would be whether I should even bother with the caching code. Again, the new code won't immediately show images on a fresh launch, whereas the old code does because they're already in the sandbox. Since I am not reusing images, I have a lot of images to load (from the sandbox or the cache), so I am not noticing a huge difference in speed. In fact, on my 3GS it's almost impossible to tell, in my opinion. The scrolling is not silky smooth, and I assume this is due to the large number of images that I cannot reuse (a different image for each cell). I am also wondering whether the sandbox method would get slower once there are 1000+ images in the folder, for example, since it would eventually have to look through many more files than just 100 or so.
I hope I am making sense. I wanted to be pretty thorough with the details, and I am happy to give more details if needed.
Thanks!
If you have code that already works, and there's not a pressing problem, then don't change it.
If your scrolling actually is too slow, then perhaps you could use a mixture of the two ideas: try to get the UIImage first; if it's not there, load it from the sandbox; and if it's not there either, download it.
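A sketch of that tiered lookup inside cellForRowAtIndexPath: might look like the following; the item model object, cacheDirectory, and startImageDownloadForItem:atIndexPath: helper are hypothetical stand-ins for your existing code:

// Memory cache first, then the sandbox copy, then the network.
UIImage *image = item.image;                               // 1. in-memory, kept on the model object
if (image == nil) {
    NSString *path = [cacheDirectory stringByAppendingPathComponent:item.imageFilename];
    if ([[NSFileManager defaultManager] fileExistsAtPath:path]) {
        image = [UIImage imageWithContentsOfFile:path];    // 2. sandbox copy
        item.image = image;                                // promote to the in-memory cache
    } else {
        [self startImageDownloadForItem:item atIndexPath:indexPath];  // 3. async download (placeholder)
    }
}
cell.thumbnailView.image = image;                          // nil shows a placeholder until the download lands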
The only good way to tell if there is any discernible difference in performance is to use profiling tools like Instruments (for measuring things like display framerate for the two techniques) or Shark (to determine hotspots in your code). There could be small differences in your exact implementation that could potentially cause significant differences between any general answer we could give and the actual performance you see in your application.
The thing that primarily concerns me with the "sandbox" method is not performance but disk space usage. Users won't appreciate you filling up their iPhone or iPod Touch with unnecessary files, especially if all the images aren't consistently used or if the set of used images changes often. Without knowing more about your application, it's impossible to guess how often these cached images would be loaded.
If you're testing locally on your own device, you might be on a Wi-Fi network. My recommendation would be to turn Wi-Fi off for part of your testing to see how the two approaches perform when you have to fetch all the images over the cellular network. I would also recommend trying to find an older device (iPhone 3G or older) because the 3GS does in fact hide potential performance issues that could be annoying for users on older devices.
I have personally used the LazyTableImages technique in my apps many times (provided it hasn't changed drastically between WWDC09 and the recent 'release') and find it to be just what I need. Caching images on disk wouldn't be an option in my case, however, and you shouldn't take my anecdote too strongly into account - profile your own code and use the results it shows.
Edit: The obvious answer is that accessing an in-memory cache is going to be faster than accessing the filesystem, but of course the final word on that is left up to profiling. If the images are already in memory, they don't need to be read from flash and parsed by UIImage. The traditional tradeoff comes into play here though - in-memory caching vs. disk space.
While it may be faster for you to store your images in memory, you need to be very sure that you correctly handle memory warnings in your application (as you should be doing anyway!). Otherwise, long periods of use will lead to many, many images in your in-memory cache, which will trigger memory warnings, and if your application is not built to handle these, it will eventually be killed by the OS due to lack of memory resources.
There are pros and cons in both approaches that you present - I suggest using elements of both in your app.
It's better to keep your images in memory and save them later (perhaps when your app quits). If you have a lot of images, it might be faster to save them with Core Data than as regular files.
It's also better to avoid doing any resizing on the fly, i.e. in your tableView:cellForRowAtIndexPath: or tableView:willDisplayCell:forRowAtIndexPath: methods or in any method that has to do with drawing your cells' content view. If you can, ask the image provider (content management?) to supply images at the size that your table view displays.
Can anyone point me in the right direction here? I want to respond when my application receives a memory warning (I want to know how to respond to this notification). Also, how much memory can my application use?
Any articles or book references would be great. Thanks.
If your app gets a memory warning (such as in your view controller's didReceiveMemoryWarning method) you need to release any non-critical data. Anything you're holding that is cached, for example, or that can be regenerated, should be dumped.
For example, if your app crunches some numbers and stores the result in a big array, if you're not actively using that array, you should release it. Then, regenerate it when you need it again.
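For example, in a view controller it can be as simple as the following, where cachedResults is a stand-in for whatever data you can regenerate:

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];   // on older iOS versions this also releases the view if it has no superview
    [cachedResults release];           // under ARC, just set the ivar/property to nil
    cachedResults = nil;               // regenerate lazily the next time it's needed
}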
A little more information is here:
Observing Low-Memory Warnings
I've heard informally that warnings get issued when your application hits about 22 MB. (Any allocated memory is included -- the iPhone keeps everything in physical RAM and doesn't page out to any other storage.) Given that the phone only has 128 MB of total RAM, this seems plausible.
That limit does not include the memory used by shared system libraries, such as the Objective-C runtime. And while I'm not entirely sure on this, I don't think WebKit's memory usage is included for the UIWebView component, as I believe WebKit is always loaded (but again, not 100% sure on that).
The best thing to do when you hit this limit is free anything that you can easily regenerate or re-read from input files, such as views, images, and cached data.