I have extracted some code from Apple's PhotoScroller sample to use CATiledLayer. I have an 8000x7000 px image that is loaded from the internet in tiles.
This works as a kind of map feature in my app. I also have two almost identical images with different overlays (I tried adding only the overlay, without luck).
I have a UISegmentedControl to toggle between the three choices, and I want the imageView to load the tiles from the selected image. So if the user zooms in on one of the images and then selects another option, the scale and coordinates stay the same, and the imageView loads the chosen image into the necessary tile spots.
I have partially managed this. Or rather, I have actually managed this, but when I select another option the whole screen goes black as soon as [imageView removeFromSuperview]; is called. After being black for a couple of seconds (depending on internet speed), it shows the correct tiles.
I want the layer to "fade" over to the next layer if possible. As you may know, when using CATiledLayer the first layer is the whole image in low resolution, and when you zoom in, the necessary tiles at the next level of detail "fade" in over it.
I basically need to give the (TilingView *)imageView a "reload" command and have it "fade" over the previous image.
I tried commenting out [imageView removeFromSuperview];, and that actually got me close to what I want. When I zoomed in and selected another option, the new image faded over the old one. However, when I zoomed back out, the old image was clearly still sitting in the background (behind the new image), not responding to anything. I need to remove it from its superview at a later point, but I no longer have access to it, because the new image has taken its place as imageView. I know people might want to see code here, but I really have no idea what code to show. And CATiledLayer is SO POORLY documented that I am having a hard time understanding what's going on.
I made it work with some sketchy code. There are probably better solutions, but this is the only one I know of at the moment.
The problem was that when I removed [imageView removeFromSuperview];, the old image was never unloaded, so it stayed in memory forever. Since it was not affected by zooming and scrolling, it was always there in the background when I zoomed out. If I switched between options multiple times, multiple images piled up in the background.
I simply created what I call a "helper": a second view reference that takes over the old imageView when switching options, so it is not removed from the superview just yet, since a premature removal results in a black screen while the new image hasn't loaded.
I now call [helper removeFromSuperview]; in the delegate methods scrollViewWillBeginZooming: AND scrollViewWillBeginDragging:. This means that at most two images are held in memory between the moment the user picks another option and the moment they scroll or zoom.
Note: if the user scrolls or zooms immediately after switching options in the UISegmentedControl, the screen turns black because the new image hasn't been loaded yet.
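A rough sketch of this helper approach, assuming a PhotoScroller-style setup where imageView, helper and scrollView are ivars and tilingViewForOption: is a placeholder for however you build a TilingView for a given segment:

// Swap in the new tiled view on segment change, but keep the old one on
// screen so nothing goes black while the new tiles are still downloading.
- (void)segmentChanged:(UISegmentedControl *)sender
{
    if (helper != nil) {
        [helper removeFromSuperview];   // drop any leftover from a previous switch
    }
    helper = imageView;

    imageView = [self tilingViewForOption:sender.selectedSegmentIndex]; // placeholder
    imageView.frame = helper.frame;
    [scrollView addSubview:imageView];
}

// Remove the stale view once the user interacts again; by then the new
// tiles have usually faded in.
- (void)scrollViewWillBeginZooming:(UIScrollView *)sv withView:(UIView *)view
{
    [helper removeFromSuperview];
    helper = nil;
}

- (void)scrollViewWillBeginDragging:(UIScrollView *)sv
{
    [helper removeFromSuperview];
    helper = nil;
}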
Related
My app is behaving sluggishly. If I pop up a UIActionSheet, for example, instead of sliding in smoothly it stutters in over about 5 frames. I know that ideally you should have as few views on screen as possible, but that's what I've got anyway.
Any suggestions for speeding it up?
EDIT:
On my view I have:
A custom navigation bar in place of the regular one. It's a UIImageView using an image file, and it has a QuartzCore shadow. It contains 3 buttons. Two of these buttons have two UIImages each, for the normal and highlighted states, generated in code when the view is shown. The other button uses an image file for both normal and highlighted.
Under that lies a background image from a file. On top of the background is a UITableView. By default it doesn't have any cells (the user adds them). We'll ignore the cells, since it's sluggish regardless of whether they are there or not.
The header of the table view contains some labels and an editable UITextView. The size of the header changes as more lines are added to the text view. It also has a background image, which is transparent so you can see the view's background image behind it. It's loaded from a file, and a texture image on top of it is also loaded from a file.
The footer is simply a background image loaded from a file with the same texture on top.
Andrew, I'm afraid you haven't been quite specific enough to isolate the exact problem, but there are a couple of things I have picked out. Firstly, check that your table view is set to be opaque, and also try to design your app so your table cells can be opaque; I'm assuming your design will allow this. You need to really know how to optimise view rendering performance if you want your table and its cells to appear translucent over other content, and it may be that you would need to develop your own custom, specialised alternative to UITableView if that is something you really need (it can be done, but it's quite advanced stuff).
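For example, a minimal sketch of the opaque settings (the white background is just a placeholder for whatever your design uses):

// In viewDidLoad (or wherever the table view is configured):
self.tableView.opaque = YES;
self.tableView.backgroundColor = [UIColor whiteColor];

// In tableView:cellForRowAtIndexPath:, once the cell exists:
cell.opaque = YES;
cell.backgroundColor = [UIColor whiteColor];
cell.contentView.opaque = YES;
cell.contentView.backgroundColor = [UIColor whiteColor];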
You also mention using a Quartz shadow. You should be able to use UIKit for drawing shadows around images unless you have some specialist requirement. Are you sure you need to use Quartz for what you want to do? Apologies if you already know this, but if you are fairly new to iOS development and have been looking up how to do shadows, you may have found the Quartz APIs for doing that and assumed that is the solution, when (depending on what you need) you will probably be better off staying with UIKit. As a general rule of thumb, only use Quartz if you are sure you can't do what you want with the UIKit APIs alone.
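If you do end up keeping the Core Animation shadow on the navigation bar image view, one common mitigation (my suggestion, not something from the question; navBarImageView is a placeholder) is to give the layer an explicit shadowPath so it doesn't have to derive the shadow shape from the view's contents on every pass:

#import <QuartzCore/QuartzCore.h>

// Supplying the exact outline lets Core Animation skip the expensive
// offscreen pass it otherwise needs to compute the shadow shape.
navBarImageView.layer.shadowPath =
    [UIBezierPath bezierPathWithRect:navBarImageView.bounds].CGPath;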
Another thing to check: if you are using Quartz, then you are probably getting the graphics context for the UIImageView and drawing into it in drawRect:. Depending on how your view hierarchy is configured, and if your navigation bar view is set to be transparent over the top of the UITableView, your custom drawRect: implementation may be getting called unnecessarily on every animation frame, and that would be a big drain on performance.
Given the level of information you have given, I'm having to guess a bit and can't give a precise answer. However, for a definitive understanding of how to optimise UIView performance I recommend checking out this video (though you will need an Apple Developer account to be able to access it):
https://developer.apple.com/videos/wwdc/2011/
Session 121 – Understanding UIKit Rendering
Hope this helps. Paul.
I have a big UIImage (2000x2000). The image is drawn every time the app starts and copied to a CALayer.
At the moment I put a UIScrollView on the main view and create a CALayer with the drawn image.
Scrolling while zoomed in looks fine. But at minimum zoom, when the whole image is visible, scrolling slows down and stops responding quickly to touch movement.
So, the question: what can I do to improve scrolling performance?
The approach I would take is to use a lower-resolution version of your image at lower zoom levels (lower = zoomed out).
First, see this post for resizing UIImages.
Respond to the scrollViewDidEndZooming:withView:atScale: method of UIScrollViewDelegate and switch images when a certain zoom level is reached. This will take some trial and error to find the correct balance; you may even want to render your image at several different resolutions. Be sure to generate the different-sized UIImages in advance so there is no delay while zooming.
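A minimal sketch of that delegate method, assuming fullImage and smallImage have been prepared ahead of time and that 0.5 is just an example threshold:

// UIScrollViewDelegate: swap to a pre-scaled image once the zoom scale
// drops below some threshold (the threshold and property names are placeholders).
- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView
                       withView:(UIView *)view
                        atScale:(CGFloat)scale
{
    if (scale < 0.5) {
        self.imageView.image = self.smallImage;   // pre-resized, e.g. 500x500
    } else {
        self.imageView.image = self.fullImage;    // the original 2000x2000 image
    }
}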
I'm trying to make a level-of-detail line chart, where the user can zoom in/out horizontally using two fingers, which grows the contentSize of the UIScrollView. They can also scroll horizontally to shift left or right and see more of the chart (check any stock on Google Finance charts to get an idea of what I'm talking about). Potentially, the scroll view could grow to up to 100x its original size as the user zooms in.
My questions are:
- Has anyone had any experience with UIScrollViews that have such a large contentSize? Will it work?
- The view inside the scroll view could potentially be really huge, since the user is zooming in. How is this handled in memory?
- Just a thought, but would it be possible to use UITableViewCells, oriented to scroll horizontally, to page the data in and out?
This is kind of an open ended question right now - I'm still brainstorming myself. If anyone has any ideas or has implemented such a thing before, please respond with your experience. Thanks!
This is quite an old topic, but I still want to share some of my experience.
Using such a large UIView (100x its original size) in a UIScrollView can cause memory warnings. You should avoid rendering the entire UIView at once.
A better way to implement this is to render only the area you can see plus the area just around it, so the UIScrollView can scroll within that region smoothly. But what if the user scrolls out of the area that has been rendered? Use the delegate to get notified when the user scrolls out of the pre-rendered area, and render the new area that is about to be shown.
The basic idea behind this implementation is to use 9 UIViews (or more) to tile a bigger area. When the user scrolls (or moves) from the old position to a new one, just move some of the UIViews to new places, making sure that one UIView is the main view you mostly see and the other 8 are arranged around it.
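As a very rough sketch of that recentring idea (kTileSize and tileViewForColumn:row: are placeholders I've made up; real code would also need to reuse tiles that scroll out of range):

// Keep a 3x3 grid of tile views centred on the visible rect.
- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    CGRect visible = scrollView.bounds;   // bounds.origin tracks contentOffset
    NSInteger firstCol = (NSInteger)floor(CGRectGetMinX(visible) / kTileSize) - 1;
    NSInteger firstRow = (NSInteger)floor(CGRectGetMinY(visible) / kTileSize) - 1;

    for (NSInteger row = firstRow; row < firstRow + 3; row++) {
        for (NSInteger col = firstCol; col < firstCol + 3; col++) {
            // Reuse a tile that moved out of range, or create one lazily.
            UIView *tile = [self tileViewForColumn:col row:row];
            tile.frame = CGRectMake(col * kTileSize, row * kTileSize,
                                    kTileSize, kTileSize);
            // Render the content for this tile here if it hasn't been drawn yet.
        }
    }
}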
Hope it is useful.
I have something similar, although probably not at the size you're talking about. The UIScrollView isn't a problem. The problem is that if you're placing UIViews on it (rather than drawing the lines yourself), UIViews that are well, well off the screen continue to exist in memory. If you're actually drawing the lines by creating your own UIView and responding to drawRect:, it's fine.
Assuming that you're a reasonably experienced programmer, getting a big scroll view working that draws parts of the chart is only a day's work, so my recommendation would be to create a prototype, run it under the object allocations tool, and see if that indicates any problems.
Sorry for the vagueness of my answer; it's a brainstorming question.
But still, this approach (in the example above) is not good enough in some cases, because we only rendered a limited area of the UIScrollView.
The user can perform different gestures in a UIScrollView: a drag or a fling. With a drag, the 8 pre-rendered small UIViews are enough to cover the scrolled area in most cases. But with a fling, the UIScrollView can travel over a very large area when the user makes a quick movement, and that area is completely blank while scrolling (because we didn't render it). Even if we can display the right content after the UIScrollView stops scrolling, the blank area during scrolling isn't very friendly to the user.
For some apps this is OK, Google Maps for example: since the data can't be downloaded immediately, waiting for it is reasonable.
But if the data is local, we should eliminate this blank area as much as we can, so pre-rendering the area that is about to be scrolled is crucial. Unlike UITableView, UIScrollView doesn't have the ability to tell us which cell is going to be displayed and which is going to be recycled, so we have to do it ourselves. The method scrollViewWillEndDragging:withVelocity:targetContentOffset: of UIScrollViewDelegate is called just before the UIScrollView starts decelerating (scrollViewWillBeginDecelerating is also called before deceleration, but in that method we don't know what content will be displayed or scrolled over). So, based on UIScrollView.contentOffset.x and the targetContentOffset parameter, we know exactly where the UIScrollView starts and where it will stop, and can pre-render that area to make the scrolling smoother.
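A sketch of what that could look like (prepareTilesFromX:toX: is a made-up placeholder for whatever pre-rendering your tiling code does):

// UIScrollViewDelegate (iOS 5+): we know where the fling will land before
// the deceleration animation starts, so we can pre-render that region.
- (void)scrollViewWillEndDragging:(UIScrollView *)scrollView
                     withVelocity:(CGPoint)velocity
              targetContentOffset:(inout CGPoint *)targetContentOffset
{
    CGFloat startX = scrollView.contentOffset.x;
    CGFloat endX   = targetContentOffset->x;

    // Pre-render everything between the current offset and where the
    // scroll view will come to rest, plus one screen width of margin.
    [self prepareTilesFromX:MIN(startX, endX)
                        toX:MAX(startX, endX) + CGRectGetWidth(scrollView.bounds)];
}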
I'm trying to place a red tint on all the screens of my iPhone application. I've experimented with a bitmap and found I get the effect I want by compositing a dark red color onto the screen image using Multiply (kCGBlendModeMultiply).
So the question is how to efficiently do this in real time on the iPhone?
One dumb way might be to grab a bitmap of the current screen, composite into that bitmap, and then write the composited bitmap back to the screen. This seems like it would almost certainly be too slow. In addition, I'd need some way of knowing when part of the screen has been redrawn so I can update the tinting.
I can almost get the effect I want by putting a red, translucent, fullscreen UIView above everything. That tints everything red without further intervention on my part, but the effect is much "muddier" than the result of the composite.
So do any wizards out there know of a mechanism I can use to automatically composite the red over the app, similar to what the translucent red UIView does?
I managed to more or less make this work, but with some side effects:
I set up a UIView on top of all my app's views (attached to the window) which has userInteractionEnabled disabled and is opaque.
This UIView has a custom drawRect: method which first fills the complete area with red and then, after a "screenshot" of my window's view hierarchy has been taken, renders that image with
CGContextSetBlendMode(c, kCGBlendModeMultiply);
into the UIView.
To keep this UIView in sync with the current state of the app's UIViews, I constantly produce "screenshots" and render them as fast as possible.
I set up an NSTimer which does this snapshotting/rendering at a defined frequency and which is added to the NSRunLoop for the tracking mode.
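Roughly, the overlay's drawing looks like this (a sketch of the idea, not a drop-in implementation; the snapshot property and the window plumbing are assumptions):

#import <QuartzCore/QuartzCore.h>   // for renderInContext:

// Overlay view: fill with red, then multiply the latest "screenshot"
// of the window's view hierarchy on top of it.
- (void)drawRect:(CGRect)rect
{
    CGContextRef c = UIGraphicsGetCurrentContext();

    // 1. Flood the overlay with the tint colour.
    CGContextSetFillColorWithColor(c, [UIColor redColor].CGColor);
    CGContextFillRect(c, self.bounds);

    // 2. Composite the snapshot of the views underneath with Multiply.
    //    self.snapshot is assumed to be refreshed by the NSTimer described above.
    [self.snapshot drawInRect:self.bounds
                    blendMode:kCGBlendModeMultiply
                        alpha:1.0];
}

// Producing the snapshot (called from the timer, hidden overlay excluded
// from the hierarchy being captured):
- (UIImage *)snapshotOfView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, YES, 0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}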
RESULT: some really laggy response from the UI with several fancy side effects, but it's still usable if you do not set the snapshotting/rendering frequency too high.
See screenshot here...
The result looks okay, but usability really suffers a lot. I had a look at the OpenGL examples before trying this approach, but OpenGL is a whole lot of different (mostly C) code which sits very close to the hardware and gives you a real headache.
So, the described approach is what I will go with for my next app. I hope Apple accepts it even though it degrades the user experience during night-vision mode. If they simply made CALayer filter-backed, my problem would be solved a whole lot better and would perform nicely.
You could try this: subclass UIView and add code to its drawRect: method to draw the overlay. Then make your UIView subclass pose as UIView everywhere in your app with
class_poseAs ([CustomUIView class], [UIView class]);
I have a bunch of small PNG images, each about 45x45 pixels, so not really big. There are about 40 of them right now.
I want the user to be able to select one of them as their avatar image. For this, I created a brand new view with a controller class. Now the problem is: how do I display all those images to the user? There's no "big view". When the user touches one of them, it gets selected and the view switches back to the main view, where they will see the selected image. When they touch it, the image selection view appears again.
So I thought about a table view, but it doesn't feel right. The images have no titles to display, so it would be a big waste of screen space.
Any ideas? Should I programmatically generate a grid of UIImageView objects?
A grid is correct; think of the iPhone Photos application. There's no need to make a completely new widget, though: add multiple image views to each table row, segmenting the images across the rows.
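One way the "image views per row" idea might look (a sketch assuming ARC; avatarImages is a placeholder NSArray of the 45x45 UIImages, and the layout numbers are arbitrary):

// tableView:cellForRowAtIndexPath: with 4 avatar thumbnails per row.
- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    static NSString *CellID = @"AvatarRow";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellID];
    if (cell == nil) {
        cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                      reuseIdentifier:CellID];
        for (NSInteger i = 0; i < 4; i++) {
            UIImageView *iv = [[UIImageView alloc]
                initWithFrame:CGRectMake(20 + i * 70, 10, 45, 45)];
            iv.tag = i + 1;   // so the view can be found again on reuse
            [cell.contentView addSubview:iv];
        }
    }
    for (NSInteger i = 0; i < 4; i++) {
        NSInteger index = indexPath.row * 4 + i;
        UIImageView *iv = (UIImageView *)[cell.contentView viewWithTag:i + 1];
        iv.image = (index < (NSInteger)[avatarImages count])
                 ? [avatarImages objectAtIndex:index] : nil;
    }
    // Touch handling (a tap gesture per image view, or hit testing in
    // didSelectRowAtIndexPath:) is left out of the sketch.
    return cell;
}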
A grid seems like a good solution, as it mirrors the wallpaper UI in Settings, so the user knows what to expect.
Another option in this case would be to use a UIPickerView. It takes up less screen space and can be shown on the main screen (just pop it up from the bottom, let them pick one, then dismiss it).
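A sketch of the picker variant, using the view-based delegate method (again assuming an avatarImages array of the 45x45 UIImages):

// UIPickerViewDelegate: return an image view per row instead of a title.
- (UIView *)pickerView:(UIPickerView *)pickerView
            viewForRow:(NSInteger)row
          forComponent:(NSInteger)component
           reusingView:(UIView *)view
{
    UIImageView *imageView = (UIImageView *)view;
    if (imageView == nil) {
        imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 45, 45)];
        imageView.contentMode = UIViewContentModeCenter;
    }
    imageView.image = [avatarImages objectAtIndex:row];
    return imageView;
}

// Give the rows a little breathing room around the 45 px images.
- (CGFloat)pickerView:(UIPickerView *)pickerView rowHeightForComponent:(NSInteger)component
{
    return 55.0;
}

// UIPickerViewDataSource
- (NSInteger)numberOfComponentsInPickerView:(UIPickerView *)pickerView
{
    return 1;
}

- (NSInteger)pickerView:(UIPickerView *)pickerView numberOfRowsInComponent:(NSInteger)component
{
    return [avatarImages count];
}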