I'm trying to achieve a sort of dynamic UIView masking effect. Here is a sketch:
So as you can see, I'm trying to create a UIView that can effectively cut through an image to reveal the image behind it. I already know how to return an image with a mask statically; however, I would like the "revealer" to be draggable (I'll use a pan gesture) and live.
Does anyone have any ideas or starting points on how to achieve this? Thanks
(NOTE: My demo says White layer, but I'd actually like to show another image or photo).
Masking an image is not that difficult.
This link shows the basics.
http://iosdevelopertips.com/cocoa/how-to-mask-an-image.html
But personally I think I would make two UIImageViews and crop the content of the draggable UIView. I'm not sure, but I would expect that clipping and panning the second image will be less computationally expensive than applying the mask and will get you a better frame rate.
So I would do: a UIImageView of the full image; a UIView on top of it with a white background color and some transparency to make it look white; then a UIImageView with the image either placed or cropped so that only the correct section is showing.
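Something like this rough sketch could work (ARC assumed; the image name @"photo" and the handlePan: method are placeholders, not anything from your project). The revealer clips to its bounds, and the image inside it is counter-offset every time the revealer moves so it stays lined up with the base image behind it.

// In the view controller, e.g. in viewDidLoad:
UIImage *photo = [UIImage imageNamed:@"photo"];

UIImageView *baseImageView = [[UIImageView alloc] initWithImage:photo];
baseImageView.frame = self.view.bounds;
[self.view addSubview:baseImageView];

// Semi-transparent white layer (or a second image) covering the base image.
UIView *whiteLayer = [[UIView alloc] initWithFrame:self.view.bounds];
whiteLayer.backgroundColor = [UIColor colorWithWhite:1.0 alpha:0.85];
[self.view addSubview:whiteLayer];

// The draggable "revealer": it clips to its bounds and contains a second
// copy of the image, offset so it lines up with the base image behind it.
UIView *revealer = [[UIView alloc] initWithFrame:CGRectMake(60, 60, 120, 120)];
revealer.clipsToBounds = YES;

UIImageView *revealedImageView = [[UIImageView alloc] initWithImage:photo];
revealedImageView.frame = CGRectMake(-revealer.frame.origin.x, -revealer.frame.origin.y,
                                     self.view.bounds.size.width, self.view.bounds.size.height);
[revealer addSubview:revealedImageView];
[self.view addSubview:revealer];

UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self
                                                                      action:@selector(handlePan:)];
[revealer addGestureRecognizer:pan];

// Elsewhere in the view controller:
- (void)handlePan:(UIPanGestureRecognizer *)gesture
{
    UIView *revealer = gesture.view;
    CGPoint translation = [gesture translationInView:self.view];

    // Move the revealer, then counter-offset the inner image so the revealed
    // portion stays registered with the base image behind it.
    revealer.center = CGPointMake(revealer.center.x + translation.x,
                                  revealer.center.y + translation.y);
    UIImageView *inner = [revealer.subviews objectAtIndex:0];
    inner.frame = CGRectMake(-revealer.frame.origin.x, -revealer.frame.origin.y,
                             inner.frame.size.width, inner.frame.size.height);

    [gesture setTranslation:CGPointZero inView:self.view];
}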
I have an NSImageView which contains an image. Is there a simple way to draw a border around just the image (which might be smaller than the NSImageView), and not the entire NSImageView?
[Answering my own question.]
On the off chance that someone comes across this page looking to get something similar done, a simple way to do it is by using a wrapper view and setting a border on the inner view.
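A minimal sketch of that setup, assuming layer-backed views on OS X 10.8+ and an image named @"photo" (the names and sizes are just placeholders): the wrapper keeps the larger layout footprint, while the inner NSImageView is sized to the image and carries the border.

#import <QuartzCore/QuartzCore.h>
// ...
// The wrapper view takes the larger layout frame.
NSView *wrapper = [[NSView alloc] initWithFrame:NSMakeRect(0, 0, 300, 300)];

// The inner image view is sized to match the image itself, so the border hugs the image.
NSImageView *imageView = [[NSImageView alloc] initWithFrame:NSMakeRect(50, 50, 200, 200)];
imageView.image = [NSImage imageNamed:@"photo"];
imageView.wantsLayer = YES;
imageView.layer.borderWidth = 2.0;
imageView.layer.borderColor = [NSColor blackColor].CGColor;
[wrapper addSubview:imageView];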
I've been searching for this for a long time and I can't find a solution. I have an animated label that crosses the screen of the iPhone (like the title of a song does in the Music app). Well, I'd like to add the "fade in/out" effect like the Music app has. The easy solution is to open Photoshop, create this simple image, and then add it on top of the label. Well, under the label I have an image with a black background. The image can be zoomed in, and then the image with the fade in/out effect can be seen, and it doesn't look good. Is there any possibility to do this programmatically? Thanks
PS: if there's another possibility other than doing this programmatically, I'll appreciate the answer as well.
Edit: Here's the image capture of the problem
I'll approach it in a non-programming way.
The image reference you gave us for the Music app you seem to be emulating has a different gradient than the one you drew in the second image.
If you notice in the image, the gradient has not fully completed its transition from clear to black before the words are cut off. I would say in Photoshop run the gradient from clear to 80% alpha black and then draw a 100% alpha black rectangle to finish it off, as per the image. The white is just showing you what it looks like without the black background.
Now as for the zooming. Correct me if I am wrong, but it sounds like you want a viewing window for the image so that once you have zoomed into it, it will fade to either side, but still be viewable/movable in the center. This means that the image has to be zoom-able, but once you have zoomed the "fade in/out" should not be zoom-able.
Just make sure you aren't scaling the fader by keeping it separate from the scrollView of your background image.
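As a rough sketch of that last point, assuming the Photoshopped gradient is bundled as @"fade_overlay" and the zoomable image lives inside a UIScrollView called scrollView (both names are assumptions): add the overlay as a sibling above the scroll view rather than as a subview, so pinch-zooming never scales it.

// scrollView contains the zoomable background image; the fader sits above it
// in the superview, so zooming the scroll view's content never scales it.
UIImageView *fadeOverlay = [[UIImageView alloc]
    initWithImage:[UIImage imageNamed:@"fade_overlay"]];
fadeOverlay.frame = scrollView.frame;
fadeOverlay.userInteractionEnabled = NO;   // let touches fall through to the scroll view
[self.view addSubview:fadeOverlay];        // added last, so it sits above the scroll view and the scrolling label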
I'm trying to recreate this UITableView cell design.
This is as far as I have got...
I'm looking for a way of stretching the background of my UILabel dynamically. It needs to stretch at a specific point in the middle of my background.png image.
Does something like this exist? Am I going about solving this problem the right way?
I'm very new to iPhone dev so please be gentle.
Well, there are many solutions. First, you can set the image as a background color.
This is what you have done already by the looks of it, but you need to make the image repeatable, meaning it can't really have a start or an end.
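For that first approach, a tileable image can be set as a pattern color. This is only a sketch and assumes an image named @"label_bg" that tiles seamlessly, and a label called myLabel:

myLabel.backgroundColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"label_bg"]];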
Another way is to add a UIImageView behind the UILabel and set its image to a stretchable image with a left and top cap.
This will stretch the image but will leave the top/bottom and beginning/end the way they are.
You can read more about this in the Apple doc: http://developer.apple.com/library/ios/#documentation/uikit/reference/UIImage_Class/Reference/Reference.html
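A short sketch of that second approach using the cap API; the image name, the cap sizes, and myLabel are assumptions you would replace with your own:

UIImage *raw = [UIImage imageNamed:@"background"];
// The caps are preserved at their original size; the single column and row
// just past each cap are what get stretched.
UIImage *stretchable = [raw stretchableImageWithLeftCapWidth:10 topCapHeight:10];

UIImageView *labelBackground = [[UIImageView alloc] initWithImage:stretchable];
labelBackground.frame = myLabel.frame;   // stretches to whatever size the label needs
[myLabel.superview insertSubview:labelBackground belowSubview:myLabel];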
I want to implement dialog borders that scale to the size I require the dialog to be. Perhaps there is a better more conventional name for this sort of thing. If there is, if someone would edit the title, that'd be great.
Anyhow, I'd like to do this so I can have dialogs of any size without the visual artifacts that come with scaling border art to small, large, or wacky disproportionate dimensions. I have a few ideas on how this is done, but am not sure which is better for the iPhone. I have a few questions.
1) Should I make a containing view object that basically overloads its drawRect method and draws the images where they should be at their appropriate scale when the method is called, or should I make a containing view object that simply contains 8 UIImageViews? I suspect the latter approach won't work if I need to actively scale the resulting dialog class, like in an animation.
1b) If overloading drawRect is the way to go, does someone have some sample code or a link to an example that demonstrates drawing an image directly from drawRect()?
2) Is it generally better to create
a) a 3 x 3 image where the segments are in their appropriate 1x1 grid of the image? If so, is it simple to draw from a portion of this image onto my target view in drawRect (if the former assumption is correct that I should use drawRect)?
b) The pieces separately in 8 different files?
UPDATE:
To clarify, the idea is to take any customized border art and be able to stretch the 2nd, 4th, 6th, and 8th cell (in a 3x3-cell grid) to form a border of any size with just those assets. Stretching just a plain image would result in distortion of the corners, so I'd like to stretch those even numbered cells as needed and tack on the corners so there is no distortion. I'd seen this done before so thought it might be a standard thing and have a standard naming to it other than what I called it.
Anyhow, I was advised that adding 8 UIImageViews to a container would not be as efficient as drawing the UIImages on the fly in drawRect, so I took that approach using CGContextDrawImage() after applying the necessary transformations to the context to translate and scale the Y. Because this function draws from the bottom-left corner of an image but onto a top-left-origined UIView, the image is upside down without the Y-axis invert. I noticed the suggestion to use UIImage functions like drawAtPoint, which work similarly but without the invert, since UIImage draws in the same orientation as UIViews. I will continue my implementation with the former and see how it goes, but one other question.
Would someone happen to know which of these approaches is more efficient, faster, etc.?
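In case it helps anyone, here is a rough sketch of the drawRect approach described in the update, assuming the nine segments are stored in properties such as topLeftImage and topMiddleImage (hypothetical names); only the top-left corner and the top edge are shown.

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGFloat width = self.bounds.size.width;
    CGFloat height = self.bounds.size.height;

    // CGContextDrawImage uses a bottom-left origin, so flip the Y axis to match
    // UIKit's top-left origin; otherwise the segments draw upside down.
    CGContextSaveGState(ctx);
    CGContextTranslateCTM(ctx, 0, height);
    CGContextScaleCTM(ctx, 1.0, -1.0);

    CGFloat cornerW = self.topLeftImage.size.width;      // hypothetical property
    CGFloat edgeH = self.topMiddleImage.size.height;     // hypothetical property

    // Top-left corner, drawn at its natural size in the flipped coordinate space.
    CGContextDrawImage(ctx,
                       CGRectMake(0, height - self.topLeftImage.size.height,
                                  cornerW, self.topLeftImage.size.height),
                       self.topLeftImage.CGImage);

    // Top edge (the stretched "2nd cell"), spanning the space between the corners.
    CGContextDrawImage(ctx,
                       CGRectMake(cornerW, height - edgeH, width - 2 * cornerW, edgeH),
                       self.topMiddleImage.CGImage);

    // ...remaining corners and edges follow the same pattern...

    CGContextRestoreGState(ctx);
}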
I'm not sure I follow, but here's my best shot at an answer...
Using drawRect: or adding individual UIImageViews to a parent view is entirely up to you. UIImageView gives you a bit of encapsulated functionality for free, but otherwise they are the same as far as appearances go.
If you do want to go the drawRect route, you just need to use UIImage's drawAtPoint: method. Do the math for where you want it to be, and draw it. You can calculate your points based on the parent view's dimensions.
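For instance, a minimal sketch (the image properties are only placeholders): because UIImage draws in UIKit's top-left coordinate space, no flip transform is needed here.

- (void)drawRect:(CGRect)rect
{
    // Corner at its natural size; edge stretched between the corners.
    [self.topLeftImage drawAtPoint:CGPointZero];
    [self.topMiddleImage drawInRect:CGRectMake(self.topLeftImage.size.width, 0,
        self.bounds.size.width - 2 * self.topLeftImage.size.width,
        self.topMiddleImage.size.height)];
    // ...and so on for the other segments...
}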
As far as scaling, it's impossible to resize these images without scaling them, so plan ahead and make your originals as large as or larger than you ever expect to display them.
Hope that helps a little?
Cheers
If you want a border on a dialog box, assuming the box is a UIView (or subclass), then set the layer's border properties and let the system draw the border for you.
#import <QuartzCore/QuartzCore.h>
// ...
view.layer.borderWidth = 2;
view.layer.borderColor = [UIColor whiteColor].CGColor;
view.layer.cornerRadius = 0; // 0=square corners, >0 for rounded
I have a PNG image file that is partly opaque and partly transparent. I display it in a UIImageView as a mask of sorts over another UIImageView layered behind it (as a sibling subview of a common superview). It gives me perfect borders around something painted using a finger on the lower UIImageView in my stack of UIImageViews. Perhaps there are better ways to do this, but I am new-ish, and this is the best way I came up with thus far.

Nonetheless, my app is in the App Store and now I want to enhance it to provide more images to use as the mask of sorts over the finger painting. But I don't want to bloat my bundle size by adding more static mask images as I did for the initial implementation, not to mention I don't want to spend lots of time in Photoshop making 100 masks. I'd rather programmatically change the color of the mask without affecting the clear portion in the middle, which is not a simple rectangle or circle, but rather a complex shape.

So my question is this: how can I change the colored portion of my loaded image without affecting the clear color portion in the middle? Is there a reasonably easy way to do this? Essentially I want to do what is described in this post (How would I tint an image programmatically on the iPhone?) without affecting the clear portion of my image. Thanks for any insights.
Have a look at the Tinted Image sample project. Try out the different modes until you get the effect you want.
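One common recipe, as just a sketch rather than the sample project's exact code (the image name and tint color are assumptions): redraw the mask into an image context and fill with the tint color using the kCGBlendModeSourceIn blend mode, which only affects pixels where the original image has alpha, so the clear middle stays clear.

UIImage *mask = [UIImage imageNamed:@"mask"];     // assumed image name
UIColor *tint = [UIColor redColor];               // any tint color

UIGraphicsBeginImageContextWithOptions(mask.size, NO, mask.scale);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGRect bounds = CGRectMake(0, 0, mask.size.width, mask.size.height);

// Draw the original, then fill with SourceIn: the fill only lands where the
// original image is opaque (or partially opaque), leaving transparent areas alone.
[mask drawInRect:bounds];
CGContextSetBlendMode(ctx, kCGBlendModeSourceIn);
[tint setFill];
CGContextFillRect(ctx, bounds);

UIImage *tinted = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();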