I am having difficulties scaling the size of an image on an HTML canvas in GWT. I can successfully render an image using this:
ImageData id = context.createImageData(width, height);
// do some image manipulation here...
context.putImageData(id, 0, 0);
That works great. But then I'd like to scale the size of the image, so I add this to the very next line:
context.scale(scale, scale);
But nothing happens to the image; it does not scale. What am I missing?
Calling context.scale() after putImageData() has no effect: the transform only applies to subsequent drawing calls, and putImageData() ignores the current transform entirely. Here is an example of how to scale an image:
http://code.google.com/p/gwt-examples/wiki/gwt_hmtl5#Image_Scale_/_Resize
Related
from PIL import Image

image = Image.open('red.jpg')
image1 = image.resize((160, 120), Image.ANTIALIAS)
The original image red.jpg has dimensions 80x60. The resized image I get is noticeably less clear and looks blurry. Please suggest some methods to improve the clarity of the resized image.
I am trying to stretch a UIImage with the following code:
UIImage *stretchyImage = [[UIImage imageNamed:@"Tag@2x.png"] stretchableImageWithLeftCapWidth:10.0 topCapHeight:0.0];
UIImageView *newTag = [[UIImageView alloc] initWithImage:stretchyImage];
The image before stretching looks like this:
And after, it looks like this:
Why hasn't the stretching worked properly? The corners have all gone pixelated and look stretched, when in fact only the middle should be stretched. FYI: I am running this app on iOS 6.
Your implementation doesn't work because of the values you pass to the stretchableImageWithLeftCapWidth:topCapHeight: method.
First of all, stretchableImageWithLeftCapWidth:topCapHeight: is deprecated as of iOS 5; the replacement API is resizableImageWithCapInsets:.
The image has non-stretchable parts on the top, bottom, and right sides. What you told the API was "keep the leftmost 10 points fixed and stretch the rest to whatever size I give you".
Since the custom shape on the right side cannot be repeated in either dimension, that piece should be kept as a whole.
So the top cap should be the full height of the image (to preserve the shape of the element on the right side), the left cap should be about 20 pixels (the rounded rectangle corners), the bottom cap can be 0 since the top cap already spans the whole height, and the right cap should be the width of the custom orange shape on the right side (roughly 40 pixels).
You can play with the cap values and achieve a better result.
UIImage *image = [UIImage imageNamed:@"Tag"];
UIImage *resizableImage = [image resizableImageWithCapInsets:UIEdgeInsetsMake(image.size.height, 20, 0, 40)];
Should do the trick.
Also, -imageNamed: works fine when you drop the file extension and the @2x suffix.
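For completeness, a minimal usage sketch (the frame values and self.view are only illustrative): the caps stay fixed while the middle region stretches once the hosting view is wider than the source image.
UIImageView *newTag = [[UIImageView alloc] initWithImage:resizableImage];
// Any width larger than the original image stretches only the middle; the capped edges stay crisp.
newTag.frame = CGRectMake(20.0, 100.0, 220.0, image.size.height);
[self.view addSubview:newTag];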
I have a UIImageView with a frame of (0, 0, 568, 300) containing a UIImage with a native size of 512x384 pixels. The content mode is set to Aspect Fit.
If the user double-taps on the view, I change the size of the UIImageView with the following code:
self.imageViewerViewController.imageView.frame = CGRectMake(0, -63, 568, 426);
The result is that the right edge of the image is distorted; it does not scale properly to the new size.
Attached is an image with a black-and-white matrix; the distortion is on the right.
It seems that the right column of pixels is repeated to the right edge of the view.
Can anyone help?
I changed the creation of the affected UIImageView from a xib file to code, using -initWithFrame: with the maximum size of the scaled image as the frame. Now the image scales properly.
Apparently the UIImageView in the xib file had been initialized with a 512x384 pixel frame.
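A minimal sketch of that workaround (the frame values come from the question; the asset name is hypothetical):
// Create the image view in code with the largest frame it will be displayed at,
// instead of letting the xib initialize it at the image's native 512x384 size.
UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, -63, 568, 426)];
imageView.contentMode = UIViewContentModeScaleAspectFit;
imageView.image = [UIImage imageNamed:@"matrix"]; // hypothetical asset name
[self.view addSubview:imageView];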
I added two UIViews to ViewController.view and applied a square image to each view's layer.mask, so that it looks like one square sliced into two pieces, and then added the image view on top with addSubview.
I am having a problem rendering the masked layers and saving the result to the photo album.
I want the saved photo to look like picture no. 1, but it always looks like picture no. 2 after I save it to the photo album.
Is there any solution to capture like picture No. 1 after applying mask?
Below is the reference from Apple regarding renderInContext:.
Important The OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of OS X may add support for rendering these layers and properties.
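Since renderInContext: skips mask values, one possible workaround is to apply the mask by hand, clipping the bitmap context before rendering the layer. This is only a sketch; viewToCapture and squareMask are hypothetical names standing in for the view and the square mask image.
UIGraphicsBeginImageContextWithOptions(viewToCapture.bounds.size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Clip with the same image used for layer.mask; the mask may need to be flipped vertically
// because Core Graphics and UIKit use different coordinate systems.
CGContextClipToMask(ctx, viewToCapture.bounds, squareMask.CGImage);
[viewToCapture.layer renderInContext:ctx];
UIImage *captured = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();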
I've created an image capture function before, which literally does a printscreen of a UIView. I don't use it because it does not work well for my needs, but maybe you can use it:
UIImage *img;
// Open an image context the size of the view; a scale of 0.0 means the screen's scale.
UIGraphicsBeginImageContextWithOptions(UIViewYouWantToCapture.bounds.size, self.opaque, 0.0);
// Render the view's layer into the current context.
[[UIViewYouWantToCapture layer] renderInContext:UIGraphicsGetCurrentContext()];
// Grab the rendered snapshot and close the context.
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
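For the saving step itself, assuming the snapshot above ends up in img, a minimal sketch could hand the result to the photo album:
// The nil/NULL arguments skip the completion callback.
UIImageWriteToSavedPhotosAlbum(img, nil, NULL, NULL);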
When we apply a mask to an image, the resulting image has an alpha of 1 in the masked-in area and an alpha of 0 everywhere else.
The complete image is still there; we only see half of it because the other half has alpha = 0. So when we capture the view, we get a screenshot of the complete view.
Basically I'm downloading images off of a webserver and then caching them to the disk, but before I do so I want to mask them.
I'm using the masking code everyone seems to point at, which can be found here:
http://iosdevelopertips.com/cocoa/how-to-mask-an-image.html
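For context, the code behind that link typically follows this Core Graphics pattern (a sketch built from the standard calls, not a verbatim copy of the article):
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;
    // Build an image mask from the grayscale mask image.
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    // Apply the mask to the source image.
    CGImageRef maskedRef = CGImageCreateWithMask(image.CGImage, mask);
    UIImage *masked = [UIImage imageWithCGImage:maskedRef];
    CGImageRelease(mask);
    CGImageRelease(maskedRef);
    return masked;
}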
What happens, though, is that the image displays fine, but the version that gets written to disk with
UIImage *img = [self maskImage:[UIImage imageWithData:data] withMask:self.imageMask];
[UIImagePNGRepresentation(img) writeToFile:cachePath atomically:NO];
has its alpha channel inverted compared to the one displayed later on (using the same UIImage instance here).
Any ideas? I do need the cached version to be masked; otherwise, displaying the images in a table view gets awfully slow if I have to mask them every time.
Edit: So yeah, UIImagePNGRepresentation(img) seems to invert the alpha channel; it doesn't have anything to do with the code that writes to disk, which is rather obvious, but I checked anyway.
How about drawing into a new image, and then saving that?
// Redraw the masked image into a plain bitmap context...
UIGraphicsBeginImageContext(img.size);
[img drawAtPoint:CGPointZero];
// ...grab the flattened copy...
UIImage *newImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// ...and write that copy to disk instead of the original masked image.
[UIImagePNGRepresentation(newImg) writeToFile:cachePath atomically:NO];
(untested)
See the description in CGImageCreateWithMask in CGImage Reference:
The resulting image depends on whether the mask parameter is an image mask or an image. If the mask parameter is an image mask, then the source samples of the image mask act as an inverse alpha value. That is, if the value of a source sample in the image mask is S, then the corresponding region in image is blended with the destination using an alpha value of (1-S). For example, if S is 1, then the region is not painted, while if S is 0, the region is fully painted.
If the mask parameter is an image, then it serves as an alpha mask for blending the image onto the destination. The source samples of mask act as an alpha value. If the value of the source sample in mask is S, then the corresponding region in image is blended with the destination with an alpha of S. For example, if S is 0, then the region is not painted, while if S is 1, the region is fully painted.
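As a quick sanity check (a sketch, using the imageMask property from the question above), CGImageIsMask() tells you which of the two cases above applies to the mask being passed in:
// If this returns true, CGImageCreateWithMask treats the samples as inverse alpha (1 - S).
bool isImageMask = CGImageIsMask(self.imageMask.CGImage);
NSLog(@"mask is an image mask: %@", isImageMask ? @"YES" : @"NO");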
It seems that, for some reason, the image mask is treated as a mask image when saving. According to:
UIImagePNGRepresentation and masked images
http://lists.apple.com/archives/quartz-dev/2010/Sep/msg00038.html
to correctly save with UIImagePNGRepresentation, there are several choices:
Use an inverted version of the image mask.
Use a "mask image" instead of an "image mask".
Render to a bitmap context and then save that, as epatel mentioned.