This question already has answers here:
how to crop image in to pieces programmatically
(3 answers)
Closed 8 years ago.
How can I slice an image into multiple pieces? My image is 300x300 and I want to make 9 pieces of it.
Thanks..
The CWUIKit project, available at https://github.com/jayway/CWUIKit, has a category on UIImage that adds a method like this:
UIImage* subimage = [originalImage subimageWithRect:CGRectMake(0, 0, 100, 100)];
Should be useful for you.
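For the 300x300 / nine-piece case, you would call it once per tile over a 3x3 grid. A minimal sketch, assuming the CWUIKit category above and 100x100 tiles:

NSMutableArray *pieces = [NSMutableArray arrayWithCapacity:9];
CGFloat tileSize = 100.0; // 300 / 3
for (NSInteger row = 0; row < 3; row++) {
    for (NSInteger col = 0; col < 3; col++) {
        // Crop one 100x100 tile out of the 3x3 grid.
        CGRect tileRect = CGRectMake(col * tileSize, row * tileSize, tileSize, tileSize);
        [pieces addObject:[originalImage subimageWithRect:tileRect]];
    }
}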
This question already has answers here:
Best method to find edge of noise image
(2 answers)
Closed 6 years ago.
I have used adapthisteq to improve the visibility of the foreground objects. However, this seems to have created grainy, noisy details. How can I remove this graininess from the image? I have tried Gaussian blurring with imgaussfilt, and while it removes some of the grain, the shapes of the cells in the image become less defined. The second image shows the binary version of the first image.
You can use a filter that takes edge information into account, such as the bilateral filter: https://en.wikipedia.org/wiki/Bilateral_filter
The bilateral filter weighs each neighbouring pixel not only by its spatial distance (as a regular Gaussian blur does) but also by its distance in colour.
(Illustration taken from: http://www.slideshare.net/yuhuang/fast-edge-preservingaware-high-dimensional-filters-for-image-video-processing)
A MATLAB implementation can be found here: https://www.mathworks.com/matlabcentral/fileexchange/12191-bilateral-filtering
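As a rough sketch of the whole pipeline, assuming a recent Image Processing Toolbox (imbilatfilt ships from R2018a; on older versions, use bfilter2 from the File Exchange submission above; the file name is just a placeholder):

% Bilateral filtering after adapthisteq to suppress the grain
% while keeping the cell boundaries sharp.
I = im2double(imread('cells.png'));   % placeholder file name
J = adapthisteq(I);                   % contrast enhancement (this adds the grain)
K = imbilatfilt(J);                   % edge-preserving smoothing
imshowpair(J, K, 'montage');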
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 8 years ago.
I'm trying to convert a grayscale image to an RGB image. I searched the net and found this code:
% Both lines do the same thing: replicate the single grey channel three times.
rgbImage = repmat(grayImage, [1 1 3]);
rgbImage = cat(3, grayImage, grayImage, grayImage);
but this only gives me the grayscale image as a 3-D matrix. I want a way to convert it into a true color image.
It's impossible to directly recover a colour image from its grey scale version. As @Luis Mendo correctly said in the comments, the needed information is simply not stored there.
What you can do is try to come up with a mapping between intensity level and colour, then play around with interpolation. This, however, will not produce the original colours, just some colour mapping that may be very far from what you want.
If you have another colour image and you want to fit its colours to your grey scale image, you may want to have a look at: http://blogs.mathworks.com/pick/2012/11/25/converting-images-from-grayscale-to-color/ .
In particular, the function cited there can be found here: http://www.mathworks.com/matlabcentral/fileexchange/8214-gray-image-to-color-image-conversion#comments.
Bear in mind that this will be slow and will not produce a great result; however, I don't think you can do much more.
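For illustration, here is one such intensity-to-colour mapping: pushing the grey levels through a colormap (pseudocolour). It invents colours rather than recovering the original ones:

% Pseudocolour: map the 256 grey levels through the jet colormap.
% pout.tif is a grayscale demo image shipped with the toolbox.
grayImg = imread('pout.tif');
rgbImg = ind2rgb(gray2ind(grayImg, 256), jet(256));
imshow(rgbImg);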
Yes, it's impossible to directly convert a grayscale image to RGB, but you can highlight features (such as detected edges) from the grayscale image inside an RGB image by adding them to one of its channels. If you want, you can use this:
% rr, rg and rb are the red, green and blue channels of an RGB base
% image; I_temp is the grayscale image whose features you want to show.
rgb = zeros([size(I_temp) 3]);
rgb(:,:,1) = im2double(rr);
rgb(:,:,2) = im2double(rg) + 0.05*double(I_temp); % overlay on the green channel
rgb(:,:,3) = im2double(rb);
This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
How does one compare one image to another to see if they are similar by a certain percentage, on the iPhone?
I've found this code and am trying to understand it better:
UIImage *img1 = /* some photo */;
UIImage *img2 = /* some photo */;
NSData *imgdata1 = UIImagePNGRepresentation(img1);
NSData *imgdata2 = UIImagePNGRepresentation(img2);
if ([imgdata1 isEqualToData:imgdata2]) {
    NSLog(@"Same Image");
}
Will this confirm that image 1 is exactly the same as image 2? Is this method best practice, or is there a better approach to this?
Your code compares the PNG-encoded data of the two images byte by byte, so yes, it's a 100% comparison.
If you need something faster, you can generate a hash from each UIImage and compare the two hashes, as explained here.
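As a sketch of that hashing idea, using CommonCrypto's SHA-256 (SHA256OfData is just a hypothetical helper name; the speed-up only materialises if you compute each hash once and cache it for repeated comparisons):

#import <CommonCrypto/CommonDigest.h>

static NSData *SHA256OfData(NSData *data) {
    // 32-byte digest of the PNG bytes; equal data gives equal digests.
    unsigned char digest[CC_SHA256_DIGEST_LENGTH];
    CC_SHA256(data.bytes, (CC_LONG)data.length, digest);
    return [NSData dataWithBytes:digest length:CC_SHA256_DIGEST_LENGTH];
}

// Usage: if ([SHA256OfData(imgdata1) isEqualToData:SHA256OfData(imgdata2)]) ...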
Take a look at this link; it talks all about sampling images to measure their percentage similarity: How does one compare one image to another to see if they are similar by a certain percentage, on the iPhone?
This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?
Get Pixel color of UIImage
I have a scenario in which the user can select a color from an image, for example the one below:
Depending on where the user taps on the image, I need to extract the RGB and alpha values at that exact point, or rather, at that pixel.
How do I accomplish this?
You need to create a bitmap context (CGContextRef) from the image and convert the CGPoint that was tapped to an array offset location to retrieve the color information from the pixel data.
See What Color is My Pixel? for a tutorial and this similar Stack Overflow question.
Methods:
- (CGContextRef)createARGBBitmapContextFromImage:(CGImageRef)inImage
Returns a CGContextRef representing the image passed as an argument, using the correct color space. This method is called by:
- (UIColor *)getPixelColorAtLocation:(CGPoint)point
This is the method you would call to get the UIColor at the passed CGPoint.
Note that these methods are in a UIImageView subclass to make the process more straightforward.
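As a rough sketch of the same idea (not the tutorial's exact code, and written as a free function rather than a UIImageView subclass method): draw the image into a one-pixel RGBA bitmap, translated so the tapped point lands on that pixel, then read the four bytes back:

static UIColor *PixelColor(UIImage *image, CGPoint point) {
    unsigned char pixel[4] = {0, 0, 0, 0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace,
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    if (!context) return nil;
    // Shift the drawing so the tapped point maps onto the single pixel
    // (flipping from UIKit's top-left origin to Core Graphics' bottom-left).
    CGContextTranslateCTM(context, -point.x, point.y - image.size.height);
    CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
    CGContextRelease(context);
    // Note: the byte values are premultiplied by alpha.
    return [UIColor colorWithRed:pixel[0] / 255.0
                           green:pixel[1] / 255.0
                            blue:pixel[2] / 255.0
                           alpha:pixel[3] / 255.0];
}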
This question already has an answer here:
Closed 10 years ago.
Possible Duplicate:
Good way to calculate ‘brightness’ of UIImage?
For a UIImage, how can you determine the percentage whiteness of the whole image?
cheers
Depending on your definition of 'whiteness', you may be able to simply draw the image to a 1x1 CGBitmapContextRef, then check the whiteness of that single pixel.
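A minimal sketch of that approach, assuming 'whiteness' means the average of the R, G and B channels over the whole image (the single downsampled pixel only approximates the true mean):

static CGFloat Whiteness(UIImage *image) {
    unsigned char pixel[4] = {0, 0, 0, 0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace,
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    if (!context) return 0.0;
    // High-quality interpolation so the pixel approximates the image mean.
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), image.CGImage);
    CGContextRelease(context);
    // Average the three channels and normalise to 0..1 (1.0 = pure white).
    return (pixel[0] + pixel[1] + pixel[2]) / (3.0 * 255.0);
}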