Pixel value in UIImageView's image - iPhone

I'm trying to find the value of a pixel in the image displayed in a UIImageView.
Getting the pixel on the display is (relatively) simple. But when the displayed image is scaled (say the original image is 1200x1600), it's much more complicated because of the scaling.
Any good advice?

If you're trying to get the pixel value from the original image, this question's accepted answer shows a way to get pixel values from a UIImage.
Edit:
Scaling means a loss of information, so there won't be a one-to-one mapping. One pixel in your scaled-down image corresponds to a patch of pixels in your original image, and the best you can hope for is a lossy interval mapping from [0..scaledX] to [0..originalX] and from [0..scaledY] to [0..originalY]. An explanation of how to do such a mapping can be found in the answer to this question.
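For illustration only, here is one rough way such an interval mapping could look (sketched in MATLAB to match the code elsewhere on this page; the sizes and variable names are just example assumptions, not part of the linked answer):
origW = 1200; origH = 1600;   % original image size from the question
dispW = 300;  dispH = 400;    % assumed displayed (scaled) size
sx = 75; sy = 100;            % example pixel in the scaled image (1-based)
% The scaled pixel covers this patch of original pixels:
xRange = floor((sx-1)*origW/dispW)+1 : floor(sx*origW/dispW);
yRange = floor((sy-1)*origH/dispH)+1 : floor(sy*origH/dispH);
% Averaging the original patch (yRange, xRange) approximates the value
% shown at (sx, sy); there is no exact one-to-one inverse.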

Related

Rectified images of same size as the initial ones

I want to rectify a stereo image pair in MATLAB. To rectify, I use the following call:
[J1,J2] = rectifyStereoImages(I1,I2, cameraParamsStereo);
If I do this, then I only get the so-called valid part of each image, which is smaller than the initial image size. If I specify the OutputView argument as 'full', then I get rectified images which are larger than the original ones.
Is there a way to get rectified images that have the same size as the original ones?
It is possible in principle, but rectifyStereoImages does not support this.
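If matching dimensions is all you need, one possible workaround (just a sketch, not a documented option of rectifyStereoImages) is to rectify with 'OutputView' set to 'full' and then take a centered crop of the original size:
[J1full, J2full] = rectifyStereoImages(I1, I2, cameraParamsStereo, ...
    'OutputView', 'full');
[h, w, ~]   = size(I1);        % original image size
[hf, wf, ~] = size(J1full);    % 'full' rectified size (assumed >= original)
r0 = floor((hf - h)/2) + 1;    % top-left corner of the centered crop
c0 = floor((wf - w)/2) + 1;
J1same = J1full(r0:r0+h-1, c0:c0+w-1, :);
J2same = J2full(r0:r0+h-1, c0:c0+w-1, :);
Note that this only matches the pixel dimensions; the crop lives in the rectified coordinate frame, not the original one.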

Extract Rectangular Image from Scanned Image

I have scanned copies of currency notes from which I need to extract only the rectangular notes.
Although the scanned copies have a very blank background, the note itself may be rotated or it may be aligned correctly. I'm using MATLAB.
Example input:
Example output:
I have tried using thresholding and canny/sobel edge detection to no avail.
I also tried the solution given here but it detects the entire image for cropping and it would not work for rotated images.
PS: My primary objective is to determine the denomination of the currency. There are a couple of methods I thought I could use:
Color based, since all currency notes have varying primary colors. The advantage of this method is that it's independent of the rotation or scale of the input image.
Detect the small black triangle on the lower left corner of the note. This shape is unique for each denomination.
Calculating the difference between 2 images. Since this is a small project, all input images will be of the same dpi and resolution and hence, once aligned, the difference between the input and the true images can give a rough estimate.
Which method do you think is the most viable?
It seems you are further along than it looked (judging by your comments), which is good! I'm going to show you more or less the way you can go to solve your problem; however, I'm not posting the whole code, just the important parts.
You have an image that is already quite well cropped and segmented. First you need to ensure that your image has no holes. So fill them!
Iinv=I==0; % you want 1 in money, 0 in not-money;
Ifill=imfill(Iinv,8,'holes'); % Fill holes
After that, you want to get only the boundary of the image:
Iedge=edge(Ifill);
And in the end you want to get the corners of that square:
C=corner(Iedge);
Now that you have 4 corners, you should be able to determine the angle of this rotated "square". Once you have it, do:
Irotate=imrotate(Icroped,angle);
At this point you may want to crop it again to end up with just the money! (aaah, money always as an objective!)
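Just as a rough sketch of those last two steps (the variable names below are assumptions, and it presumes the note is only moderately rotated), the angle can be estimated from the two uppermost corners and the rotated mask can be used for the final crop:
C = sortrows(C, 2);                  % sort corners by their y (row) coordinate
topTwo = sortrows(C(1:2, :), 1);     % the two uppermost corners, left to right
dx = topTwo(2,1) - topTwo(1,1);
dy = topTwo(2,2) - topTwo(1,2);
angle = atan2d(dy, dx);              % tilt of the top edge, in degrees
Irotate = imrotate(Icroped, angle);  % straighten the note
Imask   = imrotate(Ifill, angle);    % rotate the filled mask the same way
stats = regionprops(Imask, 'BoundingBox');
Inote = imcrop(Irotate, stats(1).BoundingBox);   % final crop: just the money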
Hope this helps!

How to resize a too-big image into a smaller one while keeping original values

I have a grayscale image of size <2559x3105 uint16>. When I try to open this image, I get a warning that it is too big. I have tried the imresize() function to make it small (<512x512 uint8>) in size. When I plot the original image and the resized image, the intensity gets decreased after resizing. I want to resize the original image without changing its pixel values. Is there any solution?
If you would like to keep your final image as uint8, I think you would need to first convert the uint16 image to a uint8 image using im2uint8 -
uint8_image = im2uint8(uint16_image);
Then you may apply imresize on uint8_image.
But, if you don't want your final image to be of uint8 type, you can directly use imresize and it would keep the datatype, i.e. the resized image would be of uint16 type.
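Putting both steps together (a minimal sketch, reusing the uint16_image / uint8_image names from above):
uint8_image = im2uint8(uint16_image);            % rescale 0..65535 to 0..255
small_image = imresize(uint8_image, [512 512]);  % 512x512 uint8 result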
Read the docs and use the nearest neighbor method. That is,
resized = imresize(original, scale, 'nearest')
This will not use interpolated values. The downside is of course that edges might be jagged.
It sounds like your 16-bit image uses linear codes while the resulting 8-bit image needs to be gamma corrected. If this is the case you can use imadjust with a gamma parameter of 1/2.2 to produce the brighter image.
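For example, something along these lines (a sketch assuming the resized image is stored in a variable called resized, and that the data really is linearly coded):
brighter = imadjust(resized, [], [], 1/2.2);   % gamma-correct; [] keeps the default [0 1] in/out ranges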
Do you get the warning when you display it with imshow? Does it say something like "Image too large to fit the screen, resizing to xx%"? If so, then you can simply ignore the warning. Otherwise, you can set the 'InitialMagnification' parameter of imshow to resize the figure, but not the image itself.
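For instance (a minimal sketch; bigImage stands for your loaded uint16 image):
imshow(bigImage, 'InitialMagnification', 'fit');   % scales only the on-screen display, not the data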

MATLAB: display RGB values of a fits image

I want to read a .fits image of a wide-field sky and display the RGB values contained in a star. Can you please suggest a method to do so?
I have used fitsread to read in the image but I am not able to show the RGB values for specific locations (stars).
In order to do this, you'll need a proper RGB FITS file. The only .fits viewer I know of, ds9, does not support saving RGB FITS files, but rather saves them as three separate (r,g,b) FITS images. You can use "getpix" from wcstools (http://tdc-www.harvard.edu/wcstools/) or scisoft (http://www.eso.org/sci/software/scisoft/) on the individual frames. Note that "getpix" returns the pixel value given an image (x,y) location. ds9 does not provide the physical image location, but rather the WCS coordinates, so you may have to convert to image coordinates before calling getpix.
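If you would rather stay in MATLAB, here is a hedged sketch of the same idea with fitsread (the file names and the star location below are made-up assumptions):
R = fitsread('field_r.fits');    % the three per-channel frames saved from ds9
G = fitsread('field_g.fits');
B = fitsread('field_b.fits');
x = 1024; y = 768;               % star location in image (x,y) coordinates
rgb = [R(y, x), G(y, x), B(y, x)]   % note: MATLAB indexes as (row, col), i.e. (y, x)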

Histogram of image

I have 2 images that look nearly identical. The histogram for one (256 bins) has intensities distributed pretty evenly throughout. The other has intensities only at the lowest and highest bins. Why would this be? Then wouldn't it appear binary (that's not the case)?
Think about it this way: Imagine you are taking a histogram of two grayscale images with each pixel represented by a color value 0-255. One image contains pixels that all have gray levels of 128. The second image contains a "checkerboard" pattern (pixels alternate between 0 and 255). If you step back far enough that you no longer see individual pixels, they will appear identical to the naked eye. Your brain "averages" the alternating black and white pixels into a field of gray.
This is what your images are doing. The first image has colors distributed evenly throughout the range and the second image has concentrations of specific colors, but if you calculate an average color for the image (and also for sub-sections within the image) you should see similar values for both.
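A quick way to convince yourself of this in MATLAB (a small illustrative sketch, not your actual data):
N = 256;
flat = 128 * ones(N, 'uint8');                 % every pixel is mid-gray
[cols, rows] = meshgrid(1:N, 1:N);
cb = uint8(255 * mod(rows + cols, 2));         % 0/255 checkerboard
mean(flat(:))          % 128
mean(cb(:))            % ~127.5 -- nearly the same average gray
figure, imhist(flat)   % one spike at 128
figure, imhist(cb)     % two spikes, at 0 and 255
Viewed from a distance the two images look alike, yet their histograms are completely different.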
Never trust your eyes! They will always lie to you.
Consider this silly example that can be illustrative here. An X-Ray 'photo' is nothing more than black and white dots. But as they are small and mixed along the image, your eyes see different shades of gray.
The same can happen in a digital image, where, although the pixels may have the same size, they can be black and white and 'distributed' in the image in such a way that you see it as having more gray levels. This is called halftoning.
Without seeing the images it's hard to say, but it sounds like the second may be slightly clipped.
The difference also could just be a slight difference in contrast in the images that's not visible to the naked eye.