I was given some high-resolution images, originally made for a printed magazine, to show in an iPhone app like the Xcode PhotoScroller sample (which works much like the iPhone's native Photos viewer). I'm downsizing them to 1024 x 1536 px and will be slicing them up for use with UIScrollView and CATiledLayer.
When I'm resizing them, should I also convert them from CMYK to RGB?
I think so because RGB is for digital, right? But they also looked fine on the iPhone as CMYK. Why do they say to use RGB for digital?
What's the best way to resize them to 1/2 and 1/4 size and slice all three sizes into tiles?
1024/4 = 256, so I'm thinking of making every tile (except the edge ones) 256 x 256 px. I tried Tile Cutter, which worked, but I have 20 images, so I'd have to run it 20 times. It also doesn't generate the smaller zoom levels, so I'd have to resize each image twice in Photoshop as well; that's 60 images to run through the cutter. It shouldn't take too long, but odds are I'll be doing this again, so I'd like a better solution. Ideally it would be cool to do this on the iPhone itself, but for now I think I'll use Paul Alexander's Ruby tile script unless you suggest a better option. I might also try Zoomify.
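Just to make the requirement concrete, here's a rough MATLAB sketch of the resize-and-tile step I'm after (the input folder, file pattern, output naming, and the 256 px tile size are assumptions on my part; edge tiles come out smaller):

files = dir(fullfile('input', '*.jpg'));
for f = 1:numel(files)
    img = imread(fullfile('input', files(f).name));
    [~, name] = fileparts(files(f).name);
    for scale = [1 0.5 0.25]                               % full, 1/2 and 1/4 size
        scaled = imresize(img, scale);
        [h, w, ~] = size(scaled);
        for row = 0:ceil(h / 256) - 1
            for col = 0:ceil(w / 256) - 1
                tile = scaled(row*256+1 : min((row+1)*256, h), ...
                              col*256+1 : min((col+1)*256, w), :);
                imwrite(tile, sprintf('%s_%d_%d_%d.png', ...
                        name, round(scale * 100), col, row));   % e.g. page01_50_3_2.png
            end
        end
    end
end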
RGB has a wider range of colors than CMYK.
CMYK is the range of colors that can be printed on white paper. It stands for Cyan, Magenta, Yellow, and blacK (think of the 4 colors in your printer cartridges: CMY for the colors and K for black).
When you mix CMY at full strength you get a very, very dark grey. Each channel goes on a scale of 0-100%.
RGB is how monitors work; it's the way LCDs and CRTs produce colors, from Red, Green, and Blue. Each channel goes on a scale of 0-255, and 255 in all three channels makes white.
Since monitors are backlit, they can produce bright colors that printers can't, like shiny green or shiny pink.
A CMYK picture will look fine on screen, but an RGB one will lose color in print (those shiny greens become matte).
For the iPhone, work in RGB. Reasons:
- It processes RGB values directly
- You'll get accurate colors
- RGB takes less memory than CMYK (3 channels instead of 4)
I have to make an application that recognizes fruits. So far I have made it so you can crop the image and get the color of the fruit you want. Now I am trying to get the roundness of the fruit, but I need the fruit to be black and the background to be white so I can find the area and roundness value. This is my code so far for that part:
crop_temp = rgb2gray(crop);           % convert the cropped region to grayscale
threshold = graythresh(crop_temp);    % Otsu's threshold
bw = im2bw(crop_temp, threshold);     % binarize: bright pixels become white
imshow(bw)
crop is what I get passed in when I crop the image. The problem comes when the fruit has a camera-flash highlight; that part stays white.
An example image is this lemon picture:
The problem is that the white area in the lemon stays white after the code runs, but I want the whole lemon to be black. And not just the lemon; it should work for other fruits too.
Also, how can you make it so that the fruit is white and the background is black?
I am new to image processing so don't jump on me. I just can't find specific stuff for this.
Try this one:
fbw = ones(size(bw)) - imfill(ones(size(bw)) - bw);   % invert, fill the holes, invert back
imshow(fbw)
A brute force approach would be to check every white pixel in your image, and see if it is boxed in by black pixels in both the X and Y directions, turning it black if this is the case. This would take care of blobs inside your fruit, and shouldn't give you too many false-positives unless your fruit are strangely shaped, or you have a lot of noise around the edges of your image.
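Here's a rough MATLAB sketch of that brute-force idea (assuming bw is the logical image you got from im2bw, with the fruit black and the background white):

filled = bw;
[rows, cols] = size(bw);
for r = 1:rows
    for c = 1:cols
        if bw(r, c)                          % only look at white pixels
            boxedX = any(~bw(r, 1:c-1)) && any(~bw(r, c+1:cols));   % black on both sides
            boxedY = any(~bw(1:r-1, c)) && any(~bw(r+1:rows, c));   % black above and below
            if boxedX && boxedY
                filled(r, c) = 0;            % boxed in, so turn it black
            end
        end
    end
end
imshow(filled)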
You can start by practicing with this MATLAB demo on segmenting (and counting) rice grains in an image, in particular the part where the background is estimated.
Reading up on Otsu's method will also help, as will these two questions on background/foreground estimation on SO and DSP, which take local statistics into account.
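As a rough MATLAB sketch of that approach (the structuring-element size is an assumption; tune it to your images):

gray = rgb2gray(crop);
background = imopen(gray, strel('disk', 15));    % estimate the uneven background
flat = gray - background;                        % background-corrected image
bw = im2bw(flat, graythresh(flat));              % graythresh is Otsu's method
bw = imfill(bw, 'holes');                        % closes the flash highlight inside the fruit
imshow(bw)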
I'm working with this excellent example of converting an image to grayscale: Convert Image to B&W problem CGContext - iPhone Dev
However, for my purposes, I would like to have only pure black and pure white left in the image.
It appears that to do so, I need to pass a black and white color space to the recolor method using a call:
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(/*black and white name*/);
However, I was unable to find the proper iOS color space names. What I found was for the Mac, and the "color space names" reference in the iOS docs doesn't point anywhere.
How can I properly create a black and white CGColorSpaceRef?
Thank you!
I am not familiar with a black-and-white-only color space, but what you can do is calculate the average RGB value over all the pixels (let's call it totalAvg) and use it as a threshold: for each pixel, if its RGB average is greater than the calculated totalAvg, set it to pure white; otherwise set it to pure black.
I agree it is a bit more work, but that's what I can think of unless you find the color space you are looking for.
You might try creating a gray color space, then creating an indexed color space with two colors (black and white, obviously) and using that.
I've converted a colored photo to black and white and bolded the edges. Now I need to convert it back to its original color, keeping the bolded edges. Is there any function in MATLAB which allows me to do so?
Once you remove the colour from an image, there is no possible way to automatically put it back. You're basically reducing a set of 16,777,216 colours to a set of 256 - on average each shade of grey has 65,536 equivalent colours, and without the original image there's no way to guess which it could be.
Now, if you were to take the bolded lines from your black-and-white image and paint them on top of the original coloured image, that might end up producing what you're looking for.
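A quick MATLAB sketch of that, assuming original is the color photo and edges is a logical mask of the bolded edges taken from your black-and-white version:

result = original;
result(repmat(edges, [1 1 3])) = 0;   % paint the edge pixels black on the color image
imshow(result)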
If what you are trying to do is to apply some filter to the B/W image and then use the result with the original color, I suggest you convert your image to a color space with a lightness channel that suits your needs (for example L*a*b*, if you need the lightness to be uniform with respect to human perception of differences) and apply your filter only to the lightness channel.
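A minimal sketch of that, assuming I is the original RGB image and edgeMask is a logical mask of your bolded edges:

lab = rgb2lab(I);             % L* carries lightness, a*/b* carry the color
L = lab(:, :, 1);
L(edgeMask) = 0;              % darken (bold) only the edge pixels
lab(:, :, 1) = L;
out = lab2rgb(lab);           % back to RGB, original colors preserved elsewhere
imshow(out)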
24 bits are available per pixel.
Assuming
1. eyes are more sensitive to brightness than to color.
2. eyes are more sensitive to red & green than to blue.
What kind of encoding can I choose?
I thought about it, but didn't get anywhere. Y'CbCr with 4:2:0 encoding works for the brightness part, but what about the color?
That's already accounted for. YUV 4:2:0 means that the color (chroma) components are subsampled by a factor of 2 both horizontally and vertically, so each chroma plane carries only a quarter as many samples as the luma plane. In other words, your image keeps full brightness detail but much less color detail, which matches how the eye works. Also, the quantization tables are typically coarser for the color components, which increases the compression rate further.
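Here's a small MATLAB sketch of what 4:2:0-style subsampling does, assuming rgb is a uint8 RGB image with even width and height:

ycbcr = rgb2ycbcr(rgb);
Y  = ycbcr(:, :, 1);                        % luma kept at full resolution
Cb = imresize(ycbcr(:, :, 2), 0.5);         % each chroma plane stored at half
Cr = imresize(ycbcr(:, :, 3), 0.5);         %   resolution in both directions
% Average cost: 8 + 8/4 + 8/4 = 12 bits per pixel instead of 24.
rec = ycbcr2rgb(cat(3, Y, imresize(Cb, size(Y)), imresize(Cr, size(Y))));
imshow(rec)                                 % visually very close to the original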
I have 2 images that look nearly identical. The histogram for one (256 bins) has intensities distributed pretty evenly throughout. The other has intensities only in the lowest and highest bins. Why would this be? Wouldn't it then appear binary (that's not the case)?
Think about it this way: Imagine you are taking a histogram of two grayscale images with each pixel represented by a color value 0-255. One image contains pixels that all have gray levels of 128. The second image contains a "checkerboard" pattern (pixels alternate between 0 and 255). If you step back far enough that you no longer see individual pixels, they will appear identical to the naked eye. Your brain "averages" the alternating black and white pixels into a field of gray.
This is what your images are doing. The first image has colors distributed evenly throughout the range and the second image has concentrations of specific colors, but if you calculate an average color for the image (and also for sub-sections within the image) you should see similar values for both.
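To make the checkerboard thought experiment concrete, here's a small MATLAB sketch: the two images below have nearly the same mean gray level, yet one histogram is a single spike at 128 and the other has only the 0 and 255 bins filled:

flat = 128 * ones(256, 256, 'uint8');                 % every pixel is mid-gray
[c, r] = meshgrid(1:256, 1:256);
checker = uint8(255 * mod(r + c, 2));                 % alternating 0 / 255 pixels
fprintf('means: %.1f vs %.1f\n', mean(flat(:)), mean(checker(:)));
subplot(1, 2, 1); imhist(flat);
subplot(1, 2, 2); imhist(checker);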
Never trust your eyes! They will always lie to you.
Consider this silly but illustrative example: an X-ray 'photo' is nothing more than black and white dots, but because they are small and mixed across the image, your eyes see different shades of gray.
The same can happen in a digital image: although the pixels may all have the same size, they can be black and white and 'distributed' in the image in such a way that you see it as having more gray levels. This is called halftoning.
Without seeing the images it's hard to say, but it sounds like the second may be slightly clipped.
The difference could also just be a slight difference in contrast between the images that's not visible to the naked eye.