iPhone iOS: how to instantiate a black and white CGColorSpaceRef?

I'm working with this excellent example of converting an image to grayscale: Convert Image to B&W problem CGContext - iPhone Dev
However, for my purposes, I would like to have only pure black and pure white left in the image.
It appears that to do so, I need to pass a black and white color space to the recolor method using a call:
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(/*black and white name*/);
However, I was unable to find the proper iOS color space names. What I found applies to the Mac, and the "color space names" reference in the iOS docs doesn't point anywhere.
How can I properly create a black and white CGColorSpaceRef?
Thank you!

I am not familiar with a black-and-white-only color space, but what you can do is calculate the average RGB value over all the pixels (let's call it totalAvg) and use it as a threshold: for each pixel, if its RGB average is greater than totalAvg, set it to pure white; otherwise set it to pure black.
I agree it is a bit more work, but that's what I can think of unless you find the color space you are looking for.
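A minimal sketch of that idea in C, assuming the image's pixels are already in an 8-bit RGBA buffer (the buffer layout, the function name and the two-pass structure are my assumptions, not part of the original answer):

#include <stdint.h>
#include <stddef.h>

// Two-pass threshold: first compute the average brightness over all pixels,
// then force every pixel to pure white or pure black around that average.
static void thresholdToBlackAndWhite(uint8_t *rgba, size_t pixelCount)
{
    if (pixelCount == 0) return;

    uint64_t sum = 0;
    for (size_t i = 0; i < pixelCount; i++) {
        const uint8_t *p = rgba + i * 4;
        sum += (p[0] + p[1] + p[2]) / 3;      // per-pixel RGB average
    }
    const uint8_t totalAvg = (uint8_t)(sum / pixelCount);

    for (size_t i = 0; i < pixelCount; i++) {
        uint8_t *p = rgba + i * 4;
        uint8_t v = ((p[0] + p[1] + p[2]) / 3 > totalAvg) ? 255 : 0;
        p[0] = p[1] = p[2] = v;               // alpha (p[3]) is left untouched
    }
}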

You might try creating a gray color space, then creating an indexed color space with two colors (black and white, obviously) and using that.
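For reference, CGColorSpaceCreateIndexed can build such a two-entry space on top of a device gray base. This is only a sketch of creating the space itself; whether the rest of your pipeline (e.g. a bitmap context) accepts an indexed color space is something you'd have to verify:

#include <CoreGraphics/CoreGraphics.h>

// Two-entry lookup table over a gray base space: index 0 = black, index 1 = white.
static CGColorSpaceRef CreateBlackAndWhiteColorSpace(void)
{
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    const unsigned char table[2] = { 0, 255 };   // one gray component per entry
    // lastIndex is 1 because the table holds entries 0..1.
    CGColorSpaceRef bw = CGColorSpaceCreateIndexed(gray, 1, table);
    CGColorSpaceRelease(gray);
    return bw;                                   // caller is responsible for releasing
}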

Related

How to make it so that the whole fruit is white and the background is black

I have to make an application that recognizes fruits. So far I have made it so that you can crop the image and get the color of the fruit you want. Now I am trying to get the roundness of the fruit, but I need the fruit to be black and the background to be white so I can find the area and roundness value. This is my code so far for that part:
crop_temp = rgb2gray(crop);
threshold = graythresh(crop_temp);
bw = im2bw(crop_temp,threshold);
imshow(bw)
crop is what I get passed when I crop the image. The problem comes when the fruit has a camera flash reflection and that part stays white.
An example image is this lemon picture:
The problem is that the white area in the lemon stays white after the code, but I want the whole lemon to be black. And not just the lemon, but other fruits too.
Also, how can you make it so that the fruit is white and the background is black?
I am new to image processing so don't jump on me. I just can't find specific stuff for this.
Try this one:
% Invert the mask, fill the enclosed holes, then invert back, so the
% bright flash spots inside the fruit end up black as well.
fbw = ones(size(bw))-imfill(ones(size(bw))-bw);
imshow(fbw)
A brute force approach would be to check every white pixel in your image, and see if it is boxed in by black pixels in both the X and Y directions, turning it black if this is the case. This would take care of blobs inside your fruit, and shouldn't give you too many false-positives unless your fruit are strangely shaped, or you have a lot of noise around the edges of your image.
You can start by practicing with this Matlab demo, segmenting (and counting) rice in an image. In particular the part where the background is estimated.
Also helpful will be reading on Otsu's method and these two questions on background/foreground estimation on SO and DSP which take local statistics into account.

How to convert a black and white photo that was originally colored, back to its original color?

I've converted a colored photo to black and white, and bolded the edges. Now I need to convert it back to its original color with the bolded edges. Is there any function in Matlab which allows me to do so?
Once you remove the colour from an image, there is no possible way to automatically put it back. You're basically reducing a set of 16,777,216 colours to a set of 256 - on average each shade of grey has 65,536 equivalent colours, and without the original image there's no way to guess which it could be.
Now, if you were to take the bolded lines from your black-and-white image and paint them on top of the original coloured image, that might end up producing what you're looking for.
If what you are trying to do is apply some filter over the B/W image and then use that with the original color, I suggest you convert your image to a color space with a lightness channel that suits your needs (for example L*a*b*, if you need the lightness to be perceptually uniform) and apply your filter only over the lightness channel.

Convert print, CMYK images to tiled, RGB images for iPhone?

I was given some high-res images, which were originally made for a printed magazine, to show in an iPhone app, like the Xcode PhotoScroller app (like iPhone's native Photo viewer app). I'm down-sizing them to 1024 x 1536 px and I'm going to be slicing them up for use with UIScrollView and CATiledLayer.
When I'm resizing them, should I also convert them from CMYK to RGB?
I think so because RGB is for digital, right? But they also looked fine on the iPhone as CMYK. Why do they say to use RGB for digital?
What's the best way to resize them to 1/2 & 1/4 and slice all 3 sizes up?
1024/4 = 256, so I'm thinking of making every tile (except the edge ones) 256 x 256 px. I tried Tile Cutter, which worked, but I have 20 images, so I'd have to run it 20 times. Plus, it doesn't let you generate multiple zoom levels, so I'll also have to resize each image twice in Photoshop. So that's 60 images I'll have to run through the Cutter. It shouldn't take too long, but odds are I'll be doing this again, so I'd like a better solution. Ideally it'd be cool to do this on the iPhone, but for now I think I'll use Paul Alexander's Tile Ruby script unless you suggest a better option. I also might try Zoomify.
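If it helps, here is a rough sketch of doing the slicing yourself with CGImageCreateWithImageInRect; the callback type and function name are made up for illustration, and saving each tile to disk is left to the caller:

#include <CoreGraphics/CoreGraphics.h>

// Cuts `image` into tileSize x tileSize pieces (edge tiles may be smaller)
// and hands each piece to a caller-supplied save callback.
typedef void (*SaveTileFn)(CGImageRef tile, size_t col, size_t row);

static void sliceImageIntoTiles(CGImageRef image, size_t tileSize, SaveTileFn save)
{
    size_t width  = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);

    for (size_t y = 0; y < height; y += tileSize) {
        for (size_t x = 0; x < width; x += tileSize) {
            CGRect rect = CGRectMake(x, y,
                                     (x + tileSize > width)  ? width  - x : tileSize,
                                     (y + tileSize > height) ? height - y : tileSize);
            CGImageRef tile = CGImageCreateWithImageInRect(image, rect);
            if (tile) {
                save(tile, x / tileSize, y / tileSize);
                CGImageRelease(tile);
            }
        }
    }
}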
RGB has a wider range of colors than CMYK.
CMYK is the range of colors that can be printed on white paper. It stands for Cyan, Magenta, Yellow and blacK (think of the four colors in your printer cartridges: CMY for the colors and K for black). Each channel goes on a scale of 0-100%, and mixing full C, M and Y gives a very dark grey rather than true black.
RGB is how monitors work: LCDs and CRTs build colors from Red, Green and Blue, each on a scale of 0-255, and 255 in all three channels makes white.
Since monitors are backlit, they can produce bright colors that printers can't, like vivid greens or pinks. A CMYK picture will look fine on screen, but an RGB picture will lose color in print (those vivid greens become matte).
For the iPhone, work in RGB, because:
- it works directly with RGB values
- you'll get precise color
- RGB takes less memory than CMYK (three channels instead of four)
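For what it's worth, the naive, profile-unaware conversion looks like this; real converters go through ICC profiles, so treat this only as an illustration of the relationship between the two models:

// Naive CMYK -> RGB, with all channels in the 0..1 range.
static void cmykToRGB(float c, float m, float y, float k,
                      float *r, float *g, float *b)
{
    *r = (1.0f - c) * (1.0f - k);
    *g = (1.0f - m) * (1.0f - k);
    *b = (1.0f - y) * (1.0f - k);
}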

GLPaint with white background

I'm trying to draw on a white background by reverse engineering GLPaint. I've gone through every combination of kSaturation, kLuminosity and glBlendFunc, and just about every combination I can think of for the brush texture (black on white, white on black, white on transparent, alias/no alias, etc.), but haven't stumbled upon the desired effect.
The best I've been able to achieve is with a white-on-transparent circle and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), but this still gives me a dull colour, and the semi-transparent outer bits are interpreted as black (i.e. dull green with black edges, instead of vibrant green with transparent edges). It's as though it still assumes I'm on a black background.
Any advice?
(source: straandlooper.com)
Did you also:
glEnable(GL_BLEND);
?
I believe this is what you want for it to work on White:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
Then use a white 'Particle.png' circle that feathers out to transparency (your 'best' example).
This should give the desired result.
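A minimal sketch of that setup (the white clear colour and the premultiplied-alpha assumption about the brush texture are mine, not something GLPaint does out of the box):

// Clear to white instead of GLPaint's default black.
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

// Premultiplied-alpha blending: the brush texture's RGB is assumed to already
// be multiplied by its alpha, which is why GL_ONE is used as the source factor.
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);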
Yup, glEnable(GL_BLEND) is in there.
I've basically started with GLPaint and changed the background to white and the rest of the code is the same. I have to assume that since GLPaint was made with black background, there's some bit of code that is set to blend ideally to black, and I don't know which switch to flip (or, indeed, what switches can be flipped. Heck, what switches even exist).
Here's what's there by default. Lemme know if you see anything awry...
glDisable(GL_DITHER)
glMatrixMode(GL_PROJECTION)
glMatrixMode(GL_MODELVIEW)
glEnable(GL_TEXTURE_2D)
glEnableClientState(GL_VERTEX_ARRAY)
glEnable(GL_BLEND)
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
glEnable(GL_POINT_SPRITE_OES)
glEnable(GL_COLOR_MATERIAL)
I don't mind telling you, I haven't a clue what any of that is.
-kev.

How do I get a colored texture brush to show up in Open GL ES on white background?

I want to draw with a texture brush, in a color of my choice, on a white background using OpenGLES.
I have a bitmap image which I use CG to load and turn into a texture. This bitmap is mostly black, but has a white circle in the center that I want to use as the "brush". In other words, I want the black part to vanish in the final compositing, but the white part to take on the color that I set using glColor.
The best I can get, with the blend parameters (GL_SRC_ALPHA, GL_ONE) after setting some opaque bright color, is a faded color line on a grey (not pure white) background. But when I set the background to pure white, the line isn't visible.
At least in the current situation the black edges of the original texture don't appear. Most other blend combinations I try cause either nothing to show up, even on grey, or the entire brush to show including the black edges, which is no good.
Is anyone willing to explain to me how I should set up my texture and/or GL states to make the bright color show through on pure white, without showing the black texture edges at all? This might be a newbie question, but I've tried working through the blend math, and I still just don't understand how the colors are all being factored together.
Here's the image I'm using as the brush:
http://www.coldcoffeeandjuice.com/OpaqueBrush.png
Here's some resulting output when the background is grey, glColor4f is set to (1, 0, 0, 1), i.e. pure red, and the brush is used on a bunch of consecutive GL_POINTs. What's good about this is that only the white part of the brush image shows the red, which is right. The bad part is that the red, which I want to be pure and bright, is pale due to being blended with the background (?) and/or the white of the brush (?), so it washes out entirely if the background is pure white. This uses the blend params given above (GL_SRC_ALPHA, GL_ONE).
(source: coldcoffeeandjuice.com)
Here's what I want to see, given the pure red color (thanks Paintbrush):
(source: coldcoffeeandjuice.com)
Can anyone help me understand what I'm doing wrong?
Thanks!
Ok, so you're trying to "paint" a given colour (in this case "red") on to a background, using a mask for the brush shape.
You need to do the following before you start rendering the "paint":
First make sure your brush has an alpha channel that corresponds with its shape - that is, the alpha channel should look similar to the brush image you posted.
Render with these states set:
// Make the current material colour track the current color
glEnable( GL_COLOR_MATERIAL );
// Multiply the texture colour by the material colour.
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
// Alpha blend each "dab" of paint onto background
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
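A rough draw sequence using those states; brushTexture, the point size and pointCount are placeholders for whatever your app already sets up:

// Assumes brushTexture has an alpha channel matching the brush shape.
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, brushTexture);

glEnable(GL_COLOR_MATERIAL);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

// The brush takes on this colour wherever its alpha is opaque.
glColor4f(1.0f, 0.0f, 0.0f, 1.0f);        // pure red

glEnable(GL_POINT_SPRITE_OES);
glTexEnvi(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glPointSize(32.0f);                       // brush size, arbitrary
glDrawArrays(GL_POINTS, 0, pointCount);   // vertex array bound elsewhere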
See also:
http://www.opengl.org/documentation/specs/man_pages/hardcopy/GL/html/gl/colormaterial.html
http://www.khronos.org/opengles/documentation/opengles1_0/html/glTexEnv.html