iPad UIColor Saturation Issues

I am trying to draw a UIColor on the screen of a view-based app, and I am trying to do so using HSB. It is absolutely necessary for me to use HSB in this case. I can create a UIColor object with any S value from 0.0f to 0.75f, but past that the numerical changes have no effect on the actual saturation displayed. I need it to be 1.0f, but it is still using 0.75f. Any ideas on why it is doing that, and how I can make it work?

Despite its name, + (UIColor *)colorWithHue:(CGFloat)hue saturation:(CGFloat)saturation brightness:(CGFloat)brightness alpha:(CGFloat)alpha does not keep HSBA values internally; it is simply a convenience wrapper around the device RGB color space.
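For reference, here is a minimal sketch of the standard HSB-to-RGB conversion such a wrapper has to perform; this is the textbook formula, not Apple's actual implementation:

    // Textbook HSB -> RGB conversion (a sketch of what the UIColor
    // convenience constructor effectively does; not Apple's real code).
    static void HSBToRGB(CGFloat h, CGFloat s, CGFloat v,
                         CGFloat *r, CGFloat *g, CGFloat *b) {
        CGFloat c  = v * s;                               // chroma
        CGFloat hp = fmod(h * 6.0, 6.0);                  // hue sector in [0, 6)
        CGFloat x  = c * (1.0 - fabs(fmod(hp, 2.0) - 1.0));
        CGFloat m  = v - c;                               // brightness offset
        CGFloat rr = 0, gg = 0, bb = 0;
        if      (hp < 1) { rr = c; gg = x; }
        else if (hp < 2) { rr = x; gg = c; }
        else if (hp < 3) { gg = c; bb = x; }
        else if (hp < 4) { gg = x; bb = c; }
        else if (hp < 5) { rr = x; bb = c; }
        else             { rr = c; bb = x; }
        *r = rr + m; *g = gg + m; *b = bb + m;
    }

Note that at full brightness every distinct (H, S, B) triple maps to a distinct (R, G, B) triple, so a saturation change from 0.75 to 1.0 should normally be visible.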
I suspect that in extreme cases a constant H/B/A plus a saturation in the 0.75-1 range could yield colors that differ so slightly the change becomes imperceptible, even though the components are tracked as very precise floats. As saturation drops, the number of "available" colors decreases (the display can only show so many colors, and dropping the saturation compresses the usable range), so the chance of such collisions rises.
Given that your scenario uses colors with H ranging from 0 to 1 and B and A fixed at 1, which nearly invalidates my assumption, I got curious and made a test project; the colors worked correctly, however. I'm on the iOS 4 SDK GM, so it might help to know which SDK you're working against.

After doing some experimentation, I've discovered what my issue was.
I was using a for loop to draw single-pixel lines across a view, each with a hue value greater than the previous one. I was doing this to create a color spectrum to be used for a color picker.
My issue arose because I was using CGContext paths, not rects, to do the drawing. A stroked path is, by default, centered on the path itself, so the stroke "straddles" the line you specify. Because I was setting the line width to one, Core Graphics was forced to antialias across adjacent pixels, averaging neighboring hues and creating a desaturated effect. Setting the width to two rendered the saturation correctly, but the gradient of the spectrum was no longer smooth.
My fix for this issue was to use rects instead of paths. They did not blend between pixels, and the saturation issue was fixed.
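For anyone hitting the same thing, a minimal sketch of the rect-based version, as it might look inside drawRect: (the bounds handling is illustrative):

    // Fill one-pixel-wide rects instead of stroking 1 pt paths, so
    // antialiasing never averages neighboring hues together.
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGFloat width  = self.bounds.size.width;
    CGFloat height = self.bounds.size.height;
    for (CGFloat x = 0; x < width; x += 1.0) {
        UIColor *color = [UIColor colorWithHue:x / width
                                    saturation:1.0
                                    brightness:1.0
                                         alpha:1.0];
        CGContextSetFillColorWithColor(ctx, color.CGColor);
        CGContextFillRect(ctx, CGRectMake(x, 0.0, 1.0, height));
    }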

Related

Is it possible to fill transparency with white in a texture in code?

I have some textures containing transparent parts (a donut, for example, which would show a transparent center). What I want to do is fill the middle of the donut (or anything else) with plain white, in code (I don't want to keep a duplicate of every asset that needs this tweak in one part of my game).
Is there a way to do this? Or do I really have to have 2 of each of my assets?
First of all, it is possible to change a transparent texture into a non-transparent one; if it weren't, graphics editors would be in trouble.
Solution 1 - Easy but takes repetitive editing by hand
The question you should be asking yourself is whether you can afford the conversion at run time, or whether having two sets of textures would be more efficient; from experience, I find that the latter tends to be more efficient.
Solution 2 - Extremely hard
You will need a shader that supports transparency and that marks the sections to be shaded white; that is, it keeps track of which area will later be filled with white. Since your "donut" is already transparent in places, it presumably already uses a texture with an alpha channel, but you will have to write your own shader mask and be able to distinguish which alpha regions are okay to fill white and which are not (the fun problem here). What you need to find is the condition under which a given alpha pixel no longer stays transparent and becomes white. Once that condition is met, you can change the transparency via the color's alpha property. The only way I see this working is if there is a pattern to the objects, so that you can apply some mathematical model to them and use it to decide which area gets filled. If the objects are very different, then making two sets of textures starts to look more appealing.
Solution 3 - Medium with high re-use value
You could edit the textures to use two key colors, say pink and green. Green marks the area that gets turned white, and pink is always transparent. When the green area should not be white, it is rendered transparent instead. You would have to edit your textures by hand as well, but one edited set covers both states.
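If the conversion can happen on the CPU, a rough sketch of that color-key pass, assuming the texture has already been read back into an RGBA8 buffer (the pixels/pixelCount names and the exact green/pink thresholds are placeholders):

    // Color-key pass over an RGBA8 buffer: green -> opaque white,
    // pink -> fully transparent. Thresholds are illustrative only.
    for (size_t i = 0; i < pixelCount; i++) {
        uint8_t *p = &pixels[i * 4];               // r, g, b, a
        BOOL isGreen = (p[1] > 200 && p[0] < 80 && p[2] < 80);
        BOOL isPink  = (p[0] > 200 && p[2] > 200 && p[1] < 80);
        if (isGreen) {
            p[0] = p[1] = p[2] = p[3] = 255;       // fill with white
        } else if (isPink) {
            p[3] = 0;                              // keep transparent
        }
    }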

GIMP will not blend completely to color using opacity and paint tool

I am stuck on a blending problem that appears to have only started once I started blending without any color. I am painting a grey suit and using shades to capture the lighting realistically. For some reason, when I paint with a dark grey over a light grey, with say 20% opacity, with enough strokes, the color I am painting will match the color in the color picker. With the reverse situation (light to dark), the paint tool never quite blends to the color in the color picker, it is always a shade or two off. No matter how many times I stroke the area, it will not become the color I have chosen. It has me dumbfounded and is crippling my ability to make light and shadow and show depth.
I have tried googling and messing with every possible option, deselecting all, triple checking what layer I am in, but I cannot seem to find anyone else with this problem...
I opened GIMP 2.8 (the stable version) and the development version of GIMP and tried a procedure like the one you describe:
Indeed, when working with 8-bit color, the way GIMP is structured internally will prevent the gray values from converging to a precise shade of gray in some cases. When GIMP blends a pixel 20% of the way between values such as 129 and 127, that "20%" is a 0.4-level darkening, which is rounded away to zero.
This certainly won't be fixed in the current stable GIMP, since it is fundamental to the way 8-bit color works. The unstable 2.9 version (which will eventually ship as GIMP 2.10) can be set to use higher color precision so that this behavior does not happen: with floating-point pixel values, you can get as close as you want to your shade of gray.
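A minimal numeric sketch of that rounding stall (illustrative values, not GIMP's internal code):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        // Painting value 127 at 20% opacity over a pixel valued 129:
        // each stroke moves the pixel by only 0.4 of a level, which is
        // less than half a level, so rounding leaves it unchanged.
        int pixel = 129, paint = 127;
        for (int stroke = 0; stroke < 100; stroke++) {
            pixel = (int)lroundf(pixel + 0.20f * (paint - pixel));
        }
        printf("%d\n", pixel);   // still 129 after 100 strokes
        return 0;
    }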
I'd suggest you either find a compiled "nightly" build of GIMP 2.9 for your system, or try some other way of painting: perhaps use shades of gray spread over a broader range of values, and once you are done, compress the result to the desired range with the Levels or Curves tool.
Anyway, this is off-topic here; if you have further doubts about painting, please take the question to graphicdesign.stackexchange.com

Pixel color matching estimate

For image scanning purposes, I'd like a pixel (which I can get from a UIImage) to match (for a certain percentage) to a pre-set color.
Say pink. When I scan the image for pixels that are pink, I want a function to return a percentage expressing how closely the pixel's RGB value matches my pre-set RGB value. This way I'd like all (well, most) pink pixels to become 'visible' to me, not just exact matches.
Is anyone familiar with such an approach? How would you do something like this?
Thanks in advance.
UPDATE: thank you all for your answers so far. I accepted the answer from Damien Pollet because it helped me further, and I came to the conclusion that calculating the vector difference between two RGB colors works perfectly for me (at this moment). It might need some tweaking over time, but for now I use the following (in Objective-C):
float difference = sqrtf(powf(red1 - red2, 2) + powf(green1 - green2, 2) + powf(blue1 - blue2, 2));
If this difference is below 85, I accept the color as my target color. Since my algorithm needs no precision, I'm ok with this solution :)
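Wrapped up as a small helper, the same test might look like this (the function name and the 0-255 component scale are my own; adjust the threshold to taste):

    #include <math.h>

    // Euclidean distance between two RGB colors (components 0-255),
    // accepted as a match when it falls below a threshold such as 85.
    static BOOL MatchesTargetColor(float r1, float g1, float b1,
                                   float r2, float g2, float b2,
                                   float threshold) {
        float dr = r1 - r2, dg = g1 - g2, db = b1 - b2;
        float difference = sqrtf(dr * dr + dg * dg + db * db);
        return difference < threshold;
    }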
UPDATE 2: in my search for more, I found the following URL, which might be quite (understatement) useful if you are looking for something similar.
http://www.sunsetlakesoftware.com/2010/10/22/gpu-accelerated-video-processing-mac-and-ios
I would say just compute the vector difference to your target color and check that its norm is less than some threshold. I suspect some color spaces are better than others at this, maybe HSL or L*a*b*, since they separate the brightness from the color hue itself and so might represent a small perceptual difference with a smaller color vector...
Also, see this related question
Scientific answer: you should convert both colors to the Lab color space and calculate the Euclidean distance there. That value is also called delta E.
The Lab space was developed (using test subjects) for exactly that reason: so that color pairs separated by equal distances in this space correspond to equal perceived color differences.
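As a rough sketch of that calculation (the CIE76 delta E, using the standard sRGB-to-Lab conversion with a D65 white point; this is textbook math, not code from a specific library):

    #include <math.h>

    static double PivotSRGB(double c) {   // undo sRGB gamma
        return (c > 0.04045) ? pow((c + 0.055) / 1.055, 2.4) : c / 12.92;
    }
    static double PivotXYZ(double t) {    // Lab transfer function
        return (t > 0.008856) ? pow(t, 1.0 / 3.0) : 7.787 * t + 16.0 / 116.0;
    }
    static void SRGBToLab(double r, double g, double b,
                          double *L, double *A, double *B) {
        r = PivotSRGB(r); g = PivotSRGB(g); b = PivotSRGB(b);
        // sRGB -> XYZ (D65), normalized by the white point
        double x = (r * 0.4124 + g * 0.3576 + b * 0.1805) / 0.95047;
        double y = (r * 0.2126 + g * 0.7152 + b * 0.0722) / 1.00000;
        double z = (r * 0.0193 + g * 0.1192 + b * 0.9505) / 1.08883;
        x = PivotXYZ(x); y = PivotXYZ(y); z = PivotXYZ(z);
        *L = 116.0 * y - 16.0;
        *A = 500.0 * (x - y);
        *B = 200.0 * (y - z);
    }
    static double DeltaE76(double L1, double a1, double b1,
                           double L2, double a2, double b2) {
        return sqrt(pow(L1 - L2, 2) + pow(a1 - a2, 2) + pow(b1 - b2, 2));
    }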
However, it sounds like you are not looking to match a specific color, but rather a color range (let's say all skin tones). That might require more user input than just a reference color plus a delta E tolerance:
a reference color with three tolerances, for hue, saturation, and brightness
a cloud of reference color samples
...

Histogram equalization with color correction (iPhone/objective-C)

I am trying to implement a histogram equalization method (HE) for a UIImage in my iphone app.
I read the following:
http://en.wikipedia.org/wiki/Histogram_equalization
But it says:
Still, it should be noted that applying the same method on the Red, Green, and Blue components of an RGB image may yield dramatic changes in the image's color balance since the relative distributions of the color channels change as a result of applying the algorithm. However, if the image is first converted to another color space, Lab color space, or HSL/HSV color space in particular, then the algorithm can be applied to the luminance or value channel without resulting in changes to the hue and saturation of the image.
So would this be a feasible approach?
Grab UIImage data and convert from RGB to HSL
Apply HE on luminance channel
Convert data back to RGB
Create new UIImage from data
Will this be slow, I wonder? Also, will I have to deal with 8/16/24-bit data differently, given that I have no idea what kind of image will be used with my app? Or can I assume 24-bit for images on the iPhone?
I would appreciate any pointers to objective-C code that does color corrected histogram equalization.
I have looked at the library below, but it does not do any color correction for HE:
http://code.google.com/p/simple-iphone-image-processing/source/browse/#svn/trunk/Classes%3Fstate%3Dclosed
Thanks!
Yes, you can do it this way; that will work. Yes, it will "cost more", since you have to do the conversion back and forth, but that's the price you pay if you don't want to affect the hue and saturation. Whether it's worth it for the images you're correcting depends on your application: are you OK with a performance hit in exchange for the best quality? You will likely only have to deal with 8-bit color components; you can assume "24-bit" for images, but that's 3 x 8-bit components. The only way to know your answers, though, is to try.
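For reference, a minimal sketch of the equalization step itself, assuming you have already extracted an 8-bit luminance (or value) channel into a flat buffer:

    // Classic 8-bit histogram equalization on a single channel.
    static void EqualizeChannel(uint8_t *buffer, size_t count) {
        size_t histogram[256] = {0};
        for (size_t i = 0; i < count; i++) histogram[buffer[i]]++;

        // Build the cumulative distribution function.
        size_t cdf[256], running = 0;
        for (int v = 0; v < 256; v++) { running += histogram[v]; cdf[v] = running; }

        // First non-zero CDF entry, so the darkest level maps to 0.
        size_t cdfMin = 0;
        for (int v = 0; v < 256; v++) if (cdf[v]) { cdfMin = cdf[v]; break; }
        if (count == 0 || count == cdfMin) return;   // empty or flat image

        // Remap every pixel through the normalized CDF.
        for (size_t i = 0; i < count; i++)
            buffer[i] = (uint8_t)(((cdf[buffer[i]] - cdfMin) * 255) / (count - cdfMin));
    }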
I recommend using the YUV color space, both for accuracy and for computational simplicity (it is a linear combination of RGB).
One method would be to apply histogram equalization to the RGB image (call the result Image2).
Then let the user choose what he wants: apply it only to the luminosity, or to all 3 channels.
For the first choice, take the U and V channels of the original image together with the Y channel of the equalized image and convert back to RGB.
For the second choice, just hand the user Image2.
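The RGB/YUV conversion itself is just two small matrix multiplications; a sketch using the common BT.601 coefficients (an assumption on my part; other variants exist):

    // RGB <-> YUV, BT.601 coefficients, all components as floats.
    static void RGBToYUV(float r, float g, float b,
                         float *y, float *u, float *v) {
        *y =  0.299f * r + 0.587f * g + 0.114f * b;
        *u = -0.147f * r - 0.289f * g + 0.436f * b;
        *v =  0.615f * r - 0.515f * g - 0.100f * b;
    }
    static void YUVToRGB(float y, float u, float v,
                         float *r, float *g, float *b) {
        *r = y + 1.140f * v;
        *g = y - 0.395f * u - 0.581f * v;
        *b = y + 2.032f * u;
    }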
Since, after the transformation, the values you equalize are continuous, you will have to apply some binning strategy, which results in a stepped histogram for the quantity you wish to equalize. You might therefore be able to speed this up by choosing a coarser bin size.
Just write the code and model applying HE to each of the RGB components. Although there is a lot of calculation across the 3 components, the speed is acceptable. In most cases the contrast is improved, but the "look" of the image is changed. So I agree with transforming the RGB into another space and then applying the HE there. I am still looking for the formula, and for the right color space for HE. Which color space is easiest?
I wrote the HE on the iPad platform, but I found that after opening a big image taken with my Canon, the whole program crashes after the UIPopoverController and UIImagePickerController calls. I think it may be because I am pushing the OS too hard, or because iOS allocates only a limited amount of memory for each app; if an app uses more than that preset amount, iOS kills it right away. So you must take care of the size of the input image, release unused memory, and watch for leaks. Using Xcode's Instruments tool to check for leaks is a must.

iPhone: How to Determine Average Light/Dark of an Area of an UIImage

I need to place labels with a transparent background over a variable-content UIImage. Readability will vary significantly depending on the relationship between the color of the label's text and the color/luminosity of the area of the image displayed under the label. Since the image will be constantly changing, the color of the label's text needs to change in sync.
I have found several techniques for determining the color, perceived luminosity etc of a single pixel. However, I need to rather quickly (while a view loads) determine the rough perceived color/luminosity of an area of the UIImage under the frame of the UILabel. I presume I will also need to measure the alpha because the same color/luminosity looks different at different alpha values.
Is there a way to calculate such a value for an area? Will I be reduced to simply summing pixels? If it comes to that, is there an algorithm to accomplish this?
I've thought of two possible approaches:
Perform some "folding" operations, i.e. combining pixels from one half of the area with pixels from the other half, and repeating until I get a single value. Would this be practical? How would you logically combine pixels to average their perceived color/luminosity?
Sample a statistically significant number of pixels in the area and then combine them (somehow) to get a rough measure.
I think this problem comes up a lot these days, with people being so fond of customizing backgrounds. It seems like something that would be worth my time to bang out a category or class to handle, and then share around.
What about simply outlining your text so that it shows on both dark and light backgrounds?
This is how it is handled in other situations where text must be displayed over a background with unknown content (for example, films with subtitles).
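If you do end up wanting the average rather than an outline, one cheap approximation is to let Core Graphics do the combining: draw the region into a 1x1 bitmap context and read back the lone pixel. The helper name and the Rec. 601 luminance weights below are my own choices, and the exact weighting of the downsample depends on Core Graphics' interpolation:

    // Approximate average luminance of `rect` (in image coordinates)
    // by collapsing that region into a single RGBA pixel.
    static CGFloat AverageLuminanceInRect(UIImage *image, CGRect rect) {
        unsigned char pixel[4] = {0, 0, 0, 0};
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(pixel, 1, 1, 8, 4, space,
            kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        CGColorSpaceRelease(space);

        // Crop to the area under the label, then draw it into one pixel.
        CGImageRef cropped = CGImageCreateWithImageInRect(image.CGImage, rect);
        CGContextDrawImage(ctx, CGRectMake(0, 0, 1, 1), cropped);
        CGImageRelease(cropped);
        CGContextRelease(ctx);

        // Perceived luminance, Rec. 601 weights, normalized to 0..1.
        return (0.299 * pixel[0] + 0.587 * pixel[1] + 0.114 * pixel[2]) / 255.0;
    }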