iPhone: How to Determine the Average Light/Dark of an Area of a UIImage

I need to place labels with a transparent background over a variable-content UIImage. Readability will vary significantly depending on the relationship between the color of the label's text and the color/luminosity of the area of the image displayed under the label. Since the image will be constantly changing, the color of the label's text needs to change in sync.
I have found several techniques for determining the color, perceived luminosity, etc., of a single pixel. However, I need to determine, fairly quickly (while a view loads), the rough perceived color/luminosity of the area of the UIImage under the frame of the UILabel. I presume I will also need to measure the alpha, because the same color/luminosity looks different at different alpha values.
Is there a way to calculate such a value for an area? Will I be reduced to simply summing pixels? If it comes to that, is there an algorithm to accomplish this?
I've thought of two possible approaches:
Perform some "folding" operations, i.e., combining pixels from one half of the area with those from the other half, then repeating until I get a single value. Would this be practical? How would you logically combine pixels to average their perceived color/luminosity?
Sample a statistically significant number of pixels in the area and then combine them (somehow) to get a rough measure.
I think this problem comes up a lot these days, with people being so fond of customizing backgrounds. Seems like something that would be worth my time to bang out a category or class to handle and then share around.
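As a rough sketch of the "combine the pixels" idea, assuming Swift and Core Graphics (the question doesn't specify either): you can let Core Graphics downscale the cropped region into a single pixel, which approximates the area's average color, and then weight the channels into a rough perceived luminance. The function name and the Rec. 601 weights (0.299/0.587/0.114) are illustrative choices, and converting the label's frame from view points into image pixel coordinates (contentMode, scale) is left out.

import UIKit

// Sketch only: approximate the average color of a region (in the image's pixel
// coordinates) by letting Core Graphics collapse it into a single pixel, then
// convert that to a rough perceived luminance in 0...1.
func averageLuminance(of image: UIImage, in region: CGRect) -> CGFloat? {
    guard let cropped = image.cgImage?.cropping(to: region) else { return nil }
    var rgba = [UInt8](repeating: 0, count: 4)
    let drew = rgba.withUnsafeMutableBytes { buffer -> Bool in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: 1, height: 1,
                                      bitsPerComponent: 8, bytesPerRow: 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        context.interpolationQuality = .medium
        // Drawing the cropped region into a 1x1 context averages its pixels.
        context.draw(cropped, in: CGRect(x: 0, y: 0, width: 1, height: 1))
        return true
    }
    guard drew else { return nil }
    let alpha = CGFloat(rgba[3]) / 255
    guard alpha > 0 else { return nil }          // region is fully transparent
    let r = CGFloat(rgba[0]) / 255 / alpha       // un-premultiply the channels
    let g = CGFloat(rgba[1]) / 255 / alpha
    let b = CGFloat(rgba[2]) / 255 / alpha
    return 0.299 * r + 0.587 * g + 0.114 * b     // Rec. 601 luma weights
}

A result below roughly 0.5 would suggest dark content (use light text) and above it light content; the exact threshold, and whether you also fold the averaged alpha into the decision, is a judgment call.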

What about simply outlining your text so that it shows up on both dark and light backgrounds?
This is how it is handled in other situations where text must be displayed over a background with unknown content (for example, films with subtitles).
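As a small illustration of that suggestion (my own sketch, not code from the answer; Swift/UIKit and the specific font, colors, and stroke width are assumptions), a stroked attributed string keeps a UILabel readable over both light and dark areas:

import UIKit

// Outline plus fill: a negative strokeWidth strokes the glyphs and keeps the fill,
// so white text with a black outline reads on almost any background.
func outlinedText(_ string: String) -> NSAttributedString {
    let attributes: [NSAttributedString.Key: Any] = [
        .font: UIFont.boldSystemFont(ofSize: 17),
        .foregroundColor: UIColor.white,   // fill color
        .strokeColor: UIColor.black,       // outline color
        .strokeWidth: -3.0                 // negative = stroke and fill; positive strokes only
    ]
    return NSAttributedString(string: string, attributes: attributes)
}
// Usage: someLabel.attributedText = outlinedText("Caption")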

Related

How to scale up an image while keeping the objects' sizes the same?

Suppose you have a file that includes objects, for example EE components like transistors and resistors, and you group them into one shape and then drag a corner to scale it up into a bigger figure.
How can I make sure that the components themselves are not scaled up and only the wiring changes?
The problem is that I have about 30 images of different sizes and I'm placing them side by side in a table with many images. If I keep the same scale, some images look small compared to the others, so I tried scaling them to the same size. However, that scales the components' sizes up as well, by different scale factors.
Here is an example of a circuit using the built-in shapes in Visio. As you can see, the components' sizes got bigger when I scaled up the object. This is usually desirable; however, in my specific case I want to keep the components' sizes the same.
Here is the Visio file, or I think you can also use any of the available components in Visio.
https://file.io/VRUCR8yVgYxs

Tableau: adjust mark size using a number

On the Marks card, clicking Size pops up a slider where I can adjust the size of a shape. But how can I control the size precisely? Is there some property with numbers to control it accurately? I have two sheets showing something similar, and I want to display shapes of exactly the same size.
If you want to ensure sizes are the same across two worksheets, I'd suggest snapping the Size setting to the center on both, as this is the easiest option to select. You can then use a measure to set the size, if that is desirable, and the difference in size will be relative on both worksheets.
There isn't a numerical value override for the size slider.
Ben is correct; there isn't yet a numerical override for the slider. You can use parameters with Min/Max/Sum etc. and a variable to change the sizes somewhat, but they have to have multiple entries per line. It is unfortunate that Tableau still doesn't get that people want both a 'relative' sizing system that uses numbers from the dataset and a 'static' sizing system that allows shapes to be set to, say, '11px'. Yes, you can sort of control this in the dashboard with a vertical container, 'fill entire box', etc., but that doesn't address the very real scenario where you want a user to be able to resize on the fly. Just my two cents.
I ran into this today. Very annoying. Need to keep shapes the same size across all worksheets and therefore same on dashboard.

Extract Rectangular Image from Scanned Image

I have scanned copies of currency notes from which I need to extract only the rectangular notes.
Although the scanned copies have a mostly blank background, the note itself may be rotated or correctly aligned. I'm using MATLAB.
Example input:
Example output:
I have tried using thresholding and Canny/Sobel edge detection, to no avail.
I also tried the solution given here but it detects the entire image for cropping and it would not work for rotated images.
PS: My primary objective is to determine the denomination of the currency. There are a couple of methods I thought I could use:
Color based, since all currency notes have varying primary colors. The advantage of this method is that it's independent of the rotation or scale of the input image.
Detect the small black triangle on the lower-left corner of the note. This shape is unique for each denomination.
Calculating the difference between two images. Since this is a small project, all input images will have the same dpi and resolution; hence, once aligned, the difference between the input image and the true (reference) images can give a rough estimate.
Which method do you think is the most viable?
It seems you are further along than it looked (judging from your comments), which is good! I'm going to show you, more or less, the way you can go about solving your problem; however, I'm not posting the whole code, just the important parts.
You already have an image that is fairly well cropped and segmented. First you need to ensure that your image has no holes, so fill them!
Iinv = (I == 0);                   % you want 1 in money, 0 in not-money
Ifill = imfill(Iinv, 8, 'holes');  % Fill holes
After that, you want to get only the boundary of the image:
Iedge=edge(Ifill);
And in the end you want to get the corners of that square:
C=corner(Iedge);
Now that you have the 4 corners, you should be able to work out the angle of this rotated "square" (for example, from the slope of one of the edges between two adjacent corners). Once you have it, do:
Irotate = imrotate(Icropped, angle);  % Icropped: the cropped note image you already have
Once here, you may want to crop it again to end up with just the money! (Ah, money, always the objective!)
Hope this helps!

Is it possible to fill transparency with white in a texture in code?

I have some textures containing transparent parts (a donut, for example, which has a transparent center). What I want to do is fill the middle of the donut (or anything else) with plain white, in code (I don't want to keep a duplicate of every asset that needs this tweak in one part of my game).
Is there a way to do this? Or do I really have to have 2 of each of my assets?
First, it is possible to change a transparent texture into a non-transparent one; if it weren't, graphics editors would be in trouble.
Solution 1 - Easy but takes repetitive editing by hand
The question you should be asking yourself is whether you can afford the conversion at run time, or whether having two sets of textures would be more efficient; from experience, I find that the latter tends to be more efficient.
Solution 2 - Extremely hard
You will need a shader that supports transparency and that marks the sections that have to be shaded white; that is, it keeps track of which areas will later be filled with white. Since your "donut" is already transparent in some parts, it presumably already uses a texture with an alpha channel, but you will have to write your own shader mask and be able to distinguish which areas are okay to fill with white and which are not (the fun problem here). What you need to do is find the condition under which that alpha no longer needs to be alpha and should be white instead. Once the condition is met, you can change the alpha via the color's alpha property. The only way I can see you doing this is if there is a pattern to the objects, so that you can apply some mathematical model to them and use it to find which areas get filled. If the objects are very different, then making two sets of textures starts to look more appealing.
Solution 3 - Medium with high re-use value
You could edit the textures to use two extra marker colors, say pink and green. Green is the area that gets turned white, and pink is always transparent. When green should not be white, it is rendered transparent instead. You would have to edit your textures by hand for this as well.
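As a rough, engine-agnostic sketch of the recolouring step in Solution 3 (the question doesn't name a platform, so Swift/Core Graphics, the function name, and the exact colour thresholds here are all assumptions): walk the pixel buffer once at load time, turning the green marker into opaque white and the pink marker into full transparency.

import UIKit

// Illustration only: remap marker colours baked into the texture.
// Green-ish pixels become opaque white, pink/magenta-ish pixels become transparent.
func remapMarkerColors(in image: UIImage) -> UIImage? {
    guard let cg = image.cgImage else { return nil }
    let width = cg.width, height = cg.height
    let bytesPerRow = width * 4
    var pixels = [UInt8](repeating: 0, count: bytesPerRow * height)
    var result: CGImage?

    pixels.withUnsafeMutableBytes { buffer in
        guard let ctx = CGContext(data: buffer.baseAddress,
                                  width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return }
        ctx.draw(cg, in: CGRect(x: 0, y: 0, width: width, height: height))

        for i in stride(from: 0, to: buffer.count, by: 4) {
            let r = buffer[i], g = buffer[i + 1], b = buffer[i + 2]
            if g > 200 && r < 60 && b < 60 {           // green marker -> opaque white
                buffer[i] = 255; buffer[i + 1] = 255; buffer[i + 2] = 255; buffer[i + 3] = 255
            } else if r > 200 && g < 60 && b > 200 {   // pink/magenta marker -> transparent
                buffer[i] = 0; buffer[i + 1] = 0; buffer[i + 2] = 0; buffer[i + 3] = 0
            }
        }
        result = ctx.makeImage()
    }
    return result.map { UIImage(cgImage: $0, scale: image.scale, orientation: image.imageOrientation) }
}

The dynamic case described above (green sometimes white, sometimes transparent) is usually better handled with the same per-pixel test in a fragment shader at draw time rather than on the CPU.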

Histogram of image

I have 2 images that look nearly identical. The histogram for one (256 bins) has intensities distributed pretty evenly throughout. The other has its intensities concentrated in the lowest and highest bins. Why would this be? Wouldn't the second image then appear binary? (That's not the case.)
Think about it this way: Imagine you are taking a histogram of two grayscale images with each pixel represented by a color value 0-255. One image contains pixels that all have gray levels of 128. The second image contains a "checkerboard" pattern (pixels alternate between 0 and 255). If you step back far enough that you no longer see individual pixels, they will appear identical to the naked eye. Your brain "averages" the alternating black and white pixels into a field of gray.
This is what your images are doing. The first image has colors distributed evenly throughout the range and the second image has concentrations of specific colors, but if you calculate an average color for the image (and also for sub-sections within the image) you should see similar values for both.
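A toy version of that thought experiment (not the poster's actual images; plain Swift arrays standing in for grayscale pixels) makes the point concrete: the flat image and the checkerboard have essentially the same mean, but opposite histograms.

// Flat mid-gray vs. 0/255 checkerboard: same average, very different histograms.
let side = 64
let flat = [UInt8](repeating: 128, count: side * side)
let checker = (0..<side * side).map { i -> UInt8 in
    let x = i % side, y = i / side
    return (x + y) % 2 == 0 ? 0 : 255
}

func stats(_ pixels: [UInt8]) -> (mean: Double, histogram: [Int]) {
    var histogram = [Int](repeating: 0, count: 256)
    for p in pixels { histogram[Int(p)] += 1 }
    let mean = pixels.reduce(0.0) { $0 + Double($1) } / Double(pixels.count)
    return (mean, histogram)
}

let (flatMean, flatHist) = stats(flat)
let (checkerMean, checkerHist) = stats(checker)
print(flatMean, checkerMean)                            // 128.0 vs 127.5
print(flatHist[128], checkerHist[0], checkerHist[255])  // 4096 vs 2048 and 2048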
Never trust your eyes! They will always lie to you.
Consider this silly example, which can be illustrative here: an X-ray 'photo' is nothing more than black and white dots, but because they are small and mixed across the image, your eyes see different shades of gray.
The same can happen in a digital image where, although the pixels may all be the same size, they can be black and white and 'distributed' in the image in such a way that you see it as having more gray levels. This is called halftone.
Without seeing the images it's hard to say, but it sounds like the second may be slightly clipped.
The difference could also just be a slight difference in contrast between the images that's not visible to the naked eye.