Scale down a number from a 0-800 range to a 0-1-0 range

I'm not math savvy, but what I want to do seems math-heavy.
I'm looking to scale down a number that ranges from 0-800 so that the result runs 0-1-0 (with 1 at 400). Not sure if this is possible, but my attempts at a solution have not been fruitful. Any indication as to where to look for a solution would be of great benefit!
It's so I can change the alpha of images depending on their screen location - the centre-most images should be 100% visible, while the images should become more transparent as they get further towards the edge of the screen. The range for alpha is 0-1.
Kind regards, and thanks in advance!

Two mappings do this with a = 800. The first is the triangular 1 - abs(1 - 2x/a), which rises linearly from 0 at x = 0 to 1 at x = 400 and back down to 0 at x = 800. The second is sin(pi*x/a), which does the same but smoothly.
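
A minimal sketch of both mappings in TypeScript, assuming x runs from 0 to 800 and the result is used directly as the alpha value (the function names are mine, not from the answer):

const a = 800;

// Triangular: climbs linearly from 0 at x = 0 to 1 at x = a/2, then back to 0.
const alphaTriangular = (x: number): number => 1 - Math.abs(1 - (2 * x) / a);

// Smooth alternative: a sine arch with the same endpoints and peak.
const alphaSine = (x: number): number => Math.sin((Math.PI * x) / a);

// alphaTriangular(0) === 0, alphaTriangular(400) === 1, alphaTriangular(800) === 0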

Related

Calculating transform-origin of two overlapping elements

I have an image (represented as green) overlaying a box (represented as blue), and the image is going to be transform: scale()ing in size. When this happens I need all edges of the image to complete their transformation at the same time.
To do this I need to calculate the transform-origin based on where the image sits on top of the bounding box, using JavaScript. Assume I know all the coordinates that getBoundingClientRect() provides, for both elements.
In the six examples below I’ve placed a red dot where the transform-origin percentages should intersect.
I just can’t figure out the math to get there. The closest I've come to finding an answer is with this question, but it's a little vague and I'm not sure I fully understand the answer itself. I would greatly appreciate help with this, and will happily provide more details if I'm missing something.
After fiddling around, I figured out the formula is:
(
(box.left - image.left) /
(image.width - box.width)
) * 100
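
Wrapped up as a function, a hedged sketch of that formula in TypeScript, extended with the analogous top/height version for the Y axis (the names are illustrative, and it assumes the image is larger than the box on each axis, as in the question):

function transformOrigin(imageEl: Element, boxEl: Element): string {
  const image = imageEl.getBoundingClientRect();
  const box = boxEl.getBoundingClientRect();
  // X: the formula above; Y: the same idea using top/height.
  const x = ((box.left - image.left) / (image.width - box.width)) * 100;
  const y = ((box.top - image.top) / (image.height - box.height)) * 100;
  return `${x}% ${y}%`;
}

// Usage: el.style.transformOrigin = transformOrigin(imageEl, boxEl);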

Unity Image.FillAmount not working as expected

I'm working on something that relies on the fill amount of the image.
However, the fill amount seems to be offset somehow; I'm unsure. Here is my evidence: with the amount set to 0.2 the fill is clearly visible, but setting it to 0.1 shows almost nothing.
I would have thought that 0.1 would show half of what 0.2 shows. Does Unity therefore actually measure fill amount from 0.1 -> 1, instead of 0 -> 1 as the documentation suggests, or am I just being stupid?
Think I know the answer to this one.
I am assuming your image is not clipped right to its edges and has a transparent border? Also ensure the image is a power of 2 when you import it, as forcing a power of 2 in the texture import settings can also introduce a border.
Having a border would mislead you into thinking nothing is showing up until 0.1, when in fact the fill is only revealing the transparent border.

Extract Rectangular Image from Scanned Image

I have scanned copies of currency notes from which I need to extract only the rectangular notes.
Although the scanned copies have a mostly blank background, the note itself may be rotated or correctly aligned. I'm using MATLAB.
Example input:
Example output:
I have tried using thresholding and canny/sobel edge detection to no avail.
I also tried the solution given here but it detects the entire image for cropping and it would not work for rotated images.
PS: My primary objective is to determine the denomination of the currency. There are a few methods I thought I could use:
1. Color-based, since all currency notes have varying primary colors. The advantage of this method is that it's independent of the rotation or scale of the input image.
2. Detect the small black triangle on the lower left corner of the note. This shape is unique for each denomination.
3. Calculate the difference between 2 images. Since this is a small project, all input images will be of the same dpi and resolution, so once aligned, the difference between the input and the reference ("true") images can give a rough estimate.
Which method do you think is the most viable?
It seems you are further along than it first looked (judging by your comments), which is good! I'm going to show you more or less the way you can go about solving your problem; however, I'm not posting the whole code, just the important parts.
You have an image that is already quite well cropped and segmented. First you need to ensure that your mask has no holes. So fill them!
Iinv = I == 0; % you want 1 in money, 0 in not-money
Ifill = imfill(Iinv, 8, 'holes'); % fill holes
After that, you want to get only the boundary of the image:
Iedge=edge(Ifill);
And in the end you want to get the corners of that square:
C=corner(Iedge);
Now that you have 4 corners, you should be able to work out the angle of this rotated "square" (one way to estimate it is sketched just below). Once you have it, do:
Irotate = imrotate(Icropped, angle);
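
The original answer leaves the angle step out, so here is one hedged sketch of it, written as language-agnostic TypeScript (all names are illustrative; in MATLAB the equivalent call is atan2d on the chosen corner pair):

type Pt = { x: number; y: number };

function rotationAngleDegrees(corners: Pt[]): number {
  // Treat the two lowest corners (largest y) as the bottom edge of the note.
  const lowest = [...corners].sort((a, b) => b.y - a.y).slice(0, 2);
  const [p1, p2] = lowest.sort((a, b) => a.x - b.x);
  // Angle of that edge relative to the horizontal axis.
  return (Math.atan2(p2.y - p1.y, p2.x - p1.x) * 180) / Math.PI;
}

The sign convention depends on the image's y-axis direction, so you may need to negate the angle before passing it to imrotate.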
Once here, you may want to crop it again to end up with just the money! (Ah, money, always the objective!)
Hope this helps!

D3 Stacked Bar Chart outer padding

I've been working on adapting the stacked bar chart example (http://bl.ocks.org/mbostock/1134768). The problem I'm having is that there's always outer padding. The API lists the outer padding as a third option, but omitting it or setting it to 0 still leaves some padding. In most cases it isn't too bad, but with large data sets it tends to be a huge amount of padding. For all the code relevant to my issue, you can check the link above. It's not very noticeable in that example, but the first bar isn't drawn until about 12 pixels in (with the larger data sets I'm using, this can be 100 or more pixels); I want it to start at 0 pixels.
Thanks! If you need any more explanation, just let me know and I'll do my best.
EDIT: After testing, it appears rangeBands() starts at 0, but I'm still not sure why the rounding from rangeRoundBands() would round as much as it did. Oh well, I can deal with using rangeBands().
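
For reference, a small sketch of that workaround against the D3 v3 ordinal-scale API used in the linked example (it assumes d3 v3 is loaded globally, with data and width in scope; the domain accessor is illustrative):

const x = d3.scale.ordinal()
    .domain(data.map(d => d.name))     // illustrative accessor
    .rangeBands([0, width], 0.1, 0);   // inner padding 0.1, outer padding 0

// The example uses .rangeRoundBands([0, width], 0.1), which snaps band
// widths to whole pixels and can shift the first band away from 0;
// .rangeBands() keeps the first band exactly at the start of the range.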

Pixel color matching estimate

For image scanning purposes, I'd like a pixel (which I can get from a UIImage) to match a pre-set color to within a certain percentage.
Say pink. When I scan the image for pixels that are pink, I want a function to return a percentage expressing how closely the pixel's RGB value matches my pre-set RGB value. This way I'd like all (well, most) pink pixels to become 'visible' to me, not just exact matches.
Is anyone familiar with such an approach? How would you do something like this?
Thanks in advance.
UPDATE: thank you all for your answers so far. I accepted the answer from Damien Pollet because it helped me further, and I came to the conclusion that calculating the vector difference between two RGB colors does it perfectly for me (at this moment). It might need some tweaking over time, but for now I use the following (in Objective-C):
float difference = pow( pow((red1 - red2), 2) + pow((green1 - green2), 2) + pow((blue1 - blue2), 2), 0.5 );
If this difference is below 85, I accept the color as my target color. Since my algorithm doesn't need much precision, I'm OK with this solution :)
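
In TypeScript, a minimal sketch of the same idea, normalizing the distance into a 0-100% match score (the threshold of 85 is the one from the update above; everything else is illustrative):

type RGB = { r: number; g: number; b: number }; // components in 0-255

function matchPercent(a: RGB, b: RGB): number {
  const distance = Math.hypot(a.r - b.r, a.g - b.g, a.b - b.b);
  const maxDistance = Math.hypot(255, 255, 255); // black vs. white, ~441.67
  return (1 - distance / maxDistance) * 100;
}

function isTargetColor(a: RGB, b: RGB, threshold = 85): boolean {
  return Math.hypot(a.r - b.r, a.g - b.g, a.b - b.b) < threshold;
}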
UPDATE 2: in my search for more, I found the following URL, which might be quite (understatement) useful for you if you are looking for something similar.
http://www.sunsetlakesoftware.com/2010/10/22/gpu-accelerated-video-processing-mac-and-ios
I would say just compute the vector difference to your target color, and check that its norm is less than some threshold. I suspect some color spaces are better than others at this, maybe HSL or L*a*b*, since they separate the brightness from the color hue itself, and so might represent a small perceptual difference by a smaller color vector...
Also, see this related question
Scientific answer: you should convert both colors to the LAB color space and calculate the Euclidean distance there. That value is also called deltaE.
The LAB space was developed (using test subjects) for exactly that reason: so that color pairs with equal distances in this space correspond to equal perceived color differences.
However, it sounds like you are not looking to match one specific color, but rather a color range (let's say all skin tones). That might require more user input than just a reference color + a deltaE tolerance:
a reference color with 3 tolerances for hue, saturation and brightness
a cloud of reference color samples
...
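
A minimal sketch of that deltaE suggestion, assuming sRGB input and a D65 white point; this is the simple deltaE 76, i.e. the Euclidean distance in L*a*b*:

type RGB = { r: number; g: number; b: number }; // sRGB, 0-255
type Lab = { L: number; a: number; b: number };

function srgbToLab({ r, g, b }: RGB): Lab {
  // 1. Undo the sRGB gamma curve to get linear RGB in 0-1.
  const [R, G, B] = [r, g, b].map(v => {
    const c = v / 255;
    return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  // 2. Linear RGB -> CIE XYZ (sRGB matrix, D65 white).
  const X = 0.4124 * R + 0.3576 * G + 0.1805 * B;
  const Y = 0.2126 * R + 0.7152 * G + 0.0722 * B;
  const Z = 0.0193 * R + 0.1192 * G + 0.9505 * B;
  // 3. XYZ -> L*a*b*, normalized by the D65 reference white.
  const f = (t: number) => (t > 0.008856 ? Math.cbrt(t) : 7.787 * t + 16 / 116);
  const [fx, fy, fz] = [X / 0.95047, Y / 1.0, Z / 1.08883].map(f);
  return { L: 116 * fy - 16, a: 500 * (fx - fy), b: 200 * (fy - fz) };
}

function deltaE76(c1: RGB, c2: RGB): number {
  const p = srgbToLab(c1);
  const q = srgbToLab(c2);
  return Math.hypot(p.L - q.L, p.a - q.a, p.b - q.b);
}

As a rule of thumb, a deltaE of about 2 or less is barely perceptible; the tolerance for a broad range like "pink enough" would need to be picked empirically.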