I am trying to replicate the behavior of CGContextClipToMask on iOS, without any luck so far. Does anyone know how CGContextClipToMask works internally? I have read the documentation, and it says it simply multiplies the image alpha value by the mask alpha value, so that is what I am doing in my custom function. However, when I draw the resulting image onto a CGContext multiple times using normal blending, the result gets darker and darker, whereas with CGContextClipToMask the result is correct and does not get darker.
My theory is that CGContextClipToMask somehow uses the destination context, in addition to the image and the mask, to produce a correct result, but I just don't know enough about Core Graphics to be sure.
I've also read this question:
How to get the real RGBA or ARGB color values without premultiplied alpha?
and it's possible I am running into this problem, but then how does CGContextClipToMask get around the problem of precision loss with 8-bit alpha?
I found the problem. When multiplying by the mask, I had to call ceilf on the RGB values, like so:
float alphaPercent = (float)maskAlpha / 255.0f;
pixelDest->A = maskAlpha;                                   // the mask coverage becomes the new alpha
pixelDest->R = ceilf((float)pixelDest->R * alphaPercent);   // round up instead of truncating
pixelDest->G = ceilf((float)pixelDest->G * alphaPercent);
pixelDest->B = ceilf((float)pixelDest->B * alphaPercent);
Amazingly, this solves the problem... I assume it is because plain truncation rounds every premultiplied channel down slightly, and that loss compounds each time the result is blended again.
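For what it's worth, here is a tiny standalone C check of the rounding difference (the 200 and 128 are just made-up example values):

#include <math.h>
#include <stdio.h>

int main(void) {
    unsigned char r = 200, maskAlpha = 128;            /* arbitrary example values */
    float alphaPercent = (float)maskAlpha / 255.0f;
    float scaled = (float)r * alphaPercent;            /* about 100.39 */
    printf("truncated: %d\n", (int)scaled);            /* 100, rounds down */
    printf("ceilf:     %d\n", (int)ceilf(scaled));     /* 101, rounds up   */
    return 0;
}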
I personally have no idea about the internals of CGContextClipToMask, but maybe you can find out something interesting by having a look at how it is implemented in GNUstep.
PS:
Thinking about it a bit more, when you write "multiplies the image alpha value by the mask alpha value", something strikes my attention.
Indeed, as far as I know, a mask on iOS has no alpha channel, and all my attempts to use alpha-channeled masks have given wrong results. Could this help you get a bit further?
Maybe the trick is multiplying the image by the mask (which is defined in the gray color space and has just one component), and leaving the image's alpha channel unchanged, as in the sketch below.
How does that sound to you?
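In case it helps, here is a rough C sketch of what I mean; the pixel struct and function name are just placeholders I made up:

#include <math.h>

/* Hypothetical 8-bit RGBA pixel. */
typedef struct { unsigned char R, G, B, A; } PixelRGBA8;

/* Multiply only the colour channels by the single-component (gray) mask value,
   leaving the image's own alpha untouched. */
static void applyGrayMask(PixelRGBA8 *pixel, unsigned char maskGray)
{
    float m = (float)maskGray / 255.0f;
    pixel->R = (unsigned char)ceilf((float)pixel->R * m);
    pixel->G = (unsigned char)ceilf((float)pixel->G * m);
    pixel->B = (unsigned char)ceilf((float)pixel->B * m);
    /* pixel->A deliberately left unchanged */
}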
I have an image:
The upper part of the image has an alpha value of 1 (255 in RGBA).
The lower part has an alpha value of 0.3; I use it for a shadow in the game.
So when I import it into Unity Shader Graph as _MainTex and split out its alpha, it looks like this:
imported alpha
My first question is:
"alpha" is actually a VECTOR 1 type in Unity Documention, but as I could see from the preview, there are three colors, black indicates alpha's value 0, hard white for alpha's value 1 and soft white for alpha's value 0.3, how can one single value transfer so much messages?
My first understanding is:
each pixel's alpha value is already stored in the image, and the "alpha" output in Shader Graph is just like a global parameter that controls them on a per-pixel basis. [I don't know if this is correct]
But when I feed alpha through a Smoothstep node, intending to set the alpha of every pixel below 0.3 to 0, I found it works like this:
smoothstep added to the alpha
As you can see, 0.3 < 0.99, so the translucency of the image is removed!
So here comes my second question:
Since "alpha" in the input works like a global parameter, how does it affect a picture separately?
My second understanding is:
"alpha" is just like an one-dimensional array, it stores transparency likes this:
{1,1,1,0.3,0.3,0.3}
and when it is passed through smoothstep, its values are changed to this:
{1,1,1,0,0,0}
But that brings me back to my first question: alpha is a Vector1 type, it only has one value to edit in the node, so it cannot be an array!
So how does an image's alpha carry so much information to other nodes in Unity Shader Graph?
https://docs.unity3d.com/Packages/com.unity.shadergraph@6.9/manual/Data-Types.html
https://docs.unity3d.com/Packages/com.unity.shadergraph@6.9/manual/Smoothstep-Node.html
Anyone who can help me would be really appreciated!
Shaders work in parallel: for any given vertex or pixel you only get data local to that element. Also, critically, 'pixel' (or 'fragment') here means a screen pixel, not a texel, which refers to a texture's pixel.
In this context, the output of the texture node is a single rgba Vector4 (4 scalar values) at the provided coordinate. This is disconnected from how textures are stored: filtering, compression and mipmapping will come into play (and the control over this comes from the sampler, which you can also provide to the node even though it's most of the time implicit).
Smoothstep is a function that can remap a value (a vector, like the rgba output of the tex node, or a scalar, like the alpha) into another range. More specifically, it smooths both ends of the range so that the slope is 0 at min and max. The linear equivalent is inverse lerp (which doesn't have a built-in instruction in HLSL). You can read about the breakdown on the Wikipedia page: https://www.wikiwand.com/en/Smoothstep
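If it helps to see it outside of a graph, here is roughly what the node computes per pixel, written as plain C; the 0.99/1.0 edge values are only a guess at what the question's graph uses:

#include <stdio.h>

static float clamp01(float x) { return x < 0.0f ? 0.0f : (x > 1.0f ? 1.0f : x); }

/* Standard smoothstep: remaps x from [edge0, edge1] to [0, 1] with zero slope at both ends. */
static float smoothstep(float edge0, float edge1, float x)
{
    float t = clamp01((x - edge0) / (edge1 - edge0));
    return t * t * (3.0f - 2.0f * t);
}

int main(void)
{
    /* The GPU runs this once per pixel; each invocation only ever sees its own alpha value. */
    float alphas[] = { 1.0f, 1.0f, 1.0f, 0.3f, 0.3f, 0.3f };   /* example values from the question */
    for (int i = 0; i < 6; i++)
        printf("%.2f -> %.2f\n", alphas[i], smoothstep(0.99f, 1.0f, alphas[i]));
    return 0;
}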
I have seen many tutorials where people blend two images that are placed on top of each other very nicely in Photoshop. For example, here are two images placed on top of each other:
Then in Photoshop, after some work, the edges (around the smaller image) are erased and the two images are nicely mixed.
For example, this is a possible end result:
As can be seen, there is no visible edge and the two images are blended very nicely, without blurring.
Can someone point me to an article or post that shows the math behind it? If there is MATLAB code that can do it, that would be even better. Or at least, can someone tell me the correct term for this, so I can do a Google search on the topic?
Straight alpha blending alone is not sufficient, as it will perform a uniform mixing of the two images.
To achieve nice-looking results, you will need to define an alpha map, i.e. an image of the same size where you adjust the degree of transparency depending on the image that should dominate.
To obtain the mask, you can draw it by hand, for example as a filled outline, as a path or a polygon. Then you have to strongly blur this mask to get a smooth blend.
It looks very difficult (if not impossible) to automate this, as no software can guess what you want to enhance.
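To make the "blur the mask" step concrete, here is a naive C sketch (a simple box blur as a stand-in; in practice you would use a proper Gaussian blur or repeated box blurs):

/* Naive box blur of a binary mask (values 0..255), radius r.
   mask and soft are w*h single-channel buffers. */
static void boxBlurMask(const unsigned char *mask, unsigned char *soft,
                        int w, int h, int r)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int sum = 0, count = 0;
            for (int dy = -r; dy <= r; dy++) {
                for (int dx = -r; dx <= r; dx++) {
                    int xx = x + dx, yy = y + dy;
                    if (xx < 0 || yy < 0 || xx >= w || yy >= h) continue;
                    sum += mask[yy * w + xx];
                    count++;
                }
            }
            soft[y * w + x] = (unsigned char)(sum / count);
        }
    }
}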
The term you are looking for is alpha blending.
https://en.wikipedia.org/wiki/Alpha_compositing#Alpha_blending
The maths behind it boils down to alpha-weighted sums.
MATLAB provides the function imfuse to achieve this:
https://de.mathworks.com/help/images/ref/imfuse.html
Edit: (as it still seems to be unclear)
Let's say you have two images, A and B, which you want to blend.
You put one image over the other, so for each coordinate you have two RGB tuples.
Now you need to define the weight of both images: will you only see the colour of image A or of image B, or in which ratio will you mix them?
This is done by alpha values.
So all you need is a 2d function that defines the mixing ratio for each pixel.
Usually you have values between 0 and 1 where 0 shows one image, 1 shows the other image, 0.5 will mix them both equally and so on...
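In code, the per-pixel weighted sum looks roughly like this (plain C, one colour channel shown; the images and the alpha map are assumed to be same-size buffers):

/* Blend two same-size single-channel images with a per-pixel alpha map in [0, 1]. */
static void alphaBlend(const unsigned char *a, const unsigned char *b,
                       const float *alpha, unsigned char *out, int count)
{
    for (int i = 0; i < count; i++) {
        float w = alpha[i];   /* 1 -> only image A, 0 -> only image B, 0.5 -> equal mix */
        out[i] = (unsigned char)(w * a[i] + (1.0f - w) * b[i] + 0.5f);
    }
}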
Just read the article I have linked. It gives you a clear mathematical definition. I can't provide more detail than that.
If you have problems understanding that I urge you to read a book on image processing fundamentals.
As part of my initial research, to see if using Cairo is a good fit for us, I'm looking to see if I can obtain an (x,y) point at a given distance from the start of a path. I have looked over the Cairo examples and APIs but I haven't found anything to suggest this is possible. It would be a pain if we had to build our own Bezier path implementation from scratch.
On Android there is a class called PathMeasure. This allows getting an (x,y) point at a given distance from the start of the path, which lets me easily draw a stamp at the desired distance and produce something like the image below.
Hopefully someone can point me in the right direction.
Unless I have an incomplete understanding of what you mean by "path", it seems that you can accomplish the task by starting from this guide. You would use multiple cr transforms (image location, rotation, scale) and just one image instance.
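As far as I know Cairo has no direct equivalent of Android's PathMeasure, but cairo_copy_path_flat flattens the current path into line segments, and measuring along those is straightforward. A rough sketch (CLOSE_PATH handling omitted for brevity):

#include <math.h>
#include <cairo.h>

/* Return in (*px, *py) the point at 'target' distance along the current path of cr.
   Returns 1 on success, 0 if the path is shorter than 'target'. */
static int point_at_distance(cairo_t *cr, double target, double *px, double *py)
{
    cairo_path_t *path = cairo_copy_path_flat(cr);   /* only MOVE_TO / LINE_TO / CLOSE_PATH */
    double cx = 0, cy = 0, travelled = 0;
    int found = 0;

    for (int i = 0; i < path->num_data && !found; i += path->data[i].header.length) {
        cairo_path_data_t *d = &path->data[i];
        if (d->header.type == CAIRO_PATH_MOVE_TO) {
            cx = d[1].point.x;
            cy = d[1].point.y;
        } else if (d->header.type == CAIRO_PATH_LINE_TO) {
            double nx = d[1].point.x, ny = d[1].point.y;
            double len = hypot(nx - cx, ny - cy);
            if (travelled + len >= target && len > 0) {
                double t = (target - travelled) / len;
                *px = cx + t * (nx - cx);
                *py = cy + t * (ny - cy);
                found = 1;
            }
            travelled += len;
            cx = nx;
            cy = ny;
        }
    }
    cairo_path_destroy(path);
    return found;
}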
From what I can understand from your image, you'll need to use blending (i.e. the alpha channel). I would say set the alpha channel (transparency) pixel by pixel, proportional to (or equal to) your original grayscale values, and set all the R, G, B pixel values to black (0).
To act directly on your input image (the file you will be loading): by googling "convert grayscale image to alpha" I found several results for Photoshop and some for GIMP; I don't know what you have available.
Otherwise you will have to do it directly in your code by accessing the image pixels. To read/edit pixel values you can use cairo_image_surface_get_data. First you have to create a destination image with cairo_image_surface_create, using the format CAIRO_FORMAT_ARGB32.
Similarly, you can use cairo_mask, drawing a black rectangle the size of your image, after having created an alpha-channel image of format CAIRO_FORMAT_A8 from your original image (again, going pixel by pixel seems to be the only possible way, given the limitations of cairo_image_surface_create_from_png).
Using cairo_paint_with_alpha in place of cairo_paint is not suitable because the alpha channel would be constant for the whole image.
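A rough sketch of that second approach, assuming 'gray' is a hypothetical w*h buffer that already holds the 8-bit grayscale values of your image:

#include <cairo.h>

static void paint_gray_as_alpha(cairo_t *cr, const unsigned char *gray, int w, int h)
{
    cairo_surface_t *mask = cairo_image_surface_create(CAIRO_FORMAT_A8, w, h);
    int stride = cairo_image_surface_get_stride(mask);

    /* Copy the grayscale values into the mask's alpha channel. */
    cairo_surface_flush(mask);
    unsigned char *data = cairo_image_surface_get_data(mask);
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            data[y * stride + x] = gray[y * w + x];
    cairo_surface_mark_dirty(mask);

    cairo_set_source_rgb(cr, 0, 0, 0);    /* black source...              */
    cairo_mask_surface(cr, mask, 0, 0);   /* ...painted through the alpha */
    cairo_surface_destroy(mask);
}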
I am trying to add a CCLabelTTF to my Cocos2d project and have the text be an inverted version of the graphics behind it.
I am having a hard time figuring out what blend func to use.
I have to admit I do not really understand the concepts behind this, so I am basically just trying different modes.
I have tried several types:
This one inverts the background of the text, but leaves the text white:
[fontLabel setBlendFunc:(ccBlendFunc){GL_ONE_MINUS_DST_COLOR, GL_SRC_ALPHA}];
Can you help me in the right direction?
I want the text to be inverted, and the background to be invisible.
You can visually experiment with the various blendfunc methods with the aptly named Visual glBlendFunc tool.
You should also be aware that CCLabelTTF uses 8-Bit (alpha mask, kCCTexture2DPixelFormat_A8) textures on 1st and 2nd generation devices, and 16-Bit (alpha+intensity mask, kCCTexture2DPixelFormat_AI88) textures on 3rd generation and newer devices. This may or may not affect the blend mode results, or even make it impossible because the textures don't contain color information, only alpha.
It cannot be done with glBlendFunc alone. The blending equation looks like this:
result = A * front_color OP B * back_color;
OpenGL allows you to configure A, B - glBlendFunc(A, B);
and OP (operation) - glBlendEquation(OP);
To invert colors, you need
result = 1 - back_color;
You can do that by setting A = 1, B = 1, OP = FUNC_SUBTRACT, but you will have to set front_color to (1,1,1,1) in the fragment shader.
P.S. I might be wrong, so write a comment below and I will change my answer.
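For reference, the raw GL setup described above would be something like the sketch below (not cocos2d API; on older ES 1.1 devices the equation call is the glBlendEquationOES extension, and the label's fragment colour must come out as pure white):

#include <OpenGLES/ES2/gl.h>

/* Call before drawing the label: result = 1 * src - 1 * dst = 1 - dst
   wherever the (white) glyph covers. */
static void setInvertBlend(void)
{
    glBlendFunc(GL_ONE, GL_ONE);          /* A = 1, B = 1              */
    glBlendEquation(GL_FUNC_SUBTRACT);    /* OP = subtract (src - dst) */
}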
I would like to extract the white/bright areas of an image and place custom objects in those areas. I need to know which framework to work with. If anyone has done something similar, I would appreciate an answer. I know how to get pixel values; however, the hard part is creating a bloom/star effect in those highlighted areas.
You could make a mask where the luminance value is above a threshold, then blur (or otherwise soften) the mask and composite it over the image.
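A minimal sketch of the thresholding step in plain C, using the Rec. 709 luminance weights; 'rgba' is assumed to be a w*h*4 interleaved RGBA buffer, and the blur and compositing steps are left out:

/* Build an 8-bit mask that is 255 where luminance exceeds the threshold and 0 elsewhere. */
static void luminanceMask(const unsigned char *rgba, unsigned char *mask,
                          int w, int h, float threshold /* 0..255 */)
{
    for (int i = 0; i < w * h; i++) {
        const unsigned char *p = &rgba[i * 4];
        float luma = 0.2126f * p[0] + 0.7152f * p[1] + 0.0722f * p[2];
        mask[i] = (luma > threshold) ? 255 : 0;
    }
}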