Mapping CoreGraphics Blend Modes to Porter-Duff - iPhone

I have an iPhone app that does image manipulation by blending two UIImage objects via CoreGraphics, specifically CGContextSetBlendMode. I am currently researching porting it to Android. I've gone through the process of combining two Bitmap objects on Android using PorterDuff modes. However, I want much more complicated compositing. For example, I'm using kCGBlendModeHardLight for many blends:
Either multiplies or screens colors, depending on the source image sample color. If the source image sample color is lighter than 50% gray, the background is lightened, similar to screening. If the source image sample color is darker than 50% gray, the background is darkened, similar to multiplying. If the source image sample color is equal to 50% gray, the source image is not changed. Image samples that are equal to pure black or pure white result in pure black or white. The overall effect is similar to what you’d achieve by shining a harsh spotlight on the source image. Use this to add highlights to a scene.
But I don't know of any way (if it's even possible) to emulate this via Porter-Duff. Does Android not support better image-manipulation algorithms out of the box? Is it possible to use Porter-Duff in some way to emulate more advanced blend modes?

In addition to the 12 Porter-Duff blending equations, Android supports Lighten, Darken, Multiply, Screen, and soon Overlay. Unfortunately, this means HardLight is not available and you would have to implement it yourself.
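Hard light is a separable operation (each output channel depends only on the same channel of the source and destination), so it is straightforward to implement on raw pixel data. Below is a minimal, unoptimized sketch of what that could look like on Android; the class and method names are made up for illustration, and it assumes both bitmaps have identical dimensions and ARGB_8888 configs:

    import android.graphics.Bitmap;

    public final class HardLightBlend {
        // Per-channel hard light (channels in 0..255): multiply when the
        // source channel is dark, screen when it is light.
        private static int hardLight(int s, int d) {
            return s <= 127
                    ? (2 * s * d) / 255
                    : 255 - (2 * (255 - s) * (255 - d)) / 255;
        }

        // Blends src over dst and returns a new bitmap; alpha is taken
        // from dst for simplicity.
        public static Bitmap blend(Bitmap src, Bitmap dst) {
            int w = dst.getWidth(), h = dst.getHeight();
            int[] sp = new int[w * h];
            int[] dp = new int[w * h];
            src.getPixels(sp, 0, w, 0, 0, w, h);
            dst.getPixels(dp, 0, w, 0, 0, w, h);
            for (int i = 0; i < dp.length; i++) {
                int s = sp[i], d = dp[i];
                int r = hardLight((s >> 16) & 0xFF, (d >> 16) & 0xFF);
                int g = hardLight((s >> 8) & 0xFF, (d >> 8) & 0xFF);
                int b = hardLight(s & 0xFF, d & 0xFF);
                dp[i] = (d & 0xFF000000) | (r << 16) | (g << 8) | b;
            }
            return Bitmap.createBitmap(dp, w, h, Bitmap.Config.ARGB_8888);
        }
    }

For anything but small images you would want to push this loop into native code or a shader, but the arithmetic is the same.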

Related

Unity - Render Texture from Camera's targetTexture produces seams

I am attempting to render a specific section of my scene using a separate camera, and a render texture. That object is on a separate layer that the main camera is not rendering, but a separate camera is. The secondary camera has a target texture set to be a render texture that I have created. Everything is working as intended except for the fact that the object, when rendered to a texture, has a bunch of seams that are not present when rendering directly to the screen.
What it looks like when rendered directly to the screen:
Correct
What it looks like when rendered to a texture, and then displayed on a quad in the scene:
Incorrect
Notice how the second image has a bunch of transparent "lines" in between the sprites where there shouldn't be any.
I am using a basic transparent shader to display the render texture on the quad (since the background isn't part of the render texture, just the black crowd part). I have tried a number of different shaders, and none of them seem to make a difference.
The render texture's settings are: Width: Screen.width, Height: Screen.height, Format: RenderTextureFormat.ARGBFloat.
Unity Version: 5.2.3f1 - iOS Platform
Note: The reason I am doing this is so that I can apply a "Blur" image effect to the texture, and make the crowd in the foreground appear to be out of focus. Any alternative suggestions for how to do this are also welcome.
I'm not quite sure -- but it almost sounds like you have line ghosting. You may want to give this a read and let me know if that's what you're dealing with or not:
This is caused by how the texture image was authored, combined with the filtering that most 3d engines use when textures are displayed at different sizes on screen.
Your image may have coloured areas which are completely opaque, coloured areas which are partially transparent, and areas which are completely transparent. However, the areas where your alpha channel is completely transparent (0% opacity) actually still have a colour value too. PNGs (or at least the way Photoshop exports PNGs) seem to default to using white for the completely transparent pixels; with other formats or editors, this may be black. Both are equally undesirable when it comes to use in a 3d engine.
You may think, "why is the white colour a problem if it's completely transparent?". The problem occurs because when your texture appears on screen, it's usually either upscaled or downscaled, depending on whether the pixels in the texture's image are appearing larger or smaller than actual size. For downsizing, a series of downscaled versions gets created during import. These downscaled versions are used when the texture is displayed at smaller sizes or steeper angles in relation to the view, which is intended to improve visual quality and make rendering faster. This process is called "mip-mapping". For upscaling, simple bilinear interpolation is normally used.
The scaled versions are usually created using simple bilinear interpolation, which means that the transparent pixels are mixed with the neighbouring visible pixels. At each smaller mipmap level, the problem of invisible pixels mixing into visible pixel colours increases, with the result that your nasty white edges become more apparent at greater distances.
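To see what that mixing does numerically, this toy example averages two straight-alpha texels the way a bilinear sample along a sprite edge would; the white hiding in the fully transparent pixel leaks straight into the visible result:

    public class AlphaBleedDemo {
        public static void main(String[] args) {
            float[] opaqueRed  = {1f, 0f, 0f, 1f}; // r, g, b, a
            float[] clearWhite = {1f, 1f, 1f, 0f}; // invisible, but still white
            float[] mixed = new float[4];
            for (int c = 0; c < 4; c++) {
                mixed[c] = (opaqueRed[c] + clearWhite[c]) / 2f;
            }
            // Result: r=1.0 g=0.5 b=0.5 a=0.5 -- a visible pale pink fringe
            // instead of a half-transparent red.
            System.out.printf("r=%.1f g=%.1f b=%.1f a=%.1f%n",
                    mixed[0], mixed[1], mixed[2], mixed[3]);
        }
    }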
The solution is to ensure that these completely transparent pixels have a colour value which matches their neighbouring visible pixels, so that when the interpolation occurs, the colour 'bleed' from the invisible pixels is of the appropriate colour.
To solve this (in Photoshop) I always use the free "Solidify" tool from the Flaming Pear Free Plugins pack, like this:
Download and install the Flaming Pear "Free Plugins" pack (near the bottom of that list)
Open your PNG in Photoshop.
Go to Select -> Load Selection and click OK.
Go to Select -> Save Selection and click OK. This will create a new alpha channel.
Now Deselect all (Ctrl-D or Cmd-D)
Select Filter -> Flaming Pear -> Solidify B
Your image will now appear to be entirely made of solid colour, with no transparent areas; however, your transparency information is now stored in an explicit alpha channel, which you can view and edit by selecting it in the channels palette.
Now re-save your image, and you should find your white fuzzies have disappeared!
Source: http://answers.unity3d.com/questions/10302/messy-alpha-problem-white-around-edges.html
Turns out that the shader I was using for my scene was using "Blend SrcAlpha OneMinusSrcAlpha" for some reason, when it should have been using "Blend One OneMinusSrcAlpha". This was causing objects with alpha less than 1 to make the objects under them semi-transparent as well, exposing the camera's clear-colour background.
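For anyone hitting the same thing: "Blend One OneMinusSrcAlpha" is the standard blend for premultiplied-alpha sources, while "Blend SrcAlpha OneMinusSrcAlpha" multiplies by the source alpha a second time. A single-channel numeric sketch of the difference (the values are arbitrary):

    public class BlendCompare {
        public static void main(String[] args) {
            float srcAlpha = 0.5f;
            float src = 1.0f * srcAlpha; // source channel, already premultiplied
            float dst = 0.25f;           // channel already in the framebuffer

            // Blend SrcAlpha OneMinusSrcAlpha: alpha applied twice -> too dark.
            float doubled = src * srcAlpha + dst * (1 - srcAlpha); // 0.375

            // Blend One OneMinusSrcAlpha: correct for premultiplied colour.
            float correct = src * 1f + dst * (1 - srcAlpha);       // 0.625

            System.out.println(doubled + " vs " + correct);
        }
    }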

Photoshop magic wand-like action in an iOS app

Is there a way to use a magic-wand tool (like in Photoshop) in Xcode for iPhone? What I want to do is to cut out the background of an image (a person standing in front of a white background) to make the background transparent.
Edit:
I think I was not specific enough, sorry. I would like the iPhone or iPad app to automatically remove the background of an image just taken with the camera. Thus, I can't use Photoshop for it and need a function or similar to do this. I was thinking about a "flood fill" kind of solution, similar to this:
http://www.codeproject.com/Articles/16405/Queue-Linear-Flood-Fill-A-Fast-Flood-Fill-Algorith
but was hoping that there is a more convenient solution especially for "cutting" out custom shaped areas of an image.
Thanks!
Flood fill assumes a uniform background color; in a real-life photo, it won't ever be uniform. What you need is a chroma-key algorithm. See here:
Green screen / chroma key iOS
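The core of a chroma key is just a per-pixel distance test against the key colour. A naive sketch follows; the RGB distance and hard threshold are simplifications, and a real implementation would compare in YUV or another perceptual space and feather the edge:

    import java.awt.image.BufferedImage;

    public final class ChromaKey {
        // Zeroes the alpha of every pixel close enough to the key colour.
        // Assumes a BufferedImage of TYPE_INT_ARGB.
        public static void apply(BufferedImage img, int keyRgb, double threshold) {
            int kr = (keyRgb >> 16) & 0xFF;
            int kg = (keyRgb >> 8) & 0xFF;
            int kb = keyRgb & 0xFF;
            for (int y = 0; y < img.getHeight(); y++) {
                for (int x = 0; x < img.getWidth(); x++) {
                    int p = img.getRGB(x, y);
                    int r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
                    double dist = Math.sqrt((r - kr) * (r - kr)
                            + (g - kg) * (g - kg) + (b - kb) * (b - kb));
                    if (dist < threshold) {
                        img.setRGB(x, y, p & 0x00FFFFFF); // make it transparent
                    }
                }
            }
        }
    }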

Color overlaying algorithm

I'm looking for an algorithm to overlay a color on top of existing picture. Something similar to the following app (wall painter): http://itunes.apple.com/us/app/wall-painter/id396799182?mt=8
I want a similar functionality so I can paint walls in an existing picture and change them to a different color.
I can work in either YUV or RGB mode.
To successfully paint the walls in a picture, you have to do two steps:
Find the boundary of the wall within the picture (select the part of the image to be colored)
Apply the desired color to the selected area
The first step is the hard part; it is similar to what Photoshop's magic wand tool does. Indeed, a search for "magic wand algorithm" turns up a few good articles, such as this article with Objective-C code.
The second step is much easier and can be achieved with CGContextSetBlendMode and CGContextDrawImage.
You could try drawing into a graphics context with kCGBlendModeColor. From the documentation:
Uses the luminance values of the background with the hue and saturation values of the source image. This mode preserves the gray levels in the image. You can use this mode to color monochrome images or to tint color images.
Experimenting with other blend modes might also do the trick. See the documentation for details (search for "kCGBlendMode").
The RGB and YUV color models are not really great for changing colors in this way. I think the best color model for this is HLS.
Link: RGB to HLS and HLS to RGB conversion source code
H (hue) will change the base color
L (luminance) will change the brightness
S (saturation) will change the amount of color
You can evaluate the effect of these three components in a photo-editing app like Photoshop or the GIMP.
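Tying the two answers together: repainting a wall pixel amounts to keeping its luminance (so shadows and shading survive) while replacing its hue and saturation with the paint colour's, which is also roughly what kCGBlendModeColor does. Here is a sketch using the JDK's HSB helpers as a stand-in for a proper HLS conversion; HSB brightness is not quite luminance, but it illustrates the idea:

    import java.awt.Color;

    public final class WallPaint {
        // Hue and saturation come from the paint; brightness from the wall.
        public static int paintPixel(int wallRgb, int paintRgb) {
            float[] wall = Color.RGBtoHSB(
                    (wallRgb >> 16) & 0xFF, (wallRgb >> 8) & 0xFF,
                    wallRgb & 0xFF, null);
            float[] paint = Color.RGBtoHSB(
                    (paintRgb >> 16) & 0xFF, (paintRgb >> 8) & 0xFF,
                    paintRgb & 0xFF, null);
            return Color.HSBtoRGB(paint[0], paint[1], wall[2]);
        }
    }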

How to create an art asset that can be dynamically colored in software?

I asked this question on the Graphic Design site, but it includes a programming component that might be better answered here.
Specifically, I have a bunch of photographic crayon images. I would like to remove the color from one to produce a neutral image that I can load into an iPhone app that I'm writing and dynamically color. The crayon images have dark regions (shadows) and light regions (shine) which I would like to preserve. I will be dynamically coloring it with many different colors, ranging from white to rainbow colors to black.
My first inclination is to turn the image into a grayscale image and then somehow turn the color channel into an alpha channel, and change the color of all pixels to black. Then I could use it as a mask. However, this would only preserve the shadows, and I would lose all the highlights.
Any ideas?
Two options come to mind:
Make a grayscale version that could be tinted as you said, with the shadows and highlights simply white and gray (one way to apply such a tint is sketched after this list).
Make an outline, i.e. an image with alpha that had 0% opacity in the colored parts, say 10% white over the highlights, 10% black on the shadows, and 100% black/dark gray for the lines/edges. The idea being that you could put any color under the outline and it would look right.
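For the first option, one simple way to keep both shadows and highlights from a single neutral image is to apply the grayscale as a hard-light layer over the flat target colour (the same equation quoted in the first question): 50% gray leaves the colour alone, darker pixels darken it toward black, and lighter pixels push it toward white. A per-channel sketch, with names invented for illustration:

    public final class CrayonTint {
        // Hard-lights a gray value over one channel of the target colour:
        // ~127 leaves the channel (almost) unchanged, 0 gives black,
        // 255 gives white.
        static int tintChannel(int gray, int channel) {
            return gray <= 127
                    ? (2 * gray * channel) / 255
                    : 255 - (2 * (255 - gray) * (255 - channel)) / 255;
        }

        // Tints one grayscale pixel (0..255) with an opaque RGB colour.
        public static int tint(int gray, int rgb) {
            int r = tintChannel(gray, (rgb >> 16) & 0xFF);
            int g = tintChannel(gray, (rgb >> 8) & 0xFF);
            int b = tintChannel(gray, rgb & 0xFF);
            return 0xFF000000 | (r << 16) | (g << 8) | b;
        }
    }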

Convert an image to an iPhone toolbar icon

I have a grayscale icon that I'm editing with Photoshop with a transparent background, but I can't, for the life of me, figure out how to convert the icon to one that can be used as an iPhone toolbar icon. If I simply save the image as a PNG, it doesn't show up as anti-aliased on the iPhone because every pixel with color is being rendered as black, instead of a shade of gray.
According to the Apple docs and other sources, there needs to be an alpha channel on the image to specify varying levels of transparency for each pixel. However, I have no idea what that means. I've read these posts and docs from Adobe and I still can't figure out how to properly convert a grayscale image into one that can be used as an iPhone toolbar icon. The blog post is hard to comprehend and poorly written, and the Adobe docs don't really help.
http://cahit.hayalet.net/blog/514/converting-an-image-to-iphone-toolbar-icon/
http://livedocs.adobe.com/en_US/Photoshop/10.0/help.html?content=WS74B356C9-353F-4483-8632-7B1A102F2A2E.html
Can someone point me in the right direction or provide exact, step-by-step directions to doing this in Photoshop?
It's much simpler than having to muck with actual masks in Photoshop.
iPhone toolbar icons are about 30px by 30px, so make a new Photoshop file with those dimensions. Ensure the background is transparent (you can specify that when creating a new file).
Then, any pixels you draw on top of this transparency become what iOS uses for the icon. It doesn't matter what color they are in Photoshop -- toolbar icons are automatically used as masks by iOS.
Leave transparent the parts you want to show through. Save as a 24-bit PNG, and chuck it into Xcode as usual.
For a few icons that serve as good starting examples, check out the ones I publish for free here: http://glyphish.com Just take one of the PNGs and open it in Photoshop and you'll see that it's drawn in an arbitrary color (#444444) with varying levels of opacity to create darker and lighter parts of the icon.
This is more of a Photoshop question than a coding one, but anyway, here's a suggestion.
Lunacore has a good tutorial on how to use masks.
What you want to do is:
Make sure your background is transparent.
Create a new layer and fill it with any solid color.
Create a mask on the solid-color layer, and fill your greyscale image into the mask (i.e., use your greyscale image as the mask).
Toolbar icons use your image as a mask: only the transparency of the image is considered, not its color or shade.