iOS Sprite Kit - SKSpriteNode - blend two sprites

I'm migrating a game from another platform, and I need to generate a sprite from two images.
The first image is something like a form, a pattern or a stamp, and the second is just a rectangle that provides the color for the first. If the color were plain, it would be easy: I could use sprite.color and sprite.colorBlendFactor to play with it. But there are levels where the second image is a rectangle with two colors (red and green, for example).
Is there any way to implement this with Sprite Kit?
I mean something like using a Core Image filter such as CIBlendWithAlphaMask, but with only an image and a mask image (https://developer.apple.com/library/ios/documentation/graphicsimaging/Reference/CoreImageFilterReference/Reference/reference.html#//apple_ref/doc/uid/TP40004346 -> CIBlendWithAlphaMask).
Thanks.

Look into the SKCropNode class (documentation here) - it allows you to set a mask for an image underneath it.
In short, you would create two SKSpriteNodes - one with your stamp, the other with your coloured rectangle. Then:
SKCropNode *myCropNode = [SKCropNode node];
[myCropNode addChild:colouredRectangle]; // the colour to be rendered by the form/pattern
myCropNode.maskNode = stampNode; // the pattern sprite node
[self addChild:myCropNode];
Note that the results will probably be closer to CIBlendWithMask than to CIBlendWithAlphaMask, since the crop node masks out any pixels whose alpha is below 5% and renders every pixel above that level at full opacity, so the edges will be jagged rather than smoothly faded. Just don't use any semi-transparent areas in your mask and you'll be fine.

Related

How to mask a UI.RawImage in Unity?

In Unity UI, I have an ordinary RawImage
(It's just sitting on a Panel)
I have a png, mask.png which is just a white shape on transparent.
How do you mask the RawImage?
I have tried Mask and SpriteMask in many ways and it just won't work.
Even RectMask2D would be fine (to mask to a square shape), but it doesn't seem to work either.
Should I use Mask or SpriteMask?
If so, do you have to (or have to not) set a Material on the mask? On the RawImage?
I assume the Mask game object should be the parent of the RawImage, but??
What is the secret?
The RawImage component should work with masks just like the normal Image component does, provided the Maskable checkbox is ticked.
Note that the Mask or Rect Mask 2D should be the parent of the (Raw)Images you are trying to mask. The hierarchy should be something like this:
Canvas
| MaskObject (contains the (Raw)Image and the Mask or Rect Mask 2D components)
| | Object to mask (contains the (Raw)Image to mask)
In the example screenshots, the white square (Image) gets cut off by the red square (Mask).
The component types of the masking image and the masked image do not need to match either: a RawImage can mask an Image and vice versa.
The masking objects are again shown in red and the masked objects in white; each GameObject's name shows which (Raw)Image component it uses.
The only exception is the SpriteMask, which works exclusively with the Sprite Renderer component.
There is not much explanation from Unity on masks; the component documentation is about the closest thing there is.
Some more info about masks: masks work by comparing the ref(erence) values in the stencil buffer for the two (or more) objects (in this case, images) and only drawing the pixels where the Stencil Comp(arison) function passes for both. This means it is possible to create your own implementation of masks by writing a shader that uses the stencil buffer. That comes in handy when, for example, you want something like an inverted mask, where pixels are drawn everywhere except where the mask is (creating holes in an image).
Just put the RawImage as a child of a UI Image and call the parent image "Mask", then put any shape of sprite into the "Mask" image. Go to Add Component > UI and add a Mask component. Have a look at this link: https://learn.unity.com/tutorial/ui-masking#6032b6fdedbc2a06c234cd3e. It worked well for me.

Sprite Kit Physics Body Complex Shape

I have this situation: http://mokainteractive.com/example.png
I'd like to move the white ball inside the red track and detect whenever the ball touches the boundary of the red track.
What is the best solution? Do I have to create multiple transparent shapes along the borders? Do you have other ideas?
Thanks so much.
In iOS 8 you can create a single physics body for that kind of shape.
Make a texture of your shape with, e.g., Adobe Illustrator and use this method:
init(texture: SKTexture, alphaThreshold: Float, size: CGSize)
The SKTexture is your shaped image. The body is defined by the colored pixels.
The alphaThreshold is the minimum alpha value for texels that should be part of the new physics body.
The size is clear, I think.
The texture is analyzed, all invisible pixels around the shape are ignored, and only the colored pixels are interpreted as the body of the SKPhysicsBody. You should not use too many of these, because they are very expensive for the physics engine to calculate.
Variations of this method are found in the SpriteKit Class Reference.
As for your problem: make an inverse texture of the area that should stay free and pass it as the texture to the physics body. It will be analyzed and a body around the free zone will be created.
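A minimal Swift sketch of that inverse-texture approach (iOS 8+) might look like this; the asset name "track_inverse" is just an assumption, not something from the question:
import SpriteKit

// Assumed: "track_inverse" is an image whose opaque pixels cover everything
// except the free zone the ball is allowed to move in.
let trackSprite = SKSpriteNode(imageNamed: "track_inverse")
trackSprite.physicsBody = SKPhysicsBody(texture: trackSprite.texture!,
                                        alphaThreshold: 0.5,  // ignore texels below 50% alpha
                                        size: trackSprite.size)
trackSprite.physicsBody?.isDynamic = false  // the track itself should not move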
You cannot create a single physics body for that kind of shape.
Using bodyWithPolygonFromPath: will only allow you to create a convex polygonal path which obviously does not work for your shape.
I think you have 3 options here:
Use a number of bodyWithPolygonFromPath: bodies (probably the hardest and most time-consuming).
Use a number of variously sized bodyWithRectangleOfSize: bodies (not as hard, but still time-consuming); see the sketch after this list.
Use only straight lines in your image and use bodyWithRectangleOfSize: (the easiest and fastest). If you choose this option, remember you are still free to rotate your straight lines to various angles.
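A rough Swift sketch of the second option, combining several rectangle bodies (the Swift spelling of bodyWithRectangleOfSize:) into one compound body; the sizes and positions are placeholders, not values from the question:
import SpriteKit

// Two placeholder walls approximating part of the track border.
let topWall = SKPhysicsBody(rectangleOf: CGSize(width: 300, height: 20),
                            center: CGPoint(x: 0, y: 150))
let bottomWall = SKPhysicsBody(rectangleOf: CGSize(width: 300, height: 20),
                               center: CGPoint(x: 0, y: -150))

// Attach them to a single node as one compound, static body.
let trackNode = SKNode()
trackNode.physicsBody = SKPhysicsBody(bodies: [topWall, bottomWall])
trackNode.physicsBody?.isDynamic = false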

How to use a different texture on intersecting part of 2 quads

I'm looking for a way to dynamically change a part of a Quad that has a SpriteRenderer attached to it. Let's say I have a red Quad and a blue Quad, and then I drag one onto the other (fast or slow), the intersecting part should be colored using a green sprite. This illustration shows the scenario I'm trying to solve.
Can someone please help me with this?
You have two options:
First, if the color of the overlap should simply be the additive or multiplicative mixture of the other two colors, you can use the Mobile Particle/Additive or Mobile Particle/Multiply shaders.
Second, you can write your own shader that takes the intersection area as a parameter and paints your textures according to that parameter.

iOS Quartz 2D Graphics Filled Irregular Shapes Blurry

I'm drawing irregular shapes using Core Graphics on a retina display. I do this by creating a UIBezierPath with 5 to 10 random points. In drawRect I stroke the path and fill it using solid red colour.
My problem is that the diagonal lines in my drawing don't appear to be sharp.
I have tried anti-aliasing, but if anything that makes it look worse. I have experimented with different line widths, not stroking, and not filling, but I cannot seem to get a really sharp diagonal line.
For comparison I created a similar shape in Photoshop (using similar size) and saved that as PNG. If I display that PNG on iOS it looks much sharper.
What can I do to make my shape that I create in code look sharp?
Make sure your CGPoints are rounded to the nearest integer value.
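A small Swift sketch of that idea; the helper name is made up purely for illustration:
import UIKit

// Snap each randomly generated point to an integer pixel before building the path,
// so strokes and fills land on pixel boundaries.
func pixelAlignedPath(for points: [CGPoint]) -> UIBezierPath {
    let path = UIBezierPath()
    let aligned = points.map { CGPoint(x: $0.x.rounded(), y: $0.y.rounded()) }
    guard let first = aligned.first else { return path }
    path.move(to: first)
    aligned.dropFirst().forEach { path.addLine(to: $0) }
    path.close()
    return path
}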

Transparent Texture With OpenGL-ES 2.0

I am trying to add a transparent texture on top of a cube. Only the front face is not transparent; the other sides are transparent. What could be the problem? Any help is appreciated.
EDIT: I found that the face which is drawn first is opaque.
Three faces of the cube are drawn:
Opaque face (this face's indices are given first in glDrawElements).
Transparent faces.
You most probably ran into a sorting problem. To display transparent geometry correctly, the faces of the object have to be sorted from back to front.
Unfortunately there is no built-in support for that in OpenGL ES (or in any graphics library in existence). The only possibility is to sort your polygons, recreate your object each frame, and draw it with correctly ordered faces.
A workaround would be to use additive transparency instead of normal transparency, since additive blending is an order-independent calculation. You have to remember to turn off z-buffer writes while drawing, because otherwise some geometry may be occluded.
Additive transparency is achieved by setting both glBlendFunc factors to GL_ONE.
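A minimal Swift sketch of that workaround, assuming a current EAGL context and depth testing already enabled:
import OpenGLES

glEnable(GLenum(GL_BLEND))
glBlendFunc(GLenum(GL_ONE), GLenum(GL_ONE))  // additive blending is order-independent
glDepthMask(GLboolean(GL_FALSE))             // stop transparent faces from writing depth
// ... issue glDrawElements(...) for the transparent geometry here ...
glDepthMask(GLboolean(GL_TRUE))              // restore depth writes for opaque geometry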