beginBitmapFill to fill a shape as a background cover - easeljs

I'm building a game with createjs and I'm trying to do something like this:
var IMG = loader.getResult("imageID");
this.model.graphics.beginBitmapFill(IMG).drawRect(0, 0, window.innerWidth, window.innerHeight);
The image should cover the background no matter what size the image is. Any ideas how this can be accomplished?
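One approach (a sketch, not from the original thread): beginBitmapFill accepts an optional transform matrix as its third argument, so you can compute the scale that makes the bitmap cover the rectangle, the same math as CSS background-size: cover, and pass it via a Matrix2D:
var IMG = loader.getResult("imageID");
// scale so the image fills the window in both dimensions
var scale = Math.max(window.innerWidth / IMG.width, window.innerHeight / IMG.height);
var matrix = new createjs.Matrix2D().scale(scale, scale);
this.model.graphics.beginBitmapFill(IMG, "no-repeat", matrix).drawRect(0, 0, window.innerWidth, window.innerHeight);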

Related

SpriteKit texture atlas leads to image artifacts and blank areas

I have a SpriteKit game in which there's a shape created via SKShapeNode(splinePoints:count:).
The shape has a fillTexture, loaded from a texture atlas via textureNamed(_:).
In my texture atlas there are 5 images -- 1 of which is used for the shape.
If I use the regular .xcassets folder instead of an atlas, the shape is textured correctly. So it's definitely an issue with the atlas. Also, the texture works correctly if it's the only image in the atlas. It's when I add additional images that the problem occurs.
Here's the code that results in the correct texture:
var splinePoints = mainData.getSplinePointsForGround()
let ground = SKShapeNode(splinePoints: &splinePoints, count: splinePoints.count)
let texture = SKTexture(imageNamed: "groundTexture")
ground.fillTexture = texture
ground.lineWidth = 1
ground.fillColor = UIColor(Color.purple)
ground.strokeColor = UIColor(Color.clear)
addChild(ground)
Expected results:
The shape, which has a purple gradient image, should look like this (please ignore the dotted white line):
Actual results:
Instead, the shape looks like this, with strange blank areas and little artifacts from the other images located in the atlas:
Here's the version of code that uses the atlas:
let textureAtlas = SKTextureAtlas(named: "assets")
var splinePoints = mainData.getSplinePointsForGround()
let ground = SKShapeNode(splinePoints: &splinePoints, count: splinePoints.count)
let texture = textureAtlas.textureNamed("groundTexture2.png")
ground.fillTexture = texture
ground.lineWidth = 1
ground.fillColor = UIColor(Color.purple)
ground.strokeColor = UIColor(Color.clear)
addChild(ground)
Why is this problem happening, and how do I resolve it?
Thank you!
It looks like I've solved the problem, though I'm not sure why the solution works.
Instead of having a top-level .atlas folder, I've put a .spriteatlas folder inside Assets.xcassets.
For whatever reason, this results in the correct textures being shown, without any nasty artifacts or transparent areas. A plausible explanation (not confirmed in the original thread): SKTextureAtlas packs all of its images into a single sheet, and SKShapeNode's fillTexture appears to tile the underlying sheet rather than the sub-rectangle belonging to the named texture, which would pull in fragments of the neighboring images and the padding between them.

Unity 2D: dynamically create a hole in an image

I am using Unity 2D and I am trying to achieve this simple effect.
I am making a Candy Crush Saga-like game.
I have a grid with all the items. In every level the grid field can have a different shape, created at runtime (a dark grey area in the middle of the background).
The background is a still image (blue sky and green hills). When the pieces (for example a blue pentagon) fall from the top, they must be hidden until they enter the grid field area (the dark grey region); so in practice the background image (sky and hills) is no longer a background but a foreground with a hole, and the hole is the grey area. The grey field is itself composed of tiles from a sprite sheet.
I have prepared a picture, but unfortunately I cannot upload it here yet. How can I achieve this effect in Unity?
The simplest solution would be to create all the static level graphics with the hole already in them, but I do not want to do that because it is a waste of time and memory; I want to be able to create this effect at runtime.
I was thinking about creating a dynamic bitmap mask for the hole shape using a sprite sheet, and then applying this bitmap mask as a material to the foreground image to cut the hole.
Click on your texture in the Unity editor.
Change "Texture Type" from "Texture" to "Advanced".
Check the "Read/Write Enabled" checkbox.
Change "Format" from "Automatic Compressed" to "RGBA 32 bit".
I attached this component to a RawImage (you can attach it to something else; just change the "RawImage image" part).
This will create a 100x100 hole at position 10,10 of the image, so make sure your texture is at least 110x110.
using UnityEngine;
using UnityEngine.UI;

public class HoleInImage : MonoBehaviour
{
    public RawImage image;

    void Start()
    {
        // this will change the original texture:
        Texture2D texture = image.texture as Texture2D;
        // use this instead to change a copy (and not the original):
        //Texture2D texture = Instantiate(image.texture) as Texture2D;
        //image.texture = texture;

        // overwrite a 100x100 block at (10,10) with fully transparent pixels
        Color[] colors = new Color[100 * 100];
        for (int i = 0; i < 100 * 100; ++i)
            colors[i] = Color.clear;
        texture.SetPixels(10, 10, 100, 100, colors);
        texture.Apply(false); // upload the changed pixels to the GPU
    }
}
EDIT:
Hole defined by one or more sprites:
Do the same for these sprites (Advanced, Read/Write Enabled, RGBA 32 bit).
For example, if the sprite is white and the hole is defined in black, change:
for (int i = 0; i < 100 * 100; ++i)
    colors[i] = Color.clear;
to:
Texture2D holeTex; // set this in the editor; it must have the same dimensions (100x100 in my example)
Color[] hole = holeTex.GetPixels();
for (int i = 0; i < 100 * 100; ++i)
{
    if (hole[i] == Color.black) // where the sprite is black, there will be a hole
        colors[i] = Color.clear;
}
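Putting the two snippets together, a hypothetical helper (the name StampHole and its parameters are illustrative, and both textures are assumed to be imported as described above) that cuts a sprite-defined hole at an arbitrary position:
using UnityEngine;

public static class HoleStamper
{
    // Cuts a hole into target wherever holeShape is black.
    // Both textures must be readable (Read/Write Enabled, RGBA 32 bit),
    // and the stamped region must lie inside target's bounds.
    public static void StampHole(Texture2D target, Texture2D holeShape, int x, int y)
    {
        int w = holeShape.width;
        int h = holeShape.height;
        Color[] region = target.GetPixels(x, y, w, h); // region to modify
        Color[] hole = holeShape.GetPixels();          // hole definition
        for (int i = 0; i < w * h; ++i)
        {
            if (hole[i] == Color.black) // black in the sprite means "hole here"
                region[i] = Color.clear;
        }
        target.SetPixels(x, y, w, h, region);
        target.Apply(false); // upload the modified pixels to the GPU
    }
}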

Copying a portion of an IplImage into another IplImage (that is of the same size as the source)

I have a set of mask images that I need to use every time I recognise a previously-known scene on my camera. All the mask images are in IplImage format. There will be instances where, for example, the camera has panned to a slightly different but nearby location. This means that if I do template matching somewhere in the middle of the current scene, I will be able to recognise the scene with some amount of shift of the template within it. All I need to do is use those shifts to adjust the mask image ROIs so that they can be overlaid appropriately based on the template matching. I know that there are functions such as:
cvSetImageROI(IplImage* img, CvRect roi);
cvResetImageROI(IplImage* img);
which I can use to crop/uncrop my image. However, it didn't work for me quite the way I expected. I would really appreciate it if someone could suggest an alternative, point out what I am doing wrong, or even what I haven't thought of!
I must also point out that I need to keep the image size the same at all times. The only thing that will be different is the actual area of interest in the image. I can probably use zero/one padding to cover the unused areas.
I believe a solution that avoids making too many copies of the original image would be (single-channel case):
// Make a new IplImage of the same size
IplImage* img_src_cpy = cvCreateImage(cvGetSize(img_src), img_src->depth, img_src->nChannels);
cvZero(img_src_cpy); // zero-pad the area outside the copied region
// Copy the ROI of the original image into the top-left corner of the new one
for (int rows = roi.y; rows < roi.y + roi.height; rows++) {
    for (int cols = roi.x; cols < roi.x + roi.width; cols++) {
        img_src_cpy->imageData[(rows - roi.y) * img_src_cpy->widthStep + (cols - roi.x)] =
            img_src->imageData[rows * img_src->widthStep + cols];
    }
}
// Now copy everything back to the original image, OR simply return the new image if calling from a function
cvCopy(img_src_cpy, img_src); // OR return img_src_cpy;
I tried the code out myself and it is fast enough for me (it executes in about 1 ms on a 332 x 332 greyscale image).
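For reference, a sketch of the same shift expressed with the ROI functions the question already mentions; this assumes roi is the source rectangle and the destination is the top-left corner, avoids the manual loop, and also handles multi-channel images:
// Copy img_src's ROI into the top-left corner of a zero-padded image of the same size
IplImage* shifted = cvCreateImage(cvGetSize(img_src), img_src->depth, img_src->nChannels);
cvZero(shifted);
cvSetImageROI(img_src, roi);
cvSetImageROI(shifted, cvRect(0, 0, roi.width, roi.height)); // ROI sizes must match for cvCopy
cvCopy(img_src, shifted, NULL);
cvResetImageROI(img_src);
cvResetImageROI(shifted);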

Trim alpha from CGImage

I need to get a trimmed CGImage. I have an image which has empty space (alpha = 0) around some colors and need to trim it to get the size of only the visible colors.
Thanks.
There are three ways of doing this:
1) Use photoshop (or image editor of choice) to edit the image - I assume you can't do this, it's too obvious an answer!
2) Ignore it - why not just ignore it, draw the image it's full size? It's transparent so the user will never notice.
3) Write some code that goes through each pixel in the image until it gets to one that has an alpha value > 0. This should give you the number of rows to trim from the top. However, this will slow down your UI so you might want to do it on a background thread.
e.g.
// To get the number of fully transparent rows at the top of the image
// (assumes 32-bit pixels with the alpha value in the low byte)
uint32_t *p = start_of_image;
while ( p < end_of_image_data && 0 == (*p & 0x000000ff) ) ++p;
uint number_of_transparent_rows_at_top = (p - start_of_image) / width_of_image;
When you know the amount of transparent space around the image, you can draw it using a UIImageView, set its contentMode to center, and let it do the trimming for you :)
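Expanding on option 3, a hypothetical helper (trimmedRect is illustrative, not a system call) that redraws the image into an RGBA bitmap of known layout, scans for the opaque bounding box, and crops with CGImageCreateWithImageInRect:
#include <CoreGraphics/CoreGraphics.h>
#include <stdlib.h>

static CGRect trimmedRect(CGImageRef image) {
    size_t w = CGImageGetWidth(image);
    size_t h = CGImageGetHeight(image);
    uint8_t *data = calloc(w * h * 4, 1);
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    // redraw into RGBA8888 so the alpha byte offset is predictable
    CGContextRef ctx = CGBitmapContextCreate(data, w, h, 8, w * 4, cs, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), image);
    size_t minX = w, minY = h, maxX = 0, maxY = 0;
    for (size_t y = 0; y < h; y++) {
        for (size_t x = 0; x < w; x++) {
            if (data[(y * w + x) * 4 + 3] > 0) { // alpha is the last byte of each pixel
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }
    CGContextRelease(ctx);
    CGColorSpaceRelease(cs);
    free(data);
    if (maxX < minX) return CGRectZero; // image is fully transparent
    return CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1);
}
// Usage: CGImageRef trimmed = CGImageCreateWithImageInRect(image, trimmedRect(image));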

Create masking effect over a view

I would like to create a masking effect over a UIView in order to accomplish the following: I will display a sealed box on the screen, and the user will be able to touch (scratch) the screen in order to reveal what's behind that image (UIView). Something similar to those lottery tickets where you are supposed to scratch off some cover material that's on top of the results.
If someone could point me in the right direction, that would be awesome; I'm not sure how to start doing this.
Thanks!
Sorry I'm late. I made some example code which might be of help: https://github.com/oyiptong/CGScratch
drawnonward's approach works.
Pixel editing is usually done with a CGBitmapContext. In this case, I think you will want to create a grayscale bitmap that represents just the alpha channel of the scratch area. As the user scratches, you will paint in this bitmap.
To use a CGBitmapContext as the mask for an image, you must first create a masking image from the bitmap. Use CGImageMaskCreate and pass in a data provider that points to the same pixels used to create the bitmap. Then use CGImageCreateWithMask with your scratch-off image and the mask image built from the bitmap.
You cannot draw directly to the screen on the iPhone. Every time the user moves a finger, you will have to modify the mask bitmap and then invalidate the UIView that draws the image. You may be able to just draw the same image again, or you may need to reconstruct the mask and masked image each time you draw. As long as the mask image refers directly to the bitmap pixel data, very little memory is actually allocated.
So in pseudocode you want something like this:
scratchableImage = ...
width = CGImageGetWidth( scratchableImage );
height = CGImageGetHeight( scratchableImage );
colorspace = CGColorSpaceCreateDeviceGray();
pixels = CFDataCreateMutable( NULL , width * height );
CFDataSetLength( pixels , width * height );
bitmap = CGBitmapContextCreate( CFDataGetMutableBytePtr( pixels ) , width , height , 8 , width , colorspace , kCGImageAlphaNone );
provider = CGDataProviderCreateWithCFData( pixels );
mask = CGImageMaskCreate( width , height , 8 , 8 , width , provider , NULL , false );
scratched = CGImageCreateWithMask( scratchableImage , mask );
At this point, scratched will have an alpha channel dictated by bitmap, but bitmap still holds garbage data. Note that the samples of an image mask act as inverse alpha: black pixels in bitmap leave the image opaque, and white pixels make it clear. So paint all of bitmap black first, then paint white pixels as the user scratches. I think changes to bitmap will apply automatically every time scratched is drawn, since the mask reads the same pixel buffer, but if not, just recreate mask and scratched every time you draw.
You probably have a custom UIView for tracking user input. You could derive your custom view from UIImageView and let it draw the image or do the following:
-(void) drawRect:(CGRect)inDirty {
    // assume scratched is a member
    CGContextDrawImage( UIGraphicsGetCurrentContext() , [self bounds] , scratched );
}
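To drive the scratching, a minimal sketch of the touch handling, assuming bitmap is the CGBitmapContextRef from the pseudocode above and an arbitrary 20-point round brush:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    // paint white into the gray bitmap: white mask samples hide the cover image
    CGContextSetGrayFillColor( bitmap , 1.0 , 1.0 );
    // the bitmap context's y-axis is flipped relative to UIKit view coordinates
    CGContextFillEllipseInRect( bitmap , CGRectMake( p.x - 10 , (self.bounds.size.height - p.y) - 10 , 20 , 20 ) );
    [self setNeedsDisplay];
}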
Alternately you can skip creating scratched and instead use CGContextClipToMask and draw the original scratchableImage directly. If there is no scratchableImage and your scratch area is a texture or view hierarchy, take this approach.
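A sketch of that alternative, reusing mask and scratchableImage from the pseudocode above (the same inverse-alpha semantics apply to clipping):
-(void) drawRect:(CGRect)inDirty {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // clip to the scratch mask, then draw the cover image through it
    CGContextClipToMask( ctx , [self bounds] , mask );
    CGContextDrawImage( ctx , [self bounds] , scratchableImage );
}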