I need a 'mask' layer that covers the whole screen, with the center part (a circle) transparent. Then I can move the mask layer around using touch. Users should only be able to see the transparent part in the middle.
I don't think a PNG file can help, because the file would need to be very large to cover the whole screen.
So is it possible to do it by coding?
I found this online, but I don't know much about OpenGL: http://www.cocos2d-iphone.org/forum/topic/7921.
It would be great if I could use a CCMaskLayer and just pass in the radius. I can handle the touch events myself.
The attached PNG file shows the expected result: the center part is transparent. I need this to cover my screen and only show the middle part; the red part is covered.
I wrote a CCMaskLayer that does exactly the same thing.
https://github.com/smilingpoplar/CCMaskLayer
You can solve this task with a cropped circle texture in two ways:
1) Draw a sprite with the circle texture in the center of the screen and draw 4 more sprites around it (on the top, bottom, left and right sides) with a small red texture, each scaled to cover the rest of the screen.
2) (more elegant, but harder to implement) Make your mask layer fullscreen but adjust its texture coordinates. In detail:
set the wrap mode of your circle texture to GL_CLAMP_TO_EDGE
adjust the texture coordinates of your layer vertices (to do this you need to subclass the base CCLayer):
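A rough sketch of that adjustment (written in Swift just to show the math; holeCenter, the corner ordering and the other names are illustrative, not the original snippet):

```swift
// Sketch only: for a fullscreen quad, compute per-corner vertex positions (v)
// and texture coordinates (t) so the circle texture is centered on the hole.
// With GL_CLAMP_TO_EDGE, everything outside 0...1 is filled with the
// texture's red edge color.
struct Corner { var v: (x: Float, y: Float); var t: (u: Float, v: Float) }

func maskCorners(screenW: Float, screenH: Float,
                 holeCenter: (x: Float, y: Float), radius: Float) -> [Corner] {
    func corner(_ x: Float, _ y: Float) -> Corner {
        Corner(v: (x, y),
               t: ((x - (holeCenter.x - radius)) / (2 * radius),
                   (y - (holeCenter.y - radius)) / (2 * radius)))
    }
    // Bottom-left, bottom-right, top-left, top-right.
    return [corner(0, 0), corner(screenW, 0),
            corner(0, screenH), corner(screenW, screenH)]
}
```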
Here v means the vertex position and t the texture coordinates. You need to set correct texture coordinates for the four corner vertices of the layer. Later, if you want to drag the circle around, you will just need to add offset values to the texture coordinates.
I am trying to make a connect-the-dots game in landscape mode, and I need to spawn dots onto the screen from coordinates (0-1000) stored in a JSON file. It's easy to map different viewport sizes to the coordinates and make the dots spawn in the same location relative to the edges/center of the screen, but since connecting the dots is supposed to produce a drawing, I don't want it to stretch. For example, a 4:3 display and a 21:9 one will stretch the drawing completely differently. How can I make sure it looks the same on all devices?
I am thinking of making an invisible rectangular shape in the middle of the screen (with a size of 1000x1000, or the maximum screen height) and mapping the coordinates to that shape, so the drawing will always be in the center of the screen and will always look exactly the same, but I'm not sure if it's possible to get a specific point in that shape.
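Roughly what I have in mind (a quick sketch, assuming the dot coordinates live in a 1000x1000 space; the names are placeholders):

```swift
import CoreGraphics

/// Map a point from the 1000x1000 source space into a square of side
/// min(screenWidth, screenHeight) centered on the screen, so the drawing
/// keeps its aspect ratio on every device.
func mapToScreen(_ p: CGPoint, screenSize: CGSize) -> CGPoint {
    let side = min(screenSize.width, screenSize.height)
    let scale = side / 1000.0
    let origin = CGPoint(x: (screenSize.width - side) / 2,
                         y: (screenSize.height - side) / 2)
    return CGPoint(x: origin.x + p.x * scale,
                   y: origin.y + p.y * scale)
}
```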
I just think there's an easy solution and I'm overcomplicating this.
I am able to detect image targets in Vuforia and overlay 3D objects on them. But I want to draw borders around the ImageTarget.
The problem is that I can only get its center, such as:
productTarget.transform.position
How can I get the corners of the image? It is simply a 2D image, but Vuforia doesn't have anything to help with this.
There is no way to detect the coordinates of the corners directly, but you can calculate them manually.
First you need the height and width of the image target.
For example, the upper left corner will be the center offset by -(width/2) along the target's local x axis and +(height/2) along its local y axis.
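The vector math is the same in any engine. A minimal sketch (in Swift, using simd types; center, right, up, width and height are whatever your target reports, so treat the names as placeholders):

```swift
import simd

/// Compute the four corners of a planar image target from its center,
/// its local right/up axes (unit length) and its printed size.
func corners(center: SIMD3<Float>, right: SIMD3<Float>, up: SIMD3<Float>,
             width: Float, height: Float) -> [SIMD3<Float>] {
    let halfW = right * (width / 2)
    let halfH = up * (height / 2)
    return [center - halfW + halfH,   // upper left
            center + halfW + halfH,   // upper right
            center + halfW - halfH,   // lower right
            center - halfW - halfH]   // lower left
}
```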
I recommend you just make the flat border you want, and then resize it to fit the tracked image, because its scale is always the same and Vuforia only moves the camera.
(screenshot: the image)
(screenshot: the border)
After tracking, just parent the border to the image.
(screenshot: after parenting)
Pretend I have 3 nodes in total. One of the nodes is a large SCNSphere; I put the camera inside this sphere and make the sphere double-sided with a textured image. I then put two smaller spheres next to each other in the center of this larger sphere. I also enable allowsCameraControl. I want to be able to zoom into these two smaller spheres without zooming into the larger sphere and messing up the detail on that sphere.
You can't put limits on the camera that's automatically created with allowsCameraControl. You'll have to do your own camera management, using your own gesture recognizers.
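A minimal sketch of that kind of manual control, assuming an SCNView and a camera node you create yourself (the pinch zooms by narrowing the field of view and clamps it, so you can get close to the small spheres without flying through the big one; the clamp values are placeholders):

```swift
import UIKit
import SceneKit

class PanoramaViewController: UIViewController {
    @IBOutlet var scnView: SCNView!          // assumed outlet
    let cameraNode = SCNNode()               // your own camera, not the built-in one

    override func viewDidLoad() {
        super.viewDidLoad()
        scnView.allowsCameraControl = false  // manage the camera yourself

        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(0, 0, 0)
        scnView.scene?.rootNode.addChildNode(cameraNode)
        scnView.pointOfView = cameraNode

        let pinch = UIPinchGestureRecognizer(target: self,
                                             action: #selector(handlePinch(_:)))
        scnView.addGestureRecognizer(pinch)
    }

    @objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        guard let camera = cameraNode.camera else { return }
        // Narrow the field of view to zoom in, clamped to a sane range.
        let newFOV = camera.fieldOfView / gesture.scale
        camera.fieldOfView = min(max(newFOV, 10), 60)
        gesture.scale = 1
    }
}
```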
Another solution would be to rethink your approach to the background image. Instead of using a sky sphere for the background (which is what it sounds like you're doing), use a skybox, or cube map. You can supply a cube map through the scene's background property. The SCNMaterialProperty documentation explains the options for supplying a cube map.
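For example (a sketch; the image names are placeholders), a cube map is just an array of six images assigned to the background's contents:

```swift
// Face order: +X, -X, +Y, -Y, +Z, -Z
scene.background.contents = [
    UIImage(named: "px")!, UIImage(named: "nx")!,
    UIImage(named: "py")!, UIImage(named: "ny")!,
    UIImage(named: "pz")!, UIImage(named: "nz")!
]
```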
Hmm, I wonder what would happen if you use the large sphere's textured image/material as the scene's background, instead of putting it on an enclosing sphere?
I like the idea of using an image as the background, but there are two problems. One, I looked on the web for ways to make an image the background and none of them worked. Two, I want the background to have depth, so to go with that idea I'd need a way to zoom into the background and have the image pan in the opposite direction of my drag.
I am using a GraphicsContext (gc) inside an AnimationTimer. I am scaling gc, drawing some objects, rotating gc and then drawing an image.
The image appears rotated as planned!
I need to be able to get the rotated rectangle that represents the image so I can check the proximity of the 4 (rotated) corners to other, non-rotated objects.
I cannot see how to do this.
I have tried creating a rotated Rectangle to 'mirror' the object's rotation, but I cannot see how to draw that Rectangle in the gc.
There is no gc.draw(Node), gc.draw(Rectangle) or gc.draw(Polygon). I need to draw it to confirm on screen that the two match.
My only other option is to break out the geometry and rotate the points myself, but it seems odd that JavaFX cannot do what I need.
Could somebody please help?
I have a PNG image with complete transparency. It has a picture of an animal, just outlined with black color.
Now I want to paint the image as my finger moves on the iPad screen, but the paint should only appear inside the bordered region, not outside.
My thinking:
What I am thinking is to keep the color of the image inside the boundary line a little bit different from the outside, then get the pixels of the image and, for each pixel, its color components.
I would keep all the pixels outside the boundary in an array and check against it when the finger moves on the iPad screen.
I am new to Core Graphics and OpenGL, so I'm not able to think this through. Please help.
First you should define where "inside" is, to know where to paint on your image.
I suggest running a flood fill algorithm from the first touched point to define the desired "inside area" pixels. Notice that you run the flood fill in the background on the original black-and-white image, not to actually paint the pixels, just to figure out which ones are the targets.
For example, say we want to paint the animal's face and body different colors. When the user first touches the face and drags over it, you do a flood fill on the black-and-white image to find the pixels of the face, and only paint the intersection of the face area and the touched area. Then, when the user lifts her finger (changes the color) and touches the body, you do another flood fill operation to detect the pixels of the body, and so on.
It was a long description, hope it makes sense.
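A minimal sketch of that background fill, assuming you have already extracted the image's pixels into a width*height array of flags where true means a black boundary pixel (all names here are illustrative):

```swift
/// Stack-based flood fill over a bitmap of boundary flags.
/// Returns the indices of the region containing the starting point,
/// stopping at boundary pixels and at the image edges.
func floodFill(isBoundary: [Bool], width: Int, height: Int,
               startX: Int, startY: Int) -> Set<Int> {
    guard startX >= 0, startX < width, startY >= 0, startY < height else { return [] }
    let start = startY * width + startX
    guard !isBoundary[start] else { return [] }

    var region: Set<Int> = [start]
    var stack = [start]
    while let index = stack.popLast() {
        let x = index % width
        let y = index / width
        // 4-connected neighbours: left, right, up, down.
        for (nx, ny) in [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)] {
            guard nx >= 0, nx < width, ny >= 0, ny < height else { continue }
            let n = ny * width + nx
            if !isBoundary[n] && !region.contains(n) {
                region.insert(n)
                stack.append(n)
            }
        }
    }
    return region
}
```

You would then paint only the pixels that are both in this region and under the user's stroke.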
Here are some flood fill sources that might help:
Floodfill in objective c
https://stackoverflow.com/questions/8121348/flood-fill-algorithm-objective-c-version
How to Implement FloodFill Algorithm in iphone