-(void)manageTiles:(ccTime)dt
{
    // Scroll the tile batch node left at a constant speed, scaled by the frame delta
    tiles.position = ccp(tiles.position.x - speed * dt, tiles.position.y);
}
I am moving my batch node, but it is causing tears. I think it's because dt is 0.3, but how else do I move it based on time without causing tears? The tears are very small and barely noticeable, but it still bothers me.
You mean black lines (gaps) between the batched sprites? Tearing, strictly speaking, is when you draw to the framebuffer while the screen is updating.
Two solutions: move the batch node instead of the individual tiles; if the tiles always stay fixed relative to each other, that's the fastest solution. Otherwise, cast the tiles' positions to int to ensure they always land on exact pixel locations, because subpixel rendering will cause those gaps. You may also have to cast the batch node's position to int if you use the first solution.
I'm creating a puzzle game that generates random-sized pieces with 2D meshes. The images contain transparent portions, and sometimes a piece is completely transparent. I need to detect what percentage of a piece is transparent. One way I found to do this is to go pixel by pixel. I posted my solution to this HERE. However, this process adds a few seconds during loading, which I'd like to avoid, and I'm looking for other ideas.
I've considered using the selection outline of a MeshCollider to somehow get a surface area I can compare to the surface area of the mesh, but everything I find is about rendering outlines with specialized shaders. Does anyone have any ideas on how to solve this?
1) I guess you could add a PolygonCollider2D to your sprite and use its path for the outline and for calculating the surface area (a sketch of this follows below). I'm not sure, however, whether this will be faster.
PolygonCollider2D.GetPath:
A path is a cyclic sequence of line segments between points that define the outline of the Collider
Checking PolygonCollider2D.GetTotalPointCount or path length may be good enough to determine if the sprite is 'empty'.
Sprite.vertices, Sprite.triangles may also be helpful.
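As a rough illustration of idea 1, here is a hedged C# sketch that sums the shoelace-formula area of each collider path. It assumes a PolygonCollider2D is already attached and that Unity's collider outline is a close enough fit to the opaque region; ColliderAreaUtil and SurfaceArea are hypothetical names, not an existing API:

using UnityEngine;

public static class ColliderAreaUtil
{
    // Approximate the opaque surface area of a sprite by summing the
    // shoelace-formula area of each PolygonCollider2D path.
    public static float SurfaceArea(PolygonCollider2D collider)
    {
        float total = 0f;
        for (int p = 0; p < collider.pathCount; p++)
        {
            Vector2[] path = collider.GetPath(p);
            float doubledArea = 0f;
            for (int i = 0; i < path.Length; i++)
            {
                Vector2 a = path[i];
                Vector2 b = path[(i + 1) % path.Length]; // wrap to close the cycle
                doubledArea += a.x * b.y - b.x * a.y;    // shoelace term
            }
            total += Mathf.Abs(doubledArea) * 0.5f;
        }
        return total; // in collider-space units; compare against the full sprite area
    }
}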
2) You could also improve the performance of your first approach (see the sketch after this list):
instead of calling GetPixel as you do now, use GetPixels or GetPixels32 and loop through the returned array in a single for loop.
Using GetPixels can be faster than calling GetPixel repeatedly, especially for large textures. In addition, GetPixels can access individual mipmap levels. For most textures, even faster is to use GetPixels32 which returns low precision color data without costly integer-to-float conversions.
check only every 2nd or nth pixel, which should be good enough for an approximation
limit the number of type casts
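Putting those points together, here's a minimal sketch, assuming the texture has Read/Write enabled in its import settings; TransparencyUtil and the step parameter (which implements the nth-pixel sampling) are illustrative names, not your existing code:

using UnityEngine;

public static class TransparencyUtil
{
    // Estimate the transparent fraction of a texture by reading all pixels
    // once with GetPixels32, then sampling every nth entry of the array.
    public static float TransparentFraction(Texture2D texture, int step = 2)
    {
        Color32[] pixels = texture.GetPixels32(); // one call, no per-pixel overhead
        int sampled = 0;
        int transparent = 0;
        for (int i = 0; i < pixels.Length; i += step)
        {
            sampled++;
            if (pixels[i].a == 0) transparent++; // Color32 alpha is a byte, no casts
        }
        return sampled > 0 ? (float)transparent / sampled : 0f;
    }
}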
For now, I use a 3D array to represent my voxels in different chunks. I want to render only the voxels that can be visible to the player, but the way I do it is not efficient at all:
I iterate over the whole 10*10*10 chunk and check every voxel for a neighbor equal to Air. Then I render each potentially visible face separately, so I mostly check every voxel 6 times. And I do this for all chunks.
Is there a better way to proceed, or an algorithm to reduce the iterating?
I basically don't know whether it is better to work with a 3D array or an octree...
Thanks.
I've been thinking through this problem recently, and since nobody has answered you I thought I'd mention some of the ideas I've come across.
Firstly, it's worth noting that you only need to calculate which faces to render once, since that only changes when you remove or add a voxel, and then you only need to recalculate the voxels immediately around the place where you made the change. Just use a flag to mark faces for rendering and cache that until something changes (a rough sketch follows below). If you aren't already doing this, it will give you a big performance boost over recalculating every frame.
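Here is a hedged sketch of that caching idea in C#, assuming 10*10*10 chunks stored as 3D arrays; the Chunk class, the Air value, and the dirty flag are assumptions about how your data might be laid out, not your actual code:

public class Chunk
{
    const int Size = 10;
    const byte Air = 0;

    byte[,,] voxels = new byte[Size, Size, Size];
    bool dirty = true; // set whenever a voxel changes; cleared after a rebuild

    // one visibility flag per voxel per face: +x, -x, +y, -y, +z, -z
    bool[,,,] faceVisible = new bool[Size, Size, Size, 6];

    static readonly int[,] offsets =
    {
        { 1, 0, 0 }, { -1, 0, 0 },
        { 0, 1, 0 }, { 0, -1, 0 },
        { 0, 0, 1 }, { 0, 0, -1 },
    };

    public void SetVoxel(int x, int y, int z, byte value)
    {
        voxels[x, y, z] = value;
        dirty = true; // only edits invalidate the cache, not rendering
    }

    // Call before rendering; does nothing unless something changed.
    public void RebuildFaceCacheIfNeeded()
    {
        if (!dirty) return;
        for (int x = 0; x < Size; x++)
        for (int y = 0; y < Size; y++)
        for (int z = 0; z < Size; z++)
        for (int f = 0; f < 6; f++)
        {
            if (voxels[x, y, z] == Air) { faceVisible[x, y, z, f] = false; continue; }
            int nx = x + offsets[f, 0], ny = y + offsets[f, 1], nz = z + offsets[f, 2];
            bool outside = nx < 0 || ny < 0 || nz < 0 || nx >= Size || ny >= Size || nz >= Size;
            // a face is visible when its neighbor is Air or outside the chunk
            faceVisible[x, y, z, f] = outside || voxels[nx, ny, nz] == Air;
        }
        dirty = false;
    }
}

A finer-grained version would mark only the edited voxel and its six neighbors dirty instead of rebuilding the whole chunk, but the principle is the same.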
I also recommend looking into this extremely fast raycasting algorithm (a sketch of its grid traversal follows below):
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.42.3443&rep=rep1&type=pdf
You can use it for fast collision testing, and also for cull-testing. You can cast at grid nodes to see if any part of a face is visible.
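For reference, here is a hedged C# sketch of the kind of grid traversal that paper describes: the ray is stepped from cell to cell along whichever axis boundary is closest, so no cell is ever skipped. It assumes unit-sized cells, and the isSolid callback is a placeholder for your own voxel lookup:

using System;

public static class VoxelRaycast
{
    public static bool Cast(
        double ox, double oy, double oz,   // ray origin
        double dx, double dy, double dz,   // ray direction
        int maxSteps,
        Func<int, int, int, bool> isSolid,
        out int hitX, out int hitY, out int hitZ)
    {
        int x = (int)Math.Floor(ox), y = (int)Math.Floor(oy), z = (int)Math.Floor(oz);
        int stepX = Math.Sign(dx), stepY = Math.Sign(dy), stepZ = Math.Sign(dz);

        // tMax*: ray parameter at which the next cell boundary is crossed per axis
        // tDelta*: how much t it costs to cross one full cell per axis
        double tMaxX = IntBound(ox, dx), tMaxY = IntBound(oy, dy), tMaxZ = IntBound(oz, dz);
        double tDeltaX = dx != 0 ? Math.Abs(1.0 / dx) : double.PositiveInfinity;
        double tDeltaY = dy != 0 ? Math.Abs(1.0 / dy) : double.PositiveInfinity;
        double tDeltaZ = dz != 0 ? Math.Abs(1.0 / dz) : double.PositiveInfinity;

        for (int i = 0; i < maxSteps; i++)
        {
            if (isSolid(x, y, z)) { hitX = x; hitY = y; hitZ = z; return true; }
            // advance along the axis whose boundary is closest
            if (tMaxX < tMaxY && tMaxX < tMaxZ) { x += stepX; tMaxX += tDeltaX; }
            else if (tMaxY < tMaxZ)             { y += stepY; tMaxY += tDeltaY; }
            else                                { z += stepZ; tMaxZ += tDeltaZ; }
        }
        hitX = hitY = hitZ = 0;
        return false; // nothing hit within maxSteps cells
    }

    // t at which coordinate s, moving at rate d, reaches the next integer boundary.
    static double IntBound(double s, double d)
    {
        if (d == 0) return double.PositiveInfinity;
        double frac = s - Math.Floor(s);
        return d > 0 ? (1 - frac) / d : frac / -d;
    }
}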
I have a particle effect for a muzzle flare set up. What I'm currently using is a low numParticlesToEmit to limit the emitter to a short burst, and doing resetSimulation() on the emitter whenever I want to start a new burst of particles.
The problem I'm having is that resetSimulation() removes all particles onscreen, and I often need to create a new burst of particles before the previous particles disappear normally so they get erased early.
Is there a clean way to start up the emitter again without erasing the particles already onscreen?
Normally, particle systems have a feature that's missing from SKEmitters: a duration, which controls how long a system emits. I don't see this in SKEmitter, despite it being present in SCNParticleSystem.
Never mind, here's a workaround:
SKEmitters have a numParticlesToEmit property and a particleBirthRate. Combined, these determine how long the particle system emits before shutting down.
Using these to control the emission, it's possible to create pulses of particles emitted in the way you want, for something like a muzzle flash or explosion.
I'm not sure if it removes itself when it reaches this limit. If not, you'll have to create a removal function of some sort, because the way to get your desired effect (multiple muzzle flashes on screen) is to copy() the SKEmitter. This is quite efficient, so don't worry about overhead.
There is a targetNode property on SKEmitters that is supposed to move the particles to another node, so that when you reset the emitter the previous particles still stay. Unfortunately, this is still bugged from what I can tell, unless somebody else has figured out how to get it working and I just missed it. Keep it in mind, though, in case they ever fix it.
To help future readers, the code I use to calculate the duration of the emitter is this:
let duration = Double(emitter.numParticlesToEmit) / Double(emitter.particleBirthRate) + Double(emitter.particleLifetime + emitter.particleLifetimeRange/2)
It works perfectly for me.
Extension:
extension SKEmitterNode {
    var emitterDuration: Double {
        return Double(numParticlesToEmit) / Double(particleBirthRate)
            + Double(particleLifetime + particleLifetimeRange / 2)
    }
}
I'm working on a project using Unity 5.4.
In this project, blocks are stacked next to each other.
However, some annoying weird lines appear. On Android these lines occur more often than on PC.
For illustration purposes I added an image and a video.
Please zoom in on the picture to clearly see the lines I'm speaking of.
Could anyone please provide a solution to get rid of this nuisance?
Thanks in advance.
Block alignment code snippet:
for (int x = 0; x < xSize; x++)
{
    for (int z = 0; z < zSize; z++)
    {
        // place each block at integer grid coordinates
        Vector3 pos = new Vector3(x, -layerDepth, z);
        InstantiateBlock(pos);
    }
}
Video link: https://youtu.be/5wN1Wn51d_Y
You have object seams!
This occurs when there is a physical or perceived gap between objects.
There are multiple causes for this.
1. Floating Point Imprecision
This could be because you are setting the positions of the cubes to ints while they have floating-point dimensions. The symptom is usually no white seams when the camera is close to the objects, with seams gradually appearing as you get further away due to floating-point imprecision.
Most of these blocks appear to line up exactly, from most camera positions. But from the occasional unfortunate position, the exact value for A's position plus its vertex at (0.5,0.5,-0.5) might be slightly different from object B's position plus its vertex at (-0.5,0.5,-0.5). The result is that Unity shows a tiny gap, within which you can see the shadowed side of cube A.
On paper, 3 == 1/3 * 3 is mathematically correct. Using floats, however, 1/3 == 0.333333..., and subsequently 3 * 0.333333... == 0.999999... Bingo: a random gap between objects!
So how to solve it? Use floats to calculate the positions of your objects: new Vector3(1,1,1); should be new Vector3(1f,1f,1f);, for example. For further reading on this, try this SO post. A hedged snap-to-grid sketch follows below.
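One mitigation along those lines, sketched under the assumption that your blocks sit on a unit grid: snap each block's position when you place it, so adjoining cubes compute their shared vertices from bit-identical coordinates (GridSnap is a hypothetical helper, not a Unity API):

using UnityEngine;

public static class GridSnap
{
    // Round each component to the integer grid so two adjacent unit cubes
    // derive their shared vertices from exactly the same float values.
    public static Vector3 Snap(Vector3 p)
    {
        return new Vector3(Mathf.Round(p.x), Mathf.Round(p.y), Mathf.Round(p.z));
    }
}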
2. Texture Wrap-mode
If you are using textures on your objects, try changing the wrap mode of your texture from Repeat to Clamp, or try upping the texture padding.
3. Shadow Acne - (Lighting and Shadow artifacts)
This is when arbitrary patterns of pixels are in shadow when they should really be lit, or lit when they should be in shadow.
To prevent shadow acne, a Bias value can be added to the distance in the shadow map to ensure that pixels on the borderline definitely pass the comparison as they should. (source)
In Unity, go to your light source and then increase Shadow Type > Bias. I would suggest doubling the default value of 0.05 and continuing like that until it's fixed. You don't want to crank this value to the max, because...
Do not set the Bias value too high, because areas around a shadow near the GameObject casting it are sometimes falsely illuminated. This results in a disconnected shadow, making the GameObject look as if it is flying above the ground.
Are you using different blocks that you put against each other? Your problem sounds like the blocks are not completely flush against each other, which causes you to see the side of the next block (this would explain the camera Y dependence: you might see the side better from higher up). That side will have different lighting and appear as a different, lighter colour. To check whether this is the problem, try overlapping the blocks slightly by hand in the editor and see if the problem still occurs; a small test sketch follows.
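If you'd rather test it from code than by hand, a hedged sketch: scale each block up by a tiny epsilon so neighbors overlap slightly and any sub-pixel gap is hidden (the 1.001f factor is an arbitrary value to experiment with, not a recommended constant):

using UnityEngine;

public class BlockOverlapTest : MonoBehaviour
{
    void Start()
    {
        // Enlarge the block slightly so it overlaps its neighbors;
        // if the lines disappear, gaps between blocks were the cause.
        transform.localScale = Vector3.one * 1.001f;
    }
}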
Making the blocks kinematic solves that. The issue is the rigid bodies bumping up against one another.
My current project contains a gravity simulator where sprites move in accordance with the forces they experience in the game scene.
One of my features involve allowing moving sprites to draw a line behind them so you can see what paths they take.
Shown here:
However, as the sprite continues its movement around the screen, the FPS begins to dive. This can be seen in this second image, taken some time after the sprite first started its movement.
When researching, I found other people had posted with similar problems:
Multiple skshapenode in one draw?
However, the answer's poster noted that it was meant for a static image, which isn't what I want, because this line changes in real time depending on what influences the sprite's path. This was confirmed when I tried implementing a function that appends a new line to the old one, which didn't work. That code is here.
I'm asking if anyone can assist me in finding a way to properly stop this constant FPS drop that comes from all the draw operations. My current draw code consists of two functions:
-(void)updateDrawPath:(CGPoint)a B:(CGPoint)b
{
    CGPathAddLineToPoint(_lineToDraw, NULL, b.x, b.y);
    _lineNode.path = _lineToDraw;
}

-(void)traceObject:(SKPlanetNode *)p
{
    _lineToDraw = CGPathCreateMutable();
    CGPathMoveToPoint(_lineToDraw, NULL, p.position.x, p.position.y);
    _lineNode = [SKShapeNode node];
    _lineNode.path = _lineToDraw;
    _lineNode.strokeColor = [SKColor whiteColor];
    _lineNode.antialiased = YES;
    _lineNode.lineWidth = 3;
    [self addChild:_lineNode];
}
updateDrawPath: draws a line to the latest position of the sprite.
traceObject: takes an SKPlanetNode (a subclass of SKSpriteNode) and sets it up to have a line drawn behind it.
If anyone can suggest a way to do this and also reduce the terrible overhead I keep accumulating, it would be fantastic!
A couple suggestions:
Consider that SKShapeNode is more or less just a tool for debug drawing. Because it doesn't draw in batches, it's really not suitable to build a game around or to use extensively (whether with many shapes or with few but complex ones).
You could draw the lines using a custom shader, which would likely be a faster and more elegant solution, though of course you may have to learn how to write shader programs first.
Be sure to measure performance only on a device, never in the simulator.