The game is built with cocos2d 0.99.5 and Box2D.
iPhone SDK 4.3.
We have a character. When the character moves quickly, it looks blurred (fuzzy / unfocused), both in the simulator and on a device (iPhone 3G).
The character is moved with a mouseJoint (dampingRatio = 0, frequencyHz = -1).
In the screenshot (link) the character is sharp; a still screenshot does not show the problem.
The frame rate is a constant 60 fps.
Parameters I have tried:
kCCDirectorProjection2D / kCCDirectorProjection3D
aliased / antialiased texture parameters
CC_COCOSNODE_RENDER_SUBPIXEL 1 and 0
Video sample: link
How can I get a sharp image of the character while it is moving?
I also had a problem like this and fixed it by changing this line in ccConfig.h:
#define CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL 0
to
#define CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL 1
Here is the comment for this define; maybe it helps someone.
/** #def CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL
If enabled, the texture coordinates will be calculated by using this formula:
- texCoord.left = (rect.origin.x*2+1) / (texture.wide*2);
- texCoord.right = texCoord.left + (rect.size.width*2-2)/(texture.wide*2);
The same for bottom and top.
This formula prevents artifacts by using 99% of the texture.
The "correct" way to prevent artifacts is by using the spritesheet-artifact-fixer.py or a similar tool.
Affected nodes:
- CCSprite / CCSpriteBatchNode and subclasses: CCLabelBMFont, CCTMXTiledMap
- CCLabelAtlas
- CCQuadParticleSystem
- CCTileMap
To enable, set it to 1. Disabled by default.
#since v0.99.5
*/
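As a worked example of the formula (the frame and texture sizes below are hypothetical, not from the question): for a 32 px wide frame at origin.x = 0 inside a 256 px wide texture, the sampled region is pulled in by half a texel on each side.

// Hypothetical numbers, just to illustrate the formula quoted above.
let originX: Double = 0, rectWidth: Double = 32, textureWide: Double = 256
let left  = (originX * 2 + 1) / (textureWide * 2)           // 1/512 ≈ 0.00195 instead of 0.0
let right = left + (rectWidth * 2 - 2) / (textureWide * 2)  // ≈ 0.12305 instead of 0.125
// Sampling is inset by half a texel on each edge, so neighbouring frames in
// the sprite sheet no longer bleed into this one.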
I am pretty sure that what you are describing is an optical illusion. LCDs, especially lower-quality LCDs, have a finite response time. If this response time is too slow, it can cause ghosting, i.e. the moving object looks smeared. Basically what's happening is the previous frame's (or several frames') pixels take a long time to actually "turn off" and you see fainter versions of your sprite left behind as it moves.
Regarding your comment:
As an experiment, I put a pencil on a sheet of paper and began to move it quickly. My eyes see the pencil in focus, so the problem is not an optical effect but a code problem.
Looking at a moving object in the real world is not the same as looking at a moving object on the screen, with or without a poor display response time. The real-world object moves continuously, but the screen object moves in discrete steps. Your eye can follow the pencil exactly and keep the image sharp on your retina. If you follow a screen image, however, your eye moves smoothly, while the screen image "jumps" from place to place. This can cause a "juddering" effect for sufficiently fast-moving objects, even at high framerates. If 60fps is still juddery, there is basically no way around this; it is a limitation of current technology.
Related
I have a simple domino scene where you can click a domino and apply a force to knock it over.
At first my dominoes had a scale of (x=0.1, y=0.6, z=0.3), where 1 is supposed to be 1 meter. They fell without a problem, but too slowly. According to the Unity documentation on Rigidbody, this made total sense:
Use the right size.
The size of your GameObject’s mesh is much more important than the mass of the Rigidbody. If you find that your Rigidbody is not behaving exactly how you expect - it moves slowly, floats, or doesn’t collide correctly - consider adjusting the scale of your mesh asset. Unity’s default unit scale is 1 unit = 1 meter, so the scale of your imported mesh is maintained, and applied to physics calculations. For example, a crumbling skyscraper is going to fall apart very differently than a tower made of toy blocks, so objects of different sizes should be modeled to accurate scale.
So I resized the dominoes to (x=0.01, y=0.06, z=0.03). This time they fell at the desired speed, but for some reason they stop falling and don't knock over the next domino.
example GIF
I don't know why this is happening, but my guess is that when calculating physics the engine doesn't spend many resources on objects this small, since they would probably not even be visible to the user.
Modifying the mass doesn't seem to do anything, drag and angular drag are both 0, and I have already tried every collision detection mode.
Is there any solution or workaround for this?
In my experience, Unity physics doesn't handle very small objects well, since they introduce rounding errors. A game simulation usually does not need the same accuracy as a Mars landing, so I usually avoid scales of less than 0.1f.
In your case, I would keep the scales at 1.0f and instead experiment with either increasing the world gravity, changing it from the default -9.81f to -98.1f (Edit - Project Settings - Physics), or changing the default Time Scale from 1f to 5f (Edit - Project Settings - Time).
Try not to make changes that are too big at first, since they might introduce strange effects in other parts of the gameplay.
I'm in the process of learning SpriteKit and decided a catapult-style game would be a good project to start with.
I am launching a projectile by using physicsBody?.applyImpulse(CGVector(dx: strength * dx, dy: strength * dy)) and this all works fine.
I'm currently trying to predict the trajectory of the projectile. I am doing this by applying the same impulse to a hidden SKNode and plotting the path it takes.
The problem is that the user has to wait until the impulse/physics has finished before they can see the full path (see image below).
Path Image
(from the image above) I want the green path to either appear instantly or appear a lot faster.
I've tried to adjust the speed of the SKNode but it doesn't seem to have any effect on the simulation (it always runs at the same speed). I've also tried using SKAction.applyImpulse with the speed property being adjusted but it didn't have any effect either.
The only thing that has worked is setting physicsWorld.speed but I don't want to change the speed of the whole physics world, only the hidden trajectory node.
I was wondering if there was a way to run the physics simulation on a specific node instantly, or at least speed it up so it runs quicker?
Thanks in advance.
You can massively increase the gravity, and then factor this increase into the increase in the force used to project your object. This will result in the same arc of flight, but rising and falling much faster.
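A minimal sketch of that idea in Swift; the function name and the speed-up factor are illustrative, and note that gravity is a property of the whole physics world, so other bodies feel it too. Scaling gravity by a factor f and the impulse by sqrt(f) keeps the same parabola while the flight time shrinks by sqrt(f).

import SpriteKit

// Sketch only: scale gravity by `speedUpFactor` and the launch impulse by
// sqrt(speedUpFactor). For x = vx*t, y = vy*t - g*t^2/2 the path is unchanged,
// but it is traced out sqrt(speedUpFactor) times faster.
func launchPreview(_ node: SKNode, in scene: SKScene,
                   baseImpulse: CGVector, speedUpFactor: CGFloat) {
    scene.physicsWorld.gravity = CGVector(dx: 0, dy: -9.8 * speedUpFactor)
    let boost = sqrt(speedUpFactor)
    node.physicsBody?.applyImpulse(CGVector(dx: baseImpulse.dx * boost,
                                            dy: baseImpulse.dy * boost))
}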
TL;DR: I want to find a way to apply an impulse to an object so that the object's speed is precisely proportional to the scene size.
I am currently building a SpriteKit game that will be available on many different screen sizes. My scene resizes itself to be the same size in points as its view (scene.scaleMode = .ResizeFill). When I launched the game on devices other than the one I had developed it on, I noticed that:
The nodes were too small.
The speed of the objects was too low (the way I give speed to my objects is by calling applyImpulse(_:) on their physics body).
I think I fixed the size issue with a simple proportionality operation: I looked at the objectArea/sceneArea ratio of the scene that had the correct object size and then, instead of giving fixed dimensions to my objects, I gave them dimensions such that this ratio is always the same regardless of the scene area.
For the object speed, it was trickier...
I first thought it was due to the physics body's mass being higher since the object itself was bigger, but since I assign each object's mass directly via its mass property, the objects have exactly the same mass regardless of their size.
I finally figured out that it was simply due to the screen size being different: an object moving at the same speed in points appears to move more slowly on a bigger screen.
My problem is that I don't know exactly how to tune the strength of my impulse so that it is consistent across different scene sizes. My current approach is this:
force = sqrt(area) * k
where k is a proportionality coefficient. I couldn't make the impulse proportional to the area itself, otherwise the speed would have grown with the square of the linear size, so I used the square root of the area instead.
While this approach works, I only arrived at it by intuition. I know that the object areas are correctly proportional to the scene area, but I can't tell whether the speed can really be considered equivalent on all screen sizes.
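In code, the scaling described above looks roughly like this (a sketch; referenceArea is a hypothetical value, the scene area the impulse was originally tuned for):

import SpriteKit

// Illustrative sketch of the sqrt(area) scaling described above.
// `referenceArea` is hypothetical: the scene area the impulse was tuned for.
func scaledImpulse(base: CGVector, for scene: SKScene,
                   referenceArea: CGFloat) -> CGVector {
    let area = scene.size.width * scene.size.height
    // Linear dimensions scale with sqrt(area), so speed (and impulse, for a
    // fixed mass) is scaled with sqrt(area) as well.
    let k = sqrt(area / referenceArea)
    return CGVector(dx: base.dx * k, dy: base.dy * k)
}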
Do you know what I should do to ensure that the speed will always be equivalent on all devices?
Don't do it
Changing the physics world in relation to the screen of the device is wrong.
The physics world should be completely agnostic about its graphical representation, and it should definitely have the same properties (size, mass, distance, ...) regardless of the screen.
I understand you don't want the scene to look smaller when the game runs on an iPad Pro instead of an iPhone 5, but you need to solve this problem in another way.
I suggest you try another scaleMode, like aspectFill (capitalized if you're on Xcode 7: AspectFill). This way the scene is zoomed in and all your sprites appear bigger.
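For example (a minimal sketch; the design size and the SKView are placeholders for your own setup):

import SpriteKit

// Minimal sketch of the suggestion above: give the scene one fixed design size
// and let SpriteKit zoom it to fill the screen instead of resizing it.
func present(in skView: SKView) {
    let scene = SKScene(size: CGSize(width: 375, height: 667)) // hypothetical design size
    scene.scaleMode = .aspectFill   // .AspectFill on Xcode 7 / Swift 2
    skView.presentScene(scene)
}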
Another point of view
In the comments below, @Knight0fDragon pointed out some scenarios where you might actually want some properties of the physics world to depend on the UI. I suggest readers of this answer take a look at those comments for a point of view different from mine.
My current project contains a gravity simulator where sprites move in accordance with the forces they experience in the game scene.
One of my features involves allowing moving sprites to draw a line behind them so you can see what paths they take.
Shown here:
However, as the sprite continues its movement around the screen, the FPS begins to dive. This can be seen in the second image, taken some time after the sprite first started moving.
When researching, I found other people had posted with similar problems:
Multiple skshapenode in one draw?
However, in the question above, the answer's poster explained that the answer was meant for a static image, which isn't what I want, because my line changes in real time depending on what influences the sprite's path. This was confirmed when I tried implementing a function to add a new line to the old one, which didn't work. That code is here.
I'm asking if anyone can assist me in finding a way to properly stop this constant FPS drop that comes from all the draw operations. My current draw code consists of two functions:
-(void)updateDrawPath:(CGPoint)a B:(CGPoint)b
{
    // Extend the existing path to the sprite's latest position and reassign it
    // so the shape node redraws the (ever-growing) path.
    CGPathAddLineToPoint(_lineToDraw, NULL, b.x, b.y);
    _lineNode.path = _lineToDraw;
}

-(void)traceObject:(SKPlanetNode *)p
{
    // Start a fresh path at the planet's current position and attach it to a
    // new SKShapeNode that renders the trail.
    _lineToDraw = CGPathCreateMutable();
    CGPathMoveToPoint(_lineToDraw, NULL, p.position.x, p.position.y);
    _lineNode = [SKShapeNode node];
    _lineNode.path = _lineToDraw;
    _lineNode.strokeColor = [SKColor whiteColor];
    _lineNode.antialiased = YES;
    _lineNode.lineWidth = 3;
    [self addChild:_lineNode];
}
updateDrawPath: draws a line to the latest position of the sprite.
traceObject: takes an SKPlanetNode (a subclass of SKSpriteNode) and sets it up to have a line drawn behind it.
If anyone can suggest a way to do this and also reduce the terrible overhead I keep accumulating, it would be fantastic!
A couple of suggestions:
Consider that SKShapeNode is more or less a tool for debug drawing. Because it doesn't draw in batches, it's really not suitable to build a game around or to use extensively (whether with many shapes or with a few complex ones); one rough workaround is sketched after these suggestions.
You could draw the lines using a custom shader, which would likely be a faster and more elegant solution, though you may have to learn how to write shader programs first.
Be sure to measure performance only on a device, never the simulator.
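Not part of the suggestions above, but one common mitigation that builds on the texture idea referenced in the question: periodically bake the accumulated trail into a static sprite with SKView's texture(from:), then restart the live SKShapeNode from the last point, so the shape node never has to redraw a long path every frame. Everything here (the class name, appendTrailPoint, bakeThreshold) is illustrative, not the original code.

import SpriteKit

// Hypothetical sketch: keep the live SKShapeNode short by periodically
// flattening what has been drawn so far into a cheap-to-render SKSpriteNode.
class TrailScene: SKScene {
    private var lineToDraw = CGMutablePath()
    private let lineNode = SKShapeNode()
    private var pointCount = 0
    private let bakeThreshold = 100          // illustrative value

    override func didMove(to view: SKView) {
        lineNode.strokeColor = .white
        lineNode.lineWidth = 3
        addChild(lineNode)
    }

    // Call this with the planet's position wherever the original code called
    // updateDrawPath:.
    func appendTrailPoint(_ point: CGPoint) {
        if lineToDraw.isEmpty {
            lineToDraw.move(to: point)
        } else {
            lineToDraw.addLine(to: point)
        }
        lineNode.path = lineToDraw
        pointCount += 1

        guard pointCount >= bakeThreshold, let view = self.view,
              let texture = view.texture(from: lineNode) else { return }

        // Freeze the current segment as a static sprite...
        let baked = SKSpriteNode(texture: texture)
        baked.position = CGPoint(x: lineNode.frame.midX, y: lineNode.frame.midY)
        addChild(baked)

        // ...and restart the live path from the last point.
        lineToDraw = CGMutablePath()
        lineToDraw.move(to: point)
        lineNode.path = lineToDraw
        pointCount = 0
    }
}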
I am trying to build an app for the iPhone 4 which lets the user "point" the phone at a hardcoded destination and have a dot appear on screen where the destination is located.
First, I use the compass to make a horizontal compass (this covers the left/right rotation):
// Heading
nowHeading = heading.trueHeading;
// Shift image (horizontal compass)
float shift = bearing - nowHeading;
destinationImage.center = CGPointMake(shift+160, destinationImage.center.y);
I shift the dot by 160 pixels because the screen is 320 pixels wide. My question now is: how can I expand this code to handle up and down? That is, if I point the phone down at the table, the dot should not show; I have to point the phone at the destination (as if taking a picture) for the dot to be drawn on the screen. I've already implemented the accelerometer, but I don't know how to combine these components to solve my problem.
The bearing shift should depend on the field of view of the camera. For the iPhone 4 the horizontal angle of view is 47.5°, so 320 points / 47.5° ≈ 6.7 points per degree; use that to shift horizontally. You also have to add an adaptive filter to the accelerometer readings; you can get one from Apple's AccelerometerGraph sample project.
You have the rotation about one axis (the bearing); you should get the rotation about the other two from the accelerometers. The atan2 of two axes gives you the rotation about the third. Look at UIAcceleration and imagine an axis physically piercing the device if that helps, then do double xAngle = atan2(acceleration.y, acceleration.z);. Once you have the up/down rotation, you can repeat what you did for the horizontal axis using the vertical field of view, e.g. 60° for the iPhone.
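A rough sketch of that mapping in Swift. The 320x480 point screen and the 47.5°/60° fields of view are the iPhone 4 figures mentioned above; the names, sign conventions, and the reference pitch are illustrative and will need tuning and filtering as discussed next.

import UIKit

// Rough, unfiltered sketch of mapping heading + pitch to a screen position.
// All names and sign conventions are illustrative.
func dotPosition(bearingToTarget: Double,                    // degrees
                 trueHeading: Double,                        // degrees, from CLHeading
                 acceleration: (x: Double, y: Double, z: Double)) -> CGPoint {
    let hFOV = 47.5, vFOV = 60.0                             // degrees
    let pointsPerDegreeH = 320.0 / hFOV                      // ≈ 6.7
    let pointsPerDegreeV = 480.0 / vFOV

    // Horizontal: angular difference between where we face and where the target is.
    let headingDelta = bearingToTarget - trueHeading
    let x = 160.0 + headingDelta * pointsPerDegreeH

    // Vertical: device pitch from two accelerometer axes (atan2 of two axes
    // gives the rotation about the third), measured against a calibrated
    // "camera level with the horizon" reference.
    let pitch = atan2(acceleration.y, acceleration.z) * 180.0 / .pi
    let referencePitch = -90.0                               // illustrative, calibrate for your axes
    let y = 240.0 + (pitch - referencePitch) * pointsPerDegreeV

    return CGPoint(x: x, y: y)
}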
That is going to be a rough implementation :) and achieving smooth movement is difficult. One thing you can do is use the gyroscope to get a faster response and correct its signal periodically with the accelerometers. See this talk for the troubles ahead: Sensor Fusion on Android Devices. Here is a website dedicated to the Kalman filter. If you dare to tackle quaternions, I recommend "Visualizing Quaternions" by Andrew J. Hanson.
It sounds like you are trying to do a form of augmented reality. If that is the case, there are several libraries and sample code suggested here:
Augmented Reality