I'm developing a game for mobile platforms. I have a menu with levels. There are a lot of levels, so there should be kinetic scrolling. What I did: every frame I read touches[0].position and, based on the difference from the previous position, I move the camera.
But, because of the inaccurate touch position (I suppose), the camera doesn't move smoothly. I'm thinking about calculating the average speed over three frames, for example, and moving the camera according to that speed. Can you give me any advice on how to smooth the movement?
Also, touches[0].deltaPosition seems to work incorrectly.
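The averaging I have in mind would look something like this (a rough sketch; the class name, window size, and scale factor are all placeholders):

    using System.Collections.Generic;
    using UnityEngine;

    // Sketch: average the last few per-frame touch deltas and move the
    // camera by the mean, so a single noisy reading has less impact.
    public class KineticScroll : MonoBehaviour
    {
        [SerializeField] Camera cam;
        [SerializeField] int windowSize = 3;          // frames to average over
        [SerializeField] float unitsPerPixel = 0.01f; // screen pixels -> world units

        readonly Queue<Vector2> deltas = new Queue<Vector2>();
        Vector2 lastPos;

        void Update()
        {
            if (Input.touchCount == 0) { deltas.Clear(); return; }

            Touch t = Input.GetTouch(0);
            if (t.phase == TouchPhase.Began) { lastPos = t.position; return; }

            // Compute the delta manually instead of trusting deltaPosition.
            Vector2 delta = t.position - lastPos;
            lastPos = t.position;

            deltas.Enqueue(delta);
            if (deltas.Count > windowSize) deltas.Dequeue();

            Vector2 avg = Vector2.zero;
            foreach (Vector2 d in deltas) avg += d;
            avg /= deltas.Count;

            // Drag the camera opposite to the finger movement.
            cam.transform.position -= (Vector3)(avg * unitsPerPixel);
        }
    }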
I am working in Unity2D and have a character who can shoot in all directions. The game has mobile controls, and the player chooses his shooting direction with a joystick on the right side. After testing, I found it rather difficult to determine where the shot is going to land. To convey this to the player, I wanted to add a sort of "shooting line of sight" to the player. It moves around the player according to the joystick's position. (As seen in the picture.)
Because I want the line to be pixelated (to match the design theme of the game), I can't use the standard LineRenderer component, as it doesn't support such pixelated lines. So I started looking for solutions to this problem and stumbled upon Bresenham's line algorithm. After implementing it in the game, I knew which x, y coordinates I had to fill with a pixel. Currently, every single white pixel in this line is a GameObject with its own SpriteRenderer and a single white pixel as a sprite. In my test scenario, up to 400 pixels were rendered at once. I am already using an object pooling system to minimize the performance cost. Moving this line around drops me from around 1100 FPS down to about 200 FPS in the editor. I know it will be slightly better in the built game, but I am sure it will take a toll on older mobile devices when enemies, animations, etc. are present in the level.
So my question is: is there a better or more efficient way to render the sprites? Or (preferably) do you have a better idea of how to render this line at all without creating ~300+ GameObjects (e.g. shaders (zero experience), or drawing on a texture)?
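For reference, the "drawing on a texture" idea could look roughly like this (an untested sketch; all names are made up, and the coordinates are assumed to lie inside the texture):

    using UnityEngine;

    // Sketch: plot the Bresenham line into one small Texture2D shown by a
    // single SpriteRenderer, instead of one GameObject per pixel.
    [RequireComponent(typeof(SpriteRenderer))]
    public class PixelLine : MonoBehaviour
    {
        [SerializeField] int size = 64;   // texture resolution in pixels
        Texture2D tex;
        Color32[] pixels;

        void Awake()
        {
            tex = new Texture2D(size, size, TextureFormat.RGBA32, false);
            tex.filterMode = FilterMode.Point; // keep the pixelated look
            pixels = new Color32[size * size];
            GetComponent<SpriteRenderer>().sprite =
                Sprite.Create(tex, new Rect(0, 0, size, size), new Vector2(0.5f, 0.5f));
        }

        // Redraw the whole line; call this only when the joystick moves.
        public void Draw(int x0, int y0, int x1, int y1)
        {
            System.Array.Clear(pixels, 0, pixels.Length);

            // Standard Bresenham line.
            int dx = Mathf.Abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
            int dy = -Mathf.Abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
            int err = dx + dy;
            while (true)
            {
                pixels[y0 * size + x0] = new Color32(255, 255, 255, 255);
                if (x0 == x1 && y0 == y1) break;
                int e2 = 2 * err;
                if (e2 >= dy) { err += dy; x0 += sx; }
                if (e2 <= dx) { err += dx; y0 += sy; }
            }

            tex.SetPixels32(pixels);
            tex.Apply(); // one texture upload per redraw
        }
    }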
I am grateful for any ideas that lead me in the right direction.
I am developing an augmented reality application that tracks an object via the camera (a real object, using Vuforia); my aim is to measure the distance it travels.
I am using Unity + Vuforia.
For each frame, I calculate the distance between the first position and the current position (a vector calculation).
But I get wrong position values, and camera movements affect the result.
(I don't want to take the camera offset into account.)
Any solution?
To clarify, I want to implement this experience (video):
https://youtu.be/-c5GiXuATh4
From the comments and the question, I understand the problem is using the camera as the origin. This means that in every frame of your application the camera is the origin, and the positions of all trackables are calculated relative to the camera. Therefore, even if you do not move your target, its position will change because of camera movement.
To eliminate this problem, I would recommend using extended tracking. This will minimize the impact of camera movement on the position of your target. You can try and test this by adding a trail renderer to your image target: you will see that the image stays at a certain position regardless of camera movement.
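To make this concrete, here is a minimal sketch of accumulating the travelled distance once extended tracking stabilises the target's position (all names and thresholds are placeholders):

    using UnityEngine;

    // Sketch: sum up how far the tracked target's transform moves in world
    // space, ignoring tiny jumps that are likely tracking jitter.
    public class DistanceTracker : MonoBehaviour
    {
        [SerializeField] Transform target;       // your ImageTarget's Transform
        [SerializeField] float minStep = 0.005f; // jitter threshold (world units)

        Vector3 lastPos;
        public float TotalDistance { get; private set; }

        void Start()
        {
            lastPos = target.position;
        }

        void Update()
        {
            float step = Vector3.Distance(target.position, lastPos);
            if (step >= minStep) // filter out tracking noise
            {
                TotalDistance += step;
                lastPos = target.position;
            }
        }
    }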
I am searching for a solution for how to make a perfect pinch zoom in Unity by moving the camera along its forward vector:
Set up:
Horizontal plane centred at the origin with all GameObjects.
Perspective camera with FOV 10, offset at (10, 10, 10), looking down at a 45-degree angle so that it looks at the origin (there is also a 45-degree rotation around the up axis to achieve this).
What I need:
When I place two fingers on the screen, I am touching two GameObjects with them, so the screen coordinates under the fingers correspond to certain world coordinates. When I make a pinch movement (moving two fingers or only one), I want the new screen coordinates to correspond to the same world coordinates that were under the fingers at the beginning of the whole interaction.
So to simplify even further: whenever I touch the screen with two fingers, I want the world coordinates corresponding to the screen coordinates under my fingers to always stay under the fingers (allowing a very small margin of error).
An example of the perfect zoom I am looking for can be seen in the mobile game Boom Beach from Supercell.
I have already tried moving the camera along its forward vector and repositioning it, and I get pretty good results, but the GameObjects underneath almost always 'slip' away from under my fingers, that is, at some point they are no longer underneath them. It would be great if there were a mathematical solution to this, but if it's necessary to compute the answer (through some search, for example), that is totally fine.
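Roughly, what I tried looks like this (simplified; names and factors are placeholders):

    using UnityEngine;

    // Simplified version of my attempt: dolly the camera along its forward
    // vector proportionally to the change in pinch distance.
    public class PinchDolly : MonoBehaviour
    {
        [SerializeField] Camera cam;
        [SerializeField] float speed = 0.01f; // world units per pixel of pinch

        float lastPinchDist;

        void Update()
        {
            if (Input.touchCount < 2) return;

            Touch a = Input.GetTouch(0), b = Input.GetTouch(1);
            float pinchDist = Vector2.Distance(a.position, b.position);

            if (a.phase == TouchPhase.Began || b.phase == TouchPhase.Began)
            {
                lastPinchDist = pinchDist; // re-baseline on a new touch
                return;
            }

            float delta = pinchDist - lastPinchDist;
            lastPinchDist = pinchDist;

            // Spreading the fingers moves the camera forward (zoom in).
            // This is where the touched points slowly 'slip' away.
            cam.transform.position += cam.transform.forward * (delta * speed);
        }
    }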
If the setup/scenario is not clear, I could provide some sketches to clarify it a bit more.
Hope someone can help me! :)
I would set up a system that detects when the user is zooming in or out; if you are using GameObjects to pinpoint where the fingers are, that is easy to do with Vector3.Distance. After that, I would make a function that moves the camera towards your desired zoom level with Vector3.MoveTowards(cameraPosition, desiredPosition, movementSpeed), where I would set the movement speed to Mathf.Sqrt(Vector3.Distance(cameraPosition, desiredPosition)).
As for the desired position, I would set that Vector3 as a fraction of the line between two GameObjects that represent your maximum and minimum zoom levels.
EDIT: with that, you should have a very nice camera system.
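A minimal sketch of that idea (names are illustrative; I also scale the speed by Time.deltaTime to make it frame-rate independent):

    using UnityEngine;

    // Sketch: derive a 0..1 zoom level from the pinch distance, pick the
    // desired position as a fraction of the line between a min-zoom and a
    // max-zoom marker, and ease the camera toward it with MoveTowards.
    public class PinchZoomCamera : MonoBehaviour
    {
        [SerializeField] Transform minZoom;   // camera spot when fully zoomed out
        [SerializeField] Transform maxZoom;   // camera spot when fully zoomed in
        [SerializeField] float pinchScale = 0.001f;

        float zoom = 0.5f;                    // current zoom level, 0..1
        float lastPinchDist;

        void Update()
        {
            if (Input.touchCount == 2)
            {
                Touch a = Input.GetTouch(0), b = Input.GetTouch(1);
                float dist = Vector2.Distance(a.position, b.position);
                if (a.phase != TouchPhase.Began && b.phase != TouchPhase.Began)
                    zoom = Mathf.Clamp01(zoom + (dist - lastPinchDist) * pinchScale);
                lastPinchDist = dist;
            }

            // Desired position is a fraction of the min/max line.
            Vector3 desired = Vector3.Lerp(minZoom.position, maxZoom.position, zoom);

            // Speed falls off as the camera approaches the target.
            float speed = Mathf.Sqrt(Vector3.Distance(transform.position, desired));
            transform.position =
                Vector3.MoveTowards(transform.position, desired, speed * Time.deltaTime);
        }
    }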
I'm developing a game with large obstacles and sprites (in cocos2d + Box2D for iPhone). After zooming out my sprites and layer (by increasing the camera's Z), I let the user play the game, which causes some problems with touch detection of dynamic objects.
Is this a good approach to work with? If not, what would be the solution to make it work properly (considering that I have come this far with this approach)?
NOTE: [self.camera setEyeX:0 eyeY:0 eyeZ:180]; (I'm using this line for zooming out, moving the camera away from my sprites by increasing Z.)
If you use a camera for zooming, then cocos2d will no longer correctly convert your touch locations to OpenGL coordinates, since it doesn't invert the camera transform. I would recommend using scale on the layer your objects reside on to implement zooming. This gives you precise control over the zoom factor, and touches will be transformed correctly when you use the methods that convert touches from screen space to node space (such as convertTouchToNodeSpace:).
I'm making an iPhone game in cocos2d.
I was wondering how I would make the camera / the view follow a specific sprite.
Would I use the CCCamera class?
Yes, CCCamera would work. However, it has some drawbacks that make it undesirable for some uses. Moving the layers, and with them all other objects, relative to that sprite may be a better solution. It depends on the game.
First, read up on the different approaches and their drawbacks; you can get a lot out of this cocos2d forum thread:
http://www.cocos2d-iphone.org/forum/topic/5363
It would be helpful if you could describe what your game is about and why you need the camera attached to that sprite.
For example, if you're thinking of a running game like Canabalt, I would not use the camera to scroll over the world, but instead scroll everything relative to the player (towards him), with the player sticking to about the same x coordinate while running. Perfect examples of games where you would not move the camera at all are the iCopter games; they are basically simplified versions of Canabalt. Notice that the player sprite always stays at the exact same x coordinate, and the game world just scrolls past it.
Scrolling the camera itself makes the most sense, in my opinion, if you have a large game world that the player can traverse in all directions, with objects so numerous and moving about in so many directions that updating their positions individually each frame would be both overkill and prone to errors. And since the game world is so huge, you would want to use the camera's position to limit what is drawn on screen.
Use a CCFollow action.
Like this:
[self runAction:[CCFollow actionWithTarget:hero worldBoundary:CGRectMake(0, 0, 1050, 350)]]; // 'hero' is your player sprite
It will help.