I want to animate grass on the ground. Each blade of grass is represented in a bitmap as a straight (unbent) object. The goal is then to bend the bitmap left/right into a banana shape, without visible steps.
The grass can bend very slowly. I could use hundreds of images as intermediate frames, but I want to do something more sophisticated.
Are there solutions that allow bending and warping bitmaps/textures in OpenGL ES? If not, how would you do it?
First, you need to decide whether you really want to animate the texture or the geometry. Morphing the geometry is the better approach, and it can be done programmatically or with authoring tools. The authoring-tools approach requires some sort of game engine and importing files with geometry, texture coordinates, and animation data.
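To make the programmatic approach concrete, here is a minimal sketch in C for OpenGL ES 1.1. The blade's bitmap is mapped onto a vertical triangle strip, and each frame the strip's vertices are offset sideways by an amount that grows quadratically with height, which bends the straight texture into a banana shape with no intermediate images. The function name buildBlade and the SEGMENTS count are illustrative, not from any existing API:

```c
#include <OpenGLES/ES1/gl.h>

#define SEGMENTS 8

/* Fill `verts` with a triangle strip for one grass blade, bent
   sideways by `bend`. At bend = 0 the blade is a straight quad
   strip; as |bend| grows it curves into a "banana" shape, because
   the horizontal offset rises with the square of the height
   (the root stays fixed, the tip moves the most). */
void buildBlade(GLfloat verts[(SEGMENTS + 1) * 2 * 3],
                float width, float height, float bend)
{
    int i;
    for (i = 0; i <= SEGMENTS; ++i) {
        float t = (float)i / SEGMENTS;      /* 0 at root, 1 at tip */
        float y = t * height;
        float x = bend * t * t * height;    /* quadratic sideways offset */
        int base = i * 6;
        verts[base + 0] = x - width * 0.5f; /* left vertex  */
        verts[base + 1] = y;
        verts[base + 2] = 0.0f;
        verts[base + 3] = x + width * 0.5f; /* right vertex */
        verts[base + 4] = y;
        verts[base + 5] = 0.0f;
    }
}
```

Animating is then just rebuilding the strip each frame with something like buildBlade(verts, 0.05f, 1.0f, 0.3f * sinf(time)) and drawing it with glVertexPointer and glDrawArrays(GL_TRIANGLE_STRIP, 0, (SEGMENTS + 1) * 2). The texture coordinates never change, so the bitmap itself bends smoothly with the geometry.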
I want to be able to draw any 2D shape dynamically in a 2D world. My game has moving elements that look like circles when alone, but that can fuse into complex shapes when they get close together. The elements can still move afterwards, and thus separate or modify the shape.
How can I draw that with Unity? I don't think the usual sprites can do the trick.
I believe you can do that with shaders (sounds complex to me, but it utilises the GPU).
Or...
Modify the sprite's vertices to introduce stretching of the texture.
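To flesh out the shader idea: one common technique for circles that fuse on contact is metaballs, where each element contributes a falloff field, the fields are summed, and the shape is wherever the sum crosses a threshold. A fragment shader would evaluate this per pixel; the core of it is sketched here in plain C, with names like fieldAt being purely illustrative, not a Unity API:

```c
typedef struct { float x, y, r; } Ball;

/* Sum of each ball's falloff field at point (px, py). */
float fieldAt(const Ball *balls, int count, float px, float py)
{
    float sum = 0.0f;
    for (int i = 0; i < count; ++i) {
        float dx = px - balls[i].x;
        float dy = py - balls[i].y;
        float d2 = dx * dx + dy * dy + 1e-6f;   /* avoid divide by zero */
        sum += (balls[i].r * balls[i].r) / d2;  /* r^2 / d^2 falloff */
    }
    return sum;
}

/* A pixel is inside the fused shape when the field exceeds a
   threshold. Isolated balls render as circles; balls that drift
   close together blend smoothly into one complex shape. */
int insideShape(const Ball *balls, int count, float px, float py)
{
    return fieldAt(balls, count, px, py) >= 1.0f;
}
```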
The project works under an isometric orthographic camera, in a 3D space using 2D sprites.
What we are doing is billboarding sprites onto 3D colliders to achieve the 3D feeling.
The problem is that we don't really believe the way we are doing it is the most optimal. We are also having problems introducing elevated areas, because we need to replicate the sprite's shape in isometric perspective as colliders.
Because we are using a 3D world, the tilemap tools conflict with the other vertical sprites.
We cannot use a single billboarded sprite for the entire 2D floor, because that would put a huge vertical sprite in front of the camera, preventing us from displaying the others.
We are just researching for a solution before switching to a 2D world.
If you plan on sticking with isometric in 3D, get rid of the tilemaps entirely. They are just going to give you a headache and make your game lag itself to death. If you want to convert to entirely 2D isometric, you can stick with them, as they would work fine. Now, a few comparisons between the 2D and 3D approaches, and how best to approach each. This is a jumbled list of drawbacks/advantages of each type, so it's more of a ramble than an answer from this point on, but I couldn't be more specific without knowing more about your project's overall requirements and specifications.
Unity recently added Isometric Tilemapping as a dedicated feature. So, if you choose to fake it with 2D, your life will be a lot easier.
Controls are a lot easier in 3D, as the physics won't ever have to be faked.
3D allows foreground objects to automatically cover up background objects without having to add an arbitrary system to achieve the same effect.
2D is fundamentally faster than 3D, and if you're aiming for mobile, that's going to be very important to your project's success.
3D allows you to rotate your camera if you design it right. (Check out Don't Starve Together for an example of this design).
I'm currently experimenting with OpenGL ES 1.1 on the iPhone and trying to get my head around some of the basics. So far I've managed to draw a grid of objects which are lit with one GL_LIGHT. Here is a screenshot of the current output (question to follow)...
So you can see that my test consists of a grid of about 140 cubes, some slightly elevated so I can see how the shaded areas work. Each cube consists of this model (from Blender) and has normals / texture coordinates...
What's puzzling me is why I don't get 'uniform' lighting across the entire surface. Each cube seems to be lit individually, and I can kind of understand why that would be... but is it not possible to have the light transition 'normally', as it would if you arranged this model out of blocks and shone a light across it? I'd expect not to see a dark edge on each individual cube, but rather a smooth transition across the whole area.
(I'm still inwardly chuffed that I managed to get this far!)
Any help or explanations would be awesome.
Thanks,
Simon
The reason you don't get 'uniform' lighting is, I presume, that you are using per-vertex lighting. That is, lighting is calculated per vertex and interpolated over each triangle making up the model. Since your cube has a pretty low polygon count, the transition of light across the model won't look smooth.
Using OpenGL ES 1.1, there are two solutions to this: use higher-polygon-count models, or implement per-pixel (DOT3) lighting. I've not implemented the latter myself, but I have come across this problem before (my solution was to switch to OpenGL ES 2.0 and use shaders to perform per-pixel lighting).
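For concreteness, DOT3 in ES 1.1 goes through the fixed-function texture-environment combiners. A rough sketch, assuming a tangent-space normal map is bound on the active texture unit and the per-vertex light vector has been packed into the vertex color (remapped from -1..1 to 0..1):

```c
/* The bound texture holds the normal map; the light direction is
   encoded into the primary (vertex) color. The combiner computes a
   per-pixel dot product between the two, giving diffuse lighting
   that varies across each face instead of only at the vertices. */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);        /* normal map texel */
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PRIMARY_COLOR);  /* light vector    */
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
```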
Here is a link, which may be of use: What is DOT3 lighting?
All the best!
We are trying to achieve the following in an iphone game:
Using 2D PNG files, set up a scene that appears 3D. As the user moves the device, the individual PNG files warp/distort accordingly to give the effect of depth.
Example of a scene: an empty room, 5 walls, and a chair in the middle = 6 PNG files, layered.
We have successfully accomplished this using native functions like skew and scale. By applying transformations to the various walls and the chair as the device is tilted or moved, the walls skew/scale/translate. However, the problem is that since we are using 6 PNG files, the edges don't meet as we move the device. We need a new solution using a real engine.
Question:
We are thinking that instead of applying skew/scale transformations, if we had the freedom to move the vertices of the rectangular images, we could precisely distort the images and keep all the edges 100% aligned.
What is the best framework to do this in the LEAST amount of time? Are we going about this the correct way?
You should be able to achieve this effect (at least in regards to the perspective being applied to the walls) using Core Animation layers and appropriate 3-D transforms.
A good example of constructing a scene like this can be found in the example John Blackburn provides here. He shows how to set up layers to represent the walls in a maze by applying the appropriate rotation and translation to them, then gives the scene perspective by using the trick of altering the m34 component of the CATransform3D for the scene.
I'm not sure how well your flat chair would look using something like this, but certainly you can get your walls to have a nice perspective to them. Using layers and Core Animation would let you pull off what you want using far less code than implementing this using OpenGL ES.
Altering the camera angle is as simple as rotating the scene in response to shifts in the orientation of the device.
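The m34 trick mentioned above boils down to a few lines. CATransform3D is a plain C struct from QuartzCore, so a sketch of building the scene transform looks like this (the 500-point eye distance is an illustrative value, not from the linked article):

```c
#include <QuartzCore/QuartzCore.h>

/* Build a perspective transform for the scene. Applied as the
   container layer's sublayerTransform, it makes every wall layer
   inside recede with distance instead of staying flat. */
CATransform3D makeScenePerspective(CGFloat yRotation)
{
    CATransform3D t = CATransform3DIdentity;
    t.m34 = -1.0 / 500.0;  /* illustrative eye distance: a smaller
                              magnitude flattens the scene, a larger
                              one exaggerates the depth */
    /* Rotate the whole room as the device tilts. */
    t = CATransform3DRotate(t, yRotation, 0.0f, 1.0f, 0.0f);
    return t;
}
```

In Objective-C you would then assign the result to the container layer's sublayerTransform and update yRotation from the device's orientation, so the entire room pivots with a single property change.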
If you're going to the effort of warping textures as they would be warped in a 3D scene, then why not let the graphics hardware do the hard work for you by mapping the textures to 3D polygons, then changing your projection or moving polygons around?
I doubt you could do it faster by restricting yourself to 2D transformations; the hardware is geared up to do 3x3 (well, 4x4 homogeneous) matrix multiplication.
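To sketch what that looks like in OpenGL ES 1.1 (all coordinates here are illustrative, and the wall's PNG is assumed to be already loaded and bound as the current texture):

```c
#include <OpenGLES/ES1/gl.h>

/* Draw one wall as a textured quad standing in 3D. With a
   perspective projection, the hardware warps every quad
   consistently, so shared edges between walls stay aligned
   by construction instead of drifting apart. */
static void drawWall(void)
{
    static const GLfloat verts[] = {     /* x, y, z per corner */
        -5.0f, -5.0f, -10.0f,
         5.0f, -5.0f, -10.0f,
        -5.0f,  5.0f, -10.0f,
         5.0f,  5.0f, -10.0f,
    };
    static const GLfloat texCoords[] = { /* map the full PNG onto the quad */
        0.0f, 1.0f,  1.0f, 1.0f,  0.0f, 0.0f,  1.0f, 0.0f,
    };

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustumf(-1.0f, 1.0f, -1.5f, 1.5f, 2.0f, 100.0f); /* perspective */

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
```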
How big is the difference between the description language of Quartz 2D and that of OpenGL ES?
They seem similar in descriptive power... except that Quartz is mostly 2D, while OpenGL is 3D out of the box (but can be made 2D-focused).
Are the mappings from 2D Quartz to 2D OpenGL ES that different? I'm sure there must be specific features handled differently in one versus the other... but would a translator be feasible?
Does anyone with experience in both OpenGL and Quartz 2D have some insights?
Quartz and OpenGL ES are two completely different animals. While they both have a C-based API that deals with a state machine and that draws into a context, their purposes are dissimilar. In Quartz you specify lines, Bezier and quadratic curves, arcs, or rectangles, as well as fills, gradients, and shadows / glows. In OpenGL ES, you provide vertices, raster textures, and lighting information, from which a scene is generated.
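The difference shows up immediately in code. Drawing a stroked curve is one declarative call sequence in Quartz, whereas in OpenGL ES there is no curve primitive and you tessellate it into vertices yourself. A minimal sketch of each (the Quartz side assumes you already have a valid CGContextRef; control-point values are arbitrary):

```c
#include <CoreGraphics/CoreGraphics.h>
#include <OpenGLES/ES1/gl.h>

/* Quartz: describe the shape; the framework rasterizes it. */
static void drawCurveQuartz(CGContextRef ctx)
{
    CGContextBeginPath(ctx);
    CGContextMoveToPoint(ctx, 10.0f, 10.0f);
    CGContextAddQuadCurveToPoint(ctx, 60.0f, 100.0f, 110.0f, 10.0f);
    CGContextSetLineWidth(ctx, 2.0f);
    CGContextStrokePath(ctx);
}

/* OpenGL ES: flatten the same quadratic Bezier into line segments
   by hand, then submit the vertices. */
static void drawCurveGLES(void)
{
    GLfloat pts[2 * 17];
    for (int i = 0; i <= 16; ++i) {
        float t = i / 16.0f;
        float u = 1.0f - t;
        pts[i * 2 + 0] = u * u * 10.0f + 2 * u * t * 60.0f  + t * t * 110.0f;
        pts[i * 2 + 1] = u * u * 10.0f + 2 * u * t * 100.0f + t * t * 10.0f;
    }
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, pts);
    glDrawArrays(GL_LINE_STRIP, 0, 17);
}
```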
They are both useful in particular cases. You might draw a 2-D static element using Quartz, into a view, layer, or texture, and then place and move that view or layer in 3-D space using Core Animation or do the same for a texture using OpenGL ES.
Rather than try to overlay one API on the other, use whichever is more appropriate for what you are doing, or look to a framework like cocos2d, which lets you build and animate 2-D scenes, or Core Animation, where you can do Quartz drawing into a layer but still use a nicely abstracted API for moving those layers around.