Does anyone know a tutorial that explains how to shade an object to look like
silver metal (on iPhone)?
Maybe starting with a sphere like in this:
http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-part-5-living.html
Or can this not be accomplished without the new shaders in 2.0?
Thanks
Sebastian
What you're looking for is called environment mapping. It can be done using sphere mapping (which works on very simple hardware) or cube mapping.
Cube mapping could be done long before pixel shaders became popular, but it appears to be an extension to OpenGL ES 1.1, so the iPhone may or may not implement it (quick googling suggests not, but I didn't try).
Sphere mapping should be supported in ES. It has been in OpenGL since the beginning, I believe.
Anyway, to clarify: these methods only transform texture coordinates, so they don't need to work at the pixel level. Hence a pixel shader is unnecessary. However, with a pixel shader you could do more advanced things like bump mapping, which would give your object more of a "surface".
Try something like this, transliterated to ES.
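For reference, the classic fixed-function setup on desktop GL is just texture-coordinate generation; a minimal C sketch, assuming a sphere-map texture is already bound:

    // Desktop GL fixed-function sphere mapping: generate s/t from the reflection vector
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
    glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
    glEnable(GL_TEXTURE_GEN_S);
    glEnable(GL_TEXTURE_GEN_T);
    // ...then draw the object with per-vertex normals supplied

If glTexGen turns out not to be available in your ES version, the same mapping can be computed per vertex on the CPU from the view-space reflection vector r:

    // Classic sphere-map formula, computed per vertex (r = view-space reflection vector)
    float m = 2.0f * sqrtf(r.x * r.x + r.y * r.y + (r.z + 1.0f) * (r.z + 1.0f));
    float s = r.x / m + 0.5f;
    float t = r.y / m + 0.5f;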
I'm new to shaders, and with the new Shader Graph in Unity I'm trying to experiment and achieve some effects that I have in mind for my games.
I want to get something like this:
https://imgur.com/vqy9y3H
I want a glow effect to go around my object. In my case it's a square neon light, so it should be simple, I think.
What I have so far, experimenting and unifying different tutorials, effects, etc.:
https://imgur.com/aPW95S0
This is the current Shader Graph; I know it's a mess and there are probably useless nodes in it:
https://imgur.com/J3jzGE6
This is the tutorial that I think comes closest to what I need:
https://www.youtube.com/watch?v=UJUlGJS3QpY
Thanks in advance for any tip that helps me find the correct path to achieve the effect.
EDIT:
To make it clear, the real problem for me is the motion effect. I have already set up the glow effect with post-processing and bloom. My problem is how to make the effect travel around the object. In my case it's a neon tube, so I think it's easier: the effect can run over the whole object from the start of the tube to the end, and since the tube is closed, it will start again from the beginning at almost the same point. I hope that makes it clear.
You can use post-processing to get the effect you're after, specifically the bloom and tone-mapping effects.
With post-processing applied, you just need to push the color into HDR levels in your shader.
Applying a blur outside of post-processing is actually extremely expensive and isn't really recommended unless there is absolutely no other option.
Recently, I've been trying to learn how to use OpenGL ES 2.0 and GLKit to create simple 2D games. I've been following this tutorial by Ray Wenderlich and it's been very helpful so far. However, upon profiling my project (and his) for leaks I found that GLKBaseEffect's prepareToDraw: (specifically, GLKShaderBlockNode's copyWithZone) is leaking everywhere - I'm using ARC, by the way. After searching around quite a bit it seems that this is a bug in GLKBaseEffect and that I can't do anything about it. Is this true? The only solution I've found suggested is scrapping GLKBaseEffect entirely.
If that's the case, I have to roll my own custom vertex and fragment shaders as a result. However, I have no idea how to do this. I would appreciate any resources or help on creating custom shaders and adapting the code in the above tutorial to use those instead.
Thank you very much for your time. :)
For starters, in Xcode do File -> New Project and select "OpenGL Game".
Run it if you like; you get two cubes orbiting each other.
Take a look at Shader.fsh and Shader.vsh. In ViewController.m, examine compileShader, linkProgram and validateProgram (these compile and link the shaders).
Examining that sample app should be enough to get you "in the door" on how to get a shader running; from there, search for OpenGL ES 2.0 shader tutorials or check out some of the sample apps in the Apple code library.
Note: Going from Apple's built-in easy effects to shaders is a significantly wide "canyon".
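To give a sense of the shape of it, the compile-and-link step those template methods perform boils down to something like this (a minimal C sketch; the function name and log handling are mine, not the template's):

    // needs <OpenGLES/ES2/gl.h> and <stdio.h>
    GLuint compileShader(GLenum type, const char *source) {
        GLuint shader = glCreateShader(type);
        glShaderSource(shader, 1, &source, NULL);   // one source string
        glCompileShader(shader);
        GLint ok = 0;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        if (!ok) {
            char log[512];
            glGetShaderInfoLog(shader, sizeof log, NULL, log);
            printf("shader compile failed: %s\n", log);
            glDeleteShader(shader);
            return 0;
        }
        return shader;
    }

    // then link the two stages into a program
    GLuint program = glCreateProgram();
    glAttachShader(program, vertexShader);
    glAttachShader(program, fragmentShader);
    glLinkProgram(program);   // check GL_LINK_STATUS the same way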
Well, after a decent amount of procrastination, I decided to bite the bullet and just do it. The leaks are gone now, although it took me a while to get it working. I read through OpenGL ES 2.0 for iPhone, Chapter 4 and learned about vertex and fragment shaders along with how to compile them and link them. After understanding how it worked, I put the author's GLProgram helper class into my project to handle all the boilerplate stuff and got to work quickly. Then, I created two shaders, nearly identical to the ones found here.
I followed his instructions on getting the attribute and uniform locations and stored them in a structure, passing them all at once to my sprite objects as they were initialized. Then, when it came time to render, I passed in all the information for my attributes as I had done earlier with GLKBaseEffect; the only difference (aside from manually binding the texture to a texture unit) was that I had to pass the modelviewMatrix and projectionMatrix uniforms in myself instead of setting a GLKBaseEffect property.
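In case it helps anyone else, the per-draw replacement for prepareToDraw ends up looking roughly like this (a sketch; the attribute and uniform names are whatever your own shaders declare):

    // fetched once, after linking
    GLint aPosition   = glGetAttribLocation(program, "Position");
    GLint uModelview  = glGetUniformLocation(program, "Modelview");
    GLint uProjection = glGetUniformLocation(program, "Projection");
    GLint uSampler    = glGetUniformLocation(program, "Texture");

    // per draw, instead of [effect prepareToDraw]
    glUseProgram(program);
    glUniformMatrix4fv(uModelview, 1, GL_FALSE, modelviewMatrix.m);    // GLKMatrix4
    glUniformMatrix4fv(uProjection, 1, GL_FALSE, projectionMatrix.m);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, textureName);
    glUniform1i(uSampler, 0);                     // sampler reads texture unit 0
    glEnableVertexAttribArray(aPosition);
    glVertexAttribPointer(aPosition, 2, GL_FLOAT, GL_FALSE, 0, vertices);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);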
I'm currently experimenting with OpenGL ES 1.1 on the iPhone and trying to get my head around some of the basics. So far I've managed to draw a grid of objects which are lit with one GL_LIGHT. Here is a screenshot of the current output (question to follow)...
So you can see that my test consists of a grid of about 140 cubes, some slightly elevated so I can see how the shaded areas work. Each cube is built from this model (from Blender) and has normals / texture coordinates...
What's puzzling me is why I don't get 'uniform' lighting across the entire surface. Each cube seems to be lit individually, and I can kind of understand why that would be... but is it not possible to have the light transition 'normally', like it would if you arranged this model out of blocks and shone a light across it? I'd expect not to see a dark edge on each individual cube, but rather a smooth transition across the whole area.
(I'm still inwardly chuffed that I managed to get this far!)
Any help or explanations would be awesome.
Thanks,
Simon
The reason you don't get 'uniform' lighting is, I presume, that you are using per-vertex lighting. That is, the lighting is calculated per vertex and interpolated over each triangle making up the model. Since your cube has a pretty low polygon count, the transition of light across the model won't look smooth.
Using OpenGL ES 1.1 there are two solutions to this: use higher polygon count models, or implement per-pixel (DOT3) lighting. I've not implemented the latter myself but have come across this problem before (my solution was to switch to OpenGL ES 2.0 and use shaders to perform per-pixel lighting).
Here is a link, which may be of use: What is DOT3 lighting?
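For the curious, the fixed-pipeline route goes through the texture combiners; a rough ES 1.1 sketch (untested here, and it assumes a normal map is bound and the tangent-space light vector is packed into the vertex colours):

    // DOT3 bump mapping on the ES 1.1 fixed pipeline
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);         // normal map texel
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PRIMARY_COLOR);   // light vector in vertex colour
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
    // both inputs must be packed into [0,1]: packed = vector * 0.5 + 0.5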
All the best!
I need to create a virtual tour tool for iOS. It's an archaeological application: the user could open it when inside a historic building or when visiting an archaeological dig. No need for a Doom-like first-person point of view: just a skybox. The application will have a list of points of interest (POIs), and every POI will have its own skybox.
I thought that I could use OpenGL ES to create a sort of textured skybox that could be driven/rotated by touches. The textures are high-resolution PNG photos.
It's a funded project and I have 4 months.
Where do I have to go to learn how to develop it? Do I have to purchase a book? Which one?
I have just moderate Objective-C and Cocoa Touch skills, since I've built just one application for the iPad. I have zero knowledge of OpenGL ES.
Since I know OpenGL ES quite well, I had a go at a demo project, doing much of what you describe. The specific intention was to do everything in the simplest way available under OpenGL ES as long as the performance was good enough.
Starting from the OpenGL template that Apple supply, I have written one new class with a heavily commented implementation file 122 lines long that loads PNG images as textures. I've modified the sample view controller to draw a skybox as required and to respond to touches with a version of the normal iPhone inertial scrolling, which has meant writing less than 200 lines of (also commented) code.
To achieve this I needed to know:
the CoreGraphics means for getting pixel data from a PNG
how to set up the PROJECTION stack to get a perspective projection with the correct aspect ratio
how to manipulate the MODELVIEW stack to ensure two-axis rotation (first person shooter or Google StreetView style) of the scene according to member variables and to ensure that the cube geometry I defined doesn't visibly intersect the near clip plane
how to specify vertex locations and texture coordinates to OpenGL
how to specify the triangles OpenGL should construct between vertices
how to set the OpenGL texture parameters accordingly to supply only one level of detail for the texture
how to track a touch to manipulate the member variables dictating rotation, including a tiny bit of mechanics to give an inertial rotation
Of course, the normal view controller lifecycle instructions are obeyed. Textures are loaded on viewDidLoad and released on viewDidUnload, for example, to ensure that this view controller plays nicely with potential memory warnings.
The main observations are that, beyond knowing the Objective-C signalling mechanisms, most of this is C stuff. You're primarily using C arrays and references to make C function calls, both for OpenGL and CoreGraphics. So a prerequisite for coding this yourself is being happy in C, not just Objective-C.
The CoreGraphics stuff is a bit tedious but it's all just reading the docs to figure out how each type of thing relates to the next — none of it is really confusing. Just get into your head that you need a data provider for the PNG data, you can create an image from that data provider and then create a bitmap context with memory that you've allocated yourself, draw the image into the context and then release everything except the memory you allocated yourself to be left with the result. That result can be directly uploaded to OpenGL. It's relatively short boilerplate stuff, but OpenGL has no concept of PNGs and CoreGraphics has no convenient methods of pushing things into OpenGL.
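In case the shape of that boilerplate is useful, here's roughly what it compresses to (a sketch with error checking omitted; path is assumed to hold the PNG's filesystem path):

    // PNG -> raw RGBA pixels via CoreGraphics, then straight into OpenGL
    CGDataProviderRef provider = CGDataProviderCreateWithFilename(path);
    CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, false,
                                                        kCGRenderingIntentDefault);
    size_t width = CGImageGetWidth(image), height = CGImageGetHeight(image);
    void *pixels = malloc(width * height * 4);          // memory we own
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                                 colourSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    // release everything except the GL-side copy
    CGContextRelease(context);
    CGColorSpaceRelease(colourSpace);
    CGImageRelease(image);
    CGDataProviderRelease(provider);
    free(pixels);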
I've assumed that textures are a suitable size on disk. For practical purposes, that means assuming they're a power-of-two in size along each edge. Mine are 512x512.
The OpenGL texture management stuff is easy enough; it's just reading the manual to learn about texture names, name allocation, texture parameters and uploading image data. More routine stuff that is more about knowing the right functions than managing an intuitive leap.
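The whole of the management side is basically this (single level of detail, so no mipmapped filters, and clamping so the skybox edges don't bleed):

    GLuint textureName;
    glGenTextures(1, &textureName);
    glBindTexture(GL_TEXTURE_2D, textureName);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);   // one LOD only
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    // ...then the glTexImage2D upload from the CoreGraphics step above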
For supplying the geometry to OpenGL I've just written out the arrays in full. I guess you need a bit of a spatial mind to do it, but sketching out a 3d cube on paper and numbering the corners would be a big help. There are three relevant arrays:
the vertex positions
the texture coordinates that go with each vertex location
a list of indices referring to vertex positions that defines the geometry
In my code I've used 24 vertices, treating each face of the cube as a logically discrete thing (so, six faces, each with four vertices). I've defined the geometry using triangles only, for simplicity. Supplying this stuff to OpenGL is actually quite annoying when you're starting; making an error generally means your program crashes deep inside the OpenGL driver without giving you a hint as to what you did wrong. It's probably best to build up a bit at a time.
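To give a flavour, one face looks something like this (a sketch; the other five faces follow the same pattern, which is how you get to 24 vertices):

    // one face of the skybox, as two triangles (ES 1.1 client-side arrays)
    static const GLfloat vertices[]  = { -1,-1,-1,   1,-1,-1,   1, 1,-1,  -1, 1,-1 };
    static const GLfloat texCoords[] = {  0, 1,      1, 1,      1, 0,      0, 0 };
    static const GLubyte indices[]   = {  0, 1, 2,   0, 2, 3 };

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, indices);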
In terms of a UIView capable of hosting OpenGL content, I've more or less used the vanilla stuff Apple directly supply in the OpenGL template. The one change I made was explicitly to disable any attempted use of OpenGL ES 2.x. 1.x is more than sufficient for this task, so we gain simplicity firstly by not providing two alternative rendering paths and secondly because the ES 2.x path would be a lot more complicated. ES 2.x is the fully programmable pipeline with pixel and vertex shaders, but in ES land the fixed pipeline is completely removed. So if you want one then you have to supply your own substitutes for the normal matrix stacks, you have to write vertex and fragment shaders to do 'a triangle with a texture', etc.
The touch tracking isn't particularly complicated, more or less just requiring me to understand how the view frustum works and how touches are delivered in Cocoa Touch. Once you've done everything else, this bit should be quite easy.
Notably, the maths I had to implement was extremely simple. Just the touch tracking, really. Assuming you wanted a Google Maps-type view meant that I could rely entirely on OpenGL's built-in ability to rotate things, for example. At no point do I explicitly handle a matrix.
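Concretely, the matrix work amounts to a perspective frustum plus two glRotatef calls driven by the touch-tracked angles; a sketch, where fovY, aspect, pitchDegrees and yawDegrees stand in for my member variables:

    // projection: perspective with the view's aspect ratio
    GLfloat zNear = 0.1f, zFar = 10.0f;
    GLfloat top = zNear * tanf(fovY * 0.5f);   // fovY in radians
    GLfloat right = top * aspect;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustumf(-right, right, -top, top, zNear, zFar);

    // modelview: two-axis rotation from the touch-tracked member variables
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(pitchDegrees, 1.0f, 0.0f, 0.0f);
    glRotatef(yawDegrees,   0.0f, 1.0f, 0.0f);
    // ...then draw the cube; OpenGL does all the matrix maths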
So, how long it would take you to write depends on your own confidence with C and with CoreGraphics, and how happy you are sometimes coding in the dark. Because I know what I'm doing, the whole thing took two or three hours.
I'll try to find somewhere to upload the project so that you can have a look at it. I think it'd be helpful to leaf through it and see how alien it looks. That'll probably give you a good idea about whether you could implement something that meets all of your needs within the time frame of your project.
I've left the view controller as having exactly one view, which is the OpenGL view. However, the normal iPhone compositing rules apply and in your project you can easily put normal controls on top. You can grab my little implementation at mediafire. StackOverflow post length limits prevent me from putting big snippets of code here, but please feel free to ask if you have any specific questions.
It's going to be pretty tough if you're learning OpenGL ES from scratch. I'd use a graphics engine to do most of the heavy lifting. I'm currently playing with Ogre3D, and from what I've seen so far I can recommend it: http://www.ogre3d.org/. It has skyboxes (and much more) out of the box, and it should be pretty straightforward.
I think you can do this; here are some links to help get you started:
http://sidvind.com/wiki/Skybox_tutorial
Common problems (I would post direct links, but Stack Overflow won't let me):
look at Stack Overflow questions no. 2859722 and 2297564.
Some programs and tips to help make the textures:
Spacescape
There are some great OpenGL tutorials here:
nehe.gamedev.net
They are not iPhone-specific, but they explain OpenGL pretty well. I think some folks have ported these to the phone as well; I just can't find them now.
I'm working on an app that basically revolves around 2D shapes (mostly simple polygons) being dynamically drawn and animated.
I'm looking for a way to easily time my animations. It's basically just moving a vertex to a specified point in a specified time, so just interpolating floats, with all the usual easing parameters. I come from a Flash/ActionScript 3 environment, so if you're familiar with that, think Tween Classes.
I probably could do this easily with Core Animation (CABasicAnimation etc.), but I will have up to a hundred gradient-filled shapes with varying opacity being animated dynamically,
and I need good performance (60 fps would be great). So I went for OpenGL ES. Plus, I'm totally for investing time into learning something that I'll be able to reuse cross-platform.
So I know OpenGL is only for graphics rendering, and I'm not going to find any 2D animation methods built in. And I've heard using Core Animation with OpenGL (if feasible) is not a good idea performance-wise.
But before I look deeper into interpolation algorithms to increment my vertex coordinates every frame, I just wanted to make sure I wasn't totally missing out on something much easier!?
Thanks!
I would look into the popular cocos2d library. It looks really nice; supports animation and uses OpenGL ES behind the scenes.
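If you do end up rolling it yourself rather than using cocos2d, the core of a tween is tiny; a minimal C sketch of a quadratic ease-in-out (the function names are mine):

    // t is normalised time in [0, 1]
    float easeInOutQuad(float t) {
        return t < 0.5f ? 2.0f * t * t
                        : 1.0f - 2.0f * (1.0f - t) * (1.0f - t);
    }

    // call once per frame with the elapsed time since the tween started
    float tween(float from, float to, float elapsed, float duration) {
        float t = elapsed / duration;
        if (t > 1.0f) t = 1.0f;                 // clamp at the end of the tween
        return from + (to - from) * easeInOutQuad(t);
    }

Other easing curves are just different shaping functions substituted for easeInOutQuad.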