I want to render a splash screen on the iPhone whilst using an OpenGL view. The iPhone screen, as we know, is 320x480, which is not a power of 2.
Before I enter the world of chopping the texture up and rendering sub-parts, or embedding the screen on another texture page, I was wondering if there was another way?
Is it possible to overlay another view that I could render to using Core Graphics functions? Or is it possible to render to an OpenGL surface using Core Graphics functions?
What would you recommend?
Cheers
Rich
It's entirely possible to write some code which creates a 512x512 texture, loads an image into it, and then renders only a portion of that texture (by mapping it onto a polygon and altering the texture-mapping UV co-ordinates).
This method is best for static images only; you couldn't really perform pixel-by-pixel real-time updates with it, as updating the texture via OpenGL ES is currently too slow.
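For reference, here's a minimal sketch of that sub-rectangle approach in plain OpenGL ES 1.1; it assumes identity projection/modelview matrices and that 'pixels' is a 512x512 RGBA buffer with the 320x480 splash image copied into one corner (the padding can be anything):

    GLuint splashTex;
    glGenTextures(1, &splashTex);
    glBindTexture(GL_TEXTURE_2D, splashTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);   // 'pixels' is your padded 512x512 buffer

    // UVs cover only the 320x480 sub-rectangle of the 512x512 texture
    // (you may need to flip the V values depending on how your image data is laid out)
    static const GLfloat texCoords[] = {
        0.0f,          0.0f,
        320.0f/512.0f, 0.0f,
        0.0f,          480.0f/512.0f,
        320.0f/512.0f, 480.0f/512.0f,
    };
    // Fullscreen quad in normalized device coordinates
    static const GLfloat vertices[] = {
        -1.0f, -1.0f,
         1.0f, -1.0f,
        -1.0f,  1.0f,
         1.0f,  1.0f,
    };

    glEnable(GL_TEXTURE_2D);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);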
I would recommend that you read Apple's Human Interface Guidelines for iPhone, especially the several parts where they warn you over and over not to make splash screens.
I want to port a game I've made which renders the screen itself at 50 fps (it doesn't use OpenGL).
What is the best way to port this to the iPhone?
I was reading about framebuffer objects (FBOs). Are they a good approach for rendering a buffer of pixels to the screen at high speed?
The fastest way to get pixels on the screen is via OpenGL.
I'd need more info about how your game currently renders to the screen, but I don't see how FBOs will help, as they're usually used for getting a copy of the render buffer, e.g. for creating a screen recording or compositing custom textures on the fly.
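If you do go the OpenGL route, the usual pattern is to keep uploading your software-rendered frame into a power-of-two texture and draw it as a textured quad each frame. A sketch, where 'gameTex' is a 512x512 texture you created once at startup and 'frameBuffer' is assumed to hold your game's 320x480 pixels in RGB565:

    // Upload this frame's pixels into the existing texture (no reallocation)
    glBindTexture(GL_TEXTURE_2D, gameTex);
    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    0, 0, 320, 480,
                    GL_RGB, GL_UNSIGNED_SHORT_5_6_5, frameBuffer);
    // ...then draw a quad whose texture coordinates cover the 320x480
    // sub-rectangle of the 512x512 texture, as in the splash-screen answer above.

How fast that is depends on the device and on the pixel format, so it's worth profiling before committing to it.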
If I ever need to create an app where I have to access the pixels directly and don't have direct access to the hardware, I use SDL, as it just requires you to create a surface, and from there you can manipulate the pixels directly. As far as I'm aware you can use SDL on the iPhone, and maybe even accelerate it using OpenGL too.
I want to show a distorted image as an error page for my application. If possible, this could be a screenshot of the home screen with some graphical distortion. Is this possible?
Thank you.
As Daniel A. White's comment mentions, this probably will cause your application to be rejected from the App Store, but it can be accomplished in many ways. I think this technique would be acceptable if your own interface appeared broken, but not acceptable if you made any iOS supplied looks appear broken.
You could just use your favorite image editor (e.g. Photoshop) to distort a screenshot, and display it by putting it in a separate UIView. The image would be static; it couldn't react to the contents of your program's interface.
If your interface is drawn with OpenGL ES 2.0, you could draw your regular interface to a texture, then use that texture as input to another GLSL program that applied the distortion.
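For example, the distortion pass's fragment shader could be as small as this sketch (GLSL ES; the uniform and varying names are my own invention, and the sine/cosine wobble is just one possible distortion):

    precision mediump float;

    varying vec2 textureCoordinate;     // from your pass-through vertex shader
    uniform sampler2D sceneTexture;     // the texture your interface was rendered into
    uniform float time;

    void main()
    {
        // Offset the lookup coordinates with a small sine/cosine wobble
        vec2 offset = 0.01 * vec2(sin(textureCoordinate.y * 40.0 + time),
                                  cos(textureCoordinate.x * 40.0 + time));
        gl_FragColor = texture2D(sceneTexture, textureCoordinate + offset);
    }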
I get how to do a single-screen OpenGL view.
But....
How about multiple OpenGL (1.1) views for a game or drawing program?
For example:
After the drawing program starts up and someone does a 3 finger touch, it would bring up a toolkit of sorts on the top third of the screen for making drawing adjustments.
Or a game that has an animated splash screen before going to the actual game, with the info button and other choices done in OpenGL as well.
The main point is that I want to do it all in OpenGL and need to know how to do multiple OpenGL views.
Thanks for any advice!!
It's not hard to implement multiple OpenGL ES rendering surfaces, you simply need to create multiple views that have CAEAGLLayers backing them.
Generally, multiple OpenGL ES layers are not recommended for performance reasons. You generally want to have one opaque fullscreen OpenGL ES layer for all of the content that needs this. Anything you can render in multiple contexts should be able to be drawn in a single one.
However, all of the things you describe can easily be done using standard UIViews and Core Animation. Overlaying UIViews on opaque OpenGL ES content only leads to a ~5% reduction in rendering framerate in my tests, which I consider perfectly acceptable. This is what I do in Molecules, and you can check out the code for that application to see how straightforward it is to layer these controls on top of OpenGL ES content.
I'd recommend going this route, because re-implementing buttons and other UI elements will take far more code in OpenGL ES than just using the native controls. You'll be sinking a lot of time into reinventing the wheel that could be used to improve other areas of your application.
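To give an idea of how little code the UIKit route takes, here's a rough sketch; 'glView' and 'showInfo:' are placeholders for whatever you have in your project:

    // Standard controls can simply be added as subviews on top of the GL view
    UIButton *infoButton = [UIButton buttonWithType:UIButtonTypeInfoLight];
    infoButton.frame = CGRectMake(280.0f, 440.0f, 30.0f, 30.0f);
    [infoButton addTarget:self
                   action:@selector(showInfo:)
         forControlEvents:UIControlEventTouchUpInside];
    [glView addSubview:infoButton];

    // A semi-transparent panel for the drawing adjustments, revealed on the 3-finger touch
    UIView *toolkitPanel = [[UIView alloc] initWithFrame:CGRectMake(0.0f, 0.0f, 320.0f, 160.0f)];
    toolkitPanel.backgroundColor = [UIColor colorWithWhite:0.0f alpha:0.6f];
    toolkitPanel.hidden = YES;
    [glView addSubview:toolkitPanel];
    [toolkitPanel release];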
I would also suggest not using multiple views. Instead, just set your viewport differently each time and draw with a different look-at position.
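Something along these lines, as a sketch (remember that GL viewport coordinates have their origin at the bottom-left corner):

    // Lower two thirds of a 320x480 screen: the main drawing area
    glViewport(0, 0, 320, 320);
    // ...set the projection/modelview (first look-at) and draw the canvas...

    // Top third: the adjustment toolkit, drawn with its own projection
    glViewport(0, 320, 320, 160);
    // ...set a second projection (different look-at) and draw the toolkit...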
I have a question about the UIImagePickerController class, in photo-camera mode.
I'm developing an OpenGL game. I need to add camera features to the game: it should open a window (say, 200x200 pixels, in the middle of the screen) that displays a real-time camera preview IN FRONT OF the GL viewport. So the camera preview must be in a 200x200 window in front of the GL viewport that displays the game.
I've some questions:
- The main problem is that I'm having difficulty opening the UIImagePickerController window in front of our GL viewport (normally the UIImagePickerController window covers the whole iPhone screen);
- What is the best way to capture an image buffer periodically to perform some operations, like face detection (we have a library that does this on a bitmap image)?
- Could Apple reject such an approach? Is it possible to do this with the camera (a camera preview window that partially overlaps an OpenGL viewport)?
- Finally, is it possible to avoid showing the camera shutter? I'd like to initialize the camera without the opening sound and the shutter animation.
This is a screenshot:
http://www.powerwolf.it/temp/UIImagePickerController.jpg
If you want to do something more custom than what Apple intended for the UIImagePickerController, you'll need to use the AV Foundation framework instead. The camera input can be ported to a layer or view. Here is an example that will get you half way there (it is intended for frame capture). You could modify it for face detection by taking sample images using a timer. As long as you use these public APIs, it'll be accepted in the app store. There are a bunch of augmented reality applications that use similar techniques.
http://developer.apple.com/library/ios/#qa/qa2010/qa1702.html
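As a rough sketch of the preview part (Objective-C, AV Foundation; error handling omitted, and 'glView' stands in for your existing OpenGL ES view): since AVCaptureVideoPreviewLayer is just a CALayer, you can give it your 200x200 frame and add it above the GL content.

    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;

    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
    [session addInput:input];

    AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
    previewLayer.frame = CGRectMake(60.0f, 140.0f, 200.0f, 200.0f);   // 200x200 window over the GL view
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [glView.layer addSublayer:previewLayer];

    [session startRunning];

For the face detection side, add an AVCaptureVideoDataOutput to the same session and grab frames in its captureOutput:didOutputSampleBuffer:fromConnection: delegate callback, which is what the QA1702 sample above demonstrates. And because you're not presenting the picker at all, there's no shutter animation involved.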
Does anyone know how I can render a string on the iPhone? It's for displaying my frames per second =p
There's no built-in way of rendering text in OpenGL, but there are two more or less common techniques: rendering the glyphs using geometry (less common) or using texture mapping (far more common). For your case, texture mapping would be very easy: set up a CGBitmapContext and render the text using Quartz, then upload the image into a previously generated texture using glTexSubImage2D.
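A rough sketch of that approach (Objective-C; the 64x64 size, the 'fpsTexture' name and the font are arbitrary choices, and the Quartz context isn't flipped here, so you may need to flip it or adjust your texture coordinates):

    - (void)updateFPSTexture:(NSString *)text
    {
        size_t width = 64, height = 64;
        void *data = calloc(width * height, 4);

        // Render the string into a bitmap context with Quartz
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(data, width, height, 8, width * 4,
                                                     colorSpace, kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);

        CGContextSetGrayFillColor(context, 1.0f, 1.0f);
        UIGraphicsPushContext(context);
        [text drawAtPoint:CGPointZero withFont:[UIFont systemFontOfSize:14.0f]];
        UIGraphicsPopContext();

        // Push the pixels into an existing 64x64 texture created earlier with glTexImage2D
        glBindTexture(GL_TEXTURE_2D, fpsTexture);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_RGBA, GL_UNSIGNED_BYTE, data);

        CGContextRelease(context);
        free(data);
    }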
On the iPhone you could also just put a UILabel on top of your OpenGL view and let UIKit do the rendering. In my application this did not hit performance at all (even though Apple claims it does).
You can use a Texture2D and its initWithString method to draw text in OpenGL. See the Crash Landing example that was included in the iPhone SDK.
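A quick sketch of what that looks like, assuming you've copied the Texture2D class into your project (the dimensions, font and blend setup here are just one reasonable choice):

    Texture2D *fpsLabel = [[Texture2D alloc] initWithString:@"60 fps"
                                                 dimensions:CGSizeMake(64, 32)
                                                  alignment:UITextAlignmentLeft
                                                   fontName:@"Helvetica"
                                                   fontSize:14.0f];
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    [fpsLabel drawAtPoint:CGPointMake(10.0f, 10.0f)];
    [fpsLabel release];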
You could also use a UILabel and have it on top of the OpenGL layer.
As said before, Texture2D is a good idea, but Crash Landing has been removed from a lot of places on Apple's site. What you could do is download cocos2d and then extract the Texture2D class provided there (it's the same class provided by Apple, but with a couple more things).
Cocos 2D for iPhone