OpenGL ES based graphing library for iPhone

I'm trying to graph large amounts of data, say around a million data points, in a line graph. I've tried CorePlot, but it is somewhat lacking in speed: it couldn't even graph 30,000 points at any usable rendering speed (less than 1 fps). s7graphview is similar to CorePlot but with fewer features. I put together a simple OpenGL ES project and graphed 1,000,000 points, and it rendered at around 10 fps, which is a very usable speed for manipulating a graph. My question is this: are there any purely OpenGL ES based graphing libraries for the iPhone? If not, are there any open source OpenGL based ones I could potentially port to the iPhone? I'd rather not resort to writing my own graphing library unless absolutely necessary.
Edit:
OK, since there aren't any takers, would anyone be willing to work with me to make an open source OpenGL ES graphing library?
Update:
I've finished my OpenGL graphing library. It can graph 1,400,000 points at 10 fps with multiple lines, multiple scales attached to those lines, and dynamic resizing, and it is a self-contained control that can be dropped onto any window/view. Much better than CorePlot's 10,000 points at 10 fps.
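For anyone who wants to try the same route, the core idea is nothing exotic: upload the samples into a vertex buffer object once and redraw them as a single line strip every frame, so panning and zooming only touch the projection matrix. Below is a minimal sketch of that approach, assuming a valid OpenGL ES 1.1 context; the function and struct names are illustrative, not the actual API of my library.

    // Minimal OpenGL ES 1.1 sketch: upload the samples once into a VBO and
    // redraw them each frame as a single line strip. Assumes a current GL ES
    // 1.1 context (e.g. inside an EAGLView's draw method); names here are
    // illustrative only.
    #include <OpenGLES/ES1/gl.h>
    #include <vector>

    struct GraphVBO {
        GLuint  vbo;
        GLsizei count;
    };

    // Called once, or whenever the data set changes.
    GraphVBO uploadSamples(const std::vector<GLfloat>& xy)  // interleaved x0,y0,x1,y1,...
    {
        GraphVBO g;
        g.count = (GLsizei)(xy.size() / 2);
        glGenBuffers(1, &g.vbo);
        glBindBuffer(GL_ARRAY_BUFFER, g.vbo);
        glBufferData(GL_ARRAY_BUFFER, xy.size() * sizeof(GLfloat),
                     xy.data(), GL_STATIC_DRAW);            // data lives on the GPU
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        return g;
    }

    // Called every frame; pan/zoom is just a change to the projection matrix,
    // so the vertex data never has to be touched again.
    void drawGraph(const GraphVBO& g, float xMin, float xMax, float yMin, float yMax)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrthof(xMin, xMax, yMin, yMax, -1.0f, 1.0f);      // visible window of the data

        glBindBuffer(GL_ARRAY_BUFFER, g.vbo);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, 0);                 // 2 floats per vertex, tightly packed
        glDrawArrays(GL_LINE_STRIP, 0, g.count);
        glDisableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }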

Related

Is a non-square tileable atlas effective?

I have a building model with 6K faces and I want to texture it with some pretty high-detail 512x512 tileable textures (each representing about 32 cm x 32 cm). I'd like to be as mobile-friendly as possible, not necessarily for old phones but for GearVR-capable phones.
The model happens to have mostly long horizontal quads, e.g.
|-----------|---|----------------|
|-----------|---|----------------|
|-----------| |----------------|
|-----------| |----------------|
|-----------|---|----------------|
|-----------|---|----------------|
So the UVs of each of those horizontal sections can be stacked on one tileable texture to achieve both horizontal and vertical tiling.
Further, if the tiles were 512x512 textures, I could stack 8 of them in a 512x4096 non-square (but power of two) texture.
That way I could texture the main mesh with a single texture, plus one extra for the metallic map.
Is this reasonable, or should I keep them as separate 512x512 textures? Wouldn't separate textures mean something like 8x the draw calls, which would be far worse than a non-square 512x4096 texture?
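To make the stacking concrete, the UV bookkeeping I have in mind looks roughly like the sketch below, assuming 8 tiles stacked vertically in the 512x4096 strip (my own illustration, not taken from any engine or tool). The catch it makes visible: only U can still wrap via GL_REPEAT; vertical repetition inside one slot has to be baked into the geometry.

    // Sketch of the UV remapping implied by stacking 8 tileable 512x512 tiles
    // into one 512x4096 strip (tile 0 at V = 0). Horizontal tiling still comes
    // from GL_REPEAT on U; V can no longer wrap per tile.
    #include <utility>

    constexpr int   kTileCount  = 8;
    constexpr float kSlotHeight = 1.0f / kTileCount;   // each tile occupies 1/8 of V

    // u may be any value >= 0 (wraps across the width); v must stay in [0, 1]
    // within a single quad so it does not bleed into the neighbouring tile.
    std::pair<float, float> atlasUV(int tileIndex, float u, float v)
    {
        float atlasU = u;                               // unchanged, wraps via GL_REPEAT
        float atlasV = (tileIndex + v) * kSlotHeight;   // squeeze v into the tile's slot
        return { atlasU, atlasV };
    }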
After some research, I found that the technique of stacking textures and tiling horizontally is called using a trim sheet. It is very much valid, and is used extensively in game development to re-use high-detail textures on many different objects.
https://www.youtube.com/watch?v=IziIY674NAw
The trim sheet info I found, though, did not cover 'non-square', which is the main question. But from several sources I found that some devices do not support non-square textures, some do, and some do but don't compress non-square textures well, so it's a 'check your target devices' issue.
Assuming a device does support non-square, it should in fact save memory to have a strip of textures, and should save draw calls, but your engine may just 'repeat them horizontally until square' for you when importing the texture to 'be safe' (so again, check target devices and engines). It would perhaps be wise to limit to 4 rather than 16 stacked textures, to avoid 'worst case scenarios'.
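As a concrete form of 'check your target devices', a quick runtime query along these lines tells you whether a given GPU will even accept the strip. This is only a sketch, assuming a current OpenGL ES 2.0 context; the header path depends on your platform.

    // Runtime check before committing to a 512x4096 strip, using standard
    // OpenGL ES 2.0 queries. Assumes a current GL context.
    #include <GLES2/gl2.h>   // platform-dependent; e.g. <OpenGLES/ES2/gl.h> on iOS
    #include <cstring>
    #include <cstdio>

    bool supportsTallAtlas(int width, int height)
    {
        GLint maxSize = 0;
        glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);   // largest dimension the GPU accepts
        if (width > maxSize || height > maxSize)
            return false;

        // 512x4096 is power-of-two on both axes, so NPOT support is not needed,
        // but if you ever go non-power-of-two, check the extension string too.
        const char* ext = (const char*)glGetString(GL_EXTENSIONS);
        bool npot = ext && std::strstr(ext, "GL_OES_texture_npot") != nullptr;
        std::printf("max texture size %d, NPOT %s\n", maxSize, npot ? "yes" : "no");
        return true;
    }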
Hopefully the issue will be addressed either by video cards becoming able to do several materials in one draw call, or by more universally good handling of texture strips, but it seems the state of the art has not focused on that yet.
Another solution is more custom: some people have created custom shaders that use vertex color information on a mesh to choose which part of a texture to use, and then tile from there. Apparently the overhead turned out to be quite low, and it was a success, so it's good to have an idea about 'backup plans'. This, however, would be an engine/environment/device-specific kind of optimization, not a general modeling practice.
http://www.gamasutra.com/blogs/MarkHogan/20140721/221458/Unity_Optimizing_For_Mobile_Using_SubTile_Meshes.php
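For reference, the vertex-colour trick amounts to something like the fragment shader below. This is a rough GLSL ES 1.00 sketch stored as a C++ string, written only to illustrate the idea, not the shader from the linked article: the red channel of the vertex colour selects one of the stacked sub-tiles, and fract() wraps the UVs inside that slot.

    // Rough sketch of "pick a sub-tile via vertex colour" (GLSL ES 1.00
    // fragment shader as a C++ string). Illustration only.
    static const char* kSubTileFragmentShader = R"(
        precision mediump float;

        uniform sampler2D uAtlas;      // e.g. the 512x4096 strip, 8 tiles stacked in V
        const float kTileCount = 8.0;

        varying vec2 vUV;              // mesh UVs, free to run past 1.0 for tiling
        varying vec4 vColor;           // vertex colour; red channel encodes the tile index

        void main()
        {
            float tile = floor(vColor.r * (kTileCount - 1.0) + 0.5); // 0..7 from red channel
            vec2 tiled = fract(vUV);                                 // repeat inside the slot
            float v    = (tile + tiled.y) / kTileCount;              // squeeze V into the slot
            gl_FragColor = texture2D(uAtlas, vec2(tiled.x, v));
        }
    )";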

Does glClear have a high impact on fill rate?

I am using multisampling in an iPhone OpenGL ES project. Just using the glClear command causes the Renderer Utilization to go up to almost 42%. Is this the way it's supposed to be, or am I doing something wrong? I am using a 4th-generation iPod Touch for testing.
Do you mean the renderer utilisation goes to ~42% in a render that consists of your scene + the clear compared to a render that just consists of your scene?
glClear() is a very efficient operation on PowerVR GPUs, as the clear is done on a tile as it's processed by the GPU. The only case I can think of where an overhead would be introduced by a clear is if your tests only consist of a clear and a swap each frame compared to just a swap (where the GPU work would be optimised out).
Apple's online documentation gives some insight into how glClear is interpreted by their graphics driver.
If you want to know more about the PowerVR architecture and how operations like glClear() are processed, I would also recommend reading "SGX Architecture Guide for Developers" & "PowerVR Performance Recommendations" on Imagination's documentation page.
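The practical per-frame pattern on these GPUs looks something like the sketch below (my own illustration, assuming a current OpenGL ES 2.0 context on iOS; glDiscardFramebufferEXT is the EXT_discard_framebuffer extension Apple exposes): clear everything at the start of the frame so no previous contents need to be loaded into tile memory, and discard what you don't need before presenting.

    // Per-frame pattern for a tile-based deferred renderer (PowerVR). Sketch
    // only; assumes a current GL ES 2.0 context on iOS.
    #include <OpenGLES/ES2/gl.h>
    #include <OpenGLES/ES2/glext.h>

    void renderFrame()
    {
        // A full-buffer clear is essentially free on PowerVR: it marks the
        // tiles as "no load needed" instead of touching memory.
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // ... draw the scene here ...
        // (with Apple's multisample extension you would resolve the MSAA
        // buffer at this point before discarding)

        // Tell the driver the depth buffer can be thrown away instead of
        // being resolved out to memory (EXT_discard_framebuffer).
        const GLenum discards[] = { GL_DEPTH_ATTACHMENT };
        glDiscardFramebufferEXT(GL_FRAMEBUFFER, 1, discards);

        // [EAGLContext presentRenderbuffer:] would follow in the Objective-C layer.
    }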

Taking a depth image from an iPhone (or consumer camera)

I have read that it's possible to create a depth image from a stereo camera setup, where two cameras with identical focal length, aperture, and other camera settings photograph an object from slightly different angles.
Would it be possible to take two snapshots almost immediately after each other (on the iPhone, for example) and use the differences between the two pictures to develop a depth image?
Small amounts of hand movement and shaking will obviously rock the camera, creating some angular displacement, and perhaps that displacement can be calculated by looking at the general angle of displacement of features detected in both photographs.
Another way to look at this problem is as structure-from-motion, a nice review of which can be found here.
Generally speaking, resolving spatial correspondence can also be factored as a temporal correspondence problem. If the scene doesn't change, then taking two images simultaneously from different viewpoints - as in stereo - is effectively the same as taking two images using the same camera but moved over time between the viewpoints.
I recently came upon a nice toy example of this in practice - implemented using OpenCV. The article includes some links to other, more robust, implementations.
For a deeper understanding I would recommend you get hold of an actual copy of Hartley and Zisserman's "Multiple View Geometry in Computer Vision" book.
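To make the correspondence step concrete, here is a minimal disparity-map sketch using OpenCV's block matcher. It assumes OpenCV 3 or later and a pair of roughly rectified grayscale images; the file names and matcher parameters are placeholders, not tuned values.

    // Minimal disparity-map sketch with OpenCV's block matcher.
    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
        cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
        if (left.empty() || right.empty())
            return 1;

        // 64 disparity levels, 15x15 matching block: coarse but cheap.
        cv::Ptr<cv::StereoBM> matcher = cv::StereoBM::create(64, 15);

        cv::Mat disparity16, disparity8;
        matcher->compute(left, right, disparity16);                    // fixed-point disparities
        disparity16.convertTo(disparity8, CV_8U, 255.0 / (64 * 16.0)); // scale for display

        cv::imwrite("disparity.png", disparity8);                      // brighter = closer
        return 0;
    }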
You could probably come up with a depth map from a "cha-cha" stereo pair (as it's known in 3D photography circles), but it would be very crude at best.
Matching up the images is EXTREMELY CPU-intensive.
An iPhone is not a great device for doing the number-crunching. Its CPU isn't that fast, and its memory bandwidth isn't great either.
Once Apple lets us use OpenCL on iOS you could write OpenCL code, which would help some.

Draw an ECG graph dynamically on the iPhone?

Hi all,
Is there any way to draw a 2D graph of an ECG (in waveform) on the iPhone? I am getting a large amount of data every couple of seconds; the gap is only about 1 second. Is there any way to draw the graph on the iPhone, or any library (in C or C++) I can use to draw a live graph of a heartbeat analysis that looks like a live video?
Thanks
Balraj.
Apple's Accelerometer example app (available on their iOS developer site, and with full source code) shows how to draw a 2D graph animated several times a second.
In addition to the Accelerometer sample that hotpaw2 points to, which is reasonably performant when it comes to graphing realtime data, there is the Core Plot graphing framework. Core Plot can give you a drop-in component with more options when it comes to formatting your graphs, but it may not be the fastest for displaying many data points that change rapidly on the iPhone.
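Whichever rendering route you pick, the data side of a scrolling ECG trace is usually just a fixed-size ring buffer: append incoming samples as they arrive and redraw from it on a display timer. A small sketch of that idea follows (names are illustrative, independent of any particular drawing API).

    // Sketch of the data side of a scrolling ECG trace: a fixed-size ring
    // buffer that incoming samples are appended to, read back in order by the
    // draw code each frame. Rendering (Quartz, OpenGL ES, Core Plot) is separate.
    #include <vector>
    #include <cstddef>

    class ScrollingTrace {
    public:
        explicit ScrollingTrace(std::size_t capacity)
            : samples_(capacity, 0.0f) {}

        // Called whenever a new ECG sample arrives (e.g. a few hundred times/sec).
        void push(float value) {
            samples_[head_] = value;
            head_ = (head_ + 1) % samples_.size();
            if (count_ < samples_.size()) ++count_;
        }

        // Copies samples oldest-to-newest so the draw code can walk them as a
        // simple polyline. Call from the display timer (20-30 Hz is plenty).
        std::vector<float> snapshot() const {
            std::vector<float> out;
            out.reserve(count_);
            std::size_t start = (head_ + samples_.size() - count_) % samples_.size();
            for (std::size_t i = 0; i < count_; ++i)
                out.push_back(samples_[(start + i) % samples_.size()]);
            return out;
        }

    private:
        std::vector<float> samples_;
        std::size_t head_ = 0;   // next write position
        std::size_t count_ = 0;  // how many valid samples we hold
    };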

How should I organize OpenGL ES 1.x 2D layer tree?

I'm developing a cute puzzle app - http://gotoandplay.freeblog.hu/categories/compactTangram/ - and for performance reasons I decided to render the view with OpenGL. I've started learning it, and I'm OK with buffers, vertices, and textures in a really basic way.
The situation:
In the game the user manipulates 7 puzzle pieces, each of which has 5 sublayers to get a pretty lighting feel. Most of the textures are 256x256. The user manipulates only one piece at a time, so the rest are unchanged during play. A skeleton of the app without any graphics is here: http://gotoandplay.freeblog.hu/archives/2009/11/11/compactTangram_v10_-_puzzle_completement_test/
The question:
How should I organize them? Is it a good idea to "predraw" the actual piece states in separate framebuffers/textures, or can I simply redraw every piece/layer (1 + 7*5 = 36 sprites) each timestep? If I use "predraw", what should I do? Draw to a puzzlePiece framebuffer? Then how can I draw it into the scene framebuffer? Or is there a simpler way to "merge" textures?
Hope you can understand my question; if it seems too dim, please take a look at my idea of how to render an actual piece on my blog (there is a simple Flash implementation of what I'm going to do) here: http://gotoandplay.freeblog.hu/archives/2010/01/07/compactTangram_072_-_tan_rendering_labs/
A common way of handling textures is to pack all your images into a 'texture atlas' at the start of the game/level.
Your maximum texture size is 1024x1024 and you can have about three of them in memory on the iPhone.
When you have all the images in these 'super textures' you can just draw the relevant area of the large texture. This has the advantage that you have to bind textures less often and you gain better performance, as well as cutting out any excess space used by the necessity to put small images in power-of-two size textures.
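In practice that just means converting each sprite's pixel rectangle in the atlas into texture coordinates for its quad. Here is a small sketch, assuming a 1024x1024 atlas; the struct and names are illustrative.

    // Sketch of pulling one sprite out of a texture atlas: convert the
    // sprite's pixel rectangle into the UV coordinates of a textured quad.
    struct UVRect {
        float u0, v0;   // bottom-left of the sprite in the atlas
        float u1, v1;   // top-right of the sprite in the atlas
    };

    UVRect atlasRect(int x, int y, int width, int height, int atlasSize = 1024)
    {
        const float s = 1.0f / atlasSize;
        UVRect r;
        r.u0 = x * s;
        r.v0 = y * s;
        r.u1 = (x + width)  * s;
        r.v1 = (y + height) * s;
        return r;   // feed these into the quad's texture coordinates
    }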