Continuously Updating UIViews in Objective-C - iPhone

I am very new to iPhone programming with Objective-C, but I have picked up a lot over the last month.
I have an application that reads data from a .csv file, which I then use to plot a continuous graph on the iPhone. The problem is that there are close to 84,000 data points (a major requirement), and while the current design I built with Quartz 2D produces the required plots, it takes close to 3 minutes for the UIView to show the infinitely long plot I want.
The solution I am looking for is this: I intend to use a plain C function to read the file sequentially on a background thread and pass the data to the drawing function, which will then update the screen as the data arrives, so that the user can watch the data being plotted continuously. The problem I have, however, is that the CGRect drawing function and setNeedsDisplay just take all the points at once and display them on the UIView.
How do I update only specific points on the UIView as the data arrives, without clearing the whole view?

You'll want to use setNeedsDisplayInRect: as described here:
Drawing incrementally in a UIView (iPhone)
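A minimal sketch of the idea inside your plotting UIView subclass (plottedCount, pointWidth, and strokeSamplesFrom:to: are hypothetical names, not UIKit API):

// Called on the main thread whenever the reader thread has handed over
// new samples (UIKit is not thread-safe, so use something like
// performSelectorOnMainThread: to get here from the background thread).
- (void)appendPoints:(NSUInteger)newPoints
{
    CGFloat x = self.plottedCount * self.pointWidth;
    self.plottedCount += newPoints;
    // Invalidate only the vertical strip containing the new samples;
    // everything outside this rect keeps its previous contents.
    CGRect dirty = CGRectMake(x, 0.0,
                              newPoints * self.pointWidth,
                              self.bounds.size.height);
    [self setNeedsDisplayInRect:dirty];
}

- (void)drawRect:(CGRect)rect
{
    // rect is (the union of) the dirty area, so only draw the samples
    // whose x coordinates fall inside it.
    NSUInteger first = (NSUInteger)(CGRectGetMinX(rect) / self.pointWidth);
    NSUInteger last  = (NSUInteger)(CGRectGetMaxX(rect) / self.pointWidth);
    [self strokeSamplesFrom:first to:last]; // hypothetical drawing helper
}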

Have you looked at using Core Plot?
http://code.google.com/p/core-plot/
It's a little hard to figure out; search the web for example uses. But it may be able to handle what you need through selective plotting of large data sets.

Related

Draw graph using accelerometer data in core-plot

I'm new to the Core Plot framework and am trying to draw a line graph based on X and Y acceleration. I can already get my X and Y values and have successfully added Core Plot to my project. I'm quite lost on how to start, so basically: I will have X and Y values, and how do I plot them using Core Plot? Any help is appreciated. Thanks!
Maybe my answer is a little bit off topic, as it's not about Core Plot, but did you look at the "AccelerometerGraph" sample app provided with Xcode?
It provides a nice plot that is dynamically updated as new accelerometer events arrive. The great thing about this sample is the way Core Animation is used to "speed up" Quartz 2D, and you get all of this using system frameworks only, with no third-party code (apart from your adaptation of Apple's).
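The core of that sample's trick, reduced to a hedged sketch (segmentLayers and currentSegmentLayer are placeholder names, not Apple's):

- (void)addSample
{
    // Slide every already-rendered segment one point to the left.
    // Changing 'position' is a cheap Core Animation property change;
    // nothing gets redrawn.
    for (CALayer *layer in self.segmentLayers) {
        CGPoint p = layer.position;
        p.x -= 1.0;
        layer.position = p;
    }
    // Only the layer that receives the new sample is invalidated, so
    // Quartz redraws one small segment instead of the whole history.
    [self.currentSegmentLayer setNeedsDisplay];
}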
Look at the Core Plot examples to see how to set up a basic graph. You'll want to use a scatter plot. Call -reloadData on the plot whenever you have new data to display. If only some of the data points change on each frame, you can use the following methods to do a partial update:
-(void)reloadDataInIndexRange:(NSRange)indexRange;
-(void)insertDataAtIndex:(NSUInteger)index numberOfRecords:(NSUInteger)numberOfRecords;
-(void)deleteDataInIndexRange:(NSRange)indexRange;
The Real Time Plot in the Plot Gallery example app shows one way to use these methods.
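For example, appending one sample might look like this (a hedged sketch: samples is a hypothetical mutable array backing the plot's data source, and class prefixes vary between Core Plot releases):

- (void)appendSample:(NSNumber *)sample
{
    [self.samples addObject:sample];
    // Tell the plot that one record was appended so it fetches only
    // that point from the data source instead of reloading everything.
    [self.scatterPlot insertDataAtIndex:[self.samples count] - 1
                        numberOfRecords:1];
}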

Zoomable and Pannable Collection of Objects

I'm pretty new to iPhone development, so this is more of a high-level question. The simplest description of what I am looking to do is create a zoomable/pannable field on which I can place a bunch of circle objects. The number of these circles is likely to be in the hundreds, and ideally, when the user zooms in close enough, more information can be displayed. From what I've read, it seems like UIScrollView provides the simplest way of making a zoomable/pannable view, but I'm not sure it's the best way to handle a view that includes hundreds of graphic objects. I'm trying to figure out whether I should progress further down that path or look into things like CALayers, Core Graphics, etc. Any guidance or advice would be greatly appreciated. Thanks in advance,
Roman
I suggest you use UIScrollView, because it will save a lot of time handling proper zooming/scrolling. The workflow is:
1. Zoom your scroll view.
2. In the delegate callback scrollViewDidEndZooming:withView:atScale:, obtain the scale and determine the level of detail that you need.
3. Redraw the visible region (using Core Graphics) with the appropriate level of detail (number of circles, etc.), as in the sketch below.
So you should use a mix of Core Graphics and UIKit.
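A hedged sketch of steps 2 and 3 (the delegate method is from UIScrollViewDelegate; contentView and the detail-level values are hypothetical):

- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView
                       withView:(UIView *)view
                        atScale:(CGFloat)scale
{
    // Pick a level of detail from the final zoom scale.
    self.contentView.detailLevel = (scale > 2.0) ? DetailLevelFull
                                                 : DetailLevelCirclesOnly;
    // Redraw the visible region with Core Graphics at that detail level.
    [self.contentView setNeedsDisplay];
}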

What's the best way to do this (iPhone SDK UI question)

Here's what I need to do:
I will have a toolbar with multiple objects on it (for this we'll call them A, B, C, D), and I want the user to be able to drag them around, snap them to a grid, and connect them to each other.
Sounds easy, right? Well, here's my problem: the objects are different sizes, so A could be 1x1, B 1x3, C 3x4, etc.
So how should I do this? I was thinking about having each element as a separate UIImageView (or UIView, I haven't decided yet) that can be dragged around, then taking its location and seeing which images are next to it.
Another thing: I have to be able to export these locations to either XML or JSON (not sure yet, probably XML).
It sounds like you need a UIView subclass with tessellation, or some sort of underlying grid coordinate system with the unit being 1x1. The 'tiles' could be subclassed from UIView, each holding a UIImage and grid-position information. If adjacent tiles are by definition connected, then you wouldn't need additional state about connectedness, and writing this out would be as easy as writing out origins.
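A rough sketch of such a tile, assuming one grid unit maps to a fixed number of points (all names here are made up):

@interface TileView : UIView
@property (nonatomic, retain) UIImage *image;
@property (nonatomic) CGPoint gridOrigin; // column/row in 1x1 grid units
@property (nonatomic) CGSize  gridSize;   // e.g. {1, 3} for a 1x3 tile
@end

// Snap a dragged tile's origin to the nearest grid cell, given the cell
// size in points.
static CGPoint SnapToGrid(CGPoint origin, CGFloat cellSize)
{
    return CGPointMake(roundf(origin.x / cellSize) * cellSize,
                       roundf(origin.y / cellSize) * cellSize);
}

Exporting is then just a matter of serializing each tile's gridOrigin and gridSize.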
I am working on something similar, but with single-sized tiles. It has been fun - especially the insertion logic: positioning a tile between two other tiles and figuring out what gets moved to make room.

How to morph two images in iPhone programming

How do I morph two images in iPhone programming?
Your question is not iPhone-related: the kind of algorithm you are looking for is language-agnostic, since it just works with images.
That said, morphing two images is quite complex. Usually you have to:
1. Embed a grid of control points over the two images that links the features to be morphed. For example, with two faces you would use a grid that connects the eyes, the mouth, the ears, the nose, the edge of the face, and so on; these two grids tell the morpher how to "translate" a point into another one while blending the two images.
2. Do the previous step either automatically (with specific software) or by hand; the more points you place, the better your results will be.
3. Run the real morphing sequence: basically an interpolation between the two images, in which the interpolation parameter decides how similar the final result is to the first or the second image.
4. Apply some blending effect to create a believable result, again using a parametric function tied to the morphing position.
You can use a UIView animation to transition from one UIView to another. This only gives you a simple cross-dissolve rather than a true morph.
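For that cheap version, a cross-dissolve is a single UIKit call (iOS 4+; this blends pixels but does not warp geometry, so it only approximates a morph):

// Fade from one image view to the other in place.
[UIView transitionFromView:firstImageView
                    toView:secondImageView
                  duration:1.0
                   options:UIViewAnimationOptionTransitionCrossDissolve
                completion:NULL];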
You can use XMRM, which is written in C++: http://www.cg.tuwien.ac.at/~xmrm/
There is no image morphing API in the iOS SDK.
No, there isn't an API for it. You'll have to do it yourself.
...ask a short question, get a short answer...

How does one interact with OBJ-based 3D models on iPhone?

I have a few different OBJ files that I am able to parse and display. This code is based on Jeff LaMarche's The Start of a WaveFront OBJ File Loader Class. However, I need some means of detecting what coordinates I have selected within a displayed model. Usually there is one model displayed at a time but sometimes there will be two or more on the screen and I want to set up a NSNotificationCenter object to notify other sections of code as to which object is "selected". I have also looked at javacom's "OpenGL ES for iPhone : A Simple Tutorial" and would like to model the behavior of what I'm trying to program after his.
This is my current line of logic:
1. Set up a means to detect where a user has touched the screen.
2. Compare those coordinates with the current coordinates of an OBJ-based model.
3. If they match, register that touch as being within the bounds of the object.
The touchable set of coordinates must scale with the model. The model can currently be scaled, so the touch detection will most likely need to follow that scaling.
Also note, I don't need to move the model around on the screen. Just detect when it's been touched whether there is one model or several being displayed.
While this is most likely quite simple, I've been stumped by it for months now. I would really appreciate any light others can shed on this topic.
Use gluUnProject on the touch coordinates to get a vector going from the screen into the world, and then intersect it with your models to see if one of them has been touched. gluUnProject isn't available on the iPhone by default, but you can look up implementations of it; http://www.mesa3d.org/ has an open-source implementation.
Read about gluUnProject here: http://web.iiit.ac.in/~vkrishna/data/unproj.html
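A rough sketch of the picking setup, assuming you have pulled a gluUnProject implementation (e.g. Mesa's) into your project:

// touchPoint is the CGPoint from touchesEnded:withEvent:, converted to
// view coordinates. modelview, projection and viewport must hold the
// values used to draw the model (on OpenGL ES 1.1 they can be fetched
// with glGetFloatv / glGetIntegerv and converted to double for Mesa's
// gluUnProject).
GLdouble modelview[16], projection[16];
GLint viewport[4];
GLdouble nearX, nearY, nearZ, farX, farY, farZ;

// OpenGL's window origin is bottom-left, UIKit's is top-left, so flip y.
GLdouble winX = touchPoint.x;
GLdouble winY = viewport[3] - touchPoint.y;

// Unproject the touch at the near (winZ = 0) and far (winZ = 1) planes
// to get two points on a world-space ray.
gluUnProject(winX, winY, 0.0, modelview, projection, viewport,
             &nearX, &nearY, &nearZ);
gluUnProject(winX, winY, 1.0, modelview, projection, viewport,
             &farX, &farY, &farZ);

// The ray runs from the near point toward the far point; intersect it
// with each model's (scaled) bounding sphere or triangles to decide
// which model, if any, was touched.
GLdouble dirX = farX - nearX, dirY = farY - nearY, dirZ = farZ - nearZ;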