I am starting a new project in iPhone SDK in order to represent and manipulate graphs (in the sense of networks, not bar/pie/* charts).
I want to be able to interactively add/move/delete nodes and arcs between them, as well as to pan/zoom the whole representation.
As far as I can see, I have two major options:
First option: A single view handles it all.
Second option: Each element (node or arc) has its own associated view.
I think that pros/cons of each option are:
First option makes the drawing easy (you just populate the data structure and draw the Quartz 2D paths). However, touch detection is problematic, since you have to decide whether a given touch point hits a node (this is easy, but you have to poll all nodes) or an edge (this is not so easy).
Second option makes it easy to implement the detection and handling of touches. However, the drawing part could be problematic, since you have to rotate and resize all your views; even worse, if you drag a node you have to tell every view holding an edge incident to that node to resize/rotate itself.
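For concreteness, the kind of single-view hit test I have in mind looks roughly like this (just a sketch; the node/edge types and the tolerance value are placeholders, not final code):

```swift
import CoreGraphics

struct Node { var center: CGPoint; var radius: CGFloat }
struct Edge { var from: CGPoint; var to: CGPoint }

enum Hit { case node(Int); case edge(Int); case none }

// Straight-line distance between two points.
private func distance(_ a: CGPoint, _ b: CGPoint) -> CGFloat {
    let dx = a.x - b.x, dy = a.y - b.y
    return (dx * dx + dy * dy).squareRoot()
}

// Distance from point p to the segment a-b, clamped to the endpoints.
private func distance(from p: CGPoint, toSegmentFrom a: CGPoint, to b: CGPoint) -> CGFloat {
    let ab = CGPoint(x: b.x - a.x, y: b.y - a.y)
    let lengthSquared = ab.x * ab.x + ab.y * ab.y
    guard lengthSquared > 0 else { return distance(p, a) }        // degenerate edge
    let t = max(0, min(1, ((p.x - a.x) * ab.x + (p.y - a.y) * ab.y) / lengthSquared))
    return distance(p, CGPoint(x: a.x + t * ab.x, y: a.y + t * ab.y))
}

// Nodes are checked first (they sit on top of edges), then edges with a tolerance.
func hitTest(_ p: CGPoint, nodes: [Node], edges: [Edge], tolerance: CGFloat = 15) -> Hit {
    if let i = nodes.firstIndex(where: { distance(p, $0.center) <= $0.radius }) {
        return .node(i)
    }
    if let i = edges.firstIndex(where: { distance(from: p, toSegmentFrom: $0.from, to: $0.to) <= tolerance }) {
        return .edge(i)
    }
    return .none
}
```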
Which of the two options would you recommend?
Thanks in advance,
Biel
I'm planning to make a domino draw game, but I'm not quite sure how to handle the tile placement so that each tile snaps against a matching value when a player sets a domino down. Could anyone give me an idea of how to achieve this?
This question is probably too broad. You really need to make an attempt at implementing a solution yourself and then come back if you run into a specific problem. However, if you really have no idea where to begin, you broadly need to do a few things:
Create GameObjects for your dominoes and attach scripts to them which define their numbers, set their corresponding textures, etc.
Create an invisible play surface which is made up of a grid representing places where tiles can be put down.
Add code to handle picking up, moving and putting down your dominoes.
In the code that handles moving and/or putting down dominoes:
check whether the grid location where the domino is going to be placed is valid (e.g. adjacent to another domino),
then check the values of the dominoes in the adjacent grid cells
For adjacent dominoes, check their orientation
depending on their orientation relative to the domino you are placing, check the value at the nearest end or at both ends of that domino
if an adjacent domino's value matches a value on the domino being moved, allow it to be placed; otherwise don't allow it to be placed
In the above example, "placing" a domino would simply mean moving it to a point on the play surface grid in either a vertical or horizontal orientation.
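If it helps, here is a rough sketch of that placement check in Swift rather than engine-specific code; the grid type, the way a domino stores its two values, and the handling of the very first tile are all assumptions you would adapt:

```swift
// Not engine-specific: just the adjacency/value check from the list above.
struct GridPoint: Hashable { var x: Int; var y: Int }

struct Domino {
    var values: (Int, Int)           // the two pips on the tile
    var origin: GridPoint            // grid cell of the first half
    var isHorizontal: Bool           // second half sits to the right (or below)

    var cells: [GridPoint] {
        [origin, GridPoint(x: origin.x + (isHorizontal ? 1 : 0),
                           y: origin.y + (isHorizontal ? 0 : 1))]
    }

    // The value exposed at a given cell of this tile.
    func value(at cell: GridPoint) -> Int? {
        guard let i = cells.firstIndex(of: cell) else { return nil }
        return i == 0 ? values.0 : values.1
    }
}

// board maps occupied grid cells to the domino occupying them.
// (The very first tile of a game would need a special case with no neighbours.)
func canPlace(_ domino: Domino, on board: [GridPoint: Domino]) -> Bool {
    // 1. The target cells must be free.
    guard domino.cells.allSatisfy({ board[$0] == nil }) else { return false }
    // 2. At least one neighbouring cell must expose a matching value.
    for cell in domino.cells {
        guard let ownValue = domino.value(at: cell) else { continue }
        let neighbours = [GridPoint(x: cell.x + 1, y: cell.y),
                          GridPoint(x: cell.x - 1, y: cell.y),
                          GridPoint(x: cell.x, y: cell.y + 1),
                          GridPoint(x: cell.x, y: cell.y - 1)]
        for n in neighbours where board[n]?.value(at: n) == ownValue {
            return true
        }
    }
    return false
}
```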
This is a very broad overview and there are plenty of gotchas that I haven't covered which may or may not give you trouble.
Edit: You could also do this without using a grid but it would be a little trickier when it comes to finding and inspecting adjacent dominoes.
I am trying to draw a box that helps someone understand the dimensions of an item, but I keep having the issue that, since I first need to detect a plane and my physical item sits on top of that plane, my box gets drawn in front of the item.
Is it possible to somehow overcome this?
@John Scalo is right: your problem is not that you have to detect a plane first, it's that your render engine doesn't know that part of your green box frame is occluded (hidden) by a real-world object.
"…to somehow overcome this"
Yes, and by doing so you might be "solving" your original problem: helping someone understand the dimensions of an item.
(Depending on your choice of render engine, e.g. SceneKit) you can add an invisible 3D object that has the same dimensions as the real-world object; the render engine will then "know" that some parts of your box frame are behind this (for the user invisible) 3D object. Therefore, you can tell it not to draw those parts of your box frame, which will give the illusion (borrowing from Apple here) that your soda can has the box around it.
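If you're using SceneKit, a minimal sketch of such an occluder could look like this (the box dimensions are placeholders for whatever you measure; colorBufferWriteMask needs iOS 11+):

```swift
import SceneKit

// A minimal occluder: writes only to the depth buffer, so anything rendered
// behind it (the far side of the box frame) is hidden, while the occluder
// itself stays invisible. Dimensions are placeholders.
func makeOccluder(width: CGFloat, height: CGFloat, length: CGFloat) -> SCNNode {
    let geometry = SCNBox(width: width, height: height, length: length, chamferRadius: 0)
    let material = SCNMaterial()
    material.colorBufferWriteMask = []        // draw no visible pixels...
    material.writesToDepthBuffer = true       // ...but still block what's behind it
    geometry.materials = [material]
    let node = SCNNode(geometry: geometry)
    node.renderingOrder = -1                  // make sure it renders before the box frame
    return node
}

// Usage: position this node where the soda can sits and add it to the scene, e.g.
// sceneView.scene.rootNode.addChildNode(makeOccluder(width: 0.07, height: 0.12, length: 0.07))
```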
These workarounds are inaccurate, but maybe their accuracy is enough for the level of realism you are trying to achieve:
Option 1: 1. After detecting the desk surface, place a semi-transparent 3D object over the soda can and then resize it (gestures/buttons, your choice) until it roughly matches the dimensions of the soda can. 2. Confirm that you're done, then stop drawing any texture on it at all and just let it occlude the green box frame.
Option 2: Hold your device near the edges of the soda can and add "enough" ARAnchors to be able to create a "bounding shape" that (again) can be used to capture the real-world object and occlude it.
Option 3: (intense, and perhaps the least accurate) Use your finger to "brush" over the object from various angles, and on each touch perform a hit test (hopefully the top/nearest hit is a part of your soda can) and build up a "bounding shape" that way.
Option X: any combination of 1 - 2 - 3.
Good luck, there are lots of people trying to work around this device/ARKit limitation at the moment, so keep your eyes open for good ideas.
The problem you're dealing with is called occlusion, and ARKit doesn't (currently?) include occlusion support. Maybe some day soon iPhones and iPads will begin to ship with LIDAR (or similar), in which case ARKit will be able to detect objects in the scene, making occlusion much easier.
Here's what I need to do:
I will have a toolbar with multiple objects on it (for this we'll call them A,B,C,D) and I want to be able to have the user click and drag them around and be able to snap them to a grid and connect to each other.
Sounds easy, right? Well here's my problem: some objects are different sizes, so A could be a 1x1, B be 1x3, C be 3x4, etc.
So how should I do this? I was thinking about just having each element as a separate UIImageView (or UIView, haven't decided yet) that can be dragged around, then taking its location and seeing what images are next to it.
Another thing is I have to be able to export these locations to either XML or JSON (not sure yet, probably XML).
It sounds like you would need a subclass of UIView with tessellation or some sort of underlying grid coordinate system with the units being 1x1. The 'tiles' could be subclassed from UIView, having a UIImage and grid position information. If adjacent tiles are by definition connected, then you wouldn't need additional state information about connectedness. And writing this out would be as easy as writing out origins.
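As a rough illustration of that idea (the class name, the 44-point cell size and the Codable export are just assumptions), a tile subclass could look something like:

```swift
import UIKit

// Each tile knows its size in grid units and its grid origin; snapping and
// export fall out of that. The names and the 44-point cell size are illustrative.
let gridUnit: CGFloat = 44                       // points per 1x1 cell

struct GridPosition: Codable { var col: Int; var row: Int }

class TileView: UIImageView {
    var gridSize = (cols: 1, rows: 1)            // e.g. B is 1x3, C is 3x4
    var gridPosition = GridPosition(col: 0, row: 0)

    // Snap the view's origin to the nearest grid cell after a drag ends.
    func snapToGrid() {
        gridPosition = GridPosition(col: Int((frame.origin.x / gridUnit).rounded()),
                                    row: Int((frame.origin.y / gridUnit).rounded()))
        frame.origin = CGPoint(x: CGFloat(gridPosition.col) * gridUnit,
                               y: CGFloat(gridPosition.row) * gridUnit)
    }

    // A tile occupies every 1x1 cell inside its footprint, which makes
    // "what is next to me" a simple grid lookup.
    var occupiedCells: [GridPosition] {
        (0..<gridSize.cols).flatMap { c in
            (0..<gridSize.rows).map { r in GridPosition(col: gridPosition.col + c,
                                                        row: gridPosition.row + r) }
        }
    }
}

// Writing the layout out is then just encoding the grid origins, e.g. as JSON.
func exportLayout(_ tiles: [TileView]) throws -> Data {
    try JSONEncoder().encode(tiles.map { $0.gridPosition })
}
```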
I am working on something similar, but with single-sized tiles. It has been fun - especially the insertion logic: positioning a tile between two other tiles and figuring out what gets moved to make room.
I'm working on an iPhone OS app whose primary view is a 2-D OpenGL view (this is a subclass of Apple's EAGLView class, basically setting up an ortho-projected 2D environment) that the user interacts with directly.
Sometimes (not at all times) I'd like to render some controls on top of this baseline GL view-- think like a Heads-Up Display. Note that the baseline view underneath may be scrolling/animating while controls should appear to be fixed on the screen above.
I'm good with Cocoa views in general, and I'm pretty good with CoreGraphics, but I'm green with OpenGL, and the EAGLView's operations (and its relationship to CALayers) are fairly opaque to me. I'm not sure how to mix in other elements most effectively (read: best performance, least hassle, etc.). I know that in a pinch, I can create and keep around geometry for all the other controls, and render those on top of my baseline geometry every time I paint/swap, and thus just keep everything the user sees in one single view. But I'm less certain about other techniques, such as having another view on top (UIKit/CG or GL?) or somehow creating other layers in my single view, etc.
If people would be so kind to write up some brief observations if they've travelled these roads before, or at least point me to documentation or existing discussion on this issue, I'd greatly appreciate it.
Thanks.
Create your animated view as normal. Render it to a render target. What does this mean? Well, usually, when you 'draw' the polygons to the screen, you're actually drawing them to a normal surface (the primary surface) that just so happens to be the one that eventually goes to the screen. Instead of rendering to the screen surface, you can render to any old surface.
Now, your HUD. Will this be exactly the same all the time or will it change? Will only bits of it change?
If all of it changes, you'll need to keep all the HUD geometry and textures in memory, and will have to render them onto your 'scrolling' surface as normal. You can then apply this final, composite render to the screen. I wouldn't worry too much about hassle and performance here -- the HUD can hardly be as complex as the background. You'll have a few textured quads at most?
If all of the HUD is static, then you can render it to a separate surface when your app starts, then each frame render from that surface onto the animated surface you're drawing each frame. This way you can unload all the HUD geometry and textures right at the start. Of course, it might be the case that the surface takes up more memory -- it depends on what resources your app needs most.
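For that fully static case, the setup is roughly this (a sketch assuming OpenGL ES 2.0; the two closures stand in for your own HUD drawing code and for rebinding the view's main framebuffer):

```swift
import OpenGLES

// Render the static HUD once into a texture-backed framebuffer at startup,
// then draw that one textured quad over the scene every frame.
func buildHUDTexture(width: GLsizei, height: GLsizei,
                     drawHUD: () -> Void,
                     rebindMainFramebuffer: () -> Void) -> GLuint {
    var texture: GLuint = 0
    glGenTextures(1, &texture)
    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, width, height, 0,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)

    var fbo: GLuint = 0
    glGenFramebuffers(1, &fbo)
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo)
    glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                           GLenum(GL_TEXTURE_2D), texture, 0)

    glViewport(0, 0, width, height)
    glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
    drawHUD()                       // draw the HUD geometry once, into the texture

    rebindMainFramebuffer()         // switch back to the view's own framebuffer
    glDeleteFramebuffers(1, &fbo)   // the FBO isn't needed any more, only the texture
    // At this point the HUD's source geometry and textures can be released.
    return texture
}
```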
If half of your HUD changes and half doesn't, then technically you can pre-render the static parts and render the other parts as you go along, but this is more hassle than the other two options.
Your two main options depend on how dynamic the HUD is. If it moves, you will need to redraw it onto your scene every frame. It sucks, but I can hardly imagine that its geometry is complex compared to the rest of the scene. If it's static, you can pre-render it and just alpha-blend one surface onto another before sending to the screen.
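And when the HUD does have to be redrawn on top every frame, the per-frame order boils down to something like this (again just a sketch; drawScene/drawHUD are placeholders for your own drawing code):

```swift
import OpenGLES

// Per-frame order when the HUD is redrawn on top of the scene each frame.
func renderFrame(drawScene: () -> Void, drawHUD: () -> Void) {
    glClear(GLbitfield(GL_COLOR_BUFFER_BIT) | GLbitfield(GL_DEPTH_BUFFER_BIT))

    drawScene()                                   // the scrolling/animating world

    glDisable(GLenum(GL_DEPTH_TEST))              // HUD always ends up on top
    glEnable(GLenum(GL_BLEND))
    glBlendFunc(GLenum(GL_SRC_ALPHA), GLenum(GL_ONE_MINUS_SRC_ALPHA))
    drawHUD()                                     // fixed, screen-space quads
    glDisable(GLenum(GL_BLEND))
    glEnable(GLenum(GL_DEPTH_TEST))

    // ...then present the colour renderbuffer as usual.
}
```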
As I said, it all depends on what resources your app will have spare.
I have a few different OBJ files that I am able to parse and display. This code is based on Jeff LaMarche's The Start of a WaveFront OBJ File Loader Class. However, I need some means of detecting what coordinates I have selected within a displayed model. Usually there is one model displayed at a time but sometimes there will be two or more on the screen and I want to set up a NSNotificationCenter object to notify other sections of code as to which object is "selected". I have also looked at javacom's "OpenGL ES for iPhone : A Simple Tutorial" and would like to model the behavior of what I'm trying to program after his.
This is my current line of logic:
Set up a means to detect where a user has touched the screen
Have those coordinates compared with the current coordinates of an OBJ-based model
If they match, register that touch as being within the bounds of the object
The touchable set of coordinates must scale with the model. Currently the model can be scaled, so I will most likely need to follow this scaling.
Also note, I don't need to move the model around on the screen. Just detect when it's been touched whether there is one model or several being displayed.
While this is most likely quite simple, I've been stumped by this for months now. I would really appreciate any light others can shed on this topic.
Use gluUnProject on the touch coordinates to get a vector going from the screen into the world, and then intersect it with your models to see if one of them has been touched. gluUnProject isn't available on iPhone by default, but you can look up implementations of it. http://www.mesa3d.org/ has an open-source implementation.
Read about gluUnProject here: http://web.iiit.ac.in/~vkrishna/data/unproj.html
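For what it's worth, GLKit's GLKMathUnproject does the same job on iOS without needing a Mesa port. Below is a sketch of the whole picking step using one bounding sphere per model rather than exact triangle tests; the sphere centre/radius per model (with your current scale already applied) and your existing modelview/projection matrices are the inputs:

```swift
import CoreGraphics
import GLKit

// Unproject the touch at depths 0 (near) and 1 (far), build a ray, and test it
// against a bounding sphere per model. Returns the index of the model hit, if any.
func pickedModel(touch: CGPoint, viewSize: CGSize,
                 modelview: GLKMatrix4, projection: GLKMatrix4,
                 models: [(center: GLKVector3, radius: Float)]) -> Int? {
    var viewport: [Int32] = [0, 0, Int32(viewSize.width), Int32(viewSize.height)]
    var success = false
    let winY = Float(viewSize.height - touch.y)       // UIKit's y axis points down, GL's points up

    let near = GLKMathUnproject(GLKVector3Make(Float(touch.x), winY, 0),
                                modelview, projection, &viewport, &success)
    let far  = GLKMathUnproject(GLKVector3Make(Float(touch.x), winY, 1),
                                modelview, projection, &viewport, &success)
    guard success else { return nil }

    let dir = GLKVector3Normalize(GLKVector3Subtract(far, near))
    for (i, m) in models.enumerated() {
        // Closest point on the ray to the sphere centre.
        let t = GLKVector3DotProduct(GLKVector3Subtract(m.center, near), dir)
        if t < 0 { continue }                         // model is behind the camera
        let closest = GLKVector3Add(near, GLKVector3MultiplyScalar(dir, t))
        if GLKVector3Distance(closest, m.center) <= m.radius { return i }
    }
    return nil
}
```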