iPhone - At user event create objects in the view

I am new to iPhone programming, so part of the problem is that I don't know what to google to find my answer. I am looking for a way to let a user draw a line on the screen. There is no guarantee that it will be straight; it can be curved or whatever. I was thinking that I could create some small square image and then, as they draw, place copies of it into an NSSet. But I am not really sure how to communicate each new object up to the view. Up to this point I've just been messing around with objects I put on the view and then assign movement to; this is my first jump into on-the-fly object creation.
It might be that I just need to jump into a particular class or even a tutorial; any guidance would be great.
Thanks!

Are you asking how to create a 'paint' type application? There's an Apple example for that:
http://developer.apple.com/iphone/library/samplecode/GLPaint/Introduction/Intro.html
This question is relevant, but might be too complex when you're just starting out:
Improving Finger Painting Performance
If you're a bit more specific about what problem your app is meant to solve, you might get more specific answers.
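If you don't need OpenGL, a plain UIView that collects touch points into a UIBezierPath and redraws itself is often enough for freehand lines. Here is a minimal sketch (class and property names are just illustrative, not from the Apple sample):

    // DrawingView.m - minimal freehand drawing view (illustrative sketch).
    #import <UIKit/UIKit.h>

    @interface DrawingView : UIView
    @property (nonatomic, strong) UIBezierPath *currentPath; // the stroke being drawn
    @end

    @implementation DrawingView

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        self.currentPath = [UIBezierPath bezierPath];
        self.currentPath.lineWidth = 4.0;
        [self.currentPath moveToPoint:[[touches anyObject] locationInView:self]];
    }

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        [self.currentPath addLineToPoint:[[touches anyObject] locationInView:self]];
        [self setNeedsDisplay]; // redraw with the new segment appended
    }

    - (void)drawRect:(CGRect)rect {
        [[UIColor blackColor] setStroke];
        [self.currentPath stroke]; // stroking a nil path is a harmless no-op
    }

    @end

To keep every stroke on screen, push the finished path into an array in touchesEnded:withEvent: and stroke each stored path in drawRect: as well.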

Related

Building system like Clash of Clans / Boom Beach

Does anyone know how to do a building system like CoC / Boom Beach? I know how to do a Fortnite-style building system, but that only involves 1x1 structures, while I need 3x2, 5x3 and many more sizes. I'm going to do it in UE4 with Blueprints. I've been looking for a long time and couldn't find an answer. Hope you'll help me!
Thanks.
As the question is rather vague and doesn't give anyone much to go on, I'll try to take a stab at it, conceptually at least.
I'd assume that you have a base building Blueprint or struct, so in there I would create a Vector2D variable or something similar to give each building a length and width.
Then, when you are trying to spawn the building, check the tiles within that length and width of the center of the screen/cursor for any existing buildings. When you do spawn the building, make sure it claims the tiles it is using so other buildings cannot overlap them later on.
So the main "meat and potatoes" of this project will be creating a grid system, plus a system that can check whether a tile is already occupied and that claims and releases tiles as needed.
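Since the rest of this page is iOS-centric, here is a rough, engine-agnostic sketch of that occupancy check written in Objective-C; in UE4 you would express the same logic in Blueprints or C++, and every name below is made up for illustration:

    // TileGrid - sketch of footprint checking and tile claiming.
    #import <Foundation/Foundation.h>

    @interface TileGrid : NSObject
    @property (nonatomic, strong) NSMutableSet *occupied; // keys like "x,y" for claimed tiles
    - (BOOL)canPlaceAtX:(NSInteger)x y:(NSInteger)y width:(NSInteger)w height:(NSInteger)h;
    - (void)claimAtX:(NSInteger)x y:(NSInteger)y width:(NSInteger)w height:(NSInteger)h;
    @end

    @implementation TileGrid

    - (instancetype)init {
        if ((self = [super init])) {
            _occupied = [NSMutableSet set];
        }
        return self;
    }

    - (NSString *)keyForX:(NSInteger)x y:(NSInteger)y {
        return [NSString stringWithFormat:@"%ld,%ld", (long)x, (long)y];
    }

    // Check every tile covered by a w x h footprint anchored at (x, y).
    - (BOOL)canPlaceAtX:(NSInteger)x y:(NSInteger)y width:(NSInteger)w height:(NSInteger)h {
        for (NSInteger i = x; i < x + w; i++) {
            for (NSInteger j = y; j < y + h; j++) {
                if ([self.occupied containsObject:[self keyForX:i y:j]]) {
                    return NO; // another building already claims this tile
                }
            }
        }
        return YES;
    }

    // Claim the footprint so later placements cannot overlap it.
    - (void)claimAtX:(NSInteger)x y:(NSInteger)y width:(NSInteger)w height:(NSInteger)h {
        for (NSInteger i = x; i < x + w; i++) {
            for (NSInteger j = y; j < y + h; j++) {
                [self.occupied addObject:[self keyForX:i y:j]];
            }
        }
    }

    @end

Releasing a demolished building is just the inverse: remove its tile keys from the set.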
If you want someone to give you a more concrete answer, you will need to show what you have done and tried in your question, especially for one as broad as this.

How to use Core Graphics Layer Drawing

Can anyone tell me how to color with a pattern (butterfly) like in the following link: a link.
In addition to reading the entirety of the CGLayer reference you posted in your original question, I strongly advise you to watch the 'Optimizing 2D Graphics and Animation Performance' session from the 2012 WWDC.
As you progress I think you'll find that it isn't particularly difficult to draw content to the screen using the likes of Quartz 2D and Core Animation, but the real challenge will be doing so in a way that achieves an acceptable level of performance.
In the session they optimise a drawing app similar to the one you want to create. The fundamental principles they used to optimise their drawing app were:
Only ever update as little of the screen as you need to
Every so often, create a flat composite image of what the user has drawn and re-use this image in subsequent drawing operations. This avoids having to redraw everything the user has drawn to the canvas individually, making the application much more performant.
In addition to this they cover a collection of tricks to squeeze out every drop of performance.
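To make the second principle concrete, here is a minimal sketch of the flattening step, assuming a canvas UIView that keeps an array of finished UIBezierPath strokes plus the stroke currently being drawn (the property names are mine, not from the session):

    // Flatten everything drawn so far into one cached UIImage, then drawRect:
    // only has to blit that image plus the in-progress stroke.
    - (void)flattenStrokes {
        UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
        [self.flattenedImage drawInRect:self.bounds];   // previous composite, if any
        [[UIColor blackColor] setStroke];
        for (UIBezierPath *path in self.finishedStrokes) {
            [path stroke];
        }
        self.flattenedImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        [self.finishedStrokes removeAllObjects];        // they now live in the composite
    }

    - (void)drawRect:(CGRect)rect {
        [self.flattenedImage drawInRect:self.bounds];   // cheap: a single image draw
        [[UIColor blackColor] setStroke];
        [self.currentStroke stroke];                    // only the live stroke is re-stroked
    }

The first principle then amounts to invalidating only the dirty area, e.g. calling setNeedsDisplayInRect: with a small rectangle around the newest line segment instead of setNeedsDisplay.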
Beyond that sketch I don't have more complete code examples for you (I usually like my answers to include some), but your question was very broad. I suggest you watch the video I have recommended, continue your research and attempt to begin implementing the application yourself. Once you run into more specific problems you can return here for answers in the event you can't find them elsewhere.
Good luck!

Drawing tiles using UIImage or use buttons?

I'm new to iOS development and want to create something with tiles similar to what you see here: http://www.youtube.com/watch?v=i7giaN5T7ww
Since I'm a beginner, I see myself placing a bunch of buttons on the screen and labeling them how I see fit, then figuring out how to move multiple "buttons" at once. I was wondering how you think the tiles are created in this program? If not done with a bunch of buttons, could someone show me code to draw a tile with dimensions of 48x48 pixels and then place a letter on the tile?
Also, if you could point me to some helpful resources that would help me develop a project like this, I would most appreciate it. I'm excited and very motivated to learn and master iOS development and, consequently, Objective-C.
My knowledge is limited to what I've learned by watching and coding along to these two YouTube playlists:
http://www.youtube.com/user/thenewboston#p/c/640F44F1C97BA581
http://www.youtube.com/user/thenewboston#p/c/53038489615793F7
They've been extremely helpful in helping me understand the basics of Objective-C and iPhone development. Unfortunately, they didn't get into drawing and manipulating objects on the screen.
Thanks in advance for your help. I've found this site and its users quite helpful already. :)
I have somewhat similar apps in the App Store; in one case I used UILabels and in another UIViews with UIImageViews as subviews. I implemented the movement of the labels with the methods touchesBegan:withEvent:, touchesMoved:withEvent: and touchesEnded:withEvent:.
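As a rough sketch of that approach (all names below are mine, not from those apps), a 48x48 letter tile can be a small UIView with a UILabel subview, and the touch methods simply move the view's center:

    // TileView.m - a 48x48 draggable letter tile (illustrative sketch).
    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    @interface TileView : UIView
    - (instancetype)initWithLetter:(NSString *)letter origin:(CGPoint)origin;
    @end

    @implementation TileView

    - (instancetype)initWithLetter:(NSString *)letter origin:(CGPoint)origin {
        if ((self = [super initWithFrame:CGRectMake(origin.x, origin.y, 48, 48)])) {
            self.backgroundColor = [UIColor brownColor];
            self.layer.cornerRadius = 6.0; // rounded tile corners

            UILabel *label = [[UILabel alloc] initWithFrame:self.bounds];
            label.text = letter;
            label.textAlignment = NSTextAlignmentCenter;
            label.font = [UIFont boldSystemFontOfSize:28];
            label.textColor = [UIColor whiteColor];
            label.backgroundColor = [UIColor clearColor];
            [self addSubview:label];
        }
        return self;
    }

    // Drag the tile by tracking the finger and moving the view's center.
    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        self.center = [[touches anyObject] locationInView:self.superview];
    }

    @end

Add one to any view with [parentView addSubview:[[TileView alloc] initWithLetter:@"A" origin:CGPointMake(20, 20)]] and it can be dragged straight away.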

iPhone/iPad architecture suggestions for game look-and-feel app

All you iOS architects out there, please help me choose an architecture/technology for the following iPhone/iPad app.
The app itself is a financial app, but we want more of a game look-and-feel, so we probably don't want to use the built-in look of the Cocoa Touch widgets. The elements on the screen will probably be some kind of blob-shaped images.
The app will essentially have five "blob"-shaped areas spread out evenly across the screen. One of the blobs will be centered and larger than the others. Within each blob there will be clickable areas which will pop up "details" and menu-action blobs. These are also graphics objects and must not take over the whole screen. The blobs should animate nicely when popping up. The graphics elements will have a couple of lines of text, which are generated, so the overlaid text itself cannot be part of the static background image.
The main user interaction will be swiping within the center blob, displaying summaries of the items that are conceptually contained within the blob's underlying data store. Now and then, the user will drag and drop an item onto one of the other blobs. While dragging, the item should be traced by a line, and when dropped on the other blob, the item should be animated to look like it's being "sucked into" the blob.
Now, what kind of technique would you suggest for this app? Is Cocoa suitable in this scenario? Should I use a game framework like Cocos2D? All kinds of suggestions including example code snippets are most welcome.
I realize that this question might not be as straightforward and to the point as questions generally are on SO, but I hope the answers will be of use to more people than just me. Thanks!
EDIT (MY SOLUTION):
I eventually ended up doing everything in UIKit, which was a lot easier than I expected.
Briefly described, I used UIButtons with the Custom style and an image background, which gave me full control over the visual appearance of the "items". I also found it very useful to manipulate the underlying CALayer of many of my other UIViews; it is often easier than drawing things from scratch with Core Graphics.
Another thing that was useful was UIGestureRecognizers. I found them useful both for handling "real" gestures like swiping and long-press, and for handling a normal tap on UIView classes that aren't subclasses of UIControl, for example UIImageView, UILabel and UIView itself. I could, for example, use a plain UIView, modify its CALayer to change its look completely and still handle taps. Using this technique, I didn't have to subclass any views at all in my app.
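As a small illustration of that CALayer-plus-gesture-recognizer combination (the view and selector names are only examples, not from my actual app):

    // In a UIViewController subclass: style a plain UIView via its CALayer and
    // make it tappable with a gesture recognizer - no UIControl subclass needed.
    #import <QuartzCore/QuartzCore.h>

    - (void)viewDidLoad {
        [super viewDidLoad];

        UIView *blob = [[UIView alloc] initWithFrame:CGRectMake(40, 40, 120, 120)];
        blob.backgroundColor = [UIColor purpleColor];
        blob.layer.cornerRadius = 60.0; // circular "blob" look
        blob.layer.borderWidth = 2.0;
        blob.layer.borderColor = [UIColor whiteColor].CGColor;

        UITapGestureRecognizer *tap =
            [[UITapGestureRecognizer alloc] initWithTarget:self
                                                    action:@selector(blobTapped:)];
        [blob addGestureRecognizer:tap];
        [self.view addSubview:blob];
    }

    - (void)blobTapped:(UITapGestureRecognizer *)recognizer {
        // React to the tap, e.g. pop up the detail blob with a UIView animation.
        NSLog(@"blob tapped");
    }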
The animations were pretty easy too, even though I had to use a non-public method for the "suck" animation, so my app would never pass App Store review. It was just a prototype anyway, so I don't care.
When this app is built for real, I will probably implement it in HTML5/JavaScript wrapped in PhoneGap. The reasons for this are mainly reuse of existing mobile web services and code reuse across platforms. It will probably also be easier to hook into the existing security solution from a web app.
Cocos2d is great if you need to move elements around really fast, as it is a layer on top of OpenGL ES. From what you have said, I think UIKit will be fine: you get nice animation support, and you can do some nice things with UIScrollViews to handle moving elements around, etc.
If you need more detailed graphics support, lots of moving elements, particle effects and so on, then by all means go for Cocos2d. But be aware that a Cocos2d application works on a scheduled-update model, i.e. you get notified every 1/60th of a second to move and draw things, whereas the normal UIKit approach is more event-driven, i.e. you tap a button and show a view.
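To illustrate the difference, in cocos2d-iphone a layer typically opts into a per-frame callback, roughly like this (a sketch assuming the classic CCLayer API of cocos2d 1.x/2.x):

    // GameLayer.m - cocos2d's scheduled-update style: move things a little every frame.
    #import "cocos2d.h"

    @interface GameLayer : CCLayer
    @property (nonatomic, strong) CCSprite *blob;
    @end

    @implementation GameLayer

    - (id)init {
        if ((self = [super init])) {
            _blob = [CCSprite spriteWithFile:@"blob.png"];
            [self addChild:_blob];
            [self scheduleUpdate]; // ask cocos2d to call update: every frame
        }
        return self;
    }

    - (void)update:(ccTime)dt {
        // Called roughly 60 times per second; advance positions by dt rather
        // than reacting to discrete events as you would in UIKit.
        CGPoint p = self.blob.position;
        self.blob.position = ccp(p.x + 30.0f * dt, p.y);
    }

    @end

With UIKit you would instead wait for a touch or gesture callback and kick off a UIView animation in response.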

How does one interact with OBJ-based 3D models on iPhone?

I have a few different OBJ files that I am able to parse and display. This code is based on Jeff LaMarche's The Start of a WaveFront OBJ File Loader Class. However, I need some means of detecting which coordinates I have selected within a displayed model. Usually there is one model displayed at a time, but sometimes there will be two or more on the screen, and I want to set up an NSNotificationCenter object to notify other sections of code as to which object is "selected". I have also looked at javacom's "OpenGL ES for iPhone : A Simple Tutorial" and would like to model the behavior of what I'm trying to program after his.
This is my current line of logic:
Set up a means to detect where a user has touched the screen
Have those coordinates compared with the current coordinates of an OBJ-based model
If they match, treat that touch as being within the bounds of the object
The touchable set of coordinates must scale with the model. Currently the model is able to scale, so I will most likely need to be able to follow this scaling.
Also note, I don't need to move the model around on the screen; I just need to detect when it's been touched, whether there is one model or several being displayed.
While this is most likely quite simple, I've been stumped by this for months now. I would really appreciate any light others can shed on this topic.
Use gluUnProject on the touch coordinates to get a vector going from the screen into the world, and then intersect it with your models to see if one of them has been touched. gluUnProject isn't by default available on iPhone, but you can look up implementations of it. http://www.mesa3d.org/ has an open source implementation.
Read about gluUnProject here: http://web.iiit.ac.in/~vkrishna/data/unproj.html
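A rough sketch of that approach, assuming you have ported an unproject routine (for example the MESA code, converted to GLfloat for OpenGL ES 1.x) and keep a bounding sphere per parsed OBJ model; every function and field name below is illustrative:

    // Build a pick ray from a touch point and test it against a model's
    // bounding sphere. Assumes a gluUnProject-style port is available.
    #import <UIKit/UIKit.h>
    #import <OpenGLES/ES1/gl.h>
    #include <math.h>

    // Your MESA-based port; adjust this declaration to match its real signature.
    extern GLint gluUnProjectf(GLfloat winX, GLfloat winY, GLfloat winZ,
                               const GLfloat *modelview, const GLfloat *projection,
                               const GLint *viewport,
                               GLfloat *objX, GLfloat *objY, GLfloat *objZ);

    // Returns YES if the ray under the touch passes within 'radius' of 'center'.
    static BOOL ModelHitByTouch(CGPoint touch, const GLfloat *modelview,
                                const GLfloat *projection, const GLint *viewport,
                                const GLfloat center[3], GLfloat radius)
    {
        // OpenGL's window origin is bottom-left, UIKit's is top-left, so flip y.
        GLfloat winX = touch.x;
        GLfloat winY = viewport[3] - touch.y;

        GLfloat nearPt[3], farPt[3];
        gluUnProjectf(winX, winY, 0.0f, modelview, projection, viewport,
                      &nearPt[0], &nearPt[1], &nearPt[2]);
        gluUnProjectf(winX, winY, 1.0f, modelview, projection, viewport,
                      &farPt[0], &farPt[1], &farPt[2]);

        // Normalised ray direction from the near plane to the far plane.
        GLfloat dir[3] = { farPt[0] - nearPt[0], farPt[1] - nearPt[1], farPt[2] - nearPt[2] };
        GLfloat len = sqrtf(dir[0]*dir[0] + dir[1]*dir[1] + dir[2]*dir[2]);
        dir[0] /= len; dir[1] /= len; dir[2] /= len;

        // Closest point on the ray to the sphere center, then distance check.
        GLfloat toC[3] = { center[0] - nearPt[0], center[1] - nearPt[1], center[2] - nearPt[2] };
        GLfloat t = toC[0]*dir[0] + toC[1]*dir[1] + toC[2]*dir[2];
        GLfloat dx = nearPt[0] + t*dir[0] - center[0];
        GLfloat dy = nearPt[1] + t*dir[1] - center[1];
        GLfloat dz = nearPt[2] + t*dir[2] - center[2];
        return (dx*dx + dy*dy + dz*dz) <= radius * radius;
    }

A bounding sphere is the cheapest test and follows your model's scaling if you multiply the radius by the same scale factor you draw with; for per-face accuracy you would intersect the same ray with each triangle of the OBJ mesh instead.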