I need to develop a very simple iPad application that takes an RSS feed of images that will be updated constantly, displays them, and lets you slide through them. As simple as that.
Is there a way to get basic help with doing this? I am very new to iPhone/iPad development and would like some help.
To make the question clearer: I would appreciate code samples (other than the ones on Apple's developer site), tutorials, and guidelines.
Thank you :)
First things first, you need to go through some Cocoa Touch tutorials and learn how all the pieces fit together. ;-)
Then check out some NSURLConnection samples on how to pull data down from the network.
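If it helps, here is a minimal sketch of pulling feed data down, using URLSession rather than the older NSURLConnection; the feed URL is just a placeholder:

```swift
import Foundation

// Minimal sketch: fetch RSS data from a hypothetical feed URL.
// URLSession is used here as the modern replacement for NSURLConnection.
let feedURL = URL(string: "https://example.com/feed.xml")!  // placeholder URL

let task = URLSession.shared.dataTask(with: feedURL) { data, response, error in
    if let error = error {
        print("Download failed: \(error)")
        return
    }
    guard let data = data else { return }
    // Hand the raw XML to a parser (e.g. XMLParser) to pull out the image URLs.
    let parser = XMLParser(data: data)
    // parser.delegate = yourFeedParserDelegate  // your delegate extracts the image links
    parser.parse()
    print("Fetched \(data.count) bytes of feed XML")
}
task.resume()
```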
I am trying to develop a Unity project with Leap Motion and iPhone, and I have currently run into a problem: how to collect Leap Motion hand information, such as the location of the hands. There is a video on YouTube showing that it is possible to use Leap Motion with an iPhone.
Can anyone help with that?
The current solution I am trying to implement is to use the C++ API to collect all the hand information and send it to the iPhone over Bluetooth. It is also possible to communicate with the iPhone over Wi-Fi. Hope that helps.
In the end, I used C# socket communication to implement it, and it worked. I have posted the solution on my blog; I hope it can help.
http://hanslen.me/blog/index.php/2017/01/06/leap-motion-brings-gesture-control-to-iphone/
If you have any other questions, leave a comment on my blog.
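The desktop side in that post is C#, but as a rough sketch of what the receiving iPhone side could look like in Swift with the Network framework (the host, port, and message format here are assumptions, not the blog's actual protocol):

```swift
import Network

// Sketch of the iPhone side: receive Leap Motion hand data sent by the desktop
// over a plain TCP socket. Host, port, and the text format are assumptions.
let connection = NWConnection(host: "192.168.0.10", port: 9000, using: .tcp)

func receiveNext() {
    connection.receive(minimumIncompleteLength: 1, maximumLength: 1024) { data, _, isComplete, error in
        if let data = data, let text = String(data: data, encoding: .utf8) {
            // e.g. "palm,x,y,z" sent by the desktop app (assumed format)
            print("Hand data: \(text)")
        }
        if error == nil && !isComplete {
            receiveNext()  // keep listening for the next packet
        }
    }
}

connection.stateUpdateHandler = { state in
    if case .ready = state { receiveNext() }
}
connection.start(queue: .main)
```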
I am trying to learn how to build a talking puppet iPhone application. A great example is "Talking Ben the Dog", and here is the YouTube video. I have no idea how I am going to build such an application. I have a graphics designer who will do their part. As the programmer, what would I need to be aware of? If someone could share their ideas or point me to some relevant documentation or sample code, that would be a great help.
Thanks.
First, you'll need to create the content. That means the animation scenes and any associated audio. Next, you'll want to trigger those scenes based upon the user's input.
If you want more advanced functionality like "talk back", where the app repeats what you say, then you'll need to get to grips with the AudioQueue and AudioUnit APIs. That means detecting the level of incoming audio and then writing the incoming audio into stored buffers so it can be played back. These APIs are difficult, so this will be the most technically challenging part. You'll need to be comfortable with pointers and other lower-level programming concepts.
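AudioQueue and AudioUnit are hard to show in a few lines, but as a rough sketch of the level-detection idea, here is the same concept with AVAudioEngine's input tap, a higher-level route than those C APIs (the 0.1 threshold is an arbitrary assumption, and a real app also needs an AVAudioSession set up and microphone permission):

```swift
import AVFoundation

// Rough sketch: measure the level of incoming audio with an input tap.
// AVAudioEngine is used here instead of the lower-level AudioQueue/AudioUnit APIs.
let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    guard let samples = buffer.floatChannelData?[0] else { return }
    let frameCount = Int(buffer.frameLength)
    // Simple RMS level of the buffer.
    var sum: Float = 0
    for i in 0..<frameCount { sum += samples[i] * samples[i] }
    let rms = (sum / Float(max(frameCount, 1))).squareRoot()
    if rms > 0.1 {  // assumed threshold for "the user is talking"
        // Start copying audio into your own buffers here for later playback.
        print("Speech detected, level \(rms)")
    }
}

try? engine.start()
```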
For an app without talk back, a lot of work will be required to create the content. Then you'll need to re-create the animations using UIImage and the Core Animation framework in your app.
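One simple way to re-create a designer's frame sequence with UIImage is UIImageView's built-in frame animation, which sits on top of Core Animation; a minimal sketch, where the image names are hypothetical placeholders:

```swift
import UIKit

// Sketch: play a pre-rendered animation scene as a sequence of frames.
// The image names ("ben_drink_0"...) are placeholders for the designer's frames.
let puppetView = UIImageView(frame: CGRect(x: 0, y: 0, width: 320, height: 480))
puppetView.animationImages = (0..<24).compactMap { UIImage(named: "ben_drink_\($0)") }
puppetView.animationDuration = 1.0   // seconds for the whole scene
puppetView.animationRepeatCount = 1  // play the scene once

func playDrinkScene() {
    puppetView.startAnimating()
    // Trigger the matching sound with AVAudioPlayer here if the scene has audio.
}
```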
There are a lot of great videos on the Apple site and sample code. This will be a brilliant learning curve for you to get up to speed with Core Animation.
Just make a couple of videos, one for every scene, and play them according to the button clicked!
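A rough sketch of that approach, using AVPlayerViewController to play a bundled video when a button is tapped (the class and file names are hypothetical placeholders):

```swift
import UIKit
import AVKit
import AVFoundation

final class PuppetViewController: UIViewController {
    // Sketch: one video per scene, played when its button is tapped.
    // "ben_drink.mp4" is a placeholder for a video file bundled with the app.
    @objc func drinkButtonTapped() {
        guard let url = Bundle.main.url(forResource: "ben_drink", withExtension: "mp4") else { return }
        let playerController = AVPlayerViewController()
        playerController.player = AVPlayer(url: url)
        present(playerController, animated: true) {
            playerController.player?.play()
        }
    }
}
```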
So I am rather experienced with OpenGL on the desktop platform and am trying to integrate it with my iOS development experience. I have created several large scale iOS applications so I have a good understanding of that process as well. I was wondering if anyone knows of any useful techniques to integrate iOS UI components with an OpenGL scene, or if that is even possible. I apologize if this is too general. I can refine it if necessary.
For example, say you have an iPad application that has a table and whatnot on the left, and you want to add a little 3D OpenGL window on the right. (Perhaps a chart or something that the user can interact with?) This would not be for a game or anything, but more for my understanding on how to smoothly integrate the different platforms. Any advice or links that the community could provide would be greatly appreciated. Thanks in advance!
GL views do not have to cover the entire screen. A great and very easy to understand example is the sound-recorder SpeakHere iPhone app within the iOS SDK.
This example uses a small GL view to display a peak level meter for the audio signal: GLLevelMeter.
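A minimal sketch of the same idea, using GLKView rather than the lower-level EAGL view setup in SpeakHere; the frame values are arbitrary, and your table or other UIKit views can sit alongside it:

```swift
import UIKit
import GLKit
import OpenGLES

final class ChartViewController: UIViewController, GLKViewDelegate {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Sketch: a small GL view occupying part of the screen, next to normal UIKit views.
        let glView = GLKView(frame: CGRect(x: 400, y: 40, width: 320, height: 240))
        glView.context = EAGLContext(api: .openGLES2)!
        glView.delegate = self
        view.addSubview(glView)
    }

    func glkView(_ view: GLKView, drawIn rect: CGRect) {
        // Your OpenGL ES drawing code for the 3D chart goes here.
        glClearColor(0.1, 0.1, 0.1, 1.0)
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
    }
}
```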
Hope this helps...
Just a quick question on the iPhone technology within this business card reader:
http://www.youtube.com/watch?v=F8z6pcxdrPo
As we can see, this video shows users taking a photo of a business card. I have an idea where I would take a photo of some text, and that photo could then be turned into text on the iPhone. How would I be able to implement this using the iOS API?
Cheers, guys.
The camera stuff is all standard-- use the UIImagePickerController for this.
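A minimal sketch of that camera step with UIImagePickerController (the class name here is just a placeholder):

```swift
import UIKit

final class CardScannerViewController: UIViewController,
    UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    // Sketch: present the standard camera UI to photograph the business card.
    func takeCardPhoto() {
        guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }
        let picker = UIImagePickerController()
        picker.sourceType = .camera
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true)
        guard let photo = info[.originalImage] as? UIImage else { return }
        // Hand `photo` to your OCR step (e.g. Tesseract) here.
        print("Captured image of size \(photo.size)")
    }
}
```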
Text recognition (OCR) is not a built-in part of the iOS API, though, so that part really isn't trivial. There are multiple open-source projects that can handle this sort of thing if you want to go after them.
Tesseract is an older but possibly viable one. Check out this post, which has info on cross-compiling it for iOS.
Other users here might have more current recommendations.
This might sound silly, but since I am new to iPhone development I wanted to ask this question... :P
Where is UIWebView best used? I mean, in which type of application?
Could I use it if I wanted to display a video in some part of the screen rather than fullscreen, which MPMoviePlayerController is really good at?
Thanks :)
The only place I have seen UIWebView really used in a way that fits is help pages. Developers make this page load an FAQ page hosted on their servers so that they can change the contents frequently without going back to Apple for review.
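A minimal sketch of that pattern, assuming a hypothetical FAQ URL (on current SDKs WKWebView replaces UIWebView, but the idea is the same):

```swift
import UIKit

final class HelpViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Sketch: show a remotely hosted FAQ page so its content can change without an app update.
        // The URL is a hypothetical placeholder.
        let webView = UIWebView(frame: view.bounds)
        webView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(webView)

        if let url = URL(string: "https://example.com/faq.html") {
            webView.loadRequest(URLRequest(url: url))
        }
    }
}
```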