I've designed the user interface of an iPhone app, and I want to show an online demo of it consisting, for the moment, of a series of static images representing the main steps of the app.
In your opinion, what is the best way to do this simulation?
You know, something like a series of single web pages, optimized for mobile, each containing a single image that links to the next step. But I was wondering whether a more elegant and sophisticated solution exists, with transition effects, for example, or other features.
I hope I was clear enough :)
Any help will be sincerely appreciated.
Thanks in advance for your attention.
This sounds like a good use for Briefs. This pretty much allows you to create an interface and step through it as if it were an application. I believe you'll need a developer account to run the app that reads the brief on your phone (since it couldn't be released in the App Store).
An alternative to static images would be to make a video. I use the iShowU video screen capture tool and set it to record the iPhone/iPad simulator window. I then run through the screens, type inputs, etc. In addition to recording the video, the program records my voice as I narrate the app's features.
As to transition effects, the video will capture whatever transition animations are in your program.
In the end you have a video that you could give your user, put on YouTube, or whatever.
You can do this easily and for free on AppDemoStore. You just have to upload the app screenshots and then add hotspots, which are used to navigate through the demo.
AppDemoStore also offers the sophisticated features you are asking for:
iPhone-specific transition effects such as slide up/down/left/right, fade, and flip
gesture icons for the hotspots
text boxes and callouts
multiple hotspots on a screen in order to create a simulation of the app (and not just a linear demo)
Here's a sample demo: http://www.appdemostore.com/demo?id=1699008
Moreover, the demos created on AppDemoStore run in any browser and on any mobile device, and can be embedded in your webpage or blog (just as you would embed a YouTube video). With the free account, you can create up to 10 demos with unlimited screenshots and all the features listed above.
Regards,
Daniel
I am thinking of building a camera application with the ability to do image processing (adjust contrast, apply different image filters) while you are taking a picture or after the picture has been taken.
The app will also have the ability to drag and drop icons.
At the end, you are able to export the edited images either to the camera roll or to app memory.
There are already many apps like this out there (Line Camera, etc.).
I'm just wondering what the best way to build such an app is.
Can I build the app purely with Objective-C and the iOS SDK, or do I need to build it with C++/cocos2d, etc.?
Thanks for your help!
Your question is very broad, so here is a broad answer...
Accessing the camera/photo library
First, you'll need to access the camera using UIImagePickerController, either to take a new photo or to grab one from your photo library. You can read up on how to accomplish this here: Camera Programming Topics for iOS
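A minimal sketch of that flow, assuming the code lives in a view controller that adopts UIImagePickerControllerDelegate and UINavigationControllerDelegate (everything below is standard UIKit except showCamera, which is a name I made up):

```objc
#import <UIKit/UIKit.h>

// Present the camera, falling back to the photo library when no camera
// is available (e.g. in the simulator).
- (void)showCamera
{
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.sourceType =
        [UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]
            ? UIImagePickerControllerSourceTypeCamera
            : UIImagePickerControllerSourceTypePhotoLibrary;
    picker.delegate = self;
    [self presentViewController:picker animated:YES completion:nil];
}

// Delegate callback: grab the picked image and hand it to your pipeline.
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    // ...pass `image` to your editing code here...
    [picker dismissViewControllerAnimated:YES completion:nil];
}
```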
Image Manipulation
AviarySDK has much of this already built for you. It's very easy to set up and use in your apps. You can download their sample app for free in the App Store if you want to see what it can do. Check it out here: http://aviary.com/
Alternatively, read up on Core Image if you'd like to avoid third-party libraries. See the Core Image Programming Guide for more information.
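To give a feel for the API, here is a minimal sketch of a contrast adjustment (assuming iOS 5+, where Core Image arrived on iOS; CIColorControls and its inputContrast key come from the Core Image Filter Reference):

```objc
#import <CoreImage/CoreImage.h>
#import <UIKit/UIKit.h>

// Return a copy of `source` with its contrast scaled by `contrast`
// (1.0 = unchanged).
UIImage *ApplyContrast(UIImage *source, CGFloat contrast)
{
    CIImage *input = [CIImage imageWithCGImage:source.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
    [filter setValue:input forKey:kCIInputImageKey];
    [filter setValue:@(contrast) forKey:@"inputContrast"];

    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *output = filter.outputImage;
    CGImageRef cgImage = [context createCGImage:output fromRect:output.extent];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return result;
}
```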
There is absolutely no need for cocos2d, which is a game engine.
You can accomplish everything you mentioned using only Objective-C.
If you want real-time effects, you will need to dive into OpenGL; you can use GLKit if you target iOS 5 and above.
I want to build a Facebook app featuring a personalized video which imports content assets from the user's Facebook profile and their extended social graph and integrates these assets into the timeline. I am thinking of using Flash; however, a key stipulation is that the app works on mobile, so I would need to use HTML5. My question is: can I use Flash to build the application and then compile the app as HTML5, or is there an alternative solution in the form of an HTML5 video toolkit with a programming layer that would allow me to build a web app and access the Facebook API?
I have done this a few times over the years, and yes, Flash was the easiest. However, there are a few options available to you that I know of which are purely HTML5 based. Personally, I'd stay away from Flash here, as it will just end up getting in the way:
1- The cleanest method is to use a video compositing tool on the server side which can be programmed to accept variables. Personally, I have only ever done this using ffmpeg; however, there are a couple of alternatives out there.
The basic process would be to grab the media from Facebook, then composite it at certain points on top of/below/around a base video sitting on the server, using a shell script to which you pass the media assets as variables. There are many options as to how you might want this done; it's probably best to have a look at some of these examples (a sketch of the kind of command involved follows after the links):
http://broadcasterproject.wordpress.com/2010/05/18/how-to-layerremix-videos-with-free-command-line-tools/
http://graphcomp.com/ffmpeg/
ffmpeg watermark without vhook?
Note that the last time I did this I used vhooks and custom filters; vhooks have since been deprecated in favor of the libavfilter system.
This method will mean a reasonably heavy server load if your app is popular, but it's probably the most robust across devices, etc.
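As a rough sketch of the kind of command such a shell script would build (assuming a reasonably recent ffmpeg with libavfilter; the file names, position, and timings are all placeholders):

```sh
# Overlay photo.png on base.mp4 from t=2s to t=7s, 20px in from the
# top-left corner, copying the audio track through untouched.
# The enable= option needs an ffmpeg build with timeline editing support.
ffmpeg -i base.mp4 -i photo.png \
  -filter_complex "overlay=20:20:enable='between(t,2,7)'" \
  -codec:a copy output.mp4
```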
2- Use Popcorn.js, and let the processing be done on the client side. You could hand-code it using CSS/JS/HTML, but Popcorn is pretty stable. I haven't seen how it runs on devices, but in theory it should work, since it's all standardized technologies. Basically, the process would be to use JavaScript to fire the display of images overlaid on the base video file at preset cue points; Popcorn already has all of the methods and means for you to do this (a sketch follows below).
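A minimal sketch of the cue-point idea, assuming Popcorn.js is loaded on the page (cue() is Popcorn's core method for firing a callback at a given time; the element IDs are placeholders):

```js
// "baseVideo" is the <video> element; "fbPhoto" is an absolutely
// positioned <img> layered above it, initially hidden.
var pop = Popcorn("#baseVideo");

// Show the Facebook photo at t=3s, hide it again at t=6s.
pop.cue(3, function () {
  document.getElementById("fbPhoto").style.display = "block";
});
pop.cue(6, function () {
  document.getElementById("fbPhoto").style.display = "none";
});

pop.play();
```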
Hope this helps a bit. Good luck, sounds fun.
We have built some interactive video apps, and one recent project was quite like what your question describes.
We used Adobe Flash to track the motion and published the project via CreateJS. You could have an image sequence from within CreateJS or put a video in a layer behind it; this video would then control the playhead time of the CreateJS motion-tracked sequence via jQuery.
It worked fine. Here is a link to a test setup with an image sequence.
Video Integration would be the next step.
http://www.jungeroemer.net/projekte/testpersvid/elftest01.html
(German text, sorry, but there's nothing important to read there.
Just click the images and go for it.)
You can download the sources from the link; if you need, I can also upload the Flash file to show you the motion tracking.
I am developing a cocos2d app.
It's almost complete, but now I want to record the activity of my app as a video file, including the sound produced by the app.
How can I implement this?
Can anybody suggest a way to implement this?
Thanks in advance.
The question isn't new, but since it isn't answered I thought I'd pitch in:
We provide an SDK called "Everyplay" that allows you to do exactly what you're looking for. It's free to use, and is lightweight.
We provide out-of-the-box integrations for Unity3D, cocos2d (1.x, 2.x), and cocos2d-x, and you can of course integrate it with a custom OpenGL-based game engine.
The documentation is available at https://developers.everyplay.com/doc
The documentation contains an example app key to use when developing, but you can of course sign up for your own client key at https://developers.everyplay.com/
There are many options - and the fact that your app is cocos2d doesn't matter much.
iSimulate works well. You can actually play the app on your device and record the gameplay as well as the touch events. This is important if you want to show user interaction in your app. You run the app in the simulator but you control it from your device.
If you just want to record the app interaction without caring about showing users the touch events, you can use Screenflow or Jing or some other recording software. I used to use Jing (free), but Screenflow works better for me, and it also lets you create more advanced video, like a trailer with effects. Edit: you should be able to capture touch events through the simulator with Screenflow too; you can choose whether or not to show them, and you can use different indicators for those events.
Search Google for Mac or iPhone screen-recording software. There are many options; I had the best experience with Screenflow because I wanted to make a trailer and a gameplay video.
I'm developing a similar application which lets the user record the activity inside a cocos2d-x app.
I'm using a screen-capture method and then combining the captured frames with FFmpeg. The performance wasn't too good, but it is the easiest way to achieve this (a sketch of the capture step is below).
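For reference, a rough sketch of the per-frame capture step under that approach (OpenGL ES; the function name is mine, and the captured frames would then be handed to FFmpeg for encoding):

```objc
#import <OpenGLES/ES2/gl.h>
#include <stdlib.h>

// Read the current GL framebuffer into a malloc'd RGBA buffer.
// Must run on the GL thread after the frame is drawn but before it is
// presented; the caller owns (and frees) the returned buffer.
static GLubyte *CaptureFrame(int width, int height)
{
    GLubyte *pixels = (GLubyte *)malloc((size_t)width * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return pixels;
}
```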
I am trying to learn how to build a talking-puppet iPhone application. A great example is "Talking Ben the Dog"; here is the YouTube video. I have no idea how I am going to build such an application. I have a graphics designer who will do their part. As a programmer, what would I need to be aware of? If someone could throw out their ideas or point me to some relevant documentation or sample code, it would be a great help.
Thanks.
First, you'll need to create the content. That means the animation scenes and any associated audio. Next, you'll want to trigger those scenes based upon the user's input.
If you want more advanced functionality, like "talk back" where the app repeats what you say, then you'll need to get to grips with the Audio Queue and Audio Unit APIs. That means detecting the level of the incoming audio and then triggering the writing of audio into stored buffers. These APIs are difficult, so this will be the most technically challenging part. You'll need to be comfortable with pointers and other lower-level programming concepts.
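If raw Audio Queues feel like too much to start with, a simpler sketch of the level-detection piece uses AVAudioRecorder's built-in metering (meteringEnabled, updateMeters, and averagePowerForChannel: are standard AVFoundation; the -20 dB threshold is an assumption you'd tune by ear):

```objc
#import <AVFoundation/AVFoundation.h>

// Poll the recorder's meter to decide whether the user is speaking.
// Assumes `recorder` is already recording and meteringEnabled was set
// to YES before recording started.
static BOOL UserIsSpeaking(AVAudioRecorder *recorder)
{
    [recorder updateMeters];
    // averagePowerForChannel: returns dBFS: 0 is full scale, -160 silence.
    return [recorder averagePowerForChannel:0] > -20.0f;
}
```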
For an app without talk back, a lot of work will be required to create the content. Then you'll need to re-create the animations using UIImage and the Core Animation framework in your app.
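A minimal sketch of one frame-based scene, assuming the designer delivers numbered frames (the file names and the puppetView property are placeholders; animationImages is standard UIImageView API):

```objc
#import <UIKit/UIKit.h>

// Play one "talking" scene as a short frame animation.
- (void)playTalkingScene
{
    NSMutableArray *frames = [NSMutableArray array];
    for (int i = 1; i <= 8; i++) {
        NSString *name = [NSString stringWithFormat:@"mouth%d", i];
        [frames addObject:[UIImage imageNamed:name]];
    }
    self.puppetView.animationImages = frames; // puppetView is a UIImageView
    self.puppetView.animationDuration = 0.8;  // seconds for one pass
    self.puppetView.animationRepeatCount = 1; // play once per trigger
    [self.puppetView startAnimating];
}
```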
There are a lot of great videos and sample code on the Apple developer site. This will be a brilliant way to get up to speed with Core Animation.
Just make a video for each scene and play the appropriate one when the corresponding button is tapped!
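A minimal sketch of that approach with the MediaPlayer framework ("scene1" is a placeholder for a movie file bundled with the app; presentMoviePlayerViewControllerAnimated: is a standard MediaPlayer category on UIViewController):

```objc
#import <MediaPlayer/MediaPlayer.h>

// Play the bundled movie for one scene when its button is tapped.
- (IBAction)sceneButtonTapped:(id)sender
{
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"scene1"
                                         withExtension:@"mp4"];
    MPMoviePlayerViewController *player =
        [[MPMoviePlayerViewController alloc] initWithContentURL:url];
    [self presentMoviePlayerViewControllerAnimated:player];
}
```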
Just a quick question on the iPhone technology within this business card reader:
http://www.youtube.com/watch?v=F8z6pcxdrPo
As we can see, this video shows users taking a photo of a business card. I have an idea where I would take a photo of some text, and that photo could then be turned into text on the iPhone. How would I be able to implement this using the iOS API?
Cheers guys
The camera stuff is all standard: use UIImagePickerController for this.
Text recognition (OCR) is not a built-in part of the iOS API, though, so that part really isn't trivial. There are multiple open-source projects that can handle this sort of thing if you want to go after them.
Tesseract is an older but possibly viable one. Check out this post, which has info on cross-compiling it for iOS.
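Once you have it cross-compiled, a rough sketch of calling it from an Objective-C++ file (Init, SetImage, GetUTF8Text, and End are Tesseract's C++ API; the assumption here is that the English "tessdata" folder is bundled at the root of the app bundle):

```objc
// RecognizeText.mm (compiled as Objective-C++)
#import <Foundation/Foundation.h>
#include <tesseract/baseapi.h>

// Run OCR over a raw pixel buffer and return the recognized text.
NSString *RecognizeText(const unsigned char *pixels,
                        int width, int height, int bytesPerPixel)
{
    tesseract::TessBaseAPI api;
    // First argument is the parent directory of the "tessdata" folder.
    api.Init([[[NSBundle mainBundle] bundlePath] UTF8String], "eng");
    api.SetImage(pixels, width, height,
                 bytesPerPixel, width * bytesPerPixel);
    char *text = api.GetUTF8Text();   // caller owns the returned buffer
    NSString *result = [NSString stringWithUTF8String:text];
    delete [] text;
    api.End();
    return result;
}
```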
Other users here might have more current recommendations.