create video from images [duplicate] - iphone

I have come across some sample code where a set of images is added to make a QTMovie.
I am targeting the OS X platform without any QuickTime frameworks.
I have a vague idea of creating a file with a suitable extension, embedding the appropriate metadata, and finding a way to insert images and audio in the required format, so that once the file is created it can simply be played.
I am not sure which format/extension is better.
Pointers are much appreciated.

Without QuickTime (or an equivalent multimedia framework), what you describe is quite a lot of work. Ordinarily, you would use a video compression algorithm (such as H.264) to encode your images into video, and an audio compression algorithm (such as AAC) to encode your audio track. Then you would write these streams into a container file, such as an MPEG-4 file, which interleaves the streams for playback, contains metadata and indexes and so on. Then for playback, you parse the file, decode the video and audio data, and schedule them for playback, taking care to keep them in sync.
QuickTime does all this (and more) for you, and it would be an enormous undertaking to write it all yourself. Is there some reason why you are running on OS X but cannot use QuickTime?
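As an aside: on modern OS X and iOS, Apple's AVFoundation framework wraps this same encode-and-mux pipeline in AVAssetWriter. A minimal sketch, assuming the frames are already available as CVPixelBuffers (the function name, dimensions, and frame rate here are illustrative, not from the question):

```swift
import AVFoundation

// Sketch: encode a sequence of pixel buffers as H.264 video in an MP4 container.
// `frames` is a hypothetical, pre-built array of 640x480 CVPixelBuffers.
func writeMovie(frames: [CVPixelBuffer], to url: URL, fps: Int32 = 30) throws {
    let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,  // the video compression step
        AVVideoWidthKey: 640,
        AVVideoHeightKey: 480
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input, sourcePixelBufferAttributes: nil)
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    for (i, buffer) in frames.enumerated() {
        // Crude backpressure: wait until the writer can take more data.
        while !input.isReadyForMoreMediaData { usleep(10_000) }
        // Each frame is stamped at i/fps seconds.
        let pts = CMTime(value: CMTimeValue(i), timescale: fps)
        if !adaptor.append(buffer, withPresentationTime: pts) { break }
    }
    input.markAsFinished()
    writer.finishWriting { /* file is now playable */ }
}
```

An audio track would be a second AVAssetWriterInput with AAC output settings; the writer interleaves both streams into the container for you.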
Given the question is tagged with iPhone, why can't you just use QTKit?
If you had to do it from scratch, you could adopt a very simple solution: store your image sequence as a set of JPEG files (though then you would require libjpeg; use raw RGB or PPM if you must), store the audio track as raw WAV data, and keep a third file (a text file you define) holding the timing information. You would then simply stream out the audio, with the frame numbers of the images stored alongside their corresponding timecodes/sample offsets. That is a very simple solution that could be made to work without too much effort.
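To make the timing-file idea concrete, here is a rough Swift sketch of writing such a manifest. The layout (a sample-rate header, then one `frameIndex sampleOffset` pair per line) is invented for illustration, not any standard format:

```swift
import Foundation

// One entry per JPEG frame: which frame, and at which PCM sample it appears.
struct FrameEntry {
    let frameIndex: Int    // e.g. frame0001.jpg
    let sampleOffset: Int  // offset into the raw audio stream
}

func writeTimingFile(entries: [FrameEntry], sampleRate: Int, to url: URL) throws {
    var lines = ["samplerate \(sampleRate)"]
    for e in entries {
        lines.append("\(e.frameIndex) \(e.sampleOffset)")
    }
    try lines.joined(separator: "\n").write(to: url, atomically: true, encoding: .utf8)
}

// Example: 10 fps against 44.1 kHz audio -> a frame every 4410 samples.
let entries = (0..<100).map { FrameEntry(frameIndex: $0, sampleOffset: $0 * 4410) }
try writeTimingFile(entries: entries, sampleRate: 44_100,
                    to: URL(fileURLWithPath: "movie.timing"))
```

A player would then stream the WAV data and swap the displayed JPEG whenever the audio position passes the next sample offset.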
If you give us some more idea of what you are trying to achieve, we could offer some more specific suggestions.

If you want to write a program to do this, you could use Xuggler in Java to do it. It will allow you to save your final video in a format playable by almost any media player.
Start out by gaining an understanding of how video files (e.g. MP4, QuickTime) actually represent audio and video with this Overly Simplistic Guide to Internet Video.
Then, play around with the MediaTool tutorials. You can write programs that make raw images into video files (see this sample code). Finally, to write a program that makes audio and video that are in sync, see this tutorial; it generates a set of images, and makes some audio noise that is timed to change when a ball hits the edge of a box.
Hope that helps.
Art

Related

How to offline debug augmented reality in Unity?

I was wondering if there was a way to record the sensor and video data from my iPhone, save it in some way, and then feed it into Unity to test an AR app.
I'd like to see how different algorithms behave on identical input, and that's hard to do when the only way to test is to pick up my phone and wave it around.
What you can do is capture the image buffer. I've done something similar using ARCore. I'm not sure if ARKit has a similar implementation, but I found this in a brief search: https://forum.unity.com/threads/how-to-access-arframe-image-in-unity-arkit.496372/
In ARCore, you can take this image buffer and, using ImageConversion.EncodeToPNG, create PNG files with the timestamp. You can pull your sensor data in parallel and, depending on what you want, write it to a file using a similar approach: https://support.unity3d.com/hc/en-us/articles/115000341143-How-do-I-read-and-write-data-from-a-text-file-
After that, you can use FFMPEG to convert these PNGs into a video. If you want to try different algorithms, there's a good chance the PNGs alone will be enough; otherwise you can use a command like the one described here: http://freesoftwaremagazine.com/articles/assembling_video_png_stream_ffmpeg/
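For reference, a typical FFMPEG invocation for turning a numbered PNG sequence into an H.264 video looks like this (file names and frame rate are placeholders):

```
ffmpeg -framerate 30 -i frame%04d.png -c:v libx264 -pix_fmt yuv420p out.mp4
```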
You should be able to pass these images and the corresponding sensor data to your algorithm to check.

how to insert and overwrite audio file in iOS

I am developing an application which has an audio recorder. The user should be able to play an audio file and insert a recording into it, cut unwanted audio, and overwrite parts of the audio file.
Have you seen how to Insert , overwrite audio files -Audio Editing iphone? But no one answered it...
At least suggest a way to implement this...
Thanks in advance...
What type of audio file are you talking about? You will almost certainly need to convert whatever you are using into PCM WAV data for this type of manipulation. Luckily, Core Audio, which others have pointed you towards has some convenience methods for doing this.
Once you have the raw PCM data, you can insert by simply placing other PCM data at the desired point in the stream. You want to make sure you don't do something like write into the middle of a stereo packet, but beyond that, most simply-formatted PCM data is easy to manipulate. Think of it like a string: you can start with "Hello World" and change it to "Hello, Beautiful World" by simply inserting data in the middle.
Overwriting works on the same principle.
Once you are done with the edits, you'll need to transform the PCM data back into whatever format you had saved in before.
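Here is a rough Swift sketch of that insert/overwrite idea, assuming 16-bit interleaved stereo PCM held in a Data buffer (the function names are illustrative):

```swift
import Foundation

// 16-bit stereo PCM: one frame = 2 channels * 2 bytes = 4 bytes.
let bytesPerFrame = 4

// Insert `clip` into `track` at a frame boundary, so we never split
// a stereo sample pair (the "middle of a stereo packet" problem).
func insertPCM(_ clip: Data, into track: inout Data, atFrame frame: Int) {
    let offset = frame * bytesPerFrame
    precondition(offset <= track.count && clip.count % bytesPerFrame == 0)
    track.insert(contentsOf: clip, at: offset)
}

// Overwriting replaces bytes in place instead of shifting them.
func overwritePCM(_ clip: Data, into track: inout Data, atFrame frame: Int) {
    let offset = frame * bytesPerFrame
    precondition(offset + clip.count <= track.count)
    track.replaceSubrange(offset..<(offset + clip.count), with: clip)
}
```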
Have a look at Core Audio
Core Audio provides software interfaces for implementing audio features in applications you create for iOS and OS X. Under the hood, it handles all aspects of audio on each of these platforms. In iOS, Core Audio capabilities include recording, playback, sound effects, positioning, format conversion, and file stream parsing.

Which one of these is better for short audio input in iPhone- .caf or .wav?

I am making a simple application for iPhone, and I want to play a short audio file when an object is clicked. Which of .caf and .wav would be better?
I am building a simple application in Cocos2d in which balloons produce a pop sound when clicked. What are the memory implications of each format?
If you do not need specific Core Audio Format features, then WAV has more universal support (and it would be my default choice for that reason).
Core Audio Format basically functions as a container for other audio file formats, including WAV. Core Audio Format has many great features, but it's not evident from the description that you need any of these.
In response to a deleted comment, which was moved to the question:
I can't speak for Cocos2d specifically, so I will write about the file formats in general: WAV does not use data compression. CAF may. If it is a short sound file, you probably don't want data compression (because it requires a good amount of processing to convert to LPCM for playback). If you play the pop often, then you will want to hold onto an uncompressed version of the audio data for easy processing. 1 second will require 44100 * 2 bytes at CD quality in memory (per channel).
For a short sound file such as a balloon pop, a 16 bit WAV file sounds ideal. In that sense, the memory difference should not be a deciding factor. If you have a lot of audio files, or long audio files to load into memory, then the situation changes. For now, I don't consider memory to be a problem in your case. Since CAF is a container, its uncompressed representation will be nearly identical (the difference will be a little more header data in the CAF).
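To put numbers on that, a quick back-of-the-envelope calculation in Swift:

```swift
// CD-quality LPCM: 44,100 samples per second, 2 bytes (16 bits) per sample.
let sampleRate = 44_100
let bytesPerSample = 2
let channels = 1  // a mono balloon pop

// Half a second of uncompressed pop sound held in memory:
let bytes = sampleRate / 2 * bytesPerSample * channels
print(bytes)  // 44100 bytes, roughly 43 KB
```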
A CAF file is basically the Core Audio Format, so it is well suited to the Apple frameworks. The main advantage of CAF over WAV comes while recording: CAF files can grow beyond 4 GB, and you don't need to update a WAV header after each packet is recorded.
Anyway, I assume you don't need these CAF-specific features. And as Justin said, I believe WAV is the better option, since WAV enjoys broader support than CAF.

iPhone SDK Audio Mixer

What I need to do is be able to mix 4 channels of audio (not from a live source, just prerecorded audio files in the app bundle), and change their volumes individually, in real time, preferably with MP3s. What's the best/correct road for me to take, regarding all the various sound APIs for the iPhone?
Thanks!
Storm Sim does this with AVAudioPlayer, which is certainly the simplest method. You can call prepareToPlay on each of the player objects, then kick them off with play later so there won't be any delay. I also use a blank one-second audio player on eternal loop to keep deviceCurrentTime ticking (it only advances while some sort of audio is playing), so you can pass a specific device time in the future to playAtTime: to make all the samples play in sync, or offset relative to each other. The AVAudioPlayerDelegate has interrupted/resumed events and finishedPlaying, so you can get notification of what is happening.
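A minimal sketch of that pattern with AVAudioPlayer (the file names are placeholders, and the files are assumed to be in the app bundle):

```swift
import AVFoundation

// One player per prerecorded track in the bundle.
let names = ["drums", "bass", "rain", "wind"]
let players: [AVAudioPlayer] = names.map { name in
    let url = Bundle.main.url(forResource: name, withExtension: "caf")!
    let player = try! AVAudioPlayer(contentsOf: url)
    player.numberOfLoops = -1  // loop forever
    player.prepareToPlay()     // preload so playback starts without delay
    return player
}

// Schedule all four against the same device time so they start in sync.
let startTime = players[0].deviceCurrentTime + 0.1
for player in players {
    player.play(atTime: startTime)
}

// Individual volumes can be changed in real time, e.g. from four sliders.
players[2].volume = 0.25
```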
However, there is only one hardware MP3/AAC decoder, so the other three players will use up CPU (and thus battery) doing the decoding. If you want to maximize battery life, use CAF files in IMA4 at 44.1 kHz. IMA4 is about 1/4 the size of raw WAV, so it isn't as compact as MP3, but the performance is much better, especially if you are using a lot of small audio tracks. If you are using voice, you can get away with much less fidelity and shrink the files even more. afconvert in Terminal can help you get your source files into CAF format (you should use CAF files no matter what the encoding).
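The afconvert step looks like this in Terminal (file names are placeholders):

```
afconvert -f caff -d ima4 input.wav output.caf
```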
