I am new to Unity 3D, and we are developing a mobile game with it. Some of our *.wav sound files are relatively large, say 25MB for a level's background music, and we are going to have different music for different levels. The size could be a problem, considering most mobile games are under 200MB.
So which formats are best for Unity 3D games? Which offers a good balance between size and sound quality? Are there any general guidelines on how to compress the music, etc.?
Thanks!
I personally use OGG which I feel is a good compromise between small file sizes and good quality.
As far as I know, Unity re-encodes your source files anyway. Therefore, the original format of your assets may not be as relevant as you might expect for the data format in the published game binaries. See also the manual on Audio.
You may influence what is actually stored and distributed by changing the Import Settings for each audio asset file.
This is an outdated question, yes, but Unity supports a wide range of audio formats, including:
.mp3
.ogg
.wav
.aiff
and more. I prefer either .mp3 or .ogg because of their small file size.
I think some devices could have issues with MP3 files because they lack a hardware MP3 decoder, so OGG files are the better option. OGG also usually compresses better.
I have come across some sample codes where set of images are added to make a QTmovie.
I am targeting this for OS X platform without any QT frameworks.
I have a vague idea of creating a file with a suitable extension, embedding the appropriate metadata in it, and finding a way to insert images and audio in the required format, so that once the file is created it can simply be played.
I am not sure of what format/extension is better.
Pointers are much appreciated.
Without QuickTime (or an equivalent multimedia framework), what you describe is quite a lot of work. Ordinarily, you would use a video compression algorithm (such as H.264) to encode your images into video, and an audio compression algorithm (such as AAC) to encode your audio track. Then you would write these streams into a container file, such as an MPEG-4 file, which interleaves the streams for playback, contains metadata and indexes and so on. Then for playback, you parse the file, decode the video and audio data, and schedule them for playback, taking care to keep them in sync.
QuickTime does all this (and more) for you, and it would be an enormous undertaking to write it all yourself. Is there some reason why you are running on OS X but cannot use QuickTime?
Given the question is tagged with iPhone, why can't you just use QTKit?
If you had to do it from scratch, you could adopt a very simple solution whereby you store your image sequence as a set of JPEG files (but then you would require libjpeg; use raw RGB or PPM if you must), the audio track as raw WAV data, and then have another file (a text file you define) that stores the timing information. You would simply stream out the audio and have the frame numbers of the images stored with their corresponding timecode/sample offset. That is a very simple solution that could be made to work without too much effort.
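As a rough sketch of that timing-file idea (the file layout, field names, and playback strategy here are assumptions for illustration, not an established format), parsing such an index in Swift might look like this:

```swift
import Foundation

// Hypothetical timing-file format, one entry per line:
//   <frameNumber> <audioSampleOffset> <jpegFileName>
struct FrameEntry {
    let frame: Int
    let sampleOffset: Int
    let imageFile: String
}

func loadTimeline(from url: URL) throws -> [FrameEntry] {
    let text = try String(contentsOf: url, encoding: .utf8)
    return text.split(separator: "\n").compactMap { line in
        let parts = line.split(separator: " ")
        guard parts.count == 3,
              let frame = Int(parts[0]),
              let offset = Int(parts[1]) else { return nil }
        return FrameEntry(frame: frame, sampleOffset: offset, imageFile: String(parts[2]))
    }
}

// During playback you would stream the WAV data continuously and,
// whenever the audio position passes the next entry's sampleOffset,
// display the corresponding JPEG.
```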
If you give us some more idea of what you are trying to achieve, we could offer some more specific suggestions.
If you want to write a program to do this, you could use Xuggler in Java to do it. It will allow you to save your final video in a format playable by almost any media player.
Start out by gaining an understanding of how video files (e.g. MP4, Quicktime) actually represent audio and video with this Overly Simplistic Guide to Internet Video.
Then, play around with the MediaTool tutorials. You can write programs that make raw images into video files (see this sample code). Finally, to write a program that makes audio and video that are in sync, see this tutorial; it generates a set of images, and makes some audio noise that is timed to change when a ball hits the edge of a box.
Hope that helps.
Art
I know this is not a specific programming question, but I hope someone can give me a suggestion. My applications (iPhone and BlackBerry applications) use a lot of audio files. I need a solution for my applications in order to save some space.
Is it right that .aac is the most suitable audio format for iPhone? Is it the smallest one? Is it also suitable for BlackBerry?
Is there any way to make the audio files smaller without losing a lot of sound quality? What about the bitrate, sampling frequency, and channels? Do they really matter?
AAC is a good format for the iPhone. iOS is optimized to play AAC.
Yes, things like bitrate, sampling frequency and number of channels are all factors in the audio file's size.
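For a rough sense of scale: uncompressed 16-bit stereo PCM at 44.1 kHz is 44,100 samples × 2 bytes × 2 channels ≈ 176 KB per second, so roughly 10 MB per minute. Halving the sample rate or dropping to mono each cut that in half, before any codec is even applied.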
What you should do is take your audio and convert it to different formats with different settings and then just play them on a real device to see if the quality is acceptable.
Sorry, there is no simple answer. Experiment.
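For example (only a sketch, assuming afconvert is available at /usr/bin/afconvert on the Mac you develop on; the file names and bitrates are placeholders), a small Swift script could batch-produce AAC versions at a few bitrates for comparison:

```swift
import Foundation

// Sketch: batch-convert a source WAV to AAC at several bitrates using
// the afconvert command-line tool that ships with macOS.
// File names and bitrates below are arbitrary examples.
let bitrates = [32_000, 64_000, 128_000]

for bitrate in bitrates {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/bin/afconvert")
    process.arguments = [
        "-f", "m4af",           // MPEG-4 audio container
        "-d", "aac",            // AAC data format
        "-b", String(bitrate),  // target bitrate in bits per second
        "music.wav",
        "music_\(bitrate / 1000)k.m4a"
    ]
    do {
        try process.run()
        process.waitUntilExit()
    } catch {
        print("afconvert failed: \(error)")
    }
}
```

Then listen to each result on a real device and pick the lowest bitrate you can live with.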
Depends on what type of audio you're encoding. For speech, AMR is supported by all major smartphones, and will generally give the smallest file sizes. Quality degradation is noticeable enough that it's not suitable for music, but it's optimized for voice recording (the voice notes app on the BlackBerry uses it as its file format), so it'll give you very nice results with spoken audio.
I was using AVAudioPlayer to play multiple audio clips back to back, but there was always a small silence between tracks. Then I came across Finch, a library that uses OpenAL to play audio. In theory this solves the silence problem, but I found that it doesn't play m4a or any other compressed formats.
Now I am looking for an uncompressed audio format with a relatively small file size (though being uncompressed, they should all be about the same size) and a method to convert to it. I am also googling afconvert in the meantime.
CAF files work great for this. I've built an application that loops audio files, and I was impressed with the relatively small file size.
Check out this question for more info on converting to CAF.
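For reference, once you have a CAF file (e.g. converted with afconvert), looping it with AVAudioPlayer is straightforward. This is just a minimal sketch; the file name is a placeholder:

```swift
import AVFoundation

// Minimal sketch: loop a CAF file bundled with the app indefinitely.
// "loop.caf" is a placeholder name.
var player: AVAudioPlayer?   // keep a strong reference so the player isn't deallocated

func startLoopingMusic() {
    guard let url = Bundle.main.url(forResource: "loop", withExtension: "caf") else { return }
    do {
        let audioPlayer = try AVAudioPlayer(contentsOf: url)
        audioPlayer.numberOfLoops = -1   // -1 means loop forever
        audioPlayer.prepareToPlay()
        audioPlayer.play()
        player = audioPlayer
    } catch {
        print("Could not create player: \(error)")
    }
}
```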
What kind of audio files are you using in your iPhone games/apps?
I have a game with 30MB of sounds in .wav format and I'm thinking of maybe converting to .mp3 to reduce the app size... Is there a major difference in performance? Any other issues?
Keep in mind that certain codecs run in hardware and others in software. Therefore, not all compression formats allow simultaneous playback of more than one sound. For example, if you already have a sound playing, a UI sound like a beep may not play if both try to use the same hardware codec. For more info, see:
http://developer.apple.com/iphone/library/documentation/iPhone/Conceptual/iPhoneOSProgrammingGuide/AudioandVideoTechnologies/AudioandVideoTechnologies.html#//apple_ref/doc/uid/TP40007072-CH19-SW6
iPhone Audio Hardware Codecs
iPhone OS applications can use a wide range of audio data formats. Starting in iPhone OS 3.0, most of these formats can use software-based encoding and decoding. You can simultaneously play multiple sounds in all formats, although for performance reasons you should consider which format is best in a given scenario. Hardware decoding generally entails less of a performance impact than software decoding.
The following iPhone OS audio formats can employ hardware decoding for playback:
AAC
ALAC (Apple Lossless)
MP3
The device can play only a single instance of one of these formats at a time through hardware. For example, if you are playing a stereo MP3 sound, a second simultaneous MP3 sound will use software decoding. Similarly, you cannot simultaneously play an AAC and an ALAC sound using hardware. If the iPod application is playing an AAC sound in the background, your application plays AAC, ALAC, and MP3 audio using software decoding.
To play multiple sounds with best performance, or to efficiently play sounds while the iPod is playing in the background, use linear PCM (uncompressed) or IMA4 (compressed) audio.
To learn how to check which hardware and software codecs are available on a device, read the discussion for the kAudioFormatProperty_HardwareCodecCapabilities constant in Audio Format Services Reference.
Both AAC and CAF formats work fine and offer decent file sizes. For certain background looping tracks I found MP3 files getting too big, but YMMV. Experimenting with a decent sound editing app is the only way to find the right balance between size and quality. I've had pretty good luck with Audacity and Amadeus Pro.
Suggest listening to the output with a pair of really good noise-isolating headphones on the device itself. Most people won't be listening to your stuff with these but as you decrease sound quality to shrink file sizes you'll start getting static and hum artifacts. It's just a matter of balancing size vs. quality and what you're willing to live with.
I use a combination of WAV files (for sound effects) and MP3 (for music), which seems to work fine. You can have trouble if you try to play multiple MP3 files at the same time - drop outs, or performance degradation, depending on your AudioSession settings.
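As a side note on the AudioSession settings mentioned above, here is a hedged Swift sketch of configuring the shared session before creating players; the category and options chosen are just one plausible setup, not a recommendation from the original answer:

```swift
import AVFoundation

// Sketch: configure the shared audio session before creating any players.
// The category/options here are an example setup, not the only valid one.
func configureAudioSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        // .playback keeps audio going with the silent switch on;
        // .mixWithOthers lets your audio mix with other apps' audio.
        try session.setCategory(.playback, options: [.mixWithOthers])
        try session.setActive(true)
    } catch {
        print("Audio session setup failed: \(error)")
    }
}
```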
If I had to compress my sound effects, I'm not sure which codec has the least decoding overhead. Something like Apple Lossless would likely work well, and would cut the size roughly in half.
I find mp3 fine, but keep in mind that decoding on the iPhone/Touch2G is only about 2.5x realtime speed.