Why doesn't Unity audio work with 4 channels?

I'm trying to figure out how to do multichannel audio in Unity, and it looks like something isn't working correctly.
Here is my setup:
Blank Unity project (Unity 2020.3.0f1)
Set Project Settings > Audio > Default Speaker Mode to "Quad"
Create a 4-channel audio clip with Audacity (1 "click" on each channel, one at a time)
Import the clip into Unity with default settings
I don't have an external sound card (yet), so I'm using VB-Cable as a virtual 4-channel output
VB-Cable provides a control panel where I can see the levels for each channel.
Playing back the source file with QuickTime, I can see the channel 1, 2, 3, and 4 levels moving as I expect.
However, playing back the imported audio clip in Unity's asset preview merges channel 1 with 3, and 2 with 4. The same thing happens when playing the audio in-game.
Another setup I've tried is the same as above, but configuring my computer to use an "aggregate audio device":
2 outputs from my built-in output
2 outputs from VB-Cable
(I also changed the VB-Cable settings to make it a 2-channel output, so that when combined with the built-in output, I get a 4-channel output device.)
With this setup, playing back the clip in QuickTime or in the Unity asset preview works as expected (I hear channels 1 and 2 through the built-in output and "see" channels 3 and 4 on the VB-Cable control panel).
When playing the clip in-game, however, only channels 1, 2, and 3 seem to work. Nothing happens on channel 4 (using a Unity audio mixer, I can see channels 1, 2, and 3 "lighting up", but 4 stays silent).
I know changing the audio configuration while Unity is open is not guaranteed to work, so for every test I do, I quit and restart Unity.
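To rule out a driver mismatch, I also log what Unity actually negotiated at startup with a small script (a minimal sketch using Unity's standard AudioSettings API; AudioConfigLogger is just my test component):

using UnityEngine;

// Attach to any GameObject in the test scene.
public class AudioConfigLogger : MonoBehaviour
{
    void Start()
    {
        AudioConfiguration config = AudioSettings.GetConfiguration();
        // The speaker mode Unity is actually running with (may be downgraded from "Quad")
        Debug.Log("Active speaker mode: " + config.speakerMode);
        // The best speaker mode the current audio driver reports supporting
        Debug.Log("Driver capabilities: " + AudioSettings.driverCapabilities);
        Debug.Log("Sample rate: " + config.sampleRate + ", DSP buffer size: " + config.dspBufferSize);
    }
}

If driverCapabilities reports Stereo here, Unity downmixes the quad clip to stereo regardless of the project setting, which would match the 1/3 and 2/4 merging I'm seeing (a quad-to-stereo downmix folds front-left with rear-left and front-right with rear-right).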
I hope I've explained the issue properly. If not, here is a zip file containing the test Unity project (including the 4-channel audio track) and a screen recording of the different tests I've made.
System:
MacBook Pro (Retina, 13-inch, Late 2013)
macOS 11.6.2
16 GB RAM
2.8 GHz Dual-Core Intel Core i7

Related

Unity: audio goes wrong when playing too many audio clips simultaneously

I'm making a rhythm game, and I use the AudioSource.PlayOneShot() function to play the hit sound when the player hits a note. I downloaded a beatmap from Osu!Mania to test my game.
I found that when there are too many audio clips playing simultaneously (20 or more per second), the audio output becomes stuttery and disordered, and clips played earlier are temporarily muted until the number of simultaneously playing clips drops back below a certain amount...
This situation is common in difficult levels of rhythm games (many audio clips playing at once), so I'm wondering how to solve the problem.
The problem exists on Windows, both in the Editor and in builds, and the FPS is fine when it happens.
Sorry for my poor English.
The expected behavior is that many audio clips can play simultaneously without stuttering.
Okay... I've found the solution myself.
Open Edit -> Project Settings -> Audio,
and change the 'Max Real Voices' and 'Max Virtual Voices' settings.
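The same two settings can also be changed from script; a minimal sketch using Unity's AudioSettings API (the numbers are just examples — Unity's defaults are 32 real and 512 virtual voices; note that AudioSettings.Reset restarts the audio system and stops anything currently playing, so do this before gameplay starts):

using UnityEngine;

public class VoiceLimits : MonoBehaviour
{
    void Awake()
    {
        AudioConfiguration config = AudioSettings.GetConfiguration();
        config.numRealVoices = 64;      // voices actually mixed at once
        config.numVirtualVoices = 1024; // voices tracked before the quietest are culled
        AudioSettings.Reset(config);    // applies the new limits (restarts the audio system)
    }
}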

App Store preview video error: The frame rate of one or more of your app previews is too high

When I tried to upload a preview video following Apple's guide, iTunes Connect popped up the error in the title ("The frame rate of one or more of your app previews is too high").
Does anyone know how to solve this? Thanks.
Apple's Guide:
https://developer.apple.com/app-store/app-previews/
Movies in App Previews must have a maximum frame rate of 30 fps, so you have to find a way to reduce your video's frame rate. I reduced mine to 24 fps and got accepted by the App Store. There are several options, as discussed below.
A. Using Terminal:
i) Install ffmpeg if you don't have it. A plain install is enough for this task (recent versions of Homebrew no longer accept the long list of --with-* build options that older answers passed here):
brew install ffmpeg
ii) Run ffmpeg to force the frame rate to, for example, 24 fps:
ffmpeg -i input.mp4 -r 24 output.mp4
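A variant that re-encodes only the video and leaves the audio stream untouched (assuming an H.264/AAC input; -c:a copy simply re-muxes the audio):
ffmpeg -i input.mp4 -r 24 -c:v libx264 -c:a copy output.mp4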
B. Using a graphical program such as LumaFusion. You can buy LumaFusion, install it on an iPhone/iPad, and change the frame rate upon saving the video.
C. I tried to change the frame rate in iMovie, version 10.1.9 (on the Mac, High Sierra 10.13.6), but there is no straightforward way to change the frame rate there.
Please check the official App Preview Properties: https://developer.apple.com/library/content/documentation/LanguagesUtilities/Conceptual/iTunesConnect_Guide/Chapters/Properties.html#//apple_ref/doc/uid/TP40011225-CH26-SW10
Specifically, the maximum frame rate is 30 fps.
Since you didn't mention how you recorded the App Preview, I would recommend a video editing tool like iMovie, Adobe After Effects, or Final Cut Pro to reduce the frame rate to the 30 fps limit.
I use DaVinci Resolve to create previews for the App Store.
There is a free download here: https://www.blackmagicdesign.com/products/davinciresolve/
Set the project to the proper resolution (e.g. 886x1920 or 1080x1920).
Import the clip into the timeline.
Right-click and set the clip's properties to 30 FPS.
Select File -> Media Management,
then choose the Transcode option.
Set the wrapper to MP4 and the codec to H.264.
Your clip will be exported in the proper format.

Why can’t VLC go into fullscreen mode?

I’m working on a Matlab application that uses a VLC class to control a VLC instance. One of the features is to set the VLC player to fullscreen, and this feature works perfectly fine.
The VLC player is downloaded from Matlab’s File Exchange: https://se.mathworks.com/matlabcentral/fileexchange/56215-vlc (Thanks a lot Léa Strobino)
However, one particular clip insists on resizing the player to a smaller size.
I have done some research, and it turns out this is a common problem in some VLC versions.
The usual workarounds are to uncheck the "adapt interface to video size" option (something like that) and to check the "Fullscreen" box.
This ought to make the player open in fullscreen and not resize to the video size, yet this video still resizes the player to a smaller size.
All the specs of the clips are the same: same file extension (.vob), same formats, and they were made the same way (I did some video trimming and such using ffmpeg, but in the same way every time).
I have noticed one difference: this particular video has a lower data rate and bitrate (~1000-1500 kbps), whereas the others are higher (>4000 kbps). Also, when showing the properties of the clip, the frame height and width are blank, as opposed to the others, which have specific values.
This should, however, not affect the fullscreen command called from Matlab after loading the video into the playlist. The command has no effect on this video but works on all the others.
It is possible to set the player to fullscreen manually by clicking the window, so it is not caused by some restriction in the video that disallows fullscreen.
Why does the video refuse to go into fullscreen?
I hope somebody is able to help.
Okay, so I seem to have solved the problem now. Without being completely sure why, the problem was in the lowered data rate/frame rate.
I tried adding -crf 18 when converting my .mp4 to a .vob file:
ffmpeg -i input.mp4 -vcodec copy -acodec ac3 -crf 18 output.vob
-crf stands for Constant Rate Factor and is a way to control the quality and thus the data rate. The values go from 0 to 51, and 18 seems to be the lowest 'sane' value (highest data rate). A good explanation can be found here: https://superuser.com/questions/677576/what-is-crf-used-for-in-ffmpeg
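Note that -crf is an option of encoders like libx264/libx265 and is silently ignored when the video stream is stream-copied with -vcodec copy, so if it doesn't seem to take effect, re-encode instead. For MPEG-2 video (the codec normally found in a .vob), the equivalent quality knob is -q:v, where lower is better and 2 is near maximum quality; for example:
ffmpeg -i input.mp4 -vcodec mpeg2video -q:v 2 -acodec ac3 output.vob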
With this higher data rate, the video opens up in fullscreen every time :)

iOS 7 audio robotic/garbled in speaker mode on iPhone 5s

We have a VOIP application that records and plays audio. As such, we are using the PlayAndRecord (kAudioSessionCategory_PlayAndRecord) audio session category.
So far, we have used it successfully on iPhone 4/4s/5 with both iOS 6 and iOS 7, where call audio and tones played clearly and were audible.
However, with iPhone 5s, we observed that both the call audio and tones sound robotic/garbled in speaker mode. When using earpiece/bluetooth/headset, sound is clear and audible.
iOS Version used with iPhone 5s: 7.0.4
We are using audio units for recording/playing the call audio.
When setting audio properties like the session category, audio route, session mode, etc., we tried both the older (deprecated) AudioSessionSetProperty() API and the AVAudioSession API.
For playing tones, we are using AVAudioPlayer. Playing tones during a VOIP call, and also when pressing the keypad controller within the app, produces robotic sound.
When instantiating the audio component using AudioComponentInstanceNew, we set componentSubType to kAudioUnitSubType_VoiceProcessingIO.
When replacing kAudioUnitSubType_VoiceProcessingIO with kAudioUnitSubType_RemoteIO, we noticed that the call audio and tones no longer sounded robotic; they were quite clear, but the volume level was very low in speaker mode.
In summary, keeping all the other audio APIs the same:
kAudioUnitSubType_VoiceProcessingIO: volume is high (desirable), but the sound of tones and call audio is robotic in speaker mode.
kAudioUnitSubType_RemoteIO: the sound of tones and call audio is clear, but the volume is too low to be audible in speaker mode.
STEPS TO REPRODUCE
- Set the audio session category to PlayAndRecord.
- Set the audio route to speaker.
- Do the rest of the audio setup: instantiate the audio component, activate the audio session, and start the audio unit.
- Set the input and render callbacks.
- Try both options (see the sketch after this list):
1. Play tones using AVAudioPlayer
2. Play call audio
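For reference, the relevant part of our setup looks roughly like this (a condensed Objective-C sketch, not the exact code from the repo; error handling omitted):

#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

// 1. Session category and speaker route
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
[session overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:nil];
[session setActive:YES error:nil];

// 2. Instantiate the I/O unit -- swapping the subtype below is the only
// change between the two behaviors described in the summary above
AudioComponentDescription desc = {0};
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO; // vs. kAudioUnitSubType_RemoteIO
desc.componentManufacturer = kAudioUnitManufacturer_Apple;

AudioComponent component = AudioComponentFindNext(NULL, &desc);
AudioComponentInstance ioUnit = NULL;
AudioComponentInstanceNew(component, &ioUnit);
// ... then set the input/render callbacks, AudioUnitInitialize(ioUnit),
// and AudioOutputUnitStart(ioUnit)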
Any suggestions on how to get past this issue? We have raised it as an issue with Apple but have had no response from them yet.
I have shared the code here: github link
The only difference between kAudioUnitSubType_VoiceProcessingIO and kAudioUnitSubType_RemoteIO is that VoiceProcessingIO includes code to remove acoustic echo, i.e. it tunes out the noise from the speaker so the microphone doesn't pick it up. It's been a long time since I've played with the audio framework, but I remember that for the sound to be off there could be any number of causes:
Are you doing any work in the audio callbacks that could be taking a long time?
The callbacks run on real-time threads; if your processing takes too long, you can miss data. It would be helpful to track the data over a fixed period of time to see whether you are capturing all of it. Use something like Wireshark to sniff the network, record the number of packets, and check whether the phone captured the same number.
Are you modifying any of the audio?
Do you have a circular buffer that might be causing an issue?
I've had several issues doing this, and one was with a third-party circular buffer that was described as low latency and efficient... it wasn't. I answered my own question here and included my circular buffer implementation; it greatly improved my audio, as the issue was that I was skipping data.
Give this a go and let me know:
iOS UI are causing a glitch in my audio stream
Please be aware that some of this code is specific to the ALaw audio format: 0xD5 is a byte of silence in ALaw; if you are using linear PCM or anything else, that value will probably produce noise of some kind.
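To illustrate the silence-byte point in context, here is a sketch of a render callback that pads with silence on buffer underrun (RingBuffer and RingBufferRead are hypothetical stand-ins for whatever buffer implementation you use):

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

typedef struct RingBuffer RingBuffer; // hypothetical buffer type
extern UInt32 RingBufferRead(RingBuffer *rb, void *dst, UInt32 bytes); // hypothetical read

static OSStatus RenderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    RingBuffer *rb = (RingBuffer *)inRefCon;
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer *buf = &ioData->mBuffers[i];
        UInt32 got = RingBufferRead(rb, buf->mData, buf->mDataByteSize);
        if (got < buf->mDataByteSize) {
            // Pad the shortfall with silence instead of stale samples:
            // 0xD5 for ALaw, 0x00 for linear PCM
            memset((char *)buf->mData + got, 0xD5, buf->mDataByteSize - got);
        }
    }
    return noErr;
}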

How do I enable background audio for an iPhone app

Currently I am designing an iOS application that will connect to a music stream over the network and play the audio to the user.
I have a simple setup with a button that enables the stream to start, and a UIWebView that connects to the stream. When I run the app (on an iPhone, NOT a simulator), the button works fine and launches the QuickTime player to begin playback of the audio. Pausing and playing from this screen works like a charm as well.
However, I want my user to be able to start the stream, turn the phone off (sleep the display), and continue listening to it. But sleeping the display fades out the audio until it stops playing.
I have tried going into the app's plist file, as a few others online have told me to do, and adding the field "Required background modes" with "App plays audio or streams audio/video using AirPlay" in array field 0 and "App downloads content from the network" in array field 1.
("App plays audio" was not offered through auto-complete, even though that was the phrase I was told would make the stream work. Instead I left it as "App plays audio or streams audio/video using AirPlay" before trying it the other way, to little more luck.)
However, neither of these allows the audio to continue playing when the display has been put to sleep. Can anyone offer a suggestion as to how to make it work?
In Xcode 5.1, there's another place aside from the plist where this needs to be set: Target -> Capabilities -> Background Modes... this seems to do more than just affect the plist, though I'm not absolutely sure of that.
There is sample code here: https://github.com/jsagorin/iOSBackgroundAudio
and some explanation here (how to set the UIBackgroundModes key in the app's Info.plist file, set the audio session category, etc.): http://www.sagorin.org/ios-playing-audio-in-background-audio/
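In short, the recipe those links describe (a sketch; "audio" is the raw value behind the "App plays audio or streams audio/video using AirPlay" entry in the UIBackgroundModes array):

#import <AVFoundation/AVFoundation.h>

// Info.plist: add "audio" to the UIBackgroundModes array.
// Then, before starting the stream, give the session a playback category:
- (void)configureBackgroundAudio {
    NSError *error = nil;
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:&error];
    [[AVAudioSession sharedInstance] setActive:YES error:&error];
}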
Just make an entry in the plist:
"Application does not run in background" (UIApplicationExitsOnSuspend), and set its value to NO.
And add "voip" to the background modes (UIBackgroundModes).