I have put together a media converter with 100 Mbps Ethernet on one side and 100BASE-FX fiber on the other. The application this converter goes into uses full duplex / pause frame flow control to handle data overloads. I have configured my hardware to accept and deal with pause frames. What I need is a means of testing the setup to verify that the media converter handles pause frames correctly. To that end, I have two questions:
1) Does anyone have a good method for testing a piece of hardware for its handling of pause frames?
2) An idea I had was to send data through the converter and, while doing so, send a pause frame of known length to the converter, then check that the device pauses for the correct amount of time. Does this method seem plausible? If so, is there an easy way (a software tool) to generate pause frames to accomplish what I am trying to do?
Any help here would be greatly appreciated.
Thanks,
Mike Nycz
There are two types of Pause frame testing:
1) Your device should detect pause frames and stop transmitting for the time specified in the received pause frame. If you send only a few pause frames to your device, it can be difficult to detect whether it actually stopped transmitting for that short duration. What you can do instead is send pause frames continuously; your device should then stop transmitting altogether for as long as it keeps receiving them.
2) Your device should generate pause frames when its RX FIFO level rises above a certain threshold and stop generating them when the level drops back below the threshold.
You can use a packet generator such as N2X or IXIA to generate pause frames. One more thing: pause frames should be exactly 64 bytes. A device may opt to reject pause frames of any other size.
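If you don't have a hardware packet generator handy, you can also craft a pause frame in software. Below is a minimal sketch, assuming a Linux host with root privileges and an interface named "eth0" (both of those are assumptions for illustration); note that some NICs/MACs intercept MAC control frames and may refuse to transmit them from software, so verify this on your hardware. The frame is the standard 802.3x format: destination 01-80-C2-00-00-01, EtherType 0x8808, opcode 0x0001, then the pause time in 512-bit-time quanta, padded to 60 bytes (64 on the wire once the NIC appends the FCS).

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <arpa/inet.h>
#include <net/if.h>
#include <netpacket/packet.h>
#include <linux/if_ether.h>

int main(void)
{
    unsigned char frame[60] = {0};  /* 60 bytes + 4-byte FCS = 64 on the wire */
    const unsigned char dst[6] = {0x01, 0x80, 0xC2, 0x00, 0x00, 0x01};

    memcpy(frame, dst, 6);               /* MAC control multicast address  */
    /* bytes 6..11: source MAC, filled in from the interface below        */
    frame[12] = 0x88; frame[13] = 0x08;  /* EtherType 0x8808: MAC control  */
    frame[14] = 0x00; frame[15] = 0x01;  /* opcode 0x0001 = PAUSE          */
    frame[16] = 0xFF; frame[17] = 0xFF;  /* pause_time in 512-bit quanta   */

    int s = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (s < 0) { perror("socket"); return 1; }

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);   /* assumed interface */
    if (ioctl(s, SIOCGIFHWADDR, &ifr) < 0) { perror("SIOCGIFHWADDR"); return 1; }
    memcpy(frame + 6, ifr.ifr_hwaddr.sa_data, 6);  /* source MAC */
    if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) { perror("SIOCGIFINDEX"); return 1; }

    struct sockaddr_ll addr;
    memset(&addr, 0, sizeof(addr));
    addr.sll_family  = AF_PACKET;
    addr.sll_ifindex = ifr.ifr_ifindex;
    addr.sll_halen   = 6;
    memcpy(addr.sll_addr, dst, 6);

    if (sendto(s, frame, sizeof(frame), 0,
               (struct sockaddr *)&addr, sizeof(addr)) < 0)
        perror("sendto");
    close(s);
    return 0;
}
```

Run it in a loop to approximate the "send pause packets continuously" test described above.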
I am having a problem finding resources on playing an attack (start of sound) / sustain (looping sound) / decay (ending of sound) sequence with no transition breaks. Are there any good libraries for handling this, or should I roll my own with AVAudioPlayer? Is AudioQueue a better place to look? I used to use SoundEngine.cpp, but that's been gone for a while. Is CAF still the best format to use for it?
Thanks!
From your description, it sounds as if you're trying to write a software synthesizer. The only way that you could use AVAudioPlayer for something like this would be to compose the entire duration of a note as a single WAV file and then play the whole thing with AVAudioPlayer.
To create a note of arbitrary duration, one that begins playing in response to a user action (like tapping a button) and then continues until a second user action (like tapping a "stop" button or lifting the finger off the first button) starts ramping the looped region's volume down to zero (the "release" part), you will need to use AudioQueue. AVAudioPlayer can play audio constructed entirely in memory, but the entire playback has to be constructed before play begins, meaning you cannot change what is being played in response to user actions, other than to stop playback.
Here's another question/answer that shows how to use AudioQueue. AudioQueue calls a callback method whenever it needs more data to play; you would have to implement all the code that loops and envelope-wraps the original WAV file data.
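Here is a rough sketch of what such a callback might look like, assuming mono 16-bit samples already decoded into memory. The SynthState struct and all its fields are hypothetical, not from any Apple sample:

```c
#include <AudioToolbox/AudioToolbox.h>

typedef struct {
    SInt16 *samples;        /* decoded mono source data               */
    UInt32  numSamples;
    UInt32  loopStart;      /* sustain loop boundaries, in frames     */
    UInt32  loopEnd;
    UInt32  pos;            /* current read position                  */
    float   gain;           /* current envelope level, 0..1           */
    float   releaseStep;    /* per-sample decrement once released     */
    Boolean released;       /* set when the user lifts the finger     */
} SynthState;

static void OutputCallback(void *inUserData,
                           AudioQueueRef inAQ,
                           AudioQueueBufferRef inBuffer)
{
    SynthState *st = (SynthState *)inUserData;
    SInt16 *out = (SInt16 *)inBuffer->mAudioData;
    UInt32 frames = inBuffer->mAudioDataBytesCapacity / sizeof(SInt16);

    for (UInt32 i = 0; i < frames; i++) {
        /* Loop the sustain region until the note is released. */
        if (!st->released && st->pos >= st->loopEnd)
            st->pos = st->loopStart;

        SInt16 s = (st->pos < st->numSamples) ? st->samples[st->pos++] : 0;
        out[i] = (SInt16)(s * st->gain);

        /* Ramp the volume down during release. */
        if (st->released && st->gain > 0.0f)
            st->gain -= st->releaseStep;
        if (st->gain < 0.0f) st->gain = 0.0f;
    }

    inBuffer->mAudioDataByteSize = frames * sizeof(SInt16);
    /* The app would call AudioQueueStop once gain reaches zero. */
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
```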
Creating your own envelope generator is very simple. The tough part will be updating your program to use lower-level audio services in order to alter the signal directly.

To do this, you will need:

- the audio file's samples
- an AudioQueue (that's one approach; I'm going with it because it was mentioned in the OP, and it is a relatively high-level API for a user-provided sample buffer)
- to provide a signal to the queue
- to determine whether your program is best served by realtime or prerendered processing

Realtime:

- allows live variations
- manage your loop points
- manage your render position
- be able to determine the amplitude to apply based on the sample position range you are reading

or Prerendered:

- may require more memory
- requires less CPU
- apply the envelope to your copy of the sample buffer (sketched at the end of this answer)
- manage your render position

I also assume that you need only slow/simple transitions. If you want some crazy/fast LFO without aliasing, you will have a lot more work to do; this approach should not produce audible aliasing unless your changes are too abrupt.
Writing a simple envelope generator (EG) is easy; check out Apple's SinSynth for a very basic EG if you need a push in that direction.
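For the prerendered route, applying the envelope to a copy of the sample buffer can be as simple as the following sketch (plain C, linear attack/release ramps; all the names are mine, not from SinSynth):

```c
#include <stdint.h>
#include <stdlib.h>

/* Copy the source samples and bake a linear attack/release envelope
 * into the copy before handing it to the queue. Returns NULL on
 * allocation failure; caller frees the result. */
int16_t *render_enveloped(const int16_t *src, uint32_t n,
                          uint32_t attack, uint32_t release)
{
    int16_t *dst = malloc((size_t)n * sizeof *dst);
    if (!dst) return NULL;

    for (uint32_t i = 0; i < n; i++) {
        float g = 1.0f;                          /* sustain level      */
        if (attack && i < attack)
            g = (float)i / (float)attack;        /* ramp up            */
        else if (release && release <= n && i >= n - release)
            g = (float)(n - i) / (float)release; /* ramp down          */
        dst[i] = (int16_t)((float)src[i] * g);
    }
    return dst;
}
```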
I am wondering if there is a way to manipulate the audio buffers while the audio queue is paused. The pseudo-logic goes like this:
1. pause audio queue
2. manipulate the audio buffers in the queue except the one that is being handed to the callback function.
3. start the audio queue again
I suspect the problem would arise when I try to manipulate the audio buffer that is being decoded and fed to the device. Has anyone tried this before?
I think this path will lead to pain and suffering, and what you want can be achieved without breaking the paradigm that AudioQueue sets forth. The whole point of the queue is to feed buffers to a callback that you implement, so you can manipulate each sample as you see fit before passing it down the chain.
Maybe if you can explain the context of what you're trying to accomplish, a more suitable solution could be offered.
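For what it's worth, here is a minimal sketch of the pattern I'm describing, with a hypothetical ProcessSamples() standing in for whatever manipulation you have in mind:

```c
#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

/* Example transformation: attenuate every sample by 6 dB. */
static void ProcessSamples(SInt16 *samples, UInt32 count)
{
    for (UInt32 i = 0; i < count; i++)
        samples[i] /= 2;
}

static void OutputCallback(void *inUserData,
                           AudioQueueRef inAQ,
                           AudioQueueBufferRef inBuffer)
{
    UInt32 bytes = inBuffer->mAudioDataBytesCapacity;

    /* Placeholder fill: real code would copy the next chunk of source
       audio here instead of silence. */
    memset(inBuffer->mAudioData, 0, bytes);
    inBuffer->mAudioDataByteSize = bytes;

    /* Manipulate the samples now, before the buffer goes back in the
       queue, rather than reaching into buffers already enqueued. */
    ProcessSamples((SInt16 *)inBuffer->mAudioData, bytes / sizeof(SInt16));
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
```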
I'm struggling with an AudioQueue audio player I implemented. I initially thought it was truncating the first half of the audio it played, but upon loading larger files I noticed gaps every 1/2 to 1 second. I've run it in debug and confirmed that I'm loading the queue correctly with audio (there are no big zero regions loaded in the queue). It plays without issue (no gaps) on the simulator, but on the device I get gaps, as if it's missing every other chunk of audio.

In my app I decompress the audio and then pull it from an in-memory NSMutableData object, feeding this data into the audio queue. I have a corresponding implementation in the same app that plays wave audio, and that one works without issue on long and short audio clips.

Comparing the wave implementation to the one that does decompression, the only difference between the two is how I discover the audio metadata and where I get the audio samples for enqueuing. In the wave implementation I use AudioFileGetProperty and AudioFileReadPackets to get this data. In the other case I derive the data beforehand using cached ivars loaded during callbacks from my decompressor. The metadata matches for both the compressed and wave implementations. I've run the code in Instruments and I don't see anything taking more than 1 ms in my audio packet delivery/enqueuing logic during playback.

I'm completely lost. Please speak up if you have any idea how to solve this.
I finally resolved this issue. I found that if I skip the first 44 bytes (the exact size of a wave header) of the audio, it plays correctly on the device. It plays correctly on the simulator regardless of whether I skip the 44 bytes or not. Strange, and I'm not sure why, but that's the way it works.
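In case it helps someone else, a tiny sketch of the workaround, assuming the decompressor output begins with a canonical 44-byte RIFF/WAVE header:

```c
#include <stddef.h>
#include <stdint.h>

#define WAV_HEADER_BYTES 44   /* canonical RIFF/WAVE header size */

/* Returns a pointer to the first PCM byte and stores the remaining
 * length in *out_len, or returns NULL if the data is too short. */
static const uint8_t *skip_wav_header(const uint8_t *data, size_t len,
                                      size_t *out_len)
{
    if (len <= WAV_HEADER_BYTES) { *out_len = 0; return NULL; }
    *out_len = len - WAV_HEADER_BYTES;
    return data + WAV_HEADER_BYTES;   /* first real sample byte */
}
```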
Can anyone confirm how an app like this is done?
PocketCam
Is the way to do this to capture the camera's video stream using AVFoundation?
http://developer.apple.com/iphone/library/documentation/AVFoundation/Reference/AVCaptureVideoDataOutput_Class/Reference/Reference.html#//apple_ref/doc/uid/TP40009544
To be clear, I don't want to capture the video and save it; I want to stream it over a wireless network as an IP camera.
Thanks,
I can't confirm how PocketCam works; however, as of iOS 4, AVFoundation is the correct way to do this. You will receive a callback with each frame you need. At that point you would push the frame as an image to some server machine listening on the network. Keep in mind that you could be receiving a lot of frames, and you may not have enough bandwidth to push all of them, depending on the quality/size/resolution of the frames. Here is a technical note from Apple that discusses how to implement frame captures with AVFoundation.
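As a rough illustration (not necessarily how PocketCam does it), here is a minimal Objective-C sketch of AVFoundation frame capture, assuming ARC; error handling and the actual network push are omitted, and sendFrameToServer: is a hypothetical placeholder:

```objc
#import <AVFoundation/AVFoundation.h>

@interface FrameGrabber : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
@property (nonatomic, strong) AVCaptureSession *session;
@end

@implementation FrameGrabber

- (void)start {
    self.session = [[AVCaptureSession alloc] init];

    AVCaptureDevice *camera =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input =
        [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
    [self.session addInput:input];

    // Deliver frames to our delegate on a background serial queue.
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    dispatch_queue_t queue = dispatch_queue_create("frames", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    [self.session addOutput:output];

    [self.session startRunning];
}

// Called once per captured frame; compress and push to the server here.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // [self sendFrameToServer:sampleBuffer];  // hypothetical network push
}

@end
```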
Consider the aurioTouch sample application provided by Apple. I wanted to simulate lengthy processing of the recorded audio, so I modified the sample by introducing a delay of 0.1 seconds in the render callback method, PerformThru. This leads to distorted audio and static noise being played through the iPhone's speaker or headphones. The noise is heard even when the mute button in the application is on, which essentially outputs silence to the AudioUnit output bus.
Can anybody give a detailed explanation of what happens at the low level when the host's callback function (in our sample, PerformThru) does not return in a timely fashion?
Why does a callback function that performs poorly make the iPhone play back noise?
I understand that the code in the callback function must be highly optimized. Still, I would like to know whether it is possible to prevent the noise from happening.
Is it possible to modify the aurioTouch sample to make the AudioUnit do just the recording and switch playback off completely?
If you want to introduce a delay, you need to do it via buffering, not by simply delaying the callback. E.g., for 0.1 s at a 44.1 kHz sample rate you would need to buffer an additional 4410 samples: initially you pass 4410 zeros, and then start passing your buffered (delayed) samples.
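A minimal sketch of that idea as a ring buffer inside a render callback, assuming mono SInt16 at 44.1 kHz (aurioTouch's actual stream format differs, so treat this as illustrative only):

```c
#include <AudioUnit/AudioUnit.h>

#define DELAY_FRAMES 4410   /* 0.1 s at 44.1 kHz */

typedef struct {
    SInt16 ring[DELAY_FRAMES];  /* starts zeroed: the initial 4410 zeros */
    UInt32 writeIdx;
} DelayState;

static OSStatus RenderDelayed(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    DelayState *d = (DelayState *)inRefCon;
    SInt16 *buf = (SInt16 *)ioData->mBuffers[0].mData;

    /* In aurioTouch, buf would first be filled with fresh input via
       AudioUnitRender; only the delay stage itself is shown here. */
    for (UInt32 i = 0; i < inNumberFrames; i++) {
        SInt16 in = buf[i];                 /* newly captured sample      */
        buf[i]    = d->ring[d->writeIdx];   /* emit sample from 0.1 s ago */
        d->ring[d->writeIdx] = in;          /* store the new sample       */
        d->writeIdx = (d->writeIdx + 1) % DELAY_FRAMES;
    }
    return noErr;
}
```

The callback itself always returns promptly; the 0.1 s latency comes entirely from the buffer.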