I want to modulate voice and transmit it using "flutter_webRTC".
Is it possible to modify the voice before sending it?
If so, how should I approach it?
I have already received video and audio successfully using "flutter_webRTC".
Related
I want to develop a video call application for mobile. I have searched a lot, but I can't find a library in Flutter or the Dart programming language for capturing audio and video from the webcam and microphone. I want to get a single data flow/stream so that I can send it via socket from client to client.
Can you tell me if there is a way to capture video with audio and send it via socket with Dart?
Is there any way to send a recorded voice message programmatically using Objective-C on the iPhone?
I am sending text messages using the MessageUI framework, but now I want to send voice messages too.
There's nothing straightforward built into iOS for voice messages the way "MFMessageComposeViewController" handles text messages. You'll have to write your own voice recording code or make use of something open source or third-party/commercial. Here's a related question that talks about how to record audio on iOS.
Plus, this depends on how you want the recipient to receive the recorded voice message: an MP3 or some other audio file sent via e-mail (which you can do using MFMailComposeViewController), or some other way?
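For what it's worth, here's a rough sketch of that flow in Swift (the question is about Objective-C, but the same AVAudioRecorder and MFMailComposeViewController APIs apply there). The file name, recorder settings, and subject line below are placeholders I picked for illustration, not anything mandated by the frameworks:

```swift
import AVFoundation
import MessageUI
import UIKit

// Sketch: record a short voice note with AVAudioRecorder, then attach the
// resulting file to an MFMailComposeViewController to send it by e-mail.
class VoiceMessageViewController: UIViewController, MFMailComposeViewControllerDelegate {
    private var recorder: AVAudioRecorder?
    private let fileURL = FileManager.default.temporaryDirectory
        .appendingPathComponent("voice-message.m4a")   // placeholder file name

    func startRecording() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .default)
        try session.setActive(true)

        // Illustrative settings: mono AAC at 44.1 kHz.
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVSampleRateKey: 44_100,
            AVNumberOfChannelsKey: 1
        ]
        recorder = try AVAudioRecorder(url: fileURL, settings: settings)
        recorder?.record()
    }

    func stopAndSend() {
        recorder?.stop()
        guard MFMailComposeViewController.canSendMail(),
              let audioData = try? Data(contentsOf: fileURL) else { return }

        let mail = MFMailComposeViewController()
        mail.mailComposeDelegate = self
        mail.setSubject("Voice message")
        mail.addAttachmentData(audioData, mimeType: "audio/mp4", fileName: "voice-message.m4a")
        present(mail, animated: true)
    }

    func mailComposeController(_ controller: MFMailComposeViewController,
                               didFinishWith result: MFMailComposeResult,
                               error: Error?) {
        controller.dismiss(animated: true)
    }
}
```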
I am trying to record the sound from the iPhone speaker. I am able to do that, but I am unable to avoid mic input in the recorded output. I've tried sample code available on different websites with no luck.
The sample I used does the recording with Audio Units. I need to know if there is any Audio Unit property to set the mic input volume to zero. Beyond that, I gathered from other posts that Audio Queue Services can do this for me. Can anyone point me to sample code for an Audio Queue Services implementation? I also need to know whether there is a way of writing the data to a separate audio file before sending it as input to the speaker.
Thanks in advance
There is no public iOS API or property for recording generic audio sent to the iPhone speaker. Only mic input can be recorded.
But if you are playing audio in your app using only uncompressed samples with Audio Queues or the RemoteIO Audio Unit, you can just copy those samples to a file before you write them to the audio callback buffers. Those saved samples can be used to construct a recording.
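As a rough illustration of that "copy what you play" idea, here's a small Swift sketch that mirrors the samples to a CAF file with ExtAudioFile. The class, the CAF/linear-PCM choice, and the exact call site in your render callback are my own assumptions, not something prescribed by the API:

```swift
import Foundation
import AudioToolbox

// Sketch: whenever your render callback has filled an AudioBufferList with the
// uncompressed samples it is about to hand to the RemoteIO unit, write the
// same samples to an ExtAudioFile. Those saved samples become the recording.
final class PlaybackTapWriter {
    private var file: ExtAudioFileRef?

    // `format` must match the uncompressed stream you feed to the output unit.
    init(url: URL, format: AudioStreamBasicDescription) {
        var desc = format
        var fileRef: ExtAudioFileRef?
        // Write a CAF file in the same linear-PCM format we are playing.
        ExtAudioFileCreateWithURL(url as CFURL,
                                  kAudioFileCAFType,
                                  &desc,
                                  nil,
                                  AudioFileFlags.eraseFile.rawValue,
                                  &fileRef)
        // Prime the async writer from this non-real-time thread so the first
        // call from the render thread doesn't have to allocate.
        if let fileRef = fileRef { ExtAudioFileWriteAsync(fileRef, 0, nil) }
        file = fileRef
    }

    // Call this from the render callback *after* filling `bufferList`,
    // just before returning the samples to the audio unit.
    func write(frameCount: UInt32, bufferList: UnsafeMutablePointer<AudioBufferList>) {
        guard let file = file else { return }
        // The async variant is safe to call from the real-time render thread.
        ExtAudioFileWriteAsync(file, frameCount, bufferList)
    }

    func close() {
        if let file = file { ExtAudioFileDispose(file) }
        file = nil
    }
}
```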
I want to know how to make an app that can close a circuit using the iPhone's audio jack. An example of this is Trigger Happy http://www.kickstarter.com/projects/1435018402/trigger-happy-camera-remote
I believe what they do is simply play some arbitrary audio and use the resulting signal.
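If it helps, a minimal Swift sketch of that idea with AVAudioEngine would look something like the following: it just generates a sine tone and loops it out of the audio output so the jack carries a continuous signal. The 1 kHz frequency and the engine setup are illustrative assumptions, not details from the Trigger Happy hardware:

```swift
import AVFoundation

// Sketch: build an AVAudioEngine that loops a generated sine tone, so the
// headphone jack outputs a steady audio signal the external circuit can react to.
func makeToneGenerator(frequency: Double = 1_000) -> AVAudioEngine {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    engine.attach(player)

    let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)!
    engine.connect(player, to: engine.mainMixerNode, format: format)

    // One second of a sine wave at the requested frequency.
    let frameCount = AVAudioFrameCount(format.sampleRate)
    let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount)!
    buffer.frameLength = frameCount
    let samples = buffer.floatChannelData![0]
    for n in 0..<Int(frameCount) {
        samples[n] = Float(sin(2.0 * .pi * frequency * Double(n) / format.sampleRate))
    }

    try? engine.start()
    // Loop the buffer so the tone (and thus the output voltage) never stops.
    player.scheduleBuffer(buffer, at: nil, options: .loops, completionHandler: nil)
    player.play()
    return engine
}
```

Keep the returned engine alive for as long as you need the tone; it retains the player node that was attached to it.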
I am working on a radio app in which I want to record the live radio stream when a record button is clicked. It should record just the radio streaming sound. Currently I am recording from the mic, but that's not what I want, as it also captures the surrounding/background noise. I did read a few posts, but all of them are regarding AVAudioRecorder. I have used the wunderradio API for playing the radio from a URL, but I haven't found any method/function in their API which will record the audio from that URL. I want to record only the buffered audio from that URL, so that I can play the recorded file later.
Any help would be appreciated.
Recording audio playback by using the microphone is really not the right way to do it. If you're using AudioQueue services for audio playback you can use the AudioFileWritePackets function to write audio data to a file.
If you implement recording this way you can write audio files in the format they're streamed in. So an mp3 audio stream can be written as an mp3 file to disk since you're writing encoded audio packets with the AudioFileWritePackets function.
To get a better understanding of how this all should be implemented, you can have a look at the SpeakHere sample code from Apple. The SpeakHere project records from the microphone, but it's still a good example of how the AudioFileWritePackets function works. In your case you need to use the AudioFileWritePackets function in the output callback of your AudioQueue. This is the callback that's called when a buffer has finished playing, so it's a good place to write the already-played audio buffer's data to a file.
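To make the write step concrete, here's a hedged Swift sketch (SpeakHere itself is C/Objective-C). The state struct, the helper name, and the assumption that the packet descriptions were stashed on the buffer when it was enqueued (e.g. via AudioQueueAllocateBufferWithPacketDescriptions) are all mine, not from SpeakHere:

```swift
import AudioToolbox

// Sketch: write already-played AudioQueue buffers to a file with
// AudioFileWritePackets. The bookkeeping lives in a state value you'd pass
// as the queue's user data.
struct PlaybackRecorderState {
    var audioFile: AudioFileID      // created earlier with AudioFileCreateWithURL
    var packetsWritten: Int64 = 0   // running offset into the output file
}

// Call this from your AudioQueue output callback, i.e. after `buffer` has
// finished playing and before you refill and re-enqueue it.
func writePlayedBuffer(state: inout PlaybackRecorderState,
                       buffer: AudioQueueBufferRef) {
    var packetCount = buffer.pointee.mPacketDescriptionCount
    guard packetCount > 0 else { return }

    // For a compressed stream (e.g. MP3) the packets go to disk exactly as
    // they arrived from the network: no decoding, no re-encoding.
    let status = AudioFileWritePackets(state.audioFile,
                                       false,                              // don't cache
                                       buffer.pointee.mAudioDataByteSize,  // bytes in this buffer
                                       buffer.pointee.mPacketDescriptions, // per-packet sizes/offsets
                                       state.packetsWritten,               // where to write in the file
                                       &packetCount,                       // in/out packet count
                                       buffer.pointee.mAudioData)          // the played audio bytes
    if status == noErr {
        state.packetsWritten += Int64(packetCount)
    }
}
```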