I am using the flutter-webrtc plugin and would like to record both the local and remote audio streams. Is there any way to get audio buffers from the media streams? I have tried using the AudioFileRenderer from the unified-plan branch. In the startRecording function of MediaRecorderImpl.java I supplied a file storage path, e.g. "storage/emulated/0/Android/data"; a file is successfully created every time I end my call, but the recording is broken and can't be played. There are no errors in the terminal. I'm using Flutter v1.22.6 and forked flutter-webrtc from 0.5.8, adding the AudioFileRenderer file to it. My code is below:
public void startRecording(File file) throws Exception {
    recordFile = file;
    if (isRunning)
        return;
    isRunning = true;
    //noinspection ResultOfMethodCallIgnored
    file.getParentFile().mkdirs();
    if (videoTrack != null) {
        System.out.println("try123 1");
        videoFileRenderer = new VideoFileRenderer(
            file.getAbsolutePath(),
            EglUtils.getRootEglBaseContext(),
            audioInterceptor != null
        );
        videoTrack.addSink(videoFileRenderer);
        if (audioInterceptor != null)
            audioInterceptor.attachCallback(id, videoFileRenderer);
    } else {
        Log.e(TAG, "Video track is null");
        if (audioInterceptor != null) {
            //TODO(rostopira): audio only recording
            // throw new Exception("Audio-only recording not implemented yet");
            Log.d(TAG, "Try to use onWebrtcSamplesReady");
            audioFileRenderer = new AudioFileRenderer(file);
            audioInterceptor.attachCallback(id, audioFileRenderer);
        }
    }
}
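For reference, I haven't changed stopRecording in my fork, so it should still be roughly the stock implementation sketched below (written from memory, so treat the exact names such as detachCallback() and release() as assumptions). I notice it only releases the video renderer; if the AudioFileRenderer never gets released, I suspect its output is never finalized, which might be why the file can't be played.
// Roughly the stock stopRecording (sketched from memory -- names may differ in the unified-plan branch).
public void stopRecording() {
    isRunning = false;
    if (audioInterceptor != null)
        audioInterceptor.detachCallback(id);
    if (videoTrack != null && videoFileRenderer != null) {
        videoTrack.removeSink(videoFileRenderer);
        videoFileRenderer.release();
        videoFileRenderer = null;
    }
    // Presumably the audio-only path needs the equivalent for audioFileRenderer, e.g.:
    // if (audioFileRenderer != null) { audioFileRenderer.release(); audioFileRenderer = null; }
    // ("release()" is an assumption -- whatever call finalizes the file in AudioFileRenderer.)
}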
Any help is appreciated! Thanks!
I am also looking for the same solution and haven't found one so far.
So I am using a webview for the RTC part (communication and recording), while keeping the Firebase messaging and EventSource/SSE (I'm not using sockets) in Flutter.
This doesn't directly answer your question, it just offers an alternative, but it's better than having no solution at all. Hopefully, once flutter-webrtc is updated to support voice-only recording, we can update the apps we develop.
How do you get the output written to logcat back into the Flutter app that caused it? Or, put more simply: how do you read logcat in Flutter?
The problem is this:
The app uses a stack of Android plugins to communicate with some custom hardware over Bluetooth. Those Android plugins write extensively to logcat. For debugging, it would be very helpful to be able to read all the messages the app (including the native plugins) has written to logcat. The question is: is this somehow possible?
How would you tackle that?
Check out the plugin called logcat on pub.dev.
Sadly, it seems to be no longer maintained and isn't updated for null safety.
But you can check out the source code here and see how the plugin gets access to the android logcat.
Because the logcat is a native thing, you'll have to use a MethodChannel to call a Java/Kotlin function:
// define MethodChannel
final platform = const MethodChannel('app.channel.logcat');
// call native method
logs = await platform.invokeMethod('execLogcat');
And the native part:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

import io.flutter.plugin.common.MethodCall;
import io.flutter.plugin.common.MethodChannel;
import io.flutter.plugin.common.MethodChannel.MethodCallHandler;
import io.flutter.plugin.common.MethodChannel.Result;
import io.flutter.plugin.common.PluginRegistry.Registrar;

public class LogcatPlugin implements MethodCallHandler {
    public static void registerWith(Registrar registrar) {
        final MethodChannel channel = new MethodChannel(registrar.messenger(), "app.channel.logcat");
        channel.setMethodCallHandler(new LogcatPlugin());
    }

    @Override
    public void onMethodCall(MethodCall call, Result result) {
        if (call.method.equals("execLogcat")) {
            String logs = getLogs();
            if (logs != null) {
                result.success(logs);
            } else {
                result.error("UNAVAILABLE", "logs not available.", null);
            }
        } else {
            result.notImplemented();
        }
    }

    String getLogs() {
        try {
            // "logcat -d" dumps the current log buffer and exits.
            Process process = Runtime.getRuntime().exec("logcat -d");
            BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(process.getInputStream()));
            StringBuilder log = new StringBuilder();
            String line;
            while ((line = bufferedReader.readLine()) != null) {
                log.append(line).append('\n'); // keep line breaks so the dump stays readable
            }
            return log.toString();
        } catch (IOException e) {
            return "EXCEPTION" + e.toString();
        }
    }
}
The code samples are from github.com/pharshdev/logcat.
Maybe you can just fork the git repo and migrate it to null safety if needed.
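One thing worth noting if you go that route: logcat -d dumps the current buffer once and returns, so the Dart side only ever gets a snapshot. For continuous monitoring you would have to keep a logcat process running and stream its output back over an EventChannel instead of a MethodChannel. A rough sketch of that idea (the channel name and the threading are my own assumptions, not part of the plugin):
// Sketch: stream logcat lines continuously to Dart over an EventChannel.
// Needs io.flutter.plugin.common.EventChannel, android.os.Handler, android.os.Looper, java.io.*.
// "app.channel.logcat/stream" is a hypothetical channel name.
new EventChannel(registrar.messenger(), "app.channel.logcat/stream")
        .setStreamHandler(new EventChannel.StreamHandler() {
            private Thread worker;

            @Override
            public void onListen(Object arguments, EventChannel.EventSink events) {
                worker = new Thread(() -> {
                    try {
                        // No "-d": logcat keeps running and we forward each new line.
                        Process process = Runtime.getRuntime().exec("logcat");
                        BufferedReader reader = new BufferedReader(
                                new InputStreamReader(process.getInputStream()));
                        Handler main = new Handler(Looper.getMainLooper());
                        String line;
                        while ((line = reader.readLine()) != null) {
                            final String logLine = line;
                            // The EventSink must be used on the main thread.
                            main.post(() -> events.success(logLine));
                        }
                    } catch (IOException e) {
                        // Ignored in this sketch.
                    }
                });
                worker.start();
            }

            @Override
            public void onCancel(Object arguments) {
                if (worker != null) worker.interrupt();
            }
        });
On the Dart side you would then listen to EventChannel('app.channel.logcat/stream').receiveBroadcastStream() instead of calling invokeMethod.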
Check out the plugin called logcat_monitor on pub.dev.
Its biggest advantage over the other logcat plugin is that it allows continuous monitoring of logcat messages.
How to use:
Add the dependencies:
dependencies:
logcat_monitor: ^0.0.4
Create a function to consume the logcat messages
void _mylistenStream(dynamic value) {
if (value is String) {
_logBuffer.writeln(value);
}
}
Register your function as a listener to get logs, then use it in any way within your app.
LogcatMonitor.addListen(_mylistenStream);
Start the logcat monitor, passing filter parameters as defined by the logcat tool.
await LogcatMonitor.startMonitor("*.*");
I'm creating a research experiment that uses WebAudio API to record audio files spoken by the user.
I came up with a solution for this using recorder.js and everything was working fine... until I tried it yesterday.
I am now getting this error in Chrome:
"The AudioContext was not allowed to start. It must be resumed (or
created) after a user gesture on the page."
And it refers to this link: Web Audio API policy.
This appears to be a consequence of Chrome's new policy outlined at the link above.
So I attempted to solve the problem by using resume() like this:
var gumStream;  // stream from getUserMedia()
var rec;        // Recorder.js object
var input;      // MediaStreamAudioSourceNode we'll be recording

// shim for AudioContext when it's not available
var AudioContext = window.AudioContext || window.webkitAudioContext;
var audioContext = new AudioContext(); // new audio context to help us record

function startUserMedia() {
    var constraints = { audio: true, video: false };

    audioContext.resume().then(() => { // This is the new part
        console.log('context resumed successfully');
    });

    navigator.mediaDevices.getUserMedia(constraints).then(function(stream) {
        console.log("getUserMedia() success, stream created, initializing Recorder.js");
        gumStream = stream;
        input = audioContext.createMediaStreamSource(stream);
        rec = new Recorder(input, { numChannels: 1 });
        audio_recording_allowed = true;
    }).catch(function(err) {
        console.log("Error");
    });
}
Now in the console I'm getting:
Error
context resumed successfully
And the stream is not initializing.
This happens in both Firefox and Chrome.
What do I need to do?
I just had this exact same problem! And technically, you helped me to find this answer. My error message wasn't as complete as yours for some reason and the link to those policy changes had the answer :)
Instead of resuming, it's best practice to create the audio context after the user has interacted with the document (when I say best practice: if you have a look at padenot's first comment of 28 Sept 2018 on this thread, he explains why in the first bullet point).
So instead of this:
var audioContext = new AudioContext(); // new audio context to help us record

function startUserMedia() {
    audioContext.resume().then(() => { // This is the new part
        console.log('context resumed successfully');
    });
}
Just set the audio context like this:
var audioContext;

function startUserMedia() {
    if (!audioContext) {
        audioContext = new AudioContext();
    }
}
This should work, as long as startUserMedia() is executed after some kind of user gesture.
I'm getting some Chinese characters when trying to access the Google Play username with Unity3D:
// Enclosing class and field declarations were omitted from the original snippet;
// the names below (PlayerNameDisplay, userInfo, txt) are assumptions.
using UnityEngine;
using UnityEngine.UI;
using GooglePlayGames;

public class PlayerNameDisplay : MonoBehaviour {
    string userInfo;
    Text txt;

    void Start () {
        PlayGamesPlatform.Activate();
        // Social.localUser.Authenticate(ProcessAuthentication);
        PlayGamesPlatform.DebugLogEnabled = true;
        Social.localUser.Authenticate(success => {
            if (success)
            {
                Debug.Log("Authentication successful");
                userInfo = Social.localUser.userName;
                Debug.Log(userInfo);
            }
            else
                Debug.Log("Authentication failed");
        });
    }

    void Update () {
        txt = GameObject.Find("txt").GetComponent<Text>();
        txt.text = userInfo;
    }
}
I have checked whether the user is really authenticated with Google Play, and he is. I'm getting this on my mobile phone (a Samsung S6).
Any ideas how to solve this?
I had the same problem and solved it by updating the Google Play Games plugin.
I was having the same problem with version 0.9.38 of GPGS. They committed a fix two days ago (v0.9.38a) that appears to resolve the issue. From the commit log:
Fixing string marshaling from C to C#.
Make sure you follow the upgrade instructions when upgrading.
I downloaded DigitalDJ/AudioStreamer to use in a player I'm building; here's the project: https://github.com/DigitalDJ/AudioStreamer
I had used this library before and decided to upgrade to the version that supports multi-threading,
but when I change the streaming server address http://thor.nickpack.com:9000 to the address of my own server, no audio plays.
Replacing the server path in the TextField in the view controller with my path, http://184.154.37.132:7075, reproduces my problem.
Another solution would be to modify the old player so that it supports multi-threading; I tried several approaches and couldn't get it working, which is when I found DigitalDJ/AudioStreamer, only to run into the problem described above.
This is the link to a sample app that doesn't have multi-threading: http://www.mediafire.com/?eb7a6a87e8tqcbc
If someone has a clue how to implement audio playback in the background, or how to solve the streaming server problem, I'd be grateful.
After a long time, and almost going crazy, I solved the problem by commenting out this section of code in AudioStreamer.m:
// hintForMIMEType
//
// Make a more informed guess on the file type based on the MIME type
//
// Parameters:
//    mimeType - the MIME type
//
// returns a file type hint that can be passed to the AudioFileStream
//
/*
+ (AudioFileTypeID)hintForMIMEType:(NSString *)mimeType
{
    AudioFileTypeID fileTypeHint = kAudioFileMP3Type;
    if ([mimeType isEqual:@"audio/mpeg"])
    {
        fileTypeHint = kAudioFileMP3Type;
    }
    else if ([mimeType isEqual:@"audio/x-wav"])
    {
        fileTypeHint = kAudioFileWAVEType;
    }
    else if ([mimeType isEqual:@"audio/x-aiff"])
    {
        fileTypeHint = kAudioFileAIFFType;
    }
    else if ([mimeType isEqual:@"audio/x-m4a"])
    {
        fileTypeHint = kAudioFileM4AType;
    }
    else if ([mimeType isEqual:@"audio/mp4"])
    {
        fileTypeHint = kAudioFileMPEG4Type;
    }
    else if ([mimeType isEqual:@"audio/x-caf"])
    {
        fileTypeHint = kAudioFileCAFType;
    }
    else if ([mimeType isEqual:@"audio/aac"] || [mimeType isEqual:@"audio/aacp"])
    {
        fileTypeHint = kAudioFileAAC_ADTSType;
    }
    return fileTypeHint;
}
*/
With this code commented out, the audio played without problems from my server.
I had problems connecting to an MP3 stream with AudioStreamer. The sample would work on the Simulator but not on the device, I think because the Simulator isn't an exact copy of an iOS device; on the Simulator it uses the QuickTime installed on the Mac.
For local MP3 files, use AVAudioPlayer.
For remote MP3 streams, use AVPlayer.
A good sample project is at
https://github.com/valvoline/CPStreamPlayer
Often remote streams take time to connect, or they time out; this sample shows when it's buffering.
Search GitHub for AVPlayer; there are a few samples.
CPStreamPlayer/AVPlayer supports redirects; in our case we had
http://stream.fireplayer.com/greyhound/dyn?action=stream.StreamMix&id=1785
but this redirected to a generated MP3 file/stream on Amazon:
http://s3.amazonaws.com/fireplayer_mp3/1785.mp3?AWSAccessKeyId=AKIAJAHV5HUV4TVRF5VA&Expires=1337595252&Signature=c%2FH%2FO9AACkovm%2BAhbWyD7E9Hb6A%3D
I would like to know how I can open an MP3 file from within a WebView; basically a link that points to an MP3 file, which would then open the standard media player. Is this possible? I know it is, because it works in the default web browser, so I'm wondering why I can't get it to work in a standard WebView. Any help would be much appreciated.
// In your WebViewClient subclass:
@Override
public boolean shouldOverrideUrlLoading(WebView view, String url) {
    if (url.endsWith(".mp3")) {
        Intent intent = new Intent(Intent.ACTION_VIEW);
        intent.setDataAndType(Uri.parse(url), "audio/*");
        view.getContext().startActivity(intent);
        return true;
    } else if (url.endsWith(".mp4") || url.endsWith(".3gp")) {
        Intent intent = new Intent(Intent.ACTION_VIEW);
        intent.setDataAndType(Uri.parse(url), "video/*");
        view.getContext().startActivity(intent);
        return true;
    } else {
        return super.shouldOverrideUrlLoading(view, url);
    }
}
Robbe's answer is correct; however, I ran into some headaches implementing this functionality. Note that you must pass a DIRECT LINK to an mp3: it can't be a magical URL that only ends up at an mp3 after several redirects, otherwise the loading wheel will spin and you will be prompted with a "Can not play the requested stream" message.
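If you don't control the URL and it only reaches the mp3 after redirects, one workaround is to resolve the redirects yourself before firing the intent. A rough sketch of that idea using HttpURLConnection (illustrative only; the helper name resolveRedirects and the hop limit are my own choices):
// Sketch: follow HTTP redirects manually to obtain the final, direct mp3 URL.
// Needs java.net.HttpURLConnection, java.net.URL, java.io.IOException.
private String resolveRedirects(String url) throws IOException {
    String current = url;
    for (int i = 0; i < 5; i++) { // cap the number of hops
        HttpURLConnection conn = (HttpURLConnection) new URL(current).openConnection();
        conn.setInstanceFollowRedirects(false);
        conn.setRequestMethod("HEAD");
        int code = conn.getResponseCode();
        String location = conn.getHeaderField("Location");
        conn.disconnect();
        if (code >= 300 && code < 400 && location != null) {
            current = new URL(new URL(current), location).toString(); // handle relative redirects
        } else {
            break; // no more redirects: this should be the direct URL
        }
    }
    return current;
}
You would call this from a background thread (network calls on the main thread throw NetworkOnMainThreadException) and then hand the resolved URL to Uri.parse(...) in the intent above.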