Flutter Desktop Windows: Play Audio from Buffer

I need to play audio to an audio device that the user can select in the app. So far I've looked at just_audio, but that doesn't seem to support "read from byte stream" on Windows yet. I've also looked at libwinmedia, but that is deprecated.
The audio is received as an ffi.Pointer to float samples, or alternatively as 16-bit integer samples. I just need an API, whether a plugin or a .h with a .dll, that lets me select an audio device and send the audio buffer to it.
Does anyone know if I've missed this functionality in the Dart packages, or of a simple C++ library that I can bind to with FFI?
Thanks, any help is appreciated.
I also looked at GStreamer, but I don't want the user to have to install the GStreamer runtime.
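One possible direction (not from the original thread, just a sketch): a small single-file C library such as miniaudio can be compiled into a .dll and bound with dart:ffi. The sketch below, assuming miniaudio.h is on the include path, enumerates playback devices and opens the selected one with a callback that pulls float samples from a caller-supplied buffer. The exported function names (audio_list_devices, audio_open_device, audio_write) are hypothetical; they only illustrate the kind of surface you might expose to Dart.

#define MINIAUDIO_IMPLEMENTATION
#include "miniaudio.h"
#include <string.h>

static ma_context g_context;
static ma_device  g_device;

/* Very small single-producer ring buffer fed from Dart via audio_write().
   Not thread-safe as written; a real implementation needs a lock-free ring buffer or a mutex. */
static float g_ring[48000 * 2];
static volatile size_t g_read = 0, g_write = 0;

static void data_callback(ma_device* dev, void* out, const void* in, ma_uint32 frames) {
    float* dst = (float*)out;
    ma_uint32 samples = frames * dev->playback.channels;
    for (ma_uint32 i = 0; i < samples; ++i) {
        if (g_read == g_write) { dst[i] = 0.0f; continue; }  /* underrun: output silence */
        dst[i] = g_ring[g_read];
        g_read = (g_read + 1) % (sizeof(g_ring) / sizeof(g_ring[0]));
    }
    (void)in;
}

/* Hypothetical exports for dart:ffi */

__declspec(dllexport) int audio_list_devices(char* names, int max_names, int name_len) {
    ma_device_info *playback, *capture; ma_uint32 playCount, capCount;
    if (ma_context_init(NULL, 0, NULL, &g_context) != MA_SUCCESS) return -1;
    if (ma_context_get_devices(&g_context, &playback, &playCount, &capture, &capCount) != MA_SUCCESS) return -1;
    for (ma_uint32 i = 0; i < playCount && (int)i < max_names; ++i) {
        strncpy(names + i * name_len, playback[i].name, name_len - 1);
        names[i * name_len + name_len - 1] = '\0';
    }
    return (int)playCount;
}

__declspec(dllexport) int audio_open_device(int index, int sample_rate, int channels) {
    /* Assumes audio_list_devices() was called first so g_context is initialised. */
    ma_device_info *playback, *capture; ma_uint32 playCount, capCount;
    if (ma_context_get_devices(&g_context, &playback, &playCount, &capture, &capCount) != MA_SUCCESS) return -1;
    ma_device_config cfg = ma_device_config_init(ma_device_type_playback);
    cfg.playback.pDeviceID = &playback[index].id;  /* the device picked in the Flutter UI   */
    cfg.playback.format    = ma_format_f32;        /* matches the float sample pointer      */
    cfg.playback.channels  = (ma_uint32)channels;
    cfg.sampleRate         = (ma_uint32)sample_rate;
    cfg.dataCallback       = data_callback;
    if (ma_device_init(&g_context, &cfg, &g_device) != MA_SUCCESS) return -1;
    return ma_device_start(&g_device) == MA_SUCCESS ? 0 : -1;
}

__declspec(dllexport) void audio_write(const float* samples, int count) {
    for (int i = 0; i < count; ++i) {
        g_ring[g_write] = samples[i];
        g_write = (g_write + 1) % (sizeof(g_ring) / sizeof(g_ring[0]));
    }
}

From Dart, these three functions could then be looked up with DynamicLibrary.open / lookupFunction and the ffi.Pointer<Float> passed straight to audio_write.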

Related

Using Flutter, how to merge an audio file in sync with a video (e.g. a camera recording)? Is there a specific package available in Flutter?

I have been searching for packages or any method for this particular problem. I googled but didn't find a solution.
So my issue is: I want functionality like the TikTok app, where a user can select any music audio and perform actions in sync with it, and I want the output as a single video file with the audio and video in sync. How can I achieve this? Is there a method or package available in Flutter?
I have implemented selecting the audio and the video recording feature, but I am stuck on merging the two. If we can use the FFmpeg package, how do I use it? Please explain, guys.
With FFmpeg you can do it by playing the audio, recording the video, and then merging the two files, like:
final FlutterFFmpeg _ffMpeg = FlutterFFmpeg();
_ffMpeg.execute("-i video.mp4 -i audio.mp4 -c copy output.mp4")
.then((return_code) => print("Return code $return_code"));
With FFmpeg you need to find the command that suits you best.
But...
Personally I think FFmpeg isn't a good choice:
It is heavy
You can't use it in commercial projects (I might be wrong, please correct me if I am)
What I suggest
You can record the video and have the audio file in Flutter; then use platform-specific code for the merge.
Even if you don't have experience in the platform-specific languages, I found these two libraries that could help:
iOS: https://github.com/dev-labs-bg/swift-video-generator
(does exactly what you need).
Android: https://github.com/israel-fl/bitmap2video
(I'm not sure if it works with videos, but it accepts bitmaps)

Chrome speech recognition WebKitSpeechRecognition() not accepting input of fake audio device (--use-file-for-fake-audio-capture) or audio file

I would like to use Chrome speech recognition WebKitSpeechRecognition() with an audio file as input, for testing purposes. I could use a virtual microphone, but that is really hacky and hard to automate; when I tested it, everything worked fine and the speech recognition converted my audio file to text. Now I wanted to use the following Chrome arguments instead:
--use-file-for-fake-audio-capture="C:/url/to/audio.wav"
--use-fake-device-for-media-stream
--use-fake-ui-for-media-stream
This worked fine on voice recorder sites, for example, and I could hear the audio file play when I replayed the recording. But for some reason, when I try to use this with Chrome's WebKitSpeechRecognition, it doesn't use the fake audio device but instead my actual microphone. Is there any way I can fix this, or otherwise test my audio files on the website? I am using C#, and I couldn't really find any useful info on automatically adding, managing and configuring virtual audio devices. What approaches could I take?
Thanks in advance.
Well, it turns out this is not possible because Chrome and Google check whether you are using a fake mic etc.; they do this specifically to prevent this kind of behavior, so people cannot get free speech-to-text. There is a paid API available from Google (the first 60 minutes per month are free).

How to play all video formats in a Chrome Packaged App?

I'm writing a Chrome Packaged App that needs to be able to play a lot of local video files. I can use the <video> tag to play files encoded in H.264 and MP3, but not much else. I'll require playback of at least DivX video and AC3 audio. Is there any way to do this using the HTML5 platform, or otherwise using some kind of plugin?
There are alternatives, but in my opinion the final solution is not going to be very good.
1 - You can try to use a plug-in, for example:
VLC Plug-in - sorry, I don't have enough reputation to post more than 2 links :(
DivX Web Player - sorry, I don't have enough reputation to post more than 2 links :(
But then you need to rely on the user installing the plug-in. For VLC, the plug-in is not compatible with the latest versions of Mac OS X.
2 - Encode to H.264 or VP8 on a server with ffmpeg, or use a cloud video provider.
3 - Encode on the client side using JavaScript! There is a port of ffmpeg to JavaScript (http://bgrins.github.io/videoconverter.js/). I didn't try this method with large files.
4 - Encode on the client side using a Native Client component (https://developers.google.com/native-client/dev/). But that seems a daunting task to me.
If you are going to go with the first option, make sure that your audience will install/configure the plug-in and that their OS is supported.
VLC ported to NaCL would be a great first step.
According to a poster on https://forum.videolan.org/viewtopic.php?f=5&t=107178, libVLC has been ported to NaCL, but I am not familiar with VLC internals so I could not say how far this gets you in terms of being able to decode different streams.

Video streaming solutions

I am attempting to stream a video, in a format Unity3D can access, like an MJPG. I have gone through several possible solutions, including GStreamer (only does client side as far as I could tell from the examples), Yawcam (I couldn't find a way to access the image directly), and Silverlight (due to simply not being able to find how the heck webcam streaming was doable). I am currently just looking for any more methods of getting video over from one side to the other. Could I possibly simply read the images into a byte array and send them over a socket? Maybe I missed something in the previous three possible solutions?
If you are looking to stream video from a server, then you can use Ogg encoding + WWW.movie to map it to a texture, assuming you have a Pro license, as I think this is a Pro-only feature. If it is a local file, either bundled with the app or in an external folder, we use the brilliant AVPro Windows Media or AVPro QuickTime. MJPEG does offer super-smooth scrubbing with AVPro but generates enormous files. Definitely not ideal for streaming or even download!
Finally RenderHead also has a Live Camera capture plugin that could meet your needs.

“Hook” libMMS to FFmpeg for iPhone Streaming

These days, I was researching the software architecture for iPhone streaming (based on the MMS protocol).
As we know, in order to play back an MMS audio stream, we should call libMMS to read WMA stream data from the remote media server, then call FFmpeg to decode the stream data from WMA format into a PCM data buffer, and finally enqueue the PCM data buffer into the iPhone's AudioQueue to generate the actual sound.
The introduction above just describes the working process of iPhone streaming. If we only need to implement this simple functionality, it is not difficult: just follow the steps above, calling libMMS, FFmpeg and AudioQueue one after another, and we can achieve the streaming function. Actually, I implemented that code last week.
But what I need is not only a simple streaming function! I need a software architecture that makes FFmpeg access libMMS just like it accesses the local filesystem!
Does anybody know how to hook the libMMS interfaces like mms_read/mms_seek onto FFmpeg's filesystem interfaces like av_read_frame/av_seek_frame?
I think I have to answer my own question again this time…
After several weeks of research and debugging, I finally got to the truth.
Actually, we don't need to "hook" libMMS onto FFmpeg. Why? Because FFmpeg already has its own native MMS protocol handling module, "mms_protocol" (see mms_protocol.c in FFmpeg).
All we need to do is configure FFmpeg to enable the mms module like this (see config.h in FFmpeg):
#define ENABLE_MMS_PROTOCOL 1
#define CONFIG_MMS_PROTOCOL 1
After this configuration, FFmpeg will add the mms protocol to its protocol list (which already contains the "local file system" protocol). As a result, FFmpeg is able to treat an "mms://hostserver/abc" media file like a local media file. Therefore, we can still open and read the mms media file using:
av_open_input_file();
av_read_frame();
just like we did with local media files before!
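For completeness, here is a minimal sketch (not from the original answer) of what that looks like with the old FFmpeg demuxing API the poster is using; av_open_input_file, av_find_stream_info, av_free_packet and av_close_input_file have since been replaced by avformat_* equivalents in newer FFmpeg releases:

#include <libavformat/avformat.h>
#include <stdio.h>

int main(void) {
    AVFormatContext *fmt = NULL;
    AVPacket pkt;

    av_register_all();  /* registers all demuxers and protocols, including mms */

    /* With the mms protocol enabled, the mms:// URL is handled like a local path. */
    if (av_open_input_file(&fmt, "mms://hostserver/abc", NULL, 0, NULL) != 0) {
        fprintf(stderr, "could not open mms stream\n");
        return 1;
    }
    if (av_find_stream_info(fmt) < 0) {
        fprintf(stderr, "could not read stream info\n");
        return 1;
    }

    /* Read demuxed packets (e.g. WMA audio) and hand them to the decoder / AudioQueue. */
    while (av_read_frame(fmt, &pkt) >= 0) {
        /* decode pkt.data / pkt.size here ... */
        av_free_packet(&pkt);
    }

    av_close_input_file(fmt);
    return 0;
}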
By the way, in my FFmpeg version there are still many bugs in the libavformat module for processing the mms protocol. It took me one week to debug them; however, I think it will take much less time for someone as smart as you :-)