I am generating a video stream in real time, and I've got it as a series of bitmaps in memory.
I'd like to stream these bitmaps over the network using libvlc, but I wasn't able to find the right functions in the API (all streaming functions expect a file or other source).
I even thought of emulating a capture device, but that seemed too convoluted to be true, so I'd rather ask.
My question is, what do I now have to do with these bitmaps to be able to use libvlc to stream them?
I found a question which appears to be solving the same issue.
Another suggestion, with significantly less overhead, is to "emulate" a file with named pipes, i.e. FIFOs.
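To make the FIFO idea concrete, here is a rough, untested sketch; the frame size, pixel format, pipe path, and the VLC command line are all assumptions to adapt to your actual bitmaps:

    /* Sketch: write raw frames into a named pipe and let VLC read it as a file.
     * Frame size, pixel format, and the pipe path are hypothetical.
     *
     * VLC could then be started roughly like:
     *   vlc --demux rawvideo --rawvid-fps 25 --rawvid-width 640 \
     *       --rawvid-height 480 --rawvid-chroma RV24 /tmp/frames.fifo \
     *       --sout '#transcode{vcodec=h264}:std{access=http,mux=ts,dst=:8080/stream}'
     */
    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define WIDTH  640
    #define HEIGHT 480
    #define BPP    3                        /* 24-bit RGB */

    int main(void)
    {
        const char *path = "/tmp/frames.fifo";
        static uint8_t frame[WIDTH * HEIGHT * BPP];
        int fd;

        mkfifo(path, 0666);                 /* create the pipe if it doesn't exist */

        fd = open(path, O_WRONLY);          /* blocks until VLC opens the other end */
        if (fd < 0)
            return 1;

        for (;;) {
            /* Fill 'frame' with the next bitmap generated by your application;
             * here it is just cleared as a placeholder. */
            memset(frame, 0, sizeof frame);

            if (write(fd, frame, sizeof frame) < 0)
                break;                      /* reader went away */
        }

        close(fd);
        return 0;
    }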
I've been working with the video_player package from Flutter. I'm mostly testing videos taken from a URL. My code is very similar to the examples.
Everywhere I read about it, I see that this library does not support caching of videos. But what exactly does this mean? What exactly is happening behind the scenes, and how would the behaviour change if caching were actually implemented? How is this different from buffering? Are the video files simply downloaded to our device?
If yes, then where are those files kept?
One additional question is, how can I check the network consumption caused by such a connection? I've tried using Dev Tools but the network tab is always empty.
One last thing: is it possible to pre-initialize the next videos, so that when we want to switch between them, they are already partially pre-loaded?
You can use a package that helps you manage caching: flutter_cache_manager
https://pub.dev/packages/flutter_cache_manager
You can use this in combination with video_player. However, you would have to download the whole file first in order to then retrieve it for video_player to consume.
An idea would be to stream the video and also download a copy locally. This however would consume more data than just downloading and caching the video first, then playing it locally.
As for how to check network consumption, I am not sure.
Starting with the theory:
1. A cache is a high-speed storage area, while a buffer is a normal storage area in RAM for temporary storage.
2. A cache is made from static RAM, which is faster than the slower dynamic RAM used for a buffer.
3. A buffer is mostly used for input/output processes, while a cache is used when reading from and writing to the disk.
4. A cache can also be a section of the disk, while a buffer is only a section of RAM.
5. A buffer can be used in keyboards to edit typing mistakes, while a cache cannot.
When it comes to video buffering, just look at the YouTube app. You can see the buffer being built as the grey line grows ahead of the red line. Information stored this way is mostly not accessible at all, since Android uses a combination of RAM allocation for both caching and buffering as it sees fit for the currently active process.
Technically you could try pre-loading different videos by starting and pausing all of them at once, but I cannot imagine how much tampering with system memory management that would take; even YouTube doesn't work like that.
I am trying to perform Time Difference of Arrival in real-time using the PS3 Eye. Since it has a built-in 4 microphone array, I've successfully rearranged the array into a square array and cross-correlated the signals using MATLAB to obtain a relatively accurate TDOA algorithm. However, so far I've been recording the signal, saving the files (4 individual files for each microphone in the array), and then feeding those files into MATLAB to read after-the-fact.
My problem is: MATLAB doesn't recognize the PS3 Eye's microphones separately; it only recognizes it as a whole. So far, Audacity is one of the few programs that actually works well in doing so, but I am inexperienced with the program and don't know its real-time capabilities. Anyone have suggestions as to how I can perform real-time signal analysis in this manner? If using something else besides the PS3 Eye would work better, then I am open to suggestions. Thanks.
I know very little about MATLAB or the PS3 Eye, but various hardware microphones allow you to capture a single audio stream containing multiple (typically 2) channels. The audio data will come to you in frames, each frame containing a single sample for each channel.
I'm not really sure what you mean by "recognizes as a whole", but I assume you mean MATLAB is mixing the channels so that the device only produces one usable channel. If you can capture the channels to file, and they all originate from the same device (i.e. hardware clock), you should be fine except that this solution is not "realtime".
There is a similar discussion on Sound Exchange which ends up suggesting the Microcone. There are a variety of other products, from microphone arrays to digital mixers for analog mic sources, also, but your question seems to be mainly about how to get the data with software.
In short, make sure you are seeing a single device with multiple channels. This will ensure each channel uses the same hardware clock and will prevent drift issues.
This is just a wild guess, as I don't know about MATLAB's real-time input options.
Maybe try Reaper (http://www.reaper.fm/); it has great multi-track capabilities and you can extend it (I think the scripting language is Python). Nice documentation and third-party contributions, OSC and ReWire support. So maybe you could think of routing the audio to Reaper, doing some data normalization there in Python, and then routing the data to MATLAB.
Or you could use Pure Data, which is open source and very open, with lots of patches (basic processing units) that you could probably put together.
HTH
BTW, I am in no way affiliated with Reaper or PD.
EDIT: you might also want to consider SuperCollider (http://supercollider.github.io/) or ChucK (http://chuck.cs.princeton.edu/)
Here's a lead, but I haven't been able to test it, yet.
On Windows, you can record a single 4-track Ogg audio file from the Eye with Audacity (using the WASAPI driver selection).
As of 23 Jul 2014, the pa-wavplay MEX for 32-bit and 64-bit supports WASAPI. You will have to rebuild the PortAudio library to select the WASAPI interface, as described here, in order to get all four tracks into MATLAB (on Windows).
Sadly, if you're not on Windows, I don't have any suggestions. Adjusting the PortAudio build might help, but I only know that WASAPI works with the Eye.
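If you end up working at the PortAudio level directly, a rough sketch of opening the Eye as a single 4-channel device (so all channels share one hardware clock) is below; the device index, sample rate, and block size are assumptions, and on Windows you would need the WASAPI-enabled build discussed above:

    /* Sketch: capture all four PS3 Eye channels as one interleaved stream
     * with PortAudio's blocking API. Device index, sample rate, and buffer
     * size are assumptions; pick the Eye's index via Pa_GetDeviceInfo(). */
    #include <portaudio.h>
    #include <stdio.h>

    #define CHANNELS    4
    #define SAMPLE_RATE 16000
    #define FRAMES      512

    int main(void)
    {
        PaStream *stream;
        PaStreamParameters in;
        float buf[FRAMES * CHANNELS];   /* interleaved: ch0, ch1, ch2, ch3, ch0, ... */

        if (Pa_Initialize() != paNoError)
            return 1;

        in.device = Pa_GetDefaultInputDevice();   /* replace with the Eye's device index */
        in.channelCount = CHANNELS;
        in.sampleFormat = paFloat32;
        in.suggestedLatency = Pa_GetDeviceInfo(in.device)->defaultLowInputLatency;
        in.hostApiSpecificStreamInfo = NULL;

        if (Pa_OpenStream(&stream, &in, NULL, SAMPLE_RATE, FRAMES,
                          paClipOff, NULL, NULL) != paNoError)
            return 1;

        Pa_StartStream(stream);
        for (int i = 0; i < 100; i++) {           /* read ~100 blocks, then stop */
            Pa_ReadStream(stream, buf, FRAMES);
            /* buf now holds FRAMES samples per channel; hand them to your TDOA code */
        }
        Pa_StopStream(stream);
        Pa_CloseStream(stream);
        Pa_Terminate();
        return 0;
    }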
Once a month the MP3 stream messes up, and the only way to tell it has messed up is by listening to it as it streams. Is there a script, program, or tool I can use to monitor the live stream at a given URL and raise some kind of flag when it corrupts?
Normally it plays a song or some other music, but once a month, every month, at random, the stream corrupts and starts playing random chipmunk-like trash audio. Any ideas on this? I am just getting started at this with no idea at all.
Typically, this will happen when you play a track of the wrong sample rate.
Most (all that I've seen) SHOUTcast/Icecast encoders (going straight from files) will compress to MP3 just fine, but assume a fixed sample rate of whatever they are configured for. Typically this will be 44.1 kHz. If you drop in a 48 kHz track, or a 22.05 kHz track, it will play at the wrong speed and cause all sorts of random issues with the stream.
The problem is easy enough to verify. Simply create a file with a different sample rate and test it. I suspect you will reproduce the problem. If that is the case, to my knowledge there is no way to detect it, since your stream isn't actually corrupt... it just sounds incorrect. You will have to scan all of your files for sample rate. FFmpeg in a script should be able to help you with that.
Now, if the problem actually is a corrupt MP3 stream, then you have problems on your encoding side. I suspect simply swapping out whatever DLL or module you're using with a recent stable version of LAME will help.
To detect a corrupt MP3 stream, your encoder must be using CRC. If you enable it, you should be able to read through the headers of each frame to find the CRC, and then run it on the audio data. In the event you get an error (or several frames with errors), you can then trigger a warning.
You can find information on the MP3 stream header here:
http://www.mp3-tech.org/programmer/frame_header.html
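To give a concrete (and hedged) idea of what reading those headers looks like in C, here is a sketch that scans a buffer for frame syncs, reports the sample rate, and checks the protection bit that indicates a CRC follows; it does not implement the CRC-16 verification itself:

    /* Sketch: scan an MP3 byte stream for frame headers, report the sample
     * rate, and note whether the protection (CRC) bit is set. Actually
     * validating the CRC-16 over the side information is not shown. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* MPEG-1 sample-rate index -> Hz (index 3 is reserved). */
    static const int mpeg1_rates[4] = { 44100, 48000, 32000, 0 };

    static void scan_mp3(const uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i + 4 <= len; i++) {
            /* Frame sync: 11 set bits at the start of the header. */
            if (buf[i] != 0xFF || (buf[i + 1] & 0xE0) != 0xE0)
                continue;

            int version_bits  = (buf[i + 1] >> 3) & 0x03;   /* 3 = MPEG-1 */
            int crc_protected = ((buf[i + 1] & 0x01) == 0); /* 0 = a 16-bit CRC follows the header */
            int rate_index    = (buf[i + 2] >> 2) & 0x03;

            if (version_bits == 3 && rate_index != 3) {
                printf("frame at offset %zu: %d Hz, CRC %s\n",
                       i, mpeg1_rates[rate_index],
                       crc_protected ? "present" : "absent");
            }
        }
    }

    int main(void)
    {
        static uint8_t buf[1 << 20];    /* scan up to the first 1 MiB from stdin */
        size_t n = fread(buf, 1, sizeof buf, stdin);
        scan_mp3(buf, n);
        return 0;
    }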
I am attempting to stream video using Apple's http streaming technology. I am beginning to suspect that either the player on the iPhone or the Apple tools used to segment the videos is buggy.
http://developer.apple.com/iphone/library/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/Introduction/Introduction.html
I am getting really terrible behavior. The app never seems to do a good job of choosing which quality stream to use. It always starts at the lowest quality and often will jump to the highest very suddenly and not be able to keep up. I have tried various ways of altering the bandwidth settings to test it.
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=5000
3/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=10000
4/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=459319
5/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=90268800
I have used very large and small settings to make certain streams the obvious choice, but it doesn't matter. Obviously I have also used the default values set by Apple's variantplaylistcreator tool. It always starts at the lowest quality and will jump to seemingly random other qualities.
Anyone know what's going on with this?
Have you tried the sample reference streams provided at the bottom of the page here? Apple tests against these, so if those work, you know the problem is on your end.
I'm trying to read a real-time HTTP video stream (I've set one up using VLC and my webcam) using libavcodec from FFmpeg. So far I've been able to read an MPEG file fine, but when I pass a URL instead of a file name, it hangs (I assume it's waiting for the end of the file).
Anyone have experience/ideas on how to do this? I'm also open to alternatives if people have other suggestions.
The end goal is to do live video streaming on the iPhone. Apple's HTTP Live Streaming has too much lag for what I need, but that's for another post :)
Any help is greatly appreciated!
If you aren't using Apple's way, you can't.
I have to admit I don't quite understand what you want to achieve; however, based on my interpretation of the question, it seems to be related to segmenting the video.
The segmenting can be achieved using ffmpeg and libavcodec (look for an example here; the key line is the one with packet.flags). Just remember that the segment length (in time) depends on the keyframe interval (for H.264 at least). If you want an example of a full segmented streaming solution that works (most of the time), check here.
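As a rough illustration of the packet.flags point (not the example linked above), a minimal libavformat reader against a recent FFmpeg might look like this; the URL is a hypothetical placeholder:

    /* Sketch: open a network input with libavformat, read packets, and use
     * packet flags to spot keyframes, which is where segments are usually cut. */
    #include <libavformat/avformat.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const char *url = argc > 1 ? argv[1]
                                   : "http://localhost:8080/stream.ts"; /* hypothetical URL */
        AVFormatContext *fmt = NULL;
        AVPacket *pkt;

        avformat_network_init();                    /* needed for http:// inputs */

        if (avformat_open_input(&fmt, url, NULL, NULL) < 0)
            return 1;
        if (avformat_find_stream_info(fmt, NULL) < 0)
            return 1;

        pkt = av_packet_alloc();
        while (av_read_frame(fmt, pkt) >= 0) {
            if (pkt->flags & AV_PKT_FLAG_KEY) {
                /* A keyframe: a natural point to close the current segment
                 * and start the next one. */
                printf("keyframe on stream %d at pts %lld\n",
                       pkt->stream_index, (long long)pkt->pts);
            }
            av_packet_unref(pkt);
        }

        av_packet_free(&pkt);
        avformat_close_input(&fmt);
        return 0;
    }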
Otherwise, you have to dig into the codec and create the transport stream manually. The MPEG2-TS, which is what iOS supports, can be a little bit difficult sometimes, but it is not too bad. Good luck!
You can use libcurl to download the HTTP segment files.
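For instance, a minimal libcurl sketch for fetching a single segment could look like the following; the URL and local filename are placeholders:

    /* Sketch: download one segment file over HTTP with libcurl.
     * The URL and output filename are hypothetical. */
    #include <curl/curl.h>
    #include <stdio.h>

    int main(void)
    {
        CURL *curl;
        CURLcode res;
        FILE *out = fopen("segment0.ts", "wb");     /* hypothetical local name */

        if (!out)
            return 1;

        curl_global_init(CURL_GLOBAL_DEFAULT);
        curl = curl_easy_init();
        if (!curl)
            return 1;

        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/segments/segment0.ts");
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);   /* default write callback fwrite()s into this FILE* */
        curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L);  /* treat HTTP errors as failures */

        res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "download failed: %s\n", curl_easy_strerror(res));

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        fclose(out);
        return 0;
    }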