I found the AOSP source code from Google and also retrieved the vendor's info from https://github.com/sonyxperiadev/device-sony-sgp321
Sony added its Bravia Engine library to AOSP to improve image and video quality. It can either be called in libstagefright's AwesomeLocalRenderer or at the decoding phase, when OMX addPlugin is called.
I searched both places; the code there is the same as the stock AOSP source. I would like to know how Sony uses its BE library.
The Bravia Engine is mainly employed for video/image post-processing prior to rendering in the framework. There is an interesting link at http://developer.sonymobile.com/2012/06/21/mobile-bravia-engine-explained-video/.
In AOSP, I presume the user settings from the menu are read and the corresponding filtering is then enabled/applied in the SurfaceFlinger or HwComposer parts of the framework. Another link of interest could be: http://blog.gsmarena.com/heres-what-sony-ericsson-mobile-bravia-engine-really-does-review/
EDIT: Interaction between Video Decoder - AwesomePlayer - HwComposer
The following is a summary of interactions between the different actors in the playback and composition pipeline.
AwesomePlayer acts as a sink to the OMX Video Decoder. Hence, it will continuously poll for a new frame that could be available for rendering and processing.
When the OMX video decoder completes decoding a frame, the FillBufferDone callback of the codec will unblock a read invoked by AwesomePlayer.
Once the frame is available, it is subjected to the A/V synchronization logic by the AwesomePlayer module and pushed into SurfaceTexture via the render call. All the aforementioned steps are performed as part of the AwesomePlayer::onVideoEvent method.
The render call will queue the buffer. This SurfaceTexture is one of the layers available for composition by SurfaceFlinger.
When a new layer is available, through a series of steps, SurfaceFlinger will invoke the HwComposer to perform the composition of all the related layers.
AOSP only provides a template or an API for the HwComposer, the actual implementation of which is left to the vendor.
My guess is that all vendor-specific binaries are just implementing the standard interface defined by Android/OMX.
These engines are compiled into shared objects which can be found in the /system/vendor directory.
The Android system just has to look in that directory and load the necessary shared objects.
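A rough sketch of that loading mechanism is below. The library name and factory symbol follow the stock libstagefright OMXMaster code, but treat them as assumptions: they can differ between Android releases and vendors.

```c
/*
 * Illustrative sketch of how libstagefright picks up a vendor OMX plugin.
 * Library name and factory symbol are taken from the AOSP OMXMaster code
 * and may differ between releases/vendors.
 */
#include <dlfcn.h>
#include <stdio.h>

void *load_vendor_omx_plugin(void)
{
    /* Vendor-specific shared objects live under /system/vendor (or /vendor). */
    void *handle = dlopen("libstagefrighthw.so", RTLD_NOW);
    if (handle == NULL) {
        fprintf(stderr, "no vendor plugin: %s\n", dlerror());
        return NULL;
    }

    /* Factory function that returns the vendor's OMXPluginBase implementation. */
    void *factory = dlsym(handle, "createOMXPlugin");
    if (factory == NULL) {
        fprintf(stderr, "factory symbol missing: %s\n", dlerror());
        dlclose(handle);
        return NULL;
    }
    return factory; /* caller casts this to the plugin factory type and calls it */
}
```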
I've been working with the video_player package from Flutter. I'm mostly testing videos loaded from a URL. My code is very similar to the examples.
Everywhere I read about it, I see that this library does not support caching of videos. But what exactly does this mean? What exactly is happening behind the scenes, and how would the behaviour change if caching were actually implemented? How is this different from buffering? Are the video files simply downloaded to our device? If yes, then where are those files kept?
One additional question: how can I check the network consumption caused by such a connection? I've tried using DevTools, but the Network tab is always empty.
One last thing: is it possible to pre-initialize the next videos, so that when we want to switch between them, they are already partially pre-loaded?
You can use a package that helps you manage caching: flutter_cache_manager
https://pub.dev/packages/flutter_cache_manager
You can use this in combination with video_player. However, you would have to download the whole file first to then be able to retrieve it for video_player to consume.
An idea would be to stream the video and also download a copy locally. This, however, would consume more data than just downloading and caching the video first, then playing it locally.
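A minimal sketch of the download-then-play approach (this assumes flutter_cache_manager's DefaultCacheManager and video_player's file constructor; names are only illustrative):

```dart
import 'package:flutter_cache_manager/flutter_cache_manager.dart';
import 'package:video_player/video_player.dart';

Future<VideoPlayerController> loadCachedVideo(String url) async {
  // Downloads the file on first use, returns the cached copy afterwards.
  final file = await DefaultCacheManager().getSingleFile(url);

  // Play the local file instead of streaming it again from the network.
  final controller = VideoPlayerController.file(file);
  await controller.initialize();
  return controller;
}
```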
As for how to check network consumption, I am not sure.
Starting with the theory:
1. A cache is a high-speed storage area, while a buffer is a normal storage area in RAM for temporary storage.
2. A cache is made from static RAM, which is faster than the slower dynamic RAM used for a buffer.
3. A buffer is mostly used for input/output processes, while a cache is used during reading from and writing to the disk.
4. A cache can also be a section of the disk, while a buffer is only a section of RAM.
5. A buffer can be used in keyboards to edit typing mistakes, while a cache cannot.
When it comes to video buffering, just look at the YouTube app. You can see the buffer being built as the grey line grows ahead of the red line. Information stored this way is mostly not accessible at all, as Android uses a combination of RAM allocation for both caching and buffering as it sees fit for the currently active process.
Technically you could try pre-loading different videos by starting and pausing all of them at once, but I cannot imagine how much tampering with system memory control that would take; even YouTube doesn't work like that.
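If you do want to experiment with that, a rough sketch would be to create and initialize the controllers for the upcoming videos ahead of time. This assumes the video_player API (newer versions replace the .network constructor with .networkUrl), and how much is actually buffered up front is decided by the platform player, not by this code:

```dart
import 'package:video_player/video_player.dart';

// Create controllers for the next videos and initialize them up front,
// so switching only needs play() on an already-initialized controller.
Future<List<VideoPlayerController>> preInitialize(List<String> urls) async {
  final controllers =
      urls.map((url) => VideoPlayerController.network(url)).toList();
  await Future.wait(controllers.map((c) => c.initialize()));
  return controllers;
}
```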
I am having an issue where my audio engine stops working (or experiences dropouts) after being interrupted by a phone call, a change of audio output device, or another system audio event. I'd like to simply re-initialize my audio classes after such an event.
I am using a MultiProvider to provide the audio service throughout the app. I have a couple of models that keep track of settings/state and could restore the state and the expected behaviour after an interruption.
I considered doing something similar to the solution described here
However, my models and audio service are all provided at the same level with MultiProvider, so restarting the widget tree will restart both.
I basically want to re-create my audio service with a fresh object. Any suggestions as to how to approach "restarting" a provider-provided service?
I'm using an STM32F103, and in my program I need to save some bytes to the internal flash memory. But as far as I know, I have to erase a whole page to write to it, which takes time.
This delay causes my display to blink.
Can anybody help me save my data without consuming so much time?
Here is a list that may help:
1- MCU: STM32F103
2- IDE: Keil µVision
3- Using the HAL driver provided by STM32CubeMX
4- Sample data to save in flash: {0x53, 0xa0, 0x01, 0x54}
In the link below, you can find the code that I'm using.
FLASH_PAGE for Keil
The code you provided doesn't seem to be implemented well. It basically does two things each time you initiate a write operation:
1. Erase the page (this is the part that takes time).
2. Starting from the given pointer, write until it hits a zero.
This is a very inefficient way of using the flash.
Probably the simplest and most well-known way is to use the method described in ST's AN2594 (EEPROM emulation), although it has some limitations.
Still, at some point a page erase will be necessary regardless of the method you use, and there is no way to avoid some delay unless your MCU supports dual flash banks (the STM32F103 doesn't have this feature). You need to plan the timing of flash writes and display refreshes accordingly. If you need periodic writes to the flash, there is probably some high-level error in your design.
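For reference, here is a minimal sketch of a page erase followed by half-word programming with the HAL. The page address is an assumption for a medium-density F103 with 1 KB pages; adjust it to your part. The erase call is where the tens-of-milliseconds delay comes from.

```c
#include "stm32f1xx_hal.h"

/* Assumed location: last 1 KB page of a 64 KB medium-density F103. */
#define SETTINGS_PAGE_ADDR  0x0800FC00U

/* Erase the page once, then program the data as 16-bit half-words. */
HAL_StatusTypeDef save_settings(const uint16_t *data, uint32_t count)
{
    FLASH_EraseInitTypeDef erase = {
        .TypeErase   = FLASH_TYPEERASE_PAGES,
        .PageAddress = SETTINGS_PAGE_ADDR,
        .NbPages     = 1,
    };
    uint32_t page_error = 0;
    HAL_StatusTypeDef status;

    HAL_FLASH_Unlock();

    /* The erase is the slow part; the half-word writes are fast. */
    status = HAL_FLASHEx_Erase(&erase, &page_error);

    for (uint32_t i = 0; status == HAL_OK && i < count; i++) {
        status = HAL_FLASH_Program(FLASH_TYPEPROGRAM_HALFWORD,
                                   SETTINGS_PAGE_ADDR + 2U * i,
                                   data[i]);
    }

    HAL_FLASH_Lock();
    return status;
}
```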
To solve this problem, I used another library that ST itself provides. You have to include "eeprom.h" in your project and then add "eeprom.c" to it. You can easily find these files on the Internet.
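If you go that route, usage looks roughly like this. The function names follow ST's EEPROM emulation example (AN2594/STM32CubeF1); the virtual-address table and the exact init sequence are assumptions that may differ slightly between versions of the files you download.

```c
#include "stm32f1xx_hal.h"
#include "eeprom.h"

/* Virtual addresses for the emulated variables; in ST's example this table
 * is defined by the application. The values here are arbitrary. */
uint16_t VirtAddVarTab[NB_OF_VAR] = {0x5555, 0x6666};

void store_sample_bytes(void)
{
    /* The sample data {0x53, 0xa0, 0x01, 0x54} packed into two half-words. */
    uint16_t words[2] = {0xa053, 0x5401};
    uint16_t readback = 0;

    HAL_FLASH_Unlock();   /* flash must be unlocked before EEPROM-emulation calls */
    EE_Init();            /* sets up / repairs the emulation pages                */

    EE_WriteVariable(VirtAddVarTab[0], words[0]);
    EE_WriteVariable(VirtAddVarTab[1], words[1]);

    EE_ReadVariable(VirtAddVarTab[0], &readback);
    HAL_FLASH_Lock();
}
```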
In Flutter, there are three types of platform channels, and I want to know about the difference between them.
These channels are used to communicate between native code (plugins or native code inside of your project) and the Flutter framework.
MethodChannel
A MethodChannel is used for "communicating with platform plugins using asynchronous method calls". This means that you use this channel to invoke methods on the native side, which can return a value, and vice versa.
You can, e.g., call a method that retrieves the device name this way.
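For example, a minimal Dart-side sketch (the channel and method names are only illustrative and must match whatever the native side registers):

```dart
import 'package:flutter/services.dart';

// Channel and method names are examples; they must match what the
// native (Kotlin/Java or Swift/Obj-C) side registers.
const platform = MethodChannel('samples.flutter.dev/battery');

Future<int?> getBatteryLevel() async {
  try {
    // Asynchronously invokes 'getBatteryLevel' implemented on the native side.
    final int level = await platform.invokeMethod('getBatteryLevel');
    return level;
  } on PlatformException catch (e) {
    print('Failed to get battery level: ${e.message}');
    return null;
  }
}
```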
EventChannel
An EventChannel is used to stream data. This results in having a Stream on the Dart side of things and being able to feed that stream from the native side.
This is useful if you want to send data every time a particular event occurs, e.g. when the Wi-Fi connection of a device changes.
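A minimal Dart-side sketch (the channel name is illustrative; the native side has to register a stream handler under the same name and emit events into it):

```dart
import 'package:flutter/services.dart';

// Channel name is illustrative; the native side must register a stream
// handler under the same name.
const connectivityChannel = EventChannel('samples.flutter.dev/connectivity');

void listenForConnectivityChanges() {
  connectivityChannel.receiveBroadcastStream().listen(
    (dynamic event) => print('Connectivity changed: $event'),
    onError: (dynamic error) => print('Stream error: $error'),
  );
}
```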
BasicMessageChannel
This is probably not something you will want to use. BasicMessageChannel is used to encode and decode messages using a specified codec.
An example of this would be working with JSON or binary data. It is just a simpler kind of channel because your data has a clear type (codec) and you will not be sending named methods with multiple parameters, etc.
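A minimal sketch using a StringCodec (the channel name is illustrative):

```dart
import 'package:flutter/services.dart';

// Channel name is illustrative; StringCodec simply passes strings through.
const messages =
    BasicMessageChannel<String>('samples.flutter.dev/messages', StringCodec());

Future<void> talkToNativeSide() async {
  // Handle messages sent from the platform to Dart.
  messages.setMessageHandler((String? message) async {
    print('Received from native: $message');
    return 'ack';
  });

  // Send a message from Dart to the platform and await the reply.
  final String? reply = await messages.send('hello');
  print('Native replied: $reply');
}
```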
Here is a link to a good explanation: https://medium.com/flutter-io/flutter-platform-channels-ce7f540a104e
Basically there are two main types:
Method Channels: designed for invoking named pieces of code across Dart and Java/Kotlin or Objective-C/Swift (from Flutter to the platform).
Event Channels: a specialized platform channel intended for the use case of exposing platform events to Flutter as a Dart stream (from the platform to Flutter).
@creativecreatorormaybenot's answer clears things up; let me add a bit more to this.
Method Channel
This is more like an RPC call. You invoke a method from your Flutter app on the native side, the native code does something and finally responds with a success or an error. The call could be to get the current battery status, network information, or temperature data. Once the native side has responded, it can no longer send more information until the next call.
Method Channel provides platform communication using asynchronous method calls.
Note: If desired, method calls can also be sent in the reverse direction, with the platform acting as a client to methods implemented in Dart.
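A minimal sketch of that reverse direction (the channel and method names are illustrative):

```dart
import 'package:flutter/services.dart';

// Illustrative channel name; the platform side invokes methods on this
// channel and the handler below runs in Dart.
const callbacks = MethodChannel('samples.flutter.dev/callbacks');

void registerDartHandler() {
  callbacks.setMethodCallHandler((MethodCall call) async {
    switch (call.method) {
      case 'onNativeEvent':
        print('Platform called Dart with: ${call.arguments}');
        return true; // value returned to the platform-side caller
      default:
        throw MissingPluginException('Unknown method ${call.method}');
    }
  });
}
```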
Event Channel
This is more like reactive programming, where platform communication happens over asynchronous event streams. These events could be anything that you need streamed to your Flutter application: data flowing from the native code to the Flutter app, like continuously updating BLE or Wi-Fi scan results, accelerometer and gyro readings, or even periodic status updates from intensive data collection.
Basic Message Channel
It provides basic messaging services similar to BinaryMessages, but with pluggable message codecs in support of sending strings or semi-structured messages. Messages are encoded into binary before being sent, and binary messages received are decoded into Dart values. The MessageCodec used must be compatible with the one used by the platform plugin.
What is the best way to transfer binary data from a plugin to the browser?
We want to play a YUV buffer received from the network in a browser tab.
Currently I am converting it to base64 and passing it via a callback, but it is not efficient and I am seeing the issues below:
1) CPU and memory usage go up.
2) Callback events are not delivered when we change the browser tab; later, all events are delivered in one shot when moving back to our tab.
I would also like to know if there is any way we can draw a YUV frame directly in the browser using the plugin thread itself.
Thanks in advance.
NPAPI has been removed from most major browsers... the last holdout, Safari, will be removing it as of macOS Mojave. That being the case, don't expect any updates of any kind to the spec; however you're using it, it's likely a dying method.
With that said, on Windows there is a method (a super hack, really) that you can use to draw directly to the window in the browser from a native messaging extension, but it's not portable and it depends on internal implementation details. I haven't actually looked into it since I wrote that other answer (linked in this paragraph), so I don't know if it still works or not.
Anyway, if you're on a browser which fully supports NPAPI, then you could draw the YUV data directly to the plugin window given to you by the browser; there is an example of blitting image data in FireBreath which you could possibly trace through as an example.
You could also try some variation of listening on a TCP port in the plugin and connecting to it from the browser; you could easily run into some security issues there, but it is the only other method I can think of.
NPAPI simply wasn't ever designed to allow fast transfer of data between the plugin and the browser; I submitted a proposal to add that capability years ago but it was basically too close to the death of NPAPI (which is basically past at this point) for it to go anywhere. The issues you're seeing are 100% consistent with what I would expect, though... and it's still the best way I know.