How to correctly use the Camera widget in Flutter

Camera usage is documented in the official docs. They tell you to initialize the camera in Future<void> main() async, the entry point of the whole application (correct me if this is wrong). The initialized camera is then passed down to the home page widget. That is all the official documentation tells us, but it isn't close to real-world cases: rarely does an application need to open the camera on its very first screen.
This confuses me a lot. I can get it to work by passing this camera object down to whatever pages actually require it, or perhaps by putting it into global state management like Redux. But is this the correct way to use the camera?
Ideally, I think it should be the duty of the page/widget that needs the camera to initialize everything, rather than the main() function.
Any suggestions are appreciated.

You should initialize the camera plugin where you need it, not in the main function. The docs just give us an example of how to do it, not where to.
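As a sketch of that idea, a page can own the whole camera lifecycle itself in initState()/dispose() (this assumes the camera plugin's availableCameras/CameraController API; the widget names are illustrative):

```dart
import 'package:camera/camera.dart';
import 'package:flutter/material.dart';

class CameraPage extends StatefulWidget {
  const CameraPage({super.key});

  @override
  State<CameraPage> createState() => _CameraPageState();
}

class _CameraPageState extends State<CameraPage> {
  CameraController? _controller;

  @override
  void initState() {
    super.initState();
    _initCamera();
  }

  Future<void> _initCamera() async {
    // Look up the cameras here, on the page that needs them, not in main().
    final cameras = await availableCameras();
    _controller = CameraController(cameras.first, ResolutionPreset.medium);
    await _controller!.initialize();
    if (mounted) setState(() {});
  }

  @override
  void dispose() {
    // The page is also responsible for releasing the camera.
    _controller?.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    final controller = _controller;
    if (controller == null || !controller.value.isInitialized) {
      return const Center(child: CircularProgressIndicator());
    }
    return CameraPreview(controller);
  }
}
```

This way nothing camera-related has to be threaded through main() or global state; only the pages that actually use the camera pay its cost.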

Related

Control Volume Level of Custom Notification Sound in Flutter

This is more of a question about how to approach a problem than a post of code asking where the error is.
I am building a paging system in Flutter (to work on both Android and iOS). As a result, I am using custom sounds in my application (so that I can have sounds louder than the default OS sound). Because these sounds are naturally louder, I want to give the user the option to control the volume level of the notification (from 0 to 100%). I cannot seem to find any resources that cover this, but I know it is possible because I have apps on my phone that allow it (the Amazon pager app, for instance).
Is there a way to let a user control the volume level of a notification through an app built in Flutter? If it's possible, could I get pointed to a tutorial, an article, or even sample code?
There's no dedicated way to do this, but here's a hack to achieve it. You can use any package of your choice, such as volume_control or volume_controller.
Assuming you use the first option:
1. Get the current system volume using VolumeControl.volume.
2. Get the user's desired volume level (using a slider, etc.).
3. Save both values.
4. Before the notification is played, set the device's notification/system volume to the user-defined value using VolumeControl.setVolume(val).
5. After the notification has been played, set the system volume back to its original/previous value.
You can use other packages the same way.
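Put together, those steps might look something like this (a rough sketch assuming the volume_control package's VolumeControl API mentioned above; error handling and the actual sound playback are omitted):

```dart
import 'package:volume_control/volume_control.dart';

// Called with the 0.0-1.0 level the user picked on a slider.
Future<void> playNotificationAt(double userVolume) async {
  // 1. Remember the current system volume so we can restore it later.
  final double previousVolume = await VolumeControl.volume;

  // 2. Set the system volume to the user's chosen level
  //    before the notification sound plays.
  await VolumeControl.setVolume(userVolume);

  // 3. ...play the custom notification sound here...

  // 4. Restore the original system volume afterwards.
  await VolumeControl.setVolume(previousVolume);
}
```

Be aware this changes the global system volume for the duration of the notification, so anything else playing audio at that moment is affected too.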
I tried to do something similar recently and I used this package. It's well documented and there's some sample code for regulating the volume. I tried other packages, but none worked as well as this one.
You can set the volume with something like this:
dynamic maxVol, currentVol; // Value will be an int on Android and a double on iOS

Future<void> setVol({int androidVol = 0, double iOSVol = 0.0}) async {
  await Volume.setVol(
    androidVol: androidVol,
    iOSVol: iOSVol,
    showVolumeUI: false,
  );
}
And then just call the method where you need it and pass the values when changed (in a slider for example).
Hope this helps!

How to share values between widgets

I have a widget that implements the audioplayers library and provides a button to play/pause an audio file; it also displays the current position and duration of the audio file. The second widget I have is a Slider with a lot of customization, so it was defined as a separate widget. All I need is to somehow share the position and duration of the AppAudioPlayer widget between itself and the Slider, and also give the Slider access to the audioplayers AudioPlayer seek() method so it can seek by changing the slider value. I've tried to implement Provider and GlobalKey, but this task is a bit too complicated for those approaches. I also thought about using a Stream to share position and duration, but I don't understand how to share the AudioPlayer instance. Would love to hear all suggestions!
So you are in need of state management and need to share state between two widgets.
First, I would read Flutter's overview on state management. I would also consider Riverpod, a new and improved state management solution by the creator of Provider, or Binder, a simpler solution, although not actively maintained at the moment.
That said, in your case I would look into using ValueNotifier to keep a single piece of state and ValueListenableBuilder to listen for state changes. I find this a very simple and pragmatic solution for most cases involving state, and it is also part of the core Flutter library.
You haven't posted any code, so I have no idea what your code looks like.
If you want to share data between two widgets, you can create a global variable, use it in both widgets, and change it accordingly. This might not be the solution to your problem, but I don't have your code, so...
Also thought about Stream to share position and duration, but doesn't understand how to share instance of AudioPlayer
I don't think that's how streams work. Streams are not meant simply for passing objects around inside your code. You can read about streams here.

When the build function really needed in flutter app?

I'm totally new to Flutter but have a strong background in Android/Kotlin. I'm trying to understand the basic structure of a Flutter app. I read that every widget needs to override a build function to draw its children, which made sense to me because Android/Kotlin has onCreate() and similar methods. Then I saw this code on the official documentation page.
void main() {
  runApp(
    Center(
      child: Text(
        'Hello, world!',
        textDirection: TextDirection.ltr,
      ),
    ),
  );
}
It works fine without a build() function, so what is the real purpose of the build function? When do we need it? What can be done without it, and what can't?
While you could pass everything directly to runApp, that has a pretty big drawback: your app would be static. Without a build function (or a builder, as with FutureBuilder), your app has no way of showing dynamic content.
It is also bad for reusability. You may want to extract part of this widget tree into custom widgets to reuse in different locations, which requires a build method for each custom widget.
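A minimal counter illustrates the point (names are illustrative): setState() triggers a rebuild, and build() redraws with the new value, which a static tree passed straight to runApp() could never do:

```dart
import 'package:flutter/material.dart';

class Counter extends StatefulWidget {
  const Counter({super.key});

  @override
  State<Counter> createState() => _CounterState();
}

class _CounterState extends State<Counter> {
  int _count = 0;

  @override
  Widget build(BuildContext context) {
    // Called again after every setState(), redrawing with the new count.
    return TextButton(
      onPressed: () => setState(() => _count++),
      child: Text('Pressed $_count times'),
    );
  }
}
```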
Flutter works rather differently from many other platforms, and the name build() only adds to the confusion. :-)) To understand it, try to temporarily forget your previous experiences with other platforms.
Build() is really more like display()
Flutter uses the build() system to display the current frame. If the app has rapidly changing content, like an animation or a game, build() will be called 60 or more times per second, that is, for every single frame. Yes, that's the intention: no matter what its name is, think about it as a function to displayCurrentFrame(), not what the name might imply, to build a widget and then use it for the rest of the life of your app.
The system is perfectly optimized to know when it has to call build(). For content that doesn't change often, it will not call it 60 times per second; it's smart enough not to do that. But every time it's really needed, it will be called (and for really complicated cases, you also have mechanisms to help Flutter decide when and what to call, so that only the parts that really change get redrawn, making it possible to avoid jerkiness in apps that really need rapidly changing content, mostly games).
The task of your build() is to take whatever data you currently have (that's called the state of your widget) and build up the widget (or widgets) just with that data, just for that single display frame. Next time around, with possibly different data, you will build it again and again.
So, a Flutter app works differently from many other platforms. You don't write code that waits for interaction from the user and then, say, directly calls a display function to show a new selection, a new text entry, or a new image. All your widgets work like this: whenever anything happens (the user does something, a response arrives from a call you made over the internet, a timer elapses), your widget stores the new data it just received in its own state and tells Flutter: "hey, there are changes, please call me so that I can draw the current version of myself." Flutter will then call its build(), and your widget will display itself according to this new data. Again and again, as long as your app is alive and kicking.
At first, all this might seem like a waste of resources. Why rebuild everything on potentially every frame, instead of building a widget once, keeping it alive, and using it while the app lasts? The only realistic answer is: don't worry. The system was conceived explicitly with this structure in mind, it gets compiled into optimized code that works just fine, and it's fast and responsive. The widgets are lightweight enough that this isn't the slightest problem in real life (and, as already mentioned, in really complex programs where it starts to matter, you can have your say in how it should work, but you don't have to worry about that either, until you reach the stage where it really counts). Just get used to it and accept that this is the way Flutter works. :-)
Other implications
All this has other implications as well. You should never do anything in build() that you wouldn't be comfortable doing on every display cycle: it shouldn't take much time, and it will be called over and over again, so especially avoid any long-running operation there. You may start an async operation (in fact, you probably do so quite often when acting on user input), but it's async: it goes away to do its job and simply returns later. When it does return, you store the new data in the state, as described above, and use it in build() the next time around. But you never wait for anything there, nor do any really complex programming logic or perform tasks there. It's nothing more than a display(), really.

Firebreath NPAPI plugin rendering video to top level browser window (HWND)

I am working on an audio/video rendering plugin that uses FireBreath, and we need HTML elements to overlay on top of the video. I am aware that to do this I need to use FireBreath's windowless mode. However, since I am using DirectX to render the video, I cannot initialize DirectX with the HDC handle I get when instructed to render in windowless mode (DirectX requires an HWND).
Also, for other software-security reasons, I cannot render the video to an off-screen surface and then Blt the bits to the HDC.
The alternative I was trying is to use the hardware overlay feature in DirectX: initialize DirectX with the browser's top-level HWND, then use the HDC and coordinates to tell DirectX where in the top browser window to render the video frame, rendering directly to the top-level parent browser window.
I have tried a proof of concept, but my video frames get erased quite often after I draw them, so the video appears to flicker. I am trying to understand why that might be, and I wonder whether this is a viable solution given my parameters.
Also I am wide open to suggestions on how to accomplish this given my constraints.
Any help would be greatly appreciated!
In the FireBreath-dev group, John Tan wrote:
As what I know, you practically have no control precisely when the screen is going to draw. What can only be done is:
1) Inform the browser to repaint by issuing the windowless invalidatewindow
2) browser draw event arrives with the hdc. Draw on the hdc
John is completely correct. In addition, the HDC could potentially (and likely will) be different each time your draw is called. I don't know of anyone who has successfully gotten DirectX drawing in windowless mode, and you have absolutely no guarantee that what you are doing will ever work: even if you got it working, the browser may change the way or order in which it draws in a way that would break it.
You might want to look at the async surface API; I don't know which browsers support it, but I suspect only Firefox and IE. It was implemented in this commit.
I haven't used this at all, so I can't tell you how it works, but it was intended to solve exactly the problem you're describing. Your main issue will be browser support. What documentation there is is here.
Hope this helps

ios zxing with front or rear camera

I'm using ZXing; it's working fine in my new app, but I would like to integrate the option of using the front or rear camera.
So far the only reference to this I've found is in the Google group, but it's not very clear what they mean by it.
Any pointers on what I have to do to accomplish this?
Thanks!
ZXWidgetController doesn't provide that functionality and it's not really set up to make it easy to change.
The code that needs to change is in - (void)initCapture. It calls [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo], which returns the default camera, and you don't want the default.
You need code similar to that in - (ZXCaptureDevice*)device in ZXCapture.mm. That code won't work out of the box (it's designed to work with both AVFF and QTKit), but it's the same idea: instead of using the default video input device, iterate through the devices and look at each device's position to find the one you want.
It would be nice to port that code to the widget, but that hasn't happened at this point.