What I need is access to the camera and the normal features the camera would provide (e.g. changing the ISO value manually).
After doing some research I found two options to take a photo from within Flutter:
Image Picker: Here the built-in camera app is used, but it is heavily restricted. Hardly any settings can be changed, and there is little freedom to take a photo with specific settings.
Camera: If I understood correctly, this gives basically direct access to the camera itself, so no camera app is involved and I would need to implement all the basic features a camera provides myself.
My aim is to take a photo with all (or at least most) of the common functionality a camera provides. Is there another plugin I have not found yet, or some way to remove the restrictions of the Image Picker?
I'm wondering if there's a way to recreate the "Object" experience when viewing a .usdz file through Apple's AR Quick Look. I want an experience that showcases a 3D object without "augmenting reality".
Some options that I'm thinking of that might be able to recreate this feature:
1) Using ARKit, disabling the camera, and setting my own background with a custom image. I would then place the usdz/object in the center of the device's screen while keeping all the interaction functionality for the 3D object.
2) Web AR - recreate this 3D experience elsewhere and showcase it in a webview.
Any guidance or discussion about this is much appreciated - thank you!
You can use Google's model-viewer if you are going with the web solution. Another easy and effective solution would be echoAR (full disclosure, this is where I work). You can simply upload your models there and then get a link to their model viewer. You can upload models in different formats (obj, fbx, glTF, glb, USDZ) and it'll automatically convert them to the format you need to view on any device.
I'm pretty new to Flutter, so bear with me. I'm not sure about the difference between the "camera" and "image picker" plugins.
I was able to capture video and images using "image picker". From my perspective, "image picker" is more straightforward and easier to implement, but on the internet the "camera" plugin seems to be more popular.
So I want to ask: when it comes to taking pictures and video, especially video, what are the pros and cons of each? Any help appreciated!
The two plugins differ in functionality and most importantly in purpose:
camera allows you to embed the camera feed into your own application as a widget, i.e. you also have control over it.
The image_picker plugin will launch a different application (a camera or gallery application) and return a File (an image or video file selected by the user in the other application) to your own application.
If you want to implement (and customize) the camera for your own purposes, you will have to use camera, but if you only want to retrieve imagery or video from the user, image_picker will be your choice.
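To make the difference concrete, here is a minimal sketch of both approaches (a sketch only: exact APIs differ between plugin versions, and names like InAppCamera are just placeholders):

import 'package:camera/camera.dart';
import 'package:flutter/material.dart';
import 'package:image_picker/image_picker.dart';

// image_picker: hands off to the device's camera app and returns a file.
Future<XFile?> takePhotoWithPicker() {
  return ImagePicker().pickImage(source: ImageSource.camera);
}

// camera: embeds the live feed as a widget; you build the UI around it.
class InAppCamera extends StatefulWidget {
  const InAppCamera({super.key});

  @override
  State<InAppCamera> createState() => _InAppCameraState();
}

class _InAppCameraState extends State<InAppCamera> {
  CameraController? _controller;

  @override
  void initState() {
    super.initState();
    availableCameras().then((cameras) async {
      _controller = CameraController(cameras.first, ResolutionPreset.high);
      await _controller!.initialize();
      if (mounted) setState(() {});
    });
  }

  @override
  void dispose() {
    _controller?.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    final controller = _controller;
    if (controller == null || !controller.value.isInitialized) {
      return const Center(child: CircularProgressIndicator());
    }
    return Stack(children: [
      CameraPreview(controller), // your own controls go on top of this
      Align(
        alignment: Alignment.bottomCenter,
        child: FloatingActionButton(
          onPressed: () async {
            final XFile shot = await controller.takePicture();
            // Do whatever you need with shot.path (upload, preview, ...).
          },
          child: const Icon(Icons.camera_alt),
        ),
      ),
    ]);
  }
}

Note that even camera exposes only some manual controls (setFlashMode, setFocusMode, setExposureOffset and the like); as far as I know it does not offer manual ISO, so for that you would have to drop down to platform channels.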
I have hired a programmer to create an iPhone app for me. The purpose of the app is to take a photo and upload it to a server. We want to make a special-purpose screen to review the photo before uploading it. Crucially, this specially developed screen will have zooming functionality.
He claims that after taking a photo, it is impossible to prevent the "Retake"/"Use" screen from showing up, so now we have two screens to review the photo: first the standard one from Apple, then our own with zoom. Is he right about that? It just sounds so unreasonable that Apple would impose such a restriction.
Edit: I mean taking a photo using the camera.
As per Apple's documentation:
To perform fully-customized image or movie capture, instead use the AV Foundation framework as described in "Media Capture and Access to Camera" in AV Foundation Programming Guide. To create a fully-customized image picker for browsing the photo library, use classes from the Assets Library framework. For example, you could create a custom image picker that displays larger thumbnail images, that makes use of EXIF metadata including timestamp and location information, or that integrates with other frameworks such as Map Kit. For more information, see Assets Library Framework Reference. Media browsing using the Assets Library framework is available starting in iOS 4.0.
In short: yes, it is possible. Check out this sample.
[Update]
Use the allowsEditing property on your UIImagePickerController
imagePickerController.allowsEditing = NO;
The previous answer was a bit of a hack that took advantage of a code path which didn't show the buttons, but it wasn't great.
[Previous answer]
You can actually avoid it without going through the hassle of setting up your own image capture from AV Foundation.
Including the following will prevent the "review" screen from being shown. All you have to do is add a few of your own buttons and wire them up to the appropriate functionality.
[self.imagePickerController setShowsCameraControls:NO]; // hides the default controls (shutter button, Retake/Use screen, etc.)
It is a little bit too late I know, but for future reference:
This is far simpler than the answers already provided; what you are looking for is the allowsEditing option.
imagePickerController.allowsEditing = NO;
That should be enough to avoid showing the "Retake"/"Use" screen after the user takes a picture.
I know that, generally, it's no problem to overlay HTML (and even do advanced compositing operations) on native HTML5 video. I've seen cool tricks with keying out green screens in realtime, in the browser, for example.
What I haven't seen yet, though, is something that tracks in-video content, perhaps at the pixel level, and modifies the composited overlay accordingly. Motion tracking, basically. A good example would be an augmented-reality sort of app (though for simplicity's sake, let's say augmenting an overlay over on-demand video rather than live video).
Has anyone seen any projects like this, or even better, any frameworks for HTML5 video overlaying (other than transport controls)?
If we use the canvas tag to capture frames of the video, we are able to get pixel-level information from the video. Then we can do the motion tracking, I think. Maybe HTML5's part of the work ends at grabbing the pixel information, and it's our work to detect the things we need.
And I didn't find any frameworks for this for the HTML5 video tag, as there is no common video format supported by all browsers...
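To illustrate the pixel-grabbing part, here is a minimal sketch (written in Dart with dart:html, though the same calls exist in plain JavaScript; 'clip.mp4' and the difference threshold are made-up placeholders, and the video must be same-origin for getImageData to work):

import 'dart:html';
import 'dart:typed_data';

void main() {
  final video = VideoElement()
    ..src = 'clip.mp4' // placeholder: any same-origin video file
    ..muted = true
    ..autoplay = true;

  final canvas = CanvasElement(width: 320, height: 180);
  final ctx = canvas.context2D;
  document.body!..append(video)..append(canvas);

  Uint8ClampedList? previous;

  void grabFrame(num _) {
    // Copy the current video frame onto the canvas...
    ctx.drawImageScaled(video, 0, 0, 320, 180);
    // ...and read back the raw RGBA pixel data.
    final frame = ctx.getImageData(0, 0, 320, 180);
    final pixels = frame.data;

    if (previous != null) {
      // Crude motion measure: count pixels whose red channel
      // changed noticeably since the last frame.
      var changed = 0;
      for (var i = 0; i < pixels.length; i += 4) {
        if ((pixels[i] - previous![i]).abs() > 32) changed++;
      }
      // `changed` could now drive the position of an HTML overlay.
    }
    previous = Uint8ClampedList.fromList(pixels);
    window.requestAnimationFrame(grabFrame);
  }

  // Start sampling only once the video is actually playing.
  video.onPlaying.first.then((_) => window.requestAnimationFrame(grabFrame));
}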
Drawing video into the canvas tag is not supported on iOS.
Wirewax built an open API for that. It works quite well - even on iPhone.
http://www.wirewax.com/
I wonder if it is possible to write a Lightroom plugin that applies crop rectangles to a set of images.
Of course, I do not just want to duplicate the same crop; I'd like to apply a different crop to every image, based on some computations.
Can this be achieved with lightroom plugins, or would I need to try a different approach?
Unfortunately, as of Lightroom SDK 3.0 it is not possible to programmatically crop an image. There is no official reason for this, but it seems to stem from two linked causes. First, the only way to change an image is to use LrDevelopPreset. This class allows a develop preset to be modified and applied to an image. And second, develop presets cannot store crop settings.
There are a couple of threads in the Lightroom Feature Request forum mentioning the inability of crop settings to be specified in a develop preset. No generally applicable workaround has been found.