Currently I'm using this library:
https://pub.dev/packages/camera. It has setFocusMode, which can be set to either auto or locked, but I need a way to get manual focus for the camera, where the user can tap on the camera feed and the focus is adjusted accordingly.
How do I go about implementing this in my app?
I found this plugin: https://pub.dev/documentation/manual_camera/latest/. Does this work? You could use focus distance: if you could get the distance to the object, you could set the focus that way. It's almost like shooting out a ray in game programming. I don't know whether this is possible, but maybe you could estimate the distance from the size of objects in the image. Someone else has probably already figured this out.
I am using the camera for reading some text, and currently my images look quite blurry.
Is it possible to change the focus of the camera?
I am using
https://www.raspberrypi.org/products/camera-module-v2/
Yes, it's definitely possible; I've done it many times. Sometimes a specific tool for rotating the lens is even included in the camera box (check whether you have it; in my experience it's not always present). If you don't have the tool, take thin pliers and rotate the lens; you can look here.
I have what is, in my opinion, a simple problem: disabling image detection with the AR camera. My app detects an image from the image library and spawns an object etc., everything according to plan.
But the problem is that if I move the camera over another detectable image, it recognizes that one too. This is bad not because it spawns something additionally, but because you can "collect" the images in my app, so it unlocks the other detected image even though it shouldn't.
So how can I disable image detection without turning off the AR-Camera?
So far I have tried simply disabling the "ARManager" and the "ARTrackedImageManager" scripts (.enabled = false), but that didn't solve my problem, because the app still detects other images.
I hope I explained my question and the problem properly. Any help is appreciated!
It really depends on what library you're using to detect the image. Generally, most marker tracking libraries will create a marker object in your Unity scene. You can disable these marker objects after you find one, and only leave the marker you're interested in. Make sure you also set the number of tracked images to 1 so you won't accidentally find two markers in one frame.
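Here's a minimal sketch of that approach, assuming AR Foundation's ARTrackedImageManager (which the question already mentions); SingleImageUnlocker and UnlockCollectible are illustrative names, not an existing API:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// After the first image is detected, keep tracking it but ignore every
// further detection instead of disabling the whole manager.
public class SingleImageUnlocker : MonoBehaviour
{
    [SerializeField] ARTrackedImageManager imageManager;
    ARTrackedImage lockedImage;   // the one marker we keep reacting to

    void OnEnable()  { imageManager.trackedImagesChanged += OnChanged; }
    void OnDisable() { imageManager.trackedImagesChanged -= OnChanged; }

    void OnChanged(ARTrackedImagesChangedEventArgs args)
    {
        foreach (var trackedImage in args.added)
        {
            if (lockedImage == null)
            {
                lockedImage = trackedImage;                   // first hit: unlock it
                UnlockCollectible(trackedImage.referenceImage.name);
            }
            else
            {
                trackedImage.gameObject.SetActive(false);     // later hits: hide, don't unlock
            }
        }
    }

    void UnlockCollectible(string imageName)
    {
        Debug.Log($"Unlocked: {imageName}");   // hypothetical game logic goes here
    }
}
```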
I want to make live color detection using the camera on Android in Unity. The app I want is like "Color Grab" on the Play Store.
Can anyone help me with how it works, or how to make it in Unity?
Well, SO isn't a script-providing service: always try to show what you have already tried before asking a question. If you don't have any script, at least explain the way you want to do it, the steps you think are needed, ...
Anyway, I'd advise you to take a look at Unity's Texture2D.ReadPixels() method:
- display what you need on screen
- when the user touches a place, call ReadPixels()
- then retrieve the color of the desired location on the texture using Texture2D.GetPixel()
If you want to sample a larger area (not a single pixel), you can read all the pixels around the wanted location and take the average color found, as in the sketch below.
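A minimal Unity sketch of those three steps, assuming the camera feed is already being rendered to the screen; the class name and the 5x5 averaging window are just illustrative choices:

```csharp
using UnityEngine;

// On touch, copy the rendered screen into a Texture2D with ReadPixels and
// sample the touched pixel plus a small averaged neighborhood.
public class TouchColorPicker : MonoBehaviour
{
    void Update()
    {
        if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
            StartCoroutine(PickColor(Input.GetTouch(0).position));
    }

    System.Collections.IEnumerator PickColor(Vector2 screenPos)
    {
        yield return new WaitForEndOfFrame();   // ReadPixels needs a fully rendered frame

        var grab = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
        grab.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);

        // Average a 5x5 block around the touch instead of a single pixel.
        Color sum = new Color(0, 0, 0, 0);
        int count = 0;
        for (int dx = -2; dx <= 2; dx++)
            for (int dy = -2; dy <= 2; dy++)
            {
                int x = Mathf.Clamp((int)screenPos.x + dx, 0, Screen.width - 1);
                int y = Mathf.Clamp((int)screenPos.y + dy, 0, Screen.height - 1);
                sum += grab.GetPixel(x, y);
                count++;
            }

        Debug.Log($"Picked color: {sum / count}");
        Destroy(grab);                          // avoid leaking a texture per touch
    }
}
```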
Hope this helps,
How would I implement a way to measure distances in real time (with the video camera?) on the iPhone, like this app that uses a card to compare the size of the card with the actual distance?
Are there any other ways to measure distances? Or how would I go about doing it using the card method? What framework should I use?
Well, you do have something for reference, hence the use of the card. That said, after watching the video for the app, I can't say it seems too user-friendly.
So you either need a reference object of some known size, or you need to deduce the size from the image. One idea I just had that might help is to use the iPhone 4's flash (I'm sure it's very complicated, but it might just work for some stuff).
Here's what I think.
When the user wants to measure something, he takes a picture of it, but you're actually taking two separate images: one with the flash on, one with the flash off. Then you can analyze the lighting differences and the flash reflection between the two images to determine the scale of the image. This will only work for close and not-too-shiny objects, I guess.
But that's about the only other way I could think of to deduce scale from an image without any fixed objects.
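For what it's worth, here's the inverse-square reasoning behind that idea as a toy calculation. Under the assumption that the flash-minus-ambient brightness of a surface patch falls off as 1/r², patches of similar reflectivity satisfy r2/r1 = sqrt(diff1/diff2), so one calibrated distance gives you the rest. Everything here is an assumption layered on the answer, not a tested method:

```csharp
using System;

// Relative ranging from the brightness difference between a flash-on and
// a flash-off photo, assuming illumination falloff of 1/r^2.
static class FlashRanging
{
    // diffKnown: flash-minus-ambient brightness at a calibrated distance.
    // diffTarget: the same measurement for the patch being ranged.
    static double EstimateDistance(double knownDistance,
                                   double diffKnown, double diffTarget)
    {
        return knownDistance * Math.Sqrt(diffKnown / diffTarget);
    }

    static void Main()
    {
        // A patch 4x dimmer under flash than the 1 m reference -> about 2 m.
        Console.WriteLine(EstimateDistance(1.0, 0.40, 0.10));
    }
}
```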
I like Ron Srebro's idea and have thought about something similar -- please share if you get it to work!
An alternative approach would be to use the auto-focus feature of the camera. Point-and-shoot cameras often have a laser range finder that they use to auto-focus. The iPhone doesn't have this, and its f-stop is fixed. However, users can change the focus by tapping the camera screen. The phone can also switch between regular and macro focus.
If the API exposes the current focus settings, maybe there's a way to use this to determine range?
Another solution may be to use two laser pointers.
Basically, you would shine two laser pointers at, say, a wall, keeping the beams parallel. The further back you go, the closer together the dots will look in the video, even though they remain the same physical distance apart. From that you can come up with a simple formula to measure the distance based on how far apart the dots are in the photo.
See this thread for more details: Possible to measure distance with an iPhone and laser pointer?.
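For the record, the formula falls out of the pinhole-camera model: with the lasers a fixed real-world separation apart, the dots' pixel separation shrinks linearly with range. A tiny sketch (the focal length must be expressed in pixels for your particular camera; all numbers are illustrative):

```csharp
using System;

// range = focalLengthPixels * realSeparation / pixelSeparation
// (similar triangles in the pinhole model).
static class LaserRangefinder
{
    // focalLengthPixels: focal length in mm / sensor width in mm
    //                    * image width in pixels.
    static double RangeMeters(double focalLengthPixels,
                              double realSeparationMeters,
                              double pixelSeparation)
    {
        return focalLengthPixels * realSeparationMeters / pixelSeparation;
    }

    static void Main()
    {
        // Lasers 10 cm apart, dots 80 px apart, f ≈ 3200 px -> 4 m.
        Console.WriteLine(RangeMeters(3200, 0.10, 80));
    }
}
```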
In an application I saw, they display pictures of vehicles. What was amazing was that when you touch and swipe on a picture, it rotates in a 3D way, left and right, and from the front view you can rotate it around to see its back view as well. It is a very good feature and I was trying to replicate it, but I couldn't get an idea of how and where to start. My doubts are:
What's the actual format of the thing? It surely isn't just a picture.
How do they get it to rotate?
Could someone give me an idea of where I should start or what I should look into?
Just like KennyTM told you, OpenGL-ES is the weapon of choice. Take pictures of the object from all the sides you need to show, then use those as textures for the faces of a cube. Got the idea?
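As an aside, many of those vehicle viewers are not real-time 3D at all but a photo turntable: N photos taken around the car, with horizontal swipes mapped to a frame index. A minimal, platform-agnostic sketch of that alternative (all names illustrative, the actual image loading and display is whatever your platform provides):

```csharp
using System;

// Maps accumulated horizontal drag distance to one of N photos taken
// around the object, wrapping so a long drag goes front -> back -> front.
class TurntableView
{
    readonly int frameCount;          // e.g. 36 photos, one per 10 degrees
    readonly double pixelsPerFrame;   // drag sensitivity
    double accumulatedDrag;

    public TurntableView(int frameCount, double pixelsPerFrame)
    {
        this.frameCount = frameCount;
        this.pixelsPerFrame = pixelsPerFrame;
    }

    // Call on every touch-move event with the horizontal delta in pixels;
    // returns which photo to display.
    public int OnDrag(double deltaX)
    {
        accumulatedDrag += deltaX;
        int frame = (int)Math.Round(accumulatedDrag / pixelsPerFrame);
        return ((frame % frameCount) + frameCount) % frameCount;
    }
}
```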