Android Long Exposure for Sky Imaging - android-camera

I am very new to the Android Camera API. My question is a bit similar to this one, but with different intentions.
In astro-photography, long exposures are very common for capturing faint Deep Sky Objects (DSOs) like nebulae, galaxies and star clusters. But when we take sky shots with long exposures (say 30 s), the stars appear as lines (star trails) instead of points, due to the Earth's continuous rotation. Astro-photography therefore depends heavily on tracking mounts for cameras and telescopes, which negate this effect by continuously rotating (after polar alignment) in the direction opposite to the Earth's rotation.
I am trying to find out whether it's possible to develop an algorithm that achieves this "de-rotation" in software.
Remember that we can record videos for planetary imaging (planets are very bright). Stacking software like Registax is then used to stack the good frames and end up with a nice, detailed image. But the same technique cannot be used for DSOs, because they are too faint: at 30 FPS each frame gets only 1/30 of a second of exposure, so the sensor does not record enough photons per frame to distinguish the object from the background glow.
So my question is: can we stream raw data from the sensor using the Android Camera API to a program that takes care of de-rotation, continuously adding the light information to the SAME pixels instead of to adjacent pixels as the Earth rotates?
Many Thanks,
Ahmed
(The attached image, showing Orion and the Pleiades, was taken with a 30 s exposure on a Xiaomi Redmi Note 9S.)
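A rough sketch of the "de-rotate and accumulate" idea follows. It is not a working Camera2 pipeline; it assumes the short exposures are already available as float32 grayscale arrays, and it uses OpenCV's ECC alignment purely for illustration (dedicated DSO stackers usually register on detected stars instead):

```python
# Sketch: align each short exposure to the first frame and accumulate,
# so starlight keeps adding to the SAME pixels ("software de-rotation").
# 'frames' is assumed to be an iterable of float32 grayscale arrays.
import cv2
import numpy as np

def stack_sky_frames(frames):
    frames = iter(frames)
    reference = next(frames).astype(np.float32)
    accumulator = reference.copy()

    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    for frame in frames:
        frame = frame.astype(np.float32)
        # Estimate the small rotation + translation caused by the sky's apparent motion.
        warp = np.eye(2, 3, dtype=np.float32)
        _, warp = cv2.findTransformECC(reference, frame, warp,
                                       cv2.MOTION_EUCLIDEAN, criteria)
        aligned = cv2.warpAffine(frame, warp,
                                 (reference.shape[1], reference.shape[0]),
                                 flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        accumulator += aligned  # keep integrating light on the same pixels

    return accumulator
```

On the capture side, devices that support it can deliver unprocessed frames via Camera2's RAW_SENSOR output; the stacking itself can then run entirely in app code along the lines above.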

Related

Extracting measurements from a finger via ROI and image processing MATLAB

I am trying to do a number of things via MATLAB, but I am getting a bit lost with what techniques to use. My ultimate goal is to extract various measurements from a user's fingerprint presentation, e.g. how far the finger overshoots or undershoots, the coordinates of where the finger enters, and the angle of the finger.
In my current setup, I have a web camera recording footage of a top-down view of the presentation; I then take the video file and break it down into individual frames. https://www.dropbox.com/s/zhvo1vs2615wr29/004.bmp?dl=0
What I am trying to work on at the moment is using ROI-based image processing to create a binary mask around the edges of the scanner. I'm using the im2bw function to get a binarised image and getting this as a result. https://www.dropbox.com/s/1re7a3hl90pggyl/mASK.bmp?dl=0
What I could use is some guidance on where to go from here. I want to be able to take measurements from the defined ROI to work out various metrics, e.g. how far a certain point is from the ROI, so I need some sort of border for the scanner edges. From my experience in image processing so far, this has been hard to define clearly. I would like to get a clearer image where the finger is outlined and defined, and the background (i.e. the scanner light/blocks) is removed.
Any help would be appreciated.
Thanks
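Not MATLAB, but the same binarise-then-measure pipeline can be sketched in Python/OpenCV; the file name, the ROI rectangle and the chosen metrics below are placeholders to adapt:

```python
# Sketch: binarise a frame, keep the largest blob inside a known scanner ROI,
# and derive a few simple measurements from it.
# "004.bmp" and the ROI rectangle are placeholders for your own values.
import cv2
import numpy as np

frame = cv2.imread("004.bmp", cv2.IMREAD_GRAYSCALE)
x, y, w, h = 100, 80, 400, 300          # scanner ROI, measured once by hand
roi = frame[y:y + h, x:x + w]

# Otsu picks the threshold automatically; use THRESH_BINARY_INV if the finger is dark.
_, mask = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

# Assume the largest contour is the finger (OpenCV 4.x return signature).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
finger = max(contours, key=cv2.contourArea)

m = cv2.moments(finger)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
print("centroid distance from ROI left edge (px):", cx)
print("centroid distance from ROI top edge (px):", cy)

# Orientation of the finger from an ellipse fit (needs at least 5 contour points).
(_, _), (_, _), angle = cv2.fitEllipse(finger)
print("finger angle (deg):", angle)
```

The equivalent MATLAB route would be something like imbinarize/bwareafilt/regionprops, but the overall flow (threshold, clean up, keep the largest region, take its centroid and orientation) is the same.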

Taking Depth Image From iPhone (or consumer camera)

I have read that it's possible to create a depth image from a stereo camera setup (where two cameras of identical focal length, aperture and other camera settings photograph an object from different angles).
Would it be possible to take two snapshots almost immediately after each other (on the iPhone, for example) and use the differences between the two pictures to develop a depth image?
Small amounts of hand movement and shaking will obviously rock the camera, creating some angular displacement, and perhaps that displacement can be calculated by looking at the general angle of displacement of features detected in both photographs.
Another way to look at this problem is as structure-from-motion, a nice review of which can be found here.
Generally speaking, resolving spatial correspondence can also be factored as a temporal correspondence problem. If the scene doesn't change, then taking two images simultaneously from different viewpoints - as in stereo - is effectively the same as taking two images using the same camera but moved over time between the viewpoints.
I recently came upon a nice toy example of this in practice - implemented using OpenCV. The article includes some links to other, more robust, implementations.
For a deeper understanding I would recommend you get hold of an actual copy of Hartley and Zisserman's "Multiple View Geometry in Computer Vision" book.
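For the two-view case, a minimal disparity-map sketch with OpenCV's block matcher looks roughly like this; it assumes the two snapshots have already been rectified (which, for hand-held shots, is the genuinely hard part), and the file names are placeholders:

```python
# Minimal two-view depth sketch: "left.png" / "right.png" are assumed to be
# already-rectified grayscale views of the same scene (placeholder file names).
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; numDisparities must be a multiple of 16.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = matcher.compute(left, right).astype("float32") / 16.0  # fixed-point output

# Larger disparity means closer; with a calibrated camera,
# depth = focal_length * baseline / disparity.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```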
You could probably come up with a very crude depth map from a "cha-cha" stereo image (as it's known in 3D photography circles), but it would be rough at best.
Matching up the images is EXTREMELY CPU-intensive.
An iPhone is not a great device for doing the number-crunching. Its CPU isn't that fast, and memory bandwidth isn't great either.
Once Apple lets us use OpenCL on iOS you could write OpenCL code, which would help some.

Polling iPhone Camera to Process Image

The scenario is that I want my app to process (in the background, if possible) images being seen by the iPhone camera.
e.g. the app is running, the user places the phone down on a piece of red cardboard, and I then want to display an alert view saying "Phone placed on Red Surface" (this is a simplified version of what I want to do, but it keeps the question direct).
Hope this makes sense. I know there are two separate concerns here:
1) How to process images from the camera in the background of the app (if we can't do this, we can initiate the process with, say, a button click if needed).
2) Processing the image to work out what solid colour it is sitting on.
Any help/guidance would be greatly appreciated.
Thanks
Generic answers to your two questions:
Background processing of the image can be triggered as a timer event. For example, every 30 seconds, capture the current camera image and do the processing behind the scenes. If the processing is not compute/time intensive, this should work.
It is technically possible to read the colour of, say, one pixel programmatically. If you are sure that the entire image is just one colour, you can try that approach: pick a few random points and get the colour of those pixels in the image. But if the image (in your example, the red board) contains a picture or multiple colours, then it will require more detailed image-processing techniques.
Hope this helps
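The capture side is platform code, but the sampling logic from point 2 is tiny; here it is sketched in Python with the frame already available as an RGB array (the thresholds and the 80% agreement rule are arbitrary illustration values):

```python
# Sketch of the pixel-sampling approach: probe a few random pixels and decide
# whether they agree on "red". 'frame' is assumed to be an H x W x 3 RGB
# uint8 array; the thresholds below are arbitrary.
import random

def looks_red(frame, samples=20):
    h, w, _ = frame.shape
    red_votes = 0
    for _ in range(samples):
        r, g, b = frame[random.randrange(h), random.randrange(w)]
        if r > 150 and g < 100 and b < 100:   # crude "solid red" test
            red_votes += 1
    return red_votes > samples * 0.8          # most samples must agree

# A timer firing every few seconds would grab a frame and call looks_red(frame).
```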
1) Image Capture
There are two kinds of apps that continually take imagery from the camera: media capture (e.g. Camera, iMovie) or Augmented Reality apps.
Here's the iPhone SDK tutorial for media capture:
https://developer.apple.com/library/ios/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html#//apple_ref/doc/uid/TP40010188-CH5-SW3
Access the camera with iPhone SDK
Augmented Reality apps take continual pictures from the camera for processing/overlay. I suggest you look into some of the available AR kits and see how they get a continual stream from the camera and also analyze the pixels.
Starting an augmented reality (AR) app like Panasonic VIERA AR Setup Simulator
http://blog.bordertownlabs.com/post/157320598/customizing-the-iphone-camera-view-with
2) Image Processing
Image processing is a really big topic that's been addressed in multiple other places:
https://photo.stackexchange.com/questions/tagged/image-processing
https://dsp.stackexchange.com/questions/tagged/image-processing
https://mathematica.stackexchange.com/questions/tagged/image-processing
...but for starters, you'll need to use some heuristic analysis to determine what you're looking for. Sampling the captured pixels in a bunch of places (e.g. the corners plus the middle) may help, as would generating a histogram of colour intensities: if there's lots of red but little or no blue and green, it's a red card.
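For instance, the histogram heuristic from the last paragraph might look roughly like this (Python for brevity; the per-channel comparison and the 2x dominance ratio are arbitrary choices):

```python
# Sketch of the colour-intensity heuristic: average each channel and see
# whether one clearly dominates. 'frame' is an H x W x 3 RGB array; the
# 2x dominance ratio is an arbitrary threshold.
import numpy as np

def dominant_colour(frame):
    means = frame.reshape(-1, 3).mean(axis=0)          # mean R, G, B
    names = ["red", "green", "blue"]
    top = int(np.argmax(means))
    others = [m for i, m in enumerate(means) if i != top]
    if means[top] > 2 * max(others):                   # lots of one, little of the rest
        return names[top]
    return "mixed"
```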

iPhone, Image processing

I am building a night-vision application, but I can't find any useful algorithm that I can apply to dark images to make them clearer. Can anyone please suggest a good algorithm?
Thanks in advance
With the size of the iPhone lens and sensor, you are going to have a lot of noise no matter what you do. I would practice manipulating the image in Photoshop first, and you'll probably find that it is useful to select a white point from a sample of the brighter pixels in the image and to use a curve. You'll probably also need to run an anti-noise filter and smoother. Edge detection or condensation may allow you to bring out some areas of the image. As for specific algorithms to perform each of these filters, there are a lot of computer science books and lists on the subject. Here is one list:
http://www.efg2.com/Lab/Library/ImageProcessing/Algorithms.htm
Many OpenGL implementations can be found once you know the standard name for the algorithm you need.
Real (useful) night vision typically uses an infrared light and an infrared-tuned camera. I think you're out of luck.
Of course using the iPhone 4's camera light could be considered "night vision" ...
Your real problem is the camera and not the algorithm.
You can apply algorithms to clarify images, but they won't turn a dark image into a daylight one as if by magic ^^
But if you want to try some algorithms, you should take a look at OpenCV (http://opencv.willowgarage.com/wiki/); there are some ports, like the one here: http://ildan.blogspot.com/2008/07/creating-universal-static-opencv.html
I suppose there are two ways to refine a dark image: the first is active, which uses infrared, and the other is passive, which manipulates the pixels of the image...
The images will be noisy, but you can always try scaling up the pixel values (all of the components in RGB, or just the luminance in HSV; either linearly or by applying some sort of curve; either globally or locally to just the darker areas) and saturating them, and/or using a contrast edge-enhancement filter algorithm.
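A rough version of that idea (a gamma curve on the HSV value channel followed by a denoising pass), where the gamma of 0.4, the denoising strengths and the file names are arbitrary placeholders:

```python
# Sketch: brighten the luminance with a gamma curve, then denoise.
# The gamma of 0.4, the denoising strengths and the file names are placeholders.
import cv2
import numpy as np

img = cv2.imread("dark_photo.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)

hsv[:, :, 2] = 255.0 * (hsv[:, :, 2] / 255.0) ** 0.4    # gamma < 1 lifts the shadows
brightened = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

# Scaling up dark pixels also scales up noise, so smooth afterwards.
result = cv2.fastNlMeansDenoisingColored(brightened, None, 10, 10, 7, 21)
cv2.imwrite("brightened.jpg", result)
```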
If the camera and subject matter are sufficiently motionless (tripod, etc.) you could try summing each pixel over several image captures. Or you could do what some HDR apps do, and try aligning images before pixel processing across time.
I haven't seen any documentation on whether the iPhone's camera sensor has a wider wavelength gamut than the human eye.
I suggest conducting a simple test before trying to actually implement this:
Save a photo made in a dark room.
Open in GIMP (or a similar application).
Apply "Stretch HSV" algorithm (or equivalent).
Check if the resulting image quality is good enough.
This should give you an idea as to whether your camera is good enough to try it.
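If you would rather script step 3 than use GIMP, a rough equivalent of an HSV stretch is to normalise the saturation and value channels to their full range (file names are placeholders):

```python
# Rough equivalent of a "Stretch HSV" operation: stretch S and V to the full
# 0-255 range while leaving hue alone. File names are placeholders.
import cv2

img = cv2.imread("dark_room.jpg")
h, s, v = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV))
s = cv2.normalize(s, None, 0, 255, cv2.NORM_MINMAX)
v = cv2.normalize(v, None, 0, 255, cv2.NORM_MINMAX)
stretched = cv2.cvtColor(cv2.merge([h, s, v]), cv2.COLOR_HSV2BGR)
cv2.imwrite("stretched.jpg", stretched)
```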

Measuring a room with an iPhone

I have a need to measure a room (if possible) from within an iPhone application, and I'm looking for some ideas on how I can achieve this. Extreme accuracy is not important, but accuracy down to say 1 foot would be good. Some ideas I've had so far are:
Walk around the room and measure using GPS. Unlikely to be anywhere near accurate enough, particularly for iPod touch users
Emit sounds from the speaker and measure how long they take to return. There are some apps out there that do this already, such as PocketMeter. I suspect this would not be user friendly, and more gimmicky than practical.
Anyone have any other ideas?
You could stand in one corner and throw the phone against the far corner. The phone could begin measurement at a certain point of acceleration and end measurement at deceleration
1) Set iPhone down on the floor starting at one wall with base against the wall.
2) Mark line where iPhone ends at top.
3) Pick iPhone up and move base to where the line is you just drew.
4) Repeat steps 1->3 until you reach the other wall.
5) Multiply the number of lines it took to reach the other wall by the length of the iPhone to get the final measurement.
=)
I remember seeing programs for realtors that involved holding a reference object up in a picture. The program would identify the reference object and other flat surfaces in the image and calculate dimensions from that. It was intended for measuring the exterior of houses. It could follow connected walls that it could assume were at right angles.
Instead of shipping with a reference object, as those programs did, you might be able to use a few common household objects like a piece of printer paper. Let the user pick from a list of common objects what flat item they are holding up to the wall.
Detecting the edges of walls, and of the reference object, is some tricky pattern recognition, followed by some tricky math to convert the found edges to planes. Still better than throwing your phone at the far wall, though.
Emit sounds from the speaker and measure how long they take to return. There are some apps out there that do this already, such as PocketMeter. I suspect this would not be user friendly, and more gimmicky than practical.
Au contraire, mon frère.
This is the most user friendly, not to mention accurate, way of measuring the dimensions of a room.
PocketMeter measures the distance to one wall with an accuracy of half an inch.
If you use the same formulas to measure distance, but have the person stand near a corner of the room (so that the distances to the walls, floor, and ceiling are all different), you should be able to calculate all three measurements (length, width, and height) with one sonar pulse.
Edited, because of the comment, to add:
In an ideal world, you would get 6 pulses, one from each of the surfaces. However, we don't live in an ideal world. Here are some things you'll have to take into account:
The sound pulse causes the iPhone to vibrate. The iPhone microphone picks up this vibration.
The type of floor (carpet, wood, tile) will affect the time that the sound travels to the floor and back to the device.
The sound reflects off more than one surface (wall) and returns to the iPhone.
If I had to guess, because I've done something similar in the past, you're going to have to emit a multi-frequency tone, made up of a low frequency, a medium frequency, and a high frequency. You'll have to perform a fast Fourier Transform on the sound wave you receive to pick out the frequencies that you transmitted.
Now, I don't want to discourage you. The calculations can be done. However, it's going to take some work. After all, PocketMeter has been at it for a while, and they only measure the distance to one wall.
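To make the core of the sonar approach concrete: once you have the emitted pulse and the recorded signal as sample arrays, the echo delay can be found by cross-correlation. Everything below, including the 44.1 kHz sample rate and the crude way the direct pulse is ignored, is illustrative:

```python
# Sketch of the sonar maths: cross-correlate the recording with the emitted
# pulse to find the echo delay, then convert the delay to distance.
# 'emitted' and 'recorded' are assumed to be 1-D numpy arrays of audio samples,
# with recording starting at the moment of emission.
import numpy as np

SAMPLE_RATE = 44100        # Hz, a typical phone audio rate
SPEED_OF_SOUND = 343.0     # m/s at room temperature

def echo_distance(emitted, recorded):
    corr = np.correlate(recorded, emitted, mode="valid")
    corr[:len(emitted)] = 0                        # ignore the direct, un-echoed pulse
    delay_samples = int(np.argmax(corr))
    delay_seconds = delay_samples / SAMPLE_RATE
    return SPEED_OF_SOUND * delay_seconds / 2.0    # sound travels there and back
```

A multi-frequency tone, as suggested above, would add an FFT step to separate the echoes from the different surfaces; this sketch only handles the single strongest echo.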
I think an easier way to do this would be to use the Pythagorean theorem. Most rooms are 8 or 10 feet tall, and if the user can guess the ceiling height accurately, you can use the camera to do some analysis and crunch the numbers. (You might have to find some clever way to detect the angle.)
How to do it
I expect 5 points off of your bottom line for this ;)
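One way the geometry could work, assuming the phone sits on the floor aimed at the line where the far wall meets the ceiling, the tilt angle comes from the accelerometer, and the ceiling height is the user's guess (all numbers below are made up):

```python
# Sketch of the trigonometry behind the "known ceiling height" idea.
# ceiling_height_ft is the user's guess; tilt_deg would come from the accelerometer.
import math

def wall_distance(ceiling_height_ft, tilt_deg):
    # tan(tilt) = ceiling_height / horizontal_distance
    return ceiling_height_ft / math.tan(math.radians(tilt_deg))

print(wall_distance(8.0, 30.0))   # about 13.9 ft for an 8 ft ceiling and a 30 degree tilt
```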
Let me see if this helps. Take an object of known length, place it against the wall, and with the iPhone take a picture of the wall along with the object. Now get the ratio of the wall width to the object width from the image. Since you know the width of the object, you can easily calculate the width of the wall. Repeat this for each wall and you will have the room measurements.
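In numbers, the ratio step is just the following (the measurements are made-up examples and assume the photo is taken roughly square-on, so the scale is uniform across the wall):

```python
# Made-up example: a 1 m stick spans 400 px in the photo, the wall spans 1600 px.
stick_length_m = 1.0
stick_px = 400
wall_px = 1600
wall_width_m = stick_length_m * wall_px / stick_px
print(wall_width_m)   # 4.0 m
```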
Your users could measure a known distance by pacing it off, and thereby calibrate the length of their pace. Then they could enter the distance of each wall in paces, and the phone would convert it to feet. This would probably be very convenient, and would probably be accurate to within 10%.
If they may need more accurate readings, then give them the option of entering in a measurement from a tape measure.
This answer is somewhat similar to Jitendra's answer, but the method he suggests will only work where you can fit the whole wall in a single shot.
Get an object of known size and photograph it held against the wall, with the iPhone held against the opposite wall (two people or Blu-Tack needed). Then you can calculate the distance between the walls by looking at the size of the object (in pixels) in the photo. You could use a PDF to make a printed document the object of known size, and use a 2D barcode to get the iPhone to pick it up.
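The size-to-distance step is the pinhole relation; the focal length in pixels below is a made-up calibration value and the A4 sheet is just an example reference object:

```python
# Pinhole relation: distance = focal_length_px * real_width / width_in_pixels.
focal_length_px = 3000.0     # made-up value from a one-off camera calibration
sheet_width_m = 0.210        # an A4 sheet held against the far wall
width_px = 180.0             # measured width of the sheet in the photo
distance_m = focal_length_px * sheet_width_m / width_px
print(round(distance_m, 2))  # 3.5 m between the walls
```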
When the user wants to measure something, he takes a picture of it, but you're actually taking two separate images, one with the flash on and one with the flash off. Then you can analyse the lighting differences in the images and the flash reflection to determine the scale of the image. This will only work for close and not-too-shiny objects, I guess.
But that's about the only other way I can think of to deduce scale from an image without any fixed objects.