BeagleBone Black and Raspberry Pi fastest response time - raspberry-pi

Does anybody know what the fastest processing times are for the BeagleBone Black and the Raspberry Pi? By processing time, I mean reading a really fast input signal across any two of their input pins.
Context:
I am building a small particle detector, and the PMT output, which I will connect directly to the BeagleBone Black or a Raspberry Pi for processing, is 3.3 V and ~40 ns wide. I am concerned that these signals will be too fast for these micro-computers to even detect, and I can't seem to find that information anywhere.
Thanks in advance.

The BeagleBone Black has Programmable Real-time Units (PRUs) that run at 200 MHz (5 ns per instruction). With these you will be able to reliably detect and count pulses >= 15 ns wide that are at least 30 ns or so apart.
See this presentation. It's really quite awesome: http://beagleboard.org/pru
The Raspberry Pi doesn't have anything similar with real-time/reliability guarantees. I don't know whether there's a clever way you could get it to do what you want.
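For a feel of what the PRU firmware amounts to, here is a rough, untested sketch of a pulse-counting poll loop. The register pointer, pin mask and shared-memory location are placeholders passed in as parameters; on real hardware you would use the __R31 input register and shared-RAM setup provided by TI's PRU toolchain.

```cpp
#include <cstdint>

// Count rising edges on one input bit by busy-polling a memory-mapped input
// register. `input_reg` stands in for the PRU's __R31 input register and
// `shared_count` for a PRU shared-RAM word the Linux side can read back.
void countPulses(volatile const uint32_t *input_reg,
                 volatile uint32_t *shared_count,
                 uint32_t pin_mask) {
    uint32_t count = 0;
    bool was_high = false;
    for (;;) {                                        // tight poll: a handful of instructions
        bool is_high = (*input_reg & pin_mask) != 0;  // per pass, i.e. tens of ns at 200 MHz
        if (is_high && !was_high) {                   // rising edge => one PMT pulse
            ++count;
            *shared_count = count;                    // publish for the ARM/Linux side
        }
        was_high = is_high;
    }
}
```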

Related

Android Long Exposure for Sky Imaging

I am very new to Android Camera API. My question is a bit similar to this but with different intentions.
In astro-photography, long exposures are very common to capture faint Deep Sky Objects (DSOs) like nebulae, galaxies and star clusters. But when we take sky shots with long exposures (say 30 s), the stars appear as lines (star trails) instead of points, due to the continuous rotation of the Earth. Thus astro-photography depends heavily on tracking mounts for cameras and telescopes, which negate this effect by continuously rotating (after polar alignment) in the direction opposite to the Earth's rotation.
I am trying to find if it's possible to develop an algorithm to achieve this "de-rotation".
Remember that we can record videos for planetary imaging (planets are very bright). Stacking software like RegiStax is used to stack the good frames to end up with a nice detailed image. But the same technique cannot be used for DSOs because they are too faint: at 30 FPS each frame only gets 1/30 of a second of exposure, so the sensor won't record enough photons per frame to distinguish them from the background glow.
So my question is: can we stream raw data from the sensor using the Android Camera API to a program that takes care of the derotation, continuously adding the light information to the SAME pixels, instead of to adjacent pixels as would happen due to Earth's rotation?
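For concreteness, here is a minimal sketch of the derotation-and-stacking step I have in mind, leaving the Camera API part out entirely. It assumes the frames are already available as OpenCV Mats, that `pole` is the pixel position of the celestial pole in the frame, and that frame timestamps are known; the sign of the rotation would depend on hemisphere and camera orientation.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Rotate each frame back by the angle the sky has turned since the first
// frame, then accumulate, so light from a star keeps landing on the same pixels.
cv::Mat stackWithDerotation(const std::vector<cv::Mat> &frames,   // CV_8UC3 frames
                            const std::vector<double> &timesSec,  // capture times in seconds
                            cv::Point2f pole) {                   // pixel position of celestial pole
    const double degPerSec = 360.0 / 86164.0;   // sidereal rotation rate of the sky
    cv::Mat sum = cv::Mat::zeros(frames[0].size(), CV_32FC3);
    for (size_t i = 0; i < frames.size(); ++i) {
        double angle = degPerSec * (timesSec[i] - timesSec[0]);   // sign depends on hemisphere/orientation
        cv::Mat R = cv::getRotationMatrix2D(pole, angle, 1.0);
        cv::Mat derotated;
        cv::warpAffine(frames[i], derotated, R, frames[i].size());
        cv::accumulate(derotated, sum);          // add light to the SAME pixels
    }
    sum /= static_cast<double>(frames.size());   // or keep the raw sum and tone-map later
    cv::Mat result;
    sum.convertTo(result, CV_8UC3);
    return result;
}
```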
Many Thanks,
Ahmed
(The attached image was taken with a 30 s exposure on a Xiaomi Redmi Note 9S.)
Orion and Pleiades:

Raspberry Pi cam picture scaling and offset problems

I have a question about the Raspberry Pi cam. I am using OpenCV on a Raspberry Pi 2 to make a line follower for a robot.
Basically the idea is to find the direction of a line in the image using derivatives and color segmentation.
However, I've found some strange behaviour when I compare the results from an ordinary PC webcam and the picam. The algorithm works well on the PC webcam, and the direction indicator sits spot on the line. On the picam there is a strange scale and offset which I don't understand.
On both platforms I have tried both cap.set(CV_CAP_PROP_FRAME_WIDTH/HEIGHT) to rescale the image and the resize function. Both of them still produce the strange offset. I use the circle(...) and line(...) methods in OpenCV to overlay the line and circles on the captured image.
Could anyone help explain this behaviour? See the links below for a visual comparison.
picam
webcam
Regards
I couldn't add the pictures directly because of Stack Exchange's policies, so I had to provide links instead.
I eventually discovered the solution to the problem, and it involved changing the order of the taps of a derivative filter between the Windows and Linux versions of the program. Exactly why this is the case is a mystery to me, and may involve differences in compiler optimization (Visual Studio 13 vs g++ 4.6.3), or maybe a silly error on my part.
On the PC I use {1 0 -1} filter taps; on the RPi 2 I have to use {-1 0 1} instead.
The filter runs on an S8 (-127..127) image, so there is no issue of wraparound.
At any rate, I consider the issue closed.
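For illustration only (this is not the original program), the fix amounts to something like the following, with the tap order switched between the two builds:

```cpp
#include <opencv2/opencv.hpp>

// Horizontal derivative of a grayscale image with a 3-tap kernel.
// The two tap orders differ only in sign, which flips the reported direction.
cv::Mat horizontalDerivative(const cv::Mat &gray, bool onRaspberryPi) {
    float pcTaps[]  = {  1.f, 0.f, -1.f };   // taps used in the PC build
    float rpiTaps[] = { -1.f, 0.f,  1.f };   // taps needed in the RPi 2 build
    cv::Mat kernel(1, 3, CV_32F, onRaspberryPi ? rpiTaps : pcTaps);
    cv::Mat dx;
    cv::filter2D(gray, dx, CV_16S, kernel);  // signed output type, so no wraparound
    return dx;
}
```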

Fast rotation of an image (bitmap) on an iPhone for an arbitrary degree

I need to rotate a full-size photo (about 8 MB) as fast as possible on an iPhone (4s and up), by an arbitrary angle. The code to do this with Core Image is easy enough, but not fast: it takes about 1.5 seconds on a 4s. Please note that the purpose of this rotation is further image processing in memory, NOT display on the screen.
Is there any hope of getting this down to sub-second, perhaps using the DSP (via the Accelerate framework) or OpenGL (keeping in mind that we have to copy the bits in and out of whatever buffer we're using)? If this is hopeless then we have other (but more complicated) ways to tackle the job. I have not written OpenGL code before and would like some assurance that it will actually work before I spend significant time on it!
Thank you,
Ken
Since you have it running at 1.5 s with no hardware acceleration, I'd say it's safe to assume you can get it under a second with OpenGL.
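If you want to try the Accelerate route the question mentions before writing any OpenGL, a minimal, untested sketch would look something like this; getting the photo's pixels into and out of the vImage_Buffer (e.g. from a CGImage) is the part you would still have to write, and the flag choice here is just an assumption:

```cpp
#include <Accelerate/Accelerate.h>

// Rotate an ARGB8888 bitmap by an arbitrary angle with vImage.
// src and dst must already be filled-in vImage_Buffers of the desired sizes.
vImage_Error rotateARGB8888(const vImage_Buffer *src,
                            const vImage_Buffer *dst,
                            float angleRadians) {
    const Pixel_8888 opaqueBlack = { 255, 0, 0, 0 };   // ARGB byte order: alpha first (assumption: black fill)
    // NULL temp buffer lets vImage allocate scratch space itself.
    return vImageRotate_ARGB8888(src, dst, NULL, angleRadians, opaqueBlack,
                                 kvImageBackgroundColorFill);
}
```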

How to improve edge detection in iPhone apps?

I'm currently developing an iPhone app that uses edge detection. I took some sample pictures and noticed that they came out pretty dark indoors. Flash is obviously an option, but it usually blinds the camera and misses some edges.
Update: I'm mostly interested in iPhone tips, i.e. whether there is a way to get better pictures.
Have you tried playing with contrast and/or brightness? If you increase contrast before doing the edge detection, you should get better results (although it depends on the edge detection algorithm you're using and whether it auto-magically fixes contrast first).
Histogram equalisation may prove useful here, as it should allow you to maintain approximately equal contrast levels between pictures. I'm sure there's an algorithm implemented in OpenCV to handle it (although I've never used it on iOS, so I can't be sure).
UPDATE: I found this page on performing Histogram Equalization in OpenCV
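As a minimal sketch of that suggestion (using OpenCV's equalizeHist; the Canny thresholds are placeholders to tune, and none of this is iPhone-specific):

```cpp
#include <opencv2/opencv.hpp>

// Equalise the grayscale histogram before running edge detection, so dark
// indoor shots use the full intensity range.
cv::Mat preprocessForEdges(const cv::Mat &bgr) {
    cv::Mat gray, equalised, edges;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, equalised);        // spreads out the intensity distribution
    cv::Canny(equalised, edges, 50, 150);     // placeholder thresholds
    return edges;
}
```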

iPhone, image processing

I am building a night-vision application, but I can't find any useful algorithm that I can apply to dark images to make them clearer. Can anyone please suggest a good algorithm?
Thanks in advance
With the size of the iPhone lens and sensor, you are going to have a lot of noise no matter what you do. I would practice manipulating the image in Photoshop first; you'll probably find that it is useful to select a white point from a sample of the brighter pixels in the image and to use a curve. You'll probably also need to run an anti-noise filter and smoother. Edge detection or condensation may allow you to emphasize some areas of the image. As for specific algorithms to perform each of these filters, there are a lot of computer science books and lists on the subject. Here is one list:
http://www.efg2.com/Lab/Library/ImageProcessing/Algorithms.htm
Many OpenGL implementations can be found once you know the standard name for an algorithm you need.
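To make the white-point/curve/smoothing suggestion concrete, here is a rough sketch in OpenCV (my own translation of those Photoshop steps, not something from the list above; the percentile, gamma and kernel size are placeholders to experiment with):

```cpp
#include <algorithm>
#include <opencv2/opencv.hpp>

// Brighten a dark image: pick a white point from the brighter pixels,
// stretch towards it with a gamma-style curve, then smooth the amplified noise.
cv::Mat brighten(const cv::Mat &bgr) {
    cv::Mat gray;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);

    // Use roughly the 99th-percentile brightness as the white point.
    cv::Mat sorted = gray.reshape(1, 1).clone();
    cv::sort(sorted, sorted, cv::SORT_EVERY_ROW + cv::SORT_ASCENDING);
    double whitePoint = sorted.at<uchar>(0, static_cast<int>(sorted.cols * 0.99));

    // Scale so the white point maps to 255, then apply a gamma curve (< 1 lifts shadows).
    cv::Mat scaled, curved;
    bgr.convertTo(scaled, CV_32FC3, 255.0 / std::max(whitePoint, 1.0));
    cv::pow(scaled / 255.0, 0.6, curved);      // gamma = 0.6 is a placeholder
    curved.convertTo(curved, CV_8UC3, 255.0);

    // Knock the amplified noise back down.
    cv::Mat denoised;
    cv::medianBlur(curved, denoised, 3);
    return denoised;
}
```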
Real (useful) night vision typically uses an infrared light and an infrared-tuned camera. I think you're out of luck.
Of course using the iPhone 4's camera light could be considered "night vision" ...
Your real problem is the camera and not the algorithm.
You can apply algorithms to clarify images, but they won't turn a dark image into a well-lit one by magic ^^
But if you want to try some algorithms, you should take a look at OpenCV (http://opencv.willowgarage.com/wiki/); there are iPhone ports, for example: http://ildan.blogspot.com/2008/07/creating-universal-static-opencv.html
I suppose there are two ways to refine a dark image: the first is active, which uses infrared; the other is passive, which manipulates the pixels of the image....
The images will be noisy, but you can always try scaling up the pixel values (all of the components in RGB, or just the luminance in HSV; either linearly or by applying some sort of curve; either globally or locally to just the darker areas) and saturating them, and/or using a contrast edge-enhancement filter algorithm.
If the camera and subject matter are sufficiently motionless (tripod, etc.) you could try summing each pixel over several image captures. Or you could do what some HDR apps do, and try aligning images before pixel processing across time.
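A rough sketch of the frame-summing idea, assuming the frames have already been captured from a tripod and skipping the alignment step that HDR apps do:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Sum several captures of a static scene: noise averages out, signal adds up.
cv::Mat sumFrames(const std::vector<cv::Mat> &frames, double gain = 1.0) {
    cv::Mat acc = cv::Mat::zeros(frames[0].size(), CV_32FC3);
    for (const cv::Mat &f : frames)
        cv::accumulate(f, acc);                       // running per-pixel sum
    cv::Mat out;
    // Scale back into 8-bit range; gain > 1 additionally lifts the whole image.
    acc.convertTo(out, CV_8UC3, gain / frames.size());
    return out;
}
```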
I haven't seen any documentation on whether the iPhone's camera sensor has a wider wavelength gamut than the human eye.
I suggest conducting a simple test before trying to actually implement this:
Save a photo made in a dark room.
Open in GIMP (or a similar application).
Apply "Stretch HSV" algorithm (or equivalent).
Check if the resulting image quality is good enough.
This should give you an idea as to whether your camera is good enough to try it.
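If that GIMP test looks promising, a programmatic equivalent could be sketched with OpenCV roughly like this (my simplification: only the V/brightness channel is stretched here, not everything GIMP's "Stretch HSV" touches):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Stretch the V (brightness) channel of an image to the full 0-255 range.
cv::Mat stretchValueChannel(const cv::Mat &bgr) {
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    std::vector<cv::Mat> ch;
    cv::split(hsv, ch);
    cv::normalize(ch[2], ch[2], 0, 255, cv::NORM_MINMAX);  // stretch brightness
    cv::merge(ch, hsv);
    cv::Mat out;
    cv::cvtColor(hsv, out, cv::COLOR_HSV2BGR);
    return out;
}
```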