Convolving an image with a boxcar kernel in Jython for Fiji plugins

I am trying to do some image processing in Fiji using Jython and have run into trouble. I am trying to develop a plugin where an image is convolved with a boxcar kernel. By recording a macro I get the following, which could be pasted into the Jython script:
run("Convolve...", "text1=[0.04 0.04 0.04 0.04 0.04\n0.04 0.04 0.04 0.04 0.04\n0.04 0.04 0.04 0.04 0.04\n0.04 0.04 0.04 0.04 0.04\n0.04 0.04 0.04 0.04 0.04\n]");
where text1 is the string that defines the convolution kernel. However, the problem is that the plugin must let the user define the size of the boxcar kernel, so I cannot hardcode this string into the script. Does anyone know how to proceed? I am open to alternative methods as long as the final result is a boxcar convolution.
Thank you very much in advance

You can construct the string that represents the kernel dynamically in Jython for a given size and pass it as a parameter to ImageJ.
kernel = getArgument();  // pass the full kernel string as the macro argument
run("Convolve...", "text1=&kernel");
Alternatively, pass the size as a parameter and construct the string using the ImageJ macro language itself:
args = split(getArgument());
size = parseInt(args[0]);
kernel = "";
for (i = 0; i < size; i++) {
    for (j = 0; j < size; j++) kernel = kernel + (1/(size*size)) + " ";
    kernel = kernel + "\n";
}
run("Convolve...", "text1=&kernel");

Years later, but for posterity:
Looking at the ImageJ API, you can instantiate the Convolver class directly and call it with the necessary arguments.
For example:
from ij import IJ
from ij import ImagePlus
from ij.plugin.filter import Convolver

imp = IJ.openImage("http://imagej.nih.gov/ij/images/blobs.gif")
ip = imp.getProcessor()

# 3x3 kernel given row by row (a Sobel edge filter in this example)
kernel = [1.0, 2.0, 1.0,
          0.0, 0.0, 0.0,
          -1.0, -2.0, -1.0]

Convolver().convolve(ip, kernel, 3, 3)  # kernel width and height are 3
out = ImagePlus("convolved", ip)
out.show()
From there it should be straightforward to ask the user for the kernel size and build the kernel list.
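Since the original question was about a boxcar, the kernel for a user-chosen size n is just n*n equal weights; continuing the snippet above, a minimal sketch:
n = 5                                 # kernel size chosen by the user
kernel = [1.0 / (n * n)] * (n * n)    # n*n equal weights that sum to 1
Convolver().convolve(ip, kernel, n, n)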

How to select a 'sensitivity' value for my circular Hough transform?

I'm using the circular Hough transform to detect circles in an image. There are four basic parameters needed by the circular Hough function (imfindcircles):
Image name
Radius range
Object polarity (dark, in my case)
Sensitivity value (default: 0.5)
[centers, radii] = imfindcircles(img1, [r1 r2], ...
    'ObjectPolarity', 'dark', 'Sensitivity', sens);
The sensitivity value varies between 0 and 1. Here, I'm varying it between 0.9 and 0.95 (the number of circles detected increases with the sensitivity value, up to a maximum at 1). For different images, optimum results are obtained at different sensitivity values.
Let's say for '1.jpg' with radius range [3,8], the function returned a different number of circles for different sensitivity values:
0.90 -> 553
0.91 -> 958
0.92 -> 1412
0.93 -> 1799
0.94 -> 2164
0.95 -> 2453
0.96 -> 2806
0.97 -> 3170
1.00 -> 6393
But for this image, the most accurate value lies between 0.94 and 0.95 (it could be 0.941, 0.942, ..., 0.949, etc.), where the count is 2200-2400. There are a lot of false positives for values in [0.96, 1].
I need a function that tells me when to stop changing the sensitivity, based on a sudden increase in count, i.e. when a stable state is reached.
For the above case, a stable state is reached in [0.94, 0.96]: the output changes gradually between these points (roughly 2100, 2400, 2800).
Is there a mathematical technique to decide when to stop, based on the rate of change of the counts?
Or is there any better solution to my problem?
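For what it's worth, the rate-of-change idea can be sketched like this (Python, over the counts listed above; the 15% stopping threshold is an arbitrary assumption to tune per image):
# Counts measured at increasing sensitivity values (from the list above).
sens   = [0.90, 0.91, 0.92, 0.93, 0.94, 0.95, 0.96, 0.97]
counts = [553, 958, 1412, 1799, 2164, 2453, 2806, 3170]

# Relative growth of the count at each sensitivity step.
growth = [(b - a) / float(a) for a, b in zip(counts, counts[1:])]

# Stop at the first step whose relative growth drops below the threshold,
# i.e. where the curve flattens into the "stable state".
for s, g in zip(sens[1:], growth):
    if g < 0.15:
        print("stable state reached around sensitivity %.2f" % s)
        break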

Calculate random value * unknown value = 1

I'm having trouble importing models from Maya FBX files into Unity at the right scale. I tracked the problem down to Unity's import of the FBX file.
There's no real workaround for this other than changing the .meta file by hand:
useFileScale: 0
Since modelImporter.isFileScaleUsed in Unity is read-only, I can't change that value with a script, but I can change the global scale:
globalScale
Say the file scale is 0.01 and the normal scale value is 1. How can I calculate 0.01 * 100 = 1 with UnityScript, i.e. how do I get the value 100 out of the equation 0.01 * ? = 1?
Divide 1 by the value you know: if a * b = 1, then b = 1/a.
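In code that is a single division, for example (a Python sketch with made-up variable names):
file_scale = 0.01                # the scale Unity reads from the FBX file
global_scale = 1.0 / file_scale  # 0.01 * 100.0 == 1.0, so the factor is 100.0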

Matlab code to generate images from entropy

Could you please help me with this question?
Assume that on average in binary images, 75% of the pixels are white and 25% are black. What is the entropy of this source? Model this source in Matlab and generate some sample images according to this process.
To find the entropy, you just need to apply the definition:
H = -0.25 * log2(0.25) - 0.75 * log2(0.75)
which evaluates to about 0.811. Since we are using log2, the result is in bits (per pixel).
As for generating a Matlab B&W (i.e. binary) image of size 512x512, you can simply do:
im = rand(512) < 0.75;  % each pixel is white (1) with probability 0.75
By convention, true = 1 = white and false = 0 = black.

Create an OpenCV camera matrix for iPhone 5 solvePnP

I am developing an application for the iPhone using OpenCV. I have to use the method solvePnPRansac:
http://opencv.willowgarage.com/documentation/cpp/camera_calibration_and_3d_reconstruction.html
For this method I need to provide a camera matrix:
[ fx  0  cx ]
[  0 fy  cy ]
[  0  0   1 ]
where cx and cy are the pixel coordinates of the image center and fx and fy are the focal lengths, but that is all the documentation says. I am unsure what to provide for these focal lengths. The iPhone 5 has a focal length of 4.1 mm, but I do not think that this value is usable as is.
I checked another website:
http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
which shows how opencv creates camera matrices. Here it states that focal lengths are measured in pixel units.
I checked another website:
http://www.velocityreviews.com/forums/t500283-focal-length-in-pixels.html
(about half way down)
it says that the focal length can be converted from millimeters to pixels using the equation: fx = fy = focalMM * pixelDensity / 25.4;
Another link I found states that:
fx = focalMM * width / sensorSizeMM;
fy = focalMM * height / sensorSizeMM;
I am unsure about these equations and how to properly create this matrix.
Any help, advice, or links on how to create an accurate camera matrix (especially for the iPhone 5) would be greatly appreciated,
Isaac
p.s. I think that (fx/fy) or (fy/fx) might be equal to the aspect ratio of the camera, but that might be completely wrong.
UPDATE:
Pixel coordinates to 3D line (opencv)
Using this link, I can figure out how fx and fy are meant to be formatted, because they are used to scale angles relative to their distance from the center. Therefore, fx and fy are likely in pixels/(unit length), but I'm still not sure what this unit length needs to be. Can it be arbitrary, as long as x and y are scaled to each other?
You can get an initial (rough) estimate of the focal length in pixels by dividing the focal length in mm by the width of a pixel of the camera's sensor (CCD, CMOS, whatever).
You get the former from the camera manual, or read it from the EXIF header of an image taken at full resolution. Finding out the latter is a little more complicated: you may look up on the interwebs the sensor's spec sheet, if you know its manufacturer and model number, or you may just divide the overall width of its sensitive area by the number of pixels on the side.
Absent other information, it's usually safe to assume that the pixels are square (i.e. fx == fy), and that the sensor is orthogonal to the lens's focal axis (i.e. that the term in the first row and second column of the camera matrix is zero). Also, the pixel coordinates of the principal point (cx, cy) are usually hard to estimate accurately without a carefully designed calibration rig, and an as-carefully executed calibration procedure (that's because they are intrinsically confused with the camera translation parallel to the image plane). So it's best to just set them equal to the geometrical center of the image, unless you know that the image has been cropped asymmetrically.
Therefore, your simplest camera model has only one unknown parameter, the focal length f = fx = fy.
Word of advice: in your application it is usually more convenient to carry around the horizontal (or vertical) field-of-view angle rather than the focal length in pixels, because the FOV is invariant to image scaling.
The "focal length" you are dealing with here is simply a scaling factor from objects in the world to camera pixels, used in the pinhole camera model (Wikipedia link). That's why its units are pixels/unit length. For a given f, an object of size L at a distance (perpendicular to the camera) z, would be f*L/z pixels.
So, you could estimate the focal length by placing an object of known size at a known distance from your camera and measuring its size in the image, as sketched below. You could also assume the principal point is the center of the image. You should definitely not ignore the lens distortion (the dist_coef parameter in solvePnPRansac).
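A back-of-the-envelope sketch of that measurement (Python; the object size, distance and measured pixel extent are all made-up numbers):
# Pinhole model: size_in_pixels = f * object_size / distance.
object_size = 0.20  # a 20 cm wide target (assumption)
distance = 1.00     # placed 1 m from the camera (assumption)
size_px = 640.0     # measured width of the target in the image (assumption)

f_px = size_px * distance / object_size  # focal length in pixels (3200 here)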
In practice, the best way to obtain the camera matrix and distortion coefficients is to use a camera calibration tool. You can download and use the MRPT camera_calib software from this link, there's also a video tutorial here. If you use matlab, go for the Camera Calibration Toolbox.
Here you have a table with the spec of the cameras for iPhone 4 and 5.
The calculation is:
double f = 4.1;                           // iPhone 5 focal length in mm
double resX = (double)(sourceImage.cols); // image width in pixels
double resY = (double)(sourceImage.rows); // image height in pixels
double sensorSizeX = 4.89;                // sensor width in mm
double sensorSizeY = 3.67;                // sensor height in mm
double fx = f * resX / sensorSizeX;       // focal length in pixels (x)
double fy = f * resY / sensorSizeY;       // focal length in pixels (y)
double cx = resX/2.;                      // principal point: image center
double cy = resY/2.;
Try this:
func getCamMatrix()->(Float, Float, Float, Float)
{
let format:AVCaptureDeviceFormat? = deviceInput?.device.activeFormat
let fDesc:CMFormatDescriptionRef = format!.formatDescription
let dim:CGSize = CMVideoFormatDescriptionGetPresentationDimensions(fDesc, true, true)
// dim = dimensions of the final image
let cx:Float = Float(dim.width) / 2.0;
let cy:Float = Float(dim.height) / 2.0;
let HFOV : Float = format!.videoFieldOfView
let VFOV : Float = ((HFOV)/cx)*cy
let fx:Float = abs(Float(dim.width) / (2 * tan(HFOV / 180 * Float(M_PI) / 2)));
let fy:Float = abs(Float(dim.height) / (2 * tan(VFOV / 180 * Float(M_PI) / 2)));
return (fx, fy, cx, cy)
}
Old thread, present problem.
As Milo and Isaac mentioned after Milo's answer, there seems to be no "common" params available for, say, the iPhone 5.
For what it is worth, here is the result of a run with the MRPT calibration tool, with a good old iPhone 5:
[CAMERA_PARAMS]
resolution=[3264 2448]
cx=1668.87585
cy=1226.19712
fx=3288.47697
fy=3078.59787
dist=[-7.416752e-02 1.562157e+00 1.236471e-03 1.237955e-03 -5.378571e+00]
Average err. of reprojection: 1.06726 pixels (OpenCV error=1.06726)
Note that dist means distortion here.
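To plug those numbers into OpenCV, the camera matrix and distortion vector would be assembled like this (a Python sketch using only the values above, and assuming the usual (k1, k2, p1, p2, k3) ordering for dist):
import numpy as np

# Intrinsics from the MRPT calibration of the iPhone 5 above (3264x2448).
fx, fy = 3288.47697, 3078.59787
cx, cy = 1668.87585, 1226.19712

camera_matrix = np.array([[fx, 0.0, cx],
                          [0.0, fy, cy],
                          [0.0, 0.0, 1.0]])

dist_coefs = np.array([-7.416752e-02, 1.562157e+00, 1.236471e-03,
                       1.237955e-03, -5.378571e+00])

# Both can then be passed to cv2.solvePnP / cv2.solvePnPRansac.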
I am conducting experiments on a toy project with these parameters, and they are kind of OK. If you do use them in your own project, please keep in mind that they may be barely good enough to get started. The best option is to follow Milo's recommendation with your own data. The MRPT tool is quite easy to use, with the checkerboard they provide. Hope this helps getting started!

iPhone - AVAudioPlayer - convert decibel level into percent

I'd like to update an existing iPhone application which uses AudioQueue for playing audio files. The levels (peakPowerForChannel, averagePowerForChannel) were linear, from 0.0f to 1.0f.
Now I'd like to use the simpler class AVAudioPlayer, which works fine; the only issue is that the levels are now in decibels, from -120.0f to 0.0f, rather than linear.
Does anyone have a formula to convert them back to linear values between 0.0f and 1.0f?
Thanks
Tom
Several Apple examples use the following formula to convert the decibels into a linear range (from 0.0 to 1.0):
double percentage = pow (10, (0.05 * power));
where power is the value you get from one of the various level meter methods or functions, such as AVAudioPlayer's averagePowerForChannel:
Math behind the Linear and Logarithmic value conversion:
1. Linear to Decibel (logarithmic):
decibelValue = 20.0f * log10(linearValue)
Note: log is base 10
Suppose the linear value is a percentage in the range [0 (min volume) to 100 (max volume)]; then the decibel value for half volume (50%) is
decibelValue = 20.0f * log10(50.0f/100.0f) ≈ -6 dB
Full volume:
decibelValue = 20.0f * log10(100.0f/100.0f) = 0 dB
Complete mute:
decibelValue = 20.0f * log10(0/100.0f) = -infinity
2. Decibel(logarithmic) to Linear:
LinearValue = pow(10.0f, decibelValue/20.0f)
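Putting both directions into code (a Python sketch for illustration):
import math

def linear_to_db(linear):
    # amplitude in (0.0, 1.0] -> decibels; 1.0 -> 0 dB, 0.5 -> about -6 dB
    return 20.0 * math.log10(linear)

def db_to_linear(db):
    # decibels -> amplitude; 0 dB -> 1.0, -120 dB -> 1e-6
    return 10.0 ** (db / 20.0)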
Apple uses a lookup table in their SpeakHere sample that converts from dB to a linear value displayed on a level meter.
I moulded their calculation into a small routine; see here.