Can I set the step size of bounds for scipy differential evolution? - scipy

I am using an optimization strategy based on SciPy's differential evolution.
This is my code:
from scipy.optimize import differential_evolution

bounds = [(-10., 3.), (-5., 5.), (-5., 5.)]
result = differential_evolution(t.findMax, bounds, maxiter=100)
Is there any way to set the step size of the bounds?
In my case a step size of 0.5 is enough to check,
for example at bounds[1]:
-5, -4.5, -4, -3.5, ..., 4.5, 5
If that is possible, it would save me a lot of time.
Thanks
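differential_evolution has no built-in step-size option for the bounds, but you can get the same effect by snapping each candidate vector onto the 0.5 grid inside the objective function, so the optimizer only ever evaluates grid points. A minimal sketch, with a placeholder objective standing in for t.findMax from the question:

import numpy as np
from scipy.optimize import differential_evolution

STEP = 0.5

def objective(x):
    # placeholder for t.findMax from the question
    return np.sum(x**2)

def snapped(x):
    # quantize every parameter to the nearest multiple of STEP, so only
    # grid values (-5, -4.5, ..., 4.5, 5) are ever evaluated
    return objective(np.round(np.asarray(x) / STEP) * STEP)

bounds = [(-10., 3.), (-5., 5.), (-5., 5.)]
result = differential_evolution(snapped, bounds, maxiter=100)
print(np.round(result.x / STEP) * STEP, result.fun)  # snap the reported x too

Recent SciPy releases (1.9+) also accept an integrality argument to differential_evolution; rescaling a variable by a factor of 2 and marking it integral yields the same 0.5 grid. Check the docs of your version.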

Related

Dynamically setting a 'targetSize' for centerCropWindow2d()

Following the example from the documentation page of the centerCropWindow2d function, I am trying to dynamically crop an image based on a 'scale' value set by the user. In the end, this code would be used in a loop that scales an image at different increments and compares the landmarks between scales using feature detection and extraction methods.
I wrote some test code to try to isolate one instance of this user-specified image cropping:
file = 'frameCropped000001.png';
image = imread(file);
scale = 1.5;
scaled_width = scale * 900;
scaled_height = scale * 635;
target_size = [scaled_width scaled_height];
scale_window = centerCropWindow2d(size(image), target_size);
image2 = imcrop(image, scale_window);
figure;
imshow(image);
figure;
imshow(image2);
but I am met with this error:
Error using centerCropWindow2d (line 30)
Expected input to be integer-valued.
Error in testRIA (line 20)
scale_window = centerCropWindow2d(size(image), target_size);
Is there no way to use this function the way I explained above? If not, what's the easiest way to "scale" an image without just resizing it? (That is, if I scale it by 0.5, the image stays the same size but is zoomed in by 2x.)
Thank you in advance.
I didn't take into account that the height and width for some scales would NOT be whole integers. Since MATLAB cannot crop images at fractional pixel positions, the "Expected input to be integer-valued." error popped up.
I solved my issue by applying floor() (MATLAB's equivalent of Math.floor()) to the 'scaled_width' and 'scaled_height' variables, i.e. target_size = [floor(scaled_width) floor(scaled_height)];

How to get rectangular inclination from axis-aligned bounding box?

My binary image contains rotated rectangular objects of known size. I'd like to get an object's inclination from the axis-aligned bounding box that MATLAB's regionprops returns. Here is my approach:
Let the bounding box width be W, the side of the rectangle be C, and the inclination be alpha.
Then
W = C*(sin(alpha) + cos(alpha))
Using the Weierstrass substitution t = tan(alpha/2):
sin(alpha) = 2t/(1+t^2), cos(alpha) = (1-t^2)/(1+t^2)
After some simplification:
-(1 + W/C)*t^2 + 2t + (1 - W/C) = 0
Solving the equation for t = tan(alpha/2) with
a = -(1 + W/C), b = 2, c = 1 - W/C, t = (-b + sqrt(b^2 - 4ac))/(2a)
For any inclination below 45 degrees the discriminant is positive.
The logic seems to be OK, and so does the math. Could you please point out where I am making a mistake, or suggest a better way to get the inclination?
Here is the corresponding MATLAB code:
img = false(25,25);
img(5:16,5:16) = true;
rot_img = imrotate(img, 30, 'crop');
props = regionprops(bwlabel(rot_img),'BoundingBox');
bbox = cat(1,props.BoundingBox);
w = bbox(3); % bounding-box width W measured from the image
h = 12;      % known side length C of the square
a = -1*(1+w/h); b = 2; c = 1 - w/h;
D = b^2 - 4*a*c;
alpha = 2*atand((-b + sqrt(D))/(2*a));
%alpha = 25.5288
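A quick numeric check of the algebra with an exact, non-rasterized bounding-box width (a Python sketch for brevity) shows that the formula itself is exact; the 25.5288 above therefore comes from measuring W on a pixel grid:

import numpy as np

C = 12.0
alpha = np.deg2rad(30.0)
W = C * (np.sin(alpha) + np.cos(alpha))   # exact bounding-box width

a = -(1.0 + W / C); b = 2.0; c = 1.0 - W / C
t = (-b + np.sqrt(b**2 - 4*a*c)) / (2*a)  # t = tan(alpha/2)
print(np.rad2deg(2.0 * np.arctan(t)))     # 30.0, exactly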
EDIT: Thank you for the trigonometry hints. They significantly simplify the calculations, but they give the wrong answer. As I now understand it, the question was asked the wrong way. What I really need is to find the inclination of short lines (10-50 pixels) with high accuracy (+/- 0.5 deg); the lines' positions are of no interest.
The approaches used in the question and answers show better accuracy for long lines; for c = 100 the error is less than 0.1 degree. That means we are up against rasterization error here and need subpixel accuracy. At the moment I have only one algorithm that solves the problem, the Radon transform, but I hope you can recommend something else.
p = bwperim(rot_img);
theta=0:0.1:179.9;
[R,xp] = radon(p,theta); %Radon transform of contours
a=imregionalmax(R,true(3,3)); %Regional maxima of the transform
[r,c]=find(a); idx=sub2ind(size(a),r,c); maxvals=R(idx);
[val,midx]=sort(maxvals,'descend'); %Choose 4 highest maxima
mean(rem(theta(c(midx(1:4))),90)) %And average corresponding angles
%29.85
If the rectangle is a square:
w/c=sin(a)+cos(a)
(w/c)^2=1+sin(2a)
sin(2a)=(w/c)^2-1
a=0.5*arcsin((w/c)^2-1)
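A two-line check of this shortcut (a Python sketch; it holds for inclinations up to 45 degrees):

import numpy as np

a_true = np.deg2rad(30.0)
w_over_c = np.sin(a_true) + np.cos(a_true)
print(np.rad2deg(0.5 * np.arcsin(w_over_c**2 - 1.0)))   # 30.0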
Maybe use the regionprops function with the 'Orientation' option...

Matlab gradient equivalent in opencv

I am trying to migrate some code from MATLAB to OpenCV and need an exact replica of the gradient function. I have tried the cv::Sobel function, but for some reason the values in the resulting cv::Mat are not the same as in the MATLAB version. I need the X and Y gradients in separate matrices for further calculations.
Any workaround that could achieve this would be great.
A plain cv::Sobel call does not give MATLAB's gradient: the default 3x3 Sobel kernel smooths perpendicular to the derivative direction and uses different weights, which is not what we want.
What we want is the central difference
(f(i+1,j) - f(i-1,j)) / 2
So we need to apply
Mat kernelx = (Mat_<float>(1, 3) << -0.5, 0, 0.5);
Mat kernely = (Mat_<float>(3, 1) << -0.5, 0, 0.5);
Mat fx, fy;
filter2D(src, fx, -1, kernelx);
filter2D(src, fy, -1, kernely);
MATLAB treats border pixels differently from inner pixels, so the code above is wrong at the border values. Extending the border with BORDER_CONSTANT does not help either, since MATLAB uses one-sided differences at the edges rather than any constant padding.
So as far as the border values go, I do not have a very neat answer; just compute the first derivative at the borders by hand...
You have to call Sobel twice, with the arguments:
xorder = 1, yorder = 0
and
xorder = 0, yorder = 1
You have to select the appropriate kernel size.
See the documentation.
It might still be that the MATLAB implementation is different; ideally you should find out which kernel was used there...
Edit:
If you need to specify your own kernel, you can use the more generic filter2D. Your destination depth will be CV_16S (16-bit signed).
Matlab computes the gradient differently for interior rows and border rows (the same is true for the columns of course). At the borders, it is a simple forward difference gradY(1) = row(2) - row(1). The gradient for interior rows is computed by the central difference gradY(2) = (row(3) - row(1)) / 2.
I think you cannot achieve the same result by running a single convolution filter over the whole matrix in OpenCV. Use cv::Sobel() with ksize = 1 and scale = 0.5 (the unsmoothed kernel is [-1 0 1], i.e. twice the central difference), then treat the borders separately (either manually or by applying a [1 -1] filter).
Pei's answer is partly correct. Matlab uses these calculations for the borders:
G(:,1) = A(:,2) - A(:,1);
G(:,N) = A(:,N) - A(:,N-1);
so I used the following OpenCV code to complete the gradient:
static cv::Mat kernelx = (cv::Mat_<double>(1, 3) << -0.5, 0, 0.5);
static cv::Mat kernely = (cv::Mat_<double>(3, 1) << -0.5, 0, 0.5);
cv::Mat fx, fy;
cv::filter2D(Image, fx, -1, kernelx, cv::Point(-1, -1), 0, cv::BORDER_REPLICATE);
cv::filter2D(Image, fy, -1, kernely, cv::Point(-1, -1), 0, cv::BORDER_REPLICATE);
fx.col(fx.cols - 1) *= 2;
fx.col(0) *= 2;
fy.row(fy.rows - 1) *= 2;
fy.row(0) *= 2;
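For reference, here is the same recipe as a Python/NumPy sketch. np.gradient uses exactly MATLAB's convention (central differences inside, one-sided differences at the borders), so it serves as a convenient ground truth:

import cv2
import numpy as np

img = np.random.rand(5, 7)                 # float image, no saturation issues

kx = np.array([[-0.5, 0.0, 0.5]])          # d/dx central-difference kernel
fx = cv2.filter2D(img, -1, kx, borderType=cv2.BORDER_REPLICATE)
fx[:, 0] *= 2                              # turn the replicated central
fx[:, -1] *= 2                             # differences into one-sided ones

fy = cv2.filter2D(img, -1, kx.T, borderType=cv2.BORDER_REPLICATE)
fy[0, :] *= 2
fy[-1, :] *= 2

gy, gx = np.gradient(img)                  # MATLAB's gradient convention
print(np.allclose(fx, gx), np.allclose(fy, gy))   # True True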
Jorrit's answer is partly correct.
In some cases the value of the directional derivative may be negative. MATLAB retains these negative numbers, but if the destination depth is left at -1 with an unsigned 8-bit source, OpenCV saturates them to 0. Request a signed or floating-point destination depth (e.g. CV_16S or CV_64F) in filter2D to keep them, as illustrated below.
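To illustrate the saturation point, a small sketch: with an 8-bit source and ddepth = -1, filter2D returns CV_8U and clips negative results to 0; asking for a floating-point depth keeps them:

import cv2
import numpy as np

img8 = np.array([[10, 50, 10]], dtype=np.uint8)
k = np.array([[-0.5, 0.0, 0.5]])

clipped = cv2.filter2D(img8, -1, k, borderType=cv2.BORDER_REPLICATE)
signed = cv2.filter2D(img8, cv2.CV_64F, k, borderType=cv2.BORDER_REPLICATE)
print(clipped)   # [[20  0  0]]   -- the -20 was saturated to 0
print(signed)    # [[ 20.   0. -20.]]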

create opencv camera matrix for iPhone 5 solvepnp

I am developing an application for the iPhone using OpenCV. I have to use the method solvePnPRansac:
http://opencv.willowgarage.com/documentation/cpp/camera_calibration_and_3d_reconstruction.html
For this method I need to provide a camera matrix:
[ fx  0  cx ]
[  0  fy  cy ]
[  0   0   1 ]
where cx and cy represent the center pixel positions of the image and fx and fy represent focal lengths, but that is all the documentation says. I am unsure what to provide for these focal lengths. The iPhone 5 has a focal length of 4.1 mm, but I do not think that this value is usable as is.
I checked another website:
http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
which shows how opencv creates camera matrices. Here it states that focal lengths are measured in pixel units.
I checked another website:
http://www.velocityreviews.com/forums/t500283-focal-length-in-pixels.html
(about halfway down)
It says that the focal length can be converted from millimeters to pixels using the equation: fx = fy = focalMM * pixelDensity / 25.4;
Another link I found states that:
fx = focalMM * width / sensorSizeMM;
fy = focalMM * height / sensorSizeMM;
I am unsure about these equations and how to properly create this matrix.
Any help, advice, or links on how to create an accurate camera matrix (especially for the iPhone 5) would be greatly appreciated.
Isaac
p.s. I think that (fx/fy) or (fy/fx) might be equal to the aspect ratio of the camera, but that might be completely wrong.
UPDATE:
Pixel coordinates to 3D line (opencv)
Using this link, I can figure out how they want fx and fy to be formatted, because they use them to scale angles relative to their distance from the center. Therefore, fx and fy are likely in pixels per unit length, but I'm still not sure what this unit length needs to be. Can it be arbitrary, as long as x and y are scaled to each other?
You can get an initial (rough) estimate of the focal length in pixels by dividing the focal length in mm by the width of a pixel of the camera's sensor (CCD, CMOS, whatever).
You get the former from the camera manual, or read it from the EXIF header of an image taken at full resolution. Finding out the latter is a little more complicated: you may look up the sensor's spec sheet on the interwebs, if you know its manufacturer and model number, or you may just divide the overall width of its sensitive area by the number of pixels on that side.
Absent other information, it's usually safe to assume that the pixels are square (i.e. fx == fy), and that the sensor is orthogonal to the lens's focal axis (i.e. that the term in the first row and second column of the camera matrix is zero). Also, the pixel coordinates of the principal point (cx, cy) are usually hard to estimate accurately without a carefully designed calibration rig and an as-carefully executed calibration procedure (that's because they are intrinsically confused with the camera translation parallel to the image plane). So it's best to just set them equal to the geometrical center of the image, unless you know that the image has been cropped asymmetrically.
Therefore, your simplest camera model has only one unknown parameter, the focal length f = fx = fy.
Word of advice: in your application it is usually more convenient to carry around the horizontal (or vertical) field-of-view angle rather than the focal length in pixels. This is because the FOV is invariant to image scaling.
The "focal length" you are dealing with here is simply a scaling factor from objects in the world to camera pixels, used in the pinhole camera model (Wikipedia link). That's why its units are pixels/unit length. For a given f, an object of size L at a distance (perpendicular to the camera) z, would be f*L/z pixels.
So, you could estimate the focal length by placing an object of known size at a known distance from your camera and measuring its size in the image. You could also assume the principal point is the center of the image. You should definitely not ignore the lens distortion (the dist_coef parameter in solvePnPRansac).
In practice, the best way to obtain the camera matrix and distortion coefficients is to use a camera calibration tool. You can download and use the MRPT camera_calib software from this link, there's also a video tutorial here. If you use matlab, go for the Camera Calibration Toolbox.
Here you have a table with the specs of the cameras for the iPhone 4 and 5.
The calculation is:
double f = 4.1;             // iPhone 5 focal length in mm
double resX = (double)(sourceImage.cols);
double resY = (double)(sourceImage.rows);
double sensorSizeX = 4.89;  // sensor width in mm
double sensorSizeY = 3.67;  // sensor height in mm
double fx = f * resX / sensorSizeX;
double fy = f * resY / sensorSizeY;
double cx = resX / 2.;
double cy = resY / 2.;
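The same arithmetic in Python/NumPy, assembled into the full 3x3 matrix that solvePnPRansac expects. The resolution values are the full-resolution iPhone 5 image size reported in the calibration results further down; this is a back-of-the-envelope sketch, not calibrated values:

import numpy as np

f_mm = 4.1                        # iPhone 5 focal length in mm
sensor_w_mm, sensor_h_mm = 4.89, 3.67
res_x, res_y = 3264.0, 2448.0     # full-resolution image size in pixels

fx = f_mm * res_x / sensor_w_mm   # focal length in pixel units
fy = f_mm * res_y / sensor_h_mm
cx, cy = res_x / 2.0, res_y / 2.0

K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])
print(K)                          # fx ~ 2737, fy ~ 2735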
Try this:
func getCamMatrix()->(Float, Float, Float, Float)
{
let format:AVCaptureDeviceFormat? = deviceInput?.device.activeFormat
let fDesc:CMFormatDescriptionRef = format!.formatDescription
let dim:CGSize = CMVideoFormatDescriptionGetPresentationDimensions(fDesc, true, true)
// dim = final image dimensions
let cx:Float = Float(dim.width) / 2.0;
let cy:Float = Float(dim.height) / 2.0;
let HFOV : Float = format!.videoFieldOfView
let VFOV : Float = ((HFOV)/cx)*cy
let fx:Float = abs(Float(dim.width) / (2 * tan(HFOV / 180 * Float(M_PI) / 2)));
let fy:Float = abs(Float(dim.height) / (2 * tan(VFOV / 180 * Float(M_PI) / 2)));
return (fx, fy, cx, cy)
}
Old thread, present problem.
As Milo and Isaac mentioned after Milo's answer, there seem to be no "common" parameters available for, say, the iPhone 5.
For what it is worth, here is the result of a run with the MRPT calibration tool, with a good old iPhone 5:
[CAMERA_PARAMS]
resolution=[3264 2448]
cx=1668.87585
cy=1226.19712
fx=3288.47697
fy=3078.59787
dist=[-7.416752e-02 1.562157e+00 1.236471e-03 1.237955e-03 -5.378571e+00]
Average err. of reprojection: 1.06726 pixels (OpenCV error=1.06726)
Note that dist means distortion here.
I am conducting experiments on a toy project with these parameters, and they are kind of OK. If you do use them in your own project, please keep in mind that they may be barely good enough to get started. The best option is to follow Milo's recommendation with your own data. The MRPT tool is quite easy to use, with the checkerboard they provide. Hope this helps you get started!

distribute same size circles evenly inside a square using Matlab

I have a 14 x 14 square drawn inside axes of 20 x 20 in MATLAB.
I am trying to draw circles of radius 0.7 inside the square and need to arrange them evenly. I need to draw 233 circles. Please let me know how I can do it.
Currently I can draw them randomly, but I can't get 233 circles. Please see my code below.
Your reply is appreciated.
% Urban, sub urban, Rural areas
x_area =[3, 12, 6];
y_area = [6, 8, 16];
r_area = [1, 7, 2];
f = figure;
hAxs = axes('Parent',f);
hold on, box on, axis equal
xlabel('x')
ylabel('y','Rotation',0)
title('Compute the area of circles in a vectorized way for several circles')
axis([0 20 0 20])
rectangle('Position',[5,1,14,14])
rectangle('Position',[3,1,2,2])
rectangle('Position',[1,3,4,4])
hold on, box on, axis equal
a = 233;
x_base_urban = randi([6 18], 1, a);
y_base_urban = randi([2 14], 1, a);
r_base_urban = 0.9;
size_x = size(x_base_urban);
size_x = size_x(2);
size_y = size(y_base_urban);
size_y = size_y(2);
colour = rand(size_x,3);
for t = 1: size_x
plot(x_base_urban(t)+ r_base_urban.*cos(0:2*pi/100:2*pi),...
y_base_urban(t)+ r_base_urban.*sin(0:2*pi/100:2*pi),'parent',hAxs)
plot(x_base_urban(t),y_base_urban(t),'+','parent',hAxs)
end
Thanks
Randomly plotting everything won't work. Actually, if your circles can't overlap, nothing will work. To see that, just compare the results of the following calculations:
lSquare = 14;
rCircle = 0.7;
nCircles = 233;
areaCircles = nCircles * pi * rCircle^2
areaSquare = lSquare^2
You will see that areaCircles > areaSquare (about 358.7 versus 196), so it is impossible to fit them all in. On the other hand, areaSquare >= areaCircles does not guarantee that a solution exists!
Try your setup with a smaller example to come up with a solution. E.g. take a square box and a bunch of spherical objects (balls, marbles, oranges, apples, ... if need be) and try to fit as many of them into your box as possible. If that works, you might even want to draw their positions on a sheet of paper before trying to implement it.
If you do this correctly, you will get an idea of how to stack round objects in a square container. That is also exactly what you need to do in your exercise. Then turn what you did manually into a model/algorithm and implement that in MATLAB. It won't be hard, but you will need some small calculations: Pythagoras and intersection of circles.
I also suggest you use a function to draw a circle, as @Andrey shows, so something of the form function drawCircle(center, radius). That allows you to keep the complexity down.
If your circles can overlap, then the solution is quite easy: look at a circle as an object with a center point and distribute these center points evenly over the square. Don't use rand to do this, but calculate their positions yourself, as in the sketch below.
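A sketch of that calculation, in Python for brevity. The square spans x in [5, 19] and y in [1, 15], matching rectangle('Position',[5,1,14,14]) in the question, and the margin of one radius keeps whole circles inside the walls:

import numpy as np

n, r = 233, 0.7
x0, y0, side = 5.0, 1.0, 14.0

cols = int(np.ceil(np.sqrt(n)))            # 16 columns
rows = int(np.ceil(n / cols))              # 15 rows -> 240 slots for 233 circles
xs = np.linspace(x0 + r, x0 + side - r, cols)
ys = np.linspace(y0 + r, y0 + side - r, rows)
centers = [(x, y) for y in ys for x in xs][:n]
print(len(centers))                        # 233

The grid spacing here (about 0.84 in x and 0.9 in y) is smaller than the 1.4 diameter, so the circles overlap; that is fine in this branch of the answer.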
If you can't find a solution, I might expand my answer in a few days.
Without diving too deep into your code, I think you need to add a hold call after the first plot:
for t = 1: size_x
plot(x_base_urban(t)+ r_base_urban.*cos(0:2*pi/100:2*pi),...
y_base_urban(t)+ r_base_urban.*sin(0:2*pi/100:2*pi),'parent',hAxs)
plot(x_base_urban(t),y_base_urban(t),'+','parent',hAxs)
hold(hAxs,'on');
end
By the way, the best way to draw a circle is by using the rectangle command:
rectangle('Curvature',[1 1],'Position',[1 3 4 5])
So you can create a plotCircle function (as @EgonGeerardyn suggests) like this:
function plotCircle(x,y,r)
rectangle('Position',[x-r y-r 2*r 2*r],'Curvature',[1 1]);
end