Creating a boxplot with error bars in Python

I have this data from my lab exercise:
std = [10.17, 10.15, 10.13, 10.18, 10.21]
mean = [0.009, 0.012, 0.019, 0.04, 0.025]
name = ['control','Oligo','Rotenone','CCCP','KCN']
and I want to create a boxplot with error bars in Python, but every time I try to create the plot I get this error message:
"only size-1 arrays can be converted to python scalars"
using this code (and others like it found on other sites):
fig, ax = plt.subplots()
ax.bar(name, mean, yerr=std, align='center', alpha=0.5)
What am I missing here? In the examples online it works perfectly.
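For reference, a minimal runnable sketch of the intended plot: alpha must be passed as a keyword (alpha=0.5), and converting the lists to NumPy arrays tends to avoid the "only size-1 arrays" error on older Matplotlib versions (capsize below is just an optional cosmetic choice):
import numpy as np
import matplotlib.pyplot as plt

name = ['control', 'Oligo', 'Rotenone', 'CCCP', 'KCN']
mean = np.array([0.009, 0.012, 0.019, 0.04, 0.025])
std = np.array([10.17, 10.15, 10.13, 10.18, 10.21])

fig, ax = plt.subplots()
# pass alpha as a keyword argument; yerr draws the error bars
ax.bar(name, mean, yerr=std, align='center', alpha=0.5, capsize=4)
ax.set_ylabel('mean')
plt.show()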

Related

MATLAB plotting in a parfor-loop

I need to plot figures with subplots inside a parfor-loop, similar to this question (which deals more with the quality of the plots).
My code looks something like this:
parfor idx=1:numel(A)
N = A(idx);
fig = figure();
ax = subplot(3,1,1);
plot(ax, ...);
...
saveas(fig,"...",'fig');
saveas(fig,"...",'png');
end
This gives a weird error:
Data must be numeric, datetime, duration or an array convertible to double.
I am sure that the problem does not lie in non-numeric data as the same code without parallelization works.
At this point I expected an error, because the threads will concurrently create and access figure and axes objects, and I do not think it is ensured that the handles always correspond to the right object (the threads are "cross-plotting", so to speak).
If I pre-initialize the objects and then access them like this,
ax = cell(1,numel(A)); % or ax = zeros(1,numel(A));
ax(idx) = subplot(3,1,1);
I get even weirder errors somewhere in the fit-calls I use:
Error using curvefit.ensureLogical>iConvertSubscriptIndexToLogical (line 26)
Excluded indices must be nonnegative integers that reference the fit's input data points
Error in curvefit.ensureLogical (line 18)
exclude = iConvertSubscriptIndexToLogical(exclude, nPoints);
Error in cfit/plot (line 46)
outliers = curvefit.ensureLogical( outliers, numel( ydata ) );
I have the feeling it has to do with the variable slicing described in the documentation; I just can't quite figure out how.
I was able to narrow the issue down to a fit routine I was using.
TL;DR: Do not use fit objects (cfit or sfit) for plots in a parfor-loop!
Solutions:
Use wrappers like nlinfit() or lsqcurvefit() instead of fit(). They give you the fit parameters directly, so you can call your fit function with them when plotting.
If you have to use fit() (for some reason it was the only one able to fit my data more or less consistently), extract the fit parameters and then call your fit function using cell expansion:
fitfunc = @(a,b,c,d,e,x) ( ... );
[fitobject,gof,fitinfo] = fit(x,y,fitfunc,fitoptions(..));
vFitparam = coeffvalues(fitobject);
vFitparam_cell = num2cell(vFitparam);
plot(ax,x,fitfunc(vFitparam_cell{:},x), ... );
As far as I know, fit() requires the function handle to take separate scalar parameters (not a vector), so by using a cell you can avoid bloated code like this:
plot(ax,x,fitfunc(vFitparam(1),vFitparam(2),vFitparam(3),vFitparam(4),vFitparam(5),x), ... );

Convergence when using the scipy.odr module to find best-fit parameters when there are only horizontal error bars

I am trying to fit a piecewise (otherwise linear) function to a set of experimental data. The form of the data is such that there are only horizontal error bars, no vertical ones. I am familiar with the scipy.optimize.curve_fit module, but that works when there are only vertical error bars, corresponding to the dependent variable y. After searching for my specific need, I came across the following post, which explains the possibility of using the scipy.odr module when the error bars are on the independent variable x: (Correct fitting with scipy curve_fit including errors in x?)
Attached is my version of the code, which tries to find the best-fit parameters using the ODR methodology. It actually draws a best-fit function, and it seems to be working. However, after changing the initial (educated-guess) values and trying to extract the best-fit parameters, I get back the same guessed parameters I inserted initially. This means the method is not converging, which you can verify by printing output.stopreason and getting
['Numerical error detected']
So, my question is whether this methodology is consistent with my function being piecewise, and if not, whether there is another, correct methodology to adopt in such cases?
from numpy import *
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
from scipy.odr import ODR, Model, Data, RealData
x_array=array([8.2,8.6,9.,9.4,9.8,10.2,10.6,11.,11.4,11.8])
x_err_array=array([0.2]*10)
y_array=array([-2.05179545,-1.64998354,-1.49136169,-0.94200805,-0.60205999,0.,0.,0.,0.,0.])
y_err_array=array([0]*10)
# Linear Fitting Model
def func(beta, x):
    return piecewise(x, [x < beta[0]], [lambda x: beta[1]*x - beta[1]*beta[0], lambda x: 0.0])
data = RealData(x_array, y_array, x_err_array, y_err_array)
model = Model(func)
odr = ODR(data, model, [10.1,1.02])
odr.set_job(fit_type=0)
output = odr.run()
f, (ax1) = plt.subplots(1, sharex=True, sharey=True, figsize=(10,10))
ax1.errorbar(x_array, y_array, xerr = x_err_array, yerr = y_err_array, ecolor = 'blue', elinewidth = 3, capsize = 3, linestyle = '')
ax1.plot(x_array, func(output.beta, x_array), 'blue', linestyle = 'dotted', label='Best-Fit')
ax1.legend(loc='lower right', ncol=1, fontsize=12)
ax1.set_xlim([7.95, 12.05])
ax1.set_ylim([-2.1, 0.1])
ax1.yaxis.set_major_locator(MaxNLocator(prune='upper'))
ax1.set_ylabel('$y$', fontsize=12)
ax1.set_xlabel('$x$', fontsize=12)
ax1.set_xscale("linear", nonposx='clip')
ax1.set_yscale("linear", nonposy='clip')
ax1.get_xaxis().tick_bottom()
ax1.get_yaxis().tick_left()
f.subplots_adjust(top=0.98,bottom=0.14,left=0.14,right=0.98)
plt.setp([a.get_xticklabels() for a in f.axes[:-1]], visible=True)
plt.show()
An error of 0 for y is causing problems. Make it small but not zero, e.g. 1e-16; with that change the fit converges. It also converges if you omit y_err_array when defining RealData, but I am not sure what happens internally in that case.
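For example, a minimal sketch of the change in the context of the code above (1e-16 is just an arbitrarily small stand-in):
y_err_array = array([1e-16]*10)  # small but nonzero, instead of exactly 0
data = RealData(x_array, y_array, x_err_array, y_err_array)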

MATLAB regression returns beta coefficients that aren't right?

I am trying to calculate the beta coefficients with a regression using something like regstats. My MATLAB code looks like:
%get stock
sym = 'F'
%calculates returns, with output of standard Open High Low Close
[o,h,l,clS]=YahooGetData(sym, priords, now,'d')
y = diff(clS)
%index like S&P 500
symIdx='^GSPC'
[o,h,l,clI]=YahooGetData(sym, priords, now,'d')
x = diff(clI)
mdl = regstats(x,y)
My beta coefficients always come back as 1 and 0, no matter what stock symbol I use. Is there any reason why? Am I doing anything wrong? I also get the same results using polyfit.
Thanks
Stupid bug. I replaced it with:
[o,h,l,clI]=YahooGetData(symIdx, priords, now,'d')
so it uses symIdx rather than sym. D'oh! Sorry about that.

Why is the output different for code ported from MATLAB to Python?

EDIT: After some more testing and a response from the scipy mailing list, the issue appears to be with fspecial(). To get the same output I need to generate the same kind of kernel in Python that the MATLAB fspecial command produces. For now I will try to export the kernel from MATLAB and work from there. Added as an edit since the question has been "closed".
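A minimal sketch of what I mean, assuming fspecial('average', 3) is just a normalized 3x3 box kernel and that scipy.ndimage's mode='nearest' corresponds to imfilter's 'replicate' edge handling:
import numpy as np
from scipy.ndimage import correlate
kernel = np.ones((3, 3)) / 9.0  # assumed equivalent of fspecial('average', 3)
logAmplitude = np.log(np.abs(np.fft.fft2(np.random.rand(64, 64))))  # stand-in data
avgLogAmp = correlate(logAmplitude, kernel, mode='nearest')  # 'nearest' ~ 'replicate'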
I am trying to port the following MATLAB code to Python. It seems to work, but the output differs from MATLAB's. I think the problem is with applying a "mean" filter to the log amplitude. Any help appreciated.
The MATLAB code is from: http://www.klab.caltech.edu/~xhou/projects/spectralResidual/spectralresidual.html
%% Read image from file
inImg = im2double(rgb2gray(imread('1.jpg')));
inImg = imresize(inImg, 64/size(inImg, 2));
%% Spectral Residual
myFFT = fft2(inImg);
myLogAmplitude = log(abs(myFFT));
myPhase = angle(myFFT);
mySpectralResidual = myLogAmplitude - imfilter(myLogAmplitude, fspecial('average', 3), 'replicate');
saliencyMap = abs(ifft2(exp(mySpectralResidual + i*myPhase))).^2;
%% After Effect
saliencyMap = mat2gray(imfilter(saliencyMap, fspecial('gaussian', [10, 10], 2.5)));
imshow(saliencyMap);
Here is my attempt in Python:
from skimage import img_as_float
from skimage.io import imread
from skimage.color import rgb2gray
from scipy import fftpack, ndimage, misc
from scipy.ndimage import uniform_filter
import numpy as np
import matplotlib.pyplot as plt
# Read image from file
image = img_as_float(rgb2gray(imread('1.jpg')))
image = misc.imresize(image, 64.0 / image.shape[0])
# Spectral Residual
fft = fftpack.fft2(image)
logAmplitude = np.log(np.abs(fft))
phase = np.angle(fft)
avgLogAmp = uniform_filter(logAmplitude, size=3, mode="nearest")  # is this the same as applying a "mean" filter?
spectralResidual = logAmplitude - avgLogAmp
saliencyMap = np.abs(fftpack.ifft2(np.exp(spectralResidual + 1j * phase))) ** 2
# After Effect
saliencyMap = ndimage.gaussian_filter(saliencyMap, sigma=2.5)
plt.imshow(saliencyMap)
plt.show()
For completeness, here are an input image and the outputs from MATLAB and Python.
I doubt anyone will be able to give you a firm answer on this. It could be any number of things: one FFT could be 0-centered while the other isn't, there could be a float vs. double mismatch somewhere, a mishandled absolute value, a filter setting, ...
If I were you, I'd write out some intermediate values for both computations and find a way to compare them. Start in the middle: if the values compare well, move downstream; if they don't, move upstream. Maybe write an intermediate value from the Python script out to a file, import it into MATLAB, take the element-wise difference, and graph it. If the arrays aren't even the same dimensions, that's clue #1.
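For instance, a minimal sketch of that kind of dump from the Python script above (the filename is just illustrative):
np.savetxt('logAmplitude_py.csv', logAmplitude, delimiter=',')
# then in MATLAB: d = csvread('logAmplitude_py.csv'); max(abs(d(:) - myLogAmplitude(:)))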

OpenCV 2.3 camera calibration

I'm trying to use the OpenCV 2.3 Python bindings to calibrate a camera. I've used the data below in MATLAB and the calibration worked, but I can't seem to get it to work in OpenCV. The camera matrix I set up as an initial guess is very close to the answer calculated from the MATLAB toolbox.
import cv2
import numpy as np
obj_points = [[-9.7,3.0,4.5],[-11.1,0.5,3.1],[-8.5,0.9,2.4],[-5.8,4.4,2.7],[-4.8,1.5,0.2],[-6.7,-1.6,-0.4],[-8.7,-3.3,-0.6],[-4.3,-1.2,-2.4],[-12.4,-2.3,0.9], [-14.1,-3.8,-0.6],[-18.9,2.9,2.9],[-14.6,2.3,4.6],[-16.0,0.8,3.0],[-18.9,-0.1,0.3], [-16.3,-1.7,0.5],[-18.6,-2.7,-2.2]]
img_points = [[993.0,623.0],[942.0,705.0],[1023.0,720.0],[1116.0,645.0],[1136.0,764.0],[1071.0,847.0],[1003.0,885.0],[1142.0,887.0],[886.0,816.0],[827.0,883.0],[710.0,636.0],[837.0,621.0],[789.0,688.0],[699.0,759.0],[768.0,800.0],[697.0,873.0]]
obj_points = np.array(obj_points)
img_points = np.array(img_points)
w = 1680
h = 1050
size = (w,h)
camera_matrix = np.zeros((3, 3))
camera_matrix[0,0]= 2200.0
camera_matrix[1,1]= 2200.0
camera_matrix[2,2]=1.0
camera_matrix[2,0]=750.0
camera_matrix[2,1]=750.0
dist_coefs = np.zeros(4)
results = cv2.calibrateCamera(obj_points, img_points,size,
camera_matrix, dist_coefs)
First off, your camera matrix is wrong. If you read the documentation, it should look like:
fx 0 cx
0 fy cy
0 0 1
If you look at yours, you've got it the wrong way round:
fx 0 0
0 fy 0
cx cy 1
So first, set camera_matrix to camera_matrix.T (or change how you construct camera_matrix; remember that camera_matrix[i,j] is row i, column j):
camera_matrix = camera_matrix.T
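Equivalently, a sketch of constructing it in the right layout directly, using the values from your code:
camera_matrix = np.array([[2200.,    0., 750.],
                          [   0., 2200., 750.],
                          [   0.,    0.,   1.]])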
Next, I ran your code and saw that "can't seem to get it to work" means the following error. (By the way: always say what you mean by "can't seem to get it to work" in your questions; if it's an error, post the error, and if it runs but gives you weird numbers, say so.)
OpenCV Error: Assertion failed (ni >= 0) in collectCalibrationData, file /home/cha66i/Downloads/OpenCV-2.3.1/modules/calib3d/src/calibration.cpp, line 3161
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
cv2.error: /home/cha66i/Downloads/OpenCV-2.3.1/modules/calib3d/src/calibration.cpp:3161: error: (-215) ni >= 0 in function collectCalibrationData
I then read the documentation (very useful, by the way) and noticed that obj_points and img_points have to be vectors of vectors, because it is possible to feed in sets of object/image points for multiple images of the same chessboard (or calibration points).
Hence:
cv2.calibrateCamera([obj_points], [img_points],size, camera_matrix, dist_coefs)
What? I still get the same error?!
Then, I had a look at the OpenCV python2 samples (in the folder OpenCV-2.x.x/samples/python2), and noticed a calibration.py showing me how to use the calibration functions (never underestimate the samples, they're often better than the documentation!).
I tried to run calibration.py, but it doesn't run because it doesn't supply the necessary camera_matrix and distCoeffs arguments. So I modified it to feed in a dummy camera_matrix and distCoeffs, and hey, it works!
The only difference I can see between my obj_points/img_points and theirs, is that theirs has dtype=float32, while mine doesn't.
So, I change my obj_points and img_points to also have dtype float32 (the python2 interface to OpenCV is funny like that; often functions don't work when matrices don't have a dtype):
obj_points = obj_points.astype('float32')
img_points = img_points.astype('float32')
Then I try again:
>>> cv2.calibrateCamera([obj_points], [img_points],size, camera_matrix, dist_coefs)
OpenCV Error: Bad argument
(For non-planar calibration rigs the initial intrinsic matrix must be specified)
in cvCalibrateCamera2, file ....
What?! A different error at least. But I did supply an initial intrinsic matrix!
So I go back to the documentation, and notice the flags parameter:
flags – Different flags that may be zero or a combination of the
following values:
CV_CALIB_USE_INTRINSIC_GUESS cameraMatrix contains valid initial
values of fx, fy, cx, cy that are optimized further
...
Aha, so I have to tell the function explicitly to use the initial guesses I provided:
cv2.calibrateCamera([obj_points], [img_points],size, camera_matrix.T, dist_coefs,
flags=cv2.CALIB_USE_INTRINSIC_GUESS)
Hurrah! It works!
(Moral of the story - read the OpenCV documentation carefully, and use the newest version (i.e. on opencv.itseez.com) if you're using the Python cv2 interface. Also, consult the examples in the samples/python2 directory to supplement the documentation. With these two things you should be able to work out most problems.)
After the help from mathematical.coffee, I have got this 3D calibration to run.
import cv2
from cv2 import cv
import numpy as np
obj_points = [[-9.7,3.0,4.5],[-11.1,0.5,3.1],[-8.5,0.9,2.4],[-5.8,4.4,2.7],[-4.8,1.5,0.2],[-6.7,-1.6,-0.4],[-8.7,-3.3,-0.6],[-4.3,-1.2,-2.4],[-12.4,-2.3,0.9],[-14.1,-3.8,-0.6],[-18.9,2.9,2.9],[-14.6,2.3,4.6],[-16.0,0.8,3.0],[-18.9,-0.1,0.3],[-16.3,-1.7,0.5],[-18.6,-2.7,-2.2]]
img_points = [[993.0,623.0],[942.0,705.0],[1023.0,720.0],[1116.0,645.0],[1136.0,764.0],[1071.0,847.0],[1003.0,885.0],[1142.0,887.0],[886.0,816.0],[827.0,883.0],[710.0,636.0],[837.0,621.0],[789.0,688.0],[699.0,759.0],[768.0,800.0],[697.0,873.0]]
obj_points = np.array(obj_points,'float32')
img_points = np.array(img_points,'float32')
w = 1680
h = 1050
size = (w,h)
camera_matrix = np.zeros((3, 3),'float32')
camera_matrix[0,0]= 2200.0
camera_matrix[1,1]= 2200.0
camera_matrix[2,2]=1.0
camera_matrix[0,2]=750.0
camera_matrix[1,2]=750.0
dist_coefs = np.zeros(4,'float32')
retval,camera_matrix,dist_coefs,rvecs,tvecs = cv2.calibrateCamera([obj_points],[img_points],size,camera_matrix,dist_coefs,flags=cv.CV_CALIB_USE_INTRINSIC_GUESS)
The only problem I have now is why the dist_coefs vector is 5 elements long when returned from the calibration function. The documentation says "if the vector contains four elements, it means that K3=0", but in fact K3 is used no matter the length of dist_coefs (4 or 5). Furthermore, I can't seem to get the flag CV_CALIB_FIX_K3 to work; I tried to use it to force K3 to be zero, but it crashes saying an integer is required. This could be because I don't know how to pass multiple flags at once; I'm just doing flags = (cv.CV..., cv.CV...).
Just to compare, the results from the MATLAB camera calibration routine are:
Focal length: 2210. 2207.
principal point: 781. 738.
Distortions: 4.65e-2 -9.74e+0 3.9e-3 6.74e-3 0.0e+0
Rotation vector: 2.36 0.178 -0.131
Translation vector: 16.016 2.527 69.549
And from this code:
Focal length: 1647. 1629.
principal point: 761. 711.
Distortions: -2.3e-1 2.0e+1 1.4e-2 -9.5e-2 -172e+2
Rotation vector: 2.357 0.199 -0.193
Translation vector: 16.511 3.307 48.946
I think if I could figure out how to force k3=0, the rest of the values would align right up.
For what it is worth, the following code snippet currently works under 2.4.6.1:
pattern_size = (16, 12)
pattern_points = np.zeros( (np.prod(pattern_size), 3), np.float32)
pattern_points[:, :2] = np.indices(pattern_size).T.reshape(-1, 2).astype(np.float32)
img_points = pattern_points[:, :2] * 2 + np.array([40, 30], np.float32)
print(cv2.calibrateCamera([pattern_points], [img_points], (400, 400), flags=cv2.CALIB_USE_INTRINSIC_GUESS))
Note that camera_matrix and dist_coefs are not needed.
Make the dist_coeffs vector a 5-element zero vector and then use the CV_CALIB_FIX_K3 flag. You will see that the last element of the vector (K3) stays zero.
When it comes to using multiple flags, you can OR them.
Example: cv.CV_CALIB_USE_INTRINSIC_GUESS | cv.CV_CALIB_FIX_K3
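Putting both together, a sketch using the variables from the code above:
dist_coefs = np.zeros(5, 'float32')  # fifth element is K3
flags = cv.CV_CALIB_USE_INTRINSIC_GUESS | cv.CV_CALIB_FIX_K3
retval, camera_matrix, dist_coefs, rvecs, tvecs = cv2.calibrateCamera(
    [obj_points], [img_points], size, camera_matrix, dist_coefs, flags=flags)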
Use Point3f and Point2f instead of Point3d and Point2d to define object_points and image_points, and it will work.