I have tried several iterations, based on the error-message prompts, of saving a ggplot to file using either the pixels or the inches method:
ggsave(filename="nlrundiff.jpg", width=4, height=4, units='in', plot=plt)
Neither works; an excerpt of the resulting error message follows:
/usr/local/lib/python2.7/dist-packages/ggplot/utils/ggutils.pyc in ggsave(filename, plot, device, format, path, scale, width, height, units, dpi, limitsize, **kwargs)
118 from_inch = {"in":lambda x:x,"cm":lambda x: x * 2.54, "mm":lambda x: x * 2.54 * 10}
119
--> 120 w, h = figure.get_size_inches()
121 issue_size = False
122 if width is None:
AttributeError: 'NoneType' object has no attribute 'get_size_inches'
Is this an input syntax error on my part or a Python ggplot bug?
Thanks.
Calling ggsave(...) without specifying the ggplot object, or without printing it beforehand, is not supported (although the above is a bug and should print a user-readable message).
So either render the ggplot object first, with gg.draw() or print(gg), and then call ggsave(...) as you did above, or pass the ggplot object in directly: ggsave(gg, filename=...).
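Here is a minimal sketch of both call patterns the answer describes, assuming the Python ggplot package and its bundled mtcars dataset (the plot itself is just an illustrative scatter plot):

from ggplot import ggplot, aes, geom_point, ggsave, mtcars

# Build a simple scatter plot
plt = ggplot(aes(x='wt', y='mpg'), data=mtcars) + geom_point()

# Option 1: pass the plot object to ggsave directly
ggsave(plt, filename='nlrundiff.jpg', width=4, height=4, units='in')

# Option 2: render the plot first, then save it
print(plt)
ggsave(filename='nlrundiff.jpg', width=4, height=4, units='in')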
I'm running a grid-search optimization on a Databricks notebook. The same code runs on my local machine, but when I try to run it on Databricks I get a TypeError, as follows:
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
The fitting process I'm running is this (note it has fixed p, d, q, P, D, Q, m values, as I need to check why no models are being fitted):
import numpy as np
import statsmodels.api as sm

# train is the endogenous time series, defined elsewhere
exodus_train = np.array(np.random.normal(2, 1, size=(25, 1)))

model = sm.tsa.statespace.SARIMAX(train,
                                  order=[2, 0, 0],
                                  exog=exodus_train,
                                  seasonal_order=[2, 0, 0, 12],
                                  enforce_stationarity=False,
                                  enforce_invertibility=False).fit()
Then it throws a TypeError:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<command-1275539631463044> in <module>
4 seasonal_order=[2,0,0,12],
5 enforce_stationarity=False,
----> 6 enforce_invertibility=False).fit()
/databricks/python/lib/python3.7/site-packages/statsmodels/tsa/statespace/mlemodel.py in fit(self, start_params, transformed, cov_type, cov_kwds, method, maxiter, full_output, disp, callback, return_params, optim_score, optim_complex_step, optim_hessian, flags, **kwargs)
430 """
431 if start_params is None:
--> 432 start_params = self.start_params
433 transformed = True
434
/databricks/python/lib/python3.7/site-packages/statsmodels/tsa/statespace/sarimax.py in start_params(self)
966 # Although the Kalman filter can deal with missing values in endog,
967 # conditional sum of squares cannot
--> 968 if np.any(np.isnan(endog)):
969 mask = ~np.isnan(endog).squeeze()
970 endog = endog[mask]
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
In case this happens to someone else: this error occurs if your time-series values use commas as the decimal separator, or if your column is not a float.
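A minimal sketch of the fix, assuming the series was read with pandas (the column values here are hypothetical):

import pandas as pd

# A series read as strings because of decimal commas
train = pd.Series(['2,5', '3,1', '4,0'])

# Swap the decimal comma for a point and cast to float so np.isnan works
train = train.str.replace(',', '.', regex=False).astype(float)

If the data comes from a CSV file, pd.read_csv(path, decimal=',') parses such columns as floats directly.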
I am trying to add two layers, each of output shape (None, 24, 24, 8), but I am getting the error below:
Code:
x = add([layers[i-1],layers[i-9]])
or
x = Add()([layers[i-1],layers[i-9]])
Error:
/keras_222/local/lib/python2.7/site-packages/keras/engine/base_layer.py", line 285, in assert_input_compatibility
str(inputs) + '. All inputs to the layer '
ValueError: Layer add_1 was called with an input that isn't a symbolic tensor. Received type: <class 'keras.layers.normalization.BatchNormalization'>. Full input: [<keras.layers.normalization.BatchNormalization object at 0x7f04e4085850>, <keras.layers.normalization.BatchNormalization object at 0x7f050013cd10>]. All inputs to the layer should be tensors.
Please advise how to move forward. I also tried passing axis=1 or axis=-1, but that didn't work either:
x = Add()([layers[i-1],layers[i-9]],axis=1)
or
x = Add()([layers[i-1],layers[i-9]], axis=-1)
The problem is that you are passing layers instead of tensors to your Add() layer. I suppose you have an Input() layer somewhere in your code. You need to pass this input through your other layers. Your code should instead look something like this:
from keras.layers import Input, Add

input = Input(shape)  # shape must match your data
# pass input through other intermediate layers first if needed
output_1 = layers[i-1](input)
output_2 = layers[i-9](input)
x = Add()([output_1, output_2])
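For a self-contained illustration, here is a minimal runnable sketch (the shapes and layer choices are hypothetical) of the rule that Add() expects the tensors produced by calling layers, not the layer objects themselves:

from keras.layers import Input, BatchNormalization, Add
from keras.models import Model

inp = Input(shape=(24, 24, 8))
branch_a = BatchNormalization()(inp)   # calling the layer on a tensor yields a tensor
branch_b = BatchNormalization()(inp)   # a second tensor of the same shape

x = Add()([branch_a, branch_b])        # works: both inputs are tensors
model = Model(inputs=inp, outputs=x)
model.summary()                        # output shape stays (None, 24, 24, 8)

Passing the BatchNormalization layer objects themselves into Add() reproduces exactly the error above.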
I have about a hundred files in which every NaN is mistakenly written as 'N.A.'. I need to correct all the files in order to do calculations in MATLAB. I wrote the code below, but it always raises an error. I tried N.A. with and without quotes in the code, but the error persists. Could somebody help? I really have no idea where the code is wrong.
Data = dir('*.xls');
namelist1 = {Data.name};
for w = 1:numel(Data)
    basefilenamedata = Data(w).name;
    T = readtable(basefilenamedata);
    P = table2array(T);
    P(P ==N.A.) = NaN; % here I also tried P(P =='N.A.') = NaN, but still an error
    W = array2table(P);
    writetable(W, fullfile(DataFolder, [basefilenamedata '.xls']), 'Sheet', 1, 'Range', 'A1');
end
Error: File: untitled Line: 7 Column: 15
Unbalanced or unexpected parenthesis or bracket.
File example 1:
colony  center_y     center_x     radii  area
1       1486.035197  1994.842984  52     8494.866535
2       1839.73197   439.5529361  58     10568.31769
3       1173.664471  403.4185646  64     12867.96351
4       N.A.         N.A.         N.A.   N.A.
5       N.A.         N.A.         N.A.   N.A.
File example 2:
Area   Centroid_1   Centroid_2   MeanGrey  ColonyNum
12984  868.0061614  340.6169901  61        1
12378  1289.909517  253.0196316  67        2
N.A.   N.A.         N.A.         N.A.      3
Look at standardizeMissing:
T0 = readtable(basefilenamedata);
T = standardizeMissing(T0, {Inf,'N/A','N.A.'}, 'DataVariables', {'a','x'}) % change to your data variable names
Second, to compare strings in MATLAB you should use strcmp, not ==.
Third, to see what a table should look like, run open('patients.xls') at the command line.
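As a minimal sketch of the strcmp point, with the assumption that table2cell gives a mixed cell array (numbers parsed as numbers, plus 'N.A.' strings):

P = table2cell(T);              % mixed cell array of numbers and 'N.A.' strings
P(strcmp(P, 'N.A.')) = {NaN};   % strcmp compares the strings; == raises an error
% if the numeric entries really are numbers, cell2mat(P) now yields a matrix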
A find-and-replace in your Excel files would quickly fix this problem. If you want to solve it programmatically, though, I will first explain what's happening.
The function readtable tries to heuristically detect the type of each table column, calling detectImportOptions internally before parsing. If it finds strings coupled with numbers, and those strings differ from the ones commonly used to represent a true NaN (like N.A. in this example), it may decide to interpret that specific column as a column of string values.
To overcome this problem, call detectImportOptions, modify the VariableTypes parameter "manually", and pass it to the readtable function.
for w = 1:numel(Data)
    basefilenamedata = Data(w).name;
    opts = detectImportOptions(basefilenamedata);
    % Make sure that your P column type is set as double...
    opts = setvartype(opts, {'double' 'double' 'double' ...});
    T = readtable(basefilenamedata, opts);
    % Go on writing...
end
If the files all have the same layout, you can also do this only once, with the first file, outside your loop, and then pass the same opts to all your readtable calls.
This approach will be much, much faster than sanitizing your input files, or sanitizing the data after parsing, and similar approaches, especially if you can use a single options object for all your files.
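A minimal sketch of that variant, assuming every file shares the first file's layout and every column should be numeric:

% Detect import options once and force all columns to double; fields that
% cannot be parsed as numbers, such as 'N.A.', should then come in as NaN
opts = detectImportOptions(Data(1).name);
opts = setvartype(opts, 'double');

for w = 1:numel(Data)
    T = readtable(Data(w).name, opts);
    % ... process or rewrite T here ...
end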
I have the following class and method that should convolve an array with a kernel.
import numpy as np
from numpy.fft import fft2 as FFT, ifft2 as IFFT
from PIL import Image
from tqdm import trange, tqdm
from numba import jit
from time import sleep
import _kernel
class convolve(object):
    """ contains methods to convolve two images """
    def __init__(self, image_array, kernel):
        self.array = image_array
        self.kernel = kernel

        self.__rangeX_ = self.array.shape[0]
        self.__rangeY_ = self.array.shape[1]
        self.__rangeKX_ = self.kernel.shape[0]
        self.__rangeKY_ = self.kernel.shape[1]

        if (self.__rangeKX_ >= self.__rangeX_ or \
            self.__rangeKY_ >= self.__rangeY_):
            raise ValueError('Must submit suitable sizes for convolution.')

    @jit(nopython=True)
    def spaceConv(self):
        """ normal convolution, O(N^2*n^2). This is usually too slow """

        # pad array for convolution
        offsetX = self.__rangeKX_ // 2
        offsetY = self.__rangeKY_ // 2

        self.array = np.pad(self.array, \
            [(offsetY, offsetY), (offsetX, offsetX)], \
            mode='constant', constant_values=0)

        # this is the O(N^2) part of this algorithm
        for i in xrange(self.__rangeX_ - 2*offsetX):
            for j in xrange(self.__rangeY_ - 2*offsetY):
                # Now O(n^2) portion
                total = 0.0
                for k in xrange(2*offsetX+1):
                    for t in xrange(2*offsetY+1):
                        total += self.kernel[k][t] * self.array[i+k][j+t]
                self.array[i+offsetX][j+offsetY] = total

        return self.array
As an additional note (in case anyone asks), _kernel just generates specific kernels one may want to convolve the image with (e.g. Gaussian, Moffat, etc.), so it has nothing to do with this class.
When I call the above class on an image and kernel, I get the following error:
Traceback (most recent call last):
File "fftconv.py", line 147, in <module>
plt.imshow(conv.spaceConv(), interpolation='none', cmap='gray')
File "/root/anaconda2/lib/python2.7/site-packages/numba/dispatcher.py", line 304, in _compile_for_args
raise e
numba.errors.UntypedAttributeError: Caused By:
Traceback (most recent call last):
File "/root/anaconda2/lib/python2.7/site-packages/numba/compiler.py", line 249, in run
stage()
File "/root/anaconda2/lib/python2.7/site-packages/numba/compiler.py", line 465, in stage_nopython_frontend
self.locals)
File "/root/anaconda2/lib/python2.7/site-packages/numba/compiler.py", line 789, in type_inference_stage
infer.propagate()
File "/root/anaconda2/lib/python2.7/site-packages/numba/typeinfer.py", line 717, in propagate
raise errors[0]
UntypedAttributeError: Unknown attribute "rangeKX" of type pyobject
File "fftconv.py", line 45
[1] During: typing of get attribute at fftconv.py (45)
Failed at nopython (nopython frontend)
Unknown attribute "rangeKX" of type pyobject
File "fftconv.py", line 45
[1] During: typing of get attribute at fftconv.py (45)
This error may have been caused by the following argument(s):
- argument 0: cannot determine Numba type of value <__main__.convolve object at 0xaff5628c>
Usually I'm pretty good at tracing Python errors to their cause, but because I'm not familiar with the inner workings of Numba, I'm not sure why it doesn't know what type offsetX is. Any suggestions?
One step performed by numba is type inference. This assigns types to the different values present in the function so that it can be compiled to fast native code.
The error means that numba doesn't understand the first input argument of the function (self in this case). Numba works best with plain functions whose arguments are scalars or arrays (all numeric). One option would be to move the O(n^2) loop into a function of its own, have that function receive the arrays and any other values explicitly, and decorate that function with numba.njit (or numba.jit(nopython=True), which is equivalent).
Also worth a try is running the code as-is with nopython=True removed. If the performance is good enough, just leave it alone :). That may well happen, as numba.jit is able to detect loops inside the code that can be compiled in "no python" mode and automatically compile those loops at full speed. The explicit nopython=True keyword disables that fallback, though.
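A minimal sketch of that refactor, assuming the class from the question (the helper name _space_conv is made up for illustration). Note that it writes into a copy of the padded array: reading and writing self.array in place, as the original loop does, mixes already-updated values into later sums.

import numpy as np
from numba import njit

@njit
def _space_conv(array, kernel, offsetX, offsetY):
    # array is already padded; kernel is assumed to have odd dimensions
    out = array.copy()
    for i in range(array.shape[0] - 2 * offsetX):
        for j in range(array.shape[1] - 2 * offsetY):
            total = 0.0
            for k in range(2 * offsetX + 1):
                for t in range(2 * offsetY + 1):
                    total += kernel[k, t] * array[i + k, j + t]
            out[i + offsetX, j + offsetY] = total
    return out

The method itself stays undecorated, pads the array, and delegates the hot loop:

def spaceConv(self):
    offsetX = self.__rangeKX_ // 2
    offsetY = self.__rangeKY_ // 2
    self.array = np.pad(self.array,
                        [(offsetY, offsetY), (offsetX, offsetX)],
                        mode='constant', constant_values=0)
    return _space_conv(self.array, self.kernel, offsetX, offsetY)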
So, we are trying to execute the following code. The two if statements execute; however, the statements inside them fail to execute (we verified this by not suppressing the output). Is there a reason why? Or are we simply unable to reach this state?
Specifications
The input is as follows: v is a vector of int values and c is an integer; c must be less than or equal to at least one of the values in v.
The problem that we are trying to solve with this algorithm is as follows:
Given a cash register, how does one make change such that the fewest coins possible are returned to the customer?
Ex: Input: v = [1, 10, 25, 50], c = 40. Output O = [5, 1, 1, 0]
We are not looking for a better solution, just the reason why that portion of the code is not executing.
function O = changeGreedy(v,c)
    O = zeros(size(v,1), size(v,2));
    for v_item = 1:size(v,2)

        % locate largest term
        l_v_item = 1
        for temp = 2:size(v,2)
            if v(l_v_item) < v(temp)
                l_v_item = temp
            end
        end

        % "Items inside if statement are not executing"
        if (c > v(l_v_item))
            v(l_v_item) = -1 %"Not executing"
        else
            O(l_v_item) = idivide(c, v(l_v_item)) %"Not executing"
            c = mod(c, v(l_v_item)) %"Not executing"
        end
    end
If c or v are not integers, i.e. if class(c) evaluates to double, then I get the following error message:
??? Error using ==> idivide>idivide_check at 66
At least one argument must belong to an integer class.
Error in ==> idivide at 42
idivide_check(a,b);
and the program stops executing. Thus, the inside of the second statement never executes. In contrast, if, say, c is an integer, for example of class uint8, everything executes just fine.
Also: what are you actually trying to achieve with this code?
Try this operation on your input data:
v = int32([1, 10, 25, 50]), c = int32(40)
and run again; at least some portions of your code will execute. There is an error raised by idivide, which you apparently missed:
??? Error using ==> idivide>idivide_check at 67
At least one argument must belong to an integer class.
Error in ==> idivide at 42
idivide_check(a,b);
Indeed, idivide seems to require that you have actual integer input data (that is, class(c) and class(v) both evaluate to an integer type, such as int32).
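A quick demonstration of that requirement at the MATLAB command line:

idivide(int32(40), int32(25))   % returns 1; both arguments are integer class
% idivide(40, 25)               % raises the error above: double is not an
%                               % integer class

With v and c cast to int32 as shown above, the O(l_v_item) = idivide(c, v(l_v_item)) line executes normally.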