Possible bug in SciPy shgo (Unexpected TypeError) - scipy

When running the following:
from scipy.optimize import shgo, rosen, rosen_der, rosen_hess
bounds = [(0, 1.6), (0, 1.6), (0, 1.4), (0, 1.4), (0, 1.4)]
result = shgo(rosen, bounds, options={'jac': rosen_der, 'hess': rosen_hess})
I get:
TypeError: _minimize_slsqp() got multiple values for argument 'jac'
I believe that jac is correctly specified here (see the shgo documentation). Did I make a mistake, or is there a bug here?

This appears to be a bug; see https://github.com/scipy/scipy/issues/14533.
(An answer with a workaround would still be much appreciated.)
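In the meantime, a minimal workaround sketch, under the assumption that the clash only occurs when the derivatives are passed through options: either drop them so SLSQP falls back to finite differences, or try routing the gradient to the local minimizer via minimizer_kwargs (whether the latter avoids the bug may depend on your SciPy version):
from scipy.optimize import shgo, rosen, rosen_der

bounds = [(0, 1.6), (0, 1.6), (0, 1.4), (0, 1.4), (0, 1.4)]

# Workaround 1: omit the analytic derivatives entirely; the local SLSQP
# minimizer then approximates the gradient numerically.
result = shgo(rosen, bounds)

# Workaround 2 (untested assumption): hand the gradient to the local
# minimizer directly instead of through `options`.
result = shgo(rosen, bounds, minimizer_kwargs={'jac': rosen_der})
print(result.x)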

AnyLogic: Does anyone know what "int approximationOrder" in the constructor of a table function stands for?

I want to use the constructor of an AnyLogic TableFunction, but I do not know what int approximationOrder stands for. The constructor without it is labeled as deprecated. Can anyone help?
int approximationOrder: the order (1, 2, 3...) of the approximation polynomial, used if the INTERPOLATION_APPROXIMATION mode is selected.
If you do not set your InterpolationType to INTERPOLATION_APPROXIMATION, it does not matter what you choose here.
For example:
TableFunction x = new TableFunction(
    new double[]{1, 2, 3, 4, 5},
    new double[]{1, 2, 1, 2, 2},
    TableFunction.InterpolationType.INTERPOLATION_LINEAR,
    1, // approximationOrder has no impact, as we are not using approximation
    TableFunction.OUTOFRANGE_ERROR,
    999.0
);
This is the parameter you see in the UI when you set your interpolation type to approximation.
What version of AnyLogic are you using? The latest version has this documented.

tensorflow/lite/core/subgraph.cc BytesRequired number of elements overflowed. Node number 1 (CONV_2D) failed to prepare. tflite

I am trying to convert a CNN model into a tflite model. I converted it successfully, but this error happens when I try to load and run the model.
I am building a flutter app.
It initializes the Tensorflow Lite runtime but then raises this error.
I/tflite (27856): Initialized TensorFlow Lite runtime.
E/flutter (27856): [ERROR:flutter/lib/ui/ui_dart_state.cc(166)] Unhandled Exception: PlatformException(Failed to load model, Internal error: Unexpected failure when preparing tensor allocations: tensorflow/lite/core/subgraph.cc BytesRequired number of elements overflowed.
E/flutter (27856):
E/flutter (27856): Node number 1 (CONV_2D) failed to prepare.
I think I have figured out the problem.
After spending days trying to solve it, I found out that the model I was converting was an ImageNet-pretrained InceptionV3. The problem may be that some of its layers could not be converted.
The following worked perfectly fine for me instead (a conversion sketch follows below):
MobileNet and MobileNetV2.
NASNet Mobile.
Or, if you are new to deep learning and want to skip the training part, you can use Teachable Machine and then convert the result easily.
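A minimal sketch of that model swap, assuming the standard Keras + TFLite converter flow (the model choice and file name are illustrative, not from the original post):
import tensorflow as tf

# Build MobileNetV2 instead of InceptionV3 before converting.
model = tf.keras.applications.MobileNetV2(weights='imagenet')

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)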
I hope this helps. Thank you!
I ran into the exact same issue over the last few days while trying to load and run a tflite model on Android. I finally figured out how to solve the problem.
I was creating my model using:
model = Xception(include_top=False)
The important part here is include_top=False, together with the default argument input_shape=None.
If you look at the source code of Xception, Inception, MobileNet, or whatever (in the Keras applications package), you will see that at some point before creating the first layer they call:
input_shape = imagenet_utils.obtain_input_shape(
    input_shape,
    default_size=<default_size>,
    min_size=<min_size>,
    data_format=backend.image_data_format(),
    require_flatten=include_top,
    weights=weights)
which is implemented in imagenet_utils, with the most important part for us being:
if input_shape:
    ...
else:
    if require_flatten:
        input_shape = default_shape
    else:
        if data_format == 'channels_first':
            input_shape = (3, None, None)
        else:
            input_shape = (None, None, 3)
Thus, if I am not mistaken, when we set include_top to False, instead of getting the default shape we end up with an undefined number of rows and columns. I am not sure how this is converted to tflite (no error is raised during conversion), but it really seems that Android cannot work with it; this is probably equivalent to requesting an unbounded image size. Hence this error when initializing the interpreter:
BytesRequired number of elements overflowed
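You can see the undefined spatial dimensions directly on the Keras side; a quick check (weights=None just avoids the download):
from tensorflow.keras.applications import Xception

model = Xception(include_top=False, weights=None)
print(model.input_shape)  # -> (None, None, None, 3): batch, rows, cols, channels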
When I set the proper input_shape argument in the constructor, i.e.
model = Xception(include_top=False, weights=None, input_shape=(rows, cols, channels))
then the converted model was working fine on Android.
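Putting it together, a minimal end-to-end sketch of the fix; the concrete shape (150, 150, 3) and the output file name are just examples, any valid size works:
import tensorflow as tf

# Give the model a concrete input shape so the TFLite interpreter can
# compute tensor sizes at allocation time.
model = tf.keras.applications.Xception(
    include_top=False,
    weights=None,
    input_shape=(150, 150, 3),
)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open('xception.tflite', 'wb') as f:
    f.write(tflite_model)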
As for why it is initializing correctly with MobileNetV2 in the same situation, i.e. by creating the model like so:
model = MobileNetV2(include_top=False)
I cannot explain...
Hope this brings an answer to your original question.
In fact, this is specified in the documentation, for instance in Xception:
input_shape: optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(299, 299, 3)`).
It should have exactly 3 inputs channels,
and width and height should be no smaller than 71.
E.g. `(150, 150, 3)` would be one valid value.
Whilst for MobileNetV2:
input_shape: Optional shape tuple, to be specified if you would
like to use a model with an input image resolution that is not
(224, 224, 3).
It should have exactly 3 inputs channels (224, 224, 3).
You can also omit this option if you would like
to infer input_shape from an input_tensor.
If you choose to include both input_tensor and input_shape then
input_shape will be used if they match, if the shapes
do not match then we will throw an error.
E.g. `(160, 160, 3)` would be one valid value.
Although it is not crystal clear.

Plot a graph with ipycytoscape (and networkx)

Following the ipycytoscape instructions, I am not able to plot a graph.
According to https://github.com/QuantStack/ipycytoscape/blob/master/examples/Test%20NetworkX%20methods.ipynb
this should work:
import networkx as nx
import ipycytoscape
G2 = nx.Graph()
G2.add_nodes_from([*'ABCDEF'])
G2.add_edges_from([('A','B'),('B','C'),('C','D'),('E','F')])
print(G2.nodes)
print(G2.edges)
cytoscapeobj = ipycytoscape.CytoscapeWidget()
cytoscapeobj.graph.add_graph_from_networkx(nx_graph)
G2 is an example networkx graph and it looks OK, since print(G2) gives the networkx object back and G2.nodes and G2.edges can be printed.
The error:
ValueError: invalid literal for int() with base 10: 'A'
Why should a node be an integer?
More generally, what should I do if the starting data is a pandas dataframe with a million rows of edges, the endpoints being strings like ProcessA-ProcessB, ProcessC-ProcessD, etc.?
Also, looking at the examples, note that the list of nodes contains a data dictionary for every node, that data including an "id" per node and also an "Atribute". The surprise here is that the networkx graph is expected to carry all those properties.
thanks
This problem was fixed.
Please let me know if it's still happening. Feel free to open an issue: https://github.com/QuantStack/ipycytoscape/
I'm just playing around with ipycytoscape myself, so I could be way off-base, but shouldn't the line be:
cytoscapeobj.graph.add_graph_from_networkx(G2) # your graph name goes here
Trying to generate a cytoscape object from a graph that doesn't exist in the snippet might be what triggers the ValueError, because it can't find any valid nodes.
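For completeness, a minimal corrected sketch, assuming a recent ipycytoscape release that accepts string node ids; the commented dataframe part at the end is a hypothetical illustration for the pandas case raised in the question (column names are assumptions):
import networkx as nx
import ipycytoscape

G2 = nx.Graph()
G2.add_nodes_from([*'ABCDEF'])
G2.add_edges_from([('A', 'B'), ('B', 'C'), ('C', 'D'), ('E', 'F')])

cytoscapeobj = ipycytoscape.CytoscapeWidget()
cytoscapeobj.graph.add_graph_from_networkx(G2)  # pass the graph object itself
cytoscapeobj  # last expression in a Jupyter cell displays the widget

# For an edge list stored in a dataframe, networkx can build the graph
# directly, e.g.:
# G3 = nx.from_pandas_edgelist(df, source='source', target='target')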

In Spyder, is there some way to plot a dataframe by right-clicking it like you can with arrays?

I've just recently discovered that you can right-click an array in Spyder and get a quick plot of the data. With sample data like this:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Some numbers in a data frame
nsample = 440
x1 = np.linspace(0, 100, nsample)
y = np.sin(x1)
dates = pd.date_range('2016-01-01', periods=nsample).tolist()
df = pd.DataFrame({'dates':dates, 'x1':x1, 'y':y})
df = df.set_index(['dates'])
df.index = pd.to_datetime(df.index)
you can go to the Variable explorer, right-click y, and pick the plot option, which runs a plot command directly in the console and pops up a figure of the data.
The same option does not seem to be available for a pandas dataframe.
Sure, you could easily go for df.plot().
But I really like the right-click option to check whether the variables and dataframes look the way I expect them to when I'm messing around with a lot of data. So, is there any library I'd have to import, or maybe something in the settings? I've also noticed that what happens in the console is this little piece of magic: %varexp --plot y, but I can't seem to find an equivalent for data frames.
Thank you for any suggestions!
(Spyder developer here) This is just a bit of missing functionality for Dataframes, but it's very easy to implement.
Please open an issue in our issue tracker, so we don't forget to do it in a future release.
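Until that lands, a possible stopgap sketch: pull the column out of the dataframe as a plain NumPy array, so the existing array plotting path (the right-click option or the %varexp magic) applies, or plot from the console (df here is the dataframe from the question):
# A plain ndarray shows up in the Variable explorer with the plot option.
y_arr = df['y'].to_numpy()

# Or plot the column straight from the IPython console:
df['y'].plot()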

Projection anomaly between function vs string projection definition

We recently switched our definitions from the first to the second format, because OpenLayers threw exceptions on the first one.
The definitions used:
Old:
proj4.defs["EPSG:28992"] = "+proj=sterea +lat_0=52.15616055555555 +lon_0=5.38763888888889 +k=0.9999079 +x_0=155000 +y_0=463000 +ellps=bessel +towgs84=565.417,50.3319,465.552,-0.398957,0.343988,-1.8774,4.0725 +units=m +no_defs";
New:
proj4.defs("EPSG:28992", "+proj=sterea +lat_0=52.15616055555555 +lon_0=5.38763888888889 +k=0.9999079 +x_0=155000 +y_0=463000 +ellps=bessel +towgs84=565.040,49.910,465.840,-0.40939,0.35971,-1.86849,4.0772 +units=m +no_defs")
Strangely enough, although the latter one does perform the transformation, our points come out misaligned, and not by a consistent offset; they are simply positioned wrongly. We think this is due to the towgs84 property. The question now is: how is the first format parsed/handled differently from the second? What are the differences? (I am using the same code and the newest version of proj4js in both cases.)
I was accidentally loading proj4 twice, once through potree, and once manually (for OpenLayers). It turned out one of the two was still on version 2.2.1...