I am trying to replace a layer in my LayerGraph object using MATLAB.
I am using the command "replaceLayer(lgraph,name,layer)", which is documented at the following URL:
https://de.mathworks.com/help/deeplearning/ref/layergraph.replacelayer.html
The exact line is:
lgraph = replaceLayer(lgraph,name,layer);
Where lgraph is:
DAGNetwork with properties:
Layers: [144×1 nnet.cnn.layer.Layer]
Connections: [170×2 table]
InputNames: {'data'}
OutputNames: {'classoutput'}
The name is 'fc' (for fully connected),
and the layer itself has the following attributes:
layer =
FullyConnectedLayer with properties:
Name: 'fc'
Hyperparameters
InputSize: 720
OutputSize: 8
Learnable Parameters
Weights: [8×720 double]
Bias: [8×1 double]
I already tried replacing the old fully connected layer with a new fully connected layer that has the same number of outputs, because I thought those numbers needed to match, but it still throws the exact same error message:
Check for incorrect argument data type or missing argument in call to function 'replaceLayer'.
Now I don't know what to try next.
Things I already tried:
Using a default weight initialization so that the weight attributes of the new layer are non-empty.
Using the same number of outputs for the old (to be replaced) and the new (inserted) fully connected layer.
It would be great if someone could help; I did not find anything useful in the MATLAB forums. Thanks!
Fixed it: I needed to add the following line beforehand, because my lgraph was actually a DAGNetwork and replaceLayer expects a LayerGraph:
lgraph = layerGraph(lgraph);
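Putting it together, a minimal sketch of the whole call sequence (assuming the pretrained DAGNetwork shown above and a replacement layer that keeps 8 outputs):
% lgraph starts out as the DAGNetwork shown above; replaceLayer needs a LayerGraph
lgraph = layerGraph(lgraph);                     % convert DAGNetwork -> LayerGraph
layer  = fullyConnectedLayer(8, 'Name', 'fc');   % new layer, same name, 8 outputs
lgraph = replaceLayer(lgraph, 'fc', layer);      % succeeds once lgraph is a LayerGraph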
I am trying to build a simple multibody plant system in Drake using the basic DrakeVisualizer. However, for my use case I also want to be able to automatically track derivatives through the physics simulation, so I am using the AutoDiffXd version of the system:
timestep = 1e-3
builder = DiagramBuilder_[AutoDiffXd]()
plant = MultibodyPlant(timestep)
scene_graph = SceneGraph_[AutoDiffXd]()
brick_file = FindResourceOrThrow("drake/examples/manipulation_station/models/061_foam_brick.sdf")
parser = Parser(plant)
brick = parser.AddModelFromFile(brick_file, model_name="brick")
plant.Finalize()
plant_ad = plant.ToAutoDiffXd()
plant_ad.RegisterAsSourceForSceneGraph(scene_graph)
scene_graph.AddRenderer("renderer", MakeRenderEngineVtk(RenderEngineVtkParams()))
DrakeVisualizer.AddToBuilder(builder, scene_graph)
builder.AddSystem(plant_ad)
builder.AddSystem(scene_graph)
builder.Connect(plant_ad.get_geometry_poses_output_port(), scene_graph.get_source_pose_port(plant_ad.get_source_id()))
builder.Connect(scene_graph.get_query_output_port(), plant_ad.get_geometry_query_input_port())
diagram = builder.Build()
context = diagram.CreateDefaultContext()
simulator = Simulator_[AutoDiffXd](diagram, context)
simulator.AdvanceTo(2.0)
However, when I run this, I get the following error:
File "/home/craig/Repos/drake-exps/autoDiffExperiment.py", line 102, in auto_phys
DrakeVisualizer.AddToBuilder(builder, scene_graph)
TypeError: AddToBuilder(): incompatible function arguments. The following argument types are supported:
1. (builder: pydrake.systems.framework.DiagramBuilder_[float], scene_graph: drake::geometry::SceneGraph<double>, lcm: pydrake.lcm.DrakeLcmInterface = None, params: pydrake.geometry.DrakeVisualizerParams = <pydrake.geometry.DrakeVisualizerParams object at 0x7ff6274e14b0>) -> pydrake.geometry.DrakeVisualizer
2. (builder: pydrake.systems.framework.DiagramBuilder_[float], query_object_port: pydrake.systems.framework.OutputPort_[float], lcm: pydrake.lcm.DrakeLcmInterface = None, params: pydrake.geometry.DrakeVisualizerParams = <pydrake.geometry.DrakeVisualizerParams object at 0x7ff627736730>) -> pydrake.geometry.DrakeVisualizer
Invoked with: <pydrake.systems.framework.DiagramBuilder_[AutoDiffXd] object at 0x7ff65654f8f0>, <pydrake.geometry.SceneGraph_[AutoDiffXd] object at 0x7ff656562130>
From this error, it appears the DrakeVisualizer class only accepts systems that use float scalars exclusively. So I am stuck: either I can go back to floats (but lose the autodiff differentiable simulation functionality I was after in the first place), or continue to use AutoDiffXd systems (but be completely unable to visualize what is going on in my simulation).
Is there a way to get both that I am missing?
Sorry for the pain and inconvenience. Your description and assessment are spot on. Most of the visualization mechanisms are float-only, and in their current state attempts to visualize an AutoDiff diagram will fail.
You have a couple of options (neither of which is appealing):
Go with one of the outcomes you've described above (no vis or no derivatives).
Put in a Drake feature request to be able to attach a visualizer to an AutoDiff diagram.
I can come up with some hacky workarounds (though it is not immediately clear that they would even work). So if you're desperate for derivatives and visualization, they could be explored. But ultimately, the feature request and a formal Drake solution would be the best long-term resolution.
=====================================
Big update. As of #14569, the DrakeVisualizer class is now templated on the scalar type (item 2 in the list above). That has two implications:
You can build an AutoDiffXd-valued diagram with a visualizer in it (as in your example), or
You can create a double-valued diagram and scalar-convert it (i.e., diagram.ToAutoDiffXd()) into an AutoDiffXd-valued diagram.
I am trying to convert a CNN model into a tflite model. I converted it successfully, but this error happens when I try to load and run the model.
I am building a Flutter app.
It initializes the TensorFlow Lite runtime but then raises this error:
I/tflite (27856): Initialized TensorFlow Lite runtime.
E/flutter (27856): [ERROR:flutter/lib/ui/ui_dart_state.cc(166)] Unhandled Exception: PlatformException(Failed to load model, Internal error: Unexpected failure when preparing tensor allocations: tensorflow/lite/core/subgraph.cc BytesRequired number of elements overflowed.
E/flutter (27856):
E/flutter (27856): Node number 1 (CONV_2D) failed to prepare.
I think I have figured out the problem.
After spending days trying to solve it, I found out that the model I was converting was an ImageNet-pretrained model, InceptionV3. The problem may be that some of its layers could not be converted.
I used the following models instead and they worked perfectly fine:
MobileNet and MobileNetV2.
The NASNet mobile version.
Or, if you are new to deep learning and want to skip the training part, you can use Teachable Machine and then convert the model easily.
I hope this helps! Thank you.
I ran into the exact same issue over the last few days while trying to load and run a tflite model on Android. I finally figured out how to solve the problem.
I was creating my model using:
model = Xception(include_top=False)
The important part here is include_top=False, together with the default argument input_shape=None.
If you look at the source code of Xception, Inception, MobileNet, or whatever (which you can find here), you will see that at some point, before creating the first layer, they call
input_shape = imagenet_utils.obtain_input_shape(
    input_shape,
    default_size=<default_size>,
    min_size=<min_size>,
    data_format=backend.image_data_format(),
    require_flatten=include_top,
    weights=weights)
which is implemented here, with the most important part for us being:
if input_shape:
    ...
else:
    if require_flatten:
        input_shape = default_shape
    else:
        if data_format == 'channels_first':
            input_shape = (3, None, None)
        else:
            input_shape = (None, None, 3)
Thus, if I am not mistaken, when we set include_top to False, instead of getting the default shape we end up with an undefined number of rows and columns. I am not sure how this is converted to tflite (no error is raised during conversion), but it really seems that Android cannot work with that (probably it is equivalent to setting an infinite image size). Hence this error when initializing the interpreter:
BytesRequired number of elements overflowed
When I set the proper input_shape argument in the constructor, i.e.
model = Xception(include_top=False, weights=None, input_shape=(rows, cols, channels))
then the converted model was working fine on Android.
As for why it is initializing correctly with MobileNetV2 in the same situation, i.e. by creating the model like so:
model = MobileNetV2(include_top=False)
I cannot explain...
Hope this brings an answer to your original question.
In fact, this is specified in the documentation, for instance in Xception:
input_shape: optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(299, 299, 3)`.
It should have exactly 3 inputs channels,
and width and height should be no smaller than 71.
E.g. `(150, 150, 3)` would be one valid value.
Whilst for MobileNetV2:
input_shape: Optional shape tuple, to be specified if you would
like to use a model with an input image resolution that is not
(224, 224, 3).
It should have exactly 3 inputs channels (224, 224, 3).
You can also omit this option if you would like
to infer input_shape from an input_tensor.
If you choose to include both input_tensor and input_shape then
input_shape will be used if they match, if the shapes
do not match then we will throw an error.
E.g. `(160, 160, 3)` would be one valid value.
Although it is not crystal clear.
This is my first go at creating an HDF5 file from scratch using the low-level commands in MATLAB.
My issue is that I am having a hard time writing data to each specific member of the datatype in my dataset.
First, I create a new HDF5 file and set up the group hierarchy:
new_h5 = H5F.create('new_hdf5_file.h5','H5F_ACC_TRUNC','H5P_DEFAULT','H5P_DEFAULT');
new_h5 = H5G.create(new_h5,'first','H5P_DEFAULT','H5P_DEFAULT','H5P_DEFAULT');
new_h5 = H5G.create(new_h5,'second','H5P_DEFAULT','H5P_DEFAULT','H5P_DEFAULT');
Then, I create my datatype:
datatype = H5T.create('H5T_compound',20);
H5T.insert(datatype,'first_element',0,'H5T_NATIVE_INT');
H5T.insert(datatype,'second_element',4,'H5T_NATIVE_DOUBLE');
H5T.insert(datatype,'third_element',12,'H5T_NATIVE_DOUBLE');
Then I create my dataset with that datatype:
new_h5 = H5D.create(new_h5,'location',datatype,H5S.create('H5S_SCALAR'),'H5P_DEFAULT');
subset = H5D.get_type(H5D.open(new_h5,'/first/second/location'));
mem_type = H5T.get_member_type(subset,0);
I receive an error with the following command:
H5D.write(mem_type,'H5ML_DEFAULT','H5S_ALL','H5S_ALL','H5P_DEFAULT',data);
Error using hdf5lib2
Unhandled HDF5 class (H5T_NO_CLASS) encountered. It is not possible to write to this attribute or dataset.
So, I try this method instead:
new_h5 = H5D.create(new_h5,'location',datatype,H5S.create_simple(2,dims,dims),'H5P_DEFAULT'); %where dims are the dimensions of all matrices of data structure
H5D.write(mem_type,'H5ML_DEFAULT','H5S_ALL','H5S_ALL','H5P_DEFAULT',data); %where data is a structure
I receive an error with this following command:
H5D.write(mem_type,'H5ML_DEFAULT','H5S_ALL','H5S_ALL','H5P_DEFAULT',data);
Error using hdf5lib2
Attempted to transfer too many values to or from the library buffer.
When looking here at the XML tags for the error messages, the above error is described as "illegalArrayAccess." Apparently, according to this question, you can only write to 4 members without the buffer throwing an error?
Is this correct? How can I correctly write to each member? I am about to reach my mental limit trying to figure this one out.
EDIT:
References kept here for general information:
HDF5 Compound Datatypes Example
HDF5 Compound Datatypes
H5D.write MATLAB Command
I found out why I could not write data, and I have solved the problem. I had my dimensions set incorrectly (in code I forgot to include originally). My apologies. I had my dimensions like this:
dims = fliplr(size(data_matrix));
Since data_matrix is 15x250, those dims described the extent of the whole matrix. The error was that the buffer was being asked to write a 250x15 array for each member, while there was only 250x1 of data for each member.
The following code will (generically) work for writing data to each member:
new_h5 = H5F.create('new_hdf5_file.h5','H5F_ACC_TRUNC','H5P_DEFAULT','H5P_DEFAULT');
new_h5 = H5G.create(new_h5,'first','H5P_DEFAULT','H5P_DEFAULT','H5P_DEFAULT');
new_h5 = H5G.create(new_h5,'second','H5P_DEFAULT','H5P_DEFAULT','H5P_DEFAULT');
datatype = H5T.create('H5T_compound',20);
H5T.insert(datatype,'first_element',0,'H5T_NATIVE_INT');
H5T.insert(datatype,'second_element',4,'H5T_NATIVE_DOUBLE');
H5T.insert(datatype,'third_element',12,'H5T_NATIVE_DOUBLE');
dims = fliplr(size(data_matrix)); dims = [1 dims(1,2)];
new_h5 = H5D.create(new_h5,'location',datatype,H5S.create_simple(2,dims,dims),'H5P_DEFAULT');
H5D.write(new_h5,'H5ML_DEFAULT','H5S_ALL','H5S_ALL','H5P_DEFAULT',data_structure);
where data_matrix is a 15x250 matrix containing all data, and where data_structure is a structure containing 15 fields, each 250x1 in size.
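For completeness, a hedged sketch of how such a structure could be assembled from a data matrix for the three-member datatype defined above (the row-to-member mapping here is an assumption for illustration):
% One struct field per compound member; each field holds one value per record (250x1).
data_structure.first_element  = int32(data_matrix(1,:).');  % H5T_NATIVE_INT member
data_structure.second_element = data_matrix(2,:).';         % H5T_NATIVE_DOUBLE member
data_structure.third_element  = data_matrix(3,:).';         % H5T_NATIVE_DOUBLE member
MATLAB's low-level H5D.write matches the struct field names against the compound member names, so every member defined with H5T.insert needs a corresponding field.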
Can someone please tell me how I can access the results of power flow calculations using Matpower 3.2? In the manual, there is an instruction to do the following (to access, for example, the real power injected at the "from" bus end):
mpc = loadcase('case14')
results = runpf(mpc)
branch_pf = results.branch(:, 14)
But when I do that, nothing comes up; results is saved as a variable whose value is 100, and it seems like the results (in this case the real power at the "from" bus end, or any other quantity) are not stored anywhere but only printed in the command window.
When you run the first line of code, you load the system data into a struct called mpc. This struct contains all the information you need in order to perform power flow studies.
For the 'case14', it should look like this:
mpc =
version: '2'
baseMVA: 100
bus: [14x13 double]
gen: [5x21 double]
branch: [20x13 double]
gencost: [5x7 double]
If you have something different here, then you have messed up.
When you run the second line of code you'll get something like this, followed by a large number of rows with results, all nicely formatted with headers etc.
MATPOWER Version 4.1, 14-Dec-2011 -- AC Power Flow (Newton)
Newton's method power flow converged in 2 iterations.
Converged in 1.14 seconds
================================================================================
| System Summary |
================================================================================
How many? How much? P (MW) Q (MVAr)
--------------------- ------------------- ------------- -----------------
Buses 14 Total Gen Capacity 772.4 -52.0 to 148.0
Generators 5 On-line Capacity 772.4 -52.0 to 148.0
Now, you didn't just want to see the results, you want to store them, right? If you have a working version of Matpower and haven't screwed up any files, you should get a results variable like this:
results =
version: '2'
baseMVA: 100
bus: [14x13 double]
gen: [5x21 double]
branch: [20x17 double]
gencost: [5x7 double]
order: [1x1 struct]
et: 1.1400
success: 1
Note the success attribute at the end. If this is not 1, the solution didn't converge. Since case14 is a sample case, it should definitely converge; unless you have messed up, you should have success: 1.
The last line actually does what you want. The active power flows in the first six branches are:
branch_pf = results.branch(:, 14)
branch_pf =
156.8829
75.5104
73.2376
56.1315
41.5162
-23.2857
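If you would rather not hard-code column 14, here is a small sketch using the named column indices that ship with recent Matpower versions (define_constants; older releases expose the same indices via idx_brch):
define_constants;                    % defines PF, QF, PT, QT, ... column indices
mpc = loadcase('case14');
results = runpf(mpc);
branch_pf = results.branch(:, PF);   % PF is the "from"-end real power column (14)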
After running those lines, both mpc and results appear as structs in the workspace.
This is in fact an off-topic question, but since this is the first power system related question I've seen, and you're using Matpower (I've used it a lot), I couldn't not answer it.
I need to categorize a dataset according to different age groups. The categorization depends on whether the Sex is Male or Female. I first subset the data by gender and then use the ordinal function (the dataset is from a MATLAB example). The following code crashes on the last line, when I try to vertically concatenate the subsets:
load hospital;
subset_m=hospital(hospital.Sex=='Male',:);
subset_f=hospital(hospital.Sex=='Female',:);
edges_f=[0 20 max(subset_f.Age)];
edges_m=[0 30 max(subset_m.Age)];
labels_m = {'0-19','20+'};
labels_f = {'0-29','30+'};
subset_m.AgeGroup= ordinal(subset_m.Age,labels_m,[],edges_m);
subset_f.AgeGroup = ordinal(subset_f.Age,labels_f,[],edges_f);
vertcat(subset_m,subset_f);
Error using dataset/vertcat (line 76)
Could not concatenate the dataset variable 'AgeGroup' using VERTCAT.
Caused by:
Error using ordinal/vertcat (line 36)
Ordinal levels and their ordering must be identical.
Edit
It seems that a vital part was missing from the question; here is the answer to the corrected question. You need to use join rather than vertcat, for example:
joinFull = join(subset_f,subset_m,'LeftKeys','LastName','RightKeys','LastName','type','rightouter','mergekeys',true)
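If you do need vertcat itself to succeed, one hedged workaround (separate from the join above) is to give both subsets identical level labels; the bin edges can still differ per gender, though the labels then no longer spell out the age ranges:
% Identical ordinal levels on both subsets, so vertcat no longer complains.
labels = {'younger','older'};
subset_m.AgeGroup = ordinal(subset_m.Age, labels, [], edges_m);
subset_f.AgeGroup = ordinal(subset_f.Age, labels, [], edges_f);
combined = vertcat(subset_m, subset_f);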
Solution of original problem
It seems like you are actually trying to work with the wrong variable. If I change all instances of hospitalCopy into hospital then everything works fine for me.
Perhaps you copied hospital and edited it, thus losing the validity of the input.
If you really need to have hospitalCopy make sure to assign to it directly after load hospital.
If this does not help, try using clear all before running the code and make sure there is no file called 'hospital' in your current directory.