CNTK TransferLearning.model - neural-network

I've been following the Build your own image classifier using Transfer Learning tutorial found at this link.
At the end of the tutorial, it says you can use your own data set, which I created by following the example in the tutorial. Training completed successfully and created the folder ~/CNTK-Samples-2-3-1/Examples/Image/TransferLearning/Output, as expected. Inside this output folder are predictions.txt, predOutput.txt, and TransferLearning.model.
My question is: how do I access TransferLearning.model? I'm not sure what to do with it, and I can't find anything in the documentation that explains how to alter it or run it.

Once you successfully create your model, you can use it in a Python environment (with CNTK) to classify your images. The code should look something like this:
from cntk import load_model

# load the trained transfer-learning model (e.g. Output/TransferLearning.model)
trained_model = load_model(model_file)

# for every new image: get the class probabilities for a single image;
# eval_single_image is the helper from the tutorial's TransferLearning script
probs = eval_single_image(trained_model, img_file, image_width, image_height)
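If you just need the top class, take the argmax of those probabilities and look it up in your class list. A minimal sketch — class_mapping below is assumed to be the ordered list of class names you trained with; it is not defined in the snippet above:

import numpy as np

# probs is the list of per-class probabilities returned by eval_single_image
predicted_index = int(np.argmax(probs))
print("Predicted class:", class_mapping[predicted_index])  # class_mapping: assumed list of class names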

Related

Working on a GitHub open-source Object Detection project with Faster-RCNN, but can't figure out 'each element in list of batch should be of equal size'

I am trying to run through open-source Faster-RCNN code. Although I copied and pasted the exact same code from the original, it doesn't work.
I could not figure out what I am missing in order to run the code properly.
When I run this:
# to validate class images...
test_loader = DataLoader(valid_dataset, batch_size=len(valid_dataset))
a, b, c = next(iter(test_loader))
print(torch.unique(c[0], return_counts=True))
The following error comes out (posted as a screenshot in the original question):
each element in list of batch should be of equal size
This is the original GitHub address:
https://github.com/pranayKD/faster_rcnn_colab_pytorch/blob/master/Faster_RCNN.ipynb
Thanks for reading,
River.
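For what it's worth, that message usually comes from PyTorch's default collate function trying to stack per-image detection targets that have different numbers of boxes. A rough sketch of the usual workaround, a custom collate_fn that keeps the samples grouped as tuples instead of stacking them (valid_dataset stands for the dataset from the notebook; the function name is just illustrative):

from torch.utils.data import DataLoader

def detection_collate(batch):
    # each sample is e.g. (image, boxes, labels); zip keeps them grouped per field
    # without trying to stack tensors of different sizes into one tensor
    return tuple(zip(*batch))

test_loader = DataLoader(valid_dataset, batch_size=len(valid_dataset),
                         collate_fn=detection_collate)
a, b, c = next(iter(test_loader))  # a, b, c are now tuples, one entry per sample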

How to use Transfer Learning for Text classification in Apple's Create ML?

I was watching https://developer.apple.com/videos/play/wwdc2019/428/, where Transfer Learning is used for text classification in Create ML.
I wanted to do the same and created datasets with the following structure:
folders where the name of the folder is the label / answer to a question.
Inside each folder there are 10 text files with questions.
Pretty much the same idea as in the video.
When I now choose Transfer Learning (like in the video) and start to train, the window tells me "Model has no Data" (see screenshot).
Error screenshot
What am I doing wrong?
Judging by your error screenshot, you are missing the test data.

How to save models in PyTorch from TPU to CPU

I am training a neural network model with PyTorch.
Since this model is very complicated, I made use of the pytorch_xla package to use a TPU. I finished training the model, and now I want to save the weights so I will be able to use them from any environment.
I tried to save the weights like so:
file_name = "model_params"
torch.save(model.state_dict(), file_name)
and when I tried to load them (from an environment which does not support TPU)
model.load_state_dict(torch.load(file_name))
I got the following error:
NotImplementedError: Could not run 'aten::empty_strided' with arguments from the 'XLA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty_strided' is only available for these backends: [CPU, Meta, BackendSelect, Named, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, UNKNOWN_TENSOR_TYPE_ID, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
Is there a way to do what I want?
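One approach that usually works for this (a sketch, not from the original thread; model is the trained network from your script and the file name is a placeholder) is to make sure the tensors leave the XLA device before they hit the file, either via torch_xla's xm.save or by moving the model to CPU first:

import torch
import torch_xla.core.xla_model as xm

file_name = "model_params"

# option 1: xm.save copies XLA tensors to CPU before serializing the state dict
xm.save(model.state_dict(), file_name)

# option 2: explicitly move the model to CPU, then use plain torch.save
torch.save(model.cpu().state_dict(), file_name)

# later, on a machine without TPU support, force CPU tensors while loading
model.load_state_dict(torch.load(file_name, map_location="cpu"))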

Matlab - replace classification layer using a script (automatically)

I want to train a neural network with a flexible output size. In the beginning, I used the MATLAB Deep Network Designer to manually replace the classification and fully connected layers to get the desired output size. Now I want to replace them automatically, using a script.
Which command works for that?
Simply trying the line:
net.Layers(142,1).InputSize = 10;
gives me the error message
Unable to set the 'InputSize' property of class 'FullyConnectedLayer' because it is read-only.
Trying to replace the complete layer (not only InputSize) results in the same error message.
Is this possible with MATLAB, and if so, which commands will do the job?
Thanks in advance!
Okay, I think I solved it. When changing the network manually inside the Deep Network Designer, there is the option to click "export code", which exports the code needed for the adapted output size.

How can I create a directed tree graph in Gephi instead of a spherical one

I want to make a network graph which shows the distribution of our documents in our folder structure.
I have the node file, edge file and Gephi graph file at this location:
https://1drv.ms/f/s!AuVfRBdVHkO7hgs5K9r9f7jBBAUH
What I do is:
Run the ForceAtlas2 algorithm with scaling 10-20, "dissuade hubs" and "prevent overlap" enabled, and all other settings left standard.
What I get is a graph with the groups distributed radially/spherically. However, what I want is a directed tree graph.
Does anyone know how I can adjust Gephi to produce this?
Thanks!
I just found a solution.
I tested the file format as shown on the yEd site's "import excel file" page:
http://yed.yworks.com/support/manual/import_excel.html
This gave me the yEd import dialog (it took a lifetime to figure out that it's a pop-up menu and not reachable through the standard menus).
Anyway, it worked, and I adjusted the test files with the data I had prepared for Gephi. This was pretty easy; I could use the source/target IDs etc. Just copy and paste.
I loaded it into yEd and used some directed and radial clustering algorithms on it. Works fine!
Below you can find the Excel node/edge file used for the yEd import and the graph file you can open with yEd to see the final radial result.
https://1drv.ms/f/s!AuVfRBdVHkO7hg6DExK_eVkm5_mR
The only thing left to figure out is how to map the weight (which represents the number of documents) onto the node size.
Unfortunately, as of version 0.9.0, Gephi no longer supports hierarchical graphs. Maybe try using a previous version?
Other alternatives involve more complex software, such as Graphviz, but you need a .dot file instead of your .csv. I looked all over, but could not find an easy-to-use csv to dot converter.
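That said, a few lines of Python are usually enough to turn an edge CSV into a .dot file for Graphviz. A rough sketch, assuming the edge file has Source and Target columns (adjust the file name and column names to your export):

import csv

# convert a Gephi-style edge list (Source,Target columns) into a Graphviz digraph
with open("edges.csv", newline="") as src, open("tree.dot", "w") as dst:
    dst.write("digraph tree {\n")
    for row in csv.DictReader(src):
        dst.write('  "{}" -> "{}";\n'.format(row["Source"], row["Target"]))
    dst.write("}\n")

Rendering it with, for example, dot -Tpng tree.dot -o tree.png then gives the layered, tree-like layout.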
You could try looking at d3-hierarchy, a Node.js package, but then again you need to use the not-so-user-friendly npm. If you look at the link, it looks like it can produce the kind of diagram you're looking for.