How to apply a Keras model (.h5 converted to .json) to HTML - tf.keras

I have an Inception-v3 model trained with tf.keras, and, following the official TensorFlow.js documentation, I converted the .h5 file to .json. The problem now is how to load the local model and run predictions in JavaScript.
I looked at https://github.com/tensorflow/tfjs-models/tree/master/mobilenet, where I noticed this code:
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/mobilenet@1.0.0"></script>
Does this mean I need to upload the model to the web before I can use it? If so, which file should I upload? Another question: does the image format need to be changed before the model can predict on an image? If so, what should I do?
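For reference, here is a minimal sketch of the conversion step using the tensorflowjs Python package (the file names are hypothetical). The relevant point for "which file should I upload" is that the converter writes model.json plus one or more binary weight-shard files, and all of them have to be hosted together, since model.json references the shards:

import tensorflow as tf
import tensorflowjs as tfjs  # pip install tensorflowjs

# Hypothetical path to the trained tf.keras Inception-v3 model
model = tf.keras.models.load_model("inception_v3.h5")

# Writes model.json plus group1-shard*.bin weight files into model_js/
tfjs.converters.save_keras_model(model, "model_js")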

Related

How to use Transfer Learning for Text classification in Apple's Create ML?

I was watching https://developer.apple.com/videos/play/wwdc2019/428/, where Transfer Learning is used for text classification in Create ML.
I wanted to do the same and created datasets with the following structure:
Folders where the name of each folder is the label / answer to a question.
Inside each folder there are 10 text files with questions.
Pretty much the same idea as in the video.
When I now choose Transfer Learning (as in the video) and start training, the window tells me "Model has no Data" (see screenshot).
Error Screenshot
What am I doing wrong?
Looking at your error screenshot, you are missing the test data.

How to load an SVG and a referenced PNG from a REST service at the same time?

I'm trying to create a web service in PHP that can deliver an SVG that references a PNG raster image. Both the SVG data and the binary PNG image come from a MySQL database on the server.
Option A: Encode the PNG data in base-64 and embed it directly in the SVG, such as:
<image xlink:href="data:image/png;base64,..."/>
Concerns: roughly 30% heavier payload than loading it as pure binary, and a noticeable delay when loading it with Postman (or is that just Postman?); see the sketch below.
Option B: Fetch the PNG data as binary and save it as a file on the file system, then fetch the SVG file, which would then reference the physical PNG file.
Concerns: Involvement of the file system (which implies I need to start managing physical files, expiration dates, etc.).
Is there perhaps another way that an SVG can reference the binary data on the fly without it having to be on the file system?
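For illustration, here is a minimal Python sketch of Option A: reading PNG bytes, base64-encoding them, and embedding the result in an SVG data URI (the file name is hypothetical; in the web service the bytes would come from MySQL rather than a file):

import base64

# Hypothetical stand-in for the PNG bytes fetched from the database
with open("raster.png", "rb") as f:
    png_bytes = f.read()

# Base64 inflates the payload by roughly a third, hence the ~30% concern
encoded = base64.b64encode(png_bytes).decode("ascii")

svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" '
    'xmlns:xlink="http://www.w3.org/1999/xlink">'
    '<image xlink:href="data:image/png;base64,' + encoded + '"/></svg>'
)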
To accomplish something similar (in my case, sending SVG data together with additional attributes about each file as binary, which is much smaller than sending XML, text, or JSON), I use CBOR. In my case, I first compress the SVG with LZString compression, then add it, along with the additional data attributes, to a JSON object, and finally convert the JSON object to CBOR. I think CBOR can handle your base-64 data without any need for conversion; more information about it is here: cbor.io
I found a PHP library for CBOR here: https://github.com/2tvenom/CBOREncode
This may not be the way to go at all for you, but I thought I'd throw it out there just in case.
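As a rough sketch of that idea in Python (using zlib in place of the JavaScript LZString library, and the cbor2 package for the CBOR encoding; both are stand-ins for whatever your stack provides, and the metadata fields are hypothetical):

import zlib
import cbor2  # pip install cbor2

svg_text = '<svg xmlns="http://www.w3.org/2000/svg">...</svg>'

# Compress the SVG first (the answer uses LZString; zlib is a stand-in)
compressed_svg = zlib.compress(svg_text.encode("utf-8"))

# Bundle the compressed SVG with extra attributes; CBOR stores raw bytes
# natively, so no base-64 step is needed
payload = {"svg": compressed_svg, "width": 512, "height": 512}

binary = cbor2.dumps(payload)   # send this as the response body
restored = cbor2.loads(binary)  # round-trip check on the receiving side
assert zlib.decompress(restored["svg"]).decode("utf-8") == svg_text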

CNTK TransferLearning.model

I've been following the Build your own image classifier using Transfer Learning tutorial found at this link.
At the end of the tutorial, it says you can use your own data set, which I created by following the example in the tutorial. It completed successfully and created the folder ~/CNTK-Samples-2-3-1/Examples/Image/TransferLearning/Output, as expected. Inside this output folder are predictions.txt, predOutput.txt, and TransferLearning.model.
My question is, how do I access TransferLearning.model? I'm not sure what to do with it and I can't find anything in the documentation allowing me to alter it or run it.
Once you have successfully created your model, you can use it in a Python environment (with CNTK) to classify your images. The code should look something like this:
from cntk import load_model
# eval_single_image is the helper defined in the tutorial's TransferLearning.py
from TransferLearning import eval_single_image

# Load the trained transfer-learning model from the Output folder
trained_model = load_model(model_file)

# For every new image: get the class probabilities for a single image
probs = eval_single_image(trained_model, img_file, image_width, image_height)

How can I create a directed tree graph in Gephi instead of a spherical one

I want to make a network graph which shows the distribution of our documents in our folder structure.
I have the nodefile, edgefile and gephi graph file in this location:
https://1drv.ms/f/s!AuVfRBdVHkO7hgs5K9r9f7jBBAUH
What I do is:
Run the ForceAtlas2 algorithm with scaling 10-20, "Dissuade Hubs" and "Prevent Overlap" checked, and all other settings at their defaults.
What I get is a graph with groups distributed radially/spherically. However, what I want is a directed tree network graph.
Anyone know how I can adjust Gephi to make this?
Thanks!
I just found a solution.
I tested the file format as shown on the yEd site's "import excel file" page:
http://yed.yworks.com/support/manual/import_excel.html
This gave me the yEd import dialog (it took a lifetime to figure out that it's a pop-up menu and not reachable through the standard menu).
Anyway, it worked, and I adjusted the test files with the data I had prepared for Gephi. This was pretty easy; I could reuse the source/target IDs etc. Just copy-paste.
I loaded it into yEd and used some directed and radial clustering algorithms on it. Works fine!
Below you can find the Excel node/edge file used to import into yEd and the graph file you can open with yEd to see the final radial result.
https://1drv.ms/f/s!AuVfRBdVHkO7hg6DExK_eVkm5_mR
The only thing left to figure out is how to map the weight (which represents the number of documents) to the node size.
Unfortunately, as of version 0.9.0, Gephi no longer supports hierarchical graphs. Maybe try using a previous version?
Other alternatives involve more complex software, such as Graphviz, but you need a .dot file instead of your .csv. I looked all over but could not find an easy-to-use CSV-to-DOT converter (a minimal sketch of one is below).
You could try looking at d3-hierarchy, a node.js program, but then again you need to use the not-so-user-friendly npm. If you look at the link, it looks like it can produce the kind of diagram you're looking for.
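If it helps, here is a minimal Python sketch of such a converter. It assumes an edge CSV with Source and Target columns, as Gephi exports them; the file names are hypothetical:

import csv

# Hypothetical input: the Gephi edge list with "Source" and "Target" columns
with open("edges.csv", newline="") as f:
    edges = list(csv.DictReader(f))

# Write a directed Graphviz graph; render with: dot -Tpng tree.dot -o tree.png
with open("tree.dot", "w") as f:
    f.write("digraph tree {\n")
    for edge in edges:
        f.write('  "{}" -> "{}";\n'.format(edge["Source"], edge["Target"]))
    f.write("}\n")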

Loading a libsvm text file in scikit

I have a text file called "test.txt" which contains data in libsvm format.
Data in this file is represented as follows:
165475 0:246870 1124384:2 342593:7 1141651:1 297582:1 1186846:1 17725:1 656602:1
463304:1 766612:1 573309:1 290046:1 748198:1 216665:1 950594:2 909004:1 29008:1
105623:1 5018:5 806027:1 1125729:1 757846:1 1023921:2 612980:1 120767:1 51340:1
108172:5 674420:2
where the first term is the label and the remaining terms are features and their weights (separated by :). This is a very large file (every label has lots of features and weights).
I am using scikit-learn with an IPython notebook and want to load this data in the notebook to start processing it.
Can someone tell me how to do that? Thanks in advance.
Use load_svmlight_file from sklearn.datasets. For example:
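A minimal sketch (the file name is taken from the question; scikit-learn must be installed):

from sklearn.datasets import load_svmlight_file

# X is a sparse SciPy matrix of feature weights, y is the array of labels
X, y = load_svmlight_file("test.txt")

print(X.shape, y.shape)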