I'm trying to train a custom classifier, but I keep getting a failed status.
{
  "classifier_id": "Castles_1969040174",
  "name": "Castles",
  "status": "failed"
}
I checked the images and they are the right size (< 10 MB each), and both ZIP files contain more than 10 images and are well under 100 MB.
Is there any other condition I could check to get my classifier to train correctly?
Run code to retrieve the classifier details; the API documentation explains how:
https://cloud.ibm.com/apidocs/visual-recognition#retrieve-classifier-details
With curl it will look like this:
curl -u "apikey:{apikey}" "https://gateway.watsonplatform.net/visual-recognition/api/v3/classifiers/Castles_1969040174?version=2018-03-19"
The response will contain an explanation of why creating the classifier failed.
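If you prefer Python, the same request looks roughly like this; only the requests library is assumed here, and {apikey} is a placeholder you need to replace with your own key:
import requests

# Same endpoint as the curl call above, with the version as a query parameter
url = ("https://gateway.watsonplatform.net/visual-recognition/api/v3/"
       "classifiers/Castles_1969040174")
resp = requests.get(url, params={"version": "2018-03-19"}, auth=("apikey", "{apikey}"))
print(resp.json())
The JSON body should include the reason the training failed.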
Hi all, I have been researching this challenge for some time now and all my efforts have been futile. What am I trying to do?
I am running YOLOv5, and it works fine in both the training and detection stages. The detection step outputs multiple .txt files per video, covering each detection per frame. This is the command I am using:
python3 detect.py --weights /Users/YOLO2ClassOnly/yolov5/runs/train/exp11/weights/best.pt --source /Users/YOLO2ClassOnly/yolov5/data/videos --conf 0.1 --line-thickness 1 --save-txt --save-conf
This command produces multiple text files, for example vid0_walking.txt, vid1_walking.txt, vid2_walking.txt, and so on.
This is depleting my storage resources and I am trying to avoid this.
What would I like to do?
Store the detections in one .csv file, in this format, please:
# xmin ymin xmax ymax confidence class name
# 0 749.50 43.50 1148.0 704.5 0.874023 0 person
# 2 114.75 195.75 1095.0 708.0 0.624512 0 person
# 3 986.00 304.00 1028.0 420.0 0.286865 27 tie
I have been following Glenn Jocher's links here:
https://github.com/ultralytics/yolov5/issues/7499
But this is futile; the function print(results.pandas().xyxy[0]) does not generate the output above when the source is a video.
Please help, this is challenging me due to my lack of understanding.
Thanks in advance for acknowledging my digital presence; I am grateful for your guidance!
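For reference, this is roughly the kind of consolidation I have in mind, based on the torch.hub loading shown in that issue. The paths and the frame loop are my own assumptions, not something from the repository; the idea is simply to run inference frame by frame and append every results.pandas().xyxy[0] table to one CSV:
import cv2
import torch
import pandas as pd

# Hypothetical paths -- point these at the actual weights and video
model = torch.hub.load('ultralytics/yolov5', 'custom',
                       path='runs/train/exp11/weights/best.pt')
model.conf = 0.1

cap = cv2.VideoCapture('data/videos/vid0_walking.mp4')
rows, frame_idx = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    df = results.pandas().xyxy[0]          # xmin, ymin, xmax, ymax, confidence, class, name
    df.insert(0, 'frame', frame_idx)
    rows.append(df)
    frame_idx += 1
cap.release()

pd.concat(rows, ignore_index=True).to_csv('vid0_walking.csv', index=False)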
I'm working in a Google Colab notebook and set up via
!wget http://setup.johnsnowlabs.com/nlu/colab.sh -O - | bash
import nlu
A quick version check with nlu.version() confirms 3.4.2.
Several of the official tutorial notebooks (for example, the XLNet one) create a multi-model pipeline that includes both 'sentiment' and 'emotion'.
Direct copy of content from the notebook:
import pandas as pd
# Download the dataset
!wget -N https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/resources/en/sarcasm/train-balanced-sarcasm.csv -P /tmp
# Load dataset to Pandas
df = pd.read_csv('/tmp/train-balanced-sarcasm.csv')
pipe = nlu.load('sentiment pos xlnet emotion')
df['text'] = df['comment']
max_rows = 200
predictions = pipe.predict(df.iloc[0:100][['comment','label']], output_level='token')
predictions
However, running a prediction on this pipe results in the following error:
sentimentdl_glove_imdb download started this may take some time.
Approximate size to download 8.7 MB
[OK!]
pos_anc download started this may take some time.
Approximate size to download 3.9 MB
[OK!]
xlnet_base_cased download started this may take some time.
Approximate size to download 417.5 MB
[OK!]
classifierdl_use_emotion download started this may take some time.
Approximate size to download 21.3 MB
[OK!]
glove_100d download started this may take some time.
Approximate size to download 145.3 MB
[OK!]
tfhub_use download started this may take some time.
Approximate size to download 923.7 MB
[OK!]
sentence_detector_dl download started this may take some time.
Approximate size to download 354.6 KB
[OK!]
---------------------------------------------------------------------------
IllegalArgumentException Traceback (most recent call last)
<ipython-input-1-9b2e4a06bf65> in <module>()
34
35 # NLU to gives us one row per embedded word by specifying the output level
---> 36 predictions = pipe.predict( df.iloc[0:5][['text','label']], output_level='token' )
37
38 display(predictions)
9 frames
/usr/local/lib/python3.7/dist-packages/pyspark/sql/utils.py in raise_from(e)
IllegalArgumentException: requirement failed: Wrong or missing inputCols annotators in SentimentDLModel_6c1a68f3f2c7.
Current inputCols: sentence_embeddings#glove_100d. Dataset's columns:
(column_name=text,is_nlp_annotator=false)
(column_name=document,is_nlp_annotator=true,type=document)
(column_name=sentence,is_nlp_annotator=true,type=document)
(column_name=sentence_embeddings#tfhub_use,is_nlp_annotator=true,type=sentence_embeddings).
Make sure such annotators exist in your pipeline, with the right output names and that they have following annotator types: sentence_embeddings
Having experimented with various combinations of models, it turns out that the problem occurs whenever the 'sentiment' and 'emotion' models are specified in the same pipeline, regardless of pipeline order or what other models are listed.
Running pipe = nlu.load('emotion ANY OTHER MODELS') or pipe = nlu.load('sentiment ANY OTHER MODELS') succeeds, so it really does appear to be caused only by combining 'sentiment' and 'emotion'.
Is this a known bug? Does anyone have any suggestions for fixing it?
My temporary solution has been to run emoPipe = nlu.load('emotion').predict() in isolation, then inner join the resulting dataframe to the resulting df of pipe = nlu.load('sentiment pos xlnet').predict().
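In rough outline, the workaround looks like this; the 'emotion' column name and the join on the shared prediction index are my assumptions about the output, not documented behaviour:
import nlu

# Run the emotion model separately from the rest of the pipeline
emo_df  = nlu.load('emotion').predict(df.iloc[0:100][['comment', 'label']])
main_df = nlu.load('sentiment pos xlnet').predict(df.iloc[0:100][['comment', 'label']])

# Join the two prediction frames on their shared index
combined = main_df.join(emo_df[['emotion']], how='inner')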
However, I would like to understand better what is failing and to know if there is a way to streamline the inclusion of all models.
Thanks
When I run Pester I get this output:
Covered 100% / 75%. 114 analyzed Commands in 1 File
What does the 75% mean? I haven't been able to find it anywhere in the documentation.
It is the value of $PesterPreference.CodeCoverage.CoveragePercentTarget.Value, i.e. the minimum amount of test coverage you want to achieve. This is set to 75% by default.
It's mentioned on the page describing New-PesterConfiguration:
https://pester-docs.netlify.app/docs/commands/New-PesterConfiguration
CoveragePercentTarget: Target percent of code coverage that you want
to achieve, default 75%. Default value: 75
But it was quite hard to figure out, and it could do with being added to the documentation page about test coverage. I ended up searching through the source code and found that the message you listed is output here:
https://github.com/pester/Pester/blob/1515194f4868f6aaae82d7d376a8a776afe0ebf4/src/functions/Output.ps1
CoverageMessage = 'Covered {2:0.##}% / {5:0.##}%. {3:N0} analyzed {0} in {4:N0} {1}.'
Which is populated with values here:
$coverageMessage = $ReportStrings.CoverageMessage -f $command, $file, $executedPercent, $totalCommandCount, $fileCount, $PesterPreference.CodeCoverage.CoveragePercentTarget.Value
(Please pardon my long post; I dearly appreciate your help.)
I am training the SqueezeDet model on Pascal VOC style custom data, as per the training code from the repository HERE
train.py
model_definition and HERE
The saved model checkpoint performs well, as I can see acceptable performance.
Now I am trying to freeze the model for deployment with Core ML, to see how it performs on a mobile platform. The authors of the script only report performance in a GPU environment in their research paper.
I follow the steps recommended by TensorFlow; my commands are below.
First,
I write the graph out from the checkpoint meta file:
import tensorflow as tf

path_to_ckpt_meta = rootdir + "model.ckpt-355000.meta"
path_to_ckpt_data = rootdir + "model.ckpt-355000"
# Rebuild the graph from the .meta file and restore the variable values
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
saver = tf.train.import_meta_graph(path_to_ckpt_meta)
saver.restore(sess, path_to_ckpt_data)
# Write the graph definition out as a binary GraphDef (.pb)
tf.train.write_graph(tf.get_default_graph().as_graph_def(), rootdir, "model_ckpt_355000_graph_V2.pb", False)
Now
I check the graph summary and see all the tensors in the model. The output summary file is HERE.
However, when I inspect the checkpoint file with TensorFlow's inspect_checkpoint.py, I see no image_input node. The output of the inspection is HERE.
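As a side note, the checkpoint itself stores variable values rather than graph nodes, so a quick way to confirm whether the input is present is to list the Placeholder ops in the GraphDef written out in step 1. This is my own sanity check, not part of the repository's code:
import tensorflow as tf

# Load the binary GraphDef from step 1 and print its placeholder (input) ops
graph_def = tf.GraphDef()
with open(rootdir + "model_ckpt_355000_graph_V2.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op == "Placeholder":
        print(node.name)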
Second,
I freeze the graph using TensorFlow's freeze_graph.py:
python ./tensorflow/python/tools/freeze_graph.py \
--input_graph=path-to-dir/train/model_ckpt_355000_graph.pb \
--input_checkpoint=path-to-dir/train/model.ckpt-355000 \
--output_graph=path-to-dir/train/frozen_sqdt_ckpt_355000.pb \
--output_node_names=bbox/trimming/bbox,probability/score,probability/class_idx
The freeze_graph call completes without error and produces the frozen graph specified in the command above.
Now,
when I check the frozen graph using the summarize_graph tool
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=/tmp/logs/squeezeDet_NewDataset_test01_March02/train/frozen_sqdt_ckpt_355000.pb
I get the following:
No inputs spotted.
No variables spotted.
Found 3 possible outputs: (name=bbox/trimming/bbox, op=Transpose) (name=probability/score, op=Max) (name=probability/class_idx, op=ArgMax)
Found 2703452 (2.70M) const parameters, 0 (0) variable parameters, and 0 control_edges
Op types used: 130 Const, 68 Identity, 32 BiasAdd, 32 Conv2D, 31 Relu, 15 Mul, 14 Add, 10 ConcatV2, 9 Sub, 5 RealDiv, 5 Reshape, 4 Maximum, 4 Minimum, 3 StridedSlice, 3 MaxPool, 2 Exp, 2 Greater, 2 Cast, 2 Select, 1 Transpose, 1 Softmax, 1 Sigmoid, 1 Unpack, 1 RandomUniform, 1 QueueDequeueManyV2, 1 Pack, 1 Max, 1 Floor, 1 FIFOQueueV2, 1 ArgMax
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/tmp/logs/squeezeDet_NewDataset_test01_March02/train/frozen_sqdt_ckpt_355000.pb --show_flops --input_layer= --input_layer_type= --input_layer_shape= --output_layer=bbox/trimming/bbox,probability/score,probability/class_idx
The output above suggests that no input is detected in the frozen graph. I check the summary of the frozen graph and find no image_input tensor. HERE
When I check my original graph (written in step 1) with summarize_graph, it does show inputs.
My troubleshooting suggests there is some mix-up in the original authors' code where image_input is not provided as an input tensor. The confusing part, though, is that I can see the input image tensor in the summary of the graph written out from the checkpoint meta file.
My questions are:
-- Why does the frozen graph drop the input nodes when the original graph has them?
-- What can I do to change this so that freeze_graph works correctly?
-- Is there a transformation I need to perform to make this frozen model compatible with the Core ML format?
All your help is much appreciated.
Best
Aman
I want to extract the total bill from image receipts. I can extract all the data present in the image, but now I am stuck with the problem of extracting only the information that I need.
This is the image that I have.
I am pasting the information extracted from the image:
m cm lnnk 3mm: :33; no 1 z m
x Visut all! ms“; (or nulnunn mfn an an: nan.
Sub Iota] 19.56
TOTAL 19.56
VISA 1956
Fun 19.56
D!!! You Know 0
For ureat-tastlru dessens under 200
cahries, try our Triple Berry Frozen
Yogurt Sunda: a dish of Frozen Yogurt.
or a Vanma rozen Vugurt Done.
From this data I just want to extract the total bill. To get this, I found that I could use ad hoc normalization (ad hoc retrieval). Can someone provide any insights on ad hoc retrieval? If there is any other option to extract the data from the image, please let me know. I am using Tesseract to extract this information, but sometimes it does not give the proper output. I could use some help in improving the output given by Tesseract.
Why do you need ad hoc retrieval in this case? Since you are getting the OCR result from the receipt, you can simply perform a regular text search for the item appearing next to "TOTAL".
There are algorithms for image text search, but this seems like overkill for such a straightforward application unless there is a good reason to do so.
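For example, here is a minimal sketch of that text search; the regex and the inline sample string are my own, assuming the OCR text looks like what you pasted:
import re

ocr_text = """Sub Iota] 19.56
TOTAL 19.56
VISA 1956"""

# Grab the amount on the line containing "TOTAL" (case-insensitive).
# With cleaner OCR you may also need to skip a "Sub Total" line first.
match = re.search(r'total\s+\$?(\d+[.,]\d{2})', ocr_text, flags=re.IGNORECASE)
if match:
    print(match.group(1))   # -> 19.56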