Moses machine translation - using Moses with Anymalign

Does anyone know how to replace GIZA++ in Moses with Anymalign, which can be obtained here?
In fact, there are 9 steps to using Moses. I want to start at step 4 without going through steps 2 and 3, but it seems impossible not to use GIZA++. Does anyone have a clue?

In the Moses manual,
on page 351 in section 8.3 (Reference: All Training Parameters), the parameter --first-step is described as "first step in the training process (default 1)". So you can use train-model.perl ... --first-step 4 to start training from step 4.
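For example, a sketch only (the corpus, language-model, and alignment settings here are placeholders, and step 4 still expects a word alignment file to exist in the working directory, which would have to be produced from Anymalign's output rather than by GIZA++):
train-model.perl --root-dir train --corpus corpus/lowercased --f fr --e en --alignment grow-diag-final-and --lm 0:3:/path/to/lm.gz --first-step 4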

Related

How do I store YOLOv5 Outputs to one csv file rather than many .txt files per detection

Hi all, I have been researching this challenge for some time now and all my efforts have been futile. What am I trying to do?
I am running YOLOv5, and it works fine in the training and detection stages. The detection run outputs multiple .txt files per video, one for each frame's detections. This is the command I am using:
python3 detect.py --weights /Users/YOLO2ClassOnly/yolov5/runs/train/exp11/weights/best.pt --source /Users/YOLO2ClassOnly/yolov5/data/videos --conf 0.1 --line-thickness 1 --save-txt --save-conf
This command produces multiple text files, for example vid0_walking.txt, vid1_walking.txt, vid2_walking.txt, etc.
This is depleting my storage resources and I am trying to avoid it.
What would I like to do?
Store all the detections in one .csv file in this format, please.
# xmin ymin xmax ymax confidence class name
# 0 749.50 43.50 1148.0 704.5 0.874023 0 person
# 2 114.75 195.75 1095.0 708.0 0.624512 0 person
# 3 986.00 304.00 1028.0 420.0 0.286865 27 tie
I have been following Glenn Jocher's links here:
https://github.com/ultralytics/yolov5/issues/7499
But this has been futile: the function print(results.pandas().xyxy[0])
does not generate the output described above for video input.
Please help, this is challenging me due to my lack of understanding.
Thanx in advance for acknowledging my digital presence and I am grateful for your guidance!
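One possible workaround, sketched below (untested; it assumes the default --save-txt layout of one normalized "class x_center y_center width height confidence" line per detection under runs/detect/exp*/labels/, plus a frame size and class-name mapping that would have to be adjusted to the actual model):
import glob
import os
import pandas as pd

LABEL_DIR = "runs/detect/exp/labels"  # assumed location of the --save-txt output
IMG_W, IMG_H = 1920, 1080             # assumed frame size of the source videos
NAMES = {0: "person", 27: "tie"}      # assumed class-id -> name mapping

rows = []
for path in sorted(glob.glob(os.path.join(LABEL_DIR, "*.txt"))):
    with open(path) as f:
        for line in f:
            cls, xc, yc, w, h, conf = map(float, line.split())
            # convert normalized xywh back to pixel xyxy
            rows.append({
                "frame": os.path.basename(path),
                "xmin": (xc - w / 2) * IMG_W,
                "ymin": (yc - h / 2) * IMG_H,
                "xmax": (xc + w / 2) * IMG_W,
                "ymax": (yc + h / 2) * IMG_H,
                "confidence": conf,
                "class": int(cls),
                "name": NAMES.get(int(cls), str(int(cls))),
            })

pd.DataFrame(rows).to_csv("detections.csv", index=False)
The per-frame .txt files can then be deleted once the single detections.csv has been written.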

How do I generate a fixed sized list of facts (duplicates included)?

I'm new to ASP and Clingo and I need to work on a project for school. I thought about a basic music generator.
For now, I need to generate notes (I'm sticking with C major for now). I also want to generate them randomly, and I don't know how to do that. How can I make the following code generate a random sequence of notes (duplicates included)?
note(c;d;e;f;g;a;b).
20 { play(X) : note(X)} 30.
#show play/1.
So far, the code effectively cannot go beyond 7 as the upper bound, because duplicate notes are never shown.
Current output: play(b) play(g) play(e) play(c)
Wanted output: play(d) play(g) play(f) ...[20-30 randomly generated notes]
I want to be able to add constraints later (such as this note should not be followed by that note, and so on). I appreciate any tips since I know so little about this.
An answer set is a set: its atoms have no order, and duplicates are not possible.
What you want instead is to guess one note for each beat.
% one symbolic beat per time step (widen the range, e.g. beat(1..30), for more notes)
beat(1..8).
% choose exactly one note N to play at each beat B
1 { play(N,B) : note(N) } 1 :- beat(B).
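Adding #show play/2. will then display one atom per beat, e.g. play(c,1) play(e,2) play(c,3) ..., so the same note can recur because the beat argument keeps the atoms distinct; widening the range in beat(1..8) controls how many notes are generated.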

tensorflow checkpoint missing input tensor node

(Please pardon my long post; I dearly appreciate your help.)
I am training the squeezeDet model on Pascal VOC-style custom data, following the training code from the repository HERE:
train.py
model_definition and HERE
The saved model checkpoint performs well; I can see acceptable performance.
Now I am trying to freeze the model for deployment with Core ML, to see how it performs on a mobile platform. The authors of the script only report performance in a GPU environment in their research paper.
I follow the recommended steps as per TensorFlow; my commands are below.
First,
I write the graph out from the checkpoint meta file:
import tensorflow as tf  # TF 1.x API

path_to_ckpt_meta = rootdir + "model.ckpt-355000.meta"
path_to_ckpt_data = rootdir + "model.ckpt-355000"
# restore the graph structure and the weights from the checkpoint
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
saver = tf.train.import_meta_graph(path_to_ckpt_meta)
saver.restore(sess, path_to_ckpt_data)
# write the graph definition out; as_text=False writes a binary protobuf
tf.train.write_graph(tf.get_default_graph().as_graph_def(), rootdir, "model_ckpt_355000_graph_V2.pb", False)
Now
I check the graph summary and see all the tensors in the model. The output summary file is HERE.
However, when I check the checkpoint file using the inspect_checkpoint.py tool from TensorFlow, I see no image_input nodes. The output of the inspection is HERE.
Second,
I freeze the graph using the TensorFlow freeze_graph.py tool:
python ./tensorflow/python/tools/freeze_graph.py \
--input_graph=path-to-dir/train/model_ckpt_355000_graph.pb \
--input_checkpoint=path-to-dir/train/model.ckpt-355000 \
--output_graph=path-to-dir/train/frozen_sqdt_ckpt_355000.pb \
--output_node_names=bbox/trimming/bbox,probability/score,probability/class_idx
The freeze_graph call completes without error and produces the frozen graph specified in the command above.
Now,
when I check the frozen graph with the summarize_graph tool:
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=/tmp/logs/squeezeDet_NewDataset_test01_March02/train/frozen_sqdt_ckpt_355000.pb
I get the following
No inputs spotted.
No variables spotted.
Found 3 possible outputs: (name=bbox/trimming/bbox, op=Transpose) (name=probability/score, op=Max) (name=probability/class_idx, op=ArgMax)
Found 2703452 (2.70M) const parameters, 0 (0) variable parameters, and 0 control_edges
Op types used: 130 Const, 68 Identity, 32 BiasAdd, 32 Conv2D, 31 Relu, 15 Mul, 14 Add, 10 ConcatV2, 9 Sub, 5 RealDiv, 5 Reshape, 4 Maximum, 4 Minimum, 3 StridedSlice, 3 MaxPool, 2 Exp, 2 Greater, 2 Cast, 2 Select, 1 Transpose, 1 Softmax, 1 Sigmoid, 1 Unpack, 1 RandomUniform, 1 QueueDequeueManyV2, 1 Pack, 1 Max, 1 Floor, 1 FIFOQueueV2, 1 ArgMax
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/tmp/logs/squeezeDet_NewDataset_test01_March02/train/frozen_sqdt_ckpt_355000.pb --show_flops --input_layer= --input_layer_type= --input_layer_shape= --output_layer=bbox/trimming/bbox,probability/score,probability/class_idx
The output above suggests that no input is detected in the frozen graph. I check the summary of the frozen graph and find no image_input tensor. HERE
When I check my original graph (written in step 1) with summarize_graph, it does show inputs.
My troubleshooting
suggests there is some mix-up in the original authors' code where image_input is not provided as an input tensor. The confusing part, though, is that I can see the input image tensor in the summary of the graph written out from the checkpoint meta file.
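As a quick sanity check (a sketch only, using the TF 1.x API and the frozen graph file name from above), the frozen .pb can be scanned for Placeholder ops to confirm whether image_input survived freezing:
import tensorflow as tf

# load the frozen GraphDef and list its Placeholder (input) nodes
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_sqdt_ckpt_355000.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
placeholders = [n.name for n in graph_def.node if n.op == "Placeholder"]
print("Placeholder nodes:", placeholders)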
My questions are:
-- Why does the frozen graph drop the input nodes when the original graph has them?
-- And what can I do to change this so that freeze_graph works correctly?
Is there a transformation I need to perform to make this frozen model compatible with the Core ML format?
All your help is much appreciated.
Best
Aman

Protege: set a data range expression for a data property

I have a data property hasCode that can take one of these values:
"1i"
"2i"
"3i"
"4i"
What expression do I have to write to get this restriction?
Thank you so much.
{"1"^^xsd:int, "2"^^xsd:int, "3"^^xsd:int, "4"^^xsd:int}
This should do the trick.
Note: there's a bug in Protege 5 beta 21 that prevents this from working. Either use Protege 5 beta 17 or wait for the next beta for this to work properly.

Matlab/Simulink: If block error

Please refer to the image at the following link to understand the question.
Image: http://www5.picturepush.com/photo/a/12014483/img/12014483.jpg
There are 2 inputs: 1. Speed_Pulse 2. PreviousSpeedPulse_1_old
The second input is simply the first input delayed by one time step using the Unit Delay block. The 'If' block compares the 2 inputs. If the input 'u1' (Speed_Pulse) is less than 'u2' (PreviousSpeedPulse_1_old), then, in the 'if action' block, the value 64 is simply added to u1 (Speed_Pulse). Else, the input Speed_Pulse is passed directly to the output via the 'else action' block. One of these outputs is routed to 'Temp' (depending on the if-else) using the 'Merge' block.
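For clarity, the intended computation can be sketched as follows (a sketch only; the 64 is assumed to be the wrap-around value of the pulse counter):
def temp(speed_pulse, previous_speed_pulse):
    # 'if action' path: the counter appears to have wrapped, so compensate by adding 64
    if speed_pulse < previous_speed_pulse:
        return speed_pulse + 64
    # 'else action' path: pass the input straight through
    return speed_pulse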
Now, please refer to following table of inputs and outputs.
The table is at this link: http://img521.imageshack.us/img521/8684/tablewy.png
In the table, the values are wrong at instants 4 and 7.
I could not find a reason for this abrupt wrong output.
Any idea what is going wrong?
Sorry, it was my mistake. I was actually using a framework below it, and the error was in that framework. I got it resolved. Thanks for your help.