Correlation in Windows Workflow

I am new to WF and I am working on correlation in WF. I am not finding any step-by-step guide that explains correlation in plain language. Please suggest something step by step and easy to implement. Kindly share any useful links.

Check out the following link. You may need to read part 1 first:
http://blog.petegoo.com/index.php/2010/06/26/wf4-services-part-2-ndash-correlation/

Related

H5P sending results to Moodle

I am very much puzzled about using H5P in Moodle.
The idea is great, obviously, yet I cannot make it work as I expected.
My principles/idea:
- There are a bunch of activities in each course.
- Each activity can build up several of the Student's skills, say Creative Thinking or Problem Solving.
- After finishing each activity the Student, based on the result, can go to the next activity or re-do it if failed.
- For testing purposes I set up 3 outcomes (0-30 > no pass, 30-70 > 1 point, 71-100 > 2 points) in the H5P module; this one is working fine.
- The outcome should be passed to Moodle, so the course can then decide what to do: pass with 1 or 2 points, or fail and request that the activity be done again.
- This outcome will then be added to the Student's skillset.
Say I have this basic crossword. After finishing it the Student can achieve the two Skills mentioned above, yet this still depends on the outcome, e.g. result 1 means 0 in Creative Thinking and +1 in Problem Solving, and/or result 2 means +1 in Creative Thinking and +2 in Problem Solving.
The activity itself works as expected, as I mentioned above (note ONE point circled in the screenshots), but then nothing happens.
The student is NOT taken to the next activity; all s/he can do is retry the same activity over and over again.
Is it possible to force Moodle/H5P to act as described above?
For testing purposes I used two 'activities': one being 'h5p' itself and the other being 'lesson' with the same H5P modules added inside (see the image).
I run all of this on WAMP.
I tried to follow xAPI https://h5p.org/documentation/x-api which resulted in a JS error.
Sorry for the long post - tried to cover everything.
If anybody knows the answers, a reply would be much appreciated.
Cheers,
Greg

What are the steps we need to follow in order to train DeepSpeech on Google Colab?

What are the steps (A-Z) we need to follow in order to train a model using Colab?
How do I prepare my own dataset in case I need to fine-tune it for my voice / my country's accent?
The Mozilla DeepSpeech PlayBook is an excellent starting point for your question. It is not specific to Google Colab, but provides guidance on both data preparation and training.
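On the data-preparation side, Mozilla DeepSpeech expects the train/dev/test sets as CSV manifests listing each audio clip and its transcript. Below is a minimal Python sketch of building such a manifest; the clips/ folder, the matching .txt transcript files, and the train.csv output name are hypothetical, so check the PlayBook for the exact requirements of your DeepSpeech version.

import csv
import os

# Assumed layout: a folder of 16 kHz mono WAV clips, each with a matching
# .txt file holding its transcript (e.g. clips/0001.wav + clips/0001.txt).
CLIPS_DIR = "clips"

# DeepSpeech training CSVs use the columns wav_filename, wav_filesize, transcript.
with open("train.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["wav_filename", "wav_filesize", "transcript"])
    for name in sorted(os.listdir(CLIPS_DIR)):
        if not name.endswith(".wav"):
            continue
        wav_path = os.path.join(CLIPS_DIR, name)
        with open(wav_path[:-4] + ".txt", encoding="utf-8") as t:
            transcript = t.read().strip().lower()
        writer.writerow([wav_path, os.path.getsize(wav_path), transcript])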

How do I structure my project folder in Eclipse for a Cucumber project with sprint-wise delivery?

I am trying to create an automation framework using Cucumber and trying to replicate a real-time scenario (sprint-wise delivery).
How do I structure my folders/source folders/packages in Eclipse? Below is the structure which I am about to follow, but I am not quite convinced it is right.
I am trying to structure it in such a way that when I give the command
mvn test -Dcucumber.options="src\test\resources\sprint1\features"
it should run all the features under sprint1, and similarly for sprint2 and so on.
Any suggestions or inputs would be helpful.
P.S.: Since I am new to Cucumber, a detailed explanation of the folder structure for real-time sprint-wise delivery would be much appreciated.
Thanks :)
I would not use the file structure you are thinking of.
The reason is that after a while, it doesn't matter when a feature was added to the system, so organizing features based on time is a bad idea.
If you still need to be able to run the features for a specific sprint, consider using tags instead. That would allow you to run only the features connected to the sprint you are interested in.
I would not do that either, though, because after a while it doesn't matter which sprint a piece of functionality was added in. It should still pass all executions, even if it is 27 sprints old.
If this organization is bad, how should you do it instead?
This is a question where a lot of people have a lot of opinions and the debate can get very heated.
My take is that it is important to make sure that the code is easy to use. By that I mean easy to navigate and understand for a new developer. If you want, think of usability as in any other product.
Given this, I would organize the features by functional area in different packages: a package for each area, one for viewing products, one for ordering products, one for paying, etc.
I would also try to take it a step further and organize the source code in a similar way.
But I would never organize using a temporal approach as you are thinking of.
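As a purely hypothetical illustration of that functional-area layout (all names made up), the feature tree might look like this:

src/test/resources/features/viewing_products/browse_catalog.feature
src/test/resources/features/ordering_products/add_to_cart.feature
src/test/resources/features/paying/pay_with_card.feature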
You should not organize your tests per sprint, because a particular sprint ends at a particular time. If you want to run some feature files together on a temporary basis (until the sprint is over), you can add tags at the top of the feature files.
For example:
You have following 2 feature files:
src/test/resources/sprint1/file1.feature
src/test/resources/sprint1/file2.feature
Just add "#sprint1" on top of each feature as shown below:
# 1. file1.feature
@sprint1
Feature: sprint1 : features : file1
Scenario: Some scenario desc..
Given ....
When ....
Then ....
# 2. file2.feature
@sprint1
Feature: sprint1 : features : file2
Scenario: Some scenario desc..
Given ....
When ....
Then ....
Now, to run both files, you need to execute the following command in your command prompt:
cucumber --tags @sprint1
By executing this command, all the feature files that contain the @sprint1 tag will run. After the sprint is over, you can delete this extra tag from the feature files.
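Since the question runs the suite through Maven rather than the cucumber CLI, the equivalent invocation with cucumber-jvm would look something like the line below; note this assumes an older cucumber-jvm where -Dcucumber.options is still honored (newer releases use -Dcucumber.filter.tags instead):

mvn test -Dcucumber.options="--tags @sprint1"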

Predict Class Probabilities in Spark RandomForestClassifier

I built random forest models using ml.classification.RandomForestClassifier. I am trying to extract the predicted probabilities from the models, but I only see the predicted classes instead of the probabilities. According to this issue link, the issue is resolved and it leads to this GitHub pull request and this. However, it seems it was resolved in version 1.5. I'm using AWS EMR, which provides Spark 1.4.1, and I still have no idea how to get the predicted probabilities. If anyone knows how to do it, please share your thoughts or solutions. Thanks!
I have already answered a similar question before.
Unfortunately, with MLlib you can't get the per-instance probabilities for classification models up to and including version 1.4.1.
There are JIRA issues (SPARK-4362 and SPARK-6885) concerning this exact topic, which were IN PROGRESS as I wrote this answer. Nevertheless, the issue seems to have been on hold since November 2014:
There is currently no way to get the posterior probability of a prediction with a Naive Bayes model during prediction. This should be made available along with the label.
And here is a note from Sean Owen on the mailing list on a similar topic regarding the Naive Bayes classification algorithm:
This was recently discussed on this mailing list. You can't get the probabilities out directly now, but you can hack a bit to get the internal data structures of NaiveBayesModel and compute it from there.
Reference: source.
This issue has been resolved with Spark 1.5.0. Please refer to the JIRA issue for more details.
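Once you are on Spark 1.5.0 or later, the ML pipeline API exposes the class probabilities directly through the model's probability column. Here is a minimal PySpark sketch; the train_df and test_df DataFrames and their "label"/"features" columns are assumptions for illustration:

from pyspark.ml.classification import RandomForestClassifier

# train_df / test_df are assumed DataFrames with "label" and "features" columns.
rf = RandomForestClassifier(labelCol="label", featuresCol="features", numTrees=20)
model = rf.fit(train_df)

# transform() adds a "probability" vector column (one entry per class)
# alongside the hard "prediction" column.
predictions = model.transform(test_df)
predictions.select("prediction", "probability").show(5, truncate=False)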
Concerning AWS, there is not much you can do about it right now. A solution might be to fork the emr-bootstrap-actions for Spark and configure it for your needs; then you'll be able to install Spark on AWS using the bootstrap step.
Nevertheless, this might be a little complicated.
There are some things you might need to consider:
Update the spark/config.file to install your Spark 1.5. Something like:
+3 1.5.0 python s3://support.elasticmapreduce/spark/install-spark-script.py s3://path.to.your.bucket.spark.installation/spark/1.5.0/spark-1.5.0.tgz
The file listed above must be a proper build of Spark located in an S3 bucket you own for the time being.
To build your Spark, I advise reading about it in the examples section about building-spark-for-emr and also in the official documentation. That should be about it! (I hope I haven't forgotten anything.)
EDIT: Amazon EMR release 4.1.0 offers an upgraded version of Apache Spark (1.5.0). You can check here for more details.
Unfortunately, this isn't possible with version 1.4.1. You could extend the random forest class and copy some of the code I added in that pull request if you can't upgrade, but be sure to switch back to the regular version once you are able to upgrade.
Spark 1.5.0 is now supported natively on EMR with the emr-4.1.0 release! No more need to use the emr-bootstrap-actions, which btw only work on 3.x AMIs, not emr-4.x releases.

How to diagram a repeating set of steps from multiple paths in Visio?

Is there a proper way to diagram this? I have two different paths, but at some point in the process Step 1, Step 2, and Step 3 will be done.
It's possible that this flowchart will be very large, and Steps 1 through 3 will be reached from many different paths.
Thank you!!
I don't know that there is a proper way to document this, but generally if you are repeating the same steps in the same sequence it is a good idea to carve this out as a sub-process or predefined process.
I would recommend coming up with a meaningful name to describe steps 1 - 3 and then replacing those steps in your diagram with a "predefined process" shape with that name.
Then you would create a supplemental sheet to contain your predefined process (or sub-process), and your initial sheet would then show just the predefined-process shape in place of Steps 1-3.
Simplifying your diagram in this fashion will also make it easier for your audience to understand. I hope this helps!