Import decision tree from R into Orange

I have a decision tree built with rpart in R. Is there a way to bring this tree into Orange to view and interact with it? I would like to use the classification tree viewer in Orange, as it is more interactive.
Cheers

Related

Modeling Contact in Abaqus

I have simulated a 2D model in Abaqus consisting of two main parts: a sector representing a wheel and a rectangle representing a rail.
The wheel and rail are in contact, and the wheel should rotate and move from left to right. There is a small vertical crack in the middle of the rail, for studying the effect of contact on fracture mechanics.
My model is complete. For the first 1/10 of the total time the results are good, but after that the stress distribution is completely wrong: although contact still seems to exist, the stress is almost zero.
Any guidance, comments, or advice would be appreciated.
Thank you

How to import models from SpeedTree to Unity?

I created a tree model in SpeedTree.
I used three materials for the leaves.
There are four groups of flowers with three different materials.
However, when I open this model file in Unity, I only get one material for the leaves.
My second question is that the texture mapping is in the wrong position.
It looks like a UV mapping problem.
How can I solve these two problems?

How to train a neural network to generate movements based on a training set of hand motions?

I am making a game in Unity that involves creatures whose animations are determined by physics. For example, to move a limb of a creature, I can apply forces to the rigidbody it's associated with. I know how to apply forces programmatically through a script to create movements, but I'd like to create more complex and organic movements and thought that I might be able to use a neural network to do this.
I'd like each of the creatures to have a distinct way of moving in the world. I'd like to first puppeteer the creatures manually using my hand (with a Leap Motion controller), and have a neural network generate new movements based on the training I did with my hand.
More concretely, my manual puppeteering setup will apply forces to the rigidbodies of the creature as I move my hand. So if I lift my finger up, the system would apply a series of upward forces to the limb that is mapped to my finger. As I am puppeteering the creature, the NN receives Vector3 forces for each of the rigidbodies. In a way this is the same task as generating new text based on a corpus of texts, but in this case my input is forces rather than strings.
Based on that training set, is it possible for the NN to generate movements for the characters (forces to be applied to the limbs) to mimic the movements I did with my hand?
I don't have that much experience with neural networks, but am eager to learn, specifically for this project. It would be great to know about similar projects that were done in Unity, or relevant libraries I could use that would simplify the implementation. Also, please let me know if there is anything I can clarify!
Not really an answer, but too long for a comment.
I'm not sure the strategy you want to use to train your model is the right one.
I would go for reinforcement learning methods (you can check this question for more info about them), using, for example, the distance traveled along the x-axis by the creature's center of mass as a fitness function. If this leads to weird behaviours (like this well-known robot), you could, for example, penalize individuals based on the distance traveled along the y and z axes (again by the CoM), to encourage creatures that keep their CoM in the same plane.
Without knowing exactly what you want to achieve, it is hard to give more advice. However, if you are not looking only for neural-network-based techniques, there is a really great paper you might want to have a look at (here is the video of their results).
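The fitness idea above can be sketched as follows. This is a minimal sketch in Python rather than Unity/C#; the `com_trajectory` array, the `lateral_penalty` weight, and the sample trajectories are all made-up assumptions, not part of any real controller.

```python
import numpy as np

def fitness(com_trajectory, lateral_penalty=0.5):
    """Score a rollout by how far the creature's center of mass (CoM)
    travels along x, penalizing wobble along y and z.

    com_trajectory: (T, 3) sequence of CoM positions over time.
    """
    traj = np.asarray(com_trajectory, dtype=float)
    forward = traj[-1, 0] - traj[0, 0]                   # net progress along x
    drift = np.abs(np.diff(traj[:, 1:], axis=0)).sum()   # accumulated y/z motion
    return forward - lateral_penalty * drift

# A straight walk along x scores higher than a wobbly one covering
# the same forward distance.
straight = [[t, 0.0, 0.0] for t in range(10)]
wobbly = [[t, (-1) ** t * 0.5, 0.0] for t in range(10)]
```

An evolutionary or policy-gradient loop would then try to maximize this score over candidate controllers; the penalty term is one way to implement the "keep the CoM in the same plane" idea from the answer.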

Is there a way to set world wrapping?

I am writing a model that animates a network that changes layout from a tree to a Hilbert curve and vice-versa. When in Hilbert mode, I want the world to wrap like a torus.
At all other times I want the world to be a box. Since 3.1 there are no longer any no-wrap distance primitives in NetLogo, but is there any way to set the wrapping of the world from within my program? Neither the user guide nor the NetLogo Dictionary mentions this.
For now, you can use the experimental __change-topology primitive, which takes two booleans (wrapping in x and wrapping in y). See
https://groups.google.com/forum/#!topic/netlogo-devel/bQeerTqb-R4

How to implement the conditional probability for each pixel?

I asked a 'very broad' question yesterday (link) about building a conditional-random-field-based energy function from images. I got negative feedback in the comments, so I have decided to modify the question and make it more specific.
I have a bunch of images, each showing a cow on grass with some sky in the background. I want to segment the cow from the grass and sky (a toy problem only).
I first over-segment the images using a super-pixel method, and I have the ground-truth labels for my 10 training images. Here is an example,
Then I pass these super-pixel patches through some filters to get texture features, and save them into feature vectors for cow (c), grass (g) and sky (s).
My question is: how do I implement the conditional probabilities of these three classes in Matlab?
That is, P(X_i|C_c), P(X_i|C_g) and P(X_i|C_s), where X_i is each super-pixel and C_x are the three classes.
I think some webpage mentioned using Matlab's hist function, but I am not sure how or why to do that. Please give me some basic, applicable hints rather than complicated papers. Thanks a lot. A.
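The histogram idea mentioned in the question can be sketched as follows (shown in Python with NumPy rather than Matlab; the feature values and bin edges are made-up assumptions). The "how and why": a histogram of each class's training features, normalized to sum to 1, is a simple empirical estimate of P(x | class), and looking up the bin containing a new feature value x reads off that probability.

```python
import numpy as np

# Made-up 1-D texture-feature samples per class (in practice these would
# come from the filter responses of the labeled super-pixels).
features = {
    "cow":   np.array([0.1, 0.15, 0.2, 0.22, 0.18]),
    "grass": np.array([0.5, 0.55, 0.6, 0.52, 0.58]),
    "sky":   np.array([0.9, 0.85, 0.95, 0.88, 0.92]),
}

bins = np.linspace(0.0, 1.0, 11)  # 10 equal-width bins on [0, 1]

# One normalized histogram per class approximates P(x | class):
# the fraction of that class's training samples falling into each bin.
cond_prob = {
    c: np.histogram(x, bins=bins)[0] / len(x) for c, x in features.items()
}

def p_x_given_class(x, c):
    """Look up the bin containing feature value x; return its mass for class c."""
    i = np.clip(np.searchsorted(bins, x, side="right") - 1, 0, len(bins) - 2)
    return cond_prob[c][i]
```

With multi-dimensional filter-bank features you would either histogram each dimension separately (a naive-independence assumption) or use a multi-dimensional histogram; Matlab's hist (or histcounts) plays the role of np.histogram here.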