How to implement a partial derivative in NetLogo

I have a very general question about NetLogo. How can I implement a partial derivative? It is going to be a simple model of a citizen's priorities with respect to transportation vehicles.

Related

Clustering with nonlinear mixed effect modeling

I want to optimally find the clusters and assign each subject to the correct cohort within the nonlinear mixed-effects framework. I came across an R package, lcmm, which calls this type of modeling a latent class mixture model. It provides clustering for the linear mixed-effects model in the hlme function. I am wondering whether there is a package that deals with the latent class/clustering of nonlinear mixed-effects modeling? Any help is appreciated.

Genetic Programming in Agent Based Modeling with NetLogo

I have an agent-based model written in NetLogo. Now I want to take it to the next level and evolve my agents as a genetic programming population. I want a way to incorporate the genetic programming part into my NetLogo model, either through an interface or by writing it in NetLogo itself if that's possible. Does anybody have any insights into this?
Thank you

Is it possible to calculate the posterior probability of any type of classifiers?

As far as I know, some classifiers, such as Naive Bayes, calculate the posterior probability of the data and produce their result based on it.
My question is: can every classifier produce a posterior probability?
For example, how can a decision tree generate one?
Some classification models, such as logistic regression and neural networks, compute posterior class probabilities directly. Models based on generative models, such as the quadratic discriminant and models derived from mixture densities, also compute posterior class probabilities. Decision trees can easily be adapted to output a class probability by returning the proportion of positive examples at the leaves of the tree.
A prominent exception is the support vector machine, which doesn't return a probability. I think someone may have tried to modify it to return a probability; I don't know how that worked out.
See Hastie, Tibshirani, and Friedman, "Elements of Statistical Learning" (or any of many texts) for more about this stuff. Further questions of this kind should probably go to stats.stackexchange.com.
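To make the decision-tree point above concrete, here is a small sketch using scikit-learn (not part of the original answer; the toy data is made up for illustration). DecisionTreeClassifier.predict_proba returns, for each sample, the class proportions in the leaf that the sample falls into, which is the "proportion of positive examples" idea described above.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                 # toy features
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # toy binary labels

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

probs = tree.predict_proba(X[:5])             # class proportions at each sample's leaf
preds = tree.predict(X[:5])                   # argmax of those proportions
print(probs)
print(preds)
```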

Extended Kalman filter for vehicle tracking

I have read somewhere that the movement of a vehicle in cities is non-linear: it accelerates or decelerates frequently.
Can I use an Extended Kalman filter to track a vehicle moving on a road?
I am not able to understand the difference between the KF and the EKF.
The difference between a KF and an EKF is in the model that is used, i.e. the equations used for the propagation of the state (transition) and the measurement update. If the model is linear, you can use a KF; EKFs are used for non-linear models.
In your case, even though the movement may not be linear, you can still create a linear state transition model:
Assume a vehicle with one-dimensional motion. You could model this using the state [x, v, a] (position, speed, acceleration).
The state transition over a time step Δt can then be modeled as:
x_{k+1} = x_k + v_k·Δt + ½·a_k·Δt²
v_{k+1} = v_k + a_k·Δt
a_{k+1} = a_k
which is a linear model (the next state is a linear function of the current state).
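Below is a minimal sketch (not from the original answer) of a plain linear Kalman filter using this constant-acceleration state. The time step, noise covariances, and the assumption that only the position is measured are all made up for illustration.

```python
import numpy as np

dt = 0.1                                   # time step (assumed)
# Constant-acceleration transition for state [position, velocity, acceleration]
F = np.array([[1, dt, 0.5 * dt**2],
              [0,  1, dt         ],
              [0,  0, 1          ]])
H = np.array([[1.0, 0.0, 0.0]])            # only position is measured (assumption)
Q = np.eye(3) * 0.01                       # process noise covariance (made-up values)
R = np.array([[0.5]])                      # measurement noise covariance (made-up)

x = np.zeros((3, 1))                       # initial state estimate
P = np.eye(3)                              # initial state covariance

def kf_step(x, P, z):
    # Predict: propagate state and covariance through the linear model
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct with the position measurement z
    y = z - H @ x_pred                     # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    return x_pred + K @ y, (np.eye(3) - K @ H) @ P_pred

# Feed in a few noisy position measurements of a vehicle speeding up
for z in [0.0, 0.12, 0.27, 0.45, 0.66]:
    x, P = kf_step(x, P, np.array([[z]]))
print(x.ravel())                           # estimated [position, velocity, acceleration]
```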

Gaussian Naive Bayes classification

I have found the following Matlab implementation of a Naive Bayes classifier:
https://github.com/jjedele/Naive-Bayes-Classifier-Octave-Matlab
What is the difference between Gaussian Naive Bayes and Naive Bayes? How could I extend the above implementation to become Gaussian Naive Bayes?
How can I extend the implementation to use it with 4 classes? Just by doing one-vs-all?
Thank you very much for the help.
In Naive Bayes classification we take a set of features (x0, x1, ..., xn) and try to assign them to one class from a known set Y of classes (y0, y1, ..., yk). We do that by using training data to estimate the conditional probabilities p(x|C), which tell us how often a particular class had a certain feature value in the training set, and then multiplying them together with the class prior:
p(C | x0, ..., xn) ∝ p(C) · p(x0|C) · p(x1|C) · ... · p(xn|C)
The result is a score for each class in the set Y. We then take the highest-scoring member of Y as the class that our feature set should be assigned to.
Up until this point we haven't made any assumptions about what the p(x|C) distributions look like.
In Gaussian Naive Bayes we assume that each of those p(x|C) distributions is normal (Gaussian). That's the only "difference", and it really isn't one: GNB is just a special case of Naive Bayes.
This can be useful if you don't have a lot of training data, and are willing to make the assumption that the population data is normally distributed about the mean of the sample (training) data you do have.
Full disclosure: the TeX comes from Wikipedia.
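To make the description above concrete, here is a minimal multiclass Gaussian Naive Bayes sketch in Python/NumPy (the function names and toy data are made up for illustration, not taken from the linked Matlab code). Note that Naive Bayes scores every class directly, so 4 classes need no one-vs-all scheme.

```python
import numpy as np

def fit_gnb(X, y):
    # For each class, estimate the prior and the per-feature mean/variance
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (len(Xc) / len(X),          # prior p(C)
                     Xc.mean(axis=0),           # feature means
                     Xc.var(axis=0) + 1e-9)     # feature variances (small floor)
    return params

def predict_gnb(params, X):
    preds = []
    for x in X:
        scores = {}
        for c, (prior, mu, var) in params.items():
            # log p(C) + sum_i log N(x_i; mu_i, var_i) -- the "multiply them
            # together" step, done in log space for numerical stability
            log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
            scores[c] = np.log(prior) + log_lik
        preds.append(max(scores, key=scores.get))   # highest-scoring class
    return np.array(preds)

# Toy usage with 4 classes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=k, size=(50, 3)) for k in range(4)])
y = np.repeat(np.arange(4), 50)
model = fit_gnb(X, y)
print(np.mean(predict_gnb(model, X) == y))          # training accuracy
```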