I want to use MiniSat to solve a 7 × 7 Game of Life and find the stable generations.
Here I simplify the rules for life and death:
Von Neumann neighborhood of radius 1
A cell whose north, east, and south neighbors are all alive will be alive.
(xin: north neighbor; xie: east neighbor; xis: south neighbor)
My formula:
But I don't know how to convert this to CNF (conjunctive normal form).
Can someone help me?
The way I learned CNF, the "dead" formula is a single disjunctive clause:
~xin V ~xie V ~xis
... which is simply De Morgan's law applied to the "live" case, which is already in CNF.
Remember: any expression whose operators are all disjunctions, or all conjunctions, is already in both CNF and DNF.
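The De Morgan equivalence this relies on can be checked exhaustively over all eight assignments (a small sketch; the variable names follow the question):

```python
from itertools import product

def live(xin, xie, xis):
    # "Live" rule: the cell is alive when its north, east and south
    # neighbors are all alive -- a pure conjunction, so already CNF.
    return xin and xie and xis

def dead_cnf(xin, xie, xis):
    # De Morgan: NOT(a AND b AND c) == (NOT a) OR (NOT b) OR (NOT c),
    # i.e. a single CNF clause.
    return (not xin) or (not xie) or (not xis)

# Verify that the clause is exactly the negation of the live rule.
for xin, xie, xis in product([False, True], repeat=3):
    assert dead_cnf(xin, xie, xis) == (not live(xin, xie, xis))
print("equivalent on all 8 assignments")
```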
I have a mgcv::gam mixed model of the form:
m1 <- gam(Y ~ A + s(B, bs = "re"), data = dataframe, family = gaussian,
method = "REML")
The random term s(B, bs = "re") is quoted in summary(m1) as, for example,
Approximate significance of smooth terms:
       edf Ref.df      F  p-value
s(B) 4.486      5 97.195  6.7e-08 ***
My question is, how would I quote this result (statistic and P value) in a formal document, for example a technical report or paper?
For example, one possibility is
F[4.486,5] = 97.195, P = 6.7e-08
However, arguing against this idea, “reverse engineering” of the result using
pf(q= 97.195, df1= 4.486, df2= 5, lower.tail=FALSE)
gives a p value that does not match the one reported:
[1] 5.931567e-05
I would be very grateful for your advice. Many thanks for your help!
The F statistic in question doesn't actually follow an F distribution with the degrees of freedom you have identified. The Ref.df value is related to the test, but you'd need to read and understand Wood (2013) to fully grok how the degrees of freedom for the test are derived.
I would simply quote the statistic and the p-value, and then cite Simon's paper if anyone wants to know how they were computed. I don't think you can easily get at the degrees of freedom that are actually used (well, not without stepping through the summary.gam() code and seeing how they are computed).
References
Wood, S. N. 2013. A simple test for random effects in regression models. Biometrika 100: 1005–1010. doi:10.1093/biomet/ast038
I'm new to neural networks, but I have a question regarding NNs and ANNs.
I have a list of objects. Each object contains a longitude, a latitude, and a list of words.
What I want to do is predict the location based on the text contained in the object (similar texts should have similar locations). Right now I'm using cosine similarity to calculate the similarity between the objects' texts, but I'm stuck on how I can use that information to train my neural network. I have a matrix containing each object and how many times each word appeared in that object. E.g., if I had these two objects
Obj C: 54.123, 10.123, [This is a text for object C]
Obj B: 57.321, 11.113, [This is a another text for object B]
Then I have something like the following matrix
      This  is  a  text  for  object  C  another  B
ObjC:    1   1  1     1    1       1  1        0  0
ObjB:    1   1  1     1    1       1  0        1  1
I would also have something like the following for the distance between the two objects (note that the numbers are not real):
      ObjC  ObjB
ObjC     1  0.25
ObjB  0.25     1
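For reference, the cosine similarities in a matrix like the one above can be computed directly from the raw word counts (a minimal sketch using only the standard library; the two example sentences are taken from the question):

```python
import math

def bag_of_words(text, vocab):
    # Count how often each vocabulary word appears in the text.
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(u, v):
    # Cosine similarity between two count vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

vocab = ["this", "is", "a", "text", "for", "object", "c", "another", "b"]
obj_c = bag_of_words("This is a text for object C", vocab)
obj_b = bag_of_words("This is a another text for object B", vocab)

print(obj_c)  # [1, 1, 1, 1, 1, 1, 1, 0, 0]
print(obj_b)  # [1, 1, 1, 1, 1, 1, 0, 1, 1]
print(cosine(obj_c, obj_b))
```

Such a similarity value could then serve as the target or as an input feature when training a model.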
I have looked at how to use a neural network either to classify things into groups (like A, B, C) or to predict something like a housing price, but I haven't found anything helpful for my problem.
I would consider the prediction right if it is within some distance X, since I'm dealing with locations.
This might be a stupid question, but could someone point me in the right direction?
Everything helps!
Regards
First, I want to say that I am relatively new to MATLAB, so I am not yet very good at it.
I have the variables A, K and L and the constant alpha. From these, I want to model the income Y:
Y = A^alpha * K * L;
L changes at a growth rate of r = 0.09:
dL/dt = r*L;
with L the population; L0 (1950) = 500;
I need to model this for 50 years. How can I do this in MATLAB? L has to grow every year, but with everything I have tried I always get a single output value, not 50 values (one for every year). How do I have to code this in MATLAB?
At the moment I have this, but it just gives L0*(1+r) for every year:
L(1) = 500;                    % L0 in 1950
for i = 2:50
    L(i) = (1 + r) * L(i-1);   % compound growth from the previous year
end
The growth rate is continuous, but in one year I have to include a population decrease of 7% due to an event (a financial crisis, for example), say after year 30. Thereafter, the population will grow at the same rate as before. How can I do this in MATLAB?
Thanks for answering.
Actually it works now; I had made a mistake by defining the loop from i:50 when it must be from n:50.
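For comparison, the same yearly loop with a one-off 7% drop after year 30 can be sketched like this (in Python rather than MATLAB; the rates and the shock year come from the question):

```python
r = 0.09          # yearly growth rate
L0 = 500.0        # population in 1950
shock_year = 30   # year of the one-off event (e.g. financial crisis)
years = 50

L = [L0]
for year in range(1, years):
    nxt = L[-1] * (1 + r)      # ordinary compound growth
    if year == shock_year:
        nxt *= 1 - 0.07        # one-off 7% population decrease
    L.append(nxt)

print(len(L))  # 50 values, one per year
```

The same pattern in MATLAB would be an `if i == 30` branch inside the loop that multiplies that year's value by 0.93.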
I need to classify objects using fuzzy logic. Each object is characterized by 4 features: {size, shape, color, texture}. Each feature is fuzzified by linguistic terms and a membership function. The problem is that I am unable to understand how to defuzzify so that I can tell which class an unknown object belongs to. Using Mamdani max–min inference, can somebody help me solve this?
Objects = {Dustbin, Can, Bottle, Cup}, denoted as {1,2,3,4} respectively. The fuzzy sets for each feature are:
Feature : Size
$\tilde{Size_{Large}}$ = {1//1,1/2,0/3,0.6/4} for crisp values in range 10cm - 20 cm
$\tilde{Size_{Small}}$ = {0/1,0/2,1/3,0.4/4} (4cm - 10cm)
Feature : Shape
$\tilde{Shape_{Square}}$ = {0.9/1, 0/2,0/3,0/4} for crisp values in range 50-100
$\tilde{Shape_{Cylindrical}}$ = {0.1/1, 1/2,1/3,1/4} (10-40)
Feature : Color
$\tilde{Color_{Reddish}}$ = {0/1, 0.8/2, 0.6/3,0.3/4} say red values in between 10-50 (not sure, assuming)
$\tilde{Color_{Greenish}}$ = {1/1, 0.2/2, 0.4/3, 0.7/4} say color values in 100-200
Feature : Texture
$\tilde{Tex_{Coarse}}$ = {0.2/1, 0.2/2,0/3,0.5/4} if texture crisp values 10-20
$\tilde{Tex_{Shiny}}$ = {0.8/1, 0.8/2, 1/3, 0.5/4} 30-40
The if–then rules for classification are
R1: IF object is large in size AND cylindrical shape AND greenish in color AND coarse in texture THEN object is Dustbin
or in tabular form just to save space
Object type Size Shape Color Texture
Dustbin : Large cylindrical greenish coarse
Can : small cylindrical reddish shiny
Bottle: small cylindrical reddish shiny
Cup : small cylindrical greenish shiny
Then, there is an unknown object with crisp feature values X = {12cm, 52, 120, 11}. How do I classify it? Or is my understanding incorrect, so that I need to reformulate the entire thing?
Fuzzy logic means that every pattern belongs to each class to some degree. In other words, the output of the algorithm for every pattern could be a vector of, say, percentages of similarity to each class that sum to unity. A decision for a class could then be taken by applying a threshold. In this sense, the purpose of fuzzy logic is to quantify uncertainty. If all you need is a hard decision, a simple minimum-distance classifier or a majority vote should be enough. Otherwise, define your problem again by taking the "number factor" into consideration.
One possible approach is to define centroids for each feature's linguistic terms, for example Large_size = 15cm and Small_size = 7cm. The membership function could then be defined as a function of the distance from these centroids. Then you could do the following:
1) For every feature, calculate the Euclidean distance from the centroid, weighted by a Gaussian or Butterworth kernel (in order to capture the range around the centroid). Prepare a set of kernels for every class; for example, dustbin as a target needs large size, coarse texture, etc.
2) Calculate the product of all the above (a naive-Bayes-style combination). Fuzzy logic ends here.
3) Then, you could assign the pattern to the class with the highest value of the membership function.
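For the Mamdani max–min step asked about in the question, here is a minimal sketch with made-up triangular membership functions (all the numeric parameters below are illustrative assumptions, loosely following the crisp ranges quoted in the question; only the rule table is taken from it as-is):

```python
def tri(x, a, b, c):
    # Triangular membership function peaking at b, zero outside [a, c].
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical membership functions for each linguistic term.
mu = {
    "large":       lambda x: tri(x, 10, 20, 30),
    "small":       lambda x: tri(x, 4, 10, 16),
    "cylindrical": lambda x: tri(x, 10, 40, 70),
    "square":      lambda x: tri(x, 50, 100, 150),
    "reddish":     lambda x: tri(x, 10, 50, 90),
    "greenish":    lambda x: tri(x, 100, 200, 300),
    "coarse":      lambda x: tri(x, 10, 20, 30),
    "shiny":       lambda x: tri(x, 30, 40, 50),
}

# Rule table from the question: (size, shape, color, texture) -> class.
rules = {
    "Dustbin": ("large", "cylindrical", "greenish", "coarse"),
    "Can":     ("small", "cylindrical", "reddish", "shiny"),
    "Bottle":  ("small", "cylindrical", "reddish", "shiny"),
    "Cup":     ("small", "cylindrical", "greenish", "shiny"),
}

x = (12, 52, 120, 11)  # crisp input: size=12cm, shape=52, color=120, texture=11

# Mamdani max-min: each rule fires at the MIN of its antecedent
# memberships; the pattern is assigned to the class with the MAX firing.
firing = {cls: min(mu[t](v) for t, v in zip(terms, x))
          for cls, terms in rules.items()}
print(max(firing, key=firing.get))
```

With these particular (assumed) membership functions, the Dustbin rule is the only one with a non-zero firing strength for X, so the input is classified as Dustbin; with your real membership functions the result may of course differ.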
Sorry for taking too long to answer, hope this will help.
I have three values X, Y and Z. These values range between 0 and 1 (0 and 1 included).
When I call a function f(X,Y,Z) it returns a value V (also between 0 and 1). My goal is to choose X, Y and Z so that the returned value V is as close as possible to 1.
The selection process should be automated, and the right values for X, Y and Z are unknown.
Due to my use case, it is possible to set Y and Z to 1 (the value 1 doesn't have any influence on the output) and search for the best value of X.
After that I can replace X by that value and do the same for Y, then the same procedure for Z.
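The one-variable-at-a-time procedure described above is coordinate ascent; a minimal sketch (in Python rather than Perl, with a made-up objective f standing in for the real black-box function):

```python
def f(x, y, z):
    # Hypothetical objective with maximum 1 at (0.3, 0.7, 0.5);
    # the real f in the question is a black box.
    return 1 - ((x - 0.3) ** 2 + (y - 0.7) ** 2 + (z - 0.5) ** 2) / 3

def coordinate_ascent(f, steps=1000, sweeps=5):
    best = [1.0, 1.0, 1.0]               # start with all three values at 1
    grid = [i / steps for i in range(steps + 1)]
    for _ in range(sweeps):              # repeat until (hopefully) stable
        for coord in range(3):           # optimize one coordinate at a time
            def value(v, c=coord):
                trial = best[:]
                trial[c] = v
                return f(*trial)
            best[coord] = max(grid, key=value)
    return best, f(*best)

best, v = coordinate_ascent(f)
print(best, v)
```

Note that coordinate ascent can stall at points that are not the global maximum when the variables interact strongly, which is why repeated sweeps (and/or restarts) help.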
How can I find the maximum of the function? Is there some kind of gradient descent or hill-climbing algorithm, or something like that?
The whole module is written in Perl, so maybe there is a package for Perl that can solve this problem?
You can use simulated annealing. It's a multi-variable optimization technique, also used to get approximate solutions to the travelling salesman problem, and it's one of the search algorithms mentioned in Peter Norvig's intro AI book as well.
It's a hill-climbing algorithm that depends on random variables, so it won't necessarily give you the optimal answer. You can also vary the number of iterations it runs according to your computational/time needs.
http://en.wikipedia.org/wiki/Simulated_annealing
http://www1bpt.bridgeport.edu/sed/projects/449/Fall_2000/fangmin/chapter2.htm
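A minimal simulated-annealing sketch for this kind of maximization (in Python rather than Perl; the objective f is a made-up stand-in for the real black-box function):

```python
import math
import random

def f(x, y, z):
    # Hypothetical objective with maximum 1 at (0.3, 0.7, 0.5).
    return 1 - ((x - 0.3) ** 2 + (y - 0.7) ** 2 + (z - 0.5) ** 2) / 3

def clamp(v):
    # Keep each coordinate inside [0, 1].
    return min(1.0, max(0.0, v))

def anneal(f, iters=20000, t0=1.0, seed=0):
    rng = random.Random(seed)
    cur = [rng.random() for _ in range(3)]
    cur_v = f(*cur)
    best, best_v = cur[:], cur_v
    for i in range(iters):
        t = t0 * (1 - i / iters) + 1e-9              # linear cooling schedule
        cand = [clamp(c + rng.gauss(0, 0.1)) for c in cur]
        cand_v = f(*cand)
        # Metropolis criterion for maximization: always accept uphill moves,
        # accept downhill moves with probability exp(delta / t).
        if cand_v >= cur_v or rng.random() < math.exp((cand_v - cur_v) / t):
            cur, cur_v = cand, cand_v
            if cur_v > best_v:
                best, best_v = cur[:], cur_v
    return best, best_v

best, v = anneal(f)
print([round(b, 2) for b in best], round(v, 4))
```

The temperature schedule, step size and iteration count are all tunable; higher temperatures early on let the search escape local maxima, while the cold tail behaves like plain hill climbing.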
I suggest you take a look at Math::Amoeba, which implements the Nelder–Mead method for finding stationary points of functions.