Feature Hashing - hash

I know that feature hashing (the hashing trick) is a technique for vectorizing features; it's very common in machine learning.
I am still confused about how it works when you want to track the term frequency in the presence of collisions. Let's follow the same example given by Luis Argerich in this link.
Let's say your text is: "the quick brown fox" and let's suppose you have the next hash function:
h(the) mod 5 = 0
h(quick) mod 5 = 1
h(brown) mod 5 = 1
h(fox) mod 5 = 3
Your final vector will be like: (1,2,0,1,0)
Now let's suppose your text is: "the quick brown fox quick quick quick quick"
Now the final vector will be like: (1,6,0,1,0)
My question is: how do I realize that brown appears just once and quick appears five times? How do I track that?

You don't. That's the whole trick with hashing: it merges some features together and loses information so that you can get other benefits (fixed dimensionality, bounded memory). If you want to keep track of everything, you should just use a plain bag of words, not hashing.
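To make the counting mechanics concrete, here is a minimal sketch in Python. The hash values are the toy ones from the example above, not a real hash function (a real implementation would use something like MurmurHash):

# Toy hash values copied from the example above; purely illustrative.
toy_hash = {"the": 0, "quick": 1, "brown": 1, "fox": 3}

def hashed_counts(tokens, n_buckets=5):
    vec = [0] * n_buckets
    for tok in tokens:
        vec[toy_hash[tok] % n_buckets] += 1   # colliding words simply add up
    return vec

print(hashed_counts("the quick brown fox".split()))
# [1, 2, 0, 1, 0] -- bucket 1 mixes quick and brown; their split is unrecoverable
print(hashed_counts("the quick brown fox quick quick quick quick".split()))
# [1, 6, 0, 1, 0]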
The other option is a more complex approach to hashing, like the ones used in LSH techniques, which use a family of hash functions to reconstruct the final similarity: one can show that, given a big enough sample of hash functions, the estimate converges to the true similarity.
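For instance, MinHash is one such scheme: the fraction of matching min-hash values across a family of hash functions converges to the Jaccard similarity of two token sets. A minimal sketch in Python, using seeded built-in hashing as a stand-in for a proper hash family:

def minhash_signature(tokens, n_hashes=200):
    # One min-hash value per seeded hash function.
    return [min(hash((seed, t)) for t in tokens) for seed in range(n_hashes)]

def estimated_jaccard(a, b, n_hashes=200):
    sig_a = minhash_signature(a, n_hashes)
    sig_b = minhash_signature(b, n_hashes)
    # The match rate converges to the true Jaccard similarity as n_hashes grows.
    return sum(x == y for x, y in zip(sig_a, sig_b)) / n_hashes

doc1 = set("the quick brown fox".split())
doc2 = set("the quick brown dog".split())
print(estimated_jaccard(doc1, doc2))   # close to the true Jaccard of 3/5 = 0.6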

Netlogo: Built-in function to calculate the expected profit

Sorry for the long post. I am a newbie in agent-based modelling, so please accept my apology in advance if my question sounds stupid. I am trying to model a scenario where a farmer (i.e. an agent) decides which crop to plant in different types of fields to increase profit. The farmer agent has a budget, i.e. the amount of money that can be spent on farming each time step, equal to $100.
The farmer operates a farm that is subdivided into nine fields, arranged in a 3x3 cellular grid. Each field is of the same size. Water availability varies spatially across the fields with a rating of either 1 (driest), 2 (moderate), or 3 (wettest); water availability is assigned to the fields randomly.
The farmer must choose among three crops. As initial parameter settings, the crops have the
following characteristics:
         Yield   Price   Costs   Min. Water Req.
Crop 1     300      20      15         3
Crop 2     200      12      10         2
Crop 3     100       7       5         1
Each crop requires a certain amount of water to grow. Crop yields will only be realized if the crop is
planted in a field with at least the crop’s minimum water requirement.
Now the problem is that I couldn't find any function in NetLogo that enumerates the combinations of crop, field, and water requirement to calculate the expected profit. Any help would be highly appreciated.
I believe you are describing a linear programming problem.
Useful functions for solving Simplex Linear Programming problems are in NumAnal extension, which does not come bundled with NetLogo but which you can get as follows:
In NetLogo, under Tools / Extensions ... you can find NumAnal, probably with no green check-mark. Select it. On the right there are buttons to install it and then to add it to your code. Once you click those, it gets a green check-mark, a new line "extensions [ numanal ]" appears in your code, and you can use the extension's commands with the "numanal:" prefix, for example numanal:simplex.
The documentation for it is in the folder where it was installed. But where is that?
Sadly, the documentation for where extensions are downloaded is not current.
https://ccl.northwestern.edu/netlogo/docs/extensions.html#where-extensions-are-located
After exhaustive search by date-modified, I actually found the folder on my Windows 10 laptop here: c:\Users\condor\AppData\Roaming\NetLogo\6.1\extensions
( Note the "\Roaming\" ).
That folder has a README.md text file, and a pdf document named "NumAnal-v3.4.0" explaining how to use it, and an examples folder with code. It is a little dense.
The basics of how to describe a linear programming problem are beyond the scope of Stack Overflow, but you can find help via Google.
Here's one 8-minute video (as of 24-Nov-2019) that might help you figure out whether this is what you need:
Simplex Algorithm Explanation (How to Solve a Linear Program)
https://www.youtube.com/watch?v=RO5477EKlXE
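If NumAnal turns out to be more than you need: with only 3 crops and 9 fields there are just 3^9 = 19683 possible plans, so you can also brute-force the expected profit outside NetLogo. A rough sketch in Python, using the question's numbers; the payoff rule (price minus cost per field when the water requirement is met) is my assumption, and your model's actual rule, e.g. one involving the yield column, may differ:

from itertools import product

# Numbers from the question's table; the payoff rule is an assumption.
crops = {1: dict(price=20, cost=15, min_water=3),
         2: dict(price=12, cost=10, min_water=2),
         3: dict(price=7,  cost=5,  min_water=1)}
water = [3, 1, 2, 2, 3, 1, 1, 2, 3]   # one rating per field, assigned randomly in the model
budget = 100

def field_profit(c, w):
    crop = crops[c]
    revenue = crop["price"] if w >= crop["min_water"] else 0   # yield realised only with enough water
    return revenue - crop["cost"]

best = max((plan for plan in product(crops, repeat=9)
            if sum(crops[c]["cost"] for c in plan) <= budget),
           key=lambda plan: sum(field_profit(c, w) for c, w in zip(plan, water)))
print(best, sum(field_profit(c, w) for c, w in zip(best, water)))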

Too many for loop iterations - for loop terminates

In a classification task, I need to do feature selection. Out of featSize = 98 features (variables), I want to know which ones are relevant. For each combination I train the classifier, tuning its hyperparameters. I've run into a problem with my for loop:
for b = 1:(2^featSize) - 1
    % this is to choose the features, e.g. [1 0 0] selects the first
    % feature out of three features if featSize = 3
end
Matlab gives a warning: Warning: Too many FOR loop iterations. Stopping after 9223372036854775806 iterations.
Am I using the for loop in a prohibitive way? Is there another alternative method of completing this step?
Building a model for every possible combination of features is intractable. It's clear from your for loop that you would have to build an exponential number of models to cover every feature subset.
There are many approaches to feature selection that are practical to implement. The one most similar to your method is forward-selection. Many algorithms offer a regularization parameter instead (e.g. LASSO or ridge-regression). Some options for regression are discussed here https://stats.stackexchange.com/questions/127444/a-guide-to-regularization-strategies-in-regression
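A bare-bones greedy forward selection looks like this (Python for brevity; the score function is hypothetical and supplied by you, e.g. the cross-validated accuracy of your tuned classifier on a candidate feature subset):

def forward_select(n_features, score, max_features=None):
    # Greedily add the single feature that improves the score most;
    # costs O(n_features^2) model fits instead of 2^n_features.
    selected, remaining = [], set(range(n_features))
    best_score = float("-inf")
    while remaining and (max_features is None or len(selected) < max_features):
        candidate = max(remaining, key=lambda f: score(selected + [f]))
        cand_score = score(selected + [candidate])
        if cand_score <= best_score:   # stop when no feature helps anymore
            break
        selected.append(candidate)
        remaining.remove(candidate)
        best_score = cand_score
    return selected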
This talk covers many approaches to the problem of feature selection https://www.youtube.com/watch?v=JsArBz46_3s&index=21&list=PLGVZCDnMOq0ovNxfxOqYcBcQOIny9Zvb-&t=0s
2^98 = 316.9e27 = 300 thousand million million million million. If you run a billion* loop iterations a second, it would take ten thousand million** years to run that loop. I don't think you can afford the electricity bill... :)
It is scary, isn't it, how quickly exponential things explode?
Luckily, you don't need to loop this often to visit all pairs of features. If you have 98 features, then you have 98^2 pairs, not 2^98. Actually, you have 98*97, if you don't want to pair a feature with itself, and 98*97/2 if the order doesn't matter.
You can write a double loop to visit each pair:
N = 98;
for ii = 1:N-1
    for jj = ii+1:N
        % do something with the pair [ii,jj]
    end
end
* A billion as in a million million -- not the US billion.
** 2^98 /1e12 /60 /60 /24 /365 == 10.049e+9 -- I didn't take leap years or leap seconds into account... :)
I think you are asking the for loop to do 2^98 = 316,910,000,000,000,000,000,000,000,000 iterations, so you will need to reduce the number of iterations drastically.
As others have noted, yes, you are using the for loop in a prohibitive way. It's unreasonable to ask any regular computer, let alone a supercomputer, to run that many iterations of a loop. So that part of your question is answered.
Regarding another method of tackling this, I don't know much about machine learning (which is perhaps a bad thing to admit while attempting to answer), but it doesn't seem like you've provided enough information for us to help you there. Either way, you will need to drastically reduce the number of iterations of the loop for this to run at all, and to avoid the warning.

Neural Network playing Tic Tac Toe doesn't learn

I have a neural network playing tic-tac-toe. (I know there are other better methods for this, but I want to learn about NN)
So the NN plays against a random AI. First, it should learn to make an allowed move, i.e. not to choose a field that is already occupied.
It doesn't get very far with this, however.
When the NN chooses an illegal move, I optimize the weights such that the distance to another, randomly chosen (legal) field is minimized. (There is one output, which should take values between 1 and 9.)
My problem is: by changing the weights, a formerly optimized outcome is now also changed. So I have this kind of overfitting: every time I backpropagate to optimize the weights for one particular situation, the decision for every other situation becomes worse!
I know I should probably have 9 output neurons instead of 1 and should probably not use a random field as the target, as I assume this can mess things up. I am starting to change this.
Still, the issue seems to remain. Obviously. How can I improve the decision in one situation without forgetting every other situation?
One solution I came up with is to "remember" every game played and optimizing simultaneously over all games played.
However, after a while this becomes very demanding computationally. Also, it seems to go in the direction of a complete enumeration of all possible board situations. This might be possible for Tic Tac Toe, but if I move to another game, say Go, it becomes infeasible.
Where is my mistake? How do I generally tackle this problem? Or where could I read about it? Thanks a lot!
To tackle this problem efficiently, you should consider Reinforcement Learning methods instead of what you are currently doing. What you are trying to do is to learn the behaviour of an agent playing Tic Tac Toe. The agent gets a high reward when it wins a game, a high penalty when it loses, and an even higher penalty when it performs an illegal move. My guess is that methods such as Q-learning with neural networks will work well, even with very simple nets. One useful paper on the topic is https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf, or the earlier papers on TD-Gammon (you can easily find tutorials on the topic using the keywords TD-Gammon, Q-learning, ...).
By the way, a more down-to-earth answer to why your model might not work is that you are seemingly using a single unit to represent a categorical output: if you want to represent an integer between 1 and N, you should represent it using N output neurons with values between 0 and 1, and pick the neuron with the highest value as your answer. Using a single neuron with a value between 1 and 9 creates an unnatural asymmetry between your outputs: for example, when the expected value is 3, your network gets a higher error for outputting a 9 than a 2. This should obviously not be the case: all wrong answers are equally wrong.
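To illustrate that second point, here is a tiny numpy sketch (the single linear layer is a stand-in for your network, not a working player): nine outputs, one per cell, with occupied cells masked out so that an illegal move can never be selected, which also removes the need to train the net on legality at all:

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(9, 9)) * 0.1     # toy weights: 9 board inputs -> 9 move scores

board = np.array([1, 0, -1, 0, 0, 0, 0, 1, 0])   # 1 = us, -1 = opponent, 0 = empty
scores = W @ board                    # one score per cell
scores[board != 0] = -np.inf          # occupied cells can never win the argmax
move = int(np.argmax(scores))         # a legal move by construction
print(move)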
Hope this helps,
Best

Grouping similar words (bad, worse)

I know there are ways to find synonyms, either by using NLTK/pywordnet or the Pattern package in Python, but they aren't solving my problem.
If there are words like
bad,worst,poor
bag,baggage
lost,lose,misplace
I am not able to capture them. Can anyone suggest a possible way?
There has been a great deal of research in this area over the past 20 years. Computers don't understand language, but we can train them to find the similarity or difference between two words with the help of some manual effort.
Approaches may be:
Based on manually curated datasets that contain how words in a language are related to each other.
Based on statistical or probabilistic measures of words appearing in a corpus.
Method 1:
Try WordNet. It is a human-curated network of words which preserves the relationships between words according to human understanding. In short, it is a graph with nodes called 'synsets' and edges as relations between them, so any two words which are very close in the graph are close in meaning; words that fall within the same synset might mean exactly the same thing. 'Bag' and 'baggage' are close, which you can find either by iteratively exploring node-to-node in a breadth-first style, starting with 'bag' and exploring its neighbours in an attempt to reach 'baggage'. You'll have to limit this search to a small number of iterations for any practical application. Another style is starting a random walk from one node and trying to reach the other node within some number of tries and distance: if you reach 'baggage' from 'bag', say, 500 times out of 1000 within 10 moves, you can be pretty sure that they are very similar to each other. Random walks are more helpful in much larger and more complex graphs.
There are many other similar resources online.
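For example, with NLTK's WordNet interface (a quick sketch; path_similarity is just one of several similarity measures, and comparing all sense pairs guards against picking the wrong synset):

from nltk.corpus import wordnet as wn   # requires nltk.download('wordnet') once

def max_similarity(w1, w2):
    # Compare every sense of w1 with every sense of w2 and keep the best,
    # since the first synset is not necessarily the intended sense.
    scores = [s1.path_similarity(s2) or 0
              for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    return max(scores, default=0)

print(max_similarity('bag', 'baggage'))   # closer to 1.0 = closer in the graph
print(max_similarity('bad', 'worst'))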
Method 2:
Word2Vec. It's hard to explain fully here, but it works by creating, for each word, a vector with a user-specified number of dimensions based on the word's context in the text. The idea, around for two decades, is that words appearing in similar contexts have similar meanings; e.g., "I'm gonna check out my bags" and "I'm gonna check out my baggage" might both appear in the text. You can read the paper for the full explanation (link at the end).
So you can train a Word2Vec model over a large corpus. In the end, you will be able to get a 'vector' for each word. You do not need to understand the significance of this vector itself; you can use the vector representation to find the similarity or difference between words, or to generate synonyms of any word. The idea is that words which are similar to each other have vectors close to each other.
Word2vec came out two years ago and immediately became the thing to use in most NLP applications. The quality of this approach depends on the amount and quality of your data. Generally, a Wikipedia dump is considered good training data, as it contains articles about almost everything that makes sense. You can easily find ready-to-use models trained on Wikipedia online.
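A minimal training sketch with gensim (note: the dimensionality parameter is called vector_size in gensim 4.x and size in older releases; the toy corpus below is far too small to give meaningful vectors):

from gensim.models import Word2Vec

# An iterable of tokenised sentences; in practice, feed it your whole corpus.
sentences = [["i", "checked", "my", "bags"],
             ["i", "checked", "my", "baggage"]]

model = Word2Vec(sentences, vector_size=50, window=5, min_count=1)
print(model.wv.most_similar("bags", topn=1))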
A tiny example from Radim's website:
>>> model.most_similar(positive=['woman', 'king'], negative=['man'], topn=1)
[('queen', 0.50882536)]
>>> model.doesnt_match("breakfast cereal dinner lunch".split())
'cereal'
>>> model.similarity('woman', 'man')
0.73723527
The first example gives you the word closest (topn=1) to 'woman' and 'king' while also furthest from 'man'; the answer is 'queen'. The second example is odd-one-out. The third tells you how similar two words are, according to your corpus.
An easy-to-use tool for Word2vec:
https://radimrehurek.com/gensim/models/word2vec.html
http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf (Warning : Lots of Maths Ahead)

How can I swap a section of a row with another within an array?

I am in the process of coding a simple Genetic Algorithm (GA). There are probably countless areas where I have unnecessarily used a for loop, so I would like some tips on how to write more efficient MATLAB, as well as an answer to my question. As far as I can tell I have succeeded, but I am not sure. The code below implements single-point crossover.
Here is what I have tried...
crossPoints = randi([1 24], popSize/2, 1);   % one crossover point per pair of parents
for popNo = 2:2:popSize
    % isolate the tails of the two parent rows beyond the crossover point
    isolate = chromoParent(popNo-1:popNo, crossPoints(popNo/2,1)+1:end);
    isolate([1 2],:) = isolate([2 1],:);     % swap the two rows
    chromoParent(popNo-1:popNo, crossPoints(popNo/2,1)+1:end) = isolate;
end
chromoChild = chromoParent;
where:
- 'crossPoints' is the point at which single-point crossover between two binary-encoded chromosomes is required
- 'popSize' is the size of the population, required by my code to be an even number
- 'isolate' holds the sections of two rows which are to be swapped with each other
- 'chromoParent' is the initial population, which is to be changed by single-point crossover
- 'chromoChild' is the resulting population
Both 'chromoParent' and 'chromoChild' are arrays of size popSize x 25 binary characters.
Can you spot an error in the way I am thinking about this problem? What's the most efficient way (in computational time) to achieve the same thing? It would help if you could be as broad as possible so that I could begin applying the principles I learn here to the rest of my code.
Thank you.
Your code looks fine. If you want, you can reduce the body of the loop to a single line with some very simple indexing:
chromoParent( popNo-1:popNo, crossPoints(popNo/2,1)+1:end) = ...
chromoParent(popNo:-1:popNo-1,crossPoints(popNo/2,1)+1:end);
This may be marginally faster, but as with any optimization you should profile it first (my guess is that these lines contribute very little to the overall CPU time).