Why does the six queens puzzle have fewer solutions than the five queens puzzle? - backtracking

I am referring to the N-Queens problem: place N queens on an N×N chessboard so that no queen attacks any other queen. The problem is solved here using a backtracking approach.
The Wikipedia page https://en.wikipedia.org/wiki/Eight_queens_puzzle#Counting_solutions mentions that:
"Note that the six queens puzzle has fewer solutions than the five queens puzzle."
Why is it so? In every other case, the number of solutions rises as the number of queens rises.
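For reference, the published counts can be reproduced with a small brute-force counter. The following Python sketch (function and variable names are my own, not taken from any particular source) counts the solutions for a given n with the usual row-by-row backtracking:

def count_queens(n):
    # Count N-Queens solutions by backtracking, placing one queen per row.
    count = 0
    cols = [False] * n           # occupied columns
    diag1 = [False] * (2 * n)    # occupied "/" diagonals, indexed by row + col
    diag2 = [False] * (2 * n)    # occupied "\" diagonals, indexed by row - col + n

    def place(row):
        nonlocal count
        if row == n:
            count += 1
            return
        for col in range(n):
            if not (cols[col] or diag1[row + col] or diag2[row - col + n]):
                cols[col] = diag1[row + col] = diag2[row - col + n] = True
                place(row + 1)
                cols[col] = diag1[row + col] = diag2[row - col + n] = False

    place(0)
    return count

for n in range(1, 8):
    print(n, count_queens(n))   # n=5 gives 10 solutions, n=6 gives only 4

Running it confirms the dip: 10 solutions for five queens but only 4 for six.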
I want an answer that I can explain satisfactorily to an external examiner in a viva on this topic, so an answer in plain, logically structured language would help.
Thanks.

Related

Questions about LSH (locality-sensitive hashing) and minhashing implementation

I'm trying to implement this paper
Browser Fingerprint Coding Methods Increasing the Effectiveness of User Identification in the Web Traffic
I have a couple of questions about the LSH algorithm in general and about the proposed implementation:
1) The LSH algorithm is used only when you have a lot of documents to compare with each other (because, from what I understand, it puts the similar ones in the same bucket). If, for example, I have a new document and I want to calculate its similarity to the others, I have to rerun the LSH algorithm from scratch, including the new document, correct?
2) In 'Mining of Massive Datasets', Chapter 3, it is said that for LSH we should use one hash function per band, and that each hash function creates n buckets. So, for the first band, we are going to have n buckets. From the second band onward, am I supposed to keep using the same hash function (so that I keep using the same buckets as before) or a different one (ending up with m >> n buckets)?
3) This question is related to the previous one. If I use the same hash function for all the bands, then I'll have n buckets. No problem there. But if I have to use more hash functions (a different function per band), I'm going to end up with a lot of different buckets. Am I supposed to measure the similarity for each pair in each bucket? (If I have to use only one hash function, then this is not a problem.)
4) In the paper, I understood most of the algorithm except for its end.
Basically, two signature matrices are created (one for stable features and one for unstable features) via minhashing. Then, LSH is used on the first matrix to obtain a list of candidate pairs. So far so good.
What happens at the end? Do they perform LSH on the second matrix? How is the result of the first LSH used? I cannot see the relationship between the first and the second LSH.
The output of the final step is supposed to be a list of candidate pairs, right? And all I have to do is compute the Jaccard similarity on them and set a threshold, right?
Thanks for your answers!
I got a partial answer to my question (an answer to question 4 is still missing):
1) No. You would keep the bucket structure and hash the new doc into it. Then compare it only with the docs in the buckets it fell into.
2) No. You HAVE to use different hash functions and a different set of buckets for each band.
3) This is irrelevant because of the answer to (2).
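To make answer (2) concrete, here is a minimal Python sketch of the banding step over precomputed minhash signatures; the data layout, band size, and the use of Python's built-in hash() as the per-band hash function are illustrative choices of mine, not the paper's:

from collections import defaultdict

def lsh_candidate_pairs(signatures, bands, rows_per_band):
    # signatures: dict doc_id -> minhash signature (list of length bands * rows_per_band)
    candidates = set()
    for b in range(bands):
        buckets = defaultdict(list)            # a fresh bucket table for every band
        start = b * rows_per_band
        for doc_id, sig in signatures.items():
            band_slice = tuple(sig[start:start + rows_per_band])
            buckets[hash(band_slice)].append(doc_id)   # this band's "hash function"
        for ids in buckets.values():
            for i in range(len(ids)):
                for j in range(i + 1, len(ids)):
                    candidates.add(tuple(sorted((ids[i], ids[j]))))
    return candidates

A new document is then handled as in answer (1): compute its signature once, hash each of its bands into the existing per-band bucket tables, and compare it only with the documents already sitting in those buckets.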

N-Queens puzzle, but with all chess pieces

I want to solve a problem similar to the N-Queens one, but:
all chess pieces are available
user inputs how many pieces of what kind are to be placed (e.g. 3 rooks, 4 knights, 1 bishop)
I've been lying on the floor for some time now, but I can't come up with how to adjust the backtracking algorithm for this purpose. I would be very grateful for any kind of help.
In principle, the same approach as in the classical N-Queens problem should work:
Find an empty, non-attacked square where you can place your next piece
Search the new position recursively (or if you already placed all your pieces, output the solution)
Take back the last placed piece and repeat (goto step 1) until you have tried all squares where you can place the next piece
The only difference from the classical N-Queens problem is that the different pieces have different attack patterns, and some common optimizations might no longer work. For instance, having pawns breaks symmetry, as they only attack the two squares diagonally in front of them. (Though you still have one symmetry axis even with pawns.)
I would expect the backtracking algorithm to be more efficient if you start by placing the pieces that cover the most squares: first queens, then rooks and bishops, then knights and kings, and finally pawns.
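A rough Python sketch of that recursion (the piece letters, the unblocked-attack simplification, and the fact that identical pieces are treated as distinguishable are all simplifications of mine, not a full engine):

def attacks(piece, a, b):
    # Does `piece` on square a = (row, col) attack square b?  Sliding pieces are
    # treated as unblocked, and pawns are omitted, to keep the sketch short.
    dr, dc = abs(a[0] - b[0]), abs(a[1] - b[1])
    if piece == 'N':
        return (dr, dc) in ((1, 2), (2, 1))
    if piece == 'K':
        return max(dr, dc) == 1
    if piece == 'R':
        return dr == 0 or dc == 0
    if piece == 'B':
        return dr == dc
    if piece == 'Q':
        return dr == 0 or dc == 0 or dr == dc
    return False

def solve(n, pieces, placed=None):
    # pieces: list like ['Q', 'R', 'R', 'N']; yields complete non-attacking placements.
    placed = placed if placed is not None else []
    if not pieces:
        yield list(placed)
        return
    piece, rest = pieces[0], pieces[1:]
    for r in range(n):
        for c in range(n):
            sq = (r, c)
            if any(sq == s or attacks(p, s, sq) or attacks(piece, sq, s)
                   for p, s in placed):
                continue
            placed.append((piece, sq))         # place the piece ...
            yield from solve(n, rest, placed)  # ... search recursively ...
            placed.pop()                       # ... and take it back (backtrack)

print(next(solve(4, ['Q', 'R', 'N']), None))   # e.g. one queen, one rook, one knight on 4x4

Sorting the input list so that queens come first is one way to implement the ordering heuristic from the previous paragraph.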

Grouping similar words (bad, worse)

I know there are ways to find synonyms, either by using NLTK/pywordnet or the Pattern package in Python, but that isn't solving my problem.
If there are words like
bad, worst, poor
bag, baggage
lost, lose, misplace
I am not able to capture them. Can anyone suggest a possible way?
There has been a great deal of research in this area over the past 20 years. Yes, computers don't understand language, but we can train them to find the similarity or difference between two words with the help of some manual effort.
Approaches may be:
Based on manually curated datasets that describe how words in a language are related to each other.
Based on statistical or probabilistic measures of words appearing in a corpus.
Method 1:
Try WordNet. It is a human-curated network of words which preserves the relationships between words according to human understanding. In short, it is a graph whose nodes are 'synsets' and whose edges are relations between them, so any two words that are close in the graph are close in meaning; words that fall within the same synset may mean exactly the same thing. 'Bag' and 'baggage' are close, which you can find either by iteratively exploring node to node in a breadth-first style (starting with 'bag' and exploring its neighbours in an attempt to reach 'baggage'), or by starting a random walk from one node and trying to reach the other within a bounded number of moves. You'll have to limit such a search to a small number of iterations for any practical application. If you reach 'baggage' from 'bag', say, 500 times out of 1000 within 10 moves, you can be pretty sure that they are very similar to each other. Random walks are more helpful in much larger and more complex graphs.
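For instance, with NLTK's WordNet interface you can look up the synsets of two words and take the best path-based similarity between them; the helper name and the word pairs below are just illustrative:

from nltk.corpus import wordnet as wn   # needs a one-time nltk.download('wordnet')

def max_path_similarity(w1, w2):
    # Best path similarity over all synset pairs of the two words (None if no path).
    scores = []
    for s1 in wn.synsets(w1):
        for s2 in wn.synsets(w2):
            sim = s1.path_similarity(s2)
            if sim is not None:
                scores.append(sim)
    return max(scores, default=None)

print(max_path_similarity('bag', 'baggage'))   # a score in (0, 1]; closer words score higher
print(max_path_similarity('bag', 'banana'))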
There are many other similar resources online.
Method 2:
Word2Vec. It is hard to explain fully here, but it works by creating, for each word, a vector with a user-specified number of dimensions, based on the word's context in the text. The idea that words appearing in similar contexts mean similar things has been around for decades. For example, "I'm gonna check out my bags" and "I'm gonna check out my baggage" might both appear in the text. You can read the paper for the full explanation (link at the end).
So you can train a Word2Vec model over a large corpus. In the end, you will get a 'vector' for each word. You do not need to understand the significance of this vector itself; you can use the vector representation to find the similarity or difference between words, or to generate synonyms for any word. The idea is that words which are similar to each other have vectors that are close to each other.
Word2vec came out two years ago and immediately became the 'thing to use' in most NLP applications. The quality of this approach depends on the amount and quality of your data. A Wikipedia dump is generally considered good training data, as it contains articles about almost everything that makes sense. You can easily find ready-to-use models trained on Wikipedia online.
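A minimal gensim training sketch (parameter names follow gensim 4.x, where the older size argument became vector_size; the toy corpus here is of course far too small to produce useful vectors):

from gensim.models import Word2Vec

# Toy corpus: in practice you would stream sentences from something like a Wikipedia dump.
sentences = [
    ["i", "checked", "my", "bags"],
    ["i", "checked", "my", "baggage"],
    ["the", "food", "was", "bad"],
    ["the", "food", "was", "poor"],
]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=50)

print(model.wv.similarity("bags", "baggage"))   # cosine similarity of the two word vectors
print(model.wv.most_similar("bad", topn=3))     # nearest neighbours in vector space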
A tiny example from Radim's website:
>>> model.most_similar(positive=['woman', 'king'], negative=['man'], topn=1)
[('queen', 0.50882536)]
>>> model.doesnt_match("breakfast cereal dinner lunch".split())
'cereal'
>>> model.similarity('woman', 'man')
0.73723527
The first example gives the word closest (topn=1) to 'woman' and 'king' while at the same time being furthest from 'man'; the answer is 'queen'. The second example picks the odd one out. The third one tells you how similar the two words are in your corpus.
An easy-to-use tool for Word2vec:
https://radimrehurek.com/gensim/models/word2vec.html
http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf (warning: lots of maths ahead)

Get best 3 scores in Optaplanner?

Can we get the top 3 best scores using constraints in OptaPlanner?
For example, I have a use case where I need to show the user the top 3 results with the highest scores, so that the user can select a solution according to their needs.
That sounds like Pareto optimization (see the docs). It is not yet officially supported in OptaPlanner.
But users have hacked it before, by implementing their own BestSolutionRecaller (= that class that holds the best solution(s)) and replacing the DefaultSolver's bestSolutionRecaller with it. This implies "taking the red pill" and "following the rabbit hole down to wonderland". Good luck :)
Important note: Pareto optimization goes much further than just remembering the n best solutions. It is about remembering the n best solutions which aren't dominated by any of the other best solutions. So it entails changing the score comparison (and breaking the transitivity of score comparison).
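The non-domination idea itself is independent of OptaPlanner. A small Python sketch (nothing here uses OptaPlanner's actual API; score vectors are tuples where higher is better):

def dominates(a, b):
    # a dominates b if it is at least as good on every score level and strictly better on one.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(scored_solutions):
    # Keep only the solutions that no other solution dominates.
    return [(sol, s) for sol, s in scored_solutions
            if not any(dominates(t, s) for _, t in scored_solutions if t != s)]

solutions = [("planA", (10, -2)), ("planB", (8, -1)), ("planC", (7, -3))]
print(pareto_front(solutions))   # planC is dominated by both others and drops out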

MATLAB: Dividing Items using a For-loop

I needed some help with a problem I'd been assigned in class. It's our introduction to for loops. Here is the problem:
Consider the following riddle.
This is all I have so far:
function pile = IslandBananas(numpeople, numbears)
for pilesize=1:10000000
end
I would really appreciate your input. Thank you!
I will help you, but you need to try harder than that. Also, you only need one for loop. First, think about how you would construct this algorithm. You know you have to use a for loop, so that is a start. Now let's think about what is going on in the problem.
1) You have a pile.
2) First night someone takes the pile and divides it into 3 and finds that one is left over, this means mod(pile,3) = 1.
3) But he discards the extra banana. This means (pile-1).
4) He takes a third of it, leaving two-thirds left. This means (2/3)*(pile-1).
5) In the morning they take the pile and divide it into 3 and find again that one is left over, so this means mod((2/3)*(pile-1),3) = 1.
6) But they discard the extra banana. This means (2/3)*(pile-1)-1.
7) Finally, they have to each have at least one banana if it is to be the smallest pile possible. Thus, the smallest pile must be such that (1/3)*((2/3)*(pile-1)-1) = 1.
I have essentially given you the answer; the rest you can write with the formula (1/3)*((2/3)*(pile-1)-1) and a simple if statement to test for the smallest possible integer, which is 1. This can be done in four lines inside your for loop.
Now, expanding this to any number of people and any number of bears requires two simple substitutions in that formula! If your teacher demands it, this can easily be split into two nested for loops.
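For illustration only, here is the same search in Python rather than MATLAB, encoding steps 2) to 7) above literally for the 3-person case (generalizing it is exactly the substitution exercise mentioned in the previous paragraph):

# Brute-force search over pile sizes, following steps 2)-7) above.
for pile in range(1, 10_000_001):
    if pile % 3 != 1:                   # step 2: dividing by 3 leaves one banana over
        continue
    remaining = 2 * (pile - 1) // 3     # steps 3-4: discard one, take away a third
    if remaining % 3 != 1:              # step 5: again one left over in the morning
        continue
    if (remaining - 1) // 3 == 1:       # steps 6-7: each person ends up with exactly one
        print(pile)                     # the smallest pile that satisfies the riddle
        break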