I am trying to simplify a Boolean expression with exactly 39 inputs and roughly 500-800 million terms (that is, that many AND/NOT/OR operations).
A perfect simplification is not needed, but a good one would be nice.
I am aware of K-maps, Quine–McCluskey, and the Espresso algorithm. However, based on what I have read, these methods would take far too long on a circuit of this size.
I would need to simplify this expression as much as possible within a 24-hour period.
After searching online, I find it difficult to find any resources on simplifying an expression of quite this magnitude! Are there any resources or libraries out there that can at least simplify this to some extent within a 24-hour period?
A greedy heuristic called Simplify is described in the somewhat dated book
Robert K. Brayton, Gary D. Hachtel, C. McMullen, Alberto Sangiovanni-Vincentelli:
Logic Minimization Algorithms for VLSI Synthesis
You can find the chapter online.
Simplify is based on the unate paradigm. In divide-and-conquer style, it recursively applies Shannon's expansion theorem to split the function into smaller sub-functions. The heuristic rule is to split by the most binate variable first, i.e. the variable which separates the largest number of terms.
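To make the splitting step concrete, here is a rough Python sketch of the recursive split only (not the full Simplify from the book: the unate-leaf simplification and the clever merging of cofactors are merely stubbed out, and at hundreds of millions of terms you would want a far more compact cube representation than Python dicts). A term is represented as a cube, i.e. a dict mapping a variable index to 0 or 1:

    def most_binate_variable(cubes):
        """Variable occurring in both polarities in the most cubes, or None if the cover is unate."""
        counts = {}
        for cube in cubes:
            for var, val in cube.items():
                pos, neg = counts.get(var, (0, 0))
                counts[var] = (pos + 1, neg) if val == 1 else (pos, neg + 1)
        binate = [(pos + neg, var) for var, (pos, neg) in counts.items() if pos and neg]
        return max(binate)[1] if binate else None

    def cofactor(cubes, var, val):
        """Shannon cofactor: drop cubes that require var != val, erase var from the rest."""
        return [{v: b for v, b in cube.items() if v != var}
                for cube in cubes if cube.get(var, val) == val]

    def simplify(cubes):
        var = most_binate_variable(cubes)
        if var is None:
            return cubes  # unate leaf: a real implementation simplifies the unate cover here
        f1 = simplify(cofactor(cubes, var, 1))
        f0 = simplify(cofactor(cubes, var, 0))
        # Naive recombination f = x*f1 + x'*f0; the book's merge step is considerably smarter.
        return [{**c, var: 1} for c in f1] + [{**c, var: 0} for c in f0]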
A second approach could be to use graph partitioning tools like METIS to split the terms into independent (or at least loosely coupled) subsets. But I am not aware of this having been tried successfully for logic synthesis tasks; my favorite search engine is sceptical and does not return any hits.
A more recent algorithm based on Binary Decision Diagrams was published in
Olivier Coudert: Doing Two-Level Logic Minimization 100 Times Faster
The paper lists examples with very high number of terms similar to your task at hand.
A somewhat related simplification technique is BDD sweeping, as described in A Study of Sweeping Algorithms in the Context of Model Checking.
This is a duplicate question. See https://stackoverflow.com/a/60535990/1531728 for resources about logic optimization, or simplification of boolean expressions.
I have estimated a complex hierarchical model with many random effects, but don't really know what the best approach is to checking for convergence. I have complex longitudinal data from a few hundred individuals and estimate quite a few parameters for every individual. Because of that, I have way too many traceplots to inspect visually. Or should I really spend a day going through all the traceplots? What would be a better way to check for convergence? Do I have to calculate Gelman and Rubin's Rhat for every parameter on the person level? And when can I conclude that the model converged? When absolutely all of the thousands of parameters reached convergence? Is it even sensible to expect that? Or is there something like "overall convergence"? And what does it mean when some person-level parameters did not converge? Does it make sense to use autorun.jags from the R2jags package with such a model, or will it just run forever? I know these are a lot of questions, but I just don't know how to approach this.
The measure I am using for convergence is the potential scale reduction factor (psrf)*, computed with the gelman.diag function from the R package coda.
Nevertheless, I also quickly inspect all the traceplots visually, even though I have tens or hundreds of them. This can be really fast if you write them to PNG files and then flip through them with e.g. IrfanView (let me know if you need me to expand on this).
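(As an aside, the batch-export part of that workflow is easy to script. Below is a minimal sketch of the save-to-PNG idea, in Python/matplotlib purely for illustration and assuming the draws are already available as arrays keyed by parameter name; in R, a loop over png()/plot()/dev.off() does the same job.)

    import os
    import numpy as np
    import matplotlib.pyplot as plt

    def dump_traceplots(chains, out_dir="traceplots"):
        """Write one traceplot per parameter to PNG for fast visual browsing."""
        os.makedirs(out_dir, exist_ok=True)
        for name, draws in chains.items():           # draws: array of shape (n_chains, n_iter)
            fig, ax = plt.subplots(figsize=(8, 2))
            for chain in np.atleast_2d(draws):
                ax.plot(chain, linewidth=0.5)         # one line per chain
            ax.set_title(name)
            fig.savefig(os.path.join(out_dir, f"{name}.png"), dpi=100)
            plt.close(fig)                            # avoid keeping thousands of figures open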
The reason you should inspect the traceplots is well illustrated by an example from Marc Kery (author of great Bayesian books): see "Never blindly trust Rhat for convergence in a Bayesian analysis", which includes a self-explanatory figure from that email.
That example talks about the Rhat statistic while I use the psrf, but it is pretty likely that the psrf suffers from the same problem... so it is better to check the chains as well.
*) Gelman, A. & Rubin, D. B. Inference from iterative simulation using multiple sequences. Stat. Sci. 7, 457–472 (1992).
I believe (from doing some reading) that reading/writing data across the bus from CPU caches to main memory places a considerable constraint on how fast a computational task (which needs to move data across the bus) can complete - the Von Neumann bottleneck.
I have come across a few articles which mention that functional programming can be more performant than other paradigms, like the imperative approach, e.g. OO (in certain models of computation).
Can someone please explain some of the ways that purely functional programming can reduce this bottleneck? I.e., are any of the following points found (in general) to be true?
Using immutable data structures means generally less data is moving across that bus - fewer writes?
Using immutable data structures means that data is more likely to still be sitting in the CPU cache - because fewer updates to existing state mean less flushing of objects from the cache?
Is it possible that using immutable data structures means we may often never even read the data back from main memory, because we create an object during a computation and keep it in local cache, and then, if an update is needed, create a new immutable object from it within the same time slice and never use the original object again? I.e., we end up working a lot more with objects that are sitting in local cache.
Oh man, that’s a classic. John Backus’ 1977 ACM Turing Award lecture is all about that: “Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs.” (The paper, “Lambda: The Ultimate Goto,” was presented at the same conference.)
I’m guessing that either you or whoever raised this question had that lecture in mind. What Backus called “the von Neumann bottleneck” was “a connecting tube that can transmit a single word between the CPU and the store (and send an address to the store).”
CPUs do still have a data bus, although in modern computers, it’s usually wide enough to hold a vector of words. Nor have we gotten away from the problem that we need to store and look up a lot of addresses, such as the links to daughter nodes of lists and trees.
But Backus was not just talking about physical architecture (emphasis added):
Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself but where to find it.
In that sense, functional programming has been largely successful at getting people to write higher-level functions, such as maps and reductions, rather than “word-at-a-time thinking” such as the for loops of C. If you try to perform an operation on a lot of data in C, today, then just like in 1977, you need to write it as a sequential loop. Potentially, each iteration of the loop could do anything to any element of the array, or any other program state, or even muck around with the loop variable itself, and any pointer could potentially alias any of these variables. At the time, that was true of the DO loops of Backus’ first high-level language, Fortran, as well, except maybe the part about pointer aliasing. To get good performance today, you try to help the compiler figure out that, no, the loop doesn’t really need to run in the order you literally specified: this is an operation it can parallelize, like a reduction or a transformation of some other array or a pure function of the loop index alone.
But that’s no longer a good fit for the physical architecture of modern computers, which are all vectorized symmetric multiprocessors—like the Cray supercomputers of the late ’70s, but faster.
Indeed, the C++ Standard Template Library now has algorithms on containers that are totally independent of the implementation details or the internal representation of the data, and Backus’ own creation, Fortran, added FORALL and PURE in 1995.
When you look at today’s big data problems, you see that the tools we use to solve them resemble functional idioms a lot more than the imperative languages Backus designed in the ’50s and ’60s. You wouldn’t write a bunch of for loops to do machine learning in 2018; you’d define a model in something like Tensorflow and evaluate it. If you want to work with big data with a lot of processors at once, it’s extremely helpful to know that your operations are associative, and therefore can be grouped in any order and then combined, allowing for automatic parallelization and vectorization. Or that a data structure can be lock-free and wait-free because it is immutable. Or that a transformation on a vector is a map that can be implemented with SIMD instructions on another vector.
Examples
Last year, I wrote a couple short programs in several different languages to solve a problem that involved finding the coefficients that minimized a cubic polynomial. A brute-force approach in C11 looked, in relevant part, like this:
static variable_t ys[MOST_COEFFS];

// Evaluate the cubic at t for each set of coefficients (Horner's rule).
// #pragma omp simd safelen(MOST_COEFFS)
for ( size_t j = 0; j < n; ++j )
    ys[j] = ((a3s[j]*t + a2s[j])*t + a1s[j])*t + a0s[j];

// Reduce to the minimum value.
variable_t result = ys[0];
// #pragma omp simd reduction(min:y)
for ( size_t j = 1; j < n; ++j ) {
    const variable_t y = ys[j];
    if (y < result)
        result = y;
} // end for j
The corresponding section of the C++14 version looked like this:
const variable_t result =
    (((a3s*t + a2s)*t + a1s)*t + a0s).min();
In this case, the coefficient vectors were std::valarray objects, a special type of object in the STL that has restrictions on how its components can be aliased and whose member operations are limited, and a lot of the restrictions on what operations are safe to vectorize sound a lot like the restrictions on pure functions. The list of allowed reductions, like .min() at the end, is, not coincidentally, similar to the instances of Data.Semigroup. You’ll see a similar story these days if you look through <algorithm> in the STL.
Now, I’m not going to claim that C++ has become a functional language. As it happened, I made all the objects in the program immutable and automatically collected by RAII, but that’s just because I’ve had a lot of exposure to functional programming and that’s how I like to code now. The language itself doesn’t impose such things as immutability, garbage collection or absence of side-effects. But when we look at what Backus in 1977 said was the real von Neumann bottleneck, “an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand,” does that apply to the C++ version? The operations are linear algebra on coefficient vectors, not word-at-a-time. And the ideas C++ borrowed to do this—and the ideas behind expression templates even more so—are largely functional concepts. (Compare that snippet to how it would’ve looked in K&R C, and how Backus defined a functional program to compute inner product in section 5.2 of his Turing Award lecture in 1977.)
I also wrote a version in Haskell, but I don’t think it’s as good an example of escaping that kind of von Neumann bottleneck.
It’s absolutely possible to write functional code that meets all of Backus’ descriptions of the von Neumann bottleneck. Looking back on the code I wrote this week, I’ve done it myself. A fold or traversal that builds a list? They’re high-level abstractions, but they’re also defined as sequences of word-at-a-time operations, and half or more of the data passed through the bottleneck when you create and traverse a singly-linked list is the addresses of other data! They’re efficient ways to put data through the von Neumann bottleneck, and that’s basically why I did it: they’re great patterns for programming von Neumann machines.
If we’re interested in coding a different way, however, functional programming gives us tools to do so. (I’m not going to claim it’s the only thing that does.) Express a reduction as a foldMap, apply it to the right kind of vector, and the associativity of the monoidal operation lets you split up the problem into chunks of whatever size you want and combine the pieces later. Make an operation a map rather than a fold, on a data structure other than a singly-linked list, and it can be automatically parallelized or vectorized. Or transformed in other ways that produce the same result, since we’ve expressed the result at a higher level of abstraction, not a particular sequence of word-at-a-time operations.
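As a toy illustration of that last point (in Python rather than Haskell, just to keep it short): because the combining operation is associative, each chunk can be folded independently and the partial results combined afterwards, and the answer cannot change; the chunks are exactly what a parallel runtime would hand out to different workers.

    from functools import reduce

    def chunked_reduce(op, xs, n_chunks=4):
        """Reduce xs with an associative op by folding chunks independently, then combining."""
        size = max(1, len(xs) // n_chunks)
        chunks = [xs[i:i + size] for i in range(0, len(xs), size)]
        partials = [reduce(op, chunk) for chunk in chunks]   # each of these could run in parallel
        return reduce(op, partials)

    xs = list(range(1, 1001))
    assert chunked_reduce(min, xs) == min(xs)                # same result as a sequential fold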
My examples so far have been about parallel programming, but I’m sure quantum computing will shake up what programs look like a lot more fundamentally.
In the traditional "one-hot" representation of words as vectors, you have a vector with the same dimension as the cardinality of your vocabulary. To reduce dimensionality, stopwords are usually removed, and stemming, lemmatization, etc. are applied to normalize the features you want to perform some NLP task on.
I'm having trouble understanding whether/how to preprocess text that is to be embedded (e.g. with word2vec). My goal is to use these word embeddings as features for a NN that classifies texts into topic A vs. not topic A, and then to perform event extraction on the topic-A documents (using a second NN).
My first instinct is to preprocess by removing stopwords, lemmatizing, stemming, etc. But as I learn a bit more about NNs applied to natural language, I realize that the CBOW and skip-gram models would in fact require the whole set of words to be present: to be able to predict a word from context, one would need to know the actual context, not a reduced form of the context after normalizing... right? The actual sequence of POS tags seems to be key for a human-feeling prediction of words.
I've found some guidance online but I'm still curious to know what the community here thinks:
Are there any recent, commonly accepted best practices regarding punctuation, stemming, lemmatizing, stopwords, numbers, lowercasing, etc.?
If so, what are they? Is it better in general to process as little as possible, or more on the heavier side to normalize the text? Is there a trade-off?
My thoughts:
It is better to remove punctuation (but, e.g., in Spanish don't remove the accents, because they do convey contextual information), change written numbers to numeric, not lowercase everything (useful for entity extraction), and do no stemming and no lemmatizing.
Does this sound right?
I've been working on this problem myself for some time. I totally agree with the other answers, that it really depends on your problem and you must match your input to the output that you expect.
I found that for certain tasks like sentiment analysis it's OK to remove lots of nuance by preprocessing, but e.g. for text generation it is quite essential to keep everything.
I'm currently working on generating Latin text and therefore I need to keep quite a lot of structure in the data.
I found a very interesting paper doing some analysis on that topic, but it covers only a small area. However, it might give you some more hints:
On the Role of Text Preprocessing in Neural Network Architectures: An Evaluation Study on Text Categorization and Sentiment Analysis
by Jose Camacho-Collados and Mohammad Taher Pilehvar
https://arxiv.org/pdf/1707.01780.pdf
Here is a quote from their conclusion:
"Our evaluation highlights the importance of being consistent in the preprocessing strategy employed across training and evaluation data. In general a simple tokenized corpus works equally or better than more complex preprocessing techniques such as lemmatization or multiword grouping, except for a dataset corresponding to a specialized domain, like health, in which sole tokenization performs poorly. Addi- tionally, word embeddings trained on multiword- grouped corpora perform surprisingly well when applied to simple tokenized datasets."
So many questions. The answer to all of them is probably "it depends". You need to consider the classes you are trying to predict and the kind of documents you have. Trying to predict authorship (where you definitely need to keep all kinds of punctuation and casing so stylometry will work) is not the same as sentiment analysis (where you can get rid of almost everything but have to pay special attention to things like negations).
I would say apply the same preprocessing to both ends. The surface forms are your link so you can't normalise in different ways. I do agree with the point Joseph Valls makes, but my impression is that most embeddings are trained in a generic rather than a specific manner. What I mean is that the Google News embeddings perform quite well on various different tasks and I don't think they had some fancy preprocessing. Getting enough data tends to be more important. All that being said -- it still depends :-)
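In practice that just means funnelling both the corpus the embeddings are trained on and the documents you later classify through one and the same function. A rough Python sketch of the idea (the tokenizer here is deliberately minimal and only a placeholder for whatever normalisation you settle on; the file name is made up):

    import re

    def preprocess(text):
        """One shared normalisation step: keep word characters, including accented letters."""
        return re.findall(r"\w+", text, flags=re.UNICODE)

    # The SAME function is used for the embedding-training corpus...
    training_sentences = [preprocess(line) for line in open("corpus.txt", encoding="utf-8")]
    # ...and for any text fed to the downstream classifier later.
    tokens = preprocess("El niño perdió su equipaje en 2018.")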
I know there are ways to find synonyms either by using NLTK/pywordnet or Pattern package in python but it isn't solving my problem.
If there are words like
bad,worst,poor
bag,baggage
lost,lose,misplace
I am not able to capture them. Can anyone suggest a possible way?
There has been a lot of research in this area in the past 20 years. Yes, computers don't understand language, but we can train them to find similarity or difference between two words with the help of some manual effort.
Approaches may be:
Based on manually curated datasets that contain how words in a language are related to each other.
Based on statistical or probabilistic measures of words appearing in a corpus.
Method 1:
Try WordNet. It is a human-curated network of words which preserves the relationships between words according to human understanding. In short, it is a graph with nodes called 'synsets' and edges as relations between them. So any two words which are very close to each other in the graph are close in meaning, and words that fall within the same synset might mean exactly the same thing. Bag and baggage are close - which you can find either by iteratively exploring node-to-node in breadth-first style, like starting with 'bag' and exploring its neighbors in an attempt to find 'baggage'. You'll have to limit this search to a small number of iterations for any practical application. Another style is starting a random walk from a node and trying to reach the other node within some number of tries and distance. If you reach 'baggage' from 'bag', say, 500 times out of 1,000 within 10 moves, you can be pretty sure that they are very similar to each other. Random walks are more helpful in much larger and more complex graphs.
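If you want to play with this quickly in Python, NLTK exposes WordNet directly. A small sketch (path_similarity is just one of several similarity measures NLTK offers over the graph):

    from nltk.corpus import wordnet as wn   # requires a one-off nltk.download('wordnet')

    bag = wn.synsets('bag')[0]
    baggage = wn.synsets('baggage')[0]

    # Path-based similarity in (0, 1]; higher means the synsets are closer in the graph.
    print(bag.path_similarity(baggage))

    # Direct synonyms (lemmas) for each sense of 'bad'.
    for synset in wn.synsets('bad'):
        print(synset.name(), [lemma.name() for lemma in synset.lemmas()])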
There are many other similar resources online.
Method 2:
Word2Vec. It is hard to explain here, but it works by creating, for each word, a vector with a user-specified number of dimensions based on the word's contexts in the text. The idea that words appearing in similar contexts mean similar things has been around for two decades; e.g. 'I'm gonna check out my bags' and 'I'm gonna check out my baggage' might both appear in a text. You can read the paper for an explanation (link at the end).
So you can train a Word2Vec model on a large corpus. In the end, you will get a 'vector' for each word. You do not need to understand the significance of this vector itself; you can use this vector representation to find the similarity or difference between words, or to generate synonym candidates for any word. The idea is that words which are similar to each other have vectors close to each other.
Word2vec came out a couple of years ago and immediately became the 'thing to use' in most NLP applications. The quality of this approach depends on the amount and quality of your data. A Wikipedia dump is generally considered good training data, as it contains articles about almost everything. You can easily find ready-to-use models trained on Wikipedia online.
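For example, training and querying your own model with gensim takes only a few lines (parameter names below follow recent gensim releases; older versions call vector_size simply size):

    from gensim.models import Word2Vec

    # sentences: an iterable of token lists, e.g. [['check', 'out', 'my', 'bags'], ...]
    model = Word2Vec(sentences, vector_size=100, window=5, min_count=5, workers=4)

    model.wv.most_similar('bag', topn=5)     # nearest neighbours = candidate 'synonyms'
    model.wv.similarity('bag', 'baggage')    # cosine similarity between the two word vectors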
A tiny example from Radim's website:
>>> model.most_similar(positive=['woman', 'king'], negative=['man'], topn=1)
[('queen', 0.50882536)]
>>> model.doesnt_match("breakfast cereal dinner lunch".split())
'cereal'
>>> model.similarity('woman', 'man')
0.73723527
The first example gives you the word closest (topn=1) to 'woman' and 'king' while also being furthest from 'man'; the answer is 'queen'. The second example is odd-one-out. The third one tells you how similar the two words are in your corpus.
An easy-to-use tool for Word2vec:
https://radimrehurek.com/gensim/models/word2vec.html
http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf (warning: lots of maths ahead)
I'm writing a program which should significantly lessen the number of collisions that occur while using hash functions like 'key mod table_size'. For this I would like to use genetic programming / a genetic algorithm, but I don't know much about it. Even after reading many articles and examples, I don't know what, in my case (i.e. in the program definition), the fitness function would be, what the target would be (the target is usually the required result), what would serve as the population/individuals and parents, etc.
Please help me identify the above, with a few code/pseudo-code snippets if possible, as this is my project.
It doesn't have to be genetic programming/algorithms; anything using evolutionary programming/algorithms is fine.
Thanks.
My advice would be: don't do this that way. The literature on hash functions is vast and we more or less understand what makes a good hash function. We know enough mathematics not to look for them blindly.
If you need a hash function to use, there is plenty to choose from.
However, if this is your uni project and you cannot possibly change the subject or steer it in a more manageable direction, then, as you noticed, there will be complex issues in getting the fitness function and mutation operators right. As far as I can tell off the top of my head, there are no obvious candidates.
You may look up e.g. 'strict avalanche criterion' and try to see if you can reason about it in terms of fitness and mutations.
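For instance, a crude score for how close a candidate function gets to the strict avalanche criterion (flipping one input bit should flip each output bit with probability 1/2) might look like this sketch, assuming the candidate maps n_bits-bit integers to n_bits-bit integers:

    import random

    def avalanche_score(hash_fn, n_bits=32, trials=1000):
        """Mean fraction of output bits flipped when one random input bit is flipped; ideal is 0.5."""
        total = 0.0
        for _ in range(trials):
            x = random.getrandbits(n_bits)
            bit = 1 << random.randrange(n_bits)
            flipped = hash_fn(x) ^ hash_fn(x ^ bit)
            total += bin(flipped).count('1') / n_bits
        return total / trials

    # A fitness term could then reward abs(avalanche_score(candidate) - 0.5) being small.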
Another question is how you want to represent your function. Just a boolean expression? Something built from word operations like AND, XOR, NOT, ROT?
Depending on your constraints (or rather, assumptions) the question of fitness and mutation will be different.
Broadly, the fitness goal is clearly to minimize the number of collisions in your 'hash modulo table-size' model.
The obvious part is to take a suitably large and (very important) representative distribution of keys and chuck them through your 'candidate' function.
Then you might pass them through 'hash modulo table-size' for one or more values of table-size and evaluate some measure of 'niceness' of the arising distribution(s).
So what that boils down to is what table-sizes to try and what niceness measure to apply.
Niceness is context dependent.
You might measure 'fullest bucket' as a measure of 'worst case' insert/find time.
You might measure the sum of squares of the bucket sizes as a measure of 'average' insert/find time, assuming look-ups are uniformly distributed amongst the keys.
Finally you would need to decide what table-size (or sizes) to test at.
Conventional wisdom often uses primes, because hash modulo a prime tends to be nicely sensitive to all the bits in the hash, whereas something like hash modulo 2^n only involves the lower n bits.
To keep computation down you might consider the series of the next prime larger than each power of two: 5 (> 2^2), 11 (> 2^3), 17 (> 2^4), etc., up to and including the first power of 2 greater than your 'sample' size.
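Putting those pieces together, a sketch of such a fitness function in Python (sum of squared bucket sizes over a handful of prime table sizes, lower is better; swap in max(buckets) per table if you prefer the 'fullest bucket' worst-case measure, and the key sample and candidate function are whatever your evolutionary framework supplies):

    def fitness(candidate_hash, sample_keys, table_sizes=(5, 11, 17, 37, 67, 131)):
        """Lower is better: total of squared bucket counts across the chosen table sizes."""
        score = 0
        for m in table_sizes:
            buckets = [0] * m
            for key in sample_keys:
                buckets[candidate_hash(key) % m] += 1
            score += sum(b * b for b in buckets)
        return score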
There are other ways of considering fitness but without a practical application the question is (of course) ill-defined.
If the functions in your 'space' of potential hash functions don't all have the same execution time, you should also factor in 'cost'.
It's fairly easy to define very good hash functions but execution time can be a significant factor.