I was studying computer science when I came upon a question I cannot find an answer to. Here's my train of thought so far:
Hash tables using open addressing need a probing function to resolve collisions, such as linear probing, quadratic probing, or double hashing (the three probe sequences are sketched in code after this question).
Linear probing is prone to primary clustering in the hash table, which can degrade performance (referencing MIT's algorithms lecture).
I found that Swift's standard library uses linear probing for its hash table implementation, Dictionary (source code).
I then learned that linear probing can actually be more performant (I'm not sure) because of fewer cache misses (Wikipedia on linear probing):
"Linear probing can provide high performance because of its good locality of reference, but is more sensitive to the quality of its hash function than some other collision resolution schemes."
But other languages like Go seem to use quadratic probing in their hash tables.
So I'm now confused about what makes a good probing strategy for hash tables, and why the Swift team went ahead and used linear probing for its Dictionary. Any thoughts would be welcome. Thanks.
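For concreteness, here is a toy Python sketch of the slot sequences each strategy probes; this is an illustration, not Swift's or Go's actual code. h is the key's hash, m the table capacity, and h2 a hypothetical second hash value used only for the double-hashing example.

    def linear_probe(h, m):
        # consecutive slots: great cache locality, but primary clustering
        return [(h + i) % m for i in range(m)]

    def quadratic_probe(h, m):
        # quadratically spaced slots: breaks up primary clustering,
        # at the cost of poorer locality
        return [(h + i * i) % m for i in range(m)]

    def double_hash_probe(h, h2, m):
        # step size depends on a second hash, avoiding secondary clustering
        return [(h + i * h2) % m for i in range(m)]

    print(linear_probe(3, 8))          # [3, 4, 5, 6, 7, 0, 1, 2]
    print(quadratic_probe(3, 8))       # [3, 4, 7, 4, 3, 4, 7, 4]
    print(double_hash_probe(3, 5, 8))  # [3, 0, 5, 2, 7, 4, 1, 6]

Note how linear probing touches consecutive slots, which is exactly the locality the Wikipedia quote refers to; the trade-off is that it relies on a high-quality hash function to avoid long clustered runs.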
I know that there are many applications and tools available for benchmarking the computational power of CPUs, especially in terms of floating point and integer calculations.
What I want to know is how good hashing functions such as MD5, SHA, etc. are for benchmarking CPUs. Do these functions involve enough floating point and integer calculations that applying a series of them could be a good basis for CPU benchmarking?
In case the platform matters, I'm concerned with Windows and .NET.
The MD5 and SHA hash functions do not use floating point at all. They are implemented entirely with integer and bitwise operations: word-sized modular additions, shifts, rotations, and Boolean operations (AND, OR, XOR, NOT). A hashing benchmark therefore exercises only the integer side of the CPU.
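For illustration, a hash-throughput loop is only a few lines; this is a Python sketch rather than .NET, and the buffer size and iteration count are arbitrary choices. Nothing in it touches the FPU:

    import hashlib
    import time

    data = b"\x00" * (1 << 20)   # 1 MiB buffer to hash
    iterations = 200

    start = time.perf_counter()
    for _ in range(iterations):
        hashlib.md5(data).digest()
    elapsed = time.perf_counter() - start

    mib = iterations * len(data) / (1 << 20)
    print(f"MD5 throughput: {mib / elapsed:.1f} MiB/s")

So hashing can serve as an integer and memory-throughput benchmark, but it says nothing about floating-point performance; you would need a separate FP workload for that.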
I am using a brute-force method to optimize a solution in one of my recent projects, and it is working quite well. Basically, the optimization process involves searching for a global maximum in the space of all possible solutions. I was curious whether there are other techniques that can be used to speed up a brute-force search, or other methods entirely. This is an area in which I have little experience but, as I said, I am quite curious.
Genetic algorithms are a good way to find maxima, even when it is not possible to test all solutions.
It's a widespread technique, and there are implementations in many programming languages.
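Here is a minimal Python sketch of the idea on a toy problem; the fitness function f, population size, and mutation rate are placeholders you would tune for a real search:

    import random

    def f(bits):  # toy fitness: count the 1-bits
        return sum(bits)

    N, LENGTH, GENERATIONS = 50, 32, 100
    pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(N)]

    for _ in range(GENERATIONS):
        pop.sort(key=f, reverse=True)
        survivors = pop[:N // 2]                # selection: keep the fitter half
        children = []
        while len(survivors) + len(children) < N:
            a, b = random.sample(survivors, 2)  # pick two parents
            cut = random.randrange(LENGTH)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:           # occasional point mutation
                child[random.randrange(LENGTH)] ^= 1
            children.append(child)
        pop = survivors + children

    print(max(f(ind) for ind in pop))  # best fitness found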
Simulated annealing is useful for escaping local maxima, but it is not guaranteed to find the global maximum. It basically uses random 'jumps' in an attempt to find a better location/value than its current one, and this can speed up searches.
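A minimal sketch of that idea in Python, with a made-up one-dimensional objective and an illustrative cooling schedule:

    import math
    import random

    def f(x):
        # bumpy toy objective with many local maxima
        return -(x - 3.0) ** 2 + math.sin(5 * x)

    x = random.uniform(-10, 10)
    temperature = 1.0
    while temperature > 1e-4:
        candidate = x + random.gauss(0, 1)   # random "jump"
        delta = f(candidate) - f(x)
        # always accept improvements; sometimes accept downhill moves
        # while the temperature is high, which is what escapes local maxima
        if delta > 0 or random.random() < math.exp(delta / temperature):
            x = candidate
        temperature *= 0.99                  # cool down gradually

    print(x, f(x))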
Do you know if anyone has tried to compile high-level programming languages (Java, C#, etc.) into a recurrent neural network and then evolve them?
I mean that the whole process, including memory usage, is stored in the graph of a neural net, and I'm talking about complex programs (I'm thinking of natural language processing problems).
When I say neural net, I mean a directed weighted graph that spreads activation, where the nodes are functions of their inputs (linear, sigmoid, and multiplicative, to keep it simple); see the sketch after this question.
Furthermore, is that what people mean by genetic programming, or is there a difference?
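To make that definition concrete, here is a minimal Python sketch of such a net; the topology, weights, and node kinds are arbitrary placeholders, and it is feed-forward for simplicity (a recurrent net would iterate the same update over time steps):

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    # node -> (kind, [(source_node, weight), ...]); listed in topological order
    net = {
        "a":   ("input",   []),
        "b":   ("input",   []),
        "h":   ("sigmoid", [("a", 0.5), ("b", -1.2)]),
        "m":   ("mul",     [("a", 1.0), ("h", 1.0)]),
        "out": ("linear",  [("h", 2.0), ("m", 0.7)]),
    }

    def evaluate(net, inputs):
        values = dict(inputs)
        for node, (kind, edges) in net.items():
            if kind == "input":
                continue
            if kind == "mul":
                # multiplicative node: product of weighted inputs
                v = 1.0
                for src, w in edges:
                    v *= w * values[src]
            else:
                # linear or sigmoid node: weighted sum, maybe squashed
                v = sum(w * values[src] for src, w in edges)
                if kind == "sigmoid":
                    v = sigmoid(v)
            values[node] = v
        return values["out"]

    print(evaluate(net, {"a": 1.0, "b": 0.5}))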
Neural networks are not particularly well suited to evolving programs; their strength tends to be in classification. If anyone has tried, I haven't heard about it (which, considering I barely touch neural networks, is not a surprise, but I am active in the general AI field at the moment).
The main reason why neural networks aren't useful for generating programs is that they basically represent a mathematical equation (numeric, rather than functional). Given some numeric input, you get a numeric output. It is difficult to interpret these in the context of a program any more complicated than simple arithmetic.
Genetic programming traditionally uses Lisp, a functional language, and programs are often shown as tree diagrams (which occasionally look similar to some neural network diagrams; is this the source of your confusion?). The programs are evolved by exchanging entire branches of a tree (a function and all its parameters) between programs, or by regenerating an entire branch randomly.
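A minimal Python sketch of that subtree-crossover step, with nested tuples standing in for Lisp expressions; the parent programs here are made up, and a real GP system would also mutate and evaluate fitness:

    import random

    # (+ x (* 2 y)) and (- (* x x) 7) as nested tuples
    parent_a = ("+", "x", ("*", 2, "y"))
    parent_b = ("-", ("*", "x", "x"), 7)

    def subtrees(tree, path=()):
        """Yield (path, subtree) pairs for every node in the tree."""
        yield path, tree
        if isinstance(tree, tuple):
            for i, child in enumerate(tree[1:], start=1):
                yield from subtrees(child, path + (i,))

    def replace(tree, path, new):
        """Return a copy of tree with the subtree at path swapped for new."""
        if not path:
            return new
        i = path[0]
        return tree[:i] + (replace(tree[i], path[1:], new),) + tree[i + 1:]

    # crossover: graft a random branch of parent_b onto a random spot in parent_a
    path_a, _ = random.choice(list(subtrees(parent_a)))
    _, branch_b = random.choice(list(subtrees(parent_b)))
    child = replace(parent_a, path_a, branch_b)
    print(child)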
There are certainly a lot of good (and a lot of bad) references on both of these topics out there - I refrain from listing them because it isn't clear what you are actually interested in. Wikipedia covers each of these techniques, and is a good starting point.
Genetic programming is very different from neural networks. What you are suggesting is more along the lines of genetic programming: making small random changes to a program, possibly "breeding" successful programs. It is not easy, and I have my doubts that it can be done successfully across a large program.
You may have more luck extracting a small but critical part of your program, one which has a few particular "aspects" (such as parameter values) that you can try to evolve.
Google is your friend.
Some sophisticated anti-virus programs, as well as sophisticated malware, use formal grammars and genetic operators to evolve against each other using neural networks.
Here is an example paper on the topic: http://nexginrc.org/nexginrcAdmin/PublicationsFiles/raid09-sadia.pdf
Sources: a class on artificial intelligence I took a couple of years ago.
With regard to your main question: to the best of my knowledge, no one has ever tried that on programming languages, but there is some research in the field of evolutionary computation that could be compared to something like it (though it's obviously a far-fetched comparison). As a matter of possible interest, I asked a similar question about self-improving compilers a while ago.
For a difference between genetic algorithms and genetic programming, have a look at this question.
Neural networks have nothing to do with genetic algorithms or genetic programming, but you can obviously use either to evolve neural nets (or anything else, for that matter).
You could have a look at genetic-programming.org, where they claim to have produced some near-human-competitive results with genetic programming.
I have not heard of self-evolving and self-improving programs before. They may exist as special research tools, like those at genetic-programming.org, but there is nothing solid for generic use. And even if they exist, they are limited to special-purpose operations like malware detection, as Alain mentioned.
This is regarding the AES algorithm.
Suppose I have implemented the AES algorithm and encrypt data using my implementation. Now suppose somebody else has also implemented the same AES algorithm (128-bit). If I encrypt data using my implementation, is it possible to decrypt it and get back the original data using the implementation the other person has developed? What is the underlying difference between the implementations?
Is it something related to the S-box?
Thanks
AES is a fully specified algorithm (FIPS 197). If you have two different implementations, they should both encrypt and decrypt identically; if there is a difference, then at least one of them isn't AES. That includes the S-box you mention: it is fixed by the standard, so it cannot differ between conforming implementations. In practice, you also need to agree on things outside the AES core, such as the mode of operation, padding, and key/IV handling, which is where interoperability usually breaks down.
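As a sanity check, any conforming implementation must reproduce the known-answer test from FIPS 197 Appendix C.1. A sketch in Python, assuming the third-party cryptography package is installed:

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    # FIPS 197 Appendix C.1 known-answer test for AES-128
    key      = bytes.fromhex("000102030405060708090a0b0c0d0e0f")
    plain    = bytes.fromhex("00112233445566778899aabbccddeeff")
    expected = bytes.fromhex("69c4e0d86a7b0430d8cdb78070b4c55a")

    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ciphertext = encryptor.update(plain) + encryptor.finalize()
    assert ciphertext == expected  # any correct AES-128 must produce this

If both your implementation and the other person's pass this test, ciphertext produced by one is decryptable by the other.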
For such things, either assume that all implementations of an encryption algorithm you want to be interoperable with are correct, including yours, or don't reinvent the wheel, unless you actually want to learn something about wheels.