Consensus Algorithms for multi-agent systems

What are the best web resources for getting started with consensus algorithms? I tried reading IEEE research papers but couldn't understand them because of the level of mathematics involved.

Consensus is typically solved at a layer below multi-agent systems: it is the fundamental problem in distributed systems, one abstraction level down, and it is required for implementing reliable communication, replication, leader election, agreement, and so on.
Multi-agent systems literature typically reasons about agent behaviour and protocols at a higher level than consensus algorithms; at that level the protocols deal with "coordination", "auctions", etc. rather than consensus. In fact, it is common in multi-agent literature not to mention at all how consensus and communication are managed; those are assumed to be implemented at a lower level.
If you are interested in consensus in distributed systems, have a look at the following literature:
Paxos: https://en.wikipedia.org/wiki/Paxos_(computer_science), or the paper by the author of the algorithm: https://lamport.azurewebsites.net/pubs/paxos-simple.pdf
Raft: https://raft.github.io/
2PC: https://en.wikipedia.org/wiki/Two-phase_commit_protocol
View synchrony: https://en.wikipedia.org/wiki/Virtual_synchrony
Those are four different algorithms/approaches to consensus; the most fundamental one is Paxos. The papers might include some math, so if you are not comfortable with that, try the wiki pages first, or search around and you'll find blog posts.
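To make one of these concrete, here is a minimal Python sketch of the two-phase commit idea. The class and function names are my own illustration, not from any particular library, and real 2PC needs durable logging and failure handling that is omitted here:

```python
# Minimal sketch of two-phase commit (2PC); names are illustrative,
# not from any library. No logging or failure handling is shown.

class Participant:
    def __init__(self, name, will_commit=True):
        self.name = name
        self.will_commit = will_commit

    def prepare(self):
        # Phase 1: vote YES only if the local transaction can be made durable.
        return self.will_commit

    def commit(self):
        print(f"{self.name}: committed")

    def abort(self):
        print(f"{self.name}: aborted")

def two_phase_commit(participants):
    # Phase 1 (voting): the coordinator collects votes from everyone.
    votes = [p.prepare() for p in participants]
    # Phase 2 (decision): commit only on a unanimous YES, otherwise abort.
    if all(votes):
        for p in participants:
            p.commit()
        return "commit"
    for p in participants:
        p.abort()
    return "abort"

print(two_phase_commit([Participant("db1"), Participant("db2", will_commit=False)]))
```

Note that 2PC blocks if the coordinator crashes after participants have voted; tolerating exactly that kind of failure is what Paxos and Raft are designed for.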
If you want a shorter book on distributed systems in general, including some chapters on consensus, I can recommend http://book.mixu.net/distsys/ ; as far as I remember it contains minimal math.
If you are looking for any specific literature, let me know; I've read quite a lot on this subject.
Best regards,

Related

What is the difference between a deterministic and randomised consensus protocol?

I have read that, in order to counter the FLP impossibility result, some distributed systems change from a deterministic consensus protocol to a randomised one, which gives the system probabilistic guarantees. Can anyone clarify this, and give the definitions of each?
You can find the answer to your question in the following papers:
Another Advantage of Free Choice: Completely Asynchronous Agreement Protocols, PODC 1983
The correctness proof of Ben-Or’s randomized consensus algorithm, Distributed Computing 2012
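To get a feel for where the randomness enters, here is a toy, lock-step simulation in the spirit of Ben-Or's binary consensus (crash-fault model, n > 2f). It is only a sketch under simplifying assumptions (no actual failures, all processes see the same messages each round); the real protocol is asynchronous and message-based:

```python
# Toy lock-step simulation of Ben-Or-style randomized binary consensus.
# A deterministic protocol can be stalled forever by an adversarial
# scheduler (FLP); the local coin flip below breaks such stalemates
# with probability 1.
import random

def ben_or(initial_values, f, max_rounds=100):
    n = len(initial_values)
    assert n > 2 * f
    x = list(initial_values)          # each process's current estimate
    decided = [None] * n
    for _ in range(max_rounds):
        # Phase 1: everyone reports its estimate; a process proposes v
        # only if a strict majority reported v, otherwise proposes None.
        reports = list(x)
        maj = [v for v in (0, 1) if reports.count(v) > n // 2]
        proposals = [maj[0] if maj else None for _ in range(n)]
        # Phase 2: decide v on >= f+1 matching non-None proposals;
        # adopt v on seeing at least one; otherwise flip a local coin.
        for i in range(n):
            for v in (0, 1):
                if proposals.count(v) >= f + 1:
                    decided[i] = v
            if decided[i] is not None:
                x[i] = decided[i]
            elif any(p is not None for p in proposals):
                x[i] = next(p for p in proposals if p is not None)
            else:
                x[i] = random.choice((0, 1))    # the randomized step
        if all(d is not None for d in decided):
            return decided
    return decided

print(ben_or([0, 1, 1, 0, 1], f=1))   # all processes decide 1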

What are the available approaches to interconnecting simulation systems?

I am looking for a distributed simulation algorithm that allows me to couple multiple standalone systems. The systems I am targeting for interconnection use different formalisms, e.g. discrete-time and continuous simulation paradigms. So far, the only algorithms I have found come from the field of parallel discrete event simulation (PDES), such as the classical Chandy/Misra null-message protocol, which has some very undesirable problems. My question is: what other approaches to interconnecting simulation systems, besides PDES algorithms, are known?
Not an algorithm, but there are two IEEE standards out there that define protocols intended to address your issue: High-Level Architecture (HLA) and Distributed Interactive Simulation (DIS). HLA has a much greater presence in the analytic discrete-event simulation community where I hang out; DIS tends to get more use in training applications. If you'd like to check out some applications papers, go to the Winter Simulation Conference / INFORMS-sponsored paper archive site and search for HLA; you'll get 448 hits.
Be forewarned: trying to make this stuff work in general requires some pretty weird plumbing and lots of kludges, and it can be very fragile.
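To illustrate the time-management idea that these standards (and conservative PDES schemes) share, here is a hedged Python sketch of a master loop that never lets any coupled simulator advance past the global minimum next-event time. The two federate classes are hypothetical stand-ins, not an HLA API:

```python
# Sketch of conservative time management when coupling simulators:
# advance everyone only to the minimum next-event time, so causality
# across simulators is preserved. Federates here are toy stand-ins.

class ContinuousFederate:
    """Fixed-step integrator: next event is always now + dt."""
    def __init__(self, dt):
        self.t, self.dt, self.state = 0.0, dt, 1.0
    def next_time(self):
        return self.t + self.dt
    def advance(self, t):
        while self.t + self.dt <= t:
            self.state += -0.5 * self.state * self.dt   # toy ODE: x' = -0.5x
            self.t += self.dt

class DiscreteFederate:
    """Event-list simulator: next event is the earliest pending event."""
    def __init__(self, events):
        self.t, self.events = 0.0, sorted(events)
    def next_time(self):
        return self.events[0] if self.events else float("inf")
    def advance(self, t):
        while self.events and self.events[0] <= t:
            self.t = self.events.pop(0)
            print(f"discrete event at t={self.t}")

def run(federates, t_end):
    t = 0.0
    while t < t_end:
        # Conservative rule: global time only moves to the minimum
        # next-event time across all coupled simulators.
        t = min(min(f.next_time() for f in federates), t_end)
        for f in federates:
            f.advance(t)

run([ContinuousFederate(dt=0.1), DiscreteFederate([0.25, 0.7])], t_end=1.0)
```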

The connection between Abstract Algebra and programming

I'm a computer science student, and among the things I'm learning is Abstract Algebra, especially group theory.
I've been programming for about five years, and I've never used anything like what I learn in Abstract Algebra.
What is the connection between programming and abstract algebra? I really want to know.
Group theory is very important in cryptography, for instance, especially finite groups in asymmetric encryption schemes such as RSA and ElGamal. These use finite groups based on modular multiplication of integers. However, there are also other, less obvious kinds of groups applied in cryptography, such as elliptic-curve groups.
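As a tiny numeric illustration of the group structure behind RSA (textbook-sized parameters, far too small to be secure):

```python
# Toy RSA over the multiplicative group (Z/nZ)*; for intuition only.
p, q = 61, 53
n = p * q                       # modulus
phi = (p - 1) * (q - 1)         # group order used by the key setup
e = 17                          # public exponent, gcd(e, phi) == 1
d = pow(e, -1, phi)             # private exponent: e*d == 1 (mod phi)

m = 42                          # a "message" encoded as a group element
c = pow(m, e, n)                # encrypt: exponentiation in the group
assert pow(c, d, n) == m        # decrypt: Euler's theorem does the work
print(c, pow(c, d, n))
```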
Another application of group theory, or, to be more specific, of finite fields, is checksums. The widely used checksum mechanism CRC is based on arithmetic in the polynomial ring over the finite field GF(2).
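Here is a short illustration of that view: a CRC remainder computed as long division of polynomials over GF(2), where XOR plays the role of both addition and subtraction. The function name and the 3-bit generator are just for this example:

```python
# CRC as polynomial division over GF(2): coefficients are bits,
# and subtracting the divisor is a XOR.
def crc_remainder(data_bits, poly_bits):
    # Append len(poly)-1 zero bits, then long-divide by the generator
    # polynomial; the remainder is the checksum.
    bits = data_bits + [0] * (len(poly_bits) - 1)
    for i in range(len(data_bits)):
        if bits[i]:                       # leading coefficient is 1
            for j, p in enumerate(poly_bits):
                bits[i + j] ^= p          # subtract (= XOR) the divisor
    return bits[-(len(poly_bits) - 1):]

# CRC-3 example with generator x^3 + x + 1, i.e. bits 1011
print(crc_remainder([1, 0, 1, 1, 0, 1], [1, 0, 1, 1]))   # [0, 1, 1]
```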
Another, more abstract application of group theory is in functional programming. In fact, all of these applications exist in any programming language, but functional programming languages, especially Haskell and Scala(z), embrace them by providing type classes for algebraic structures such as monoids, groups, rings, fields, vector spaces and so on. The advantage of this is, obviously, that functions and algorithms can be specified in a very generic, high-level way.
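A plain-Python sketch of what the monoid abstraction buys you: one generic fold that works for any pair of an identity element and an associative operation:

```python
# One generic fold for any monoid: a set with an associative binary
# operation and an identity element. This mirrors, in plain Python,
# what Haskell's Monoid type class provides.
def fold(identity, combine, xs):
    acc = identity
    for x in xs:
        acc = combine(acc, x)
    return acc

# The same algorithm, three different monoids:
print(fold(0, lambda a, b: a + b, [1, 2, 3]))            # sum -> 6
print(fold(1, lambda a, b: a * b, [1, 2, 3]))            # product -> 6
print(fold("", lambda a, b: a + b, ["ab", "cd", "ef"]))  # concatenation
```

Associativity is what makes this more than a convenience: the fold can be split into chunks and combined in any grouping, which is why monoids show up in parallel and distributed aggregation.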
On a meta level, I would also say that an understanding of basic mathematics such as this is essential for any computer scientist (not so much for a computer programmer, but for a computer scientist, definitely), as it shapes your entire way of thinking and is necessary for more advanced mathematics. If you want to do 3D graphics or program an industrial robot, you will need linear algebra, and for linear algebra you should know at least some abstract algebra.
I don't think there's any intrinsic connection between group theory and programming. There are applications of programming to algebra and vice versa, but they are not intrinsically tied together or mutually beneficial to one another, so to speak.
If you are a computer scientist trying to solve some fun abstract algebra problems, there are numerous enumeration and classification problems that could benefit from a computational approach, notably in geometric group theory, which is a hot topic at the moment. Here's a fairly comprehensive list of researchers and problems (as of three years ago, at least):
http://www.math.ucsb.edu/~jon.mccammond/geogrouptheory/people.html
Popular problems include finitely presented groups, classification of transitive permutation groups, Möbius functions, and polycyclic generating systems, as well as these:
http://en.wikipedia.org/wiki/Schreier–Sims_algorithm
http://en.wikipedia.org/wiki/Todd–Coxeter_algorithm
and a problem that gave me many sleepless nights:
http://en.wikipedia.org/wiki/Word_problem_for_groups
Existing computer algebra systems include GAP and Magma.
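GAP and Magma are the serious tools; if you just want a quick taste from Python, SymPy's combinatorics module includes permutation-group machinery built on Schreier-Sims:

```python
# Permutation groups in SymPy; order() is computed via Schreier-Sims.
from sympy.combinatorics import Permutation, PermutationGroup

# Generators of the symmetric group S4: a transposition and a 4-cycle.
a = Permutation([1, 0, 2, 3])          # the transposition (0 1)
b = Permutation([1, 2, 3, 0])          # the 4-cycle (0 1 2 3)
G = PermutationGroup([a, b])

print(G.order())                               # 24
print(G.contains(Permutation([3, 2, 1, 0])))   # membership test: True
```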
Finally, an excellent reference:
http://books.google.com/books?id=k6joymrqQqMC&printsec=frontcover&dq=finitely+presented+groups+book&hl=en&sa=X&ei=WBWUUqjsHI6-sQTR8YKgAQ&ved=0CC0Q6AEwAA#v=onepage&q=finitely%20presented%20groups%20book&f=false

Feasibility of Machine Learning techniques for Network Intrusion Detection

Is there a machine learning concept (an algorithm or a multi-classifier system) that can detect variants of network attacks (or try to)?
One of the biggest problems for signature-based intrusion detection systems is their inability to detect new or variant attacks.
Reading up, anomaly detection still seems to be a statistics-based endeavour: it refers to detecting patterns in a given data set, which isn't the same as detecting variation in packet payloads. An anomaly-based NIDS monitors network traffic and compares it against an established baseline of a normal traffic profile. The baseline characterizes what is "normal" for the network, such as typical bandwidth usage, the common protocols used, correct combinations of port numbers and devices, etc.
Say someone uses Virus A to propagate through a network, and someone then writes a rule to stop Virus A, but another person writes a "variation" of Virus A, called Virus B, purely to evade that initial rule while still using most if not all of the same tactics/code. Is there not a way to detect such variance?
If there is, what's the umbrella term it would come under? I've been under the illusion that anomaly detection was it.
Could machine learning be used for pattern recognition (rather than pattern matching) at the packet payload level?
I think your intuition to look at machine learning techniques is correct, or will turn out to be correct ("one of the biggest problems for signature-based intrusion detection systems is the inability to detect new or variant attacks"). The superior performance of ML techniques is in general due to the ability of these algorithms to generalize (a multiplicity of soft constraints rather than a few hard constraints) and to adapt (updates based on new training instances to frustrate simple countermeasures): two attributes that I would imagine are crucial for identifying network attacks.
The theoretical promise aside, there are practical difficulties with applying ML techniques to problems like the one recited in the OP. By far the most significant is the difficulty of gathering data to train the classifier. In particular, reliably labeling data points as "intrusion" is probably not easy; likewise, my guess is that these instances are sparsely distributed in the raw data.
I suppose it's this limitation that has led to the increased interest (as evidenced at least by the published literature) in applying unsupervised ML techniques to problems like network intrusion detection.
Unsupervised techniques differ from supervised techniques in that the data is fed to the algorithms without a response variable (i.e., without the class labels). In these cases you are relying on the algorithm to discern structure in the data, i.e., some inherent ordering of the data into reasonably stable groups or clusters (possibly what the OP had in mind by "variance"). So with an unsupervised technique, there is no need to explicitly show the algorithm instances of each class, nor is it necessary to establish baseline measurements, etc.
The most frequently used unsupervised ML technique applied to problems of this type is probably the Kohonen map (also called a self-organizing map, or SOM).
I use Kohonen maps frequently, but so far not for this purpose. There are, however, numerous published reports of their successful application in your domain of interest, e.g.:
Dynamic Intrusion Detection Using Self-Organizing Maps
Multiple Self-Organizing Maps for Intrusion Detection
I know MATLAB has at least one available implementation of the Kohonen map: the SOM Toolbox. The homepage for this toolbox also contains a brief introduction to Kohonen maps.
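To make the technique concrete, here is a minimal from-scratch Kohonen map in NumPy used as an anomaly scorer: train on feature vectors of "normal" traffic, then flag inputs that land far from every map node. The features and data here are synthetic placeholders; extracting good features from packets is the hard part and is not shown:

```python
# Tiny self-organizing map (SOM) sketch; for real work use a
# maintained implementation such as the SOM Toolbox mentioned above.
import numpy as np

rng = np.random.default_rng(0)
grid_w, grid_h, dim = 5, 5, 3                 # 5x5 map, 3-d feature vectors
weights = rng.random((grid_w, grid_h, dim))
coords = np.stack(np.meshgrid(np.arange(grid_w), np.arange(grid_h),
                              indexing="ij"), axis=-1)

def train(data, epochs=20, lr0=0.5, sigma0=2.0):
    global weights
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5  # shrinking neighbourhood
        for x in data:
            # Best matching unit: node whose weight vector is closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Pull the BMU and its grid neighbours toward x.
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma**2))
            weights += lr * g[..., None] * (x - weights)

normal = rng.normal(0.3, 0.05, size=(200, dim))   # "normal traffic" cluster
train(normal)

# Quantization error as an anomaly score: far from every node = unusual.
def score(x):
    return np.linalg.norm(weights - x, axis=-1).min()

print(score(np.full(dim, 0.3)))   # low: looks like the baseline
print(score(np.full(dim, 0.9)))   # high: candidate anomaly
```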

Has anyone tried to compile code into neural network and evolve it?

Do you know if anyone has tried to compile high-level programming languages (Java, C#, etc.) into a recurrent neural network and then evolve them?
I mean that the whole process, including memory usage, is stored in the graph of a neural net, and I'm talking about complex programs (I'm thinking of natural language processing problems).
When I say neural net, I mean a directed weighted graph that spreads activation, where the nodes are functions of their inputs (linear, sigmoid and multiplicative, to keep it simple).
Furthermore, is that what people mean in genetic programming or is there a difference?
Neural networks are not particularly well suited for evolving programs; their strength tends to be in classification. If anyone has tried, I haven't heard about it (which, considering I barely touch neural networks, is not a surprise, but I am active in the general AI field at the moment).
The main reason why neural networks aren't useful for generating programs is that they basically represent a mathematical equation (numeric, rather than functional). Given some numeric input, you get a numeric output. It is difficult to interpret these in the context of a program any more complicated than simple arithmetic.
Genetic programming traditionally uses Lisp, a functional language, and programs are often shown as tree diagrams (which occasionally look similar to some neural network diagrams; is this the source of your confusion?). Programs are evolved by exchanging entire branches of a tree (a function and all its parameters) between programs, or by regenerating an entire branch randomly.
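Here is a small Python sketch of that crossover operation, representing programs as nested tuples in place of Lisp s-expressions (the representation and helper names are just for illustration):

```python
# Tree-based genetic programming crossover: swap a random subtree
# between two parent expression trees.
import random

# Expressions as nested tuples: ("+", left, right) or a terminal "x"/number.
def node_paths(tree, path=()):
    yield path
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from node_paths(child, path + (i,))

def get(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def replace(tree, path, subtree):
    if not path:
        return subtree
    i = path[0]
    return tree[:i] + (replace(tree[i], path[1:], subtree),) + tree[i + 1:]

def crossover(a, b):
    pa = random.choice(list(node_paths(a)))
    pb = random.choice(list(node_paths(b)))
    # Child is parent A with one branch swapped in from parent B.
    return replace(a, pa, get(b, pb))

random.seed(1)
parent1 = ("+", ("*", "x", "x"), 1)      # x*x + 1
parent2 = ("-", ("+", "x", 2), "x")      # (x + 2) - x
print(crossover(parent1, parent2))
```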
There are certainly a lot of good (and a lot of bad) references on both of these topics out there - I refrain from listing them because it isn't clear what you are actually interested in. Wikipedia covers each of these techniques, and is a good starting point.
Genetic programming is very different from neural networks. What you are suggesting is more along the lines of genetic programming: making small random changes to a program, possibly "breeding" successful programs. It is not easy, and I have my doubts that it can be done successfully across a large program.
You may have more luck extracting a small but critical part of your program, one which has a few particular "aspects" (such as parameter values) that you can try to evolve.
Google is your friend.
Some sophisticated anti-virus programs, as well as sophisticated malware, use formal grammars and genetic operators to evolve against each other using neural networks.
Here is an example paper on the topic: http://nexginrc.org/nexginrcAdmin/PublicationsFiles/raid09-sadia.pdf
Sources: A class on Artificial Intelligence I took a couple years ago.
With regard to your main question, no one has tried that on programming languages to the best of my knowledge, but there is some research in the field of evolutionary computation that could be compared to something like that (though it's obviously a far-fetched comparison). As a matter of possible interest, I asked a similar question about self-improving compilers a while ago.
For a difference between genetic algorithms and genetic programming, have a look at this question.
Neural networks have nothing to do with genetic algorithms or genetic programming, but you can obviously use either to evolve neural nets (as with anything else, for that matter).
You could have a look at genetic-programming.org, where they claim to have found some near-human-competitive results produced by genetic programming.
I have not heard of self-evolving and self-improving programs before. They may exist as special research tools, like those at genetic-programming.org, but there is nothing solid for generic use. And even if they exist, they are very limited to special-purpose operations like malware detection, as Alain mentioned.